Large scale research study on use of National Benchmark Tests (NBTs)

Alternative forms of testing, used alongside the school-leaving examination, have a long history in South Africa: disparities in the quality of schooling mean that not all learners have a fair chance in national assessments intended to measure their achievement at secondary level.

Although the National Benchmark Tests (NBTs)—the latest in a series of tests developed under the leadership of colleagues at the University of Cape Town—were never intended to be used for admissions, they have been used for this purpose, particularly in high-stakes programmes such as medicine and engineering, because of the fine gradations in performance they are able to identify. However, the often contentious debate about their use in admissions has meant that their value in making decisions about whether, for example, to place a student on an Extended Programme has tended to be overlooked. Also arguably unheeded have been the valuable insights the NBTs can provide into curriculum development, both of ‘developmental courses’ in Extended Programmes and of regular courses in the early years of an undergraduate programme.

Earlier this year, a large research project, funded by the Michael and Susan Dell Foundation, explored the predictive validity of the NBTs, something which had long been a source of dispute, with different claims being made about them in different institutions. This rigorous piece of research drew on three data sets: HEMIS data for all 26 universities, sourced from the DHET; NSC results for 770,000 students; and NBT results for 250,000 students. Analysis of these data showed that:

  • Both the NBTs and the National Senior Certificate (NSC) had similar value in predicting performance at tertiary level; and
  • The NBTs had strong diagnostic value because of the way they are able to analyse students’ performance at a sub-domain level (i.e. performance within the broad constructs of ‘literacy’ and ‘mathematics’).  The NSC simply cannot do this in its current form.

‘Diagnostic value’ means that the tests are able to identify, at a detailed level, gaps in students’ learning which will need to be addressed before they can proceed at tertiary level.

The implications of these findings for AD practitioners are significant. For the first time, we have valid and reliable evidence from a genuinely large-scale study to support the use of information from the NBTs in curriculum development and in placement on initiatives such as Extended Programmes.

The NBTs provide information at cohort, institutional, faculty and individual levels.  This means that it is possible, for example, for AD staff working in a faculty to use this detailed information to inform the curricula of ‘developmental’ courses, whether or not these form part of Extended Programmes.  It also means that support can be tailored to the needs of individual students depending on their performance on the NBTs.

Universities South Africa (USAf) is planning a workshop on the use of the NBTs for diagnostic purposes and curriculum development, to be held in Johannesburg on 6 November 2018.  All institutions will be invited to send delegates to this event. It is worth noting that, historically, it has been people working with admissions who have been nominated to attend.

It is important that AD practitioners understand the potential of the NBTs to diagnose students’ needs and to inform curriculum development.  It would be wonderful to see more of us who work with curriculum development at the workshop so that we can capitalize on the results of this significant study in our own work. The invitation to the workshop will go to DVCs. HELTASA members are encouraged to approach the relevant people in their own universities to find out who is being nominated for the workshop and to ask whether delegates working with teaching and learning can be included.