The story of the HPCSA’s assessment list …

The Health Professions Council of South Africa (HPCSA) initially took responsibility for classifying and reviewing psychometric measures to ensure high assessment standards and their appropriate application. Since 2014, the process by which the HPCSA (as the statutory body governing psychological acts, which include psychometric test use) has been classifying and certifying assessments has been a concern for many practitioners and employers alike. You can read more about the ATP court case here and its outcome here, but in short, the process was found by both clients and distributors to be inefficient and at times outdated, as well as difficult for the HPCSA to manage given its limited resources for evaluating assessments. Not only did it take years to get assessments onto the ‘list of classified assessments’; the list also contained outdated products that were clearly in need of review.

In 2019, it was decided that the HPCSA’s list would serve merely to ‘classify’ an assessment as measuring a psychological construct or not, and that Assessment Standards South Africa (ASSA) would in future take on the responsibility of reviewing assessments and ensuring high standards in terms of psychometric properties. Established in 2019, ASSA is currently piloting the new test review process, which will be launched later this year. This independent body serves to provide a transparent and objective review and evaluation of assessments in a timeous manner – more about exactly what should be considered in evaluating an assessment can be viewed on the Assessment Standards South Africa website here.

Professionals, please remember that the onus has always also been on you to request assessment technical manuals and to review psychometric properties. According to Form 223 (Annexure 12): Rules of Conduct Pertaining Specifically to the Profession of Psychology, it is an ethical and legal requirement that you are able to review a test by understanding and explaining the psychometric properties of the instruments you use in your practice. Please do not assume that an assessment’s presence on the HPCSA list means it has been reviewed.

At JVR, we work hard to ensure that our assessments are supported by local research and standardised locally wherever possible, and, in the case of newer international assessments added to our catalogue, that adequate evidence for reliability, validity, and fair and equitable use has already been established until we are able to conduct local research. As a test publisher, we are guided by the evaluation requirements set out by Assessment Standards South Africa before making an assessment available in the local market, and as a practitioner, you can also view this review document. There are, however, a few tips you can refer to if you ‘quickly’ want to decide whether one of the assessments we offer is suitable for your needs.

Tips on what to look for in a technical manual:  

  • Theoretical background – what is the underlying theory of this assessment? Is it a well-known theory or model? What are the strengths and shortcomings of the assessment? If you are unsure of the background of the assessment, it is difficult to determine what it aims to measure and whether it is the correct assessment to use for your purposes – linking to validity and test interpretation discussed below. For example, be sure to check that you are not using an assessment meant for development purposes in your selection battery.
  • Representation – if local research has been conducted, look for representation of the major ethnic groups in our country, within reason. If the sample size allows (a minimum of 100 is normally required), many of the coefficients and analyses discussed in the following bullets should also be provided for the key ethnic and gender groups to further support fair use of the assessment. However, although local research is preferred, an assessment should not be discarded simply because it lacks local research – please use your professional judgement in this regard and objectively evaluate whether the assessment would work differently in our South African context. Also note that it is often only after sufficient local use of an assessment that such research becomes possible. For more on fairness, please read our previous piece where we specifically addressed fairness in selection.
  • Reliability – ensure that the assessment has been shown to be reliable (to consistently provide the same results). Although reliability coefficients of 0.70 and above are acceptable, for standardised instruments, reliabilities in the range of 0.80 to 0.90 are preferred (Anastasi & Urbina, 1997; Smit, 1996). Also make sure that the reliabilities reported are relevant – it doesn’t matter if the manual doesn’t report inter-rater reliability coefficients for a self-rating personality instrument, but it does matter if parallel-form (also known as alternate-form) reliability is not mentioned for an assessment that has more than one version.
  • Validity – manuals also need to indicate that validity was evaluated (demonstrating that the assessment does measure what it says it measures). Once again, make sure that the information shared here is relevant. Validity can be demonstrated, for example, through a factor analysis showing that the new assessment fits the theoretical model on which it was based, or by showing how the new assessment correlates with a similar measure known to assess the same construct; in the case of assessment centre exercises and situational judgement tests in particular, the relevance of the scenarios (face and content validity) can also be discussed.
  • Sub-group differences – analyses such as t-tests, ANOVAs, Rasch modelling, and Differential Item Functioning identify whether differences between sub-groups exist. These analyses are appearing more frequently in technical manuals and are particularly useful for determining whether the assessment is fair and not biased towards any group.
  • An important one: Test Administration, Interpretation, and Feedback – remember that the psychometric properties indicated in the manual are all statistical indicators of the scientific soundness of the tool. However, it is also up to you as the practitioner to ensure that the assessment is administered in a standardised manner. Some manuals have specific instructions on how to administer the assessment, but the typical basics to keep in mind are to inform your candidate about the purpose of the assessment, to administer it in a quiet environment with no distractions, and, in some cases, to allow no support material such as calculators (especially for most cognitive assessments) and to ensure a stable Internet connection (for online assessments). Following standardised administration, also ensure that you can competently and confidently interpret the results, and that you can contextualise these interpretations for the client during your feedback session. Please note that even if an assessment has excellent psychometric properties, administering or interpreting it incorrectly negatively affects the validity of the results.
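For readers who want to see what a reliability coefficient actually summarises, here is a minimal Python sketch, using entirely made-up item responses, of how Cronbach's alpha (one common internal-consistency reliability coefficient) is computed; the data and function name are illustrative, not from any real assessment:

```python
# Hypothetical example: computing Cronbach's alpha, a common
# internal-consistency reliability coefficient, from made-up data.
from statistics import variance

def cronbach_alpha(scores):
    """scores: one list of item scores per respondent."""
    k = len(scores[0])                  # number of items
    items = list(zip(*scores))          # transpose: one tuple per item
    item_vars = sum(variance(item) for item in items)
    total_var = variance([sum(person) for person in scores])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Made-up responses: 5 respondents x 3 items on a 1-5 scale.
responses = [
    [3, 4, 3],
    [4, 5, 4],
    [2, 3, 3],
    [5, 5, 4],
    [3, 4, 5],
]

alpha = cronbach_alpha(responses)
print(f"Cronbach's alpha = {alpha:.2f}")  # ~0.80: in the preferred range
```

A real technical manual would report such coefficients per scale, on far larger samples, and ideally per key sub-group as noted above.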
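Similarly, for the sub-group analyses mentioned above, a standardised effect size such as Cohen's d is often reported alongside a t-test to show how large a mean difference between groups actually is. A minimal sketch with made-up scores (group names and values are purely illustrative):

```python
# Hypothetical example: quantifying a sub-group mean difference
# with Cohen's d (standardised effect size), using made-up scores.
from math import sqrt
from statistics import mean, variance

def cohens_d(a, b):
    """Mean difference between groups a and b, in pooled-SD units."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(pooled_var)

group_a = [12, 14, 13, 15, 14]  # made-up scores for sub-group A
group_b = [13, 15, 14, 16, 15]  # made-up scores for sub-group B

d = cohens_d(group_a, group_b)
print(f"Cohen's d = {d:.2f}")
```

By common convention, an absolute d around 0.2 is considered small, 0.5 medium, and 0.8 large; a large value would prompt closer investigation (for example, via Differential Item Functioning) before concluding that the assessment is fair for all groups.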

If you still have questions or concerns about the assessments you are using in your practice, please feel free to contact us.
If you would like to refresh your understanding of psychometric properties, you can also register for the Psychometrics in a Nutshell e-Learnings.