A POSITION STATEMENT BY JVR PSYCHOMETRICS


1. INTRODUCTION

Even though the debate on the use of personality in selection seemingly reached a resolution in 2007, criticisms of the predictive validity of off-the-shelf measures of personality still emerge from time to time.  Points of critique expressed in practice include claims that these measures (1) are designed to describe a theory, not to predict future behaviour, (2) were developed for academic rather than business application, and (3) lack evidence that they can impact business results.  We have previously discussed the history of personality assessments in the workplace (JvR Africa Group, 2012), but in this paper we focus on the criticisms that personality assessments face from time to time.  We will do this by reflecting on the conclusions of Morgeson, et al. (2007), as well as the counterarguments of Ones, et al. (2007) and Tett and Christiansen (2007).  Even though off-the-shelf measures of personality have distinct advantages in selection processes, it is important to acknowledge that the field of personality psychology is not stagnant and that research in the field continues to provide an ever more nuanced perspective on the use of personality in selection.  We therefore also consider a few of these more nuanced perspectives on personality that might increase predictive validity in selection.

2. HISTORICAL OVERVIEW

Among the first authors to criticise the use of personality assessments in selection were Guion and Gottier (1965).  Based on a vote-counting approach, Guion and Gottier (1965) came to the conclusion that conventional measures of personality, at the time, did not demonstrate sufficient evidence to be useful in selection. Arguments put forward included the need for more (1) predictive rather than concurrent validity studies, (2) personality theories that are relevant to the workplace, and (3) generalisable evidence that personality assessments can be recommended as useful tools in selection. According to Morgeson, et al. (2007), this view prevailed for 25 years until the publication of two meta-analyses performed by Barrick and Mount (1991) and Tett, Jackson, and Rothstein (1991), who, even though they found validity estimates similar to those reported in the past, concluded that (1) corrected estimates of validity were meaningful, and (2) it was still worthwhile to use personality in selection.  Morgeson, et al. (2007) argue that these two seminal publications led to a proliferation of publications on the usefulness of personality in selection processes.

In response to the proliferation of personality in selection processes, Morgeson, et al. (2007) convened a panel discussion with Michael Campion, Robert Dipboye, John Hollenbeck, Kevin Murphy, and Neal Schmitt to reinvigorate a healthy scepticism about the use of personality assessments in selection.  The findings of this panel discussion, as well as the comments of these scholars, were captured in Morgeson, et al.'s (2007) article, but were soon rebutted by Ones, Dilchert, Viswesvaran, and Judge (2007) and Tett and Christiansen (2007).  The 2007 debate takes central focus in this position statement; however, it is important to note that other authors have also made important contributions to the broader discussion, such as the publication by Hogan, Barrett, and Hogan (2007).  This position statement is, therefore, by no means a complete literature review, but rather an attempt to highlight the points of critique in a very specific debate while also proposing a way forward.

3. POINTS OF THE DEBATE

As mentioned previously, this paper is by no means a thorough review of the debate on the use of personality assessment in selection.  The bibliographies and citations of Morgeson, et al. (2007), Ones, et al. (2007), and Tett and Christiansen (2007) are extensive enough to justify a review of their own.  Rather, the intention is to provide a snapshot of the conclusions reached by Morgeson, et al. (2007) and Ones, et al. (2007).  Tett and Christiansen (2007) will be referenced in conjunction with Ones, et al. (2007) to outline the complexity of the issues raised.  Readers interested in a more thorough review are encouraged to read the full-length articles, browse the bibliographies for further references, and search for subsequent citations of these articles.

3.1 (Non) effects of faking on self-report measures of personality

Based on the panel discussion with, and the comments of, Michael Campion, Robert Dipboye, John Hollenbeck, Kevin Murphy, and Neal Schmitt, Morgeson, et al. (2007) reach three conclusions about faking in self-report measures of personality, which are quoted below.

“(a) Faking on self-report personality tests should be expected, and it probably cannot be avoided, although there is some disagreement among the authors on the extent to which faking is problematic” (Morgeson, et al. 2007, p. 720).

“(b) Faking or the ability to fake may not always be bad. In fact, it may be job-related or at least socially adaptive in some situations” (Morgeson, et al. 2007, p. 720).

“(c) Corrections for faking do not appear to improve validity. However, the use of bogus items may be a potentially useful way of identifying fakers” (Morgeson, et al. 2007, p. 720).

Ones, et al. (2007) counter point (a) in Morgeson, et al.'s (2007) article by highlighting evidence from Hough (1998) that supports the criterion-related validity of personality measures in high-stakes situations, such as selection.  Ones, et al. (2007) further indicate that the construct validity of self-report measures of personality remains consistent across selection and non-selection situations, providing little evidence that social desirability distorts results on personality measures (Bradley & Hauenstein, 2006; Robie, Zickar, & Schmit, 2001).  In disagreement with point (b), Ones, et al. (2007) argue, based on the meta-analytical evidence of Li and Bagger (2006) and others, that social desirability scales are not predictive of job performance.  Ones, et al. (2007) also advise against the use of faking measures, since the cumulative evidence suggests that their use does not maximise the prediction of performance (Schmitt & Oswald, 2006).  Tett and Christiansen (2007) concur with Ones, et al. (2007) on the use of faking scales to predict performance.  A direct quotation of the relevant conclusion from the debate by Ones, et al. (2007) is provided below.

“4. Faking does not ruin the criterion-related or construct validity of personality scores in applied settings” (Ones, et al. 2007, p. 1020).

Tett and Christiansen (2007) further outline the complexity of research on faking by critiquing the traditional way in which social desirability was determined, and conclude that:

“8: Past research suggesting that faking does not affect personality test validity under true applicant conditions is uninformative to the degree it relies on social desirability measures” (Tett & Christiansen, 2007, p. 982).

“9: Past research suggesting that faking does not affect personality test validity is uninformative to the degree it relies on statistical partialing techniques” (Tett & Christiansen, 2007, p. 982).

“10: Applicant faking attenuates personality test validity but enough trait variance remains to be useful for predicting job performance” (Tett & Christiansen, 2007, p. 984).

“12: It has not been shown that faking indicates social competence or that faking predicts future job success. Claims that faking may be desirable, even for some jobs, are premature.”

A study conducted by Odendaal (2015) suggests that the validity and fairness in the use of social desirability scales in South Africa should be seriously questioned. Odendaal (2015) provides evidence that social desirability scales measure faking differently across cultural and language groups. As a result, discrimination might occur based on factors that are unrelated to the requirements of the job, thereby adversely impacting black applicants for jobs.

Summary of JvR’s position on point 3.1

As per the arguments of Ones, et al. (2007) and Tett and Christiansen (2007), we conclude that faking, insofar as it is assessed using social desirability measures, does not negatively impact the accuracy with which personality predicts job performance.  In fact, inferences based on social desirability scales could cause harm in selection processes (Odendaal, 2015).  However, we remain open to considering ongoing research on novel ways to determine the impact of applicant faking, such as a recent study by Dunlop, et al. (2019), which indicates that overclaiming might occur more often in questionnaires that contain job-relevant rather than job-irrelevant content.  Such studies might provide invaluable guidelines to refine and improve existing ways of measuring personality.

3.2 Predictive validity of self-report personality measures

Morgeson, et al. (2007) take a dim view of the predictive validity of self-report measures of personality, stating:

“(d) We must not forget that personality tests have very low validity for predicting overall job performance. Some of the highest reported validities in the literature are potentially inflated due to extensive corrections or methodological weaknesses” (Morgeson, et al. 2007, p. 720).

In conducting quantitative meta-analytical summaries, Ones, et al. (2007) found practically meaningful relationships between personality and (1) job performance, (2) leadership effectiveness, (3) entrepreneurship, and (4) work motivation and attitudes.  A meta-analysis of South African studies reaffirmed the importance of the Big Five personality traits as predictors of job performance and, based on the invariance of the personality-performance relation across countries, suggested that this relation is culturally universal (Van Aarde, Meiring, & Wiernik, 2017).  Ones, et al. (2007) also investigated the predictive validity of conscientiousness, including its facets, in the selection process and found that it was on par with other frequently used predictors.  Ones, et al. (2007) finally indicated that, on a proper review of the personnel psychology literature (Barrick & Mount, 1991), the corrections used in meta-analyses of personality were more conservative than those used in meta-analyses of alternative selection measures.  Direct quotations of the relevant conclusions from the debate by Ones, et al. (2007) are provided below.

“1. Personality variables, as measured by self-reports, have substantial validities, which have been established in several quantitative reviews of hundreds of peer-reviewed research studies” (Ones, et al. 2007, p. 1020).

“2. Vote counting and qualitative opinions are scientifically inferior alternatives to quantitative reviews and psychometric meta-analysis” (Ones, et al. 2007, p. 1020).

“3. Self-reports of personality, in large applicant samples and actual selection settings (where faking is often purported to distort responses), have yielded substantial validities even for externally obtained (e.g. supervisory ratings, detected counterproductive behaviours) and/or objective criteria (e.g. production records)” (Ones, et al., 2007, p. 1020).

Tett and Christiansen (2007) outline the complexity of predicting job performance from personality in meta-analyses and argue that critique of the use of personality in selection is often based on inaccurate inferences. They conclude that:

“1: Mean r between personality tests and job performance measures underestimates the potential validity of personality tests by ignoring variability in the strength and direction of population validity coefficients” (Tett & Christiansen, 2007, p. 973).

“2: Mean r between personality tests and job performance measures underestimates the potential validity of personality tests by ignoring the value of confirmatory over exploratory research strategies, failing to reflect how personality tests are used and should be used in actual selection practice” (Tett & Christiansen, 2007, p. 974).

“3: Mean r between personality tests and job performance measures, even when derived using a confirmatory strategy, underestimates the potential validity of personality tests by ignoring the added value of personality-oriented job analysis” (Tett & Christiansen, 2007, p. 975).

“4: Mean r between personality tests and job performance measures underestimates the potential validity of personality tests by ignoring the value of narrow over broad trait and criterion measures” (Tett & Christiansen, 2007, p. 976).

“5: Mean r between personality tests and job performance measures underestimates the potential validity of personality tests by ignoring incremental validity expected from combining scores on multiple trait measures” (Tett & Christiansen, 2007, p. 977).

“6: Personality test validity in predicting job performance can be expected to improve over currently available estimates in light of untapped theory and corresponding developments in job analysis methods targeting the situations in which specific traits are expressed and then evaluated as job performance” (Tett & Christiansen, 2007, p. 978).

“7: Personality test validity in predicting job performance can be expected to improve over currently available estimates in light of possible interactions among traits in their relations with relevant workplace criteria” (Tett & Christiansen, 2007, p. 979).

Summary of JvR’s position on point 3.2

In agreement with the meta-analytical findings of Ones, et al. (2007) and Van Aarde, et al. (2017), we conclude that personality is a valid predictor of job performance and can, therefore, be used for selection purposes.  We would also like to point out, in agreement with one of the panellists listed in Morgeson, et al.'s (2007) article, that practitioners should start to think in a more nuanced way about the job criteria used to infer the predictive validity of personality measures.  This issue will receive more attention later in this paper (Schmitt, 2014).  In agreement with Schmitt (2014), we also acknowledge that, given the lack of correlation between cognitive ability and personality, their combination might be a powerful predictor of job performance (Schmidt & Hunter, 1998).

3.3 Off-the-shelf vs home-grown assessments of personality

Morgeson, et al. (2007) reach two conclusions about the use of off-the-shelf assessments in selection processes, which are quoted below.

“(e) Due to the low validity and content of some items, many published self-report personality tests should probably not be used for personnel selection. Some are better than others, of course, and when those better personality tests are combined with cognitive ability tests, in many cases validity is likely to be greater than when either is used separately” (Morgeson, et al. 2007, pp. 720-721).

“(f) If personality tests are used, customized personality measures that are clearly job-related in face valid ways might be more easily explained to both candidates and organisations” (Morgeson, et al. 2007, p. 721).

Ones, et al. (2007) provide a rebuttal to points (e) and (f) in Morgeson, et al.'s (2007) article, stating that it is unclear why home-grown (or more customised) assessments would necessarily lead to higher predictive validity.  In this respect, Ones, et al. (2007) argue that home-grown assessments can lack construct and criterion-related validity unless a considerable amount of resources is invested in their construction to match the psychometric properties of established personality assessments.  Furthermore, off-the-shelf personality assessments might also have more extensive norm groups for comparison (Ones, et al. 2007).  A direct quotation of the relevant conclusion from the refutation by Ones, et al. (2007) is provided below.

“6. Customized tests are not necessarily superior to traditional standardised personality tests” (Ones, et al. 2007, p. 1020).

Summary of JvR’s position on point 3.3

In agreement with Ones, et al. (2007), we believe that established measures of personality have a competitive advantage in terms of the sheer amount of evidence that supports their validity.  That does not preclude the development of new measures, but cautions against potential errors in judgement when generalisations are made based on smaller samples (Ones, Viswesvaran, & Schmidt, 2016).  This might be a particularly important issue given the replication crisis in which psychology currently finds itself (Ones, et al. 2016).

3.4 An evaluation of alternatives

In an evaluation of alternatives, Morgeson, et al. (2007) reach two conclusions:

“(g) Future research might focus on areas of the criterion domain that are likely to be more predictable by personality measures” (Morgeson, et al. 2007, p. 721).

“(h) Personality constructs certainly have value in understanding work behaviour, but future research should focus on finding alternatives to self-report personality measures. There is some disagreement among the authors in terms of the future potential of the alternative approaches to personality assessment currently being pursued” (Morgeson, et al. 2007, p. 721).

Ones, et al. (2007) refute the alternatives raised in Morgeson, et al.'s (2007) article, namely the use of conditional reasoning tests (implicit measures of the extent to which individuals use justification mechanisms to rationalise their behaviour) and ipsative scale formats.  Ones, et al. (2007) indicate that the predictive validities of conditional reasoning measures are, at best, comparable with those of personality assessments and, given the added disadvantage of the costs involved in developing these measures, might not be a feasible solution for selection.  From a classical test theory perspective, scores derived from ipsative scales pose certain psychometric difficulties in terms of reliability estimation and threats to construct validity (Brown & Maydeu-Olivares, 2013).  Ones, et al. (2007), however, acknowledge the potential of item response theory for deriving normative scores from ipsative scales in the future.  The findings of Brown and Maydeu-Olivares (2013) suggest that Thurstonian item response theory could be used to perform inter-individual comparisons based on ipsative scales.  Finally, Ones, et al. (2007) recognise the incremental validity of others’ ratings of personality traits in predicting job performance.  Direct quotations of the relevant conclusions from Ones, et al. (2007) are provided below.

“7. When feasible, utilising both self- and observer ratings of personality likely produces validities that are comparable to the most valid selection measures” (Ones, et al. 2007, p. 1020).

“8. Proposed palliatives (e.g. conditional reasoning, forced-choice ipsative measures), when critically reviewed, do not currently offer viable alternatives to traditional self-report personality inventories” (Ones, et al. 2007, p. 1020).

Summary of JvR’s position on point 3.4

The research on the use of personality in selection is not stagnant and it is important to consider various alternatives that might help practitioners to increase the predictive validity of personality assessments.  In the section to follow, we will consider alternatives to ensure the appropriate prediction of job performance from established measures of personality.

4. THE WAY FORWARD

Turning to point 1 of the critique expressed in the introduction, we believe that it is dangerous not to follow a theory-driven approach. We affirm the notion that well-reasoned conclusions can emerge from top-down (theories developed by academics, which are tested in practice) or bottom-up (insights emerging from practice that lead to theory development) approaches to theory development (Latham & Locke, 2006; McAbee, Landis, & Burke, 2017; Spector, Rogelberg, Ryan, Schmitt, & Zedeck, 2014). Both top-down and bottom-up approaches can be useful, firstly, to construct psychological measures and, secondly, to inspect the predictive validity of personality traits for job performance.  In fact, a well-reasoned conclusion is probably more likely to help decision makers explain why using one construct to predict job performance is fairer than using another.  When opportunistic relationships between predictor and outcome variables are derived from one-shot correlational studies, especially when there are insufficient resources to replicate findings or no logical reasons for the relationships, spurious conclusions can be drawn, a fear that has been reiterated with the emergence of big data methods (McAbee, et al. 2017; Wax, Asencio, & Carter, 2015).  However, even with well-designed big data research projects, the scientist and practitioner still require some theoretical orientation to help clients make sense of people-based findings (McAbee, et al. 2017).  In summary, as phrased by the late Professor Kurt Lewin (1952, p. 169), there is “nothing more practical than a good theory”.

Point 2 of the critique raised in the introduction stands in contrast to the scientist-practitioner model, which recognises the importance of basing practice on scientific findings (Briner & Rousseau, 2011).  Improving performance in the workplace does not have to depend exclusively on scientists or practitioners, as both parties can contribute in meaningful ways to the improvement of theory and practice when using personality in the workplace (Latham, 2019).

We also disagree with point 3 of the critique expressed in the introduction. The meta-analytical study conducted by Ones, et al. (2007) demonstrates that off-the-shelf measures of the Big Five model of personality add value to selection processes (Dilchert, Ones, & Krueger, 2019).  In line with point (e) of Morgeson, et al.'s (2007) article, we agree that not all off-the-shelf personality assessments should be thrown out with the proverbial bathwater, but that practitioners should be cautious about what they purchase for selection purposes.  Furthermore, the research conducted on the added value of personality to selection processes is not stagnant, and developments in research must be considered.  In this respect, the too-much-of-a-good-thing effect, interaction effects between personality traits, the focus on broad versus narrow traits, the influence of contextual factors, and the importance of others’ ratings have to be taken into consideration by practitioners who are serious about using personality assessments for selection.

4.1 The too-much-of-a-good-thing effect

The too-much-of-a-good-thing effect dispels the notion that excess in a psychological characteristic is always a good thing (Pierce & Aguinis, 2013).  For example, Whetzel, McDaniel, Yost, and Kim (2010) reported a curvilinear relationship between conscientiousness and job performance, where too-high levels of conscientiousness had an unintended negative consequence for job performance.  However, Le, et al. (2011) indicated that the inflexion point at which the relationship between conscientiousness and job performance turns downward depends on the complexity of the task.  Tasks that are less complex require greater speed and, therefore, less persistence and dutifulness.  By contrast, when tasks are more complex, greater accuracy is required, in which case being more dutiful and persistent will count in an employee’s favour (Le, et al. 2011).  Hogan and Hogan (2009) also provide compelling evidence on areas where excesses in eleven personality traits could lead to derailment in the workplace.  These eleven traits, namely excitable, sceptical, cautious, reserved, leisurely, bold, mischievous, colourful, imaginative, diligent, and dutiful, are linked to personality disorders when taken to excess (Hogan & Hogan, 2001).

These findings caution practitioners not to assume that the relationship between “good” traits and job performance is straightforward, but to consider the nature of the role and the unintended negative effects of too-high levels of what might be assumed to be “preferable” traits.  They also highlight that the techniques used to investigate the relationship between personality and job performance should account for non-linear relationships by, for example, conducting polynomial regressions, accounting for moderation effects, or using artificial neural networks (Minbashian, Bright, & Bird, 2010).
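To make the polynomial regression point concrete, the minimal sketch below uses simulated, entirely hypothetical data (the coefficients are illustrative assumptions, not estimates from Whetzel, et al. (2010) or Le, et al. (2011)) to show how a quadratic term can recover an inverted-U trait-performance relationship and locate its turning point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated (hypothetical) data: an inverted-U relationship between
# conscientiousness (z-scores) and job performance, with random noise.
n = 500
consc = rng.normal(0.0, 1.0, n)
perf = 0.5 * consc - 0.3 * consc**2 + rng.normal(0.0, 0.5, n)

# Quadratic (polynomial) regression via ordinary least squares:
# perf = b0 + b1*x + b2*x^2.
X = np.column_stack([np.ones(n), consc, consc**2])
b0, b1, b2 = np.linalg.lstsq(X, perf, rcond=None)[0]

# A negative b2 signals an inverted U; predicted performance peaks at the
# turning point x* = -b1 / (2 * b2), beyond which more of the trait hurts.
turning_point = -b1 / (2.0 * b2)
print(f"b2 = {b2:.2f}, turning point = {turning_point:.2f} SD")
```

A simple linear correlation on such data would understate the trait's relevance; the quadratic term makes the "too much" region visible.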

4.2 Interaction effects between personality traits

Building on the complexity of the relationship between personality and job performance, it is important to note that traits do not predict performance in isolation (Ones, et al. 2007).  For example, where cooperation with others is required, the interaction effect of conscientiousness and agreeableness provides a clearer picture as to why some individuals with the same level of conscientiousness perform better than others in interpersonal settings such as teams (Witt, Burke, Barrick, & Mount, 2002).  Practitioners’ selection decisions should, therefore, not be based on a single trait from a personality questionnaire, but should consider the evidence that supports the interaction of several traits that might be relevant to the requirements of a role.
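As a minimal sketch of how such an interaction is modelled, the example below uses simulated, entirely hypothetical data (the coefficients are illustrative assumptions, not estimates from Witt, et al. (2002)): a moderated regression includes the conscientiousness × agreeableness product term alongside both main effects, so the predicted payoff of conscientiousness differs at low versus high agreeableness.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated (hypothetical) data: performance depends partly on the
# conscientiousness x agreeableness product, so two applicants with
# identical conscientiousness can differ in predicted performance.
n = 500
consc = rng.normal(0.0, 1.0, n)
agree = rng.normal(0.0, 1.0, n)
perf = 0.3 * consc + 0.2 * agree + 0.25 * consc * agree + rng.normal(0.0, 0.5, n)

# Moderated regression: the product term sits alongside both main effects.
X = np.column_stack([np.ones(n), consc, agree, consc * agree])
b = np.linalg.lstsq(X, perf, rcond=None)[0]

# Simple slopes of conscientiousness at -1 SD vs +1 SD agreeableness:
slope_low = b[1] - b[3]
slope_high = b[1] + b[3]
print(f"interaction = {b[3]:.2f}, slope at -1 SD = {slope_low:.2f}, at +1 SD = {slope_high:.2f}")
```

In this simulated setup conscientiousness pays off far more for agreeable individuals, which is the kind of pattern a single-trait reading of a profile would miss.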

4.3 Broader versus narrower traits

A new perspective that is emerging, namely the proposition of meta-traits, suggests that the Big Five traits of personality could be reduced to two meta-traits, which can be arranged in a circumplex model (Strus & Cieciuch, 2017, 2019; Strus, Cieciuch, & Rowiński, 2014).  According to Strus and Cieciuch (2019), the circumplex model provides a more comprehensive integration of personality traits, making its applications more specific and dynamic.  The two meta-traits are alpha (stability) and beta (plasticity) (Strus & Cieciuch, 2017, 2019; Strus, et al. 2014). Alpha refers to the covariance of neuroticism, conscientiousness, and agreeableness, and reflects social self-regulation, or the stability of employees’ emotional, motivational, and social functioning (Strus & Cieciuch, 2017, 2019; Strus, et al. 2014).  Beta, in contrast, encapsulates the covariance between openness to experience and extroversion and reflects an employee’s plasticity, that is, the proclivity to explore and voluntarily engage in new experiences (Strus & Cieciuch, 2017, 2019; Strus, et al. 2014).  Integrity, which, similarly to the meta-trait of stability, is a compound trait consisting of conscientiousness, neuroticism, and agreeableness (Ones, Viswesvaran, & Dilchert, 2005), is reported to be a strong predictor of job performance (Ones, et al. 2007; Schmidt & Hunter, 1998).

On the other side of the continuum, there is a call for narrowing the traits used in predicting performance (Anglim & O’Connor, 2019).  Narrow traits refer to the facets of which a broad factor such as conscientiousness is composed, for example, order, self-discipline, dutifulness, effort, and prudence (Anglim & O’Connor, 2019; Taylor & de Bruin, 2013).  There is evidence that narrow traits offer enhanced predictive validity, especially when narrow aspects of job performance are predicted (Anglim & Grant, 2014, 2016; Dudley, Orvis, Lebiecki, & Cortina, 2006; Pletzer, Oostrom, Bentvelzen, & de Vries, 2020).

Given the opposing calls to both broaden and narrow traits at the same time, the wisdom of Hough, Oswald, and Ock (2015) might provide a meaningful and practical way to discern what is required in the selection process. In this respect, it is important that practitioners determine (1) the breadth of the measure of job performance, (2) how much time is available for assessments, and (3) whether the organisation has become reliant on traditional representations of personality.  Subsequently, the practitioner can strike a balance as to whether broader or narrower traits of personality should be measured (Hough, et al. 2015).  The ideal, however, remains to measure as many aspects of personality as possible in conjunction with clear aspects of performance, and in as much detail as possible (He, Donnellan, & Mendoza, 2019).  In doing so, the practitioner can move between different levels of abstraction when making selection decisions.

4.4 Contextual factors

In the past, personality and its relationship with job performance were viewed as constant across different situations (Huang & Ryan, 2011).  However, as with the relationship between cognitive ability and job performance, the characteristics of the situation might influence the relationship between personality and job performance (Hough & Oswald, 2008).  In the case of cognitive ability, the complexity of tasks (a situational variable) has a practically meaningful effect on the strength of the relationship: when tasks are more complex, the relationship between cognitive ability and job performance is stronger (Schmidt & Hunter, 1998).  In personality, for example, the relationship between conscientiousness and job performance might be higher for more autonomous jobs (Barrick & Mount, 1993; Hough & Oswald, 2008).  Rather than viewing personality as static, it might be meaningful for practitioners to consider the ways in which situations can activate personality traits to the benefit or detriment of job performance in an organisation (Hough & Oswald, 2008; Tett & Burnett, 2003; Tett & Christiansen, 2007).  In order to think about situations in a more informed way, it might be meaningful to consider the effects of five distinct situational features, namely job demands, distractors, constraints, releasers, and facilitators (Tett & Burnett, 2003).

It has also been found that when items in a questionnaire are rephrased to be more context-specific, the predictive validity of personality measures increases (Holtrop, Born, De Vries, & De Vries, 2014; Holtrop, Born, & De Vries, 2014).  For example, instead of phrasing an item as “I follow through with my plans”, an item can be contextualised by phrasing it as “I follow through with my plans at work”.

4.5 The importance of others’ ratings

According to Hogan and Sherman (2020), identity refers to a person’s imagined self, which is loosely based on reality.  By contrast, reputation refers to the “you” that other people know, and is purported to be the more important factor for a productive social life in organised groups (Hogan & Sherman, 2020).  Hogan and Sherman (2020) acknowledge that self-reports contain identity claims, but also indicate that well-constructed self-report items can be highly correlated with important reputation-based outcomes in the workplace.

Oh, Wang, and Mount (2011) and Connelly and Ones (2010) indicate that, if it can be established that another person knows an employee well, others’ ratings do add incremental validity to the prediction of job performance.  However, it might be difficult to obtain others’ ratings in selection processes where candidates are recruited from outside the organisation. Creative ways around this challenge have to be found, such as using personality assessments as part of a reference check (Oh, et al. 2011).

5. CONCLUSIONS

In this position statement, JvR revisits the debate between Morgeson, et al. (2007); Ones, et al. (2007); and Tett and Christiansen (2007) in order to determine the value of off-the-shelf measures of personality in selection processes.  It was evident from meta-analytical research (Ones, et al. 2007; Van Aarde, et al. 2017) that personality plays a valuable role in predicting performance and can, therefore, be a valuable part of selection processes.  However, it is acknowledged that research on personality in the workplace is not stagnant.  Subsequently, (relatively) new developments in the field of personality psychology were considered, namely the too-much-of-a-good-thing effect, the interaction of traits, the focus on broader versus narrower personality traits, the influence of situational variables, and the importance of others’ ratings.  In summary, we believe that personality measures add valuable and essential information in selection.  However, given the dynamic nature of personality research, we should continue to think in an ever more nuanced way about the measurement of personality and its ability to predict job performance.

REFERENCES

Anglim, J., & Grant, S. (2016). Predicting psychological and subjective well-being from personality: Incremental prediction from 30 facets over the Big 5. Journal of Happiness Studies, 17(1), 59–80. https://doi.org/10.1007/s10902-014-9583-7

Anglim, J., & Grant, S. L. (2014). Incremental criterion prediction of personality facets over factors: Obtaining unbiased estimates and confidence intervals. Journal of Research in Personality, 53, 148–157. https://doi.org/10.1016/j.jrp.2014.10.005

Anglim, J., & O’Connor, P. (2019). Measurement and research using the Big Five, HEXACO, and narrow traits: A primer for researchers and practitioners. Australian Journal of Psychology, 71(1), 16–25. https://doi.org/10.1111/ajpy.12202

Barrick, M. R., & Mount, M. K. (1991). The Big Five personality dimensions and job performance: A meta-analysis. Personnel Psychology, 44(1), 1–26. https://doi.org/10.1111/j.1744-6570.1991.tb00688.x

Barrick, M. R., & Mount, M. K. (1993). Autonomy as a moderator of the relationships between the Big Five personality dimensions and job performance. Journal of Applied Psychology, 78(1), 111–118. https://doi.org/10.1037/0021-9010.78.1.111

Bradley, K. M., & Hauenstein, N. M. A. (2006). The moderating effects of sample type as evidence of the effects of faking on personality scale correlations and factor structure. Psychology Science, 48(3), 313–335.

Briner, R. B., & Rousseau, D. M. (2011). Evidence-based I-O psychology: Not there yet. Industrial and Organizational Psychology, 4(1), 3–22. https://doi.org/10.1111/j.1754-9434.2010.01287.x

Brown, A., & Maydeu-Olivares, A. (2013). How IRT can solve problems of ipsative data in forced-choice questionnaires. Psychological Methods, 18(1), 36–52. https://doi.org/10.1037/a0030641

Connelly, B. S., & Ones, D. S. (2010). An other perspective on personality: Meta-analytic integration of observers’ accuracy and predictive validity. Psychological Bulletin, 136(6), 1092–1122. https://doi.org/10.1037/a0021212

Dilchert, S., Ones, D. S., & Krueger, R. F. (2019). Personality assessment for work: Legal, I-O, and clinical perspective. Industrial and Organizational Psychology, 12(2), 143–150. https://doi.org/10.1017/iop.2019.27

Dudley, N. M., Orvis, K. A., Lebiecki, J. E., & Cortina, J. M. (2006). A meta-analytic investigation of conscientiousness in the prediction of job performance: Examining the intercorrelations and the incremental validity of narrow traits. Journal of Applied Psychology, 91(1), 40–57. https://doi.org/10.1037/0021-9010.91.1.40

Dunlop, P. D., Bourdage, J. S., de Vries, R. E., McNeill, I. M., Jorritsma, K., Orchard, M., & Choe, W.-K. (2019). Liar! Liar! (when stakes are higher): Understanding how the overclaiming technique can be used to measure faking in personnel selection. Journal of Applied Psychology. Advance online publication. https://doi.org/10.1037/apl0000463

Guion, R. M., & Gottier, R. F. (1965). Validity of personality measures in personnel selection. Personnel Psychology, 18(2), 135–164. https://doi.org/10.1111/j.1744-6570.1965.tb00273.x

He, Y., Donnellan, M. B., & Mendoza, A. M. (2019). Five-factor personality domains and job performance: A second-order meta-analysis. Journal of Research in Personality, 82, 103848. https://doi.org/10.1016/j.jrp.2019.103848

Hogan, J., Barrett, P., & Hogan, R. (2007). Personality measurement, faking, and employment selection. Journal of Applied Psychology, 92(5), 1270–1285. https://doi.org/10.1037/0021-9010.92.5.1270

Hogan, R., & Hogan, J. (2001). Assessing leadership: A view from the dark side. International Journal of Selection and Assessment, 9(1&2), 40–51. https://doi.org/10.1111/1468-2389.00162

Hogan, R., & Hogan, J. (2009). Hogan Development Survey manual. Tulsa, OK: Hogan Assessment Systems.

Hogan, R., & Sherman, R. A. (2020). Personality theory and the nature of human nature. Personality and Individual Differences, 152, 109561. https://doi.org/10.1016/j.paid.2019.109561

Holtrop, D., Born, M. P., De Vries, A., & De Vries, R. E. (2014). A matter of context: A comparison of two types of contextualized personality measures. Personality and Individual Differences, 68, 234–240. https://doi.org/10.1016/j.paid.2014.04.029

Holtrop, D., Born, M. P., & De Vries, R. E. (2014). Predicting performance with contextualized inventories, no frame-of-reference effect? International Journal of Selection and Assessment, 22(2), 219–223. https://doi.org/10.1111/ijsa.12071

Hough, L. (1998). Personality at work: Issues and evidence. In M. Hakel (Ed.), Beyond multiple choice: Evaluating alternatives to traditional testing for selection (pp. 131–166). Mahwah, NJ: Erlbaum.

Hough, L. M., & Oswald, F. L. (2008). Personality testing and industrial–organizational psychology: Reflections, progress, and prospects. Industrial and Organizational Psychology, 1(3), 272–290. https://doi.org/10.1111/j.1754-9434.2008.00048.x

Hough, L. M., Oswald, F. L., & Ock, J. (2015). Beyond the Big Five: New directions for personality research and practice in organizations. Annual Review of Organizational Psychology and Organizational Behavior, 2(1), 183–209. https://doi.org/10.1146/annurev-orgpsych-032414-111441

Huang, J. L., & Ryan, A. M. (2011). Beyond personality traits: A study of personality states and situational contingencies in customer service jobs. Personnel Psychology, 64(2), 451–488. https://doi.org/10.1111/j.1744-6570.2011.01216.x

JvR Africa Group. (2012). History of personality assessments in the workplace. Retrieved from https://jvrafricagroup.co.za/blog/history-of-personality-assessments-in-the-workplace

Latham, G. P. (2019). Perspectives of a practitioner-scientist on organizational psychology/organizational behavior. Annual Review of Organizational Psychology and Organizational Behavior, 6(1), 1–16. https://doi.org/10.1146/annurev-orgpsych-012218-015323

Latham, G. P., & Locke, E. A. (2006). Enhancing the benefits and overcoming the pitfalls of goal setting. Organizational Dynamics, 35(4), 332–340. https://doi.org/10.1016/j.orgdyn.2006.08.008

Le, H., Oh, I. S., Robbins, S. B., Ilies, R., Holland, E., & Westrick, P. (2011). Too much of a good thing: Curvilinear relationships between personality traits and job performance. Journal of Applied Psychology, 96(1), 113–133. https://doi.org/10.1037/a0021016

Lewin, K. (1952). Field theory in social science: Selected theoretical papers by Kurt Lewin. London, UK: Tavistock.

Li, A., & Bagger, J. (2006). Using the BIDR to distinguish the effects of impression management and self-deception on the criterion validity of personality measures: A meta-analysis. International Journal of Selection and Assessment, 14(2), 131–141. https://doi.org/10.1111/j.1468-2389.2006.00339.x

McAbee, S. T., Landis, R. S., & Burke, M. I. (2017). Inductive reasoning: The promise of big data. Human Resource Management Review, 27(2), 277–290. https://doi.org/10.1016/j.hrmr.2016.08.005

Minbashian, A., Bright, J. E. H., & Bird, K. D. (2010). A comparison of artificial neural networks and multiple regression in the context of research on personality and work performance. Organizational Research Methods, 13(3), 540–561. https://doi.org/10.1177/1094428109335658

Morgeson, F. P., Campion, M. A., Dipboye, R. L., Hollenbeck, J. R., Murphy, K., & Schmitt, N. (2007). Reconsidering the use of personality tests in personnel selection contexts. Personnel Psychology, 60(3), 683–729. https://doi.org/10.1111/j.1744-6570.2007.00089.x

Odendaal, A. (2015). Cross-cultural differences in social desirability scales: Influence of cognitive ability. SA Journal of Industrial Psychology, 41(1), 1–13. https://doi.org/10.4102/sajip.v41i1.1259

Oh, I. S., Wang, G., & Mount, M. K. (2011). Validity of observer ratings of the Five-Factor Model of Personality Traits: A meta-analysis. Journal of Applied Psychology, 96(4), 762–773. https://doi.org/10.1037/a0021832

Ones, D. S., Dilchert, S., Viswesvaran, C., & Judge, T. A. (2007). In support of personality assessment in organizational settings. Personnel Psychology, 60(4), 995–1027. https://doi.org/10.1111/j.1744-6570.2007.00099.x

Ones, D. S., Viswesvaran, C., & Dilchert, S. (2005). Personality at work: Raising awareness and correcting misconceptions. Human Performance, 18(4), 389–404. https://doi.org/10.1207/s15327043hup1804_5

Ones, D. S., Viswesvaran, C., & Schmidt, F. L. (2016). Realizing the full potential of psychometric meta-analysis for a cumulative science and practice of human resource management. Human Resource Management Review, 27(1), 201–215. https://doi.org/10.1016/j.hrmr.2016.09.011

Pierce, J. R., & Aguinis, H. (2013). The too-much-of-a-good-thing effect in management. Journal of Management, 39(2), 313–338. https://doi.org/10.1177/0149206311410060

Pletzer, J. L., Oostrom, J. K., Bentvelzen, M., & de Vries, R. E. (2020). Comparing domain- and facet-level relations of the HEXACO personality model with workplace deviance: A meta-analysis. Personality and Individual Differences, 152, 109539. https://doi.org/10.1016/j.paid.2019.109539

Robie, C., Zickar, M. J., & Schmit, M. J. (2001). Measurement equivalence between applicant and incumbent groups: An IRT analysis of personality scales. Human Performance, 14(2), 187–207. https://doi.org/10.1207/S15327043HUP1402_04

Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124(2), 262–274. https://doi.org/10.1037/0033-2909.124.2.262

Schmitt, N. (2014). Personality and cognitive ability as predictors of effective performance at work. Annual Review of Organizational Psychology and Organizational Behavior, 1, 45–65. https://doi.org/10.1146/annurev-orgpsych-031413-091255

Schmitt, N., & Oswald, F. L. (2006). The impact of corrections for faking on the validity of noncognitive measures in selection settings. Journal of Applied Psychology, 91(3), 613–621. https://doi.org/10.1037/0021-9010.91.3.613

Spector, P. E., Rogelberg, S. G., Ryan, A. M., Schmitt, N., & Zedeck, S. (2014). Moving the pendulum back to the middle: Reflections on and introduction to the inductive research special issue of Journal of Business and Psychology. Journal of Business and Psychology, 29(4), 499–502. https://doi.org/10.1007/s10869-014-9372-7

Strus, W., & Cieciuch, J. (2017). Towards a synthesis of personality, temperament, motivation, emotion and mental health models within the Circumplex of Personality Metatraits. Journal of Research in Personality, 66, 70–95. https://doi.org/10.1016/j.jrp.2016.12.002

Strus, W., & Cieciuch, J. (2019). Are the questionnaire and the psycho-lexical Big Twos the same? Towards an integration of personality structure within the Circumplex of Personality Metatraits. International Journal of Personality Psychology, 5, 18–35. https://doi.org/10.21827/ijpp.5.35594

Strus, W., Cieciuch, J., & Rowiński, T. (2014). The circumplex of personality metatraits: A synthesizing model of personality based on the Big Five. Review of General Psychology, 18(4), 273–286. https://doi.org/10.1037/gpr0000017

Taylor, N., & de Bruin, G. P. (2013). The Basic Traits Inventory. In S. Laher & K. Cockcroft (Eds.), Psychological assessment in South Africa: Research and applications (pp. 232–243). Johannesburg, South Africa: Wits University Press.

Tett, R. P., & Burnett, D. D. (2003). A personality trait-based interactionist model of job performance. Journal of Applied Psychology, 88(3), 500–517. https://doi.org/10.1037/0021-9010.88.3.500

Tett, R. P., & Christiansen, N. D. (2007). Personality tests at the crossroads: A response to Morgeson, Campion, Dipboye, Hollenbeck, Murphy, and Schmitt (2007). Personnel Psychology, 60(4), 967–993. https://doi.org/10.1111/j.1744-6570.2007.00098.x

Tett, R. P., Jackson, D. N., & Rothstein, M. (1991). Personality measures as predictors of job performance: A meta‐analytic review. Personnel Psychology, 44(4), 703–742. https://doi.org/10.1111/j.1744-6570.1991.tb00696.x

Van Aarde, N., Meiring, D., & Wiernik, B. M. (2017). The validity of the Big Five personality traits for job performance: Meta-analyses of South African studies. International Journal of Selection and Assessment, 25(3), 223–239. https://doi.org/10.1111/ijsa.12175

Wax, A., Asencio, R., & Carter, D. R. (2015). Thinking big about big data. Industrial and Organizational Psychology, 8(4), 545–550. https://doi.org/10.1017/iop.2015.81

Whetzel, D. L., McDaniel, M. A., Yost, A. P., & Kim, N. (2010). Linearity of personality-performance relationships: A large-scale examination. International Journal of Selection and Assessment, 18(3), 310–320. https://doi.org/10.1111/j.1468-2389.2010.00514.x

Witt, L. A., Burke, L. A., Barrick, M. R., & Mount, M. K. (2002). The interactive effects of conscientiousness and agreeableness on job performance. The Journal of Applied Psychology, 87(1), 164–169. https://doi.org/10.1037/0021-9010.87.1.164