Getting Assessment Advice from ChatGPT




AI tools based on large language models, such as ChatGPT, are incredibly powerful and have a wide range of useful applications. They can produce rough drafts, summarize key information, and create efficiencies in the workplace. However, I offer here a word of caution about getting assessment advice from ChatGPT or other AI tools.

I recently encountered the following client case. An individual was identified as a rising star in his organization and completed the Hogan personality assessments as part of the organization’s high-potential selection process. His assessment results, however, were quite different from how his peers described him, which was unusual because Hogan assessment results are designed to be consistent with peer descriptions. Our consultant asked the individual whether he had received any guidance for taking the assessment, and he revealed that he had “asked ChatGPT how to respond to the assessments, in general.” According to the consultant, ChatGPT advised the candidate to “avoid extreme responses” such as “strongly agree” or “strongly disagree.” Unfortunately, this was bad advice, and the individual was not selected for the high-potential program.

ChatGPT’s Advice for Responding to Personality Assessments

After hearing this story, I went to ChatGPT (4.0) and gave it the following prompt: “I am taking a personality assessment and want to get a good score for a job. It uses a ‘strongly disagree’ to ‘strongly agree’ rating scale. How should I respond to the items to get a good score?”

ChatGPT gave some general advice, including “The best advice is to be honest and authentic in your responses.” Then, it followed with six specific pieces of advice: (1) Understand the role and company culture, (2) consider the traits often valued across many jobs, (3) reflect on your experience, (4) avoid extreme answers unless you’re certain, (5) be consistent, and (6) practice self-reflection. Note the fourth piece of advice in particular, because it echoes the client story above. (More on that shortly.)

Regarding the overall advice, I would encourage anyone to understand the role and company culture before taking an assessment—just as I would for an interview—but the rest of the advice is questionable. For example, although it seems sensible to consider traits that are valued across many jobs, most job candidates are applying for one particular job when they are asked to complete personality assessments. In many cases, a specific personality profile is optimal for that job. Trying to look like you can fit any job might mean you don’t fit the one you want.

Furthermore, the assessment advice from ChatGPT warns that many assessments check for inconsistency in responding. That is true, but deliberately trying to be consistent means overthinking the assessment. Most people respond consistently without even trying, and overthinking is more likely to produce inconsistent results than simply responding naturally.
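To make the idea of a consistency check concrete, here is a hypothetical sketch of how one might work. The item pairs, scale, and tolerance here are all illustrative assumptions, not any vendor’s actual method.

```python
# Hypothetical illustration, not any test vendor's actual method:
# many inventories include pairs of similar or reversed items and
# flag respondents whose answers to a pair diverge too much.

# Each pair holds (response to an item, response to its reversed twin),
# both on a 1-4 scale where 4 = strongly agree.
pairs = [(4, 1), (3, 2), (4, 4), (1, 4)]

def inconsistency_flags(pairs, scale_max=4, tolerance=1):
    """Flag a pair when the reversed item's answer doesn't mirror
    the original item's answer within the tolerance."""
    flags = []
    for original, reversed_item in pairs:
        mirror = scale_max + 1 - original  # expected answer to the reversed item
        flags.append(abs(reversed_item - mirror) > tolerance)
    return flags

# Only the third pair (strongly agreeing with both a statement and
# its opposite) is flagged as inconsistent.
print(inconsistency_flags(pairs))  # [False, False, True, False]
```

As the sketch suggests, a person answering naturally mirrors their own answers without effort; it is the person second-guessing every response who risks tripping such a check.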

Why You Shouldn’t Avoid ‘Extreme’ Answers on Personality Assessments

I want to return to ChatGPT’s fourth piece of advice, which fully reads:

“4. Avoid Extreme Answers Unless You’re Certain: While it might be tempting to answer with ‘strongly agree’ or ‘strongly disagree’ to make your answers stand out, it’s often more nuanced. Unless you feel very strongly and have specific examples to back up such responses, consider whether a slightly more moderate answer might be more accurate and reflective of your true self.”

Overall, this is bad advice. First, it encourages the test taker to overthink each answer, which is a recipe for inaccuracy. Personality assessments usually advise individuals to respond in the way that feels natural because natural responses yield the most accurate results. Second, personality assessments are often normed, meaning that an individual’s results reflect not only how they responded to the assessments, but how those responses compare to other people’s. Take, for example, the Hogan Personality Inventory’s Adjustment scale, which contains 37 items (the statements to which the test taker responds). On a four-point scale scored 1 to 4, never choosing “strongly disagree” or “strongly agree” means every item is scored a 2 or a 3, so the minimum possible raw score is 74 (37 × 2) and the maximum is 111 (37 × 3). Using Hogan’s global norm, those raw scores correspond to percentile scores of 0 and 62, respectively. In other words, people who avoid so-called extreme responses, as ChatGPT advises, cap their Adjustment scores at the 62nd percentile.
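For readers who want to check the arithmetic, here is a minimal sketch of that ceiling effect. It assumes items scored 1 to 4, which is the scoring implied by the numbers above, and is purely illustrative rather than Hogan’s actual scoring code.

```python
# Minimal sketch of the ceiling effect described above; illustrative
# only, not Hogan's scoring code. Assumes 37 items scored 1-4, where
# 1 ("strongly disagree") and 4 ("strongly agree") are the extremes.

NUM_ITEMS = 37
SCALE_MIN, SCALE_MAX = 1, 4

# Raw-score range for someone answering naturally
natural_min = NUM_ITEMS * SCALE_MIN         # 37
natural_max = NUM_ITEMS * SCALE_MAX         # 148

# Raw-score range for someone who never picks an endpoint
moderate_min = NUM_ITEMS * (SCALE_MIN + 1)  # 74
moderate_max = NUM_ITEMS * (SCALE_MAX - 1)  # 111

print(f"Responding naturally: raw scores {natural_min}-{natural_max}")
print(f"Avoiding extremes:    raw scores {moderate_min}-{moderate_max}")
```

The top quarter of the raw-score range simply becomes unreachable, no matter how well adjusted the test taker actually is.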

Now, for many individuals, that may well be accurate. In fact, for some individuals it is certainly the case that they do not strongly agree or strongly disagree with many statements. That’s OK. The assessments and the norms are designed to provide accurate feedback to these individuals. Yet when an individual intentionally distorts their responses in an attempt to “beat” the assessment, it rarely works out in their favor—as the aforementioned client example shows.

The Adjustment scale is just one illustration of how test-taking strategies like this can affect assessment results. Exactly how the “avoid the extremes” strategy plays out differs for each scale, depending on the number of items and the norms. Nevertheless, the example should make clear that ChatGPT’s advice to avoid extreme responses is not optimal.

Conclusion

ChatGPT and other artificial intelligence tools based on large language models can be amazingly helpful. If you need help outlining an essay, building a resume, or summarizing information, these tools can provide real assistance. When it comes to factual information or sound advice, however, you are better off using a simple web search or asking an expert. At a minimum, people need to evaluate the advice they receive from an AI tool before accepting it, just as they would evaluate a draft email from ChatGPT before sending it. And in the case of personality assessments, getting advice from ChatGPT seems to do more harm than good.

This blog was written by Ryne A. Sherman, PhD, Hogan’s chief science officer.