Artificial intelligence is probably older than you think. AI has existed as a concept for more than 70 years,1 and the first models were built in the mid-1950s. While the technology is not brand new, it’s the center of public attention right now. This is especially true regarding the use of AI in personality tests and other talent management applications. We’ve put together this guide to answer some of your most pressing questions about AI, personality tests, and talent management.

Keep in mind that this guide is like a snapshot. It shows what AI is now, how AI is used in workplace assessments, and what the implications for organizations are at one moment in time. The landscape is evolving so rapidly—sometimes hour by hour—that the technology is subject to sudden, significant change. Consequently, in this guide, we’ve emphasized ideas and strategy to help decision-makers navigate personality assessments in the era of AI.

What is artificial intelligence, or AI?

Artificial intelligence, or AI, refers to a computer system that imitates human thinking. Examples of tasks that require humanlike intelligence are perceiving, understanding language, synthesizing information, making inferences, solving problems, and making decisions. Making predictions is another way that an AI can mimic human thought processes. An AI that performs this task analyzes a lot of data and attempts to predict an outcome. It can refine its predictions over time or “learn” how to predict more accurately.

We should review a few essential terms related to artificial intelligence:

  • Artificial intelligence, or AI – An artificial intelligence is a computer system that automates human thought processes.
  • Algorithm – An algorithm is a step-by-step set of instructions or rules for a computer system to solve a problem or complete a task.
  • Machine learning – Machine learning is a type of artificial intelligence in which computer systems learn from data and improve their performance without being explicitly programmed.
  • Natural language processing – Natural language processing is a type of technology that allows computer systems to understand and use human language.
  • Large language model – A large language model is a type of AI technology that uses natural language processing to produce content based on a vast amount of data. ChatGPT, for example, is powered by a large language model.

When many people think of AI, they imagine computers or robots that can speak and act like a human. In reality, most AI systems today are computer applications. They differ from other types of programs or software in how they complete tasks. Modern AI systems learn not by direct programming but by trial and error, which is also one of the ways humans learn. In other words, machine learning uses complex statistical modeling to allow a computer to learn from its errors.
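
To make the distinction between an algorithm and machine learning concrete, here is a minimal sketch in Python. It is purely illustrative, with invented numbers: a hand-written rule versus a model that adjusts itself to shrink its own errors.

```python
# Explicit algorithm: the rule is written directly by a programmer.
def rule_based_score(hours_studied: float) -> float:
    return 50.0 + 5.0 * hours_studied  # fixed, hand-chosen numbers

# Machine learning: the numbers are learned from examples by repeatedly
# nudging them in the direction that reduces the prediction error.
def learn_score_model(hours, scores, steps=5000, lr=0.001):
    a, b = 0.0, 0.0  # start with an uninformed model
    for _ in range(steps):
        for x, y in zip(hours, scores):
            error = (a + b * x) - y  # how wrong is the model right now?
            a -= lr * error          # adjust based on the error...
            b -= lr * error * x      # ...this adjustment is the "learning"
    return a, b

hours = [1, 2, 3, 4, 5]        # invented example data
scores = [56, 60, 67, 70, 74]
a, b = learn_score_model(hours, scores)
print(f"learned model: score = {a:.1f} + {b:.1f} x hours")
```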

Keep reading to learn more about the use of AI in talent management and, specifically, AI in personality tests.

Can AI predict personality?

Yes, AI can predict personality to some extent. How well depends on what we mean by “personality.”

“If we think about personality as our core biology or our reputation, AI can predict that somewhat,” said Ryne Sherman, PhD, chief science officer at Hogan. “But not nearly as strongly as it can predict the kinds of things that we say about ourselves,” he added. AI can analyze various sources of data, such as text, speech, and social media activity, to calculate how someone might respond to questions on a personality assessment. So, to an extent, AI can predict the scores people are likely to get via personality assessment.

Targeted advertisements are a familiar analogy for the predictive ability of AI. If someone searches for camping gear and asks friends for advice about places to eat in Denver, it’s not a huge logical leap to assume they’re planning a camping trip to Colorado. An AI system might then show them ads for high-altitude tents or hiking shoes suitable for mountainous terrain.

In the same way, if an AI has personal data about someone, its machine learning algorithms can analyze that data to predict personality. Recent research showed that when an AI chatbot inferred personality scores from the text of online interviews, the inferred scores were generally reliable.2 The easiest way to find out someone’s personality assessment scores, though, is to ask them to take a personality assessment!
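
As a purely illustrative sketch (this is not Hogan’s method or any vendor’s actual system), here is how a researcher might associate interview text with assessment scores using scikit-learn. The transcripts and scores are invented, and real research would require large, carefully validated datasets:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Toy training data: transcripts paired with a made-up trait score.
transcripts = [
    "I double-check every detail and plan my week in advance.",
    "I love meeting new people and thinking out loud in groups.",
    "Deadlines slip sometimes; I prefer to keep my options open.",
    "I keep careful records and rarely miss a commitment.",
]
trait_scores = [82, 45, 30, 78]  # e.g., a conscientiousness-like scale

# Learn an association between word usage and scores...
model = make_pipeline(TfidfVectorizer(), Ridge())
model.fit(transcripts, trait_scores)

# ...then predict a score for new text. With only four examples this is
# meaningless; it simply shows the shape of the approach.
print(model.predict(["I always finish tasks early and keep tidy notes."]))
```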

“Technology drives many trends in our industry, some of which have more staying power than others,” said Allison Howell, MS, vice president of market innovation at Hogan. “The future of AI is incredibly exciting, but it’s important to remember that the technology is still in its infancy. As we explore potential applications, our commitment to quality and sound science remains a top priority.”

To be successful at prediction, any AI needs to learn from the right data, and it also needs feedback about whether it has made the right associations. If an AI makes a prediction based on incorrect data, the prediction won’t be accurate. That’s why traditional personality assessment should be just one of many factors humans consider when making any talent decision.

How is artificial intelligence used in personality tests?

In personality psychology, artificial intelligence can be used to analyze responses to questions, identify patterns in data, and make predictions about personality characteristics. Whether it should do so raises questions about ethics and regulations, which we address later in this guide.

AI can use data either from personality assessments or from other sources, such as a person’s social media or web search history, to predict an outcome (for example, job performance). Some AI programs can even analyze audio and video to make inferences about an individual’s personality. However, when people make hiring decisions based on AI interviews or AI face scanning, bias is likely.3

One use for AI in personality tests is to help write questions or items for the assessment. Assessment companies could use AI to write questions or agree-disagree statements to identify how much conscientiousness someone is likely to show, for example. The accuracy of an AI’s outputs—in this example, assessment items or job performance predictions—depends on the data it uses for input and on how well its algorithms have been refined.
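
As a hedged sketch of how item drafting might work (an assumed workflow, not a description of Hogan’s or any vendor’s actual process), a developer could prompt a large language model for draft items and then route them to psychometricians for review. This example assumes the OpenAI Python SDK and uses a placeholder model name:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompt = (
    "Draft 5 agree-disagree personality statements intended to reflect "
    "conscientiousness. Use plain, neutral workplace language."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any capable model would do
    messages=[{"role": "user", "content": prompt}],
)

# Drafts like these would still need full psychometric review: pilot
# testing, item analysis, and fairness checks before any real use.
print(response.choices[0].message.content)
```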

Do the Hogan personality assessments use artificial intelligence?

No, Hogan does not use AI in personality tests. “Our assessments are built based on traditional psychometric theories that have been rigorously researched and tested,” explained Weiwen Nie, PhD, research consultant. “This is why the way we build our assessments is the gold standard in the field of personality research.”

Ultimately, our goal in measuring personality is not only to provide insight about individuals but also to predict their workplace performance. Hogan has decades of scientific evidence showing how we achieve these goals.

If an organization claims to use AI in personality tests, but the AI doesn’t use evident algorithms or adhere to reliable psychometric theory, the results are not interpretable. Even if the results of the assessment describe personality characteristics, no one can know for sure whether they are fair or even relevant when the algorithm used to generate them isn’t evident. This is what’s known as the black-box problem. When we don’t know what factors are driving an assessment’s predictions, the results are not useful for talent development—and they are unethical for use in talent acquisition. (More on that later.)

Now, Hogan does take advantage of some benefits of using AI in talent analytics processes. We use natural language processing, or NLP, to help classify job descriptions into job families. Natural language processing also helps us code subject-matter experts’ data when we perform job analyses. In both cases, our subject-matter experts review the results and approve them. AI helps us automate these processes so we can create the best personality profile for a specific job. Using AI saves us time and resources and, in some cases, even improves our analyses.
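
For illustration, the sketch below shows the general shape of NLP-assisted classification with a human-review step, similar in spirit to the process described above. It is a simplified stand-in, not Hogan’s actual system, and the job descriptions and families are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

job_descriptions = [
    "Prepare monthly financial statements and reconcile accounts.",
    "Close new business and manage a pipeline of sales prospects.",
    "Audit ledgers and ensure compliance with accounting standards.",
    "Build client relationships and negotiate contract renewals.",
]
job_families = ["finance", "sales", "finance", "sales"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(job_descriptions, job_families)

# The model proposes a job family; a subject-matter expert approves it.
proposed = classifier.predict(["Forecast quarterly revenue and budgets."])
print(f"proposed family: {proposed[0]} (pending expert review)")
```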

We believe that AI has the potential for more beneficial uses, which we are committed to exploring on an ongoing basis. Our assessments themselves, however, remain based on traditional psychometric theory.

Is it possible to “cheat” on personality tests using AI?

The answer is yes, but doing so offers no advantage. Our research shows that AI systems usually answer personality assessment items with socially desirable response patterns, regardless of the context. For example, even if we prompt the AI to answer as if it were applying for a job as a finance analyst or a salesperson, it responds to the items in the same way.

The obvious responses make it easy to detect AI test results. In fact, Hogan has even built a tool that can determine whether an assessment taker used ChatGPT to complete the Hogan personality assessments. To evaluate the tool’s efficacy, we conducted a study using 100 sets of assessment results simulating the response patterns of ChatGPT. To ensure that the tool would not falsely flag genuine responses, we also tested it on assessment results collected from 512,084 respondents before the emergence of ChatGPT. The results? Hogan’s tool detected 100 percent of ChatGPT responses and flagged zero percent of genuine responses.
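
For illustration only, the sketch below shows the general idea behind detecting uniformly socially desirable response patterns. It is not Hogan’s actual tool, whose design is proprietary, and all numbers, including the threshold, are invented:

```python
import math

# Hypothetical: average item responses (1-5 scale) produced by an AI
# chatbot across many simulated runs, versus one test taker's answers.
ai_profile = [4.8, 4.9, 4.7, 1.2, 4.8, 1.1]  # uniformly "ideal" answers
candidate = [4.0, 3.0, 4.5, 2.5, 4.0, 2.0]   # more human variability

def distance(a, b):
    """Euclidean distance between two response patterns."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

THRESHOLD = 1.0  # made-up cutoff; a real tool would calibrate this
if distance(candidate, ai_profile) < THRESHOLD:
    print("Flag: responses closely match known AI response patterns.")
else:
    print("No flag: responses show typical human variability.")
```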

Aside from being easily detectable, asking a computer program with no personality for help with a personality assessment is misguided. This type of dishonest candidate behavior is likely to be detectable during other stages of the hiring process too.

How can AI be used to improve talent management processes?

The benefits of using artificial intelligence to improve talent management processes are many. The practical applications of AI include informing decision-making in areas such as recruiting, onboarding, performance management, learning and development, and succession planning. AI can summarize text, keep records, compare data, and assist with research, organization, and writing rough drafts.

“The strong suit of AI is in analyzing a large amount of data efficiently and making predictions based on that analysis,” said Chase Winterberg, JD, PhD, director of the Hogan Research Institute. He mentioned that an AI might help manage a large volume of applicants by prioritizing candidates, allowing humans to do more meaningful work instead of tedious, repetitive tasks. Similarly, AI chatbots might handle routine HR inquiries, while redirecting nuanced questions to humans.4 (Keep in mind that there are risks when using data from AI in making talent decisions, but we’ll mention those a little later.)

In talent acquisition, AI can help determine which competencies are most relevant for a job description. It can also help identify which personality characteristics are most important for performance on that job.

In talent development, an AI program might analyze worker time usage and make personalized suggestions for increasing efficiency or streamlining processes. An AI chatbot can even act as an on-demand virtual coach, helping people improve their performance at work. It also could provide customized career recommendations for a given personality profile or offer a reasonable series of steps to reach certain career objectives.

What are the risks of using AI in talent acquisition and talent development?

The risks of using AI in talent acquisition include making decisions using AI-generated information that is potentially biased. AI-driven decisions might inadvertently reinforce existing biases or create new ones, leading to unfair treatment of certain groups of candidates. For instance, an AI might incorrectly assume that a protected characteristic, a certain education level, or specific previous work experience is necessary to perform well in a job, and it might exclude candidates who don’t match its assumptions.

“Effective utilization of AI in talent acquisition requires a deep understanding of the data being used,” said Alise Dabdoub, PhD, Hogan’s director of product innovation. “Advanced statistical methods alone cannot compensate for inadequate research design. It’s crucial to have a comprehensive grasp of the data to avoid potential risks and biases in decision-making.”

The risks of using AI in talent development include a lack of inclusivity and accessibility. If an organization were to use AI for coaching, for instance, the AI might suggest that a person who belongs to a historically marginalized group behave like someone from a group with more historical privilege. Not only is that advice unlikely to be the best route for them, but it also perpetuates systemic biases. AI systems perform tasks by following an algorithmic process, but that process isn’t always visible. Without a way to verify the algorithms, we cannot know for sure how an AI system is using its data.

Using AI in people decisions is perceived negatively by many US workers. Seventy-one percent of US adults oppose employers’ use of AI for making a final hiring decision.5 Even for reviewing job applications, 41 percent oppose employers’ use of AI.5 “There’s a risk of misinformation, confusion, and difficulty in making informed decisions,” Dr. Winterberg said. Talent management professionals must be very selective when using AI as a decision-making aid.

How can talent management professionals mitigate bias and prevent adverse impact when using artificial intelligence?

To mitigate bias and prevent adverse impact when using artificial intelligence, talent professionals can focus on data quality and maintaining transparency.

Focusing on data quality can help mitigate bias and prevent adverse impact with AI systems. If the data are low-quality or insufficiently diverse, then AI systems will produce outcomes that are low-quality or potentially biased. “We want to only consider variables that are job relevant, or important for succeeding in the job,” Dr. Winterberg said.

One way to know if job-relevant data are high-quality is to test or audit the AI system’s outputs. Rigorous AI testing can identify opportunities for improving data to generate an improved result. “Basically, you always need to be auditing AI systems for potential bias,” Dr. Sherman said.
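
One common audit screen is the four-fifths rule from the Uniform Guidelines (discussed in the regulations section below): compare each group’s selection rate to the highest group’s rate. This minimal sketch uses invented numbers:

```python
def impact_ratios(selected: dict, applicants: dict) -> dict:
    """Selection rate of each group divided by the highest group's rate."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Invented example data: applicants and hires per group.
applicants = {"group_a": 200, "group_b": 150}
selected = {"group_a": 60, "group_b": 27}

for group, ratio in impact_ratios(selected, applicants).items():
    status = "OK" if ratio >= 0.8 else "potential adverse impact"
    print(f"{group}: impact ratio {ratio:.2f} -> {status}")
```

A ratio below 0.8 does not prove discrimination, but it is a widely used signal that a selection tool, AI-driven or not, deserves closer scrutiny.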

Maintaining transparency into the decision-making process using AI systems can also help mitigate bias and prevent adverse impact. The need for transparency in any talent management process isn’t new. “Transparency is the cornerstone for building trust and ensuring ethical practices in talent acquisition,” said Dr. Dabdoub. “It is imperative to provide clear evidence that any selection system is job relevant, predictive of performance, and fair.”

If data that are generated by an AI system aren’t transparent, HR leaders should be wary of using them to make decisions in talent management. Organizations should create internal processes for identifying bias and build diverse teams for AI development until the technology meets quality standards.6

What regulations exist around using AI to make talent decisions?

Policymakers around the globe are still debating the best way to regulate the use of artificial intelligence in talent management. It’s challenging to decide how much risk to allow without reducing the benefits that AI can provide. However, laws already exist that apply to any employment decision, whether it’s made by a human or not. Dr. Winterberg pointed out the bottom line: “It’s illegal to discriminate on protected classes.”

We’ve listed several notable regulations here, and many more are being developed. Keep in mind that some items in the following list are best practices, while some are legal requirements:

  • Ethical guidelines from the American Psychological Association state that only qualified individuals should interpret psychological test results, meaning that AI should not be used to interpret assessments.7
  • The Society for Industrial and Organizational Psychology (SIOP) has published best practice recommendations covering the development, validation, and use of all hiring practices, including AI. SIOP also released a statement specific to using AI-based assessments for employee selection.8
  • The European Commission has provided three overarching principles for what makes AI systems trustworthy. Artificial intelligence should be lawful, ethical, and robust.9
  • The Uniform Guidelines are US federal recommendations for complying with Title VII of the Civil Rights Act, which protects employees and applicants from employment discrimination. The guidelines apply to all employment decision tools, including AI.10
  • New York City adopted new rules about required bias audits for automated employment decision tools, which include AI.11

Because regulations vary by jurisdiction, organizations should consult with legal experts to ensure legal compliance.

What are some ethical guidelines for using AI to make talent decisions?

What’s lawful and what’s ethical don’t always overlap. “AI technology can be built for one purpose and be used for another,” Dr. Sherman pointed out. “We’re at a place with AI that’s very similar to when scientists started colliding atoms.”

What makes using AI for talent decisions potentially unethical is the unknown element. This is the aforementioned black-box problem. To recap, different types of AI systems use algorithms that are either evident or hidden. If the algorithms are evident, it is easy for humans to know how the AI made its prediction. If the algorithms are hidden (as if they were inside a black box), we cannot see the steps that the AI took to reach its conclusion. This means the results could be irrelevant or unfair.
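
To illustrate what an evident algorithm looks like in practice, the sketch below fits a simple linear model whose prediction can be traced step by step. The feature values and outcomes are invented:

```python
from sklearn.linear_model import LinearRegression

# Invented example: two job-relevant scores per person and a
# performance outcome.
features = [[5, 1], [3, 4], [4, 2], [2, 5]]
performance = [8.0, 7.5, 7.8, 7.0]

model = LinearRegression().fit(features, performance)

# Every step of the prediction is visible: prediction = intercept +
# weight1 * feature1 + weight2 * feature2. A black-box system offers
# no comparable account of how it reached its output.
print("intercept:", round(model.intercept_, 2))
print("weights:", [round(w, 2) for w in model.coef_])
```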

Common themes among most AI-related ethical guidelines are job relevance and transparency. It’s important to make sure that the data the AI uses are relevant to the job. “It needs to actually be related to performance without having negative outcomes for any group of people who could succeed in the job. That sums up the basic implications for humans,” said Dr. Winterberg. It’s also important for AI use to be transparent in documentation and data privacy policies.12,13 At Hogan, even though our assessments don’t use AI, we provide transparency into our validity and reliability, our logic, and how we predict workplace performance. We can show evidence for anything we do.

“The work we do has a profound impact on people’s lives, which is something we cannot take lightly,” said Howell. “Our clients trust us because our science is best-in-class. AI can help us serve our clients better, but applications absolutely must be developed as ethically as possible.”

The ethical thing to do when using AI is to publicize when and how it affects people. “Ethical considerations in AI usage demand transparency in communicating the impact on individuals,” emphasized Dr. Dabdoub. “It is crucial to publicize when and how AI decisions affect people. Keeping those affected informed is a fundamental aspect of responsible AI deployment.”

How should talent professionals select an assessment?

Organizations need to bring in people who are familiar with AI technology and who understand its potential implications for employees and its risks for the business. Assessment providers should also be able to provide proof that how they are using AI is fair, especially when it comes to AI in personality tests or other tools for making talent decisions.

Unsure about how to evaluate your assessment options? You’re not alone—let us help.

Learn how to select a personality assessment.

Contributors

We thank our contributors, listed here in alphabetical order, for sharing their expertise.

Alise Dabdoub, PhD, is the director of product innovation at Hogan. At Hogan, she has created an automated process for conducting cross-language equivalency of assessments, built norms, and conducted norm-shift impact analyses. She has an interest in quantitative methods, specifically the assessment of test and item fairness, and critical statistical methodology. She received her PhD in IO psychology from the University of Oklahoma.

Allison Howell, MS, is the vice president of market innovation at Hogan, where she leads the marketing and product development teams. She is passionate about leveraging Hogan’s best-in-class research to solve real problems for clients. She holds a master’s degree in science communication from the School of Journalism and Mass Communication at the University of Wisconsin-Madison.

Weiwen Nie, PhD, is a research consultant on the product innovations team at Hogan. He leads the application of natural language processing and machine learning models to automate talent analytics processes. In 2023, he was part of a team that won an elite machine learning competition at the Society for Industrial and Organizational Psychology’s annual conference. He holds a PhD in industrial-organizational psychology from Virginia Tech.

Ryne Sherman, PhD, is the chief science officer at Hogan. He is an expert on personality assessment and data analytics, including the use of artificial intelligence and machine learning with personality assessment. Dr. Sherman has written more than 50 scientific papers, and he is the cohost of the popular podcast The Science of Personality. He received his PhD in personality and social psychology from the University of California, Riverside.

Chase Winterberg, JD, PhD, is the director of the Hogan Research Institute. In this role, he coordinates and communicates research to refine the theoretical foundation and understanding of best practices for implementing Hogan’s solutions to help solve organizational problems. He holds a JD from the University of Tulsa College of Law and a PhD in industrial-organizational psychology from the University of Tulsa.

References

1. Turing, A. M. (1950). Computing Machinery and Intelligence. Mind, 59, 433–460. https://doi.org/10.1093/mind/LIX.236.433

2. Fan, J., Sun, T., Liu, J., Zhao, T., Zhang, B., Chen, Z., Glorioso, M., & Hack, E. (2023). How Well Can an AI Chatbot Infer Personality? Examining Psychometric Properties of Machine-Inferred Personality Scores. Journal of Applied Psychology, 108(8), 1277–1299. https://doi.org/10.1037/apl0001082

3. Harlan, E., & Schnuck, O. (2021, February 16). Objective or Biased. Bayerischer Rundfunk. https://interaktiv.br.de/ki-bewerbung/en/

4. Grensing-Pophal, L. (2022, May 24). How HR Is Using Virtual Chat and Chatbots. SHRM. https://www.shrm.org/resourcesandtools/hr-topics/technology/pages/how-hr-is-using-virtual-chat-and-chatbots.aspx

5. Rainie, L., Anderson, M., McClain, C., Vogels, E., & Gelles-Watnick, R. (2023, April 20). AI in Hiring and Evaluating Workers: What Americans Think. Pew Research Center. https://www.pewresearch.org/internet/2023/04/20/ai-in-hiring-and-evaluating-workers-what-americans-think/

6. Kim-Schmid, J., & Raveendhran, R. (2022, October 13). Where AI Can — and Can’t — Help Talent Management. Harvard Business Review. https://hbr.org/2022/10/where-ai-can-and-cant-help-talent-management

7. American Psychological Association. (2017). Ethical Principles of Psychologists and Code of Conduct. https://www.apa.org/ethics/code

8. The Society for Industrial and Organizational Psychology (SIOP). (2023, January). Considerations and Recommendations for the Validation and Use of AI-Based Assessments for Employee Selection. SIOP. https://www.siop.org/Research-Publications/Items-of-Interest/ArtMID/19366/ArticleID/7327/SIOP-Releases-Recommendations-for-AI-Based-Assessments

9. High-Level Expert Group on AI (AI HLEG). (2019, April 8). Ethics Guidelines for Trustworthy AI. European Commission. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

10. Equal Employment Opportunity Commission. (1978). Uniform Guidelines on Employee Selection Procedures. Federal Register, 43, 38290–38315.

11. New York City Department of Consumer and Worker Protection (DCWP). (2023). Notice of Adoption of Final Rule. https://rules.cityofnewyork.us/rule/automated-employment-decision-tools-updated/

12. Association for Talent Development (ATD). (2022, December 12). The Responsibility of TD Professionals in the Ethics of Artificial Intelligence. HRDive. https://www.hrdive.com/spons/the-responsibility-of-td-professionals-in-the-ethics-of-artificial-intellig/638136/

13. Golbin, I., & Axente, M. L. (2021, June 23). 9 Ethical AI Principles for Organizations to Follow. World Economic Forum. https://www.weforum.org/agenda/2021/06/ethical-principles-for-ai/