
SMD’s Prediction Comes True: Rage Against the Machines

We’re continuing to follow up on our “Crystal Ball” prediction series from last year, which outlined upcoming trends and their impact on you, your organization, and the entire HR field. Multiple recent findings have borne out our “Rage Against the Machines” prediction. See how below.

Prediction from 2016

Rage Against the Machines – The race to apply automated analytics (i.e., machine learning) will result in more bad predictions than good ones. Predicting human behavior is complex and difficult, and even when done well it can account for only a fraction of the variance in outcomes. So, when a company starts telling you who to hire based on the font in the candidate’s resume, you should be VERY skeptical. Unfortunately, some of these companies will find customers to buy these deeply flawed tools. These looming failures will make organizations slow to adopt all analytics solutions – even the ones that would add significant value.

Proof – The Race & Concern to Adopt

A recent PricewaterhouseCoopers survey¹ found that 52 percent of CEOs are exploring the benefits of humans and machines working together to add value to the company, and 39 percent are considering the impact of artificial intelligence (AI) on future skills needs.

Many companies, from startups to FedEx, are using AI software to speed up the vetting of new hires. Fortune² reported that one such product “… can capture not only so-called book knowledge, but also more intangible human qualities. It uses natural-language processing and machine learning to construct a psychological profile that predicts whether a person will fit a company’s culture. That includes assessing which words he or she favors—a penchant for using ‘please’ and ‘thank you,’ for example, shows empathy and a possible disposition for working with customers—and measuring how well the applicant can juggle conversations and still pay attention to detail.”
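
To make that concrete, here is a toy sketch of the kind of word-choice feature the article describes: counting politeness markers in candidate text. This is purely our illustration – the vendor’s actual features, model, and (crucially) validation evidence are not public, and the marker list and sample transcripts below are invented.

import re

# Invented list of "politeness" markers -- a real vendor's feature set is unknown
POLITENESS_MARKERS = {"please", "thank", "thanks", "appreciate", "sorry"}

def politeness_rate(text):
    """Fraction of tokens that are politeness markers (a naive proxy)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    return sum(t in POLITENESS_MARKERS for t in tokens) / len(tokens)

# Invented example transcripts
samples = [
    "Thanks so much for your time today. Please let me know the next steps.",
    "I finished the task. Send the next one.",
]
for s in samples:
    print(f"{politeness_rate(s):.3f}  <- {s}")

Notice how little is going on here: a raw frequency count becomes a stand-in for “empathy.” Whether that leap is justified is exactly what a validation study would have to show.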

This approach raises so many questions:

  • How accurate are these algorithms at identifying “intangible human qualities”? That is, how often does the profile correctly identify a candidate’s qualities? And how is this approach even validated?
  • Are we comfortable screening “out” a candidate based on how often they say “please” and “thank you”? The potential for adverse impact based on language seems all too likely to us…
  • When done well, predictive assessments can explain about 25 percent of the variance in human performance (see the quick arithmetic after this list). So, are the new “assessment” methods being applied in AI getting ahead of the actual science? Let’s just say the science had better be bulletproof the first time a discrimination case is filed against a company using this approach!
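
For readers wondering where that 25 percent figure comes from, the arithmetic is simple (our gloss, not from the cited sources): variance explained is the square of the validity coefficient r, the correlation between assessment score and job performance. Even a strong, well-validated assessment with r ≈ .50 yields

R² = r² = (0.50)² = 0.25

– that is, 25 percent of the variance in performance, leaving 75 percent unexplained. Any tool claiming to do dramatically better deserves hard questions.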

Jon Williams, leader of PwC’s global People and Organization practice, said, “Getting people strategy right in a world where humans and machines work alongside each other will be the biggest challenge leaders will ever face. No one can be sure how the world of work will evolve, so organizations must prepare for any scenario.”

Proof – Complex Human Behavior & Prejudices

While AI can be more objective than human-based screening in some cases, the advanced technology can also be too smart for its own good. “Recent research has shown that as machines are developing more ‘human-like language abilities,’ they are also acquiring the biases hidden within our language. The research looked at word embeddings – the numerical representation of the meaning of a word based on the words it most frequently appears with. This has shown that biases that exist in society are also being learned by algorithms. For example, the words ‘female’ and ‘woman’ were more closely associated with the home and arts and humanities occupations, while the words ‘male’ and ‘man’ were more aligned to math and engineering. This research suggests that AI, unless explicitly programmed to counteract this, will continue to reinforce the same prejudices that still exist in our society today,” writes HR consultant Cate Oliver³.
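
You can see this effect for yourself. The short sketch below (our illustration, not the researchers’ method – the study used a more formal association test) loads pretrained GloVe word vectors via the gensim library and compares cosine similarities; the model name and word pairs are our own choices.

import gensim.downloader as api

# Pretrained 50-dimensional GloVe vectors (downloads ~66 MB on first run)
vectors = api.load("glove-wiki-gigaword-50")

# Compare how strongly gendered words associate with domain words
pairs = [
    ("woman", "home"), ("man", "home"),
    ("woman", "engineering"), ("man", "engineering"),
]
for a, b in pairs:
    # Cosine similarity between the two word vectors
    print(f"similarity({a}, {b}) = {vectors.similarity(a, b):.3f}")

When the training text links one gender to a domain more often, the similarity scores inherit that skew – exactly the pattern the research describes.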

No matter the advances in AI, there will always be a place for people in HR. In a recent article¹, Rosemary Haefner, CareerBuilder’s chief HR officer, said, “What robots and AI can’t replace is the human element of HR that shapes the company culture, provides an environment for employees built on IQ and EQ, works hand in hand with company leaders to meet business goals and ensures employees have the training and support to thrive. You need living, dynamic people who can navigate the ‘grey’ that do that, not robots that can quickly work through black and white.”

Proof – Be Skeptical

So, with the opportunity to adopt analytics solutions to advance the hiring process in a smart way, how do you manage the risk? Dominic Holmes⁴, partner at Taylor Vinters, says, “In my view, the answer lies in the end-users of AI solutions working together with those who create them. Machines have the potential to make more objective, consistent decisions than humans. They can be more reliable, more accurate and work 24/7 if needed, without getting tired or distracted. However, they are not foolproof and humans may still be required to intervene and manage any unintended outcomes.”


1.) Human Resources Director. (2017, May). Can robots work in human resources? Human Resources Director. Retrieved from http://www.hrdmag.com.sg/news/can-robots-work-in-human-resources-236681.aspx

2.) Alsever, J. (2017, May). How AI is changing your job hunt. Fortune. Retrieved from http://fortune.com/2017/05/19/ai-changing-jobs-hiring-recruiting/

3.) Oliver, C. (n.d.). Artificial intelligence systems are reinforcing gender and racial biases. [Blog post]. Capital People. Retrieved from http://www.capital-people.co.uk/artificial-intelligence-systems-reinforcing-gender-racial-biases/

4.) Taylor Vinters. (2017, June). What happens when machines discriminate? Lexology. Retrieved from http://www.lexology.com/library/detail.aspx?g=753df850-1b23-4d51-9157-a9be917e9639