
SMD’s Prediction Comes True: Rage Against the Machines

We’re continuing to follow up on our “Crystal Ball” prediction series from last year, which outlined upcoming trends and their impact on you, your organization and the entire HR field. Multiple recent findings have borne out our “Rage Against the Machines” prediction. See how below.

Prediction from 2016

Rage Against the Machines – The race to apply automatic analytics (i.e., machine learning) will result in more bad predictions than good ones. Predicting human behavior is complex and difficult and even when done well it can only account for a fraction of variance in outcomes. So, when a company starts telling you who to hire based on the font in the candidate’s resume, you should be VERY skeptical. Unfortunately, some of these companies will find customers to buy these very flawed tools. These looming failures will make organizations slow to adopt all analytics solutions – even the ones that will add significant value.

Proof – The Race & Concern to Adopt

A recent survey¹ from PricewaterhouseCoopers found that 52 percent of CEOs are exploring the benefits of humans and machines working together to add value to the company, and 39 percent are considering the impact of artificial intelligence (AI) on future skills needs.

Many companies, from startups to FedEx, are using AI software to speed up the vetting process when looking at new hires. It was reported in Fortune² that one such software, “… can capture not only so-called book knowledge, but also more intangible human qualities. It uses natural-language processing and machine learning to construct a psychological profile that predicts whether a person will fit a company’s culture. That includes assessing which words he or she favors—a penchant for using ‘please’ and ‘thank you,’ for example, shows empathy and a possible disposition for working with customers—and measuring how well the applicant can juggle conversations and still pay attention to detail.”

This approach raises so many questions:

  • How accurate are these algorithms at identifying “intangible human qualities”? That is, how often does the profile correctly identify a candidate’s qualities, and how is this approach even validated?
  • Are we comfortable screening “out” a candidate based on how often he or she says “please” and “thank you”? The potential for adverse impact based on language seems likely to us.
  • When done well, predictive assessments can explain about 25 percent of the variance in human performance. Are the new “assessment” methods being applied in AI getting ahead of the actual science? Let’s just say the science had better be bulletproof the first time a discrimination case is filed against a company using this approach!
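For context on that figure: “variance explained” is the square of the correlation (validity) coefficient, so 25 percent of variance corresponds to a correlation of roughly 0.5 between assessment scores and job performance. A quick back-of-the-envelope check:

```python
# "Variance explained" is the square of the correlation coefficient r.
# r = 0.5 is roughly the upper end of validity for well-built predictive
# assessments, which is where the "25 percent of variance" figure comes from.
r = 0.5
variance_explained = r ** 2
print(variance_explained)  # 0.25, i.e., 25 percent
```

Put differently, even a very good assessment leaves three-quarters of the variation in performance unexplained.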

Jon Williams, chief of PWC’s global People and Organization practice, said, “Getting people strategy right in a world where humans and machines work alongside each other will be the biggest challenge leaders will ever face. No one can be sure how the world of work will evolve, so organizations must prepare for any scenario.”

Proof – Complex Human Behavior & Prejudices

While AI can be more objective than human-based screening in some cases, the advanced technology can also be too smart for its own good. “Recent research has shown that as machines are developing more ‘human-like language abilities,’ they are also acquiring the biases hidden within our language. The research looked at word embeddings – the numerical representation of the meaning of a word based on the words it most frequently appears with. This has shown that biases that exist in society are also being learned by algorithms. For example, the words, ‘females’ and ‘woman’ were more closely associated with the home and arts and humanities occupations, while the words ‘male’ and ‘man’ were more aligned to math and engineering. This research suggests that AI, unless explicitly programmed to counteract this, will continue to reinforce the same prejudices that still exist in our society today,” states HR consultant Cate Oliver³.
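The word-embedding finding is easy to see in miniature. The sketch below uses tiny made-up vectors – real embeddings are learned from billions of words and have hundreds of dimensions, so these numbers are invented purely for illustration – to show how cosine similarity between word vectors can encode exactly the skew the research describes:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-dimensional "embeddings" (hypothetical values for illustration only;
# real systems like word2vec or GloVe learn these from co-occurrence data).
vectors = {
    "woman":       [0.9, 0.1, 0.3],
    "man":         [0.1, 0.9, 0.3],
    "arts":        [0.8, 0.2, 0.1],
    "engineering": [0.2, 0.8, 0.1],
}

def association_gap(target, attr_a, attr_b):
    """How much more strongly `target` associates with attr_a than attr_b."""
    return cosine(vectors[target], vectors[attr_a]) - cosine(vectors[target], vectors[attr_b])

# A positive gap means "woman" sits closer to "arts" than to "engineering"
# in this toy space -- the kind of skew the bias research measures at scale.
print(association_gap("woman", "arts", "engineering"))
print(association_gap("man", "engineering", "arts"))
```

An algorithm built on such vectors inherits the skew without anyone programming it in, which is the core of the concern.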

No matter the advances in AI, there will always be a place for people in HR. In a recent article¹, Rosemary Haefner, CareerBuilder’s chief HR officer said, “What robots and AI can’t replace is the human element of HR that shapes the company culture, provides an environment for employees built on IQ and EQ, works hand in hand with company leaders to meet business goals and ensures employees have the training and support to thrive. You need living, dynamic people who can navigate the ‘grey’ that do that, not robots that can quickly work through black and white.”

Proof – Be Skeptical

So, with the opportunity to adopt analytic solutions to advance the hiring process in a smart way, how do you manage the risk? Dominic Holmes⁴, partner at Taylor Vinters, says, “In my view, the answer lies in the end-users of AI solutions working together with those who create them. Machines have the potential to make more objective, consistent decisions than humans. They can be more reliable, more accurate and work 24/7 if needed, without getting tired or distracted. However, they are not foolproof and humans may still be required to intervene and manage any unintended outcomes.”

 

1.) Human Resources Director. (2017, May). Can robots work in human resources? Human Resources Director. Retrieved from http://www.hrdmag.com.sg/news/can-robots-work-in-human-resources-236681.aspx

2.) Alsever, J. (2017, May). How AI is changing your job hunt. Fortune. Retrieved from http://fortune.com/2017/05/19/ai-changing-jobs-hiring-recruiting/

3.) Oliver, C. (n.d.). Artificial intelligence systems are reinforcing gender and racial biases. [Web log post] Capital People. Retrieved from http://www.capital-people.co.uk/artificial-intelligence-systems-reinforcing-gender-racial-biases/

4.) Holmes, D. (2017, June). What happens when machines discriminate? Lexology. Retrieved from http://www.lexology.com/library/detail.aspx?g=753df850-1b23-4d51-9157-a9be917e9639


Artificial Intelligence & Machine Learning: The Good, the Bad & the Ugly

For the past 5 years or so, big data, human resources (HR) analytics, and predictive analytics have been new concepts floating around the HR world. Of course, with technology changing at a mind-blowing speed, two new approaches are creeping into the HR profession – artificial intelligence (AI) and machine learning. Let’s look at the benefits and pitfalls of these new techniques.

ARTIFICIAL INTELLIGENCE
AI is the movement toward “smart” machines and computing systems being able to carry out tasks the way that humans would, except much more efficiently¹ – think surgery-performing robots, self-driving cars, even the filter that sends suspected junk mail to your spam folder.

For example, IBM Watson Talent Insights uses predetermined algorithms to surface patterns in the data automatically, without the need for a human to specify in advance which relationships to look for in large datasets².

MACHINE LEARNING
Machine learning is an application of AI to data analysis processes¹. Developers create algorithms (i.e., step-by-step mathematical procedures) that can be applied to data to make “smart” decisions and arrive at specific conclusions. Developers tell the algorithm what to look for and what to do with the information, and the algorithm then completes its analyses without further instruction from the developers.

Google uses machine learning to support its hiring and retention initiatives: algorithms predict which candidates are most likely to succeed in their new role after being hired, and which current employees are likely to want to leave the organization in the future².
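A minimal sketch of the kind of model behind such predictions – here a tiny logistic regression fit by gradient descent on invented tenure/engagement numbers. The features, values, and outcomes are all hypothetical stand-ins; a production system would use far richer data and a vetted library:

```python
import math

# Hypothetical training data: (tenure_years, engagement_score) -> left (1) or stayed (0).
# Values are invented for illustration only.
X = [(1.0, 0.2), (0.5, 0.3), (2.0, 0.4), (6.0, 0.9), (8.0, 0.8), (5.0, 0.7)]
y = [1, 1, 1, 0, 0, 0]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Estimated probability that an employee with features x leaves."""
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

# Fit the weights by plain gradient descent on the log-loss.
w, b = [0.0, 0.0], 0.0
for _ in range(5000):
    gw, gb = [0.0, 0.0], 0.0
    for xi, yi in zip(X, y):
        err = predict(w, b, xi) - yi
        gw[0] += err * xi[0]
        gw[1] += err * xi[1]
        gb += err
    w[0] -= 0.1 * gw[0] / len(X)
    w[1] -= 0.1 * gw[1] / len(X)
    b -= 0.1 * gb / len(X)

# Score a new employee: low tenure + low engagement -> higher estimated risk.
risk = predict(w, b, (1.0, 0.25))
```

The model only learns whatever patterns sit in the training data – which is exactly why the pitfalls below matter.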

PITFALLS
AI and machine learning techniques can provide great value for HR departments because they can eliminate the need for human processing of rote tasks and mitigate the risks of human error or boredom in things like data entry, resume screening, etc.³ These techniques may be especially appealing for organizations that do not have the capacity or capability to conduct in-depth analyses on their own and are looking for quick “data-based insights.” However, without a proper understanding of the analyses behind the scenes and of organizational and employee behaviors, blindly applied algorithms come with real pitfalls.

No One-Size-Fits-All Algorithm
There is no “one-size-fits-all” algorithm that can be used for processes across organizations. Relying on a single algorithm, or even collection of algorithms, to take into account the dynamic features of an organization is risky. A variety of factors are important in understanding employee experiences at work, and algorithms developed at one point in time, or for one organization, may only provide accurate insights for that timeframe or context.

High Correlations: Flawed in Predicting Behavior
Organizations should approach talent management with theory-driven ideas about the factors that affect the employee experience. Most machine learning algorithms look for correlations to “predict” variables of interest. Herein lies the problem: algorithms that chase high correlations are inherently flawed predictors of behavior. Correlation does not imply causation, and a correlation alone is not a prediction, since it says nothing about which variable comes first. For instance, an algorithm designed to monitor turnover risk could find that people whose names begin with “M” are likely to quit after 3 years. Anyone can see this is illogical happenstance: some other factor not captured in the data could be causing these people to quit after 3 years, and the fact that their names all begin with “M” is a coincidence the algorithm mistook for an important pattern. Let’s not forget that part of our role as HR professionals is to mitigate risk (e.g., discrimination), and a purely machine-driven approach introduces significant risks for an organization.
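The “names beginning with M” trap is easy to reproduce: scan enough irrelevant features and one of them will correlate with any outcome by pure chance. A minimal sketch – every number here is random noise, and the feature count and sample size are arbitrary choices for illustration:

```python
import random

random.seed(42)  # fixed seed so the demonstration is repeatable

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy)

# 30 employees, a random stayed/quit outcome, and 200 pure-noise "features"
# (stand-ins for irrelevant attributes such as a name's first letter).
outcome = [random.randint(0, 1) for _ in range(30)]
noise_features = [[random.random() for _ in range(30)] for _ in range(200)]

# Scan every feature for its correlation with the outcome and keep the best.
# Despite the data being pure noise, the winner looks impressively "predictive".
best = max(abs(pearson(f, outcome)) for f in noise_features)
```

With enough features and a small sample, a blind scan will always crown a “winner” – which is why theory-driven hypotheses and holdout validation matter.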

The Human Element
Organizations that rely too heavily on AI or machine learning risk removing the expert human element from the equation. Acting on insights gained from algorithms may cause organizations to ignore how and why these relationships exist in the first place. Skilled and trained researchers are necessary to make these algorithms work for your organization. A trained practitioner, ideally an Industrial/Organizational psychologist, should develop an algorithm using your organization’s data, to account for your organization’s context. At the very least, an expert should closely monitor and interpret the elements going into and out of an automated system to ensure accurate and appropriate conclusions are generated.

Careful!
The HR industry is just beginning to latch onto and understand analytics, and arguably most HR departments are not well advanced in this space. Making the leap to AI and machine learning could be a minefield for organizations that do not have a solid analytic foundation.

 

1. http://www.quantumworkplace.com/future-of-work/artificial-intelligence-in-hr-what-you-need-to-know
2. http://www-03.ibm.com/software/products/en/talent-insights
3. https://www.wsj.com/articles/the-algorithm-that-tells-the-boss-who-might-quit-1426287935