Why HR should take care when using artificial intelligence for diversity hiring

Charles Hipps, CEO & Founder, WCN, discusses how HR needs to balance the risks and opportunities of artificial intelligence in diversity hiring

There’s little doubt about the benefit of hiring a diverse workforce.

McKinsey found in one UK study that greater gender diversity on the senior-executive team could positively affect performance: for every 10% increase in diversity, profits rose by up to 3.5%. Organisations know diversity is needed, but often lack the means to ensure that diverse hiring is actually happening within their own talent acquisition programmes.

Big Data helps find the best fit

In recruiting, Big Data reveals which applicants are a better fit for positions within the company by turning skills and work values into measurable scores. It’s not about the name on the CV or the cultural background of an interviewee. Instead, companies can focus on the candidates with the right expertise, experience and potential to be productive within their established teams, provided the humans in the equation set aside their own biases.
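
As a purely illustrative sketch of what such scoring can look like (the field names, weights and similarity measure below are hypothetical, not any particular vendor’s method), a “fit” of this kind might be computed from the overlap between a candidate’s skills and work values and a role profile, with no reference to name or background:

```python
# Hypothetical sketch of data-driven fit scoring. Field names, weights and
# the Jaccard similarity measure are illustrative assumptions, not a
# description of any specific product.

def fit_score(candidate_skills: set, role_skills: set,
              candidate_values: set, team_values: set,
              skill_weight: float = 0.7) -> float:
    """Return a 0-1 fit score from skill and work-value overlap."""
    def jaccard(a: set, b: set) -> float:
        # Share of overlapping items relative to all items mentioned.
        return len(a & b) / len(a | b) if (a | b) else 0.0

    skill_fit = jaccard(candidate_skills, role_skills)
    value_fit = jaccard(candidate_values, team_values)
    return skill_weight * skill_fit + (1 - skill_weight) * value_fit

# Example: the score depends only on overlap, never on name or nationality.
score = fit_score({"python", "sql", "etl"}, {"python", "sql", "spark"},
                  {"collaboration", "autonomy"}, {"collaboration", "mentoring"})
print(f"fit score: {score:.2f}")  # 0.45
```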

Tired humans overlook good candidates

To take just one example, in an increasingly competitive job market, an organisation may receive applications from hundreds of highly qualified, hopeful graduates for just a few vacancies. Often, it will take a disproportionate amount of human effort to sift through them. Crucial experience, context or personal attributes may be lost in the morass of information. Application forms may be divided amongst several people who each take a slightly different approach. Some may not be given due attention, simply because they are considered at the end of a long day. As a result, gifted candidates may be overlooked due to human fallibility or unintentional bias.

AI can be used for an initial review

To meet these challenges, AI systems can be put to effective use in conducting an initial review of applications to produce a shortlist of candidates for interview. One of the questions I am asked most frequently when discussing the adoption of artificial intelligence in recruitment to help boost diversity is: what happens when machines discriminate?

A blog post by White House staff captures this fear perfectly. It cautions: “The era of big data is full of risk. The algorithmic systems that turn data into information are not infallible—they rely on the imperfect inputs, logic, probability, and people who design them. Predictors of success can become barriers to entry; careful marketing can be rooted in stereotype. Without deliberate care, these innovations can easily hardwire discrimination, reinforce bias, and mask opportunity.”

What if machines discriminate?

The reality is that, as with any traditional talent acquisition methodology, balancing the risks and opportunities of artificial intelligence is imperative. In recognising the opportunities that AI brings, we must also be mindful of the possible pitfalls. In particular, workers and job candidates are protected from discrimination related to certain protected characteristics (such as age, disability, sex, race, sexual orientation and religion or belief). When we ask machines to make decisions for us, there remains a risk that they will throw up discrimination issues of their own.

Used well, blind recruitment can help turn difference into a demonstrable strength: protected data is simply not available to the machine as a decision criterion. But it is crucial to keep in mind that, whilst blind recruitment limits the impact of unconscious bias on sifting by removing information that has nothing to do with past success or experience, such as a candidate’s name, nationality or area of residence, it cannot change workplace culture on its own. A degree of personal responsibility is still required.
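
To make the mechanics concrete, here is a minimal sketch of the redaction step, with entirely hypothetical field names: bias-prone attributes are stripped from an application record before it ever reaches the scoring model or the human sifter.

```python
# Minimal sketch of blind-recruitment redaction. The field names are
# illustrative assumptions; the point is that protected or bias-prone
# attributes are removed before sifting, so neither the model nor the
# reviewer ever sees them.

BLIND_FIELDS = {"name", "nationality", "address", "date_of_birth",
                "gender", "religion", "photo_url"}

def redact(application: dict) -> dict:
    """Return a copy of the application with bias-prone fields removed."""
    return {k: v for k, v in application.items() if k not in BLIND_FIELDS}

application = {
    "name": "A. Candidate",
    "nationality": "...",
    "address": "...",
    "degree": "BSc Computer Science",
    "skills": ["python", "sql"],
    "experience_years": 3,
}
print(redact(application))
# {'degree': 'BSc Computer Science', 'skills': ['python', 'sql'], 'experience_years': 3}
```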

Yes, blind recruitment does allow businesses to visibly demonstrate a commitment to a fair and inclusive environment for all, one where the unique insights, perspectives and backgrounds of individuals are valued. But after this, the company must follow through and prove that commitment in practice.

Expect the unforeseeable

In practice, very few employers are likely to knowingly use AI to make decisions that will result in less favourable treatment because of a protected characteristic (known as “direct discrimination”). However, what about unforeseen discriminatory outcomes arising from the use of AI?

For example, a machine may make automated decisions (or influence humans making non-automated decisions) across a large population with a roughly equal gender split, yet inadvertently place women at a particular disadvantage. Unless the approach can be objectively justified as a proportionate means of achieving a legitimate aim, it will constitute unlawful “indirect discrimination”.
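
How might such a disadvantage be spotted? One common illustration is a selection-rate comparison along the lines of the US “four-fifths” rule of thumb, used here purely as an example of ongoing monitoring rather than as UK legal doctrine: if one group’s shortlisting rate falls below roughly 80% of another’s, the process warrants investigation.

```python
# Illustrative adverse-impact check on shortlisting outcomes. The 0.8
# threshold is the US EEOC "four-fifths" rule of thumb, offered here only
# as an example of how to monitor for indirect discrimination.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants if applicants else 0.0

def adverse_impact_ratio(group_a: tuple, group_b: tuple) -> float:
    """Ratio of the lower selection rate to the higher (1.0 = parity)."""
    rate_a = selection_rate(*group_a)
    rate_b = selection_rate(*group_b)
    high, low = max(rate_a, rate_b), min(rate_a, rate_b)
    return low / high if high else 1.0

# Example: roughly equal applicant pools, unequal shortlisting outcomes.
women = (30, 500)  # (shortlisted, applicants)
men = (50, 500)
ratio = adverse_impact_ratio(women, men)
print(f"impact ratio: {ratio:.2f}")  # 0.60 -- below 0.8
if ratio < 0.8:
    print("Warning: possible indirect discrimination; review the process.")
```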

Factoring in reasonable adjustment

Similarly, where employers have a duty to make reasonable adjustments to level the playing field for disabled workers, this would need to be factored into any machine learning processes. There are also questions over who might be liable for any discriminatory conduct or reputational damage.

Therefore, it makes sense to adopt a collaborative approach aimed at spotting issues early, agreeing who is responsible for putting them right, and refining automated processes to avoid repeat mistakes. It is also worth adopting internal guidance for employees who use (or, as the case may be, develop) AI tools, along with an external policy or agreement that sets out clearly how discrimination issues will be managed.
