For Indian Companies: Adopt AI-Based HR Tools, But With Caution
Written by: Mimansa Sidhnath
Artificial Intelligence (AI) is the future, we’re told. Experts say that around 40 percent of jobs will be lost to automation in the next 15 years. According to industry trends, HR seems to be one of the targets on our path to singularity.
AI tools built through machine learning are increasingly replacing the hiring team in large companies. Soon, rather than sitting in front of a panel of interviewers, we will have AI tools assessing our personality traits based on the data we share about ourselves. In a pandemic-ridden world, this may become common reality sooner than we had imagined, or were ready for.
How Do AI-Based Hiring Tools Work?
So, you know the person you send your resume to? Replace that person with a program that humans have trained for recruitment using machine learning algorithms. The hiring tool is fed large amounts of information, which acts as its knowledge bank.
Once it has been trained on all the information the humans deem important, the tool is used to hire people based on behavioural traits, language, educational achievements and so on – the goal, essentially, is to ‘familiarise’ the tool with what counts as ‘normal’, and also ‘desirable’, in prospective employees.
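To make this concrete, here is a minimal, hypothetical sketch (in Python, assuming the scikit-learn library) of what such a tool boils down to: a model trained on past resumes and the company’s past hiring decisions, which then scores new applicants. The data, labels and variable names below are invented purely for illustration; real tools use far richer signals such as video interviews, psychometric tests and structured profiles.

```python
# A minimal, hypothetical sketch of how a resume-screening model is trained.
# All data and labels here are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Historical resumes, labelled with the company's past hiring decisions (1 = hired).
# Whatever patterns those past decisions contain -- good or biased -- become the
# model's working definition of a 'desirable' candidate.
past_resumes = [
    "B.Tech computer science, 3 years Python, captained college cricket team",
    "MBA marketing, managed campaigns, strong communication skills",
    "Diploma in data entry, 5 years back-office operations",
    "M.Sc statistics, machine learning projects, hackathon winner",
]
past_decisions = [1, 1, 0, 1]

screener = make_pipeline(TfidfVectorizer(), LogisticRegression())
screener.fit(past_resumes, past_decisions)

# New applicants are ranked by the probability the model assigns to 'hire'.
new_resumes = [
    "B.A. economics, chess club president, Python certification",
    "B.Tech electronics, 2 years embedded systems experience",
]
for resume, score in zip(new_resumes, screener.predict_proba(new_resumes)[:, 1]):
    print(f"{score:.2f}  {resume}")
```

The key point of the sketch is the training step: the model never sees the company’s stated values, only its past decisions, and it learns whatever those decisions encode.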
AI tools are also increasingly used to free HR officials from the cumbersome task of manually following up on employees and their performance. Recently, Ungender spoke with Kanishka Mallick, HR Lead at Times Internet Network, who mentioned that they have been using an AI bot called Amber that “reaches out to employees at their fixed tenures and chats with them and gives us a pulse of the organisation at various points in time.” AI tools are seen as shifting hiring officials’ focus from non-value-adding tasks to value-adding ones, as L’Oreal describes of its hiring chatbot, Mya.
For example, Mya goes through job applications to filter for the ‘best fit’. At the same time, it answers applicants’ recurring questions about policies, culture and other basics of the company. L’Oreal’s global HR said the chatbot saved the UK team 45 days over a six-month period.
How Are These Tools Changing The Scenario In The Hiring Process?
AI enthusiasts see such hiring tools as a game-changer. They claim that introducing AI can help weed out the biases that come with humans gatekeeping the hiring process, as they have so far. Professionals involved in hiring have been known to give disproportionate importance to how someone looks, their dress sense, body language, speaking skills and so on.
Experts see these new hiring tools as a way to save time and take over the repetitive questioning that comes with the interview process. A common argument in support of AI is that it can be changed or tuned to address biases and eventually remove them, whereas with people, it takes far longer to change deep-rooted mindsets about how society functions.
Is AI Free Of Human Intervention?
The straight answer to this question is: no. Although singularity may not be a very distant future, the social biases of the world we live in are deeply embedded in AI. Why?
Let’s take the example of a hypothetical book to understand this. Say a member of a Khap Panchayat writes a book on what an ideal village should look like. Now ask yourself: could that book really be devoid of directives on how ‘good women’ should behave? Even if we would like to think so, the deep-rooted misogyny and patriarchy that surfaces ever so often is highly likely to find its way into the book’s contents.
Our perceptions shape all that we communicate to the outer world.
Machine learning algorithms are also programmed and written by humans, and a model only knows what is fed to it. Since the selecting, sorting and feeding is done by humans, their biases leak into the code and the data, too. The model can develop its own patterns of ‘thinking’, yes, but those patterns depend wholly on the knowledge it is trained on.
The social cannot be scooped out of the technical, just as the reverse is impossible. We live in a socio-technical world. It is time we normalised the notion of society and technology working in a symbiotic relationship instead of assuming that one is divorced from the other. The technology is within us and we are within the technology.[1]
How Can AI Systems With Biases Impact The Recruitment Process?
In their essay ‘Design Justice, A.I., and Escape from the Matrix of Domination’, Sasha Costanza-Chock narrates an incident at an airport to highlight the biases that make an AI-infused social life harder for anyone who is not a cisgender white person conforming to the gender binary. They describe the matrix of domination that gets embedded in AI systems through the environment they are trained in: the people, their ideologies, their biases.
Amazon recently did away with its AI recruiting system after finding that it discriminated against women applicants. The AI was trained on resumes submitted to the company over the course of 10 years. Since most of those resumes came from men, the AI came to favour patterns that appeared in male applicants’ resumes. This example shows how much the information an AI is trained on shapes its behaviour: the AI absorbed a bias that looms large in the STEM field – the scarcity of women and non-male professionals, which is itself the result of centuries of dominant patriarchal power structures.
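The mechanism is easy to demonstrate. Below is a deliberately tiny, hypothetical sketch (again assuming a recent version of scikit-learn) in which resumes mentioning a women’s organisation happen to fall mostly in the ‘rejected’ pile, much as the skew reportedly looked in Amazon’s historical data. None of the data, names or numbers come from Amazon’s actual system; the point is only to show how a skewed history becomes a skewed model.

```python
# Hypothetical illustration: a model trained on skewed historical decisions
# attaches negative weight to words correlated with the under-hired group.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented resumes and past decisions (1 = hired, 0 = rejected). Resumes that
# mention a women's organisation happen to sit in the 'rejected' pile.
resumes = [
    "software engineer, built distributed systems, chess club",         # hired
    "software engineer, women's coding society lead, built compilers",  # rejected
    "data engineer, kaggle competitions, cricket team",                 # hired
    "data engineer, women's hackathon mentor, kaggle competitions",     # rejected
]
decisions = [1, 0, 1, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, decisions)

# Inspect what the model learned: the weight attached to each word.
# The word "women" picks up a negative weight purely as an artefact of how the
# historical decisions were distributed, not from any job-relevant signal.
for word, weight in sorted(zip(vectorizer.get_feature_names_out(), model.coef_[0]),
                           key=lambda pair: pair[1]):
    print(f"{weight:+.2f}  {word}")
```

Nothing in this toy pipeline is malicious; the bias enters entirely through the training data, which is exactly why the composition of that data matters so much.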
What Should Companies Be Aware Of When They Use AI-Based Tools To Manage Their Workforce?
Ensure D&I In The Tech Team
Only when those who code and train these complex machine learning algorithms come from diverse backgrounds across not just gender but also class, caste, race, ethnicity, age and more can we truly begin to expect AI-based hiring tools to have a positive impact on how professionals are hired and managed. A homogenous group with little to no diversity cannot be expected to stay cognizant of a wide range of biases – there will be obvious slips, and those slips will cost professionals who occupy marginalised social positions.
Reality Checks
Leaders of companies adopting AI-based HR tools, and of the organisations building them, need to stay constantly aware of changing times and adapt accordingly. When Ungender spoke with Aparna Devi Moola of EdGE Networks about pushing for better AI tools in response to changing times and client preferences, she said, “A lot of our clients are coming back to us today, saying, ‘We have D&I policies in place. What is your product doing to complement that?’ If we don’t have that in there, they’re not interested in looking at us.”
Companies need to constantly keep a check on their policies and update them as and when necessary. For that, company leadership will need to ensure their D&I framework is robust enough that the workforce has the psychological safety to raise issues.
Steps To Systematically Tackle Bias In AI-Based Tools
The first step puts the onus on the companies. As more and more companies adopt AI recruiting tools, experts suggest that weeding out biases in AI should be regarded as a part of their Corporate Social Responsibility (CSR).
The second step could be a national-level law. The Algorithmic Accountability Act, introduced in the U.S. House of Representatives, would require companies to audit AI systems for bias before using them in their processes. It is being seen as the first step towards the governance of AI systems, and it would be a step in the right direction if the Indian government came up with similar legislation in the near future.
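What might such an audit look like in practice? One widely used check in US employment-discrimination guidance is the ‘four-fifths rule’: the selection rate for any group should be at least 80 percent of the rate for the most-selected group. The sketch below is a hypothetical illustration of that check applied to an AI tool’s decisions; the function names, group labels, numbers and threshold are invented for illustration and this is not legal advice.

```python
# A minimal sketch of a 'four-fifths rule' disparate-impact check on the
# decisions an AI screening tool has made. All data here is invented.
from collections import defaultdict

def selection_rates(records):
    """records: list of (group, was_selected) pairs from the tool's decisions."""
    selected, total = defaultdict(int), defaultdict(int)
    for group, was_selected in records:
        total[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / total[g] for g in total}

def disparate_impact_report(records, threshold=0.8):
    """Flag any group whose selection rate falls below 80% of the best group's."""
    rates = selection_rates(records)
    best = max(rates.values())
    for group, rate in rates.items():
        ratio = rate / best
        flag = "REVIEW" if ratio < threshold else "ok"
        print(f"{group:10s} selection rate {rate:.2f}  impact ratio {ratio:.2f}  {flag}")

# Hypothetical outcomes from an AI screening tool, grouped by self-declared gender.
decisions = [("men", True)] * 40 + [("men", False)] * 60 \
          + [("women", True)] * 20 + [("women", False)] * 80
disparate_impact_report(decisions)
```

A check like this is only a starting point – it catches blunt disparities in outcomes, not the subtler ways bias enters a model – but running it regularly, and before deployment, is the kind of audit such legislation envisages.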
AI is the most important part of what is being called the ‘Fourth Industrial Revolution’, and singularity is the alleged end product. Whether it will be useful or devastating for us is still very much a contested issue; only time will tell. Meanwhile, while we are still friends with AI and not its servants, the most we can do is use it to make the world a better place than what we have achieved so far: a society riddled with deeply entrenched social biases.
About the author: Mimansa Sidhnath is a freelance writer and Communications Associate at Oorvani Foundation. She likes words and wishes that the vice-versa will come true someday. She can be reached at mimansa.info@gmail.com.
Ungender Insights is the product of our learning from advisory work at Ungender. Our team specializes in advising workplaces on workplace diversity and inclusion. Write to us at contact@ungender.in to understand how we can partner with your organization to build a more inclusive workplace.
Read our insights about diversity, legal updates and industry knowledge on workplace inclusion at Ungender Insights. Visit our Blog.