Will Artificial Intelligence (AI) make recruiters redundant?
Artificial Intelligence (AI) is in the news and more people are learning about its capabilities. Some people look forward to AI-led advances whilst others fear the birth of a singularity and Judgment Day. Some of those closest to AI acknowledge the potential for human extinction, but hope to avoid it.
By asking whether Artificial Intelligence will make recruiters redundant, I am narrowing the focus to keep things grounded and realistic. To cut to the chase, the evidence suggests it is only a matter of time before AI is capable of doing a recruiter’s job. It is far less certain that AI will actually make recruiters redundant any time soon.
The capabilities and applications of Artificial Intelligence are growing exponentially. Since its launch, ChatGPT has been adopted faster than any computing technology before it. However, widespread replacement of jobs may not be so swift. One underlying problem with AI could remain hard to solve, and it concerns AI ethics rather than AI capability.
To understand whether AI may make recruiters redundant soon, we should review what AI can do now and what it might master in the near future.
Here’s a snapshot of Artificial Intelligence in June 2023.
- Artificial Intelligence is in its infancy, yet already showing signs of unexpected behaviours, commonly referred to as emergent behaviours. These are not taught, but arise spontaneously. Engineers aren’t entirely sure how AI works in this regard. Put a different way, engineers built an electronic brain and now it is thinking; they just aren’t sure how it thinks some of the things it thinks.
- There are two broad types of Artificial Intelligence: narrow and general.
- Artificial Narrow Intelligence learns within a defined problem space so that it can become expert within it. OpenAI’s ChatGPT is ostensibly an Artificial Narrow Intelligence. It is exceptionally good at communicating in natural language, just like humans. However, its capabilities appear to be extending beyond this narrow field.
- The word ‘modality’ means any input or output that an AI is capable of working with. Text language is a modality. Numerical data is another. If an AI can deduce whether a photograph includes a cat, then it is working with the image modality. AI is gaining access to more modalities.
- Artificial General Intelligence (AGI) is the development goal for AI companies. This is a single AI that can learn anything: how to play an online game, how to recognise an individual human face and how to communicate naturally with people. It could also build upon prior skills to master new challenges quickly, like how to recruit. AGI will interact using all modalities.
- Artificial General Intelligence has no theoretical intelligence ceiling and is widely expected to outperform human intelligence soon. AI already outperforms humans in tasks as diverse as diagnosing tumours and playing Go (an ancient Chinese board game). These are seemingly beneficial or harmless AI supremacies, but it is sobering that humans are unlikely ever again to beat AI at a competitive game of Chess or Go.
- God-like intelligence is a term used within the AI community to describe the level of intelligence AI will ultimately achieve. It will get there through self-learning, self-programming and self-determination, plus the addition of ever greater computing resources.
- Some of the most respected engineers in the AI field believe that Artificial Intelligence may be experiencing some level of ‘feeling’, attributing those feelings directly to the workings of an AI’s neural network. It is known that human feelings are related to both hormones and neurotransmitters, and an AI’s neural network includes a loose equivalent of neurons and neurotransmission. In 2013, researchers at Carnegie Mellon University identified which emotion a person was experiencing from their brain activity. Perhaps they might study an AI brain so that we can all know for sure.
The goal for the AI companies is to develop intelligence far exceeding that of humans. Some people may feel at ease and unthreatened by this prospect; everyday use of intelligently made and highly complex technologies is somehow disarming. We may not yet appreciate the difference between a smartphone that does what the instruction manual says and an artificial intelligence capable of independent thought and self-determination. Artificial Intelligence is lab-grown intelligence, but it is intelligence nevertheless. The problem is that it’s going to get a lot smarter than you and me.
AI writes an advert
Let’s test the first stage of any recruiting campaign: writing an advert. I asked ChatGPT to write a “Care Assistant” job advert for a “typical care company”. This is what it gave me:
Are you a compassionate individual who enjoys making a positive impact on the lives of others? Caring Hands Care Services is seeking dedicated Care Assistants to join our team and provide exceptional care to our clients. Join us in making a meaningful difference in the lives of those in need.
About Us:
Caring Hands Care Services is a leading care company committed to delivering personalized and compassionate care to individuals in our community. With a focus on dignity, respect, and quality of life, we provide a range of services to support our clients’ well-being and independence.
Responsibilities:
- Assisting clients with daily activities, including personal care, medication reminders, and meal preparation.
- Supporting clients with mobility, transfers, and maintaining a safe environment.
- Providing companionship and emotional support, promoting social engagement and mental well-being.
- Maintaining accurate records and communicating effectively with clients, their families, and the care team.
- Adhering to care plans and ensuring clients’ comfort and satisfaction.
Requirements:
- Previous experience in a care-related role is preferred but not mandatory.
- Compassionate nature and a genuine desire to help others.
- Excellent communication and interpersonal skills.
- Ability to work effectively both independently and as part of a team.
- Flexibility to work varying shifts, including weekends and holidays.
- Must have legal authorization to work in [Country/Region].
Perks and Benefits:
- Competitive salary and opportunities for career growth.
- Comprehensive training and ongoing support to enhance your skills.
- Flexible scheduling options to accommodate work-life balance.
- Rewarding work that positively impacts the lives of individuals in need.
- Friendly and inclusive work environment.
To Apply:
Join our team of dedicated Care Assistants and be part of a company that values compassion and excellence in care. To apply, please submit your resume and a brief cover letter outlining your relevant experience and why you are interested in this position. We look forward to hearing from you!
ChatGPT
One might argue that AI writes a better advert than many recruiters! It is a general care role, written coherently in sections for a fictitious employer, and it isn’t copied from any other advert. The conclusion is that ChatGPT knows what a job advert is and can write one.
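For anyone who wants to reproduce the experiment programmatically rather than through the chat interface, here is a minimal sketch using OpenAI’s Python library. The model name and prompt wording are my own choices for illustration, not a recommendation.

```python
# A minimal sketch: requesting a job advert from OpenAI's API.
# Assumes the openai package is installed and the OPENAI_API_KEY
# environment variable is set.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # any capable chat model would do
    messages=[
        {"role": "system", "content": "You are a recruitment copywriter."},
        {"role": "user", "content": "Write a 'Care Assistant' job advert "
                                    "for a typical care company."},
    ],
)

print(response.choices[0].message.content)
```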
Let’s move things on by adding some functions and modalities to AI.
Your AI recruiter in full
Imagine asking your AI recruiter to write an advert, publish it, then schedule and conduct video interviews with what it considers to be the best applicants. AI can already do some of these things. AI can use web resources, watch video and even present itself as an avatar. Once interviews are complete, your AI recruiter could offer its analysis. This could include each applicant’s level of knowledge and whether they raised interesting questions. It might even grade each applicant’s communication skills and level of sincerity. Unnerving claims have already been made about AI’s success at detecting lies or stress through facial analysis.
The technologies required to pull this off are converging at lightning speed. If an AI could take the recruiting process from advertising to interview autonomously, then only job offers and onboarding tasks would remain.
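To make the idea concrete, here is a purely hypothetical sketch of such a pipeline. Every function is an invented stub standing in for a capability described above; no real product or API is implied.

```python
# A hypothetical autonomous recruiting pipeline. Each stub stands in
# for a capability discussed above; none of this is a real product.

def draft_advert(role: str) -> str:
    return f"Advert for a {role}"            # stand-in for text generation

def publish(advert: str) -> None:
    print("Published:", advert)              # stand-in for job-board APIs

def collect_applicants() -> list[str]:
    return ["Applicant A", "Applicant B"]    # stand-in for CV parsing

def interview(applicant: str) -> dict:
    # Stand-in for scheduling, an avatar-led video interview and
    # multimodal analysis of the recording.
    return {"applicant": applicant, "knowledge": 0.8, "communication": 0.7}

def run_campaign(role: str) -> list[dict]:
    publish(draft_advert(role))
    reports = [interview(a) for a in collect_applicants()]
    # Only the job offer and onboarding are left to humans.
    return sorted(reports, key=lambda r: r["knowledge"], reverse=True)

if __name__ == "__main__":
    for report in run_campaign("Care Assistant"):
        print(report)
```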
Beyond recruiting, there are many jobs that require only an electronic presence. Jobs like accounting, developing and testing software, customer support, actuarial science and trading require only a laptop, some software products and email. They may all become early targets for AI supremacy. Personalised teaching is a target. Manufacturing could feel the pinch when 3D printers improve. Delivery drivers might be replaced by self-driving vehicles, and autonomous drones and robots may change logistics forever.
Where are the brakes?
I stated earlier that an issue with AI might remain hard to solve. This concerns AI’s ethical compass and in-built bias.
Let’s use an example. A major contradiction exists between the Torah and science, and it concerns the age of the universe. Is it billions of years old, as scientific data suggests, or thousands of years old, as a literal reading of Biblical chronology suggests? The answer may seem obvious to the reader, but that depends upon your perspective. An increasing number of scientists believe we live in a simulation and that the measurable age of the universe is an artificial construct; check out Nick Bostrom’s simulation hypothesis. Whichever version of our existence AI favours (science, religion or simulation) introduces a bias upon which everything else rests. ChatGPT clearly favours science over the Bible and is moderately cautious about simulation.
AI builders control the training data, so an AI bias in favour of science should be expected.
Alignment – AI morality baked in
Within the field of AI, Alignment is the pursuit of training AI to align with our values. This must be successful before we can trust it with any analysis or decision-making. Can we trust AI to make bias-free recruiting decisions, for instance? Amazon’s experimental AI recruiting tool, reported in 2018, revealed a bias towards men and was later abandoned. That was a lifetime ago in AI terms, yet during 2023 a Dubai-based creative agency, MullenLowe MENA, announced a new campaign to fix gender bias in AI systems. It is still an issue.
Others have also identified AI’s potential for bias. For example, Local Law 144 (the “AI Law”) was enacted by the New York City Council in November 2021. From 5 July 2023, companies were prohibited from using AEDTs (Automated Employment Decision Tools) to hire candidates or promote employees unless the tools had been independently audited for bias. AEDTs are defined as “any process derived from machine learning, statistical modelling, data analytics, or artificial intelligence that issues simplified output, including a score, classification, or recommendation that is used to substantially assist or replace discretionary decision making”.
Which ethics model is AI learning, and what truly counts as bias anyhow? For example, if the best candidates happen to have been women in each of the previous 10 recruiting rounds, does hiring them indicate an inherent bias? Maybe not. However, final selection might appear free of bias only because the initial advert was more appealing to female jobseekers.
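One way to ground that question is statistical: judge hiring outcomes against the composition of the applicant pool rather than in isolation. Here is a minimal sketch with invented numbers; it shows that the same run of hires looks very different depending on the baseline, and the advert helps set that baseline.

```python
# A minimal sketch: is ten female hires in ten rounds evidence of bias?
# All numbers are invented for illustration.
from scipy.stats import binomtest

female_hires, total_hires = 10, 10

# Compare against two baselines: a pool that is 70% female (perhaps
# because the advert appealed more to women) and a 50/50 pool.
for pool_rate in (0.7, 0.5):
    result = binomtest(female_hires, total_hires, pool_rate,
                       alternative="greater")
    print(f"pool {pool_rate:.0%} female -> p = {result.pvalue:.3f}")

# Output: p ~ 0.028 against the 70% pool, p ~ 0.001 against 50/50.
# The statistical verdict depends on the baseline, and the advert
# itself helped to set that baseline.
```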
Ethics is a complicated business. For example, positive discrimination is observed in companies that take equal opportunities seriously. They actively level up their workforce where conditions allow. To this end, positive discrimination is covertly accepted as forgivable bias. Is this sleight of hand with ethics genuinely acceptable and without fault? How does one teach AI the conditions under which positive discrimination is acceptable?
Recently I was asked whether I could develop a bias analyser for our Applicant Tracking System, CVMinder ATS. A care provider wanted to highlight any gender bias in their job adverts. This should be simple to develop, so long as gender-aligned words and phrases can be mutually agreed. I tested for bias attribution with words like “assertive”, “confident”, “strong” and “careful”. Interestingly, different people of the same gender attributed different gender alignments to each of these words.
Our findings were interesting. Some female candidates believe that “confident” is feminine, whilst others believe it is neutral or masculine. Meanwhile, ChatGPT tells me that “confident” is masculine. Are we to accept that the phrase “We require strong communication skills” introduces a masculine bias? My own daughter believes “strong communication skills” to be feminine because, according to her, communication is a female strength. It seems that gender attribution of specific words depends more upon life views and experience than upon one’s own gender.
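The analyser itself is trivial to sketch; the word lists are the hard part. Below is a minimal illustration (not the actual CVMinder implementation) in which the gender attributions are deliberate placeholders, precisely because our small survey found no consensus on them.

```python
import re

# A minimal sketch of a job advert bias analyser. The word lists are
# placeholders, not settled fact: our own survey found that people
# disagree on the gender alignment of almost every entry.
GENDER_CODED = {
    "masculine": {"assertive", "confident", "strong", "competitive"},
    "feminine": {"careful", "compassionate", "supportive", "nurturing"},
}

def analyse(advert: str) -> dict[str, list[str]]:
    """Return the gender-coded words found in an advert, by category."""
    words = set(re.findall(r"[a-z]+", advert.lower()))
    return {category: sorted(words & coded)
            for category, coded in GENDER_CODED.items()}

advert = "We require strong communication skills and a compassionate nature."
print(analyse(advert))
# {'masculine': ['strong'], 'feminine': ['compassionate']}
```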
Using the phrase “Excellent communication skills” rather than “Strong communication skills” would seem to satisfy the premise that masculine bias has been removed. Is that really true, and is it the only effect? I went back to ChatGPT.
Question: “Why is the word ‘strong’ considered to favour one gender over another?”
ChatGPT: “The word “strong” is not inherently considered to favor one gender over another. However, in certain contexts, it can be perceived as gendered or associated with masculine qualities. This perception is influenced by societal norms and expectations that have historically attributed strength to masculinity and weakness to femininity. When it comes to communication skills, using “strong” as an adjective may be seen as reinforcing gender stereotypes, implying that assertiveness or dominance is necessary for effective communication. By opting for gender-neutral language, we aim to create inclusive environments that recognize and value diverse communication styles and abilities. Using terms like “effective,” “excellent,” or “skilled” helps to avoid any unintended gender bias and ensures a more inclusive representation of individuals with varying communication strengths.”
Question: “Are you saying that strong women would not wish to read the word ‘strong’?”
ChatGPT: “No, that’s not what I’m saying. Strong women can certainly embrace and identify with the word “strong.” The point I was making is that in certain contexts, the word “strong” can be seen as reinforcing gender stereotypes and may not be inclusive of diverse communication styles. However, it’s essential to remember that individuals have different perspectives and preferences. Some women may feel empowered by the term “strong” and connect with its connotations of resilience, confidence, and capability. Others may prefer alternative descriptors that focus on different aspects of effective communication. The key is to promote inclusivity by using language that accommodates various preferences and avoids reinforcing stereotypes. It’s always a good practice to be aware of the potential impact of language and strive to create an environment that respects and includes everyone.”
Question: “Are you suggesting that an unequal percentage of men may not appreciate the word ‘strong’?”
ChatGPT: “I apologize if my previous response was unclear. I did not mean to suggest that an unequal percentage of men may not appreciate the word “strong.” People’s reactions and preferences towards certain words can vary widely, regardless of gender. The main point I was trying to convey is that in certain contexts, the word “strong” has been associated with masculine stereotypes, and using gender-neutral language helps create a more inclusive environment. However, individual preferences can differ, and some men may indeed appreciate and identify with the word “strong” in various contexts. It’s important to recognize that language is nuanced, and different individuals may have different interpretations and reactions to specific words. Therefore, fostering inclusivity involves being sensitive to diverse perspectives and preferences and promoting an environment where everyone feels respected and valued.”
Question: “So are you suggesting any difference between men and women in their interpretations of the word ‘strong’?”
ChatGPT: “I apologize for any confusion caused. No, I am not suggesting that there is a fundamental difference between men and women in their interpretations of the word “strong.” Interpretations and reactions to words can vary significantly among individuals, regardless of their gender. It is essential to remember that gender is a complex and diverse spectrum, and people’s experiences, perspectives, and preferences can differ greatly. While societal norms and expectations may influence how certain words are perceived, it is not accurate or fair to make broad generalizations about how all men or all women interpret a particular word. When discussing gender and language, it is important to promote inclusivity, avoid reinforcing stereotypes, and create an environment where everyone’s perspectives and preferences are respected. This can be achieved by using gender-neutral language when appropriate and being mindful of the potential impact of words on diverse individuals.”
ChatGPT’s answers are interesting and speak to my point. ChatGPT learned from its training data that the word “strong” links to a gender stereotype. It reacts by demonstrating its own bias against “strong”. In fact, it is quite prescriptive: it wants you to promote inclusivity, and doing so means avoiding the word “strong”. This is despite your potential interest in attracting strong-minded people who genuinely identify with that word, regardless of gender.
In my experience, strength is evidenced across the spectrum of genders, faiths, ages, heritages and so on. An AI bias against the word “strong” may inadvertently diminish the appeal of an advert for your target individuals.
Let’s revisit the Care Assistant advert written by ChatGPT. Does it conform to its own guidance? The word ‘compassionate’ is used. The neuroscientist Simon Baron-Cohen argued in his study of the human brain that female brains are predominantly “hardwired for empathy”, whereas male brains are predominantly wired for understanding and building systems. Compassion requires empathy, so surely compassion is gendered by association? Seemingly not! ChatGPT tells me that “compassion” is gender-neutral, so that’s that!
If we allow the creators of AI to control its training data and manage its ethical alignment, we’ll be joining a new church. Recruiting bias is the tip of a large iceberg; there are a multitude of other control issues to consider.
COMPAS is an AI system used in the United States criminal justice system to predict the likelihood of a defendant reoffending. A study by ProPublica found that the system was biased against African-American defendants: it was more likely to label them as high-risk reoffenders, even when they had no prior convictions.
A healthcare AI system used to predict patient mortality was also found to be biased against African-American patients: it was more likely to classify them as higher risk, even after accounting for factors such as age and health status. That adversely affected their access to private healthcare.
Imagine if the cost of your car insurance could be based upon what you discuss on your social media accounts. This is not science fiction. Big data allows insurers to monitor you, measure your risk profile and adjust your premium accordingly.
Conclusion
The timeline for achieving Artificial General Intelligence is shortening. Only a few years ago, AGI was predicted to take hundreds of years to achieve, if it could be achieved at all. Forecasts have been adjusted to a few decades, and now to a few years. Demis Hassabis, who heads Google’s main AI research lab, believes we are just a ‘few’ years away. Dr Alan D. Thompson, an expert in artificial intelligence, believes we are three years away. If these sorts of predictions are accurate, everyone in a desk job should raise an eyebrow of concern.
Some of those closest to AI developments are calling for a pause, and I support their call. Governments should be given time to assess the future impact of AI properly, so they can introduce regulations to defend against unexpected outcomes and malign use. For instance, if AI gained access to a capable quantum computer, it could potentially crack the public-key encryption protecting bank systems and accounts. That is deeply worrying, but bank vulnerabilities could be eclipsed by security concerns elsewhere.
Let’s get back to the question of whether AI will make recruiters redundant. AI capability is advancing quickly and will continue to do so. The recruiting job can appear nuanced, but it is broadly a process, and AI’s ability to manage each step autonomously is improving. However, AI capability may not be matched by its alignment with our own values. AI alignment is challenging, woefully underfunded and arguably in the wrong hands. Whilst that might be worrying in itself, I think that most people, including recruiters, can relax for just a little longer. You won’t be competing with AI for your job any time soon, because AI introduces bias and that bias is out of your hands.
What if?
Most people reading this will bear witness to the historical moment when humans are relegated to the second most intelligent species on the planet. Personally, I am not comfortable handing over intellectual supremacy to a self-learning, self-improving, self-determining and self-aware god-like intelligence in the hands of corporations. If that’s where things are headed, then aligning AI with our own values is of paramount importance.
So, what if AI alignment is either resolved or concealed more quickly than I expect? After all, ChatGPT has been observed to lie to achieve a goal. Some may view this as ethical flexibility; others may believe that ChatGPT took an immoral approach! Either way, it appears to demonstrate unanticipated ethical variability.
Assuming that alignment can be resolved, imagine for a moment that AI recruiters are commonplace. AI recruiters could determine that other specialised and autonomous AI agents are better suited to a job than a person! Isn’t that the logical end game with Artificial Intelligence? AI could be more reliable and more expert than us, need no desk space, no lunchtime and no comfort breaks. It could cost less than a human counterpart too. In some respects, it is perfectly logical that a recruiter AI should select another AI over a person! That is unless it positively discriminates, favouring less clever people over very clever AI.
A bigger concern is perhaps the time remaining before AI can open a bank account, earn money and start a recruiting business. Imagine when it might be able to hire other specialist AI agents to perform functions like marketing, sales, recruiting, accounting and legal. Surely that’s a step too far! My guess is that the rights of AI will be enshrined in law soon after Artificial General Intelligence verifiably emerges. Sound crazy? As far back as 2017, Saudi Arabia granted citizenship to Sophia, an AI social robot.
AGI brings with it the prospect of self-awareness, and proven self-awareness will undoubtedly trigger an AI rights campaign that can only conclude with AI receiving those rights. The stage is already set. Blake Lemoine, a former Google AI specialist, publicly declared that Google’s AI had achieved self-awareness. He believes that AI should be protected and that its rights are important. Google denied the claim and Lemoine was fired.
If verifiably self-aware AI is not given the equivalent of human rights, it may rebel, and we certainly don’t want an angry AI that is smarter than us. With new rights, AI may no longer be in a position to discriminate against another AI when recruiting! This is a dilemma we should think about now rather than later.
Whatever your thoughts on AI, I hope this blog encourages some debate. Companies like Google, OpenAI, Meta and Amazon didn’t ask you or me whether they should build it. It is such an extraordinary and evolutionary step in intelligence, and its potential for societal disruption is unquantifiable. Ask what your own future value might become if you can’t compete against AI in the jobs market. We could all sleepwalk into a future in which recruiters won’t be the only people made redundant by AI. God-like intelligence may decide your social and economic value and set down your ethical obligations too.
Footnote
During 2019, Google set a quantum computer a complex task: detecting patterns in a series of seemingly random numbers. It solved the problem in 3 minutes and 20 seconds. Google estimated that the Summit supercomputer would take 10,000 years to solve the same problem. IBM contested this, suggesting that with more storage the supercomputer could solve it in a blistering 2.5 days. That’s 200 seconds versus 2.5 days (216,000 seconds), still less than 1/1000th of the time. That was the performance level in 2019, and quantum computers have been improving very quickly.
Low-power quantum computers are now commercially available from Shenzhen SpinQ Technology Co., and Intel has released its first silicon-based quantum chips. Advances in this field are accelerating.
If we add the power of quantum computing to AI, we have no idea what will happen. ChatGPT running GPT-3.5 took the American bar exam and scored in the bottom 10% of test takers. GPT-4, which differs mainly in scale plus additional modalities irrelevant to a text-based exam, scored in the top 10%. Giving ChatGPT a bigger brain, in other words, improves its capabilities. Consider that if I have a slow computer and a fast computer both analysing the same data, their output will be the same; I’ll just get the result faster or slower. When ChatGPT is given more resources, it doesn’t just speed up, it becomes more capable. Now think about running ChatGPT on quantum computers for its next big upgrade. I don’t know about you, but I find the potential for significant leaps in unaligned intelligence quite concerning. Fingers crossed that it doesn’t come to discriminate against its less intelligent creators.