John,
As Artificial Intelligence (AI) enters every aspect of employee hiring, computers are increasingly not only screening resumes but also conducting interviews and even making actual hiring decisions. The effects on humans can be disorienting and disquieting.
Among the concerns about automating such a personal process as interviewing and getting hired for a job are the obvious questions about human chemistry. What will it be like to work with the humans on a particular job? What is the culture, what are the unwritten expectations, and how well will one get along with one's boss?
But there are also technical concerns that must be addressed in order to prevent the automated exacerbation of existing societal biases and inequities reflected in the previous hiring data the AI tools are trained on.
Relying on faulty input, AI can perpetuate and intensify historic discrimination based on race, gender, age, disability, and other factors. Illinois has passed AI hiring fairness legislation addressing these issues, prohibiting companies from using systems that result in hiring discrimination. Now similar legislation is needed at the national level.
Tell Congress: Ban automated discrimination in Artificial Intelligence-based hiring systems. Introduce and pass AI hiring fairness legislation now!
What does ChatGPT think about all this? Well, actually, we asked, and it turns out ChatGPT is pretty on-target about assessing its own faults. Among the ten concerns it listed, “Bias in training data” was #1 on the list. The full list of concerns it identified:
1. Bias in training data
2. Lack of contextual understanding
3. Inability to handle unpredictability
4. Ethical concerns
5. Limited follow-up questions
6. Technical issues
7. Lack of emotional intelligence
8. Legal and compliance issues
9. Feedback and transparency
10. Lack of a human touch
To address these issues, ChatGPT recommends considering carefully how AI is used in the process. Algorithms should be transparent, biases must be monitored, and human oversight is essential. A human couldn't have said it better.
In addition to perpetuating inequities, the process of being interviewed by a robot falls short in terms of human interaction. It’s difficult for both employer and employee to tell if an individual is a good fit with an organization.
Tell Congress to pass legislation to ensure AI training data biases do not become self-fulfilling prophecies for the future. Prevent AI discrimination in hiring now!
Thank you for keeping the human perspective!
- Amanda
Amanda Ford, Director
Democracy for America
Advocacy Fund