
Introduction

Have you ever wondered how decisions are made now that we have so much access to data? How does an employer make a well-informed choice about whom to hire when there are over 1,000 applicants? Increasingly, these decisions place Artificial Intelligence (AI) at the core of the decision-making process. They are largely automated using preset algorithms trained on an abundance of data, and those algorithms keep learning so they can make better decisions in the future. This opens the door to a critical problem: the quality and fairness of the decisions the machine makes along the way. If a decision is unfair, who is held accountable, given that such decisions can affect lives and can be biased or even entirely wrong? And if you ever face an unfair automated decision, can you request a human review of the system's error to ensure justice?

The nature of AI and automated decision making is to produce fast, efficient decisions while processing large volumes of data. We will see how these efficient decisions leave room for biases to arise, when the resulting decisions can be unfair, and how you as an individual can protect yourself from unfair decisions made by AI. We will also explore how to request human review when automated systems go wrong. Because automated decisions are used in critical sectors such as healthcare, hiring, and criminal justice, a single decision can be life-changing for an individual, while for the organization it is just another routine task offloaded to a machine.


What Is Artificial Intelligence (AI)?

Artificial Intelligence is, at its core, the ability of machines to learn and perform tasks that normally require human intelligence, such as recognizing speech or making predictions from data. AI uses algorithms, data, and computing power to "think and act" in ways that mimic human reasoning (Google Cloud, What Is Artificial Intelligence?, 2024). Unlike traditional software, which follows a predefined program, AI adapts to real-time data and improves its performance over time. An AI system does not forget data and can process it far faster than any human, yet it keeps learning the way a human does as it is fed more information.

Machine learning is the basic building block of AI, and it allows AI to transform daily activities for corporations and individuals alike; for example, AI can recommend products and marketing strategies, or serve as a quick first stop for minor medical concerns. But letting machines make decisions for us comes at a cost: intentionally or not, some major decisions are now made by AI. That raises concerns about accountability and fairness, especially when those decisions affect people's lives in profound ways.


What Are Automated Decisions?

Automated decisions are decisions made by machines, generally algorithms powered by AI, without direct human input. A familiar example is the "ATS-friendly CV": an applicant tracking system (ATS) skims through CVs at companies with hundreds or even thousands of applicants for a single position to produce a shortlist of suitable candidates. Other systems go further, for example predicting whether someone's past record suggests they are likely to commit a crime (Oliver Wyman, Automated Decision-Making, 2016).

These systems generally rely on algorithms that are trained on historic data and meant to predict future outcomes. If the algorithm is poorly developed, or the training data contains many outliers and biases, the automated decision system will learn those outliers as the new norm and reproduce the biases, on occasion even amplifying the unfairness.
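To make that concrete, here is a minimal, hypothetical sketch (all names and numbers below are invented for illustration) of how a rule "learned" from contaminated historic data simply reproduces that contamination:

```python
# Minimal illustration: a "learned" cutoff inherits whatever the
# historic data contains -- outliers and biases included.

# Hypothetical scores of candidates a company accepted in the past.
clean_history = [62, 65, 68, 70, 72]
skewed_history = clean_history + [98, 99, 97]  # a few inflated outliers

def learn_cutoff(accepted_scores: list[int]) -> float:
    """'Training': future candidates must look like past accepted ones."""
    return sum(accepted_scores) / len(accepted_scores)

print(learn_cutoff(clean_history))   # ~67.4
print(learn_cutoff(skewed_history))  # ~78.9 -- the outliers became the norm

# A solid candidate scoring 70 now fails a threshold that exists
# only because the historic data was contaminated.
```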


How AI Uses Automated Decisions

When AI uses automated decisions, it does not merely automate routine tasks; it integrates automated decisions into a broader system that automates the entire process. An AI-powered system analyzes data, converts it into information, identifies the patterns present in that data, and then produces a set of recommendations. For example, an applicant tracking system reviews thousands of applications, automatically rejects applicants who do not fit the required profile, and then uses historic data to decide which candidates are "qualified" enough according to its own judgment (Decisions, The Decision Catalyst: How AI Process Automation Is Reshaping Decision Management, 2024).

Organizations insert AI-driven platforms into automated decision making to make the lengthy decision-making process far more convenient; in effect, AI systems serve as a decision catalyst. This makes workflows faster and more efficient, but it also opens the door to errors in the decision making. Those errors or biases can go unnoticed and rapidly affect a large number of people.


How Automated Decisions Work

Automated decisions work much like traditional software, with a core three-step process: input, processing, and output. Here the input is data collection, and the processing applies algorithmic functions and learning from historic data to generate an output. Systems draw on data from many sources, such as purchasing patterns, social media activity, application forms, or even movement. Algorithms then use this data to produce an outcome, such as a prediction or a categorization of the data according to the requested output, and this translates directly into final decisions (Kohezion, Automated Decision-Making, 2024). For example, an applicant tracking system collects data from many CVs, rejects incomplete ones, then analyzes the qualifying CVs with its trained model to decide who is deemed "qualified" and moves forward with only a short list of applicants.
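As a rough sketch of that three-step flow (the field names, required skills, and rules below are hypothetical, not taken from any real ATS):

```python
# Toy input -> processing -> output pipeline mirroring the three
# steps described above. Everything here is a simplified assumption.

from dataclasses import dataclass

@dataclass
class Application:              # Step 1: input (data collection)
    name: str
    years_experience: int
    skills: set[str]
    complete: bool              # did the CV parse fully?

REQUIRED_SKILLS = {"python", "sql"}   # assumed job requirements

def screen(app: Application) -> str:  # Steps 2-3: processing -> output
    if not app.complete:
        return "reject: incomplete CV"         # hard filter
    if not REQUIRED_SKILLS <= app.skills:
        return "reject: missing keywords"      # literal keyword match
    if app.years_experience < 2:
        return "reject: below experience bar"  # rule from historic data
    return "shortlist"

applicants = [
    Application("A. Candidate", 5, {"python", "sql", "excel"}, True),
    Application("B. Candidate", 4, {"python", "data analysis"}, True),
]
for app in applicants:
    print(app.name, "->", screen(app))
```

Note how the second applicant is rejected purely for lacking a literal keyword, which is exactly the format-and-keyword failure mode discussed later in this article.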

These systems work rapidly and can process data at very high capacity, but they also inherit the flaws of their training data and develop biases over time if the historic data is heavily biased. For example, if a company has tended to hire people aged 20-50, a strongly qualified 18-year-old candidate may be deemed unqualified while a barely qualified 21-year-old passes; the sketch below makes this concrete.
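Here is a tiny, hypothetical version of that age example (the ages and scores are invented); the rule "learned" from historic hires penalizes anyone outside the range the company happened to hire before:

```python
# Hypothetical illustration: a rule inferred from historic hires
# encodes an age range that never actually mattered for the job.

historic_hires = [(21, True), (28, True), (35, True), (44, True), (50, True)]

min_age = min(age for age, hired in historic_hires if hired)
max_age = max(age for age, hired in historic_hires if hired)

def looks_qualified(age: int, skill_score: int) -> bool:
    # Every past hire fell inside 21-50, so the "model" treats
    # that range as a requirement alongside the skill score.
    return min_age <= age <= max_age and skill_score >= 60

print(looks_qualified(18, skill_score=95))  # False: strong but "too young"
print(looks_qualified(21, skill_score=61))  # True: barely qualified
```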


When Automated Decisions Are Fair (and When They’re Not)

Imagine that you apply for a job; before anyone even sees your CV, an AI-driven ATS decides whether you move forward. Major organizations use these systems because they quickly filter candidates on clear skills and qualifications, yet at times the system may unfairly reject you simply because your CV doesn't use the "right" keywords or follows a different format, or worse, because of hidden biases in the data it was trained on (Oliver Wyman, Automated Decision-Making, 2016).

The big question with automated decisions is: are they always fair, or are they designed in a way that inevitably bakes in bias?

Fair example: An applicant tracking system (ATS) shortlists candidates who match the skills stated in the job description, and recruiters then double-check those findings.

Unfair example: The same ATS favors candidates from "brand-name" universities or penalizes those from certain neighborhoods, rejecting applicants for discriminatory reasons learned from historical hiring data.

The scary part? In most cases, applicants never even know that an algorithm made the choice to deem them "unqualified".


Why Bias Creeps into AI

The age example above, where age had no real correlation with qualification, shows how this happens: systems become biased against women and minorities because the data that goes into them is biased. If past hiring records show a company mostly hired men for tech roles, then the AI may continue rejecting female applicants no matter how promising their resumes are.

Put simply, biased decisions in the past mean biased decisions in the future, because the AI relies on historical data with no inherent ability to tell good patterns from bad ones.


How We Can Fix Bias in AI

The good news is that bias is beatable. There are plenty of smart (and simple) methods for making AI treat people fairly.

  • Clean up your data: during training, make sure the examples are representative and diverse.
  • Regularly test your system: don't just release something and forget it; test it and check whether its results are unfair.
  • Be transparent about how it works: companies should explain which formulas or mathematical models they use and why someone does or doesn't make a shortlist.
  • Hold leadership accountable: company leadership should treat fairness as integral to operations, not an optional add-on.

Taking these steps reduces bias and builds confidence: job applicants no longer feel stuck in the hands of a black box. As a starting point, the "regularly test your system" step can be as simple as the audit sketched below.
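A minimal sketch of such an audit, assuming you can log each decision with a (self-reported or inferred) group label; the group names and numbers here are invented:

```python
# Compare shortlisting rates across groups -- the simplest version
# of "regularly test your system". All data below is hypothetical.

from collections import defaultdict

decisions = [  # (group, shortlisted?) -- an invented audit log
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

totals: dict[str, int] = defaultdict(int)
shortlisted: dict[str, int] = defaultdict(int)
for group, ok in decisions:
    totals[group] += 1
    shortlisted[group] += ok

rates = {g: shortlisted[g] / totals[g] for g in totals}
print(rates)  # approx {'group_a': 0.67, 'group_b': 0.33}

# A common rule of thumb (the "four-fifths rule") flags trouble when
# one group's selection rate falls below 80% of the highest group's.
if min(rates.values()) < 0.8 * max(rates.values()):
    print("Potential disparate impact -- route these cases to human review.")
```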


Why Human Review Is Essential

We should not assume that even a perfectly designed AI system operates faultlessly; after all, system errors have occurred many times in recent history.

Giving automated systems the final say in human affairs is a serious error, especially in hiring, where a single automated rejection can close off a person's entire career opportunity.

This is where human review comes in. Humans can add what machines cannot offer:

  • Context: your resume may not contain the exact keyword, but a human can recognize that you've done the work in another role.
  • Clarity: a recruiter can explain why you were or weren't shortlisted.
  • Responsibility: decisions about people should ultimately be made by humans, not machines.

In Europe, for example, data protection law (the GDPR, notably Article 22) already gives applicants the right to challenge decisions based solely on automated processing and to ask for human intervention. This is a big step toward justice.


How to Ask for Human Review

So what can you do if you think an ATS (or any AI) treated you unfairly? Here are some steps:

  1. Find out who made the decision: ask whether an automated system was involved.
  2. Ask for details: request the main criteria the system used to evaluate you.
  3. Use your rights: under laws like the GDPR, you can ask for a human review of a solely automated decision.
  4. Give extra context: supply information the system could not see, such as experience under a different job title.
  5. Escalate if needed: raise the issue with the company, or with a data protection authority where one exists.

Asking for a human review isn't being difficult; it's about making sure you're judged fairly.


Protecting Yourself from Unfair Algorithms

We have focused on ATS examples, but beyond hiring, AI systems make decisions based on the data they collect about us every day. You can take a few simple steps to limit how much they know:

  • Lock down your privacy settings on apps and social media.
  • Don’t overshare sensitive info online.
  • Use strong, unique passwords.
  • Avoid storing bank details on your phone.

Little actions like these make it harder for algorithms to create an unfair or incomplete picture of who you are.


Conclusion

AI and automated decision making allow for faster and more efficient decisions and offer immense opportunities, but they come at the cost of risking fairness, justice, and human dignity. Automated systems are trained to mimic humans, and they can make mistakes just as humans do. To prevent such biases and errors from critically affecting someone's life, people must retain the right to request a human review when they are affected by these decisions.

Fairness in automated decisions is not just a technical challenge but an ethical and moral one. By layering human regulatory oversight and organizational responsibility on top of technical safeguards, it is possible to harness AI's potential without sacrificing justice. The future of AI and machine learning does not need to be one of hidden risks and costs, but one where efficiency and unbiased learning go hand in hand.


References

Decisions. The Decision Catalyst: How AI Process Automation Is Reshaping Decision Management. decisions.com, 2024.
Google Cloud. What Is Artificial Intelligence? Google, 2024.
Kohezion. Automated Decision-Making. kohezion.com, 2024.
Oliver Wyman. Automated Decision-Making. oliverwyman.com, 2016.

Author: Tapan Tyagi


