Discrimination and Biases in the Digital Age
-Alice, NUSRL Ranchi

HOW DOES THE DIGITAL WORLD DISCRIMINATE?

Digital discrimination can be understood as the unethical and unjust mistreatment of individuals on the basis of their data, resulting from automated processing by algorithms. The algorithms in use today span techniques such as Artificial Intelligence and machine learning. An algorithm may rely on one or more parameters such as income, education, gender, age, ethnicity, or religion, and in doing so form a prejudice against an individual. The concern has become alarming primarily because an ever-increasing number of tasks is being delegated to computers, mobile devices, and autonomous systems. The essential argument in favor of these algorithms was that they would reduce bias; in practice, however, they often scale the bias, because the data and assumptions on which they are built are themselves biased. 
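As a rough illustration of how a system built on prejudiced historical decisions simply reproduces them, consider the following hypothetical sketch in Python. The data, groups, and approval rates are all invented for the example; it stands in for no real system.

```python
# Hypothetical sketch only: a toy screening model "trained" on biased
# historical decisions reproduces the same prejudice in its scores.
import random

random.seed(0)

# Invented historical records: (gender, qualified, approved). In this made-up
# history, equally qualified women were approved far less often than men.
history = []
for _ in range(1000):
    gender = random.choice(["M", "F"])
    qualified = random.random() < 0.5
    approved = qualified and random.random() < (0.9 if gender == "M" else 0.5)
    history.append((gender, qualified, approved))

def learned_score(records, gender):
    """Approval rate of qualified applicants of a gender in the 'training' data.
    A pattern-matching algorithm would reuse this rate as its score."""
    group = [r for r in records if r[0] == gender and r[1]]
    return sum(r[2] for r in group) / len(group)

print("Score for a qualified man:  ", round(learned_score(history, "M"), 2))
print("Score for a qualified woman:", round(learned_score(history, "F"), 2))
# The gap between these two numbers is the old human bias, now automated.
```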

The legislation around discrimination is age-old and has kept transforming over the years, both internationally and within the country. From the International Covenant on Civil and Political Rights to the U.S. Civil Rights Act or the Fundamental Rights in the Indian Constitution, all advocate against discrimination in any form. However, the concept of digital discrimination is still alien to these anti-discrimination policies. For instance, many big firms today hire employees using software that runs on automated algorithms, and these algorithms cannot be claimed to be completely free of discrimination. The evidence of such discrimination is discussed further in this article. 

In the 21st century, with the pandemic pushing even small businesses online and various other forms of digitalization taking place, an individual's exposure to such discriminatory algorithms is rising at an alarming rate. The jobs one applies to, the products one browses on the internet, the news that flashes on digital platforms, all of it is liable to be shaped by the bias of one algorithm or another. These algorithms combine past experience with artificial intelligence to make various sensitive decisions on behalf of a human. 

The targets of these algorithms can be as consequential as destabilizing a government, as such algorithms can be, and currently are being, used to predict political affiliation from social-network data. Further, multi-million-dollar businesses predict the income of their potential clients from their purchase data. 

THE ROOT CAUSE & REAL-LIFE INCIDENTS

Like any other form of discrimination, digital discrimination finds its roots in the age-old personal prejudices of people, and it most often surfaces as bias around gender, race, income, location, or lifestyle. 

A real-life illustration of digital discrimination is the well-known Google case. Google, a subsidiary of Alphabet, was accused of showing men ads promoting coaching services for high-paying, lucrative jobs far more frequently than it showed them to women. The observed implication was indirect discrimination against women and their pay scale, since the algorithm was effectively reinforcing the gender pay gap. The more concerning point is that these algorithms are largely opaque in their operation, so nobody can readily determine the root causes of such effects. A more detailed study of the issue showed that the discrimination evident on paper was also realized in physical space, as the ads significantly steered men towards such opportunities. 

Another digital giant charged with digital discrimination is Amazon. The online retailer, whose workforce had a roughly 60:40 male-to-female ratio and in which 74 percent of managerial positions were held by men, was discovered to be using a gender-biased recruiting algorithm. The tech giant itself acknowledged the problem and discontinued the software. The cause of the bias was traced to the company's own hiring history: the algorithm drew on the past ten years of employment records, which were dominated by white males. It was designed to recognize word patterns in applicants' resumes rather than relevant skill sets, and because it matched those patterns against a decade of predominantly male hires, the class it learned to favor was men rather than women. Further, the algorithm penalized resumes that included the word "women", further downgrading women applicants' chances. 
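To make the mechanism concrete, the sketch below shows how a crude word-pattern score learned from a male-dominated hiring history can end up penalizing a word like "women's". This is not Amazon's actual system; the resumes, labels, and scoring rule are invented purely for illustration.

```python
# Illustrative toy only (not Amazon's system): scoring resumes by word patterns
# learned from biased past hiring decisions penalizes words that appeared
# mostly in rejected candidates' resumes, such as "women's".
from collections import Counter

# Invented past resumes and whether the candidate was hired.
past = [
    ("captain chess club python developer", True),
    ("software engineer java backend lead", True),
    ("captain women's chess club python developer", False),
    ("women's coding society java backend engineer", False),
    ("python developer machine learning", True),
]

hired_words = Counter()
rejected_words = Counter()
for text, hired in past:
    (hired_words if hired else rejected_words).update(text.split())

def score(resume: str) -> int:
    """Crude word-pattern score: +1 for each past occurrence of a word in hired
    resumes, -1 for each occurrence in rejected ones."""
    return sum(hired_words[w] - rejected_words[w] for w in resume.split())

print(score("python developer chess club"))          # scores higher
print(score("python developer women's chess club"))  # "women's" drags it down
```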

POTENTIAL REMEDIES

These real-life examples showcase the threat that such algorithms, and digital-age discrimination generally, pose to humankind; it is therefore pertinent to call for remedial measures, so that the potential of artificial intelligence and machine learning can be realized in the future with the trust of its users. The first step in that direction is awareness: we should understand where these algorithms can help eradicate bias from the system and where, alternatively, they are creating it. Those who deploy such algorithms should anticipate bias in their data and actively examine whether the outcomes are fair. 

Furthermore, AI algorithms found to be biased should be assessed with tools and procedures that highlight potential "sources" of bias and reveal the traits in the data that most heavily influence the outputs. The changes to operational strategy could range from improving data collection through more representative samples to engaging third parties to audit the data. Also, as the illustrations above showed, one of the loopholes of these algorithms is that their processes are largely opaque, which hinders the search for the root causes of such effects. Hence, the processes should be made transparent to promote fairness in the application of AI algorithms. 
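One simple check an auditor might run, offered here only as an assumed example of such a tool, is the demographic-parity gap: the difference in positive-outcome rates between groups in a model's decisions. The decisions and group labels below are made up.

```python
# Hedged sketch of one possible audit metric: the demographic-parity gap
# between two groups in a model's positive-outcome (e.g. shortlisting) rates.
def selection_rate(decisions, group_labels, group):
    """Share of positive decisions among members of the given group."""
    picks = [d for d, g in zip(decisions, group_labels) if g == group]
    return sum(picks) / len(picks)

# Invented model outputs (1 = shortlisted) with each applicant's group.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["M", "F", "M", "M", "F", "M", "F", "F", "M", "F"]

gap = selection_rate(decisions, groups, "M") - selection_rate(decisions, groups, "F")
print(f"Demographic-parity gap: {gap:.2f}")  # a large gap flags a potential source of bias
```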

Apart from the technical aspects of the issue, we should also take a human approach. This could include considering the situations and use-cases in which AI algorithms should be permitted to make decisions and those in which the decisive power should remain with humans. A practical application is a system where the AI algorithm provides recommendations or options, which a human then double-checks. Human conscience will then help decide, from a use-case perspective, how much weight should be given to the AI's recommendations. 
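The following minimal sketch shows the human-in-the-loop pattern described above: the algorithm only recommends, and a person confirms or overrides before anything is decided. The function names and the rule inside them are hypothetical stand-ins, not any real product's API.

```python
# Hedged sketch of a human-in-the-loop workflow: the AI recommends,
# a human reviewer makes the final call.
def ai_recommend(application: dict) -> str:
    """Hypothetical stand-in for a model; returns a recommendation, not a decision."""
    return "shortlist" if application.get("experience_years", 0) >= 3 else "reject"

def decide(application: dict, human_review) -> str:
    recommendation = ai_recommend(application)
    # The final decision always passes through the human reviewer.
    return human_review(application, recommendation)

# Example usage: the reviewer overrides an automated rejection they disagree with.
final = decide(
    {"name": "candidate A", "experience_years": 2},
    human_review=lambda app, rec: "shortlist" if rec == "reject" else rec,
)
print(final)  # shortlist
```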

CONCLUSION

Discrimination and biases that have been part of human society for ages have now found a place in people's new world, i.e., the digital space. Over the past few decades, the digital revolution has demonstrated the significance of AI algorithms; however, their repercussions are only now coming to light. Present legislation has not yet developed to cover these biases, so no strict action has been taken so far; but promising developments in the field, which continues to work towards scrutinizing these systems for fairness, offer some hope. A practical approach to the issue, alongside such advancement, is necessary if humankind is determined to revolutionize the digital world while keeping people's trust.