Gender Bias in Futuristic Technology – AI in Pop Culture
Gender bias around the world is no longer breaking news. It is tightly woven into the social fabric by stereotypes and long-upheld norms. Such systems create spaces of division, both socially and digitally. While these divides are clearly visible from a socio-political perspective, they remain latent in emerging futuristic technologies, primarily machine learning with big data. With a huge proportion of the population using online digital services and networks today, we generate several gigabytes of data every day. This in turn powers AI algorithms to become smarter, with improved precision and higher-quality outputs.
Globally, we rely more and more on algorithms every day for decision-making: mortgage approvals, insurance risk, job screening, valuations, bail setting, sentencing recommendations, and predictive policing, among many others. More than that, AI now facilitates a majority of everyday experiences. It is estimated that AI will contribute approximately $15.7 trillion to the global economy by 2030, of which about $6.6 trillion will come from increased productivity, while the remaining $9.1 trillion will come from consumption-side effects.
We have gradually moved from assisted or augmented intelligence, which supported humans in their tasks, to more automated and autonomous systems that eliminate the need for human supervision entirely. As exciting as this prospect may sound, we should remain wary, because this technology knows nothing of the socio-economic inequalities of our world. And as it begins to affect individual lives, it becomes an ethical necessity to examine who is responsible and who is accountable. Historical theorisation of the relationship between gender and technology asserts that the two are co-produced, something discussed later in this article.
Ethical AI insists on weighing the rights and importance of the individual against the broader utility of any digital product. The ethics and governance of the field are largely driven by individual definitions of fairness; or, more accurately, by the definitions of its creators and powerful stakeholders. Transparency and accountability therefore become essential to its functioning. With growing confidence in and preference for futuristic technologies, these values need to be more rigorously integrated and regulated through audit frameworks. The decision-making process should always be clear, never concealed behind a presumption of universal understanding.
Read also: Women and Technology: Bridging the Gender Gap Through Feminist Technology
AI applications and gender bias
A nuanced sociological exploration shows how AI applications spread gender bias among their users. It is certainly no coincidence that AI assistants have historically defaulted to female voices, like Microsoft’s Cortana, Apple’s Siri, Google Assistant, and now Amazon’s Alexa. Late-20th-century tech researchers attributed this growing stereotype to studies (later disproved) claiming that women’s voices were more intelligible owing to their higher pitch. This even led to the creation of an entire female-dominated industry of telemarketers and telephone operators.
This trend is now supported by studies indicating that the public responded better to a woman’s voice, describing it as “friendly and pleasant” and better suited to the “image of a conscientious assistant”. Likewise, when Google tried to launch its new assistant with both male and female voices in 2016, it could not: no male-voice training resources existed. All the precursor text-to-speech systems had been trained only on female voices and therefore performed better with them.
Professor Safiya Noble, an expert on the subject, has repeatedly noted that growing socialisation with female virtual assistants is rapidly reducing the image of women to “a female that responds on demand”. Moreover, when we routinely ask the female-coded assistant to perform limited functions like booking airline tickets, setting reminders, creating monthly calendars, or reporting the weather, the system learns these as its primary tasks, and its widespread adoption grows only around such assignments. Tellingly, IBM Watson, one of the fastest supercomputers in the world, is used to make complex medical decisions and play quiz shows rather than set alarms, and it speaks in the voice of acclaimed male voice artist Jeff Woodman.
As for media representations of these stereotypes, JARVIS, Tony Stark’s popular AI assistant in Marvel’s Avengers and Iron Man films (2008-2013), comes across as a companion rather than an automated service system. JARVIS helps him save the world, and actor Paul Bettany gives him his voice. On the other hand, Samantha, the AI assistant voiced by Scarlett Johansson in the film Her (2013), discusses relationships and plans dates for the protagonist played by Joaquin Phoenix. A classic stereotype of the provider/protector and caregiver roles!
Following heteronormative suit, the humanoid female robot Sophia also stated a desire to have a baby and a family, about a month after her introduction to the real world. She was even granted more freedom and rights as a Saudi Arabian citizen than a human woman. It is strange how a robot woman can have more independence than the women and migrant workers of a country, and yet be conditioned to conform so quickly to generations-old notions of the heterosexual family unit.
These two prejudices stem largely from the same cause: the sexist bias of the human mind, historically reproduced, recorded, and anchored in us. And while this article broadly examines two ways in which this bias is perpetuated, the issue has many more facets. More importantly, these two representations of bias revolve in a vicious cycle in which each produces the other. Real-world bias seeps into the data (becoming evident as algorithmic bias), which in turn is put into action through critical decisions in business, medicine, law and order, and so on, thereby generating distorted iterations of the very biases it began with. That 92.9% of secretaries and administrative assistants in the United States were women in 2020, 83.8% of them white, is thus hardly surprising.
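This feedback loop can be illustrated with a deliberately naive sketch. All data here is hypothetical and invented for illustration; the point is simply that a “model” which learns only from past role assignments will recommend the same skewed roles going forward, feeding the stereotype back into new decisions.

```python
from collections import Counter

# Hypothetical historical records of (gender, role) pairs, skewed
# in the way the real-world statistics above are skewed.
history = (
    [("F", "secretary")] * 93 + [("M", "secretary")] * 7 +
    [("M", "engineer")] * 90 + [("F", "engineer")] * 10
)

def train(records):
    """'Train' by counting: recommend the most frequent past role per gender."""
    counts = {}
    for gender, role in records:
        counts.setdefault(gender, Counter())[role] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = train(history)
# The model merely replays the past: it recommends "secretary" for "F"
# and "engineer" for "M", and its recommendations then become the next
# generation of "historical" data.
print(model)  # {'F': 'secretary', 'M': 'engineer'}
```

Real systems are vastly more complex, but the mechanism is the same: a model optimised to fit biased historical data reproduces, and thereby reinforces, that bias.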
A 2018 PwC study on perceptions of AI and its tools found that nearly 61% of its respondents, employees working in metropolitan cities in India, used these digital assistants and viewed them favourably, noting that they helped with “event reminders” and “managing their calendar”. Interestingly, 74% of those surveyed wanted their digital assistants to be “friendly”.
Whether through voice assistants, chatbots, or virtual agents, by gendering an inanimate product, technology as a discipline recycles expected social behaviours behind a modern face of neutrality. Alan Winfield, a robot ethicist, argues that designing a robot with a gender is a deceptive act, emphasising that machines cannot themselves belong to a gender. When such machines are designed to shyly deflect or submit to harassment, they prime real women for further objectification. While these technologies could challenge the worn-out expectations placed on gender groups, today they still cast women and bots alike as “almost human beings with no mind of their own.”

A striking report by UNESCO and the EQUALS Skills Coalition sheds light on a dismal array of voice assistant responses to verbal sexual harassment. The table below shows that the feminised bot even thanks users for sexually inappropriate comments, trivialising the impact of the insults and verbal abuse that women face every day. When these female VAs are presented to consumers as subservient objects at their disposal, technologists encourage society as a whole to perceive women as “objects” (see table below). In turn, this contributes to the continued marginalisation of women in management and decision-making.
Chart: Voice assistant (VA) responses to verbal sexual harassment
At the very core of AI systems are algorithms, written and trained primarily by English-speaking, white, privileged men. While prejudice of any kind is almost invisible in modern technological infrastructure, women sit ambiguously between not being entirely a minority group and not being a privileged one. This makes gender bias one of the most widely recognised threats globally.
[The second article in the series discusses what causes algorithms and AI systems to be skewed and what we can do to make them inclusive]