Discrimination in the AI Industry Contributes to Discriminatory AI Systems


As the CEO & President of Women in the Housing & Real Estate Ecosystem (NAWRB) and Desirée Patno Enterprises, Inc. (DPE) Real Estate Brokerage; Advisor & Investor for AmicusBrain—AI for Aging Population; CSO for ZuluTime; and a Publisher, Connector and National Speaker, Desirée Patno brings a network and wealth of knowledge spanning a vast economic footprint. With three decades specializing in the Housing & Real Estate Ecosystem and owning her own successful brokerage, she leads her executive team's expertise in Social Impact, Gender Equality and Access to Capital, and provides personalized consulting services to the Real Estate and Family Office community.

A new report from New York University’s AI Now Institute titled Discriminating Systems: Gender, Race and Power in AI highlights the diversity crisis in the Artificial Intelligence (AI) sector and its effect on the development of AI systems with gender and racial biases. 

The lack of diversity in the AI sector and academia spans both gender and race. Recent studies show that women comprise only 15 percent of AI research staff at Facebook and 10 percent at Google. Women make up 18 percent of authors at leading AI conferences, while more than 80 percent of AI professors are men. Representation of other minorities is also sparse: only 2.5 percent of Google's workforce is Black, compared with 4 percent at both Facebook and Microsoft.

According to the researchers, AI's lack of diversity extends beyond the underrepresentation of women and other minority groups to power structures and the creation and use of various AI systems. Most importantly, the report argues that historical discrimination in the AI sector needs to be addressed in tandem with the biases found in AI systems.

Main Obstacles to Diversity in AI
In addition to documenting the diversity crisis in the AI sector across race and gender, the report offers several other important findings.

The AI sector needs to change how it addresses the current diversity crisis. This includes admitting that previous methods have failed due to uneven distribution of power, and recognizing the connection between bias in AI systems and historical patterns of discrimination. These are manifestations of the same problem; therefore, they need to be addressed together.

Focusing on “women in tech” is not enough to address the range of experiences in AI, such as the intersection of race, gender and other identities. For instance, prioritizing the number of women in AI alone is a narrow scope that will likely privilege white women over minority women.

Studies of AI often treat gender as binary, overlooking that gender is far more complicated than labeling individuals as “male” or “female” based on their physical appearance. These gender labels also carry stereotypes about how men and women act.

Fixing the corporate pipeline won’t fix AI’s diversity problems. Other issues need to be addressed outside the pipeline from school to the industry, such as workplace culture, power asymmetries, harassment in the workplace, exclusionary hiring practices, unfair compensation and tokenization. These issues all play a role in deterring people from entering the AI sector or causing them to leave it. 

We need to reevaluate the use of AI systems for the classification, detection and prediction of race and gender, which only reinforce pre-existing patterns of racial and gender bias. Examples of the controversial use of AI systems include detecting sexuality from headshots, predicting criminality based on facial features and assessing worker competence through “micro-expressions.”

The use of AI systems to classify race and gender based on appearance, or to determine one’s character, is not only scientifically flawed but also perpetuates socially ingrained racial and gender bias.


Recommendations for Improving Diversity in AI 
The report also provides recommendations for improving workplace diversity and addressing bias and discrimination in AI systems. The former can be addressed by

  • Publishing compensation levels broken down by race and gender, including bonuses and equity, for all job roles and categories;
  • Ending pay and opportunity inequality, and setting up pay and benefit equity goals that include contract workers, temporary workers and vendors;
  • Publishing harassment and discrimination transparency reports, including the number of claims made over a certain time period, the types of claims, and actions taken; 
  • Changing hiring practices to maximize transparency and diversity, such as recruiting from institutions beyond elite universities, focusing on underrepresented groups, and creating more pathways for growth within the company; 
  • Being transparent about how candidates are leveled, compensated and promoted;
  • Increasing representation of underrepresented groups, including women and people of color, at senior leadership levels across all departments;
  • Ensuring executive incentive structures are connected to increases in hiring and retention of underrepresented groups; and 
  • Committing to greater diversity in AI research at academic workplaces and AI conference committees. 

Recommendations for Addressing Discriminatory AI Systems
Addressing bias and discrimination in AI systems will require 
  • Making sure AI systems are as transparent as possible, such as tracking what these tools are used for and who is benefiting from them; 
  • Conducting rigorous testing during the lifecycle of AI systems, including pre-release trials, independent auditing, and ongoing testing for bias and discrimination; 
  • Improving research on biases and fairness to include a wider range of disciplinary expertise with a focus on the effect of the use of AI systems in a social context; and
  • Determining the usefulness of AI systems through risk assessment and assessing whether certain AI systems should be designed. 


Supporting Girls’ Interests in STEM
One well-supported explanation for the lower representation of women in the AI sector is that few girls are encouraged to pursue STEM; their interest in science and technology fields quickly fades if that passion is not nourished with opportunity. According to a Microsoft survey, young women in Europe reported that their interest in STEM began around age 11 or 12 but faltered by ages 15 and 16.

In the U.S., girls are more likely to express interest in a coding career than young women: one third of middle school girls believed they would pursue this career, compared to just 27 percent of women between the ages of 18 and 30.

Girls are more likely to be encouraged to pursue a STEM career if they know a female role model working in the tech field or participate in STEM clubs and activities. Middle and high school girls are 17 percent more likely to feel empowered working on STEM activities if they know a woman in STEM, and 26 percent more likely to feel powerful doing STEM activities if they belong to a STEM club or participate in related activities.

Main Takeaway
While encouraging girls to pursue STEM is important for giving more women a path through the pipeline into the AI sector, it is not enough to address the gender and racial bias and discrimination that persist in this field. Doing so requires a multi-faceted approach from actors across a variety of disciplines to study the use and impact of AI systems.

AI companies also need to reevaluate their power structures, hiring practices and executive incentive structures to accommodate underrepresented groups who are held back in the AI sector by unconscious bias. Preventing further gender and racial discrimination in this field requires the collective effort of individuals with a diverse set of backgrounds and perspectives.

Become a member of NAWRB today!
