[Illustration: a woman surrounded by swirling data screens, representing the concept of AI bias]

AI bias can perpetuate stereotypes and misinformation if companies don’t take steps to fix the issue

Are you implementing AI into your workflows like we are? Chances are pretty high if you’re a B2B company or work anywhere in tech. AI has been in seemingly every headline in 2023, ever since the release of ChatGPT, GPT-4, DALL-E 2, and Stable Diffusion.

But as companies continue to develop and deploy these systems, you need to understand the biases built into them, and you should actively work to eliminate those biases as you build newer, more capable, and more widely used AI products.

While B2C products are much more visible, B2B developers should be equally aware of AI biases to prevent further systemic exclusion.

What is AI bias?

There are two types of AI bias:

  • Data biases
  • Societal biases

Data biases

Data biases are algorithmic misrepresentations that result from the data used to train the AI. Essentially, parts of the data are overemphasized or over-represented, which skews the results.

Skewed data is a problem because it can lead to low accuracy, reinforced prejudice, and outcomes the company never intended.

There are four common subsets of data bias to watch out for:

  1. Systemic bias
    This type of AI bias happens when the data devalues some societal groups and favors others. This one can be hard to uncover, because it’s already built into our world. For example, there can be systemic bias against people with disabilities because they are underrepresented in studies. It’s institutional.
  2. Automation bias
    This AI bias can happen when a person accepts the algorithm’s suggestion without verifying the accuracy of the information.
  3. Selection bias
    Often, data scientists work with sample datasets. If the data isn’t properly randomized, the data set may not accurately reflect the population.
  4. Over-fitting or under-fitting the data
    This AI bias can happen when a model fits its training data too tightly or too loosely. With over-fitting, the model learns the noise in the training set and can’t generalize its predictions to new data. With under-fitting, the model is too simple to identify the underlying trends. (A small sketch illustrating the difference follows this list.)
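To make that last point concrete, here is a minimal Python sketch. The use of scikit-learn and a synthetic dataset are illustrative assumptions on our part; the point is only to show how over- and under-fitting appear as a gap between training and validation scores.

```python
# Minimal sketch: under-fitting vs. over-fitting on the same noisy data.
# scikit-learn and the synthetic dataset are illustrative choices only.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(0, 6, size=(30, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=30)  # noisy "ground truth"

X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

for degree in (1, 4, 15):  # too simple, reasonable, too flexible
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    print(f"degree={degree:2d}  "
          f"train R^2={model.score(X_train, y_train):.2f}  "
          f"validation R^2={model.score(X_val, y_val):.2f}")

# Under-fitting (degree 1): both scores stay low.
# Over-fitting (degree 15): the training score is high, but the validation
# score drops because the model has learned the noise, not the trend.
```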

Societal biases

Societal biases stem from the assumptions and perspectives of the developers creating the AI algorithm.

  1. Reporting bias
    This is when only a small subset of results is analyzed. It can turn up in many forms, including analyzing only a specific language or showing only positive results and not negative ones.
  2. Over-generalization bias
    This happens when you make broad assumptions, for example, assuming the results of one study will carry over to another.
  3. Group attribution bias
    This happens when you assume characteristics of an individual fall in line with a particular group. You can see this when members of a group favor others in their group or exclude those outside.
  4. Implicit bias
    This happens when you make assumptions about people based on your own experiences. It shows up as stereotypes, and we may not even be aware it’s happening.

A 2022 study conducted by Federico Bianchi and his team of researchers showed multiple examples of AI biases. In their analysis of modern text-to-image products like DALL-E and Stable Diffusion, they found that generic queries relating to occupations like software developers, housekeepers, and others yielded results that amplified racial and gendered stereotypes of those occupations. Similar stereotypes were also prevalent with race-, ethnicity-, gender-, and class-specific prompts.

These results from DALL-E occurred despite the developers’ efforts to improve the training data and incorporate “guard rails” intended to mitigate biases in the generated images.

If developers have blind spots for stereotypes and biases, guard rails are ineffective. For example, apps that use filters to add makeup or “beautify” the user have come under scrutiny for lightening skin color. The standard of beauty these filters were trained on created a narrow and harmful view, reinforcing an old, racist notion that lighter skin is “more beautiful.”

How do we fix AI bias?

To combat these issues, we first need to understand why these biases exist.

One explanation is the lack of data or inadequate sampling. This could be the result of data collection methods that are biased or discriminatory, but it could also be the result of historical biases.

If the training data is out of date, its representations are most likely incorrect and could reflect societal biases that existed in the past.

Another reason for AI biases is that people in the tech space aren’t representative of the larger population. According to Zippia, 58.9 percent of tech industry employees are White, 15.2 percent are Latinx, 12.3 percent are Asian, and 10.1 percent are Black. This lack of diversity narrows the perspective of development teams and fails to account for the underlying biases that exist in these systems.

A possible solution would be to include a more diverse team in the development process. This would broaden the perspective of development teams and bring the needs of a more diverse user base into consideration. But it is a difficult task, requiring systemic change that will take years.

A more immediate solution is to incorporate equitable design practices in the development process.

What is Equitable Design?

Equitable design is an approach that goes a step beyond the usability and accessibility focus of traditional design. It encourages designers and developers to consider the diversity of a user base and to understand the socioeconomic, cultural, and linguistic differences between demographics.

It makes sure the user experience is enjoyable and useful regardless of a user’s background. This is no easy task, but there are ways throughout the design process to build more inclusive experiences.

How to implement equitable design

Designers should start considering inclusivity during the user research phase. Taking care to collect detailed demographic information will help identify the backgrounds and cultures that should be considered. An adequate sample size is key, because it helps ensure accurate representation.
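As a rough guide to what “adequate” can mean, the standard sample-size formula for estimating a proportion is a common starting point. The sketch below is a back-of-the-envelope calculation; the 95 percent confidence level and the margins of error are illustrative assumptions, not figures taken from any particular study.

```python
# Back-of-the-envelope sketch: minimum sample size for estimating a
# proportion, using the standard formula n = z^2 * p * (1 - p) / e^2.
import math

def min_sample_size(margin_of_error=0.05, z=1.96, proportion=0.5):
    """Worst-case (p = 0.5) sample size for a simple random sample
    at roughly 95% confidence (z = 1.96)."""
    n = (z ** 2) * proportion * (1 - proportion) / (margin_of_error ** 2)
    return math.ceil(n)

print(min_sample_size())      # ~385 participants for a +/-5% margin of error
print(min_sample_size(0.03))  # ~1068 participants for a +/-3% margin of error
```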

Once those demographics are identified, designers should dive into each one, taking time to learn about the cultures, backgrounds, and economic situations of the user base. This helps the team empathize with a larger portion of users and supports inclusive thinking. For example, if a vulnerable demographic struggles with stability in housing, healthcare, food, or transportation, expecting its members to have easy access to a desktop computer is unreasonable. Mobile phones, however, are far more accessible, so a product meant to reach this demographic should be designed primarily for mobile.

Based on this research, a designer should develop an inclusive design framework that builds cultural assumptions and behaviors into user personas. Two questions should be asked when brainstorming solutions:

  1. “Is this solution inclusive to the entirety of the user base?”
  2. “How might I adjust this solution to include those I did not?”

In the case of DALL-E, developers can run statistical analyses of the data to make sure it is representative of reality rather than rooted in systemic and historical biases.
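One way to approach such an analysis, sketched below, is a chi-square goodness-of-fit test comparing a training set’s group counts against reference population shares (for example, census figures). This is a hypothetical illustration, not a description of how the DALL-E team actually works, and the group labels and numbers are made up.

```python
# Sketch: does a training set's demographic makeup match a reference
# population? All labels and numbers below are illustrative placeholders.
from scipy.stats import chisquare

observed_counts = {"group_a": 7200, "group_b": 1400, "group_c": 900, "group_d": 500}
reference_shares = {"group_a": 0.60, "group_b": 0.18, "group_c": 0.13, "group_d": 0.09}

total = sum(observed_counts.values())
observed = [observed_counts[g] for g in reference_shares]
expected = [reference_shares[g] * total for g in reference_shares]

result = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {result.statistic:.1f}, p-value = {result.pvalue:.3g}")

# A very small p-value means the dataset's makeup deviates significantly
# from the reference population, flagging a possible selection bias that
# should be investigated before training.
```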

With an inclusive solution in mind, the next step is to design inclusively.

It is important to consider cultural differences and how they can influence users’ perception of the product. Color usage and symbolism are two examples: different cultures attach positive and negative meanings to certain colors and symbols, and those meanings often vary widely. For this reason, be careful about how colors and icons are used to communicate with users.

The best way to make sure designs are inclusive is to test them with a diverse group of users. Doing so will help catch patterns that designers may miss on their own.

But why do B2B companies need to consider AI bias?

While B2B AI products are less visible to the general public, it is just as important for the companies developing these tools to consider equitable design, because the tools can have wide systemic impacts. Tools designed without equity and inclusivity in mind can:

  • reinforce existing biases
  • limit opportunities for certain populations
  • push away entire groups of potential users

As mentioned above, AI can hold biases towards certain groups of people, making incorrect assumptions about their occupations, socioeconomic statuses and cultures. This has the potential to promote internal biases and stereotypes as well as limit opportunities for marginalized groups.

Impact of bias

For example, if an AI candidate-evaluation tool is biased against a specific demographic, it might filter out candidates who would otherwise have been a great addition to the company. Those candidates might have brought valuable insights to novel problems. When they are filtered out because of a systemic bias, companies lose those perspectives and reinforce a culture of uniformity. This not only contributes to existing stereotypes about negatively affected groups, it perpetuates them further.

Company culture and innovation aren’t the only areas that could be affected. AI biases might also impact marketing. If a lead generation tool encodes a misunderstanding or misrepresentation, it could make suggestions that lead to faux pas, or to larger marketing scandals that irreparably damage the company’s image. Alternatively, certain target markets could be overlooked entirely because of these biases, and the organization would miss out on huge opportunities for growth.

As we continue down this path of rapid AI growth, it is vital to recognize and address the biases inherent in these systems. B2B developers and designers must work to eliminate biases in what they create to prevent deeper systemic exclusion.

To do so, organizations must be aware of the different types of bias and strive to develop inclusive products using equitable design methodologies. Doing so will not only create greater opportunity and reach for the product, but also help reduce societal inequity.
