Finding Bias in AI: How Diverse Voices are Making a Difference

Written by Richard Bakare, Candice Morgan, Naomi Freeman

WWCode Talks Tech

Panel: Finding Bias in AI: How Diverse Voices are Making a Difference

Richard Bakare, Manager, Growth Solutions Engineering at Twilio Inc., Women Who Code Board Member, and trustee on the board at Oglethorpe University; Candice Morgan, Lead Equity, Diversity and Inclusion Partner at GV (Google Ventures); and Naomi Freeman, Subject Matter Expert: Technical Management, Noroff, and former Senior Leadership Fellow at Women Who Code, sit down to discuss the 2020 documentary Coded Bias. They talk about bias in algorithms and the importance of diversity on your team. Moderated by Joey Rosenberg, President, Product & Communications at Women Who Code.

What stood out for you when you watched Coded Bias? 

RB: The varying approaches to privacy in AI and ML. If you look at China's approach versus Google's versus Apple's and some of the other players in the field, these disparate approaches are going to lead to problems and a lack of equity and representation. In most other areas of technology, we have organizations that help codify and standardize what we implement.

NF: The Wild Wild West is kind of spinning out of control, and it's really challenging. We see groups like the Algorithmic Justice League coming forward and saying, "I'm going to present some kind of opinion here." We're still at a place where we can have those interjections, but it is solidifying fast. It was really interesting to see the different voices and the different folks who are able to come forward with expertise. I'm glad we still have some of that space and capacity that we see in Coded Bias.

CM: I, too, was really interested in the many different approaches to the technology, both intentional and unintentional. Intentional surveillance is quite terrifying if you think about the potential applications and the inaccuracies. We are commonly using it intentionally, even in the United States. Whether it's getting a bank loan or your credit score, it is used to determine your value in some kind of system and can deny you opportunities. I thought that was very powerful.

What do we mean when we say algorithms? 

NF: An algorithm is a recipe: a set of steps you follow. As for the difference between computer vision and machine learning, computer vision may use some machine learning techniques, but what it's doing is going through the steps of looking at an entire grid. It takes a photo, puts a grid over it, looks at all the different pixels, and then analyzes them; that's its algorithm. It goes through those steps over and over again on different images. A machine learning application, on the other hand, tries to learn from what happened during each of those steps every single time.
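To make that pixel-grid idea concrete, here is a minimal sketch in Python (the brightness threshold, function name, and tiny image are illustrative for this article, not something from the panel) of a fixed recipe that walks every cell of an image grid and applies the same rule each time:

```python
import numpy as np

def count_bright_pixels(image: np.ndarray, threshold: int = 128) -> int:
    """Walk the pixel grid and apply the same fixed rule to every cell."""
    bright = 0
    height, width = image.shape
    for row in range(height):        # step through the grid row by row
        for col in range(width):     # ...and column by column
            if image[row, col] > threshold:
                bright += 1
    return bright

# A tiny 4x4 grayscale "image" stands in for a real photo.
tiny_image = np.array([
    [ 10, 200,  30, 240],
    [ 90, 130,  70, 250],
    [ 20,  40, 180,  60],
    [210,  15, 140,  80],
])
print(count_bright_pixels(tiny_image))  # the recipe gives the same answer every run
```

The recipe never changes; a machine learning system, by contrast, would adjust its own rule based on what it saw in earlier images.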

RB: Beyond the practical applications you see every day, the long-term vision is something like GPT-2: you feed it just a small prompt and it may create a novel for you. That's when you're getting closer to AI. It has learned from all of the information you've fed it, neural networks and everything, but now it can actually produce. That is potentially scary if you've seen some of the things that are produced when you give it a small prompt. There are differences, and we should be very cognizant of that. What we're seeing mostly today, though, is an algorithm or model being deployed in the real world, sometimes haphazardly.
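As a rough illustration of that "small prompt in, generated text out" idea, here is a minimal sketch using the publicly released GPT-2 checkpoint through the Hugging Face transformers library (the library and prompt are this article's example, not something the panel used):

```python
from transformers import pipeline

# Load the publicly released GPT-2 model as a text-generation pipeline.
generator = pipeline("text-generation", model="gpt2")

# A small prompt is enough for the model to keep writing on its own.
prompt = "The committee reviewed the algorithm and found"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(result[0]["generated_text"])
```

Whatever biases live in the model's training data show up in what it continues writing, which is part of what makes unexamined deployment risky.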

Talk more about bias in algorithms. 

CM: Giving a machine a recipe and telling it to execute something over and over is the most basic concept. Becoming aware that a machine executes an activity differently based on your background is a very visceral experience. Through my work at Pinterest, we would get feedback from users. There was a very poignant note that we got from one user, a 17-year-old. She wrote about how, when she would do a search, the default imagery showed images that didn't look like her, and how powerful that was in shaping her worldview. That was when I became very aware of the power of the algorithm and how exclusionary it could be.

RB: I was working with a company that was getting ready to launch a fitness watch. One of the features they were launching was an optical heart rate sensor. The test partner I was working with and I noticed the data was off in my watch results. We realized it was the delta in our skin tones. He took it back to the engineers. If you looked at the room of engineers, it became very evident why that hadn't been accounted for: there was no diversity in that room.

What is your advice for recent data science grads who want to be intentional? How do you keep ethics at the forefront?

RB: Embrace the challenger mindset. It isn't about playing devil's advocate or being combative; it's about asking questions. What are our outcomes? Why are we doing this? What is the transparency in the model? What are the feedback mechanisms? Can you break it?

How do you live a challenger mindset? 

NF: When you have that mindset, you're literally willing to push buildings over for a change. 

Why does it matter to have diverse voices on your team? How do you get there?

CM: How do you take those different points of feedback and build them into the framework? One of the ways to think about it, though, is to ask whether everyone can build a life that they love. That is what challenged us to build a very different framework.

What happens when the business owner or your top-level leadership doesn't believe bias exists? What do you do then? How do you keep that challenger mindset and have those tough conversations when you're managing up? 

RB: It is frustrating when people don't really believe in that. They're not recognizing the historical precedent and all of the stereotyped data that's already being included in the model. The human experience and human design are everything. If we can't speak to that, then we'll miss everything in our technology.

CM: The way that you present the issue is important. Frame it in terms of where you are losing the user you designed for, because that depersonalizes it.

Women in tech are often equated with nonconformism. Predictive analysis will go with the demographics, and assumptions are made based on those demographics. What about people who don't fit in the "box" of their demographics? How do you solve that?

CM: People don't want to be pigeonholed, and they don't want to have to work extra hard to use a product because they have to modify it for being nonconformist. We don't want the machine to assume. I think the less predictive data based on someone's demographics that is hard-coded in, the better.

NF: We have to unpack what the problem actually is and what we're trying to solve. When we're thinking about algorithmic bias, we can have a negative legacy, which means there's bias actually in the data. 
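As a loose illustration of what a negative legacy in the data can look like, here is a minimal sketch (the column names and numbers are made up for this article) that compares historical approval rates across demographic groups; a large gap in the labels is exactly the kind of bias a model trained on this data would inherit and repeat:

```python
import pandas as pd

# Hypothetical historical loan decisions that a model might be trained on.
history = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [  1,   1,   0,   0,   0,   1,   0,   1],
})

# Approval rate per group: a large gap suggests the labels encode past bias,
# which a model trained on them would simply learn as the "correct" pattern.
print(history.groupby("group")["approved"].mean())
```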

RB: Early in my career, I worked at a company that did prediction technology for retail and e-commerce sites. One of the things the director of AI and ML at that company said is that every recommendation we make should ideally have a similar outcome for the known, unknown, and obfuscated user. One of the ways we measured the results was: "If I know everything about you, what do I suggest? If I don't know anything about you, what do I suggest? And if I intentionally mask characteristics about you, what do I suggest?"
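A minimal sketch of that kind of check (the recommender and profiles below are hypothetical stand-ins for this article, not the company's actual system): generate suggestions for a fully known profile, an empty profile, and a profile with sensitive attributes masked, then compare how much the three lists diverge.

```python
def recommend(profile: dict, catalog: list[str]) -> list[str]:
    """Stand-in recommender: rank items by naive keyword overlap with the profile."""
    interests = set(profile.get("interests", []))
    return sorted(catalog, key=lambda item: -len(interests & set(item.split())))[:3]

catalog = ["running shoes", "jazz records", "cook books", "hiking boots"]

known      = {"age": 34, "gender": "F", "interests": ["running", "hiking"]}
unknown    = {}                                  # no information at all
obfuscated = {k: v for k, v in known.items()     # sensitive attributes masked
              if k not in ("age", "gender")}

# Large divergence between the three result lists is a signal that the
# recommendations lean heavily on demographic assumptions.
for name, profile in [("known", known), ("unknown", unknown), ("obfuscated", obfuscated)]:
    print(name, recommend(profile, catalog))
```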

What resources are available for folks if they want to take action or learn more, and what can people do? Do you have any tips on either of those topics? 

RB: Listen to what's going on every day. I love Sam Charrington's podcast, This Week in ML. He brings on all sorts of diverse voices and asks leading industry experts at various companies what they're doing about bias in their models and in the workplace. It's a really great podcast for understanding the temperature and pulse of the industry. The Algorithmic Justice League: join it, be a participant, and provide insight. If you're hiring, diversity matters.

CM: Annie Jean-Baptiste, the Head of Product Inclusion at Google and now a colleague under our broader umbrella, has a book called Building for Everyone. She talks about product inclusion broadly. It's broader than focusing on AI, but it definitely includes some best practices. I recommend it.

What is your hope for the future of inclusion and equity and algorithms, or what else do you want people to know? 

CM: The growing understanding of the dangers, both present and future, and of the unequal distribution of power among the people administering technology that can have adverse effects on a lot of our population, gives me a ton of hope.

NF: Technology is only a tool. We need to decide what we use it for. Invite one or more people who are not technologists to a coffee or a lunch. We need to bring everyone with us to really make progress. Always be inviting other people into these conversations, and invite them into exploring and investigating what these tools are and what they're doing.