
By Amrita Pal

Does Tech Discriminate? Panel discussion

On Thursday 25 March, for the second event in the Open Talk series, the Science Museum Group was joined by an expert panel to explore issues around discriminatory technology and how we can create technology that truly works for everyone.
Screenshot of the Does Tech Discriminate? panel discussion. Clockwise: Ortis Deley, Alex Fefegha, Alice Piterova, Malika Malik, Charlton McIlwain.

Discrimination in tech continues to have a serious, measurable impact on everyday life in the twenty-first century. Whilst conscious and subconscious human decisions have historically been the major drivers of discrimination, technology – created by humans – increasingly has the capacity to make decisions that may themselves be driven by prejudice.

Facial recognition systems, online search algorithms and social media echo chambers are just some examples of technologies that have the potential to amplify discrimination, and as machine learning and AI (artificial intelligence) systems become more advanced, how do we ensure they are free of their creators’ prejudices?

To discuss these issues, Ortis Deley, presenter of The Gadget Show, was joined by creative technologist Alex Fefegha; Managing Director of AI for Good, Alice Piterova; Professor of Media, Culture, and Communication at NYU Steinhardt, Charlton McIlwain; and Data & AI Architect at Microsoft UK, Malika Malik, as part of the Science Museum Group’s series of Open Talk events.

What is AI?

Alice Piterova explained that we use AI and algorithms in our everyday lives often without realising it, from smart-home devices to weather forecasting and transport apps. More recently AI has helped with new drug discoveries including the development of the latest COVID-19 vaccines.

Malika Malik added that AI is the ability of a computer or machine to mimic the capabilities of human intelligence. But while humans learn from experience, AI learns from examples and data.
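
As a purely illustrative sketch (in Python; the data and labels here are invented, not anything shown at the event), ‘learning from examples’ means that a model’s decisions are determined entirely by the labelled examples it is given:

```python
# Toy nearest-neighbour 'model': its only knowledge is the labelled
# examples below, so whatever pattern (or prejudice) those examples
# encode is exactly what it will reproduce.

def nearest_neighbour_label(examples, query):
    """Return the label of the example whose feature is closest to `query`."""
    closest = min(examples, key=lambda ex: abs(ex[0] - query))
    return closest[1]

# Invented (feature, label) pairs standing in for historical data.
training_examples = [
    (1.0, "approve"),
    (2.0, "approve"),
    (8.0, "reject"),
    (9.0, "reject"),
]

print(nearest_neighbour_label(training_examples, 1.5))  # approve
print(nearest_neighbour_label(training_examples, 8.5))  # reject
```

Change the labels and the ‘intelligence’ changes with them.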

But what if that data is skewed?

What if the data itself is influenced by prejudice?

In the summer of 2020 in the UK, a computer algorithm caused a grading crisis: exam regulators downgraded 39% of A Level results, with disadvantaged students worst affected, because the algorithm had replicated the inequalities that already exist in the education system. The government later reversed the decision and moved to teacher-led grading.
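
To illustrate the criticism (this is a deliberately simplified toy, not the regulator’s actual model; the schools, grades and logic are invented), an algorithm that grades students from their school’s historical results will penalise able students at historically lower-attaining schools:

```python
# Invented historical grade distributions, ordered by class rank.
historical_distribution = {
    "well_resourced_school": ["A", "A", "B", "B", "C"],
    "disadvantaged_school":  ["B", "C", "C", "D", "D"],
}

def grade_by_school_history(school, rank):
    """Assign the grade at position `rank` in the school's past results,
    ignoring the individual student's own ability entirely."""
    return historical_distribution[school][rank]

# Two equally able students, each top-ranked in their school:
print(grade_by_school_history("well_resourced_school", 0))  # A
print(grade_by_school_history("disadvantaged_school", 0))   # B
```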

Deley asked the panel whether something as important as students’ futures could be left in the hands of an algorithm.

The panel agreed that grading exams is a nuanced process that requires human involvement, but transparency is key to ensure that any AI model has been designed ethically.

Malik argued that an AI can be trusted with such decisions only if the data is holistic and the system has been designed with inclusiveness in mind – and a human should always be in the loop. She added, ‘it’s important to step back and understand the role of AI – to augment humans, not replace them’.

Alex Fefegha saw this as a perfect example of how computers ‘make our bad human decisions faster’: before we can put our trust in AI, we need to ask what processes are in place to mitigate the impact of human prejudice in the first place.

‘What’s important is the data being used to train AI systems. A lot of these are taken from the internet’ and can be heavily influenced by bias. ‘We can’t eliminate bias – we have to find ways to reduce [its] impact.’
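
In the spirit of Fefegha’s point about reducing rather than eliminating bias, a first step is often to measure it. Here is a minimal sketch (group names and outcomes are invented for illustration) of one common fairness check – the gap in favourable-outcome rates between groups:

```python
# Compare a system's favourable-decision rate across two groups,
# sometimes called a demographic-parity check.

def positive_rate(outcomes):
    """Fraction of favourable (1) decisions in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

# Invented decisions, split by a protected attribute: 1 = favourable.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 6/8 favourable
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 3/8 favourable

gap = positive_rate(group_a) - positive_rate(group_b)
print(f"Favourable-outcome gap: {gap:.2f}")  # 0.38
```

A large gap does not prove discrimination on its own, but it flags where a system’s data and design need scrutiny – the kind of transparency the panel called for.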

Piterova added that regulation is needed, especially in areas where decisions can cause irreversible harm to people’s lives, such as healthcare and policing, to ensure that the process of collecting and using data to create algorithms isn’t itself excluding or discriminating against certain groups.

Charlton McIlwain later added that regulation has to be part of the solution: ‘We cannot black box the technology – we need to know what is going into the decision making’ and ‘make sure everything we do has minimum harmful impacts.’

Making tech anti-racist

In his book Black Software, McIlwain explores how we can counter discriminatory tech by building technology that is actively anti-racist. But he acknowledged it’s complicated:

‘The question is this – will our current or future technological tools ever enable us to outrun white supremacy? After all, this is not just our country’s founding principle. It is also the core programming that preceded the animated birth, development and first uses of computation systems.’

McIlwain was optimistic about the growing number of technologists working to conceptualise anti-racist tech, but worried that the work may already be heading off track: ‘[It’s more than just] computer bias, or fair algorithms, or platform inequality, or digital ethics.’ As crucial as it is to address these issues, the heart of the problem lies below the surface.

To undo some of the harms that technology has created and exacerbated, anti-discriminatory technology needs to work at the levels and scale at which discrimination has historically impacted our lives.

For example, in order to tackle housing discrimination in the US, we need to interrogate discrimination within advertising technology, real estate, banking, financial services, insurance, consumer protection agencies and the wider legal system, to name a few. We need to understand specifically what the discrimination is, who it has disadvantaged and how, and, vitally, the process that has led to that discrimination.

He added that this expertise is not necessarily something technologists have, but companies need to know where to go to find it and be willing to do so.

Diversity in tech

However, as well as creating inclusive and anti-racist technology, the panel also stressed the importance of ensuring diversity and inclusion in the workforce, and inspiring more young people to consider careers in STEM.

According to McIlwain, the number of women and people of colour in the technology industries has remained relatively the same since the 1960s.

Malik added that almost 63% of teens have never considered a career in engineering, and only a small percentage of girls name a STEM career as their first choice; the majority of teens who do consider STEM do so only after learning of its financial benefits.

According to Malik, addressing the gender and skills gap means increasing tech-preparedness and providing career counselling and mentoring opportunities to help young people build the digital skills needed for a career in STEM.

Piterova, who confessed she isn’t a technologist, also underlined the importance of highlighting the variety of careers available and dispelling some of the myths: ‘You don’t need to be a rocket scientist’ to succeed in a STEM career.

But as McIlwain concluded, building a diverse workforce needs to go hand in hand with building anti-discriminatory technology that is embedded into the wider systems that have historically created and perpetuated discrimination.

So what can we do next?

According to Fefegha ‘AI has a lot of potential – but it’s about being able to engage in a level of critical conversation’, and as Malik suggested, the question ‘is not about what AI can do, but what it should do’.

On a corporate scale, McIlwain urged that public interest rather than profit needs to be the driving motivation for creating and implementing anti-racist technology.

But ultimately, as Piterova highlighted, a lot of the power lies with us ‘to be more curious and ask the right questions’ about the technology we use.


Further reading:

Black Software: The Internet & Racial Justice, from the AfroNet to Black Lives Matter, Charlton McIlwain (OUP, 2019)
How To Be an Antiracist, Ibram X. Kendi (Penguin, 2019)
Sway, Pragya Agarwal (Bloomsbury, 2020)


Does Tech Discriminate? is part of the Science Museum Group’s Open Talk programme, a series of events that aims to promote STEM to everyone and to encourage an understanding of both the causes and effects of discrimination.

You can watch the full event on the Science Museum’s YouTube channel.