Over the past few months, Pakhuis de Zwijger has introduced its new series Designing Technology for All (DTFA), inspired by its predecessor project Designing Cities for All, which started in 2021. In a world increasingly shaped by artificial intelligence and big data, questions of fairness, inclusivity, and ethical responsibility are more urgent than ever. This new series seeks to explore those themes. We kicked off the series in collaboration with Myrthe Blösser and Paulina von Stackelberg, co-founders of FemData, aiming to unpack the hidden biases in data, interrogate the power dynamics behind AI systems, and explore the policies that shape the future of technology.
#1: Decoding Data Bias
In February we opened Designing Technology for All with the Decoding Data Bias episode. During this programme we identified the various data biases present within AI models and designs. One of the speakers, Paula Helm, explained how large language models are often exclusionary by design, failing to properly include non-Western and Indigenous languages and cultural understandings. Furthermore, Aurélie Lemmens and Caroline Figueroa examined the race-, gender-, and class-based biases in AI models that hinder access to housing opportunities as well as mental health services.
#2: Power, Data and Algorithms
We followed up with the programme Power, Data and Algorithms. During this session we explored who owns and controls AI models and looked into the geopolitical power dynamics that shape AI ownership. Berty Bannor explained how Bureau Wichmann had initiated a case against Meta over gender-based discrimination. Furthermore, Alexander Laufer told us about his research with Amnesty International on racial profiling at Dienst Uitvoering Onderwijs (DUO), and Daniel Mügge examined the potential transition within EU countries from American-owned to European-owned AI models.
#3: Shaping the Future of AI
We closed off this trilogy with Shaping the Future of AI. During this session we sought to rethink AI beyond colonial, racist, and neoliberal capitalist structures and to imagine a more inclusive future for AI. Naomi Appelman told us about the complaint that her team at The Racism and Technology Center initiated against the VU over algorithmic discrimination in the exam proctoring software used during testing. Furthermore, Monique Steijns and Gabriel Pereira examined the colonial structures embedded in the control and production of AI.
This trilogy may be complete, but many more episodes will follow in the Designing Technology for All series. We recently hosted our fourth episode, AI & Human Creativity?, in collaboration with COECI, where the dialogue centered on how AI can support human creativity rather than replace it. In September we will present our fifth episode, with Amnesty International, about social (in)justice and social media, so stay tuned!