Durham Research Methods Conversations - Decolonizing Artificial Intelligence
This blog post is based on a conversation I steered for the Research Methods Conversations series; it can also be found here.
Last year, the killings of George Floyd, Breonna Taylor, and all too many others were followed by a resurgence of the #BlackLivesMatter movement. Given these critical events, we as researchers and as members of a research institute paused to reflect on how our work speaks to these urgent injustices. Events such as #ShutDownSTEM, #ShutDownAcademia, and the annual Black History Month were organized to educate ourselves on matters such as racial bias. However, there is no such thing as mastery of inequality. Those who hold positions of privilege - like us - are constantly learning how to challenge power and eradicate the systemic nature of racism. Thus, to ‘stay with the trouble,’ as feminist philosopher Donna Haraway would say, this conversation should be seen as the first in a series that motivates engagement with this critical topic.
In this first installment we tried to understand the colonial mechanisms of power, economics, language, culture, and thinking that influence the design, development, and deployment of artificial intelligence (AI) models and the data they depend on. The conversation was attended by researchers from many STEM subjects as well as anthropology and education, to name but a few. The persistent gap between the demographics of society as a whole and those who attend or are employed by universities in the UK was also reflected in the group that attended this conversation.
To date, scholars within post-colonial and decolonial studies have identified three main places of action where change can be brought about. The first place was identified by Morgan Ndlovu, who recognized that universities in Africa are asking themselves ‘whether they are “African universities” or merely Westernized universities on the African continent’. His response to this identity crisis is a decentralization of the ‘Western epistemic centres’. One participant noticed that, especially in the UK, universities are opening new institutions abroad or attracting foreign students, a trend that has been encouraged by the government (Jamie Doward, 2020). An interesting point came up while searching for subjects that could be taught across cultures. Here mathematics was imagined to be a safe haven, meaning that it could be taught by anyone to anyone by virtue of its expressions being universally true. However, even within the abstract world of mathematics we have to acknowledge significant differences, for example between Yoruba and Western counting practices, as Helen Verran observed in her work as a mathematics lecturer and teacher in Nigeria. It is important to understand the subject’s history and to adapt ‘the goals, content, and methods of mathematics education […] to the cultures and needs of the African peoples’ (Paulus Gerdes, 1994).
The second place asks for the additive-inclusion of new and alternative approaches alongside existing knowledge, by explicitly recognizing their value. The aim is to create an environment in which new ways of creating knowledge can genuinely flourish. In this context, the discipline of computer vision was mentioned. As a participant pointed out, the field often follows the unspoken paradigm of “vision as inverse graphics”, which assumes that every image is a perfect perspective projection, as in a photograph, while other approaches are rarely acknowledged. Yet this way of depicting the world only developed within the last 500 years of Western culture, which makes its assumed universality far from obvious.
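To make the paradigm concrete, the sketch below (an illustration of the standard textbook pinhole-camera model, not something discussed in the conversation) shows the kind of projection that “vision as inverse graphics” takes for granted: every image is assumed to arise from exactly this mapping, and vision is framed as the problem of inverting it.

```python
import numpy as np

def perspective_project(points_3d, focal_length=1.0):
    """Project 3D points onto a 2D image plane with an ideal pinhole camera.

    points_3d: array of shape (N, 3) holding (x, y, z) coordinates with z > 0.
    Returns an array of shape (N, 2) with image-plane coordinates.
    """
    points_3d = np.asarray(points_3d, dtype=float)
    x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    # The core assumption: a perfect perspective projection, i.e. divide by depth.
    return np.stack([focal_length * x / z, focal_length * y / z], axis=1)

# The eight corners of a unit cube placed four units in front of the camera.
cube = np.array([[dx, dy, 4 + dz] for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)])
print(perspective_project(cube))
```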
The third and last place asks us for engagement. It calls on us to examine scientific practice from the margins and to place the needs of marginalised populations at the centre of the design and research process. Important questions that can guide this engagement are: where does knowledge come from, who is included and who is left out, whose interests are served, who is silenced, and what unacknowledged assumptions might be at play. Here, a participant mentioned their research on racial bias within face recognition. When asking who is left out of commercially available facial analysis systems, it was found that Uber locked transgender drivers out of their accounts, preventing them from starting work (S. Melendez, 2018). However, a difficulty that can arise when trying to improve facial recognition for minority groups within an originally imbalanced training dataset is avoiding the encoding of protected attributes, a problem the participant had to face themselves.
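As a rough illustration of why the imbalance matters (a minimal sketch with made-up group labels, not the participant's actual method), a common first step is to disaggregate a model's error rate by demographic group and to reweight under-represented groups during training:

```python
import numpy as np

def error_rate_by_group(y_true, y_pred, groups):
    """Report the misclassification rate separately for each demographic group."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {g: float(np.mean(y_true[groups == g] != y_pred[groups == g]))
            for g in np.unique(groups)}

def inverse_frequency_weights(groups):
    """Weight each sample inversely to its group's frequency, so that
    under-represented groups contribute equally to the training loss."""
    groups = np.asarray(groups)
    values, counts = np.unique(groups, return_counts=True)
    freq = dict(zip(values, counts / counts.sum()))
    return np.array([1.0 / freq[g] for g in groups])

# Hypothetical toy labels, only to show the shape of the reporting.
y_true = np.array([1, 0, 1, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 0])
groups = np.array(["a", "a", "a", "a", "b", "b"])
print(error_rate_by_group(y_true, y_pred, groups))  # e.g. {'a': 0.25, 'b': 0.5}
print(inverse_frequency_weights(groups))            # group 'b' samples weighted more heavily
```

Reweighting alone, however, does not resolve the deeper difficulty described above: if the model can still infer a protected attribute from the images themselves, balancing the loss does not prevent that attribute from being encoded.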
At this point, the conversation had thus visited the three most prominent places within the decolonial knowledge landscape (decentralization, additive-inclusion, engagement). Along the way, multiple examples were mentioned that showcase how avoiding these three places can cause injustice, oppression, and inequity. It might, for example, leave us:
- unaware of the fact that most data labeling relies on an outsourced and underpaid workforce, the vast majority of which is found in the Global South (Madhumita Murgia, 2019)
- constructing classifiers based on too many categories, or on categories that do not exist
- using categories whose ethico-political weight we do not grasp
- with the idea that bias sits in the data but not in the algorithm, and that it can simply be avoided
- with the impression that more data is always better and that technology can fix itself
One participant asked the difficult question that is gaining more attention in the AI community: is it possible to create a fair algorithm in an unfair world? As it remained unanswered, we shall ‘stay with the trouble’, continuing to confront the uncomfortable.