DCC Dictionary: What do we mean when we talk about relational data ethics?


How does relational ethics help navigate the ethical challenges of data-driven innovation?

By SJ Bennett, a postdoctoral researcher in a research collaboration between the Centre for Technomoral Futures and the Data for Children Collaborative with UNICEF, and Iwona Soppa, our Advocacy and Relations Manager.

 

What is Data Ethics?

Data Ethics describes the principles that guide the socially and ethically responsible use of data and data-driven methods.

Data-driven innovation uses various data-led methods, for example, Machine Learning (ML), to develop new mechanisms and insights in many different contexts. The outputs of these methods are often adopted by policy-makers and practitioners, so data-driven innovation projects can have implications for the freedoms and rights of different groups of people. Using existing datasets can reproduce and reinforce biases and injustices that many groups of people have suffered, both today and historically. This can manifest itself in finance, where automated decision-making is built on machine learning systems. For instance, low-income groups and minorities often experience low credit scores, limited or less favourable borrowing options, and refusals of credit, while the same systems tend to favour white people and higher earners. The injustices of using such flawed methods deepen the economic divide and perpetuate both poverty and discrimination.
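
To make this concrete, here is a minimal, hypothetical sketch in Python. The groups, records, and numbers are invented for illustration and not drawn from any real lending system; it simply shows how unequal approval rates in historical decisions become "ground truth" for any model trained to imitate them.

```python
# Illustrative only: the groups and decision records below are
# invented to show the pattern, not taken from any real system.
from collections import defaultdict

past_decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
approved = defaultdict(int)
for group, was_approved in past_decisions:
    totals[group] += 1
    approved[group] += int(was_approved)

for group in sorted(totals):
    print(f"{group}: approval rate {approved[group] / totals[group]:.0%}")
# group_a: approval rate 75%
# group_b: approval rate 25%
# A model trained to imitate these decisions learns the 75% vs 25%
# gap as if it were correct, reproducing the historical bias.
```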

Therefore, an important aspect of understanding the ethics of data-driven innovation projects is understanding the ethics of the data pipeline itself – from understanding the data, through the impact of model choices, to evaluating how the resulting models work. These stages are complex, and more than one type of bias can arise along the way.

 

Bias in Data-Driven Innovation

In a review of data products developed using data-driven innovation, Akter et al. (2021) identified three main types of bias – data bias, method bias and societal bias.

Data Bias occurs due to technical issues with datasets. It can arise from data scarcity, as illustrated by Amazon’s abandoned recruitment algorithm, which favoured men when recommending top candidates because it had been trained on CVs from a male-dominated applicant pool. Data bias can also arise from the way data is collected, for example, if the format of a survey forces respondents to leave out information that is important to the survey’s purpose.

Method Bias occurs when there are errors inherent to the design of the algorithm itself, for example, when an algorithm systematically fails to predict outcomes it should have predicted.

Societal Bias refers to the impact of historical and social biases within the datasets and methods.
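
As a rough sketch of how the first two categories might be probed in practice, the hypothetical Python example below counts records per group (a simple data scarcity check) and compares model error rates across groups (a simple method bias check). The data and group labels are invented, real audits are far more involved, and societal bias typically needs contextual, qualitative scrutiny rather than a numeric check.

```python
# Illustrative sketch with invented data: two simple checks that map
# loosely onto the data bias and method bias categories above.
from collections import defaultdict

# (group, true_outcome, model_prediction) - hypothetical records
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 0),
]

counts = defaultdict(int)
errors = defaultdict(int)
for group, truth, prediction in records:
    counts[group] += 1
    errors[group] += int(truth != prediction)

# Data bias check: is any group badly under-represented?
print("records per group:", dict(counts))

# Method bias check: does the model err more often for one group?
for group in sorted(counts):
    print(f"{group}: error rate {errors[group] / counts[group]:.0%}")
# group_a: 17% error on 6 records; group_b: 50% error on only 2.
# Scarce data and unequal errors often go hand in hand.
```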



The Aim of Data Ethics

The aim of data ethics is to guide data-driven innovation projects in navigating the rights and wrongs of decision-making. This is not always straightforward: practices like ethics-washing, where ethical language is adopted without meaningful change in behaviour, can undermine it, and translating ethical principles into practical actions is challenging. These challenges often arise when projects focus on the technologies in isolation and do not consider the situations they are designed within and for.

One way to navigate these issues is to try to understand the complex relationships between data, methods and algorithms, and the humans who interact with them. Relational Data Ethics explicitly takes this relational aspect into account, bringing broader contextual considerations into deciding what the right or wrong thing to do is.


How can relational ethics help?

Relational ethics places relationships at the core of its focus, looking at the way in which important relationships are navigated. It emphasises reflecting on whether the right questions are being engaged with, rather than solving a clearly defined problem. Part of this process is giving the individuals and groups who are impacted by a data-driven initiative the opportunity to give feedback on its intentions and results.

Another important aspect of relational ethics is understanding context and actively considering an action or project’s impact on the broader ecosystem it is situated within. That is, in order to understand how to be ethical, we need to understand the dynamic, interwoven contexts and relationships within which a data-driven innovation project is designed and deployed.

A good example of developing this in practice exists in the domain of care robots. Much of the research around care robots aims to deepen understanding not only of the needs of people receiving care and how they will interact with the robots, but also of the existing relationships that introducing such technology will affect, and the new relationships formed around it. These include the relationship between a care receiver and a care robot, between a care receiver and their current caregiver, and the interactions between care providers and the robot technology itself. Providing care, in this context, is not just about completing the required tasks; it is also about the human element, the bonds created through interaction, and the trust developed between people in this environment. Autonomous care robots could cut out much of the human interaction a care receiver has with their caregiver, which is hugely transformative for the humans in this situation; the caregiver’s job changes too, their relationships are affected, and the dynamic shifts. Taking all these aspects into account during development can help ensure a deployment that genuinely improves things for the intended beneficiaries.

In answering “what is the right thing to do?”, we need to ask “how will this affect existing people and relationships?” and actively put the question “is this the right thing to do?” to the people our actions will impact.

The key to this is involving people with relevant expertise – people with experience and knowledge of the contexts within which data-driven innovation projects are deployed. Designing for inclusion has its own challenges. Groot et al. (2022) reflect on facilitating co-production in mental health service design, noting how researchers with experiential knowledge are often silenced. They explain how relational ethics can help address these challenges within collaborative teams, by actively framing experiential knowledge as having the same validity as other kinds. Reflexivity is crucial here: an ongoing process of asking “is this the right thing to do?” and changing our actions based on the answers. This can be tricky within collaborative teams composed of different roles and forms of expertise, which may progress at different speeds and value distinct types of knowledge; it may therefore be necessary to build points of reflexivity into the processes of data-driven innovation.


Want to find out more?


If you are interested in relational approaches to ethics and their importance within data-driven innovation, you can read more here:

Care Ethics in robot design

Applying the value of Ubuntu in AI - Reviglio, U. and Alunge, R., 2020. “I am datafied because we are datafied”: An Ubuntu perspective on (relational) privacy. Philosophy & Technology, 33(4), pp.595-612.

Exploring what Data Justice means for development work - Heeks, R. and Renken, J., 2018. Data justice for development: What would it mean? Information Development, 34(1), pp.90-102.

Understanding how Design Justice can inform system design - Costanza-Chock, S., 2020. Design justice: Community-led practices to build the worlds we need. The MIT Press.

Further reading on fairness and power in algorithmic systems:

Fairness, Equality, and Power in Algorithmic Decision-Making (acm.org)

Expanding Explainability: Towards Social Transparency in AI systems (acm.org)

Accountability in an Algorithmic Society: Relationality, Responsibility, and Robustness in Machine Learning (acm.org)


References:

Akter, S., McCarthy, G., Sajib, S., Michael, K., Dwivedi, Y.K., D’Ambra, J. and Shen, K.N., 2021. Algorithmic bias in data-driven innovation in the age of AI. International Journal of Information Management, 60, p.102387.

Birhane, A., 2021. Algorithmic injustice: a relational ethics approach. Patterns, 2(2). [link]

Dignum, V., 2022. Relational Artificial Intelligence. Computing Science Department, Umea University, Sweden. [link]

Groot, B., Haveman, A. and Abma, T., 2022. Relational, ethically sound co-production in mental health care research: epistemic injustice and the need for an ethics of care. Critical Public Health. [link]


 