Peer-reviewed publications

Olya Kudina


2021 (6)
Feng, S., Kudina, O., Halpern, B. & Scharenborg, O.
Quantifying bias in automatic speech recognition
arXiv preprint arXiv:2103.15122
Automatic speech recognition (ASR) systems promise to deliver objective interpretation of human speech. Practice and recent evidence suggest that state-of-the-art (SotA) ASR systems struggle with the large variation in speech due to, e.g., gender, age, speech impairment, race, and accents. Many factors can cause bias in an ASR system. Our overarching goal is to uncover bias in ASR systems in order to work towards proactive bias mitigation in ASR. This paper is a first step towards this goal and systematically quantifies the bias of a Dutch SotA ASR system against gender, age, regional accents and non-native accents. Word error rates are compared, and an in-depth phoneme-level error analysis is conducted to understand where bias is occurring. We primarily focus on bias due to articulation differences in the dataset. Based on our findings, we suggest bias mitigation strategies for ASR development.
Kudina, O.
Bridging privacy and solidarity in COVID-19 contact-tracing apps through the sociotechnical systems perspective
The COVID-19 pandemic confronts people with moral uncertainty, as they daily balance individual values, rights, and needs with the collective perspective. COVID-19 contact-tracing apps offer digital media to correlate the movement of people with COVID-19 cases with the help of integrated data from public health organizations. This helps to notify people when they come into proximity with carriers of the disease. Regardless of the differences in the technical setup and manner of introduction globally, the values of privacy and solidarity are often pitted against each other when discussing COVID-19 apps. In this paper, I reframe COVID-19 tracking apps as neither a messiah nor a destroyer of pandemic management, but as localized and complex sociotechnical systems that help to shape and qualify moral concerns. This allows us not only to expand the scope of discussion beyond privacy and solidarity, but also to demonstrate how the two can be bridged under careful consideration of the technical, sociocultural, and institutional embedding.
Kudina, O.
Regulating AI in Health Care: The Challenges of Informed User Engagement
The Hastings Center Report
The European Union’s proposed Artificial Intelligence Act is a welcome, ambitious law on the regulation of AI systems. However, it underestimates the responsibilities placed on individual users to navigate the implementation of AI. Focusing on the health care sector, this policy piece examines challenges that the proposed law bypasses. First, effective human-AI collaboration in the diagnostic process hinges on the acknowledgment of AI’s mediating role in this process, on forming a diagnostic dialogue between humans and AI. Second, with AI in this mediating role, the meaning of responsibility is changed to accommodate the broadened scope of clinician and patient duties, modified clinical workflows, and emergent medical norms. Finally, the challenge of media literacy concerns both the issues of access to knowledge and the ability to make informed choices regarding human-AI interaction. This policy piece suggests that embracing the complexity of the use practices is essential to achieving an effective human-AI partnership, in the medical sector and at large.
Kudina, O. & Coeckelbergh, M.
«Alexa, define empowerment»: Voice assistants at home, appropriation and technoperformances
Journal of Information, Communication and Ethics in Society
This paper aims to show how the production of meaning is a matter of people interacting with technologies, throughout their appropriation, and in co-performances. The researchers rely on the case of household-based voice assistants that endorse speaking as a primary mode of interaction with technologies. By analyzing the ethical significance of voice assistants as co-producers of moral meaning intervening in the material and socio-cultural space of the home, the paper invites their informed and critical use as a form of (re-)empowerment while acknowledging their productive role in shaping human values. By combining the two approaches of appropriation and technoperformances, and analyzing the themes of privacy, power, and knowledge, the paper shows how voice assistants help to shape a specific moral subject: embodied in space and formed as it performatively responds to the device and makes sense of it together with others.
Kudina, O.
«Alexa, who am I?»: Voice Assistants and Hermeneutic Lemniscate as the Technologically Mediated Sense-Making
Human Studies
In this paper, I argue that AI-powered voice assistants, just as all technologies, actively mediate our interpretative structures, including values. I show this by explaining the productive role of technologies in the way people make sense of themselves and those around them. More specifically, I rely on the hermeneutics of Gadamer and the material hermeneutics of Ihde to develop a hermeneutic lemniscate as a principle of technologically mediated sense-making. The lemniscate principle links people, technologies and the sociocultural world in the joint production of meaning and explicates the feedback channels between the three counterparts. Using digital voice assistants as an example, I show how these AI-guided devices mediate our moral inclinations, decisions and even our values, while in parallel suggesting how to use and design them in an informed and critical way.
Kudina, O. & de Boer, B.
Co-designing diagnosis: towards a responsible integration of Machine Learning decision-support systems in medical diagnostics.
Journal of Evaluation in Clinical Practice.
This paper aims to show how the focus on eradicating bias from Machine Learning decision-support systems in medical diagnosis diverts attention from the hermeneutic nature of medical decision-making and the productive role of bias. We want to show how an introduction of Machine Learning systems alters the diagnostic process. Reviewing the negative conception of bias and incorporating the mediating role of Machine Learning systems into medical diagnosis are essential for encompassing, critical and informed medical decision-making.
2020 (3)
Nickel, P. J., Kudina, O. & Van de Poel, I.
Moral uncertainty in technomoral change: Bridging the explanatory gap.
Perspectives on Science.
This paper explores the role of moral uncertainty in explaining the morally disruptive character of new technologies. We argue that existing accounts of technomoral change do not fully explain its disruptiveness. This explanatory gap can be bridged by examining the epistemic dimensions of technomoral change, focusing on moral uncertainty and inquiry. To develop this account, we examine three historical cases: the introduction of the early pregnancy test, the contraception pill, and brain death. The resulting account highlights what we call "differential disruption" and provides a resource for fields such as technology assessment, ethics of technology, and responsible innovation.
De Boer, B. & Kudina, O. (forthcoming).
What is morally at stake when using algorithms to make medical diagnoses? Expanding the discussion beyond risks and harms.
Theoretical Medicine and Bioethics.
In this paper, we examine the qualitative moral impact of Machine Learning (ML) decision-support systems in the medical diagnosis process. To date, discussions about Machine Learning in this context have justifiably focused on concerns that can be quantified and objectively assessed, such as measuring the extent of harm or calculating introduced risks. We argue that such discussions neglect the qualitative moral impact of these technologies, which we explore with the help of the philosophical approaches of Technomoral Change and Technological Mediation. Albeit anticipatory in nature, the findings expand on current discussions and contribute to more encompassing and better informed decision-making when considering the introduction of ML in medical practice, while acknowledging multiple stakeholders and the active role of technologies in producing ethical concerns.
Boenink, M. & Kudina, O.
Values in Responsible Research and Innovation: from entities to practices.
Journal of Responsible Innovation, 7(3), pp. 450-470.
This article explores the understanding of values in Responsible Research and Innovation (RRI). First, it analyses how two mainstream RRI approaches, the largely substantial one by Von Schomberg and the procedural one by Stilgoe and colleagues, identify and conceptualize values. We argue that by treating values as relatively stable entities, directly available for reflection, both fall into an 'entity trap'. As a result, the hermeneutic work required to identify values is overlooked. We therefore seek to bolster a practice-based take on values, which approaches values as the evolving results of valuing processes. We highlight how this approach views values as lived realities, interactive and dynamic, discuss methodological implications for RRI, and explore potential limitations. Overall, the strength of this approach is that it enables RRI scholars and practitioners to better acknowledge the complexities involved in valuing.
2019 (5)
Kudina, O.
The technological mediation of morality: value dynamism, and the complex interaction between ethics and technology [PhD thesis].
Enschede: University of Twente.
In this dissertation, Olya Kudina investigates the complex interactions between ethics and technology. Central to this is the phenomenon of "value dynamism": the way technologies co-shape the meaning of the values that guide us through our lives and with which we evaluate these same technologies. The dissertation provides an encompassing view of value dynamism and the mediating role of technologies in it through empirical and philosophical investigations, as well as with attention to the larger fields of ethics, design and Technology Assessment.
Kudina, O. & Verbeek, P. P.
Ethics from within: Google Glass, the Collingridge dilemma, and the mediated value of privacy.
Science, Technology, & Human Values, 44(2), pp. 291-314.
This article revisits the Collingridge dilemma in the context of contemporary ethics of technology, when technologies affect both society and the value frameworks we use to evaluate them. Present-day approaches to this dilemma focus on methods to anticipate ethical impacts of a technology ("technomoral scenarios"), being too speculative to be reliable, or on ethically regulating technological developments ("sociotechnical experiments"), discarding anticipation of the future implications. We present the approach of technological mediation as an alternative that focuses on the dynamics of the interaction between technologies and human values. By investigating online discussions about Google Glass, we examine how people articulate new meanings of the value of privacy. This study of "morality in the making" allows developing a modest and empirically informed form of anticipation.
Kudina, O.
Accounting for the Moral Significance of Technology: Revisiting the Case of Non-Medical Sex Selection.
Journal of Bioethical Inquiry, 16(1), pp. 75-85.
This article explores the moral significance of technology, reviewing a microfluidic chip for sperm sorting and its use for non-medical sex selection. I explore how a specific material setting of this new iteration of pre-pregnancy sex selection technology — with a promised low cost, non-invasive nature and possibility to use at home — fosters new and exacerbates existing ethical concerns. I compare this new technology with the existing sex selection methods of sperm sorting and Prenatal Genetic Diagnosis. Current ethical and political debates on emerging technologies predominantly focus on the quantifiable risk-and-benefit logic that invites an unequivocal "either-or" decision on their future and misses the contextual ethical impact of technology. The article aims to deepen the discussion on sex selection and supplement it with the analysis of the new technology's ethical potential to alter human practices, perceptions and the evaluative concepts with which we approach it.
Kudina, O.
Alexa does not care. Should you? Media literacy in the age of Digital Voice Assistants.
Glimpse, Vol. 19, pp. 106-114.
This article explores the ethical dimension of digital voice assistants from the angle of postphenomenology and the technological mediation approach, whereby technology plays a mediating role in human-world relations. Digital voice assistants, such as Amazon's Alexa or Google Home, increasingly form an integral part of everyday life for many people. Powered by Artificial Intelligence and based on voice interaction, voice assistants promise constant accompaniment by answering any questions people might have and even managing the physical space of their homes. However, while accompanying people's daily lives, voice assistants also seamlessly redefine the way people talk, interact and perceive each other. In view of their intentionalities, such as interaction by voice, a command-based model of communication and the development of attachment, digital voice assistants mediate the norms of interaction beyond their immediate use, the way people perceive themselves and those around them, and the normative expectations they consequently form. The article argues that understanding how technologies, such as digital voice assistants, mediate our moral landscape forms an essential part of media literacy in the digital age.
Kudina, O. & Bruce, L.
The #10YearChallenge: Harmless Fun or Cause for Concern?
Bioethics.net, February 11. Accessed on November 26, 2019
This blogpost explores pervasive photo-sharing across social networks from the perspective of bioethics and philosophy of technology. More specifically, it positions human faces in photos as a unique form of identification, a type of biometric data most specific to a person and difficult to change. Online photo-sharing practices, such as the #10YearChallenge, make human faces susceptible to commercial facial recognition software. The article explores the ethical challenges that ensue from this and suggests a governance direction through the framework of bioethics.
2018 (2)
Kudina, O. & Bas, M.
«The end of privacy as we know it»: Reconsidering public space in the age of Google Glass. In B. C. Newell, T. Timan, & B. J. Koops (Eds.).
Surveillance, Privacy and Public Space (pp. 131-152). Routledge.
In this chapter, we seek to examine the impact of mixed reality glasses on the nature of public space, and suggest turning to Google Glass as a device with a history in this regard. Philosophically, we rely on the theory of technological mediation (Verbeek 2005, 2011) and the thought of Hannah Arendt (1990, 2013) to understand Google Glass better. The theory of technological mediation understands technologies to be in dynamic, mediating relations with people and the world (Ihde 1990, Verbeek 2005). As such, people and technologies are not independent of each other: on the one hand, people design technologies with certain intentions; on the other hand, these same technologies help to shape perceptions and interpretations of the world and consequently influence the way people act. The ethical consequence of this is that human values are also not independent of technologies — technologies mediate morality (Verbeek 2011). In the current study, we want to understand how privacy as a value takes shape in relation to Google Glass in public space.
de Boer, B., Hoek, J., & Kudina, O.
Can the technological mediation approach improve technology assessment? A critical view from 'within'.
Journal of Responsible Innovation, 5(3), pp. 299-315.
The technological mediation approach aspires to complement current Technology Assessment (TA) practices. It aims to do so by addressing ethical concerns from 'within' human-technology relations, leading to ethical Constructive Technology Assessment (eCTA), as articulated by Asle H. Kiran, Nelly Oudshoorn, and Peter-Paul Verbeek in their 2015 article. In this paper, we problematize this ambition. Firstly, we situate the technological mediation approach in the history of TA. Secondly, as a study into the normativity from 'within' human-technology relations, we reveal the phenomenological and existential origins of Verbeek's technological mediation approach. Thirdly, we show that there are two possible readings of this approach: a strong and a weak one. The weak reading can augment current TA practices but is eventually uncommitted to the idea of technological mediation. The strong reading defines a wholly new scope for our engagement with (emerging) technologies but is incompatible with existing TA approaches.
2014 (1)
Kudina, O.
Technology and academic virtues in Ukraine: Escaping the Soviet path dependency.
EASST Review, 33(4), pp. 9-11.
Inspired by participation in the EASST Panel on the postsocialist condition, in this essay I look into the postsocialist transition in the sphere of education in Ukraine as influenced by the introduction of Information and Communication Technologies. Utilizing Swierstra's (2009) concept of technomoral change, I trace how new technologies gradually assist the renegotiation of the moral landscape in the Ukrainian education sphere, whose integrity has often been questioned since Soviet times. STS and Philosophy of Technology can be useful frameworks to further enhance existing knowledge on the postsocialist transition and to generate new knowledge on how this change can be facilitated.