13 peer-reviewed publications
h-index: 5
Citations: 106

Olya Kudina

Publications

2021 (2)
Kudina, O.
"Alexa, who am I?": Voice Assistants and Hermeneutic Lemniscate as the Technologically Mediated Sense-Making
Human Studies
Abstract
In this paper, I argue that AI-powered voice assistants, just as all technologies, actively mediate our interpretative structures, including values. I show this by explaining the productive role of technologies in the way people make sense of themselves and those around them. More specifically, I rely on the hermeneutics of Gadamer and the material hermeneutics of Ihde to develop a hermeneutic lemniscate as a principle of technologically mediated sense-making. The lemniscate principle links people, technologies and the sociocultural world in the joint production of meaning and explicates the feedback channels between the three counterparts. Using digital voice assistants as an example, I show how these AI-guided devices mediate our moral inclinations, decisions and even our values, while in parallel suggesting how to use and design them in an informed and critical way.
Kudina, O. & de Boer, B.
Co-designing diagnosis: towards a responsible integration of Machine Learning decision-support systems in medical diagnostics.
Journal of Evaluation in Clinical Practice.
Abstract
This paper aims to show how the focus on eradicating bias from Machine Learning decision-support systems in medical diagnosis diverts attention from the hermeneutic nature of medical decision-making and the productive role of bias. We want to show how the introduction of Machine Learning systems alters the diagnostic process. Reviewing the negative conception of bias and incorporating the mediating role of Machine Learning systems in medical diagnosis are essential for encompassing, critical and informed medical decision-making.
2020 (3)
Nickel, P. J., Kudina, O. & Van de Poel, I.
Moral uncertainty in technomoral change: Bridging the explanatory gap.
Perspectives on Science.
Abstract
This paper explores the role of moral uncertainty in explaining the morally disruptive character of new technologies. We argue that existing accounts of technomoral change do not fully explain its disruptiveness. This explanatory gap can be bridged by examining the epistemic dimensions of technomoral change, focusing on moral uncertainty and inquiry. To develop this account, we examine three historical cases: the introduction of the early pregnancy test, the contraception pill, and brain death. The resulting account highlights what we call "differential disruption" and provides a resource for fields such as technology assessment, ethics of technology, and responsible innovation.
De Boer, B. & Kudina, O. (forthcoming).
What is morally at stake when using algorithms to make medical diagnoses? Expanding the discussion beyond risks and harms.
Theoretical Medicine and Bioethics.
Abstract
In this paper, we examine the qualitative moral impact of Machine Learning (ML) decision-support systems in the medical diagnosis process. To date, discussions about Machine Learning in this context have justifiably focused on concerns that can be quantified and objectively assessed, such as measuring the extent of harm or calculating introduced risks. We argue that such discussions neglect the qualitative moral impact of these technologies, which we explore with the help of the philosophical approaches of Technomoral Change and Technological Mediation. Albeit anticipatory in nature, the findings expand on current discussions and contribute to more encompassing and better-informed decision-making when considering the introduction of ML in medical practice, while acknowledging multiple stakeholders and the active role of technologies in producing ethical concerns.
Boenink, M. & Kudina, O.
Values in Responsible Research and Innovation: from entities to practices.
Journal of Responsible Innovation, 7(3), pp. 450-470.
Abstract
This article explores the understanding of values in Responsible Research and Innovation (RRI). First, it analyses how two mainstream RRI approaches, the largely substantial one by Von Schomberg and the procedural one by Stilgoe and colleagues, identify and conceptualize values. We argue that by treating values as relatively stable entities, directly available for reflection, both fall into an 'entity trap'. As a result, the hermeneutic work required to identify values is overlooked. We therefore seek to bolster a practice-based take on values, which approaches values as the evolving results of valuing processes. We highlight how this approach views values as lived realities, interactive and dynamic, discuss methodological implications for RRI, and explore potential limitations. Overall, the strength of this approach is that it enables RRI scholars and practitioners to better acknowledge the complexities involved in valuing.
2019 (5)
Kudina, O.
The technological mediation of morality: value dynamism, and the complex interaction between ethics and technology [PhD thesis].
Enschede: University of Twente.
Abstract
In this dissertation, Olya Kudina investigates the complex interactions between ethics and technology. Central to this is the phenomenon of "value dynamism", which concerns how technologies co-shape the meaning of the values that guide us through our lives and with which we evaluate these same technologies. The dissertation provides an encompassing view of value dynamism and the mediating role of technologies in it through empirical and philosophical investigations, with attention to the larger fields of ethics, design and Technology Assessment.
Kudina, O. & Verbeek, P. P.
Ethics from within: Google Glass, the Collingridge dilemma, and the mediated value of privacy.
Science, Technology, & Human Values, 44(2), pp. 291-314.
Abstract
This article revisits the Collingridge dilemma in the context of contemporary ethics of technology, where technologies affect both society and the value frameworks we use to evaluate them. Present-day approaches to this dilemma either focus on methods to anticipate the ethical impacts of a technology ("technomoral scenarios"), which tend to be too speculative to be reliable, or on ethically regulating technological developments ("sociotechnical experiments"), which forgo anticipation of future implications. We present the approach of technological mediation as an alternative that focuses on the dynamics of the interaction between technologies and human values. By investigating online discussions about Google Glass, we examine how people articulate new meanings of the value of privacy. This study of "morality in the making" makes it possible to develop a modest and empirically informed form of anticipation.
Kudina, O.
Accounting for the Moral Significance of Technology: Revisiting the Case of Non-Medical Sex Selection.
Journal of Bioethical Inquiry, 16(1), pp. 75-85.
Abstract
This article explores the moral significance of technology, reviewing a microfluidic chip for sperm sorting and its use for non-medical sex selection. I explore how a specific material setting of this new iteration of pre-pregnancy sex selection technology — with a promised low cost, non-invasive nature and possibility to use at home — fosters new and exacerbates existing ethical concerns. I compare this new technology with the existing sex selection methods of sperm sorting and Prenatal Genetic Diagnosis. Current ethical and political debates on emerging technologies predominantly focus on the quantifiable risk-and-benefit logic that invites an unequivocal "either-or" decision on their future and misses the contextual ethical impact of technology. The article aims to deepen the discussion on sex selection and supplement it with the analysis of the new technology's ethical potential to alter human practices, perceptions and the evaluative concepts with which we approach it.
Kudina, O.
Alexa does not care. Should you? Media literacy in the age of Digital Voice Assistants.
Glimpse, Vol. 19, pp. 106-114.
Abstract
This article explores the ethical dimension of digital voice assistants from the angle of postphenomenology and the technological mediation approach, whereby technology plays a mediating role in human-world relations. Digital voice assistants, such as Amazon's Alexa or Google Home, increasingly form an integral part of everyday life for many people. Powered by Artificial Intelligence and based on voice interaction, voice assistants promise constant accompaniment by answering any questions people might have and even managing the physical space of their homes. However, while accompanying people's daily lives, voice assistants also seamlessly redefine the way people talk, interact and perceive each other. In view of their intentionalities, such as interaction by voice, a command-based model of communication and the development of attachment, digital voice assistants mediate the norms of interaction beyond their immediate use, the way people perceive themselves and those around them, and the normative expectations they consequently form. The article argues that understanding how technologies, such as digital voice assistants, mediate our moral landscape forms an essential part of media literacy in the digital age.
Kudina, O. & Bruce, L.
The #10YearChallenge: Harmless Fun or Cause for Concern?
Bioethics.net, February 11, 2019. Accessed November 26, 2019.
Abstract
This blogpost explores pervasive photo-sharing across social networks from the perspective of bioethics and philosophy of technology. More specifically, it positions the human face in photos as a unique form of identification, a type of biometric data most specific to a person and difficult to change. Online photo-sharing practices, such as the #10YearChallenge, make human faces susceptible to commercial facial recognition software. The blogpost explores the ethical challenges that ensue from this and suggests governance directions through the framework of bioethics.
2018 (2)
Kudina, O. & Bas, M.
"The end of privacy as we know it": Reconsidering public space in the age of Google Glass. In B. C. Newell, T. Timan, &  B. J. Koops (Eds.).
Surveillance, Privacy and Public Space (pp. 131-152). Routledge.
Abstract
In this chapter, we examine the impact of mixed reality glasses on the nature of public space and suggest turning to Google Glass as a device with a history in this regard. Philosophically, we rely on the theory of technological mediation (Verbeek 2005, 2011) and the thought of Hannah Arendt (1990, 2013) to better understand Google Glass. The theory of technological mediation understands technologies to be in dynamic, mediating relations with people and the world (Ihde 1990, Verbeek 2005). As such, people and technologies are not independent of each other: on the one hand, people design technologies with certain intentions; on the other hand, these same technologies help shape how the world is perceived and interpreted and consequently influence the way people act. The ethical consequence is that human values are also not independent of technologies: technologies mediate morality (Verbeek 2011). In the current study, we want to understand how privacy as a value takes shape in relation to Google Glass in public space.
de Boer, B., Hoek, J., & Kudina, O.
Can the technological mediation approach improve technology assessment? A critical view from 'within'.
Journal of Responsible Innovation, 5(3), pp. 299-315.
Abstract
The technological mediation approach aspires to complement current Technology Assessment (TA) practices. It aims to do so by addressing ethical concerns from 'within' human-technology relations, leading to an ethical Constructive Technology Assessment (eCTA), as articulated by Asle H. Kiran, Nelly Oudshoorn, and Peter-Paul Verbeek in their 2015 article. In this paper, we problematize this ambition. Firstly, we situate the technological mediation approach in the history of TA. Secondly, as a study of normativity from 'within' human-technology relations, we reveal the phenomenological and existential origins of Verbeek's technological mediation approach. Thirdly, we show that there are two possible readings of this approach: a strong and a weak one. The weak reading can augment current TA practices but is ultimately uncommitted to the idea of technological mediation. The strong reading defines a wholly new scope for our engagement with (emerging) technologies but is incompatible with existing TA approaches.
2014 (1)
Kudina, O.
Technology and academic virtues in Ukraine: Escaping the Soviet path dependency.
EASST Review, 33(4), pp. 9-11.
Abstract
Inspired by my participation in the EASST panel on the postsocialist condition, in this essay I look into the postsocialist transition in Ukrainian education as influenced by the introduction of Information and Communication Technologies. Utilizing Swierstra's (2009) concept of techno-moral change, I trace how new technologies gradually assist the renegotiation of the moral landscape in the Ukrainian education sector, whose integrity has often been questioned since Soviet times. STS and Philosophy of Technology can be useful frameworks to further enhance existing knowledge on the postsocialist transition and to generate new knowledge on how this change can be facilitated.