Examples of studies on the theme include: (Coghlan et al., 2020; Coghlan et al., 2023; Paltiel et al., 2023; Cohney & Cheong, 2023; Cheong et al., 2021; Njoto et al., 2022), and many more…
Collaborators include: (from UniMelb unless otherwise stated) Simon Coghlan, Leah Ruppanner (and the Future of Work Hallmark Research Initiative), Lea Frermann, Tony Wirth (Sydney), Reeva Lederman, Tim Miller (Queensland), Jeannie Paterson, Gabby Bush, Ronal Singh, Shaanan Cohney, Inbar Levy, Lía Acosta Rueda, Sophie Squires, Tim Kariotis, John Howe, John de New, Sheilla Njoto (former PhD student), Aidan McLoughney* (PhD candidate), Sarita Rosenstock, Kobi Leins (KCL), Joanne Byrne, Upol Ehsan (GA Tech)…
Funded projects include:
- A Fair Day’s Work: Detecting Wage Theft with Data (Paul Ramsay Foundation/data.org);
- Understanding digital inequality in Victoria (MSEI);
- Gendered algorithms: Rethinking discrimination in automated recruitment predictions (MCDS);
- Ethical Implications of AI Bias as a Result of Workforce Gender Imbalance (UniBank).
A new interdisciplinary area of my research since ca. 2022 is the intersection of information systems ethics and software engineering, investigating contemporary issues with software development, use, and deployment, and their broader impact on stakeholders ranging from users to organisations (Gao et al., 2024). An example of an area that attracts popular attention is the ethics of ‘protestware’ (Cheong et al., 2024). I am part of the BRIDGES research cluster, working closely with academics from Japan, Papua New Guinea, and Singapore.
Collaborators include: Raula Gaikovina Kula (NAIST), Christoph Treude (SMU), Takashi Nakano (NAIST), Kazumasa Shimari (NAIST), Sarita Rosenstock (UniMelb), Mansooreh Zahedi (UniMelb), Haoyu Gao (UniMelb)…
References
2024
- Documenting ethical considerations in open source AI models
Haoyu Gao, Mansooreh Zahedi, Christoph Treude, Sarita Rosenstock, and Marc Cheong
2024
Background: The development of AI-enabled software heavily depends on AI model documentation, such as model cards, due to different domain expertise between software engineers and model developers. From an ethical standpoint, AI model documentation conveys critical information on ethical considerations along with mitigation strategies for downstream developers to ensure the delivery of ethically compliant software. However, knowledge on such documentation practice remains scarce. Aims: The objective of our study is to investigate how developers document ethical aspects of open source AI models in practice, aiming at providing recommendations for future documentation endeavours. Method: We selected three sources of documentation on GitHub and Hugging Face, and developed a keyword set to identify ethics-related documents systematically. After filtering an initial set of 2,347 documents, we identified 265 relevant ones and performed thematic analysis to derive the themes of ethical considerations. Results: Six themes emerge, with the three largest ones being model behavioural risks, model use cases, and model risk mitigation. Conclusions: Our findings reveal that open source AI model documentation focuses on articulating ethical problem statements and use case restrictions. We further provide suggestions to various stakeholders for improving documentation practice regarding ethical considerations.
- Ethical considerations toward protestware
Marc Cheong, Raula Gaikovina Kula, and Christoph Treude
2024
This article looks into possible scenarios where developers might consider turning their free and open source software into protestware. Using different frameworks commonly used in artificial intelligence (AI) ethics, we extend the applications of AI ethics to the study of protestware.
2023
- To chat or bot to chat: Ethical issues with using chatbots in mental health
Simon Coghlan, Kobi Leins, Susie Sheldrick, Marc Cheong, Piers Gooding, and Simon D’Alfonso
2023
This paper presents a critical review of key ethical issues raised by the emergence of mental health chatbots. Chatbots use varying degrees of artificial intelligence and are increasingly deployed in many different domains including mental health. The technology may sometimes be beneficial, such as when it promotes access to mental health information and services. Yet, chatbots raise a variety of ethical concerns that are often magnified in people experiencing mental ill-health. These ethical challenges need to be appreciated and addressed throughout the technology pipeline. After identifying and examining four important ethical issues by means of a recognised ethical framework comprised of five key principles, the paper offers recommendations to guide chatbot designers, purveyors, researchers and mental health practitioners in the ethical creation and deployment of chatbots for mental health.
- Approaches and Models for Teaching Digital Ethics in Information Systems Courses – A Review of the Literature
Minna Paltiel, Marc Cheong, Simon Coghlan, and Reeva Lederman
Australasian Journal of Information Systems, 2023
- COVID Down Under: where did Australia’s pandemic apps go wrong?
Shaanan Cohney, and Marc Cheong
In 2023 IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS), 2023
Governments and businesses worldwide deployed a variety of technological measures to help prevent and track the spread of COVID-19. In Australia, these applications contained usability, accessibility, and security flaws that hindered their effectiveness and adoption. Australia, like most countries, has transitioned to treating COVID as endemic. However, it is yet to absorb lessons from the technological issues with its approach to the pandemic. In this short paper we a) provide a systematization of the most notable events; b) identify and review different failure modes of these applications; and c) develop recommendations for developing apps in the face of future crises. Our work focuses on a single country. However, Australia’s issues are particularly instructive as they highlight surprising pitfalls that countries should address in the face of a future pandemic.
2022
- Gender Bias in AI Recruitment Systems: A Sociological- and Data Science-based Case Study
Sheilla Njoto, Marc Cheong, Reeva Lederman, Aidan McLoughney, Leah Ruppanner, and Anthony Wirth
In Proceedings of the 2022 IEEE International Symposium on Technology and Society (ISTAS), 2022
2021
- Computer Science Communities: Who is Speaking, and Who is Listening to the Women? Using an Ethics of Care to Promote Diverse Voices
Marc Cheong, Kobi Leins, and Simon Coghlan
In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, New York, NY, USA, 2021
Those working on policy, digital ethics and governance often refer to issues in ‘computer science’, which includes, but is not limited to, common subfields such as Artificial Intelligence (AI), Computer Science (CS), Computer Security (InfoSec), Computer Vision (CV), Human Computer Interaction (HCI), Information Systems (IS), Machine Learning (ML), Natural Language Processing (NLP) and Systems Architecture. Within this framework, this paper is a preliminary exploration of two hypotheses, namely 1) each community has differing inclusion of minoritised groups (using women as our test case, by identifying female-sounding names); and 2) even where women exist in a community, they are not published representatively. Using data from 20,000 research records, totalling 503,318 names, preliminary data supported our hypotheses. We argue that ACM has an ethical duty of care to its community to increase these ratios, and to hold individual computing communities to account in order to do so, by providing incentives and a regular reporting system, in order to uphold its own Code.
2020
- Tracking, tracing, trust: contemplating mitigating the impact of COVID-19 through technological interventions
Simon Coghlan, Marc Cheong, and Benjamin Coghlan
2020