THE UNIVERSITY of EDINBURGH

DEGREE REGULATIONS & PROGRAMMES OF STUDY 2026/2027

Draft Edition - Due to be published Thursday 9th April 2026

Timetable information in the Course Catalogue may be subject to change.


Postgraduate Course: Algorithmic Bias, Fairness and Justice (Online) (EFIE11467)

Course Outline
School: Edinburgh Futures Institute  College: College of Arts, Humanities and Social Sciences
Credit level (Normal year taken): SCQF Level 11 (Postgraduate)
Course type: Online Distance Learning  Availability: Available to all students
SCQF Credits: 20  ECTS Credits: 10
Summary: This course introduces students to one of the most important challenges, in academia and industry alike, for making data-driven AI systems ethical: algorithmic bias. Drawing on interdisciplinary perspectives from computing, statistics, political and legal philosophy, and feminist and media theory, students will learn about principles for addressing fairness in AI and machine learning systems, and how these relate to the limits and opportunities of algorithmic fairness with regard to the aims of social and distributive justice.
Course description: Data-driven machine learning systems learn from biases in existing data and risk reproducing and amplifying the social patterns of bias reflected in that data, such as racial or gender bias. They also consume large amounts of energy and fresh water, contributing to climate change. Such systems can therefore exacerbate existing injustices. Algorithmic fairness is an emerging research area that addresses societal biases in the hope of making data-driven AI systems ethical. This course introduces students to issues of bias, fairness, and justice in data-driven machine learning systems, drawing on a diverse set of perspectives from computing, statistics, political and legal philosophy, and feminist and media theory in order to raise critical awareness and understanding of these issues and to provide students with the tools and knowledge to examine such problems.

Topics include sources of bias in data and machine learning, methods for measuring and mitigating bias and unfairness, notions of individual and group fairness and the tensions between them, limitations of algorithmic fairness approaches, environmental impacts of machine learning systems, and philosophical accounts of distributive and structural justice relevant to machine learning systems. Students will work together in collaborative groups to practise identifying and evaluating algorithmic bias concerns in concrete cases, and to jointly deliberate about and communicate the benefits and limits of different methods, techniques, and approaches to algorithmic fairness and justice.
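To make the group-fairness notions mentioned above concrete, the sketch below computes one common metric, the demographic parity difference: the gap in positive-prediction rates between protected groups. It is an illustration only; the function names and toy data are not drawn from the course materials.

```python
# Illustrative sketch of a group-fairness metric: demographic parity.
# A classifier satisfies demographic parity when the rate of positive
# predictions is (approximately) equal across protected groups.

def positive_rate(predictions, groups, group):
    """Fraction of positive predictions among members of `group`."""
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members)

def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rates between any two groups."""
    rates = [positive_rate(predictions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Toy data: binary predictions for individuals in two groups, 'a' and 'b'.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b']

# Group 'a' receives positive predictions at rate 0.75, group 'b' at 0.25,
# so the demographic parity difference is 0.50.
print(f"Demographic parity difference: "
      f"{demographic_parity_difference(preds, groups):.2f}")
```

A gap of zero indicates parity; a large gap flags a disparity worth investigating, though, as the course discusses, satisfying one such metric can conflict with satisfying others.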

Edinburgh Futures Institute (EFI) - Online Hybrid Course Delivery Information:

The Edinburgh Futures Institute will teach this course in a way that enables online and on-campus students to study together. To enable this, the course will use technologies to record and live-stream student and staff participation during their teaching and learning activities. Students should note that their interactions may be recorded and live-streamed (see the Lecture Recording and Virtual Classroom policies for more details). There will, however, be options to control whether or not your video and audio are enabled.

You will need access to a personal computing device for this course. Most activities will take place in a web browser, unless otherwise stated. We recommend using a device with a screen, a physical keyboard, and internet access.
Entry Requirements (not applicable to Visiting Students)
Pre-requisites Co-requisites
Prohibited Combinations Other requirements None
Information for Visiting Students
Pre-requisitesNone
High Demand Course? Yes
Course Delivery Information
Academic year 2026/27, Available to all students (SV1) Quota:  0
Course Start Semester 2
Timetable
Learning and Teaching activities (Further Info) Total Hours: 200 (Lecture Hours 20, Programme Level Learning and Teaching Hours 4, Directed Learning and Independent Learning Hours 176)
Assessment (Further Info) Written Exam 0 %, Coursework 100 %, Practical Exam 0 %
Additional Information (Assessment) The course will be assessed by means of the following components:

1) Mock Ethics Statement for Published Paper or Publicly Available AI System (20%)

Students will be divided into groups to write a 750-word mock 'authors' ethics statement' for an AI research paper that presents significant fairness and bias concerns.

2) Mock Ethics Panel for Mock Assessment of Mock Ethics Statement (20%)

In this assessment, students will be divided into groups to function as Mock Ethics Assessment Panels evaluating another group's Ethics Statement. Groups will write a 750-word report, detailing their assessment of another group's ethics statement.

3) Fairness Risk Report (60%)

Students will also produce an individual 2,500-word 'fairness risk report' on a selected case study involving an algorithmic model about which there are evident fairness and bias concerns.
Feedback: Feedback on any formative assessment may be provided in various formats, for example written, oral, video, face-to-face, whole-class, or individual. The Course Organiser will decide which format is most appropriate to the nature of the assessment.

Feedback on both formative and summative in-course assessed work will be provided in time to be of use in subsequent assessments within the course.

Feedback on the summative assessment(s) will be provided in written form via Learn, the University of Edinburgh's Virtual Learning Environment (VLE).

Formative Feedback Opportunity:

Formative feedback is ongoing feedback which monitors learning and is intended to improve performance in the same course, in future courses, and also beyond study.

Formative feedback will be provided during the immersive phase to the asynchronous groups, and to individuals in the Q&A session, where the Course Organiser will help shape students' understanding of the core issues and of the first collaborative task.

The Course Organiser will be available to meet with groups to discuss work in progress and mediate any significant problems or disagreements within the group that cannot be resolved internally.
No Exam Information
Learning Outcomes
On completion of this course, the student will be able to:
  1. Demonstrate a basic understanding of key concepts, theories, metrics, and principles of bias, fairness, and justice from statistics, machine learning, feminist, legal, and political philosophy.
  2. Critically discuss and evaluate a variety of perspectives in debates on how various conceptions of bias, fairness, and justice are to be used for the design of ethical data and AI ecosystems.
  3. Work constructively with others to weigh bias, fairness and justice considerations and identify potential remedies and interventions for a concrete instance of a data-driven machine learning model.
  4. Produce and clearly communicate for non-specialists a basic analysis and advisory output pertaining to bias, fairness, and justice in a concrete data-driven AI application.
  5. Identify and critically evaluate the technical and moral trade-offs involved in decisions about which fairness metrics or interventions to employ in a given AI application context, while weighing these against the broader aims of justice.
Reading List
Required Reading:

Angwin, J., & Larson, J. (2016, December 30). Bias in Criminal Risk Scores Is Mathematically Inevitable, Researchers Say. ProPublica. https://www.propublica.org/article/bias-in-criminal-risk-scores-is-mathematically-inevitable-researchers-say

Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine Bias: There's Software Used Across the Country to Predict Future Criminals. And It's Biased against Blacks. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Barocas, S., Hardt, M., & Narayanan, A. (2023). Fairness and machine learning: Limitations and opportunities (Chapter 1). MIT Press. https://fairmlbook.org

Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim code (Chapter 1). Polity.

Blodgett, S. L., Barocas, S., Daumé III, H., & Wallach, H. (2020). Language (Technology) is Power: A Critical Survey of "Bias" in NLP. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 5454-5476. https://doi.org/10.18653/v1/2020.acl-main.485

Hardt, M. (2016, July 12). How big data is unfair. Medium. https://medium.com/@mrtz/how-big-data-is-unfair-9aa544d739de

Jordan, M. I., & Mitchell, T. M. (2015). Machine learning: Trends, perspectives, and prospects. Science, 349(6245), 255-260. https://doi.org/10.1126/science.aaa8415

Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2022). A Survey on Bias and Fairness in Machine Learning. ACM Computing Surveys, 54(6), 1-35. https://doi.org/10.1145/3457607

Stark, L. (2018). Algorithmic psychometrics and the scalable subject. Social Studies of Science, 48(2), 204-231. https://doi.org/10.1177/0306312718772094

Wallach, H., Desai, M., Pangakis, N., Cooper, A. F., Wang, A., Barocas, S., Chouldechova, A., Atalla, C., Blodgett, S. L., Corvi, E., Dow, P. A., Garcia-Gathright, J., Olteanu, A., Reed, S., Sheng, E., Vann, D., Wortman Vaughan, J., Vogel, M., Washington, H., & Jacobs, A. Z. (2024). Evaluating Generative AI Systems is a Social Science Measurement Challenge. Workshop on Evaluating Evaluations (EvalEval). https://doi.org/10.48550/arXiv.2411.10939

Recommended Reading:

Binns, R. (2018). Fairness in machine learning: Lessons from political philosophy. In S. A. Friedler & C. Wilson (Eds.), Proceedings of the 1st conference on fairness, accountability and transparency (Vol. 81, pp. 149-159). PMLR. https://proceedings.mlr.press/v81/binns18a.html

Birhane, A. (2021). Algorithmic injustice: A relational ethics approach. Patterns, 2(2), 100205. https://doi.org/10.1016/j.patter.2021.100205

Blodgett, S. L., Lopez, G., Olteanu, A., Sim, R., & Wallach, H. (2021). Stereotyping Norwegian Salmon: An Inventory of Pitfalls in Fairness Benchmark Datasets. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 1004-1015. https://doi.org/10.18653/v1/2021.acl-long.81

Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. In S. A. Friedler & C. Wilson (Eds.), Proceedings of the 1st Conference on Fairness, Accountability and Transparency (Vol. 81, pp. 77-91). PMLR. http://proceedings.mlr.press/v81/buolamwini18a.html

Caliskan, A., Bryson, J. J., & Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334), 183-186. https://doi.org/10.1126/science.aal4230

Fazelpour, S., & Danks, D. (2021). Algorithmic bias: Senses, sources, solutions. Philosophy Compass, 16(8). https://doi.org/10.1111/phc3.12760

Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., & Crawford, K. (2018). Datasheets for Datasets. arXiv:1803.09010 [Cs]. http://arxiv.org/abs/1803.09010

Hao, K., & Stray, J. (2019). Can you make AI fairer than a judge? Play our courtroom algorithm game. MIT Technology Review. https://www.technologyreview.com/2019/10/17/75285/ai-fairer-than-judge-criminal-risk-assessment-algorithm/

Hoffmann, A. L. (2019). Where fairness fails: Data, algorithms, and the limits of antidiscrimination discourse. Information, Communication & Society, 22(7), 900-915. https://doi.org/10.1080/1369118X.2019.1573912

Kasirzadeh, A. (2022). Algorithmic Fairness and Structural Injustice: Insights from Feminist Political Philosophy. Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, 349-356. https://doi.org/10.1145/3514094.3534188

Kulynych, B., Overdorf, R., Troncoso, C., & Gürses, S. (2020). POTs: Protective optimization technologies. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 177-188. https://doi.org/10.1145/3351095.3372853

Ledford, H. (2019). Millions of black people affected by racial bias in health-care algorithms. Nature, 574(7780), 608-609. https://doi.org/10.1038/d41586-019-03228-6

Longpre, S., Kapoor, S., Klyman, K., Ramaswami, A., Bommasani, R., Blili-Hamelin, B., Huang, Y., Skowron, A., Yong, Z.-X., Kotha, S., Zeng, Y., Shi, W., Yang, X., Southen, R., Robey, A., Chao, P., Yang, D., Jia, R., Kang, D., ... Henderson, P. (2024). A Safe Harbor for AI Evaluation and Red Teaming (Version 1). arXiv. https://doi.org/10.48550/ARXIV.2403.04893

Mumford, L. (1964). Authoritarian and Democratic Technics. Technology and Culture, 5(1), 1. https://doi.org/10.2307/3101118

Raji, I. D., Bender, E. M., Paullada, A., Denton, E., & Hanna, A. (2021). AI and the Everything in the Whole Wide World Benchmark. Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks. Conference on Neural Information Processing Systems. https://datasets-benchmarks-proceedings.neurips.cc/paper_files/paper/2021/file/084b6fbb10729ed4da8c3d3f5a3ae7c9-Paper-round2.pdf

Resnik, P. (2025). Large Language Models Are Biased Because They Are Large Language Models. Computational Linguistics, 51(3). https://doi.org/10.1162/coli_a_00558

Sambasivan, N., Arnesen, E., Hutchinson, B., Doshi, T., & Prabhakaran, V. (2021). Re-imagining Algorithmic Fairness in India and Beyond. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 315-328. https://doi.org/10.1145/3442188.3445896

Solaiman, I., Talat, Z., Agnew, W., Ahmad, L., Baker, D., Blodgett, S. L., Chen, C., Daumé III, H., Dodge, J., Duan, I., Evans, E., Friedrich, F., Ghosh, A., Gohar, U., Hooker, S., Jernite, Y., Kalluri, R., Lusoli, A., Leidinger, A., ... Subramonian, A. (2024). Evaluating the Social Impact of Generative AI Systems in Systems and Society (No. arXiv:2306.05949). arXiv. http://arxiv.org/abs/2306.05949

Winner, L. (1980). Do Artifacts Have Politics? Daedalus, 109(1). http://www.jstor.org/stable/20024652.

Further Reading:

Barocas, S., Hardt, M., & Narayanan, A. (2023). Fairness and machine learning: Limitations and opportunities. MIT Press. https://fairmlbook.org

Bender, E. M., & Friedman, B. (2018). Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science. Transactions of the Association for Computational Linguistics, 6, 587-604. https://doi.org/10.1162/tacl_a_00041

Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim code. Polity.

Birch, K. (2025). Do Artifacts Have Political Economy? Science, Technology, & Human Values, 01622439251352167. https://doi.org/10.1177/01622439251352167

Joerges, B. (1999). Do Politics Have Artefacts? Social Studies of Science, 29(3), 411-431. https://doi.org/10.1177/030631299029003004

Joerges, B. (1999). Scams Cannot Be Busted: Reply to Woolgar & Cooper. Social Studies of Science, 29(3), 450-457. https://doi.org/10.1177/030631299029003006

Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I. D., & Gebru, T. (2019). Model Cards for Model Reporting. Proceedings of the Conference on Fairness, Accountability, and Transparency, 220-229. https://doi.org/10.1145/3287560.3287596

Woolgar, S., & Cooper, G. (1999). Do Artefacts Have Ambivalence: Moses, Bridges, Winner's Bridges and other Urban Legends in S&TS. Social Studies of Science, 29(3), 433-449. https://doi.org/10.1177/030631299029003005

Xiang, A., & Raji, I. D. (2019). On the Legal Compatibility of Fairness Definitions (No. arXiv:1912.00761). arXiv. https://doi.org/10.48550/arXiv.1912.00761
Additional Information
Graduate Attributes and Skills Not entered
Keywords: Algorithmic Bias, Algorithmic Fairness, Algorithmic Justice, Fair Machine Learning
Contacts
Course organiser: Dr Zeerak Talat
Tel:
Email: ztalat@ed.ac.uk
Course secretary: Miss Yasmine Lewis
Tel:
Email: yasmine.lewis@ed.ac.uk