Postgraduate Course: AI and the Ethics of Knowledge Production (EFIE11476)
Course Outline
| School | Edinburgh Futures Institute |
| College | College of Arts, Humanities and Social Sciences |
| Credit level (Normal year taken) | SCQF Level 11 (Postgraduate) |
| Availability | Available to all students |
| SCQF Credits | 20 |
| ECTS Credits | 10 |
| Summary | This course introduces students to how AI is fundamentally changing how we gain knowledge about the world. It examines core areas where AI is shaping knowledge production: science, social media, the use of large language models in education, and the explanation of automated AI decisions. Students will learn core concepts in the epistemology of AI and see how epistemic questions are also inherently ethical questions. The course takes a critical perspective, considering how we might concretely address core ethical problems surrounding how AI produces knowledge. |
| Course description |
Data-driven machine learning systems and AI-enabled platforms are encroaching on all of our knowledge-forming practices. Large language models are embedded in Google Search, email platforms, and more. More people are turning to ChatGPT with informational questions instead of to search engines. AI filters the content and news we see on social media platforms. At the same time, AI and machine learning methods are being deployed across all areas of scientific inquiry, from developing research and writing up results to scientific peer review. AI is even used to explain AI decisions to decision subjects.
In today's knowledge landscape it is essential to understand how AI is transforming the knowledge ecosystem as a whole, and to be able to critically assess its ethical implications and consider possible remedies. This course introduces students to issues in the epistemology of AI and shows how epistemic questions are also ethical questions. Students will encounter a wide range of domains in which AI is shaping knowledge production, including social media, science, the use of large language models in education, and the explanation of AI decisions themselves. The course draws on a multiplicity of perspectives from philosophy of science, informatics, political and legal philosophy, epistemology, and law.
Edinburgh Futures Institute (EFI) - Hybrid Course Delivery Information:
The Edinburgh Futures Institute delivers many of its courses in hybrid mode. This means that some online students may join sessions for this course. To enable this, the course will use technologies to record and live-stream student and staff participation during teaching and learning activities.
Students should be aware that:
- Classrooms used in this course will have additional technology in place: in some cases, students may not be able to sit outside the range of microphones or the field of view of cameras.
- All presentations and whole-class discussions will be recorded (see the Lecture Recording and Virtual Classroom policies for more details).
- You will need access to a personal computing device for this course. Most activities will take place in a web browser, unless otherwise stated. We recommend using a device with a screen, physical keyboard, and internet access.
|
Entry Requirements (not applicable to Visiting Students)
| Pre-requisites | |
| Co-requisites | |
| Prohibited Combinations | |
| Other requirements | None |
Information for Visiting Students
| Pre-requisites | None |
Course Delivery Information
| Academic year 2026/27, Available to all students (SV1) | Quota: 30 |
| Course Start | Semester 2 |
| Timetable | Timetable |
| Learning and Teaching activities (Further Info) | Total Hours: 200 (Lecture Hours 20, Programme Level Learning and Teaching Hours 4, Directed Learning and Independent Learning Hours 176) |
| Assessment (Further Info) | Written Exam 0%, Coursework 100%, Practical Exam 0% |
| Additional Information (Assessment) |
The course will be assessed by means of the following components:
1) Group Presentation and Reflection (30%)
On the final workshop day there will be group presentations. Groups will be formed during the first workshop day and will each be assigned one of the four key themes of the course. Students will choose a supplementary reading on their topic and present the ideas of this paper alongside two current events that touch on core concepts related to their topic. The presentation will be 20 minutes to the class, with an additional 10 minutes of class discussion facilitated by the group.
In addition to the presentation, students will individually write a reflection on their way of working on the presentation and the key themes learned (300 words). The reflection is due on the day of the presentation.
Learning Outcomes Assessed by Component: 1, 2, 3, 5
2) 2,500 Word 'White Paper' on Case Study / 500 Word Appendix (70%)
Each student will choose a case study/controversy dealing with the use of AI in producing knowledge from a prepared list. The student will conduct independent research on that case/controversy. Students will write an individual 2,500-word 'white paper' on their case study for a non-specialist audience, making a moral argument and advisory recommendation related to the case study.
In addition, students will be paired with another student (or two) who have chosen the same case study. Students will meet to discuss their answers to key questions in the case study and deliberate about areas of reasoned disagreement as well as consensus. Students must also include a 500-word appendix commenting on their discussion with their paired student(s), and on what different factual assumptions, perspectives, values, and moral reasons the conversation revealed.
Learning Outcomes Assessed by Component: 1, 2, 3, 4 |
| Feedback |
Feedback on any formative assessment may be provided in various formats, including written, oral, video, face-to-face, whole-class, or individual feedback. The Course Organiser will decide which format is most appropriate given the nature of the assessment.
Feedback on both formative and summative in-course assessed work will be provided in time to be of use in subsequent assessments within the course.
Feedback on the summative assessment(s) will be provided in written form via Learn, the University of Edinburgh's Virtual Learning Environment (VLE).
Formative Feedback Opportunity:
Formative feedback is ongoing feedback which monitors learning and is intended to improve performance in the same course, in future courses, and also beyond study.
Live feedback will be given on the group presentations. In addition, a written summary of further feedback will be provided one week after the presentations are given.
The Course Organiser will also be available to meet with groups to discuss work in progress and mediate any significant problems or disagreements within the group that cannot be resolved internally. |
| No Exam Information |
Learning Outcomes
On completion of this course, the student will be able to:
- Demonstrate a basic understanding of key concepts, theories, and applications surrounding AI and the ethics of knowledge production; for example, understanding the relationship between reliability and knowledge, and applying epistemic considerations to ethical perspectives.
- Critically discuss and evaluate a variety of normative perspectives in debates about epistemic and ethical issues in the design, deployment and regulation of AI used for producing knowledge, across multiple domains.
- Work constructively with others to identify salient ethical issues in a case study involving AI used for producing knowledge, form questions that allow deeper investigation, and articulate the relevant epistemic and moral interests of different groups and stakeholders that developers, regulators and purchasers of these systems must take into account.
- Produce and clearly communicate for non-specialists in a 'white paper' format a basic analysis and advisory output pertaining to a pressing challenge for the use of AI in structuring and producing knowledge.
- Orally communicate complex ideas clearly in a presentation to the class on how core concepts, themes, and theories apply to current events, raising new perspectives and analyses.
|
Reading List
Essential Reading:
Milano, S., McGrane, J. A., & Leonelli, S. (2023). Large language models challenge the future of higher education. Nature Machine Intelligence, 5(4), 333-334.
Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X. H., Beresnitzky, A. V., ... & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv preprint arXiv:2506.08872.
Weidinger, L., Mellor, J., Rauh, M., Griffin, C., Uesato, J., Huang, P. S., ... & Gabriel, I. (2021). Ethical and social risks of harm from language models. arXiv preprint arXiv:2112.04359.
Vredenburgh, K. (2022). The right to explanation. Journal of Political Philosophy, 30(2), 209-229.
Alfano, M., Carter, J. A., & Cheong, M. (2018). Technological seduction and self-radicalization. Journal of the American Philosophical Association, 4(3), 298-322.
Grote, T., Genin, K., & Sullivan, E. (2024). Reliability in machine learning. Philosophy Compass, 19(5), e12974.
Sullivan, E. (2022). Understanding from machine learning models. The British Journal for the Philosophy of Science.
Meske, C., Eisenhardt, D., Seselja, D., Straßer, C., & Schneider, J. (2024). Mitigating bias in academic publishing: Towards responsible (Gen)AI-augmentation in peer-review processes.
Del Vicario, M., Bessi, A., Zollo, F., Petroni, F., Scala, A., Caldarelli, G., ... & Quattrociocchi, W. (2016). The spreading of misinformation online. Proceedings of the National Academy of Sciences, 113(3), 554-559.
Recommended Reading:
Ungless, E. L., Vitsakis, N., Talat, Z., Garforth, J., Ross, B., Onken, A., ... & Birch, A. (2025, January). The only way is ethics: A guide to ethical research with large language models. In Proceedings of the 31st International Conference on Computational Linguistics (pp. 8992-9005).
Rini, R. (2020). Deepfakes and the epistemic backstop. Philosophers' Imprint, 20(24).
Porter, T. (2024). Manufacturing the illusion of epistemic trustworthiness. Episteme, 1-20.
Ballantyne, N. (2019). Epistemic trespassing. Mind, 128(510), 367-395.
Karlan, B., & Kugelberg, H. D. (2025). No right to an explanation. Philosophy and Phenomenological Research.
Grote, T., & Paulo, N. (2025). A minimalist account of the right to explanation. Philosophy & Technology, 38(2), 55.
Sullivan, E. (2026). Are explanations of AI decisions morally necessary? Contemporary Debates in the Ethics of Artificial Intelligence, 97-110.
Sullivan, E., & Kasirzadeh, A. (2025). Explanation hacking: The perils of algorithmic recourse. In Philosophy of Science for Machine Learning: Core Issues and New Perspectives (pp. 197-220). Cham: Springer Nature Switzerland.
Alfano, M., Fard, A. E., Carter, J. A., Clutton, P., & Klein, C. (2021). Technologically scaffolded atypical cognition: The case of YouTube's recommender system. Synthese, 199(1), 835-858.
Clark, E., & Khosrowi, D. (2022). Decentring the discoverer: How AI helps us rethink scientific discovery. Synthese, 200(6), 463.
Heesen, R., & Bright, L. K. (2021). Is peer review a good idea? The British Journal for the Philosophy of Science.
Messeri, L., & Crockett, M. J. (2024). Artificial intelligence and illusions of understanding in scientific research. Nature, 627(8002), 49-58. |
Additional Information
| Graduate Attributes and Skills | Not entered |
| Keywords | Ethics of AI, Philosophy, Epistemology, Philosophy of Science |
Contacts
| Course organiser | Dr Emily Sullivan | Email: esulliv2@ed.ac.uk |
| Course secretary | Miss Yasmine Lewis | Email: yasmine.lewis@ed.ac.uk |