THE UNIVERSITY of EDINBURGH

DEGREE REGULATIONS & PROGRAMMES OF STUDY 2026/2027

Draft Edition - Due to be published Thursday 9th April 2026

Timetable information in the Course Catalogue may be subject to change.


Undergraduate Course: Critical Perspectives on Contemporary Artificial Intelligence (EFIE10009)

Course Outline
School: Edinburgh Futures Institute  College: College of Arts, Humanities and Social Sciences
Credit level (Normal year taken): SCQF Level 10 (Year 3 Undergraduate)  Availability: Available to all students
SCQF Credits: 20  ECTS Credits: 10
Summary: This course develops critical AI (artificial intelligence) literacy by examining how artificial intelligence systems are designed, evaluated, and deployed in society. Students will analyse real-world AI applications across a range of social domains, combining hands-on activities with discussion of ethical, legal, and environmental implications. The course is designed for students from all disciplinary backgrounds and does not require prior technical knowledge.
Course description: This course examines artificial intelligence (AI) as a tool in society. It focuses on how AI technologies are designed, evaluated, and deployed in real-world contexts. Students will develop an interdisciplinary framework for analysing AI systems, considering the role of data, theoretical assumptions, evaluation practices, and implementation to help understand claims about what AI can (and cannot) do. The course first asks 'what is artificial intelligence?' before exploring AI's application in different domains such as affective computing, healthcare and mental wellbeing, education, and creative industries, with attention to their ethical, legal, social, and environmental implications.

The course is delivered through interactive seminars that combine short lecture inputs, guided discussion of key readings, and applied activities. Students will engage in hands-on exercises, such as analysing and annotating an AI dataset, alongside critical discussion of contemporary case studies and policy debates. Learning activities are designed to support students in developing independent, evidence-based judgements about AI technologies, and the course is accessible to students from a wide range of disciplinary backgrounds, with no prior technical or programming experience required.
Entry Requirements (not applicable to Visiting Students)
Pre-requisites  Co-requisites
Prohibited Combinations  Other requirements: None
Information for Visiting Students
Pre-requisites: None
Course Delivery Information
Academic year 2026/27, Available to all students (SV1)  Quota: 30
Course Start: Semester 2
Timetable
Learning and Teaching activities (Further Info) Total Hours: 200 ( Supervised Practical/Workshop/Studio Hours 20, Programme Level Learning and Teaching Hours 4, Directed Learning and Independent Learning Hours 176 )
Assessment (Further Info) Written Exam 0 %, Coursework 100 %, Practical Exam 0 %
Additional Information (Assessment) The course will be assessed by means of the following components:

1) Dataset Annotation and Critical Reflection (30%)

Students will participate in a guided in-class activity involving the creation and annotation of a small dataset for an emotion recognition system (approximately 50-100 images). The dataset is submitted as evidence of engagement and is not assessed for technical quality. Students will submit a 500-750-word critical reflection analysing their experience of dataset creation and annotation, linking this to course concepts around data, theoretical assumptions, bias, and evaluation in AI systems.

Note: Students who wish to create their own images should be aware that such photos raise privacy concerns. Guidance on handling them will be provided, and appropriate data protection protocols will be followed. However, creating new photos is not required for this assessment, and students are encouraged to use openly available photos found online.

Learning Outcomes Assessed by Component: 2, 4 (with partial coverage of 3, 5)

2) 2,000 Word Analytical Essay (70%)

Students will submit a 2,000-word essay (no more, and ideally not much less) applying the course's analytical framework to an AI technology within a chosen social domain (either one discussed in class or one of the student's own choosing).

The essay will critically evaluate claims made about the technology, considering training data, underlying assumptions, evaluation practices, and real-world implementation, and will identify at least one benefit and one challenge associated with its use.

Learning Outcomes Assessed by Component: 1, 2, 3, 4, 5
Feedback: Feedback on any formative assessment may be provided in various formats, for example written, oral, video, face-to-face, whole-class, or individual. The Course Organiser will decide which format is most appropriate to the nature of the assessment.

Feedback on both formative and summative in-course assessed work will be provided in time to be of use in subsequent assessments within the course.

Feedback on the summative assessment(s) will be provided in written form via Learn, the University of Edinburgh's Virtual Learning Environment (VLE).

Formative Feedback Opportunity:

Formative feedback is ongoing feedback which monitors learning and is intended to improve performance in the same course, in future courses, and beyond study.

Seminars and Workshops:
- Informal tutor feedback will be integrated into seminar discussions and applied activities, supporting students in developing and applying the course's analytical framework.

Guided Paper Presentations/Discussions:
- Students will receive informal formative feedback on guided paper presentations, focusing on critical engagement with readings and clarity of communication.
No Exam Information
Learning Outcomes
On completion of this course, the student will be able to:
  1. Demonstrate an understanding of contemporary artificial intelligence systems including the distinctions between rule-based, data-driven, and large language model approaches.
  2. Critically analyse the role of data, theoretical assumptions, and evaluation practices in shaping claims about capabilities and limitations of AI systems.
  3. Apply an interdisciplinary analytical framework to evaluate potential benefits, risks, and limitations of AI applications in context across a range of social domains.
  4. Evaluate the ethical, social, legal, and environmental implications of AI systems drawing on relevant academic and policy debates.
  5. Communicate well-reasoned, evidence-based judgements about AI technologies reflecting on capability, uncertainty, limitations, and responsibility.
Reading List
This list is indicative - all readings currently on the list are available via the library.

Essential Reading:

Zuboff, S. (2019). The age of surveillance capitalism: the fight for the future at the new frontier of power. Profile Books (selected chapters)

Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, FAT 2018, 81, 77-91.

Stark, L. (2018). Algorithmic psychometrics and the scalable subject. Social Studies of Science, 48(2), 204-231. https://doi.org/10.1177/0306312718772094

Thejll-Madsen, T., Yongsatianchot, N., & Marsella, S. (Under Review). A Systematic Evaluation Framework on Emotion Perception for Large Language Models Using Appraisal Theory. IEEE Transactions on Affective Computing.

Elyoseph, Z., Hadar-Shoval, D., Asraf, K., & Lvovsky, M. (2023). ChatGPT outperforms humans in emotional awareness evaluations. Frontiers in Psychology, 14, Article 1199058. https://doi.org/10.3389/fpsyg.2023.1199058

Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 1-21. https://doi.org/10.1177/2053951716679679

Thejll-Madsen, T. (2025). Large Language Models and Emotion: Challenges in Evaluation and Application. Thesis Chapter.

Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., Lee, P., Lee, Y. T., Li, Y., Lundberg, S., Nori, H., Palangi, H., Ribeiro, M. T., & Zhang, Y. (2023). Sparks of Artificial General Intelligence: Early experiments with GPT-4. https://doi.org/10.48550/arxiv.2303.12712

Rubin, M., Li, J. Z., Zimmerman, F., Ong, D. C., Goldenberg, A., & Perry, A. (2025). Comparing the value of perceived human versus AI-generated empathy. Nature Human Behaviour, 9(11), 2345-2359. https://doi.org/10.1038/s41562-025-02247-w

Munin, S., Jurkiewicz, O., Gueorguieva, E. S., Oveis, C., & Ong, D. C. (2025). What can I say to help you? Language associated with successful extrinsic emotion regulation. Emotion. https://doi.org/10.1037/emo0001631

Placido, D., Yuan, B., Hjaltelin, J. X., Zheng, C., Haue, A. D., Chmura, P. J., Yuan, C., Kim, J., Umeton, R., Antell, G., Chowdhury, A., Franz, A., Brais, L., Andrews, E., Marks, D. S., Regev, A., Ayandeh, S., Brophy, M. T., Do, N. V., Sander, C. (2023). A deep learning algorithm to predict risk of pancreatic cancer from disease trajectories. Nature Medicine, 29(5), 1113-1122. https://doi.org/10.1038/s41591-023-02332-5

Bychkov, D., Linder, N., Turkki, R., Nordling, S., Kovanen, P. E., Verrill, C., Walliander, M., Lundin, M., Haglund, C., & Lundin, J. (2018). Deep learning based tissue analysis predicts outcome in colorectal cancer. Scientific Reports, 8(1), Article 3395. https://doi.org/10.1038/s41598-018-21758-3

Kirkpatrick, K. (2023). Can AI demonstrate creativity? Communications of the ACM, 66(2), 21-23.

Strowel, A. (2023). ChatGPT and Generative AI Tools: Theft of Intellectual Labor? IIC, 54, 491-494. https://doi.org/10.1007/s40319-023-01321-y

Creely, E., & Blannin, J. (2025). Creative partnerships with generative AI. Possibilities for education and beyond. Thinking Skills and Creativity, 56, Article 101727. https://doi.org/10.1016/j.tsc.2024.101727

Holmes, W., & Tuomi, I. (2022). State of the art and practice in AI in education. European Journal of Education, 57(4), 542-570. https://doi.org/10.1111/ejed.12533

Jose, B., Cherian, J., Verghis, A. M., Varghise, S. M., S, M., & Joseph, S. (2025). The cognitive paradox of AI in education: between enhancement and erosion. Frontiers in Psychology, 16, Article 1550621. https://doi.org/10.3389/fpsyg.2025.1550621

Recommended Reading:

Barrett, L. F., Adolphs, R., Marsella, S., Martinez, A. M., & Pollak, S. D. (2019). Emotional Expressions Reconsidered: Challenges to Inferring Emotion From Human Facial Movements. Psychological Science in the Public Interest, 20(1), 1-68. https://doi.org/10.1177/1529100619832930

Further Reading:

Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press. (Remaining chapters)

Zuboff, S. (2019). The age of surveillance capitalism: the fight for the future at the new frontier of power. Profile Books (remaining chapters)

D'Ignazio, C., & Klein, L. F. (2020). Data feminism. The MIT Press.
Additional Information
Graduate Attributes and Skills Not entered
Keywords: Artificial Intelligence, Interdisciplinary, Technomoral Futures, Affective Computing, AI
Contacts
Course organiser: Mr Tobias Thejll-Madsen
Email: Tobias.Thejll-Madsen@ed.ac.uk
Course secretary: Mr Matt Bryant
Email: Matt.Bryant@ed.ac.uk