Undergraduate Course: Ethics of Artificial Intelligence (PHIL10167)
|School||School of Philosophy, Psychology and Language Sciences
|College||College of Humanities and Social Science
|Credit level (Normal year taken)||SCQF Level 10 (Year 4 Undergraduate)
|Availability||Not available to visiting students
|Summary||Artificial intelligence (AI) is developing at an extremely rapid pace. We expect to see significant changes in our society as AI systems become embedded in various aspects of our lives. This course will cover philosophical issues raised by current and future AI systems. Questions we consider include:
- How do we align the aims of autonomous AI systems with our own?
- Does the future of AI pose an existential threat to humanity?
- How do we prevent learning algorithms from acquiring morally objectionable biases?
- Should autonomous AI be used to kill in warfare?
- How should AI systems be embedded in our social relations? Is it permissible to fall in love with an AI system?
- What sort of ethical rules should a self-driving car use?
- Can AI systems suffer moral harms? And if so, of what kinds?
- Can AI systems be moral agents? If so, how should we hold them accountable?
- Which ethical norms should we program into our AI, if any?
The aim of this course is to introduce students to a range of ethical issues that arise regarding current and future artificial intelligence (AI). The main questions we will consider are listed in the course summary. No previous familiarity with the literature on AI will be assumed.
The classes will be primarily discussion based, so students are expected to have done the reading in advance of class. During class, students will work in small teams to answer a question (approximately one per team) based on the reading for the week. They may be instructed to argue for a particular case (pro or contra), to assess the merits of a given view, or to look for counterexamples to a generalisation or fallacies in a specific argument. In the second part of the class, we will come together to discuss what each group has achieved and see how it helps us to answer our questions.
Topics covered in class:
- Robot rights
- AI existential threats
- Biases in learning algorithms
- Ethics of AI in warfare
- Ethics of AI in self-driving cars
- Moral harms to AI
- Falling in love with AI
- AI and the future of human jobs
Entry Requirements (not applicable to Visiting Students)
|Pre-requisites||Students MUST have passed:
Knowledge and Reality (PHIL08017) AND
Mind, Matter and Language (PHIL08014)
||Other requirements|| Students who have not taken Knowledge and Reality (PHIL08017) and Mind, Matter and Language (PHIL08014) must gain permission from the Course Organiser before enrolling on this course.
Students studying on MA Cognitive Science (Humanities) are permitted to take this course without having met the pre-requisites of Mind, Matter and Language and Knowledge and Reality. However, it is advisable that students discuss the suitability of the course with their PT and the course organiser before enrolling.
Course Delivery Information
|Academic year 2018/19, Not available to visiting students (SS1)
|Learning and Teaching activities (Further Info)
Seminar/Tutorial Hours 22,
Programme Level Learning and Teaching Hours 4,
Directed Learning and Independent Learning Hours
|Assessment (Further Info)
|Additional Information (Assessment)
- 10% participation grade
- 20% short writing assignment (500 words)
- 20% short writing assignment (500 words)
- 50% end-of-semester essay (2,000 words)
|No Exam Information
On completion of this course, the student will be able to:
- Demonstrate knowledge of philosophical issues involved in ethics of artificial intelligence
- Demonstrate familiarity with relevant examples of AI systems
- Demonstrate ability to bring philosophical considerations to bear in practical contexts
- Demonstrate ability to work in a small team
- Demonstrate skills in research, analysis and argumentation
|Reading list
- Anderson, M. and Anderson, S. L. (Eds.) (2011), Machine Ethics, Cambridge University Press
- Awret, U. (Ed.) (2016), The Singularity: Could artificial intelligence really out-think us (and would we want it to)?
- Bostrom, N. (2014), Superintelligence: Paths, Dangers, Strategies, Oxford University Press
- Lin, P. (Ed.) (2017), Robot Ethics 2.0, Oxford University Press
- Wallach, W. and Allen, C. (2008), Moral Machines, Oxford University Press
|Graduate Attributes and Skills
|Course organiser||Dr Mark Sprevak
|Course secretary||Miss Ann-Marie Cowe
Tel: (0131 6)50 3961