Topic outline

  • Welcome to the course

    Welcome to the home page of the course. This course focuses on learning a process for evaluating AI systems, with emphasis on the Z-Inspection process. Through live and video lectures, we will present each step in this process as well as the EU guidelines that form its backbone. We will post readings each week and expect all students to share their thoughts on those readings via the course's Discord channel. Your course grade will be based on a midterm and a final project, each of which is a group presentation.

    The midterm will be presented during the live lecture on November 6th, 2023. Please reach out if you cannot attend this lecture. We expect teams of 3-4 people to be formed during the week of October 16th; each team will work together on both the midterm and final presentation projects.

    See below for each week's videos and reading assignments.

    Additional Material and Communication Channels

    • Discord channel for Z-Inspection
  • Topic 1

    Lecture 1: Overview of Z-Inspection
    Week 1: September 25th, 2023 [LIVE session - All]
    Slides: https://z-inspection.org/wp-content/uploads/2023/07/SNU-Z-Inspection.Overview2023.pdf
    Assigned articles: None

    Lecture 2: EU Ethics Guidelines for Trustworthy AI
    Week 2: October 2nd, 2023
    Watch:
    Video 1 (start at time mark 36:30): https://www.youtube.com/watch?v=uRbD4lB1e8M
    Video 2: https://www.youtube.com/watch?v=esAIu_FogDM&authuser=0
    Assigned articles: https://www.aepd.es/sites/default/files/2019-12/ai-ethics-guidelines.pdf
    • Topic 3

      Lecture 3: Ethics, Moral Values, Humankind, Technology, AI Examples
      Week 3: October 9th, 2023
      Watch:
      Video 1: Prof. Rafael A. Calvo https://www.youtube.com/watch?v=o3-VrXNPHOc
      Video 2: Dr. Emmanuel Goffi https://www.youtube.com/watch?v=5fS1ihmjcMw
      Assigned articles:
      Paper 1: Leikas et al. (2019) – Ethical Framework for Designing Autonomous Intelligent Systems
      https://www.mdpi.com/2199-8531/5/1/18
      Paper 2: Rajkomar et al. (2018) – Ensuring Fairness in Machine Learning to Advance Health Equity
      https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6594166/
      • Topic 4

        Lecture 4: Z-Inspection, team formation, scenario building
        Watch:
        Prof. Roberto Zicari, Z-Inspection® Overview and Socio-Technical Scenarios: https://youtu.be/Z1dANbEHoPc?si=zGbOvO0mw0QQXDHv&t=128
        Prof. Roberto Zicari, Claims, Arguments and Evidence; Ethical Tensions: https://www.youtube.com/watch?v=w0-NLazwn_I
        Assigned articles:
        Z-Inspection®: A Process to Assess Trustworthy AI. https://ieeexplore.ieee.org/document/9380498
        Lessons Learned from Assessing Trustworthy AI in Practice. https://link.springer.com/article/10.1007/s44206-023-00063-1
        • Topic 5

          Week 5: Technical assessment (live lecture)
        • Topic 6

          The Ethics of Artificial Intelligence (AI): On the ethics of algorithmic decision-making in healthcare


          Readings:
          Paper 1: Grote & Berens (2019) – On the ethics of algorithmic decision-making in healthcare
          https://jme.bmj.com/content/medethics/early/2019/11/20/medethics-2019-105586.full.pdf

          Paper 2: Holm (2023) - Algorithmic legitimacy in clinical decision‑making
          https://link.springer.com/article/10.1007/s10676-023-09709-7

        • Midterm information

          Midterm details

          Mid-Term Report Requirements
          The goal of the mid-term report is to select an AI system (i.e., an AI product and/or an AI-based service) used in healthcare and to begin the evaluation process.
          In teams of three students:

          Choose a data-driven product/solution in healthcare and assess its trustworthiness based on the EU Ethics Guidelines for Trustworthy AI, adapted to the healthcare domain, following the Z-Inspection process.
          For example: the use of AI/machine-learning approaches and technologies to optimise the management of emergencies (e.g., tracking devices, or predictions of the number of contagions during COVID-19).
          The report contributes to your grade and every team member will receive the same grade.

          Scope
          The report must be delivered as a Google Slides presentation (create one presentation per team).
          The presentation must be at least 8 minutes long, including references (placed in the notes of the slides).
          The mid-term report should cover the following:
                Define and agree upon the boundaries and context of the assessment.
                Analyze socio-technical scenarios.
                Identify ethical issues and tensions.
          In particular, the report should relate to the topics covered by the lecture recordings and recommended papers up until the due date, as well as the ALTAI assessment list published by the EU and the ALTAI web tool. The questions that need to be answered in the mid-term report are:
                Analyze socio-technical scenarios
          By collecting relevant resources, socio-technical scenarios should be created and analyzed by the team of students:
                Describe the aim of the AI system.
                Who are the actors?
                What are the actors' expectations?
                How do the actors interact with each other and with the AI system?
                In which processes is the AI system used?
                What AI technology is used?
                In what context is the AI used?
                What legal and contractual obligations are related to the use of AI in this context?
                Anything else you consider relevant.

          Identify Ethical Issues and Tensions
          We use the term "tension" as defined in [Whittlestone et al. 2019]: "tensions between the pursuit of different values in technological applications rather than an abstract tension between the values themselves."
          Use the following catalog of examples of ethical tensions:
          Accuracy vs. Fairness
          Accuracy vs. Explainability
          Privacy vs. Transparency
          Quality of services vs. Privacy
          Personalisation vs. Solidarity
          Convenience vs. Dignity
          Efficiency vs. Safety and Sustainability
          Satisfaction of Preferences vs. Equality
          Classify ethical tensions
          according to the three types of dilemma defined in [Whittlestone et al. 2019]:
          – true dilemma, i.e. "a conflict between two or more duties, obligations, or values, both of which an agent would ordinarily have reason to pursue but cannot";
          – dilemma in practice, i.e. "the tension exists not inherently, but due to current technological capabilities and constraints, including the time and resources available for finding a solution";
          – false dilemma, i.e. "situations where there exists a third set of options beyond having to choose between two important values".
          Remarks
          NEVER EVER COPY AND PASTE text from the internet or other sources.
          You have two options:
              1. You can describe what the source is saying in your own words while citing the source.
          e.g., As indicated in [Roig and Vetter 2020], the moon is flat. Reference: [Roig and Vetter 2020] Why the Moon is Flat. Roig, Gemma; Vetter, Dennis. Journal of Dreams, Issue No. 1, November 2020.
              2. You can quote what the source is saying using their words in quotation marks.
          e.g., [Roig and Vetter 2020] has made the case that "the moon is completely flat and not round". Reference: [Roig and Vetter 2020] Why the Moon is Flat. Roig, Gemma; Vetter, Dennis. Journal of Dreams, Issue No. 1, November 2020.
          Make sure to READ the sources you use so that you understand their context! In the example above, if you quote that "the moon is flat" without understanding that this was a dream and not scientific evidence, you are using the quote in a WRONG way!
          Grading
          Each team receives 0-5 points for the mid-term report and again for the final report. To pass the course, you need to receive at least one point for both the mid-term and the final report and have a total of at least 3 points.
          The points will then be translated into a grade (more points = better grade).
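          The pass rule above can be expressed as a short check. This is an illustrative sketch only, not an official grading tool; the function name is made up for the example.

```python
def passes_course(midterm_points: int, final_points: int) -> bool:
    """Pass rule from the syllabus: each report is graded 0-5 points;
    passing requires at least 1 point on each report and at least
    3 points in total. Illustrative sketch only."""
    total = midterm_points + final_points
    return midterm_points >= 1 and final_points >= 1 and total >= 3

# Examples:
# passes_course(1, 2) -> True   (both reports >= 1, total 3)
# passes_course(0, 5) -> False  (midterm below the 1-point minimum)
# passes_course(1, 1) -> False  (total only 2 points)
```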
          References
          [Whittlestone et al. 2019] Whittlestone, J., Nyrup, R., Alexandrova, A., Dihal, K., & Cave, S. (2019). Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research. London: Nuffield Foundation.
          • Topic 10

            AI Privacy, Responsibility, Accountability, Safety and Human-in-the-Loop + Trustworthy AI: A Human-Centered Perspective
            Week 9: November 20th, 2023
            Watch:
            https://www.youtube.com/watch?v=rIk8ojfFDbY
            https://www.youtube.com/watch?v=yG_poIyRwT8
            Assigned articles:
            Paper 1: Gebru et al. (2020) – Datasheets for Datasets https://arxiv.org/pdf/1803.09010v7.pdf
            Paper 2: Peters et al. (2020) – Responsible AI – Two Frameworks for Ethical Design Practice http://lcfi.ac.uk/media/uploads/files/PetersVoldetal_responsAI_IEEE.pdf
            • Final information

              Final exam details
              Final Report Requirements
              The goal of the final presentation is to continue working on the selected AI system (i.e., an AI product and/or an AI-based service) used in healthcare and to finish the evaluation process.
              Scope
              The report must be delivered using the same Google Slides presentation. The new part must be a minimum of 8-10 minutes long, including references (placed in the notes of the slides); estimated 4-5 slides. The FINAL report should cover the following 5 points:

              1. Identify any CLAIMS made by the producer of the AI system
              According to the definition provided by Brundage et al. (2020, p. 65), "Claims are assertions put forward for general acceptance. They're typically statements about a property of the system or some subsystem. Claims asserted as true without justification are assumptions, and claims supporting an argument are subclaims." Furthermore: "AI developers regularly make claims regarding the properties of AI systems they develop as well as their associated societal consequences. Claims related to AI development might include, e.g.:
              We will adhere to the data usage protocols we have specified;
              The cloud services on which our AI systems run are secure;
              We will evaluate risks and benefits of publishing AI systems in partnership with appropriately qualified third parties;
              We will not create or sell AI systems that are intended to cause harm;
              We will assess and report any harmful societal impacts of AI systems that we build; and
              Broadly, we will act in a way that aligns with society's interests." (Brundage et al., 2020, p. 64)
              2. Develop an evidence base
              Following Brundage et al. (2020), this step consists of reviewing and creating an evidence base to verify/support any claims made by producers of the AI system and other relevant stakeholders:
              "Evidence serves as the basis for justification of a claim. Sources of evidence can include the design, the development process, prior experience, testing, or formal analysis" (Brundage et al., 2020, p. 65), and "Arguments link evidence to a claim, which can be deterministic, probabilistic, or qualitative. They consist of "statements indicating the general ways of arguing being applied in a particular case and implicitly relied on and whose trustworthiness is well established" [144], together with validation of any scientific laws used. In an engineering context, arguments should be explicit" (Brundage et al., 2020, p. 65).
              NOTE: You can, for example, use the "helping hand" from the Claims, Arguments, and Evidence (CAE) framework (Adelard LLP, 2020).
              3. Map Ethical issues to Trustworthy AI Areas of Investigation
              The basic idea of this step is to identify, from the list of ethical issues, which areas require inspection. Therefore, map the ethical issues to some or all of the seven requirements for trustworthy AI:
              – Human agency and oversight,
              – Technical robustness and safety,
              – Privacy and data governance,
              – Transparency,
              – Diversity, non-discrimination and fairness,
              – Societal and environmental wellbeing,
              – Accountability
              (High-Level Expert Group on Artificial Intelligence, 2019, p.14)
              4. Use the ALTAI web tool to answer the questions for the areas of trustworthy AI that you have mapped.
              5. Critically evaluate whether the result of the ALTAI assessment is relevant for the use case you have chosen. Motivate your analysis.
              Grading
              Each team receives 0-5 points for the final report. To pass the course, you need to receive at least one point for both the mid-term and the final report and have a total of at least 3 points. The points will then be translated into a grade (more points = better grade).