Final exam details
Final Report Requirements
The goal of the final presentation is to continue working with the selected AI system (i.e., an AI product and/or an AI-based service) used in healthcare and to finish the evaluation process.
Scope
The report must be delivered using the same Google Slides presentation. The new part must be a minimum of 8-10 minutes long, including references (included in the notes of the slides), which corresponds to an estimated 4-5 slides. The final report should cover the following 5 points:
1. Identify any CLAIMS made by the producer of the AI system
According to the definition provided by Brundage et al. (2020, p.65), “Claims are assertions put forward for general acceptance. They’re typically statements about a property of the system or some subsystem. Claims asserted as true without justification are assumptions, and claims supporting an argument are subclaims.” Furthermore, “AI developers regularly make claims regarding the properties of AI systems they develop as well as their associated societal consequences. Claims related to AI development might include, e.g.:
– We will adhere to the data usage protocols we have specified;
– The cloud services on which our AI systems run are secure;
– We will evaluate risks and benefits of publishing AI systems in partnership with appropriately qualified third parties;
– We will not create or sell AI systems that are intended to cause harm;
– We will assess and report any harmful societal impacts of AI systems that we build; and
– Broadly, we will act in a way that aligns with society’s interests.” (Brundage et al., 2020, p.64)
2. Develop an evidence base
Following Brundage et al. (2020), this step consists of reviewing and creating an evidence base to verify/support any claims made by producers of the AI system and other relevant stakeholders:
“Evidence serves as the basis for justification of a claim. Sources of evidence can include the design, the development process, prior experience, testing, or formal analysis” (Brundage et al., 2020, p.65) and “Arguments link evidence to a claim, which can be deterministic, probabilistic, or qualitative. They consist of “statements indicating the general ways of arguing being applied in a particular case and implicitly relied on and whose trustworthiness is well established” [144], together with validation of any scientific laws used. In an engineering context, arguments should be explicit” (Brundage et al., 2020, p.65).
NOTE: You can, for example, use the “helping hand” from the Claims, Arguments, and Evidence (CAE) framework (Adelard LLP, 2020); a small illustrative sketch of a CAE-style record follows this list of points.
3. Map Ethical issues to Trustworthy AI Areas of Investigation
The basic idea of this step is to identify, from the list of ethical issues, which areas require inspection. Therefore, map the ethical issues to some or all of the seven requirements for trustworthy AI (the sketch after this list of points also illustrates such a mapping):
– Human agency and oversight,
– Technical robustness and safety,
– Privacy and data governance,
– Transparency,
– Diversity, non-discrimination and fairness,
– Societal and environmental wellbeing, and
– Accountability
(High-Level Expert Group on Artificial Intelligence, 2019, p.14)
4. Use the ALTAI web tool to answer the questions for the corresponding areas of trustworthy AI that you have mapped.
5. Critically evaluate whether the result of the ALTAI assessment is relevant for the use case you have chosen, and motivate your analysis.
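To make steps 2 and 3 more concrete, the following minimal Python sketch shows one possible way to record a claim together with its evidence and argument (in the spirit of the CAE framework) and to map ethical issues to trustworthy AI areas. All class names, claims, issues, and mappings are hypothetical illustrations, not part of the assignment or of the cited frameworks.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """One producer claim, CAE-style (hypothetical structure for illustration)."""
    text: str                                          # the assertion put forward for acceptance
    evidence: list[str] = field(default_factory=list)  # e.g. design docs, testing, formal analysis
    argument: str = ""                                 # how the evidence is linked to the claim

# Hypothetical example claim for an AI-based diagnostic service (step 2)
claim = Claim(
    text="The service's diagnostic accuracy matches that of specialist clinicians.",
    evidence=["published validation study", "results on an independent test set"],
    argument="Probabilistic: performance on an independent test set is argued "
             "to generalize to clinical use.",
)

# Step 3, sketched as a mapping from ethical issues to trustworthy AI areas (illustrative only)
issue_to_area = {
    "opaque model decisions": "Transparency",
    "possible bias in training data": "Diversity, non-discrimination and fairness",
    "handling of patient records": "Privacy and data governance",
}

for issue, area in issue_to_area.items():
    print(f"{issue} -> inspect under: {area}")
```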
Grading
Each team receives 0-5 points for the final report. To pass the course, you need to receive at least one point for both the mid-term and the final report, and have a total of at least 3 points. The points will then be translated into a grade (more points = better grade).
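As a quick illustration of the pass rule above, here is a minimal sketch (the function name and example point values are illustrative only):

```python
def passes_course(midterm_points: int, final_points: int) -> bool:
    """Pass rule as stated above: at least one point on each report, at least 3 in total."""
    return (midterm_points >= 1
            and final_points >= 1
            and midterm_points + final_points >= 3)

assert passes_course(1, 2)      # 1 + 2 = 3 points in total -> pass
assert not passes_course(0, 5)  # no mid-term point -> fail, despite 5 points in total
```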