## Introduction
While data accessibility and machine learning continue to progress, AI-driven clinical solutions may carry shortcomings that raise safety issues for healthcare service consumers. The utilisation of AI and dependence on AI-driven recommendations in healthcare differ significantly from other domains, primarily because of the heightened risks involved and the gravity of what is at stake for patients' well-being.
AI algorithm errors can result in various consequences, such as:
1. False negatives, where life-threatening diseases go undetected and diagnoses are missed;
2. False positives, where healthy individuals are incorrectly classified as diseased by the AI algorithm, resulting in unnecessary treatments (see the sketch after this list);
3. Inaccurate diagnoses, potentially causing inappropriate interventions or incorrect prioritisation of interventions in emergency departments.
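To make the first two error types concrete, here is a minimal sketch, assuming a binary diagnostic classifier (1 = diseased, 0 = healthy); the function name and example data are hypothetical, for illustration only.

```python
# Illustrative sketch only: counts the error types above for a
# hypothetical binary diagnostic classifier (1 = diseased, 0 = healthy).
# The labels and predictions are made-up example data.

def error_rates(y_true: list[int], y_pred: list[int]) -> dict[str, float]:
    """Return false-negative and false-positive rates."""
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    positives = sum(y_true)              # truly diseased patients
    negatives = len(y_true) - positives  # truly healthy patients
    return {
        "false_negative_rate": fn / positives if positives else 0.0,  # missed diagnoses
        "false_positive_rate": fp / negatives if negatives else 0.0,  # unnecessary treatments
    }

# Hypothetical example: 10 patients, ground truth vs. AI prediction.
y_true = [1, 1, 1, 0, 0, 0, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 0, 0, 1, 0, 0]
print(error_rates(y_true, y_pred))
# {'false_negative_rate': 0.25, 'false_positive_rate': 0.1666...}
```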
**Given that AI in healthcare is constantly evolving, there may be legal gaps in the regulations discussed below. This also applies to accountability in medical AI. The information on this page is therefore subject to change and will be updated regularly.**
Some AI-caused harms may fall into gaps between product liability for manufacturers and medical liability for healthcare providers. Patients injured by autonomous black-box AI, which makes unsupervised medical decisions, may have no clear legal recourse.
## Algorithmic accountability
While the term '**algorithmic accountability**' may suggest the responsibility lies with the algorithm itself, it actually underscores the reality that algorithms are the result of a blend of [[Machine learning|machine learning]] and human design. The errors or ethical lapses in algorithms originate from the humans involved in their development, implementation, and utilisation. This is particularly significant since AI systems themselves cannot be held morally or legally accountable.
Securing accountability is of central importance in medical AI: it is a cornerstone for enhancing acceptance, reliability, and the potential for broad adoption in both society and the healthcare sector.
A crucial method to enhance the accountability of AI tools in healthcare involves conducting regular audits and risk assessments. These assessments can gauge the level of regulatory oversight required for a specific AI tool. Audits, focusing on attributes such as fairness, transparency, accuracy, and safety, serve as mechanisms to subject AI decision-making processes to the same standards as human processes.
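As a hedged illustration of what such an audit might check, the sketch below compares accuracy and false-negative rates across patient subgroups and flags large gaps; the subgroup labels, example records, and the gap threshold are hypothetical assumptions, not regulatory requirements.

```python
# Illustrative audit sketch: compares per-subgroup accuracy and
# false-negative rates for a hypothetical diagnostic model.
# Subgroup names, data, and the max_gap threshold are assumptions
# for illustration, not regulatory requirements.
from collections import defaultdict

def subgroup_audit(records, max_gap=0.05):
    """records: iterable of (subgroup, y_true, y_pred) tuples.

    Returns per-subgroup metrics and whether the false-negative-rate
    gap between subgroups exceeds max_gap.
    """
    stats = defaultdict(lambda: {"n": 0, "correct": 0, "pos": 0, "fn": 0})
    for group, t, p in records:
        s = stats[group]
        s["n"] += 1
        s["correct"] += int(t == p)
        s["pos"] += int(t == 1)
        s["fn"] += int(t == 1 and p == 0)
    report = {
        g: {
            "accuracy": s["correct"] / s["n"],
            "fnr": s["fn"] / s["pos"] if s["pos"] else 0.0,
        }
        for g, s in stats.items()
    }
    fnrs = [m["fnr"] for m in report.values()]
    flagged = (max(fnrs) - min(fnrs)) > max_gap
    return report, flagged

# Hypothetical example records: (subgroup, ground truth, AI prediction).
records = [("A", 1, 1), ("A", 1, 1), ("A", 0, 0),
           ("B", 1, 0), ("B", 1, 1), ("B", 0, 0)]
report, flagged = subgroup_audit(records)
print(report)   # per-subgroup accuracy and false-negative rate
print(flagged)  # True: group B misses far more diseased cases than group A
```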
## Relevant legislation
### [[GDPR]]
The GDPR adopts a dual approach to algorithmic accountability, addressing it through the lens of individual rights on one side and systemic regulatory frameworks on the other. Notably, the GDPR makes transparency a central tenet of data processing and connects it to the concept of lawfulness (Art. 5(1)(a) GDPR), both of which are integral components of the overarching principle of accountability (Art. 5(2) GDPR).
### [[AI Act]]
In its current form, the AI Act does not address accountability for damage caused by AI. It is, however, known that the European Artificial Intelligence Office ('AI Office') will be established and will provide guidance on accountability. More will become clear then.
So far, the tendency seems to be that companies bear responsibility for the algorithms they choose to use and for the effects that result from them. This highlights the importance of creating and using ethical AI.
### [[(Revision) Product Liability Directive]]
The [[(Revision) Product Liability Directive]] (PLD) sets out the conditions under which consumers can claim compensation for damage caused by defective products. Article 4 of the proposed Product Liability Directive states that software is a product, which implies that AI systems are also products.
Article 6 of the proposal lays down the circumstances to be taken into account when assessing whether a product is defective.
### [[AI Liability Directive]]
The proposal for the AI Liability Directive seeks to "align private law with the requirements of the digital economy's transformation."
As it is currently formulated, the AI Liability Directive primarily aims to accomplish three key objectives:
1. **Reducing Legal Uncertainty**: The directive intends to mitigate legal ambiguity concerning liability claims and damages arising from AI-related incidents.
2. **Ensuring Redress for Victims**: It seeks to guarantee that individuals affected by AI-related damages have the means to pursue effective remedies and seek compensation.
3. **Harmonisation and Modernisation**: The directive aims to standardise specific regulations across EU Member States and bring national liability rules in line with contemporary AI-related challenges and advancements.
### [[Data Act]]
The public sector is accountable for the data shared under the [[Data Act]]. To prevent data misuse and uphold public sector accountability, data requests must be proportionate, clearly state their purpose, and consider the interests of the data-sharing firm.
### [[MDR]] and [[IVDR]]
Numerous AI-specific facets often go unaddressed in the [[MDR]] and [[IVDR]], including the continuous learning of AI models and the detection of algorithmic biases. Specifically, the dynamic nature of AI, characterised by its continual learning and adaptation as more data becomes available, necessitates novel approaches for monitoring the risks associated with AI software.
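As one hedged sketch of what such a novel monitoring approach could look like in practice, the example below tracks a model's rolling accuracy on newly labelled cases and flags drift when it falls below a baseline; the class name, window size, and tolerance are hypothetical assumptions, not requirements of the MDR or IVDR.

```python
# Illustrative sketch of post-market performance monitoring for a
# continually learning model: track rolling accuracy on newly
# labelled cases and flag drift when it falls below a baseline.
# The window size and tolerance are hypothetical assumptions.
from collections import deque

class DriftMonitor:
    def __init__(self, baseline_accuracy: float, window: int = 200,
                 tolerance: float = 0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct prediction, 0 = error

    def record(self, y_true: int, y_pred: int) -> None:
        """Log one newly labelled case as it arrives."""
        self.outcomes.append(int(y_true == y_pred))

    def drift_detected(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough recent cases yet
        rolling = sum(self.outcomes) / len(self.outcomes)
        return rolling < self.baseline - self.tolerance

# Usage: feed in each newly labelled case; escalate if drift is flagged.
monitor = DriftMonitor(baseline_accuracy=0.90)
monitor.record(y_true=1, y_pred=0)
if monitor.drift_detected():
    print("Performance drift detected: trigger review and re-validation")
```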