AI and ethics

9 important questions on AI and ethics

What are the well-known AI ethics guidelines? (2) And what are their main pillars? (4)

  • EU = Ethics guidelines for trustworthy AI
  • WHO = Ethics and governance of AI for health

  1. Respect for human autonomy
  2. Prevention of harm => use with good intentions; social and environmental wellbeing
  3. Fairness => use real data, be diverse
  4. Explicability = transparency

According to the EU guidelines, trustworthy AI should be:

(1) lawful - respecting all applicable laws and regulations
(2) ethical - respecting ethical principles and values
(3) robust - both from a technical and a social perspective, taking its social environment into account



Why can these principles not guarantee ethical AI? (2)

  • Principles don’t give solutions for practical moral dilemmas
  • AI developers have a different relationship with the patient than doctors do (informed consent, shared decision making, etc.). For doctors there are ethical documents that go way back; this is not the case for AI developers, so they start from a different point.

One of the pillars is explicability. What is the problem with this?

Black-box problem: AI uses deep learning, and humans often cannot follow the process between the input and the output.

You can do a lot of AI research using the electronic patient record system. What are the difficulties?

• Incomplete data
• Inconsistent data entry and coding practices, incl. errors
• Lack of standardization across systems
• Data fragmentation (patient data spread across systems)
• Poor data quality in free-text fields in EHR systems

Why do we need to be skeptical about using the EPD (electronic patient record)?

  • You don’t know who is going to profit from it; maybe not your specific patient population
  • Models base their norms and practices on a specific context => will the model be inclusive for a broad patient population?

The PROFIT study wants to update the indications for getting an ICD implant, using biomarkers to build a prediction model. It uses two kinds of trials. Which ones? Is this ethical?

  • Non-inferiority clinical trial => includes patients with a low risk, to test whether the new treatment is not worse than the standard treatment.
  • Superiority clinical trial => includes patients with a high risk, to see whether the new treatment is superior to the standard treatment.
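The two trial designs above differ mainly in where the confidence interval for the treatment difference must lie. A minimal sketch in Python, using hypothetical event counts and a simple Wald confidence interval (the margin, the counts, and the function name are illustrative assumptions, not figures from the PROFIT study):

```python
from math import sqrt

def two_prop_ci(success_a, n_a, success_b, n_b, z=1.96):
    """95% CI for the risk difference p_a - p_b (Wald approximation)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    diff = p_a - p_b
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return diff - z * se, diff + z * se

# Hypothetical adverse-event counts: new treatment vs. standard treatment.
lo, hi = two_prop_ci(12, 200, 10, 200)

# Non-inferiority: the new treatment is "not worse" if the upper bound of
# the CI stays below a pre-chosen margin (here 0.05 = 5 percentage points).
margin = 0.05
non_inferior = hi < margin

# Superiority: the whole CI must lie below 0 (strictly fewer events).
superior = hi < 0

print(f"CI: ({lo:.3f}, {hi:.3f}), non-inferior: {non_inferior}, superior: {superior}")
```

With these made-up numbers the upper bound (about 0.055) exceeds the 0.05 margin, so the new treatment would pass neither test; the verdict depends heavily on the chosen margin, which is one reason non-inferiority claims deserve scrutiny.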


Not really ethical:

• Disregard for Patient Interests (incl. patient enrollment)
• Misleading Results: When ‘Good Enough’ Isn’t Actually (more) Effective
• Market-Driven Motives
• Limited Clinical Insight

Are there ways to effectively integrate patient preferences and values into (AI-driven) personalized medicine initiatives?

Patients think AI is important because it lets the facts speak for themselves, since people have limitations => but there needs to be a personal context.

  1. A physician should be available to assess the AI application => is the application reliable and effective? (Example: you have a headache => the tool says go home. The doctor needs to check whether the tool is right.)
  2. A physician remains a sparring partner => taking your personal situation into account. It is not just pleasant conversation ('gezellig'); it really has a function.

What are the different scenarios/levels for integrating AI into healthcare?

1. No AI
2. Prediction model
3. Prediction and recommendation 
4. AI decision

What are ethical concerns in AI and health?

  • The guidelines are made top-down => the patient is only included in the last step = too LATE
  • The guidelines do not really deal with practical issues. We need everyday-practice ethics.
  • We need normative research into whether society still thinks that 'all patients need a human doctor'.
