Ensuring that ethical principles are followed in interactions with patients and in the development and use of AI applications is essential for promoting trust, safety, and fairness. Several organizations and entities have contributed to developing ethical guidelines for AI applications, particularly in healthcare contexts:
- World Health Organization (WHO): WHO has been actively involved in developing guidelines and frameworks for the ethical use of AI in healthcare. They emphasize principles such as transparency, accountability, and equity in AI applications.
- Indian Council of Medical Research (ICMR): As the premier organization in India responsible for the formulation, coordination, and promotion of biomedical research, ICMR published ethical guidelines for the application of AI in biomedical research and healthcare in 2023 (see further reading).
- NITI Aayog (National Institution for Transforming India): NITI Aayog, the policy think tank of the Government of India, has formulated the National Strategy for Artificial Intelligence, which addresses the responsible use of AI across sectors, including healthcare (see further reading).
- IEEE Standards Association: The IEEE (Institute of Electrical and Electronics Engineers) has developed standards and guidelines for ethical AI design and deployment. Their efforts include the creation of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.
- European Commission: The European Commission has issued guidelines and recommendations for the ethical development and deployment of AI, including specific guidance for healthcare applications through initiatives like the European AI Alliance.
- American Medical Association (AMA): The AMA has issued guidelines and policies on the ethical use of AI in healthcare, focusing on issues such as patient privacy, transparency, and physician responsibility.
- The AI Ethics Lab: This organization focuses on AI ethics research and consulting, providing guidance and resources for organizations developing AI applications in healthcare and other domains.
- AI for Good Global Summit: This annual event, organized by the International Telecommunication Union (ITU) in partnership with various UN agencies, focuses on leveraging AI for societal benefit while ensuring ethical considerations are addressed.
These organizations, among others, have contributed to the development of ethical frameworks and guidelines for AI applications in healthcare, ensuring that patient well-being, privacy, and dignity are prioritized in the adoption of AI technologies.
The following basic principles of medical ethics also serve as guidelines for AI applications:
- Autonomy– Respect for the patient’s right to self-determination. This means that patients have the right to make their own decisions about their care, even if the doctor disagrees. Doctors must provide patients with all the information they need to make informed decisions, including the risks and benefits of different treatment options.
- Beneficence– The duty to act in the best interests of the patient. This means that doctors should always strive to provide care that will benefit the patient and avoid causing harm. Doctors should consider all of the potential risks and benefits of treatment before making a recommendation.
- Non-maleficence– The duty to do no harm. This means that doctors should avoid causing harm to patients, both physically and emotionally. Doctors should always weigh the risks of treatment against the potential benefits.
- Justice– The principle of fairness and equity. This means that all patients deserve to be treated fairly, regardless of their race, ethnicity, gender, sexual orientation, or socioeconomic status. Doctors should allocate resources fairly and ensure that all patients have access to quality care.
MUHS (Maharashtra University of Health Sciences) collaborated with the UNESCO Chair in Bioethics program at the University of Haifa, Israel. As Vice-Chancellor, I oversaw the establishment of a UNESCO Bioethics Unit at MUHS in 2015. The UNESCO Chair in Bioethics offered the “Train the Trainers – Integrated Bioethics in Health Sciences” (3T-IHSc) program, which aimed to train faculty members to integrate bioethics education into their institutions’ curricula. MUHS faculty who completed the program acquired the skills to run the Bioethics Unit and to integrate bioethics into the curriculum for medical students. This collaboration created a vertically integrated curriculum of ethical management, and the program was subsequently extended to all medical universities in India.
This is a summary of extensive work; references are given under further reading.
Ethical guidelines for AI are crucial to ensure that artificial intelligence systems are developed, deployed, and used responsibly and beneficially. Key principles include:
- Transparency: AI systems should be transparent, meaning their operations, decision-making processes, and data usage should be understandable to stakeholders.
- Accountability: Developers and deployers of AI systems should be accountable for their creations. This involves being able to explain the reasoning behind AI decisions and being held responsible for any negative consequences.
- Fairness: AI systems should be designed and implemented to avoid bias and discrimination. They should treat all individuals fairly and equally, regardless of characteristics such as race, gender, or socioeconomic status.
- Privacy: AI systems should respect user privacy and data protection laws. They should collect, store, and process data in a manner that ensures confidentiality and respects individuals’ rights.
- Security: AI systems should be secure against unauthorized access, tampering, or misuse. Security measures should be implemented to protect both the AI system itself and the data it handles.
- Safety: AI systems should be designed with safety in mind to minimize the risk of harm to users, stakeholders, and society as a whole. This includes both physical safety and the prevention of harmful outcomes such as misinformation or manipulation.
- Human Oversight: Humans should retain control over AI systems and decisions, especially in critical areas where the stakes are high. AI should augment human capabilities rather than replace human judgment entirely.
- Societal Benefit: AI development and deployment should prioritize the welfare and benefit of society as a whole. Ethical considerations should guide decision-making to ensure that AI contributes positively to societal goals.
- Continuous Monitoring and Evaluation: There should be mechanisms in place to continuously monitor and evaluate the ethical implications and societal impact of AI systems throughout their lifecycle. This includes ongoing assessment of fairness, bias, safety, and other relevant factors.
- Global Collaboration: Given the global nature of AI development and deployment, collaboration among stakeholders across borders is essential to establish common ethical standards and frameworks.
The World Health Organization (WHO) emphasizes caution in utilizing artificial intelligence (AI)-generated large language model tools (LLMs) to safeguard human well-being, safety, autonomy, and public health. LLMs, including platforms like ChatGPT, Bard, and BERT, simulate human understanding and communication. Their rapid growth and experimental use for health-related purposes have raised excitement about supporting people’s health needs. However, careful examination of the risks is crucial when using LLMs to enhance access to health information, decision support, or diagnostic capacity in under-resourced settings. The key concerns are discussed below.
Data Privacy and Security: AI relies on large volumes of data, raising concerns about its collection, storage, and use; breaches could expose sensitive information. There are two main points to this debate:
- Companies might be collecting too much personal data and not giving users control over it. This could lead to a loss of trust. Companies should be more transparent about how they use data and how it benefits users.
- Companies with more data have an advantage because they can build better AI models. This might not necessarily hurt consumers, but it’s something to consider.
Here’s a summary of the different approaches to dealing with privacy issues in AI:
Legal Frameworks:
- Establish a strong national data protection law with principles like user consent and data minimization.
- Consider additional regulations for specific sectors like self-driving cars.
- Keep data protection laws updated to address new risks.
International Standards:
- Adhere to international standards for ethical AI development, especially regarding personal data and privacy.
- Encourage self-regulation by developers using tools like Data Privacy Impact Assessments.
Privacy-Preserving Technologies:
- Invest in research on anonymizing data and minimizing information leakage.
- Collaborate on technologies like Differential Privacy and Multi-Party Computation; a minimal sketch of the former follows this list.
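To make one of these techniques concrete, here is a minimal Python sketch of the Laplace mechanism, the building block of Differential Privacy: calibrated noise is added to a count query so that the result reveals almost nothing about any single patient’s record. The dataset, query, and epsilon value are hypothetical illustrations, not drawn from any of the guidelines above.

```python
import numpy as np

def dp_count(values, predicate, epsilon=1.0):
    """Differentially private count of records matching `predicate`.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon yields epsilon-differential privacy.
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical example: count patients over 65 without exposing any record.
ages = [34, 71, 52, 68, 80, 45, 66]
print(dp_count(ages, lambda a: a > 65, epsilon=0.5))
```

Smaller epsilon values add more noise and give stronger privacy; choosing epsilon is a policy decision as much as a technical one.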
Public Awareness:
- Educate citizens about their privacy rights and how to protect them in the digital age.
- Create public awareness campaigns about data consent, ethics, and privacy.
- Integrate privacy education into school and college curricula.
Bias and Fairness: AI algorithms trained on biased data can perpetuate those biases in healthcare, leading to misdiagnosis or unequal treatment of certain demographics. Biased training data can also generate misleading or inaccurate information that poses risks to health, equity, and inclusiveness.
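As a minimal sketch of how such bias can be surfaced in practice, the Python snippet below computes a demographic parity gap – the difference in positive-prediction rates between groups. The column names and data are invented for illustration; real audits use richer fairness metrics and clinical context.

```python
import pandas as pd

def demographic_parity_gap(df, group_col, prediction_col):
    """Gap between the highest and lowest positive-prediction rates
    across groups; 0 means all groups receive positive predictions
    at the same rate under this (deliberately simple) metric."""
    rates = df.groupby(group_col)[prediction_col].mean()
    return rates.max() - rates.min()

# Hypothetical model outputs: 1 = recommended for treatment.
df = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "prediction": [1,   1,   0,   1,   0,   0],
})
print(demographic_parity_gap(df, "group", "prediction"))  # 0.33: group A favored
```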
Transparency and Explainability:
AI systems are often like a black box – we see the input and the output, but not how the system arrives at its answer. This lack of transparency can be a problem, especially when AI is used for important decisions. There are calls for AI systems to be more explainable, but this does not necessarily mean revealing all the code. The key is to find a way to explain how the AI system makes decisions without giving away so much information that the system can be gamed. More research is needed to find the right balance.
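One widely used model-agnostic technique is permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy drops, which reveals which inputs drive its decisions without exposing the model’s internals. The sketch below uses scikit-learn on synthetic data; the model and dataset are assumptions for illustration, not a recommendation for clinical use.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a clinical dataset.
X, y = make_classification(n_samples=500, n_features=6,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# How much does held-out accuracy drop when each feature is shuffled?
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```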
LLMs may be trained on data for which consent was not previously given for such use, and they may not protect sensitive data (including health data) that a user provides to an application to generate a response.
Accountability: If an AI-driven decision leads to harm, who is accountable – the developer, the healthcare provider, or the algorithm itself? LLMs can also be misused to generate and disseminate highly convincing disinformation in the form of text, audio, or video content that is difficult for the public to distinguish from reliable health content.
Over-reliance and Job Displacement: Overdependence on AI could sideline healthcare professionals’ expertise and judgment. Additionally, AI automation might displace some healthcare jobs.
Authors
Prof. Dr Arun Jamkar, MS, PhD, FICS, FIAGES, FMAS, FAMS, and FAIMER Fellow
Former Vice-Chancellor, Maharashtra University of Health Sciences, Nashik; Distinguished Professor, Symbiosis International University; Consultant, Healthcare and Life Sciences, Persistent Systems Ltd. Email: jamkar@gmail.com
Here are some resources for further reading:
- World Health Organization: WHO calls for safe and ethical AI for health (who.int)
- Forbes: Ethical AI in Healthcare: A Focus on Responsibility, Trust, and Safety (forbes.com)
- ICMR: Ethical Guidelines for Application of Artificial Intelligence in Biomedical Research and Healthcare (2023): https://main.icmr.nic.in/sites/default/files/upload_documents/Ethical_Guidelines_AI_Healthcare_2023.pdf
- NITI Aayog: National Strategy for Artificial Intelligence: https://www.niti.gov.in/sites/default/files/2023-03/National-Strategy-for-Artificial-Intelligence.pdf