Ethics and Artificial Intelligence: Principles for a Responsible Future
By Joan Fontrodona
Professor of Business Ethics
CaixaBank Chair of Sustainability and Social Impact at IESE Business School
Artificial Intelligence (AI) has advanced by leaps and bounds in recent decades, transforming our society in unimaginable ways. As this technology continues to evolve, new questions are being raised about the purpose and use of AI that require ethical reflection.
The relationship between technology and ethics must be based on respect for the autonomy of each, but also on the acceptance that ethics, insofar as it questions the ultimate reasons for human action, has a directive role to play in technical progress. From this mutual relationship, ethics offers some principles that can guide the development and application of AI.
Not everything that is technically possible is ethically acceptable
This first principle is an essential reminder that the mere technical ability to create something does not justify its creation. AI has proven to be a fertile field for innovation, but not all applications are ethical or beneficial. For example, the creation of AI-powered lethal autonomous weapons raises profound ethical concerns: their autonomy leaves open the question of who is responsible in the event of misuse or accidents. Ethical reflection and robust regulation are required to ensure that technologies that could cause harm or undermine fundamental human values are not developed.
But there is no need to go as far as the examples of so-called strong AI. Even in more common uses, the so-called weak AI, an ethical debate is required to justify its development. For example, the collection and use of data, an issue of great relevance today, must be transparent and must respect the privacy of individuals.
The same can be said of the use of AI to expand the surveillance and tracking of individuals, which may conflict with people's freedom and right to privacy. Ultimately, technological developments must not compromise our fundamental rights.
Technology is a means; the end is human excellence and the development of society
Artificial Intelligence should be understood as a tool to improve the human condition and the progress of society, rather than as an end in itself. A document from a European Union working group on trustworthy AI advocated a "people-centered approach" as the interpretive key for all AI developments.
An example of how this principle can be applied to AI is in the field of healthcare. AI systems can be used to improve the diagnosis and treatment of diseases, but it is far more questionable whether they should replace personal medical care and the empathy of healthcare professionals. Technology can be a valuable ally, but it should not come at the expense of the quality of medical care or reduce people to mere numbers or data.
Consider also the field of education. AI can be a valuable support tool, but it should not become the sole means of learning or replace the personal educational experience. Human excellence in education involves interaction with teachers and peers, the development of social and emotional skills, and the promotion of critical thinking and creativity, none of which can easily be replaced by AI.
Technique must always be used in favor of the nature of things, never against them
This third principle emphasizes the importance of respecting and working in harmony with nature and the fundamental laws that govern the world around us. In the context of AI, this means that we must use technology in a way that does not undermine the integrity of nature or violate fundamental ethical principles.
AI can help improve the quality of life of people, particularly those with disabilities. It has proven to be a valuable tool for accessibility, enabling people with disabilities to overcome barriers and participate fully in society, from voice recognition systems for the visually impaired to motorized exoskeletons that help restore mobility. However, when it comes to manipulating human cognitive abilities and identity, as transhumanism aims to do, questions arise about the legitimacy of these attempts. While improving quality of life is a laudable goal, such extreme manipulation raises serious ethical concerns and should be approached with caution and a deep respect for the nature of things and for human dignity.
Questions are also being raised about the environmental impact of AI. As the demand for computing power grows with the development of ever larger and more complex AI models, it is important to address the implications of this rising energy demand. The pursuit of human excellence and societal development must go hand in hand with sustainability and environmental protection. The adoption of renewable energy sources and energy efficiency in AI infrastructure are key to meeting this principle.
Caution in the face of irreversible advances
Technological progress, especially in the field of AI, is accelerating at breakneck speed. This accelerated pace can be exciting, but it also poses significant ethical challenges. When we move down a technological path, there is often no turning back. That is why caution and ethical reflection are essential.
The precautionary principle is a crucial reminder that we must carefully weigh the consequences of our actions in AI development. Once a technology is implemented, it may be difficult or even impossible to undo what has been done. Therefore, we must apply careful ethical consideration before moving forward in areas of AI that may have a significant impact on society, privacy, personal growth, the environment, and other fundamental aspects of human life.
Joan Fontrodona is Professor and Director of the Department of Business Ethics at IESE Business School - University of Navarra. He is the holder of the CaixaBank Chair of Sustainability and Social Impact, and Director of the Center for Business in Society at IESE. He holds a PhD in Philosophy and an MBA. He is a member of the Board of ABIS (The Academy of Business and Society) and of the Executive Committee of the Spanish Network of the United Nations Global Compact.