
AI on ESG

For ethical and sustainable AI

By Idoia Salazar
Founder and president of the Observatory on the Social and Ethical Impact of Artificial Intelligence (OdiseIA)

"The process of training complex artificial intelligence models is energy and resource intensive, forcing researchers to seek a balance between processing power and energy efficiency."

Artificial Intelligence (AI) already affects multiple areas of our daily lives, from voice recognition on our phones to data analysis in medical research. Its advanced technology offers countless advantages, such as task automation, efficiency, the capacity to process large volumes of data and the personalization of multiple services. However, these advantages come with challenges in terms of privacy, algorithmic bias, liability in case of errors and the impact on employment, among other things.

One of the most significant dangers of unsupervised AI is bias. We have already seen instances of this that have led to cases of gender and racial discrimination, among others. AI systems learn from data, and if that data reflects existing biases in society, they can perpetuate or even exacerbate those biases. This manifests itself in use cases such as hiring systems, credit scoring and court proceedings. Moreover, without proper regulation, AI can be used to collect, analyze and share personal data without adequate consent. The question of liability also arises when errors or accidents are caused by AI systems operating without adequate human supervision: without a clear legal framework, determining who is liable can be challenging. Most of these ethically dubious cases stem from unintentional actions, or simply from the absence of action. Nor should we forget 'the dark side of AI', that is, the problems arising from deliberate attacks or malicious manipulation of AI, which can easily lead to serious consequences in areas such as national security or critical infrastructure.

In confronting these challenges, AI is no different from other high-impact technologies such as the automobile or the Internet, which also needed to be regulated to prevent negative consequences. Given its rapid evolution, however, AI requires dynamic and adaptive regulation. In most countries, that regulation is still in its infancy. In the United States, for example, regulation has been largely sectoral and dependent on individual states, although certain federal frameworks exist in specific areas such as privacy or discrimination.

Globally, the challenge lies in balancing innovation with citizen protection. Excessive regulation could stifle innovation, while a lack of regulation could leave people unprotected. In this regard, the EU has been developing the AI Act for several years in a joint effort between European regulators, companies, AI experts and civil society. Its aim is to protect the fundamental rights of individuals, ensure transparency in the decision-making of AI systems, and establish appropriate accountability mechanisms and human oversight, among other basic issues. It does not seek to regulate the technology itself, which would hinder its implementation and development in EU industry, but rather specific use cases that may pose a risk.

For a 'green' AI

The concept of sustainability is intertwined with these ethical dilemmas, extending the vision to the long-term impact of AI on our world. Sustainability not only addresses efficiency and resource preservation, but also includes considerations of how AI can contribute positively to global challenges such as climate change. For example, it is being used to improve energy efficiency and to develop models that forecast extreme weather events more accurately. However, the process of training these complex AI models is energy- and resource-intensive, forcing researchers to seek a balance between processing power and energy efficiency.

Therefore, responsibility in AI development is not only about building systems that do no harm, but also about forging a future where technology works in harmony with our environment and society. In this regard, innovation in the field of AI that minimizes the environmental footprint should be encouraged. This includes the development of more efficient algorithms and the creation of less energy-demanding hardware. Global collaboration is also important to address ethical and sustainability challenges, sharing knowledge and resources to avoid duplication of efforts and ensure that best practices are disseminated and adopted globally.

In this context, international forums, including the Observatory on the Social and Ethical Impact of AI (OdiseIA), have a significant role to play, as the common effort they foster facilitates the creation of regulatory and ethical frameworks that transcend national boundaries and respond to the needs of different cultures and economies. The goal should be a universally agreed set of principles governing the implementation of AI, ensuring that human rights are respected and collective welfare is promoted.

The decisions we make now about how we incorporate AI into our lives will define the trajectory of this technology and its role in society for decades to come. By committing to ethical and sustainable practices, we can ensure that artificial intelligence is not only a tool for efficiency and innovation, but also for the collective well-being and ecological balance of our planet. This is the challenge of our age, and the time to act is now.

Idoia Salazar, Founder and President of the Observatory on the Social and Ethical Impact of Artificial Intelligence (OdiseIA)

Member of the team of experts of the Observatory of Artificial Intelligence of the European Parliament, she is currently working on the development of the Artificial Intelligence Regulation Sandbox and the National AI Seal with the Government of Spain.

She holds a PhD and is a professor at CEU San Pablo University, specializing in ethics and regulation of Artificial Intelligence. She is the author of four books on the social impact of AI and other new technologies, the latest being 'The Algorithm and I: Guide to coexistence between humans and artificial beings' and 'The Myth of the Algorithm: tales and accounts of Artificial Intelligence'. She is an advisor for Spain on the Advisory Council of the International Group on Artificial Intelligence (IGOAI), a founding member of the Springer journal 'AI and Ethics' and a member of the Global AI Ethics Consortium (GAIEC).