Beyond the fear of the algorithm: a socially responsible AI
22.02.24
5-minute read

Artificial Intelligence (AI) was born in the 1950s with the aim of building intelligent computer systems. After decades of development, it has grown exponentially in recent years, with major implications in the technological, social and economic spheres, and its use has been democratised through tools such as ChatGPT. As the expert Nuria Oliver analyses, this growth raises extensive ethical debates and challenges in areas as crucial as disinformation, has an impact on democracy, and yet also opens up an immense range of possibilities for social good.
Oliver is one of the leading figures in the field of Artificial Intelligence (AI) in Spain. She graduated in Telecommunications Engineering from the Polytechnic University of Madrid in 1994, at the top of her class, and received a grant from the “la Caixa” Foundation to pursue a PhD in Artificial Intelligence at the MIT Media Lab. She completed her PhD in 2000 and has worked as a researcher and research manager for major companies such as Microsoft, Telefónica and Vodafone. She is a co-founder and vice-president of ELLIS (European Laboratory for Learning and Intelligent Systems), the European network of scientific excellence in AI, and director of the ELLIS unit in Alicante, a foundation dedicated to human-centred AI research.
According to Oliver, AI cannot be understood without addressing the major issues we face as a society. She points to “population ageing, the depletion of resources and polarisation of wealth accumulation, the energy crisis driven by the high consumption of non-renewable energy, climate change and pandemics”, and goes on to assert: “We need AI to tackle these challenges, and therefore we need it to survive as a species.”
“We live in a world where we use AI-powered apps all the time and can even talk to our cars, phones, smart speakers or home appliances, which understand us,” says Oliver. “And we’re moving towards a world where we’ll live with autonomous cars and social robots. A world with smart cities that are constantly generating data, data that we’ll only be able to interpret through AI”.
According to IoT Analytics Research, more than 12 billion devices were already connected to the Internet by 2020. “The smartphone is the most widely adopted technological device in human history,” says the expert. This ubiquity of technology generates huge amounts of data: IDC’s 2018 white paper predicts that the Global Datasphere will grow to 175 zettabytes by 2025. And “there’s no other way to process that volume of data than through AI techniques,” explains Oliver. Thanks to these data and to significant computational capacity, algorithms learn to find patterns, make predictions, create images that appear human-made, or translate from one language to another.
“Today, we coexist with AI, often without knowing it. AI algorithms determine what friends we have, what news we read, what films we watch, what music we listen to or what books we read,” she says, citing ChatGPT as one of the keys to the democratisation of AI. But increasingly, “it’ll be algorithms that decide what medical diagnosis we get, what treatment we receive or what sentence is handed down to us.”
As a result, AI has gone from being a technological issue to “becoming a political one”. With this exponential growth comes the need for regulation. Accordingly, work began in 2018 on a European strategy for AI, culminating in 2023 in the adoption of the AI Act, the European Union’s first regulatory framework for AI. In addition, two equally important pieces of legislation were adopted in 2022: the Digital Markets Act and the Digital Services Act.
Human-centred AI
To this technological base, framed by national and international regulation, Oliver adds several dimensions without which, she argues, no real approach to AI is possible: its ethical, social and economic aspects. According to Forbes’ AI Trends, 97 million people will be working in the AI sector by 2025, and the World Economic Forum predicts that 23% of jobs will be affected by AI.
Beyond the displacement of jobs in this Fourth Industrial Revolution and the immense wealth it can bring to the productive sector, the AI expert stresses the need for AI to be human-centred. This revolution, which we have been witnessing particularly since 2016, has already led to achievements “that seemed like science fiction a few years ago”, such as deducing the 3D structure of proteins. The applications of AI for social good are countless: from predicting crime hotspots in cities to personalised education, early disease diagnosis, precision medicine and autonomous driving. It is on this social aspect, aimed at the common good and reaching all of humanity, that Oliver has focused her career.
The ELLIS Alicante Foundation, co-founded by Oliver with the encouragement and support of the Valencian government, believes “in the power of AI as a driver of progress and a key factor for well-being”. However, as its website points out, “this potential is by no means guaranteed, and that’s why the research of our foundation is so important”.
Over the last decade, both privately and through ELLIS Alicante, Oliver has concentrated her efforts on using AI to mitigate the impact of natural disasters, promote financial inclusion, create smart cities and transport systems, help fight pandemics and establish more sustainable energy systems. She recently published an article in Science on the use of mobile phone data to inform public health decisions during the COVID-19 pandemic, and has driven initiatives such as NAIXUS, a global network of AI centres of excellence focused on sustainable development.
Although ethical debates about its implications are regularly raised, Nuria Oliver’s work shows that AI has real, socially responsible applications that can help us improve people’s lives and tackle future challenges as significant as resource depletion, public health and the climate crisis.