Mark Coeckelbergh: “We should not be afraid of AI, but we should be afraid of authoritarianism and totalitarianism.”

Mark Coeckelbergh during his stay in Catalonia

Mark Coeckelbergh during the 'AI and the principles of democracy' lecture at the UPC

Mark Coeckelbergh is a full professor of Philosophy of Media and Technology at the University of Vienna and an ERA Chair at the Institute of Philosophy of the Czech Academy of Sciences in Prague. He delivered a lecture at the UPC on 24 May, invited by the Institute of Robotics and Industrial Informatics (IRI) and the Ethics Committee. Coeckelbergh is the author of numerous books, including 'AI ethics', 'Robot ethics', 'Introduction to philosophy of technology' and 'The political philosophy of AI'.

Jun 19, 2023

How can artificial intelligence (AI) affect democracy? Mark Coeckelbergh, a professor of Philosophy of Media and Technology at the University of Vienna and an ERA Chair at the Institute of Philosophy of the Czech Academy of Sciences in Prague, used concepts from political philosophy to analyse the impact of AI on key democratic principles during his lecture 'AI and the principles of democracy' at the Universitat Politècnica de Catalunya - BarcelonaTech (UPC). The philosopher warned about a tendency towards totalitarianism. But AI has also generated discussions in a more constructive direction: how can we use AI to improve democracy?

Coeckelbergh is the author of numerous articles and books, including AI ethics (MIT Press), Robot ethics (MIT Press), Introduction to philosophy of technology (Oxford University Press) and, most recently, The political philosophy of AI (Polity Press). His expertise focuses on ethics and technology, particularly robotics and artificial intelligence. He is involved in several national and European research projects on AI and robotics. He visited the UPC on 24 May, invited by the Institute of Robotics and Industrial Informatics (IRI) and the University’s Ethics Committee.

"AI endangers freedom and intersects with issues about justice."

Mark Coeckelbergh

Regarding the subject of your latest book, ‘The Political Philosophy of AI’, what role does political philosophy play in the implementation of AI?

AI raises not only ethical questions but also political ones, as it has a profound impact on our society. For example, it endangers freedom and intersects with issues of justice. Think of the possibilities for manipulating people's choices or of bias in AI. If we want to deal with these questions, we need the appropriate conceptual tools. Political philosophy can help us to discuss the underlying political issues.

In your lecture, you state that the future of AI should be democratic. Should we be afraid of AI? Could AI affect democracy in a positive way?

If our democracies are already weak and plagued by problems such as populism, AI can make those problems worse. Think about the manipulation of elections and the spread of misinformation and fake news. We should not be afraid of AI, but we should be afraid of authoritarianism and totalitarianism. If we want to do something about this, we need to use AI in a positive, constructive way for democracy, for example by having AI support deliberative processes. AI can inform people and help them to find a consensus.

How can AI help solve global problems?

AI can help us with climate change, for example. It can predict extreme weather events and support a smarter use of energy. But when we do this, it is important to also think about the ethical and political issues. And AI can itself contribute to global problems. The energy use and carbon emissions associated with training large language models, and with the data centres and other infrastructure and devices needed for AI, contribute to environmental and climate problems. These issues need to be addressed, both at the technical level and at the level of regulation.

"We should be careful when we delegate decisions to machines, because that raises a
problem of responsibility and accountability."

Mark Coeckelbergh

Do you think we are delegating too much decision-making power to machines?

We should be careful when we delegate decisions to machines, because that raises a problem of responsibility and accountability. We should make sure humans can be held responsible and answerable for decisions. Otherwise, we have a moral problem and also a problem of political legitimacy: citizens have a right to know why a particular decision about them was taken. Think of social benefits distribution or court decisions. AI can help, but we need to keep responsibility and judgment on the side of the human.

We saw a classroom full of students listening to a lecture on democracy and philosophy. Are philosophy and ethics in fashion again?

We all live in a world that is increasingly confusing and deeply problematic in many ways. I think young people see philosophy as one of the tools we can use to better understand the world and to reflect on what normative direction we want to go in. It is not easy to find the good life and good ways of living together with digital technologies today. But this is exactly a key challenge of our time. Philosophy can help with this.