When travellers arrive at Copenhagen Airport, they can pick up a free city map on their way to the metro, making it easy to navigate the city. But today, hardly any tourists look at a physical map, as the electronic maps on smartphones guide us around much more easily.
This simple example illustrates a deeper issue: our growing reliance on technology in a world where resilience matters.
“Perhaps we will reach a point where humans can no longer navigate using a physical map. Is that a problem?” asked Dr Rain Ottis, Tenured Associate Professor at TalTech, the technical university in Tallinn, Estonia, addressing a packed hall at the deep-tech conference Digital Tech Summit in Copenhagen in November.
In a time marked by geopolitical tensions and hybrid attacks, it certainly seems sensible that we maintain basic knowledge of how to operate offline.
A secure digital future is high on Estonia’s agenda
Rain Ottis was part of an Estonian delegation led by the country’s Minister of Education, and their presentations attracted great interest at the Digital Tech Summit.
The global AI race between the USA, the EU, and China confronts Europe with critical choices about cybersecurity, AI training, and digital sovereignty.
A secure digital future is a top priority in Estonia. The country has introduced dedicated educational programmes for teachers and older pupils in primary and secondary schools, teaching them how to maintain a critical approach to AI and use the technology responsibly.
Nicola Dragoni, Professor, Deputy Director at DTU Compute, and Head of the Section for Cyber Security, moderated one of the sessions at the conference: “Can Europe Survive the AI Tsunami?” Here, both Rain Ottis and Dr Dan Bogdanov, Chief Scientific Officer at Cybernetica – which develops digital solutions for the Estonian state and is a member of the Estonian Academy of Sciences – urged the audience to remember the importance of a critical approach to AI.
“In a world where everything is becoming digital, interconnected, and AI-driven, building AI systems that are both responsible and secure is essential to prevent misuse, protect people, and maintain public safety. Embedding security, transparency, and ethical safeguards into AI from the outset helps ensure that technological progress strengthens trust and resilience rather than creating new vulnerabilities and threats”, said Nicola Dragoni.
Will machines overtake humans?
Rain Ottis is an expert in cyber conflict, cybersecurity, and national security, with a background in the Estonian Defence Forces and NATO. He advocates a critical approach to artificial intelligence, which he sees as both a valuable and potentially risky technology – one that nations must have a clear strategy for implementing and managing.
This perspective was evident in his talk. Among other things, he mentioned a scenario that occasionally surfaces in opinion pieces:
“We train artificial intelligence with all our knowledge. At some point, machines will surpass us in everything – if we allow them to. What role should humans then play?” Rain Ottis asked, urging us to reflect on how we, as the dominant species, have treated other species.
He added that, in such a case, we would be dealing with an intelligent machine capable of predicting the world and calculating its next move.
A somewhat bleak, yet fascinating and thought-provoking contribution to an important agenda in an era when the AI tsunami is sweeping over us.
Individuals can also take action
The good news, explained Dr Dan Bogdanov, who took the stage after Rain Ottis, is that with a critical and strategic approach to AI, we can still retain control.
“AI is no Terminator, because it is still humans who, based on our philosophy, tell artificial intelligence what to do. So there is no point in talking about humanity being relegated to second place on Earth,” Dan Bogdanov assured the audience.
It is one thing for society to adopt an AI strategy, but in everyday life we also have a personal responsibility. “With a few simple steps, we can steer clear of the AI tsunami,” he explained, using ChatGPT as an example and immediately opening a browser so the audience could follow along on the big screen.
For fun, he asked the chatbot to help a high school student write an essay on how AI could harm learning. Despite the topic, the chatbot was more than willing to respond. “Normally, the chatbot will not let us go but will immediately offer to do more: ‘Would you like me to…’, ‘Shall I suggest…’.” This is chatbait – follow-up questions designed to keep you engaged. Chatbots, like social media, are built to hold on to users.
But that pressure is easy to resist if you know how, Dan Bogdanov explained. He then demonstrated how to give the chatbot a standing prompt that stops chatbait. “Just copy this one,” he said, pointing at the screen.
By tailoring the chatbot to do only what you ask, you can use it smartly. But, of course, you need to learn how to do that…
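The exact prompt shown on screen is not reproduced in this article. Purely as an illustration, a standing instruction of this kind could be prepended to every request in a chat-completion-style interface; the wording and the helper function below are assumptions for the sketch, not Bogdanov’s actual prompt:

```python
# Hypothetical anti-chatbait instruction (illustrative wording, an assumption;
# not the prompt demonstrated at the conference).
ANTI_CHATBAIT_INSTRUCTION = (
    "Answer only what I ask. Do not offer follow-up suggestions, "
    "do not ask whether I want more, and end your reply after the answer."
)

def build_messages(user_question: str) -> list[dict]:
    """Prepend the standing instruction as a system message, so a
    chat-completion-style API sees it before the user's question."""
    return [
        {"role": "system", "content": ANTI_CHATBAIT_INSTRUCTION},
        {"role": "user", "content": user_question},
    ]

messages = build_messages("Summarise how AI could harm learning, in 100 words.")
```

In practice, many chatbots offer a settings field (often called custom instructions) where such a standing request can be saved once instead of being pasted into every conversation.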
Estonians must learn to use AI safely
Estonia is focusing on increasing AI competence in schools and public institutions – including launching AI Leap programmes in education to upgrade the population’s knowledge and skills in using AI.
“As a society, it is crucial to have a clear strategy for what we use AI for and which values we teach AI to act upon,” Dan Bogdanov explained.
This matters especially because it is much easier today for young people to ask a chatbot for an immediate answer than to wait for their parents to have time to respond. In this way, artificial intelligence risks taking over the upbringing of our children and shaping who they become.
Digital sovereignty is not simple
Fittingly, Dan Bogdanov’s talk ended with a kind of exam. The audience had to raise their hands to indicate how far they were willing to delegate digital sovereignty in specific areas:
Should AI-based solutions be trained and used in the same country as the people whose data is involved?
- An AI service that diagnoses tumours from medical images.
- A chatbot that helps apply for social benefits.
- A chatbot that provides legal advice based on current legislation.
- An AI coding tool that helps industry develop software products.
- A (social) media monitoring tool that supports public policy development.
The questions divided opinion and clearly showed that AI involves a battle of values.