SICUREZZA 2025: OVER 120 COMPANIES ALREADY CONFIRMED MORE THAN A YEAR BEFORE THE EVENT

The industry expresses its confidence in the event, which reaffirms itself as one of the leading European exhibitions.



Is AI a fundamental cybersecurity instrument or a weak link?
We discussed this with Fabio Roli, Full Professor of Computer Engineering at the University of Genoa, who will speak at the Cyber Security Arena.

“Today we know that artificial intelligence is a fundamental tool for cybersecurity, but we also know that it can become the weak link in the chain”. It is from this fine line that Fabio Roli, Full Professor of Computer Engineering at the University of Genoa, started, in preparation for his participation in the upcoming edition of the Cyber Security Arena at SICUREZZA 2023 (Rho-Fiera Milano, 15 to 17 November), to explain to us how artificial intelligence has become a leading player in the development and evolution of processes, dynamics and flows of interaction and management across all sorts of activities. In an increasingly digital and digitised era, businesses, workplaces, and the buildings and cities in which we live are being transformed into elements of a single large network that is constantly connected and evolving. On the one hand, this enables security and prevention systems and devices to become more effective, faster and more efficient; on the other, it also makes them more vulnerable and fragile.

The professor continued: “Modern artificial intelligence technology based on machine learning has the ability to analyse huge amounts of data in real time to recognise a specific ‘pattern’ of interest within the data, a particular ‘configuration’ of the data. This is what insiders call ‘pattern recognition’, and it allows artificial intelligence to be used for dozens of applications, such as finding every photo that contains the pattern matching my face in a set of millions of photos that may contain thousands of other people’s faces”. Modern cybersecurity, in turn, is now essentially based on the same ability: analysing huge amounts of data in real time to recognise “patterns” that correspond to abnormal behaviour by machines or their users. As Roli pointed out, “this ability to recognise ‘patterns’ is what allows our spam filters to separate the e-mails we want to receive from the unwanted and potentially dangerous ones”. “Pattern recognition” is indeed the common factor between artificial intelligence and cybersecurity.

The academic remarked: “This makes us understand why many today say, exaggerating somewhat, that cybersecurity is all about artificial intelligence and that artificial intelligence is ‘all you need’ for cybersecurity. But not all that glitters is gold and, metaphorically speaking, we now know that artificial intelligence is not the final solution for cybersecurity; it is not the ‘silver bullet’ we believed it to be twenty years ago. Insiders have now realised that artificial intelligence itself is vulnerable to attack. When I use AI within my computer system, I risk, if I do not do so with adequate awareness, increasing the vulnerability of my system, increasing what experts call the attack surface”.
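To make the “pattern recognition” idea concrete, here is a minimal sketch of a toy spam filter in the spirit Roli describes: a model that learns which word patterns statistically separate wanted mail from unwanted mail. The tiny training set and the scikit-learn pipeline are illustrative assumptions, not the systems the professor refers to.

```python
# A toy "pattern recognition" spam filter: it learns statistical word
# patterns that separate spam from legitimate mail, nothing more.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Illustrative training examples: 1 = spam, 0 = legitimate mail.
emails = [
    "win a free prize now, click here",
    "cheap loans, act now, limited offer",
    "meeting moved to 3pm, see agenda attached",
    "quarterly report draft for your review",
]
labels = [1, 1, 0, 0]

# Bag-of-words counts + naive Bayes: the classifier discovers which word
# "patterns" correlate with each class from the data alone.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["click here for a free offer"]))   # -> [1] spam-like pattern
print(model.predict(["draft agenda for the meeting"]))  # -> [0] legitimate pattern
```

Real filters work on vastly more data and features, but the principle is the one Roli names: recognising a statistical “pattern” in a stream of data.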

 

THE VULNERABLE BALANCE OF AI

So, artificial intelligence is another link in the cybersecurity chain. According to Roli, “It can become the strongest link or, paradoxically, the weakest. Compliance and privacy are certainly two issues that need to be addressed carefully to avoid making AI the weakest link, along with the pivotal problem of its inherent vulnerability to targeted attacks. In general, the first aspect to focus on, when people decide to build their systems and solutions on this technology, is investing in staff training, starting at managerial level, to increase awareness of the benefits and risks associated with AI. Secondly, we have to consider the use of industrial platforms for the development, deployment, operation and maintenance of AI systems: MLOps (Machine Learning Operations) platforms facilitate the reliable and efficient development and maintenance of AI models in production, and certainly reduce compliance and privacy risks compared with more ‘homegrown’, homemade tools. Finally, the adoption of AI in companies will have to be addressed across the board, not only at the technological level but also at the organisational and legal level. Securing AI, making sure it respects compliance and privacy, is not a problem that can be solved at the technological level alone, and it will be even less so when the European AI Act comes into force”.

 

SECURING THE AI PROCESS

In this sense, securing AI is a very complex problem, a “process” problem, we could say. “It is about securing a process by mitigating and managing the risks associated with the use of a new technology that is anything but perfect”, the professor added. Despite this, the opportunities are clear. Professor Roli emphasised that “AI makes it possible to introduce a level of automation into the defence of information systems, and of the networks that connect them, that would be unimaginable without it. The amount of data that needs to be analysed to protect systems, devices and networks simply cannot be handled without AI”. That is why AI is a necessity as well as an opportunity.

“The great future that I see”, Roli commented, “is that we will have intelligent machines capable of recognising ‘patterns’, as mentioned earlier, that will improve the lives of citizens and consumers and make them safer, both physically and digitally. I am talking about an evolution of what is already happening now. Personally, I am already very grateful to AI-based systems when they warn me that I am about to click on something potentially dangerous while browsing the web or reading my e-mails. That is something none of us could do”. Already today, in fact, AI algorithms protect us from careless behaviour and help us navigate safely through the complexity of a digital world that is not always on a human scale and that bombards us with more data than we can humanly analyse. The professor added, “I am confident that in the coming years AI will be an even better guardian angel for our cybersecurity. An increasingly intelligent and increasingly ‘friendly’ guardian”.

On the other hand, to understand the risks, one needs to understand what AI is nowadays and what it will be for many years to come. According to Roli, “Modern AI is a technology that analyses huge amounts of data and discovers statistical correlations. This concept, however, has nothing to do with human intelligence. This has to be said loud and clear”. It learns only to discover statistical correlations, the “patterns” mentioned earlier. “The most successful newcomer to the AI world, ChatGPT, also does this. ChatGPT did not learn Italian, although it seems to speak it very well. It has not learned Italian grammar and, to put it somewhat brutally, ‘literally does not know what it is talking about’. The technology behind ChatGPT simply predicts the next most likely word in a given sentence”. A very simple thing, as the sketch below illustrates, but thanks to the enormous computing power and memory of today’s computers it is possible to “simulate” the ability to converse naturally, giving the impression that ChatGPT knows a language very well and knows exactly what it is talking about.

The academic pointed out, “Our AI-based machines are not intelligent like us; they only simulate intelligent behaviour. They are intelligent in a very different way that should not be humanised. Many of the risks come precisely from humanising the intelligence of machines and deciding to delegate to them tasks that are not suited to the kind of intelligence they have: the intelligence of discovering statistical correlations, ‘patterns’, in huge amounts of data that we could never analyse. In terms of cybersecurity, one of the other real risks I see is that of remaining attached to the vision of AI as a magic solution, a ‘silver bullet’, and not seriously delving into the problem of AI vulnerabilities”.
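To make the “next most likely word” idea concrete, here is a deliberately crude sketch: a bigram counter that picks the statistically most frequent continuation of a word. The toy corpus is an illustrative assumption; the models behind ChatGPT are vastly larger neural networks, but the underlying task, predicting the next token from statistical correlations rather than from understanding, is the one Roli describes.

```python
# A toy next-word predictor: it only counts which word most often follows
# which, a pure statistical correlation with no grasp of grammar or meaning.
from collections import Counter, defaultdict

corpus = (
    "artificial intelligence analyses data . "
    "artificial intelligence recognises patterns . "
    "artificial intelligence analyses patterns ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def most_likely_next(word: str) -> str:
    # Return the statistically most frequent continuation of `word`.
    return following[word].most_common(1)[0][0]

print(most_likely_next("artificial"))    # -> "intelligence"
print(most_likely_next("intelligence"))  # -> "analyses" (seen twice vs once)
```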
Modern AI, in fact, is strictly dependent on the data we give it as examples so that it can learn. “Attacking AI is relatively easy if you can manipulate the data, and unfortunately it is often possible to do so. Thus, it is fundamental in cybersecurity to see AI as a link in the security chain, part of a security architecture that contains many ‘layers’ and many links”, Roli warned.
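Roli’s point that attacking AI through its data is relatively easy can be illustrated with the simplest form of data poisoning, label flipping. The sketch below is a hedged illustration on synthetic data: the dataset, the logistic-regression model and the 30% poisoning rate are all assumptions chosen for clarity, not a description of a real attack on a deployed system.

```python
# A minimal data-poisoning sketch: an attacker who can flip a fraction of
# the training labels typically degrades the learned model's test accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data (illustrative assumption).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean baseline: train on unmodified labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:   ", clean.score(X_test, y_test))

# Poisoned training set: flip 30% of the training labels at random.
rng = np.random.default_rng(0)
flip = rng.random(len(y_train)) < 0.30
y_poisoned = np.where(flip, 1 - y_train, y_train)

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```

The attacker never touches the model itself, only its training data, which is exactly why Roli insists AI must be treated as one link in a layered security architecture rather than as the whole chain.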

 

A GLIMPSE INTO THE FUTURE

“Future AI-based cybersecurity technologies could radically change the role of cybersecurity professionals”, Roli suggested. To understand this, an analogy with the future role of the software developer may help: the developer will almost certainly no longer be a programmer but an “instructor” of AI systems that develop software. This is already partly the case today, especially since the advent of ChatGPT-like technologies, as it has become very common to be assisted in software development by AI tools that generate a first version of working code in the major programming languages. The professor indicated that “something similar will probably happen to computer security specialists. They will train and manage intelligent agents that, together with ‘unintelligent’ personnel and machines, will implement cybersecurity. They will have to be able to provide these intelligent agents with the best data so that they can learn how to best manage the cybersecurity of our systems and networks, and to instruct them, as we do today with ChatGPT, by giving them the right ‘prompts’ (the right instructions, the right data). To do so, they will need a thorough understanding of AI and its security implications. Required skills will include an advanced understanding of AI and its vulnerabilities, the ability to implement AI-based security systems, and the management of specific AI-related threats. Moreover, they will have to manage the continuous training of their staff and stay up to date on the latest trends and techniques used by attackers against AI”.