By Insight UK / 12 Nov 2025 / Topics: Digital transformation
We’re a few weeks past the joint IDC, Microsoft and Insight security event held at the Hotel Café Royal, London, but my mind is still racing from the discussions and presentations throughout the day. This piece summarises what I felt were some of the key talking points from the event.
It was no surprise to see the small conference room packed with new and familiar faces. The global cost of cybercrime in 2024 was estimated to be as high as $9 trillion — if it were a country, that would make it the world’s third-largest economy behind only the US and China. In the face of such high risk, it’s natural for the community to want to band together to share strategies, advice, and insights to combat the rising threat of cyberattacks.
The event was titled ‘Securing the AI Frontier: From Threat to Triumph’, and AI dominated most of the day’s conversations. Attendees explored its dual nature: the same technology that opens fresh attack surfaces for bad actors is simultaneously powering new tools for defence.
Most importantly, everyone at the event had the chance to reflect on their own AI and security strategies. Understanding where your peers stand is critical for adapting to a rapidly changing environment.
Unsurprisingly, most of those in the room are in the early stages of their AI strategy. While 35% of attendees are already using AI and looking to go further, 65% remain in the exploratory and early implementation phases. A significant barrier is trust, with many leaders citing the “black box” nature of AI as a concern. The room discussed moving away from what has been dubbed the ‘AI scramble’, where organisations experiment with anything and everything without structure.
Most attendees self-identified as either being part of the ‘AI scramble’ or beginning to move into the next phase, ‘the AI pivot’. This is the phase where the essential components of AI become clear and use cases for the ‘softer’ side of AI start to emerge. It is also the phase where combining AI strategy with cybersecurity becomes a realistic way to fight the rise of cybercrime. A particularly exciting example is AI transforming modern security operations centres (SOCs) by enabling real-time threat detection, automating incident response, and reducing analyst workload. The opportunities to make security operations faster, smarter, and more resilient are enormous.
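To make that idea a little more concrete, here is a deliberately small sketch of the kind of alert-triage loop an AI-assisted SOC automates: repeats of known-benign alerts are closed automatically, and everything else lands in a ranked queue for analysts. The alert fields, thresholds, and scoring rules are illustrative assumptions only, and the simple scoring function stands in for what would be a trained model or enrichment service in practice; nothing here describes a specific product discussed at the event.

```python
# Illustrative sketch only: auto-close repeats of known-benign alerts and
# rank the rest for human analysts. Fields and thresholds are assumptions.
from dataclasses import dataclass


@dataclass
class Alert:
    source: str          # e.g. "EDR", "email gateway", "firewall"
    severity: int        # 1 (informational) .. 5 (critical)
    user_reported: bool  # did an end user flag this?
    known_benign: bool   # matches a previously triaged benign pattern?


def triage_score(alert: Alert) -> float:
    """Combine simple signals into a priority score (higher = more urgent)."""
    score = float(alert.severity) * 2.0
    if alert.user_reported:
        score += 1.0   # human suspicion is a useful signal
    if alert.known_benign:
        score -= 3.0   # repeats of benign patterns sink down the queue
    return score


def triage(alerts: list[Alert]) -> tuple[list[Alert], list[Alert]]:
    """Split alerts into an auto-resolved pile and a ranked analyst queue."""
    auto_resolved = [a for a in alerts if a.known_benign and a.severity <= 2]
    queue = sorted(
        (a for a in alerts if not (a.known_benign and a.severity <= 2)),
        key=triage_score,
        reverse=True,
    )
    return auto_resolved, queue


if __name__ == "__main__":
    demo = [
        Alert("EDR", 5, False, False),
        Alert("email gateway", 2, True, True),
        Alert("firewall", 1, False, True),
    ]
    resolved, queue = triage(demo)
    print(f"auto-resolved: {len(resolved)}, routed to analysts: {len(queue)}")
```

Even in this toy form, the pattern shows where the analyst time goes: low-value repeats are handled automatically, and human attention is spent on the alerts most likely to matter.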
Another recurring topic was shadow AI: the unsanctioned use of AI tools or applications by employees or end users without approval or oversight from the IT department. Shadow AI is becoming a prominent concern, with 35% of EMEA businesses voicing anxiety about it, up from 29% in 2024 (source: IDC EMEA Security Survey, May 2025, n = 1,028). I was surprised that this number isn’t higher; perhaps it indicates that many organisations have yet to recognise the significant risk it poses.
Ultimately, concerns around shadow AI tie into an even more prevalent topic: the human factor in security. Shadow AI is forcing businesses to confront questions such as what kind of data is being fed into AI systems, whether outputs and interactions are protected and well governed, and, most importantly, who is using AI. Business culture and, critically, employee education will be key to combating shadow AI and the wider human risks in the coming years.
The room agreed that suffering an attack has moved from a matter of ‘if’ to ‘when’, so we should not focus solely on prevention but also pay serious attention to how we recover swiftly when attacks occur. Some even suggested that a point is coming where AI-enabled phishing becomes so advanced that training our people will no longer be enough. More often than not, the weakest link is us, the users. As deepfakes become more prevalent and sophisticated, the risk of each of us being personally compromised grows with them.
Despite these challenges, the session ended on an optimistic note. AI offers the potential to improve work-life balance by easing the constant pressure of security vigilance. By enabling smarter detection systems, AI can reduce the burden on individuals and the need for constant manual monitoring.
In my view, the relationship between AI and cybersecurity is still very much in its infancy, but it’s developing fast and the potential is undeniable. As AI empowers attackers, it equally strengthens our ability to defend — and, most importantly, supports the experts whose knowledge and skills are critical to minimising risk. Insight showcased this perfectly by using AI to sift through threat intelligence and provide tailored guidance to customers based on their job role.
If one thing is certain, it’s that the security community must continue to gather, share expertise, and build a safer cyber world together.
