Where 795+ cyber teams were tested—and what their performance revealed
Cybersecurity teams around the world are getting better. But not fast enough, and in some industries, not where it matters most. This year’s Global Cyber Skills Benchmark reveals a growing divide between elite performers and the average team. Foundational domains remain major weak spots, even in high-risk industries, with average solve rates of just 18.7% in Secure Coding, 21.3% in Cloud, and 21.1% in Web Security.
Geography offered a few surprises. Countries like Japan (52.1%) and Vietnam (40.2%) outpaced traditional leaders like the US (30%) and the UK (31.6%). Yet some of the top 10 teams in the overall CTF were US-based, proving that standout talent exists everywhere, even when national averages suggest otherwise.
AI and Machine Learning (ML) emerged as new challenge categories this year, with solve rates of 37% and 30.1%, respectively. Despite the hype, nearly 93% of teams said AI tools were not essential to their success, underscoring that human expertise still drives results.
Industry shake-ups were just as telling. Business Services led all sectors with a 43.9% average solve rate, while Government (27%) and Education (20.4%) fell to the bottom. The introduction of emerging threats—like AI prompt injection and smart contract exploits—reshuffled the leaderboard and exposed blind spots in training.
The takeaway is clear: certifications may check compliance boxes, but attackers don’t care about credentials. Real resilience is performance-based. If your team can’t handle cloud misconfigs, lateral movement, or adversarial AI, policy alone won’t protect you. This report is designed to move organizations from insight to action. We unpack what top teams are doing differently, identify where the biggest skill gaps remain, and explore how frameworks like CTEM—Continuous Threat Exposure Management—can help close those gaps with real-world, data-backed strategies.