
AI is Reinventing Cybersecurity Threats

Technology is advancing at a rapid pace, but skills, leadership and training are lagging behind.

Artificial intelligence is not new to cybersecurity. The industry was one of AI's earliest adopters. For years, defenders have relied on machine learning to identify anomalies, detect patterns and respond to threats with speed and accuracy beyond human capability. What's new is the speed, scale and reach of today's AI, and the way it is reshaping not just our defenses but the nature of cyber risk itself.

On April 7, Anthropic sent shockwaves through the industry when it announced that its latest model, Claude Mythos, was too powerful to be released publicly because of its ability to identify and exploit software vulnerabilities. The company instead chose to provide controlled access to select businesses, including JPMorgan, Apple, Nvidia and Google, to strengthen their cybersecurity defenses. The move underscored a growing reality: the same systems designed for defense can easily be weaponized.

The unfortunate truth is that while AI is accelerating both offensive and defensive capabilities, threat actors are proving equally, if not more, innovative. Technology alone will not close the gap. Leadership, talent development and training need to keep pace with this technological evolution.

The new face of cybercrime: faster, smarter and more personal

For decades, cyber attacks followed a common pattern. Phishing emails were often clumsy, full of grammatical errors and easy to spot. Consider the infamous "Nigerian prince" scams. That era is over.

AI has dramatically changed the accuracy and economics of cybercrime, allowing bad actors to operate with unprecedented speed and sophistication. Cyber attacks have always been, in part, a numbers game, like trying every door and window in a neighborhood until one opens. AI simply lets attackers try thousands of doors at once. It also acts as a force multiplier for low-skilled actors, streamlining the creation of malware and phishing campaigns and enabling ransomware-as-a-service business models.

At the same time, cyber attacks are becoming more sophisticated. Today's phishing campaigns are no longer generic; they are highly customized. AI tools consume large amounts of publicly available data, from social media profiles to company websites and professional bios, and use that information to craft messages that convincingly impersonate colleagues, family members or trusted institutions. The result is a new class of cyber threat: more frequent, more targeted and more credible.

Even the smallest details matter. Recently, my husband received what appeared to be a legitimate fraud alert from his bank. On closer inspection, he noticed that the message from the Canadian bank used American spelling. That subtle inconsistency was the only clue that it was a scam. He contacted the bank directly, and his suspicions were confirmed. But as AI-driven manipulation grows more sophisticated, such clues disappear.

The friction problem

That example points to another problem. Organizations have spent years engineering seamless, frictionless digital experiences. We are conditioned to click, approve and move on quickly. But when it comes to cybersecurity, friction is often protective rather than adversarial.

Multi-factor authentication, verification prompts and transaction delays are often viewed as disruptive. In fact, they are deliberate speed bumps designed to interrupt automatic or unexpected behavior and help users, both human and machine, validate actions. AI-driven threats exploit exactly what frictionless systems allow: speed without reflection.

As cyber risks increase, individuals and organizations must rethink their relationship with convenience. Slowing down, questioning a message, confirming a request and pausing before clicking are among the most effective defenses we have.

A skills shift in real time

While much of the discussion surrounding AI in cybersecurity focuses on the threats, an equally important shift is taking place in the workforce. AI is rapidly being embedded in everyday workflows, changing not only how work is done but also what skills are required to do it.

In cybersecurity, this change is especially pronounced. Tasks that were once the domain of entry-level analysts—monitoring alerts, identifying patterns and triaging events—are increasingly automated by powerful AI tools. On its face, this is a positive development: it allows organizations to operate more efficiently and frees talent to focus on higher-value work.

But it also raises an important question: if basic tasks are automated, how do future professionals develop the foundational skills needed to thrive? Does the foundation itself need to change? Which of these skills must we preserve? The challenge grows more urgent as organizations roll out powerful systems capable of identifying vulnerabilities at scale, as the Claude Mythos announcement highlighted. Cybersecurity offers a clear lens on the problem: when the lower rungs of entry-level experience disappear, traditional career paths collapse.

Without deliberate intervention, we risk creating a generation of professionals who know the tools well but lack the foundational knowledge to use them wisely.

The illusion of intelligence

Adding to this challenge is a growing tendency to overestimate what AI actually is. Much has been written about artificial general intelligence, but to understand today's tools, we need to consider how these systems actually work. Today's models are powerful, but they are not intelligent in the human sense. They don't think the way we do; they identify patterns in data and predict the most likely sequence of outputs based on what they have seen before.

They can assist, accelerate and amplify, but they cannot replace human judgment. While AI systems may have all the data relevant to a particular situation, qualities such as context, empathy and original insight remain human. Yet many organizations are deploying AI tools at scale without adequately preparing their employees to use them well.

This brings into focus the issues affecting the safe and secure use of this technology. AI is not infallible. There are many examples of bias, model drift and deliberate manipulation producing unintended consequences that cause more harm than good. The result is a sharp and urgent gap: we rely on AI more than ever before, but that does not mean we know more about how to use it safely, securely and effectively.

The leadership gap in practice

These changes point to a broader issue that has been part of industry discourse for more than a decade: cybersecurity is not just a technical exercise. It is a strategic, organizational and human challenge. And that requires a kind of leadership that goes beyond just technological innovation.

Technical fluency matters, but it is not enough by itself. Leaders must understand AI, systems thinking, data governance, human behavior, regulatory environments and organizational change simultaneously. They must be able to ask not only "Can we use this technology?" but "Should we, and how do we do so safely and responsibly?"

At the same time, they have to navigate an empowered yet frustrated workforce. Employees are being handed an ever-growing number of AI tools, often without clear guidance, while also facing fears of being displaced. Tool fatigue is real, and without proper support it can lead to misuse or disengagement.

In many organizations, these leadership skills are still emerging. The result is a growing gap between the pace of technological change and the readiness of those tasked with leading that change.

Rethinking talent for an AI-driven future

Addressing this gap will require a fundamental rethinking of how we develop talent. First, we must embrace lifelong learning, not as a slogan but as a necessity. The pace of change in AI and cybersecurity means that static skills rapidly become obsolete. Continuous education, upskilling and reskilling must be built into how organizations operate.

Second, we need to create new pathways for developing skills. If traditional entry-level roles are changing or disappearing, we must define a new foundation of core skills and design alternative ways for professionals to acquire the knowledge they need. This could include simulated training environments, apprenticeship or rotation programs, or work-integrated learning models that combine theory and practice.

Third, organizations must invest in enabling their people, not just deploying tools. That means providing clear policies, guidance, training and governance structures that help employees use AI safely, responsibly and effectively.

Finally, cooperation across sectors is essential. The challenges posed by AI and by current and emerging cyber risks do not fit neatly into any one domain. Partnerships between industry, government and academia will be critical to building the talent pipelines and knowledge ecosystems needed to keep pace.

The way forward

None of this is an argument against AI. Far from it. AI has great potential to strengthen cybersecurity, improve efficiency and unlock new capabilities. But like any powerful tool, its impact depends on how we use it.

The organizations that succeed in this new landscape will not be those that adopt AI the fastest, but those that adopt it the smartest, balancing innovation with oversight, speed with purpose and automation with human judgment.

Cybersecurity has always been about staying one step ahead of the threat. In the age of AI, that step is no longer just technological; it's human. Closing that gap will be critical to our future digital resilience.

Judith Borts is the Executive Director at Rogers Cybersecure Catalyst, Toronto Metropolitan University's national center for training, innovation and collaboration in cybersecurity.

