
ZDNET's key takeaways
- There's a big mismatch between demand and rewards in cyber.
- Working pressure is only likely to increase due to the use of AI.
- Security staff should focus on strategy and communication skills.
Almost 20% of organizations have reported a major security attack in the past two years, and the threat environment, whether due to criminal activity or the rise of new AI-enabled models, such as Anthropic's Mythos, continues to evolve at breakneck speed. However, the cybersecurity professionals who help their enterprises manage these challenges don't feel adequately rewarded -- and most are fed up with the situation.
That's the conclusion from the recently released Harvey Nash Global Tech Talent & Salary Report, which surveyed 3,646 technology professionals globally. While 19% of respondents reported a major attack at their firm in the past 24 months, those working in the security specialism were the least likely to report a pay increase over the past year.
Also: These 4 critical AI vulnerabilities are being exploited faster than defenders can respond
Only 29% of cyber professionals said they'd received additional compensation for their efforts, which is in stark contrast to other roles, where at least half of tech professionals received a pay increase in 2025, specifically in DevOps (56%), product management (51%), and business analysis (50%).
"The probe intelligibly tells america that there's a large mismatch betwixt the request and the reward successful cyber," said Ankur Anand, radical CIO astatine exertion and endowment solutions supplier Nash Squared, which owns tech recruiter Harvey Nash, the steadfast that produced the survey.
"I deliberation this mismatch is owed to the complacency of galore boards saying thing atrocious has happened successful the past fewer years, truthful information indispensable beryllium fine. And that's the irony -- that erstwhile information teams are doing truthful much, and they're preventing harm to the organization, they're getting the slightest recognition."
Motivation is waning
Unsurprisingly, the survey found that security specialists have had enough. People working in cybersecurity are the third-most unhappy IT professionals globally (23%), just behind those working in quality assurance/testing (24%) and infrastructure/support (25%).
What's more, the lack of recognition and a general sense of despondency mean about half (49%) of cybersecurity professionals want to move jobs in the next 12 months, well above the global average (39%) across technology roles.
"Cyber is one of the few roles where success is invisible, and failure is very visible," said Anand, referring to the age-old business challenge of too many executives assuming security is fine because their organization hasn't been attacked.
Also: 10 ways AI can inflict unprecedented harm in 2026
However, this complacency could quickly become a major issue. While 80% of organizations have not suffered a major attack in the past two years, a failure by senior executives to recognize the scale of the cyber challenge and to look after their security teams could mean the enterprise is next in the firing line.
In these circumstances, where cybersecurity concerns continue to rise, and companies continue to stall at rewarding and retaining their talented staff, many professionals can feel their motivation for work start to wane.
"It's the combination of the lack of recognition, the pressure in terms of ensuring that the damage is not done, and that adds to the workload because of the legacy tech stack and the distributed workforce model that is doing the damage to people's motivation," said Anand.
AI brings new threats
Crucially, the working pressure is only likely to go one way: upwards. The rise of AI brings new models, techniques, and risks. Anand said organizations and security professionals must consider the speed at which AI is evolving and its likely impact on business operations.
"When I review the threat vectors with my head of security, it boggles my mind how many vulnerabilities outsiders are trying to exploit in the enterprise IT environment, and that reality makes it very stressful to work in the security organization," he said.
Such is the pace of change that Anand said the threat landscape is moving faster than most organizations can structurally adapt. He regularly speaks with digital leaders at other companies who say they've invested heavily in security but still struggle to cope with the threats.
Also: AI is quietly poisoning itself and pushing models toward collapse - but there's a cure
Some industry experts are concerned that current fears about the pace of AI-enabled change are just the starting point. Anand recognizes that the hype surrounding Anthropic's Mythos model is justified, given the potential for this model and other AI-powered innovations to disrupt the entire industry.
"These developments show how AI can detect all those dormant vulnerabilities in systems," he said.
"Anthropic, as a responsible organization, is trying to ensure that the key platforms are addressing those vulnerabilities. However, you also must think about whether other non-responsible threat actors will create similar tools."
Taking a proactive approach
In short, the industry is right to be concerned about Mythos, and the ramifications could mean more pressure for cyber professionals. However, it's not all bad news, and the research suggests that AI could help to reduce the strain on security staff.
Cybersecurity professionals (48%) are the third-most likely IT workers not to feel threatened by AI taking their jobs, behind firmware/hardware engineers (55%) and technology leaders (58%). Anand said security specialists understand that AI creates new risks but also generates new opportunities.
"AI is not removing the need for security; it is increasing it, and this is where a cyber professional adds value -- they will define what good looks like," he said. "You need to think, 'Okay, how do I contribute to the AI strategy of our organization and ensure what we do is within the guardrails of the regulations and data protection laws?'"
With the research suggesting that about half (49%) of cybersecurity professionals want to move jobs in the next 12 months, security specialists are likely to find themselves fighting for opportunities in a competitive labor market. Anand encouraged cyber specialists to hone their AI capabilities and to develop skills in other areas, including strategy and communication.
"The strongest cyber professionals today combine the technical depth of the domain with the business context," he said. "They can explain a security issue without the jargon, without any drama, but by being very practical about the business impact and how the firm manages it."
Rather than burdening the organization with technical details, the most in-demand cyber staff are aware of how specialist tools, such as AI, can be used to reduce risks, not increase them. These cyber professionals explain how good security practice is important to the overall business strategy.
"This focus is not about audits, findings, and so on," said Anand. "It's about a proactive thought process -- it's talking about cyber strategically in terms of business needs, business risks, and business readiness for the future."
