A CISO’s View on Machine-Speed Attacks, Real AI Capabilities, and the Shift Toward Automated Defense  

Introduction: The AI Security Market Got the Story Wrong 

If you scroll through most cybersecurity product launches over the last two years, the narrative is always the same: 

“AI assistant.” 

“AI copilot.” 

“AI that explains your alerts.” 

“AI that summarizes your logs.” 

And yet, if you sit with any CISO for more than 10 minutes, you’ll hear a completely different set of expectations — often the exact opposite of what the market is offering. 

CISOs don’t want another copilot. 

They don’t want a chatbot. 

They don’t want commentary on their logs. 

 They want AI that reduces workload, closes blind spots, and keeps pace with automated, machine-speed attacks that have already entered real-world operations.  

In 2025, the first verified case of an autonomous AI agent conducting the majority of a real espionage operation was documented. 

This wasn’t a simulation. 

It wasn’t a controlled experiment. 

It was a live operation — where a state-linked group delegated most technical actions to an AI agent capable of recon, exploitation, lateral movement, and data processing. 

That single event crystallized a truth that CISOs have been feeling for years: 

The attackers have automated. The defenders have not.  

2026 will be the year this gap becomes impossible to ignore. 

This article outlines — in practical terms — what CISOs really need from AI in the next year, why current models fall short, and where AI must evolve if it’s going to be a meaningful part of modern cyber defense. 

The Core Reality: CISOs Don’t Need More Visibility — They Need Less Noise 

One of the biggest misalignments between cybersecurity vendors and security leaders is the assumption that more visibility equals better defense. 

Every year, SOC teams collect more data: 

  • More logs 
  • More alerts 
  • More dashboards 
  • More threat feeds 
  • More “insights” 

And every year, SOC performance… gets worse. 

Why? 

Because volume without prioritization is not visibility — it’s noise. 

Modern SOC teams are drowning: 

  • Thousands of daily alerts 
  • Dozens of overlapping tools 
  • Correlation rules tuned by vendors, not reality 
  • Analysts spending 60–70% of their time on low-value triage 

The painful truth is simple: 

The limiting factor isn’t data. It’s human attention. 

Today’s AI features mostly increase cognitive load: 

  • “Here’s a summary.” 
  • “Here’s an explanation.” 
  • “Here’s the log in natural language.” 

CISOs are clear: 

AI in 2026 must reduce decisions — not create new ones. 

What CISOs expect AI to do: 

  • Collapse 5000 alerts into 5 real incidents 
  • Automatically suppress false positives 
  • Enrich events without analyst involvement 
  • Correlate identity, endpoint, cloud, and network activity 
  • Flag anomalies with real business impact 
  • Tell the analyst: “This is the part that matters.” 
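
What does "collapse 5000 alerts into 5 real incidents" look like in practice? A minimal sketch, assuming a very simple rule: alerts that touch the same host or user within a short window belong to the same incident. The field names (ts, host, user) and the 30-minute window are illustrative, not a specific product's schema.

```python
from datetime import datetime, timedelta

# Illustrative sketch: collapse raw alerts into incidents by shared entity
# within a time window. Field names and the window size are assumptions.
WINDOW = timedelta(minutes=30)

def collapse(alerts):
    """Group alerts that share a host or user and arrive within WINDOW of each other."""
    incidents = []
    for alert in sorted(alerts, key=lambda a: a["ts"]):
        for inc in incidents:
            same_entity = alert["host"] in inc["hosts"] or alert["user"] in inc["users"]
            if same_entity and alert["ts"] - inc["last_seen"] <= WINDOW:
                inc["alerts"].append(alert)
                inc["hosts"].add(alert["host"])
                inc["users"].add(alert["user"])
                inc["last_seen"] = alert["ts"]
                break
        else:
            incidents.append({"alerts": [alert], "hosts": {alert["host"]},
                              "users": {alert["user"]}, "last_seen": alert["ts"]})
    return incidents

alerts = [
    {"ts": datetime(2026, 1, 5, 9, 0), "host": "prod-db-01", "user": "svc-backup", "rule": "rare process"},
    {"ts": datetime(2026, 1, 5, 9, 10), "host": "prod-db-01", "user": "alice", "rule": "new admin login"},
    {"ts": datetime(2026, 1, 5, 14, 0), "host": "laptop-77", "user": "bob", "rule": "macro executed"},
]
print(len(collapse(alerts)))  # 2 incidents instead of 3 alerts
```

A production system would also fold in behavioral scores and suppression, but the principle stands: fewer objects for humans to look at.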

What CISOs don’t want: 

  • More noise disguised as “more visibility” 
  • Dashboards with AI labels 
  • AI-generated descriptions of low-quality events 
  • Yet another feed to connect and maintain 

The CISO requirement is direct: 

AI must reduce the number of decisions a human analyst makes per day. 

Everything else is secondary. 

Machine-Speed Attacks: Why Human-Centric SOC Models Can’t Keep Up

Until 2024, even sophisticated cyber adversaries were fundamentally limited by human speed: 

  • Humans run scans 
  • Humans escalate access 
  • Humans navigate the environment 
  • Humans test business logic 
  • Humans analyze stolen data 

That changed with the emergence of practical, adaptable AI attack agents. 

 By late 2025, threat intelligence groups documented: 

  • Autonomous recon 
  • Vulnerability testing in parallel 
  • Instant adaptation when blocked 
  • Complex lateral movement chains 
  • Automated data classification 
  • Business-logic probing 
  • Large-scale identity testing 

 

These are not hypothetical. 

These are real-world attacker workflows happening right now. 

Why this breaks the traditional SOC model: 

  • Humans work in shifts — AI doesn’t 
  • Humans get tired — AI doesn’t 
  • Humans escalate step-by-step — AI can branch out in parallel 
  • Humans reason slowly — AI models process context instantly 
  • Humans specialize — AI can execute diverse tasks simultaneously 

 

 A SOC built on: 

  • manual triage 
  • static correlation rules 
  • signature-based detection 
  • event-by-event investigation 

 

…is by design slower than an AI-enabled attacker. 

CISOs understand this deeply. 

It’s why they’re now asking for machine-speed defense, not more analysts. 

 

What CISOs Actually Want From AI in 2026

Below are the expectations security leaders consistently mention when discussing AI-based defense — grounded not in marketing, but in operational reality. 

  1. AI That Does the Work, Not Describes It

CISOs are tired of features that merely “narrate” the SOC. 

They don’t want explanations. 

They want outcomes. 

What they expect: 

  • AI that fully triages initial alerts 
  • AI that correlates across systems before escalation 
  • AI that gathers all necessary context automatically 
  • AI that proposes likely root cause 
  • AI that identifies attacker paths (“what they tried next”) 

 The bar has shifted from “assist me” to “do this step for me.” 
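
A rough sketch of that shift: a first-pass triage step that gathers context and returns a decision rather than a summary. The enrichment helpers and thresholds below are placeholders, not anyone's real logic.

```python
# Illustrative first-pass triage: enrich, score, decide. Helper logic and
# thresholds are placeholder assumptions.

def enrich_identity(user):
    # Placeholder: in practice, pull privilege level and recent sign-in history from the IdP.
    return {"privileged": user.endswith("-admin"), "recent_mfa_failures": 0}

def enrich_asset(host):
    # Placeholder: in practice, pull criticality and exposure from the asset inventory.
    return {"criticality": "high" if host.startswith("prod-") else "low"}

def triage(alert):
    identity = enrich_identity(alert["user"])
    asset = enrich_asset(alert["host"])
    score = (40 if identity["privileged"] else 0) \
          + (30 if asset["criticality"] == "high" else 0) \
          + (30 if alert.get("matched_behavior_model") else 0)
    if score >= 60:
        return {"decision": "escalate", "score": score, "context": {**identity, **asset}}
    if score >= 30:
        return {"decision": "queue_for_review", "score": score}
    return {"decision": "auto_close", "score": score, "reason": "low impact, no corroboration"}

print(triage({"user": "svc-backup", "host": "prod-db-01", "matched_behavior_model": True}))
```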

  2. Behavior-Based Detection Over Signatures

AI-driven attacks will not match yesterday’s IOC patterns. 

Traditional detection logic fails because: 

  • Attackers generate unique payloads 
  • Identity-based compromise requires sequence analysis 
  • Business-logic attacks lack simple indicators 
  • API abuse looks like real traffic 
  • Lateral movement is hypothesis-driven, not signature-driven 

 CISOs now ask for: 

  • identity and privilege anomaly detection 
  • user and service behavioral modeling 
  • access sequence deviation 
  • business-logic baselines 
  • ML-driven session analysis 
  • event clustering by context, not keywords 

AI must understand patterns, not strings. 
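
One way to "understand patterns, not strings" is to model access sequences. A minimal sketch, assuming per-user transition counts as the baseline; the data shapes are illustrative, not a specific product's model.

```python
from collections import defaultdict

def build_baseline(history):
    """history: {user: [resources accessed, in order]} -> per-user transition counts."""
    baseline = defaultdict(lambda: defaultdict(int))
    for user, seq in history.items():
        for a, b in zip(seq, seq[1:]):
            baseline[user][(a, b)] += 1
    return baseline

def sequence_anomaly(user, session, baseline):
    """0.0 = every transition seen before for this user, 1.0 = entirely novel path."""
    transitions = list(zip(session, session[1:]))
    if not transitions:
        return 0.0
    novel = sum(1 for t in transitions if baseline[user][t] == 0)
    return novel / len(transitions)

history = {"alice": ["vpn", "jira", "wiki", "jira", "wiki", "vpn", "jira"]}
baseline = build_baseline(history)
print(sequence_anomaly("alice", ["vpn", "jira", "wiki"], baseline))         # 0.0, familiar path
print(sequence_anomaly("alice", ["vpn", "dc-01", "finance-db"], baseline))  # 1.0, never-seen path
```

Real deployments use richer models, but the point is the same: the detection keys on behavior over time, not on indicators an attacker can trivially randomize.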

  3. Automation for All Low-Value Work

CISOs agree: 

AI should automate everything that doesn’t require human judgement. 

Examples include: 

  • log enrichment 
  • event stitching 
  • suppression of repeat benign events 
  • initial investigation 
  • hypothesis generation 
  • automated pivoting (“what else changed in this timeframe?”) 
  • safe response actions (isolation, session revocation, MFA force, token invalidation) 

Security analysts should handle decisions. 

Machines should handle everything else. 
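
Suppression of repeat benign events is a good example of work that never needs a human. A minimal sketch, assuming a fingerprint built from the fields that make an event "the same thing again" and a 24-hour cooldown; both choices are illustrative.

```python
import hashlib
import time

SEEN = {}               # fingerprint -> last time this pattern was surfaced
COOLDOWN = 24 * 3600    # assumption: surface a given pattern at most once per day

def fingerprint(event):
    # Assumption: rule + host + user + outcome defines "the same benign event".
    key = f'{event["rule"]}|{event["host"]}|{event["user"]}|{event.get("outcome", "")}'
    return hashlib.sha256(key.encode()).hexdigest()

def should_suppress(event, now=None):
    now = now or time.time()
    fp = fingerprint(event)
    last = SEEN.get(fp)
    if last is not None and now - last < COOLDOWN:
        return True     # repeat of a recently surfaced pattern: drop it
    SEEN[fp] = now
    return False        # first occurrence in the window: let it through

evt = {"rule": "expected failed login", "host": "vpn-gw", "user": "alice", "outcome": "benign"}
print(should_suppress(evt))  # False, surfaces once
print(should_suppress(evt))  # True, repeats are suppressed
```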

  4. Fewer Tools, Stronger Telemetry Pipelines

CISOs are tired of integrating: 

  • 40+ tools 
  • inconsistent data 
  • duplicate alerts 
  • broken connectors 
  • vendor lock-in 

What they want instead: 

  • unified data collection 
  • clean normalization pipelines 
  • AI that sits on top of a consistent data layer 
  • one SOC brain, not 12 dashboards 

AI without reliable telemetry is useless. 

CISOs know this better than anyone. 
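
"One SOC brain, not 12 dashboards" starts with a normalization layer. A minimal sketch of the idea; the source field names are illustrative rather than any vendor's exact schema.

```python
# Map events from different sources into one common schema before any model sees them.
COMMON_FIELDS = ("ts", "source", "actor", "asset", "action", "outcome")

def normalize_idp(e):
    return {"ts": e["published"], "source": "idp", "actor": e["actor_email"],
            "asset": e["app"], "action": e["event_type"], "outcome": e["result"]}

def normalize_edr(e):
    return {"ts": e["timestamp"], "source": "edr", "actor": e["username"],
            "asset": e["hostname"], "action": e["detect_name"], "outcome": e["severity"]}

NORMALIZERS = {"idp": normalize_idp, "edr": normalize_edr}

def normalize(raw, kind):
    event = NORMALIZERS[kind](raw)
    missing = [f for f in COMMON_FIELDS if f not in event]
    if missing:
        raise ValueError(f"normalizer for {kind} dropped fields: {missing}")
    return event

idp_event = {"published": "2026-01-05T09:00:00Z", "actor_email": "alice@corp.example",
             "app": "aws-console", "event_type": "session.start", "result": "SUCCESS"}
print(normalize(idp_event, "idp"))
```

Every downstream model then reasons over the same fields, regardless of which tool produced the event.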

  5. Explainability and Auditability — Especially for Regulated Industries

With upcoming AI regulations across the EU, UK, US, and APAC, CISOs expect: 

  • model transparency 
  • input/output logging 
  • reproducibility of decisions 
  • fallback models 
  • multi-model scoring 
  • model drift detection 
  • full audit trails 

 In financial services, this is not optional. 

 AI that cannot be audited cannot be deployed.
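
Here is what "auditable" can mean in code, sketched as a wrapper that records the model version, a hash of the exact input, and the output for every decision. The append-only JSONL store is an assumption; the point is that each decision can be reproduced and reviewed.

```python
import hashlib
import json
import time

AUDIT_LOG = "model_decisions.jsonl"   # assumption: local append-only file, for illustration only

def audited(model_name, model_version, predict_fn):
    """Wrap a scoring function so every call leaves a reproducible audit record."""
    def wrapper(features):
        payload = json.dumps(features, sort_keys=True)
        record = {
            "ts": time.time(),
            "model": model_name,
            "version": model_version,
            "input_sha256": hashlib.sha256(payload.encode()).hexdigest(),
            "input": features,
            "output": predict_fn(features),
        }
        with open(AUDIT_LOG, "a") as f:
            f.write(json.dumps(record) + "\n")
        return record["output"]
    return wrapper

score_alert = audited("alert-risk", "2026.01", lambda x: min(100, x["anomaly"] * x["criticality"]))
print(score_alert({"anomaly": 8, "criticality": 10}))  # 80, and a full record is written
```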

Why Most Vendors Will Miss the Mark

Here is the uncomfortable part.  

Most cybersecurity vendors will continue focusing on: 

  • copilots 
  • chatbots 
  • summarization layers 
  • AI badges added to old products 
  • “search your logs with natural language!” features 

 

Why? Because it’s easy. Because it demos well. Because it sells. 

But CISOs don’t care about demos. 

They care about: 

  • reducing analyst workload 
  • minimizing risk 
  • stopping automated attacks 
  • passing regulatory scrutiny 
  • decreasing MTTR 

The gap between what the market ships and what security leaders need has never been wider. 

CISOs will increasingly choose platforms that provide real AI capability — not AI decoration. 

The New Standard: AI as Part of the SOC Engine, Not a Feature

The SOC of 2026 will be defined by three capabilities: 

  1. Data-Driven (Complete, Unified Telemetry)

AI only works if the underlying data is complete. 

CISOs require: 

  • EDR/XDR-level endpoint telemetry 
  • API logs for cloud and SaaS 
  • IAM and identity provider activity (Entra, Okta) 
  • Network flow 
  • DNS patterns 
  • Access patterns 
  • On-chain telemetry (for crypto environments) 

If the pipeline is weak, the model will fail — no matter how advanced it is. 
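
A basic hygiene check follows from this: before trusting model output, verify the feeds it depends on are actually reporting. A minimal sketch; the source names and freshness threshold are assumptions.

```python
from datetime import datetime, timedelta, timezone

REQUIRED_SOURCES = {"edr", "cloud_api", "iam", "netflow", "dns"}   # assumption
MAX_SILENCE = timedelta(minutes=15)                                # assumption

def coverage_gaps(last_event_per_source, now=None):
    """Return sources that are missing entirely or have gone quiet for too long."""
    now = now or datetime.now(timezone.utc)
    missing = REQUIRED_SOURCES - set(last_event_per_source)
    stale = {s for s, ts in last_event_per_source.items()
             if s in REQUIRED_SOURCES and now - ts > MAX_SILENCE}
    return missing | stale

now = datetime.now(timezone.utc)
print(coverage_gaps({"edr": now, "iam": now - timedelta(hours=2)}))
# {'cloud_api', 'netflow', 'dns', 'iam'} -> the model is blind in these areas
```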

  2. AI-Assisted (Model-Driven Analysis)

Not chatbots. 

Not copilots. 

Not “AI search bars.” 

Actual model-driven analysis: 

  • event clustering 
  • cross-asset correlation 
  • anomaly scoring 
  • identity sequence analysis 
  • risk-based prioritization 
  • real-time hypothesis generation 

AI should help humans investigate attacks, not events. 
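
Risk-based prioritization can be as simple as a weighted blend of anomaly, asset criticality, and identity privilege. The weights below are illustrative, not tuned values.

```python
def risk_score(incident):
    """Blend behavioral anomaly, asset criticality, and privilege into one rank (0-100)."""
    return (0.5 * incident["anomaly"]              # from behavioral models
            + 0.3 * incident["asset_criticality"]  # from the asset inventory
            + 0.2 * incident["privilege_level"])   # from IAM context

def prioritize(incidents, top_n=5):
    """Hand analysts the handful of incidents most likely to be an attack in progress."""
    return sorted(incidents, key=risk_score, reverse=True)[:top_n]

queue = prioritize([
    {"id": "INC-1", "anomaly": 90, "asset_criticality": 80, "privilege_level": 95},
    {"id": "INC-2", "anomaly": 20, "asset_criticality": 10, "privilege_level": 5},
])
print([i["id"] for i in queue])  # ['INC-1', 'INC-2']
```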

  3. Autonomous-Ready (Automated Low-Risk Actions)

AI should take responsibility for: 

  • enrichment 
  • context gathering 
  • suppression 
  • stitching 
  • initial investigation 
  • safe remediation 

Humans remain in control. 

AI removes friction. 

This is the architecture CISOs expect.
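
In code, "humans remain in control" can be a hard boundary: a small allow-list of reversible, low-blast-radius actions runs automatically, and everything else waits for a person. The action names and helper functions below are placeholders.

```python
SAFE_ACTIONS = {"isolate_host", "revoke_session", "force_mfa", "invalidate_token"}  # assumption

def execute(action, target):
    print(f"[auto] {action} -> {target}")

def queue_for_approval(action, target):
    print(f"[review] {action} -> {target} awaiting analyst approval")

def handle(action, target):
    """Run reversible containment automatically; route anything else to a human."""
    if action in SAFE_ACTIONS:
        execute(action, target)
        return {"status": "executed", "action": action, "target": target}
    queue_for_approval(action, target)
    return {"status": "pending_human_approval", "action": action, "target": target}

handle("revoke_session", "user:svc-backup")    # runs immediately
handle("disable_account", "user:svc-backup")   # waits for a human decision
```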

Conclusion: In 2026, AI Must Become the Analyst — Not the Assistant

CISOs have been consistent: 

They don’t want more dashboards. 

They don’t want more alerts. 

They don’t want more “AI descriptions of logs.” 

They want capacity, clarity, and speed. 

Attackers have already automated: 

  • reconnaissance 
  • privilege misuse 
  • lateral movement 
  • API abuse 
  • smart-contract exploitation 
  • identity attacks 
  • fraud engineering  

Defenders are still automating… commentary. 

In 2026, AI must evolve from: 

  • assistant → analyst 
  • explainer → operator 
  • advisor → executor 
  • suggestion → action 
  • enhancement → capability 

AI won’t replace security teams. 

But it must replace the repetitive work that prevents those teams from doing the parts that matter. 

The attackers are using AI to scale. 

The defenders must use AI to survive. 

This is what CISOs really want in 2026 — not another copilot. 

Author: V. Garbar
11 Dec, 2025
CISO @ Q-Sec