SECURITY TECH
AI-centric companies accounted for more than half of all global cybersecurity deals by late 2025. The hardest question in security DD is whether the AI actually works — or whether it generates more alerts than the customer can handle.
Security technology is the one category where a product that doesn't work creates immediate, measurable customer harm. Alert fatigue, false-positive rates, and the gap between demo performance and production performance are the technical risks that determine whether a security product retains its customers or loses them at renewal. We find the gap.
Why it’s different
Security products are tested in production. Failure is not a support ticket — it is an incident.
Security technology has a failure mode that other software categories don't: when it fails, it fails in production, during an active threat, with immediate consequences for the customer. A false negative means a breach. A false positive at scale means an analyst who stops trusting the tool — which is functionally equivalent to having no tool at all. The technical assessment in a security investment is primarily about detection efficacy, false positive rate, and whether the product performs under the operational conditions of the actual customer.
01
Alert fatigue is not a UX problem — it is a product quality problem
Security operations teams receive a median of approximately 960 alerts per day, with around 40% never investigated. A security platform that adds to that volume without reducing it is not solving the customer's problem; it is making it worse. False positive rates are rarely disclosed in pitch decks. Detection rate claims are almost always benchmarked against known-threat test sets rather than against the novel threats that actually matter. We evaluate both precision (the fraction of alerts that are real threats) and recall (the fraction of real threats that are caught), under test conditions that reflect production reality.
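The two metrics above can be computed directly from labelled triage outcomes. A minimal sketch, using illustrative counts rather than real engagement data:

```python
# Sketch: alert precision and recall from labelled triage outcomes.
# Counts are illustrative, not drawn from a real engagement.

def precision_recall(true_positives: int, false_positives: int,
                     false_negatives: int) -> tuple[float, float]:
    """Precision: fraction of raised alerts that were real threats.
    Recall: fraction of real threats that raised an alert."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# Example: of 960 daily alerts, 48 were confirmed threats, and 12 real
# threats produced no alert at all.
p, r = precision_recall(true_positives=48, false_positives=912,
                        false_negatives=12)
print(f"precision={p:.2%} recall={r:.2%}")
# → precision=5.00% recall=80.00%
```

A product can advertise 80% recall while burying analysts in 95% noise, which is why we insist on seeing both numbers together.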
02
Security products perform differently in production than in demos
A SOAR platform that auto-quarantined 14,000 endpoints in 12 minutes due to a misconfigured rule triggered by a legitimate software update is a documented production incident, not a hypothetical. Security automation that has not been tested against the specific IT environment of the target customer carries operational risk that surfaces immediately on deployment. Integration with the customer's existing security stack — SIEM, EDR, identity provider, ticketing system — creates complexity that demo environments don't replicate.
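One control whose absence turns a misconfigured rule into a mass-quarantine incident is a blast-radius guard on automated response. A minimal sketch; the function name and threshold are hypothetical, not any vendor's API:

```python
# Sketch: a blast-radius guard for automated quarantine. If a rule
# matches an implausibly large share of the fleet, refuse to
# auto-execute and escalate to a human instead.
# `should_auto_quarantine` and the 1% default are illustrative.

def should_auto_quarantine(matched_endpoints: int, fleet_size: int,
                           max_blast_fraction: float = 0.01) -> bool:
    """Return True only when the action's scope is within the
    configured blast-radius limit."""
    return matched_endpoints / fleet_size <= max_blast_fraction

# A rule matching 14,000 of 60,000 endpoints is far outside any
# plausible single-threat scope.
print(should_auto_quarantine(14_000, 60_000))
# → False  (escalate rather than execute)
```

In assessments we look for exactly this kind of safeguard: automation that caps its own scope, rather than trusting that every rule is correctly configured.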
03
The threat landscape changes faster than most security products iterate
A detection rule, signature, or ML model trained on threat data from 2023 may not detect the variants, novel techniques, and living-off-the-land methods in use in 2026. Security products require active, continuous threat intelligence feeds, rule updates, and model retraining to remain effective. Companies without a credible threat intelligence programme — either proprietary or through robust third-party feeds — are selling a product that depreciates faster than a laptop.
Assessment Areas
Where we focus in Security Tech engagements.
AI in Security Tech
AI is both the attack surface and the defence. Both matter to the investment.
AI has changed both sides of the security equation. Attackers are using AI to generate novel malware variants, craft convincing phishing content, and automate reconnaissance at scale. Defenders with genuine AI capability — not just rule-based detection with an AI label — have a meaningful advantage. VC markets rewarded AI cybersecurity models with premium valuations in 2025. The premium is only justified when the AI is real.
Opportunities we verify
Behavioural AI that detects novel threats without signatures. The most durable security AI is anomaly detection and behavioural analysis that can identify previously unknown attack techniques by recognising unusual patterns in user, system, or network behaviour. This capability requires high-quality training data, continuous learning pipelines, and low-latency inference at production scale. We assess whether the behavioural model has been tested against genuinely novel threat scenarios.
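The behavioural-baseline idea can be reduced to a toy illustration: flag activity that deviates sharply from an entity's historical pattern. Production systems use far richer features and models; this z-score sketch is only the core concept:

```python
# Sketch: behavioural anomaly detection via a per-entity baseline.
# Real products model many signals jointly; this is the minimal idea.
from statistics import mean, stdev

def is_anomalous(history: list[float], observation: float,
                 threshold: float = 3.0) -> bool:
    """Flag an observation more than `threshold` standard deviations
    from the entity's historical mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observation != mu
    return abs(observation - mu) / sigma > threshold

# A user's typical daily login count vs a credential-stuffing spike.
baseline = [4, 5, 6, 5, 4, 6, 5, 5]
print(is_anomalous(baseline, 40))
# → True  (spike flagged without any signature for the attack)
```

The point of the sketch is what signature-based detection cannot do: the spike is flagged with no prior knowledge of the attack technique, only of the entity's own baseline.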
AI that reduces analyst workload rather than increasing it. The security tools with the highest NRR and retention are those that demonstrably reduce the time analysts spend on false positives and low-priority alerts. AI triage, automated enrichment, and guided response playbooks that compress the analyst workflow are the features that security operations teams actually value.
Identity and access intelligence as a durable category. Identity-based attacks — credential theft, privilege escalation, lateral movement — now account for the majority of significant enterprise breaches. AI-powered identity threat detection and response sits at the intersection of IAM and security operations and addresses a threat vector that perimeter-focused tools cannot see. Companies with strong identity graph analytics and behavioural baselines have a technically differentiated position.
Risks we surface
AI that detects yesterday's threats reliably but misses today's. ML models trained on historical threat data have a known weakness against novel attack techniques — including adversarial evasion specifically designed to avoid detection. An attacker who knows the model architecture can craft inputs that evade detection. We assess whether the company conducts adversarial robustness testing as part of its quality programme.
Alert fatigue that destroys customer trust and drives churn. A security product that generates too many false positives will be turned off or ignored by analysts — which is worse than not having a security product at all, because it creates false assurance. We assess false positive rates in production environments, not in demo or test environments.
The security platform as an attack target. Security platforms have access to the most sensitive data in an enterprise — network traffic, endpoint telemetry, user behaviour, credentials. A breach of the security platform itself would be catastrophic for the vendor's customers and the vendor's business. We assess the security posture of the platform, including its attack surface, supply chain dependencies, and controls around privileged access.
Know what you’re backing before you commit.
X-Ray delivers a full product and tech verdict on any security technology target in one business day — assessing detection efficacy, false positive rates, threat intelligence depth, and integration architecture.
250+ European engagements · 100% partner repeat rate