The 12 Cybersecurity Platforms at the Age of AI (Part 2)
CTEM, GRC and User Awareness goes AI... for the better and the worst.
Hello Cyber Builders đ
AI is showing up in every cybersecurity platform.
Thatâs not news anymore. What matters now is how AI is actually being used and whether there are real patterns worth your attention. As Cyber Builders, whether youâre creating new products or using cybersecurity platforms, you need to understand these patterns to build better tools or choose more effective solutions.
AI is shifting the balance from users learning complex systems to systems that understand and respond to users in plain language.
This week, weâre exploring a few more categories (CTEM, User Awareness, GRC) where vendors are integrating AI into their products and services. Iâll highlight whatâs different, whatâs copy-paste marketing, and what might actually change the way you work.
Next week, Iâll wrap it all up into a bigger picture view. We'll explore key questions, such as: How are AI-driven interfaces reshaping efficiency and user experience in cybersecurity? What challenges and solutions are emerging in aligning AI applications with human thinking? Right now, it's messy, noisy, and more than a little confusing, but clarity is on the horizon.
Bridging the Gap: AI, UX, and the Future of Security Tools
When you look across industries, most products use generative AI the same way: to make the interface smoother. Iâve already covered that angle in depth.
Cybersecurity is no different. These products have always been challenging to set up, tune, and master. Rules and queries are powerful, but they werenât designed for humans. They were designed for efficiency, and they speak a language most people never want to learn. Anyone who has attempted to create a KQL query or a Snort signature is familiar with this issue.
Thatâs where AI is quietly changing the game. It serves as the middleman between users and tools. Instead of forcing you to adapt to a cryptic query language, the software starts adapting to you. You describe what you want in plain language, and the AI translates it into something the system understands.
Another common UX-driven use case is creating personalized summaries from a large dataset generated by IT and cybersecurity platforms, resulting in a report that addresses the questions users raise.
But the real innovation would be to see these synthesis tools get smarter. Instead of producing generic summaries, AI-powered platforms should tailor reports to the specific needs, risk profiles, and preferences of each team or user, integrating their own data or alerts.
You should be able to prioritize the most urgent threats in your context and even turn findings into actionable recommendationsâalmost automatically. Weâre not there yet, but progress is clear. Soon, AI-powered tools will help connect technical data with business needs, providing each team with insights they can actually use. This shift is crucial. It will increase productivity and transform how you utilize these products.
And itâs one of the key threads Iâll be pulling on in next weekâs wrap-up: is AI finally helping security tools work the way humans think, instead of the other way around?
AI Security features pop up in every cybersecurity platform.
As AI/LLM tools pervade corporate environments, new risks emerge, including prompt injection, shadow AI/LLM usage, model theft, and the leakage of sensitive data. It is striking to see that AI Security (how you assess your exposure to AI threats and detect them) is a set of features that is spreading across multiple categories. Last week, we saw that Cloudflare integrated it like an âAI Firewallâ and Wiz added it to its Cloud Security platform as an âAI-SPM (Security Posture Management)â.
In the CTEM category, vendors are responding by deeply embedding AI into exposure management. Below are concrete examples from leading vendors, illustrating the real features, their functionality, and the outcomes they produce. Once again, they also integrate AI Security and utilize AI to enhance cybersecurity.
In the GRC platforms, you are seeing AI Security Governance features. In contrast, in the User Awareness category, platforms are bringing new content to make end-users aware of AI Security risks.
While I understand all these new features and see them as valuable, I think they are confusing. Yes, misused AI technologies are a new âthreatâ within risk matrices. However, I think it will be confusing over time for MSSPs, CISOs, and the entire cybersecurity community if AI Security is somewhat ubiquitous yet unclear at the same time. Weâll see how this goes in the following years.
The 12 Cybersecurity Platforms at the Age of AI (Part 2)
Continuous Threat Exposure Management (CTEM)
Continuous Threat Exposure Management (CTEM) is an ongoing approach to identifying, assessing, and managing an organizationâs vulnerabilities and potential attack surfaces. CTEM approach includes continuous monitoring for exposures, enabling organizations to address risks as they emerge proactively.
đ Includes: Automated vulnerability scanning, attack surface management, penetration testing, and exposure triage.
Tenable â ExposureAI and AI Exposure
Tenable has gone beyond traditional vulnerability scanning with its ExposureAI and AI Exposure features. (Note to the Tenable Product Marketing team, isnât it a bit confusing?)
AI Exposure
Discovery of shadow AI. AI Exposure inventories AI apps, libraries, and plugins across environmentsâsurfacing risky usage that would otherwise not appear on a CVE list.
Governance and policy enforcement. Beyond discovery, AI Exposure introduces policy controls, enabling enterprises to monitor and restrict the use of Copilot or ChatGPT Enterprise.
ExposureAI
Summarized attack paths and guided fixes. ExposureAI generates plain-language summaries of attack paths and recommends mitigations, so analysts donât have to manually parse graphs before filing remediation tickets.
Unified prioritization. ExposureAI integrates context from EDR, cloud, OT, and ITSM connectors, surfacing toxic combinations and ranking issues by business impact.
User value: faster triage, clear fixes, and visibility into AI risk that was previously invisible.
Qualys â TotalAI
Qualys has extended its TruRisk platform with TotalAI.
AI and LLM workload discovery. TotalAI inventories models, GPUs, cloud services, and shadow AI workloads.
Scanning for AI/LLM risks. It identifies prompt injection, data leakage, jailbreaks, and model theft, mapped against the OWASP Top 10 for LLMs.
Risk scoring and compliance. Findings are integrated into TruRisk, so AI risks are ranked alongside other exposures. Compliance reports are generated for GDPR, PCI, and other relevant regulations.
Scenario-based coverage. TotalAI already tests against dozens of attack scenarios, from multilingual exploits to bias amplification.
User value: treating AI as part of the exposure landscape, not an afterthoughtâso AI assets are inventoried, tested, scored, and governed with the same rigor as everything else.
Rapid7 â Incident Command and AI Triage
Rapid7 is embedding AI directly into its detection and exposure stack with Incident Command and AI Alert Triage. It appears that they are utilizing the AI integration also to facilitate a category shift. From a vulnerability scanner, to a CTEM platform.. now to a wider positioning with CTEM but also SIEM, MDR, and more.
Unified context for investigations. Incident Command combines exposure visibility with threat detection, allowing analysts to view alerts and the assetâs risk posture within the same workflow.
Agentic AI workflows. These workflows triage, investigate, and propose response steps, trained on Rapid7âs own SOC data.
Alert triage at scale. The AI Alert Triage classifies alerts automatically with a claimed 99.93% accuracy in identifying benign cases, thereby reducing the number of false positives.
Report automation. AI drafts investigation summaries, thereby reducing the time analysts spend on documentation.
User value: reduced alert fatigue, consistent investigations, and faster hand-offs to IT teams for closure.
User Awareness & Training: where AI is actually useful
Phishing and social engineering are now often generated by machines. Awareness programs had to evolve from annual slide decks to continuous, data-driven training that mirrors live attacks and adapts to each person. Below are concrete examples of how KnowBe4 and Hoxhunt are applying AIâwhat the feature does, where it resides in the product, and the value it provides to a security team.
KnowBe4 â AIDA (Artificial Intelligence Defense Agents)
KnowBe4âs AIDA adds four agents on top of its training stack: an Automated Training Agent that assigns modules based on user risk, a Template Generation Agent that creates phishing simulations aligned to current attack patterns, a Knowledge Refresher Agent that pushes short spaced-repetition quizzes, and a Policy Quiz Agent that turns your policies into checks for understanding. Datasheet (PDF) ¡ Admin guide.
User value: less manual campaign design, training tied to observed behavior, and policy comprehension you can audit rather than assume.
Hoxhunt â GenAI content + adaptive simulations
Hoxhuntâs GenAI Content Generation converts your security policies or playbooks into publishable lessons (cards + quiz) in minutes, then lets you AI-translate them into ~40 languages for global rollouts.
For simulations, Hoxhunt explains how it utilizes real phishing reports across its network to keep templates current and how scenarios adapt to user behavior, including factors such as difficulty (easier or harder next time) and role/location context. The article also outlines the multi-language generation path and how training content aligns with company policy inputs. See how Hoxhunt uses GenAI in training.
User value: policy updates become training within the same week; localized content is shipped promptly without waiting for translators; simulations closely mirror live attacker tactics.
GRC & Compliance: where AI is actually useful
GRC used to be about paperwork: policies in Word, controls in spreadsheets, and evidence in emails. AI is changing the loop: drafting policies from obligations, mapping regulatory changes to controls, and transforming evidence into control assertions that can be queried. Below are concrete, vendor-specific examples from ServiceNow and OneTrustâwhat the feature does, where it lives, and the value to your team.
ServiceNow â Now Assist for IRM + AI Control Tower
ServiceNowâs recent Zurich-family updates add several âdo-the-workâ capabilities inside Integrated Risk Management (IRM):
Common control objective creation. Now Assist for IRM identifies similar control objectives across your library and generates a consolidated âcommon control objective,â reducing duplication before audits and attestations.
Regulatory change â control mapping. When a new or changed requirement is introduced, Now Assist proposes mappings to your internal controls, allowing you to run gap checks without manually matching clauses to procedures.
Risk event summarization. For incidents and losses recorded in IRM, Now Assist summarizes lifecycle details (root cause, actions, recoveries) so you can speed risk write-ups and RCA reviews.
AI governance in-platform. AI Control Tower adds AI inventory and governance capabilities, aligning with ISO/IEC 42001 and the EU AI Actâuseful if you want AI risk managed alongside other obligations in ServiceNow.
Automated item generation. Outside of GenAI, IRM can auto-generate risks and controls from policies/standards (âitem generationâ) to keep your register consistent with the policy library.
User value: less time reconciling frameworks and more time validating what changed (and where). AI handles the tedious mapping and summarization; your team decides what to accept and what to fix.
OneTrust â AI Governance (model registry, intake, and assessments)
OneTrust is bringing AI/ML risk into mainstream GRC operations:
Model & agent registry with data-plane context. Build an auditable inventory of AI systems and sync metadata from platforms like Databricks Unity Catalog to keep models/agents and datasets visible to compliance.
Framework-based risk assessments. Assess AI use against NIST AI RMF, OECD, and other frameworks using out-of-the-box templates, which help standardize intake and impact scoring.
AI intake & workflow embedding. Intake forms auto-score risk and can be embedded in tools like Jira, allowing projects to be registered and routed for review before build or buy decisions.
Third-party AI risk. Extend questionnaires with AI-specific due diligence topics to evaluate vendor models and hosted services.
EU AI Act readiness. Solution pages and resources map obligations and roles (provider/deployer) into operational tasksâhelpful to separate âwhatâs in scopeâ from âwho must do what.â
User value: Your AI program becomes traceable (identifying what models exist, who owns them, and what data they utilize) and defensible (assessed against a known framework, with third-party risk and regulatory mapping built in).
Conclusion
In summary, the integration of artificial intelligence across cybersecurity platforms is accelerating. Weâve seen impact in domains such as Continuous Threat Exposure Management (CTEM), User Awareness Training, and Governance, Risk, and Compliance (GRC).
AI is also a new threat, and AI Security has become a feature of many existing platforms to address the latest issues and market hype. I believe many cybersecurity professionals find it confusing, expecting a more âholisticâ approach and still seeking strategies and guidelines where the market offers new features.
Anyway, AI has already shaped cybersecurity products, and we are shifting from complex systems to user-friendly platforms with more automated detection. I call this Intelligent Security, as I've discussed in previous articles.
I am continuing this review across platforms and will do a wrap-up in the following publication. In the meantime, subscribe if you donât want to miss it.
Laurent đ