Fall 2024 Best-Of: AI Forces, LLMs' Cyber Capabilities, Investment Insights, and Shifting Left in Software Security
This is the Cyber Builders post you want to read! A two-month recap of Cyber Builders’ insights, covering everything from VC shifts to "Shift Left", with a spotlight on AI.
Hello Cyber Builders 🖖
September and October were eventful months, packed with insights into AI’s evolving role in cybersecurity, new software security approaches, and an overview of critical industry trends.
This week, I am offering you a comprehensive recap of these months’ posts, organized by theme, to provide a fuller picture of each topic.
In this best-of:
AI is shaping the future of cybersecurity, with models like O1 and the vast cybersecurity knowledge already embedded in GPT-4 and Mistral Large 2. AI also demands ever more capital, data, and distribution resources.
A guide to DevSecOps, Shift Left, and AppSec, and why collaboration is key
How the cybersecurity VC market is changing and why M&A activity is slowing
AI & Cybersecurity
The Evolution of AI in Cybersecurity: Benchmarking GPT-4 and Mistral Large 2
My deep dive compared the GPT-4 and Mistral Large 2 models' abilities to tackle cybersecurity tasks. Testing showed GPT-4's superior knowledge, but Mistral Large 2 offered comparable performance, especially on CEH exam topics, where it answered 91% of questions correctly. Can you believe it?!
Smaller models prove especially useful for cybersecurity teams concerned about privacy and budget. They cost less and can run securely on local hardware, making them invaluable for teams wanting high-level support without sharing sensitive data with external servers.
OpenAI’s O1 Preview: Transforming AI from Threat to Tool
The O1-preview model from OpenAI was a hot topic, with discussions centered on its potential as a cybersecurity asset rather than a risk. While the model struggled with penetration testing challenges, I see promise in automating routine security tasks and boosting productivity.
The cybersecurity community must counter fears that AI could autonomously exploit vulnerabilities and explain how models like O1 can enhance daily cybersecurity work, reducing overload on cybersecurity teams facing a talent shortage.
Apple’s Privacy-Centric AI Strategy
Apple’s recent AI integration focuses on privacy. It uses on-device processing to keep sensitive data local while providing robust AI functionality. This post highlighted Apple’s strategic embedding of AI in hardware, reinforcing its long-standing commitment to privacy while delivering high-performance devices.
Apple's approach prevents data exposure, aligning AI innovation with user privacy—a model worth examining for builders aiming to blend tech with trust. After last week's ML & Homomorphic Encryption publications, I see Apple in that camp (more on this soon).
Centralizing AI’s Driving Forces: Capital, Data, and Distribution
This long-form post explored the structural forces propelling AI forward: capital, data, and distribution. Big tech controls these resources, giving large players a massive advantage.
Shift Left in Software Security
Shifting Left Part 1: Announcing a New Software Security Initiative
This new CyGO Venture Studio initiative opens a dialogue on software security, inviting feedback to refine practices for implementing security earlier in development. “Shifting Left” moves security checks from post-production to design and coding stages, helping teams preemptively tackle vulnerabilities.
This first post urged community participation, emphasizing that direct input from builders could help develop user-centered security solutions that address real, everyday challenges.
It is still not too late to participate. DM me!
Shifting Left Part 2: Practices for a Collaborative Security Culture
Building on the Shifting Left theme, this post explored specific team-based security practices, from defining product security terms to setting up incident response teams. Emphasizing a collaborative approach, it highlighted how developers and security engineers could integrate security protocols within workflows. Best practices included Secure SDLC frameworks, threat modeling, and security champions to keep teams alert to vulnerabilities early on, creating a proactive rather than reactive security culture.
Trends (VC Market & Other Insights)
Unmasking Digital Deception: Insights from DEFCON 32
This summary of DEFCON’s talk on deception and counter-deception techniques sheds light on the psychological and technical aspects of digital manipulation. Hackers and defenders discussed how biases fuel misinformation and outlined methods like information triangulation to combat it. Practical strategies involved diverse security tools to counter human and AI deceptions online, encouraging developers to adopt these as vital defenses in their cybersecurity arsenals.
Q2 2024 VC Market Overview: Trends in Cybersecurity Investment
Despite ongoing VC caution, this post pointed to steady investment in AI and cybersecurity, reflecting the sustained importance of both fields. The “Seed Apocalypse”—difficulties securing Series A funding—was particularly intense in the U.S. but less so in Europe. Mergers and acquisitions have slowed, with VCs focusing on growth and profitability metrics. This environment presents opportunities and challenges for startups needing to stand out in a competitive, AI-driven market. [Link to post]
Thank you for following along with Cyber Builders!
Laurent 💚