Generative AI & Cybersecurity: Security Practitioners will be leading new use cases
Andrew Ng's take on the current and future potential of AI; security practitioners to lead the way. Plus, launching a new collaborative project!
Hello Cyber Builders!
I think there is a lot of buzz and hype around Generative AI and LLMs nowadays. There is no meeting with a startup or an investor where I am not asked, "What is the most interesting startup using AI for cybersecurity today?"
I am sorry, guys, but I think this is the wrong question. I am sure the next "big" use case will emerge from security practitioners, not from an entrepreneur or a security vendor.
I kindly ask all readers to share their pains and hopes around Cyber and AI. Please take the time to do so.
This post concludes a five-week series on AI and Cyber. I should add "for the moment," as I expect this topic to be one of the significant trends of the years to come: "AI is the new electricity." The author of this great quote, Andrew Ng, will guide us through this post to understand how AI has impacted, and will impact, the world.
Before we dive in, let me remind you of the previous posts:
- Post 1 - AI Meets Cybersecurity: Understanding LLM-based Apps: A Large Potential, Still Emerging, but a Profound New Way of Building Apps
- Post 2 - A new UX with AI: LLMs are a Frontend Technology: Halo effect and Reasoning, NVIDIA PoV, History of UIs, and 3 Takeaways on AI and UX
- Post 3 - AI and Cybersecurity: An In-Depth Look at LLMs in SaaS Software. Using fictional HR software to understand the value and risks of using Generative AI in SaaS apps. A simple threat model to reflect on a practical use case.
- Post 4 - Building Effective LLM-Based Apps: A Cybersecurity Chatbot: Building a Cybersecurity Chatbot in Minutes, Exploring its Limitations, Improving Accuracy with Trusted Data.
Opportunities in AI - Sept 2023
Andrew Ng is a renowned figure in artificial intelligence (AI), and his contributions have profoundly impacted the field. He is widely recognized for his expertise and insights in AI, earning him the reputation of a leading industry authority. Ng co-founded and led Google Brain and was formerly Chief Scientist at Baidu. He is a professor at Stanford University and also an entrepreneur, having co-founded Coursera and the AI Fund, a venture studio that creates AI startups.
Ng's talk, "Opportunities in AI," has a neutral title that may seem low-profile; still, it delivers high-value content. With his extensive knowledge and experience, Ng is well placed to provide valuable insights into the future of AI and its potential opportunities.
You can watch his talk on YouTube:
Andrew highlights the potential of artificial intelligence and the trends to expect in 2023. He focuses on the established role of supervised learning and the emergence of newer technologies such as generative AI and large language models (LLMs). He breaks down the world of AI into three main categories: supervised learning, generative AI, and reinforcement learning.
Supervised learning is the most mature and widely used form of AI. It involves training systems to make predictions from labeled data. Generative AI, on the other hand, is a newer, exciting field that involves creating new content, from writing articles to designing buildings.
Andrew Ng insists on three points:
Supervised learning was, and will remain for the next three years, the primary use case.
Still, supervised learning needs a large dataset, a large amount of computing power, and significant engineering time to build the model, clean up the data, refine and iterate, and finally get a valuable model integrated into a product. Supervised learning has had an enormous impact in the B2C market, where companies like Google or Facebook use it to build automated photo tagging or personalized news feeds. For these giants, the ROI justified spending millions on their AI teams, ultimately increasing the time spent on the platform and the ad-driven revenue.
Generative AI reverses the paradigm. Because you leverage a foundational model, pre-trained on trillions of data points and fine-tuned to assist with many tasks, you can build an application in a matter of hours, either by using low-code tools or by providing just a prompt or a handful of data points. This enables the long tail of use cases where no ROI was possible before, given the engineering time needed to build a model. The need for a prompt engineer replaces the need for a data scientist.
Cybersecurity Use Cases from Supervised Learning to Generative AI and LLMs
You've likely interacted with supervised learning algorithms if you've used or worked with AI-integrated products. A prime example is email spam classification, a use case that became popular in the early 2000s. Classifying URLs into different categories for analyzing employee browsing for compliance reasons or determining if a domain is potentially malicious is another common application of supervised learning.
In cybersecurity, these use cases involve processing a colossal amount of data, with hundreds of thousands or even millions of examples, each associated with a label such as "spam / not spam" or "malicious / not malicious." We can create a model to classify new data by training these algorithms. However, this is a time-consuming process that requires significant computational capacity and a large volume of data.
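To make this concrete, here is a minimal sketch of that classic supervised pipeline, using scikit-learn. The tiny inline dataset is purely illustrative; a real spam filter would be trained on hundreds of thousands of labeled messages.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Labeled examples: 1 = spam, 0 = legitimate. A production system would
# need a massive labeled corpus, not four toy messages.
emails = [
    "Win a free iPhone, click here now",
    "Quarterly security review meeting at 10am",
    "Your account is locked, verify your password immediately",
    "Lunch tomorrow to discuss the audit findings?",
]
labels = [1, 0, 1, 0]

# Turn raw text into TF-IDF features, then fit a linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

# Classify a new, unseen message (expected: [1], i.e. spam).
print(model.predict(["Claim your free prize today"]))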
Generative AI and LLMs are game changers because they slash the engineering time needed for security automation.
Generative AI technologies, specifically generative pre-trained transformers (GPT), are changing the game. Let's break down what GPT means:
"Transformers" are the basic building blocks of these models, capable of understanding the connection between data in a sequence within a significant context.
"Pre-trained" refers to these models being trained on trillions of data points, allowing them to comprehend the relationships between different concepts, signals, words, or images.
"Generative" implies that once these models are given an entry point, they can generate new data based on the initial input.
The introduction of GPT has significantly reduced the time and resources required. Instead of collecting billions of data points and spending six months on resource-intensive work, users can now create simple applications on top of freely available foundational GPT models. What used to take six to twelve months can now be accomplished in mere hours.
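For contrast with the supervised sketch above, here is what the prompt-driven version of the same spam-triage task can look like: no training set, no feature engineering, just an instruction. This is a hedged sketch assuming the official `openai` Python package and an API key in the environment; the model name is an example, not a recommendation.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def classify_email(body: str) -> str:
    """Zero-shot spam triage: no labeled dataset, just an instruction."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; any capable chat model works
        messages=[
            {"role": "system",
             "content": "You are a spam filter. Answer with exactly one word: "
                        "spam or legitimate."},
            {"role": "user", "content": body},
        ],
    )
    return response.choices[0].message.content.strip().lower()


print(classify_email("Claim your free prize today"))  # expected: spam
```

The trade-off shifts from up-front engineering to per-call accuracy and cost, which is precisely the economics that opens up the long tail of use cases.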
This is why I am confident that cybersecurity practitioners can effectively leverage GPT. They understand their challenges and know where they would like to see improvements. With GPT, they can get rapid, effective results without needing a data scientist or embarking on a lengthy, engineering-heavy project.
The next big AI and cyber use cases will come from security practitioners.
Generative AI (including LLMs) and cybersecurity are two subjects that currently hold great importance for many of my readers. Frequently, during conversations with entrepreneurs or VCs, the question arises about the next influential use case and which startup is leading the way in AI-driven cybersecurity. Undoubtedly, this is a complex subject. Moreover, the market is flooded with startups and new solutions, making it challenging to navigate them all.
I believe the first groundbreaking use cases of generative AI in cybersecurity will emerge from the professionals who do the work daily: the auditors, pen-testers, SOC analysts, and security engineers.
These experts will be the pioneers in using generative AI to automate the repetitive tasks that consume their valuable time. They understand best where their time goes and where they face inefficiencies. By hacking together tools that improve their daily work, they have everything to gain.
As evidence, we can look at Flexport, a company that revolutionized marine transport by leveraging a world-class data science team. Flexport outperforms traditional freight forwarders by reducing costs and offering superior service; its optimized, technology-driven approach is an excellent example. Take a look at their latest tweet.
Innovation will first occur in cybersecurity teams because they encounter the problems and challenges firsthand. They are the ones who will lead the way in saving time.
Another example is Thomas Roccia, who has truly grasped this concept. In his latest Medium post, he openly shared the code and sign-up for a newsletter automatically generated by an LLM. The newsletter takes information from threat intelligence sources and synthesizes it into a format that is easier for analysts to read and comprehend. This simple yet effective use case perfectly aligns with the needs of security practitioners like Thomas.
These use cases will be replicated as they save time and provide security practitioners with more accurate and relevant information.
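To illustrate the pattern, here is my own hedged sketch of such a pipeline, not Thomas's actual code: pull items from a threat-intelligence feed and ask an LLM to condense them into an analyst-friendly digest. The feed URL is hypothetical, and `feedparser` plus the `openai` client are assumed dependencies for illustration.

```python
import feedparser  # assumed dependency: pip install feedparser
from openai import OpenAI

client = OpenAI()
FEED_URL = "https://example.com/threat-intel.rss"  # hypothetical feed URL

# Collect the latest items from the feed into a plain-text list.
feed = feedparser.parse(FEED_URL)
items = "\n".join(
    f"- {entry.title}: {entry.get('summary', '')}" for entry in feed.entries[:10]
)

# Ask the model to turn raw intel items into an analyst-friendly digest.
digest = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "You are a threat-intelligence analyst. Summarize the items "
                    "below into a short newsletter: group related threats and "
                    "highlight affected products and CVEs."},
        {"role": "user", "content": items},
    ],
)
print(digest.choices[0].message.content)
```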
Let's work together.
I am incredibly excited about the potential that LLMs hold for cybersecurity. If you're interested in exploring how to implement these use cases, I invite you to contact me. Let's work together to create exciting and valuable solutions to enhance your daily work.
If you are a security practitioner, please get in touch with me. I am launching a simple form (2 minutes to fill in) that allows you to share your issues and challenges, anonymously or openly.
Don't worry about whether AI, or yet another product, could solve it. Instead, articulate the problems you face.
If you are a SOC analyst spending excessive time analyzing logs, an auditor struggling with time-consuming report writing, or a cybersecurity professional burdened with filling out questionnaires, I want to hear from you. These are the types of problems I am eager to address, along with any others that consume your time. Together, we can bring about meaningful change. So waste no more time and complete this two-minute questionnaire.
Share your pains and hopes with the Cyber Builders community!
I'll write some summary posts once I get enough answers from the readership.
"Talk" (or Substack) to you next week.
Laurent