In this Help Net Security interview, Lior Div, CEO of Seven AI, talks about the concept of agentic AI and its application in cybersecurity. He explains how it differs from traditional automated security systems by providing greater autonomy and decision-making capabilities.
Div highlights that agentic AI is particularly well suited to combating modern AI-powered threats, such as AI-generated phishing and malware, by processing massive volumes of alerts in real time.
How do you differentiate agentic security from traditional automated cybersecurity solutions, particularly in terms of autonomy and decision-making? What gaps does this paradigm fill in traditional security approaches?
Traditional automation is typically based on predefined “if-then” rules, where the person writing the code must anticipate all possible outcomes and decisions in advance. This works in static environments, but cybersecurity is never static. For example, if an email contains URLs, text, or attachments, automated systems can be programmed to inspect those elements. This level of automation is suitable for initial analysis and enrichment.
However, challenges arise when a more detailed investigation is required. If the system identifies a malicious file, it can only follow predefined paths; traditional automation stops there, and the next steps are left to human analysts.
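To make the contrast concrete, here is a minimal sketch of what such predefined, rule-based email triage might look like; the rules, field names, and verdicts are hypothetical and purely illustrative:

```python
# Hypothetical illustration of traditional "if-then" email triage.
# Every condition and next step must be anticipated in advance by the rule author.

SUSPICIOUS_EXTENSIONS = {".exe", ".js", ".vbs", ".scr"}

def triage_email(email: dict) -> str:
    """Return a verdict based on fixed, predefined rules."""
    # Rule 1: flag attachments with risky file extensions
    for attachment in email.get("attachments", []):
        if any(attachment["name"].lower().endswith(ext) for ext in SUSPICIOUS_EXTENSIONS):
            return "escalate_to_analyst"  # automation stops; a human takes over

    # Rule 2: flag URLs that appear on a static blocklist
    blocklist = {"malicious.example.com"}
    for url in email.get("urls", []):
        if any(domain in url for domain in blocklist):
            return "quarantine"

    # Anything the rule author did not anticipate falls through untouched
    return "deliver"
```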
Agentic AI, on the other hand, goes beyond predefined paths. When it detects a malicious file, it doesn't just stop there: it can dynamically initiate further actions, such as launching an Endpoint Detection and Response (EDR) investigation, without the need for pre-scripted instructions. Like a human analyst, it can make real-time decisions based on changing conditions, but without the limitations of static code.
I often liken this to teaching a self-driving car to drive using "if-then" rules. In a controlled environment, you might manage to write code that works, but such a system breaks down the moment the car pulls onto a busy road where variables are constantly changing. The same applies to traditional cybersecurity automation: it cannot adapt to the complexity of real-world cyber threats. Agentic AI, however, can respond dynamically to unexpected situations, making it far more capable of handling today's complex cyber environments.
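By contrast, the agentic pattern described above can be pictured roughly as a loop that chooses its next investigative step at runtime based on what it has found so far, for example escalating from a sandbox verdict to an EDR lookup to containment. The tool names and decision logic below are assumptions for illustration, not a description of any vendor's actual implementation; in a real agentic system the choice would typically be made by a model rather than hard-coded conditions:

```python
# Hypothetical sketch of an agentic investigation loop: the next action is
# chosen at runtime from available tools based on accumulated findings,
# rather than following a pre-scripted path.

def detonate_file(findings):      # stand-in for a sandbox detonation tool
    return {"verdict": "malicious", "hash": "abc123"}

def edr_investigation(findings):  # stand-in for an EDR query tool
    return {"hosts_with_hash": ["workstation-42"]}

def isolate_host(findings):       # stand-in for a containment action
    return {"isolated": findings["hosts_with_hash"]}

TOOLS = {
    "detonate_file": detonate_file,
    "edr_investigation": edr_investigation,
    "isolate_host": isolate_host,
}

def choose_next_action(findings: dict):
    """Decide the next step from current evidence (in a real agentic system,
    an LLM or policy model would make this choice, not fixed conditions)."""
    if "verdict" not in findings:
        return "detonate_file"
    if findings["verdict"] == "malicious" and "hosts_with_hash" not in findings:
        return "edr_investigation"
    if findings.get("hosts_with_hash") and "isolated" not in findings:
        return "isolate_host"
    return None  # investigation complete

def investigate(alert: dict) -> dict:
    findings = dict(alert)
    while (action := choose_next_action(findings)) is not None:
        findings.update(TOOLS[action](findings))
    return findings
```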
With the rise of AI-generated attacks, such as AI-created phishing and malware, how can agentic security effectively combat these threats? Can you share any specific examples?
One of the main challenges we face today is the sheer volume of attacks. Hackers can use AI to generate countless variations of phishing emails and malware, far beyond what human analysts can handle manually. Agentic AI can operate at this scale, reviewing every alert and every potential threat in real time without fatigue or oversights.
Another important factor is speed. As the volume of attacks increases, the time you have to respond decreases. Unlike human analysts, AI systems can review every alert as if it were the most important investigation. While humans can be overwhelmed and make mistakes, AI systems remain consistent and fast, processing information much faster than humans.
Traditional security approaches often try to reduce the noise by focusing only on the highest-priority alerts. Agentic AI is a game-changer: it treats every alert and every email as if it matters, investigates every possibility, and does so in a fraction of the time. That level of thoroughness means potential threats can be investigated without compromise, leading to more comprehensive protection.
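A schematic way to see the difference, assuming hypothetical alert fields and an illustrative investigation stub:

```python
# Hypothetical contrast: priority-filtered triage vs. investigating every alert.
from concurrent.futures import ThreadPoolExecutor

def investigate(alert: dict) -> dict:
    # Stand-in for a full investigation of a single alert.
    return {**alert, "investigated": True}

def traditional_triage(alerts: list) -> list:
    # Only the highest-severity alerts get attention; the rest are dropped.
    return [investigate(a) for a in alerts if a.get("severity") == "high"]

def agentic_triage(alerts: list) -> list:
    # Every alert is investigated, in parallel, regardless of priority.
    with ThreadPoolExecutor(max_workers=16) as pool:
        return list(pool.map(investigate, alerts))
```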
How does agentic security address the scale and speed of modern cyber threats? What role does machine speed play in managing massive volumes of alerts?
Agentic AI excels at parallel processing: it can take on multiple alerts simultaneously and analyze and investigate each one in detail. But there's another layer to this: context awareness. The AI does more than plow through alerts. Over time, it learns the nuances of the specific environment it is protecting and understands the organization's unique context.
For example, if the AI detects an IP address that was previously flagged in an internal database as part of a routine network scan, it can correlate that information and dismiss the alert as benign. A human analyst would struggle to remember such details across a large number of alerts, but the AI handles this context effortlessly, leaving attention free for actual threats.
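A simplified sketch of that kind of correlation, assuming a hypothetical internal list of known scanner IPs:

```python
# Hypothetical sketch: correlating an alert against organization-specific
# context (here, IPs previously identified as routine internal scanners).

KNOWN_INTERNAL_SCANNERS = {"10.0.5.17", "10.0.5.18"}  # assumed internal context

def assess_alert(alert: dict) -> dict:
    src = alert.get("source_ip")
    if src in KNOWN_INTERNAL_SCANNERS:
        # Context learned from this environment: routine scan, not a threat
        return {**alert, "verdict": "benign", "reason": "known internal scanner"}
    return {**alert, "verdict": "needs_investigation"}

print(assess_alert({"source_ip": "10.0.5.17", "type": "port_scan"}))
```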
This ability to correlate information, learn from past data, and adapt to specific environments gives agentic AI a significant advantage when managing large volumes of alerts. Additionally, the AI remembers what it learns and can apply it elsewhere: after addressing a situation for one customer, it can apply that learning to all other customers.
Can you share a real-world example of how agentic security significantly reduced response time to a complex cyber threat? What lessons were learned from that example?
We are already seeing that agentic AI outperforms human analysts in both speed and thoroughness. Investigations performed by the system are orders of magnitude faster than those carried out by human analysts. But speed isn't the only thing that matters; the depth of the investigation does too. Agentic AI can track leads and analyze data much more thoroughly than manual methods allow.
One key takeaway is that while people are still hesitant to trust fully autonomous systems with critical responses, agentic AI can provide step-by-step explanations of its behavior. This allows human analysts to review and approve actions before they are taken, creating a hybrid model where the AI does the heavy lifting and humans supervise.
As agentic AI becomes more autonomous, what ethical considerations need to be taken into account, especially regarding decisions made without human oversight?
The important point here is that agentic AI systems in cybersecurity are not general-purpose AI. These are not Skynet-like systems from science fiction, but highly specialized agents designed to perform specific tasks within the context of cybersecurity. By design, they cannot make decisions outside their role.
In fact, agentic AI can improve privacy protection by eliminating the need for humans to review sensitive data such as browsing history or email content. The AI focuses solely on determining whether something poses a threat, without dwelling on the personal information involved. In some ways, this can lead to better privacy than traditional methods, where analysts could inadvertently access personal data.
Another important consideration is transparency. Our agentic AI system is not a black box: it provides a clear audit trail of which tools were used, how decisions were made, and what actions were taken. This level of auditability ensures that humans can see the system's behavior, understand its decisions, and maintain control.
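One simple way to picture such an audit trail is a structured record per tool invocation; the format below is a hypothetical example, not Seven AI's actual schema:

```python
# Hypothetical audit-trail record: each tool invocation, its inputs, result,
# and the resulting decision are logged so humans can review every step.

import json
from datetime import datetime, timezone

def audit_entry(tool: str, inputs: dict, result: dict, decision: str) -> str:
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "inputs": inputs,
        "result": result,
        "decision": decision,
    })

print(audit_entry("edr_investigation",
                  {"file_hash": "abc123"},
                  {"hosts_with_hash": ["workstation-42"]},
                  "recommend host isolation, pending analyst approval"))
```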
As AI continues to evolve, the potential for agentic AI to revolutionize cybersecurity is enormous. The combination of speed, scale, and contextual understanding makes it far superior to traditional automation. But while this technology is powerful, we are committed to ensuring that it operates with full transparency and ethical oversight, so that organizations can trust the systems they rely on to protect them.