ReliaQuest’s 15+ years of experience comes together in AI Agent to handle everyday Tier 1 and Tier 2 tasks.
AI agents are everywhere these days. In the past few weeks alone, we have seen them pop up at Salesforce, Workday, and Google. Today, ReliaQuest joins that list. This AI agent, however, focuses on security operations, which is far more fundamental to an organization than an agent that helps it communicate with customers or employees. At the end of the day, this is about the security and safety of the organization. Letting autonomously operating agents make decisions in that area is a big step for many organizations.
In an ideal world, we wouldn’t need an AI agent like ReliaQuest’s. In such a world, cyber threats would be so rare that humans could handle them comfortably, infrastructure would be simple, and it would be completely clear where to find the data needed to process alerts quickly. But we don’t live in an ideal world. The number of threats keeps growing rapidly, security personnel are in short supply, and many organizations run highly complex infrastructures. Some degree of autonomy therefore becomes increasingly necessary, and for that you need an AI agent you can trust.
ReliaQuest AI Agent
ReliaQuest says that by adding an AI agent to its GreyMatter platform, it is the first to introduce such an AI agent. Given the sheer variety of security solutions on the market, that claim is difficult to verify. It would not surprise us if a startup somewhere in Israel were doing something similar. SentinelOne’s Singularity Platform with Purple AI added to it also appears comparable. As far as we can tell at this point, the two are actually quite different, although there is some overlap. Conceptually, both are trying to solve the same problem.
Whether ReliaQuest is releasing something completely new doesn’t really matter, though. The most important question for this article is what the new AI agent actually is. To find out, we spoke briefly with ReliaQuest’s Brian Foster. He is president of product and technology operations at the company, which was founded in 2007.
Turn human expertise into AI
Foster cites the company’s 17 years of existence as a key reason it was able to develop AI Agent. “Without the experience we gained during that time, we would not have been able to develop an AI agent,” he says. The AI agent draws on all the know-how accumulated to date, expertise that people have built up.
ReliaQuest refers to the result of all this expertise as cyber analysis techniques. This methodology forms the basis for the AI agent. Part of the methodology is a so-called planner, which has access to lots of small tools. In Foster’s words, each of these tools “solves a finite problem.” In other words, ReliaQuest has divided security operations into many smaller problems, each made up of a fixed set of components, and the AI agent has a tool available for each of them.
As an example of such a tool, Foster cites searching for specific attack artifacts. The AI agent can search ReliaQuest’s GreyMatter platform for similar incidents and draw conclusions from them. Incidentally, this searching and inferring does not rely exclusively on the latest technology such as GenAI or LLMs; it also uses basic queries. That makes perfect sense, because AI uses them too: when you send a prompt to a GenAI tool, it is likewise translated into a specific query language under the hood.
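To make the planner-with-small-tools idea a bit more concrete, here is a minimal Python sketch. It is purely illustrative and assumes nothing about ReliaQuest’s actual implementation: every name in it (Alert, find_similar_incidents, build_basic_query, planner, the query syntax) is a hypothetical stand-in for the kind of finite, single-purpose tools Foster describes.

```python
# Hypothetical sketch of a "planner plus small tools" setup. None of these
# names come from ReliaQuest; they only illustrate the concept of splitting
# an investigation into tools that each solve one finite problem.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Alert:
    id: str
    artifact: str       # e.g. a file hash or suspicious domain
    description: str


def find_similar_incidents(alert: Alert) -> list[str]:
    # In a real platform this would search historical incidents; stubbed here.
    return [f"incident-matching-{alert.artifact}"]


def build_basic_query(alert: Alert) -> str:
    # A GenAI prompt ultimately ends up as a concrete query in some query language.
    return f'search index=edr artifact="{alert.artifact}" | stats count by host'


# Each "tool" solves one finite problem and is just a plain function.
TOOLS: dict[str, Callable[[Alert], object]] = {
    "similar_incidents": find_similar_incidents,
    "artifact_query": build_basic_query,
}


def planner(alert: Alert) -> dict[str, object]:
    """Break the investigation into small steps and run the matching tools."""
    plan = ["similar_incidents", "artifact_query"]  # fixed plan, for illustration only
    return {step: TOOLS[step](alert) for step in plan}


if __name__ == "__main__":
    alert = Alert(id="A-42", artifact="3f5c0ffee", description="Suspicious hash seen on host")
    print(planner(alert))
```

In a real agent the plan would of course be chosen dynamically per alert rather than hard-coded, but the point stands: each tool does one small, verifiable thing, which is what keeps the overall process transparent.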
ReliaQuest
ReliaQuest is a little difficult to fit into standard classifications. It is a security operations platform and focuses solely on large organizations. Although it offers MDR services, it does not call itself an MDR provider. In addition, the company does not use a central location where all security data must be stored before anything can be done with it. The data can remain within the security tools that generate it; ReliaQuest handles this in a so-called federated manner. Organizations can keep using their existing security tools, so there is no need for huge new investments.
According to Foster, one of the main features that sets ReliaQuest apart is that it is 100 percent transparent. This also applies to the MDR services the company provides. That is what makes ReliaQuest special in the market, he says. “We are 100 percent transparent and are not a black box like other companies. We can see everything that is happening within our platform and can intervene if necessary.” The same goes for the AI agent described in the main body of this article. It is completely transparent, and that is critical to the combination of cybersecurity and AI. In fact, trust matters even more here than in other environments.
Reducing the burden on SOC employees
According to Foster, breaking down large queries into many smaller steps is a key differentiator of AI Agent’s approach. Only then can the agent make decisions independently, because that is what it is ultimately meant to do. The agent must also analyze any issues it finds. In other words, the AI agent does not need to bother people to analyze what it discovers; the idea is that it can run just fine on its own. To do this, it not only uses analyses conducted in the past, but also draws on external threat intelligence, for example. In addition, ReliaQuest personnel are constantly hunting for new threats, so the AI agent is not limited to detecting cases that are already known.
ReliaQuest wants to use the AI agent to take Tier 1 and Tier 2 analysis out of the hands of SOC employees. They come into play only after the AI agent has finished its work. This component is also fully transparent, allowing employees to properly evaluate the AI agent’s analysis and take action right away.
Autonomy is the future
However, Foster also expects more automated actions from the AI agent early next year. In any case, that is where the security industry as a whole needs to do more; human SOC employees alone cannot keep an organization safe. Foster does observe that “enterprises are becoming nervous about automated actions.” At the same time, ReliaQuest saw a 200 percent quarterly increase in the number of automated actions, so organizations’ attitudes appear to be shifting.
Of course, it may be a long time before autonomous agents/assistants/analysts are actually allowed to run on their own, and whether that is desirable everywhere is debatable. There are, however, plenty of things that can be automated on a day-to-day basis. As an example, Foster points to resetting the passwords of users who click on phishing links. In principle this is not very disruptive, and doing it in an automated way, he hopes, “could be a big step forward.”
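To show how narrowly scoped such an automated action can be, here is a minimal, purely hypothetical Python sketch. The alert fields and functions (user_clicked_phishing_link, reset_password, handle_alert) are assumptions made for the example and do not represent ReliaQuest’s platform or any identity provider’s actual API.

```python
# Illustrative sketch of a low-risk automated response: resetting the password
# of a user who clicked a phishing link. All names and fields are hypothetical.

def user_clicked_phishing_link(alert: dict) -> bool:
    # Assume the alert was already classified upstream (e.g. by the AI agent).
    return alert.get("type") == "phishing_click"


def reset_password(username: str) -> None:
    # In practice this would call the organization's identity provider;
    # here we only log the intended action.
    print(f"[automated action] password reset triggered for {username}")


def handle_alert(alert: dict) -> None:
    """Automate only this narrowly scoped, low-impact case; escalate the rest."""
    if user_clicked_phishing_link(alert):
        reset_password(alert["username"])
    else:
        print(f"[escalation] alert {alert.get('id')} handed to a human analyst")


if __name__ == "__main__":
    handle_alert({"id": "A-7", "type": "phishing_click", "username": "j.doe"})
```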
Automating such actions would be a big step for any organization that does not want to drown in attacks and alerts, and for the security industry as a whole. From that perspective, ReliaQuest’s announcement is very interesting. There is little doubt that many security companies will follow suit; the race for AI agents appears to be starting in this part of the market as well. We are particularly curious to see when truly autonomous components will appear. Technically, a lot already seems possible. If organizations’ attitudes change fundamentally as well, things could move quickly. There are hopeful signs, but history teaches us that this may take a while.