At Cyber Ireland’s annual cybersecurity conference, experts discussed the impact of AI on the threat landscape and the power of data.
Yesterday (September 26), Cyber Ireland held its 2024 Annual Cyber Security Conference at the Lyrath Estate Hotel in Kilkenny. The day-long Cyber Ireland National Conference (CINC) hosted presentations and panels from a range of renowned figures in science and technology, covering today’s key cybersecurity trends.
A popular topic in the cybersecurity field right now is how artificial intelligence will impact the sector from both a threat and a defense perspective. A Techopedia report from earlier this year highlighted the complex relationship between AI and cybersecurity, noting that while this disruptive technology will enhance cyberattack capabilities, it will also help defenders discover threats faster and more effectively.
CINC’s expert panel delved further into this complex relationship, exploring topics such as the importance of perception and how artificial intelligence, particularly generative AI, could change the threat landscape.
Lowering the barrier
“The history of cybercrime has always been a competition,” said Senan Moloney, global head of cybercrime and cyberfraud convergence at Barclays. According to Moloney, this competition between attackers and defenders is based on two parameters: pace and scale.
One of the key ways AI can give cybercriminals an advantage in this competition is by lowering the barriers to entry for cybercrime. As Moloney explained, through simple, “natural” communication with advanced AI, attackers can bypass traditional requirements for cybercrime, such as extensive knowledge of programming languages and systems.
Regarding the attack methods themselves, the panel discussed how cyberattacks using AI such as deepfakes are becoming more sophisticated.
Stephen Begley, Mandiant’s proactive services lead for the UK and Ireland, explained how he and his team conducted a red team exercise – a cyberattack simulation to test an organization’s defensive capabilities – in which they used AI to replicate the voices of senior executives and call employees, posing as colleagues making requests. Begley said the simulated attack was successful because the targeted employees were fooled by the deepfake audio.
This incident highlights the importance of educating and upskilling employees to be aware of the capabilities of AI attacks and how they can be used to infiltrate organizations. As Moloney says, without proper education about this technology, “you won’t be able to trust your gut.”
AI literacy
The importance of proper education, especially AI literacy, was one of the panel’s most important talking points. Begley warned that without proper AI literacy and awareness, people could fall into the trap of anthropomorphizing these systems. He explained that we need to focus on understanding how AI works and avoid attributing human characteristics to AI tools.
There needs to be a focus on understanding the limitations of AI and the potential for the technology to be misused.
Understanding the limits and risks of AI should also be an organization-wide requirement. Dr Valerie Lyons said senior management and executive committees need to know the risks just like everyone else.
Lyons, director and COO of BH Consulting, said company leaders tend to jump on the AI bandwagon without fully understanding the technology or whether they actually need it. “AI is not a strategy,” she explained, adding that instead of making AI the focus, companies should concentrate on incorporating AI into their strategy.
Accurate but not smart
As with any detailed discussion of AI, there is always a risk of panic. Of course, AI is a major concern for many people, especially with predictions that the technology will replace some human jobs.
While opinions vary on the scale of potential job losses, there is at least agreement that certain jobs will be transformed by AI. Moloney said he believes some traditional cybersecurity roles will change, predicting the “death” of the analyst role and a move toward something more like an engineer or “conductor” as AI is integrated.
Professor Barry O’Sullivan also spoke about the fears surrounding AI and LLMs, humorously likening the technology to a “drunk man at the end of a bar” who says what you want, as much as you want, but lacks full cognition and advanced intelligence.
For O’Sullivan, director of the Insight Centre for Data Analytics, the main concerns about AI should be around regulation and the impact of malfunction. He cited concerns about controversial applications such as biometric surveillance and their potential for abuse, and spoke about the need to be mindful of risks to people’s “fundamental rights”.
He said that while some current AI systems may seem mind-bogglingly intelligent, they are, after all, tools trained on data, and in their current state cannot “think”. He also highlighted how these systems currently rely on human-generated data, and how research shows that AI systems tend to perform poorly when trained on their own output.
“(AI) isn’t smart, it’s just accurate,” he said. “It is accurate because data is powerful.”