Research by the NCA and CybSafe highlights growing concern about AI-enabled cybercrime and a lax approach to AI security in the workplace. (Photo: Shutterstock)
The rise of generative AI (GenAI) is fueling cybersecurity concerns, with 65% of respondents worried about AI-related cybercrime, according to a new survey. This is one of the key findings of “Oh Behave! The Annual Cybersecurity Attitudes and Behaviors Report 2024,” released by the US-based National Cybersecurity Alliance (NCA) and CybSafe.
Based on a survey of 7,012 people across seven countries, the report highlights a significant gap between concern about AI-powered cyber threats and preparedness for them.
The report reveals that fear of AI-based cyberattacks varies by generation. The Silent Generation, those born between 1928 and 1945, is the most concerned, with 73% expressing fear of AI-related cyber risks, closely followed by Baby Boomers at 70%. Gen Xers are less concerned, with 61% expressing anxiety. Younger generations remain concerned, but their anxiety levels are lower.
Despite this growing concern, the report highlights a lack of training on AI-related cybersecurity risks. According to the study, 55% of AI tool users report receiving no formal training on the security and privacy risks associated with these technologies. The problem is compounded by the fact that 56% of participants said they do not use AI tools at all, suggesting a large knowledge gap overall.
“The growing concern about AI-related cybercrime reflects a growing awareness of the digital threats we face,” said Lisa Plaggemier, executive director of the National Cybersecurity Alliance. “However, more than half (56%) of participants do not use AI tools at all, and most (55%) of those who do have not been trained on the risks. It’s clear that far more education and resources are needed.”
Staff are playing fast and loose with AI at work
Another worrying statistic: 38% of AI users admitted to sharing sensitive work-related information with an AI tool without their employer’s knowledge. Younger generations, especially Gen Z (46%) and Millennials (43%), are more likely to engage in this risky behavior than older generations.
“AI poses unique and urgent challenges, but the core risks remain the same,” said Oz Alashe, CEO and founder of CybSafe. “Many employees understand what it takes to protect their workplaces from cyber threats, but the key to strengthening organizational resilience is turning that knowledge into regular, safe actions.”
The report shows a significant increase in the use of generative AI tools, with ChatGPT the most popular: 65% of AI tool users reported using it. Despite this popularity, the wider adoption of GenAI also increases security risks such as phishing attacks and deepfakes, making it harder for individuals to detect fraudulent content.
The report also found that 77% of participants believe technology companies should bear primary responsibility for regulating and overseeing the use of generative AI, underscoring the need for stronger oversight of this rapidly evolving technology.
Beyond AI-related concerns, the report highlights a broader increase in cybercrime incidents. There were 3,346 cybercrime incidents reported in 2024, an increase of 1,299 incidents from the previous year. Phishing scams remain the most common, accounting for 44% of all incidents. Additionally, 35% of survey respondents reported being a victim of cybercrime, an 8% increase from 2023.
These challenges are further compounded by the increasing sophistication of AI-powered attacks, including AI-generated phishing and deepfake techniques. The report also found that 91% of phishing incidents are reported by victims, highlighting not only increased awareness but also the persistence of the threat.
Despite increasing awareness of cyber threats, many participants still struggle to adopt safe behaviors. For example, only 65% of respondents consistently use unique passwords, and 46% said they have never used a password manager. Additionally, 81% of participants are aware of multi-factor authentication (MFA), but only 66% use it regularly.