It’s impossible to read about any industry without finding multiple opinions on the coming impact of artificial intelligence, but cybersecurity is surely in the top tier of disciplines being transformed by AI. Machine learning and AI are already taking on multiple roles in the battle against cyberattacks; the problem is that the bad guys are using them, too. Here’s a look at how “good AI” and “bad AI” might shake out in the near future.
On the good side, AI has several applications, beginning with its ability to process enormous amounts of data to discern threats and speed up response times. Security Week says: “If the rules are right, AI will do more, faster, and without human error. This will reduce pressure on existing staff levels, solving one of security’s enduring problems: how to do more with less.”
AI, then, can take over some of the repetitive “grunt work” from skilled professionals, freeing them for higher-level tasks. As with every other industry, what that will mean for future career prospects in the field remains to be seen. There is a legitimate concern that entry-level personnel will no longer be able to learn the basics that will now be handled by AI.
The potential of AI in cybersecurity goes deeper than its ability to churn through immense amounts of data. Industry experts expect that 2024 will see AI not only isolating problems and proposing solutions, but implementing and testing those solutions as well.
Which brings us back to the opening phrase of that quote above, “If the rules are right …” Like any other data processing operation, AI depends on the quality of its input: garbage in, garbage out. And because AI operates at such speed and scale, flawed rules or poisoned data can multiply bad output geometrically.
Worse, the bad guys are actively seeking to use their own AI to influence those inputs. “Bad AI” not only allows those with few technical skills to carry out large-scale attacks, but is also empowering nation-state adversaries to an unprecedented degree.
If all this makes it sound like only the techiest of techies need be concerned, consider the recent case in Hong Kong where an employee transferred $25 million after being instructed to do so by the company’s CFO in a video meeting. Unfortunately, it wasn’t actually the CFO but a video deepfake, as were the other employees onscreen during the call. Beyond all the data-crunching, this demonstrates the potential for AI to take phishing to new and unheard-of levels.
An assessment from the UK’s National Cyber Security Centre (NCSC) notes that all types of cyber threat actors, from low-skilled individuals to nation-states, are already using AI to varying degrees. Unsurprisingly, that same assessment concludes that both the volume and the impact of cyberattacks will increase over the next two years.
So who will win the Great Race to get AI on their cyber team, the good guys or the bad guys? Most likely, there will be no clear winner, just an ongoing game of one-upmanship. And that means the rest of us need to stay vigilant.
Questions about cybersecurity? Contact Hill Tech Solutions.