Armorblox Report Surfaces Spike in BEC Attacks
An analysis of customer data from email protection platform provider Armorblox found business email compromise (BEC) attacks have increased 72% year-over-year.
More than half of those attacks (56%) bypassed the legacy security filters that many organizations rely on to thwart them, the report found.
The report also found that 20% of BEC attacks involved a threat to expose a secret, delivered via graymail or some other type of unwanted solicitation, while 19% of these attacks included a malicious payload.
Brian Johnson, chief security officer for Armorblox, said BEC attacks are increasing as cybercriminals become more adept at infiltrating the email workflows that organizations have set up to, for example, process invoices. Those attacks make use of LinkedIn and other readily available sources of data to enable cybercriminals to pose as someone who normally participates in those workflows, he noted.
Those attacks are only going to increase in volume and sophistication as cybercriminals begin to employ generative artificial intelligence (AI) platforms such as ChatGPT to launch attacks that will be much more difficult to detect, he added. BEC attacks launched by non-native English speakers are easier for humans to detect, but they will likely be replaced by social engineering attacks that humans cannot as easily distinguish from legitimate messages, Johnson said.
As a result, organizations will have to rely more on cybersecurity platforms infused with machine learning algorithms to discover and thwart those attacks, he added.
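To make that concrete, much of this kind of detection comes down to scoring inbound messages with a trained classifier rather than matching static rules. The sketch below is a minimal, hypothetical illustration of that idea, not Armorblox's actual pipeline or any specific vendor's product: it trains a simple TF-IDF plus logistic regression model in Python (using scikit-learn) on a handful of made-up labeled emails and flags new messages whose predicted BEC probability crosses an illustrative threshold.

```python
# Minimal sketch of ML-based BEC detection (illustrative only).
# The sample emails, labels and 0.5 threshold are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled training data: 1 = suspected BEC, 0 = benign.
emails = [
    "Urgent: please process the attached invoice today and wire payment",
    "Are you at your desk? I need you to buy gift cards for a client, keep it quiet",
    "Reminder: team standup moved to 10am tomorrow",
    "Here are the meeting notes from yesterday's sprint review",
]
labels = [1, 1, 0, 0]

# TF-IDF features plus logistic regression: a simple, interpretable baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

# Score an incoming message and flag it for review if the BEC score is high.
incoming = "Quick favor: wire the vendor payment before noon, I'm in meetings"
score = model.predict_proba([incoming])[0][1]
if score > 0.5:  # threshold chosen purely for illustration
    print(f"Flag for review (score={score:.2f})")
else:
    print(f"Deliver normally (score={score:.2f})")
```

In practice, commercial platforms use far larger training sets and richer signals (sender history, workflow context, language models), but the basic loop of training on labeled messages and scoring new ones is the same.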
Like it or not, the AI genie is now out of the bottle. Every organization is now locked in a cybersecurity AI arms race. AI platforms don't replace the need for cybersecurity professionals, but given the current chronic shortage of expertise, it's clear there is a need to augment cybersecurity staff. In fact, many cybersecurity professionals are eager to embrace AI to help level the playing field, and some might decline to work with organizations that don't provide them access to tools infused with AI capabilities. Much of the turnover among cybersecurity professionals is attributable to alerts that turn out to be false positives and other forms of monotonous toil that, over time, conspire to burn them out.
The challenge now is to determine the best path forward in what has become a sea of options. Regardless of approach, however, the longer it takes an organization to embrace AI, the further behind it falls. It should not take a cyberattack enabled by AI for organizations to realize their existing legacy platforms are no longer adequate.
In the meantime, cybersecurity teams should assume that cybercriminals have plenty of resources and are already employing generative AI platforms to hone their attacks. Many of them are also using prompt engineering techniques to feed these platforms data that enables them to, for example, mimic the writing style of specific individuals.
Of course, there’s nothing new about BEC as an attack vector. It’s just that, as usual, cybercriminals tend to focus on people-centric processes, because people are easier to fool than machines.