A Cylance survey finds that 62 percent of infosec experts believe artificial intelligence will be used for cyberattacks in the coming year.
Last week at Black Hat USA 2017, the best minds in cybersecurity met to discuss the latest threats poised to make waves over the coming months. These ranged from Broadpwn, a sophisticated Wi-Fi worm capable of jumping from one mobile device to another over a shared wireless network, to an exploit that lets hackers hijack Internet-connected car washes for destructive purposes.
One thing that was readily apparent at Black Hat this year was that artificial intelligence (AI) has officially arrived. Between the countless booths plastered with the promises of AI, machine learning, and automation (including our own), and various sessions focused on the use of these technologies for active defense, it was clear that the industry has high expectations for intelligent solutions. However, the rise of AI comes with its own drawbacks.
During the conference, Cylance surveyed 100 attendees on various topics being discussed at the show – from criminals using AI as a tool, to the impact nation-states are having on the U.S.
The following are key findings from the survey.
Criminals Will Likely Use AI for Offensive Purposes in the Next 12 Months
Sixty-two percent of surveyed attendees believe there is a high possibility that hackers will use AI for offensive purposes. While AI may be the best hope for stemming the tide of cyberattacks and breaches, in the short term it may also enable more advanced attacker tactics.
However, increasingly automated cyberattacks won't slow the adoption of AI for defensive purposes. In fact, as cybercriminals and nation-states begin using AI to increase the rate of attacks, the need for smarter solutions that can augment human security teams will only grow.
This is a Security Bloggers Network syndicated blog post authored by The Cylance Team. Read the original post at: Cylance Blog