
CyberSec & AI Is Bringing Top Experts To Prague

Cybersecurity and artificial intelligence are two sprawling fields. They overlap in a fascinating area that will draw top experts to Prague in October for the conference CyberSec & AI, which is sponsored by Avast and the Czech Technical University. The conference will allow researchers and engineers to share ideas on state-of-the-art topics, including the cybersecurity threat posed by the advances in AI.

As defenders, we train our AI classifiers to be more robust by feeding them many examples of both benign and malicious files. Cybercriminals, meanwhile, are busy training their own AI to generate malicious files that look harmless. This active arms race makes AI in security particularly challenging: we must teach our AI to look for those disguised threats and always stay a step ahead. One way to do that is to generate adversarial examples for our classifiers to learn from, using approaches like generative adversarial networks (GANs). Another is to model what "good" or "acceptable" behavior looks like, which may make it possible to protect simple devices such as IoT devices. The problem is that the adversary has access to the same AI techniques as the defenders.
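To make the idea concrete, here is a rough sketch, in Python, of what an adversarial example looks like in practice. Everything in it is made up for illustration: a toy logistic-regression "malware classifier" over random feature vectors, not any real product model, and a simple gradient-sign (FGSM-style) perturbation rather than a full GAN, which is harder to show in a few lines.

```python
# Minimal sketch (hypothetical feature space, toy model): nudge a "malicious"
# feature vector against the classifier's gradient so it scores as benign.
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: 200 benign and 200 malicious samples over 10 features.
X_benign = rng.normal(loc=-1.0, scale=1.0, size=(200, 10))
X_malicious = rng.normal(loc=+1.0, scale=1.0, size=(200, 10))
X = np.vstack([X_benign, X_malicious])
y = np.concatenate([np.zeros(200), np.ones(200)])  # 1 = malicious

# Train a tiny logistic regression with plain gradient descent.
w, b = np.zeros(10), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * (X.T @ (p - y) / len(y))
    b -= 0.1 * np.mean(p - y)

def score(v):
    """Probability that v is malicious under the toy model."""
    return 1.0 / (1.0 + np.exp(-(v @ w + b)))

# FGSM-style step: move each feature a small amount in the direction that
# lowers the malicious score, i.e. against the sign of the gradient (here w).
x = X_malicious[0]
epsilon = 0.5
x_adv = x - epsilon * np.sign(w)

print(f"original score: {score(x):.3f}, adversarial score: {score(x_adv):.3f}")
```

The same sample, shifted only slightly in feature space, can drop from a confidently malicious score to a seemingly benign one, which is exactly the behavior an attacker wants to automate.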

AI security still relies on security through obscurity, meaning the only way to protect an AI system is to hide it from the adversary: an adversary can fool an AI model as soon as they have access to the algorithm. This is different from other facets of security, such as encryption, where security rests on a genuinely hard mathematical problem, namely the difficulty of factoring numbers that are the product of two large primes.
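The contrast is worth spelling out. In an RSA-style scheme the algorithm and the public key can be published to the whole world, and the system stays secure because recovering the secret primes from the modulus is computationally hard. The toy sketch below (using sympy and deliberately undersized keys, purely for illustration) shows where that hardness lives; an AI classifier has no comparable hard problem standing between a white-box attacker and a successful evasion.

```python
# Toy illustration (undersized keys, not production cryptography): in RSA the
# algorithm and the public modulus n are known to everyone, and security rests
# on how hard it is to recover the secret primes p and q from n alone.
from sympy import randprime

p = randprime(2**255, 2**256)    # secret prime (real keys use far larger ones)
q = randprime(2**255, 2**256)    # secret prime
n = p * q                        # public modulus, safe to publish

print("public modulus has", int(n).bit_length(), "bits")
print("knowing the algorithm does not help; factoring n is the hard part")
```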

Few users may realize that these machine-learning-powered cybersecurity tools have been integrated into our product line, seeking and blocking all kinds of threats, such as phishing and email spam. In this way, our products work like airport security screeners around the world (perhaps a bit more efficiently), but with ever-evolving technology looking for ever-improving threats. Perhaps airport screeners learn there is a new threat coming from a certain nation, or one that uses a certain kind of luggage. The same kinds of tell-tale signs surface in the binary code captured by our cybersecurity products, and we use those clues to improve our algorithms all the time.
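As a purely hypothetical illustration of what such tell-tale signs can look like (these are not Avast's actual features), the sketch below pulls two crude static signals out of any file on disk: the byte-level entropy, which tends to be high for packed or encrypted payloads, and the ratio of printable bytes, a rough proxy for embedded strings and URLs. Signals like these are the kind of input a classifier can learn from.

```python
# Hypothetical static features of a binary file; real products use far richer
# signals, this only shows the general shape of the idea.
import math
import sys
from collections import Counter
from pathlib import Path

def static_features(path: str) -> dict:
    data = Path(path).read_bytes()
    counts = Counter(data)
    total = len(data) or 1
    # Byte-level entropy: packed or encrypted payloads tend to score high.
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    # Printable ASCII ratio: a crude proxy for embedded strings and URLs.
    printable = sum(1 for b in data if 32 <= b < 127)
    return {
        "size_bytes": total,
        "entropy": round(entropy, 3),
        "printable_ratio": round(printable / total, 3),
    }

if __name__ == "__main__":
    # Analyze the file given on the command line, or this script itself.
    print(static_features(sys.argv[1] if len(sys.argv) > 1 else sys.argv[0]))
```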

Our speakers at CyberSec & AI Prague come from this very select area where cybersecurity and artificial intelligence overlap, and that's one reason we are so looking forward to the conference on October 25 in Prague, one of Europe's most beautiful cities. Attendees will represent the world of cybersecurity, and engineers will likely build both their knowledge base and their professional networks. Students are also invited to submit their own ideas as posters, and travel grants are being considered on a limited basis.

Prague Castle at night. Photo by Rajarshi Gupta, Avast.

Especially in the overlapping worlds of AI and cybersecurity, where everything is changing so fast, a gathering of the top minds may give us important insights into what is to come. Adversarial AI may not be widespread yet. But a quarter-century ago academics mapped out a threat that seemed remote, or even unlikely. That threat was ransomware, and while it took more than 20 years to reach its greatest impact, today cities and organizations around the world are caught in its grip. 

It won’t take 20 years for today’s emerging threats to develop. Somewhere right now, adversarial AI algorithms are working on them. In October in Prague, we will gather to stay one step ahead. Join us. 

Rajarshi Gupta is Avast’s head of AI. Sadia Afroz is an AI researcher at Avast and the International Computer Science Institute in Berkeley.

