Fooling NLP Systems Through Word Swapping
MIT researchers have built a tool that fools natural-language processing systems by swapping words with synonyms:
The software, developed by a team at MIT, looks for the words in a sentence that are most important to an NLP classifier and replaces them with a synonym that a human would find natural. For example, changing the sentence “The characters, cast in impossibly contrived situations, are totally estranged from reality” to “The characters, cast in impossibly engineered circumstances, are fully estranged from reality” makes no real difference to how we read it. But the tweaks made an AI interpret the sentences completely differently.
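The core idea is a greedy search: rank the words by how much they matter to the classifier, then try synonym substitutions on the most important ones until the prediction flips. The sketch below illustrates that loop on a toy example; the classifier and synonym table are hypothetical stand-ins, not the paper's components (TextFooler itself selects replacements with counter-fitted word embeddings and a semantic-similarity check).

```python
# Toy sketch of a greedy word-importance + synonym-swap attack.
# `classify` and SYNONYMS are made-up stand-ins; punctuation handling is omitted for brevity.

def classify(text):
    """Stand-in sentiment model: returns the probability the text is negative."""
    cues = {"contrived", "estranged", "totally"}
    return min(1.0, 0.3 * sum(w.strip(".,").lower() in cues for w in text.split()))

SYNONYMS = {
    "contrived": ["engineered"],
    "situations": ["circumstances"],
    "totally": ["fully"],
}

def attack(sentence, threshold=0.5):
    words = sentence.split()
    current = classify(sentence)
    target = current >= threshold  # the model's original prediction (True = negative)

    # 1. Score each word by how much deleting it changes the model's output.
    def importance(i):
        without = " ".join(words[:i] + words[i + 1:])
        return abs(classify(sentence) - classify(without))

    # 2. Visit words from most to least important, trying synonym swaps.
    for i in sorted(range(len(words)), key=importance, reverse=True):
        key = words[i].strip(".,").lower()
        for synonym in SYNONYMS.get(key, []):
            trial = words[:i] + [synonym] + words[i + 1:]
            score = classify(" ".join(trial))
            if (score >= threshold) != target:
                return " ".join(trial)  # prediction flipped: adversarial example found
            # Otherwise keep the swap only if it pushes the score toward the decision boundary.
            if (score < current) if target else (score > current):
                words, current = trial, score
    return None  # no adversarial example found with this synonym table

print(attack("The characters, cast in impossibly contrived situations, "
             "are totally estranged from reality"))
```

On the toy model above, swapping "contrived" for "engineered" and "totally" for "fully" is enough to flip the prediction, mirroring the example from the article.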
The results of this adversarial machine learning attack are impressive:
For example, Google’s powerful BERT neural net was worse by a factor of five to seven at identifying whether reviews on Yelp were positive or negative.
The paper:
Abstract: Machine learning algorithms are often vulnerable to adversarial examples that have imperceptible alterations from the original counterparts but can fool the state-of-the-art models. It is helpful to evaluate or even improve the robustness of these models by exposing the maliciously crafted adversarial examples. In this paper, we present TextFooler, a simple but strong baseline to generate natural adversarial text. By applying it to two fundamental natural language tasks, text classification and textual entailment, we successfully attacked three target models, including the powerful pre-trained BERT, and the widely used convolutional and recurrent neural networks. We demonstrate the advantages of this framework in three ways: (1) effective — it outperforms state-of-the-art attacks in terms of success rate and perturbation rate, (2) utility-preserving — it preserves semantic content and grammaticality, and remains correctly classified by humans, and (3) efficient — it generates adversarial text with computational complexity linear to the text length.
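The success rate and perturbation rate mentioned in the abstract amount to simple bookkeeping over the attacked examples: how often the attack flips a correct prediction, and what fraction of words it had to change. A minimal sketch of that evaluation loop follows; the `model.predict` and `attack` interfaces are assumptions for illustration, not TextFooler's actual API.

```python
# Minimal sketch of the two headline metrics: attack success rate and perturbation rate.
# `model.predict(text) -> label` and `attack(model, text) -> adversarial text or None`
# are assumed interfaces.

def evaluate(model, attack, examples):
    attacked = succeeded = 0
    perturbation_rates = []
    for text, label in examples:
        if model.predict(text) != label:
            continue                      # only attack examples the model already gets right
        attacked += 1
        adversarial = attack(model, text)
        if adversarial is not None and model.predict(adversarial) != label:
            succeeded += 1
            orig, adv = text.split(), adversarial.split()
            changed = sum(a != b for a, b in zip(orig, adv))
            perturbation_rates.append(changed / len(orig))  # fraction of words swapped
    return {
        "attack_success_rate": succeeded / attacked if attacked else 0.0,
        "avg_perturbation_rate": (sum(perturbation_rates) / len(perturbation_rates)
                                  if perturbation_rates else 0.0),
    }
```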