This article was written by Elie Bursztein.
This blog post surveys the attack techniques that target AI (artificial intelligence) systems and how to protect against them.
At a high level, attacks against classifiers can be broken down into three types.
This post explores each of these classes of attack in turn, providing concrete examples and discussing potential mitigation techniques.
This post is the fourth, and last, in a series of four dedicated to providing a concise overview of how to use AI to build robust anti-abuse protections. The first post explained why AI is key to building robust protection that meets user expectations and keeps pace with increasingly sophisticated attacks. Following the natural progression of building and launching an AI-based defense system, the second post covered the challenges related to training classifiers. The third looked at the main difficulties faced when using a classifier in production to block attacks.
This series of posts is modeled after the talk I gave at RSA 2018.
Disclaimer: This post is intended as an overview for everyone interested in the subject of harnessing AI for anti-abuse defense, and as a potential blueprint for those who are making the jump. Accordingly, it focuses on providing a clear high-level summary, deliberately not delving into technical details. That said, if you are an expert, I am sure you'll find ideas, techniques, and references that you haven't heard of before, and hopefully you'll be inspired to explore them further.