computer vision

You Only Look Eighty Times: defending object detectors with repeated masking

We've recently been reading through a lot of papers about defenses against adversarial attacks on computer vision models. We've got two more to go!

[For now, anyway. The pace of machine learning research these days is dizzying.]

Minority reports (yes, like that movie) for certifiable defenses

In the recent series of blog posts about making computer vision models robust to adversarial attacks, we have mostly been looking at the classic notion of an adversarial attack on an image model. That is to say, you, the attacker, are providing a digital image that you …

Know thy enemy: classifying attackers with adversarial fingerprinting

In the last three posts, we've looked at different ways to defend an image classification algorithm against a "classic" adversarial attack -- a small perturbation added to the image that causes a machine learning model to misclassify it, but is not detectable to a human. The options we've seen so far …
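
To make the "small perturbation" idea concrete, here is a minimal sketch of one standard attack of this type, the fast gradient sign method (FGSM), in PyTorch. FGSM is just one illustrative instance of this attack family, not necessarily the method any particular post covers, and the `model`, `image`, and `label` inputs are assumed to be supplied by the reader:

```python
import torch

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Craft a small adversarial perturbation with the fast gradient sign method.

    `model` is any differentiable classifier, `image` is a (1, C, H, W) tensor
    with values in [0, 1], and `label` is a (1,)-shaped class-index tensor.
    `epsilon` bounds the per-pixel change, keeping it hard for a human to see.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Nudge each pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```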

Steganalysis based detection of adversarial attacks

For the last few months, we have been describing defenses against adversarial attacks on computer vision models. That is to say, if I have a model in production, and someone might feed it malicious inputs in order to trick it into generating bad predictions, what are defenses I can put …

What your model really needs is more JPEG!

When machine learning models get deployed into production, the people who trained the model lose some amount of control over inputs that go into the model. There is a large body of literature on all the natural ways in which the data a model sees at inference time might be …
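
As a quick illustration of the "more JPEG" idea in the title, here is a minimal sketch of the kind of input-sanitization step it hints at: round-tripping every incoming image through lossy JPEG compression before it reaches the model. The use of Pillow and the quality setting are assumptions for illustration, not details from the post:

```python
import io
from PIL import Image

def jpeg_squeeze(image: Image.Image, quality: int = 75) -> Image.Image:
    """Re-compress an image through JPEG before handing it to a model.

    Lossy compression tends to wash out the high-frequency, low-amplitude
    perturbations that many adversarial attacks rely on.
    """
    buffer = io.BytesIO()
    image.convert("RGB").save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    squeezed = Image.open(buffer)
    squeezed.load()  # force decoding so the buffer can be released
    return squeezed
```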

Adversarial training: or, poisoning your model on purpose

So far, we have been looking at different ways adversarial machine learning can be applied to attack a machine learning model. We've seen different adversary goals, applied under different threat models, that resulted in giant sunglasses, weird t-shirts, and forehead stickers.

But what if you are the person with a …

Anti-adversarial patches

In the papers that we have discussed about adversarial patches so far, the motivation has principally involved looking at the security or safety of machine learning models that have been deployed to production. So, these papers typically reference an explicit threat model where some adversary is trying to change the …

Adversarial patch attacks on self-driving cars

In the last post, we talked about one potential security risk created by adversarial machine learning, which was related to identity recognition. We saw that you could use an adversarial patch to trick a face recognition system into thinking that you are not yourself, or that you are someone else …

Faceoff: using stickers to fool face ID

We've spent the last few months talking about data poisoning attacks, mostly because they are really cool. If you missed these, you should check out Smiling is all you need: fooling identity recognition by having emotions, which was the most popular post in that series.

There are two more …

When reality is your adversary: failure modes of image recognition

In the typical machine learning threat model, there is some person or company who is using machine learning to accomplish a task, and there is some other person or company (the adversary) who wants to disrupt that task. Maybe the task is authentication, maybe the method is identity recognition based on …

We're not so different, you and I -- adversarial attacks are poisonous

I spent a lot of time thinking about the title for this post. Way more than usual! So I hope you'll indulge me in quickly sharing two runners-up:

  1. The real data poisons were the adversarial examples we found along the way
  2. Your case and my case are the same …

Evading real-time detection with an adversarial t-shirt

In the last blog post, we saw that a large cardboard cutout with a distinctive, printed design could help a person evade detection from automated surveillance systems. As we noted, this attack had a few drawbacks -- largely, that the design needed to be held in front of the person's body …

Evading CCTV cameras with adversarial patches

In our last blog post, we looked at a paper that used a small sticker (a "patch") to make any object appear to be a toaster to image recognition models. This is known as a misclassification attack -- the model still recognizes that there is an object there, but fails to …

Fooling AI in real life with adversarial patches

In our last blog post, we talked about how small perturbations in an image can cause an object detection algorithm to misclassify it. This can be a useful and sneaky way to disguise the contents of an image in scenarios where you have taken a digital photograph, and have the …

What is adversarial machine learning?

If you work in computer security or machine learning, you have probably heard about adversarial attacks on machine learning models and the risks that they pose. If you don't, you might not be aware of something very interesting -- that the big fancy neural networks that companies like Google and Facebook …