Spy GANs: using adversarial watermarks to send secret messages
In our recent posts on data poisoning, we have mostly been focused on one of two things:
In the typical machine learning threat model, there is some person or company who is using machine learning to accomplish a task, and there is some other person or company (the adversary) who wants to disrupt that task. Maybe the task is authentication, maybe the method is identity recognition based on …
Maybe. It depends a little bit on how you do it and a lot on why.
I am not a lawyer, and this blog post should not be construed as legal advice. But other people who are lawyers or judges have written about this, so we can review …
I spent a lot of time thinking about the title for this post. Way more than usual! So I hope you'll indulge me in quickly sharing two runners-up:
In the last blog post, we saw that a large cardboard cutout with a distinctive, printed design could help a person evade detection by automated surveillance systems. As we noted, this attack had a few drawbacks -- largely, that the design needed to be held in front of the person's body …
In our last blog post, we looked at a paper that used a small sticker (a "patch") to make any object appear to be a toaster to image recognition models. This is known as a misclassification attack -- the model still recognizes that there is an object there, but fails to …
In our last blog post, we talked about how small perturbations in an image can cause an object detection algorithm to misclassify it. This can be a useful and sneaky way to disguise the contents of an image in scenarios where you have taken a digital photograph, and have the …
If you work in computer security or machine learning, you have probably heard about adversarial attacks on machine learning models and the risks that they pose. If you don't, you might not be aware of something very interesting -- that the big fancy neural networks that companies like Google and Facebook …
In "Evading real-time person detectors by adversarial t-shirt" [1], Xu and coauthors show that the adversarial patch attack described by Thys, Van Ranst, and Goedemé [2] is less successful when applied to flexible media like fabric, due to the warping and folding that occurs.
They propose to remedy this failure …
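To make the idea concrete, here is a minimal sketch of the general "expectation over transformation" style of approach that attacks like this build on: optimize a patch while randomly warping it at every step, so the design stays adversarial under the kind of deformation a t-shirt introduces. This is not the paper's implementation; the `paste` and `detector_confidence` helpers and all parameters are hypothetical placeholders (a real attack would use the detector's actual confidence output), and `RandomPerspective` stands in for a proper cloth-warping model.

```python
import torch
import torchvision.transforms as T

# Hypothetical sketch: optimize an adversarial patch under random warping,
# so the optimized design stays adversarial when the fabric folds and bends.

patch = torch.rand(3, 100, 100, requires_grad=True)      # the patch being trained
optimizer = torch.optim.Adam([patch], lr=0.01)
warp = T.RandomPerspective(distortion_scale=0.5, p=1.0)  # stand-in for cloth warping

def paste(image, patch, y=60, x=60):
    # Overwrite a region of the image with the (warped) patch.
    img = image.clone()
    img[:, y:y + patch.shape[1], x:x + patch.shape[2]] = patch
    return img

def detector_confidence(img):
    # Placeholder for the person detector's confidence score; a real attack
    # would run the actual detector here and take its objectness output.
    return img.mean()

image = torch.rand(3, 416, 416)  # stand-in for a frame containing a person
for step in range(100):
    warped = warp(patch.clamp(0, 1))  # sample a fresh deformation each step
    adv = paste(image, warped)        # place the warped patch in the scene
    loss = detector_confidence(adv)   # we want the detector to score low
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The essential point is that the warp is sampled inside the optimization loop, so the gradient pushes the patch toward designs that remain adversarial on average across deformations, rather than only in the flat, rigid case.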