index

How Glaze and Nightshade try to protect artists

In the last post we saw how AI-powered systems shift harm between population segments, and that there are practical, physical methods that can be employed to shift that harm back. These were called Protective Optimization Technologies (POTs) because they are designed to protect the people who employ …


How to protect yourself from AI

It's trite to note how pervasive AI systems are becoming, but that doesn't make it any less true. Some of these systems -- like AI chatbots -- are highly visible and a common part of public discourse, which has spurred some notable AI pioneers to publicly denounce them. However, most AI-powered …


You Only Look Eighty times: defending object detectors with repeated masking

We've been reading through a lot of papers recently about defenses against adversarial attacks on computer vision models. We've got two more to go!

[For now, anyway. The pace of machine learning research these days is dizzying.]


Minority reports (yes, like that movie) for certifiable defenses

In the recent series of blog posts about making computer vision models robust to adversarial attacks, we have mostly been looking at the classic notion of an adversarial attack on an image model. That is to say, you, the attacker, are providing a digital image that you …


Know thy enemy: classifying attackers with adversarial fingerprinting

In the last three posts, we've looked at different ways to defend an image classification algorithm against a "classic" adversarial attack -- a small perturbation added to the image that causes a machine learning model to misclassify it but is imperceptible to a human. The options we've seen so far …


Steganalysis based detection of adversarial attacks

For the last few months, we have been describing defenses against adversarial attacks on computer vision models. That is to say, if I have a model in production, and someone might feed it malicious inputs in order to trick it into generating bad predictions, what are defenses I can put …


What your model really needs is more JPEG!

When machine learning models get deployed into production, the people who trained the model lose some amount of control over inputs that go into the model. There is a large body of literature on all the natural ways in which the data a model sees at inference time might be …


Adversarial training: or, poisoning your model on purpose

So far, we have been looking at different ways adversarial machine learning can be applied to attack a machine learning model. We've seen different adversary goals, applied under different threat models, that resulted in giant sunglasses, weird t-shirts, and forehead stickers.

But what if you are the person with a …


Anti-adversarial patches

In the papers we have discussed about adversarial patches so far, the motivation has principally been the security or safety of machine learning models deployed to production. So, these papers typically reference an explicit threat model where some adversary is trying to change the …


Getting catfished by ChatGPT

At the AIVillage at DEFCON 2022, Justin Hutchens, a cybersecurity expert at Set Solutions, gave a presentation on using dating apps as an attack vector.1 The idea goes like this: let's say you want to gain access to a system that normally requires secure logins. Some systems provide a …

