
Why data scientists are leaving your company

The company I work for has been actively recruiting for a new data science hire since January. During this process I spoke to a lot of candidates with diverse backgrounds and work histories. They each had their own relevant projects to discuss, and in general very few people gave the …


How to protect yourself from AI

It's trite to note how pervasive AI systems are becoming, but that doesn't make it any less true. Some of these systems -- like AI chatbots -- are highly visible and a common part of the public discourse, which has spurred some notable AI pioneers to publicly denounce them. However, most AI-powered …


How to deploy conda-based docker images

The scientific Python community has settled on conda and conda-forge as the easiest way to compile and install dependencies that have complicated build recipes:
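The post walks through the full recipe; as a rough sketch, a conda-based Dockerfile tends to look something like the one below (the base image, environment name, and entry point are illustrative assumptions, not the post's exact setup):

    # Start from a Miniforge image so conda and the conda-forge channel are preconfigured.
    FROM condaforge/miniforge3:latest

    # Create the environment from a spec file, then trim conda caches to keep the image small.
    COPY environment.yml /tmp/environment.yml
    RUN conda env create -f /tmp/environment.yml && conda clean --all --yes

    # Copy the application and run it inside the environment.
    # "app" is a placeholder and must match the name declared in environment.yml.
    COPY . /opt/app
    WORKDIR /opt/app
    CMD ["conda", "run", "-n", "app", "python", "main.py"]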


You Only Look Eighty times: defending object detectors with repeated masking

We've been reading through a lot of papers recently about defenses against adversarial attacks on computer vision models. We've got two more to go!

[For now, anyway. The pace of machine learning research these days is dizzying.]


Minority reports (yes, like that movie) for certifiable defenses

In the recent series of blog posts about making computer vision models robust to adversarial attacks, we have mostly been looking at the classic notion of an adversarial attack on an image model. That is to say, you, the attacker, are providing a digital image that you …


Know thy enemy: classifying attackers with adversarial fingerprinting

In the last three posts, we've looked at different ways to defend an image classification algorithm against a "classic" adversarial attack -- a small perturbation added to the image that causes a machine learning model to misclassify it, but is not detectable to a human. The options we've seen so far …


Steganalysis based detection of adversarial attacks

For the last few months, we have been describing defenses against adversarial attacks on computer vision models. That is to say, if I have a model in production, and someone might feed it malicious inputs in order to trick it into generating bad predictions, what are defenses I can put …


What your model really needs is more JPEG!

When machine learning models get deployed into production, the people who trained the model lose some amount of control over inputs that go into the model. There is a large body of literature on all the natural ways in which the data a model sees at inference time might be …


Adversarial training: or, poisoning your model on purpose

So far, we have been looking at different ways adversarial machine learning can be applied to attack a machine learning model. We've seen different adversary goals, applied under different threat models, that resulted in giant sunglasses, weird t-shirts, and forehead stickers.

But what if you are the person with a …


Anti-adversarial patches

In the papers on adversarial patches that we have discussed so far, the motivation has principally been the security or safety of machine learning models deployed to production. So these papers typically reference an explicit threat model where some adversary is trying to change the …

