Poisoning deep learning algorithms

Up to this point, our discussions of adversarial attacks on machine learning algorithms have all been in the context of an existing, fixed model. Early work in this area assumed a process where an attacker had access to test examples after capture (e.g., after a …


MLOps lessons from Creativity, Inc. (Part 2)

Last time, we talked a bit about lean manufacturing, DevOps, MLOps, and the history of Pixar Studios according to Ed Catmull. In particular, we noted some similarities between Ed's lessons about running a film production company and MLOps best practices. In this blog post, we'll finish going through that list …


MLOps lessons from Creativity, Inc. (Part 1)

I recently finished listening to the audiobook version of Creativity, Inc., Ed Catmull's book on the history of Pixar Animation. Many of the company policies and managerial decisions discussed in the book, especially the parts about experimentation and feedback, sound very similar to what you would hear in an agile …


Evading real-time detection with an adversarial t-shirt

In the last blog post, we saw that a large cardboard cutout with a distinctive, printed design could help a person evade detection by automated surveillance systems. As we noted, this attack had a few drawbacks -- largely, that the design needed to be held in front of the person's body …


Evading CCTV cameras with adversarial patches

In our last blog post, we looked at a paper that used a small sticker (a "patch") to make any object appear to be a toaster to image recognition models. This is known as a misclassification attack -- the model still recognizes that there is an object there, but fails to …


There's treasure everywhere: a devops perspective on the Port of Long Beach

The CEO of Flexport posted a thread on Twitter last week about supply chain shortages that ended up getting a lot of attention.


Fooling AI in real life with adversarial patches

In our last blog post, we talked about how small perturbations in an image can cause an object detection algorithm to misclassify it. This can be a useful and sneaky way to disguise the contents of an image in scenarios where you have taken a digital photograph, and have the …


What is adversarial machine learning?

If you work in computer security or machine learning, you have probably heard about adversarial attacks on machine learning models and the risks that they pose. If you don't, you might not be aware of something very interesting -- that the big fancy neural networks that companies like Google and Facebook …


Fast operations on scikit-learn decision trees with numba

The title is a bit wordy. But that's what this post is about.

To start with, you might be wondering why someone would want to operate on a decision tree from inside numba in the first place. After all, the scikit-learn implementation of trees uses Cython, which should be providing …


SciPy Proceedings 2021 Survey

Last year, the SciPy Conference Proceedings Committee (Proccom) started collecting demographic data from authors and reviewers, in order to understand:

  1. how authors compare to conference attendees; and
  2. how authors compare to reviewers.

This post will be an update with new data from 2021; the discussion of the 2020 results is …
