In Barcelona for NIPS’2016

I’m in Barcelona now for NIPS’2016 — or should I say, for NIPS’ Symposia and Workshops, since the main conference this year… sold out. That’s at once exciting and frightening: is machine learning the next dot-com bubble?

Anyway — science! We’re participating, with posters, in two workshops: Adversarial Training (on Friday the 9th) and Bayesian Deep Learning (on Saturday the 10th). Will you be there? Let’s talk!

The papers:

Adversarial Images for Variational Autoencoders

Pedro Tabacof, Julia Tavares, Eduardo Valle

We investigate adversarial attacks for autoencoders. We propose a procedure that distorts the input image to mislead the autoencoder into reconstructing a completely different target image. We attack the internal latent representations, attempting to make the adversarial input produce an internal representation as similar as possible to the target’s. We find that autoencoders are much more robust to the attack than classifiers: while some examples show tolerably small input distortion and reasonable similarity to the target image, there is a quasi-linear trade-off between those aims. We report results on the MNIST and SVHN datasets, and also test regular deterministic autoencoders, reaching similar conclusions in all cases. Finally, we show that the usual adversarial attack for classifiers, while much easier, also exhibits a direct proportion between distortion on the input and misdirection on the output. That proportionality, however, is hidden by the normalization of the output, which maps a linear layer into non-linear probabilities.
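
For illustration, here is a minimal sketch of the latent-space attack described in the abstract: optimize an additive distortion so that the encoder’s output for the distorted input approaches the target image’s latent code, while penalizing the size of the distortion. The framework (PyTorch), the `encoder` function, and the regularization weight are assumptions for the example, not the paper’s actual code.

```python
import torch

def latent_space_attack(encoder, x_orig, x_target, reg=1e-3, steps=500, lr=1e-2):
    """Distort x_orig so that its latent code approaches that of x_target."""
    z_target = encoder(x_target).detach()              # latent code we want to imitate
    d = torch.zeros_like(x_orig, requires_grad=True)   # additive distortion on the input
    opt = torch.optim.Adam([d], lr=lr)
    for _ in range(steps):
        x_adv = torch.clamp(x_orig + d, 0.0, 1.0)      # keep pixels in the valid range
        z_adv = encoder(x_adv)
        # Trade-off: similarity to the target's latent code vs. size of the input distortion
        loss = torch.nn.functional.mse_loss(z_adv, z_target) + reg * d.pow(2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.clamp(x_orig + d, 0.0, 1.0).detach()
```

Sweeping the weight on the distortion penalty traces exactly the trade-off the abstract mentions: smaller distortion on the input versus a reconstruction closer to the target.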

The full text is here: https://arxiv.org/abs/1612.00155

🔔 The organizers of the workshop have opened a Reddit thread for the public to ask questions. We have a subthread there — ask us anything! 🔔

Known Unknowns: Uncertainty Quality in Bayesian Neural Networks

Ramon Oliveira, Pedro Tabacof, Eduardo Valle

We evaluate the uncertainty quality in neural networks using anomaly detection. We extract uncertainty measures (e.g. entropy) from the predictions of candidate models, use those measures as features for an anomaly detector, and gauge how well the detector differentiates known from unknown classes. We assign higher uncertainty quality to candidate models that lead to better detectors. We also propose a novel method for sampling a variational approximation of a Bayesian neural network, called One-Sample Bayesian Approximation (OSBA). We experiment on two datasets, MNIST and CIFAR10. We compare the following candidate neural network models: Maximum Likelihood, Bayesian Dropout, OSBA, and — for MNIST — the standard variational approximation. We show that Bayesian Dropout and OSBA provide better uncertainty information than Maximum Likelihood, and are essentially equivalent to the standard variational approximation, but much faster.
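
As an illustration of this protocol, a minimal sketch: estimate predictive entropy with repeated stochastic forward passes, fit an anomaly detector on the known-class uncertainties, and measure how well it flags examples from unknown classes. The `predict_probs` function, the choice of IsolationForest as the detector, and the number of samples are assumptions for the example, not the paper’s setup.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.metrics import roc_auc_score

def predictive_entropy(predict_probs, x, T=50):
    """Entropy of the mean of T stochastic predictions (one value per example)."""
    probs = np.mean([predict_probs(x) for _ in range(T)], axis=0)  # (n, n_classes)
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)          # (n,)

def uncertainty_quality(predict_probs, x_known, x_unknown):
    """Higher AUC means the uncertainty separates known from unknown classes better."""
    h_known = predictive_entropy(predict_probs, x_known).reshape(-1, 1)
    h_unknown = predictive_entropy(predict_probs, x_unknown).reshape(-1, 1)
    detector = IsolationForest(random_state=0).fit(h_known)           # detector sees only known classes
    scores = -detector.score_samples(np.vstack([h_known, h_unknown])) # higher = more anomalous
    labels = np.r_[np.zeros(len(h_known)), np.ones(len(h_unknown))]
    return roc_auc_score(labels, scores)
```

Candidate models (e.g. Maximum Likelihood vs. Bayesian Dropout) can then be ranked by the AUC they achieve: the better the resulting detector, the higher the uncertainty quality assigned to the model.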

The full text is here: https://arxiv.org/abs/1612.01251