Extremely Simple Activation Shaping for Out-of-Distribution Detection

¹ML Collective, ²Google Research, Brain Team, ³Faculty of Technical Sciences, University of Novi Sad

TL;DR: At inference time, pick a layer, simplify its representation, feed it through the rest of the network. Accuracy is not affected and OOD detection is much better!


Abstract

The separation between training and deployment of machine learning models implies that not all scenarios encountered in deployment can be anticipated during training, and therefore relying solely on advancements in training has its limits. Out-of-distribution (OOD) detection is an important area that stress-tests a model’s ability to handle unseen situations: Do models know when they don’t know? Existing OOD detection methods either incur extra training steps, require additional data, or make nontrivial modifications to the trained network. In contrast, in this work we propose an extremely simple, post-hoc, on-the-fly activation shaping method, ASH, where a large portion (e.g. 90%) of a sample's activation at a late layer is removed, and the rest (e.g. 10%) simplified or lightly adjusted. The shaping is applied at inference time and does not require any statistics calculated from training data. Experiments show that such a simple treatment sharpens the distinction between in-distribution and out-of-distribution samples, enabling state-of-the-art OOD detection on ImageNet without noticeably deteriorating in-distribution accuracy. We release alongside the paper two calls for explanation and validation, believing that collectively we would have a better chance of understanding and validating the discovery.



How it works
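In lieu of the figure, here is a minimal PyTorch-style sketch of the idea described in the abstract: at a chosen late layer, zero out the bottom ~90% of a sample's activation values and lightly rescale the survivors before letting the forward pass continue. The function name ash_s and the exp(s1/s2) rescaling follow one of the shaping variants described in the paper; treat the snippet as illustrative, not as the reference implementation.

import torch

def ash_s(x: torch.Tensor, percentile: float = 90.0) -> torch.Tensor:
    # x: feature map of shape (B, C, H, W) taken from a late layer (post-ReLU, so non-negative).
    b = x.shape[0]
    flat = x.view(b, -1)
    s1 = flat.sum(dim=1)                                           # activation sum before pruning
    k = max(1, int(flat.shape[1] * (1.0 - percentile / 100.0)))    # number of elements to keep
    vals, idx = torch.topk(flat, k, dim=1)                         # top ~10% activations per sample
    shaped = torch.zeros_like(flat).scatter_(1, idx, vals)         # prune everything else
    s2 = shaped.sum(dim=1)                                         # activation sum after pruning
    shaped = shaped * torch.exp(s1 / (s2 + 1e-12)).unsqueeze(1)    # light readjustment of survivors
    return shaped.view_as(x)

# Usage: shape a late feature map, then run the rest of the network unchanged.
# feats  = backbone(images)                       # hypothetical backbone / head split
# logits = head(ash_s(feats, percentile=90.0))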



Call for Explanation and Validation

We are releasing two calls alongside this paper to encourage, increase, and broaden the reach of scientific interactions and collaborations. The two calls are an invitation for fellow researchers to address two questions that are not yet sufficiently answered by this work:

  1. What are plausible explanations of the effectiveness of ASH, a simple activation pruning and readjusting technique, on ID and OOD tasks?
  2. Are there other research domains, application areas, topics and tasks where ASH (or a similar procedure) is applicable, and what are the findings?

Answers to these calls will be carefully reviewed and selectively included in future versions of this paper, where individual contributors will be invited to collaborate.

For each call we suggest possible directions to explore; however, we encourage novel quests beyond what is suggested below.

Call for explanation. A possible explanation of the effectiveness of ASH is that our overparameterized networks likely overdo representation learning, generating features for the data that are largely redundant for the optimization task at hand. This is both an advantage and a peril: on the one hand, the representation is less likely to overfit to a single task and may retain more potential to generalize; on the other hand, it serves as a poorer discriminator between data seen and unseen.

Call for validation in other fields. We think any domain that uses a deep neural network (or a similar intelligent system) to learn representations of data while optimizing for a training task would be fertile ground for validating ASH. A straightforward domain is natural language processing, where pretrained language models are often adapted for downstream tasks. Are the native representations learned by those large language models simplifiable? Would reshaping of activations (which, in transformer-based language models, could be keys, values or queries) enhance or damage performance?
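As one concrete way to probe this question, the sketch below registers a PyTorch forward hook that prunes a transformer layer's output before it reaches the task head. The hook point, model, and pruning level are assumptions chosen for illustration, not validated recommendations.

import torch

def make_ash_hook(percentile: float = 90.0):
    # Returns a forward hook that keeps only the largest activations of a layer's output.
    # Note: unlike post-ReLU feature maps, transformer activations can be negative;
    # how best to prune them is part of the open question this call asks about.
    def hook(module, inputs, output):
        b = output.shape[0]
        flat = output.reshape(b, -1)
        k = max(1, int(flat.shape[1] * (1.0 - percentile / 100.0)))
        vals, idx = torch.topk(flat, k, dim=1)
        shaped = torch.zeros_like(flat).scatter_(1, idx, vals)
        return shaped.view_as(output)   # returning a value replaces the layer's output
    return hook

# Hypothetical usage on a pretrained language model (the layer choice is an assumption):
# handle = model.encoder.layer[-1].register_forward_hook(make_ash_hook(90.0))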

We are still working to set up a proper portal for submitting, reviewing and discussing answers to both calls. (ETA: likely after the ICLR deadline.) In the meantime, feel free to email rosanneliu@google.com to start a conversation.



How score distributions are morphed by ASH
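For context on the figure: the ID/OOD separation is read off a scalar score computed from the shaped network's outputs, typically the energy score. Below is a rough sketch, assuming hypothetical model wrappers, of how the two score distributions would be produced and compared.

import torch

def energy_score(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    # Negated free energy; higher values indicate more in-distribution-like inputs.
    return temperature * torch.logsumexp(logits / temperature, dim=1)

# scores_id  = energy_score(model_with_ash(id_images))    # hypothetical wrapper with ASH applied
# scores_ood = energy_score(model_with_ash(ood_images))
# A wider gap between the two score histograms means ASH has made ID and OOD samples easier to separate.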

BibTeX

@article{djurisic2022ash,
  title     = {Extremely Simple Activation Shaping for Out-of-Distribution Detection},
  author    = {Djurisic, Andrija and Bozanic, Nebojsa and Ashok, Arjun and Liu, Rosanne},
  publisher = {arXiv},
  year      = {2022},
  url       = {https://arxiv.org/abs/2209.09858}
}