CISPA

Towards Reverse-Engineering Black-Box Neural Networks

conference contribution
posted on 2023-11-29, 18:08 authored by Seong Joon Oh, Max Augustin, Bernt Schiele, Mario Fritz
Many deployed learned models are black boxes: given an input, they return an output. Internal information about the model, such as the architecture, optimisation procedure, or training data, is not disclosed explicitly, as it might contain proprietary information or make the system more vulnerable. This work shows that such attributes of neural networks can be exposed from a sequence of queries. This has multiple implications. On the one hand, our work exposes the vulnerability of black-box neural networks to different types of attacks: we show that the revealed internal information helps generate more effective adversarial examples against the black-box model. On the other hand, the same technique can be used to better protect private content from automatic recognition models using adversarial examples. Our paper suggests that it is actually hard to draw a line between white-box and black-box models.
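
The abstract only sketches the idea of exposing model attributes from a sequence of queries. As a rough illustration (not the paper's implementation), the sketch below trains a simple metamodel that guesses one hidden attribute of a classifier (here, its activation function) from the classifier's outputs on a fixed set of probe inputs. The library choice (scikit-learn), function names, and hyperparameters are illustrative assumptions, not taken from the paper.

# Illustrative sketch: guess a hidden attribute of black-box classifiers
# purely from their outputs on fixed probe queries. Toy setup, not the
# paper's exact method or hyperparameters.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)

# A shared task that all candidate models are trained on.
X, y = make_classification(n_samples=600, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)

# Fixed probe inputs used to query every model.
probes = rng.normal(size=(32, 20))

# The hidden attribute we try to recover from query outputs alone.
activations = ["relu", "tanh", "logistic"]

def train_model(activation, seed):
    # Train one candidate classifier whose internals we pretend not to see.
    return MLPClassifier(hidden_layer_sizes=(16,), activation=activation,
                         max_iter=400, random_state=seed).fit(X, y)

def query_signature(model):
    # Concatenate the model's output probabilities on all probes into one vector.
    return model.predict_proba(probes).ravel()

# Build metamodel training data: many models with known attributes.
meta_X, meta_y = [], []
for label, act in enumerate(activations):
    for seed in range(15):
        meta_X.append(query_signature(train_model(act, seed)))
        meta_y.append(label)

metamodel = LogisticRegression(max_iter=1000).fit(np.array(meta_X), meta_y)

# "Black box": a fresh model; the metamodel sees only its query outputs.
black_box = train_model("tanh", seed=999)
guess = metamodel.predict([query_signature(black_box)])[0]
print("predicted activation:", activations[guess])

In this toy setting the metamodel sees only output probabilities, mirroring the query-only access assumed for the black box; the paper's adversarial-example implications are not covered by the sketch.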

History

Preferred Citation

Seong Joon Oh, Max Augustin, Bernt Schiele and Mario Fritz. Towards Reverse-Engineering Black-Box Neural Networks. In: International Conference on Learning Representations (ICLR). 2018.

Primary Research Area

  • Trustworthy Information Processing

Name of Conference

International Conference on Learning Representations (ICLR)

Legacy Posted Date

2018-02-04

Open Access Type

  • Gold

BibTeX

@inproceedings{cispa_all_1405,
  title     = "Towards Reverse-Engineering Black-Box Neural Networks",
  author    = "Oh, Seong Joon and Augustin, Max and Schiele, Bernt and Fritz, Mario",
  booktitle = "{International Conference on Learning Representations (ICLR)}",
  year      = "2018",
}
