
Teaching Temporal Logics to Neural Networks

conference contribution
posted on 2023-12-11, 20:13 authored by Christopher Hahn, Frederik Schmitt, Jens U. Kreber, Markus Norman Rabe, Bernd Finkbeiner
We study two fundamental questions in neuro-symbolic computing: can deep learning tackle challenging problems in logics end-to-end, and can neural networks learn the semantics of logics? In this work we focus on linear-time temporal logic (LTL), as it is widely used in verification. We train a Transformer to directly predict a solution, i.e., a trace, for a given LTL formula. The training data is generated with classical solvers, which, however, provide only one of many possible solutions to each formula. We demonstrate that it is sufficient to train on those particular solutions, and that Transformers can predict solutions even for formulas from benchmarks in the literature on which the classical solver timed out. Transformers also generalize to the semantics of the logics: while they often deviate from the solutions found by the classical solvers, they still predict correct solutions to most formulas.
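The correctness criterion behind the abstract's last claim is trace checking: a predicted trace counts as correct if it satisfies the input formula, even when it differs from the solver's trace. A minimal sketch of such a check, assuming a hypothetical tuple encoding of formulas and ultimately-periodic (lasso) traces u·v^ω; this is illustrative only, not the paper's actual evaluation code:

```python
# Hypothetical encoding (not from the paper): an LTL formula is a nested
# tuple such as ('U', ('ap', 'a'), ('ap', 'b')); a lasso trace u . v^omega
# is given as two lists of sets of atomic propositions, `prefix` and `loop`.

def positions_from(i, prefix_len, n):
    """Distinct trace positions visited from position i, in visiting order."""
    seen, j = [], i
    while j not in seen:
        seen.append(j)
        j = j + 1 if j + 1 < n else prefix_len  # wrap back into the loop
    return seen

def holds(f, prefix, loop, i=0):
    """Evaluate LTL formula f at position i of the trace prefix . loop^omega."""
    trace = prefix + loop
    n, pl = len(trace), len(prefix)
    succ = i + 1 if i + 1 < n else pl
    path = positions_from(i, pl, n)  # all positions reachable from i
    op = f[0]
    if op == 'ap':                   # atomic proposition
        return f[1] in trace[i]
    if op == 'not':
        return not holds(f[1], prefix, loop, i)
    if op == 'and':
        return all(holds(g, prefix, loop, i) for g in f[1:])
    if op == 'X':                    # next
        return holds(f[1], prefix, loop, succ)
    if op == 'F':                    # eventually
        return any(holds(f[1], prefix, loop, j) for j in path)
    if op == 'G':                    # globally
        return all(holds(f[1], prefix, loop, j) for j in path)
    if op == 'U':                    # until: f[2] eventually, f[1] before it
        for j in path:
            if holds(f[2], prefix, loop, j):
                return True
            if not holds(f[1], prefix, loop, j):
                return False
        return False
    raise ValueError(f'unknown operator {op}')
```

Because a lasso trace revisits only finitely many positions, one pass over the distinct positions suffices for F, G, and U. A Transformer prediction for G(a → F b), say the loop ({a}{b})^ω, would be accepted by this check even if the solver returned a different satisfying trace.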


Preferred Citation

Christopher Hahn, Frederik Schmitt, Jens Kreber, Markus Rabe and Bernd Finkbeiner. Teaching Temporal Logics to Neural Networks. In: International Conference on Learning Representations (ICLR). 2021.

Primary Research Area

  • Reliable Security Guarantees

Name of Conference

International Conference on Learning Representations (ICLR)

Open Access Type

  • Gold


@inproceedings{cispa_all_3656,
  title     = "Teaching Temporal Logics to Neural Networks",
  author    = "Hahn, Christopher and Schmitt, Frederik and Kreber, Jens U. and Rabe, Markus Norman and Finkbeiner, Bernd",
  booktitle = "{International Conference on Learning Representations (ICLR)}",
  year      = "2021",
}
