Graph generative models have become increasingly effective for approximating data distributions and augmenting data. At the same time, they raise public concerns about malicious misuse and the spread of misinformation, much as Deepfake visual and auditory media has done. It is therefore essential to regulate the prevalence of generated graphs. To tackle this problem, we pioneer the formulation of the generated graph detection problem, which aims to distinguish generated graphs from real ones. We propose the first framework to systematically investigate a set of sophisticated models and their performance across four classification scenarios. Each scenario switches between seen and unseen datasets/generators during testing, moving closer to real-world settings and progressively challenging the classifiers. Extensive experiments show that all the models are qualified for generated graph detection, with specific models having advantages in specific scenarios. Given the validated generality of the classifiers and their robustness to unseen datasets/generators, we conclude that our solution can remain effective for a considerable time in curbing the misuse of generated graphs.
Primary Research Area: Trustworthy Information Processing
Conference: International Conference on Machine Learning (ICML)
Journal: ICML
Page Range: 23412--23428
BibTeX
@inproceedings{Ma:Zhang:Yu:He:Backes:Shen:Zhang:2023,
  title = "Generated Graph Detection",
  author = "Ma, Yihan and Zhang, Zhikun and Yu, Ning and He, Xinlei and Backes, Michael and Shen, Yun and Zhang, Yang",
  year = 2023,
  month = 7,
  booktitle = "ICML",
  pages = "23412--23428"
}