Synthetic data generation method for data-free knowledge distillation in regression neural networks

Journal Type: Journal Paper
Journal: Expert Systems with Applications, Vol. 227, 1 Oct 2023, 120327, doi: 10.1016/j.eswa.2023.120327
Impact Factor: 8.5
Date of Acceptance: 29 Apr 2023

Knowledge distillation is the technique of compressing a larger neural network, known as the teacher, into a smaller neural network, known as the student, while preserving the performance of the larger network as much as possible. Existing knowledge distillation methods are mostly applicable to classification tasks, and many of them also require access to the data used to train the teacher model. To address knowledge distillation for regression tasks when the original training data are unavailable, an existing method trains a generator model adversarially against the student model to produce synthetic data for training the student. In this study, we propose a new synthetic data generation strategy that directly optimizes for a large but bounded difference between the outputs of the student and teacher models. Our results on benchmark experiments demonstrate that the proposed strategy allows the student model to learn better and to emulate the performance of the teacher model more closely.
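
The abstract describes the generation strategy only at a high level. As an illustration of the general idea, the following is a minimal PyTorch-style sketch of one way a "large but bounded" student-teacher discrepancy objective could be used to synthesize training inputs; the function name, hyperparameters, and the specific clamping form are assumptions for illustration, not the paper's actual algorithm.

import torch

def generate_synthetic_batch(teacher, student, input_dim, batch_size=64,
                             steps=50, lr=0.01, bound=1.0):
    """Hypothetical sketch: optimize random inputs so that the student-teacher
    output discrepancy becomes large, but is capped at `bound`."""
    teacher.eval()
    student.eval()
    # Start from random inputs and optimize them directly.
    x = torch.randn(batch_size, input_dim, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        diff = (student(x) - teacher(x)).abs()
        # Maximize the discrepancy, but clamp it so the objective stops
        # rewarding differences beyond the bound (keeping inputs plausible).
        loss = -torch.clamp(diff, max=bound).mean()
        loss.backward()
        opt.step()
    return x.detach()

In such a scheme, each synthesized batch would then be labeled with the teacher's predictions and used to train the student, so that the student is repeatedly corrected in the input regions where it currently disagrees most with the teacher.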