FWIW, it’s not clear-cut that AI-generated data feeding back into further training reduces accuracy, or is generally harmful.
Multiple papers have shown that training on images generated by high-quality diffusion models, mixed with a proportion of real images (30-50%), improves the adversarial robustness of the resulting models. Something similar might apply to language modeling.
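As a rough sketch of what that kind of mixing looks like in practice (names and the exact ratio are illustrative, not taken from those papers):

```python
# Hypothetical sketch: build a training pool that keeps roughly 30-50% real
# samples alongside synthetic ones, as described above. The function name,
# datasets, and default ratio are assumptions for illustration only.
import random

def build_mixed_training_set(real_samples, synthetic_samples, real_fraction=0.4, seed=0):
    """Return a shuffled list where ~real_fraction of items come from real data."""
    rng = random.Random(seed)
    n_real = len(real_samples)
    # Number of synthetic samples that keeps the requested real/synthetic ratio.
    n_synth = int(n_real * (1 - real_fraction) / real_fraction)
    n_synth = min(n_synth, len(synthetic_samples))
    mix = list(real_samples) + rng.sample(list(synthetic_samples), n_synth)
    rng.shuffle(mix)
    return mix

if __name__ == "__main__":
    real = [("real", i) for i in range(100)]
    synth = [("synthetic", i) for i in range(1000)]
    mix = build_mixed_training_set(real, synth, real_fraction=0.4)
    real_share = sum(1 for tag, _ in mix if tag == "real") / len(mix)
    print(f"{len(mix)} samples, {real_share:.0%} real")
```

The point is just that the synthetic data supplements rather than replaces the real data; the reported benefits depend on keeping a substantial real fraction in the mix.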