Researchers propose a self-distillation fix for ‘catastrophic forgetting’ in LLMs
A new fine-tuning technique aims to solve "catastrophic forgetting," in which a model loses previously learned capabilities after being fine-tuned on new data, a limitation that often complicates repeated model updates in enterprise deployments. Researchers at MIT, the Improbable