Internal Knowledge Adaptation in LLMs with Consistency-Constrained Dynamic Routing

Abstract

This paper addresses unstable knowledge retention and catastrophic forgetting in large language models during task evolution and fine-tuning, and proposes a dynamic modeling approach based on structural intervention and semantic preservation. The method freezes the backbone parameters of the original model and introduces a set of learnable modules together with a task-aware controller. A module activation mechanism combined with structural consistency regularization yields a dynamically adjustable framework of semantic paths: the model adaptively selects structural paths as the input changes, allowing key knowledge representations to be retained stably and intervened on in a controlled way. To further enhance semantic consistency, a representation alignment loss is designed to constrain task-induced disturbances in the high-dimensional representation space and to improve the model's semantic stability under continuously changing inputs. The experiments systematically evaluate the method under typical sensitivity factors such as task-order variation, input noise, and changes in the training-data ratio. Results show that the proposed method effectively regulates knowledge flow and enables fine-grained internal modeling without external supervision, demonstrating strong structural adaptability and semantic robustness.
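Since the abstract only sketches the architecture, the following is a minimal, hypothetical PyTorch-style sketch of the described setup: a frozen backbone layer, a set of lightweight learnable modules, a task-aware controller that produces per-input routing weights over those modules, and a representation alignment loss that penalizes drift from the frozen path. All class names, shapes, and loss weights below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Adapter(nn.Module):
    """Small learnable bottleneck module attached to a frozen backbone layer (assumed design)."""
    def __init__(self, dim, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, h):
        return self.up(F.relu(self.down(h)))

class TaskAwareController(nn.Module):
    """Maps the input representation to soft activation weights over the modules."""
    def __init__(self, dim, num_modules):
        super().__init__()
        self.gate = nn.Linear(dim, num_modules)

    def forward(self, h):
        pooled = h.mean(dim=1)                              # (batch, dim)
        return torch.softmax(self.gate(pooled), dim=-1)     # (batch, num_modules)

class RoutedLayer(nn.Module):
    """Wraps one frozen backbone layer with routable adapters (consistency-constrained dynamic routing)."""
    def __init__(self, frozen_layer, dim, num_modules=4):
        super().__init__()
        self.frozen_layer = frozen_layer
        for p in self.frozen_layer.parameters():
            p.requires_grad = False                         # backbone stays frozen
        self.adapters = nn.ModuleList(Adapter(dim) for _ in range(num_modules))
        self.controller = TaskAwareController(dim, num_modules)

    def forward(self, h):
        base = self.frozen_layer(h)                         # frozen semantic path
        weights = self.controller(h)                        # (batch, num_modules)
        # Weighted combination of module outputs = dynamically selected structural path.
        delta = torch.stack([a(base) for a in self.adapters], dim=1)   # (batch, M, seq, dim)
        delta = (weights[:, :, None, None] * delta).sum(dim=1)
        return base + delta, base

def alignment_loss(routed_out, frozen_out):
    """Representation alignment: keep routed representations close to the frozen path."""
    return F.mse_loss(routed_out, frozen_out)

if __name__ == "__main__":
    dim = 32
    layer = RoutedLayer(frozen_layer=nn.Linear(dim, dim), dim=dim)
    x = torch.randn(2, 10, dim)                             # (batch, seq, dim)
    routed, frozen = layer(x)
    task_loss = routed.pow(2).mean()                        # placeholder for the task objective
    loss = task_loss + 0.1 * alignment_loss(routed, frozen) # 0.1 is an assumed trade-off weight
    loss.backward()                                         # only adapters and controller get gradients
```

In this sketch, the alignment term anchors the routed representation to the frozen backbone's output, which is one plausible way to instantiate the structural consistency regularization described in the abstract; the paper itself may constrain different quantities.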
