Hierarchical Large Language Model Agents for Multi-Scale Planning in Dynamic Environments
Abstract
This study addresses the complexity and uncertainty of agent planning and decision-making in dynamic environments and proposes a hierarchical large language model agent method for multi-scale planning. The approach begins with global goal modeling, in which the language model's semantic reasoning is used to generate a global planning vector. A task decomposition mechanism then refines the global goal into an executable sequence of sub-goals. At the local execution level, the agent employs a policy network for action selection and dynamic adaptation, coordinating the global and local levels. The framework jointly optimizes global sub-goal constraints and local action accuracy, enabling robust hierarchical planning. To validate the effectiveness of the proposed method, experiments were conducted under multiple sensitivity scenarios. The results show that the agent maintains strong stability and recovery ability under varying noise intensities, optimizer types, planning update frequencies, and mutation frequencies and amplitudes, while preserving task decomposition accuracy and overall performance in uncertain and dynamic conditions. In addition, sensitivity experiments on learning rate and weight decay reveal the critical role of these hyperparameters in maintaining the stability of the hierarchical strategy, providing clear guidance for model design and optimization. Overall, the study demonstrates the advantages of hierarchical large language model agents for multi-scale planning in dynamic environments and confirms their robustness and adaptability through detailed experimental design and quantitative evaluation.
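To make the two-level structure concrete, the following is a minimal Python (PyTorch) sketch of the planning loop described above: a global planner maps a goal embedding to a planning vector and a sub-goal sequence, a local policy selects actions conditioned on the active sub-goal, and both are trained under a joint objective. All module names, dimensions, the stubbed-in LLM goal embedding, and the AdamW hyperparameters are illustrative assumptions, not the paper's reported implementation.

```python
# Hedged sketch of the hierarchical planning loop; all sizes and names are assumed.
import torch
import torch.nn as nn


class GlobalPlanner(nn.Module):
    """Maps a goal embedding (e.g., from an LLM encoder) to a global planning
    vector and a sequence of sub-goal embeddings."""

    def __init__(self, goal_dim=64, plan_dim=32, num_subgoals=4):
        super().__init__()
        self.to_plan = nn.Linear(goal_dim, plan_dim)              # global planning vector
        self.to_subgoals = nn.Linear(plan_dim, plan_dim * num_subgoals)
        self.num_subgoals = num_subgoals
        self.plan_dim = plan_dim

    def forward(self, goal_emb):
        plan = torch.tanh(self.to_plan(goal_emb))
        subgoals = self.to_subgoals(plan).view(-1, self.num_subgoals, self.plan_dim)
        return plan, subgoals


class LocalPolicy(nn.Module):
    """Selects actions conditioned on the current observation and active sub-goal."""

    def __init__(self, obs_dim=16, plan_dim=32, num_actions=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + plan_dim, 64), nn.ReLU(),
            nn.Linear(64, num_actions),
        )

    def forward(self, obs, subgoal):
        return self.net(torch.cat([obs, subgoal], dim=-1))        # action logits


# One synthetic joint-optimization step (random tensors stand in for real data).
goal_emb = torch.randn(8, 64)             # stand-in for LLM goal embeddings
obs = torch.randn(8, 16)                  # current observations
target_subgoals = torch.randn(8, 4, 32)   # assumed supervision for decomposition
expert_actions = torch.randint(0, 6, (8,))

planner, policy = GlobalPlanner(), LocalPolicy()
opt = torch.optim.AdamW(
    list(planner.parameters()) + list(policy.parameters()),
    lr=1e-3, weight_decay=1e-2,           # hyperparameters probed in the sensitivity study
)

plan, subgoals = planner(goal_emb)
logits = policy(obs, subgoals[:, 0])      # act on the first active sub-goal

# Joint objective: global sub-goal constraint term + local action accuracy term.
loss = nn.functional.mse_loss(subgoals, target_subgoals) \
       + nn.functional.cross_entropy(logits, expert_actions)

opt.zero_grad()
loss.backward()
opt.step()
print(f"joint loss: {loss.item():.4f}")
```

This sketch is only meant to show how a global sub-goal constraint and a local action loss can be optimized jointly; the paper's actual language model interface, decomposition mechanism, and loss weighting may differ.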