1 Department of Mechanical Engineering, Amirkabir University of Technology, Tehran, Iran
2 Department of Computer Engineering, Amirkabir University of Technology, Tehran, Iran
DOI: 10.22124/jmm.2025.29704.2654
Abstract
Proximal Policy Optimization (PPO) is one of the most widely used methods in reinforcement learning, designed to optimize policy updates while maintaining training stability. However, in complex and high-dimensional environments, maintaining a suitable balance between bias and variance poses a significant challenge. The λ parameter in Generalized Advantage Estimation (GAE) influences this balance by controlling the trade-off between short-term and long-term return estimations. In this study, we propose a method for adaptive adjustment of the λ parameter, where λ is dynamically updated during training instead of remaining fixed. The updates are guided by internal learning signals such as the value function loss and Explained Variance—a statistical measure that reflects how accurately the critic estimates target returns. To further enhance training robustness, we incorporate a Policy Update Delay (PUD) mechanism to mitigate instability from overly frequent policy updates. The main objective of this approach is to reduce dependence on expensive and time-consuming hyperparameter tuning. By leveraging internal indicators from the learning process, the proposed method contributes to the development of more adaptive, stable, and generalizable reinforcement learning algorithms. To assess the effectiveness of the approach, experiments are conducted in four diverse and standard benchmark environments: Ant-v4, HalfCheetah-v4, and Humanoid-v4 from the OpenAI Gym, as well as Quadruped-Walk from the DeepMind Control Suite. The results demonstrate that the proposed method can substantially improve the performance and stability of PPO across these environments. Our implementation is publicly available at https://github.com/naempr/PPO-with-adaptive-GAE.
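For illustration, the sketch below shows a standard GAE computation together with one plausible way a critic-quality signal (Explained Variance) could drive an adaptive λ. The function names (explained_variance, adapt_lambda, gae), the thresholds, the step size, and the direction of the adjustment are illustrative assumptions, not the authors' exact update rule; the actual implementation is available in the repository linked above.

    import numpy as np

    def explained_variance(value_preds, returns):
        # Fraction of return variance captured by the critic (1 = perfect, <= 0 = poor).
        var_returns = np.var(returns)
        return np.nan if var_returns == 0 else 1.0 - np.var(returns - value_preds) / var_returns

    def adapt_lambda(lam, ev, ev_low=0.4, ev_high=0.8, step=0.01,
                     lam_min=0.90, lam_max=0.99):
        # Hypothetical rule: a weak critic (low EV) pushes lambda up, relying more on
        # long-horizon returns (less bias); a strong critic pushes lambda down (less variance).
        # Thresholds and step size are placeholder values, not taken from the paper.
        if ev < ev_low:
            lam = min(lam + step, lam_max)
        elif ev > ev_high:
            lam = max(lam - step, lam_min)
        return lam

    def gae(rewards, values, dones, gamma=0.99, lam=0.95):
        # Standard Generalized Advantage Estimation over one rollout.
        # Assumes float arrays of equal length; the value after the last step is bootstrapped as 0.
        adv = np.zeros(len(rewards), dtype=np.float64)
        last = 0.0
        for t in reversed(range(len(rewards))):
            next_value = values[t + 1] if t + 1 < len(values) else 0.0
            nonterminal = 1.0 - dones[t]
            delta = rewards[t] + gamma * next_value * nonterminal - values[t]
            last = delta + gamma * lam * nonterminal * last
            adv[t] = last
        return adv

In such a scheme, λ would typically be recomputed once per collected rollout (from the Explained Variance of that rollout's value predictions) and then used for the next advantage computation, so the bias-variance trade-off tracks the critic's current accuracy rather than a fixed hand-tuned setting.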