TY - JOUR
T1 - Implementability improvement of deep reinforcement learning based congestion control in cellular network
AU - Naqvi, Haidlir Achmad
AU - Hilman, Muhammad Hafizhuddin
AU - Anggorojati, Bayu
N1 - Funding Information:
Funding: This work was supported by the Indonesia Endowment Fund for Education (Lembaga Pengelola Dana Pendidikan, LPDP).
Declaration of Interest:
The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Haidlir Achmad Naqvi reports financial support was provided by Indonesia Endowment Fund for Education (Lembaga Pengelola Dana Pendidikan) (LPDP).
Publisher Copyright:
© 2023 Elsevier B.V.
PY - 2023/9
Y1 - 2023/9
N2 - The application of deep reinforcement learning to improve the adaptability of congestion control is promising. However, the state-of-the-art method exhibits high packet loss and requires high CPU (central processing unit) usage to handle a flow, which hinders the implementation of deep reinforcement learning-based congestion control in production networks. Therefore, we propose modifications to the agent's deployment design, specifically in the monitoring component, interval, and transport protocol, to reduce packet loss and CPU usage. Unfortunately, these agent modifications yield a tradeoff in throughput performance. To compensate for this tradeoff, we re-train the policy model using ns-3 (network simulator 3) as a gym environment with a custom reward function to improve throughput. Our work shows that the proposed method, evaluated in cellular networks, reduces packet loss by up to 50.7×, suppresses CPU usage by up to 4.13×, and increases throughput by up to 6.94%. We hope our contribution can drive the adoption of deep reinforcement learning-based congestion control in production networks.
AB - The application of deep reinforcement learning to improve the adaptability of congestion control is promising. However, the state-of-the-art method exhibits high packet loss and requires high CPU (central processing unit) usage to handle a flow, which hinders the implementation of deep reinforcement learning-based congestion control in production networks. Therefore, we propose modifications to the agent's deployment design, specifically in the monitoring component, interval, and transport protocol, to reduce packet loss and CPU usage. Unfortunately, these agent modifications yield a tradeoff in throughput performance. To compensate for this tradeoff, we re-train the policy model using ns-3 (network simulator 3) as a gym environment with a custom reward function to improve throughput. Our work shows that the proposed method, evaluated in cellular networks, reduces packet loss by up to 50.7×, suppresses CPU usage by up to 4.13×, and increases throughput by up to 6.94%. We hope our contribution can drive the adoption of deep reinforcement learning-based congestion control in production networks.
KW - Cellular network
KW - Congestion control
KW - Deep reinforcement learning
KW - Implementability
UR - http://www.scopus.com/inward/record.url?scp=85163794354&partnerID=8YFLogxK
U2 - 10.1016/j.comnet.2023.109874
DO - 10.1016/j.comnet.2023.109874
M3 - Article
AN - SCOPUS:85163794354
SN - 1389-1286
VL - 233
JO - Computer Networks
JF - Computer Networks
M1 - 109874
ER -