RI-RPL: a new high-quality RPL-based routing protocol using Q-learning algorithm

Niloofar Zahedy, Behrang Barekatain, Alfonso Ariza Quintana

The Journal of Supercomputing (2024)

Abstract
The lack of a central controller, severe resource constraints, and multi-path data routing have made data exchange one of the fundamental challenges of the Internet of Things. Despite numerous research efforts on various aspects of routing and data exchange, some fundamental challenges remain, such as the immediate negative impact of committing to a single best path and the absence of mechanisms for observing the dynamic conditions of nodes. This study introduces RI-RPL, a method that extends the RPL routing protocol with reinforcement learning to address these challenges effectively. RI-RPL is designed in three general stages. In the first stage, routing decisions are aligned with optimizing the RPL protocol using the Q-learning algorithm. In the second stage, based on learning and convergence, changes in parent selection under varying network conditions are supported. In the third stage, the resulting control and management changes are coordinated. Q-learning was chosen because it can address these challenges effectively without wasting network resources on computation. Simulation results obtained with the Cooja simulator show that, compared to similar recent methods ELBRP, RLQRPL, and RPL, the proposed RI-RPL method improves the successful delivery rate by 4.03%, 13.26%, and 28.87%, end-to-end delay by 3.04%, 9.82%, and 13.12%, energy consumption by 10.43%, 28.91%, and 36.35%, throughput by 10.23%, 28.45%, and 46.88%, and network data loss rate by 15.06%, 34.95%, and 49.66%, respectively.
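The abstract does not give the exact update rule or reward design. A minimal sketch of how Q-learning could drive RPL parent selection is shown below; the reward terms (delivery success, delay, residual parent energy) and the hyperparameters are assumptions for illustration and are not taken from the paper.

```python
# Illustrative sketch only (not the authors' implementation): standard
# Q-learning applied to RPL parent selection at a single node.
import random
from collections import defaultdict

ALPHA = 0.5    # learning rate (assumed)
GAMMA = 0.8    # discount factor (assumed)
EPSILON = 0.1  # exploration probability (assumed)

class QParentSelector:
    """Keeps one Q-value per candidate parent, updates it after each
    forwarded packet, and prefers the best-valued parent."""

    def __init__(self, candidate_parents):
        self.q = defaultdict(float)          # Q-value per parent id
        self.parents = list(candidate_parents)

    def choose_parent(self):
        # Epsilon-greedy: occasionally explore an alternative parent
        if random.random() < EPSILON:
            return random.choice(self.parents)
        return max(self.parents, key=lambda p: self.q[p])

    def reward(self, delivered, delay_s, residual_energy):
        # Hypothetical reward: favor delivery, low delay, healthy parents
        return (1.0 if delivered else -1.0) - delay_s + residual_energy

    def update(self, parent, delivered, delay_s, residual_energy):
        r = self.reward(delivered, delay_s, residual_energy)
        best_next = max(self.q[p] for p in self.parents)
        # Standard Q-learning update
        self.q[parent] += ALPHA * (r + GAMMA * best_next - self.q[parent])

# Example: one node with three candidate parents
selector = QParentSelector(["P1", "P2", "P3"])
parent = selector.choose_parent()
selector.update(parent, delivered=True, delay_s=0.02, residual_energy=0.9)
```

In the paper's terms, the second and third stages would correspond to adapting parent selection as Q-values converge under changing network conditions and to coordinating the resulting control and management changes; the sketch above covers only the per-node learning step.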
Keywords
Internet of Things, Routing, RPL protocol, Service quality, Learning algorithms, Q-learning