Date of Award
2024
Publication Type
Dissertation
Degree Name
Ph.D.
Department
Computer Science
Keywords
Congestion Control; Deep Q-learning; Q-learning; Reinforcement Learning; VANET
Supervisor
Arunita Jaekel
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Abstract
Vehicular Ad Hoc Networks (VANETs) are vital for ensuring traffic safety in autonomous driving systems and intelligent transportation networks, where timely exchange of safety information is crucial. However, VANETs face significant congestion control challenges due to the high mobility of vehicles, dynamic changes in network topology, and increasing vehicle density, which can result in packet loss, delays, and reduced communication reliability. This dissertation addresses the congestion control challenge by exploring both traditional and machine learning-based approaches, with the goal of improving communication efficiency while maintaining vehicle awareness.

Traditional congestion control methods, such as rate-based and power-based approaches, primarily focus on optimizing transmission power and rate. While these methods can mitigate channel congestion, they often lead to non-convex optimization problems, and lowering transmission power or rate can reduce vehicle awareness, compromising traffic safety.

To address these limitations, this dissertation proposes a hybrid congestion control approach that balances channel congestion against vehicle awareness. The core idea is to prioritize awareness of nearby vehicles over distant ones, which is critical for enhancing traffic safety while reducing channel congestion. The dissertation introduces two novel congestion control algorithms: a variable power control method (BACVT) and a hybrid approach (BACVT-H) that combines rate control with power control. Simulation results demonstrate that these algorithms outperform traditional methods in terms of Channel Busy Ratio (CBR) and vehicle awareness, as measured by Inter-Packet Delay (IPD). The BACVT-H method, in particular, achieves the lowest CBR while maintaining strong vehicle awareness, even in high-density traffic scenarios.

To move beyond the limitations of traditional methods, this dissertation then explores the application of Reinforcement Learning (RL) to VANET congestion control. By formulating the problem as a sequential decision-making process with Markov properties, we employ Q-learning to develop an intelligent control framework that allows vehicles to dynamically adjust their transmission parameters based on real-time channel conditions. The RL framework defines a discrete state and action space and uses a reward function to guide the agent's learning process. Simulation results show that Q-learning effectively reduces CBR while maintaining strong vehicle awareness, providing a promising alternative to traditional congestion control methods.

To address the challenges posed by large state and action spaces, this dissertation further applies Deep Q-learning (DQN). The DQN framework leverages neural networks to handle complex, large-scale environments that tabular Q-learning cannot manage. Tested in the same traffic simulation environment, DQN not only handled larger state and action spaces better but also achieved lower CBR and improved performance on other key metrics such as Packet Delivery Ratio (PDR) and Beacon Error Rate (BER). This highlights the potential of DQN to offer superior congestion control solutions in real-world VANET applications.

Despite the progress made in this dissertation, several challenges remain, including scaling congestion control algorithms to large networks with diverse traffic types. Furthermore, the lack of a realistic simulation environment that provides real-time access to channel performance parameters such as CBR, IPD, and BER remains a key limitation. Future work should focus on developing more robust simulation tools and environments that support advanced RL algorithms, enabling more accurate and scalable congestion control solutions.

In conclusion, this dissertation makes significant contributions to the field of VANET congestion control by proposing innovative hybrid control algorithms and applying reinforcement learning to optimize channel utilization and vehicle awareness. The results demonstrate that RL-based methods, particularly Deep Q-learning, offer promising solutions to the congestion control challenges in VANETs, paving the way for more scalable and reliable communication systems in autonomous driving networks.
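For concreteness, the following minimal Python sketch illustrates the kind of tabular Q-learning update the abstract describes, where the agent adjusts its beacon rate from measured channel conditions. The state bins, action set, reward shape, and hyperparameters here are illustrative assumptions for exposition, not the dissertation's actual definitions.

# Hypothetical sketch of tabular Q-learning for VANET congestion control.
# All concrete choices (bins, rates, reward, hyperparameters) are assumptions.
import random
import numpy as np

N_STATES = 10            # CBR in [0, 1] discretized into 10 bins (assumption)
ACTIONS = [1, 2, 5, 10]  # candidate beacon rates in Hz (assumption)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # typical Q-learning hyperparameters

Q = np.zeros((N_STATES, len(ACTIONS)))  # Q-table over (state, action)

def discretize_cbr(cbr: float) -> int:
    """Map a measured CBR in [0, 1] to one of N_STATES bins."""
    return min(int(cbr * N_STATES), N_STATES - 1)

def reward(cbr: float, awareness: float) -> float:
    """Illustrative reward: preserve awareness, penalize deviation from a CBR target."""
    CBR_TARGET = 0.6  # assumed operating point
    return awareness - abs(cbr - CBR_TARGET)

def choose_action(state: int) -> int:
    """Epsilon-greedy action selection over the beacon-rate actions."""
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    return int(np.argmax(Q[state]))

def update(state: int, action: int, r: float, next_state: int) -> None:
    """Standard Q-learning temporal-difference update."""
    td_target = r + GAMMA * np.max(Q[next_state])
    Q[state, action] += ALPHA * (td_target - Q[state, action])

In a simulation loop, each control interval the vehicle would measure CBR and an awareness score, call choose_action to pick a beacon rate, apply it, then call update with the observed reward and the next discretized state.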
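Similarly, a minimal sketch of the Deep Q-learning variant, where a small neural network replaces the Q-table so the state can remain continuous (for example, raw CBR, vehicle density, and measured IPD). The feature choice, network size, and hyperparameters are assumptions, and PyTorch is used purely for illustration.

# Hypothetical DQN sketch: network, replay buffer, and one TD-error update step.
import copy
import random
from collections import deque
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 3, 4        # e.g. (CBR, density, IPD) -> 4 rate levels (assumption)
GAMMA, LR, BATCH = 0.9, 1e-3, 32   # assumed hyperparameters

def make_net() -> nn.Module:
    return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, N_ACTIONS))

q_net = make_net()
target_net = copy.deepcopy(q_net)   # frozen copy, synced periodically
optimizer = torch.optim.Adam(q_net.parameters(), lr=LR)
replay = deque(maxlen=10_000)       # experience replay buffer of (s, a, r, s') tuples

def train_step() -> None:
    """One DQN update: sample a minibatch and minimize the TD error."""
    if len(replay) < BATCH:
        return
    s, a, r, s2 = zip(*random.sample(replay, BATCH))
    s = torch.tensor(s, dtype=torch.float32)
    a = torch.tensor(a, dtype=torch.int64)
    r = torch.tensor(r, dtype=torch.float32)
    s2 = torch.tensor(s2, dtype=torch.float32)
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)        # Q(s, a)
    with torch.no_grad():
        target = r + GAMMA * target_net(s2).max(dim=1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Every few hundred steps: target_net.load_state_dict(q_net.state_dict())

Compared with the tabular version, the network generalizes across nearby channel conditions instead of treating each discretized state independently, which is what lets DQN cope with the larger state and action spaces the abstract refers to.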
Recommended Citation
Liu, Xiaofeng, "Congestion Control for V2V Communication in VANET" (2024). Electronic Theses and Dissertations. 9614.
https://scholar.uwindsor.ca/etd/9614