The growing adoption of electric vehicles (EVs) has underscored the need to optimize routing and charging strategies, particularly given EVs' limited range and longer charging times relative to conventional fuel-based vehicles. This paper presents a novel framework for optimizing EV routing and charging that combines heuristic methods with reinforcement learning. We propose several algorithms, including Time Efficient Routing and Charging (TERC), its enhanced version TERC2, and the K-fastest Path (KFP) method, alongside reinforcement learning techniques such as Q-Learning and Deep Q-Learning (DQL). These algorithms minimize travel time and optimize battery usage by selecting efficient routes and charging strategies. We also introduce a dynamic variant of TERC2 that adapts to real-time traffic and environmental changes, improving EV routing efficiency in both urban and rural environments. In extensive simulations, our methods achieve significant reductions in total travel time and energy consumption compared with traditional heuristic and state-of-the-art baselines. Our findings show that reinforcement learning, particularly Deep Q-Learning, performs best in dynamically changing environments by learning optimal policies from real-time feedback.
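To make the Q-Learning formulation concrete, the sketch below applies tabular Q-learning to a toy routing-and-charging instance. Everything here is an illustrative assumption, not the paper's actual setup: the graph, travel times, energy costs, reward shaping, and hyperparameters are invented. The state is a (node, battery) pair, actions either traverse an affordable edge or recharge at a charging node, and the reward is the negative elapsed time, so the agent learns when a recharge detour pays off.

```python
import random

# Toy road network (illustrative only): each edge is
# (neighbor, travel_time, energy_cost).
GRAPH = {
    "A": [("B", 4, 2), ("C", 2, 1)],
    "B": [("D", 5, 3)],
    "C": [("B", 1, 1), ("D", 8, 4)],
    "D": [],
}
CHARGERS = {"C"}   # nodes with a charging station
MAX_BATTERY = 4    # battery capacity in energy units
CHARGE_TIME = 3    # time penalty for a full recharge
GOAL_REWARD = 100  # terminal bonus for reaching the destination

def actions(state):
    """Feasible actions: traverse an edge the battery can afford, or recharge."""
    node, battery = state
    acts = [("move", nbr) for nbr, _, e in GRAPH[node] if e <= battery]
    if node in CHARGERS and battery < MAX_BATTERY:
        acts.append(("charge", node))
    return acts

def step(state, action):
    """Deterministic transition; reward is negative elapsed time."""
    node, battery = state
    kind, target = action
    if kind == "charge":
        return (node, MAX_BATTERY), -CHARGE_TIME
    for nbr, t, e in GRAPH[node]:
        if nbr == target:
            return (nbr, battery - e), -t
    raise ValueError("invalid action")

def q_learning(start, goal, episodes=2000, alpha=0.5, gamma=1.0, eps=0.2):
    """Tabular Q-learning over (node, battery) states with eps-greedy exploration."""
    Q = {}
    for _ in range(episodes):
        state = (start, MAX_BATTERY)
        for _ in range(20):  # step cap per episode
            acts = actions(state)
            if state[0] == goal or not acts:
                break
            if random.random() < eps:
                act = random.choice(acts)
            else:
                act = max(acts, key=lambda a: Q.get((state, a), 0.0))
            nxt, r = step(state, act)
            if nxt[0] == goal:
                r += GOAL_REWARD
            future = max((Q.get((nxt, a), 0.0) for a in actions(nxt)),
                         default=0.0)
            Q[(state, act)] = ((1 - alpha) * Q.get((state, act), 0.0)
                               + alpha * (r + gamma * future))
            state = nxt
    return Q

def greedy_route(Q, start, goal):
    """Follow the learned policy; returns (visited nodes, total travel time)."""
    state, route, total_time = (start, MAX_BATTERY), [start], 0
    while state[0] != goal:
        act = max(actions(state), key=lambda a: Q.get((state, a), 0.0))
        state, r = step(state, act)
        route.append(state[0])
        total_time -= r
    return route, total_time
```

On this toy instance the direct paths from A to D exceed the battery capacity, so the learned policy detours through the charger at C, recharges, and continues via B to D; a deep Q-network would replace the dictionary `Q` with a function approximator over the same state-action structure.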
Comparative Analysis of Reinforcement Learning Approaches for Electric Vehicle Routing and Charging Optimization
Repository: majidghassemi/EV_RL