Intelligent traffic signal controllers, which apply deep Q-network (DQN) algorithms to traffic light policy optimization, efficiently reduce congestion by adapting signal timings to real-time traffic. However, most approaches in the literature assume that every vehicle at the intersection is detected, which is unrealistic. Recently, new wireless communication technologies have enabled cost-efficient detection of connected vehicles by roadside infrastructure. With only a small fraction of the total fleet currently equipped, methods able to perform under low detection rates are desirable. In this paper, we propose a deep reinforcement Q-learning model to optimize traffic signal control at an isolated intersection, in a partially observable environment with connected vehicles. First, we present the novel DQN model within the RL framework: we introduce a new state representation for partially observable environments and a new reward function for traffic signal control, and provide a network architecture and tuned hyper-parameters. Second, we evaluate the model's performance in numerical simulations on multiple scenarios, in two steps: first under full detection against existing actuated controllers, then under partial detection, estimating the performance loss for different proportions of connected vehicles. Finally, from these results, we define detection-rate thresholds for acceptable and optimal performance levels.
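To make the partial-observability idea concrete, below is a minimal Python sketch of a DQN signal controller whose state is built only from detected (connected) vehicles. The per-lane count/waiting-time encoding, the waiting-time-reduction reward, the lane and phase counts, and all hyper-parameters here are illustrative assumptions, not the state representation, reward function, architecture, or tuning reported in the paper.

```python
# Illustrative sketch only -- NOT the authors' implementation.
# Assumed design: state = per-lane detected-vehicle counts and waiting times
# plus a one-hot of the current phase; reward = reduction in cumulative
# waiting time of detected vehicles between decision steps.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

N_LANES = 8    # assumed number of incoming lanes at the intersection
N_PHASES = 4   # assumed number of signal phases (= actions)

def build_state(detected_vehicles, current_phase):
    """Encode only connected vehicles: (lane index, waiting time in s) pairs."""
    counts, waits = np.zeros(N_LANES), np.zeros(N_LANES)
    for lane, wait in detected_vehicles:
        counts[lane] += 1
        waits[lane] += wait
    phase = np.eye(N_PHASES)[current_phase]
    return np.concatenate([counts, waits, phase]).astype(np.float32)

def reward(prev_total_wait, total_wait):
    """Positive when the cumulative waiting time of detected vehicles drops."""
    return prev_total_wait - total_wait

class QNet(nn.Module):
    def __init__(self, state_dim=2 * N_LANES + N_PHASES, n_actions=N_PHASES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)

class DQNAgent:
    def __init__(self, gamma=0.95, lr=1e-3, eps=0.1, buffer_size=10_000):
        self.q, self.target = QNet(), QNet()
        self.target.load_state_dict(self.q.state_dict())
        self.opt = torch.optim.Adam(self.q.parameters(), lr=lr)
        self.buffer = deque(maxlen=buffer_size)
        self.gamma, self.eps = gamma, eps

    def act(self, state):
        # Epsilon-greedy choice of the next signal phase.
        if random.random() < self.eps:
            return random.randrange(N_PHASES)
        with torch.no_grad():
            return int(self.q(torch.from_numpy(state)).argmax())

    def remember(self, s, a, r, s2, done):
        self.buffer.append((s, a, r, s2, done))

    def train_step(self, batch_size=32):
        # Standard DQN update from a replay-buffer minibatch.
        if len(self.buffer) < batch_size:
            return
        s, a, r, s2, d = map(np.array, zip(*random.sample(self.buffer, batch_size)))
        s, s2 = torch.from_numpy(s), torch.from_numpy(s2)
        q_sa = self.q(s).gather(1, torch.from_numpy(a).long().unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = torch.from_numpy(r.astype(np.float32)) + self.gamma * \
                (1 - torch.from_numpy(d.astype(np.float32))) * self.target(s2).max(1).values
        loss = nn.functional.mse_loss(q_sa, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
```

In this sketch, partial detection shows up only through `build_state` and `reward`: undetected vehicles simply never enter the state or the waiting-time sums, so the same agent can be evaluated at any detection rate by filtering which vehicles are passed in.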