Comparative Study of End-to-end Deep Learning Methods for Self-driving Car



Fenjiro Youssef 1,*, Benbrahim Houda 1

1. National School of Computer Science and Systems Analysis (ENSIAS), Mohammed V University in Rabat, Rabat 8007, Morocco

* Corresponding author.


Received: 7 May 2019 / Revised: 11 Dec. 2019 / Accepted: 14 Jun. 2020 / Published: 8 Oct. 2020

Index Terms

Self-driving car, deep learning, imitation learning, deep reinforcement learning


Abstract

The self-driving car is one of the most fascinating applications and most active research areas in artificial intelligence. It relies on end-to-end deep learning models to make steering and speed decisions, mainly using Convolutional Neural Networks for computer vision, connected to a fully connected network that outputs the control commands. In this paper, we introduce the self-driving car domain and the CARLA simulation environment, with a focus on the lane-keeping task. We then present the two main families of end-to-end models used to solve this problem: first, deep imitation learning (IL), and specifically the Conditional Imitation Learning (COIL) algorithm, which learns from expert-labeled demonstrations and tries to mimic the expert's behavior; and second, deep reinforcement learning (DRL), specifically DQN and DDPG (Deep Q-Network and Deep Deterministic Policy Gradient), which learn by trial and error, modeling the task as a Markov decision process (MDP) to obtain the best policy for the driving agent. In the last chapter, we compare the IL and DRL algorithms using a new approach, with metrics drawn both from deep learning (the loss during the training phase) and from self-driving (the episode's duration before a crash and the average distance from the road center during the testing phase). The results of training and testing on the CARLA simulator reveal that the IL algorithm performs better than the DRL algorithms when the agents are evaluated on the circuit they were trained on, but the DRL agents show better adaptability on new roads.
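To make the contrast between the two learning signals concrete, the following is a minimal Python sketch: imitation learning minimizes a supervised loss against expert control commands, while DQN regresses Q-values toward the Bellman target. This is an illustrative sketch only, not the authors' implementation; the function names and values are hypothetical.

```python
def dqn_target(reward, next_q_values, gamma=0.99, done=False):
    """Bellman target used by DQN: r + gamma * max_a' Q(s', a').

    `next_q_values` is the list of Q-value estimates for each action
    in the next state; terminal transitions bootstrap nothing.
    """
    if done:
        return reward
    return reward + gamma * max(next_q_values)


def imitation_loss(predicted, expert):
    """Mean squared error between the network's predicted control
    commands (e.g. steering, throttle) and the expert's commands,
    as minimized in behavior-cloning-style imitation learning."""
    return sum((p - e) ** 2 for p, e in zip(predicted, expert)) / len(expert)


# Example: one DQN target and one imitation-learning loss value.
target = dqn_target(1.0, [0.5, 2.0])          # 1.0 + 0.99 * 2.0 = 2.98
loss = imitation_loss([0.1, 0.6], [0.0, 0.5])  # small error vs. expert
```

The key difference visible here is the source of supervision: the imitation loss needs an expert label for every frame, whereas the DQN target is built from the environment's own reward signal, which is why DRL agents can keep improving on roads no expert ever demonstrated.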

Cite This Paper

Fenjiro Youssef, Benbrahim Houda, "Comparative Study of End-to-end Deep Learning Methods for Self-driving Car", International Journal of Intelligent Systems and Applications (IJISA), Vol.12, No.5, pp.15-27, 2020. DOI:10.5815/ijisa.2020.05.02

