Multi-target recognition and detection algorithm for unmanned vehicles

Date: February 2021 - June 2021 (Bachelor project)

The project is part of the development of a virtual test environment for unmanned vehicles and the verification of perception and control strategies, covering image recognition, LiDAR perception, trajectory planning, and vehicle control.

Methodology

The framework of the project is shown in Fig. 1. The entire project consists of two parts:

  • Sensor data acquisition and state recognition
  • Trajectory planning and control

The autonomous vehicle acquires the surrounding driving environment through sensors and constructs maps. Based on this, the vehicle’s trajectory is planned, and vehicle control is implemented. One of the project’s goals is to build a virtual environment in Prescan for algorithm testing and validation in real-world scenarios.

I was mainly responsible for the image processing part of the perception system, using an industrial camera to capture road information and conduct lane line, pedestrian, and vehicle detection.

Fig. 1 Framework of the Project

The purpose of developing the virtual test environment is to simulate and verify the vehicle control strategy on top of the perceived virtual environment. However, due to limited time, only offline testing of the perception part was completed. The virtual environment is shown in Fig. 2.

Fig. 2 Prescan virtual environment

The main work presented here is image recognition, covering lane lines, pedestrians, and vehicles.

  1. Lane line detection: the image is first preprocessed to obtain a binary result; sliding windows then collect the lane-line pixels, and a quadratic polynomial is fitted to each line. From the geometric relationship between the camera and the ground, the vehicle's lateral position can be calculated. A minimal sketch follows Fig. 3.

Fig. 3 Lane line detection
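Below is a minimal sketch of this pipeline using OpenCV and NumPy. It assumes an already-undistorted, bird's-eye-view input image; the threshold, window count, margin, and ground-scale values are illustrative placeholders, not the project's tuned parameters.

```python
import cv2
import numpy as np

def fit_lane_lines(bgr, n_windows=9, margin=80, min_pixels=50):
    # 1. Preprocess: grayscale + threshold to a binary lane-pixel image.
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY)

    # 2. Seed the search with histogram peaks over the lower half of the image.
    histogram = np.sum(binary[binary.shape[0] // 2:, :], axis=0)
    midpoint = histogram.shape[0] // 2
    seeds = {"left": int(np.argmax(histogram[:midpoint])),
             "right": int(np.argmax(histogram[midpoint:])) + midpoint}

    nonzero_y, nonzero_x = binary.nonzero()
    window_height = binary.shape[0] // n_windows
    fits = {}

    # 3. Slide a window up the image for each line, re-centering it on the
    #    mean x of the pixels captured in the previous window.
    for side, center in seeds.items():
        collected = []
        for w in range(n_windows):
            y_hi = binary.shape[0] - w * window_height
            y_lo = y_hi - window_height
            in_win = ((nonzero_y >= y_lo) & (nonzero_y < y_hi) &
                      (nonzero_x >= center - margin) &
                      (nonzero_x < center + margin))
            idx = in_win.nonzero()[0]
            collected.append(idx)
            if len(idx) > min_pixels:
                center = int(np.mean(nonzero_x[idx]))
        idx = np.concatenate(collected)
        # 4. Quadratic fit x = a*y^2 + b*y + c through the collected pixels.
        fits[side] = np.polyfit(nonzero_y[idx], nonzero_x[idx], 2)
    return fits

def lateral_offset(fits, img_shape, m_per_px=3.7 / 700):
    # Vehicle offset from the lane center at the bottom of the image,
    # assuming the camera is mounted on the vehicle centerline; the
    # 3.7 m / 700 px ground scale is a placeholder calibration.
    y = img_shape[0] - 1
    lane_center = (np.polyval(fits["left"], y) +
                   np.polyval(fits["right"], y)) / 2
    return (img_shape[1] / 2 - lane_center) * m_per_px
```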

  2. Pedestrian and vehicle detection: the YOLOv5 algorithm was deployed for target detection; a minimal inference sketch follows this list.
  3. Target detection and LiDAR data fusion: by inverting the camera projection, each rectangular bounding box on the pixel plane is converted into a cone (frustum) in the LiDAR coordinate system, which labels the category of the obstacles detected by the LiDAR, as shown in Fig. 4. A sketch follows the figure caption.
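For the detection step, YOLOv5 publishes a torch.hub interface, so a minimal inference sketch looks like the following. The yolov5s weights and the COCO class filter are assumptions; the project may have used a different model size or custom-trained weights.

```python
import numpy as np
import torch

# Load YOLOv5 through its published torch.hub entry point.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

def detect(frame_bgr):
    # The hub wrapper expects RGB, so flip OpenCV's BGR channel order.
    results = model(frame_bgr[..., ::-1])
    boxes = results.xyxy[0].cpu().numpy()  # rows: x1, y1, x2, y2, conf, cls
    # Keep only pedestrians (COCO class 0) and cars (class 2).
    return boxes[np.isin(boxes[:, 5].astype(int), [0, 2])]
```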

Fig. 4 Target detection and LiDAR data fusion
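The box-to-frustum conversion can equivalently be implemented by projecting the LiDAR points into the image and keeping those that fall inside a detection box, which selects exactly the points inside that box's viewing cone. The sketch below assumes a calibrated pinhole intrinsic matrix K and a rigid LiDAR-to-camera transform (R, t); these calibration values are placeholders, not the project's actual extrinsics.

```python
import numpy as np

def points_in_box_frustum(box_xyxy, points_lidar, K, R, t):
    """Mask of LiDAR points whose image projection falls inside the box."""
    # Rigid transform into the camera frame: p_cam = R @ p_lidar + t.
    p_cam = points_lidar @ R.T + t
    in_front = p_cam[:, 2] > 0  # discard points behind the camera
    # Pinhole projection: u = fx*x/z + cx, v = fy*y/z + cy.
    z = np.clip(p_cam[:, 2:3], 1e-6, None)  # avoid division by zero
    uv = (p_cam @ K.T)[:, :2] / z
    x1, y1, x2, y2 = box_xyxy
    inside = ((uv[:, 0] >= x1) & (uv[:, 0] <= x2) &
              (uv[:, 1] >= y1) & (uv[:, 1] <= y2))
    return in_front & inside
```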

Result

In the result view, the detected lane lines are marked in lavender in the lower-left corner. A combination of YOLOv5 and DeepSORT was adopted to detect and track pedestrians and vehicles, which are enclosed in green bounding boxes in the upper-left corner; a minimal tracking sketch is shown below.
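As a rough illustration of the tracking stage, the sketch below feeds YOLOv5-style detections into the open-source deep-sort-realtime package. That package is a stand-in assumption; the post does not say which DeepSORT implementation was actually used.

```python
from deep_sort_realtime.deepsort_tracker import DeepSort

tracker = DeepSort(max_age=30)  # drop tracks unseen for 30 frames

def track(frame_bgr, boxes):
    # deep-sort-realtime expects ([left, top, width, height], conf, class)
    # tuples, while YOLOv5 emits (x1, y1, x2, y2, conf, cls) rows.
    dets = [([x1, y1, x2 - x1, y2 - y1], conf, int(cls))
            for x1, y1, x2, y2, conf, cls in boxes]
    tracks = tracker.update_tracks(dets, frame=frame_bgr)
    # Return stable track IDs with (left, top, right, bottom) boxes.
    return [(t.track_id, t.to_ltrb()) for t in tracks if t.is_confirmed()]
```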

Fig. 5 ROS-based unmanned vehicle perception system: (a) real-scene recognition; (b) virtual-environment recognition
