Submitter and Co-author information

Adonay Tecle
Dr. Jalal Ahamed

Standing

Undergraduate

Type of Proposal

Visual Presentation (Poster, Installation, Demonstration)

Faculty

Faculty of Engineering

Proposal

Autonomous vehicles promise a safer, more convenient driving experience, and there is strong demand for a cost-effective, accurate, stable, and precise navigation system to support them. Inertial navigation systems combined with camera-based visual guidance show great promise for stable, intelligent autonomy. GPS-based navigation alone cannot provide pinpoint accuracy and requires a direct line of sight to the satellites; inertial navigation, in contrast, works anywhere, at any time. The focus of this research is to deploy an autonomous vehicle navigation system built on sensor fusion of an inertial measurement unit (IMU) and a smart camera. The proposed navigation system uses the embedded IMU and camera to generate a complete assistive map of the surroundings, with relative positions, in any environment. The mapping fuses image data from the camera with position data from the IMU and produces a heat map whose colour intensity indicates how close obstacles are. However, inertial sensor drift can degrade the sensor fusion and the obstacle map by introducing errors in the estimated positions and distances of detected obstacles. To mitigate this drift, data from the IMU and the smart camera are fused to recover an accurate map of the environment. This sensor fusion incorporates artificial intelligence through a deep convolutional neural network (CNN). Convolutional neural networks are loosely modelled on the human brain and are trained here to identify obstacles of interest in any environment. The smart camera mounted on the vehicle feeds real-time video to the trained CNN for inference, which is used for real-time obstacle avoidance. This work in progress aims to correct sensor drift in the embedded IMU using the trained CNN, providing greater autonomy toward reliable assistive navigation.
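The drift-correction idea described above can be illustrated with a minimal sketch. This is not the authors' implementation: it is a simple 1-D complementary filter, under the assumption that IMU dead-reckoning drifts over time and that the camera occasionally supplies an absolute position fix (e.g. from a recognized obstacle at a known range). All names and numbers are illustrative.

```python
def fuse_1d(imu_deltas, camera_fixes, alpha=0.9):
    """Blend drifting IMU dead-reckoning with sparse camera fixes.

    imu_deltas:   per-step displacement increments integrated from the
                  IMU (these accumulate bias, i.e. drift).
    camera_fixes: dict {step_index: absolute_position} of occasional
                  camera-derived position observations.
    alpha:        trust placed in the IMU estimate when a fix arrives
                  (1 - alpha is the weight given to the camera).
    """
    est = 0.0
    history = []
    for i, delta in enumerate(imu_deltas):
        est += delta                          # dead-reckon from the IMU
        if i in camera_fixes:                 # sparse absolute fix
            est = alpha * est + (1 - alpha) * camera_fixes[i]
        history.append(est)
    return history


# Example: true motion is 1.0 per step, but the IMU reports 1.1
# (a constant bias), so the uncorrected estimate drifts to 5.5 after
# five steps. A single camera fix at step 4 pulls it back toward 5.0.
corrected = fuse_1d([1.1] * 5, {4: 5.0}, alpha=0.9)
```

In the full system a Kalman filter over 3-D pose would replace this scalar blend, but the principle is the same: the camera's absolute observations bound the unbounded error growth of inertial integration.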

Location

University of Windsor

Grand Challenges

Viable, Healthy and Safe Communities

Special Considerations

Results of this in-progress research will be shown via a PowerPoint presentation.


Autonomous Inertial Based Vehicle Navigation Assisted by Convolutional Neural Network for Obstacle Avoidance
