Autonomous Control for Quadcopters

Dr. Yiqiang Han (Department of Mechanical Engineering)
Alex Krolicki (Department of Mechanical Engineering)
Phillip Do (School of Computing)
Rowan Desjardins (Department of Aerospace Engineering, Purdue University)

Abstract

Our goal for the summer CI was to learn about the application and deployment of deep neural networks (DNNs). We set out to understand how an autonomous drone functions by building our own. NVIDIA's redtail project provided a road map to the software infrastructure and hardware involved in achieving a fully functional autonomous robot capable of navigating forest trails. This experience gave us the tools we will need to develop autonomous robots for a variety of future applications.

System Operation

The camera provides an input image to the onboard computer, a compact and powerful embedded platform that performs graphically intense computations efficiently. The camera image is taken as input by the path-planning and object-detection neural networks. These networks are trained on labeled data sets to build reference patterns that they use to classify new images. Once the controller receives the predicted position, orientation, and potential obstacles from the neural networks, the computer transmits a new yaw angle to the flight controller. The flight controller drives the electric motors to steer toward the center of the path while maintaining a constant altitude and forward velocity. An onboard lidar sensor maintains a stable height above the ground, and an optical flow sensor measures the velocity of the drone.
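To make the data flow concrete, here is a minimal rospy sketch of the steering loop, assuming the MAVROS bridge is running between ROS and the PX4 flight controller; the /trailnet/yaw_rate topic name and the cruise speed are hypothetical placeholders, not values from our system.

    #!/usr/bin/env python
    # Minimal sketch: forward the DNN's yaw correction to the flight
    # controller as a velocity setpoint via MAVROS. The MAVROS topic is
    # standard; the /trailnet/yaw_rate topic is a hypothetical name.
    import rospy
    from std_msgs.msg import Float32
    from geometry_msgs.msg import TwistStamped

    FORWARD_SPEED = 1.0  # m/s, constant cruise speed (assumed value)

    class TrailFollower(object):
        def __init__(self):
            self.yaw_rate = 0.0
            rospy.Subscriber('/trailnet/yaw_rate', Float32, self.on_yaw)
            self.pub = rospy.Publisher('/mavros/setpoint_velocity/cmd_vel',
                                       TwistStamped, queue_size=1)

        def on_yaw(self, msg):
            self.yaw_rate = msg.data  # rad/s, from the path-planning DNN

        def spin(self):
            rate = rospy.Rate(20)  # stream setpoints continuously
            while not rospy.is_shutdown():
                cmd = TwistStamped()
                cmd.header.stamp = rospy.Time.now()
                cmd.twist.linear.x = FORWARD_SPEED   # hold forward velocity
                cmd.twist.linear.z = 0.0             # altitude held by autopilot
                cmd.twist.angular.z = self.yaw_rate  # steer toward trail center
                self.pub.publish(cmd)
                rate.sleep()

    if __name__ == '__main__':
        rospy.init_node('trail_follower')
        TrailFollower().spin()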

[Figure: system overview with numbered components]

  1. Camera
  2. Used to detect objects such as people and pets.
  3. Neural net used to navigate a trail.
  4. Takes output data from the DNN and controls the drone.
  5. Used to initialize autonomous flight and stop any unwanted flight behavior.
  6. Allows communication via MAVLink from the ground station to ROS.

[Photos: testing hardware communication; first autonomous flight test]

Hardware

  • Compute Module – NVIDIA® Jetson™ TX2

    An embedded AI computing platform.

  • Camera Input – Logitech C910

    Standard USB web camera that provides images to the TrailNet DNN and to YOLOv3 for obstacle avoidance.

  • Flight Controller – Pixracer

    The flight controller contains onboard sensors whose data are used to adjust the flight dynamics.

  • Downward-Facing Sensors – PX4FLOW & LeddarOne

    The PX4FLOW is a smart camera that provides optical flow data used to calculate velocity.

    The LeddarOne provides distance measurements that allow the drone to hold its altitude accurately.

Software

  • Flight Controller – PX4 autopilot

  • Communication – MAVLink

  • Compute Module – ROS Kinetic

  • Compute Module – GStreamer
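
On the Jetson, GStreamer moves camera frames into the vision pipeline. The sketch below pulls frames into OpenCV through its GStreamer backend (this requires OpenCV built with GStreamer support); the device path and capture caps are assumptions, not our exact configuration.

    # Sketch: pull frames from the USB camera through GStreamer into
    # OpenCV, ready to hand to the neural networks. /dev/video0 and the
    # caps below are assumptions.
    import cv2

    pipeline = ('v4l2src device=/dev/video0 ! '
                'video/x-raw,width=640,height=360,framerate=30/1 ! '
                'videoconvert ! video/x-raw,format=BGR ! '
                'appsink drop=true max-buffers=1')

    cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
    if not cap.isOpened():
        raise RuntimeError('GStreamer pipeline failed to open')

    while True:
        ok, frame = cap.read()  # BGR image for the DNNs
        if not ok:
            break
        # frame -> TrailNet / YOLOv3 inference would go here
    cap.release()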

Neural Networks

  • TrailNet DNN

    A convolutional neural network (CNN) that predicts the orientation and position of the drone with respect to the center of the trail.

  • YOLOv3

    A single-stage convolutional object detector that predicts bounding boxes and class labels for objects in the image frame.
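
Smolyanskiy et al. (2017) describe mixing TrailNet's two softmax heads (a 3-way view orientation and a 3-way lateral offset) into a single steering correction. The sketch below illustrates that idea; the gains and sign conventions are assumptions for illustration, not the paper's exact values.

    # Sketch of the steering rule from Smolyanskiy et al. (2017):
    # TrailNet's orientation and offset heads are blended into one yaw
    # correction. Gains and signs here are illustrative assumptions.
    def steering_angle(p_view, p_offset, beta_view=0.9, beta_offset=0.5):
        """p_view   = (p_left, p_center, p_right): probability the camera
                      is rotated left/straight/right relative to the trail.
           p_offset = (p_left, p_center, p_right): probability the drone is
                      offset left/centered/right of the trail axis.
           Returns a yaw correction in radians (positive = turn left)."""
        view_term = p_view[2] - p_view[0]        # rotated right -> turn left
        offset_term = p_offset[2] - p_offset[0]  # offset right -> turn left
        return beta_view * view_term + beta_offset * offset_term

    # Example: camera slightly rotated right of the trail, roughly centered.
    print(steering_angle((0.1, 0.6, 0.3), (0.2, 0.6, 0.2)))  # small left turn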

Quadcopter Dynamics

Performance/Results

  • Power Consumption
    • The bulk of the power consumption comes from the motors, so reducing weight will save us the most flight time.
    • We estimated 9 minutes of flight time; in practice we averaged 15 minutes (a back-of-the-envelope endurance estimate is sketched after this list).
    • Reducing the number of onboard components would greatly extend mission endurance.
  • Autonomous controller accuracy
    • The accuracy of the controller depends on the quality of the input sensor data and on the training data used to predict the drone's location and orientation with respect to the center of the path.
    • The controller achieves impressive accuracy without needing explicit information about the path it will follow, making it well suited to real-world use in previously unexplored environments.
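
As referenced in the power-consumption item above, a back-of-the-envelope endurance estimate shows how a flight-time figure like the 9-minute prediction can be derived; every number below is a hypothetical placeholder, not a measurement from our drone.

    # Back-of-the-envelope endurance estimate: flight time in minutes is
    # usable battery capacity divided by average current draw. All numbers
    # are hypothetical placeholders, not measurements from our drone.
    CAPACITY_AH = 5.0         # battery capacity in amp-hours
    USABLE_FRACTION = 0.8     # avoid deep-discharging the LiPo
    HOVER_CURRENT_A = 20.0    # motors dominate the draw
    AVIONICS_CURRENT_A = 1.5  # Jetson TX2, sensors, radios

    total_a = HOVER_CURRENT_A + AVIONICS_CURRENT_A
    flight_time_min = 60.0 * CAPACITY_AH * USABLE_FRACTION / total_a
    print('%.1f minutes' % flight_time_min)  # ~11.2 minutes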

Conclusions

  • It is possible to control a quadcopter using artificial intelligence.
  • Good accuracy can be obtained if the hardware is well calibrated and reasonably accurate.
  • Good training data will help the drone make accurate predictions about the center of a trail that it has not seen before.
  • It is important to troubleshoot problems with others to gain a new perspective about the problem and possible solutions.

Future Plans

  • Transfer the knowledge gained from this project to build a smaller drone in order to compete in future competitions.
  • Create and evaluate our own training data sets to train the neural network to navigate in urban environments.
  • Integrate custom hardware into the Robot Operating System (ROS) to perform package pick-up and drop-off maneuvers.
  • Use a stereo camera to capture image depth to build a 3D map of the environment using a point cloud for obstacle avoidance.
  • Develop future projects on the more compact and inexpensive NVIDIA Jetson Nano.

References

Smolyanskiy, N., Kamenev, A., Smith, J., & Birchfield, S. (2017). Toward low-flying autonomous MAV trail navigation using deep neural networks for environmental awareness. arXiv preprint arXiv:1705.02550. Retrieved August 3, 2019, from https://arxiv.org/abs/1705.02550