
Apollo 2.0 Software Architecture

Core software modules running on the Apollo 2.0 powered autonomous vehicle include:

  • Perception — The perception module identifies the world surrounding the autonomous vehicle. There are two important submodules inside perception: obstacle detection and traffic light detection.
  • Prediction — The prediction module anticipates the future motion trajectories of the perceived obstacles.
  • Routing — The routing module tells the autonomous vehicle how to reach its destination via a series of lanes or roads.
  • Planning — The planning module plans the spatio-temporal trajectory for the autonomous vehicle to take.
  • Control — The control module executes the planned spatio-temporal trajectory by generating control commands such as throttle, brake, and steering.
  • CanBus — The CanBus is the interface that passes control commands to the vehicle hardware. It also passes chassis information to the software system.
  • HD-Map — This module is similar to a library. Instead of publishing and subscribing to messages, it frequently serves as a query engine, providing ad-hoc structured information about the roads.
  • Localization — The localization module leverages various information sources such as GPS, LiDAR and IMU to estimate where the autonomous vehicle is located.

The interactions of these modules are illustrated in the picture below.

[Figure: interactions among the Apollo 2.0 core software modules]

Every module runs as a separate CarOS-based ROS node. Each module node publishes and subscribes to certain topics: the subscribed topics serve as data inputs, while the published topics serve as data outputs. The detailed interactions are described in the following sections.
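
A minimal sketch of that per-module pattern is shown below, using plain roscpp with std_msgs::String standing in for Apollo's protobuf-backed message types and made-up topic names; it illustrates the publish/subscribe shape, not Apollo's actual code:

```cpp
// Minimal sketch of the per-module node pattern; std_msgs::String stands
// in for Apollo's protobuf-backed messages and the topic names are made up.
#include <ros/ros.h>
#include <std_msgs/String.h>

ros::Publisher g_output_pub;

// The subscribed topic is the data input; the callback turns it into output.
void OnInput(const std_msgs::String::ConstPtr& msg) {
  std_msgs::String out;
  out.data = "processed: " + msg->data;
  g_output_pub.publish(out);  // the published topic is the data output
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "example_module");
  ros::NodeHandle nh;
  g_output_pub = nh.advertise<std_msgs::String>("/example/output", 10);
  ros::Subscriber sub = nh.subscribe("/example/input", 10, OnInput);
  ros::spin();  // dispatch callbacks until shutdown
  return 0;
}
```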

Perception

Perception depends on raw sensor data such as LiDAR point cloud data and camera data. In addition to these raw sensor inputs, traffic light detection also depends on the localization data as well as the HD-Map. Because real-time ad-hoc traffic light detection is computationally infeasible, traffic light detection uses localization to determine when and where to start looking for traffic lights in the camera-captured pictures.
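
A hedged sketch of that gating logic follows; QuerySignalsAhead and RunCameraDetection are hypothetical stand-ins rather than Apollo's real HD-Map or perception interfaces:

```cpp
// Sketch of localization/HD-Map gated traffic light detection.
// QuerySignalsAhead and RunCameraDetection are hypothetical stand-ins.
#include <cstdio>
#include <vector>

struct Pose { double x, y, heading; };
struct Signal { double x, y; };  // stop-line position of a traffic light

// Hypothetical: ask the HD-Map for signals within `range` meters ahead.
std::vector<Signal> QuerySignalsAhead(const Pose& pose, double range) {
  (void)pose; (void)range;
  return {};  // a real query would search the map graph around the pose
}

// Hypothetical: the (expensive) camera-based detector.
void RunCameraDetection(const std::vector<Signal>& signals) {
  std::printf("running detection on %zu signal(s)\n", signals.size());
}

// Detection only runs when the map says a traffic light is coming up.
void OnLocalization(const Pose& pose) {
  std::vector<Signal> signals = QuerySignalsAhead(pose, 100.0);
  if (!signals.empty()) {
    RunCameraDetection(signals);
  }
}

int main() {
  OnLocalization(Pose{0.0, 0.0, 0.0});
  return 0;
}
```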

Prediction

The prediction module estimates the future motion trajectories for all the perceived obstacles. The output prediction message wraps the perception information. Prediction subscribes to both localization and perception obstacle messages as shown below.

[Figure: prediction module subscriptions to localization and perception obstacle messages]

When a localization update is received, the prediction module updates its internal status. The actual prediction is triggered when perception publishes its perception obstacle message.
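
A sketch of those two callbacks, again with std_msgs::String standing in for the real localization and perception obstacle message types and with illustrative topic names:

```cpp
// Sketch of the prediction module's two callbacks (placeholder types/names).
#include <ros/ros.h>
#include <std_msgs/String.h>

ros::Publisher g_prediction_pub;
std_msgs::String g_latest_localization;  // internal status

// A localization update only refreshes internal state; no output is produced.
void OnLocalization(const std_msgs::String::ConstPtr& msg) {
  g_latest_localization = *msg;
}

// A perception obstacles message is what actually triggers prediction.
void OnPerceptionObstacles(const std_msgs::String::ConstPtr& msg) {
  std_msgs::String prediction;
  // The output wraps the perceived obstacles and adds predicted trajectories.
  prediction.data = "obstacles=" + msg->data +
                    " @pose=" + g_latest_localization.data;
  g_prediction_pub.publish(prediction);
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "prediction_sketch");
  ros::NodeHandle nh;
  g_prediction_pub = nh.advertise<std_msgs::String>("/prediction", 10);
  ros::Subscriber loc = nh.subscribe("/localization", 10, OnLocalization);
  ros::Subscriber per = nh.subscribe("/perception_obstacles", 10,
                                     OnPerceptionObstacles);
  ros::spin();
  return 0;
}
```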

Localization

The localization module aggregates various data sources to locate the autonomous vehicle. There are two localization modes: OnTimer and Multiple Sensor Fusion.

The first localization method is RTK-based, with a timer-based callback function OnTimer, as shown below.

[Figure: RTK-based localization with the timer-based OnTimer callback]

The other localization method is the Multiple Sensor Fusion (MSF) method, in which several event-triggered callback functions are registered, as shown below.

[Figure: Multiple Sensor Fusion localization with event-triggered callbacks]
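
The two registration styles can be contrasted in one short roscpp sketch; the callback names beyond OnTimer, the topic names, and the rates are illustrative assumptions:

```cpp
// Sketch contrasting time-triggered vs. event-triggered localization.
#include <ros/ros.h>
#include <std_msgs/String.h>

// RTK mode: a fixed-rate timer drives the pose estimate.
void OnTimer(const ros::TimerEvent&) { /* publish RTK pose estimate */ }

// MSF mode: each sensor stream gets its own event-triggered callback.
void OnGps(const std_msgs::String::ConstPtr&)   { /* fuse GPS fix */ }
void OnImu(const std_msgs::String::ConstPtr&)   { /* fuse IMU sample */ }
void OnLidar(const std_msgs::String::ConstPtr&) { /* match point cloud */ }

int main(int argc, char** argv) {
  ros::init(argc, argv, "localization_sketch");
  ros::NodeHandle nh;
  // RTK: time-triggered.
  ros::Timer timer = nh.createTimer(ros::Duration(0.01), OnTimer);
  // MSF: data-triggered.
  ros::Subscriber gps = nh.subscribe("/gps", 10, OnGps);
  ros::Subscriber imu = nh.subscribe("/imu", 100, OnImu);
  ros::Subscriber lidar = nh.subscribe("/pointcloud", 1, OnLidar);
  ros::spin();
  return 0;
}
```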

Routing

The routing module needs to know the routing start point and the routing end point to compute the passage lanes and roads. Usually the routing start point is the autonomous vehicle's current location. The important data interface is an event-triggered function called OnRoutingRequest, in which the RoutingResponse is computed and published, as shown below.

[Figure: the OnRoutingRequest interface computing and publishing RoutingResponse]
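
A rough sketch of that request/response flow, with placeholder message types and topic names standing in for Apollo's real ones:

```cpp
// Sketch of the routing request/response flow (placeholder shapes).
#include <ros/ros.h>
#include <std_msgs/String.h>

ros::Publisher g_response_pub;

// Event-triggered: compute a lane/road sequence when a request arrives.
void OnRoutingRequest(const std_msgs::String::ConstPtr& request) {
  std_msgs::String response;
  // A real implementation would search the HD-Map road graph from the
  // start point (usually the vehicle's location) to the end point.
  response.data = "route for: " + request->data;
  g_response_pub.publish(response);  // the RoutingResponse
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "routing_sketch");
  ros::NodeHandle nh;
  g_response_pub = nh.advertise<std_msgs::String>("/routing_response", 10);
  ros::Subscriber sub = nh.subscribe("/routing_request", 10, OnRoutingRequest);
  ros::spin();
  return 0;
}
```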

Planning

Apollo 2.0 uses several information sources to plan a safe, collision-free trajectory, so the planning module interacts with almost every other module.

Initially, the planning module takes the prediction output. Because the prediction output already wraps the original perceived obstacles, the planning module subscribes to the traffic light detection output rather than to the perception obstacles output.

Then, the planning module takes the routing output. Under certain scenarios, the planning module might also trigger a new routing computation by sending a routing request if the current route cannot be faithfully followed.

Finally, the planning module needs to know the location (Localization: where I am) as well as the current autonomous vehicle information (Chassis: what is my status). The planning module is also triggered at a fixed frequency, and the main data interface is the OnTimer callback function that invokes the RunOnce function.

[Figure: planning module data interfaces]
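
A compact sketch of that timer-driven loop; the 10 Hz period, topic names, and message contents below are illustrative assumptions:

```cpp
// Sketch of the fixed-frequency planning loop (placeholder types/names).
#include <ros/ros.h>
#include <std_msgs/String.h>

ros::Publisher g_trajectory_pub;

// RunOnce gathers the latest inputs and emits one planned trajectory.
void RunOnce() {
  // A real cycle reads the latest prediction, routing, localization and
  // chassis messages, then solves for a safe spatio-temporal trajectory.
  std_msgs::String trajectory;
  trajectory.data = "planned trajectory";
  g_trajectory_pub.publish(trajectory);
}

// Planning is time-triggered rather than message-triggered.
void OnTimer(const ros::TimerEvent&) { RunOnce(); }

int main(int argc, char** argv) {
  ros::init(argc, argv, "planning_sketch");
  ros::NodeHandle nh;
  g_trajectory_pub = nh.advertise<std_msgs::String>("/planning", 10);
  ros::Timer timer = nh.createTimer(ros::Duration(0.1), OnTimer);  // 10 Hz
  ros::spin();
  return 0;
}
```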

The data dependencies such as chassis, localization, traffic light, and prediction are managed through the AdapterManager class. The core software modules are similarly managed. For example, localization is managed through AdapterManager::GetLocalization(), as shown below.

[Figure: accessing localization through AdapterManager::GetLocalization()]
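
One way to picture the adapter idea is a small per-topic cache behind a static registry. The following is a self-contained illustration of that pattern, not Apollo's actual AdapterManager implementation; the method names are illustrative:

```cpp
// Illustration of the adapter pattern: each input topic is wrapped in an
// adapter that caches the latest observed message.
#include <string>

template <typename MessageT>
class Adapter {
 public:
  void OnReceive(const MessageT& msg) { latest_ = msg; has_data_ = true; }
  bool HasReceived() const { return has_data_; }
  const MessageT& GetLatestObserved() const { return latest_; }

 private:
  MessageT latest_{};
  bool has_data_ = false;
};

// A static registry so modules can write e.g. GetLocalization()->...
struct AdapterManager {
  static Adapter<std::string>* GetLocalization() {
    static Adapter<std::string> adapter;
    return &adapter;
  }
};

int main() {
  AdapterManager::GetLocalization()->OnReceive("pose @ t=0");
  if (AdapterManager::GetLocalization()->HasReceived()) {
    // Planning reads the cached pose here instead of re-subscribing.
    const std::string& pose =
        AdapterManager::GetLocalization()->GetLatestObserved();
    (void)pose;
  }
  return 0;
}
```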

Control

As described in the planning section, control takes the planned trajectory as input and generates the control command to pass to CanBus. It has three main data interfaces: OnPad, OnMonitor, and OnTimer.

[Figure: the three data interfaces of the control module]

OnPad and OnMonitor handle the routine interactions with the PAD-based human interface and with simulations. The main data interface is OnTimer, which periodically produces the actual control commands, as shown below.

[Figure: the OnTimer interface producing control commands]
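
A sketch of such a periodic control publisher; the command fields, the 100 Hz rate, and the topic name are illustrative assumptions:

```cpp
// Sketch of the periodic control loop (placeholder message fields).
#include <ros/ros.h>
#include <std_msgs/String.h>
#include <sstream>

ros::Publisher g_control_pub;
double g_target_speed = 0.0;  // would come from the planned trajectory

// Periodically turn trajectory tracking into actuator commands.
void OnTimer(const ros::TimerEvent&) {
  // A real controller compares the planned trajectory against the chassis
  // state and computes throttle, brake and steering values.
  std::ostringstream cmd;
  cmd << "throttle=0.2 brake=0.0 steering=0.05 target=" << g_target_speed;
  std_msgs::String msg;
  msg.data = cmd.str();
  g_control_pub.publish(msg);
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "control_sketch");
  ros::NodeHandle nh;
  g_control_pub = nh.advertise<std_msgs::String>("/control_command", 10);
  ros::Timer timer = nh.createTimer(ros::Duration(0.01), OnTimer);  // 100 Hz
  ros::spin();
  return 0;
}
```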

CanBus

The CanBus has two data interfaces as shown below.

[Figure: CanBus module data interfaces]

The first data interface is a timer-based publisher with the callback function OnTimer. This data interface periodically publishes the chassis information as well as chassis details, if enabled.

[Figure: the timer-based chassis publisher with the OnTimer callback]

The second data interface is an event-based publisher with a callback function OnControlCommand, which is triggered when the CanBus module receives control commands.
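
Both interfaces fit in one short sketch; the topic names, chassis payload, and publishing rate are placeholders rather than Apollo's real definitions:

```cpp
// Sketch of CanBus's two data interfaces (placeholder names and payloads).
#include <ros/ros.h>
#include <std_msgs/String.h>

ros::Publisher g_chassis_pub;

// Timer-based publisher: periodically report chassis state upstream.
void OnTimer(const ros::TimerEvent&) {
  std_msgs::String chassis;
  chassis.data = "speed, gear, steering angle, ...";  // read from CAN
  g_chassis_pub.publish(chassis);
}

// Event-based subscriber: forward each control command to the vehicle.
void OnControlCommand(const std_msgs::String::ConstPtr& cmd) {
  // A real implementation translates the command into CAN frames here.
  ROS_INFO("writing control command to CAN: %s", cmd->data.c_str());
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "canbus_sketch");
  ros::NodeHandle nh;
  g_chassis_pub = nh.advertise<std_msgs::String>("/chassis", 10);
  ros::Subscriber sub = nh.subscribe("/control_command", 10, OnControlCommand);
  ros::Timer timer = nh.createTimer(ros::Duration(0.01), OnTimer);
  ros::spin();
  return 0;
}
```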
