Team AnnieWAY's entry to GCDC 2011


Abstract
This article gives an overview of the research background and the techniques and approaches of team AnnieWAY participating in the Grand Cooperative Driving Challenge 2011. It describes the composition of the team, the experimental vehicle, the algorithms and approaches developed for the GCDC, and preliminary results.

I. TEAM COMPOSITION AND RESEARCH BACKGROUND

A. History of Team AnnieWAY

Team AnnieWAY was founded in 2006 as a spin-off of the collaborative research center "Cognitive Automobiles" (http://www.kognimobil.org), a collaboration of the Karlsruhe Institute of Technology (KIT), the Technical University of Munich, and the University of the German Forces Munich. The overall goal of the research center was to investigate techniques for fully autonomous driving. This task included research on on-board sensors such as cameras, lidar, and inertial sensors, as well as signal processing, vision, sensor fusion, scene understanding, behavior generation, and control. While the basic research was carried out in the institutes belonging to the collaborative research center, team AnnieWAY was created to integrate these components into a single hardware and software setup and to build a vehicle capable of fully autonomous driving. Participating in competitions for autonomous vehicles allowed team AnnieWAY to compare the performance of its approaches with that of other research groups on the same benchmarks.

The formation of team AnnieWAY was triggered by the DARPA Urban Challenge 2007, whose objective was autonomous driving in urban environments. The task was to navigate autonomously through a road network similar to an urban environment. The vehicles had to keep their lanes and follow normal traffic rules. Additionally, they had to execute special maneuvers such as parking and three-point turns. Since the road network was provided as a set of polylines, navigation could be based mainly on the digital map and GPS/INS localization. On-board sensors were required to sense other traffic participants and to acquire knowledge of their maneuvers [10].

After the termination of the collaborative research center on cognitive automobiles in 2010, team AnnieWAY continued its work with a changed staff composition, hosted solely at the Karlsruhe Institute of Technology. The goal of making vehicles move autonomously, however, remains the same. As a next step, the team is participating in the Grand Cooperative Driving Challenge 2011 in Helmond with its experimental vehicle [12]. The preparation phase started in fall 2010 and intensified month by month.

B. Research Background

Team AnnieWAY is embedded in the research of the Institute of Measurement and Control at KIT, which spans visual scene perception and scene understanding, optical measurement techniques, and digital signal processing, with applications to advanced driver assistance systems, 3D surface reconstruction, and train localization. As examples of these research interests, we focus on some topics that are related to autonomous driving and in which members of team AnnieWAY are involved.

One prerequisite for video-based scene understanding is efficient algorithms for 3D reconstruction from stereo camera images. Only real-time capable approaches are appropriate for autonomous vehicles and driver assistance systems; at the same time, stereo reconstruction must be accurate and the resulting depth maps must be dense. Therefore, we are developing efficient stereo matching algorithms for high-resolution camera images. We follow two lines of development. On the one hand, we are improving stereo matching by combining sparse matching at unique feature points with a variational approach on small local patches to obtain dense depth maps [4, 5]. On the other hand, we are working on parallel implementations of stereo matching algorithms using multi-core CPUs and GPUs [16]. This allows us to obtain dense high-resolution depth images at 25 frames per second or more, which can be used for scene understanding or map generation [11].
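As a point of reference for what dense real-time stereo matching involves, the following minimal sketch computes a dense disparity map with OpenCV's semi-global block matcher. It is an illustration only, not the combined sparse/variational method of [4, 5] or the GPU implementation of [16]; the file names and parameter values are assumptions.

```python
# Illustrative only: dense disparity with OpenCV's semi-global matcher (SGBM),
# standing in for the paper's own stereo pipeline.
import cv2

# Rectified grayscale stereo pair (file names are placeholders).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Parameters chosen for illustration; a real system tunes them per camera rig.
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,   # disparity search range, must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,         # smoothness penalty for small disparity changes
    P2=32 * 5 * 5,        # smoothness penalty for large disparity changes
    uniquenessRatio=10,
)

# OpenCV returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype("float32") / 16.0
```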
The image sequences of the on-board stereo cameras are used for scene understanding. While classical approaches to scene understanding for driver assistance systems are based on simple visual features such as road markings, our approach uses a set of non-standard image features, such as house facades and vehicle flow, which are combined in statistical models using elaborate stochastic inference techniques [3]. This allows us to reconstruct complex urban environments, while classical approaches are limited to relatively simple scenarios such as highways with little variation in the scene.

Besides video cameras, lidar data interpretation plays an important role in our research activities, since lidar data are very precise and allow for reliable obstacle detection and scene reconstruction. We focus on 3D lidar point clouds, for which we have developed algorithms for scene segmentation [14] and object tracking [13]. As with stereo cameras, the lidar data can be used for map generation [15].

Although our research focus is on perception for autonomous vehicles, some techniques have also been developed for path and trajectory planning and for vehicle control. These include efficient collision checking [19] and trajectory generation based on fast lattice search [18].

The remainder of the paper is organized as follows. Section II describes our experimental vehicle and Section III the general software and hardware architecture of our system. The subsequent sections discuss individual components of our system, namely the communication modules (Sec. IV), the environment representation (Sec. V), and the control strategy (Sec. VI). The final section wraps up our preliminary results.

II. EXPERIMENTAL VEHICLE

Our experimental vehicle, AnnieWAY (cf. Fig. 1), features several modifications over the VW Passat base vehicle: electronically controllable actuators for acceleration, brakes, transmission, and steering have been added, each of which can be enabled individually. A CAN gateway allows sending requests to these actuators and receiving selected signals such as wheel speeds and status information; it additionally implements a low-level safety disengagement of autonomous functions in case the driver needs to intervene. A sketch of how such a gateway is typically addressed from software is given below.
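The sketch below uses the python-can library over a SocketCAN interface to send an actuation request and read back a status frame. The message IDs, payload layout, and channel name are entirely hypothetical and do not reflect AnnieWAY's actual gateway protocol.

```python
# Hypothetical sketch: talking to a CAN gateway via python-can/SocketCAN.
# All arbitration IDs and payload layouts below are invented for illustration.
import can

bus = can.interface.Bus(channel="can0", bustype="socketcan")

# Hypothetical request: enable the throttle actuator and command 10 % pedal.
request = can.Message(
    arbitration_id=0x101,   # invented gateway request ID
    data=[0x01, 10],        # [enable flag, pedal percentage]
    is_extended_id=False,
)
bus.send(request)

# Poll for a (hypothetical) wheel-speed status frame.
msg = bus.recv(timeout=0.1)
if msg is not None and msg.arbitration_id == 0x201:
    # Invented layout: two bytes per wheel, little-endian, 0.01 km/h units.
    front_left = int.from_bytes(msg.data[0:2], "little") * 0.01
    print(f"front-left wheel speed: {front_left:.2f} km/h")
```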
Determining reasonable commands for the actuators requires cognition, and several complementary sensors are available for this task: a high-definition laser scanner (Velodyne HDL-64E) delivers several all-around 3D point clouds per second. Multiple cameras can be mounted in different configurations on a roof rack, e.g. to provide stereoscopic vision. A third source of environmental information is the vehicle's stock radar, which can be used to supplement the communication-based information about other vehicles. Self-localization of the ego-vehicle is realized by a combined inertial- and satellite-based navigation system (OXTS RT 3003), which can optionally be augmented by reference stations (differential GPS).
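To make the localization output concrete, the following sketch projects GPS latitude/longitude fixes into a local metric east-north frame using a flat-earth approximation, which is adequate over the few kilometers of a test track. The reference coordinates are assumed values near Helmond, and a real system such as the RT 3003 additionally fuses the inertial measurements.

```python
# Minimal sketch: project WGS84 fixes into a local east-north frame using a
# flat-earth (equirectangular) approximation, valid over short distances.
import math

EARTH_RADIUS_M = 6378137.0  # WGS84 equatorial radius

def to_local_en(lat_deg: float, lon_deg: float,
                ref_lat_deg: float, ref_lon_deg: float) -> tuple[float, float]:
    """Return (east, north) in meters relative to the reference fix."""
    ref_lat = math.radians(ref_lat_deg)
    east = math.radians(lon_deg - ref_lon_deg) * EARTH_RADIUS_M * math.cos(ref_lat)
    north = math.radians(lat_deg - ref_lat_deg) * EARTH_RADIUS_M
    return east, north

# Example with an arbitrary reference point near Helmond, NL (coordinates assumed).
ref = (51.4793, 5.6576)
east, north = to_local_en(51.4801, 5.6590, *ref)
print(f"east {east:.1f} m, north {north:.1f} m")
```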