While autonomous driving technology has made significant progress in the last decade, road damage detection remains a relevant challenge for ensuring safety and comfort and is still under development. This paper addresses the lack of algorithms for detecting road damages that meet autonomous driving systems' requirements. We investigate the environmental perception system's architecture and current algorithm designs for road damage detection. Based on the autonomous driving architecture, we develop an end-to-end concept that leverages data from low-cost pre-installed sensors for real-time road damage and damage severity detection, as well as cloud- and crowd-based HD Feature Maps to share information across vehicles. In a design science research approach, we develop three artifacts in three iterations of expert workshops and design cycles: the end-to-end concept featuring road damages in the system architecture, and two lightweight deep neural networks, one for detecting road damages and another for detecting their severity, as the central components of the system.

Vision-based map-matching with HD maps for high-precision vehicle localization has gained great attention for its low cost and ease of deployment. However, its localization performance is still unsatisfactory in accuracy and robustness in numerous real applications due to the sparsity and noise of the perceived HD map landmarks. This article proposes the tightly-coupled monocular map-matching localization algorithm (TM³Loc) for monocular-based vehicle localization. TM³Loc introduces semantic chamfer matching (SCM) to model the monocular map-matching problem and combines visual features with SCM in a tightly-coupled manner. By applying a sliding-window-based optimization technique, historical visual features and HD map constraints are also introduced, such that vehicle poses are estimated with an abundance of visual features and multi-frame HD map landmark features, rather than with the single-frame HD map observations of previous works. Experiments are conducted on large-scale datasets totaling 15 km in length. The results show that TM³Loc achieves high-precision localization using a low-cost monocular camera, largely exceeding the performance of previous state-of-the-art methods, thereby promoting the development of autonomous driving.

Accurate localization ability is fundamental in autonomous driving. Traditional visual localization frameworks approach the semantic map-matching problem with geometric models, which rely on complex parameter tuning and thus hinder large-scale deployment. In this paper, we propose BEV-Locator: an end-to-end visual semantic localization neural network using multi-view camera images. Specifically, a visual BEV (bird's-eye view) encoder extracts and flattens the multi-view images into BEV space, while the semantic map features are structurally embedded as a sequence of map queries. A cross-modal transformer then associates the BEV features and semantic map queries, and the localization information of the ego car is recursively queried out by cross-attention modules. Finally, the ego pose is inferred by decoding the transformer outputs. We evaluate the proposed method on the large-scale nuScenes and Qcraft datasets. The experimental results show that BEV-Locator is capable of estimating the vehicle pose under versatile scenarios, effectively associating cross-modal information from multi-view images and global semantic maps. The experiments report satisfactory accuracy, with mean absolute errors of 0.052 m, 0.135 m, and 0.251° in lateral translation, longitudinal translation, and heading angle, respectively.

RoadRunner is an interactive editor that lets you design 3D scenes for simulating and testing automated driving systems. You can customize roadway scenes by creating region-specific road signs and markings, and you can insert signs, signals, guardrails, and road damage, as well as foliage, buildings, and other 3D models. RoadRunner provides tools for setting and configuring traffic signal timing, phases, and vehicle paths at intersections, and it supports the visualization of lidar point clouds, aerial imagery, and GIS data. RoadRunner Scene Builder lets you automatically generate 3D road models from HD maps, and RoadRunner Asset Library lets you quickly populate your 3D scenes with a large set of realistic and visually consistent 3D models. You can import and export road networks using OpenDRIVE®. 3D scenes built with RoadRunner can be exported in FBX®, glTF™, OpenFlight, OpenSceneGraph, OBJ, and USD formats, and the exported scenes can be used in automated driving simulators and game engines, including CARLA, Vires VTD, NVIDIA DRIVE Sim®, Baidu Apollo®, Cognata, Unity®, and Unreal® Engine.
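As a rough illustration of the semantic chamfer matching (SCM) idea that TM³Loc builds on, the sketch below scores a candidate planar pose by the mean nearest-neighbour (chamfer) distance between toy HD-map lane points projected into the vehicle frame and the "detected" semantic points. This is a minimal sketch under an assumed toy lane geometry, not the paper's implementation; all function and variable names are illustrative.

```python
import numpy as np

def chamfer_cost(projected_map_pts, detected_pts):
    """Mean nearest-neighbour distance from each projected HD-map
    landmark point to the detected semantic points (chamfer distance)."""
    # pairwise distances, shape (M, N)
    d = np.linalg.norm(projected_map_pts[:, None, :] - detected_pts[None, :, :], axis=2)
    return d.min(axis=1).mean()

def project(map_pts_world, pose_xy, heading):
    """Transform 2-D world-frame map points into the vehicle frame
    for a candidate planar pose (translation + heading)."""
    c, s = np.cos(heading), np.sin(heading)
    R = np.array([[c, -s], [s, c]])
    return (map_pts_world - pose_xy) @ R

# toy lane-boundary landmarks in the world frame (a straight lane line)
map_pts = np.stack([np.linspace(0.0, 10.0, 20), np.full(20, 2.0)], axis=1)
true_pose = np.array([0.3, -0.1])
detections = project(map_pts, true_pose, 0.0)  # what the camera "sees"

# the cost is lowest at the true pose and grows with lateral drift
costs = [chamfer_cost(project(map_pts, true_pose + np.array([0.0, dy]), 0.0),
                      detections)
         for dy in (0.0, 0.5, 1.0)]
```

In the full algorithm this map-matching residual would be combined with visual feature terms inside a sliding-window optimizer over multiple frames; the toy example only shows why a lane-line map constrains the lateral pose so well.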
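The BEV-Locator read-out described above (BEV features as keys and values, semantic map elements as queries, pose decoded from the transformer output) can be caricatured with a single-head cross-attention step. This is a minimal NumPy sketch under assumed toy dimensions; `W_pose`, the shapes, and the mean-pooling read-out are illustrative assumptions, not the network's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16        # embedding width (toy value)
n_bev = 32    # flattened BEV spatial cells
n_query = 4   # semantic map elements embedded as queries

bev_feats = rng.normal(size=(n_bev, D))      # keys/values from the BEV encoder
map_queries = rng.normal(size=(n_query, D))  # structurally embedded map elements

def cross_attention(Q, K, V):
    """Scaled dot-product cross-attention: each map query reads out
    localization evidence from the BEV feature sequence."""
    scores = Q @ K.T / np.sqrt(Q.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)        # softmax over BEV cells
    return w @ V

attended = cross_attention(map_queries, bev_feats, bev_feats)

# a hypothetical linear head decoding the pooled output into a
# planar pose correction (dx, dy, dyaw)
W_pose = rng.normal(size=(D, 3)) * 0.01
pose = attended.mean(axis=0) @ W_pose
```

In the real network this step is stacked and recursive, the queries come from the semantic map encoder rather than random vectors, and the decoder is learned; the sketch only shows how cross-attention lets map queries aggregate evidence from BEV features before pose decoding.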