MPhil Thesis Defence

Title: "Street View Data Segmentation and Modelling"

By

Mr. Zhexi WANG

Abstract

Large-scale scanned data of urban scenes have become available in recent years, as many institutions and companies are capturing large amounts of data all over the world; some of these data are already used in industrial products such as Google StreetView. There are many research topics along the whole processing pipeline, including data capture, alignment, registration, segmentation, and modelling. My research work is related to several of these topics.

We design and implement a simultaneous localization and mapping (SLAM) system. The system is composed of a monocular SLAM (mono-SLAM) component and a stereo SLAM component. The mono-SLAM component can run on its own, and its result can be enhanced by the stereo SLAM component when data captured by a stereo camera pair are added. With stereo SLAM, the similarity 3D reconstruction can be upgraded to a metric reconstruction. We solve for the similarity transformation between the similarity reconstruction and the metric reconstruction, and update this transformation online for every newly input frame.

We also carried out research on 3D point cloud processing and modelling. We first address the problem of separating objects from 3D scanned point clouds of urban scenes. The proposed approach hierarchically performs grouping from 3D points to curve segments, to object elements, and finally to objects. The main contribution lies in grouping object primitives into objects. We introduce relation attributes that describe the relations between pairs of object primitives, learn a preference function over these attributes via a ranking SVM, use it to compute the degree to which two object primitives belong to the same object, and finally merge object elements that are very likely to belong to the same object. Unlike previous 3D point segmentation algorithms that require object priors to annotate 3D points, our approach exploits only a relation prior that is not tied to any specific object class and can therefore separate general urban objects. Experimental results on large-scale real urban scenes demonstrate that our approach is effective for object separation.

We also present an automatic approach to reconstructing 3D road network models, as part of 3D cities, from terrestrial LiDAR and photo data. Compared to aerial data, terrestrial data provide much higher resolution and thus superior reconstruction quality. However, common terrestrial LiDAR data suffer from occlusion, inconsistency between multiple scans, and a lack of topology information. Moreover, since the road network of a city is too large to be modelled as a single entity, it is crucial to partition the full data set into smaller parts and model each of them individually. Unlike previous point cloud segmentation and modelling approaches that do not take knowledge of road structures into account, we introduce prior knowledge of roads in the form of 2D topology maps, which are widely available on the Internet (e.g., OpenStreetMap), to assist the reconstruction. After the road point clouds are segmented from the input data, a cross-domain alignment method is designed to align the scanned point clouds with the 2D topology maps. Topology-aware partitioning then breaks the point clouds and images into manageable partitions, so that a model for each partition can be generated and textured with the captured photos. Finally, all individual partitions are merged into a globally consistent model.
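For the SLAM component above, the similarity-to-metric upgrade amounts to estimating a Sim(3) transform between corresponding 3D points of the two reconstructions. The abstract does not state how this is solved; the following is only a minimal sketch using Umeyama's closed-form solution, with illustrative names, which in an online setting would be re-solved as new frames arrive:

```python
# Minimal sketch (not the thesis code) of estimating the similarity
# transform s, R, t that maps a mono-SLAM (similarity) reconstruction
# onto the metric stereo reconstruction, via Umeyama's closed form.
# The inputs are hypothetical: corresponding 3D points in both frames.
import numpy as np

def estimate_sim3(mono_pts, metric_pts):
    """mono_pts, metric_pts: (N, 3) arrays of corresponding points."""
    mu_x, mu_y = mono_pts.mean(0), metric_pts.mean(0)
    X, Y = mono_pts - mu_x, metric_pts - mu_y
    # Cross-covariance between the two centred point sets.
    S = Y.T @ X / len(mono_pts)
    U, D, Vt = np.linalg.svd(S)
    # Reflection guard: force a proper rotation (det(R) = +1).
    E = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        E[2, 2] = -1
    R = U @ E @ Vt
    var_x = (X ** 2).sum() / len(mono_pts)
    s = np.trace(np.diag(D) @ E) / var_x   # isotropic scale
    t = mu_y - s * R @ mu_x                # translation
    return s, R, t                         # metric ~ s * R @ mono + t
```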
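For the grouping step, the abstract specifies a preference function learned by a ranking SVM over pairwise relation attributes, but not the merging procedure itself. Below is a minimal sketch of how a learned linear scoring function could drive the merging of object elements with union-find; the training stage, the threshold, and all names here are assumptions:

```python
# Hypothetical sketch of the grouping step: a linear ranking-SVM weight
# vector w scores the relation attributes phi(i, j) of a pair of object
# elements, and pairs scoring above a threshold are merged into one object.
import numpy as np

def merge_elements(pairs, phi, w, threshold, n_elements):
    """pairs: list of (i, j) candidate element pairs;
    phi: dict mapping (i, j) -> relation-attribute vector;
    w: weight vector learned offline by a ranking SVM."""
    parent = list(range(n_elements))

    def find(x):                       # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Merge the most confident pairs first.
    scored = sorted(pairs, key=lambda p: w @ phi[p], reverse=True)
    for i, j in scored:
        if w @ phi[(i, j)] >= threshold:
            parent[find(i)] = find(j)  # same object
    return [find(i) for i in range(n_elements)]  # object label per element
```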
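The cross-domain alignment between the 2D topology maps and the segmented road points is likewise not detailed in the abstract. One plausible sketch is a 2D ICP between road-centreline samples from an OpenStreetMap-style map and the ground-plane projection of the LiDAR road points; the use of 2D ICP, SciPy, and every name below are assumptions:

```python
# Hypothetical sketch of one cross-domain alignment strategy: 2D ICP
# between map-sampled road centreline points and the (x, y) projection
# of the segmented LiDAR road points.
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(map_pts, lidar_xy, iters=30):
    """map_pts, lidar_xy: (N, 2) and (M, 2) arrays.
    Returns R (2x2), t (2,) mapping map coordinates into the LiDAR frame."""
    R, t = np.eye(2), np.zeros(2)
    tree = cKDTree(lidar_xy)
    src = map_pts.copy()
    for _ in range(iters):
        _, idx = tree.query(src)        # nearest LiDAR point per map point
        dst = lidar_xy[idx]
        mu_s, mu_d = src.mean(0), dst.mean(0)
        # Kabsch step: best rigid transform src -> dst.
        U, _, Vt = np.linalg.svd((dst - mu_d).T @ (src - mu_s))
        if np.linalg.det(U @ Vt) < 0:   # keep a proper rotation
            U[:, -1] *= -1
        R_step = U @ Vt
        t_step = mu_d - R_step @ mu_s
        src = src @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```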
We use a graph-based method to automatically select the best partitions for generating the road textures, taking both accuracy and consistency into account. A novel method for texture generation from rolling shutter images is also proposed. Our pipelines are tested on large-scale datasets such as San Francisco, New York City, and Paris.

Date: Monday, 17 June 2013
Time: 1:30pm – 3:30pm
Venue: Room 3494 (Lifts 25/26)

Committee Members:
Prof. Long Quan (Supervisor)
Dr. Pedro Sander (Chairperson)
Dr. Chiew-Lan Tai

**** ALL are Welcome ****