Breaking News

    Sunday, November 24, 2019

Self-Driving Cars: Automatically labelling semantic free space using human driving behaviour

    Automatically labelling semantic free space using human driving behaviour

    Posted: 24 Nov 2019 03:07 AM PST

    Paper: "Minimizing Supervision for Free-space Segmentation"

    Here's an awesome application of weakly supervised learning for semantic segmentation of free space. (Free space is unobstructed roadway that a car can safely drive on.) The researchers use human driving as a form of labelling instead of manual annotation. They exploit the fact that wherever humans drive is free space (or at least it is 99.99%+ of the time). The researchers note:
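    The paper itself generates labels from image cues (see the excerpt further down), but the "wherever humans drive is free space" idea described here can be sketched roughly: project the ego vehicle's future positions into the camera image and rasterize the driven corridor as a weak free-space mask. Everything in the sketch below (camera model, pose format, vehicle width, function names) is an illustrative assumption, not the paper's pipeline.

```python
# Rough sketch (not from the paper): turn a driven trajectory into a weak
# free-space label by projecting future ego positions into the image plane.
import numpy as np
import cv2

def trajectory_to_mask(traj_xyz, K, T_cam_from_world, image_shape, half_width=1.0):
    """Rasterize the driven corridor as a binary free-space mask.

    traj_xyz: (N, 3) future ego positions on the ground plane, world frame.
    K: (3, 3) camera intrinsics.
    T_cam_from_world: (4, 4) rigid transform from world to camera frame.
    half_width: assumed half vehicle width in metres.
    """
    h, w = image_shape
    mask = np.zeros((h, w), dtype=np.uint8)

    # Offset each trajectory point sideways to get left/right corridor edges,
    # using the direction to the next point as the local heading.
    left_pts, right_pts = [], []
    for i in range(len(traj_xyz) - 1):
        p, q = traj_xyz[i], traj_xyz[i + 1]
        heading = (q - p)[:2]
        norm = np.linalg.norm(heading)
        if norm < 1e-6:
            continue
        lateral = np.array([-heading[1], heading[0], 0.0]) / norm
        left_pts.append(p + half_width * lateral)
        right_pts.append(p - half_width * lateral)
    if len(left_pts) < 2:
        return mask

    # Corridor polygon: up the left edge, back down the right edge.
    polygon_world = np.array(left_pts + right_pts[::-1])
    ones = np.ones((len(polygon_world), 1))
    pts_cam = (T_cam_from_world @ np.hstack([polygon_world, ones]).T)[:3]
    in_front = pts_cam[2] > 0.1                 # keep points in front of the camera
    pts_img = K @ pts_cam[:, in_front]
    pts_img = (pts_img[:2] / pts_img[2]).T.astype(np.int32)

    if len(pts_img) >= 3:
        cv2.fillPoly(mask, [pts_img], 1)        # corridor pixels become weak labels
    return mask
```

    The resulting masks are noisy (they only cover where the car actually drove, not all drivable surface), which is exactly why this is weak rather than full supervision.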

    Of course, fully supervised somewhat outperforms our results (0.8531 vs 0.835). Nonetheless, it is impressive that our technique achieves 98% of the IoU [Intersection over Union] of the fully supervised model, without requiring the tedious pixel-wise annotations for each image. This indicates that our proposed method is able to perform proper free-space segmentation while using no manual annotations for training the CNN.

    If you can automatically label 10,000x more data than you can afford to manually label — which is true for a company like Tesla — then I would imagine weakly supervised learning would outperform fully supervised learning. A hybrid approach in which you use a combination of manually labelled and automatically labelled data might outperform both.

    Paper abstract:

    Identifying "free-space," or safely driveable regions in the scene ahead, is a fundamental task for autonomous navigation. While this task can be addressed using semantic segmentation, the manual labor involved in creating pixel-wise annotations to train the segmentation model is very costly. Although weakly supervised segmentation addresses this issue, most methods are not designed for free-space. In this paper, we observe that homogeneous texture and location are two key characteristics of free-space, and develop a novel, practical framework for free-space segmentation with minimal human supervision. Our experiments show that our framework performs better than other weakly supervised methods while using less supervision. Our work demonstrates the potential for performing free-space segmentation without tedious and costly manual annotation, which will be important for adapting autonomous driving systems to different types of vehicles and environments.

    A key excerpt:

    We now describe our technique for automatically generating annotations suitable for training a free-space segmentation CNN. Our technique relies on two main assumptions about the nature of free-space: (1) that free-space regions tend to have homogeneous texture (e.g., caused by smooth road surfaces), and (2) there are strong priors on the location of free-space within an image taken from a vehicle. The first assumption allows us to use superpixels to group similar pixels. ... The second assumption allows us to find "seed" superpixels that are very likely to be free-space, based on the fact that free-space is usually near the bottom and center of an image taken by a front-facing in-vehicle camera.
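    As a rough illustration of that second assumption, here is a minimal sketch (not the authors' code) that over-segments an image into SLIC superpixels and keeps as "seed" superpixels those whose centroids fall near the bottom centre of the frame. The segmentation parameters and the size of the prior region are assumptions chosen for illustration.

```python
# Sketch of the "seed superpixel" idea: superpixels near the bottom centre of a
# front-facing camera image are treated as very likely free-space.
import numpy as np
from skimage.segmentation import slic

def seed_superpixels(image, n_segments=200, bottom_frac=0.2, centre_frac=0.5):
    """Return superpixel labels and the set of seed superpixel labels.

    image: (H, W, 3) RGB array.
    bottom_frac: fraction of the image height (from the bottom) used as the prior.
    centre_frac: fraction of the image width (around the centre) used as the prior.
    """
    h, w = image.shape[:2]
    segments = slic(image, n_segments=n_segments, compactness=10, start_label=0)

    seeds = set()
    for label in np.unique(segments):
        ys, xs = np.nonzero(segments == label)
        cy, cx = ys.mean(), xs.mean()
        near_bottom = cy > h * (1.0 - bottom_frac)
        near_centre = abs(cx - w / 2) < w * centre_frac / 2
        if near_bottom and near_centre:
            seeds.add(int(label))               # likely free-space under the location prior
    return segments, seeds
```

    The excerpt only describes finding the seeds; how they are expanded into full training masks for the segmentation CNN is covered in the paper.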

    Open access PDF: http://openaccess.thecvf.com/content_cvpr_2018_workshops/papers/w14/Tsutsui_Minimizing_Supervision_for_CVPR_2018_paper.pdf

    Examples of segmentations included in the paper: https://i.imgur.com/3IVodKWr.jpg

    submitted by /u/strangecosmos

    Economics of robotaxis

    Posted: 23 Nov 2019 07:32 PM PST

I'm seeing more news trickle out about robotaxi services, and I'm pretty excited about one day getting rid of my car entirely. Does anyone have any guesses about what kind of pricing structure we'd see for driverless ride-hailing services? I'm really hoping they can hit a price point competitive with (or lower than) owning a car in a typical suburban setting (driving ~12,000 miles/year).

    submitted by /u/on_mobile
