Self-Driving Cars: Driverless Waymo spotted on streets of Chandler
- Driverless Waymo spotted on streets of Chandler
- GM pushes feds to approve Chevy Bolts with no steering wheel - Electrek
- Active learning and Tesla's training fleet of 0.25M+ cars
- Best non-Tesla options available outside of CA/AZ
Driverless Waymo spotted on streets of Chandler Posted: 24 Dec 2019 11:07 AM PST
GM pushes feds to approve Chevy Bolts with no steering wheel - Electrek Posted: 23 Dec 2019 06:24 PM PST
Active learning and Tesla's training fleet of 0.25M+ cars Posted: 23 Dec 2019 04:18 PM PST

From Wikipedia: active learning is a machine learning approach in which the learning algorithm itself chooses which data points it wants a human to label.
In the context of fully supervised deep learning for computer vision, i.e. when images or video are hand-labelled by paid human annotators, the utility of Tesla's fleet of 250,000+ cars with the Full Self-Driving Computer (a.k.a. Hardware 3 or HW3) lies in active learning.

According to the Tesla rumour mill, a new software update that requires the FSD Computer is currently going out to some Early Access testers (i.e. customers who volunteer to test unpolished software builds). The update lets users see a visualization of stop signs, stop lines, and traffic lights on the car's display. Because this update apparently requires the FSD Computer, it seems plausible that these visualizations are generated by Tesla's new, bigger neural networks designed for the new compute hardware. Tesla's Senior Director of AI, Andrej Karpathy, has discussed these new networks on a Tesla earnings call, at ICML, and most recently at PyTorch DevCon.

These new neural networks will not just perform real-time inference for visualizations and, eventually, urban Autopilot. They will also be able to select training examples from the stream of camera data coming into the car, save them, and queue them for upload when the car connects to wifi. Those examples will then be labelled by Tesla's annotation staff and added to the neural networks' training datasets. This is active learning.

At Tesla Autonomy Day, Andrej Karpathy described some of the ways Tesla does active learning for object detection. I highly recommend watching this 4-minute clip. Nvidia published a recent blog post on an active learning method in which training examples are selected based on disagreements within an ensemble of neural networks; this was shown to be superior to manual selection of training examples by humans reviewing footage (a rough sketch of the idea is included at the end of this post). I also stumbled upon an interesting academic paper in which the researchers devise a method to discover new object categories in large quantities of unlabelled video. This is another way active learning could take place.

Other potential ways of selecting the best training examples include instances where a Tesla driver disengages Autopilot unexpectedly (e.g. not right after taking a highway exit). Any time Autopilot is disengaged, disagreements between the human driver's actions and the actions output by the Autopilot planner (which can run passively, in "shadow mode") could also be used to select training examples. Then there are simpler, manually designed triggers, such as a hard braking event, a crash or close call, or "driver turned the steering wheel more than X degrees within Y milliseconds". A sketch of what such triggers might look like also appears below.

What having orders of magnitude more training vehicles allows is not orders of magnitude more hand-labelled images or videos. It allows Tesla to extract, using active learning, the best and most informative examples from a sample of real-world driving that is orders of magnitude larger and therefore contains orders of magnitude more of those best examples.

Active learning could also be used to decide what data gets uploaded for weakly supervised learning (i.e. human driving behaviour automatically labels images or videos) and self-supervised learning (i.e. a neural network predicts an as-yet-unobserved part of the dataset from an observed part) of computer vision tasks. These techniques have the potential to greatly improve upon the results Tesla could get from fully supervised learning alone.
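To make the ensemble-disagreement idea concrete, here is a minimal sketch in Python. It is not Tesla's or Nvidia's actual code: the model objects, their predict() method, and the 0.05 threshold are all made-up placeholders, and only the disagreement-scoring logic reflects the technique described above.

```python
import numpy as np

def ensemble_disagreement(confidences: np.ndarray) -> float:
    """confidences has shape (n_models, n_classes): each model's softmax
    scores for the same image. High variance across the models means they
    disagree, so the image is likely an informative training example."""
    return float(np.mean(np.var(confidences, axis=0)))

def select_for_labelling(frames, models, threshold=0.05):
    """Keep only the frames where the ensemble disagrees more than `threshold`."""
    selected = []
    for frame in frames:
        # Each model scores the same frame; stack the class probabilities.
        scores = np.stack([m.predict(frame) for m in models])  # (n_models, n_classes)
        if ensemble_disagreement(scores) > threshold:
            selected.append(frame)  # in a fleet, these would be queued for upload and labelling
    return selected
```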
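Similarly, here is a hypothetical sketch of the on-car triggers mentioned above (unexpected disengagement, hard braking, and disagreement with a shadow-mode planner). Every field name and threshold is invented for illustration; it only shows the general shape such trigger logic could take.

```python
from dataclasses import dataclass

@dataclass
class MomentLog:
    human_steering_deg: float    # what the driver actually did
    planner_steering_deg: float  # what the shadow-mode planner would have done
    decel_m_s2: float            # longitudinal deceleration (positive = braking)
    autopilot_disengaged: bool
    near_highway_exit: bool

def should_save(log: MomentLog,
                steering_disagreement_deg: float = 15.0,
                hard_brake_m_s2: float = 4.0) -> bool:
    """Return True if this moment looks informative enough to queue for upload."""
    # Disengagement not explained by an upcoming exit counts as "unexpected".
    if log.autopilot_disengaged and not log.near_highway_exit:
        return True
    # Hard braking event.
    if log.decel_m_s2 >= hard_brake_m_s2:
        return True
    # Human driver vs. shadow-mode planner disagreement.
    if abs(log.human_steering_deg - log.planner_steering_deg) >= steering_disagreement_deg:
        return True
    return False
```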
I wanted to post about this here because the concept of active learning only recently clicked for me, and I wanted to share that realization. I think active learning adds a layer of nuance to discussions of computer vision that is sometimes lacking. I believe it is also applicable to road user prediction and imitation learning. [link] [comments]
Best non-Tesla options available outside of CA/AZ Posted: 23 Dec 2019 06:07 PM PST So for those of us not blessed to live where Waymo/Cruise are operating, what are the current competitors to Tesla? Is Nissan's ProPilot keeping up with Tesla? Is anything better? For these systems, how big are their current fleets (edit: i.e., how many cars have ProPilot)? Is Tesla's the largest? [link] [comments]