10

I recently heard someone say that when you're designing a self-driving car, you're not really building a car but a computerized driver, so you're trying to model a human mind -- at least the part of the human mind that can drive.

Since humans are unpredictable, or rather since their actions depend on so many factors, some of which will remain unexplained for a long time, how would a self-driving car reflect that, if it does at all?

A dose of unpredictability could have its uses. If, say, two self-driving cars are stuck in a right-of-way deadlock, it could be useful to inject some randomness instead of having both cars apply the same action at the same time because they run the same system.

But, on the other hand, we know that non-determinism doesn't mix well with software development, especially in testing. How would engineers be able to control it and reason about it?

nbro
guillaume31

2 Answers

3

Driving Priorities

When thinking about the kind of modeling needed to create reliable and safe autonomous vehicles, the following driving safety and efficacy criteria should be considered, listed in priority order with the most important first.

  • The safety of those inside and outside the vehicle
  • Reduction of wear on passengers
  • The safety of property
  • Arrival at the given destination
  • Reduction of wear on the vehicle
  • Thrift in fuel resources
  • Fairness to other vehicles
  • Thrift in time

These are ordered in a way that makes civic and global sense, but they are not the priorities exhibited by human drivers.
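To make the ordering concrete, here is a minimal sketch of how a planner might treat the list above as a strict, lexicographic ranking when comparing candidate maneuvers. The criterion names, scores, and maneuvers are illustrative assumptions, not anything from a production AV stack.

```python
# Hypothetical sketch: compare candidate maneuvers lexicographically against
# the priority list above. Names and numbers are illustrative assumptions.
from dataclasses import dataclass, field

PRIORITIES = [
    "people_safety",       # safety of those inside and outside the vehicle
    "passenger_comfort",   # reduction of wear on passengers
    "property_safety",
    "reaches_destination",
    "vehicle_wear",
    "fuel_thrift",
    "fairness",
    "time_thrift",
]

@dataclass
class Maneuver:
    name: str
    scores: dict = field(default_factory=dict)  # higher is better, per criterion

def priority_key(maneuver: Maneuver):
    # Build a tuple ordered by priority; Python compares tuples lexicographically,
    # so a gain on a lower priority can never outweigh a loss on a higher one.
    return tuple(maneuver.scores.get(p, 0.0) for p in PRIORITIES)

candidates = [
    Maneuver("hard_brake", {"people_safety": 1.0, "passenger_comfort": 0.2, "time_thrift": 0.1}),
    Maneuver("swerve",     {"people_safety": 0.7, "passenger_comfort": 0.5, "time_thrift": 0.8}),
]
best = max(candidates, key=priority_key)
print(best.name)  # -> hard_brake: safety dominates every lower-ranked criterion
```

A strict ordering is only one possible design choice; a weighted objective would trade the criteria off against each other instead, which is exactly what the civic ordering above is meant to avoid.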

Copy Humans or Reevaluate and Design from Scratch?

Whoever said that the goal of autonomous car design is to model the portions of a human mind that can drive should not be designing autonomous cars for actual manufacture. It is well known that most humans, although they may have heard of the following safety tips, cannot bring them to mind quickly enough to benefit from them in actual driving situations (a rough rule-table sketch follows the list below).

  • When the tires slip sideways, steer into the skid.
  • When a forward skid starts, pump the brakes.
  • If someone is headed tangentially into your car's rear, immediately accelerate and then brake.
  • On an on-ramp, accelerate to match the speed of the cars in the lane into which you merge, unless there is no space to merge.
  • If you see a patch of ice, steer straight and neither accelerate nor decelerate once you reach it.
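As a rough illustration of why a machine has the edge here, the tips above can be encoded as a simple reflex rule table that the control loop evaluates every cycle. The condition names and action labels are hypothetical; a real controller would work from continuous sensor estimates rather than boolean flags.

```python
# Hypothetical reflex rule table for the safety tips listed above.
# Conditions and actions are illustrative assumptions, not a production policy.

def skid_reflex(state: dict) -> str:
    """Map a coarse situation description to a corrective action."""
    if state.get("lateral_slip"):
        return "steer_into_skid"
    if state.get("forward_skid"):
        return "pump_brakes"            # or defer to ABS where fitted
    if state.get("rear_tangential_threat"):
        return "accelerate_then_brake"
    if state.get("ice_patch_ahead"):
        return "hold_steering_and_speed"
    if state.get("on_ramp") and state.get("merge_gap"):
        return "match_lane_speed"
    return "continue"

# A machine evaluates this every control cycle with no lapse in recall.
print(skid_reflex({"lateral_slip": True}))  # -> steer_into_skid
```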

Many collisions between locomotives and cars happen because a red light causes traffic to queue across the tracks in multiple lanes. Frequently, a driver will pull onto the railroad tracks to gain one car's length on the other cars. When others then close up and make undoing that choice difficult, a serious risk emerges.

As absurd as this behavior looks to anyone watching, many deaths occur when a fast-moving 2,000-ton locomotive hits what, to the train's passengers, feels like a speck of dust.

Predictability and Adaptability

Humans are unpredictable, as the question indicates, but although adaptability may appear unpredictable, unpredictability is not necessarily adaptive. It is adaptability that is needed, and it is needed in five main ways.

  • Adaptive in the moment to surprises
  • Adaptive through general driving experience
  • Adaptive to the specific car
  • Adaptive to passenger expression
  • Adaptive to particular map regions

In addition, driving a car is

  • Highly mechanical,
  • Visual,
  • Auditory,
  • Plan oriented,
  • Geographical, and
  • Preemptive in surprise situations.

Modelling Driving Complexities

This requires a model, or models, comprising several kinds of objects (a minimal data-structure sketch follows the list).

  • Maps
  • The vehicle
  • The passenger intentions
  • Other vehicles
  • Other obstructions
  • Pedestrians
  • Animals
  • Crossings
  • Traffic signals
  • Road signs
  • The roadside
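Here is a minimal sketch of how those object kinds might be represented in a world-model data structure, assuming simple dataclasses; the field names are illustrative, not drawn from any particular AV codebase.

```python
# Hypothetical world-model skeleton for the object kinds listed above.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TrackedObject:
    kind: str                      # "vehicle", "pedestrian", "animal", "obstruction"
    position: Tuple[float, float]  # map coordinates
    velocity: Tuple[float, float]
    predicted_path: List[Tuple[float, float]] = field(default_factory=list)

@dataclass
class StaticFeature:
    kind: str                      # "crossing", "traffic_signal", "road_sign", "roadside"
    position: Tuple[float, float]
    state: str = ""                # e.g. a signal's current phase

@dataclass
class WorldModel:
    map_region: str                          # which map tile is currently loaded
    ego_state: TrackedObject                 # the vehicle itself
    passenger_intent: str                    # destination and route preferences
    tracked: List[TrackedObject] = field(default_factory=list)
    static: List[StaticFeature] = field(default_factory=list)
```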

Neither Mystery nor Indeterminacy

Although these models are cognitively approximated in the human brain, how well they are modeled, and how effective those models are at reaching something close to a reasonable balance of the above priorities, varies from driver to driver and, for the same driver, from trip to trip.

However, as complex as driving is, it is not mysterious. Each of the above models is easy to consider at a high level in terms of how it interacts with the others and what mechanical and probabilistic properties it has. Detailing these is an enormous task, and making the system work reliably is a significant engineering challenge, in addition to the training question.

Inevitability of Achievement

Regardless of the complexity, because of the economics involved and the fact that it is largely a problem of mechanics, probability, and pattern recognition, it will be done, and it will eventually be done well.

When it is, as unlikely as this sounds to the person who accepts our current culture as permanent, human driving may become illegal in this century in some jurisdictions. Any traffic analyst can mount heaps of evidence that most humans are ill-equipped to drive a machine that weighs a ton at common speeds. The licensing of non-professional drivers has only become widely accepted because of public insistence on transportation convenience and comfort and because the workforce economy requires it.

Autonomous cars may reflect the best of human capabilities, but they will likely far surpass them because, although the objects in the model are complex, they are largely predictable, with the notable exception of children playing. AV technology can apply the standard remedy there: slow way down, which effectively brings the entire scenario into slow motion. AI components that specifically detect children and dogs are likely to emerge soon, if they do not already exist.

Randomness

Randomness is important in training. For instance, a race car driver will deliberately create skids of various types to get used to controlling them. In machine learning, we see pseudo-random perturbations introduced during training so that the gradient descent process is less likely to get caught in a local minimum and more likely to find a global minimum (optimum).
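A toy sketch of that idea, assuming a bumpy one-dimensional loss surface: seeded pseudo-random perturbations are added to plain gradient descent so the search is less likely to settle in a shallow local minimum. The surface, learning rate, and noise schedule are illustrative assumptions, not anything from a real AV training pipeline.

```python
# Toy illustration: annealed pseudo-random perturbations on gradient descent.
import math
import random

def loss(x: float) -> float:
    # A bumpy 1-D surface with several local minima.
    return 0.1 * x**4 - 0.5 * x**2 + 0.3 * math.sin(5 * x)

def grad(x: float, eps: float = 1e-5) -> float:
    return (loss(x + eps) - loss(x - eps)) / (2 * eps)

def descend(x: float, noise_scale: float, steps: int = 2000, lr: float = 0.02) -> float:
    rng = random.Random(7)            # seeded: the "randomness" is reproducible
    for _ in range(steps):
        x -= lr * grad(x) + rng.gauss(0.0, noise_scale)
        noise_scale *= 0.999          # anneal the perturbation over time
    return x

x0 = 2.0
print("plain GD ends at     x =", round(descend(x0, 0.0), 3))
print("perturbed GD ends at x =", round(descend(x0, 0.2), 3))
# Comparing the two endpoints (and their losses) shows how the perturbed run
# can hop out of the basin the plain run settles into.
```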

Deadlock

The question is correct in stating that "a dose of unpredictability could have its uses." The deadlock scenario is an interesting one, but it is unlikely to occur as standards develop. When four drivers come to a stop sign at the same time, they really don't. It only seems like they did. The likelihood that none of them arrived more than a millisecond before the others is astronomically small.

People cannot detect (or would not honestly acknowledge) such small time differences, so it usually comes down to who is most gracious about waving the others on, and there can be some deadlock there too, which can become comical, especially since everyone really wants to get moving. Autonomous vehicles will extremely rarely encounter a deadlock that is not covered by the rule book the government licensing entity publishes, which can be programmed into the system as driving rules.

On those rare occasions, the vehicles could digitally draw lots, as suggested, which is one place where unpredictability is adaptive. Doing skid experimentation like a race car driver on Main Street at midnight may be what some drunk teen might do, but that is a form of unpredictability that is not adaptive toward a sensible ordering of the driving priorities above. Neither is texting or eating while driving.
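Drawing lots digitally could be as simple as the following sketch, assuming the vehicles can exchange a short shared seed; the message contents and tie-break rule are hypothetical, not any published V2V protocol.

```python
# Hypothetical tie-break by digital lot drawing between deadlocked vehicles.
import random

def draw_lot(vehicle_id: str, shared_seed: int) -> float:
    # Each car derives its lot from the exchanged seed plus its own ID,
    # so every participant computes the same ordering independently.
    rng = random.Random(f"{shared_seed}:{vehicle_id}")
    return rng.random()

def who_goes_first(vehicle_ids, shared_seed):
    # Lowest lot proceeds first; the rest follow normal right-of-way rules.
    return min(vehicle_ids, key=lambda vid: draw_lot(vid, shared_seed))

print(who_goes_first(["car_A", "car_B"], shared_seed=123456))
```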

Determinism

Regarding determinism, for the uses discussed here, pseudo-random number generation with particular distributions will suffice:

  • Deadlock release, and
  • Training speed-ups and improved reliability when optimization encounters local minima that are not the global minimum.

Functional tests and unit testing technologies can not only handle components that use pseudo-randomness; they sometimes employ pseudo-randomness themselves to provide better test coverage. The key to doing this well is an understanding of probability and statistics, and some engineers and AI designers understand it well.
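A small sketch of that point: a component that uses pseudo-randomness stays testable when the generator is injected and seeded, and seeds themselves can be used to widen coverage. The DeadlockResolver class and its API are assumptions made up for this example.

```python
# Hypothetical example: testing a pseudo-random component with seeded generators.
import random
import unittest

class DeadlockResolver:
    def __init__(self, rng: random.Random):
        self.rng = rng                      # injected, so tests control the seed

    def pick(self, vehicle_ids):
        return self.rng.choice(sorted(vehicle_ids))

class DeadlockResolverTest(unittest.TestCase):
    def test_is_reproducible_for_a_fixed_seed(self):
        a = DeadlockResolver(random.Random(99)).pick(["A", "B", "C"])
        b = DeadlockResolver(random.Random(99)).pick(["A", "B", "C"])
        self.assertEqual(a, b)              # same seed, same decision

    def test_every_vehicle_is_eventually_chosen(self):
        # Pseudo-randomness used *for* coverage: many seeds exercise all outcomes.
        chosen = {DeadlockResolver(random.Random(s)).pick(["A", "B"]) for s in range(50)}
        self.assertEqual(chosen, {"A", "B"})

if __name__ == "__main__":
    unittest.main()
```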

Element of Surprise

Where randomness matters most in AV technology is not in the decision making but in the surprises. That is the bleeding edge of this engineering work today. How can one drive safely when a completely new scenario appears in the audio or visual channels? This is perhaps where the diversity of human thought is most adept, but at highway speeds it is usually too slow to react in the way we see in movie chase scenes.

Correlation Between Risk and Speed

This brings up an interesting interaction of risk factors. It is assumed that higher speeds are more dangerous, but the actual mechanics and probabilities are not that clear cut. Low speeds produce temporally longer trips and higher traffic densities. Some forms of accident are less likely at higher speeds, specifically those related mostly to traffic density or happenstance. Other forms are more likely at higher speeds, specifically those related to reaction time and tire friction.

With autonomous vehicles, tire slippage may be more accurately modeled and reaction time may be orders of magnitude faster, so minimum speed limits may be imposed more often and upper limits may increase once we get humans out of the driver's seats.
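A back-of-the-envelope comparison, assuming a standard stopping-distance model with round numbers for reaction time and tire friction, shows how much of the risk at speed is tied to reaction time rather than to speed alone.

```python
# Rough stopping-distance comparison; reaction times and friction are assumed values.
G = 9.81              # gravitational acceleration, m/s^2

def stopping_distance(speed_kmh: float, reaction_s: float, mu: float = 0.7) -> float:
    v = speed_kmh / 3.6                        # convert to m/s
    reaction_dist = v * reaction_s             # distance covered before braking starts
    braking_dist = v**2 / (2 * mu * G)         # idealised braking on dry asphalt
    return reaction_dist + braking_dist

for speed in (50, 100, 130):
    human = stopping_distance(speed, reaction_s=1.5)    # typical human reaction time
    av = stopping_distance(speed, reaction_s=0.05)      # assumed machine latency
    print(f"{speed} km/h: human ≈ {human:.0f} m, autonomous ≈ {av:.0f} m")
```

The braking term grows with the square of speed either way; what the machine removes is most of the reaction-distance term.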

Douglas Daseeco
  • Thanks for the great answer! The point about modelling a computerized driver was brought up [here](https://softwareengineeringdaily.com/2018/08/08/self-driving-engineering-with-george-hotz/) - it wasn't so much about emulating the human mind *with its flaws* but rather emphasizing that the hard part of this job is to build the AI, not a physical car. The extrapolation to the topic of randomness is mine. – guillaume31 Sep 05 '18 at 09:31
  • Around 09:50: *"I almost don't even like the term 'self-driving car' because it implies that the car drives. I think what we're really trying to build is a computerized driver. And then you don't think of yourself as building a car, you think of yourself as building a human."* – guillaume31 Sep 05 '18 at 09:35
  • @guillaume31, Thank you for the good Q. ... Although I understand what the writer of the quote intends to say, the quote contains one conceptual flaw per sentence. ... Sentence 1: The AI is packaged within the car during manufacture, so the cars do drive. ... Sentence 2: The term computerized driver obscures the undesirability of modelling driving intelligence after typical human driving. ... Sentence 3: We don't want a robot taking up a seat. ... The quote illustrates why only 1 in 1,000 of these AI start-ups are expected to survive. How can they design clearly if they can't write clearly? – Douglas Daseeco Sep 12 '18 at 16:36
2

Self-driving cars apply Reinforcement Learning and Semi-Supervised Learning, which makes them better suited to situations that the developers did not anticipate themselves.
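For readers unfamiliar with the term, here is a minimal tabular Q-learning sketch of the reinforcement learning idea; the toy states, actions, and rewards are assumptions and far simpler than anything a real driving stack would use.

```python
# Minimal tabular Q-learning sketch; states, actions, and rewards are illustrative.
import random
from collections import defaultdict

actions = ["keep_lane", "slow_down", "change_lane"]
Q = defaultdict(float)                    # Q[(state, action)] -> estimated value
alpha, gamma, epsilon = 0.1, 0.9, 0.2
rng = random.Random(0)

def choose(state):
    # Epsilon-greedy: mostly exploit learned values, sometimes explore.
    if rng.random() < epsilon:
        return rng.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# One illustrative transition: slowing down near an obstacle is rewarded.
update("obstacle_ahead", "slow_down", reward=1.0, next_state="clear_road")
print(choose("obstacle_ahead"))
```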

Some cars now apply Swarm Intelligence, where they effectively learn from interactions among themselves, which can also aid in cases of transfer learning.

DukeZhou