Go ahead and read the article. It's OK, I'll wait....
Many people seem to expect a computer to solve every problem perfectly before we let it make decisions that involve risk to human health or life. What we should really be aiming for is a computer that does significantly better than a person would do *in the exact same situation*...and let's face it, we're already there just because a computer has a MUCH faster reaction time. The real issue is whether we'll allow it to make even extremely low-risk decisions.
The article discusses whether risking the life of the driver would be considered acceptable. But consider how much you can lower the risk to the occupants' lives when 1) they are wearing seat belts, 2) they are protected by airbags, and, most importantly, 3) the car is driven at a safe speed for road conditions, which includes slowing down when visibility is reduced. Under those conditions the risk to occupants is incredibly low, because that last item by definition requires the car to be driven at a speed at which an accident is almost always avoidable.
Unfortunately, because we drive so often and accidents are so few and far between for the majority of people, we've come to see some risk as acceptable in order to save a few seconds here and a few minutes there. But with self-driving cars we may have to reevaluate that calculation, because people see a higher risk that they bring upon themselves (internal risk) as more acceptable than a lower risk that they cannot control (external risk). That's a major reason, for example, that plenty of people are scared to fly but not afraid to drive, even though the latter is much riskier per mile.
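To put rough numbers on that flying-vs-driving comparison, here's a back-of-envelope sketch. The per-mile fatality rates are illustrative round assumptions, loosely in line with commonly published US figures (on the order of 1 death per 100 million vehicle-miles for driving, and far below 0.01 per 100 million passenger-miles for commercial aviation), not exact statistics:

```python
# Back-of-envelope comparison of per-mile fatality risk.
# Both rates are ASSUMED illustrative values, not exact figures.
driving_deaths_per_100m_miles = 1.1    # assumption: roughly US driving rate
flying_deaths_per_100m_miles = 0.005   # assumption: commercial aviation rate

ratio = driving_deaths_per_100m_miles / flying_deaths_per_100m_miles
print(f"Driving is roughly {ratio:.0f}x riskier per mile than flying")
# Output: Driving is roughly 220x riskier per mile than flying
```

Even if those assumed rates are off by a factor of a few in either direction, the gap is so large that the conclusion holds: the risk people control themselves feels safer than the much smaller risk they hand over to someone (or something) else.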
Until we recognize and overcome that inaccurate, emotional evaluation of risk, we can't act completely rationally about health and safety.
What do you think?