As I often say about innovation, the technical problems are nothing compared to the pinhead legal problems. The Verge has a good article up sorting through some of the legal and treaty issues (yes, treaty issues) involved in automated robotic cars. It’s definitely worth your time.
The article seems unduly pessimistic to me. These are things that can be worked out — we have entire armies of lawyers in this country who stand to make millions getting everything sorted into legal precedent. And if these things prove to be safe — and I think they will — the economic pressure to work out the legal issues will be fierce.
The one thing that bothered me about the article was this:
The Geneva Convention on Road Traffic (1949) requires that drivers “shall at all times be able to control their vehicles,” and provisions against reckless driving usually require “the conscious and intentional operation of a motor vehicle.” Some of that is simple semantics, but other concerns are harder to dismiss. After a crash, drivers are legally obligated to stop and help the injured — a difficult task if there’s no one in the car.
As a result, most experts predict drivers will be legally required to have a person in the car at all times, ready to take over if the automatic system fails. If they’re right, the self-parking car may never be legal.
Did you see the subtext? The subtext is that if I’m in a crash with an automated car, there is no one around to render assistance to me.
Well, maybe. Bleeding out while unconscious or seriously injured would be a risk (although it’s not like pedestrians and bystanders are going to disappear). But being in a collision with a robot would have some advantages over being in one with a human.
Robot cars are coming, one way or another. As powerful as the legal pinheads are, the force of progress is simply too strong.