G+_kevin jones Posted May 14, 2014 So this self-driving car ethical dilemma seems kind of made up... We solved this question for humans long ago: you always save yourself and your passengers. That's the heart of defensive driving: everyone protects themselves, right? When in doubt, you wait an extra moment and merge safely behind, or take the extra look to make sure the lane is clear, these sorts of things. You do what you can not to endanger other drivers, but if it's you or them, it's them, and the feeling is mutual.
G+_Jeff Stevens Posted May 14, 2014 Ethical dilemma? Where did this come up?
G+_Kyle Turpin Posted May 14, 2014 The coverage lately has been about how a self-driving car prioritizes in an unavoidable accident. Should it hit car X or car Y when stopping isn't an option? Swerve into a wall and kill the passengers, or hit a pedestrian on the other side?
G+_James Karaganis Posted May 14, 2014 You protect the people you are transporting. If we cannot depend upon the vehicle to do that, driving in one is likely to be perceived as too dangerous. "What do you mean my car might sacrifice me to save that cell-phone wielding psychopath who caused a multi-car pileup? Fuck that, I'm driving."
G+_Billy Vaughn Posted May 14, 2014 I think for this to work, all cars would have to be self-driving, or self-driving cars would need a separate set of lanes to operate in. Human drivers are very unpredictable, so I don't see how a computer could handle certain situations in a mix of human drivers and self-driving cars.
G+_Kyle Turpin Posted May 14, 2014 Billy Vaughn Yes, but the appeal to ignorance only works so well. I don't know how it will work, but I do know that electronic control systems have much faster reflexes than humans. Whether their judgement is as good is harder to predict, but they'll definitely make the decision faster, which makes them safer in countless scenarios even without perfect prediction of other drivers' behavior.
G+_Billy Vaughn Posted May 14, 2014 Kyle Turpin Oh, I certainly agree. I have a lengthy drive to work each day and would love to be able to do something other than drive while in the car. I really hope this tech takes off fast. It would take quite some time to replace all the cars on the road, so some type of hybrid system would most likely be the best option.
G+_Neil Sedlak Posted May 14, 2014 Kyle Turpin I think it's important to discuss these issues and not dismiss them. Would you suggest your car should save you but in the process kill a busload of school kids? Is it really ethically OK to program a computer with that value system? Would you make that same choice to save yourself if you knew your kid was on the bus? Computers will have the time necessary to make those decisions, so isn't it a good idea to understand and discuss them?
G+_Kyle Turpin Posted May 14, 2014 Neil Sedlak I don't necessarily feel that this is being dismissive. My feeling is that this discussion is a hedge against having to put down in code ethical compromises that we feel queasy about admitting. It's not that the topic isn't worthy; it's that we're attacking the problem as though the question is different now that we have an agent acting on our behalf to actually implement our decision. We don't argue over whether it's more ethical to murder someone with a screwdriver than with a gun or with your bare hands, because the tool used doesn't significantly change the ethical dimension of the act.
G+_Neil Sedlak Posted May 14, 2014 Kyle Turpin It's not that the question is different now, but suddenly it is no longer possible to avoid the question. A very big distinction. Your murder analogy ignores hate crime laws, but a better analogy is that we do argue about intentional vs. accidental killing, and the punishment is vastly different for each. Is the death of someone in a crash accidental or intentional if a programmer decided on a certain choice for the computer to take? Will that programmer be liable for the car's actions? Does that make it premeditated? These are not made-up dilemmas, but things we should discuss now rather than in court or at a funeral.
G+_Jennifer Isaacs Posted May 15, 2014 Less drunk/drugged driving? Still, what if it malfunctions in some way, like things sometimes do?
G+_James Karaganis Posted May 15, 2014 Ultimately, this comes down to risk/benefit analysis, as cold-blooded as that might be, and what is best for society as a whole. Given the impressive track record of autonomous vehicles so far (Google's in particular) and the fact that they are only going to get more reliable, I think it's clear that society will benefit. There will be fewer deaths and injuries as more such machines appear on the roads. The inevitable problems will ensue when a self-driving automobile makes a decision that gets someone killed. It won't matter one bit that the robot took the only logical course of action, and it won't matter that driving deaths dropped from 30,000+ per year to 100 ... "oh my god, killer robot car murders drivers!" will read the headlines, and the courts will be busy for decades. Unless, of course, Congress drafts some sane laws that keep a little balance, but I don't see that happening anytime soon.
G+_Jennifer Isaacs Posted May 15, 2014 Anything created by humans is fallible, and most of the time only the good gets reported about companies, by those same companies. If you are getting info from Google, or hosted on Google, I expect only positives. That would just make sense. Parts wear out and mistakes are made at any company ... but if there are fail-safes I would feel better about such things than none at all. Even nature can influence whether driving is safe, like the weather.
G+_James Karaganis Posted May 15, 2014 Jennifer Isaacs The reality is that pretty much anything will be an improvement over human drivers in this country. In the case of the Google vehicles, their operational record is borne out by public record. The things were roving public highways and byways, not on a test track, and the only recorded issue wasn't the Google car's fault anyway. That's after a quarter of a million miles of operation.