"... a factor that investigators are likely to focus on as they assess the performance of the technology in the first pedestrian fatality involving an autonomous vehicle. The Uber had a forward-facing video recorder, which showed the woman was walking a bike at about 10 p.m. and moved into traffic from a dark center median. 'It’s very clear it would have been difficult to avoid this collision in any kind of mode,' Sylvia Moir, the police chief in Tempe, Arizona [said]... 'The driver said it was like a flash, the person walked out in front of them,' Moir said, referring to the backup driver who was behind the wheel but not operating the vehicle. 'His first alert to the collision was the sound of the collision.'"
Bloomberg reports.
ADDED: When I drive around pedestrians, I think about their capacity to suddenly make an erratic move, and I care about people enough to keep my eye on them. You've got to teach the self-driving cars to act like they care. I've also got a powerful selfish motivation that a machine cannot have: If I ever hit a person — even if it were their fault — I would carry that experience with me and suffer emotionally for the rest of my life.
138 comments:
Artificial intelligence can't overcome natural stupidity, at least not yet.
Too bad they couldn't have constructed a 16 million dollar pedestrian overpass to keep her safe.
This story is way weirder than it seems at first. Way. Click all the way through.
I've been to Arizona and I'm pretty sure it's legal for blind people to drive there.
One nice thing about self-driving car accidents: the investigators will have plenty of information about what happened, including video and lidar.
No need to rely on witnesses with bad memories and/or motives to lie.
Maybe she wanted to go down in history as the first person killed by a driverless car.
The backup driver was a former armed robber who self-identified as a woman, according to my facts.
Iowahawk: Homeless Junkie Killed Jumping In Front of Robot Car Carrying Trans Armed Robber is the most 2018 story ever.
Not feeling the humor.
tim maguire said...
Maybe she wanted to go down in history as the first person killed by a driverless car.
I very much doubt it was suicide, as the car was not going fast enough to make death all that likely. If I was investigating it, I would certainly consider the possibility of someone intentionally getting themselves hit, expecting to survive, then suing a company with deep pockets.
I'm not saying that's what happened in this case, but it would very much surprise me if someone does not try that at some point.
This will happen a lot more when there are more electric vehicles, unless they add noise to them.
Larry J said...
"Artificial intelligence can't overcome natural stupidity, at least not yet."
I don't believe there is any such thing as "artificial intelligence"; I think "ersatz intelligence" is a better description. When I am driving, pedestrians all carry different risk profiles. If somebody strikes me as mentally unstable, or like they aren't paying attention, looking at their phone, or somebody with small children, or even sometimes small children alone, I slow down on the chance they could "suddenly" end up in the road. If I didn't do that, in my mind, the accident would be my fault. This is far beyond the scope of any kind of "artificial intelligence" that anybody outside of science fiction has cooked up.
"Not our fault."
We will proceed as planned.
Not that humans are anywhere close to perfect drivers. Self driving on the Interstates probably is low enough risk to be worth it. If they can get the risk as low as plane flight, for example, I would be all over it. I hate flying.
I am amazed at the reactions to this story.
Technology really does scare the shit out of us.
PS In case you're still worried, a photograph of you does not steal your soul.
Not just slow down, but move into the next lane if it's safe. AI won't do that.
Cadillac's Super Cruise works only on certified interstates.
Maybe she wanted to go down in history as the first person killed by a driverless car.
Like John Lillison--England's greatest one-armed poet and first man killed by an automobile. He wrote Pointy Birds.
Oh, pointy birds
oh, pointy pointy
Anoint my head
anointy-nointy.
https://www.youtube.com/watch?v=nU4RD2f2BnY
rhhardin said...
Iowahawk: Homeless Junkie Killed Jumping In Front of Robot Car Carrying Trans Armed Robber is the most 2018 story ever.
Speaking of Judy Garland, a major genius has produced "Of Oz The Wizard", edited so the dialog is in alphabetical order. Skip to "the" for extra fun.
It's all entertainment. The humor is explicitly framing it as what it is.
Last night I was driving and there was a black Scotty sauntering down the road, ignoring me as I drove slowly behind it. I am not sure that a camera would have picked up the black dog when it was a little bit dusky, and I wonder about lidar, since the thing was almost as short as a dachshund. I have lidar, just for collision avoidance, and it gave nary a peep. Cameras have yet to match the dynamic range of the human eye, and the computing power to process that much information in real time? I am not sure we have it yet.
Lidar helps, I suppose. Once my car warned me of "objects in the road," it was a line of cones set up to close a lane, lots of other times I have seen stuff in the road with no warning. On the negative side, if you see a jogger by the road, and cross the center line to give them a little space, the car will jerk you back towards the jogger. You have to signal first!
The whole problem with the smart transport vehicle, is that it remains on the ground.
This is dumb. The Jetsons would never operate their vehicles on the ground. You could kill somebody.
Ralph L said...
Not just slow down, but move into the next lane if it's safe. AI won't do that.
Not only will AI do that, but it will do it better than you, because it will be much better than you at calculating the odds, both the odds that a pedestrian will step into the road, and the odds that changing lanes will cause other problems.
I'm not saying it is to that level now, but within 5 years AI will have thousands of human lifetimes' worth of driving experience, plus the ability to calculate the odds on more scenarios than any human would consider.
Maybe Ignorance, but it won't know who to give a wider berth to.
Last night a dog ran in front of me dragging a leash anchor. A good horn blast turned him around, but it was a good thing I'd slowed and moved to turn left.
"Technology really does scare the shit out of us.
PS In case you're still worried, a photograph of you does not steal your soul."
Your faith in computer programmers operating automobiles in public, not even remotely, but in advance of any possible situation that may arise is touching. No doubt the car's path is unfolding, as it should.
I was once charged with idiot-proofing an app. The boss remarked that this presupposed a finite supply of idiots.
I'd ask why the darkness is relevant since the auto driver isn't using sight for navigation.
Ralph L said...
Maybe Ignorance, but it won't know who to give a wider berth to.
Why not? And what makes you think you do know that?
I'm sure that, for a long time, there will be accidents involving AI where a human would look at the situation, and instantly make a better decision. There will be many, many, many more situations where both the human and the AI will see it as a close call, and the computer will make the better choice. The AI will save a lot of lives, but will make headlines for the few "obvious" mistakes.
damikesc said...
I'd ask why the darkness is relevant since the auto driver isn't using sight for navigation.
They do use video as one component.
I was under the impression radar and other systems were the major navigation systems.
Not only will AI do that, but it will do it better than you, because it will be much better than you at calculating the odds, both the odds that a pedestrian will step into the road, and the odds that changing lanes will cause other problems.
I'm not saying it is to that level now, but within 5 years AI will have thousands of human lifetimes' worth of driving experience, plus the ability to calculate the odds on more scenarios than any human would consider.
So AI will be able to spot which pedestrians are wearing headphones, for example, and appear oblivious? Will be able to spot on the fly which ones are scrolling on their phones? Which ones appear to be, as you see in Boston plenty, wasted on drugs, or fit the profile to be potentially wasted on drugs? I exercise a lot of prejudice while driving, I admit. Or will it be overcautious, causing other drivers to attempt to find a way around it?
Personally not a big fan of the driverless car concept, but it seems pretty obvious that overall it would be safer and result in smoother traffic.
Essentially will remove probably 99% of the reaction time that humans have, as well as the distractions. But it will be susceptible to jamming, hacking, etc.
tim in vermont said...
Your faith in computer programmers operating automobiles in public, not even remotely, but in advance of any possible situation that may arise is touching.
My faith is only that, in the limited situations where they are being rolled out, they are safer than the average human driver, and will soon be better than the vast majority of drivers. That's a relatively low standard.
There are going to be fatalities from autonomous cars. As long as we have motor vehicles, there will be fatalities involving them.
The questions will be whether autonomous vehicles can avoid fatalities that would have occurred with human drivers (likely yes in most cases), and whether autonomous vehicles will cause fatalities that never would have occurred otherwise.
It's the answer to the second question that will drive acceptance of them. Oh, and the PR battle, of course. Will self-driving cars appear to be safer than human-driven cars?
As we all know, appearances are everything. It won't matter in terms of their acceptance if they're much safer, but don't appear to be. Neither will it matter if they're far less safe, but appear to be safer.
The AI will save a lot of lives, but will make headlines for the few "obvious" mistakes.
This trust in big data is touching too. The problem with "big data" is that it is subject to systematic bias in the collection of those thousands of lifetimes of driving experience. Maybe one day, with better sensors, the computers will be able to pick up in a pedestrian's walk that they are inebriated, or distracted, or whatever, but until that day comes, if a computer decides to keep driving and the video shows that the pedestrian martyr to technology was obviously in danger, there is going to be lawyer jubilee. Especially with companies like Google on the hook.
I think too, that there are ethical issues with experimenting with human lives, especially without the consent of the lab rats.
tim in vermont said...
So AI will be able to spot which pedestrians are wearing headphones, for example, and appear oblivious? Will be able to spot on the fly which ones are scrolling on their phones? Which ones appear to be, as you see in Boston plenty, wasted on drugs, or fit the profile to be potentially wasted on drugs? I exercise a lot of prejudice while driving, I admit. Or will it be overcautious, causing other drivers to attempt to find a way around it?
It will miss a lot of the clues on which you base your decisions. It will pick up on a lot of clues that you or I don't notice. And it will make statistically better decisions regarding being over or under cautious.
Maybe she wanted to go down in history as the first person killed by a driverless car.
Obviously not, but she is a historic footnote now. Like the U.S. Army's Lieutenant Selfridge, first to die in an airplane (1908).
The beauty of driverless is that private companies will be able shut down the vehicles of those with low social credit scores.
Chris of Rights said what I wanted to say more clearly.
tim in vermont said...
I think too, that there are ethical issues with experimenting with human lives, especially without the consent of the lab rats.
Every single new human driver starts out by experimenting with how to drive their car. Those experiments are far more risky than any risks taken in the development of AI.
It will miss a lot of the clues on which you base your decisions. It will pick up on a lot of clues that you or I don't notice. And it will make statistically better decisions regarding being over or under cautious.
That's just faith. You can't know that. Maybe they are blind to the information that they would need and no matter how many "lifetimes" of data that they gather, what they need is not there. One thing that struck me was the Tesla driver who was killed because the system could not tell the difference between the side of a white truck and an expanse of sky. At least that guy signed up for the experiment by turning the system on. But the camera wasn't good enough to gather the information required. Cameras are far inferior to the human eye in most respects. It's the sensors that worry me, and the better the sensors, the more processing power it is going to take to deal with the new information.
Every single new human driver starts out by experimenting with how to drive their car. Those experiments are far more risky than any risks taken in the development of AI.
That's rhetoric, not logic. Rhetoric is stuff that sounds like logic, but isn't.
"Lidar helps, I suppose."
Please let me know when they bring out PHODAR (pothole detection radar).
What if most of the victims of EI (Ersatz Intelligence) drivers are low status types, drug users, homeless types, mentally challenged, children, what then?
ADDED: When I drive around pedestrians, I think about their capacity to suddenly make an erratic move, and I care about people enough to keep my eye on them. You've got to teach the self-driving cars to act like they care.
Asimov's first law of robotics.
I've also got a powerful selfish motivation that a machine cannot have: If I ever hit a person — even if it were their fault — I would carry that experience with me and suffer emotionally for the rest of my life.
Which is the main reason I get so angry at cyclists and pedestrians that ignore road rules. I don't know what Madison is like, but in Ann Arbor cyclists zip through stop signs and red lights without even pausing, and often flip off the cars desperately braking to avoid killing them. Spandex-dude, I don't want to kill you; could you help me avoid that?
The beauty of driverless is that private companies will be able shut down the vehicles of those with low social credit scores.
And deplorables on election day.
What percentage of the accidents occurring today would be avoided by the improved reaction time/lack of distractions/obeying speed limits? 99%? Pretty obviously that improvement will exceed the handful of occasions when a human sees something unusual, which, btw, most human drivers either won't notice or won't react to quickly enough.
tim in vermont said...
That's rhetoric, not logic. Rhetoric is stuff that sounds like logic, but isn't.
Logic says that if the accident rates are lower for autonomous cars than for new drivers ( driving under similar circumstances ) then the autonomous cars are less risky.
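The comparison that comment describes is just a normalized-rate check. A toy sketch (the counts and mileages below are made up for illustration, not data):

```python
def per_million_miles(crashes: int, miles: int) -> float:
    """Crash rate normalized to crashes per million miles driven."""
    return crashes / miles * 1_000_000

# Purely hypothetical counts -- the point is that fleets of very
# different sizes can only be compared after normalizing by exposure.
human_rate = per_million_miles(480, 100_000_000)  # 4.8 per million miles
av_rate = per_million_miles(30, 10_000_000)       # 3.0 per million miles
av_less_risky = av_rate < human_rate              # True
```

The "driving under similar circumstances" caveat matters: if the autonomous miles are all on sunny freeways and the human miles include snowstorms, the normalized rates still aren't comparable.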
"Well, Tim, we have to do the experiments to find out who the victims are!"
I am sure we could probably save a lot of lives, over the long run, if medical research were allowed to do potentially destructive experiments on humans, because "people are experimenting all the time! Using drugs, etc..."
The beauty of driverless is that private companies will be able shut down the vehicles of those with low social credit scores.
A Detroit-area dealership was able to do that years ago. Mel Farr had weekly payment plans, and had an attachment to allow the dealer to turn off the car's ignition a few days after the due date. A little blinking light would warn you that the payment was due.
tim in vermont said...
That's just faith.
It is an estimate, based on years of evidence, that we will continue to see the sorts of technological advances that we have been seeing year after year, decade after decade.
Logic says that if the accident rates are lower for autonomous cars than for new drivers ( driving under similar circumstances ) then the autonomous cars are less risky.
Good luck with that argument in court!
"When I drive around pedestrians, I think about their capacity to suddenly make an erratic move, and I care about people enough to keep my eye on them. You've got to teach the self-driving cars to act like they care."
I think this is right, but (a) Uber may have sufficient sensors (surely more than mere peripheral vision) to replicate this human intuition already, and (b) if not, it obviously may not have made a difference anyway.
I wonder if the car automatically sounded the horn at the pedestrian before impact (or as a courtesy, as I might do, even before impact was imminent).
It is an estimate, based on years of evidence, that we will continue to see the sorts of technological advances that we have been seeing year after year, decade after decade.
Computing advances come in the areas where computers advance. You can't decide beforehand where the advances are going to come from. It does create the illusion that you can.
"Computers are logical, but not reasonable!"
Careful drivers avoid a lot of accidents by anticipation, which is a product of experience, intuition and attentiveness. Can your robot do that? Can it be programmed to care?
tim in vermont said...
Good luck with that argument in court!
I'm well aware that lawsuits ( or legislatures ) could kill autonomous cars in an instant.
If they do, it won't be because they are not safer than human drivers.
(I've been on the jury for a wrongful death lawsuit. The twelve person jury had seven people whose default response was somebody died, therefore somebody should be held accountable, and it didn't really matter if that second somebody had any causal relationship to the first somebody's death. )
Hagar said...
"Computers are logical, but not reasonable!"
They do what you tell them to do, not what you want them to do. Some science fiction writer once said that the compiler should take the comments into account. Hey! It's five years away!
Would a driverless car swerve to avoid a squirrel? How about a cute little innocent bunny? Could it see a deer with big brown eyes running out of the woods seeking safety from a dog chasing it? How about police tape across the road, can it see that?
Bob Boyd said...
The beauty of driverless is that private companies will be able shut down the vehicles of those with low social credit scores.
I suspect you are joking but this is a very real danger. Not necessarily of low credit scores but it does give govt control of your movements.
Headed for the big demo that the govt doesn't want to happen? Simple enough to shut off your car or even better, program it to turn around and take you home.
Yes, of course I am paranoid. Didn't the Obama administration teach you anything? And now with President Donald J (as in "Just like Hitler(tm)") Trump, you will get in your car to go get a liter of milk, the doors will lock and it will take you straight to a death camp. Only Crooked Hillary supporters, of course, so I am probably safe.
John Henry
"Personally not a big fan of the driverless car concept, but it seems pretty obvious that overall it would be safer and result in smoother traffic."
It will be sold on that basis, but the ultimate goal is much different.
Want to go from Point A to Point B via route X? Nope, driverless car controller says "not so fast, you're going via route Y. And you've already used your allotted miles this week, maybe tomorrow".
Control is the operative word.
I still think OTR trucking will be the first self-driving market. Companies and owner-operators will convert immediately when the tech has a payback period that provides a profit margin over NOT making the switch.
Fed regulations will have to flex.
Harder to imagine spending $20K extra for a consumer car, frankly. My brand new VW Golf Sportwagen 4Motion was only $22K (including tax/tags/title - yes I am bragging). If it was $42k I wouldn't have bought it. My payback for $20K in extra cost is a looooooong time.
-XC
If you were in the back seat of a driverless car, would it listen to you?
David said...
Careful drivers avoid a lot of accidents by anticipation, which is a product of experience, intuition and attentiveness. Can your robot do that?
Yes.
Can it be programmed to care?
No. Nor does it need to. Humans use caring as a motivation to be careful, to pay attention and drive the best they can. The computer does not need that motivation, it can't do anything except pay attention and drive the best that it can.
Phil 3:14 said...
PS In case you're still worried, a photograph of you does not steal your soul.
Bullshit, Phil.
Remember this story?
https://boingboing.net/2010/02/17/school-used-student.html
School used student laptop webcams to spy on them at school and home
According to the filings in Blake J Robbins v Lower Merion School District (PA) et al, the laptops issued to high-school students in the well-heeled Philly suburb have webcams that can be covertly activated by the schools' administrators, who have used this facility to spy on students and even their families.
Or this one:
https://www.newyorker.com/magazine/2012/02/06/the-story-of-a-suicide
A few weeks into the semester, Ravi and another new student, Molly Wei, used a webcam to secretly watch Clementi in an embrace with a young man.
They secretly hacked into his laptop webcam then posted video to the web.
If you have a webcam make sure it is physically disconnected unless you are actually using it. Not turned off, physically unplugged.
If you have a webcam on your laptop, pop the lens off and use a small screwdriver to destroy the camera. At the very least, put a piece of duct tape over it.
Cameras really CAN steal your soul.
John Henry
I've written many programs that were intended to do reasonable things and it always wound up that you figured out a dumb algorithm that accomplishes it.
The intelligence is never in the computer. But the computer may be much better than you are at doing that thing.
I want to say something about fools and foolproof.
"I suspect you are joking but this is a very real danger. Not necessarily of low credit scores but it does give govt control of your movements."
I'm not joking. And I said social credit score. https://www.theinquirer.net/inquirer/news/3028737/chinese-citizens-with-poor-social-credit-rating-to-be-barred-from-public-transport
I see a trend toward using big business and social media to do things the government can't do. It's a way around the protections in the Constitution.
Expat(ish) said...
I still think OTR trucking will be the first self-driving market.
The killer app ( a little ironic given the actual subject of this post ) for self-driving cars will be an Uber-like service for retirement communities ( whole regions of Florida, parts of Arizona, etc.). Lots of people with declining driving skills, limited need for a car, not in a hurry, but still want the ability to go out to lunch with friends, get to a doctor's appointment, or do some shopping.
Nobody is in a hurry, so driving speeds are lower, which helps until processing speeds improve. Also, lower speeds means less damage in case of an accident.
Dense customer base, making it affordable to map the roads down to every last pothole.
Generally good weather, so you don't have to handle driving in snow.
Driving in snow is exceptionally easy, just an unlearned skill in most people. Computers would have no problem with it.
The decline of rear-drive cars has made it less entertaining. Their instability in fact makes them more controllable once you learn how to control them.
"I've written many programs that were intended to do reasonable things and it always wound up that you figured out a dumb algorithm that accomplishes it.
The intelligence is never in the computer."
Hear hear! And that "dumb algorithm" (the best kind, BTW) is a golden egg, not the goose that laid it. People mix them up all of the time.
Driverless cars will be awesome for the elderly. Eventually.
I'm struck by the word "suddenly" in the crash report. I wonder how sudden it actually was.
It seems to me that a human overseer in a driverless car could rapidly be lulled into a torpor. What are they actually going to do, after all?
Careful drivers avoid a lot of accidents by anticipation, which is a product of experience, intuition and attentiveness. Can your robot do that?
Yes.
You forgot to thump your thumb on your Dr Dobbs, when you declared your faith that computers will be able to make more accurate inferences than human drivers with less information. Probably in 15 years when they are powered by clean nuclear fusion...
I always had trouble with those airline statistics that said flying was safer than driving. Today, flying seems a lot safer than it did a couple of decades ago, when DC-10s were falling from the sky even as the same claim was being made. But anyway, I always figured: guess what? I don't drive drunk, and your statistics include drunk drivers!
I like watching the show "Air Disasters" (I don't like flying because of the airports, and I have time to drive, not because I am afraid of flying, BTW). Anyway, maybe a show like "Rules Engine Drivers Disasters" would be pretty good too.
Self-driving cars are going to have to go through the same safety issues as air travel did, but air travel takes place in a relatively controlled environment, where they can make rules like "you can't fly below the deck of air traffic control if you are a commercial plane" because they have thrown their hands up at managing interaction with less-trained pilots. It's not clear to me that the problems are solvable with the current state of technology. But then, I like to know "how" somebody is going to do something that they claim they can do, not just that they say they can do it.
You've got to teach the self-driving cars to act like they care.
Presumably this means be aware something unexpected could happen and prepare to react (or are we really talking about machines emoting?). Why assume the car was not at the awareness level you desire? The only evidence is that an accident occurred, but the only way for this to translate into proof that the car wasn't behaving as desired is by concluding that no accidents can happen if people are prepared to react. This is pretty clearly not true.
How do you prepare to react? You may reduce speed and focus attention on the suspect person; you might change lanes if possible (I don't know whether it was or not in this case). But self-driving cars don't suffer from human lapses of attention, and they always obey the speed limits. It's quite likely the machine's base level of attention is already the level of heightened awareness you seek.
Maybe they can make pedestrians wear transponders that are connected with mood rings!
Rick said...
"You've got to teach the self-driving cars to act like they care."
Presumably this means be aware something unexpected could happen and prepare to react (or are we really talking about machines emoting?).
I thought it meant physically attacking the other person while crying and/or screaming after you get in an accident and then driving away so you don't get caught.
tim in vermont said...
What if most of the victims of EI (Ersatz Intelligence) drivers are low status types, drug users, homeless types, mentally challenged, children, what then?
Is that a bug or a feature in this Trumpian age?
John Henry
rhhardin said...
Driving in snow is exceptionally easy, just an unlearned skill in most people. Computers would have no problem with it.
They won't have any trouble learning to control the vehicle. But they rely on video to see lane lines, edge of the road, etc. Eventually they will be taught to use the same clues that people use ( such as following the tire tracks of the car in front of you. ) It is a problem that they will solve, but have not yet, because it is a lower priority than other things.
Speaking of cameras and self-driving cars, what about the data from the cameras?
The car has at least four, perhaps more, cameras, always on. At least when the car is in use, maybe even when not in use.
That data could stay in the car but think of the convenience of having it continuously uploaded for more efficient traffic flow etc.
Think of what could be done with all those images.
It requires lots of storage, sure. But storage is dirt cheap. A terabyte drive 15 years ago was pretty much unthinkable. Now I have a 3TB drive that fits in my pocket. I got it for under $100 at Costco.
Processing power is the problem but even processing power is pretty cheap and powerful. The death of Moore's Law has been predicted since at least the 80's but it keeps chugging along.
There are some pretty cool cameras pretty cheap. Look at this robot with 3D vision. They sell it as a commercial item and it uses Microsoft Kinect cameras that they buy at Walmart. (Microsoft makes them jump through all sorts of hoops to get them.) https://www.youtube.com/watch?v=2kj6r0iAjvg They do a lot of extra stuff to it and brand it, but at base it is a Kinect.
BTW: Amazon sells nifty fairly high-rez borescopes/cameras that plug into your phone or tablet for about $15.
I'm not really paranoid. Paranoia is an irrational fear. My fears are entirely rational.
John Henry
It will miss a lot of the clues on which you base your decisions. It will pick up on a lot of clues that you or I don't notice. And it will make statistically better decisions regarding being over or under cautious.
OK, I'll assume this is true.
Now fast forward to a court room.
Driver on the stand (obviously grieving): He.... he... just darted out in front of me, I just didn't have time to see him. I saw his cell phone fly up at the moment of impact. I guess he wasn't watching for cars.
Corporate shill on stand: Our algorithms result in "statistically better decisions" than the average driver. It's just that, in this case, the program failed to avoid the collision.
I just can't figure out which jury will find negligence. Can you?
I want to be the lawyer for the plaintiff when the video clearly shows a pretty young woman, intently staring at her phone by the curb, who suddenly steps in front of the rules engine driven car, the car whose rules engine couldn't figure out that she was completely distracted and unpredictable.
But they rely on video to see lane lines, edge of the road, etc. Eventually they will be taught to use the same clues that people use
I have a couple of cars with lidar features; they shut down when the snow starts really coming down and the car is covered in salty slush and ice. The blind-spot detection goes, or the "collision avoidance."
Eventually they will be taught to use the same clues that people use
Human eyes are far superior to cameras in situations where the colors are all similar, or where there are huge variations in the dynamic range of colors and you need to pick something important out of an area where the differences in color are far more subtle than they are in the camera image as a whole.
Maybe if you spent a few months learning about digital photography and image processing, you would have a deeper understanding of the actual problems with computer perception, as they exist.
The number of pedestrians hit by vehicles in Low Angeles is atrocious. Weld driving vehicles could not be worse than human drivers.
Autocorrect!
I guess you could sort of more closely simulate the human eye by running three cameras with consecutive f stops bracketed around the default, high dynamic range; but you have to programmatically blend the three photos into a single image, then treat the result, or process all three in real time.
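The bracketing-and-blending idea that comment describes can be sketched in a few lines. This is a crude, illustrative exposure fusion, not a production pipeline; the Gaussian weighting around mid-gray and the toy scene values are assumptions:

```python
import numpy as np

def fuse_exposures(exposures):
    """Blend bracketed exposures, weighting each pixel by how
    well-exposed it is (close to mid-gray), so shadow detail comes
    from the long exposure and highlight detail from the short one."""
    stack = np.stack(exposures)                 # (n, H, W), values in [0, 1]
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
    weights /= weights.sum(axis=0)              # normalize per pixel
    return (weights * stack).sum(axis=0)

# A toy one-row scene with deep shadow, midtone, and bright sky,
# "shot" at three bracketed exposures.
scene = np.array([[0.02, 0.5, 0.98]])
under, normal, over = scene * 0.25, scene, np.clip(scene * 4, 0, 1)
fused = fuse_exposures([under, normal, over])
```

Even this toy version hints at the real-time cost the comment worries about: three full frames captured, weighted, and merged per output frame, before any perception work has even started.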
Moore's law is sort of analogous to the 2nd law, it's about trends, not specific predictions.
It seems that there is a difference between driving a car in accordance with traffic laws, even doing so well, and driving defensively, predicting possible dangers.
Suppose you come to an intersection. There is a large, leafy bush to your right. On the other side of the road, you see the back of an octagonal sign facing the oncoming traffic. Do you infer the possibility of a stop sign behind the bush? I’ll bet most of us commenting here do so without consciously realizing it. But does the Uber AI? Our eyes spot movement, including a deer coming to the side of the road. Can the Uber sensors do that?
It’ll be a while before this technology is ready for prime time.
...I care about people enough to keep my eye on them. You've got to teach the self-driving cars to act like they care.
No, you don't. The self-driving car does not need to care in order to pay attention and drive carefully, it can do nothing else.
That is the huge advantage of self-driving cars. They do not care about anybody in any way whatsoever, and yet always act in the way that is in everyone's best interest anyway.
In other words, self-driving cars are libertarians!
Corporate shill on stand: Our algorithms result in "statistically better decisions" than the average driver. It's just that, in this case, the program failed to avoid the collision.
I just can't figure out which jury will find negligence. Can you?
Let's correct this:
Corporate Shill on Stand:
Here's the video of the person suddenly running into the street. You can see the vehicle initiated emergency braking and evasion within .1 seconds and stopped the vehicle within 20 feet. The best human reaction time is .3 seconds and the average is .5 seconds.
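A quick back-of-envelope on the reaction times quoted above. The 40 mph speed is my assumption for illustration, not a figure from the comment; the distances below are covered before braking even begins:

```python
# Distance rolled during the reaction delay alone, before any
# braking force is applied: distance = speed * reaction time.
MPH_TO_FPS = 5280 / 3600  # feet per second, per mph

def reaction_distance_ft(speed_mph, reaction_s):
    return speed_mph * MPH_TO_FPS * reaction_s

for label, t in [("computer (0.1 s)", 0.1),
                 ("best human (0.3 s)", 0.3),
                 ("average human (0.5 s)", 0.5)]:
    print(f"{label}: {reaction_distance_ft(40, t):.1f} ft")
```

At 40 mph the average human rolls roughly 29 ft before even touching the brakes, versus about 6 ft for the 0.1 s machine, which is most of the gap the hypothetical shill is bragging about.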
To be blunt, I hope this fad dies. I do not want further autonomy removed from people. This is "fixing" a non-problem.
Do you infer the possibility of a stop sign behind the bush? I’ll bet most of us commenting here do so without consciously realizing it. But does the Uber AI?
Uber AI doesn't need to infer it. The driving regulatory framework will be built into the AI and it knows every stop sign, yield, road closure, work zone, and school zone in the country.
You're inventing problems to justify your opposition.
tim in vermont said...
Maybe if you spent a few months learning about digital photography and image processing, you would have a deeper understanding of the actual problems with computer perception, as they exist.
Lol.
How about a Master's degree from a top 10 comp sci program? Concentration in Computational Perception and Robotics? Courses in computer vision, computational photography, machine learning, general AI, and AI for self-driving cars? On top of professional experience in mapping, photogrammetry, and image processing?
tim in vermont said...
"And it will make statistically better decisions regarding being over or under cautious."
That's just faith. You can't know that.
Note the word "will". And he's right, computers "will" be better than humans, and probably soon if it's not already true.
Right now: "AI now matches or exceeds the ability of experts in medicine and other fields to interpret what they see"
"Whereas humans can easily distinguish between a cat and a dog, CNNs allow machines to categorize specific breeds more successfully than people can."
And
"This has changed with the rise of a class of computer vision algorithms called deep neural networks (DNNs) that achieve human-level classification performance on object recognition tasks."
On top of professional experience in mapping, photogrammetry, and image processing?
I will note that the mapping and photogrammetry experience is fairly out-of-date, and my professional experience with image processing is both a limited amount, and limited to near-ideal imaging conditions.
I had a cruise control that constantly made very small changes in the throttle. Drove me crazy, so I rarely used it.
Self driving would be most useful to me in stop and go traffic, which I loathe, but will it sense brake lights several cars ahead, or wait until the car in front slows?
The complexity of our neural networks is not yet fully understood and, in trying to emulate them, their complexity is further revealed.
The complexity of our neural networks is not yet fully understood and, in trying to emulate them, their complexity is further revealed.
It's worse than that. Matter has no inwards. You remove one surface only to meet with another.
Coleridge, after Schelling.
Biographia Literaria chapters 5-8 cover why AI can't work, an argument that still works today.
Those who are ignorant of lit crit are doomed to repeat it.
A computer can beat humans at tic tac toe, by wearing them down.
Long ago I programmed a VIC-20 in a Kmart to sit at the ready saying "I can guess your age, hit enter"
If you hit enter, it asks
IS IT 0?
n
IS IT 1?
n
IS IT 2?
n
IS IT 3?
n
etc.
works the same way.
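The Kmart gag above can be written out in a few lines, which is rather the commenter's point: exhaustive enumeration, with no model of anything, still gets the right answer eventually:

```python
# "AI" as brute force: count up from zero, asking IS IT n? until
# the mark says yes. No inference, just enumeration.
def guess_age(answer):
    """answer(n) returns True when n is the right age.
    Returns (age, number of questions asked)."""
    n = 0
    while not answer(n):
        n += 1
    return n, n + 1

age, questions = guess_age(lambda n: n == 37)
print(f"IS IT {age}? y  ({questions} questions asked)")
```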
Personally, when I'm driving, and I see someone standing next to the road, I always get just a bit worried. I'm always imagining that they might suddenly and inexplicably step out into the road. Someone standing next to the road is often enough to cause me to tap on the brakes and try to get as far away from them as the road allows.
But speaking of this, why in general, do Hispanic pedestrians walk in the road? I have seen this so many times. There will be a sidewalk not five feet away, and I would think it would be obvious that the purpose of the sidewalk is for pedestrians to walk on it, and yet time and again the person will be walking three or four feet out in the road.
I just did a search on google for "Hispanic people walking in the road" and found that I'm not the only person to ask this. The Federal Highway Administration and the National Highway Safety Administration funded a study to ask Hispanics why they are doing this:
Hispanic Pedestrian and Bicycle Safety
see https://safety.fhwa.dot.gov/ped_bike/hispanic/fhwanhtsa/fhwahtsa.pdf
rhhardin said...
Biographia Literaria chapters 5-8 cover why AI can't work...
Fortunately for us, AI is too stupid to understand the argument, and therefore just goes on working anyway.
So I'm thinking about what I just wrote and realized that I can make two predictions.
First, the programs to autonomously drive the cars are going to have code added to slow down the vehicles any time they "see" a human being near the road.
Second, there will be people that deliberately walk beside the road with the intent of slowing down everyone on it.
There's no need to slow down. A computer's reflexes and attention are fast at all times. The computer has to recognize the situation and be coded to act properly, which, if the situation is not sensible enough to fall within the program and its failsafes, it can't do. Neither can you, of course.
All systems fail, and a failsafe system fails by failing to fail safe. - John Gall
If a child bounds into the street, watch out for a ball.
Roadnet is a Pro-Choice artificial intelligence that does not recognize a person's right to life.
Rick:
Corporate Shill on Stand:
Here's the video of the person suddenly running into the street. You can see the vehicle initiated emergency braking and evasion within .1 seconds and stopped the vehicle within 20 feet. The best human reaction time is .3 seconds and the average is .5 seconds.
Rick, you missed two things:
1. Tim in Vermont's additional comment, and
2: Juries have emotions and don't always worship at the altar of "SCIENCE!".
Let's see, we have virgin millennial geeks who still live with Mum and didn't get their driver's license until the age of 28 teaching a computer how to safely operate a motor vehicle.
What could go wrong?
If I ever hit a person — I have
even if it were their fault — It was
I would carry that experience with me I do
and suffer emotionally for the rest of my life. I don't suffer emotionally, but I do carry irrational guilt, and I definitely don't want to ever do it again.
From research I've done it appears that my experience is very similar to train drivers who hit the idiots that stray onto the tracks.
Howard said...
Let's see, we have virgin millennial geeks who still live with Mum and didn't get their driver's license until the age of 28 teaching a computer how to safely operate a motor vehicle.
What could go wrong?
1) Juries
2) Legislatures
I was rear-ended in stopped traffic years ago by a woman driving over a hill. After I heard her brakes squealing, I had enough time first to take my foot off the brake and then to change my mind, to avoid being pushed into the car in front of me, since I'd just had my front end replaced after a hit-and-run on my parked car.
AI might have prevented both accidents, but would it let me speed on an empty road?
It should prevent DUI accidents, so the bars will love it, taxi companies not so much. I don't think we'll ever get to everyone on Uber, but with young people today, who knows?
Like Ann, I always assume the worst possible actions by pedestrians (I do this for other drivers and cyclists, too). If they are close enough to suddenly jump in front me, I always assume they will. I always give myself the chance to stop in time.
Rick, you missed two things:
1. Tim in Vermont's additional comment, and
2: Juries have emotions and don't always worship at the altar of "SCIENCE!".
I don't think your retrenched position ("you're not always going to win") is much of an impediment to progress.
I'm going to repost the comment I put on this story from two days ago, since it was so late that probably nobody saw it:
Jaywalking with her bicycle "at least 60 yards from the crosswalk" at 10 p.m. People like that are a menace to traffic and by doing something stupid, not only are they endangering themselves but also causing emotional harm to innocent drivers who accidentally hit them in the dark. In this case, it was an autonomous vehicle but also had a human backup driver who probably didn't have time to react. I really am annoyed by people who bicycle at night, especially if they don't have on reflective stuff that makes them visible.
Use the damn crosswalk! Don't be a victim of stupidity!
From viewing the videos, “it’s very clear it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven) based on how she came from the shadows right into the roadway,” Moir said. The police have not released the videos.
Video night vision is part of the problem. Like it or not, you are responsible for what the car you are driving does.
tim in vermont said...
Like it or not, you are responsible for what the car you are driving does.
And pedestrians are responsible for what they do. Even when it gets them killed.
( I'm well aware that a jury may not base its decision on who was at fault. Juries are part of the problem. )
"AI is too stupid to understand the argument, and therefore just goes on working anyway."
Explain how. Can those deep neural networks operate in real time? Do the cameras and radar in use provide the same resolution as the ones in the DNN tests? Now that I know that you are an expert, I would expect better answers than "trust me, I am an expert."
"And pedestrians are responsible for what they do. Even when it gets them killed"
Ayn Rand is not emperor.
tim in vermont said...
Ayn Rand is not emperor.
I don't know what she has to do with this. Do you disagree with my statement? If so, how and why? Which other categories of adults are not responsible for what they do?
I wonder if those pictures we have to click to verify that we are not a robot are used by Google programmers to help identify how people perceive cars, signs, and other road furniture.
No answer then on the technical question.
tim in vermont said...
No answer then on the technical question.
Sorry, I had actually missed that comment.
Explain how.
You want me to explain how AI works ( something that I'm not an expert on, but that took ~4 grad level classes to reach my current level of ignorance ) in a blog comment? Can't be done.
Can those deep neural networks operate in real time?
Yes. Training neural networks can be quite slow. Running them ( on appropriate hardware ) is very fast.
Do the cameras and radar in use provide the same resolution as the ones in the DNN tests?
The cameras? Don't know, but quite likely. Most of the DNN image processing stuff I've seen starts with relatively low resolution images. The radar? I kinda doubt that they were using radar data as input to the DNN when trying to identify dog and cat breeds.
Right, always imagine that anybody or any car is about to do the most foolish thing imaginable. It's called defensive driving.
Ignorance, or Bliss, if you prefer, consider the fact that it is always possible to avoid an accident if the hazardous condition is recognized, and the vehicle is driven so that it can stop before the hazard materializes. That is, if the vehicle correctly assesses the situation, and responds in a safe way, it will never have an accident.
And that means, in effect, that every accident is caused by a failure to correctly assess the situation, or a failure to respond correctly to the assessment, or both. Put another way, every accident is the result of a conscious decision on the part of the vehicle's designers to make the car unsafe for marketing reasons.
Jupiter said...
That is, if the vehicle correctly assesses the situation, and responds in a safe way, it will never have an accident
It is impossible to correctly assess what an erratic person will do. The best you can do is assess the possibilities. And if every vehicle was driven to avoid all possibility of having an accident, traffic would come to a virtual stop. You could not drive more than a couple miles per hour in a city without the risk of someone darting from the sidewalk into the path of your car. You could not drive much faster on the highway without the risk of someone swerving in front of you while simultaneously slamming on their brakes.
And that means, in effect, that every accident is caused by a failure to correctly assess the situation, or a failure to respond correctly to the assessment, or both.
That seems to be using an odd definition of cause. Your car has a chance of getting into an accident if you drive it to work. I can prevent that by stealing your wheels. If I fail to steal your wheels, and you get in an accident, did I cause it?
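The trade-off described above can be put in rough numbers. If a hazard can first appear d feet away, solve v*t + v^2/(2a) = d for the highest speed v that still lets you stop in time. The 0.5 s reaction time and 0.7 g deceleration are illustrative assumptions, not figures from the thread:

```python
import math

G = 32.174  # gravitational acceleration, ft/s^2

def max_safe_speed_mph(sight_distance_ft, reaction_s=0.5, decel_g=0.7):
    """Highest speed at which reaction distance plus braking
    distance still fits inside the sight distance."""
    a = decel_g * G
    # v*t + v^2/(2a) = d  =>  v = a * (-t + sqrt(t^2 + 2d/a))
    v = a * (-reaction_s + math.sqrt(reaction_s ** 2
                                     + 2 * sight_distance_ft / a))
    return v * 3600 / 5280  # ft/s -> mph

for d in (20, 50, 150):
    print(f"hazard at {d} ft -> max ~{max_safe_speed_mph(d):.0f} mph")
```

With these numbers, a pedestrian able to step out 20 ft ahead caps you around 14 mph, which is the commenter's "traffic would come to a virtual stop" point.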
You want me to explain how AI works ( something that I'm not an expert on, but that took ~4 grad level classes to reach my current level of ignorance ) in a blog comment? Can't be done.
So you took four grad-level courses in persiflage, I guess. A simple explanation of how it differs from a rules engine would be a good start. Surely that's been covered somewhere along the line. If you can't explain it at a high level in simple language, you don't know what it is doing. You have a lot of faith in something in which you have invested a lot of time and energy, not to mention dreams for the future. Stands to reason.
I think they will be great, if they work, but we get back to the analogy of doing live medical research on non-consenting humans. Sure it would benefit huge numbers of people down the line, if we could test drugs on populations and observe the results... Sure. But the human race has mostly decided to forgo those benefits for ethical reasons.
I really mostly object to the loaded term "Artificial Intelligence," it doesn't exist in the sense of the meaning of those two words in English.
Apparently "Artificial Intelligence" is nothing more than a buzzword for a huge category of computational technologies, each mostly focused on a specific problem space: neural nets, systems that infer rules from sets of data, etc. Of course none of these technologies lets the "computer cognition" machine out of the box in which it was created. There is no such thing as "Artificial Intelligence" in the sense that a lay speaker of English would understand the term. You can't say to it, for example, "Figure out how to safely drive this car" in English and have it come back with a completely gamed-out graph of all of the decisions that will ever be required, nor can you count on it to come up with novel solutions to situations it hasn't been exposed to.
It's a big grab bag of stuff cobbled together. Maybe our brains are the same, but there is something missing. Sure, a computer can "read" an optical image of text, but can it "understand" what is being said in any meaningful sense of the term, outside of things it has been programmed by a real intelligence to understand? Nope.
A simple explanation of how it differs from a rules engine would be a good start.
It differs in that AI is a broad category that covers a lot of techniques ( including rules engines ), while a rules engine is a single technique, which might achieve AI in a limited domain.
As for an overall definition/explanation, I certainly couldn't do as well as this:
Dr Charles Isbell, speaking before the House Oversight and Government Reform's subcommittee on Information Technology. ( I'm recommending just the one speaker where the video will start. I have only listened to a little bit of the others. )
tim in vermont, unlike Ignorance is Bliss I have not taken any classes in the subject but I know a few things. I have for instance constructed rule engines and have some beginning sense of their limitations and a suspicion of the incredible amount of work it would take to get a rule engine that did anything significant (and let's not neglect the possibility that it might not be even possible).
Neural Networks are something different. I think I first encountered the idea in the early 90s and it was as I recall inspired by how neurons are connected in certain layers of the brain. Part of the point of neural networks is that people don't really understand how they work. You can train them if you've chosen the right problem without knowing what they are doing. And when you have trained a neural network that can be empirically demonstrated to do something that doesn't mean that you know how it works or how the neural network solves the problem it is solving.
I dislike the phrase Artificial Intelligence because I feel it's pretentious and invites misunderstanding. For example the idea of calling a rules engine Artificial Intelligence seems wrong to me. However our brains work, it's not by rules. We can use logic, and people do use logic and it can be helpful in some circumstances, but this isn't actually how people mainly think, or so I suspect.
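The contrast this comment draws can be sketched concretely. The rules engine below states its logic outright; the perceptron (the simplest trainable "neural network") learns weights from examples, and its "rule" exists only implicitly in those numbers. The AND task, learning rate, and epoch count are all invented for illustration:

```python
def rules_engine(x1, x2):
    # Explicit, human-written logic: output 1 only if both inputs are 1.
    return 1 if (x1 == 1 and x2 == 1) else 0

def train_perceptron(examples, epochs=20, lr=0.1):
    # Classic perceptron learning rule; the learned behavior lives
    # in w1, w2, b, not in any readable statement of logic.
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for x1, x2, target in examples:
            out = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            err = target - out
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return lambda x1, x2: 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0

data = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]  # AND truth table
learned = train_perceptron(data)
for x1, x2, target in data:
    assert rules_engine(x1, x2) == learned(x1, x2) == target
print("both compute AND; only the rules engine can tell you why")
```

Both functions behave identically on this tiny task, but only one of them can be inspected and explained, which is the commenter's point about trained networks.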
I think too, that there are ethical issues with experimenting with human lives, especially without the consent of the lab rats.
If its sensorium cannot handle random targets like a bag lady or a deer, what good is it? Certainly there have to be 5,000 deer encounters learned from in any viable self-driving solution. I can't be having sex in the back seat while Bambi is about to total my ride.
Just the way the DAS of the F-35 integrates the display of all detected threats, and their envelopes, so that pilots can fly between the raindrops as it were, so every animate or inanimate object within scope must be DATEd - detected, acquired, tracked and engaged (by avoiding) - in real time.
You also might want or need a mode where someone, say a highwayman or carjacker, is obstructing your vehicle, and you want out even if he doesn't dodge.
Also, a million people will die at the first systemwide failure. If not by crashing, by starvation when you can't get to home or safety, or by trips and deliveries not made.
There will be many deaths in the short term, especially as people get used to auto-braking cars and then step out in front of operator-braked cars.
tim in vermont said...
Sure, a computer can "read" an optical image of text, but can it "understand" what is being said in any meaningful sense of the term, outside of things it has been programmed by a real intelligence to understand? Nope.
At some point I start to wonder the same thing about some of my fellow commenters.
tim in vermont said...
I think they will be great, if they work, but we get back to the analogy of doing live medical research on non-consenting humans. Sure it would benefit huge numbers of people down the line, if we could test drugs on populations and observe the results... Sure. But the human race has mostly decided to forgo those benefits for ethical reasons.
I think the medical analogy is bad. The reason medical experimentation is so creepy is that it is being done in your body. Self-driving cars are not. They are engineered objects that you may have to interact with. They may fail, resulting in people dying. We introduce engineered objects into our environment all the time. Sometimes they fail, resulting in people dying. Nobody consented to being experimented on by a pedestrian bridge at FIU. Nobody would talk about such things as a medical experiment requiring consent.
The issue is not will people die from self-driving cars? The answer to that question is (and has always been) obviously yes. The question is will more people die from self-driving cars than from the human-driven cars they replace?
Based on my knowledge of the field, I believe that over the medium-term ( 10 years? ) the answer is obviously fewer people will die from self-driving cars. Over the short term, the answer depends on how responsibly the manufacturers roll out the technology. Of course there is a big advantage to getting to the market first. But there is also a huge downside to rolling out technology that is unsafe. I suspect the big players ( with a lot of money to lose ) such as Uber, Waymo, Tesla, will be cautious.
I have some concerns that a small no-name company might try to be too aggressive, taking chances to make a name for themselves. Not really sure how to handle that, from a public policy perspective. On the other hand, we let small, no-name 16-year-olds get drivers licenses, and some of them drive too aggressively, killing people.
The issue is not will people die from self-driving cars? The answer to that question is (and has always been) obviously yes. The question is will more people die from self-driving cars than from the human-driven cars they replace?
The same argument can be made for vivisection on humans. I bet the woman killed by Uber felt that her body had been invaded. It’s an artificial distinction.
Then clearly we should stop the failed experiment of allowing humans to drive cars, since that is just like a medical experiment too.
I think there is a pretty clear distinction between intentionally injuring someone for the greater good, and taking an action that occasionally ( very rarely ), accidentally injures someone, but overall saves more people than it injures.
They have released video of the accident ( stopping just before impact. ) The pedestrian is clearly in the wrong, crossing where there is no crosswalk, in the dark, and making zero effort to get out of the way of the approaching car, whose headlights would have been clearly visible.
And from the video, it appears that a human driver would have less than a second from the first sight until impact. I would not be surprised if many human drivers would have failed to avoid an accident.
Having said that, it looks really bad for the self-driving car. Just on video alone they should have hit the brakes ~0.5 seconds before impact, significantly reducing the speed at the time of impact. And while lighting was poor, it should have been ideal conditions for radar/lidar. The combined pedestrian/bicycle/bags might have been hard to identify as to what they were, but should have been easy to identify as to where they were, and how they were moving. It should have been an easy situation to avoid.
I would love to see their internal model of the situation: what did they see, how did they classify what they saw, and what predictions did they make about what would happen next?
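The half-second-of-braking claim above can be quantified under assumed numbers. The 40 mph initial speed and 0.7 g deceleration are my assumptions, not figures from the source; the point is that even 0.5 s of braking takes a real bite out of impact speed, and a bigger one out of impact energy, which scales with the square of speed:

```python
G = 32.174              # gravitational acceleration, ft/s^2
MPH_TO_FPS = 5280 / 3600

def impact_speed_mph(initial_mph, brake_time_s, decel_g=0.7):
    """Speed remaining after braking for brake_time_s seconds."""
    v0 = initial_mph * MPH_TO_FPS
    v = max(0.0, v0 - decel_g * G * brake_time_s)
    return v / MPH_TO_FPS

v_braked = impact_speed_mph(40, 0.5)
energy_ratio = (v_braked / 40) ** 2
print(f"impact at ~{v_braked:.1f} mph, ~{energy_ratio:.0%} of unbraked energy")
```

Under these assumptions, half a second of hard braking drops a 40 mph impact to roughly 32 mph, about two thirds of the original impact energy.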