After the challenge to vox_mundi about whether a post purely on self-driving belongs in the Tesla thread, I also wondered whether it belongs in the EV thread at all.
Talking purely about self-driving tech doesn't really fit either, because self-driving is not exclusive to the EV domain: witness that both Uber and Waymo run vehicles which are not EVs.
However, self-driving does fit within the overall sphere of climate change, and of policy and solutions. Especially policy. Self-driving has the potential to radically reduce the number of vehicles on the road by providing cheap transport, where people need it, when people need it, and at a cost people will be willing to bear. In fact, correctly costed, people may weigh the price of insurance, maintenance and outright purchase against the number of times they actually need a personal vehicle. Especially with the growing prevalence of home delivery for so many articles.
So I thought that in my response to vox_mundi I'd start a new thread where we can throw in all the stuff about self-driving which does not relate specifically to Tesla or other EVs. Of course there will still be FSD stuff for Tesla, because it is impossible to separate the glory and failure of FSD from Tesla itself: it is one of the pillars the company is relying on to catapult it into a position from which it will not fail.
To the response.
... There's every reason to think Waymo's competitors will face this same dilemma as they move toward large-scale commercial deployments. By now, a number of companies have developed self-driving cars that can handle most situations correctly most of the time. But building a car that can go millions of miles without a significant mistake is hard. And proving it is even harder.
Yes, you need tens of billions of miles of driving with evidence that the artificial driver is as good as or better than the average human. There is only one company doing this and, as it proves on a daily basis, simulation is exactly that: simulation. Not reality, and reality trumps everything.
My take on this is: "just how good is the average human anyway?" I went looking for articles, and there has been a study done on this. However, the question is much more nuanced than that.
The actual question should be: "Just how good do humans think they are at driving, and how will it be possible to compare real-world AI driving against human perception?"
Because, as the study linked below shows, people are pretty crap at judging their own capabilities: some people will be scared by the firm driving stance of an AI, and others will be frustrated by how conservative it is.
https://www.researchgate.net/publication/331424175_Safer_than_the_average_human_driver_who_is_less_safe_than_me_Examining_a_popular_safety_benchmark_for_self-driving_cars
In the abstract is a clear statement about what is needed, and there are two glaring statements in terms of perception.
Although the level of safety required before drivers will accept self-driving cars is not clear, the criterion of being safer than a human driver has become pervasive in the discourse on vehicle automation. This criterion actually means “safer than the average human driver,”
At the level of individual risk assessment, a body of research has shown that most drivers perceive themselves to be safer than the average driver
Since most drivers believe they are better than average drivers, the benchmark of achieving automation that is safer than a human driver (on average) may not represent acceptably safe performance of self-driving cars for most drivers.
And there is the problem: how do you actually categorise the average human driver, independently, without self-perception?
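There is also a purely statistical wrinkle behind "safer than average". Crash risk is not spread evenly: a small minority of very risky drivers drags the mean up, so the majority of drivers really can be safer than the mean driver. A minimal sketch, with made-up distribution parameters purely for illustration:

```python
import random
import statistics

# Illustrative assumption: per-driver crash risk follows a right-skewed
# (lognormal) distribution. The parameters here are invented, not from
# any real study -- the point is only the shape of the distribution.
random.seed(42)
risks = [random.lognormvariate(0, 1) for _ in range(100_000)]

mean_risk = statistics.fmean(risks)
median_risk = statistics.median(risks)
safer_than_mean = sum(r < mean_risk for r in risks) / len(risks)

print(f"mean risk:   {mean_risk:.2f}")
print(f"median risk: {median_risk:.2f}")
print(f"share of drivers safer than the mean driver: {safer_than_mean:.0%}")
```

Under this toy distribution roughly two-thirds of drivers are genuinely safer than the mean, before any self-enhancement bias is even considered. So "safer than the average driver" is ambiguous from the start: safer than the mean, the median, or each driver's self-image?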
There was a study done on drivers' reaction times to situations. Reaction times differed depending on whether the incident was simple or complex, and expected or unexpected.
This was real world testing done with test equipment in vehicles.
https://www.degruyter.com/document/doi/10.1515/eng-2020-0004/html
If you look at the tables, these are the average total response times: from recognising the incident, to moving the leg, to pressing the brake pedal.
Now, this was real-world testing so, to some degree, the reactions will be slower. However, I have also been tested. I was in the UK Army, in a drivers' unit; we transported tanks around Germany. ADAT came in with a testing rig: you sat in a seat behind a wheel, with your right foot on the accelerator and your left foot on the floor. You watched a set of lights and reacted when they went red.
Reaction time was measured up to the point when both the brake and clutch pedals had broken the sensors fitted to them.
If you do a bit of digging, the average brain response time to a visual stimulus is about 250ms. So those leg responses, on top of the average visual response, seem about right. 541ms as a fast average leaves just under 300ms to move the leg and brake after recognising the situation. Closer to a second, as an average, is someone who is going to struggle at over 70mph.
So my results? 210ms total response time: to recognise the lights, move both legs, and depress both clutch and brake pedals. The ADAT guy said I didn't need to worry about what was in front of me, I needed to worry about what was behind me, because they'd run into the back of me if I had to emergency stop. The unit was 650 drivers, and the unit average was roughly in line with the averages in the table above.
OK, so let's ignore me as a person, look at this across a range of drivers, and put that result into the complex/unexpected category. Results showed that complex/unexpected takes about twice as long as simple/expected, so 420ms in this case. Which is still under the lowest bracket in the table.
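To put those reaction times in context, here is a short sketch of how far a car at 70mph travels before the brakes are even touched. The reaction times are the figures quoted above; the labels are mine:

```python
# Distance covered at constant speed during the driver's reaction time,
# i.e. before braking has even begun.
MPH_TO_MS = 1609.344 / 3600  # miles per hour -> metres per second

def distance_during_reaction(speed_mph: float, reaction_s: float) -> float:
    """Metres travelled at a steady speed during the reaction time."""
    return speed_mph * MPH_TO_MS * reaction_s

# Reaction times from the discussion above.
scenarios = [
    ("tested fast driver (210ms)", 0.210),
    ("complex/unexpected estimate (420ms)", 0.420),
    ("study fast average (541ms)", 0.541),
    ("slow driver (1000ms)", 1.000),
]
for label, reaction_s in scenarios:
    metres = distance_during_reaction(70, reaction_s)
    print(f"{label:38s} {metres:5.1f} m at 70 mph")
```

At 70mph a one-second reaction time means roughly 31 metres travelled before braking even starts, versus about 7 metres for a 210ms response. That gap is the whole argument in two numbers.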
Just how do you get a "perceived" AI which is better than the "average" human driver? Especially when human drivers with response times around 1,000ms consider themselves well above "the average". Consider how a human driver with a 1,000ms response time would react to a Tesla, with response times orders of magnitude faster, driving assertively down the road: they'd be scared out of their wits. Then consider someone in the "fast" bracket, say 400ms to 500ms: they're going to be bored out of their skulls and think the AI is overcautious and wasting time.
Somehow we're going to have to get over these perceptions, or we are never going to get acceptance; and if we can't get acceptance, we'll never get regulatory approval.
Whilst I think Elon is correct about the stats, showing the number of people killed by poor human driving as opposed to AI driving, I think there is another dimension. There needs to be a very extensive study comparing people's personal perception of their driving skills with their actual performance.
I think people will get a real shock when they understand what their actual driving capability is. When you get your licence/permit to drive, you don't get a percentage capability mark; you get a pass or a fail. That automatically makes drivers assume that if they passed, they must be better than the median. Surely?
In fact, nothing could be further from the truth. Gaining your licence/permit just means you passed the absolute minimum BASE requirements to control your vehicle on the road. It says nothing about how well you will drive in stressful situations or how you will react when things go wrong.
We are then told that these people must be able to "judge" whether AI is good enough or not.