
Tesla investigation deepens... (Grauniad)

https://www.theguardian.com/technology/2022/jun/09/tesla-autopilot-crash...

Quote:

US federal regulators are deepening their investigation into Tesla’s Autopilot function after more than a dozen Tesla cars crashed into parked first-responder vehicles over a period of four years.

The National Highway Traffic Safety Administration (NHTSA) said on Thursday it was upgrading its preliminary investigation, which launched last August, to an "engineering analysis", a step taken before the agency determines a recall.

The investigation covers all four Tesla models – Y, X, S and 3 – representing about 830,000 vehicles sold in the US.

If you're new, please join in; if you have questions, pop them below and the forum regulars will answer as best we can.


35 comments

AlsoSomniloquism | 2 years ago
3 likes

That can't be right. Four years ago Musk told everyone to get a Tesla and turn it into a taxi, because a self-driving taxi would pay for itself in less than 12 months. And the technology was "available now".

I still can't believe he gets away with all his bullshit when, to me, there is no difference between him and Elizabeth Holmes: both made claims to stockholders and the public that are blatant falsehoods.

Secret_squirrel replied to AlsoSomniloquism | 2 years ago
2 likes

False comparison. Holmes had a product that didn't work at all. Musk has a very decent electric car that doesn't yet self-drive.

John Stevenson replied to Secret_squirrel | 2 years ago
3 likes

Secret_squirrel wrote:

False comparison. Holmes had a product that didn't work at all. Musk has a very decent electric car that doesn't yet self-drive.

And never will, full self-driving being a techno-utopian fantasy that requires strong AI.

The only difference between Musk and Holmes is therefore the degree of deception in the grift.

hawkinspeter replied to John Stevenson | 2 years ago
2 likes

John Stevenson wrote:

And never will, full self-driving being a techno-utopian fantasy that requires strong AI.

The only difference between Musk and Holmes is therefore the degree of deception in the grift.

Self-driving certainly seems like a tough problem, but I don't see why it would require "strong AI" (presumably human-level general intelligence). Certainly, there are plenty of creatures with much smaller brains than ours that can navigate perfectly well (e.g. a squirrel finding its way through a maze).

To my mind, the big problem with Tesla's AI is that they're just using video cameras and trying to identify everything visually, which, as CAPTCHAs demonstrate, is something we haven't yet solved with computers. Fitting LIDAR, although expensive, gives much better 3D information about the surroundings.

I'm still holding out for flying cars though - I'm sure we were promised that as kids.

John Stevenson replied to hawkinspeter | 2 years ago
2 likes

hawkinspeter wrote:

Self driving certainly seems like a tough problem, but I don't see why it would require "strong AI"

Because driving in current conditions requires theory of mind to predict what the other idiots on the road are going to do.

mdavidford replied to John Stevenson | 2 years ago
3 likes

John Stevenson wrote:

hawkinspeter wrote:

Self driving certainly seems like a tough problem, but I don't see why it would require "strong AI"

Because driving in current conditions requires theory of mind to predict what the other idiots on the road are going to do.

To be fair, we don't seem to require that of human drivers.

hawkinspeter replied to John Stevenson | 2 years ago
1 like

John Stevenson wrote:

Because driving in current conditions requires theory of mind to predict what the other idiots on the road are going to do.
 

I don't see why that's the case. By following current trajectories, it's easy enough to extrapolate and anticipate possible collisions. It's similar to how a dog can catch a ball without needing to know about air resistance and gravity. Also, that implies that all drivers are mindful, which I think we can find enough evidence to disprove.
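(Purely as an illustration of that extrapolation idea, here's a toy Python sketch; the function and numbers are mine, not from any real system: project two constant-velocity tracks forward and flag whether they pass dangerously close.)

```python
# Toy collision anticipation: step two constant-velocity objects forward
# in time and find their minimum separation over a short horizon.

def closest_approach(p1, v1, p2, v2, horizon=5.0, dt=0.1):
    """Minimum separation (metres) between two objects moving at
    constant velocity over the next `horizon` seconds."""
    min_dist = float("inf")
    steps = int(horizon / dt)
    for i in range(steps + 1):
        t = i * dt
        x1, y1 = p1[0] + v1[0] * t, p1[1] + v1[1] * t
        x2, y2 = p2[0] + v2[0] * t, p2[1] + v2[1] * t
        d = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
        min_dist = min(min_dist, d)
    return min_dist

# Car heading east at 10 m/s; pedestrian crossing its path from the south.
gap = closest_approach((0, 0), (10, 0), (30, -10), (0, 3))
print(f"closest approach: {gap:.1f} m")
if gap < 2.0:
    print("brake")  # small predicted gap: anticipate a collision
```

No theory of mind involved: just current positions and velocities, exactly like the dog and the ball.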

I think Tesla are approaching it from entirely the wrong direction by having a fast car and then gradually adding self-driving features to it. It makes more sense to have slow moving autonomous vehicles (without any driver assistance) and learn about their failure modes before then grappling with the extra problems from travelling at speed.

It's like teaching a learner driver - you don't go full speed and then hand the controls over to a learner.

John Stevenson replied to hawkinspeter | 2 years ago
2 likes

hawkinspeter wrote:

I don't see why that's the case. By following current trajectories, it's easy enough to extrapolate and anticipate possible collisions.

That's not how machine learning works. It's probabilistic: "Based on previous movements, if I make this manoeuvre I have a Y% chance of a collision." You can't set that percentage to zero, because the vehicle would then never move, so you set a level of risk you're willing to accept. The problem is that when the vehicle encounters something outside its training context it'll plough on regardless, as we saw in the death of Elaine Herzberg.
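(To make that concrete, here's a minimal sketch of the decision rule being described; the names, probabilities and threshold are mine, purely illustrative.)

```python
# Risk-threshold manoeuvre selection: each candidate manoeuvre comes with
# an estimated collision probability; only those under the accepted risk
# level are allowed. A threshold of zero would mean the car never moves.

RISK_THRESHOLD = 0.01  # accept at most a 1% estimated collision chance

def choose_manoeuvre(candidates):
    """candidates: dict of manoeuvre name -> estimated collision
    probability. Return the least risky acceptable option, else stop."""
    safe = {m: p for m, p in candidates.items() if p <= RISK_THRESHOLD}
    if not safe:
        return "stop"  # nothing acceptable: don't move
    return min(safe, key=safe.get)

print(choose_manoeuvre({"overtake": 0.04, "follow": 0.002, "change_lane": 0.008}))
# -> follow
```

The out-of-context failure mode is the estimates themselves: if the model has never seen a situation, its Y% can be confidently wrong.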

Also, that implies that all drivers are mindful, which I think we can find enough evidence to disprove.

I'm not implying that, as it's obviously untrue. However, despite all the obvious problems of having a couple of pounds of distractable meat in control of a vehicle, drivers go an average of 18 years between crashes in a complete range of conditions. That's largely because, being human, they can predict what other humans are likely to do in a way that goes beyond what machine-learning systems can do.

I think Tesla are approaching it from entirely the wrong direction by having a fast car and then gradually adding self-driving features to it. It makes more sense to have slow moving autonomous vehicles (without any driver assistance) and learn about their failure modes before then grappling with the extra problems from travelling at speed.

Agreed, and Musk has made the situation many times worse by saying for years that Teslas would have full self-driving Real Soon Now, giving Tesla owners the impression that surely Real Soon Now has arrived and they can have a snooze or watch TV.

hawkinspeter replied to John Stevenson | 2 years ago
1 like

I think you're making assumptions about how autonomous systems can be made to work. It's entirely possible to have AI systems within a framework of rules, programmed to slow down and stop if a situation strays outside known parameters.
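(A minimal sketch of that "framework of rules" idea, assuming a perception confidence score is available; everything here is my own illustration, not real AV code.)

```python
# Guard wrapper around the planner: whenever scene confidence falls
# outside known-good parameters, clamp the commanded speed to a stop
# instead of ploughing on regardless.

MIN_CONFIDENCE = 0.9  # below this, the situation is "outside parameters"

def guarded_speed(requested_speed, scene_confidence):
    """Return the speed the vehicle is allowed: the planner's request
    when perception is confident, a controlled stop otherwise."""
    if scene_confidence < MIN_CONFIDENCE:
        return 0.0  # unknown situation: slow to a stop
    return requested_speed

print(guarded_speed(13.0, 0.97))  # confident scene: proceed at 13.0 m/s
print(guarded_speed(13.0, 0.55))  # uncertain scene: 0.0
```

The hard engineering problem is tuning the guard so it trips on genuinely unknown situations without constant phantom stops.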

The Elaine Herzberg case shows the major problem with systems that rely on the driver being constantly alert and ready in case the system suddenly decides it doesn't know what's going on. I believe the safety system was also disengaged, as they had problems with phantom braking, but that's an implementation issue they had rather than a problem with all autonomous systems.

Rich_cb replied to John Stevenson | 2 years ago
2 likes

There are level 4 autonomous taxi services up and running right now.

It's not a fantasy. It's already happening.

John Stevenson replied to Rich_cb | 2 years ago
0 likes

According to this, that's not full autonomy:

Level 4 (High Driving Automation)

The key difference between Level 3 and Level 4 automation is that Level 4 vehicles can intervene if things go wrong or there is a system failure. In this sense, these cars do not require human interaction in most circumstances. However, a human still has the option to manually override.

Rich_cb replied to John Stevenson | 2 years ago
1 like

Level 4 has several meanings.

One of which is 'Full autonomy within a defined area'.

That is what the taxi services in Phoenix and San Francisco represent.

Nobody sits in the driver's seat and the passenger is never required to take over.

Within a decade I expect most major cities and all motorways will have autonomous vehicles being used routinely.

John Stevenson replied to Rich_cb | 2 years ago
0 likes

Rich_cb wrote:

Level 4 has several meanings. One of which is 'Full autonomy within a defined area'.

Ah, we're dealing with the cloning machine's lawyers now, are we? If the hedge is 'within a defined area' then that's still not full autonomy, is it?

Within a decade I expect most major cities and all motorways will have autonomous vehicles being used routinely.

If that happens it won't be because the problems have been solved; it'll be a combination of the car industry lobbying/bribing politicians and laws that oblige pedestrians and cyclists to keep out of the way of snake-oil-powered vehicles.

Rich_cb replied to John Stevenson | 2 years ago
1 like

Personally, if the car drives itself with no input required from its passengers, then I'd say it's fully autonomous. Level 4 autonomy meets that criterion, in my opinion.

Level 4 vehicles are operating right now in the US with no laws requiring pedestrians/cyclists etc. to "keep out of the way".

Level 4 autonomy will allow the automation of all inter- and intra-city transport. That's where the money is, so I don't think level 5 autonomy will ever exist, as there's simply no profit to be made in developing it.

John Stevenson replied to Rich_cb | 2 years ago
2 likes

Rich_cb wrote:

Level 4 vehicles operating right now in the US with no laws requiring pedestrians/cyclists etc to "keep out of the way".

Pedestrians are required to keep out of the way of motor vehicles in the US already, or did you forget the offence of jaywalking?

And many US jurisdictions have de facto the same requirement for cyclists, as can be seen from the way the NYPD cracks down on cyclists every time one is killed by some gimboid in an SUV.

Rich_cb replied to John Stevenson | 2 years ago
0 likes

Just to clarify: there are no *actual* laws requiring pedestrians *and* cyclists to "keep out of the way"?

Autonomous vehicles are already a reality and the level 4 vehicles have, AFAIK, operated without any fatalities or serious injuries.

AlsoSomniloquism replied to Rich_cb | 2 years ago
0 likes

Although of course driving without any headlights on wasn't an issue for the robot; it was just lucky that someone not "seeing the car" at night didn't come across it before the police did.

John Stevenson replied to Rich_cb | 2 years ago
0 likes

Rich_cb wrote:

Just to clarify there are no *actual* laws requiring pedestrians *and* cyclists to "keep out of the way"? Autonomous vehicles are already a reality and the level 4 vehicles have, AFAIK, operated without any fatalities or serious injuries.

What part of "de facto" are you struggling with?

You don't need actual laws when the reality of the behaviour of law enforcement is that when cyclists get hit they are punished for it.

Rich_cb replied to John Stevenson | 2 years ago
0 likes

The fact(o) that it is not de jure.

hawkinspeter replied to Rich_cb | 2 years ago
0 likes

Rich_cb wrote:

That's where the money is so I don't think level 5 autonomy will ever exist as there's simply no profit to be made in developing it.

This is why I hope that some day we'll have open source autonomous driving software powering (public) vehicles. Open source has the ability to go beyond just the economic needs of the vehicle manufacturers.

John Stevenson replied to hawkinspeter | 2 years ago
1 like

hawkinspeter wrote:

This is why I hope that some day we'll have open source autonomous driving software powering (public) vehicles. Open source has the ability to go beyond just the economic needs of the vehicle manufacturers.

Car makers should be obliged to open-source the control software for self-driving cars anyway, and offer a hefty bounty for bugs, so knowledgeable people can check it for errors.

This is an industry that's demonstrated over and over again that it can't be trusted, yet people think it'll manage to create safe autonomous vehicles. Pardon the hollow laughter.

kil0ran replied to John Stevenson | 2 years ago
0 likes

John Stevenson wrote:

hawkinspeter wrote:

This is why I hope that some day we'll have open source autonomous driving software powering (public) vehicles. Open source has the ability to go beyond just the economic needs of the vehicle manufacturers.

Car makers should be obliged to open-source the control software for self-driving cars anyway, and offer a hefty bounty for bugs, so knowledgeable people can check it for errors.

This is an industry that's demonstrated over and over again that it can't be trusted, yet people think it'll manage to create safe autonomous vehicles. Pardon the hollow laughter.

For it (self-driving) to really work, the control software should be standard across all cars, like the spec ECUs in a lot of motorsport formulae. That would make self-driving cars more predictable and significantly reduce the complexity of the problem. Of course, it would also prevent car manufacturers from differentiating their vehicles on performance. It's technically possible right now to build a car incapable of exceeding a speed limit, but no manufacturer is implementing hard controls in that area. I'm sure that has nothing to do with the influence VAG, Mercedes and BMW have over TÜV and the EU safety institutions.

chrisonabike replied to John Stevenson | 2 years ago
0 likes

I agree with the direction, just not the detail. I'm kinda skeptical because "cars 1.0" proved that the technology will come in regardless of the "side effects" or ultimate necessity. Powered by the force of human wants: power, approval and curiosity (solving problems or investigating shiny toys). Facilitated by governments, the market and profit.

However as mdavidford has pointed out plenty of humans drive in a manner which could be improved upon by yesterday's tech.  As rich_cb says - there are "more than proof of concept" systems in existence.  And as to the "hard problem" it seems that the real idiots are proving to be stupider* and the artificial idiots are getting smarter all the time.

You can have it both ways though - "for any given task probably an artificial system will beat a human at it at some point" and / or "tech will never be the same as humans because it won't be having a bad day too, suggest we sack it off and go for a ride and end up becoming a pal down the pub later".

* Or "made of tier-upon-tier of tiny stupid robots".

hawkinspeter replied to John Stevenson | 2 years ago
0 likes

John Stevenson wrote:

Car makers should be obliged to open-source the control software for self-driving cars anyway, and offer a hefty bounty for bugs, so knowledgeable people can check it for errors.

This is an industry that's demonstrated over and over again that it can't be trusted, yet people think it'll manage to create safe autonomous vehicles. Pardon the hollow laughter.

I can't see politicians having much sway over the car industry, so I can't see that happening. They're more likely to offer driving-as-a-service (that way they could even use GPL software without having to reveal their changes).

Rich_cb replied to hawkinspeter | 2 years ago
0 likes

Much like internet search I worry that one or two companies will dominate all transport in a few years.

Most current manufacturers won't survive once transport simply becomes a service, and it will be left to those few companies who have developed their own software to divide up the *Dr Evil voice* trillion-dollar spoils.

Secret_squirrel replied to Rich_cb | 2 years ago
0 likes

I think JS is protesting a little too much about Level 4 being limited to a defined area. It's no different from a UK driver having to drive on the other side of the road in Europe, or from what we see every winter, when most of the driving population is unable to deal with ice and snow.

I think self-driving will technically get there, though maybe not for Tesla. Whether it will get there with a sufficiently enabling legal, regulatory and insurance framework is a different question.
The reality is that the problem is amenable to automation, just like the tasks of the industrial revolution were. Does it need to be better than an F1 driver? No, it just needs to beat a baseline human, in exactly the same way that mass-produced clothes from a loom were not better than the finest handcrafted garments. Eventually an infinitely cloneable AI is going to be cheaper to reproduce than a human you have to teach for six months before they can pass their test.

Rich_cb replied to Secret_squirrel | 2 years ago
1 like

It's amenable to automation and the potential market is enormous.

Add to that the fact that many countries are due to see rapid falls in their working-age populations, so there will be the political will to get these systems working, and the outcome becomes inevitable IMHO.

Imagine a world with no speeding and no drunk/drugged/tired/distracted drivers.

The improvements in road safety just from that would be enormous even if the AI wasn't actually that good at driving!

AlsoSomniloquism replied to Rich_cb | 2 years ago
2 likes

It will also be interesting to see if they sell the service with caveats like "use of this service is at your own risk and we can't be sued if the car decides to drive into the Thames by itself". After all, who reads the Ts and Cs of most things supplied by Google, Apple, etc.? I'm pretty sure I read somewhere that current Tesla Autopilot usage is at your own risk.

hawkinspeter replied to AlsoSomniloquism | 2 years ago
2 likes

AlsoSomniloquism wrote:

It will also be interesting to see if they sell the service with caveats like "use of this service is at your own risk and we can't be sued if the car decides to drive into the Thames by itself". After all, who reads the Ts and Cs of most things supplied by Google, Apple, etc.? I'm pretty sure I read somewhere that current Tesla Autopilot usage is at your own risk.

Aren't they accused of turning off the Autopilot system just before a crash so that they can blame the driver instead?

brooksby replied to hawkinspeter | 2 years ago
0 likes

hawkinspeter wrote:

AlsoSomniloquism wrote:

It will also be interesting to see if they sell the service with caveats like "use of this service is at your own risk and we can't be sued if the car decides to drive into the Thames by itself". After all, who reads the Ts and Cs of most things supplied by Google, Apple, etc.? I'm pretty sure I read somewhere that current Tesla Autopilot usage is at your own risk.

Aren't they accused of turning off the Autopilot system just before a crash so that they can blame the driver instead?

I hope that accusation is in tin foil hat territory?

But it does lead me onto something: is the Tesla software and its updates valid for the life of the car, or do you have to pay a separate subscription for firmware updates etc.? Imagine being on a long journey when suddenly you're told that your car has been bricked...
