35 comments
That can't be right. 4 years ago Musk told everyone to get a Tesla and turn it into a taxi because a self-driving taxi would pay for itself in less than 12 months. And the technology was "available now".
I still can't believe he gets away with all his bullshit; to me there's no difference between him and Elizabeth Holmes when it comes to making claims to stockholders and the public that are blatant falsehoods.
False comparison. Holmes had a product that didn't work at all. Musk has a very decent electric car that doesn't yet self-drive.
And never will, full self-driving being a techno-utopian fantasy that requires strong AI.
The only difference between Musk and Holmes is therefore the degree of deception in the grift.
Self-driving certainly seems like a tough problem, but I don't see why it would require "strong AI" (presumably human-level general intelligence). Certainly, there are plenty of creatures with much smaller brains than us that can navigate perfectly well (e.g. a squirrel finding its way through a maze).
To my mind, the big problem with Tesla's AI is that they're just using video cameras and trying to identify everything visually, which, as CAPTCHAs demonstrate, is something we haven't solved with computers yet. Fitting LIDAR, although expensive, gives much better 3D information about the surroundings.
I'm still holding out for flying cars though - I'm sure we were promised that as kids.
Because driving in current conditions requires theory of mind to predict what the other idiots on the road are going to do.
To be fair, we don't seem to require that of human drivers.
I don't see why that's the case. By extrapolating current trajectories, it's easy enough to anticipate possible collisions. It's similar to how a dog can catch a ball without needing to know about air resistance and gravity. Also, that implies that all drivers are mindful, which I think we can find enough evidence to disprove.
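To make that trajectory-extrapolation idea concrete, here's a rough sketch of a constant-velocity check for whether two road users will pass within a safety margin. It's purely illustrative; the function name, numbers and units are made up and don't come from any real system.

```python
# Rough sketch: constant-velocity extrapolation of two road users' positions,
# flagging a possible collision if they pass within a safety margin.
# All names and numbers here are illustrative, not from any real vehicle.

def will_collide(pos_a, vel_a, pos_b, vel_b, horizon=3.0, margin=2.0, steps=30):
    """Return True if the extrapolated paths pass within `margin` metres
    at any point in the next `horizon` seconds."""
    for i in range(steps + 1):
        t = horizon * i / steps
        ax, ay = pos_a[0] + vel_a[0] * t, pos_a[1] + vel_a[1] * t
        bx, by = pos_b[0] + vel_b[0] * t, pos_b[1] + vel_b[1] * t
        if ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 < margin:
            return True
    return False

# Car heading along x at 10 m/s; cyclist 30 m ahead, crossing at 2 m/s.
print(will_collide((0, 0), (10, 0), (30, -5), (0, 2)))  # True
```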
I think Tesla are approaching it from entirely the wrong direction by having a fast car and then gradually adding self-driving features to it. It makes more sense to have slow moving autonomous vehicles (without any driver assistance) and learn about their failure modes before then grappling with the extra problems from travelling at speed.
It's like teaching a learner driver - you don't go full speed and then hand the controls over to a learner.
That's not how machine learning works. It's probabilistic: "Based on previous movements, if I make this manoeuvre I have Y% chance of a collision." You can't set that percentage to zero because the vehicle would then never move, so you set a level of risk you're willing to accept. The problem is that when the vehicle encounters something outside its known context it'll plough on regardless, as we saw in the death of Elaine Herzberg.
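As a toy illustration of that risk-threshold point (the threshold value and names below are entirely made up, not anyone's real planner), the logic amounts to something like this:

```python
# Toy illustration of a risk-threshold policy: score candidate manoeuvres by
# estimated collision probability and only execute ones below an accepted level.
# The threshold and names are hypothetical, not from any real vehicle.

ACCEPTED_RISK = 1e-4  # setting this to zero would mean the vehicle never moves

def choose_manoeuvre(candidates):
    """candidates: list of (name, estimated_collision_probability) pairs."""
    acceptable = [c for c in candidates if c[1] <= ACCEPTED_RISK]
    if not acceptable:
        return ("stop", 0.0)  # nothing acceptable, so don't move
    return min(acceptable, key=lambda c: c[1])

print(choose_manoeuvre([("overtake", 2e-3), ("follow", 5e-5)]))  # ('follow', 5e-05)
```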
I'm not implying that, as it's obviously untrue. However, despite all the obvious problems of having a couple of pounds of distractable meat in control of a vehicle, drivers go an average of 18 years between crashes across a complete range of conditions. That's largely because, being human, they can predict what other humans are likely to do in a way that goes beyond what machine-learning systems can do.
Agreed, and Musk has made the situation many times worse by spending several years constantly saying Teslas would have full self-driving Real Soon Now, giving Tesla owners the impression that surely Real Soon Now has arrived and they can have a snooze or watch TV.
I think you're making assumptions about how autonomous systems can be made to work. It's entirely possible to have AI systems within a framework of rules, and they can be programmed to slow down and stop if a situation strays outside known parameters (rough sketch below).
The Elaine Herzberg case shows the major problem of having systems that rely on the driver being constantly alert and ready in case the system suddenly decides that it doesn't know what's going on. I believe the safety system was also disengaged as they had problems with phantom stopping, but that's an implementation issue they had rather than a problem with all autonomous systems.
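For what it's worth, that "rules around the AI" idea amounts to something like the sketch below. Everything in it (the Scene class, the confidence threshold, the policy names) is illustrative rather than any real system's API.

```python
# Minimal sketch of a rule layer wrapped around a learned driving policy:
# if the scene falls outside known parameters, command a controlled stop
# instead of trusting the learned behaviour. Names and threshold are made up.
from dataclasses import dataclass

MIN_CONFIDENCE = 0.9  # hypothetical perception-confidence threshold

@dataclass
class Scene:
    confidence: float               # how sure perception is about what it sees
    contains_unknown_objects: bool  # anything unclassified in the path?

def plan(scene, learned_policy):
    # Rule layer: refuse to act on scenes the system doesn't understand.
    if scene.confidence < MIN_CONFIDENCE or scene.contains_unknown_objects:
        return "controlled_stop"
    return learned_policy(scene)

print(plan(Scene(0.95, False), lambda s: "continue"))  # continue
print(plan(Scene(0.60, True), lambda s: "continue"))   # controlled_stop
```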
There are level 4 autonomous taxi services up and running right now.
It's not a fantasy. It's already happening.
According to this, that's not full autonomy.
Level 4 has several meanings.
One of which is 'Full autonomy within a defined area'.
That is what the taxi services in Phoenix and San Francisco represent.
Nobody sits in the drivers seat and the passenger is never required to take over.
Within a decade I expect most major cities and all motorways will have autonomous vehicles being used routinely.
Ah, we're dealing with the cloning machine's lawyers now, are we? If the hedge is 'within a defined area' then that's still not full autonomy, is it?
If that happens it won't be because the problems have been solved; it'll be a combination of car industry lobbying/bribing politicians and laws that oblige pedestrians and cyclists to keep out of the way of snake-oil powered vehicles.
Personally if the car drives itself with no input required from its passengers then I'd say it's fully autonomous. Level 4 autonomy meets those criteria in my opinion.
Level 4 vehicles are operating right now in the US with no laws requiring pedestrians/cyclists etc. to "keep out of the way".
Level 4 autonomy will allow the automation of all inter- and intra-city transport. That's where the money is, so I don't think level 5 autonomy will ever exist, as there's simply no profit to be made in developing it.
Pedestrians are required to keep out of the way of motor vehicles in the US already, or did you forget the offence of jaywalking?
And many US jurisdictions have de facto the same requirement for cyclists, as can be seen by the way NYPD has a crackdown on cyclists every time one is killed by some gimboid in an SUV.
Just to clarify there are no *actual* laws requiring pedestrians *and* cyclists to "keep out of the way"?
Autonomous vehicles are already a reality and the level 4 vehicles have, AFAIK, operated without any fatalities or serious injuries.
Although of course driving without any headlights on wasn't an issue for the robot; it was just lucky that someone not "seeing the car" at night didn't come across it before the police did.
What part of "de facto" are you struggling with?
You don't need actual laws when the reality of law enforcement behaviour is that when cyclists get hit they are the ones punished for it.
The fact(o) that it is not de jure.
This is why I hope that some day we'll have open source autonomous driving software powering (public) vehicles. Open source has the ability to go beyond just the economic needs of the vehicle manufacturers.
Car makers should be obliged to open-source the control software for self-driving cars anyway, and offer a hefty bounty for bugs, so knowledgeable people can check it for errors.
This is an industry that's demonstrated over and over again that it can't be trusted, yet people think it'll manage to create safe autonomous vehicles. Pardon the hollow laughter.
For it (self-driving) to really work, the control software should be standard for all cars, like the ECUs are in a lot of the motorsport formulae. That makes self-driving cars more predictable and significantly reduces the complexity of the problem. Of course, that would prevent car manufacturers from differentiating their vehicles on performance. Technically it's possible to have a car incapable of exceeding a speed limit right now, but no manufacturer is implementing hard controls in that area. I'm sure it has nothing to do with the influence that VAG, Mercedes, and BMW have over TUV and EU safety institutions.
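On the hard speed limit point, such a control could be as simple as the sketch below. It's purely illustrative; the function name and the idea of a mapped limit lookup are assumptions, not any real ECU's interface.

```python
# Toy sketch of a "hard" speed limiter: the driver's throttle request is cut
# once the speed limit for the current location is reached.
# Purely illustrative; no real manufacturer's interface is being described.

def limited_throttle(requested_throttle, current_speed_kph, speed_limit_kph):
    """Pass the throttle request through below the limit, cut it at the limit."""
    if current_speed_kph >= speed_limit_kph:
        return 0.0
    return requested_throttle

print(limited_throttle(0.8, 48, 48))  # 0.0 - at the limit, no more power
print(limited_throttle(0.8, 30, 48))  # 0.8 - below the limit, request honoured
```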
I agree with the direction but just not the detail. I'm kinda skeptical because "cars 1.0" proved that the technology will come in regardless of the "side effects" or ultimate necessity. Powered by the force of human wants: power, approval and curiosity (solving problems or investigating shiny toys). Facilitated by governments, the market and profit.
However, as mdavidford has pointed out, plenty of humans drive in a manner which could be improved upon by yesterday's tech. As rich_cb says, there are "more than proof of concept" systems in existence. And as to the "hard problem", it seems that the real idiots are proving to be stupider* and the artificial idiots are getting smarter all the time.
You can have it both ways though - "for any given task probably an artificial system will beat a human at it at some point" and / or "tech will never be the same as humans because it won't be having a bad day too, suggest we sack it off and go for a ride and end up becoming a pal down the pub later".
* Or "made of tier-upon-tier of tiny stupid robots".
I can't see that politicians would have much sway over the car industry, so I can't see that happening. They're more likely to offer driving-as-a-service (that way they could even use GPL software without having to reveal their changes).
Much like internet search, I worry that one or two companies will dominate all transport in a few years.
Most current manufacturers won't survive when transport simply becomes a service and it will be left to those few companies who have developed their own software to divide the *Dr Evil voice* trillion dollar spoils.
I think JS is protesting a little too much about the area where Level 4 is used. It's no different from a UK human driver driving on the wrong side of the road in Europe, or what we see every winter when most of the driving population is unable to deal with ice and snow.
I think technically self-driving will get there, though maybe not for Tesla. Whether it will get there with a sufficiently enabling legal, regulatory and insurance framework is a different question.
The reality is that the problem is amenable to automation, just like the manual work that the industrial revolution automated. Does it need to be better than an F1 driver? No, it doesn't; it just needs to be better than a baseline human. In exactly the same way, mass-produced clothes from a loom were not better than the finest handcrafted garments. Eventually an infinitely cloneable AI is going to be cheaper to reproduce than a human you have to teach for 6 months before they can pass their test.
It's amenable to automation and the potential market is enormous.
Add to that the fact that many countries are due to see rapid falls in their working-age populations, so there will be the political will to get these systems working, and the outcome becomes inevitable IMHO.
Imagine a world with no speeding and no drunk/drugged/tired/distracted drivers.
The improvements in road safety just from that would be enormous even if the AI wasn't actually that good at driving!
It will also be interesting if they sell the service with caveats of "use of this service is at your own risk and we can't be sued if the car decides to drive into the Thames by itself". After all, who reads the Ts and Cs of most things supplied by Google, Apple, etc.? I'm pretty sure I read somewhere that current Tesla Autopilot usage is at your own risk.
Aren't they accused of turning off the autopilot system just before crashing so that they can blame the driver instead?
I hope that accusation is in tin foil hat territory?
But it does lead me onto something - is the Tesla software and its updates valid for the life of the car, or do you have to pay a separate subscription for firmware updates etc.? Imagine being on a long journey when suddenly you're told that your car has been bricked...