A cyclist in San Francisco was reportedly injured when a driverless taxi belonging to a company affiliated with Google crashed into the rider after failing to detect their presence.
According to the San Francisco Police Department (SFPD), the collision, involving a Waymo ‘robocab’, happened in the city’s Potrero Hill district at around 3pm on Tuesday, reports SFGate.com.
Officers attended the scene after being alerted to the crash by Waymo staff and discovered the cyclist with what were described as “non-life threatening injuries.”
The SFPD, which is investigating the incident, did not disclose details of how the collision came about, although Waymo did provide its own version of events.
The crash happened at the intersection of 17th Street and Mississippi Street, with Waymo saying that the vehicle had come to a complete halt before entering the junction, just as a large lorry was approaching in the opposite direction.
According to the company, “the cyclist was occluded by the truck and quickly followed behind it, crossing into the Waymo vehicle’s path. When they became fully visible, our vehicle applied heavy braking but was not able to avoid the collision.”
Waymo said that the cyclist sustained “only minor scratches” and that the car’s passenger was not hurt.
Until 2016, Waymo was known as the Google Self-Driving Car Project, assuming its current name following a group-level reorganisation that resulted in the creation of holding company Alphabet, Inc, the world’s third-largest tech firm by revenue.
In November 2019, Waymo secured permission from the California Department of Motor Vehicles for its vehicles to carry passengers without a safety driver on board to intervene in the case of a potential collision, making it the first company worldwide to secure such clearance.
In a comprehensive blog post published on its website in 2021, the company outlined how it used simulations and data gathered in real-life testing to try and ensure that its vehicles could share the road safely with people on bikes, acknowledging that “cyclists in particular pose a number of unique considerations for autonomous driving technology.”
The blog post was accompanied by the video below, which the company says shows one of its vehicles – referred to as the “Waymo Driver” here – “autonomously driving alongside cyclists on The Wiggle, the mile-long, zig-zagging bicycle route from SOMA to Golden Gate Park. The Waymo Driver is in its proper driving lane, but has a bike lane with bikers ahead on its left as well as on the right.
“Although the Waymo Driver properly slows for both pairs of bikers, giving them ample space, with simulation, we can test various what-if situations such as ‘what if we added an additional cyclist in the scene’ or ‘what if we changed the cyclists to travel x times faster’,” the company added.
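Waymo has not published the internals of its simulation tooling, but the kind of what-if variation it describes can be illustrated with a minimal sketch: a base scenario is copied and perturbed, for example by adding an extra cyclist or scaling cyclist speeds, and each variant is then replayed against the driving software. The scenario classes, field names and values below are hypothetical, purely for illustration.

```python
# Hypothetical illustration only - this is not Waymo's API, just a sketch of
# "what-if" scenario variation as described in the company's blog post.
from dataclasses import dataclass, replace
from typing import List

@dataclass
class Cyclist:
    position_m: float   # distance ahead of the vehicle along the road, metres
    speed_mps: float    # cyclist speed, metres per second

@dataclass
class Scenario:
    cyclists: List[Cyclist]

def vary_scenario(base: Scenario) -> List[Scenario]:
    """Generate what-if variants of a base scenario."""
    variants = []
    # "What if we added an additional cyclist in the scene?"
    variants.append(Scenario(cyclists=base.cyclists + [Cyclist(position_m=15.0, speed_mps=5.0)]))
    # "What if we changed the cyclists to travel x times faster?"
    for factor in (1.5, 2.0):
        variants.append(Scenario(
            cyclists=[replace(c, speed_mps=c.speed_mps * factor) for c in base.cyclists]
        ))
    return variants

if __name__ == "__main__":
    base = Scenario(cyclists=[Cyclist(position_m=20.0, speed_mps=4.0)])
    for i, variant in enumerate(vary_scenario(base), start=1):
        # each variant would be replayed against the driving software in simulation
        print(f"variant {i}: {variant}")
```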
Last year, we reported how academics at the University of Glasgow believe self-driving cars “need to learn the language of cyclists”, so such vehicles can operate safely around people on bikes.
> Researchers suggest cyclists could wear smart glasses to communicate with self-driving cars — automated vehicles “need to learn the language of cyclists”
In 2017, Stanford University robotics researcher Heather Knight said that Tesla’s Autopilot feature should “never” be used around cyclists, warning that if it were deployed in such situations, riders would be killed due to what she believed was the system’s inability to adequately detect people on bikes.
> Never use Tesla Autopilot feature around cyclists, warns robotics expert
20 comments
Driverless vehicle technology still has many flaws. It's going to take time before it works properly. The automotive industry could've been further ahead with the technology if it'd taken note of what was being done in the mining industry with regard to driverless vehicles.
Whilst it is going to take some time to reach the point where people choose driverless cars (to the extent that they don't own a car, but call a "robocab"), I feel that the end result can only be beneficial.
If a human driver makes a mistake, and learns from it, it's not likely that many (if any) other people will also learn from the experience.
Driverless cars, on the other hand, will benefit from the hive mind of regular updates, and drawing from the same knowledge base.
The other difference is that driverless cars don't have the same selfish motives as people, so are far less likely to get themselves into dangerous situations.
It's definitely a tricky subject, though, as it's effectively beta testing with people's lives.
I would love to know what tech you have that fills you with so much confidence in software update support that 1) updates will be regular and continue for more than the first year after product release, and 2) if the updates do come out, they won't reduce performance for older models or just brick someone's car outright?
No, they are motivated by the cold instruction sets they have been programmed with, subject to the biases of the programmers and whatever influence their management lays down in the name of "program pressure". I don't want to be at the mercy of a minimum viable product because shareholders wanted the latest model of vehicle shipped.
Indeed - different times now, but with recent news you could imagine something that makes the Horizon scandal look quaint and cuddly, with the injured or their relatives being sued for damage to people's cars because "it is impossible for our vehicles to make a mistake".
You'd probably just reach a chatbot if you complained, and it would emit some emoji in sympathy and come up with "We hear your pain and we stand with you as you move forward from this! Thank you for being a part of improving road safety for everyone! This will now never happen again as that was yesterday evening and there have been several rounds of updates since then. Plus we've swapped the AI VP for a better model so there's no need (or possibility) to approach the courts. So proud you've chosen not to be a victim!"
Probably it'd be like chess software: on average they'll drive (much) better than most people, but there may be a few situations where a few humans would be better. Plus they won't handle cases outside the model, e.g. they're unlikely to stop if they see someone in distress and render assistance / do CPR, or spot the neighbour's missing cat in passing and retrieve it...
For me the best solution has always been one that's relatively simple and low tech (but everywhere) and that works *with* our human deficiencies. Although of course this approach has its own major deficiency - it's poor at concentrating large sums of money in the hands of a few (a major motivation).
Don't you love it when people have been so brainwashed by the tech bros, that they actually spout the advertising drivel verbatim...
I bet you have disc brakes and electronic shifting on your bike...
"crossing into the Waymo vehicle’s path"
or did the Waymo vehicle cross into the cyclist's path?
The thing is, a minimum viable product is probably still better than the average human driver. However, a lot of the objectively bad drivers who treat cars as toys and roads as playgrounds are not going to be swayed into driverless cabs.
Yes, that's an interesting way of writing "Waymo decided the road was clear after the HGV and proceeded. The road the Waymo was crossing was not, in fact, clear" (Morgan Freeman narrator voice)
How did you work that out?
Umm... have you met the 'average human driver'?
Yes. They can be irrational and capable of unfathomable stupidity in extreme circumstances, but there are so many human drivers putting in so many miles each, the average comes out surprisingly good.
Allow me to direct you to the article @andystow shared below https://pluralistic.net/2022/10/09/herbies-revenge/#100-billion-here-100-billion-there-pretty-soon-youre-talking-real-money
It's going to be interesting to see what happens with driverless cars in the next decade because, as far as I understand it, they are already far safer than those with a driver at the wheel. The problem is that people don't pay attention now, and if they are in a self-driving car and should be ready to take over in emergencies, that will rarely happen. They will be on their phones etc. People also like to think they are excellent drivers and that AI will never be better than them, so when an accident happens with self-driving cars they go overboard with criticism.
I wonder how this will play out with the twats that close pass cyclists on purpose. If they take control to intentionally pull a dangerous manoeuvre they should be banned from using a car for a long time.
For interactions with other motor vehicles, I don't doubt it, but the systems have proven significantly less reliable when needing to account for vulnerable road users. I don't know the data, but at a guess I'd put them as about as capable as a 30th-percentile driver at avoiding collisions with pedestrians and cyclists.
I'll own up to being an autonomous vehicle sceptic, but I don't think removing the variability of competency of the vehicle operator should be a higher priority than reducing the number of potential interactions between vehicles and vulnerable road users. Especially as the former is more profitable, whilst the right thing to do is more valuable.
That's just what they want us to think.
https://pluralistic.net/2022/10/09/herbies-revenge/#100-billion-here-100-billion-there-pretty-soon-youre-talking-real-money
That was an interesting read, thanks!
I don't believe we'll ever get to level 5 self-driving. As Tesla has shown with its "full self driving" beta program, once the car appears to be able to self-drive for short distances, the human driver tends to start doing something else and not pay attention as they should. This will lead to a series of incidents and a public outcry to ban the technology. To get it adopted they will have to go straight to level 5 and be perfect from day 1 - it's a big ask!
Level 5 would be brilliant.
I suspect (at the top end, and depending on the definitions used) level 4 would be enough for most people, though it's still debatable whether they will reach it.
(AIUI level 5 means it can drive in all conditions;
a competent level 4 should be able to give up and stop safely before it hits conditions it can't cope with - or simply refuse to start because it already knows there is an extreme weather warning to shelter in place...)
Problem is of course that:
1. The step from level 2 (driver aid) to 'full' level 4 (automation in most conditions and can safely stop when it can't cope with no manual controls at all) is massive
2. Drivers won't accept level 4 automation (because clearly it isn't acceptable for the car to tell you and enforce that conditions are too dangerous to drive in.)
You can have level 5 today, it's called a taxi.
You need to use taxis more often 😉
Someone needs to tell Waymo how very rare it is for the passenger or driver in a car to be injured when they drive their car into a cyclist…
But the shock and trauma that they'll have been through though...
[sarcasm]