# Driving test for driverless cars?



## John-H (Jul 13, 2005)

"Driverless vehicles may seem unfamiliar now, but over the coming years you'll start to encounter - or even use them - on a daily basis. Will it mean the end of the driving licence and changes to the rules of the road?"

http://www.bbc.co.uk/news/technology-40570592

Who or what will be blamed in an accident?
What happens if an emergency vehicle wants to get past?
Will rules be broken to maximise safety? Whose safety?
What if a pedestrian steps out? Swerve and risk the passengers or hit the pedestrian?


----------



## Stiff (Jun 15, 2015)

Ok, my two penneth worth...



John-H said:


> Who or what will be blamed in an accident?


The same as usual, whoever is at fault. 
Driverless car v 'driven' car: the onus will be on the driver, as it will be presumed to be human error (unless the driver can provide evidence to prove otherwise).
Driverless v driverless: I imagine it would be a rarity but, nevertheless, data could be retrieved to show which 'car' was at fault (quite possibly neither, if a third party was involved - i.e. a pedestrian, falling branch, sudden debris, etc.)



John-H said:


> What happens if an emergency vehicle wants to get past?


Exactly the same as a driven vehicle - it follows the Highway Code's rules for emergency vehicles (this could actually work out better, as most people become confused, panic and go in all sorts of directions). Eventually, emergency vehicles will become autonomous themselves and communicate with the surrounding vehicles to make a safe passage through.



John-H said:


> Will rules be broken to maximise safety? Whose safety?
> What if a pedestrian steps out? Swerve and risk the passengers or hit the pedestrian?


The last two are trickier and it's a much greyer area. This all boils down to morality (on the part of the car makers/programmers), but I would imagine that one of the selling points would be that the car protects its occupants above all else. The recognition software would hopefully be able to differentiate between, say, a dog or cat and a child walking out in front of the car. A child walks out - the car swerves in the quickest, safest direction. A cat or dog - it doesn't.
The problem occurs when there is nowhere safe to swerve to and a worse outcome is possible if it did. :?

I think the law should allow them to occasionally nudge cyclists off their bikes when they get too frustrating, though.


----------



## John-H (Jul 13, 2005)

You've hit upon some interesting scenarios where choices need to be made between two risky outcomes, and even called it a "moral" choice - and yes, it relies on the programmers (a) to have thought of that possibility in the first place and (b) to programme in that high-level decision!

I think exactly those problems/decisions apply to the first two things I mentioned and many more.

Probably most accidents involve running into the back of someone, which you'd hope would not happen with a driverless car, unless it's on ice or oil etc., or someone pulls or steps out into its path.

When something goes wrong, normally the rules go out of the window and a human copes by the seat of their pants. Can you programme that? Reliably? How do you test all eventualities and approve the software?

Your tricky "moral" choice can be involved in many accidents. Nearly every day I come across someone doing something stupid - like jumping a red light, wandering from one lane to another, etc. That's a breaking of rules/hazard situation - just like a police car or ambulance doing the same but for a good reason. How does a programme discriminate? It could pick up a transponder signal from an emergency vehicle only and sod everyone else being pushy, or does it treat every vehicle like it's an emergency vehicle?

Often drivers bump up onto pavements, cross a white line or exceed the speed limit momentarily to get out of the way. Will a driverless car do the same? How does it judge the height of the kerb? Or does it become an obstruction, like the worst panicking human?

Imagine it's overtaking a slow-moving vehicle or other obstruction, when suddenly a nutter comes round the bend doing 90 mph. Most people would put their foot down to complete the manoeuvre safely and get out of the way - but would the driverless vehicle stick to the speed limit? Yes, OK, we know whose fault the accident technically is, but have we just caused the death of someone because of inflexible software?

You mentioned a child crossing the road not being the same as an animal. A dog runs out - how can the programme distinguish between a dog and a small child on a bike? Does the driverless car swerve or slam on the brakes for an animal and cause someone to crash into the rear (not allowed under the highway code) - or the unthinkable? This puts a huge responsibility on the programmer or for the rules to change.
So whose fault is it/will it be?


----------



## Stiff (Jun 15, 2015)

Current AI can differentiate very well between a dog, a cat, a child and an apple, so this could be built into the recognition system - even in infrared, like the current S-Class, maybe(?).

Self-driving cars would have a 360-degree overview and be able to communicate and coordinate with all other vehicles (not sure exactly over what range, though), and the reaction time of a computer would make it safer for a car to follow 3' off the bumper of a lead car than a person would be at many car lengths. They never get distracted, fall asleep, or play with the radio or a mobile phone, and they react to unexpected things orders of magnitude faster than people. This would allow some pretty wild avoidance manoeuvres that people could never do. 
After dodging a person, it would then avoid striking a solid object, and then a softer one. By stacking the priorities it would make everyone safer.

The smart thing to do is to set up a system where all the cars on a given road or in a city are connected. If one car stops, it sends a signal to all the cars behind it to stop as well. There must be some program to which all the cars are synchronised that can calculate every car's position, stopping them and making them go at the right times to avoid many, if not all, collisions.

The thing is that all cars would have to be universally networked to react the same way for this to work efficiently.
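To sketch what that universal networking might look like (purely a toy illustration of the stop-signal idea - class and method names are my own invention, and real vehicle-to-vehicle protocols are far more involved):

```python
# Toy sketch of the networked-stop idea: when one car brakes, the signal
# propagates to every car behind it, so the whole line slows together
# instead of each driver reacting one by one.

class Car:
    def __init__(self, name):
        self.name = name
        self.braking = False

class ConvoyNetwork:
    def __init__(self, cars):
        self.cars = cars  # ordered front to back

    def emergency_stop(self, index):
        """Car at `index` stops; every car behind it is told to stop too."""
        for car in self.cars[index:]:
            car.braking = True

convoy = ConvoyNetwork([Car("lead"), Car("middle"), Car("rear")])
convoy.emergency_stop(0)  # lead car brakes hard
print([car.braking for car in convoy.cars])  # → [True, True, True]
```

The point being that there's no human reaction lag anywhere in the chain - the rear car starts braking in the same instant as the lead car.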

Or perhaps the customer will be able to buy an "ethics" package that prioritizes according to their own beliefs. Within legality of course. Companies will sell it because it shifts liability from themselves to the car purchaser. (Possibly)


----------



## Stiff (Jun 15, 2015)

Or maybe we could have Asimov's 'Three Laws of Robotics' apply.

1) No car should harm its passengers, or - through inaction - allow its passengers to come to harm.

2) A car should obey the instructions of its passengers, providing this does not conflict with the First Law.

3) A car should protect its paintwork, especially white TT's, providing this does not conflict with the First or Second laws.


----------



## Roller Skate (May 18, 2015)

Stiff said:


> Or maybe we could have Asimov's 'Three Laws of Robotics' apply.
> 
> 1) No car should harm its passengers, or - through inaction - allow its passengers to come to harm.
> 
> ...


Probably won't work out well for Will Smith.


----------



## Stiff (Jun 15, 2015)

Roller Skate said:


> Probably won't work out well for Will Smith.


----------



## ZephyR2 (Feb 20, 2013)

Stiff said:


> Currently AI can very well differentiate between a dog, cat, child and apple so this could be configured to the recognition system, even in infrared like the current S-Class maybe(?).


Maybe, but apparently they have a lot of difficulty with kangaroos. It's all to do with their hopping motion, where they are on the ground at one point and then in mid-air. To AI cars they seem to disappear when in mid-hop and then reappear somewhere else.


----------



## Stiff (Jun 15, 2015)

ZephyR2 said:


> Stiff said:
> 
> 
> > Currently AI can very well differentiate between a dog, cat, child and apple so this could be configured to the recognition system, even in infrared like the current S-Class maybe(?).
> ...


We'll come to Australia last then :lol:


----------



## Roller Skate (May 18, 2015)

That'll be the end of this kind of thing then.


----------



## Stiff (Jun 15, 2015)

^ She's toast ^


----------



## Roller Skate (May 18, 2015)

Stiff said:


> ^ She's toast ^


Exactly ... with strawberry jam.

Right, Traffic Cops is on, or as we call it round here "Local Drivers".


----------



## John-H (Jul 13, 2005)

Well I'm unsure how good the AI built into driverless cars is.

Would a driverless car detect whether I had my foot on a zebra crossing with right of way, or was just waiting with no right of way?

What about two people dressed in a pantomime horse costume? I can't see there being a pantomime horse subroutine :roll:

I really don't trust programmers to be able to programme in a lifetime of common sense. They will more likely have to rely on pre-conceived pattern recognition and simple sensor rules.

If all vehicles operated the same it would all be predictable but the danger is the diversity of vehicles, drivers and situations.


----------



## Spandex (Feb 20, 2009)

John-H said:


> I really don't trust programmers to be able to programme in a lifetime of common sense.


You say that, but then one example you gave was speeding up mid overtake in order to complete the move when you see an oncoming car. The most sensible choice is *always* to brake and pull back in behind - speeding up is never safer.

Often 'a lifetime of common sense' actually translates to 'a lifetime of getting away with bad choices'. Every time you get away with taking a risk, it reinforces that it's acceptable. In the end, you think you know better because it's always worked out ok for you in the past. A computer can be programmed to make a split second decision on what is the lowest risk course of action and make the right choice every time.

Realistically, the decisions are actually pretty simple - it's just a list of priorities. Obviously you would programme it to prioritise more vulnerable targets like pedestrians over the occupants of a car, who are surrounded by airbags. Things like giving clearance to emergency vehicles are easy too - you just tweak the priorities. Staying between the lines at low speed drops down the priority list, pedestrians stay at the top, an emergency vehicle takes priority over any other vehicle, etc.

Same with laws. They sit in the priority list - obviously near the top (you shouldn't break the speed limit to get out of the way of an emergency vehicle, and you shouldn't do it to overtake either), but you'd expect the car to break certain laws if it meant avoiding a collision (crossing a solid white line to avoid a pedestrian stepping out, for example).
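If you sketched that priority list in code it might look something like this (the rankings and hazard names are just my illustration, not any manufacturer's real policy):

```python
# Minimal sketch of the "list of priorities" idea: hazards are ranked,
# and the car protects the highest-ranked (lowest number) one first.

PRIORITY = {
    "pedestrian": 0,         # most vulnerable - always at the top
    "cyclist": 1,
    "emergency_vehicle": 2,  # takes priority over ordinary traffic
    "other_vehicle": 3,
    "lane_discipline": 4,    # staying between the lines can drop in an emergency
    "speed_limit": 5,
}

def most_urgent(hazards):
    """Return the hazard the car must deal with first (lowest rank wins)."""
    return min(hazards, key=lambda h: PRIORITY[h])

# A pedestrian steps out just as an emergency vehicle approaches:
print(most_urgent(["emergency_vehicle", "pedestrian"]))  # → pedestrian
```

Tweaking behaviour for an ambulance, on this view, really is just moving entries up and down the table.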


----------



## John-H (Jul 13, 2005)

Spandex said:


> John-H said:
> 
> 
> > I really don't trust programmers to be able to programme in a lifetime of common sense.
> ...


I disagree with your "always" absolute. "Usually" may be more appropriate, as I can easily imagine a scenario:

You are in a long line of slow moving bunched up traffic with two vehicles in front - the first being the cause and the second content to sit there. You get to a straight, do your checks and indicate to overtake, you accelerate slightly up to the speed limit to minimise the time overtaking. There's time even if someone comes round the bend ahead.

You are half way past but a nutter comes round the bend at twice the speed limit being chased by a police car - your calculations are screwed. Do you:

(1) Put your foot down to multiply the overtaking speed differential and quickly pull back in safely in front of the lead vehicle, whose driver has even started to brake for you, having also seen the nutter coming.
(2) Brake sharply to pull back in behind the vehicle you just overtook, but the gap has now closed and there's no room.
(3) Brake even harder, hoping there may be another gap further back along the crocodile of bunched-up cars, but your differential speed now requires a larger gap to pull safely into, or requires others to take evasive action.



Spandex said:


> Often 'a lifetime of common sense' actually translates to 'a lifetime of getting away with bad choices'. Every time you get away with taking a risk, it reinforces that it's acceptable. In the end, you think you know better because it's always worked out ok for you in the past. A computer can be programmed to make a split second decision on what is the lowest risk course of action and make the right choice every time.


You are misinterpreting my use of the phrase "a lifetime of common sense". I was in fact referring to the pantomime horse - I used it as an example because I question what AI would make of it, whereas a human, from a lifetime of experience, would know it was two jokers dressed as a horse and not an animal.

I once saw someone dressed as a snail crawl slowly across a zebra crossing on their belly as a joke. He waited until the traffic stopped and the drivers waited patiently out of surprise or amusement. The police had a word.

I therefore wouldn't be so sure about AI's ability to deal with such things correctly.



Spandex said:


> Realistically, the decisions are actually pretty simple - It's just a list of priorities.


What about priorities that were not considered to be included on the list?



Spandex said:


> Obviously you would programme it to prioritise more vulnerable targets like pedestrians over occupants of a car, surrounded with airbags. Things like giving clearance to emergency vehicles is easy too - You just tweak the priorities. Staying between the lines at low speed drops down the priority list, pedestrians stay at the top, emergency vehicle takes priority over any other vehicle, etc.


Will the pantomime horse be on this list? What about the snail or other fun costumes? Will they be placed in the lower-priority animal group?

How do you recognise an emergency vehicle? A transponder signal? What about a commandeered vehicle flashing its lights and sounding its horn? That'd be treated the same as anyone else doing the same, I presume.



Spandex said:


> Same with laws. They sit in the priority list. Obviously near the top (you shouldn't break the speed limit to get out the way of an emergency vehicle and you shouldn't do it to overtake either)


I refer you to my previous example.



Spandex said:


> ... but you'd expect the car to break certain laws if it meant avoiding a collision (crossing a solid white line to avoid a pedestrian stepping out, for example).


I agree, it's OK to break the law in an emergency - in fact it's an allowed-for legal exception.


----------



## Spandex (Feb 20, 2009)

You're over-thinking it. A large moving object in the road is a high priority - it doesn't need to work out whether it's a real giant snail or a human in a costume. And unlike a human, the car will already be driving at a speed where its stopping distance is shorter than its visibility. And in the incredibly rare event of 'something' leaping out from a concealed position... well, a human driver wouldn't have time to work out exactly what it was either; he/she would, quite correctly, just react as though it were a person.

The point I'm making is that the more you simplify the decision making process, the safer it actually gets. Humans are, statistically, not very safe drivers - all that 'common sense' clearly isn't as useful as you think. They may be able to tell the difference between a pantomime horse and a real one (although with a fleeting glance at 60mph, they may not) but the decisions they make with all that common sense information they have are still horribly flawed. And remember, the AI doesn't need to make every decision correctly, it just needs to make them better than humans, on average.

As for your desperately convoluted overtake scenario, the safest option is still to brake.. even if you're ultimately left stationary on the wrong side (unlikely as that is), you've still reduced the risk of injury by reducing the closing speed of the two cars. You've also given the oncoming car significantly more time to react AND made yourself a much easier obstacle to avoid.

Accelerating reduces the distance between the two cars and the increased speed makes further manoeuvres even more difficult.


----------



## John-H (Jul 13, 2005)

The pantomime horse is largely metaphorical and really stands for all sorts of situations that may not have been covered by the programmer. As these vehicles become more prevalent we'll see how safe they are. If the rules are simplified to avoid all moving objects, then you can get into scenarios of taking evasive action for a piece of paper being blown in the wind, or variations of that - and it was being postulated previously that there would be recognition distinguishing humans from other animals etc. Hence the pantomime horse.

In practice I suspect we'll have very cautious driving by these vehicles, which many people might find frustrating to be behind when they are stuck at a busy give way and the automatic vehicle ignores people waving them out. The passengers may be glad of the caution, however.

As for my overtaking scenario, I only added the police car to fit with your later statement, and I added the nutter for emphasis, but that sort of situation is not uncommon. Slamming the anchors on can cause others to brake suddenly, expecting you to pull in, which could cause an accident if they are bunched up. If there's plenty of safety margin to be had by getting past quicker, that must be more sensible. It may exceed the speed limit, but arguably it's an emergency situation avoided. I would hope AI could also calculate how best to escape from danger. I suspect it would never try to overtake.


----------



## Stiff (Jun 15, 2015)

John-H said:


> The pantomime horse is largely metaphorical and really stands for all sorts of situations that may not have been covered by the programmer.


It's a very valid point, but I really think that recognition systems will be far superior to the human eye. A heat source, for instance, will pick up two human figures well before it detects the costume. The same applies to the Dom Joly snail prank (although I'd be tempted to override the system and drive over his legs - that should leave a decent snail trail) 





Ironically, the thing that poses the most problems is cyclists (no change there then :lol: ) 
https://www.theguardian.com/cities/2017 ... s-vehicles

This, for me, is the crux of the matter though, and the most valid.



Spandex said:


> And remember, the AI doesn't need to make every decision correctly, it just needs to make them better than humans, on average.


And it will do this immensely better and incredibly quicker than a human ever could. Yes, there may be a few mistakes along the way, no doubt due to input from 'human' error - programming and such - but overall the accident rate will fall dramatically, until we reach a point where a 'car' fatality is so rare that it hits the headlines in much the same way a plane crash does now. (They've been using autopilot for over half a century and it's much safer than leaving it down to the human element.)


----------



## Spandex (Feb 20, 2009)

John-H said:


> The pantomime horse is largely metaphorical and really stands for all sorts of situations that may not have been covered by the programmer. As these vehicles become more prevalent we'll see how safe they are. If the rules are simplified to avoid all moving objects, then you can get into scenarios of taking evasive action for a piece of paper being blown in the wind, or variations of that - and it was being postulated previously that there would be recognition distinguishing humans from other animals etc. Hence the pantomime horse.


I understand that it's a metaphor, but it's still over-complicating things. You don't need to work out what something is. If something large enough moves into your path, you take evasive action if you can do so without hitting anything else, and you brake hard if you can't. That simple algorithm covers everything about sudden incursions into your lane. If a person (or cat, or dog, or pantomime horse) runs out into the road and there's a car coming the other way, you brake hard to minimise injuries - you don't crash head on into the other car, or drive off the road into a tree to save the pedestrian.

You're trying to concoct scenarios in an attempt to find a situation where a car might make a 'bad' decision, but all the while you're ignoring the fact that humans are just as likely to make the same or worse decisions in those scenarios too. Cars will have to make snap decisions based on available data. They will have more data than humans have (due to being able to see in all directions at once, possibly in many more spectrums than us), they'll make decisions faster than we can, and they'll make them continuously throughout the emergency (with no panic or fear shutting them down). They *will* be safer than us.



John-H said:


> In practice I suspect we'll have very cautious driving by these vehicles, which many people might find frustrating to be behind when they are stuck at a busy give way and the automatic vehicle ignores people waving them out. The passengers may be glad of the caution, however.


I'm sure they will seem very cautious compared to us, but this is because we've become so used to taking risks that we don't even register them anymore. We take massive risks every day in our cars and still class ourselves as 'good' drivers at the end of the day.

And remember, our acceptance of risk is often based on a feeling of control. When you're driving you will take risks that you class as completely acceptable, but if you were in the back of an Uber, you'd probably appreciate a cautious driver that took a few minutes longer getting you to your destination. It's not that you're safer than him, it's just your risk perception is skewed when you're in control.



John-H said:


> As for my overtaking scenario, I only added the police car to fit with your later statement, and I added the nutter for emphasis, but that sort of situation is not uncommon. Slamming the anchors on can cause others to brake suddenly, expecting you to pull in, which could cause an accident if they are bunched up. If there's plenty of safety margin to be had by getting past quicker, that must be more sensible. It may exceed the speed limit, but arguably it's an emergency situation avoided. I would hope AI could also calculate how best to escape from danger. I suspect it would never try to overtake.


Slamming on your brakes may indeed cause others to do the same but again, that's the lesser of two evils - 'might cause a rear-end shunt' is infinitely better than 'might cause a high speed head-on crash'. I'm sorry, but if you're in a scenario where, mid-overtake, an oncoming vehicle appears and by continuing at your current overtaking speed you'll collide, you're never going to convince me that the best option is to accelerate towards that vehicle.

Overtaking is a simple process for an AI car - it has much better awareness of the upcoming road layout than we would (assuming it wasn't a road we drove frequently). It can calculate the speed of oncoming cars to a degree we're not capable of, and can calculate the exact speed required to complete the overtake safely. It can do this in a split second whilst taking into account everything else about its location and what's going on around it. There's no reason to assume AI cars won't overtake. As long as we're comfortable allowing them that option, I'm sure they'd do it more easily than we can, given the amount of additional information they have that we don't.


----------



## John-H (Jul 13, 2005)

Stiff said:


> Spandex said:
> 
> 
> > And remember, the AI doesn't need to make every decision correctly, it just needs to make them better than humans, on average.
> ...


On balance I don't disagree that it may turn out to be the case, but that's accepting, as I was saying, that it's not perfect - on which we can agree. Only time will tell how so.



Spandex said:


> John-H said:
> 
> 
> > As for my overtaking scenario, I only added the police car to fit with your later statement, and I added the nutter for emphasis, but that sort of situation is not uncommon. Slamming the anchors on can cause others to brake suddenly, expecting you to pull in, which could cause an accident if they are bunched up. If there's plenty of safety margin to be had by getting past quicker, that must be more sensible. It may exceed the speed limit, but arguably it's an emergency situation avoided. I would hope AI could also calculate how best to escape from danger. I suspect it would never try to overtake.
> ...


I think you are creating a loaded situation there to fit your absolute "always" statement earlier.

Well, clearly in that case I would agree, but that's rather obvious, isn't it, and certainly not the situation - the exception to your absolute - where it IS safer to complete the overtake (e.g. there's plenty of room if you speed up, but it gets a little marginal if you maintain current speed), as I described. "Always" excludes exceptions, and that was my point - there are exceptions. You seem to be advocating throwing away the safe exception in favour of a dodgy one.


----------



## Spandex (Feb 20, 2009)

John-H said:


> I think you are creating a loaded situation there to fit your absolute "always" statement earlier.
> 
> Well clearly in that case I would agree but that's rather obvious isn't it and certainly not the situation - the exception to your absolute- where it IS safer to complete the overtake (e.g. There's plenty of room if you speed up but it gets a little marginal if you maintain current speed) as I described. "Always" excludes exceptions and that was my point - there are exceptions. You seem to be advocating throwing away the safe exception in favour of a dodgy one.


Then you're creating an impossible situation - one where the overtake is so unbelievably long that it's not possible to complete it at all at your current speed, but it is possible not only to complete it, but to do so completely safely, by speeding up. Sorry, but that's nonsense.


----------



## John-H (Jul 13, 2005)

Spandex said:


> John-H said:
> 
> 
> > I think you are creating a loaded situation there to fit your absolute "always" statement earlier.
> ...


No it isn't. I've been in and seen many situations where there is a long, bunched-up crocodile behind a slow car, caravan or motorhome, for example. They tend to accumulate slower cars behind who don't fancy overtaking and also don't leave gaps, effectively forming a long slow caravan. Then everyone else bunches up into a long crocodile. When you come to a long straight, what do you do?


----------



## Spandex (Feb 20, 2009)

John-H said:


> No it isn't. I've been in and seen many situations where there is a long crocodile bunched up line behind a slow car/caravan or motor home for example. They tend to accumulate slower cars behind who don't fancy overtaking and also don't leave gaps but effectively form a long slow caravan. Then everyone else bunches up into a long crocodile. When you come across a long straight what do you do?


Sigh... I (obviously?) don't mean that long lines of traffic are impossible. I mean the physics of the entire thing just doesn't add up. The difference between crashing into an oncoming car and completing the overtake with a safe margin must be a number of seconds (otherwise I'd question your definition of 'safe'). In order to gain a number of seconds, bearing in mind the oncoming car is also speeding towards you, you would either have to be passing a ridiculously long line of cars on a ridiculously long straight road with miles of visibility, OR you would have to accelerate to a speed so high that you've actually made the overtake inherently unsafe due to the difference in speed between you and the cars you're passing.

And even if you argue that those situations are plausible, they only work if nothing else changes. If the oncoming car accelerates (a believable situation, given you've invented a police chase just to try to justify this) then you're now unable to complete the overtake AND, due to your poor decision making, you're now travelling too fast to stop or take any kind of evasive action before meeting the imaginary crooks. By speeding up you've reduced your options to one, and if that fails, you're dead.


----------



## Spandex (Feb 20, 2009)

Honestly, I think the whole notion of 'accelerating out of trouble' in general is just macho bollocks really. Nailing it is way more heroic than braking, right? The real problem is that people don't want to back out of an overtake. It's a sign of weakness. A very public admission that you made a mistake. So people commit to the overtake and they accelerate as hard as they can and they pray nothing else changes because their ego stopped them doing what they should have done 5 seconds ago.

Fortunately, a driverless car should be capable of avoiding all the stupid scenarios that some drivers seem to think can only be fixed by booting it.


----------



## John-H (Jul 13, 2005)

:lol: Of course I wasn't implying you don't believe long lines of traffic can exist. You mention physics. Consider the differential speed between you and the vehicle you are overtaking. That may be quite small.

If you are near the front of your overtaking manoeuvre, and the oncoming car appears with, say, three seconds at the current overtaking differential speed before you complete the manoeuvre, but by speeding up you dramatically increase the speed differential and get past in one second, with a consequent huge safety margin before the oncoming car reaches you - you'd be a fool not to take the opportunity.
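To put rough numbers on that differential-speed point (all figures here are my own illustration, chosen to match the three-seconds-versus-one idea: passing a 50 mph vehicle at 55 mph versus putting your foot down to 70 mph):

```python
# Back-of-envelope arithmetic for the overtaking differential.
MPH_TO_MS = 0.44704
gap_to_clear_m = 6.7            # hypothetical distance left to get past and tuck in

slow_diff = (55 - 50) * MPH_TO_MS   # ~2.2 m/s closing on the car being overtaken
fast_diff = (70 - 50) * MPH_TO_MS   # ~8.9 m/s after accelerating

print(round(gap_to_clear_m / slow_diff, 1))  # → 3.0 (seconds still on the wrong side)
print(round(gap_to_clear_m / fast_diff, 1))  # → 0.7 (back on your side far sooner)
```

Because the differential is small to begin with, even a modest burst of speed cuts the time exposed on the wrong side of the road by a factor of four.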

Are you really suggesting that you should instead brake and try to get the line of traffic to undertake you? You'd be hoping nobody panics and brakes for you to pull in between, causing a potential collision to their rear, and also slowing the cars behind, making it even less likely that you reach the back and pull onto your side in time. So you and the people behind all end up doing an emergency stop, you've caused an accident to your left, you are all blocking the road, and you're now hoping the oncoming car will also stop. What a lot of chaos, and causing others to take evasive action - something also not allowed in the Highway Code - when all you needed to do was put your foot down and tuck back in to avoid an emergency. Nothing to do with psychology, just playing safe.

If you are close to completing, and speeding up WILL give you a large safety margin by getting you back on your side quicker, it makes little sense to abandon that safe course of action and opt for an extended unsafe situation with you on the wrong side of the road, hoping nobody panics.

Of course if you still had some way to go to get past then it would be safer to abandon the attempt and pull back in soonest.

Of course, it doesn't have to be a speeding oncoming car chased by the police - it could be just a normal car. But in the short moment you see it, it's difficult to judge its speed, and by the time you've realised it's coming fast it may be too late. So as soon as you see it, you put your foot down to get back in quickly, just in case it was coming fast - that's also playing safe.

It could also be a car pulling out of a concealed entrance or junction without looking to their left. You still have to make a judgement. Depending on the situation it may again be very safe to speed up and get back in before you reach them or depending on the distance you may have no choice but to try and stop to avoid an immediate collision.

Come on, just admit there can be exceptions.


----------



## ZephyR2 (Feb 20, 2013)

A situation which would meet your criteria of the "long overtake" does arise in different circumstances. On motorways where there is a 50 mph speed limit for extensive roadworks I often find myself overtaking a long line of traffic moving at perhaps 49 mph while I'm doing 50 mph. A manoeuvre which can take several minutes over a long distance on a congested motorway.
Now if I need to come off at the next junction I need to assess whether I can complete the overtaking manoeuvre and get into the inside lane safely in time.
I would imagine that without knowing the length of the queue of slower traffic an AI vehicle would err on the side of caution and maintain its position in the inside lane. But for how long before the exit junction would it apply this decision?


----------



## Spandex (Feb 20, 2009)

John-H said:


> Come on just admit there can be exceptions


If there are exceptions, you've not mentioned a convincing one so far.

In your last one, if you're near the end of your overtake and an oncoming car appears, then why have you not got enough time to finish the 'almost complete' overtake? How is this oncoming car, which wasn't even visible in all the time prior to this moment, now going to get to you fast enough that you can't finish your overtake as planned? The only way it can is if the overtake was never safe in the first place. This is what I mean by impossible scenarios. I have to suspend disbelief in order to picture myself in a scenario where I'm literally *forced* into accelerating so that you can prove that you might be forced into accelerating.

Ok, so maybe that's your point. If you're stupid enough to start an unsafe overtake there may come a time during it that you need to 'double down' on stupidity. In for a penny, in for a pound. Foot down and hope nothing else goes wrong.

But to try to get back on topic, we were talking about whether an AI car would accelerate past the speed limit if needed and all your scenarios have told me is that an AI car would never be dumb enough to need to.


----------



## Spandex (Feb 20, 2009)

ZephyR2 said:


> But for how long before the exit junction would it apply this decision?


It's a computer. It would apply that decision for exactly the right amount of time. It knows how far away the junction is (probably to sub-metre accuracy) and it knows precisely how fast it and the other cars are travelling. It can work out the precise cut off points where it's no longer expedient to overtake a line of cars if it knows how many cars are ahead.
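As a toy illustration of that cut-off calculation (all numbers and names here are hypothetical, not anyone's actual software), the decision is simple arithmetic once the queue length and speeds are known:

```python
def can_complete_overtake(cars_ahead: int, car_length_m: float, gap_m: float,
                          own_mps: float, queue_mps: float,
                          dist_to_exit_m: float) -> bool:
    """Rough check: can we pass `cars_ahead` vehicles and still be back in
    the inside lane before the exit? Constant speeds assumed - a real
    planner would also model acceleration and a pull-in safety margin."""
    queue_length_m = cars_ahead * (car_length_m + gap_m)
    closing_mps = own_mps - queue_mps  # 50mph vs 49mph is only ~0.45 m/s
    if closing_mps <= 0:
        return False                   # never gets past the queue at all
    time_to_pass_s = queue_length_m / closing_mps
    return own_mps * time_to_pass_s < dist_to_exit_m

# 10 cars at 49mph (~21.9 m/s), us at 50mph (~22.35 m/s): passing ~250m of
# queue takes over nine minutes and more than 12km of road.
print(can_complete_overtake(10, 5, 20, 22.35, 21.9, 15000))  # True
print(can_complete_overtake(10, 5, 20, 22.35, 21.9, 5000))   # False
```

Which also shows why the 50-vs-49mph overtake in the roadworks scenario really does take "several minutes over a long distance".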

But if, as per your scenario, it can't work out how long the line of cars is then a human driver wouldn't be able to either. So:

1. What do you think is the best thing to do in that situation, and:
2. Why do you think an AI car couldn't do it but you could?

This is the crux of it. People keep talking about AI cars being more cautious as though they were created on another planet. *We're* building these things. If we don't want them to be over cautious then we can program them not to be. The point is that they will be capable of controlling the car better than us, they will have more data available to them, they'll make decisions faster than us and they won't suffer from ego or impatience or anger or get tired or all the other things that make us bad decision makers.

If we want them to maintain a pre-programmed speed by overtaking on the motorway until a pre-programmed distance before their exit, then slot perfectly into any gap that meets the pre-programmed criteria, then that's what they'll do. If we want them to move over two miles before the junction in order to avoid hunting for a gap at the last minute, then that's what they'll do. Our choice.


----------



## ZephyR2 (Feb 20, 2013)

Q1. I suspect an aggressive driver or one in a hurry might start overtaking and try and force their way back in when they run out of road. Whereas a less aggressive or less confident driver would hang back and wait until they reached the exit.
How would you feel about AI cars having a selection button, along the lines of the current Drive Select function, whereby a driver could choose a more adventurous approach vs something more cautious?

Q2. I'm not suggesting that an AI car couldn't do what I could do, I'm just pondering how it may be programmed to react.

Also how it might react if the situation changed once it had decided to make a move and start the overtaking manoeuvre. It has calculated the length of the slow traffic and its speed and has decided it can overtake before the next junction. However, as is often the case, part way through the process a slow moving vehicle in the middle lane has moved over and now the slow moving traffic is moving at the same speed as yourself. Ahhhh!

There's a lot for AI programmers to work out. Hence the millions of miles of learning being clocked up.
Unfortunately AI cars have to be whiter than white and are widely expected by the public to be perfect. For example: A man drives his car under an artic and dies - nobody blinks an eye. An autonomous car drives under an artic and kills the driver - world news for weeks.


----------



## Stiff (Jun 15, 2015)

ZephyR2 said:


> Unfortunately AI cars have to be whiter than white and are widely expected by the public to be perfect. For example: A man drives his car under an artic and dies - nobody blinks an eye. An autonomous car drives under an artic and kills the driver - world news for weeks.


The thing with that one is that it wasn't 'fully' autonomous. The Tesla was level 2 or maybe even level 3 at best and the driver is constantly warned "_Autopilot requires full driver engagement at all times._" This guy was reading a book or something along those lines. Tesla were cleared and are using radar now to eliminate the 'clear sky' problem. But yes, it looks like there's still lots to learn before it's fail safe. 
More on that incident here: https://www.nytimes.com/2017/01/19/busi ... crash.html 
or here: https://www.wired.com/2017/01/probing-t ... f-driving/


----------



## John-H (Jul 13, 2005)

Spandex said:


> John-H said:
> 
> 
> > Come on just admit there can be exceptions
> ...


Well at least you are admitting there may be exceptions. This is different to your previous:



Spandex said:


> The most sensible choice is *always* to brake and pull back in behind...





Spandex said:


> In your last one, if you're near the end of your overtake and an oncoming car appears then why have you not got enough time to finish the 'almost complete' overtake? ...


Ok, let's do a little maths.

Suppose you are overtaking a few cars tailgating a coach. The coach is driving at 30mph down a long straight. You overtake at 40mph.

You've cleared the cars and have 10 yards of coach remaining and are giving yourself 10 yards of safe distance to pull in = 20 yards differential relative distance to pull in safely.

At this point a car pulls out of a concealed entrance 50 yards away heading in your direction (they've not looked left).

At 40mph (about 19.6 yards per second) you get there in roughly 2.5 seconds.

20 yards of relative distance at a 10mph speed differential (about 4.9 yards per second) takes roughly 4 seconds to close - so you'd still be alongside the coach when you meet the oncoming car. Very possibly a collision, especially if the car doesn't see you and you give it time to accelerate towards you.

You instead increase speed to 50mph, doubling your relative speed differential to 20mph, and get clear in about 2 seconds before dropping back to 40mph - you pull in just as you reach it rather than well over a second too late. The margin is slim, but at 40mph there was no margin at all.

I think that's right. Just an example of how speeding up can make it safer. In fact the general advice is to overtake briskly, spending as little time as possible on the wrong side of the road, so arguably you should have accelerated to a higher speed sooner.
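For anyone who wants to check the arithmetic, here's the same calculation as a quick sketch (constant speeds, no acceleration modelled - the conversion is 1mph = 1760/3600 yards per second):

```python
def seconds_to_cover(yards: float, mph: float) -> float:
    """Time to cover a distance at a constant speed (1 mph = 1760/3600 yd/s)."""
    return yards / (mph * 1760 / 3600)

# Time for the overtaking car, at 40mph, to cover the 50 yards
# to where the oncoming car pulled out:
t_reach = seconds_to_cover(50, 40)             # ~2.6 s

# Time to close the 20 yard relative gap at a 10mph differential (40 vs 30):
t_clear_at_40 = seconds_to_cover(20, 40 - 30)  # ~4.1 s: not clear in time

# Same gap at a 20mph differential (50 vs 30):
t_clear_at_50 = seconds_to_cover(20, 50 - 30)  # ~2.0 s: clear, just
```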

Here's some good advice about overtaking multiple vehicles: http://www.driverskills.com/corporate/s ... king-tips/

Now, back to the AI thing. The article I linked to said:



> It's not something they have mastered and it's not uncommon for humans to have to take control in road tests to avoid accidents.


Here's when an AI Google car caused an accident by pulling out to avoid an obstruction and sideswiping a passing bus: https://www.theverge.com/2016/3/9/11186 ... rash-video

So they are not perfect, but people tend to compare them to perfection rather than to humans. There's a way to go yet and it will be interesting to see what happens.


----------



## Spandex (Feb 20, 2009)

Your example doesn't take into account acceleration - it just assumes instant speed changes. That's ok though, because you can just keep tweaking it till you get something that approaches a working scenario as long as you ignore the fact that only a nut job would accelerate at an oncoming car 50 yards away. Even I'm finding this tedious now though, so please don't feel you have to.

Yes, self-driving development platforms aren't perfect, but we're not talking about those are we? Aren't we discussing what they *will be* like when they're actually approved? Unless the title of this thread refers to a driving test for the engineers driving current test vehicles?


----------



## Spandex (Feb 20, 2009)

ZephyR2 said:


> Also how it might react if the situation changed once it had decided to make a move and start the overtaking manoeuvre. It has calculated the length of the slow traffic and its speed and has decided it can overtake before the next junction. However, as is often the case, part way through the process a slow moving vehicle in the middle lane has moved over and now the slow moving traffic is moving at the same speed as yourself. Ahhhh!


But that's not a difficult problem. You have safety parameters and if you can't meet them you continue even if that means missing your exit. That's no different to how a human should treat it (I say 'should' because there will always be the morons who risk a serious crash just to take an exit they're in danger of missing).



ZephyR2 said:


> There's a lot for AI programmers to work out. Hence the millions of miles of learning being clocked up.


I'm not so sure. The difficult part of this is building an accurate real-time 3D model of the world that allows the car to make its decisions. The decisions themselves are relatively simple.

<edit> Just to be clear, I agree that they have a lot to work out, but I don't think these decisions we're discussing are a major part of it.


----------



## John-H (Jul 13, 2005)

Spandex said:


> Your example doesn't take into account acceleration - it just assumes instant speed changes. That's ok though, because you can just keep tweaking it till you get something that approaches a working scenario as long as you ignore the fact that only a nut job would accelerate at an oncoming car 50 yards away. Even I'm finding this tedious now though, so please don't feel you have to.


It was just an illustration to prove a point which you don't want to appreciate. Complaining about instant acceleration? Really :roll: I was keeping it simple and no I don't intend to turn it into a further maths paper. I'll allow you to tweak the distance the car pulls out at to where you feel comfortable having accelerated to 50mph or 60mph to nip past :wink:

Did you look at the link about overtaking multiple vehicles safely?



Spandex said:


> Yes, self-driving development platforms aren't perfect, but we're not talking about those are we? Aren't we discussing what they *will be* like when they're actually approved? Unless the title of this thread refers to a driving test for the engineers driving current test vehicles?


I did say we had some way to go. The quote and news report just underline where we are now.


----------



## Spandex (Feb 20, 2009)

John-H said:


> Did you look at the link about overtaking multiple vehicles safely?


Yes. It didn't discuss what to do if an oncoming car suddenly appeared mid-overtake so I didn't really see the relevance.


----------



## John-H (Jul 13, 2005)

Spandex said:


> John-H said:
> 
> 
> > Did you look at the link about overtaking multiple vehicles safely?
> ...


No, shame that really. I did try. Good info though. Do you think an AI car would do that?


----------



## Spandex (Feb 20, 2009)

John-H said:


> Do you think an AI car would do that?


Do what? Overtake multiple cars? Don't see why not. No such thing as a concealed entrance for an AI car. No miscalculations of other cars' speeds. No ego. No impatience. No surprises.


----------



## John-H (Jul 13, 2005)

I'm not so sure as you. Have you written the software already?


----------



## Spandex (Feb 20, 2009)

John-H said:


> I'm not so sure as you. Have you written the software already?


Which bit do you think is difficult?

Actually, think of it like this. If you were playing a multiplayer driving simulator game and you saw one of the computer controlled cars do a perfect overtake of multiple cars, would it surprise you? Probably not. But the only difference between that and the real world is that in the game, the computer knows everything about the 'world'. The decisions are the same, it just has more data. So the challenge with self-driving cars is to get enough data on the world around it and to build that into an accurate model. The decisions themselves are simple and the act of driving faster than, and parallel to, a row of cars is hardly rocket surgery.


----------



## John-H (Jul 13, 2005)

Well, obviously what to do part way through a long "effective single vehicle" overtake when the situation changes seems to be controversial. Would you, as a programmer, allow the AI vehicle to speed up to complete an overtake if doing so was mathematically safe, or always default to slowing down, having traffic undertake at an ever-increasing relative speed while you try to find a space to pull back in which may not be there? Same question really, only this time you have removed emotion from the equation.

I'm sure there are other examples.


----------



## Spandex (Feb 20, 2009)

John-H said:


> Well, obviously what to do part way through a long "effective single vehicle" overtake when the situation changes seems to be controversial. Would you, as a programmer, allow the AI vehicle to speed up to complete an overtake if doing so was mathematically safe, or always default to slowing down, having traffic undertake at an ever-increasing relative speed while you try to find a space to pull back in which may not be there? Same question really, only this time you have removed emotion from the equation.
> 
> I'm sure there are other examples.


Firstly, the more accurate your world model is, the less likely an unpredictable change is. Secondly, the car can just brake if needed. All your examples were focused entirely on trying to find a situation where accelerating was safe, but completely ignored the real question of what was the *safest* option. In your last example, a car pulled out 50 yards away while you were travelling at 40mph. That's more than the 40mph stopping distance in the Highway Code (which we all know is massively conservative compared to the stopping performance of modern cars). And that assumes decelerating to 0mph - in reality you just need to decelerate gradually to a point where you can pull back in behind the last car. So why are you determined to ignore that option? If you accept that overtaking is a situation where unexpected things _can_ happen, don't you think accelerating at that point puts you in an even worse position if another unexpected thing happens? Or is the rule that we only get one unexpected thing per overtake?

Obviously though, backing off and pulling back in should not be a standard part of overtaking. It's an evasive action to avoid a head on crash. It should never happen in normal driving, but is a perfectly reasonable fall-back position for emergencies.

I get it though. Really. You're trying to imagine situations where a car would *have to* do something that it's programmed* not to* do. It's useful to do thought experiments to help us understand all the nuances and edge cases of a problem, but that doesn't mean the real world solution needs to be that complex. Often these experiments just help us realise that the simple solution is not only 'good enough', but is actually better than the complex one that tries to have a unique answer to every possible question.


----------



## Spandex (Feb 20, 2009)

You know what... I gave a perfectly reasonable example earlier in the thread; crossing a solid white line to avoid a collision. That's an example where a driver (human or AI) would be expected to break the law in order to avoid a crash. So I think it's safe to assume I agree that AI should be able to make decisions that break the law in specific situations. The law allows for humans to do it, so it seems reasonable that we would want AI to do the same.

The only reason we're banging on about overtaking is because we disagree what the correct thing to do is, regardless of who's driving. We're not arguing about AI cars, we're arguing about overtaking. Maybe that should be in another thread?


----------



## John-H (Jul 13, 2005)

John-H said:


> ... Would you, as a programmer allow the AI vehicle to speed up to complete an overtake if doing so was mathematically safe... ?


----------



## Spandex (Feb 20, 2009)

John-H said:


> John-H said:
> 
> 
> > ... Would you, as a programmer allow the AI vehicle to speed up to complete an overtake if doing so was mathematically safe... ?


Not if it exceeded the speed limit and wasn't necessary in order to avoid a collision. And given that decelerating is equally or more effective at avoiding collisions I can't see how it would ever meet the 'necessary' test.


----------



## Spandex (Feb 20, 2009)

As I said, breaking the law to avoid a collision is technically acceptable. It WILL be programmed into driverless cars as a last resort, where no legal options are available. If you can't work out how that would apply to a long overtake then it's probably for the best that you're not programming driverless cars.

But really that's not what you're obsessing with here. You're trying to justify your own actions in the face of someone who says they're wrong. And I'm not going to agree - I think you're making a dangerous choice. It's unfortunately not an uncommon choice though, and I've been that oncoming car a few times, watching some tit accelerating at me after misjudging an overtake, then pulling across at the last second. All it would take is for the front car (or cars) to speed up (maybe they haven't noticed the overtake. Maybe they think they're helping by opening up a gap behind them) and I'll end up in a ditch, or worse, because the tit has chosen a course of action that commits him to trying to finish the overtake whether it's physically possible now or not.


----------



## John-H (Jul 13, 2005)

Spandex said:


> John-H said:
> 
> 
> > John-H said:
> ...


Ooh and you avoid answering the direct question again by throwing in two new conditions so trying desperately to redefine the boundaries of my question :lol:

The first about breaking the speed limit is irrelevant because you contradict it in your next post:



Spandex said:


> As I said, breaking the law to avoid a collision is technically acceptable.


And your second condition allows *equal* status to the choice of accelerating to avoid an accident if it was "mathematically safe" as I put it.

I think that's the closest I'm going to get to an admission from you that there can be an exception to your earlier statement:



Spandex said:


> The most sensible choice is always to brake and pull back in behind - speeding up is never safer.


Except when it's *equally* as safe it appears :wink:



Spandex said:


> As I said, breaking the law to avoid a collision is technically acceptable.
> It WILL be programmed into driverless cars as a last resort, where no legal options are available. If you can't work out how that would apply to a long overtake then it's probably for the best that you're not programming driverless cars.


Isn't it those who refuse to accept the possibility of exceptions to their model who are more likely to do the wrong thing when those exceptions present themselves in reality?



Spandex said:


> But really that's not what you're obsessing with here. You're trying to justify your own actions in the face of someone who says they're wrong.


Now, I would have thought by now that you know I like a good debate, as do you, and it was for that purpose I picked you up on the absolute statement you made - only because it was an absolute "always" and "never", which I don't believe to be the case. If you'd said "nearly always" or "usually" I wouldn't have challenged you, as I would have agreed with you. I knew too that if I challenged you, you'd argue tooth and nail - and so would I. It was fun and you have my respect for your tenacity. I think we both like the challenge of debate  Do you remember this?:



brian1978 said:


> John-H said:
> 
> 
> > I'm in tears and my stomach muscles hurt :lol: :lol: :lol: :lol: :lol: :lol: :lol:


Some of the best debates on this forum have featured your well researched input!


----------



## Spandex (Feb 20, 2009)

Ok, first of all John, don't be a prick. I've tried to stop talking about this one specific question a couple of times and you're the one who just can't drop it. If you want to troll someone p*** off and find Bob.



John-H said:


> Ooh and you avoid answering the direct question again by throwing in two new conditions so trying desperately to redefine the boundaries of my question :lol:


No, I answered it very very accurately. No programmer is going to tell the car what it has to do in that situation. They're going to give it a series of parameters that will allow the car to decide. Your question isn't answerable with a 'yes' or 'no' and the only reason you want me to give a yes/no answer is so you can then proceed to pick at it.



John-H said:


> The first about breaking the speed limit is irrelevant because you contradict it in your next post


Once again, no. Nothing I have said is contradictory. I said "Not if it exceeded the speed limit *and *wasn't necessary in order to avoid a collision". I didn't choose "and" over "or" by accident.



John-H said:


> Except when it's *equally* as safe it appears :wink:


Jesus John, this is getting desperate. No, I said "equally *effective at avoiding the collision*". You quoted it in your reply, so I'm not sure how you've managed to change it only a few lines later. So, what's the difference? Well, avoiding the collision is clearly the priority at that moment in time, but consideration must be given for what happens next. Choosing between acceleration and deceleration is also a choice between two end conditions. One is inherently safer than the other so that is the one a car will choose.



John-H said:


> Isn't it those who refuse to accept the possibility of exceptions to their model who are more likely to do the wrong thing when those exceptions present themselves in reality?


There may be exceptions to the model, but you've not explained why this is one of them. At the risk of repeating myself, you've not spent any time looking at alternatives, you've just focused on trying to make your option look safe. It's very telling that you keep using the phrase "complete the overtake" as though that should somehow still be the aim when an emergency occurs. We're not talking about part of an overtaking procedure, we're talking about collision avoidance.

So, let me be completely clear. The answer to your question is:


I wouldn't program a car to *do *anything in that situation other than make a decision itself. I would program it with a set of priorities and rules that would allow it to make that decision. One of those rules would be that it could only break the law (including the speed limit) if it was *necessary *in order to avoid a collision (or other emergency) and no legal options existed.

If you can think of a scenario where it would be *necessary *to speed *in order to avoid a collision* during an overtake and no legal options are available, then good for you. None of the ones you've put forward so far meet that test.
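As a sketch of the kind of rule set being described here (all names are hypothetical - this illustrates the priority ordering, not anyone's actual control software):

```python
from enum import Enum, auto

class Action(Enum):
    BRAKE = auto()       # decelerate and pull back in behind
    ACCELERATE = auto()  # complete the overtake, possibly exceeding the limit

def choose_action(braking_avoids_collision: bool,
                  speeding_avoids_collision: bool) -> Action:
    """Priority rule: break the law (e.g. the speed limit) only when it is
    necessary to avoid a collision AND no legal option exists."""
    if braking_avoids_collision:
        return Action.BRAKE       # a legal option exists, so take it
    if speeding_avoids_collision:
        return Action.ACCELERATE  # last resort: necessary, and no legal option
    return Action.BRAKE           # no good option left: minimise impact speed

print(choose_action(True, True))    # Action.BRAKE - the legal option wins
print(choose_action(False, True))   # Action.ACCELERATE - only as a last resort
```

The point of the structure is that accelerating is reachable in the code, but only through the "necessary and no legal alternative" gate.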


----------



## John-H (Jul 13, 2005)

Spandex said:


> Ok, first of all John, don't be a prick. I've tried to stop talking about this one specific question a couple of times and you're the one who just can't drop it. If you want to troll someone p*** off and find Bob.


Now come on, if you didn't enjoy a debate you wouldn't be carrying on. Don't be so unfriendly. I did start the thread, you found it and commented, I responded, and every time I did, you did too - so don't give me that trolling nonsense, as it takes at least two to debate an issue. If you truly no longer wish to continue then don't; by continuing you confirm your equal part in it. Your further statements in response seem to invite me to comment further - or is it just that you want the last word on the subject? If the latter, then I'll shut up and we'll stop here. I thought you enjoyed a challenging argument. I think the points I've made are valid. I am interested in the subject, and I enjoy a debate, as I thought you did. I remember a good Monty Python sketch about an argument - and like at the end of that, it's now unclear how you wish to continue :wink:



Spandex said:


> John-H said:
> 
> 
> > Ooh and you avoid answering the direct question again by throwing in two new conditions so trying desperately to redefine the boundaries of my question :lol:
> ...


My question was would you as a programmer "allow" - asking whether, as a programmer, you would let the programme run free or disallow the possibility if the working algorithm wanted to take it in that direction. The working programme must include the ability to speed up, and a whole host of other actions, based on rules and boundaries. If you don't specifically disallow the possibility then you must accept the possible exception to your earlier absolute statement that it was "never" safe to accelerate in the situation, and the implication that an AI car wouldn't do it. I was trying to show you that the "unemotional" programme might make that decision - whereas you were quite unfairly attaching emotion as the only reason people might choose to avoid danger like that. I was trying to show you that if the maths allow it then it's a valid exception. You seem to be ignoring the dangers of the situation changing behind you and not being able to slot back in. The advice is always to spend as little time on the wrong side of the road as possible, as mentioned in my link.



Spandex said:


> John-H said:
> 
> 
> > The first about breaking the speed limit is irrelevant because you contradict it in your next post
> ...


It is. There was no need to include it, because you later said that in an emergency, to avoid an accident, you would allow laws to be broken. Why did you include it?



Spandex said:


> John-H said:
> 
> 
> > Except when it's *equally* as safe it appears :wink:
> ...


As I said, it's an exception which you have allowed, but you seem to be ignoring the possibility of being blocked from resuming your side of the carriageway. What I'm saying is that it has to depend on the situation - there is no absolute correct pre-defined action.



Spandex said:


> John-H said:
> 
> 
> > Isn't it those who refuse to accept the possibility of exceptions to their model who are more likely to do the wrong thing when those exceptions present themselves in reality?
> ...


Well, I've tried to explain, but you've refused to accept them. Thank you for agreeing at least that there may be exceptions, which is what I was trying to point out. I actually think it's telling that you insist on trying to invent an alternative motivation.



Spandex said:


> So, let me be completely clear. The answer to your question is:
> 
> 
> I wouldn't program a car to *do *anything in that situation other than make a decision itself. I would program it with a set of priorities and rules that would allow it to make that decision. One of those rules would be that it could only break the law (including the speed limit) if it was *necessary *in order to avoid a collision (or other emergency) and no legal options existed.


I agree with that.



Spandex said:


> If you can think of a scenario where it would be *necessary *to speed *in order to avoid a collision* during an overtake and no legal options are available, then good for you. None of the ones you've put forward so far meet that test.


I always thought I was reasonably good at explaining, so I think "that test" is just your opinion. That's fair enough.


----------



## Spandex (Feb 20, 2009)

John-H said:


> My question was would you as a programmer "allow" - asking whether, as a programmer, you would let the programme run free or disallow the possibility if the working algorithm wanted to take it in that direction. The working programme must include the ability to speed up, and a whole host of other actions, based on rules and boundaries. If you don't specifically disallow the possibility then you must accept the possible exception to your earlier absolute statement that it was "never" safe to accelerate in the situation, and the implication that an AI car wouldn't do it. I was trying to show you that the "unemotional" programme might make that decision - whereas you were quite unfairly attaching emotion as the only reason people might choose to avoid danger like that. I was trying to show you that if the maths allow it then it's a valid exception. You seem to be ignoring the dangers of the situation changing behind you and not being able to slot back in. The advice is always to spend as little time on the wrong side of the road as possible, as mentioned in my link.


I've said a number of times that breaking laws (including the speed limit) in order to avoid a collision is allowed. I'm not really sure what more needs to be said about that. We both agree there. What we're disagreeing on is whether or not that would ever be the correct choice to make in this situation. The answer is 'no'. No driverless car would ever choose that. So, effectively, even though it is allowed in principle, I would actually have programmed a driverless car never to speed up during an overtake in order to avoid a collision with an oncoming car.

This is the whole point. If I wrote a program that checked that the world wasn't a giant pumpkin every hour and then either flashed an image of the earth on a screen, or an image of a pumpkin, then effectively I've written a program that will only ever flash an image of the earth on the screen every hour. The fact that an algorithm supports alternate outcomes does not make those outcomes more likely or more sensible or more anything. The fact that a driverless car's software *technically* allowed it to speed in order to avoid a collision with an oncoming car during an overtake *under certain circumstances* is irrelevant if those circumstances can never exist. If a condition of it being allowed to speed is that it must be the only option, then it will never do so because there is always the option of decelerating.
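The pumpkin program, sketched out (deliberately silly, and purely illustrative): the branch exists in the code, but the condition guarding it can never be true, so only one outcome is ever observed.

```python
def world_is_giant_pumpkin() -> bool:
    # The condition the program dutifully "allows for" - it can never hold.
    return False

def hourly_image() -> str:
    # Both outcomes are supported by the algorithm...
    return "pumpkin.png" if world_is_giant_pumpkin() else "earth.png"

# ...but run the check every hour forever and you'll only ever see one of them.
print([hourly_image() for _ in range(3)])  # ['earth.png', 'earth.png', 'earth.png']
```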

NOW do you understand my answer?



John-H said:


> It is. There was no need to include it because you later said in an emergency and to avoid an accident you would allow laws to be broken. Why did you include it?


No clue what you're on about.



John-H said:


> As I said, it's an exception which you have allowed, but you seem to be ignoring the possibility of being blocked from resuming your side of the carriageway. What I'm saying is that it has to depend on the situation - there is no absolute correct pre-defined action.


I've covered the possibility of being blocked earlier in the conversation. If needed you can stop completely. It's still safer than attempting to complete an overtake after an emergency situation occurred. As soon as an emergency occurs, the manoeuvre you're attempting stops being relevant. The priority is to resolve the emergency in the safest way possible.



John-H said:


> I always thought I was reasonably good at explaining so I think "that test" is in your opinion. That's fair enough.


You possibly are good at explaining. I certainly feel like I understood all of your scenarios. I still don't see how any of them required the driver to speed up - you just explained how it was possible to do so without hitting the other car. If that had been all you were trying to prove then you'd have succeeded.

As for opinions - was there any point in this conversation where you thought either of us *weren't* just expressing our opinions??

<edit> OH, and I missed this:



John-H said:


> The advice is always to spend as little time on the wrong side of the road as mentioned in my link


No, that's the advice for completing an overtake, not for what to do when the overtake goes wrong. You need to stop treating this as part of a standard overtake manoeuvre. This is collision avoidance.


----------



## Spandex (Feb 20, 2009)

By the way, throughout this whole discussion, I've never said a driverless car wouldn't be *allowed *to do what you're describing. I've always said that your 'solution' was (at best) the less safe option and therefore the driverless car would never be able to choose it.

You're the one who keeps on pushing this to a meaningless yes/no question of 'is it allowed'.


----------

