When technology bites back

April 27, 2016 by Lucie Godeau
A Google self-driving car manoeuvred around some sandbags and was hit at low speed by a bus in Mountain View, California in February 2016 

From the 1912 sinking of RMS Titanic to the Chernobyl nuclear accident 30 years ago, technology has repeatedly confounded the confidence of its creators.

But it is still somehow a surprise today when we are led astray by our closest technological companions—mobile phones, GPS navigators, self-driving cars, or software that mimics human speech to interact online with people who want a chat.

"We are increasingly surrounded by machines that are meant to make our lives easier," said French philosopher Jean-Michel Besnier of the Paris-based National Centre for Scientific Research.

"The autonomous car, for example, is supposed to improve traffic, safety and give us more time. But man may feel increasingly that he is losing the initiative, that he is no longer at the controls and, because of it, no longer responsible."

There is no end of GPS mishaps to attest to this.

In March last year, a bus driver taking 50 Belgian tourists to a French ski resort in the Alps selected the wrong 'La Plagne' out of three similarly named locations on his GPS. At no point, apparently, did he lose faith in the machine as it led him 600 kilometres (400 miles) in the wrong direction, until his passengers spotted the Mediterranean.

Bloody clashes

Four months later, a 59-year-old bus driver said he was just following his GPS when he drove a trans-European bus with 58 passengers under a low bridge in northern France, shearing off the top and seriously injuring six people.

Last month, two Israeli soldiers using the navigating app Waze, which relies on users for real-time updates, mistakenly drove into a Palestinian refugee camp, sparking bloody clashes. Waze said the drivers were to blame for deviating from the suggested route and turning off a setting that warns of dangerous areas.

Indeed, our adaptation to new technology is frequently blamed for mishaps and even serious accidents, rather than the technology itself.

The World Health Organisation warns that drivers using a mobile phone are four times more likely to be involved in a crash.


In other circumstances, too, the results of such distraction can be fatal.

At Spain's famed Pamplona bull run, a 32-year-old man was killed in August last year when, while filming the running of the bulls with his mobile phone, he was surprised by one of the animals, which gored him from behind.

Deadly train crash

In one of the worst disasters blamed in part on mobile phone distraction, a Spanish train flew off the tracks and ploughed into a concrete siding outside the northern city of Santiago de Compostela on July 24, 2013, killing 79 people. The driver had been speaking on a mobile to a colleague onboard just before the crash.

One day, car drivers are supposed to surrender the wheel altogether. Not yet, though.


Google took part of the blame in February after a self-driving car manoeuvred around some sandbags and was hit at low speed by a bus in Mountain View, California.

"This accident is more proof that robot car technology is not ready for auto pilot," Consumer Watchdog privacy project director John Simpson said at the time.

Such risks cannot be blamed only on immature technology, said Valerie Peugeot, who looks into future developments at French telecoms leader Orange's research and development network, Orange Labs. "We delegate to technology choices that historically were human choices," she warned.

Racist insults

Even the world's biggest technology firms can get it horribly wrong.

Last month, Microsoft had to withdraw "bot" software, named Tay, that it had designed to respond like a teenage girl to written comments from other users on Twitter.

"Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways," a Microsoft official said.

After other users led it down the wrong path, Tay's tweets ranged from support for Nazis and Donald Trump to sexual comments and insults aimed at women and blacks.

"C U soon humans need sleep now so many conversations today," Tay said in its final post on Twitter.


30 comments


antialias_physorg
5 / 5 (4) Apr 27, 2016
There's a very weird mindset out there that just because something is tech/software, it is therefore automatically perfect.

Especially with complex tech there's no way to test for every eventuality (especially user expectations... there are as many different user expectations out there as there are users, so the potential for misunderstanding what an application can/cannot do is limitless)

Tech will make different mistakes than people. People program tech, and therefore they already try to correct for the mistakes that *people* make. So when we laugh at the mistakes of automated vehicles we should always bear in mind the many mistakes they don't make that *people* make (like running red lights and whatnot).

We also should remember that while such mistakes may be weird, once they are detected they can be fixed for *all* vehicles at once. A human error will only cause that one human not to make the error again (if that). Other humans will not learn from his/her mistake.
Eikka
5 / 5 (2) Apr 27, 2016
There's a very weird mindset out there that just because something is tech/software, it is therefore automatically perfect


It comes from the popular fiction that a computer is always superhuman in every respect. It's partly from marketing efforts, partly from sci-fi, and partly from inflated expectations of things people don't understand.

the many mistakes they don't make that *people* make (like running red lights and whatnot).


Those are not necessarily mistakes, and that's the problem - rigid adherence to rules is what computers do best, which is not necessarily warranted and can even lead to problems.

Imagine, for example, that you're driving along at night and there's nobody else on the road: what mistake is it to run a red light? A computer would sit at an empty intersection for minutes.

Of course you can add intelligent stop lights but that's not the point - flexibility to circumstances is how people work. How do you program spontaneity?
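
To make that concrete, here's a minimal sketch (hypothetical, in Python) of just how rigid a rule-follower is:

    # Hypothetical rule-based driving policy, for illustration only.
    def decide(light, intersection_clear):
        if light == "red":
            return "stop"  # the rule applies unconditionally
        if light == "green" and intersection_clear:
            return "go"
        return "wait"

    # 3 a.m., nobody around for miles - the policy still stops:
    print(decide("red", intersection_clear=True))  # -> "stop"
    # "Spontaneity" would mean adding a parameter and a branch for
    # every circumstance a human driver weighs implicitly.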
Eikka
not rated yet Apr 27, 2016
We also should remember that while such mistakes may be weird, once they are detected they can be fixed for *all* vehicles at once.


That's fine and well if you have a case that can be generalized to all vehicles in all situations.

What you're describing is a specialized system that has a special rule for every occasion. The problem is that the real world is one exception after another, so the number of special rules soon blows out of control.

The issue is prevalent in computer vision, as that's where it was first identified. In trying to detect an object like an umbrella, there is no one rule by which you can detect it. There aren't even a handful, or hundreds of rules - there would be billions of rules to detect umbrellas in every possible way you might encounter one.

That's why the vision algorithms evolved to rely on heuristics - probabilities and "rules of thumb", which are exceptionally difficult to program as overarching "rules".
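
As a toy contrast (made-up features and weights, nothing from any real detector):

    # Hypothetical sketch: a brittle hand-written rule vs. a heuristic
    # score that weighs evidence. Features and weights are invented.
    def rule_based_is_umbrella(obj):
        # One rigid rule: fails for folded, tilted or occluded umbrellas.
        return obj["shape"] == "dome" and obj["has_handle"]

    def heuristic_umbrella_score(obj):
        # A rule of thumb: accumulate weighted evidence instead of
        # demanding one exact condition.
        score = 0.0
        score += 0.5 if obj["shape"] in ("dome", "folded-cone") else 0.0
        score += 0.3 if obj["has_handle"] else 0.0
        score += 0.2 if obj["context"] == "rain" else 0.0
        return score  # accept if, say, score > 0.6

    folded = {"shape": "folded-cone", "has_handle": True, "context": "rain"}
    print(rule_based_is_umbrella(folded))    # False - the rule misses it
    print(heuristic_umbrella_score(folded))  # 1.0 - the heuristic accepts it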
antialias_physorg
5 / 5 (1) Apr 27, 2016
partly from sci-fi, and partly from inflated expectations of things people don't understand.

Well, yeah... but you still get people (like you) who say "the next generation of nuclear reactors is foolproof"...that's what I find weird. I mean: You KNOW (and even articulate) the problem. But you fall for it yourself.

rigid adherence to rules is what computers do best, which is not necessarily warranted and can even lead to problems.

We have rules because adherence to the rules causes fewer problems than not adhering to these rules. That a rule can be subverted or may not apply in 100% of all cases isn't in doubt. But we're dealing with a problem that is orders of magnitude less frequent than if we didn't have the rules.

what mistake is it to run a red light?

It's against the rules.

A computer would sit at an empty intersection for minutes for the lights to change.

So? What's your rush?
Eikka
not rated yet Apr 27, 2016
but you still get people (like you) who say "the next generation of nuclear reactors is foolproof"


Have I?

It's against the rules.


So what? Strict adherence to rules is not desirable, and can in itself lead to trouble. Suppose I have a pregnant wife in the back seat - oh no, I do have to obey the traffic lights even though there's nobody else around!

Of course again, you could make a special case out of that too, and say "if in emergency..." etc. but that's just another exception to the rule in a pile of exceptions, until the exceptions are the norm rather than the rule. Like "i before e" in English, or however that went.

So? What's your rush?


It's completely pointless to idle there when you could just go.
antialias_physorg
5 / 5 (1) Apr 27, 2016
Suppose I have a pregnant wife in the back seat - oh no, I do have to obey the traffic lights even though there's nobody else around!

Most likely of cases. And most vehicles planned do allow you to take control. And no: just making a few exceptions doesn't subvert a rule. Because you will not have a pregnant wife (or other emergency) in the back seat more than once or twice in your lifetime. Even if you add up all those 'piles of exceptional' cases, it's still many orders of magnitude less frequent than the regular case. How often have you felt the absolute *need* to run a red light in the last year/decade/ever? The answer is most certainly "zero" for almost everyone.

(And seriously: even if you take your wife to the hospital in your car TODAY, you don't run red lights. So that's a non-issue in any case.)

It's completely pointless to idle there when you could just go.

Rules of the road aren't 'suggestions'.

Have I?

Only about a gazillion times.
Eikka
not rated yet Apr 27, 2016
We have rules because adherence to the rules causes fewer problems than not adhering to these rules.


And sometimes we have rules just because people think we should have rules - don't forget that the rules themselves can be problematic and lead to unintended and unwanted consequences if they were to be followed to the letter.

A lot of the time in the real world, things work because people cut corners and ignore bad rules. A lot of the time in the real world, bad things happen because people abuse rules - especially by litigation.

A lot of the time the rules themselves are completely arbitrary because they cover subjects that cannot be adequately described by rules, because there exists some sort of continuum where a line has to be drawn without a good reason simply for the sake of having the rule.
Eikka
not rated yet Apr 27, 2016
Only about a gazillion times.


Then you can surely quote me on it.

I have never said next gen reactors are "foolproof", nor have I ever meant to imply that in the absolute.

And no: just making a few exceptions doesn't subvert a rule.


You're stuck on the individual case, when I'm trying to explain that every rule has exceptions, and summing up all the exceptions to all the rules by codifying them into special rules quickly becomes an insurmountable task.

It's like those old defunct laws in the US, where some state law may have a special case of "don't ride a donkey across the main street on Sundays" that was written for some special case for some obscure reason that was important at the time, but isn't actually relevant anymore. Trying to codify everything with universal rules and exceptions that apply to everyone soon gives you an ever-growing pile of that.
Eikka
not rated yet Apr 27, 2016
My point about nuclear reactors is to call out and avoid the hypocrisy of the nirvana fallacy, where people demand unattainable perfection of one thing while not demanding it of other things.

Suppose, for the sake of argument, it is acceptable that ten people per terawatt-hour die because of hydroelectric dams bursting around the world; then why is it unacceptable that nine people per terawatt-hour die due to nuclear accidents?

Clearly we would be applying double standards.

The related issue is that people think all nuclear accidents are basically Chernobyl. When something happens, the worst happens, and -that- is where the "foolproof" argument comes in, because it's unreasonable to demand a reactor that may never fail - what is reasonable is to demand a reactor that doesn't fail too badly most of the time, and I argue it can be done.

In other words, making it practically foolproof, instead of absolutely foolproof.
Eikka
not rated yet Apr 27, 2016
Rules of the road aren't 'suggestions'.


In the real world - at least when the police aren't around - they are. People follow them because they understand the necessity of co-operation - not because of a compulsion to follow rules simply because there are rules.

Of course some people do have that compulsion, and that's an exception to that rule.

When there is no need for co-operation, there is no need for the rules. Yet another example would be crossing a street in the middle instead of taking a long detour to the safe crossing. It's technically illegal to cross the street - but if there are no cars at the moment, why would you not?

There's an old joke about a kid crying by the side of a road and an adult stopping by to console them:
-"What's the problem?"
-"Mommy said to always wait for the car to pass before crossing the road, but there's been no cars!"
antialias_physorg
5 / 5 (1) Apr 27, 2016
rules themselves can be problematic and lead to unintended and unwanted consequences

Yes, yes, yes. But you're still orders of magnitude off. If it's good to follow a rule in 99.9999% of all cases then dumping the rule just because of the 0.0001% exceptions is insane.

A lot of the time in the real world, things work because people cut corners and ignore bad rules.

And sometimes people think they make it work by cutting corners and then we get Chernobyl. Go figure. Had they only followed the rules, eh?

If you have a problem with a bad rule then lobby to get the rule changed. Don't just ignore it. Because there are *reasons* why someone made the rule. And the people who make the rules are usually a LOT smarter than you.

It's far more likely that you just don't understand the reason than it is the case that it's actually a bad rule.
Eikka
not rated yet Apr 27, 2016
Yes, yes, yes. But you're still orders of magnitude off. If it's good to follow a rule in 99.9999% of all cases then dumping the rule just because of the 0.0001% exceptions is insane.


But that's not what I'm saying.

I'm saying the computers fail when it comes to things you can't encode to explicit rules

If you have a problem with a bad rule then lobby to get the rule changed. Don't just ignore it.


It's unreasonable to always expect people to follow a bad rule while they can't overturn it. The debate over it can take decades.

Because there are *reasons* ... And the people who make the rules are usually a LOT smarter than you.


And there are bad reasons. The people - however intelligent - are not omniscient. They may be trying to solve the wrong problem, mis-interpreting information, or the rule may be obsolete, irrelevant, or an unintended effect of some other rule. In the worst case they're corrupt or indeed stupid.
Eikka
not rated yet Apr 27, 2016
Had they only followed the rules, eh?


Funny you should say that, since they were. Too bad the rules were conflicting.

Even in the design of the plant they cut corners, because on one hand they were responding to pressure from the Politburo to get it done under budget and on schedule, and on the other they were trying to follow the designers' drawings. Following one rule would put them in Siberia, and following the other would get them off scot-free with a potential nuclear hazard that may or may not take place in the decades to come...
Eikka
not rated yet Apr 27, 2016
It's far more likely that you just don't understand the reason than it is the case that it's actually a bad rule


Ultimately there are no bad rules - only wrong places and times to apply them.

That just goes back to the point that following a rule just because it's a rule because someone happened to have set such a rule is basically irrational. That's what computers do, and that's what makes them functionally insane.
antialias_physorg
5 / 5 (1) Apr 27, 2016
I'm saying the computers fail when it comes to things you can't encode to explicit rules

There are actually things that can't be encoded in rules? Are we talking spiritual issues, here?
And remember: pick an example that a human would also not fail at.

And there are bad reasons.

As I said: If you think the reasons are bad then go to the people who made the rule (or vote on it. One of the reasons why we have a democracy, remember?)

Far more likely that you have thought less about the rule than those who made it - I.e. it's very likely that you're not seeing something they did.

For example, in Germany people will not cross the road at a red pedestrian light even if there's no car for miles. Bad rule? No. Because crossing the road sets a bad example for children (who may get killed following the bad example)...and I'm sure *you* didn't think of that ramification, did you?

So before you say "bad rule" think again and give it the benefit of the doubt.
Eikka
not rated yet Apr 27, 2016
"Do not believe in anything simply because you have heard it. Do not believe in anything simply because it is spoken and rumored by many. Do not believe in anything simply because it is found written in your religious books. Do not believe in anything merely on the authority of your teachers and elders. Do not believe in traditions because they have been handed down for many generations. But after observation and analysis, when you find that anything agrees with reason and is conducive to the good and benefit of one and all, then accept it and live up to it."


-Buddha

It is worse if you are wrong and I believe you, than if I am wrong and I believe me.

Point being that if you yourself are wrong, you are still at your own authority to change your opinion, but if you give up that authority to someone or something else, you can no longer remedy that. That's to say, blindly following rules on the faith that someone else knows better than you do is not rational.
Eikka
not rated yet Apr 27, 2016
There are actually things that can't be encoded in rules? Are we talking spiritual issues, here?


No. Just that there exist continuums and ambiguities which cannot be split in the middle to form a rule for the purposes of self-driving cars and AI, like "how many grains of sand make a heap".

You're evoking "rules" as in laws of physics, which is beyond the scope of this discussion.
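
To put the heap example in code terms (a hypothetical sketch): any hard cutoff you pick is arbitrary, and the rule itself cannot justify it.

    # Hypothetical: encoding a continuum as a hard rule forces an
    # arbitrary threshold the rule cannot justify.
    def is_heap(grains):
        return grains >= 1000  # why 1000 and not 999? The rule can't say.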

Far more likely that you have thought less about the rule than those who made it - I.e. it's very likely that you're not seeing something they did.


Far more likely is that you have conflicting interests over the rule than those who made it. I sense an underlying implication that you think there exists some kind of "best for all in all cases" rule that can be found, and moreover IS found by the lawmakers.
Eikka
not rated yet Apr 27, 2016
For example, in Germany people will not cross the road at a red pedestrian light even if there's no car for miles. Bad rule? No. Because crossing the road sets a bad example for children (who may get killed following the bad example)...and I'm sure *you* didn't think of that ramification, did you?


I would rather teach my children to ignore rules when they become irrelevant. There we will just have to agree to disagree. I would teach them the purpose of the rule rather than the rule itself: don't get killed.

I would say millions and millions of people waiting at red lights for no good reason is a bad thing because the cumulative loss of productivity easily amounts to thousands and thousands of lost man-hours from blindly obeying a simple rule. 10 million people wasting 5 minutes a day amounts to a lot if you think about it.

Of course I'm not saying we necessarily need to maximize output like that. Simply that it has that sort of an effect.
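
Spelled out, the back-of-the-envelope arithmetic is:

    10,000,000 people x 5 min/day = 50,000,000 person-minutes/day
                                  ≈ 833,000 person-hours/day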
antialias_physorg
5 / 5 (1) Apr 27, 2016
Just that there exist continuums and ambiguities which cannot be split in the middle to form a rule for the purposes of self-driving cars and AI

And what makes you think that humans make perfect decisions in ambiguous situations?
Computers may make *different* mistakes - but as long as they make demonstrably *fewer* mistakes than humans in these situations, then saying "but a human may have made a good decision in some of them" is not a viable argument.

"best for all in all cases" rule that can be found, and moreover IS found by the lawmakers.

No. But I think it's far more likely that some guy who thinks about something for 5 minutes is wrong than a team of experts who go through rigorous testing and analysis before putting a rule on the books.

(Same reason why I think the AGW deniers or the ad-hoc-theory-of-everything proponents are such buffoons. They may be right - but it's *far* more likely that they are wrong than they think.)
Eikka
not rated yet Apr 27, 2016
The idea is that when a person understands the reason behind the rule, they're better able to apply the rule constructively. They may even choose to always stop and wait at red lights simply so that they won't accidentally forget - which isn't necessary for everyone.

It's more robust than simply following a rule.

Which goes to the point of why the AI is not robust: it fundamentally does not understand why the rule exists. It doesn't understand why it can't cross the road, just that it's not allowed to, and that's why it needs explicit rules for absolutely everything.

It's like binding a pudding with rubber bands - if you don't get absolutely everything covered, it will just break up in your face - and the difficulty is in doing it by adding one rubber band at a time - one more rule, one more exception.
antialias_physorg
5 / 5 (1) Apr 27, 2016
I would rather teach my children to ignore rules when they become irrelevant.

What you teach your children is something else.
But you teach children by example, whether you notice them watching you or not. If you behave like an idiot they will think it's OK to behave like an idiot. No matter whether you're right next to them or half a block away.

I would say millions and millions of people waiting at red lights for no good reason is a bad thing because the cumulative loss of productivity

Yay - let's all be robots and be maximally productive!

What. A. Crock. Of. Shit. That. Argument. Is.

Even for you (I knew there was a reason why I have you on ignore).
(Hint: Do you work 5 minutes less because you had to wait 5 minutes at a traffic light that morning? No. You put in the 8 hours anyways. No productivity lost.)

One child's life ain't worth a minute's wait for you? Yeah. Ultra-capitalist-Eikka strikes again. Well done.
Eikka
not rated yet Apr 27, 2016
And what makes you think that humans make perfect decisions in ambiguous situations?


They don't, but that's vastly better than being completely unable to.

Again, how do you program spontaneity? How do you encode understanding of ambiguity into a program that is fundamentally not capable of understanding?

than a team of experts who go through rigorous testing and analysis before putting a rule on the books.


I'm not that optimistic about most laws and rules. A good example of your case would be things like electrical safety codes. A good example of my case would be local speed limits. Much of the time it's just about people doing -something- whether it works or not. Or, "There ought to be a law against that".
antigoracle
5 / 5 (1) Apr 27, 2016
Make something foolproof and the world will give you a "better" fool.
Eikka
not rated yet Apr 27, 2016
Yay - let's all be robots and be maximally productive!

What. A. Crock. Of. Shit. That. Argument. Is.


I already told you I didn't argue for that angle. Your complaint doesn't apply.

I simply pointed out that this effect exists - that there indeed are adverse consequences to masses of people following what on the surface looks like a harmless rule, even when it's applied where it's not needed.

But you didn't think about that, did you?

(I knew there was a reason why I have you on ignore).


That's extremely childish of you.

(Hint: Do you work 5 minutes less because you had to wait 5 minutes at a traffic light that morning? No. You put in the 8 hours anyways. No productivity lost.)


That's an over-simplified version. You could just as well be standing at traffic lights during your working hours, or get 5 minutes less (personal) work done at home, which counts just as much in overall productivity as work in the office.
Eikka
not rated yet Apr 27, 2016
What you teach your children is something else.
But you teach children by example, whether you notice them watching you or not. If you behave like an idiot they will think it's OK to behave like an idiot. No matter whether you're right next to them or half a block away.


That's your interpretation of the scene. "An idiot breaking the rules". You put the rule above everything else.

What a child sees is just a man walking across the street. Should a man not cross a street?

It's your task to explain why the man should not cross the street: "the red light is on". The child then asks "Why is the man not allowed to cross the street when the red light is on..."

And what do you say to that? Just "Shut up and do as I say"?

It's far more productive to say that the red light means cars can drive, and so the man is putting himself in danger. When the child understands that, they're better able to protect themselves than with the mere observance of "always wait at red lights".
Eikka
not rated yet Apr 27, 2016
Furthermore, if the child is too small to understand the reason behind the red light, they're too young to be let into traffic unsupervised because they can simply disobey or forget the rule arbitrarily.
Eikka
5 / 5 (2) Apr 27, 2016
Make something foolproof and the world will give you a "better" fool.


Or, make the perfect law and the world will give you the perfect criminal.

Or the perfect lawyer.

There is a case for ambiguity and loose interpretation in law, because people try to poke holes in everything. Making it too precisely worded and defined means the law doesn't capture a whole lot of fringe cases, which people then start to use to bypass the law.

In other words, following the word instead of the spirit of the law. When you allow that, the problem becomes plugging all the holes in the law, and then you get volumes and volumes - whole libraries of legal text simply describing what some concept, say, "assault" means.

The problem for computers then is that they can only follow the word, because they don't understand the intent.
Eikka
not rated yet Apr 27, 2016
One child's life ain't worth a minute's wait for you? Yeah. Ultra-capitalist-Eikka strikes again. Well done.


Btw. That is surprisingly low argumentation out of you, Anti-Alias.

"Think of the children" and strawman-beating rolled in one in a sweet wrap of hyperbole. I feel like I should be offended, unless I was communicating poorly.
xponen
not rated yet Apr 28, 2016
My confidence in the safety of our technology is low because of things like software bugs: it's difficult to fix them all even if it cost $0 to fix.

Physical technology is worse because of how much it costs to change. Huge inertia in cost and time makes people stick to ageing technologies and systems, and slows down fixes.
gkam
1 / 5 (3) May 01, 2016
"because people try to poke holes in everything"
----------------------------------

What kind of person would do that?

Not a good one, for sure...
