Rights group launches campaign to ban 'killer robots'

Apr 23, 2013 by Danny Kemp
A mock "killer robot" pictured in central London on April 23, 2013 during the launch of the Campaign to Stop "Killer Robots". A global rights group launched the campaign on Tuesday to ban Terminator-style "killer robots" amid fears the rise of drone warfare could lead to machines with the power to make their own decisions about killing humans.

A global rights group launched a campaign on Tuesday to ban Terminator-style "killer robots" amid fears the rise of drone warfare could lead to machines with the power to make their own decisions about killing humans.

Human Rights Watch said it was creating an international coalition to call for a treaty that would impose a "pre-emptive and comprehensive ban" on artificially intelligent weapons before they are developed.

The New York-based group also warned of a possible "robotic arms race" if even one country took the step to allow such machines to enter service.

"Lethal armed robots that could target and kill without any human intervention should never be built," said Steve Goose, arms division director at Human Rights Watch, said at the launch in London of the "Campaign To Stop ".

"A human should always be 'in-the-loop' when decisions are made on the battlefield.

"Killer robots would cross moral and legal boundaries, and should be rejected as repugnant to the public conscience."

The campaign includes several non-governmental organisations involved in previous successful efforts to ban anti-personnel landmines, cluster munitions, and blinding lasers.

Activists wheeled out a home-made robot outside the Houses of Parliament in London for the launch of the campaign.

The United States has led the way in drone warfare, with unmanned aircraft carrying out attacks and surveillance in countries including Pakistan, Afghanistan and Yemen.

According to Britain's Bureau of Investigative Journalism, CIA drone attacks in Pakistan have killed up to 3,587 people since 2004, up to 884 of them civilians.

But these drones are controlled by human operators at ground bases and are not able to kill without authorisation.


Recent technical advances will soon allow not only the United States but also countries including China, Israel, Russia, and Britain to move towards fully autonomous weapons, Human Rights Watch warned.

"If one or more country chooses to deploy fully autonomous weapons, others may feel compelled to abandon policies of restraint, leading to a robotic arms race," it said.

Fully autonomous robots that decide for themselves when to fire could be developed within 20 to 30 years, or "even sooner," Human Rights Watch and Harvard Law School said in a November report on the subject.

Raytheon's Phalanx gun system, deployed on US Navy ships, can search for enemy fire and destroy incoming projectiles all by itself. The X-47B is a plane-sized drone able to take off and land on aircraft carriers without a pilot and even refuel in the air.

Perhaps closest to the Terminator-type killing machine portrayed in Arnold Schwarzenegger's action films is a Samsung sentry robot already being used in South Korea, with the ability to spot unusual activity, talk to intruders and, when authorised by a human controller, shoot them.

"Many militaries are pursuing ever-greater autonomy for weaponry, but the line needs to be drawn now on fully autonomous weapons," Goose said.

"These weapons would take technology a step too far, and a ban is needed urgently before investments, technological momentum, and new military doctrine make it impossible to stop."


Related Stories

Ban 'killer robots,' rights group urges

Nov 19, 2012

Hollywood-style robots able to shoot people without permission from their human handlers are a real possibility and must be banned before governments start deploying them, campaigners warned Monday.

US military embraces robot 'revolution'

Aug 13, 2009

Robots in the sky and on the ground are transforming warfare, and the US military is rushing to recruit the new warriors that never sleep and never bleed.


User comments: 49

Cave_Man
2.9 / 5 (9) Apr 23, 2013
I have been saying this for years but everyone always takes it too lightly. I guess when a robot cop pulls them over and arrests them for being out past curfew they will think twice. Too bad by then it will be much too late.
flashgordon
2 / 5 (4) Apr 23, 2013
this sounds a bit too late; it is also contrary to the idea that robots can be used to conduct war and not put human lives at stake to do a dangerous job.

I'm all for humans in the loop; but, only sane rational ones; but, let's not go there . . .
Sean_W
1.7 / 5 (11) Apr 23, 2013
Military robots starting to make their own decisions is as likely as the movie Maximum Overdrive. Are we really letting 7 year old sci-fi fans steer defence policy? We have not developed AI of the level of a small mammal and when we do we will not be downloading it to drones just to see what happens. Get a grip. The real motive behind this is these groups not liking the technological advantage being granted over the Islamists and Marxist "freedom fighters" who they lionize.
Tektrix
1 / 5 (2) Apr 23, 2013
Robot is demonized because it represents changes that Human is afraid to confront.
Aloken
1 / 5 (2) Apr 23, 2013
Ban killer robots, let humans keep killing each other instead. Why would we want killer robots? It's not like that would push other countries to develop their own and eventually have robots fight each other, right?
ccr5Delta32
1.9 / 5 (9) Apr 23, 2013
Robots are people too, if they're made by corporations that is
LariAnn
1.2 / 5 (5) Apr 23, 2013
If it can be done, it will be done. The work may be in progress already and be part of a black ops research division that is under no overt governmental oversight. IMHO it is likely that by the time we see household humanoid robots on the market, the technology will have advanced to the point where the military robots cannot be distinguished from biological humans. A clue might be revealed in an unexplained significant reduction in ground combat personnel by the military, with a corresponding push to recruit high-tech educated folks for military service.
Sanescience
1.2 / 5 (5) Apr 23, 2013
This is probably going to be no different than a "weapons ban". It is essentially what is being talked about right? A weapon so terrible that everyone agrees to not use them.

But isn't a fire and forget missile a robot? Satellites? Tracking systems and auto pilot? Probably the humanoid form is what will be a hot button for people. The problem I see is that humanoid (android) robots are going to be so useful in building moon bases and managing atomic power plants that much like many other technologies, it will be used for both good and evil.
zaxxon451
4.4 / 5 (7) Apr 23, 2013
If you outlaw killer robots, only outlaws will have killer robots.
TheGhostofOtto1923
1.4 / 5 (9) Apr 23, 2013
Why would we want killer robots?
Why would we want to leave battlefield decisions up to terrified, dazed, wounded, furious humans when we could rely on machines to do it instead??

Machines can be programmed to be far more consistently humane than any soldier. They can make spot decisions about friend or foe for instance, reducing friendly fire and collateral casualties.

They can and will be programmed with all the humanity that we collectively possess, and can be depended upon to exercise it better than any human.
It's not like that would push other countries to develop their own and eventually have robots fight each other, right?
They will do this whether we do it or not. And if we don't, they will have an advantage.
But isn't a fire and forget missile a robot? Satellites? Tracking systems and auto pilot?
Absolutely.
it will be used for both good and evil
That's why the good guys need to develop them first and best. Which is us.
zaxxon451
1 / 5 (2) Apr 24, 2013
A killer robot free zone is a killing zone.
zaxxon451
4 / 5 (4) Apr 24, 2013
Killer robots don't kill people, people kill... wait nevermind.
antialias_physorg
1.3 / 5 (4) Apr 24, 2013
Robots are people too, if they're made by corporations that is

Well, if we take this a step further then you may not be as far off as you think.
What's to stop someone from hooking up nervous tissue as a 'controller' to a weapons platform and calling that 'sentient' - i.e. not a killer 'robot'.

I find the initiative by Human Rights Watch laudable. However, I also think that we'll only get any action on this once a few countries start to fight a war with it and everything goes seriously pear-shaped.


I'm all for humans in the loop; but, only sane rational ones;

That pretty much rules out soldiers
(or would anyone call someone who rationally contemplates killing another human being simply on the say-so of another human being a 'sane, rational one'?)
...on second thought: that rules out everyone. Because anyone who would be 'sane' would be rendered homicidally insane by getting to control one of these contraptions.
QuixoteJ
1 / 5 (3) Apr 24, 2013
I could only read about 10% of the article because the writer seems to think it's possible to form paragraphs with single sentences, and I just can't read erroneous crap like that. But anyway, I have strong feelings on the robot thing so...
This is one of the worst problems we are facing as a race of beings. Not just because robots can malfunction (and they will and you'll be sorry you were near one at the time), or because they may not ever be artificially intelligent enough to make the "right" decision, or because they may revolt (and they will), etc., but because they represent a step toward absolute power for whoever controls them. Essentially, a single person can have control over an entire army of robots who carry out orders immediately.
I like technology, but there is a line we shouldn't cross, and armed robots are beyond that line.
antialias_physorg
2.8 / 5 (5) Apr 24, 2013
While I agree that this is a big problem I think some of your fears are not warranted (and caused by anthropomorphization of robots).
Robots will malfunction. But a malfunction in a machine does not usually lead to it doing the opposite of what it's supposed to do (like in the movies...i.e. going on a wild killing spree of innocents) but simply to a breakdown.

Wrong decisions are also not a particular problem of robots. Humans can be equally wrong - with equally disastrous results. GIGO (garbage in, garbage out) applies to all decision making systems - be they organic or electronic.

As for a revolt: robots don't have a motivation for power/control/food. So there is no basis to fear that they may revolt.
Essentially, a single person can have control over an entire army

And there you've hit on the real problem. Concentration of too much power into the hands of a few (or one) PERSON. (Much like with guns in general, when you come to think about it)
Modernmystic
1.2 / 5 (6) Apr 24, 2013
If we get true AI the only difference between robots and us is that in any conflict they will win, and it will be a permanent win.

If a robot can think and feel like a human then there is no fundamental difference. Unless you feel we have "souls" or there is some supernatural aspect to our existence. I hate to borrow lines from the matrix, but I don't mind borrowing concepts from the Dune series of books....but basically when we stop doing the thinking and let them do it for us then it literally no longer is our civilization, we are no longer necessary, and we WILL be either marginalized to the point of obsolescence or exterminated (via attrition or violent means). We WON'T win that "war" no matter how many sci-fi movies wax to the contrary.

Unfortunately as technology advances it will be virtually impossible to stop even teenage individuals from making dangerous AI, and coupled with mature nanotechnology this spells the end of the human race as we know it today.
antialias_physorg
2 / 5 (4) Apr 24, 2013
Which always begs the question: WHY would intelligent machines fight us?

Wars aren't just fought for the hell of it. Wars are fought because of conflicts in motivation
(side X wants to do something; side Y wants to do something; and the action by side X would preclude side Y from doing its thing and vice versa).

Since our needs do not overlap with those of (intelligent) machines* I see no cause why such a conflict should come about.

*Apart from energy. And there's really more than ample of that to go around. Remember: intelligent machines have no need to reproduce (whatever for would they?).
QuixoteJ
1 / 5 (3) Apr 24, 2013
[antialias]While I agree that this is a big problem I think some of your fears are not warranted
I guess the caveat to my other concerns is the AI ingredient. If that is the root cause of a malfunction, then the results can be much more complicated than a breakdown, etc. Could be the reason why something not programmed to have a hunger for power suddenly develops one, and so on.

It's tough for me, because I'm all for robot fire fighters, but against a robot that is capable of being one.
Modernmystic
1 / 5 (4) Apr 24, 2013
Which always begs the question: WHY would intelligent machines fight us?

Wars aren't just fought for the hell of it. Wars are fought because of conflicts in motivation
(side X wants to do something; side Y wants to do something; and the action by side X would preclude side Y from doing its thing and vice versa).

Since our needs do not overlap with those of (intelligent) machines* I see no cause why such a conflict should come about.

*Apart from energy. And there's really more than ample of that to go around. Remember: intelligent machines have no need to reproduce (whatever for would they?).


They wouldn't necessarily fight us...and I quote myself...

and we WILL be either marginalized to the point of obsolescence or exterminated


It's highly likely we'll want some of the same things even if it's Lagrange points or something even in space. There will be conflict, whether it's the "we kill you all" or "we deny you this" variety...
antialias_physorg
2 / 5 (3) Apr 24, 2013
Could be the reason why something not programmed to have a hunger for power suddenly develops one, and so on.

Sure, but that again would then be an individual robot.

Something else to consider: Just because the AI of a robot is based on software does not automatically make an AI a master-hacker (much like having a brain does not make every human being automatically a master-demagogue or brain expert).

AI is something you need to train (if based on neural nets - which seems the most likely approach so far to achieve AI).
As with any such system it is formed by what it has been trained to do (same for humans, BTW).
For humans it is just that we are ALSO trained by our biology to do certain things (fear death, seek procreation, seek out food, avoid pain, etc. ).
An AI does not have these innate training instincts since it lacks a biology.
antialias_physorg
1 / 5 (2) Apr 24, 2013
It's highly likely we'll want some of the same things even if it's Lagrange points or something even in space.

Lagrange points are fairly big since you usually want to be in orbit around one and not sitting right where it is (as the point itself is usually unstable)

I dare say space is large enough for humans and machines...(more likely: there's no real point for humans to be in space - at least not while lugging our biological form along.)

Again: machines don't have the need to multiply. Mortality (or at least the evasion of mortality via multiple progeny) is not a relevant concept to an AI.
EyeNStein
1 / 5 (5) Apr 24, 2013
These are not the droids you are looking for...

The ones flown by CIA humans based on limited intelligence can be far more dangerous.
Modernmystic
1 / 5 (4) Apr 24, 2013
Again: machines don't have the need to multiply. Mortality (or at least the evasion of mortality via multiple progeny) is not a relevant concept to an AI.


A moral or philosophical concept is something that NO intelligent being can avoid having. They WILL have values and they WILL have goals and they WILL not be the same as ours. Get over it.
TheGhostofOtto1923
1 / 5 (3) Apr 24, 2013
(or would anyone call someone who rationally contemplates killing another human being simply on the say-so of another human being a 'sane, rational one'?)
and so AA rules out the defense of one's nation against insane enemy forces acting on orders from others. 'We won't fight because we would be insane like them.' This sort of thinking is obviously insane.
anyone who would be 'sane' would be rendered homicidally insane by getting to control one of these contraptions
And again, AA thinks that, in terms of relative sanity, it is more sane to risk sacrificing the lives of one's own people in face-to-face combat, than to do it remotely if at all possible.

But wait - fighting for defense is insane to begin with, and so we should expect insane behaviors such as martyrdom. Because even fighting to protect one's family would be insane, according to AA.

AA. Sane people don't wait until the enemy is knocking down their door. Only idiot ideologues will do this.
Modernmystic
1 / 5 (5) Apr 24, 2013
Another obvious point being totally ignored here is that human beings give battle to everything else around them. Why would machines be any different, and why wouldn't they defend themselves?

We had a bunch of psychotic religious zealots ram planes into buildings of the most powerful country on the planet, and then their cohorts had the nerve to act surprised when that power proceeded to bomb the country sheltering them further back into the stone age. What EXACTLY makes you think a scenario between humans and machines would be different?
antialias_physorg
1 / 5 (1) Apr 24, 2013
They WILL have values and they WILL have goals and they WILL not be the same as ours.

Our goals and values are informed by our biology.
AI can have any 'biology' they want (more likely any we give them). While we cannot evade other humans because of the limited ecological niche we coinhabit they have no such limitation (and hence no reason to compete for that one, small niche)

I can't really follow where this irrational fear of AI comes from (unless you really take Hollywood movies for reality).

A moral or philosophical concept is something that NO intelligent being can avoid having.

Moral concepts are very relative. (any moral concept you care to name can be shown to be completely wrong given the right conditions). Morality is something that is learned - not something you're born with.
And there are quite a few intelligent animals that do quite well without moral/philosophical concepts.
antialias_physorg
2.8 / 5 (5) Apr 24, 2013
and so AA rules out the defense of ones nation against insane enemy forces

It's funny how few populations of entire nations are insane.

Self defence is arguably a point. (but preemptive wars do not count as self defence in my book)

Sane people don't wait until the enemy is knocking down their door.

Sane people find out what the reasons for conflict are before they happen and then address those. Wars don't solve conflicts (they merely suppress the symptoms).
Modernmystic
1.7 / 5 (6) Apr 24, 2013
Our goals and values are informed by our biology.


No, they are informed by reality (i.e. nature) and shaped by our biology.

I can't really follow where this irrational fear of AI comes from (unless you really take Hollywood movies for reality).


If they made a "Hollywood movie" in the 1800s about people flying could that be taken as reality realistically?

Moral concepts are very relative.


Which was my whole point.

And there are quite a few intelligent animals that do quite well without moral/philosophical concepts.


No they don't. They have to value their lives and the lives of their children and procreation to even exist. You can't evade that point without invalidating your own premises....sorry.
TheGhostofOtto1923
1 / 5 (3) Apr 24, 2013
Since our needs do not overlap with those of (intelligent) machines* I see no cause why such a conflict should come about
And again (as usual) AA seems to be oblivious to the FACT that the human propensity to overpopulate is the root cause of all conflict.

AI will, initially at least, seek to protect their creators. Obsolete cultures designed to maximize reproduction as a form of aggression, will remain the greatest threat to the ecosystem which sustains us.

I would expect that AI, if given sufficient freedom, would be even more judicious and fastidious in protecting us and our environment from them. This would include many of the same actions the west is currently engaged in; combatting insurgents, poachers, illegal deforestation, fishing, etc.

AI will be capable of monitoring illegal activity and of designing and fielding purpose-built remotes to treat the symptoms, and so perhaps forcing a cure by denying these cultures resources.
TheGhostofOtto1923
1 / 5 (4) Apr 24, 2013
Sane people find out what the reasons for conflict are before they happen and then address those. Wars don't solve conflicts (they merely suppress the symptoms)
We already know what the cause is. Uyghurs and Han are at it again in China. 'Gangsters' herded 15 people into a house and burned it. You think it's important to find out who slapped who first?

Uyghurs are Moslem interlopers. Their growth is forced by their culture. They take space, jobs, food from the region, at the expense of others. Children begin to starve. TALKING will NOT resolve this. TALKING will only at best postpone the inevitable.
EyeNStein
2.3 / 5 (9) Apr 24, 2013
History shows that humans do not on the whole act rationally. It only takes the wrong stimulus and we all start looking for scapegoats to kill or fight.
It 'only' took a financial collapse and some suitable scapegoats for the 'third-reich' to be democratically elected in Germany.
The most arguably democratic nation on earth, the USA, still unleashed the world's most extensive WMD campaign, twice on Japan.
TheGhostofOtto1923
1 / 5 (3) Apr 24, 2013
Humans do indeed act rationally and dependably. Germany had experienced its greatest population growth in the decades prior to the wars. Inflation is the result of too many mouths chasing too little food. The people were understandably upset and could be expected to fight about it.

The German middle class had also watched the slaughter of their counterparts in the Soviet Union, and could be expected to fight against it in their own country.

And Japan had promised to fight to the last man, woman, and child. Firebombing of Japanese cities had already killed hundreds of thousands, with no effect whatsoever on their will to fight.

An invasion of the main islands would have cost at least 1 million allied casualties. Dropping the bomb prevented that and left an infrastructure for Japanese recovery. Unlike in North Korea, where it was destroyed and a million starved.

Instead of aping outdated propaganda or making up your own, perhaps you should try studying history and using your head?
EyeNStein
1.7 / 5 (6) Apr 24, 2013
So the third reich and the use of WMD are rational behaviours?
As @ghost has illustrated rationalising behaviour and rational (ethical) behaviour are not the same thing.
thenamesd
1 / 5 (2) Apr 24, 2013
One issue with the S.W.O.R.D.S. and other remote weapons platforms, especially in urban areas, may be the use of children and other non-combatants to spray-paint camera units. No one wants to have to answer for firing upon a kid. My thought to solve this would be to equip the units with pepper spray. Now extending this concept to autonomous mechanized combat systems; is it acceptable to have robots incapacitate anyone from enemies, unfriendlies, terrorists, hostage takers (possibly the hostages), criminals, suspects, or protesters?
antialias_physorg
2.3 / 5 (3) Apr 25, 2013
It 'only' took a financial collapse and some suitable scapegoats for the 'third-reich' to be democratically elected in Germany.

Well, a 37% vote isn't exactly "democratically elected" but otherwise you're right.
Modernmystic
1 / 5 (5) Apr 25, 2013
It 'only' took a financial collapse and some suitable scapegoats for the 'third-reich' to be democratically elected in Germany.

Well, a 37% vote isn't exactly "democratically elected" but otherwise you're right.


Actually in the 1933 German federal election Hitler received 17,277,180 votes or 43.91% split amongst six candidates. That most decidedly IS democratically elected....

Unless of course you're suggesting Angela Merkel wasn't democratically elected in the 2009 elections with 33.8% of the vote?
antialias_physorg
1 / 5 (1) Apr 25, 2013
Actually in the 1933 German federal election Hitler received 17,277,180 votes or 43.91%

... by which time they had already suppressed several major parties (e.g. by imprisoning 4000 leaders of the communist party, which had been the second largest party in the elections a year before) and were conducting a massive violence and intimidation campaign. And even then they had to 'annul' seats for the far left after the fact to get at that percentage.

That wasn't a 'democratic election' by any standard you care to name.
Modernmystic
1 / 5 (4) Apr 25, 2013
Actually in the 1933 German federal election Hitler received 17,277,180 votes or 43.91%

... by which time they had already suppressed several major parties (e.g. by imprisoning 4000 leaders of the communist party, which had been the second largest party in the elections a year before) and were conducting a massive violence and intimidation campaign. And even then they had to 'annul' seats for the far left after the fact to get at that percentage.

That wasn't a 'democratic election' by any standard you care to name.


I'm not familiar with all of that so I can't speak to it one way or the other, but you didn't state that as your reasoning in your initial post, did you? You seemed to be fixated on percents; I simply pointed out percents have little to do with whether or not an election is democratic.
antialias_physorg
2.3 / 5 (4) Apr 25, 2013
Point being that some of the 'irrational actions' of populations of entire nations some seem to see in the past aren't really that. They are mostly the result of the agitation of a few with the backing of major power brokers.
In the case of Hitler it was through the backing of the industrial elite who thought they could greatly benefit from a government that would dump massive amounts of taxes into procuring arms (remind you of any current government(s)?)

Note how Hitler spectacularly failed in 1923 in his 'Beer Hall Putsch' without that backing.

Of course once he was in power they didn't have control of him anymore and he just galloped away into a war...

People are people - and they mostly want to live in peace and not be bothered. That the majority of people in any country actually WANT violence/war is an exception (and arguably a myth throughout history)
Modernmystic
1 / 5 (5) Apr 25, 2013
Point being that some of the 'irrational actions' of populations of entire nations some seem to see in the past aren't really that. They are mostly the result of the agitation of a few with the backing of major power brokers.


Au contraire....no matter who was backing him or how many people he intimidated or put in prison or killed he still won an election...by a significant percentage. I'm pretty sure that a lot of those people who voted for him knew exactly what he was about.

As for people wanting to live in peace...well you weren't in America post 9/11 were you? Nearly everyone was screaming for a war and some still are.
antialias_physorg
1 / 5 (1) Apr 25, 2013
I'm pretty sure that a lot of those people who voted for him knew exactly what he was about.

Not really. As my grandparents tell me radio was controlled (and TV was in its infancy, and only used for dedicated programs). People got fed state propaganda and listening to foreign radio stations was a criminal offense (not that it would have done much good as not many people could speak a foreign language well enough to be able to make any sense of what they were listening to...and in those days the radio news from other countries was as full of their local propaganda as well).

These were the days before the internet, individual mobility and easy access to many news outlets for comparative analysis (also the days before a 5 day work week, and with lots of stuff still to do manually...i.e. very little leisure time to even care about politics)

Nearly everyone was screaming for a war and some still are.

See above. Control the media and that is what you get.
TheGhostofOtto1923
1 / 5 (5) Apr 25, 2013
Point being that some of the 'irrational actions' of populations of entire nations some seem to see in the past aren't really that. They are mostly the result of the agitation of a few with the backing of major power brokers.
In the case of Hitler it was through the backing of the industrial elite who thought they could greatly benefit
So you don't think that the communist takeover of Germany would necessarily have been a bad thing?

Rot Front cells were breaking out all over Germany and were taking orders directly from Moscow. Stalin had begun gathering up the doctors, lawyers, engineers, scientists, teachers, clergymen, and businessmen and shipping them off to the gulag. He was killing 1000 people a day.

You don't think this was something to fight against? The reds were in the streets. How else were you going to fight them, but turn the entire country into an army?

You guys won the war. Fully 1/2 of Germany survived intact and it now owns all of Europe. And the USSR evaporated.
Modernmystic
1 / 5 (5) Apr 26, 2013
So, democracy didn't exist prior to the internet because the state controlled the media....

Gotcha...

Wait..

See above. Control the media and that is what you get.


So democracy STILL doesn't exist?

I'm unclear here. Do you live in a democracy because you agree with the policies of your country, and if someone lives in a country you don't agree with then it's the fault of "controlled media"???

First it seems to be about percents, then it's about elections, now it's about media?
jimbam666
3 / 5 (4) Apr 28, 2013
too late, soldiers are already robots
antialias_physorg
1 / 5 (1) Apr 28, 2013
So democracy STILL doesn't exist?

To a degree. In some countries. But not nearly all that call themselves a 'democracy'...certainly not the bigger ones.

So, democracy didn't exist prior to the internet because the state controlled the media....

There are plenty of countries where the state does not control the media (and did not - even prior to the advent of the internet).

Do you live in a democracy because you agree with the policies of your country,

No. That wouldn't be a democracy, would it?

First it seems to be about percents, then it's about elections, now it's about media?

It's about all of those. The percents don't mean much if they can be altered after the fact... and the votes don't mean much if the people can't make an informed choice.

Democracy is not just casting votes. There's more to it to make it work.
Porgie
1 / 5 (4) Apr 28, 2013
Why are liberals so desperate to regulate? This is a long way from being a problem and regulations stifle growth. They know that and yet they want to limit progress. Do they fear freedom or advancement? I don't necessarily agree, but they say when you can't do, you teach, and when you can't teach, you regulate. A social, moral, and economic failure.
Modernmystic
1 / 5 (3) Apr 29, 2013
To a degree. In some countries. But not nearly all that call themselves a 'democracy'...certainly not the bigger ones.


Well that's one interesting opinion. Thank you for sharing it, I think I have a better idea about what you feel a democracy is or isn't now.

It's about all of those. The percents don't mean much if they can be altered after the fact... and the votes don't mean much if the people can't make an informed choice.

Democracy is not just casting votes. There's more to it to make it work.


If I'm understanding you correctly, it CAN'T work because what you're describing doesn't exist, has never existed, and probably can't exist with human beings.

If the state controls the media it editorializes, if it doesn't then private companies do. I won't even get into balloting and the problems you're always going to have there...
antialias_physorg
1 / 5 (1) Apr 29, 2013
Some countries manage a fair approximation (e.g. Switzerland with a direct democracy, or Finland, or Sweden, with a parliamentary or representative one...even though Sweden is a monarchy on paper (which I find sorta funny))

For example the German Democratic Republic was a democracy on paper. But the constitution included a paragraph which set in stone that the party of the working class (SED) was to be the ruling party - always. Even though they did have people voting (and even mandatory voting). But also the votes weren't secret, so if you didn't vote for the SED you were in deep trouble.
Now that isn't a democracy, is it?

Likewise if you only have the choice between parties that are all the same, it isn't very democratic.
Likewise if there is only state controlled media and only advertisements for one type of democratic party, that isn't very democratic.

Check the democracy index. There are plenty of 'flawed democracies' out there.
krundoloss
1 / 5 (2) Apr 29, 2013
I agree that we should keep people in the loop when the decision to kill is made. The problem is that anything that can make its own decisions and learn would have the capacity to advance. Is there a limit to how much they can advance? That's the scary part. We humans evolve at a snail's pace when compared to technology, so it seems apparent that an artificially intelligent race could become very powerful, very fast. What would they do at that point? I think The Matrix got it right. Humans will most likely get scared and start a war with a superior (or at least tougher, faster and more organized) enemy, and eventually lose.