Musk is right that automated killer robots are a real threat to our humanity - DebateIsland.com
Musk is right that automated killer robots are a real threat to our humanity
in Military

By agsr 858 Pts edited July 2017
Musk warned us of a worst-case threat to humanity: automated, AI-based killer robots.  It looks like this is already starting to come true with the technology below, and it is scary to think how it could evolve without appropriate governance.

Elon Musk's Worst Nightmare: Russian AK-47 Maker Builds Fully-Automated "Killer Robot"


http://www.zerohedge.com/news/2017-07-16/elon-musks-worst-nightmare-russian-ak-47-maker-builds-fully-automated-killer-robot

I agree with Elon Musk that we need to establish governance for something like this before it's too late.

The debate over the role robots will play in the future of warfare is taking place right now, as the development of automated lethal technology truly begins to take shape. Predator-drone-style combat machines are just the tip of the iceberg for the lethal weaponry to come, and some are worried that when robots are calling the shots, things could get a little out of hand.

Recently there has been some debate at the U.N. about “killer robots,” with prominent scientists, researchers, and human rights organizations all warning that this type of technology – lethal tech that removes the need for human control – could cause a slew of unintended consequences to the detriment of humanity.

joecavalry
  1. Live Poll

    Musk is right that automated killer robots are a real threat to our humanity

    15 votes
    1. Real threat and something needs to be done
      46.67%
    2. Too theoretical, just let it evolve naturally
      53.33%
Live Long and Prosper

Persuaded Argument

  • Winning Argument ✓
    @agsr This is just the continuation of a trend that is at least as old as the 2nd Industrial Revolution.  In WWI, the US began development of the Kettering Bug.  In WWII, Joe Kennedy Jr. was killed in a mission that used a remotely guided, explosives-laden bomber as a flying bomb.  As tech evolves, it inevitably finds its way into weapons systems; in fact, military applications have been the driving force behind a majority of the most significant tech advancements, and there's no indication that this will change. 

    Assuming something should be done about it, what could be done?  A new treaty?  That wouldn't be any more effective than the 1922 Washington Naval Treaty's limits on battleship construction, or the current nuclear nonproliferation treaty.  What would such a treaty cover?  Development of more accurate and automated systems will never stop, even if an operator is required by treaty to make the final determination.  R&D of new and better sensors will continue.  The only difference between a fully autonomous system and a human-operated mechanized system like a BattleMech is a software switch.  How do you ensure no one develops a software switch?






Arguments

  • I think that Elon Musk has been watching too many Terminator movies. Sure, it's a concern, but so are other uses of AI technology that will have a more immediate impact.  I don't think that implementations such as the ones mentioned in this article need to make everyone panic, since they can still be controlled by people as part of their programming.
    ale5
  • ale5 245 Pts
    @islander507, I respectfully disagree. That is a real concern and we should take action.  Soulless robotic killers with Artificial Intelligence are definitely a reality.  The machines in this article are already a scary start.  I would follow Elon Musk's suggestion and form a governance body to prevent a real-life Terminator story.
    agsr
    It's kind of fun to do the impossible
    - Walt Disney
  • CYDdharta 1225 Pts
    It would seem we're a long way from this becoming a serious concern. 


    Security Robot ‘Drowns’ Itself in Fountain
  • @CYDdharta, it will have to evolve for many generations before it can avoid incidents like the one in your article.  That said, it will quickly improve with the help of advances in AI and image recognition.  The Terminator, or Doctor Who's Daleks, come to mind. 
  • I believe a lot of what Musk says and appreciate his passion for technology, but I don't believe his claims about AI being a threat to humanity.
    DebateIslander and a DebateIsland.com lover. 
  • Remote drones are a more realistic "threat" than AI robots.  AI experts (like interdimensional travel experts) have too many issues making a robot autonomously interact with the world.  There are too many variables, especially in the chaos of war.  Stopping a war with other humans, or winning one if it comes down to it, should be the bigger concern.  That and civil unrest.
  • agsr 858 Pts
    @Rodinon, I agree with your assessment about the more immediate concerns.  At the same time, AI robots are still a concern for the future and, once the tech evolves, they could become problematic. 
    Live Long and Prosper
  • @Rodinon

    I would say if it is ever a problem, it is likely a very long way off.  At least 100-300 years.  And here is my reasoning.  We don't know how human, or even animal, intelligence works, let alone how to replicate the human brain.  Is intelligence even well defined?  We have some general ideas, but these have yet to translate to anything as intelligent as some invertebrates.  I imagine a war against an AI opponent would likely be very easy to win, because so much falls outside of what they could currently be programmed to deal with, and they lack the broad mental skillsets of humans or even animals.  They could initially start with effective ways to kill us, but the ingenuity of humans and the chaos of the real world would quickly move the AI into situations they are not equipped to comprehend or deal with.  This threat pales in comparison with the very real threat of actual humans who are actively doing harm to others, right now.  And it will continue.  Civil unrest and wars are and will continue to be common.  People of various factions will continue to infiltrate public gatherings and use whatever is at their disposal to end the lives of their fellow humans.  Sharp objects from the kitchen.  Ordinary vehicles.  Flammable liquids.  Easily obtainable harsh chemicals.  Manufactured weapons.  Blunt objects.  Sticks.  Rocks.  Their fists.  The potential for a determined person of average intelligence to cause real and more effective damage trumps any threat from a well-equipped super genius with a breakthrough in AI.

    That's why part of me wonders why everyone is afraid of AI now.  It's not even in its infancy; it's only a developing concept.  But when I look out over my entire life, the bloodshed flows in torrents.  In my own country, in my own adult lifetime, restrictions have been placed on civilians in response to actual attacks; if I had told people in 1990 about them, they would have thought I was talking about dystopian fiction.  What do you mean the government took over airport security and pats down all passengers or makes them take off their belts and shoes?  Yet we face legal action if we offend the same type of people who carried out these attacks?  Something is rotten in Denmark.
  • The "Vietnam War" was happening with or without us. It started before we got involved and it continued when we got out. Yes, we could've stayed out of the conflict. We could have ignored our ally's plea for help, we could've just left when the enemy drug us into the war by attacking one of our naval vessels. But it would've happened either way.

  • That is ridiculous!  :D
  • MayCaesar 2803 Pts
    agsr said:
    I agree with Elon Musk that we need to establish governance for something like that before it becomes too late.
    I think this is the part which is a much bigger threat to humanity than any robot killers we can possibly construct.

    First of all, Elon Musk is a very successful businessman and he knows a lot about economics, science, and technology - but his philosophical musings tend to be somewhat superficial, so I would not take him as an authority in this matter.

    Now, the danger of being destroyed by our own technology getting out of hand is always there. However, alongside the technological evolution, our security measures are also evolving. Remember how in the '90s hackers had almost free access to almost every computer on the planet, whereas nowadays they can usually only get in by tricking users into disclosing their private data? Antivirus technology simply evolved much faster than hacking techniques, and in the end the Internet became much more secure. I think this is the case with automation as well: security protocols have reached a very high level of reliability, and where in the past we had regular worker deaths in factories, we now have automated systems working full time with little-to-no maintenance needed and almost zero emergencies.

    The proposal to establish governance over killer robots, however, is truly something that can lead to a dire end. When governance is established, it has to be put in someone's hands - say, the hands of the government, or of a private company affiliated with the government. That company or government then has control over the robots, over their security protocols, and over their deployment. How likely is it that this control will never be abused, and that robot killers far surpassing any other military force in strength will never be used against the very people they were intended to protect?

    What I see as a more realistic and viable solution is the development of decentralized security protocols by private companies, independent analysis of them by scientists and programmers, and then deployment of the robots in the field once their reliability has been established. The more independent the deployed robots are from external intrusion, the harder they are to misuse in the first place. 

    Just like Musk's Teslas are not governed in any special way, despite being essentially automated cars that could in theory do a lot of damage, robot killers should not be specially governed either. Technology itself is not nearly as scary as what a human can do with it.
  • I think that Elon Musk has been watching too many Terminator movies. Sure, it's a concern, but so are other uses of AI technology that will have a more immediate impact.  I don't think that implementations such as the ones mentioned in this article need to make everyone panic, since they can still be controlled by people as part of their programming.
    I agree; however, a simple regulation making engineers and scientists slow down to ensure the safety of humans and all other living beings seems too important to pass up. Better safe than sorry. 