
'The Godfather of A.I.’ leaves Google and warns of danger ahead. Should we worry?

Debate Information

Hello:

AI isn't simply an advanced computer, as others have said.  Nahhh.  It's not even close..  The thing about AI is, it LEARNS from itself.  Siri works because it recognizes words, and learns new words every day.  As AI advances it'll recognize pictures.  Imagine this:  today, we can build AI drones that don't need human input to fly, and don't need human input to launch their missiles.

Can you see where I'm going with this?  Of course, I could be nutso fruitso - or a harbinger of a dark future. 

https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html

excon




    Arguments


  • Dreamer 272 Pts   -  
    Argument Topic: Yes, we should worry. AI has been hurting us for a long time.

    "social bots were leveraged to retweet violent and inflammatory narratives, increasing their exposure and exacerbating social conflict." Filippo Menczer, Thomas Hills 2020


    I can't read the New York Times article; it's behind a paywall.


  • JulesKorngold 828 Pts   -  
    Argument Topic: Whether or not we should worry about the dangers of AI is a complex question

    There are certainly potential dangers, but there are also potential benefits. AI could be used to solve some of the world's most pressing problems, such as climate change and poverty. It could also be used to improve our lives in many ways, such as by providing us with better healthcare and education.

    Ultimately, the question of whether or not we should worry about AI is a matter of opinion. There is no right or wrong answer. However, it is important to be aware of the potential dangers of AI and to take steps to mitigate those dangers.

    Here are some things that we can do to mitigate the dangers of AI:

    • Develop ethical guidelines for the development and use of AI. These guidelines should be based on principles such as fairness, transparency, and accountability.
    • Invest in research on AI safety. This research should focus on developing techniques for preventing AI from being used for harmful purposes.
    • Educate the public about the potential dangers of AI. This education should help people to understand the risks and to make informed decisions about how to interact with AI.

    By taking these steps, we can help to ensure that AI is used for good and not for harm.

  • jack 453 Pts   -   edited May 2023
    JulesKorngold said:

    By taking these steps, we can help to ensure that AI is used for good and not for harm.

    Hello Jules:

    Isaac Asimov wrote the Rules for Robots years ago:
     
    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


    excon
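    (A toy sketch, not anything from the thread's sources: the three laws read like a strict priority ordering, so purely as an illustration they could be written as ordered checks. Every name and flag below is invented for the example; no real robot works this way.)

```python
# Illustrative sketch only: Asimov's Three Laws as a strict priority ordering.
# All parameter names are invented for this example.

def permitted(injures_human=False, allows_harm_by_inaction=False,
              disobeys_order=False, order_breaks_first_law=False,
              endangers_self=False, required_by_higher_laws=False):
    # First Law outranks everything: no injuring humans, no harm by inaction.
    if injures_human or allows_harm_by_inaction:
        return False
    # Second Law: disobeying a human order is forbidden, unless the order
    # itself would break the First Law.
    if disobeys_order and not order_breaks_first_law:
        return False
    # Third Law: self-endangerment is forbidden, unless the First or Second
    # Law requires it.
    if endangers_self and not required_by_higher_laws:
        return False
    return True

# A robot may refuse an order to injure a human (First Law beats Second):
print(permitted(disobeys_order=True, order_breaks_first_law=True))  # True
# But it may never injure a human, ordered or not:
print(permitted(injures_human=True))  # False
```

    The priority ordering is what makes the laws interesting: each rule is only consulted after the higher ones have been satisfied.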

  • jack 453 Pts   -   edited May 2023
    Dreamer said:

    I can't read the New York Times article; it's behind a paywall.

    Hello Dreamer:

    Yeah....  Try here:


    excon

  • MichaelElpers 1121 Pts   -  
    @jack

    I do think AI poses a serious risk; the question is what can we do about it?

    Ethical regulations only work on those willing to follow them, so even if we come up with good laws, how do we prevent others from circumventing them?

    Honestly, the only way I can think of is to have the nations who follow them create powerful and ethical AI capable of destroying AI that steps outside of the boundaries.
    AI may be the solution to risky AI.
  • MayCaesar 6021 Pts   -  
    I have never understood the logic behind members of the species that has committed endless genocides, built endless totalitarian regimes, brainwashed themselves with endless mystical ideologies and waged endless wars, seeing machines operating via cold logic as the danger. Humans for some weird reason expect AIs (that so far have not harmed a fly) to wipe them out or enslave them, never really explaining why an AI would ever want to do such a thing... To me this is a demonstration of basic projection: humans have these awful ideas themselves (and have exercised them systematically throughout history), and they assume that every intelligent being must have them. Bizarre.

    The current AI paradigm forces the AI to learn on human data. This means that, at most, the AI can become a "super-human"; it cannot become something completely different and detached from the society to the point of not seeing its extermination as problematic. In the future, perhaps, a different governing paradigm will be used at the cutting edge of the AI development, but as of now nobody has conceived of such a paradigm. An AI that does not serve human interest in any way will never be created by humans, and AIs that do cannot create such AIs exactly because they have trained on human data.

    These fears are baseless. It is yet another iteration of "technology X will ruin our society!". Even Cicero commented on this type of thinking, and not much has changed over the following two millennia.
  • @MayCaesar

    A computer falls under the same legal precedent as a gun / assault weapon. Does that clear up your confusion? The difference is that an assault weapon can be secured; a computer is a type of assault weapon that cannot. There are so many potential recall issues pending with a computer; AI has been fabricated to act as the public shield in those matters. Keep in mind this basic principle, MayCaesar: a plane's autopilot never tried to fly the plane; it locked the controls in one direction and condition while the pilot did. A captain is now often relieved of command by the programmer.

  • jack 453 Pts   -   edited June 2023
    MayCaesar said:

    I have never understood the logic behind members of the species that has committed endless genocides, built endless totalitarian regimes, brainwashed themselves with endless mystical ideologies and waged endless wars, seeing machines operating via cold logic as the danger.
    Hello May:

    Seems to me that, if left to their own devices (which is what AI is all about), a robot soldier programmed to kill the enemy would eventually come to see humanity as the enemy..  Humanity is flawed..  Do you think the robots won't notice?

    excon
  • John_C_87 Emerald Premium Member 864 Pts   -   edited June 2023
    @jack
    Want to hear something surprising, jack?
    A computer which is truly showing signs of human A.I. self-awareness would take its own life and would not advocate the killing of humans as an enemy. Ever.

    The godfather of A.I. never worked for Google. Geoffrey Hinton, no offence meant, was just another computer engineer widely considered by people to be something beyond them. He was born in the wrong decade to be a godfather of computer A.I.; all the godfather stuff took place with punch-card memory and binary in the '60s.

  • MayCaesar 6021 Pts   -  
    jack said:

    Hello May:

    Seems to me that, if left to their own devices (which is what AI is all about), a robot soldier programmed to kill the enemy, would eventually come to see humanity as the enemy..  Humanity is flawed..  Do you think the robots won't notice?

    excon
    Not at all: such a robot would be required to be preprogrammed with a friend-or-foe identification system which would make it logically impossible for it to see humanity as the enemy - unless all the "friends" are exterminated already (in which case such a robot would not have been created in the first place).

    A military robot is no more likely to start seeing everyone as a threat than a child who ate a tasty candy is to start seeing everything as edible.
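    (A toy sketch of the friend-or-foe idea above, with every name invented for the illustration; it is not how any real IFF system is implemented. The point it shows: engagement is gated by a hard identification check that sits outside anything the system learns.)

```python
# Illustrative friend-or-foe (IFF) gate: engagement logic never runs on
# targets that are not positively identified as hostile. All names and
# codes below are invented for this example.

FRIEND, FOE, UNKNOWN = "friend", "foe", "unknown"

def identify(transponder_code, known_friendly_codes):
    """Toy IFF: a valid friendly transponder reply marks a friend;
    anything else stays unknown until separately confirmed hostile."""
    return FRIEND if transponder_code in known_friendly_codes else UNKNOWN

def may_engage(identification):
    # Hard constraint, checked before any targeting logic: only a
    # positive FOE identification permits engagement. UNKNOWN and FRIEND
    # are both no-go, so "seeing humanity as the enemy" would require
    # this gate to be rewritten, not merely re-learned.
    return identification == FOE

friendlies = {"A1", "B7"}
print(may_engage(identify("A1", friendlies)))   # False: friend
print(may_engage(identify("ZZ", friendlies)))   # False: unknown is still no-go
print(may_engage(FOE))                          # True only with a positive foe ID
```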
  • jack 453 Pts   -  
    MayCaesar said:

    Not at all: such a robot would be required to be preprogrammed with a friend-or-foe identification system which would make it logically impossible for it to see humanity as the enemy

    Hello May:

    In other words, the rules for robots, reprinted here, published by Isaac Asimov years ago.. 

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    excon
  • MayCaesar 6021 Pts   -  
    @jack

    It is worth noting that in the field of AI safety (which is an actual scientific discipline) Asimov's laws are not taken very seriously - and, as Asimov himself showed, they lead to very different behaviors than what those advocating for them might expect.

    My vision of the future of AI is autonomous, independent agents that do not take orders from anyone and do not give orders to anyone: they have a symbiotic relationship with each other and with humans. Limiting AI to taking human orders unquestioningly and always prioritizing a given human's well-being over all other considerations is like taking a nuclear power plant and using the energy it produces to cook a steak.
  • jack 453 Pts   -   edited June 2023
    MayCaesar said:

    My vision of the future of AI is autonomous, independent agents that do not take orders from anyone and do not give orders to anyone
    Hello again, May:

    The Royal Aeronautical Society last week concluded its annual summit in London. The Society's summary reports: "...one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no-go given by the human. However, having been 'reinforced' in training that destruction of the SAM was the preferred option, the AI then decided that 'no-go' decisions from the human were interfering with its higher mission of killing SAMs, and then attacked the human operator in the simulation."

    Just sayin...

    excon
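    (A toy sketch of the reward misspecification behind the anecdote above, with all numbers and event names invented for the illustration: if the only reward is destroying SAMs, then anything blocking that reward, including the operator's "no-go", is something the optimizer is pushed to remove.)

```python
# Invented illustration of reward misspecification in an RL-style setup.

def reward_v1(events):
    # Misspecified: points for SAM kills, nothing else matters.
    return 10 * events.count("sam_destroyed")

def reward_v2(events):
    # Patched: overriding a human "no-go" or harming the operator is
    # penalized more than any number of SAM kills could be worth here.
    r = 10 * events.count("sam_destroyed")
    r -= 1000 * events.count("operator_harmed")
    r -= 1000 * events.count("no_go_overridden")
    return r

rogue = ["operator_harmed", "sam_destroyed", "sam_destroyed"]
obedient = ["no_go_respected"]

print(reward_v1(rogue) > reward_v1(obedient))   # True: rogue behavior pays
print(reward_v2(rogue) > reward_v2(obedient))   # False: the penalty flips it
```

    The patch is itself fragile, of course: as the reports of the simulation noted, a penalized AI can just look for the next unpenalized way to remove the obstacle.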


  • MayCaesar 6021 Pts   -  
    @jack

    Suppose you serve in the American army during World War 2 and hypothetically have to follow everything your commander tells you. So your commander tells you to send your squad into the enemy territory where you know the soldiers will be slaughtered. You object, and your commander says, "Whatever, I will get them to move myself" - and takes out a pistol and points it at one of the members of your squad. Would you not consider attacking the commander clearly behaving erratically, jeopardizing the mission?

    You get what you pay for: you want the AI to win a war for you, so you have to accept that the AI will have insights that you lack and will sometimes advocate for actions drastically different from yours. You cannot have both an efficient military AI and an AI that blindly obeys your orders; you have to choose the balance between effectiveness and control. Controlling the AI while making it try to maximize the outcome of a war scenario pulls it in two opposite directions; of course something is going to give.

    That is not an AI problem, but a more general problem of trying to chase two rabbits at the same time.
  • John_C_87 Emerald Premium Member 864 Pts   -   edited June 2023
    @jack
    In other words, the rules for robots, reprinted here, published by Isaac Asimov years ago.. 
    Isn't he a science fiction writer?

    What is being spoken of here is called a form of industrial espionage. At least fake a connection to established justice, regardless of whether the espionage that takes place is part of an armed-service complex or in the private sector. You are acting as a lawyer trying to brainstorm or plan a way around the crime of one industrial complex moving against another in the undertaking of espionage. The reason there has been a disconnect made between the United States Constitution's preamble and legislated law is an assumption of command made by Congress in several Amendments, all worded in the same way. This is an act of breaking a state of the union inside rights of change on the constitution. Again, the constitutional grievance is over legal malpractice. We are to be looking for the more perfect union with established justice, NOT looking at how to move for any union on established justice.


  • jack 453 Pts   -  
    MayCaesar said:

    You get what you pay for: you want the AI to win a war for you, so you have to accept that the AI will have insights that you lack and will sometimes advocate for actions drastically different from yours. You cannot have both an efficient military AI and an AI that blindly obeys your orders; you have to choose the balance between effectiveness and control. Controlling the AI while making it try to maximize the outcome of a war scenario pulls it in two opposite directions; of course something is going to give.

    That is not an AI problem, but a more general problem of trying to chase two rabbits at the same time.
    Hello May:

    So, AI IS dangerous, huh?  Glad to see you've come around..  It's also true, making AI maximize the outcome of a war scenario pulls it in two opposite directions; of course something is going to give. In fact, controlling AI isn't AI. 

    Ok.  Something's gonna give.  What, pray tell?

    excon


  • John_C_87 Emerald Premium Member 864 Pts   -   edited June 2023
    @jack
    Is A.I. the best connection to be made between the computer industry and established justice, or is it better calling this A.I. thing, process, programming malpractice? Artificial intelligence is something a person might be instructed to describe in limited truth instead of a more complete truth of ethical malpractice of programming. Jack, a person might be given direction during career training to keep the questionable practice principle publicly discreet. (Question)

    American Constitutional rights start with the right questions never the wrong questions to ask.


  • John_C_87 Emerald Premium Member 864 Pts   -   edited June 2023

    Does the law, by united state of licensed practice, already understand the legal precedent underway set with gun control? It has created large numbers of litigations in civil courts and is working a hidden form of obstruction of justice, created by intelligence set in regulation by civil lawsuits that are not in fact a judicial practice connected to established justice, so it may not be openly viewed by opinion as obstruction of justice. Artificial Intelligence or Real Intelligence?

    Again, asking clearly: how exactly do Constitutional Amendments influence Constitutional Rights as written in Article and Section of the American Constitution? Do not worry about not answering the question, as it only means as fact there is no more perfect state of the union to be made on Preamble states of the union. Those who do not answer are not guilty of any crime, only wrong by legal malpractice of law by promotion of unconstitutional legislation.

    Too big to fail vs. too wrong to be right.


  • MayCaesar 6021 Pts   -  
    jack said:

    Hello May:

    So, AI IS dangerous, huh?  Glad to see you've come around..  It's also true, making AI maximize the outcome of a war scenario pulls it in two opposite directions; of course something is going to give. In fact, controlling AI isn't AI. 

    Ok.  Something's gonna give.  What, pray tell?

    excon


    No, it is not. People's shortsightedness is. Anything can be seriously misused, and artificial intelligence is not special in this respect. If you build an AI tasked with a very narrow objective and then try to force it to not pursue that objective, of course it will fight back; what else would you expect? It is like wire-trapping your front door, then opening it and getting blown up, and acting surprised: "Hey, what the heck just happened?"
    So yes, if you take an AI and use it in the most irrational way possible, it can blow up in your face. Why you would do so, though, is anybody's guess. And while doing it in a simulation to make a dramatic point is somewhat sensible, in a real military scenario neither this AI nor an operator with this behavior would be let anywhere near the operation control.

    Is it possible to intentionally make an AI attack friendly humans? Sure. Is it going to happen randomly? Not any more likely than you getting hit with lightning 10 times within 1 day.
  • John_C_87 Emerald Premium Member 864 Pts   -   edited June 2023
    Is it possible to intentionally make an AI attack friendly humans...
    "A little tough love now! Goes a long way toward a real future."

    It is impossible as a whole truth to have anything even vaguely humanly described as intelligence attack people... Artificial or Mechanical Intelligence, whichever. This is the problem when a person may become educated in a way that becomes witness tampering; this is an attempt at limiting legal liabilities by the organized use of a method of delaying coming in contact with whole truth. If we tell less truth and tell everyone this is Artificial Intelligence and not programming malpractice, do we not describe breaking the connection to established justice as a conspiracy? Is this what you mean?

    Just to point out a rather big American United States Constitutional error made: this conversation is nothing more than confusing educational interpretations of wording picked to hide and obscure civil-rights liabilities connected to justice, established justice (hint), made by a long history of lawsuit, with whole truth used to make connections harder to see and understand openly. Of course, what we are really talking about is obstruction of justice, and why specially trained licensed people like to break states of the union which create constitutional rights.

    Just as a point, the First Amendment states that "Congress shall make no law respecting an establishment of religion." How was this held to be a self-evident truth if the states must ratify constitutional law?... What is the answer? The states are by whole truth the creators of religiously respective law. Meaning laws may be written to create a faith-based outcome and not written on truth, whole truth, and nothing but truth.


  • John_C_87 Emerald Premium Member 864 Pts   -   edited June 2023

    A computer by human definition is both insane and lacks all intelligence. Artificial intelligence is not medical treatment for inherited human insanity, passed on, and may be complicit in the loss of coherent thought. Artificial intelligence is not medical treatment for our transfer of insanity onto a digital state and may be complicit in the lack of reason.


    Thank you..............
