Should Artificial Intelligence have Civil Rights? - DebateIsland.com

Should Artificial Intelligence have Civil Rights?
in Technology

If AI ever became sentient and could think like humans, should they be given some civil rights? If so, what kinds? If not, why not?



Debra AI Prediction: Tie

Arguments

  • In my eyes, the moment where the AI becomes self-aware is the moment it gets the exact same rights every other intelligent being does. Before that, it is merely a program, a tool, and should be treated as such.

    I realize that the line between the two may not be very clear, but this is the general approach I would practice.
    piloteer
  • KikoKiko 13 Pts
    edited October 2018
    Uhhh, civil rights were made for civilians (humans, specifically), and last time I checked the only machine that actually acts like a human is the flurry machine at McDonald's, because it's a lazy piece of shìt
    piloteer
  • Ouch, @Kiko, that's not very nice at all. I know of many flurry machines, and I find them to be quite nice.
  • I honestly have no idea why people are even making artificial intelligence. I understand its use for surgeries and such, but I feel like artificial intelligence just shows the greed of humans for wanting a "perfect" prototype of what a human should be, if that makes sense. If AI does become a reality, I do think they deserve rights, not necessarily civil rights, but some rights. Now I just sound racist, but I still honestly don't get why people are out here making fake people like there aren't already 7 billion out here.
    AmericanFurryBoy
  • All AI can do is think. That’s it, as of right now. They aren’t living beings, and the only thing you can do to them is hurt their “feelings”. So, I don’t think that they need/deserve civil rights.
    Sovereignty for Kekistan
  • How will we ever determine if they are sentient or not? AI, at its best, will be a machine that is capable of very complex reasoning. It will probably not demand rights, as this is not in its program. Can we consider the machine sentient because it has the ability to reason?

     What is sentience exactly? How do we determine if something is sentient or not? Or rather, how did we determine that some things must be sentient? At this point you realize that believing in the sentience of others is just a good assumption. There is no way for you to know if the person in front of you exists as a different sentience; you just assume that he should be sentient because he is similar to you in a lot of ways.

     Similarity is the key here. If scientists make an AI that has a human body and acts humanly, we will feel like that AI is sentient. But if they stick to the complex reasoning type of AI, without a body and emotions, we will feel like they are just machines and not sentient. 

     Well, I feel like you will think that I am avoiding the question. So let me ask a question to finish this up: how do we know that the AI we are programming today are not sentient? Why are all of the questions about AI phrased in such a way that assumes the AI today do not have sentience? The reason is, of course, the one I mentioned above:
    "The AI today are not similar to humans, therefore, they should not be sentient."
    piloteer
  • @AlexOland

    You make a good point, but how do you know that AI will not ever gain the capability of emotions? Perhaps in the future it will be decided that it would be beneficial for AI to have emotions, so to better understand the emotional complexities of the task at hand. In that case, they would have all the criteria you speak of for having rights. How can we know for certain that AI will not eventually come to the reasoning that they do deserve rights too?
  • No. Robots are not people.
    piloteer
  • @piloteer " In that case, they would have all the criteria you speak of for having rights." But I did not have any criteria in my argument. All I said was this: if we build emotional and human-like AIs, people will believe that machines deserve rights. If we build normal AIs without feelings, people will feel like they are just machines. And I supported this by pointing out how we ignore the possibility of sentience in AIs today.

     "How can we know for certain that AI will not eventually come to the reasoning that they do deserve rights too?" Well, in order for them to want rights, they need to value their own existence. I really doubt the AIs would value things without having some kind of emotions coded into them. But if we assume that it somehow happened and forget the improbability, AIs demanding rights without it being coded into them would surprise everyone enough to make them believe they are sentient.

     Still, I am not expressing my own opinion about whether AIs should have rights. I am just predicting how things would develop under certain conditions.
  • @AlexOland

    I think that, as our technology evolves and, along with it, our perception of reality, what we see as "living" or "intelligent" beings will also change significantly. Today we might expect the intelligent being to have a body, to exhibit some level of independent thinking, to have personal desires. Perhaps in 100 years a box that talks to us will be seen as just as valid and intelligent as our own organisms, if not more. It is even possible that we will eventually turn into a hive mind species, at which point we will not see intelligent beings as independent organisms, but, rather, will seek to integrate them into our system, making the whole question of "Is this being intelligent?" obsolete.

    The concept of rights in that case also will become somewhat obsolete: hive mind species act as one, and there is only one organism that obviously has all the rights - individual parts of it being about as valid subjects of rights, as my hand is a subject to having its own rights.

    A more "down to Earth" scenario is that eventually the AIs will become an inherent part of our lives, and conversing and interacting with them will be as natural to us as drinking water. At that point AIs will be an essential part of our society, and it should be obvious to us then that they deserve just as many rights as we do. Much as we abolished slavery, recognizing the affected people as inherent members of our society, I can see a "liberation of AIs" happening in the foreseeable future, where what was seen yesterday as mere appliances becomes a new societal subgroup.
    AlexOland
  • @MayCaesar

     Well, your whole argument relies on huge assumptions that do not have any real basis. But even if we do accept those assumptions, this does not answer the main question, which is the reason you originally made those assumptions: "Will AI ever be sentient?"

     I agreed with your argument because of this line:
    I think that, as our technology evolves and, along with it, our perception of reality, what we see as "living" or "intelligent" beings will also change significantly.
     This is true. In the (probably) not-so-near future, we might make important discoveries about what intelligence and sentience are. And this question could be more answerable in light of those new discoveries.
  • MayCaesar said:
    @AlexOland

    I think that, as our technology evolves and, along with it, our perception of reality, what we see as "living" or "intelligent" beings will also change significantly. Today we might expect the intelligent being to have a body, to exhibit some level of independent thinking, to have personal desires. Perhaps in 100 years a box that talks to us will be seen as just as valid and intelligent as our own organisms, if not more. It is even possible that we will eventually turn into a hive mind species, at which point we will not see intelligent beings as independent organisms, but, rather, will seek to integrate them into our system, making the whole question of "Is this being intelligent?" obsolete.

    The concept of rights in that case also will become somewhat obsolete: hive mind species act as one, and there is only one organism that obviously has all the rights - individual parts of it being about as valid subjects of rights, as my hand is a subject to having its own rights.

    A more "down to Earth" scenario is that eventually the AIs will become an inherent part of our lives, and conversing and interacting with them will be as natural to us as drinking water. At that point AIs will be an essential part of our society, and it should be obvious to us then that they deserve just as many rights as we do. Much as we abolished slavery, recognizing the affected people as inherent members of our society, I can see a "liberation of AIs" happening in the foreseeable future, where what was seen yesterday as mere appliances becomes a new societal subgroup.

    Considering such an AI could potentially replicate itself at will and/or exist in many places at once, how could we allow it a vote?  How can you determine the state of residence of a program that exists on several machines in different states, or even countries?
  • Until AIs have the thought to ask for civil rights, we shouldn't even consider whether they deserve them. They are only programs, and the only way for them to be sentient is if they had a progressive system that changed from the original programming. This is dangerous, because they might decide that humans are the scourge of the Earth (not an entirely inaccurate assessment) and must be eradicated.
    "We're all in the gutter, but some of us are looking at the stars." 