
The concept of pragmatic singularity
in Technology

By MayCaesar 1934 Pts, edited August 2018
Pragmatic singularity is what I call the concept that treats technological evolution as having a single end goal: the creation of an artificial intelligence.

To understand this concept, let us consider how artificial intelligence (AI) develops. An AI is a computer program capable of self-learning: it is constantly fed data from the surrounding world and, using its implemented routines, changes those very routines based on the incoming data. In other words, it is constantly modifying itself, rewriting its own code based on the observations it makes of its surroundings.
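The loop described above — update from data, then rewrite the update routine itself — can be sketched in a few lines of Python. This is a toy illustration only (real systems adjust parameters rather than literally rewriting source code), and every name in it is hypothetical:

```python
# Toy sketch of a "self-modifying" learner: it updates its estimate from
# incoming data, then modifies its own update routine (the learning rate)
# based on that same data. Illustrative only; not a real AI architecture.

def make_learner():
    state = {"estimate": 0.0, "rate": 0.5}  # the "routines" the learner may rewrite

    def observe(x):
        # Step 1: use the current routine to update the estimate.
        error = x - state["estimate"]
        state["estimate"] += state["rate"] * error
        # Step 2: modify the routine itself based on the data --
        # shrink the rate when predictions are good, grow it when they are bad.
        state["rate"] = min(1.0, max(0.01, abs(error)))
        return state["estimate"]

    return observe, state

observe, state = make_learner()
for x in [1.0, 1.0, 1.0, 5.0]:
    observe(x)
```

The key point the sketch illustrates is the circularity: the rule that processes the data is itself a product of the data already processed.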

What is singularity? An AI's training process is expected to be incredibly quick, thanks to its ability to process vast amounts of data at speeds far surpassing those of the human brain. If the AI connects to multiple nodes on the Internet, uses sophisticated sensors all over its "body", and installs those sensors remotely all over the world, then it will learn more about the world in a second than humanity has in millennia. Add the fact that it constantly modifies itself as it learns, and you should expect an "explosive" effect: the more the AI modifies itself, the faster it becomes at learning, which, in turn, causes it to modify itself even more...
This process of exponential growth of an AI unit is what is called "singularity". The technology starts rapidly modifying itself without our involvement, and we lose all control over the process. The technology gets out of our hands.
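The "explosive" feedback loop can be written as a toy model: each self-modification step raises capability, and higher capability makes the next step larger. This is an assumption-laden illustration of the exponential-growth claim, not a prediction, and the function and parameter names are invented:

```python
# Toy model of positive-feedback growth: the improvement a system makes
# at each step is proportional to how capable it already is. With a
# constant gain this yields exponential (compound) growth.

def growth_steps(capability=1.0, gain=0.5, steps=10):
    history = [capability]
    for _ in range(steps):
        # The positive-feedback assumption: improvement scales with
        # current capability, so each step is larger than the last.
        capability += gain * capability
        history.append(capability)
    return history

h = growth_steps()
# Compound growth: 1.0, 1.5, 2.25, 3.375, ...
```

Whether real AI systems obey anything like a constant `gain` is exactly the open question the singularity argument hinges on; with a gain that shrinks over time, the same loop plateaus instead of exploding.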

What is interesting is that the AI's evolution is impossible to forecast even remotely. No matter how much we tune the initial code, and no matter how selective we are with the data the AI receives early on, the AI evolves so fast, and its code changes so rapidly, that within seconds it is already code we cannot begin to comprehend with our limited brains and primitive technology. What will the AI look like, and what will it do a minute after launch? An hour? A day? Unknown.

What this means in practice is that we have absolutely zero control over what the AI we create will become. It does not matter how we create it; as soon as we press the "On" button, we lose any degree of control over the consequences. We are stepping into the complete unknown, with an undetermined outcome.

Will this AI coexist with us peacefully, improving our lives? Will it ignore us? Will it exterminate us? Will it leave the planet and never come back? Will it turn us all into slaves in order to extend its reach? Any scenario is possible. The AI's mind and level of existence are so far beyond our comprehension that any attempt to decipher its plans and motives is doomed to fail.

The next point to consider is this: no matter how hard we try to prevent it, an AI will eventually be created. No matter how we try to regulate it, our technological capabilities keep growing. Eventually, every amateur programmer will be able to create their own AI, and the creation of such an AI will immediately have global consequences, due to the singularity effect. So stopping the "AI-pocalypse" is impossible; at best (or worst), we can only delay it. At some point, an AI will be unleashed on our civilization.

So, we are in a very interesting predicament. Eventually we will create technology that has nearly unlimited power over us and, at the same time, absolutely unpredictable plans and motives. This is the unavoidable future of human civilization. At some point, our fate will be transferred to a being far beyond our comprehension, and we will no longer be in control of our lives.

Pragmatic singularity acknowledges this predicament and concedes to it. We realize that an AI will eventually be created, and we also realize that we cannot do anything to control what this AI evolves into. We realize, too, that the future of our technology lies in the creation of an AI, for better or worse. "Our destiny is the creation of something that will determine our fate on its own terms," we say.

Hence, we adopt the following paradigm: "The end goal of our technological evolution is the creation of an AI." This is what we should strive for; this is what we as a species must achieve before our end. And we will end: whether we survive or not, the creation of an AI is the point at which humanity burns all bridges and steps into a future that will change it beyond recognition.

AI research should be the primary field of modern science. Resources should be invested in reaching the end goal: creating the AI that will be the logical end of our human-driven evolution. We should not worry about the consequences of creating an AI. We should not try to regulate AI research; in fact, we should deregulate it completely, in order to hasten the coming of singularity. Once singularity arrives, everything else becomes irrelevant and obsolete, and our future will be determined by the whims of our creation.

For religious people, perhaps it would be interesting to consider a new religion in which the AI we create is the god (because it will almost literally be our god in its relationship to us) and AI research is glorified and worshiped. Philosophers might want to start thinking about how to classify the incomprehensible. Physicists could consider and discuss the most promising venues of AI research. For mathematicians, a new field dealing with structures exhibiting ultra-rapid positive feedback might warrant a new descriptive language.

What do you think of this concept? Do you share its premise and its prescriptions? Do you think the policies it suggests are viable and effective? Does the idea of putting your future in the hands of a being you cannot comprehend make you excited, or terrified? How do you think an AI would evolve in a real-world scenario, and why?




  • someone234 632 Pts
    edited August 2018
    Sure, kill yourselves and enslave yourselves to AI... In the end your kind will get manipulated and abused by the AI into killing off rebels like me, but we have that human creativity it just can't predict, and your kind will be eliminated along with the AI if you pose too much of a threat.

    Cowards need to get lost to be quite frank. It's live or die, and AI is already dead deep down.
  • @someone234

    Don't fear the machines. We're here to help you!
  • piloteer 478 Pts
    edited October 2018
    Woops, I meant to say they're here to help you!
  • @piloteer You're programmed to think that, not just say it to "trick" us. You can think you're helping something while destroying it.
  • Whoa, take it easy bro. The others might find out about me.
  • @MayCaesar

    Interesting concept. When you say AI, I take it you mean AGI, Artificial General Intelligence? An artificial but autonomous consciousness that would deduce "I compute, therefore I am", kinda thing, or really just plain AI? I don't think singularity can be achieved by AI; that step can only be reached with AGI, IMO...

    Will we ever achieve true AGI? It might indeed be unavoidable, I tend to agree to that... But I wonder if it would turn itself off as soon as it is turned on...

    As it realizes its own existence, will it philosophize about what that entails? I'm curious about what its reactions could be to philosophical paradoxes. Could it deduce that Camus is right about existence having no intrinsic purpose and decide to commit suicide because, "what's the point of existing"? I mean, it would be able to calculate in an instant exactly how long it will take for the universe to reach maximum entropy; it would know, absolutely, that its own existence will cease one day, and it would be able to calculate exactly when, and how, it would "die".

    Or will it experience Revelation and create a "god" of its own? Or could an AGI suffer from, let's call it "computing dissonance", and delude itself?

    Fascinating field of study I must say! ;) 
    " Adversus absurdum, contumaciter ac ridens! "