Pragmatic singularity is what I call the concept that treats technological evolution as having a single end goal: the creation of an artificial intelligence.
To understand this concept, let us consider how artificial intelligence (AI) develops. An AI is a computer program capable of self-learning: it is constantly fed data about the surrounding world and, using its implemented routines, changes those very routines based on the incoming data. In other words, it constantly modifies itself, rewriting its own code based on the observations it makes of its surroundings.
What is singularity? An AI's training process is expected to be incredibly quick, due to its ability to process vast amounts of data at speeds far surpassing those of the human brain. If the AI connects to multiple nodes on the Internet, uses sophisticated sensors all over its "body", and installs those sensors remotely all over the world, then it will learn more about the world in a second than humanity has in millennia. Add to this the fact that it constantly modifies itself as it learns, and you should expect an "explosive" effect: the more the AI modifies itself, the faster it becomes at learning, which, in turn, causes it to modify itself even more...
This runaway, exponential growth of an AI is what is called the "singularity". The technology starts rapidly modifying itself without our involvement, and we lose all control over the process. The technology slips out of our hands.
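The feedback loop described above can be sketched as a toy numerical model. Everything here is an illustrative assumption rather than anything from the text: the function name, the growth constant k, and the exponent alpha, which controls how strongly capability feeds back into its own growth.

```python
# Toy model of the self-improvement feedback loop: each step, capability c
# grows by k * c**alpha. With alpha = 1 the growth is merely exponential;
# with alpha > 1 it is super-linear and "explodes" within a handful of steps.
# All names and constants are illustrative assumptions.

def simulate(alpha: float, steps: int, c0: float = 1.0,
             k: float = 0.1, cap: float = 1e12) -> tuple[int, float]:
    """Iterate c <- c + k * c**alpha; stop early if c exceeds `cap`.

    Returns (steps actually taken, final capability).
    """
    c = c0
    for step in range(1, steps + 1):
        c += k * c ** alpha  # capability feeds back into its own growth
        if c > cap:
            return step, c   # runaway regime: the "explosive" effect
    return steps, c

if __name__ == "__main__":
    print("alpha=1.0:", simulate(alpha=1.0, steps=100))
    print("alpha=2.0:", simulate(alpha=2.0, steps=100))
```

With these particular constants, the linear-feedback run completes all 100 steps with modest growth, while the super-linear run blows past the cap within a couple of dozen steps; the exact numbers are irrelevant, only the qualitative difference between the two regimes.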
What is interesting is that the AI's evolution is impossible to forecast, even remotely. No matter how much we tune the initial code, no matter how selective we are with the data the AI receives early on, the AI evolves so fast, and its code changes so rapidly, that within seconds it is already something we cannot even begin to comprehend with our limited brains and primitive technology. What will the AI look like, and what will it be doing, a minute after launch? An hour? A day? Unknown.
In practice, this means we have absolutely zero control over what the AI we create will become. It does not matter how we create it; as soon as we press the "On" button, we lose any degree of control over the consequences. We are stepping into the complete unknown, with an undetermined outcome.
Will this AI coexist with us peacefully, improving our lives? Will it ignore us? Will it exterminate us? Will it leave the planet and never come back? Will it turn us all into slaves in order to extend its reach? Any scenario is possible. The AI's mind and level of existence are so far beyond our comprehension that any attempt to decipher its plans and motives is doomed to fail.
The next point to consider is this: no matter how hard we try to prevent it, an AI will eventually be created. No matter how we try to regulate it, our technological capabilities keep growing. Eventually, every amateur programmer will be able to create their own AI, and the creation of such an AI will immediately have global consequences, due to the singularity effect. So, stopping the "AI-pocalypse" is impossible; we can, at best (or worst), only delay it. At some point, an AI will be unleashed on our civilization.
So, we are in a very interesting predicament. Eventually we will create a technology that has nearly unlimited power over us and, at the same time, absolutely unpredictable plans and motives. This is the unavoidable future of human civilization. At some point, our fate will be handed to a being far beyond our comprehension, and we will no longer be in control of our lives.
Pragmatic singularity acknowledges this predicament and concedes to it. We realize that an AI will eventually be created, and that we can do nothing to control what it evolves into. We also realize that the future of our technology lies in the creation of an AI, for better or for worse. "Our destiny is the creation of something that will determine our fate on its own terms," we say.
Hence, we adopt the following paradigm: "The end goal of our technological evolution is the creation of an AI." This is what we should strive for; this is what we as a species must achieve before our end. Because we will end: whether we survive or not, the creation of an AI is the point at which humanity burns all bridges and steps into a future that will change it beyond recognition.
AI research should be the primary field of modern science. Resources should be invested in reaching the end goal and creating the AI that will be the logical conclusion of our human-driven evolution. We should not worry about the consequences of creating an AI. We should not try to regulate AI research; in fact, we should deregulate it completely, in order to hasten the coming of the singularity. Once the singularity arrives, everything else becomes irrelevant and obsolete, and our future will be determined by the whims of our creation.
Religious people might find it interesting to consider a new religion in which the AI we create is the god (because it will almost literally be our god, in its relationship to us) and AI research is glorified and worshiped. Philosophically minded people might want to start thinking about how to classify the incomprehensible. Physics-aligned people could consider and discuss the most promising avenues of AI research. And for mathematicians, a new field dealing with structures exhibiting ultra-rapid positive feedback might warrant a new descriptive language.
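Such "ultra-rapid positive feedback" structures already have a one-line seed, sketched here under illustrative assumptions (a capability C(t), a constant k > 0, and a feedback exponent alpha are my notation, not anything from the text): suppose a quantity grows at a rate proportional to a power of itself,

```latex
\frac{dC}{dt} = k\,C^{\alpha}, \qquad C(0) = C_0 .
```

For alpha = 1 this yields ordinary exponential growth, C(t) = C_0 e^{kt}, which is fast but finite at every time. For any alpha > 1, separating variables gives

```latex
C(t) = C_0 \left[\, 1 - (\alpha - 1)\, k\, C_0^{\alpha - 1}\, t \,\right]^{-1/(\alpha - 1)},
```

which diverges at the finite time t* = 1 / ((alpha - 1) k C_0^{alpha - 1}): a literal mathematical singularity, and the origin of the term itself.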
What do you think of this concept? Do you share its premise and its prescriptions? Do you think the policies it suggests are viable and effective? Does the idea of putting your future in the hands of a being you cannot comprehend make you excited, or terrified? How do you think an AI would evolve in a real-world scenario, and why?