The Singularity is the point at which all the change in the last million years will be superseded by the change in the next five minutes —Kevin Kelly

One of the problems with discussing The Singularity is that there are several definitions of the concept. It started with the idea of exponentially improving machine intelligence (AI), then added an associated technology growth, and ended with a biotechnology explosion and human-machine hybridization. So, which one are we to use? Or, can we use any of them? Is The Singularity real?

In a recent essay on the Singularity Web Log, the author raises an issue that challenges the very basis of The Singularity: the claim that technological growth is logistic, not exponential. The difference between the two equations is a limiting term. For example, take population (N) growth over time (t). Population grows at some rate (r).

Exponential: dN/dt = rN

Logistic: dN/dt = rN * (K-N)/K

where K is some physical limiting factor, in this case, carrying capacity (see the article for a nice graphic).
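The difference between the two curves is easy to see numerically. Here is a minimal sketch that integrates both equations with a simple Euler step; the rate r, carrying capacity K, and step size are illustrative values, not taken from the essay.

```python
# Compare exponential and logistic growth by Euler integration.
# All parameter values here are illustrative, not from the essay.

def simulate(r=0.5, K=1000.0, n0=1.0, dt=0.01, steps=3000):
    """Integrate dN/dt = r*N (exponential) and dN/dt = r*N*(K-N)/K (logistic)."""
    n_exp, n_log = n0, n0
    for _ in range(steps):
        n_exp += r * n_exp * dt
        n_log += r * n_log * (K - n_log) / K * dt
    return n_exp, n_log

n_exp, n_log = simulate()
# The exponential curve has no limiting term, so it keeps climbing;
# the logistic curve saturates near the carrying capacity K.
print(f"exponential at the end of the run: {n_exp:.0f}")
print(f"logistic at the end of the run:    {n_log:.0f}")
```

The two curves are nearly indistinguishable early on, when N is far below K; the limiting term (K-N)/K only bites as N approaches the ceiling. That is why exponential and logistic growth are so hard to tell apart from the inside.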

Unfortunately, at this point, the essay wanders off into mysticism — K doesn’t matter because that’s a physical, not a machine intelligence concept, the map is not the territory, the machine is not the brain, my imagination is better than your imagination.

So, what about this K thing? Is it really not a limiter on machine intelligence? Is AI really not grounded in the physical world? Stated like that, the obvious answer is, of course it is. And to the extent that it is, it is limited by some definition of K. For the purposes of our discussion, K can be considered an outgrowth of the difference between electrons and molecules, to use Nicholas Negroponte’s phrase. Molecules are heavy, take up space, and are expensive to move. Electrons are essentially weightless, and can be moved almost anywhere, almost instantly, almost for free. Shifting publishing from paper books to e-books (still a work in progress) totally changed the dynamics of the industry. This electron/molecule dichotomy is what drives our discussion of K.

Take the most basic definition of The Singularity: that soon we will have the ability to build an AI that is better at designing AIs than we are. At that messianic point the growth in AI capabilities will become exponential and we cannot foresee the ending. The trouble is, there’s a difference between the *concept* of a really strong AI and the *implementation* of the concept. An AI is implemented as computer code running on computer chips. Can this super AI¹ design AI² — the next generation of chips and software — exponentially faster than humans can? Of course it can; that’s the basis of The Singularity. Can we then retool a $5 billion wafer fab to *produce* those chips for AI² exponentially faster? Can we manufacture the motherboards that will accept those chips? Build arrays of servers and ship them and install them at server farms around the world before AI³ comes down the pike? Perhaps AI¹ can show us how to do it faster, but *exponentially* faster? For The Information Singularity, K is the interface between the conceptual world and the real world.

When we take the next step, from The Information Singularity to The Technology Singularity, we run into the same K. AI² might be able to design better batteries and lighter cars, but actually building them takes time. And retooling takes time, and those times are not likely to be reduced nearly as fast as the designs are improved.

And finally, the biotechnology, human hybrids, new human race singularities are likely to be the slowest of all. Yes, we will be able to modify DNA to give us healthier bodies, computer-friendly brains, and two additional primary colors, but biology will not be rushed. As the old programmer joke about bringing in more staff on tardy projects goes, *it’s like putting nine women on the job so you can produce a baby in one month*.

So, it looks like the heart of K as a limiting factor on The Singularity, is *time*. The Information Singularity will cause computations, or rather, computation-driven decisions, to be made in exponentially less time. But the real-world instantiation of those decisions will still take place in Real World time. What makes this a true constraint on The Singularity is that *time* is a fundamental concept. The very heart of The Singularity concept is exponential time. If the application of information to molecules has to take place in Real Time, then, like the speed of light, our approach to The Singularity will become slower the closer we approach it.

Now, there is one bright spot here. In the equations above, N was population. In our calculations N would be the *rate of change* of information processing, technology adoption, etc. So dN/dt measures the *change* in the rate of change over time (and should properly be written d²T/dt², where T is accumulated technology).
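This reinterpretation can be sketched numerically: let N(t) be the *rate* of technological change, following a logistic curve with ceiling K, and accumulate T(t) as the integral of N. Once N levels out at K, T keeps growing at a fast but *steady* pace. All parameter values below are illustrative.

```python
# Toy model: N is the logistic *rate* of technological change,
# T is accumulated technology (the integral of N over time).
# Parameter values are illustrative, not from the essay.

def accumulate(r=0.5, K=100.0, n0=1.0, dt=0.01, steps=6000):
    n, t_accum = n0, 0.0
    history = []
    for step in range(steps):
        n += r * n * (K - n) / K * dt   # logistic rate of change
        t_accum += n * dt               # accumulated technology T
        history.append((step * dt, n, t_accum))
    return history

hist = accumulate()
# Late in the run N has flattened near K, so T gains roughly K per
# unit time: steady, substantial progress, not runaway and not stasis.
t1, n1, T1 = hist[4000]
t2, n2, T2 = hist[5999]
print(f"N late in the run: {n1:.1f} -> {n2:.1f}")
print(f"T gained per unit time late in the run: {(T2 - T1) / (t2 - t1):.1f}")
```

The point of the sketch is the slope of T after saturation: it is constant, not zero. A leveled-out *rate* of change still means continuous change.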

The essay I’m quoting takes a doom-and-gloom message from the exponential-versus-logistic question, claiming that the “*overarching and obvious scenarios are: dramatic change, or relative stasis.*” No, they are not.

If the Logistic Theory is correct, the *rate* of technology *change* — of technology adoption — will at some point *level out*. For tens of thousands of years, humankind faced essentially zero rate of change. The next thousand years was just like the last. Then things started changing. New technology appeared at such a rate that the next century was clearly better than the last. Then the next decade. Now, we are at a point where, if you wait two years, cutting-edge technology will be wildly different. And if we’ve just rolled off the exponential part of the Logistic Curve and onto the flat, that’s the way things will stay — every two years we’ll see major changes in our world.

A fast, steady increase in technology may not be as exciting as a never-ending exponential, but at least you’ll be able to say that *some* part of your four-year college education is still valid when you graduate.