Kevin Kelly's Singularity Critique is Sound and Rooted in Systems Understanding

October 01 2008 / by Alvis Brigis / In association with Future

The Singularity Frankenstein has been rearing its amorphous head of late, evoking reactions from a variety of big thinkers. The latest to draw a line in the sands of accelerating change is Kevin Kelly, Wired co-founder and evolutionary technologist, who makes a compelling case against a sharply punctuated and obvious singularity. His argument is based on the following points:

1) Smarter-than-human intelligence is more likely to emerge from Google and the web than from any lab.

“My current bet is that this smarter-than-us intelligence will not be created by Apple, or IBM, or two unknown guys in a garage, but by Google; that is, it will emerge sooner or later as the World Wide Computer on the internet,” writes Kelly.

I agree that powerful intelligence is far more likely to emerge as a property of the global brain and body, in co-evolution with accelerating information growth, than in a lab.

More fundamentally, I think our system is consistently advancing its intelligence, making human intelligence non-static. Therefore the notion of Strong AI is an illusion because our basis for comparison 1) is constantly changing, and 2) is erroneously based on a simple assessment of the computational power of a single brain outside of environmental context, a finding backed by cognitive historian James Flynn.

So yes, Google may well mimic the human brain and out-compete other top-down or neural-net projects, but it won’t really matter, because intelligence will increasingly be viewed as a network-related property. (It’s a technical point, but an important distinction.)

2) The Singularity recedes as we develop new abilities.

Kelly writes, “The Singularity is an illusion that will be constantly retreating—always ‘near’ but never arriving.”

This statement is spot-on. As we amplify our collective intelligence (intelligence amplification, or IA) at an accelerating rate and develop new capabilities, we get better at peering ahead. The implication is that we co-evolve with technology and information to do so, assimilating intelligence along the way. In such an IA scenario, there simply is no dichotomy between us and it. It’s a we.

While Kelly alludes to IA in his World Wide Computer statement, he could bolster his argument by stressing the connection between human, informational and technological evolution and development.

(For more on this, check out this Future Blogger post by Will.)

3) Imagining a sequence of scenarios doesn’t take into account system dynamics. Thinking machines must co-evolve with the environment in order for intelligence to be meaningful.

“Thinking is only part of science; maybe even a small part,” points out Kelly. “Without conducting experiments, building prototypes, having failures, and engaging in reality, an intelligence can have thoughts but not results. It cannot think its way to solving the world’s problems. There won’t be instant discoveries the minute, hour, day or year a smarter-than-human AI appears.”

Basically, Kelly is saying that AI must be networked in with the world in order to advance in a meaningful way. It cannot be a closed system. As a consequence of my belief in a Strong IA scenario, I think it’s probable that most serious AI efforts will 1) blend in with the real world out of necessity – that’s what it’ll take to develop the intelligence – and 2) be pulled by market forces into useful applications – otherwise, why build it at all?

I do take a bit of issue with Kelly’s blanket statement that we won’t get to the point where the intelligence we create will make near-instant discoveries. Technically, we as a species already do so and will continue to get even faster at innovating as we incorporate AI-ish things. But that’s really just semantics.

The phrasing of Kelly’s quote is interesting: at first glance I thought he was referring to the thought process of futurists like Kurzweil, who build upon a chain of ideas in order to push one single grand scenario. That approach violates the cardinal rule of hard-core future(s) studies and seems to be rooted in reductionism and ego.

Conclusion: I think Kelly’s piece is convincing and very necessary as the singularity debate unfolds, but it could benefit from placing more emphasis on Intelligence Amplification and the Global Body. In my opinion, the most powerful Singularity counter-scenario is one that places system-wide evolution and development of intelligence above any localized creation of the same. It seems that Kelly is getting at that, but I’d like to see it rooted more in the kind of science being discussed at places like the upcoming Evo Devo Paris Conference.

In summary, as our complex adaptive life system pushes for more topsight, we auto-catalytically evolve and develop with technology, information and biology. This has been going on for all of human history and will continue as we strive to develop powerful intelligence. A Strong AI Singularity scenario is fundamentally flawed because it underestimates how AI will co-evolve and co-develop with its environment, and it erroneously views human intelligence as basically static. Kevin Kelly is yet another critical brain that appears to be backing this emerging view.

It's Oct 2008. Who's presently right about the Singularity? Ray Kurzweil or Kevin Kelly?


Comment Thread (10 Responses)

  1. I am kinda in-between theories here: Kelly postulates that intelligence arises from the number of connections or “neurons” that evolve on the internet through servers, routers, etc. Kurzweil puts forth the theory that we have to find the algorithm, and that copying the brain’s processes and structures will lead to intelligent machines.

    What I believe is that through evolution, our intelligence arose through a long series of mutations and natural selection, but cognitive reasoning is not a “human only” ability. Apes and other highly evolved organisms show some of the same abstract thinking ability that we do.

    I guess the sticking point is consciousness. I don’t think it will just “arise.” If that’s the case, why hasn’t it happened already?

    Posted by: Covus   October 01, 2008

  2. This is exactly why Kurzweil is seen as a crank, and his very name shouldn’t be mentioned seriously by anyone.

    The message is simple. The Singularity will happen someday, but don’t hold your breath waiting for it to happen within the next 50-75 years.

    It WON’T.

    We will die of old age. The only choice we have is cryonics.

    Posted by: adbatstone80   October 02, 2008

  3. Hoping for the Singularity in your lifetime?


    Posted by: adbatstone80   October 02, 2008

  4. I chased Dick “Futuretalk” Pellietier off this site due to his baseless and misinformed claims, and ushered in a wave of skeptical thinkers to this site at last.

    It’s time for Ray Kurzweil to own up and admit his idea of the singularity was totally flawed.

    Posted by: adbatstone80   October 02, 2008

  5. @Adbatstone80 – There is no way he is going to do that. You really hate this guy don’t you? Anyway, a lot of things can happen in 50-75 years. If anything, you’re right – I am looking into cryonics – just in case.

    However, it is crazy to think that nothing will happen in 50-75 years. Now that is delusional.

    Posted by: Covus   October 02, 2008

  6. @Adbatstone80 – And you’re starting to sound like you’re frothing at the mouth, like you are on a crusade. Why do you come on this site if you don’t like it? Seems a little weird.

    Posted by: Covus   October 02, 2008

  7. Great post Alvis. I really thought Kelly was spot on and I think you are spot on in pointing out the importance of the environment/co-development in Kelly’s argument.

    I am not at all sure Google will create AI. Am I missing something about Google? Is it more than a few killer web apps and a great search engine? Is there something more fundamentally important about Google I don’t understand? From my point of view, it is getting a lot more credit than it deserves from the press. Google isn’t the whole world wide web.

    Also, I did read Kelly’s post a little differently. Though, as I stated, I think your points are spot on. The biggest takeaway I got from his argument was that thinking or intelligence alone will not bring about any real change. We will have to test theories and actually put things into place—the Singularity as an idea is flawed because it bypasses dealing with innumerable variables of real life by claiming intelligence alone will solve it all. Life and our challenges are not just one big math problem to solve.

    However, I think your point that humans and technology co-evolve is the perfect way of thinking about how we and the technology by definition must interact with and change the environment in order to evolve.

    Posted by: Mielle Sullivan   October 03, 2008

  8. Never mind technology, I think the rate of increase of AdbatBrain’s ego is accelerating. Adbatstone80, don’t give yourself too much credit pal, Dick may be away for some other reason, and if he left because of the s##t that came from you, then this proves that you are a hostile belligerent, not that his ideas are wrong. Or do you not understand what a logical fallacy is?

    Actually, I’m more or less a singularitarian without the cherry on top. I recognise the power of the acceleration and convergence of different fields, but do not know where it is heading. However, your view is knee-jerk hostility to any view except the one where humanity is stagnating and miserable. This means either something happened to make you bitter, or you are stupid, or both.

    Posted by: CptSunbeam   October 03, 2008

  9. Thank you for referencing my now months old post on this concept. In the context of Mr. Kelly’s work, I think it holds up surprisingly well, if I do say so myself. :)

    @ Covus: “I guess the sticking point is consciousness. I don’t think it will just ‘arise.’ If that’s the case, why hasn’t it happened already?”

    Would we necessarily recognise such an occurrence for what it is, or would we regard the result as some type of error?

    Somewhat more seriously, given the rather tenuous state of consciousness in humans (whose “hardware” is fundamentally adapted for precisely this function), why would anyone be at all surprised at its failure to appear in a far less accommodating structural environment?

    @ adbatstone80: Dude, take a deep breath, hold it for a few seconds and GET OVER YOURSELF. Ray Kurzweil’s basic concept of a Singularity (a point of development beyond which our present degree of knowledge cannot extrapolate) is quite sound as a measure of our present development, and it also takes into account the effects arising from the accelerating nature of intellectual development as a general trend of human development that is ignored by linear growth projections. I also find Mr. Kurzweil’s willingness to seriously examine a variety of possible mechanisms for approaching his boundary to be intellectually honest, particularly as some of them are mutually contradictory in their basic tenets. What precisely do you find to be so objectionable in any of that?

    @ Mielle Sullivan: “I am not at all sure Google will create AI. Am I missing something about Google? Is it more than a few killer web apps and a great search engine? Is there something more fundamentally important about Google I don’t understand? From my point of view, it is getting a lot more credit than it deserves from the press. Google isn’t the whole world wide web.”


    Not from any lack of effort on their part you have to admit. :)

    That bit of snark aside, I think it reasonable to assume that Google (or its effective near-twin) will in some fashion provide the memory storage and recall function for some future true AI. You are quite correct that there is a great deal more that goes into the makeup of any independent, self-directing intellect.

    More generally, I think that the development of the technology to permit artificial enhancement of human intellect (IA) will supersede the development of Artificial Intelligence (AI) as anything more than a semi-autonomous synthetic agent or human proxy. The more we study the human brain, the better we recognise just how convoluted its complexities really are. It seems far more likely that we will further develop particular “corrective” technologies into enhancement protocols before we succeed in creating an operable model of the human brain that functions independent of our input.

    Finally, to the best of my knowledge, I don’t believe Ray Kurzweil has ever suggested that a singularity can occur separate from its physical achievement. I’m not at all clear where that misconception comes from either. As Covus notes, developing the algorithms necessary to accurately simulate human thought processes is part of the development needed, but I don’t believe Kurzweil has ever suggested that to be the only necessity. His website routinely examines pertinent hardware advances too.

    Ultimately, one of the foundational precepts of the whole Singularity concept is that we can’t know in advance what will eventually prove to be the most critical development. Only when that crucial advance has subsequently been built upon will we have any degree of certainty, by which time everyone else will know too! Until that happy day arrives, intelligent challenges to operant theories, like the one Kelly has made to Kurzweil, are how we achieve our journey into the future, aren’t they?

    Posted by: Will   October 03, 2008

  10. Kevin Kelly is spot-on.

    Google is building AI. They recently announced how they will do it:

    “The increasing number and diversity of interactions will not only direct more information to the cloud, they will also provide valuable information on how people and systems think and react.

    Thus, computer systems will have greater opportunity to learn from the collective behavior of billions of humans. They will get smarter, gleaning relationships between objects, nuances, intentions, meanings, and other deep conceptual information ..

    Traditionally, systems that solve complicated problems and queries have been called “intelligent”, but compared to earlier approaches in the field of ‘artificial intelligence’, the path that we foresee has important new elements. First of all, this system will operate on an enormous scale with an unprecedented computational power of millions of computers. It will be used by billions of people and learn from an aggregate of potentially trillions of meaningful interactions per day. It will be engineered iteratively, based on a feedback loop of quick changes, evaluation, and adjustments. And it will be built based on the needs of solving and improving concrete and useful tasks such as finding information, answering questions, performing spoken dialogue, translating text and speech, understanding images and videos, and other tasks as yet undefined. When combined with the creativity, knowledge, and drive inherent in people, this “intelligent cloud” will generate many surprising and significant benefits to mankind.”

    –Alfred Spector, VP Engineering, and Franz Och, Research Scientist

    AGI, assuming it is even possible, will come much later because we might have to actually understand what “I” is before we can invent the “AG” part. Google’s development path will not require this. Call it AI or IA; it’s really both. Google is doing both at the same time.

    I used to be enamored by the Singularity idea myself but KK has shown me the light.

    Posted by: fostiak   October 08, 2008