October 01 2008 / by Alvis Brigis / In association with Future Blogger.net
The Singularity Frankenstein has been rearing its amorphous head of late and evoking reactions from a variety of big thinkers. The latest to draw a line in the sands of accelerating change is Kevin Kelly, Wired co-founder and evolutionary technologist, who makes a compelling case against a sharply punctuated and obvious singularity. His argument is based on the following points:
1) A Strong-AI singularity is unlikely to emerge before Google builds it first.
“My current bet is that this smarter-than-us intelligence will not be created by Apple, or IBM, or two unknown guys in a garage, but by Google; that is, it will emerge sooner or later as the World Wide Computer on the internet,” writes Kelly.
More fundamentally, I think our system is continuously advancing its intelligence, making human intelligence non-static. The notion of Strong AI is therefore an illusion, because our basis for comparison 1) is constantly changing, and 2) rests on an erroneously simple assessment of the computational power of a single brain outside of its environmental context, a point backed by cognitive historian James Flynn.
So yes, Google may well mimic the human brain and out-compete other top-down or neural net projects, but it won’t really matter because intelligence will increasingly be viewed as a network-related property. (It’s a technical point, but an important distinction.)
2) The Singularity recedes as we develop new abilities.
Kelly writes, “The Singularity is an illusion that will be constantly retreating—always ‘near’ but never arriving.”
This statement is spot-on. As we amplify our collective intelligence at an accelerating rate and develop new capabilities, we get better at peering ahead. The implication is that we co-evolve with technology and information to do so, assimilating intelligence along the way. In such an intelligence amplification (IA) scenario, there simply is no dichotomy between us and it. It’s a we.
While Kelly alludes to IA in his World Wide Computer statement, he could bolster his argument by stressing the connection between human, informational and technological evolution and development.
(For more on this, check out this Future Blogger post by Will.)
3) Imagining a sequence of scenarios doesn’t take system dynamics into account. Thinking machines must co-evolve with their environment for their intelligence to be meaningful.
“Thinking is only part of science; maybe even a small part,” points out Kelly. “Without conducting experiments, building prototypes, having failures, and engaging in reality, an intelligence can have thoughts but not results. It cannot think its way to solving the world’s problems. There won’t be instant discoveries the minute, hour, day or year a smarter-than-human AI appears.”
Basically, Kelly is saying that an AI must be networked into the world in order to advance in a meaningful way. It cannot be a closed system. As a consequence of my belief in a Strong IA scenario, I think it’s probable that 1) most serious AI efforts will blend into the real world out of necessity, since that’s what it will take to develop the intelligence, and 2) market forces will pull any serious AI into useful applications; otherwise, why build it at all?
I do take a bit of issue with Kelly’s blanket statement that we won’t reach the point where the intelligence we create makes near-instant discoveries. Technically, we as a species already do so, and we will only get faster at innovating as we incorporate AI-ish things. But that’s really just semantics.
The phrasing of Kelly’s quote is interesting because at first glance I thought he was referring to the thought process of futurists like Kurzweil, who build upon a chain of ideas in order to push one single grand scenario, an approach that violates the cardinal rule of hard-core future(s) studies and seems rooted in reductionism and ego.
Conclusion: I think Kelly’s piece is convincing and very necessary as the singularity debate unfolds, but it could benefit from placing more emphasis on Intelligence Amplification and the Global Body. In my opinion, the most powerful counter-scenario to the Singularity is one that places system-wide evolution and development of intelligence above any localized creation of the same. Kelly seems to be getting at that, but I’d like to see it rooted more firmly in the kind of science being discussed at venues like the upcoming Evo Devo Paris Conference.
In summary, as our complex adaptive life system pushes for more topsight, we auto-catalytically evolve and develop with technology, information and biology. This has been going on for all of human history and will continue as we strive to develop ever more powerful intelligence. A Strong AI Singularity scenario is fundamentally flawed because it underestimates how AI will co-evolve and co-develop with its environment, and because it erroneously views human intelligence as basically static. Kevin Kelly is yet another critical brain that appears to be backing this emerging view.