How Exactly Will Our System Get Smarter?

July 25 2008 / by Alvis Brigis / In association with Future Blogger.net
Category: Technology   Year: Beyond   Rating: 5

A favorite debate topic for many futurists, humanists, advanced defense theorists, sci-fi authors and Future Bloggers is the nature of future terrestrial intelligence increase. As change accelerates, how exactly will we and/or the system around us get smarter?

The most popular scenario by far is Artificial General Intelligence (AGI), machine intelligence that equals or surpasses that of humanity, probably because it is the most immediately relatable and because so much money is being poured into AGI research. In fact, some researchers are predicting a breakthrough in the field in just 5-10 years.

But there are a variety of other scenarios that could either outcompete this paradigm or conspire with it to accelerate intelligence in our system. These include human-based, alien-based, deeply systemic, or even exo-systemic possibilities.

Applying your particular brand of intelligence, which of the following do you think is the optimal path to intelligence increase in the acceleration era? (Survey at end of post)

AGI: Human-generated machine intelligence, such as in the films 2001: A Space Odyssey and A.I.

Individual Intelligence Amplification: Individual humans grow vastly smarter due to hard biological and/or soft cognitive upgrades, such as Bean in Ender’s Game.

Social Intelligence Amplification: A group, or humanity as a whole, collectively grows smarter, thus taking on the stewardship role for our Earth and species.

Biological Intelligence Amplification: One, several or all of the other species on Earth evolve into, or emerge as, the foremost intelligence on the planet, whether aided or on their own. This could be viewed as a Gaian awakening.

Alien Contact: Through efforts like SETI or those of the aliens themselves, we come into contact with some extra-terrestrial intelligence based in our universe that either stewards us or gives us a nice boost, a la the Vulcans in Star Trek, although this would likely be considerably more extreme.


Kevin Kelly's Singularity Critique is Sound and Rooted in Systems Understanding

October 01 2008 / by Alvis Brigis / In association with Future Blogger.net
Category: Environment   Year: General   Rating: 1

The Singularity Frankenstein has been rearing its amorphous head of late and evoking reactions from a variety of big thinkers. The latest to draw a line in the sands of accelerating change is Kevin Kelly, Wired co-founder and evolutionary technologist, who makes a compelling case against a sharply punctuated and obvious Singularity. His argument is based on the following points:

1) A Strong-AI singularity is unlikely to emerge from a lab; Google will get there first.

“My current bet is that this smarter-than-us intelligence will not be created by Apple, or IBM, or two unknown guys in a garage, but by Google; that is, it will emerge sooner or later as the World Wide Computer on the internet,” writes Kelly.

I agree that powerful intelligence is far more likely to emerge as a property of the global brain and body, co-evolving with accelerating information growth, than in a lab.

More fundamentally, I think our system is consistently advancing its intelligence, making human intelligence non-static. The notion of Strong AI is therefore an illusion, because our basis for comparison 1) is constantly changing, and 2) erroneously rests on a simple assessment of the computational power of a single brain outside of its environmental context, a finding backed by cognitive historian James Flynn.

So yes, Google may well mimic the human brain and out-compete other top-down or neural-net projects, but it won’t really matter, because intelligence will increasingly be viewed as a network-related property. (It’s a technical point, but an important distinction.)

2) The Singularity recedes as we develop new abilities.

Kelly writes, “The Singularity is an illusion that will be constantly retreating—always ‘near’ but never arriving.”

This statement is spot-on. As we amplify our collective intelligence (IA) at an accelerating rate and develop new capabilities, we get better at peering ahead. The implication is that we co-evolve with technology and information to do so, assimilating intelligence along the way. In such an IA scenario, there simply is no dichotomy between us and it. It’s a we.

While Kelly alludes to IA in his World Wide Computer statement, he could bolster his argument by stressing the connection between human, informational and technological evolution and development.

(For more on this, check out this Future Blogger post by Will.)

3) Imagining a sequence of scenarios doesn’t account for system dynamics. Thinking machines must co-evolve with the environment in order for intelligence to be meaningful.

“Thinking is only part of science; maybe even a small part,” points out Kelly. “Without conducting experiments, building prototypes, having failures, and engaging in reality, an intelligence can have thoughts but not results. It cannot think its way to solving the world’s problems. There won’t be instant discoveries the minute, hour, day or year a smarter-than-human AI appears.”
