Exponential Human IQ Increase: Are We Living It?

March 06 2008 / by Alvis Brigis / In association with Future Blogger.net
Category: Technology   Year: General   Rating: 6

The Flynn Effect is the fascinating observation that average human IQ has been rising steadily since the invention of the tests that measure it. The effect may have been caused, directly or indirectly, by increased access to information, technology and human networks. If that’s the case, and the trend in human IQ is pegged to trends in these areas, then it’s also possible that we’re about to get a heck of a lot smarter in a very short span of time. Perhaps even exponentially smarter.

Ray Kurzweil has shown that technology is advancing at an exponential, or even double-exponential, rate. A Berkeley study and a report by IDC have both found that the amount of information on Earth is growing at an exponential rate. And it is clear that advances in communication technology are facilitating an explosion in the rate of communication between people, thus increasing the value of the whole according to Metcalfe’s Network Law.
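For readers who want the arithmetic behind that last claim, here is a minimal Python sketch of Metcalfe’s Law, which holds that a network’s value scales with the number of possible pairwise connections, n(n−1)/2, i.e. roughly n². The node counts are invented purely for illustration:

```python
# Toy illustration of Metcalfe's Law: a network's "value" is taken to be
# proportional to the number of possible pairwise links among its members,
# n * (n - 1) / 2, which grows roughly as n**2. Node counts are invented.

def metcalfe_value(n: int) -> int:
    """Number of distinct pairwise connections among n participants."""
    return n * (n - 1) // 2

for n in (10, 100, 1_000, 1_000_000):
    print(f"{n:>9,} nodes -> {metcalfe_value(n):>15,} possible links")

# Doubling the number of connected people roughly quadruples the number
# of possible conversations, which is the sense in which communication
# tech "increases the value of the whole".
```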

It’s undeniable that these accelerating trends have had a profound impact on social behavior, in particular on our ability to solve ever more complex problems. If you don’t believe me, simply look at how quickly a person or a group can locate information, bounce it off others and output the result as a rich white paper, business strategy or more advanced technology. Then imagine how difficult that same task would have been without the internet, huge bodies of amassed knowledge and an environment chock-full of complex and inspirational solutions to diverse problem sets.

Human brains are not closed systems. They are constantly learning better ways to input, sort and output information (ultimately, this manifests as culture). To increase their intelligence, they must encounter information and technology and interact with other humans. It has been shown that children raised in isolation from society are profoundly dysfunctional, and that humans who miss the critical periods for learning skills as simple as counting from 1 to 10, or certain ways of looking at time, cannot regain those abilities once the developmental windows close. This indicates a strong relationship between human intelligence and access to information and technology.

But just how strong is the link? Will humans get smarter faster, or is there a cut-off point after which technology and information systems speed off into a phase space where we cannot follow? Obviously, these are questions with far-reaching consequences. The answers will determine how we evolve, the likelihood of our survival and/or expansion, whether AI or IA is the future, and whether a singularity is possible, impossible or desirable.

The more critical the human-tech-info symbiosis turns out to be, the more likely it is that the Flynn Effect will continue and translate into exponential growth of our own intelligence in parallel with these other trends (auto-catalytically), rather than after them, as a mere by-product.
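To make that distinction concrete, here is a toy numerical sketch; the growth rates and starting values are invented purely for illustration, not fitted to any data. It contrasts a “by-product” regime, where human intelligence merely trails technology, with an auto-catalytic regime, where each amplifies the other:

```python
# Toy model (all rates invented) contrasting two growth regimes.
# "By-product": human intelligence H is pulled along by technology T
# but does not feed back. "Auto-catalytic": H and T amplify each other.

def simulate(coupled: bool, steps: int = 30, step_size: float = 0.1):
    h = t = 1.0                         # arbitrary starting levels
    for _ in range(steps):
        if coupled:
            dh = dt = 0.3 * t * h       # each feeds the other's growth
        else:
            dt = 0.3 * t                # tech compounds on itself alone
            dh = 0.1 * t                # H is dragged along as a by-product
        t += dt * step_size
        h += dh * step_size
    return round(h, 2), round(t, 2)

print("by-product     (H, T):", simulate(coupled=False))
print("auto-catalytic (H, T):", simulate(coupled=True))
# In the coupled run the two curves explode together (faster than any
# simple exponential); in the uncoupled run H only shadows T's rise.
```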


How Exactly Will Our System Get Smarter?

July 25 2008 / by Alvis Brigis / In association with Future Blogger.net
Category: Technology   Year: Beyond   Rating: 5

A favorite debate topic for many futurists, humanists, advanced defense theorists, sci-fi authors and Future Bloggers is the nature of future terrestrial intelligence increase. As change accelerates, how exactly will we and/or the system around us get smarter?

The most popular scenario by far is Artificial General Intelligence, aka machine intelligence that equals or surpasses that of humanity, probably because it is the most immediately relatable and because so much money is being poured into AGI research. In fact, some researchers are predicting a breakthrough in the field in just 5-10 years.

But there are a variety of other scenarios that could either outcompete this paradigm or conspire with it to accelerate intelligence in our system. These include human-based, alien-based, deeply systemic, or even exo-systemic possibilities.

Applying your particular brand of intelligence, which of the following do you think is the optimal path to intelligence increase in the acceleration era? (Survey at end of post.)

AGI: Human-generated machine intelligence, such as in the films 2001: A Space Odyssey and A.I.

Individual Intelligence Amplification: An individual human grows vastly smarter due to hard (biological) and/or soft (cognitive) upgrades, like Bean in Ender’s Game.

Social Intelligence Amplification: A group or humanity as a whole collectively grows smarter, thus taking on the stewardship role for our Earth and species.

Biological Intelligence Amplification: One, several or all of the other species on Earth, whether aided or on their own, evolve into the foremost intelligence on the planet. This could be viewed as a Gaian awakening.

Alien Contact: Through efforts like SETI, or through the aliens’ own initiative, we come into contact with some extraterrestrial intelligence based in our universe that either stewards us or gives us a nice boost, a la the Vulcans in Star Trek, though the reality would likely be considerably more extreme.


Kevin Kelly's Singularity Critique is Sound and Rooted in Systems Understanding

October 01 2008 / by Alvis Brigis / In association with Future Blogger.net
Category: Environment   Year: General   Rating: 1

The Singularity Frankenstein has been rearing its amorphous head of late, evoking reactions from a variety of big thinkers. The latest to draw a line in the sands of accelerating change is Kevin Kelly, Wired co-founder and evolutionary technologist, who makes a compelling case against a sharply punctuated and obvious singularity. His argument rests on the following points:

1) A Strong-AI singularity is unlikely to emerge before Google builds one.

“My current bet is that this smarter-than-us intelligence will not be created by Apple, or IBM, or two unknown guys in a garage, but by Google; that is, it will emerge sooner or later as the World Wide Computer on the internet,” writes Kelly.

I agree that powerful intelligence is far more likely to emerge as a property of the global brain and body, co-evolving with accelerating information growth, than in a lab.

More fundamentally, I think our system is consistently advancing its intelligence, which makes human intelligence non-static. The notion of Strong AI is therefore an illusion, because our basis for comparison 1) is constantly changing, and 2) rests erroneously on a simple assessment of the computational power of a single brain outside its environmental context, a point backed by cognitive historian James Flynn.

So yes, Google may well mimic the human brain and out-compete other top-down or neural-net projects, but it won’t really matter, because intelligence will increasingly be viewed as a network-related property. (It’s a technical point, but an important distinction.)

2) The Singularity recedes as we develop new abilities.

Kelly writes, “The Singularity is an illusion that will be constantly retreating—always ‘near’ but never arriving.”

This statement is spot-on. As we amplify our collective intelligence (IA) at an accelerating rate and develop new capabilities, we get better at peering ahead. The implication is that we co-evolve with technology and information to do so, assimilating intelligence along the way. In such an IA scenario, there simply is no dichotomy between us and it. It’s a we.

While Kelly alludes to IA in his World Wide Computer statement, he could bolster his argument by stressing the connection between human, informational and technological evolution and development.

(For more on this, check out this Future Blogger post by Will.)

3) Imagining a sequence of scenarios doesn’t take system dynamics into account. Thinking machines must co-evolve with the environment in order for their intelligence to be meaningful.

“Thinking is only part of science; maybe even a small part,” points out Kelly. “Without conducting experiments, building prototypes, having failures, and engaging in reality, an intelligence can have thoughts but not results. It cannot think its way to solving the world’s problems. There won’t be instant discoveries the minute, hour, day or year a smarter-than-human AI appears.”


The Future of Intellectual Attribution: Quantifying the Massive Idea Sea Requires Convergence

October 22 2008 / by Alvis Brigis
Category: Education   Year: 2018   Rating: 1

Intellectual attribution is far from perfect, but as we systematically quantify the nature of the vast Idea Sea in which we swim, we will also create a more effective and equitable market for new innovations.

Last week a pair of Nobel Prize-winning scientists conceded that much of their research had been based on an earlier study by a geneticist who now drives a shuttle for $8/hour just to keep food on the table, but of course they didn’t go so far as to offer him a share of the $1.5 million prize they’d been awarded. This example brings into sharp focus the limits of our current idea-attribution economy, a system that clearly isn’t encouraging a Nobel-caliber scientist to continue innovating for broader social benefit.

But rather than jump on the IP- and patent-bashing bandwagon as many bloggers tend to do, I’d like to explore how our idea attribution system might evolve over the coming decade.

First, let me be clear about my definition of the term “idea”. Ideas can be broken down more specifically into memes (“ideas or behaviors that can pass from one person to another by learning or imitation”), memeplexes (“groups of religious, cultural, political, and ideological doctrines and systems”), and temes (“information copied by books, phones, computers and the Internet”). These structures co-evolve with humans, ultimately forming the massive sea of what we commonly refer to as ideas. Though individuals often combine memes into valuable new memeplexes, no one person can ever truly claim total ownership of a concept that is essentially an outgrowth of the idea sea.
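To see why total ownership breaks down under this taxonomy, consider a small Python sketch. The classes and the even-split attribution rule below are my own hypothetical constructions for illustration, not an existing attribution system:

```python
# Hypothetical sketch of the meme / memeplex taxonomy described above,
# with a naive attribution rule: credit for a memeplex is split between
# its assembler and the ancestor memes it recombines. Names and the
# rule itself are invented purely for illustration.
from dataclasses import dataclass, field

@dataclass
class Meme:
    name: str
    originator: str            # person it is conventionally credited to

@dataclass
class Memeplex:
    name: str
    assembler: str             # person who combined the memes
    parts: list[Meme] = field(default_factory=list)

    def attribution(self) -> dict[str, float]:
        """Half the credit to the assembler, half split across ancestors."""
        shares = {self.assembler: 0.5}
        for meme in self.parts:
            share = 0.5 / len(self.parts)
            shares[meme.originator] = shares.get(meme.originator, 0.0) + share
        return shares

theory = Memeplex(
    "prize-winning theory", "laureate",
    [Meme("earlier study", "forgotten geneticist"), Meme("method", "laureate")],
)
print(theory.attribution())
# -> {'laureate': 0.75, 'forgotten geneticist': 0.25}
```

Even this crude rule makes the point: once provenance links are tracked, a fraction of the credit (and, in a future attribution market, the reward) flows back to the forgotten contributor rather than pooling entirely with the final assembler.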
