I've been rereading and thinking about the penultimate chapter of Robert Wright's Nonzero. The chapter's title is a backhanded acknowledgment that many would consider the ideas therein absurd. The author himself does not so much defend them as argue that they are worth taking seriously.
The technology we might need for a literal global consciousness is still in the realm of science fiction, which is probably just as well. The brain mechanism that directs our conscious attention is not really part of our consciousness itself. Are we really ready to have that power over other people's brains - and to trust them with it over ours? Neurons seem prone to certain unwarranted intimacies with each other.
I think we should start practicing with collective intelligence first. This may sound like much the same thing, but it's actually quite familiar. Whenever a group of people solves a problem no one individual could have solved alone, it is in a sense an act of collective intelligence. Some of the most interesting acts of collective intelligence involve computers and the internet, perhaps because they still have the most unexplored potential.
A few of my favorites are John Hiler on the blogosphere, this unique experiment chronicled by Kevin Kelly, and Cloudmakers. Anyone have an idea for building something like these on a truly global scale? Oh yeah, the problems it contemplates should relate to the survival of the species - one of the things you'd expect an intelligent mind to consider is its own survival, right? Of course a real godmind would be interested in many bigger things than that, but we've got to start somewhere.
Of course it's only really possible if we (like neurons) are modular units capable of self-assembling into a mind more capable than any of its components. If so, maybe it will bring us closer to being ready for step two ...
Sunday, June 29, 2003