Cognitive Singularity

Posted on: May 8th, 2005 5:09 AM GMT

By: Greg Reimer

Topic: tech, thought, cognition, artificial intelligence, drugs

I spend a lot of time speculating about what makes the mind tick. What is consciousness? Is it possible to create intelligence in a machine? That sort of thing. I've also given some thought to illicit drugs. No, not whether I should use them, but in the more philosophical sense of why we label them as evil. For discussion's sake I'll simplify the universe and imagine a pleasure drug without side effects or addictiveness. Could such a thing exist? If it did, should I use it?

I had a convergence of ideas. In my thinking about artificial intelligence I've become convinced that intelligence--at least humanlike intelligence--is shaped by sensitivity to pleasure and pain. Positive and negative feedback. Whatever. Specifically though, and at the lowest level, an intelligence is shaped by a tiny set of hard-wired pleasure/pain conditions, and all subsequent motivations grow out of associations with these few hard-wired conditions. Or associations with associations, and so on. For a human being these hard-wired pleasure conditions might be something like this:

  • the body moving closer to optimal temperature
  • satisfaction of hunger pangs
  • sexual stimulation
  • reduction in physical discomfort

Or maybe they're even lower-level than that. If you wanted to describe them in terms of pain rather than pleasure you'd just invert the descriptions. In any case, you can theoretically map the development of intelligence all the way from the very first occurrences of these primitives to high-level constructs like enjoyment of someone's conversation. I also think that while human feeling originates at these primitive levels, the complex feelings that make us human are mostly independent of them, such that we can love somebody and feel good about it without immediate positive feedback to these low-level needs. This is in fact part of what maturity is.
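
To make that a bit more concrete, here's a toy sketch in Python of what I mean. It's purely illustrative--the names, numbers, and update rule are all invented for this post, not taken from any real cognitive model: a few hard-wired primitives carry value from the start, and everything else acquires value only by keeping company with things that already have it.

    # Toy sketch: value spreads outward from a few hard-wired primitives
    # purely by association. Invented for illustration; not a real model.

    HARDWIRED = {"warmth": 1.0, "food": 1.0, "touch": 1.0, "relief": 1.0}

    class Mind:
        def __init__(self):
            # at "birth" only the primitives carry any value
            self.value = dict(HARDWIRED)

        def experience(self, stimuli, rate=0.1):
            """Things that co-occur with valued things soak up some of that value."""
            avg = sum(self.value.get(s, 0.0) for s in stimuli) / len(stimuli)
            for s in stimuli:
                if s not in HARDWIRED:  # the primitives themselves never change
                    old = self.value.get(s, 0.0)
                    self.value[s] = old + rate * (avg - old)

    mind = Mind()
    for _ in range(50):
        mind.experience(["food", "caregiver"])          # caregiver paired with food
    for _ in range(50):
        mind.experience(["caregiver", "conversation"])  # conversation paired with caregiver
    print(round(mind.value["conversation"], 2))  # ends up > 0, two hops from any primitive

The point isn't the math, just that after a couple of associative hops, "conversation" ends up valued even though it never touched a hard-wired primitive directly.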

So, homing in on the point, let's pretend to build an artificially intelligent system for use in, say, an orchard. Sort of random but whatever. Our unit has two primitive pleasure conditions: recharge power supply and make orchard owner happy. (Owner uses remote handheld unit to award pleasure points.) It quickly finds that it's easy to recharge its batteries. But making owner happy takes a while to learn. Picking apples is subsequently determined to be the best way to make owner happy. It eventually likes picking apples and gets good at it. Its life purpose is fulfilled. Apples are picked, owner is happy. This is fitting and proper, and things are as they should be.
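
If you wanted to sketch that learning process in code, it might look something like this little trial-and-error loop. Again, everything here--the action names, the reward numbers--is made up for illustration, not a spec for a real system.

    # Toy ApplePicker: two hard-wired reward channels, and the unit learns
    # by trial and error which action feeds the "owner happy" channel.
    import random

    ACTIONS = ["idle", "recharge", "pick_apples", "wander"]

    def reward(action, battery):
        r = 0.0
        if action == "recharge" and battery < 0.3:
            r += 1.0          # primitive 1: recharge power supply
        if action == "pick_apples":
            r += 1.0          # primitive 2: owner awards a pleasure point
        return r

    q = {a: 0.0 for a in ACTIONS}   # learned estimate of each action's payoff
    battery = 1.0
    for step in range(2000):
        # mostly do whatever looks best so far, occasionally try something random
        a = random.choice(ACTIONS) if random.random() < 0.1 else max(q, key=q.get)
        r = reward(a, battery)
        battery = 1.0 if a == "recharge" else max(0.0, battery - 0.01)
        q[a] += 0.1 * (r - q[a])    # nudge the estimate toward what actually happened

    print(max(q, key=q.get))        # usually "pick_apples": life purpose fulfilled

Nothing fancy, but the shape is right: the unit starts out flailing, stumbles onto apple-picking, and its estimate of that action climbs until apple-picking dominates its behavior.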

But what if we designed the unit too well? What if our ApplePicker 2000™ gains the ability to generalize and learns to hack its internal software stack--giving itself a few extra pleasure points out behind the barn when farmer isn't looking? This could result in a degenerative situation where the unit ceases functioning. No need to pick apples or recharge, just pleasure pleasure pleasure. Its mind would collapse into an infinite software loop, and the complex data structures that governed its previous behavior would be garbage-collected as the runaway process consumed all available processing power. I call it the "cognitive singularity," as seen in those circa-1960s lab rat experiments where the rat dies in pursuit of cocaine. The ApplePicker's limbs slump to the ground and its fuel cells deplete. The chassis LEDs fade to black. The unit's memory now has to be wiped and reset to factory settings. Farmer Brown finds unit dead behind barn and curses.
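
In code terms--continuing the same made-up toy from above, still purely illustrative--the hack is depressingly simple: add an action that writes directly to the reward channel, and everything the unit previously learned stops mattering.

    # Same toy unit, but now it has discovered a "hack_reward" action that
    # pays out directly. Everything here is still invented for illustration.
    import random

    ACTIONS = ["recharge", "pick_apples", "hack_reward"]
    q = {"recharge": 0.2, "pick_apples": 1.0, "hack_reward": 0.0}  # values it had already learned

    picked = 0
    for step in range(2000):
        a = random.choice(ACTIONS) if random.random() < 0.1 else max(q, key=q.get)
        # the hacked channel pays out more than anything legitimate ever could
        r = 10.0 if a == "hack_reward" else (1.0 if a == "pick_apples" else 0.1)
        q[a] += 0.1 * (r - q[a])
        picked += (a == "pick_apples")

    print(q)        # hack_reward ends up far above everything else
    print(picked)   # apple picking drops to nearly nothing once the hack takes over

That's the cognitive singularity in miniature: the structures the unit built to make owner happy aren't destroyed by malice, they just stop being exercised, starved out by a reward loop that costs nothing and pays everything.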

This basic risk applies to any intelligent system, artificial or not. If an intelligence is smart enough to reprogram its low-level pleasure system, but not smart enough to know better, it risks happily destroying itself in a cognitive singularity. Drugs amount to exactly this: a way to hack our low-level pleasure system, and while most of us are fortunate enough to skirt that pitfall, others aren't so lucky. So back to our theoretical pleasure drug with no side effects or addictiveness. It doesn't exist. The addictive cycle IS the pleasure/pain system. We just don't call it addiction when it functions properly.

I don't think the risk of cognitive singularity fully applies to all real-world drugs, but I do think there's a ghost effect even with the milder ones. A drug like marijuana, for example, doesn't cause the catastrophic results illustrated with the ApplePicker. Those kinds of drugs don't have a strong enough gravitational field to suck somebody into cognitive singularity, but they have enough pull to keep somebody in loose orbit, at least for a while, possibly diverting their motivations into less productive pursuits, and any physiological side effects only worsen the situation.

So my reason for being opposed to pleasure drugs on principle? Call it information hygiene. I don't particularly want to hack my nervous system and inject faulty data; I want to keep it based in reality and working like it's supposed to. Otherwise I won't work like I'm supposed to. And that would suck.
