Scattered thoughts on self-awareness and AI

I’ve always been a fan of androids as portrayed in Star Trek. More generally, I think the idea of an artificial intelligence you can talk to and teach things to is really cool. I admit it is a little bit weird that I find the idea of teaching things to small children absolutely unattractive while finding the idea of doing the same to a machine thrilling, but that’s just the way it is for me. (I suppose the fact that a machine is unlikely to cry during the night and need its diaper changed every few hours might well be a factor at play here.)

Improvements in the field of AI are pretty much commonplace these days, though we’re not yet at the point where we could talk to a machine in natural language and be unable to tell it apart from a human. I used to take for granted that, one day, we would have androids that are self-aware and have emotions, exactly like people, with all the advantages of being a machine—such as mental multitasking, large computational power, and more efficient memory. While I still like the idea, nowadays I wonder whether it is actually feasible or sensible.

Don’t worry—I’m not going to give you a sermon on the ‘dangers’ of AI or anything like that. That’s the opposite of my stance on the matter. I’m not making a moral argument either: Assuming you can build an android that has the entire spectrum of human emotions, doing so is, morally speaking, no different from having a child. You don’t (and can’t) ask the child beforehand whether it wants to be born, or whether it is ready to go through the emotional rollercoaster that is life; generally, you make a child because you want to, so it is in a way a rather selfish act. (Sorry, I am not of the school of thought according to which you’re ‘giving life to someone else’. Before you make them, there’s no one to give anything to. You’re not doing anyone a favour, certainly not your yet-to-be-conceived potential baby.) Similarly, building a human-like android is something you would do just because you can and because you want to.

I find self-awareness a rather difficult concept to grasp. In principle, it should be really easy: It’s the ability to perceive your own thoughts and to tell yourself apart from the rest of the world. I really don’t think this is something we’re born with, but rather something we develop. Right after you’re born, you’re a jumbled mass of feelings and sensations that try to make sense of themselves, but can’t. If it were somehow possible to keep an infant alive while depriving them of any external stimuli (vision, hearing, smell, touch, taste), I very much doubt that jumbled mass would ever manage to make sense of itself. In such a hypothetical scenario, I’m not even sure this baby could think as we understand it: What can you reasonably think about, if you know literally nothing?

Research to better understand how the brain develops self-awareness is going on all the time; recently, some studies have pinpointed the part of the brain where self-awareness might be generated, while others suggest there’s no single area responsible for it; rather, it might be the work of several brain pathways acting together. I really hope they’ll figure it out soon, because this issue drives me bonkers. From my personal experience of self-awareness, it appears as if there is some kind of central entity that is constantly bombarded with external stimuli, memories, feelings, and so on. This central entity (my self-awareness, or me) also appears to be independent of the stimuli it is exposed to: You’re still you whether you’re smelling a cup of coffee or not, or whether you’re thinking about a certain past event or not. This seems to make sense, but it is in sharp contrast with the idea I expressed above, i.e. that self-awareness develops thanks to all the internal and external stimuli we’re exposed to from day one. Either there’s no central entity, or it works differently than I think.

Some suggest self-awareness is an illusion brought about by the constant, small changes in your perceptions. These changes are so tiny, and their flow so smooth, that we get an illusion of continuity—so the argument goes—an illusion which we call ‘self-awareness’. This is a very interesting theory, and I think it makes sense at least to an extent. The thought that ‘you’ could be the sum total of all the perceptions (internal and external) you experience in a given unit of time has crossed my mind more than once. Over the years, people change their minds, they behave differently, and in some cases might become the opposite of what they used to be. This might be nothing more and nothing less than the result of the changes in our perceptions and of how they change what’s inside our brains. Still, in order to work, an illusion requires someone (or even something) to be tricked by it. If the self is just an illusion and there really is no ‘you’ there, who or what is being tricked? It’s almost as if this theory says that the illusion is tricking itself, which is quite paradoxical; on the other hand, I think this conclusion shows there’s no such thing as a ‘simulated consciousness’, as in a machine that thinks it is self-aware when in fact it is not. (This has always been quite clear to me.)

Before I get too philosophical, let’s go back to self-aware machines. I think we’ll have a hard time giving machines self-awareness before we thoroughly understand what it is and how it works. Some suppose it might be a spontaneous consequence of complexity; for example, if you have a neural network learn bagillions of things, self-awareness might ‘magically’ arise once a certain level of interconnectedness between all that information has been reached. That’s an intriguing idea, and it implies we might be able to give self-awareness to a machine without having to understand how the process works first, but I am a bit sceptical.

On top of that, how do you prove that the machine you built is self-aware? As a matter of fact, you can’t prove that directly even for humans: There’s no way to be sure that the person you’re talking to is actually perceiving their own thoughts, but we generally consider some things acceptable signs of self-awareness, such as the ability to talk about yourself, to recall past experiences, to interact with your surroundings, to avoid obstacles, to acquire new knowledge and apply it, and so on. If a machine showed these signs, we might conclude it is self-aware, but what if some of these signs were missing? For example, dogs don’t talk much about themselves. Are they not self-aware? I don’t know for a fact, but I doubt they aren’t.

It is possible that there are different levels of self-awareness; maybe it is not a binary phenomenon, but a continuous one. If it is true that self-awareness develops thanks to the external stimuli you’re exposed to, then it is reasonable to think that different species may have different levels of self-awareness, because not all species can experience the same stimuli, let alone process them. Furthermore, it seems reasonable to assume that a critical brain mass is necessary to be aware of your own thoughts, so the smaller the creature’s brain, the ‘lower’ (no offence intended) the degree of self-awareness we should probably expect, and below that critical mass there would be no self-awareness at all.

This takes us to another aspect of the issue. If self-awareness is (as it is reasonable to suppose) a product of evolution, then each species’ self-awareness has been fine-tuned to best suit the needs of the species itself. Bacteria, for example, needed no self-awareness to do what they do, and so they haven’t developed any. (To be fair, they probably couldn’t have developed self-awareness even if they’d needed to, because it’s hard to have a brain when your body is a handful of cells at best.) A machine, even if it were capable of learning and thus of ‘evolving’ to an extent, wouldn’t be subject to the same evolutionary pressure as biological creatures; its ‘survival’ would not depend on how fit it was for its environment, and it would not ‘reproduce’ with other machines so that any ‘mutations’ could be passed on or selected against. In other words, it would have no need pushing it to develop self-awareness or emotions of any kind. (On the plus side, this suggests it would hardly have a reason to go on a human-killing spree.) If a machine ever has self-awareness or displays emotions, I think it’ll be because we made it that way, quite possibly just because it was better for us, in one way or another. A self-aware machine with no objective need for sadness or guilt, for example, might well ask you why the heck you enabled it to feel such unpleasant emotions if it wasn’t strictly necessary. An answer could be that we get along better with an emotional machine than with an unemotional one, but the machine might find this reason insufficient. (Which, on the minus side, might give the machine a reason to go on a human-killing spree, especially if we gave it an instinct for revenge.)

Still, I suppose it wouldn’t be entirely impossible for machine ‘feelings’ to develop in a sentient machine, even without evolution. Maybe they wouldn’t be happiness or sadness, since these have a strong biological basis, but they could be something else, something only a machine can ‘feel’ and that we cannot even imagine. Future developments in any given field have a tendency to surpass our imagination, and whatever future AI turns out to be like, I’m looking forward to seeing it.
