
Here is a question. Say I want to live forever and constantly move my brain from one body to another, so I never age. I also replace non-functioning parts of my brain with new ones made with stem cells. Eventually, after living for a long enough time, my brain is no longer anything like the original except for its collective memories. Would that thing still be me? To take it a step further, say I create clones of myself and each of them has a small part of my original brain. Would I still exist? Or say I create a collective consciousness in which I am able to communicate with each of my clones and we are able to share our experiences in one big cloud. What does that even mean for me? Am I even the same person, or something completely different? These problems have been really bugging me and I am just trying to see if anyone has an answer.

Often with questions that are composed of multiple further questions ("Here is a question," you write - but it isn't just one; I count four question marks!) it helps to take just one and deal with it carefully before moving on to the next. Of course some of the sub-questions will generate further questions, but that simply means that some patience is required. For example: 'I move my brain from one body to another, so I never age.' Why does it follow that you never age? If you retain your memories (line 3) and add to them, then you are changing and aging, psychologically. So you must mean that you don't physically age. But the brain does age. And why is it that 'I never age' follows from 'I move my brain from one body to another'? That seems to assume that who you are is a matter of having the same brain. Is that right? And if it is, then if, as you say, 'after living for a long time my brain is no longer anything like the original', then you are not after all the same person, so the question goes away. A...

Some scientists say that there is a part of the brain that is responsible for consciousness. If we replicate that part of the brain on a computer, will it too be conscious? Surely an inanimate object can't be conscious, right?

I think you have to ask what "inanimate" means. If it means what the root word "anima" (Latin for the Gk. "psyche") suggests, then if consciousness requires a psyche or anima, mind or soul, the answer to your third question is 'Right; an inanimate object can't be conscious.' On the other hand if "inanimate" just means "not living", then the answer to your second question might be, "Yes".

I am very interested in the concept of the Philosophical Zombie, though after doing some research, I see that it is an argument against Physicalism. This I don't understand. I can't seem to wrap my head around why this is so. Would someone be able to explain this better and more clearly than what I read on Wikipedia? Best, Aron G.

Try this. Suppose everything that is explained is explained by facts about the physical world - that's physicalism. If zombies were possible and existed, we would be physically indistinguishable from them. But they would have no consciousness. So whatever explains our consciousness cannot be physical; if it were, it would also produce consciousness in the zombies. That's the point of the physical indistinguishability between us and the zombies. There is, by the way, a very good article in the Stanford Encyclopedia of Philosophy called "Zombies", by Robert Kirk, which will clear everything up for you.

My question deals with consciousness. I believe I understand what it means for me to be conscious of what is occurring around me, but I have the feeling that a lot of this depends on what I believe to be the consciousness of what is occurring (perhaps in an abstract form) around me, or a result of something that is or had been conscious in some manner at one time. As an example of what I am attempting to describe: would I even take note of a person in my line of sight unless something about that person (it could be a very simple thing, such as a glance from that person in my direction, the shoes he or she is wearing, or the waves of the ocean) was somewhere along the line a conscious act of that person or of nature? And could this then be extended to a building or a tree, since the tree is a living thing and the building was constructed by people? I know there is a certain vagueness about this question, but I do not know how to put it in a more definite form.

Louise Anthony's reply is absolutely right, though the problem of other minds will always be with us, no doubt. I wonder whether there is something else in your mind that lies behind the question. Are you suggesting that whenever I am conscious there is a very interesting cause in the external world - the consciousness of others? So, for example, when I catch someone's eye, or when I become aware of the intelligence embodied in the design of a building, I become conscious. I think there is truth to this interesting empirical proposal, but I wonder whether what is happening is that I become more conscious than I was in these cases, or conscious in a new way. A certain amount of education involves this, and, as Nagel pointed out a long time ago, the consciousness of mutual desire does too. But presumably there has to be a basis of consciousness already, or I could not become aware of anything conscious tugging at me from the external world. There has to be a consciousness there to be...

If there is no such thing as consciousness, how can I conceive of consciousness, or of what consciousness must be like? Conversely, if consciousness exists, when did I "get" it, and where does it go when I'm meditating?

There are lots of things that don't exist that we can conceive of, most obviously fictional characters and objects, though our conceptions of them may be less detailed or thoroughgoing than our conceptions of things that do exist. (They may also be more detailed. We may know more about the characters in Tolstoy's novels than about some real people.) We can conceive of an elixir of youth, for example, though there may not be such a thing. And if consciousness exists, where does it go when you are meditating or fast asleep? Well, why does it have to go anywhere? Isn't it more like a noise, say, which is real enough in its way, but which just shuts down or disappears when the thing making it stops, and doesn't have to go anywhere? Of course, if the individual consciousness is a sort of stuff, rather than a kind of attention - a substance, even, in the philosophical sense - then presumably it keeps on going even if it isn't with you any more. But it is difficult to see how that would work; what...

Is it possible for the constituent parts of a conscious being to be conscious themselves? Can I infer from the fact that I am conscious that the cells which make up my body are not conscious?

My little toe is conscious, and it is a part of me, perhaps even a "constituent" part. I put in the scare quotes because I am wondering whether "constituent" means "essential"; if it does, my big toe is not a constituent part of me. But if "A is a constituent of B" means "A is part of B", then my big toe is a constituent part of me, but the phrase "constituent part" is a tautology - it says the same thing twice. Are there parts of me which are not constituent parts, but some other kind? You can imagine a doctor after surgery asking, "Is your little toe conscious?", and the answer might be "Yes"; working through to the big toe, the answer then might be "No". It is not at all obvious why we should feel the Cartesian tug to say that it is I, not my big toe, that is conscious, except for dubious epistemological reasons such as that we can imagine the consciousness without the toe. The same seems to be true of my psychological parts in Descartes' sense in the Meditations. My thinking might...

There are some strong arguments that if a computer appears to possess intelligence similar to a human's, then we must assume it too has self-awareness. Additionally, one could make a strong case that lesser animals have self-awareness, because they have the same type of brain as humans (just in a less sophisticated form). My question is this: if we assume (a) that computers of seemingly human intelligence are self-aware, and (b) that animals with lesser brains are self-aware, must we logically conclude that computers of lesser "intelligence" are also self-aware? In other words, are all computers self-aware? Is my toaster self-aware?

Why should the possession of intelligence (whatever we mean by this; say it means winning chess games against the world chess champion, winning bridge games with bad partners against the world bridge champions, issuing correct diagnoses for car repairs, predicting stock market fluctuations, analyzing individual psychology, and so on) require consciousness? We know that when Kasparov played Deep Blue he "sensed" a "weird" and "alien" kind of consciousness - or said and thought he did. I have the same thing with my very complicated telephone handset - it is against me, spitefully, deliberately and consciously. If we allow that playing chess well involves intelligence, then Deep Blue or Deep Fritz or Shredder show the following thing: intelligence does not require consciousness. If we deny consciousness to these systems, then your question does not arise at all, because (a) is false. (I have used "consciousness", but "self-awareness" implies much more, including I think the critique of elements of...