Most discussions of AI today focus on whether a machine can think as we are said to think, whether it can have a meaningful awareness of its environment as we do, or even whether it can be sentient and therefore said to be a feeling entity. Very little consideration, however, is given to whether an autonomous machine can be morally responsible for its actions. One reason, I suppose, is that it is easier for us to equate computational operations with thinking than to equate the calculation of hedonic outcomes with making altruistic decisions based on deeply held moral convictions. So my question is: Can a machine have a conscience?