I just had a job interview today. As is often the case, I am now nervous as to whether or not I got the job. But in the process of being nervous, I got to (over)thinking about my own nervousness and potential disappointment if I don't get the job, and I've come to wonder something: would it be rational for me to be disappointed at not getting the job? I mean, I suppose if we were to endorse the logic that if (a) something is important to me, (b) it is rational to be disappointed when important things fail/fall through, and (c) getting this job is important to me, then it seems logical to be disappointed. But why endorse this logic in the first place? Why not just apply, do your best and then, if it falls through, shrug and move on to other opportunities? Is it in any meaningful way rational to be disappointed, sad or frustrated when things don't go our way? It may be natural, and it may be human, but that doesn't mean it has to actually make sense.

Great question, and one with very deep historical roots. The ancient Stoics, for example, thought that remorse and regret were not compatible with being a true Sage, and I think the same arguments they give about these responses would also apply to those of disappointment or frustration when things don't go as you had hoped they would. But to extend this way of thinking even further, you might then go on to ask whether it is even ever really rational to hope for something that is not under your own control. For the Stoics, the only thing that is under our control (or, at least, can and should be under our control) is how we react to things. As a result, such "bad" reactions as remorse, regret, disappointment, or frustration are not the right way to respond to things that happen in the world. The true Sage would understand how the world works so well that nothing he or she would ever do would give rise to remorse or regret. Similarly, the Sage would understand the world so well that nothing would...

If there were a good reason to believe that irrational thinking--or at least a certain train of irrational beliefs--leads to greater happiness and prosperity (and I think there is a bit of psych research that suggests this is true), could a rational person decide to think irrationally--to adopt irrational beliefs--and would that itself be a rational decision?

Before I try to give an answer to your question directly, I want to object to the claim that seems to be its basis. I do believe that recent psychological research about happiness supports at least some elements of what might be called "irrationalism." On the other hand, it seems to me that this same research always treats happiness as a purely subjective property, and I want to make clear that this subjectivist treatment of happiness is very much at odds with the objectivist presumption in most of the philosophical literature on happiness. To quote myself (the easiest author for me to remember!), "Giddy morons may suppose they pursue their interest by doing what only makes them giddier and more foolish, but sensible evaluation will conclude that such lives are nothing to envy. The addict's high, even secured by a lifetime supply of intoxicants, is no model of surpassing success in the pursuit of self-interest" (T. C. Brickhouse and N. D. Smith, Socratic Moral Psychology, Cambridge:...

Do you only do a good deed (or just about anything) because you're gaining something from it yourself? I have discussed this with my friend, and she thinks people are naturally "good". I just think that as we are animals, we are naturally finding ways to survive. Of course sometimes people make bad decisions, but they are still thinking that the choice is best for them. -Heikki

Looks to me as if you and your friend are having a debate in which the only options on the table are not the only ones available for consideration. Part of what it means to be a human animal is to live with others. This means that just at the level of fitness, we will do better if we have the resources (whether natural or socialized; I suspect a good deal of both) to deal with others in positive ways. Precisely because there are many others around us who really matter to us, the distinction between "best for me" and "best for others" becomes both artificial and also distorting. What is "best for me" is often for me to sacrifice at least some degree of narrow self-interest in order to help others to flourish. This is the kind of thing that parents and friends do for each other all the time. But it is not limited simply to those close to us. Studies have shown that people who are given money and told to spend it on others report greater happiness than those who are given money and told to...

Is "you should..." synonymous with "it is rational for you to..."?

Some philosophers would derive the former from the latter--Kant, for example, is generally supposed to think that obligation derives directly from rationality. But I think it is going to depend upon what specific notions of responsibility ("should") and rationality are at work. I think a good way to see how a negative answer to your question might work is to ask a different version of your question: Is it self-contradictory to say that one shouldn't always be rational, or to say that one should (sometimes) be irrational? For example, if one supposes that morality is wholly a social construct, without any basis in reality beyond social convention (I don't believe this, but some do), then it seems to me that one might recognize duties imposed by whatever conception of morality was currently fashionable, even duties that seemed (and indeed were) irrational. But that is only if one does not also think that the principles of rationality are social constructs. Usually, however, those who think that morality...

Do you think it's possible, even theoretically, for there to exist a substantive belief (any kind, about anything) that is impervious to any argument, cannot be debunked, etc., and yet is false?

Yes, at least theoretically. An example of how this might be is given in the first of Descartes' Meditations on First Philosophy. Descartes asks us to consider a world that is governed by a kind of evil god who delights in nothing more than making us believe what is false. In such a world, we would be able to find no evidence at all to debunk the falsehoods to which the god inclined us. Descartes challenges us to see if we can be absolutely sure that we do not actually inhabit such a world! Modern popular culture has taken up this scenario in various entertaining ways. I think it is fair to say that the worlds imagined in "Total Recall" and "The Matrix" are excellent examples of scenarios that raise the theoretical possibility of false belief that is (at least for those who don't escape the Matrix!) invulnerable to refutation.

There are many arguments for the existence of god (e.g., the ontological argument) which, though interesting, probably don't actually account for the religious belief of even their primary exponents. I suspect that a person may be aware of many reasons for belief in a proposition "P" but that only some of these are actually causally linked to his belief that "P"; others he may offer as a way of persuading non-believers, or convincing them of his reasonableness, but these don't actually explain his own conviction. How do we differentiate between arguments or evidence which create belief, and those which merely support it? Is there some link that we perceive between certain reasons and belief but not others?

It might help to notice that there are distinct senses of "reasons for believing that P." The first sense (usually called "propositional justification" by epistemologists) has to do with there being some fact of the matter that would make it reasonable for me--that would justify me--in believing that P, should I happen to be aware of that fact. Hence, to use an example that has been used by others, the fact that there is smoke billowing out of the house (whether or not anyone is aware of it) is a good reason to think the house is on fire. The other sense is called "doxastic justification" by epistemologists, and has to do with what a person actually has, among his (other) beliefs, as justification for that person's belief that P. So I would be doxastically justified in believing that the house is on fire if I were aware of the smoke billowing out, and were also aware of the connection between smoke and fire. It is a point of contention among epistemologists precisely what role...

Why can't I remove my emotions (such as falling in love) by rationality?

The relationship between reason and the emotions is one that has been wondered about for a very long time--going back to our most ancient literature, including the Old Testament and Homer's Iliad. I doubt that I will be able to resolve this one for you, but I do have a suggestion to make. I'm not sure this is a philosophical question, but I also think that you (or most people) can do what you say you can't do. If you think that you are feeling a certain emotion that is not compatible with a rational assessment of things--for example, you feel as if you are falling in love with some movie star whom you will not likely ever meet--then there are various rational steps you can take to get rid of the emotion. Ever heard the one about taking a cold shower? OK, maybe it is not as simple as that, but we certainly can look for things that will divert our attention from an emotion, or that will use the energies of the emotion in different ways (thus serving to deflect it, as part of a strategy...

What is a reason (to do or believe something)? Suppose that someone who kills another person should be punished and that Ann killed somebody. Are there two reasons or just one reason to punish Ann?

Seems like one reason to me. Reasons (and reasoning) can be complex, of course, but there would be no reason to punish Ann if she had not done a punishable act, and there would be no reason to punish her if acts such as the one she did were not punishable. So the way to count the (single) reason seems to be this: Ann committed a punishable act (namely murder--not all killing seems to me to be punishable, which is why I changed your wording).