Sunday 29 December 2013

Do ghosts exist?

It's not uncommon - even in this day and age - to find genuine belief in the existence of ghosts. Even in cultures and populations where the belief that ghosts definitely exist isn't popular, agnosticism about the existence of ghosts is probably still prevalent. By agnostic, I mean that people do not reject the possibility that ghosts exist. But with all the facts that we know about the universe today, shouldn't we be able to reach - with some certainty - a conclusion regarding the existence of ghosts? With some amateur philosophical 'manoeuvring', I attempt to demonstrate here that we have very good reason to believe that ghosts, or at least ghosts according to a particular conception, do not exist.

Casper the Ghost (Photo courtesy: maditsmadfunny.wikia.com)
Of course, to do this we need to start with a conception, or a definition of a ghost. Here are some properties - off the top of my head - which we may attribute to a ghost:
  • A ghost is immaterial (non-physical)
  • A ghost at least partly consists of some element of a person who was once alive but is now dead
  • A ghost may also be conceived of as the spirit, consciousness, or soul of a dead person. 
The essential idea is that a ghost must be - whatever it is - a non-physical thing, and at least part of it belongs to a once-living person. Now let's get on to the analysis. 

To start off: what does it mean for something to be non-physical? It doesn't answer the question to simply list examples of things which are non-physical (e.g. God, mind, numbers), because these examples do not tell us anything about the label 'non-physical'. Perhaps we can understand something to be non-physical if it does not interact causally with anything governed by known physical laws. What this means is that a non-physical thing is not described by physical laws, and it is in a completely different realm from the physical world. A non-physical thing can do whatever it likes in the non-physical world, but it cannot in any way change how the physical world works. 

However, if that is true, all those ghost-featuring movies must have made a horrible mistake: EITHER ghosts are physical, OR ghosts cannot have any interactions with anything in the physical world at all. (We wouldn't have to worry about ghosts hiding in our bedrooms, because they cannot do anything physical to us.)
Ghost (1990): Sam Wheat kissing Molly Jensen in this classic scene as a ghost. (Photo courtesy: tasteofcinema.com)
So, what if ghosts are physical? In my opinion, that doesn't seem to be a good way out either. To be physical, ghosts must interact causally in accordance with our known physical laws. Unless we're willing to admit that physics has gotten its fundamentals terribly wrong, it isn't consistent to hold that ghosts can do the things they supposedly do (e.g. alter mass, occupy no space, not be made up of atoms). But remember, the knowledge of physics that human beings have built up over history has given us enormous predictive success with the physical world: with that knowledge, we are able to land on the moon, send instant messages across the globe and split atoms to create horribly powerful weapons of mass destruction. Is it really plausible to think then that ghosts exist?

Perhaps a few good rejoinders can still come from those 'ghost realists' (a term I just made up to refer to those who think that ghosts exist). Firstly, realists may concede that ghosts cannot causally interact with the physical world. Okay, ghosts cannot move doors and make creepy weeping noises. But the fact that ghosts cannot interact physically does not stop ghosts from causally interacting with us in the mental realm! Effectively, this is saying that ghosts can affect us only psychologically. It would be more plausible, then, to understand a ghost as a mental entity, something like numbers and concepts (or *beliefs?!). But would you say that numbers or concepts exist? That is a much bigger question to deal with... (the question of whether mental things exist [not 'mental' as in crazy or silly]).

The second (much less satisfactory) way that 'ghost realists' can respond is to say that ghosts are still physical, and they are still constrained by physical laws. This strategy allows us to understand ghosts as physical entities, but they would be much less robust than the 'ghosts' we know from movies and stories. Their 'status' would be equal to any other physical thing - humans, amoebas, tables, and the like - except that they are not living things! You can insist that this is true, but this approach offers a very poor and bizarre explanation of the universe, and it is largely inconsistent with what we already know about it.

So what's the verdict? Do ghosts exist? That very much depends on what type of ghosts we're talking about...
  • Physical ghosts (highly unlikely)
  • Non-physical ghosts capable of physical interactions (impossible; unless we define physical differently)
  • Psychological ghosts (possibly; but even if they do exist, they would be real only to the same extent that numbers are real: "it's all in the mind")
Q.E.D.

*I hesitate to compare ghosts with beliefs, because beliefs are thought to have physical counterparts by some thinkers. 


Wednesday 18 December 2013

Some Thoughts About Morality II

In Some Thoughts About Morality I, I briefly discussed the origins of 'right' and 'wrong' and what follows from adopting the view that we ought to act morally because morality is grounded in our intuitions (conscience). I ended the post with an introduction to the "evolutionary" theory of morality, giving neither a clear nor a comprehensive account of what it is about. 

As I was writing Some Thoughts About Morality I about five months ago, frankly I was at that point unaware of the literature in Evolutionary Psychology.* While I was quite illuminated by some of the material that I then came across, I will try to present my initial idea here in a non-scholarly way as I had originally intended.** 

The original "evolutionary" theory of morality is this. The claim is that there is no intrinsic value to morality, i.e. morality is not something that's good for its own sake: morality is good only insofar as it contributes to the reproductive fitness of human beings as a group.*** The very rough idea is that if you assume there are two types of human beings in the world - (1) moral, altruistic ones versus (2) immoral, selfish ones - then Group A, with more moral individuals, will have greater reproductive fitness than Group B, with more selfish individuals.
The red dots represent altruistic/moral individuals, while the blue dots represent the selfish/immoral individuals. The claim is that Group A (with moral individuals) is more likely to survive through natural selection. 
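
To make the claim a bit more concrete before going on, here is a minimal back-of-the-envelope sketch. The numbers in it are entirely made-up assumptions (a fixed baseline survival chance plus a small boost per altruist), not anything drawn from evolutionary biology; it only illustrates the shape of the claim, not its truth.

```python
# A toy sketch of the group-selection intuition above.
# All numbers are illustrative assumptions, not empirical claims.

def group_survival_chance(n_altruists, base=0.3, boost_per_altruist=0.01):
    """Assume each altruist slightly raises the whole group's chance of
    surviving a given generation (capped at 1.0)."""
    return min(1.0, base + boost_per_altruist * n_altruists)

group_a = {"altruists": 40, "selfish": 10}   # mostly moral/altruistic
group_b = {"altruists": 10, "selfish": 40}   # mostly selfish

for name, group in (("Group A", group_a), ("Group B", group_b)):
    chance = group_survival_chance(group["altruists"])
    print(f"{name}: survival chance per generation = {chance:.2f}")

# With these assumed numbers, Group A (0.70) out-survives Group B (0.40),
# which is the rough sense in which morality could add to group-level fitness.
```
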
Why is this so? What is the evidence? One good reason that can be given is that such a picture of morality seems to best explain phenomena such as the following:
  • Altruism and selfishness (or morality and immorality) are found across all societies. 
  • Feelings about right and wrong (i.e. conscience) are universally found across cultures. 
  • The apparent lack of a rational basis for morality. For instance, it is difficult to give a good fundamental reason for acting morally rather than not. Also, how often do you feel that there are just no right answers to certain moral debates, no matter how much our scientific knowledge advances?
  • Why our conscience seems to emerge "naturally" like our ability to develop thought. Just think how feelings of guilt and compassion (can) emerge in humans without learning.

The plausibility of the "evolutionary" picture of morality lends support to the view that it is indeed representative of how the world really is. Interestingly, such a picture of morality also fits well into the "Stag Hunt" example in Game Theory. The Stag Hunt example illustrates a situation where two individuals are always better off if they work together: if Jack and Jill go hunting separately (defect), they'll only manage to catch a rabbit each; if they cooperate and hunt together, they'll be able to catch a whole stag (and we'll assume that half a stag is way better than one rabbit). 

This table shows that cooperating is always the best outcome for the two individuals in a Stag Hunt game. (Photo courtesy of Wikimedia Commons)
Analogously, in the evolution story, what seems to have arisen is that individuals are always better off together if they act morally. This includes not killing or injuring each other and not acting in ways which endanger the whole group (which is why we hate traitors). This is why Group A is more likely to survive than Group B: acting cooperatively increases their survival chances. 
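
Since the payoff table above is only a picture, here is a minimal sketch of the same idea in code. The specific payoffs (3 for half a stag, 1 for a rabbit, 0 for hunting the stag alone) are my own illustrative choices, not taken from the table; any numbers with the standard Stag Hunt ordering would do.

```python
# Stag Hunt payoffs as (row player, column player). Illustrative numbers only:
# hunting the stag together beats a rabbit, but hunting the stag alone gets nothing.
payoffs = {
    ("stag", "stag"):     (3, 3),
    ("stag", "rabbit"):   (0, 1),
    ("rabbit", "stag"):   (1, 0),
    ("rabbit", "rabbit"): (1, 1),
}

def best_response(my_options, their_choice):
    """Pick the action that maximises my payoff, given the other hunter's choice."""
    return max(my_options, key=lambda mine: payoffs[(mine, their_choice)][0])

for their_choice in ("stag", "rabbit"):
    print(f"If the other hunter goes for the {their_choice}, "
          f"my best response is: {best_response(('stag', 'rabbit'), their_choice)}")

# Both (stag, stag) and (rabbit, rabbit) are stable outcomes, but the
# cooperative (stag, stag) outcome gives both hunters their highest payoff.
```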

Quick note: the reason why we still have selfish and evil individuals now may be due to the fact that the system fails to effectively "prune" the free-riders and those who are good at cheating. It's not difficult to see how living in a group large enough and possessing cheating skills good enough prevents these guys from being kicked out of the group. They get to enjoy the benefits of being in a moral society without being moral themselves.

But that's enough about evolution. IF this picture is true - what does all this say about morality? Should we be moral, just because we are born with a conscience? I suppose what all this tells us - if it is true - is that if we really want to fit a picture of morality with the rest of our knowledge about the world (particularly, scientific knowledge), we might need to resort to treating morality as something that is valuable only as a means to an end. Psychologically, believing in this picture may lead us to treat morality less seriously, and we'll be worse off because society as a whole may become more selfish. To some extent, the "moral fabric" of society - what keeps humans working well together - relies on most humans NOT believing in this picture, and instead grounding morality on something like "natural rights" or religion. 

Hey, isn't this an example where "ignorance (of some) is bliss (for everyone)"?

Speculative? Blasphemous? No, I don't think one post is enough to give a very comprehensive discussion of this fascinating issue. In a further post, I shall talk about Gyges' Ring and how this picture of morality fits into the story. 

*See this primer on Evolutionary Psychology by Cosmides and Tooby.
**Not to be mistaken as indicating my laziness to cite and quote /_/ 
***Reproductive fitness is measured by the number of offspring. 

Potentially good idea: eliminating the a priori/a posteriori distinction

This is going to be one of those posts where I propose an unorthodox and highly radical idea and give a completely insufficient and non-robust defence of it. I think I attempted something similar with moral realism in one of my previous posts. This time, it's the a priori/a posteriori distinction.

Typically the a priori/a posteriori distinction is understood as applying to propositions: an a priori proposition is one whose truth is knowable independently of (or prior to) experience. Philosophers usually point to propositions in geometry and mathematics, such as 'the internal angles of a triangle sum to the angle of a straight line' or '2 is the positive square root of 4', as examples of a priori propositions. 

Conversely, an a posteriori proposition is one whose truth is knowable in virtue of experience. The proposition 'Black swans exist in Australia' is an example of an a posteriori proposition, as one can only find out whether it is true by visiting Australia and checking whether there really are black swans there. Claims made in the natural sciences are generally a posteriori.*

Now, this distinction may seem pretty unproblematic at first, for it's pretty obvious which category most propositions fall under. The distinction is also uncontroversial (as far as I know), as it is so often used in arguments even in contemporary philosophy. But as long as the definitions of a priori and a posteriori I stated above are used, I think the distinction is problematic because it is based on the notion of experience.

Why is the notion of experience a problem? The reason I think it causes problems for the distinction is our uncertainty about what 'experience' refers to. Let's look at two possible definitions of the word:
  1. Experience refers to what is felt, seen, heard, tasted or smelled (i.e. information available to the five senses).
  2. Experience refers to whatever is made available to consciousness.
In general, the use of the distinction inclines towards the understanding of experience as in (1). With (1), however, you can get some pretty awkward results. Firstly, if experience is whatever is available to the five senses, and assuming that the five senses are the only inputs to a person's experience, wouldn't it follow that all propositions are a posteriori? (Otherwise, where would the new data for the proposition come from?) Think about geometry: how do we come to have the concept of a circle, or a line? Aren't they initially abstracted from the things we perceive with our five senses in the world? If there is no such thing as an a priori input, then what does the distinction mean at all?

The sun (left) and a circle (right): is it not a better explanation to say that the circle is abstracted from a natural object like the sun, than to say that we are born with an innate concept of a circle? (Left photo taken from Oia, Santorini)
Perhaps - as an immediate reaction - you're thinking something like this: no, it's not like that; this understanding of the mind is too simplistic. Instead, one should think of the mind as having the five inputs, but with these five inputs constrained by the structure of the mind. In other words, (under this view) there are no pure, untainted inputs that we can get from our experience; the structure of the mind makes our percepts a certain way. For instance, without the mental structure that allows us to categorise things, we would not be able to individuate things and hold the concept of a circle as an object. Without the innate principles of logical inference, we could not infer that if P & Q is true, then P is true. In that case, you may argue that there are actually six senses: smell, vision, hearing, touch, taste and the innate structure of the mind which constrains and structures these senses. While the very last sense may seem very different from the rest, it is nonetheless an input because it contributes to our total knowledge. To end this rejoinder: if we do have six different types of input, then the a priori/a posteriori distinction is still meaningful under definition (1). 

My second problem is this: if I am to take the view of the mind offered by this rejoinder, then ultimately I will have to resort to definition (2). If every input into our mind from the five senses is cross-influenced by the sixth input, then what does it mean to say that we can classify propositions into ones whose truth is knowable independently of experience and ones whose truth is knowable in virtue of experience? If the innate mental structure is itself part of our experience, what does it mean to distinguish between a priori and a posteriori?

Picture according to definition (2) of experience
At first glance, I think my argument would most likely be rebutted by analysing the word knowable in the definition. Does that really improve the situation? I'll get to this in another post. 

*For a more detailed introduction on the distinction, I'd recommend David Papineau's (2012) book Philosophical Devices, Chapter 4.3. For now, my brief explication should be sufficient.

Monday 16 December 2013

Some Reflections on Philosophical Work: Fruits and Purposes

I am now typing up this post at the Dubai International Airport, as I wait for my transit flight. It's approximately a three-hour wait, and I thought that it's about time I did some serious 'teleological' reflection.

If you've read a good sample of my other posts on this blog before, you may find this one more 'personal' (or less dry and boring, adjectives which, I'm occasionally aware, can aptly apply to my other posts). So hopefully this will be a change for the good. But let me first post a brief update on what I've been doing. At this point, I've:
  • just spent ten weeks in London on a taught Master's programme in Philosophy,
  • (and as a result of that) spent a huge sum of money which might have been better off invested in a randomly selected stock from the FTSE;
  • remained effectively unemployed and made little progress in developing any substantial career plans;
  • on a more positive note, learnt about the philosophical issues and debates which crop up in the fields of psychology and biology (under the headings of philosophy of psychology and philosophy of biology).
So perhaps the thing that's bugging me is this: what exactly have I been trying to do? I know I'm trying to learn and engage myself in as many of these interesting philosophical debates as possible, but what good is all that? Let's say I'm not in it for the money; let's say I'm doing these intellectual pursuits to create new knowledge for mankind. But what good is philosophy of psychology to the field of psychology, and what good is philosophy of biology to the field of biology? If I am neither a practising psychologist nor a biologist, then is what I do purely for the satisfaction of my own curiosity? In that case, it would seem awfully selfish of me to spend so much money (originally belonging to people who funded my degree out of love) and time where the only reward is my own pleasure.

Maybe I'm just thinking about the quote that's often attributed to Richard Feynman, that "philosophy of science is as useful to scientists as ornithology is to birds." But I think the worry goes deeper. If all of philosophy is like that (i.e. pretty damn useless), then I do kind of feel like I've just climbed many rankings on the list of the World's Biggest Idiots for spending so much time thinking about these philosophy problems.

There are two obvious strategies to get out of this pessimistic whatever-you-call-it. First, it's always more-or-less comforting to cite the instrumental advantages of doing philosophy, e.g. enhanced ability to talk about a wide range of subjects, improved critical thinking abilities, knowledge of the history of ideas and so on. But even so, all of this doesn't seem to warrant the time and money spent doing philosophy - if these are the only rewards of philosophy - because it seems pretty probable that you can get such abilities or knowledge cheaper and quicker by other means. Hence, I'm going to need a more 'robust' purpose for what I do.
The second strategy, then, is to stubbornly insist that there's nothing wrong with investing in an activity that is intrinsically valuable, or an activity that is good even if it brings no other obvious reward. An example of this - maybe - is sex; people enjoy and have no problem doing it, even if it's not for some higher purpose. Personally I'm not too keen on justifying doing philosophy by comparing it to sex, but this strategy seems somewhat better than the first one. Maybe a better parallel is music, which people really enjoy for its own sake; but I can't say for sure that music students do not sometimes ponder or doubt the purposes of their undertakings.

OR maybe - philosophy can really yield 'solid' intellectual fruits. It's probably true that philosophy may not produce any 'positive' knowledge like Newton's Three Laws. But if you see philosophy as the activity of conceptual analysis, then you can still get 'negative' knowledge by eliminating problematic concepts and bad inferences. You probably won't ever end up with a really good model or theory about anything in philosophy, but all the critical and analytical work can help you identify WHAT would not be a good model/theory. Perhaps then, this could be a really good motivation for philosophy besides its 'pleasure' value.

...But how well can philosophy SHOW beyond question the falsity of theories? If it leaves plenty of room for debate, then one may as well accept philosophy to be merely a 'fun intellectual activity for ages 3 or above'...

Monday 25 November 2013

Six Reasons Why Philosophy Can Be Frustrating

Philosophy can be really frustrating sometimes. Here are six reasons why.

1. It just seems that some philosophy problems are created (ex nihilo as well) just for having something to be frustrated about. The naming problem in the philosophy of language seems to be one of them: this is the problem where philosophers fret about how proper names ('Socrates', 'Obama') refer to the bearers of these names. "They just do" obviously isn't a good enough answer for philosophers. In cases such as this, a philosophy student sometimes has to try hard to care about how pressing the problem is. Perhaps it's not altogether a ridiculous question to ask, if you frame it as a sub-question which will tell us more about the nature of our thoughts; but unless you bear in mind the ultimate purpose of the question, philosophy can get quite tiresome. 

2. It seems that some philosophers are just missing the point. Take the vagueness problem as an example: this is the one where philosophers wonder whether 'Peter is tall' is a true or false statement when applied to Peter, whose height is just between what you would typically call 'tall' and 'short'. Some philosophers think it's a problem because if you answer that Peter is both 'tall' and 'short', you end up with a contradiction (p ∧ ¬p). In my view, this problem does not arise when you consider how language is used (e.g. you assume that 'tall' is used relative to a conventional standard of tallness) or reject the idea that sentences can only be either true or false; this is a problem about how logic can encounter difficulties in modelling reason, and not a genuine problem afflicting the universe (there's a short sketch of these two moves after this list). I don't believe many philosophers actually conceive of the problem in the latter way, but nor do I get a strong impression that the discourse acknowledges that the problem ought to be treated in the former way. There are many other examples, especially in the area of metaphysics, where often little attention is paid to how people use language and to the nature of the discipline itself. 

3. Philosophers often use problematic concepts in arguments without first giving an account of them. I know it is impractical to give a full account of all the fundamental concepts in presenting an argument, but this nonetheless does "grind gears". Such concepts include the following: the correspondence theory of truth, the bivalent view of truth, the notion of knowledge (it is hardly clear what knowing means, especially in the philosophy of mind and psychology), the thesis that there exist real moral truths, and the notion of reference. Although one can foresee that asking philosophers to be more elaborate would mean longer and duller philosophy texts, this seems to be a necessary evil that philosophy students have to bear with. 

4. Very often it feels that no philosophical progress (if there's such a thing) can be made without hammering at the "pillars" of your belief. I suppose this is connected to the third point; you cannot make significantly more satisfying arguments without changing how you understand the very basic concepts. This is similar to what was achieved in the cognitive revolution, the analytic movement in philosophy and the pragmatist movement. Perhaps this is the least frustrating of these problems; if you are given enough space and support to think freely, this can actually be a very strong motivation for doing philosophy. 

5. It's always difficult answering the question "what did you learn?" from a philosophy discourse. While you may learn as a by-product certain factual information (e.g. that Hesperus and Phosphorus are other names for the planet Venus), since most of philosophy is non-factual it seems hard to say that you've actually learnt anything. Soft skills, perhaps? But we do want to say that philosophy is good in itself, not just good for something else. But do we really want to say that "I haven't actually learnt anything from philosophy, but it's all good fun"? 

6. When there's too much quoting and history going on. Maybe I'm wrong, but I find it unhelpful if there is a significant amount of quoting and history going on in an argumentative piece of philosophy text. It's certainly interesting to set the background, but it's not clear why what others have stipulated necessarily has to re-appear in what you stipulate, especially when the historical context does not affect the soundness of your argument. This is partly why I didn't like how some political philosophy courses are run - why must you say that Edmund Burke argued that 'traditions are valuable' when you can give the same argument independently? This is the exceptional case where I find an additional meta-level of thinking less appropriate: arguing a case just seems more satisfying than examining how a case has been argued. Arguing a case is a job for philosophy, but examining how a case has been argued is a job for the history of philosophy. 
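
Returning to the vagueness example in point 2, here is the sketch promised there. It is only illustrative - the heights and thresholds are made-up numbers - but it shows the two moves I suggested: (a) reading 'tall' relative to a conventional standard supplied by context, and (b) dropping the assumption that the sentence must be flatly true or flatly false.

```python
# Illustrative sketch only: heights and standards below are made-up numbers.

def is_tall(height_cm, standard_cm):
    """(a) 'Tall' relative to a conventional standard supplied by context."""
    return height_cm >= standard_cm

def tallness_degree(height_cm, short_cm=160, tall_cm=190):
    """(b) A graded (non-bivalent) reading: 0 = clearly short, 1 = clearly tall,
    with borderline cases landing somewhere in between."""
    if height_cm <= short_cm:
        return 0.0
    if height_cm >= tall_cm:
        return 1.0
    return (height_cm - short_cm) / (tall_cm - short_cm)

peter = 175  # a borderline height, in cm

print(is_tall(peter, standard_cm=170))   # True relative to one standard...
print(is_tall(peter, standard_cm=185))   # ...False relative to another
print(round(tallness_degree(peter), 2))  # 0.5: neither flatly true nor flatly false
```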

Not that these problems will stop me from doing philosophy, but becoming aware of the source of these frustrations helps me reinstate my motivations. 


Tuesday 19 November 2013

Nickolas Pappas on Philosophy, Poetry and the Individual

Are philosophers and poets (or artists) really that different?
Do the differences really boil down to subjective and objective "perspectives"?

This really fascinating interview of Nickolas Pappas by BOMB Magazine [LINK] pushes us to reflect on the aims and nature of philosophy, art, poetry, and our existence. There's a good amount of Plato and Nietzsche in there too, if you're interested.

A Few of My Favourite 3AM Philosopher Interviews

(Last updated 19/11/2013)


With Gillian Russell [LINK]
I'm always most interested in knowing how philosophers answer Richard Marshall's first question in 3AM interviews, which asks them why they decided to become philosophers. In this case, I loved how Gillian gave a simple, down-to-earth answer to the question (I would probably give a similar one myself). She also made a good point: there are always going to be stressful days (no matter what you do), but it's the promise of the initial fascination (with a career or subject) returning that gets you through.

Like Gillian, my interest in philosophy of language (and, in general, philosophy which draws from the empirical sciences) stems from the initial impression that these areas are more tractable than others, such as ethics and aesthetics.

The interview includes a good introduction to a good few philosophy of language topics (analytic-synthetic distinction, Quine/Carnap debate).

With Nickolas Pappas [LINK]

With Amie Thomasson [LINK]
This interview reminds me that there are philosophers out there who are working on common sense metaphysics. The queerness of metaphysical questions and theories can sometimes make you feel that the entire metaphysics enterprise is just misdirected, confused and ultimately fruitless. That's when the link from metaphysics to common sense is all the more valuable, so we remember why we started off the inquiry in the first place.

Friday 1 November 2013

Beyond Logic

Before I was introduced to the world of academic philosophy, I used to entertain the following idea quite often: there are always things in this world that we don't know about or cannot understand, given our limited capacities. From that, I skipped carelessly to the conclusion: anything is possible. For how can you say for sure that a certain phenomenon (e.g. that ghosts exist) is impossible, when it could just be a case where you failed to know enough?

When I began venturing into 'professional' philosophy, the answer to my question seemed immediately obvious: logic. Anything is possible, except for that which is forbidden by logic. For example, a ball cannot be both black and white all over, we can never draw a square circle, and a triangle can never have four sides.

I admit that I wasn't a rigorous thinker. I couldn't think of examples such as these at the time. But for some reason, I was not 'psychologically' convinced. For instance, I would hesitate if someone asked me to bet my and my family's lives (say for a million dollars) that it is impossible to draw a square circle, or that it is impossible to find a married bachelor. I wouldn't take that bet. If it is as William James says - that belief is measured by action - then you could certainly come to the absurd conclusion that I don't believe in logic.

You may say that it is irrational not to place such a bet, or that I suffer from an extreme inferiority complex with respect to my intellect. After all, it does seem more of a psychological issue that I am so unconfident in my ability to reason. But let me just push this a little further:

Why can't we have ideas or existing things which are beyond what logic permits? What makes us always right? If it is possible for us to get a mathematical proof wrong, why is it not possible for us to get the more basic bits wrong? In other words, why must there be certain things which are necessary?

To solve this problem, I find it useful to see logic as a model.

If logic, like mathematics, is a tool to model the universe around us, then the results it generates are fallible. Just as macroeconomic models failed to predict stagflation, logic - as a model - can fail in its predictions about facts of the universe. And when logic (as a language) fails, we get paradoxes, such as the Liar Paradox*. So maybe within logic there are things which are necessary, and not everything is possible. But we shouldn't expect that if something is forbidden by logic, it cannot exist in any form in real life. (There's a famous demonstration where someone 'showed' that a triangle can have three right angles by drawing it on a non-Euclidean surface - e.g. a basketball.)
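
To spell out that parenthetical example: take a triangle on a sphere with one vertex at the north pole and two on the equator, a quarter of the way around the sphere from each other. This is standard spherical geometry rather than anything original to this post, but it makes the point concrete:

```latex
% The "basketball" triangle: vertices at the north pole N and at two
% equator points A, B separated by 90 degrees of longitude.
% Every side meets the others at a right angle, so
\[
  \angle N + \angle A + \angle B
  = 90^\circ + 90^\circ + 90^\circ = 270^\circ > 180^\circ .
\]
% More generally (Girard's theorem), a spherical triangle of area S on a
% sphere of radius R has angle sum
\[
  180^\circ + \frac{S}{R^{2}} \cdot \frac{180^\circ}{\pi},
\]
% so the familiar 180-degree rule is a fact about the Euclidean model,
% not about every surface you might draw on.
```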

I certainly don't mean that we should expect to find a married bachelor anytime soon. But I'm still sceptical, as the meanings of the words 'bachelor' and 'married' can shift in real life. Also, it needs to be said that all reasoning (including my train of thought as I write this entry) is captured by logic - if not logic, then grammar, or syntax. Anything illogical is very likely going to be meaningless, since logic plays a very important part in our thoughts (though not everything illogical is meaningless - just think of how comedy often makes use of logical contradictions).

Hence, it's not crazy to be sceptical about whether logic tells us everything about possibility and necessity. But if we reject logic altogether, none of what I've just written should hold any force, since it relies on reasoning.

*'This sentence is false.' - as Tarski has shown, this paradox remains as long as we use our natural language which contains self-referring terms.

Friday 16 August 2013

Follow your heart... not!

You're in a dilemma.*

Then your friends and family give you advice. Or in the case where you asked for advice, they give you advice. 

More often than not, they offer advice of the following sort: 

"Follow your heart!"
  
"Just do what you think is right."

"Be yourself."

I don't mean to challenge the good intentions of these people. They usually mean well, as far as I can tell. What I challenge is the content of these sorts of advice, and why most of the time they don't really help us in making better decisions. 

Here's an example: 

For some reason I managed to miss a pre-booked train service to a very important event. It was a very expensive train ticket. The only options I've got are these two: (i) buy a new ticket, which will require me to cancel a whole month's social activities and go on a cereal diet, or (ii) board the next train without a ticket. Suppose you know that you have virtually no risk of being penalised for choosing (ii), since you are aware of a way of evading ticket inspectors. What then does it mean exactly, to follow your heart? There are two obvious possibilities: 

(a) Follow your immediate desire and board the train without a ticket (i.e. choice (ii)).
(b) Follow your conscience and buy a new ticket (i.e. choice (i)).

If we seriously consider** the advice 'follow your heart', it seems to assume that we have a single mental disposition ('the heart') that has an opinion on what exactly to do. But is your heart your desire, or is it your conscience? If we shift the scenario to one where you have to choose between two friends (suppose they're in a fight), does the heart tell you to choose friend A, who has always been there for you, or friend B, who's not so reliable but whom you'd really like to spend more time with? It seems then that it is impossible (and meaningless) to 'follow your heart', since you still have to choose which 'heart' to follow.

I'm not suggesting that there is always a perfect solution to these dilemmas (if it were that easy, they wouldn't be dilemmas), but the point is that it tends to be an over-simplification to think that there is really some sub-conscious decision already made up when we are considering the options to a problem. It seems more accurate to describe the mind (for this purpose of resolving dilemmas) as a collection of dispositions, with each disposition competing for dominance. Examples of such dispositions are the disposition to be moral and the disposition to protect one's own interests. At times one disposition may prevail over another, and in those circumstances decisions are made quickly. At times these dispositions may disagree and be equally strong, resulting in a stalemate that puts us in a dilemma. This may resemble the Freudian id-and-superego sort of analysis to some extent, but in the case of trilemmas or quadrilemmas, I think singling out these 'threads' of disposition in our minds is a fine tool for decision analysis. Dispositions can also be thought of as values or duties, as these things also have a tendency to conflict and surely have causal powers over our decision-making.

No doubt this is merely a model for understanding or analysing the state of the human mind in considering a decision. However, under this view, there seems to be little space for such advice to have any significant value. Why give advice if it doesn't help? But if advice is indeed a dangerous gift, as J. R. R. Tolkien put it, this sort of advice is probably more harmless than the rest.


*I really like the Cantonese translation 'intersection/junction' (交叉點). Dilemmas are also real fun to think about as value tests, e.g. "would you rather eat a spoonful of cow faeces or lose your permanent job?" This isn't really relevant to the whole entry... it's just something fun to share.
**Many times I've come across the remark that this 'method' of serious consideration is just another form of wordplay or nit-picking exercise (鑽牛角尖) that is a complete waste of time. I find it hard to understand how one can see no value at all in contemplating the meaning of expressions, when it is clear that misinterpretation of language can so often be the cause of conflicts, confusion and problems. If you still think this is an utter waste of time, you may want to give this a try. 

Thursday 8 August 2013

Paradigm Shift

If I hadn't been organising the layout of my blog today, I probably wouldn't have come to re-read and ponder the entries that I had written some years ago. Although it wasn't entirely unexpected, I was initially rather shocked by some of the things I had written before, which reflected the views that I had held (see here): 

(1) I argued that Hong Kong's functional constituencies serve a function that is comparable to the UK House of Lords, viz. supplying expertise to a piece of legislation. 

(2) By arguing against a radical reform of Hong Kong's political system, I was inclining towards the general pro-establishment view that we shouldn't be dogmatically rushing into universal suffrage.

(3) I clearly thought political stability was more important than securing universal suffrage as a political goal, as I thought the sort of political instability generated by legislative delays would ultimately damage Hong Kong's economy.

No doubt my beliefs today are different, if not completely contradicting my old views:

Functional constituencies, instead of facilitating a business environment that is favourable to a laissez-faire economy, seem to me more like obstacles hindering the implementation of fair welfare measures. Alternatively, functional constituencies appear to me as "pockets of power" responsible for the phenomena indicative of market failure, economic inequality and alleged government-business collusion. Instead of seeing universal suffrage as a radical constitutional overhaul that we should remain sceptical about, I now see it as a necessary political solution to the economic, social and welfare woes now observed in Hong Kong. I no longer see "pleasing Beijing" even as a purely realpolitik policy to secure economic benefits and political stability for Hong Kong.

This sort of change in perspective, or paradigm shift (though I'm not sure Thomas Kuhn would approve of my use of the phrase), may have been triggered by the emergence of new evidence, more rigorous contemplation, or simply some remarkable personal experience.

If I were asked to give a reason for my "paradigm shift", I would say that the new events and evidence that have arisen require me to give a better explanation, and accordingly new claims and judgments (and with them a whole new paradigm) come along with these explanations. This reason for my "paradigm shift" is only acceptable on the condition that my old views had been based on reason; had they been based simply on personal taste, I would be accused of being irrational and inconsistent. In this case, maybe I shouldn't have been so shocked after all.  

Sunday 4 August 2013

On Receptivity

Perhaps one of the greatest qualities that a person can have, aside from being knowledgeable, clever or morally saintly, is to be receptive. To be receptive is to be open, in the sense that a receptive person is open to new ideas, practices, and cultures. To be receptive is to be willing to consider novel perspectives, and to be prepared to accept the possibility that these perspectives are better or even (if it makes sense) truer. However, I would like to think of the quality of receptivity as something more complex. I would think of receptivity as a sort of Aristotelian virtue, a golden mean that lies between two vices. The first vice lies in being excessively open: if you are too indiscriminate in adopting novel ideas and practices, your individuality will soon be eroded away by the mass influx of ideas, and ultimately you will end up hopelessly overwhelmed and confused. For one, being excessively indiscriminate about what you accept would encourage you to hold multiple sets of inconsistent beliefs. At the other extreme, if you are too sceptical about what you choose to accept, you fail to benefit from what you would otherwise receive from being exposed to new perspectives. Hence, to be receptive is really to strike the appropriate balance between indiscriminate acceptance and excessive scepticism.

By employing reason as a tool, we determine which ideas ought to be kept at bay and which deserve at least some serious consideration; then with some good use of reason, we master the virtue of receptivity.  

「學而不思則罔, 思而不學則殆。」-論語
"He who learns but not think is confused; he who thinks but not learn is at risk."- Analects

But I haven't yet said anything about the advantages of being receptive. I myself hold the view that receptivity can be thought of as an essential element of the "root of wisdom" (慧根), i.e. holding the key to wisdom. To be receptive is to be humble, to admit that one cannot know everything, and that one cannot claim to have access to the absolute truth. A receptive person never ceases to listen, to ask questions, or to learn. A receptive person acknowledges that he or she is always a student who has much to learn from others and from Nature, and only assumes the position of a teacher with care and caution. Hence, receptivity is also a guard against arrogance and stubbornness. If one is willing to accept that there are things that one does not know (or cannot know) and that there may be better ideas out there, then naturally one becomes more sceptical and reflective about one's current beliefs and stances. For very good reasons we want to be reflective and sceptical: without reflection we would naturally cling to our old beliefs, which with the passage of time tend to grow increasingly inconsistent with our phenomenal experience, and ultimately crystallise into dogmas. Hence, receptivity is essential for the attainment of intellectual, moral and spiritual progress, for it encourages the processes of reflecting on and challenging our own beliefs.

Tuesday 23 July 2013

Be a chicken - it's okay

Every now and then, we experience scenarios where our peers pressure us into doing things that we feel uncomfortable or unsure about: be it pulling off a huge prank, jumping down a whole flight of stairs or consuming a whole pint of beer in ten seconds. Less juvenile examples include being pressured into singing or giving a speech in public.

At some point, someone is bound to say, "c'mon, don't be a chicken!" or "man up!". 

The response is usually one of embarrassment mixed with frustration, and not unusually a pinch of anger, where we end up either succumbing to peer pressure or having to suffer the mockery of friends (sometimes enemies).

While it is glaringly obvious when mentioned, we do sometimes forget that our ability to feel fear or discomfort is a protective mechanism that we as Homo sapiens have developed through evolution. Fear plays an important function in preventing us from injuring or even killing ourselves, and this applies both physically and socially: fear of singing or dancing in public can sometimes be justified by the likelihood of that very act destroying every ounce of respect that the audience has for the person. 

This by no means suggests that fear is always rational - we can still have pretty irrational fears of things such as spiders, heights, and small spaces. What I'm suggesting is that we should bear in mind that fear is sometimes useful, and that we really need to take that into consideration as we decide whether an action should be taken. 

Try the Aristotelian idea of a golden mean of virtues as a tool for consideration: both excessive bravery (rashness, being headstrong) and its opposite (cowardice) are undesirable vices, but the 'optimal', virtuous amount of bravery is courage, which stands as a mean between the two vices. The golden mean can vary enormously across circumstances, and hence if you buy Aristotle's idea then it's not always desirable not to be a chicken. For those who are interested in a similar idea discussed in Chinese philosophy, the Confucian "Middle Way" (中庸之道) does pretty much the same job - if you're looking for some consolation from Chinese philosophy. 

So relax if you manage to get yourself into one of these familiar scenarios again - it is sometimes okay to be a chicken.


Some Thoughts About Morality

Occasionally, at our most 'philosophical' moments, we discover that we hold some grossly inconsistent and irreconcilable beliefs in our minds. In response, we may choose to ignore the inconsistency by assuring ourselves that there isn't actually one, and that with the passage of time the knot will untie itself. Alternatively - and this is what philosophers ideally tend to do - we may reflect upon our reasoning habits and find a way to metaphorically untie the knot. Naturally (and very likely) this is going to be time-consuming, but philosophers are disposed to think that untying the knot is itself worth the time spent on it.

One of these 'knots', I'd like to think, is the issue of morality; more specifically, it is the task of explaining the origins of morality, and related to that, the question of why we should be moral at all. 

Where does 'good' and 'bad' come from? 
Why should we be 'good'?

Philosophy textbooks, and introductory books on ethics, abound with attempts from the history of philosophy to address these two problems; but in my opinion few have been satisfactory in offering a view that is at least not blatantly inconsistent with what we feel or (we think) we know. One such inconsistency is that between God and morality: if we don't believe in the existence of God (as referred to in the Bible), why should we be moral? Since an afterlife and judgment are not likely to occur, why should we be good? And if we think there is a good reason to be moral, why does the thesis that God exists face so many difficulties in explaining why so much evil exists (the Problem of Evil), in justifying omnipotence and omniscience, and in accounting for significant scientific evidence (e.g. evidence for evolution)?

Do we act morally because we fear punishment from karma or God? 
  
My personal sympathies are with the view that the personal, all-powerful and metaphysically independent-from-nature 'God' in the sense described by the Bible and classical theism fails to provide an apt description of reality. Some may label this view as atheism, but that is not the issue of debate here; my concerns are that if I am to hold the view that the Biblical God does not exist, how should I make sense of morality? Why should we be moral?

A common-sense response to this problem is what I would call the conscience (a good Chinese equivalent is 良心) reply. The reply runs as follows: since we feel 'bad' for doing immoral things, such as stealing money from an elderly woman's purse, the 'bad feeling' alone suggests that we ought to act in ways which are consistent with our conscience. Through this perspective, sympathy can perhaps be understood as one of the feelings that we get when our conscience is at work, and another example of these 'conscience-feelings' is guilt.  

Does having guilt or sympathy necessarily mean that we should act according to these feelings?
What about evil psychopaths who seem to completely lack a conscience? 

Before we dismiss this reply by saying that it offers no proper reason to act morally and that many people have a 'twisted conscience' that prevents them from seeing which actions are moral and which aren't, I must remark on the significance of the conscience reply. First, it offers us evidence (by introspection) that humans have an intuitive capability to distinguish between moral and immoral actions. This, I would argue, provides a basis for resisting the typical moral theories (like textbook Kantian ethics) which attempt to ground morality in universal and objective reason. In other words, it seems that whether an action is moral or not should be judged using our feelings, and not reason. Similar arguments have been made by David Hume. 

Second, the conscience reply is itself evidence that we don't always act morally because of a certain fear of punishment, judgment, or going to Hell. We may act morally according to feelings, but these feelings are not necessarily fear. Alternatively, one may suggest, consistent with the 'conscience' reply, that we act morally (sometimes, if not always) out of instinct.

Now entertain this (perhaps to some repulsive) idea for a moment: suppose the moral conscience is a survival function that Homo sapiens developed through evolution. Just like the ability to feel pain and fear, or even the ability to use language, our moral conscience has helped us humans to live peacefully and survive effectively in social groups as we evolved. Our conscience inclines us to protect and defend the weak, and to act in ways which benefit our social group most as a whole. Our conscience motivates us to sacrifice ourselves for the greater good, to give ourselves up if we judge that it would save the lives of others we care about. Another way of seeing this is that the sense of duty that we may feel towards our parents, a friend, or a spouse forms part of this conscience. 

A common sense of duty may have been important for survival: when the individuals in a social group A all act according to their conscience, group A has a higher chance of survival compared to a group B which is more 'immoral' and less keen on duties on the whole. 

Does this best explain where morality comes from? 
Does that mean it's not always desirable to be moral?

In Thoughts About Morality Part II, I shall examine the significance and implications of such an evolutionary theory of morality. 



Friday 14 June 2013

'Fetch' analogy, meta-thinking, and the is-ought gap

The Game of 'Fetch'
In a typical game of 'fetch', the master tosses an object, typically a stick, some distance away, and the dog responds by retrieving it. Once the dog retrieves the stick and returns it to the master, a round of 'fetch' is completed, and the master tosses the stick away again and again for many more rounds until he or she feels bored enough to stop. It is worth noting that typically it takes the dog much longer than the master to reach a stage of boredom in this game of fetch. In the pursuit of the stick, the dog derives utility and purpose (I assume so; it explains the pleasure dogs display when they chase the stick). Only when the dog realises the tediousness and the absence of intrinsic meaning of the game, arguably, does the dog cease taking interest in it. Such behaviour finds its analogous counterpart in the human's own pursuit of worldly goals: the accumulation of wealth, power, fame, knowledge, physical beauty, and so on...where the 'stick' for the human can take on infinite possible forms. The human, like the dog, does not at first realise the emptiness of such pursuits, until in due course some occurrence or train of thought leads to the human's painful realisation that such pursuits have been in vain. It is at this point that the human begins to reflect more warily about the meaning and purpose of his or her actions, and life in general. Insofar as we carry on with our lives without reflecting, we feel that our lives have been exciting and worthwhile; but once we start reflecting and realise that we have been playing an intrinsically meaningless game of 'fetch', some of us enter an existential crisis, and resort to various means (like religion, or simply ignoring the whole matter) of solving it.

The 'Pleasure' solution
While the comparison between the dog playing a game of fetch and the human in pursuit of worldly goals may seem to paint a pessimistic picture of the human condition, perhaps it could be seen that the game of fetch is not, on the whole, an utter waste of time. For 'fetch' yields the dog utility and perhaps a false sense of purpose, and it is hard to see how the dog would fare any better in the attainment of utility by not participating in the game. In terms of the human condition, this means that realising the hollowness of pursuing material or impermanent aims (such as vanity) need not cast us into depression, so long as we are happy with what we do. This, of course, relies on the assumption that utility, or happiness, is the sole and final aim of living (for both the human and the dog). Nor does the idea that 'pleasure is the only thing that matters' relieve us of all our restlessness, for it gives us no good reason not to simply engage in a drunken pleasure-cruise. We want our lives to be meaningful and purposeful (for some reason), and hedonism just won't cut it; more or less, the 'fetch complex' - the existential problem - stays with us.

The Is-ought gap
Perhaps we can retrieve some comfort from David Hume's is-ought gap. The idea is that there is a gap between what is the case and what ought to be the case, and you can't infer what you ought or ought not to do from what is the case. Even if it really is so that our lifelong pursuits are like a dog's stick-chasing - intrinsically meaningless and usually in vain - that alone says nothing about whether we should keep chasing sticks, or whether we should stop. It doesn't make an actual improvement to the human condition, but at least it means we don't have to act in a different way. We don't have to become pessimists, or give up our lifelong dreams, just because we see a mirror of our own lives when we see dogs chasing down sticks in parks.
Costs of meta-thinking
From this analogy also arise the 'costs' of meta-thinking. To reflect upon the purpose and the nature of one's life, or of specific actions - as the human does in coming to terms with the fact that his or her life-goals are ultimately the vain pursuit of nothing significant - is an example of meta-thinking. Meta-thinking can be understood as 'thinking from outside the framework', or self-reflection. Such meta-thinking does appear to bring a sort of 'deeper meaning' into one's understanding and perception of life, as it did for the dog and the human. But here is the worry: the consequence of meta-thinking, regardless of whether the process is voluntary or not, is to strip the being (dog or human) of the utility and purpose that the being had been enjoying from the metaphorical game of fetch; unless the being chooses to ignore the conclusions of the meta-thought, of course. An existential crisis cripples the man, stripping him of his energy and purposefulness. At the end of the day, the question to ask is this: would the being be better off in the end had there been no engagement in any meta-thinking from start to finish? In other words, would the dog be better off playing a perpetual game of fetch, without contemplating the nature of the game itself? Is it really the case that the unexamined life is not worth living, or is it the case that 'foolishness is bliss' (糊塗是福)? Philosophers are generally inclined to agree with the former, but it's not always clear that there is a good reason to, since we can never be sure whether knowing the truth brings us overall more pleasure or more pain.

Wednesday 20 February 2013

Thinking about meaning

Meaning – in terms of meaning of linguistic expressions, rather than meaning of life – has always been a puzzling notion in philosophy (even though the meaning of life is an equally if not more puzzling notion). The central question to ask is, ‘what is meaning?’. If we choose to express the question differently, we would be asking ‘what does meaning mean?’. This seems to immediately collapse into a sort of circularity problem. How do we even begin answering the question of ‘what does meaning mean’? Since it is beyond my intellectual capacity to survey and analyse all proposed theories of meaning to date, here in this informal setting I attempt to draft an answer to this question by combining intuition and a couple of my own ideas.

Perhaps a good approach to finding out what meaning is would be to look at where meaning comes from. Whenever we utter any linguistic expression, we first form a thought relating to that expression in our minds. For example, before I say ‘I want that ice-cream!’, the thought of ‘I want that ice-cream!’ must precede it. Moreover, this preceding thought cannot be just any vague thought, but it should be the thought containing the intention to turn that thought into the corresponding linguistic expression. In this case, it would seem that meaning comes from the intentional thought that the linguistic expression is used to express. When the meaning of the linguistic expression is intended to be conveyed to an observer (assuming the person is not speaking to herself), the process of conveyance (some would call it ‘communication’) is only successful if the observer (be it a listener or a reader) can correctly infer or interpret that intended thought from the given piece of linguistic expression. With this view, even peculiar circumstances in communication such as the use of sarcasm can be explained.

One characteristic of this view is that it presumes private meaning to be possible. What I mean here by 'private meaning' is that the meaning of a linguistic expression does not need to depend (for its 'existence') on there being any observer apart from the person expressing it. This notion is intuitive: people have always been known to 'speak to themselves' or write journals documenting their own thoughts. In the absence of any observers, the linguistic expression in question would still make sense (and hence be meaningful) to the speaker or the writer herself.

For an observer who doesn’t know any French, we would say that she doesn’t understand the meaning of the sentence ‘la neige est blanche’, as she would not have the means to infer the intentional thought of the sentence. In this formulation of ‘meaning’, to understand the meaning of a sentence is to be able to infer successfully the intentional thought of the sentence.

With hindsight, this theory greatly resembles what philosophers have called the 'ideational theory of meaning', which instead understands meaning as 'ideas'. Amongst the many objections faced by the ideational theory of meaning, I find really interesting the one which points out that there is an aspect of meaning which is inter-subjective and social - e.g. the meaning of the word 'dog' is common to all English speakers - whereas an idea is private and subjective. This is an objection which seems to also apply to the 'intentional thought' theory of meaning; but does it really hit? Since we cannot be completely sure that everyone shares identical 'thoughts' of a 'dog', wouldn't treating such thoughts as subjective (but inferable) be a much more modest and prudent move?