Monday, 25 November 2013

Six Reasons Why Philosophy Can Be Frustrating

Philosophy can be really frustrating sometimes. Here are six reasons why.

1. Some philosophical problems seem to have been created (ex nihilo, no less) just so that there is something to be frustrated about. The naming problem in the philosophy of language seems to be one of them: this is the problem where philosophers fret about how proper names ('Socrates', 'Obama') refer to their bearers. "They just do" obviously isn't a good enough answer for philosophers. In cases such as this, a philosophy student sometimes has to make an effort to care about how pressing the problem is. Perhaps it's not an altogether ridiculous question if you frame it as a sub-question that will tell us more about the nature of our thoughts; but unless you bear in mind the ultimate purpose of the question, philosophy can get quite tiresome. 

2. Some philosophers seem to be missing the point. Take the vagueness problem as an example: this is the one where philosophers wonder whether 'Peter is tall' is true or false when Peter's height falls just between what you would typically call 'tall' and 'short'. Some philosophers think it's a problem because if you answer that Peter is both 'tall' and 'not tall', you end up with a contradiction (p ∧ ¬p). In my view, the problem does not arise once you consider how language is used (e.g. you assume that 'tall' is used relative to a conventional standard of tallness) or reject the idea that sentences can only be either true or false; this is a problem about how logic can run into difficulties in modelling reasoning, not a genuine problem afflicting the universe. Many philosophers may well conceive of the problem in this latter way, but I don't get a strong impression that the discourse acknowledges that the problem ought to be treated as such. There are many other examples, especially in metaphysics, where little attention is paid to how people use language, or to the nature of the discipline itself. 
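One way to make the "reject bivalence" option concrete is a toy degree-of-truth model in the style of fuzzy logic. This is only an illustrative sketch: the 170 cm and 190 cm thresholds and the linear ramp are arbitrary assumptions of mine, not anyone's actual semantic theory.

```python
def tall(height_cm: float) -> float:
    """Return a degree of truth in [0, 1] for 'x is tall'.

    Below 170 cm the sentence is plainly false (0.0); above 190 cm it
    is plainly true (1.0); in between, truth comes in degrees.
    """
    if height_cm <= 170:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 170) / 20  # linear ramp across the borderline zone

# A borderline Peter is neither clearly tall nor clearly not tall:
peter = tall(180)                       # 0.5
not_peter = 1 - peter                   # fuzzy negation: also 0.5
both = min(peter, not_peter)            # fuzzy 'tall and not tall': 0.5
# In bivalent logic 'p and not-p' must be false; here it merely has a
# middling degree of truth, so no classical contradiction arises.
```

The point of the sketch is just that once truth comes in degrees, the borderline case stops generating p ∧ ¬p in the classical sense.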

3. Philosophers often deploy problematic concepts in arguments without first giving an account of them. I know it is impractical to give a full account of every fundamental concept when presenting an argument, but this nonetheless "grinds gears". Examples include: the correspondence theory of truth, the bivalent view of truth, the notion of knowledge (it is hardly clear what knowing means, especially in the philosophy of mind and psychology), the thesis that there exist real moral truths, and the notion of reference. Although asking philosophers to be more elaborate would foreseeably mean longer and duller philosophy texts, this seems to be a necessary evil that philosophy students have to bear. 

4. Very often it feels that no philosophical progress (if there is such a thing) can be made without hammering at the "pillars" of your beliefs. I suppose this is connected to the third point: you cannot make significantly more satisfying arguments without changing how you understand the most basic concepts. This is similar to what was achieved by the cognitive revolution, and by the movements in analytic philosophy and in pragmatism. Perhaps this is the least frustrating of these problems; if you are given enough space and support to think freely, it can actually be a very strong motivation for doing philosophy. 

5. It's always difficult to answer the question "what did you learn?" after a philosophical discussion. While you may pick up certain factual information as a by-product (e.g. that Hesperus and Phosphorus are both names for the planet Venus), since most of philosophy is non-factual it seems hard to say that you've actually learnt anything. Soft skills, perhaps? But we do want to say that philosophy is good in itself, not just good for something else. Do we really want to say "I haven't actually learnt anything from philosophy, but it's all good fun"? 

6. When there's too much quoting and history going on. Maybe I'm wrong, but I find it unhelpful when an argumentative piece of philosophical writing contains a significant amount of quotation and history. It's certainly interesting to set the background, but it's not clear why what others have already argued must reappear in your own argument, especially when the historical context does not affect its soundness. This is partly why I disliked how some political philosophy courses are conducted - why must you say that Edmund Burke argued that 'traditions are valuable' when you can give the same argument independently? This is the exceptional case where I find an additional meta-level of thinking less appropriate: arguing a case just seems more satisfying than examining how a case has been argued. Arguing a case is a job for philosophy; examining how a case has been argued is a job for the history of philosophy.

Not that these problems will stop me from doing philosophy, but becoming aware of the source of these frustrations helps me reinstate my motivations. 


Tuesday, 19 November 2013

Nickolas Pappas on Philosophy, Poetry and the Individual

Are philosophers and poets (or artists) really that different?
Do the differences really boil down to subjective and objective "perspectives"?

This really fascinating interview with Nickolas Pappas by BOMB Magazine [LINK] pushes us to reflect on the aims and nature of philosophy, art, poetry, and our existence. There's a good amount of Plato and Nietzsche in there too, if you're interested.

A Few of My Favourite 3AM Philosopher Interviews

(Last updated 19/11/2013)


With Gillian Russell [LINK]
I'm always most interested in how philosophers answer Richard Marshall's first question in 3AM interviews, which asks why they decided to become philosophers. In this case, I loved how Gillian gave a simple, down-to-earth answer (I would probably give something similar myself). She also made a good point: there are always going to be stressful days no matter what you do, but it's the promise that the initial fascination (with a career or subject) will return that gets you through.

Like Gillian, my interest in the philosophy of language (and, in general, in philosophy that draws on the empirical sciences) stems from the sense that these areas are more tractable than others, such as ethics and aesthetics.

The interview also includes a solid introduction to a good few philosophy of language topics (the analytic-synthetic distinction, the Quine/Carnap debate).

With Nickolas Pappas [LINK]

With Amie Thomasson [LINK]
This interview reminds me that there are philosophers out there working on common sense metaphysics. The queerness of metaphysical questions and theories can sometimes make you feel that the entire metaphysical enterprise is misdirected, confused and ultimately fruitless. That's when the link from metaphysics to common sense is all the more valuable, so that we remember why we started the inquiry in the first place.

Friday, 1 November 2013

Beyond Logic

Before I was introduced to the world of academic philosophy, I used to entertain the following idea quite often: there are always things in this world that we don't know about or cannot understand, given our limited capacities. From that, I skipped carelessly to the conclusion: anything is possible. For how can you say for sure that a certain phenomenon (e.g. that ghosts exist) is impossible, when it could just be a case where you failed to know enough?

When I began venturing into 'professional' philosophy, the answer to my question seemed immediately obvious: logic. Anything is possible, except for what logic forbids. For example, a ball cannot be both black all over and white all over, we can never draw a square circle, and a triangle can never have four sides.

I admit that I wasn't a rigorous thinker; I couldn't think of examples like these at the time. But for some reason, I was not 'psychologically' convinced. For instance, I would hesitate if someone asked me to bet my life and my family's lives (say, for a million dollars) that it is impossible to draw a square circle, or to find a married bachelor. I wouldn't take that bet. If it is as William James says - that belief is measured by action - then you could reach the absurd conclusion that I don't believe in logic.

You may say that it is irrational not to place such a bet, or that I suffer from an extreme inferiority complex with respect to my intellect. After all, it does seem more of a psychological issue that I am so unconfident in my ability to reason. But let me just push this a little further:

Why can't we have ideas or existing things which are beyond what logic permits? What makes us always right? If it is possible for us to get a mathematical proof wrong, why is it not possible for us to get the more basic bits wrong? In other words, why must we have certain things which are necessary?

To solve this problem, I find it useful to see logic as a model.

If logic, like mathematics, is a tool for modelling the universe around us, then the results it generates are fallible. Just as classical macroeconomic models failed to predict stagflation, logic - as a model - can fail to predict facts about the universe. And when logic (as a language) fails, we get paradoxes, such as the Liar Paradox*. So perhaps within logic there are things which are necessary, and not everything is possible; but we shouldn't expect that whatever logic forbids cannot exist in any form in real life. (There's a famous demonstration where someone 'showed' that a triangle can have three right angles by drawing it on a non-Euclidean surface - e.g. a basketball.)
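The non-Euclidean demonstration mentioned above can actually be checked numerically. Below is a small sketch (standard library only) that builds the spherical triangle whose vertices are the north pole and two points on the equator separated by 90° of longitude, then computes the interior angle at each vertex. All three come out as right angles, so the angle sum is 270° rather than the Euclidean 180°.

```python
import math

# Vertices of a spherical triangle on the unit sphere: the north pole
# and two equatorial points 90 degrees of longitude apart.
A = (0.0, 0.0, 1.0)
B = (1.0, 0.0, 0.0)
C = (0.0, 1.0, 0.0)

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def tangent(p, q):
    """Unit tangent vector at p along the great circle from p towards q."""
    t = [q[i] - dot(p, q) * p[i] for i in range(3)]  # project q off p
    norm = math.sqrt(dot(t, t))
    return [x / norm for x in t]

def angle_at(p, q, r):
    """Interior angle (degrees) of the spherical triangle at vertex p."""
    return math.degrees(math.acos(dot(tangent(p, q), tangent(p, r))))

angles = [angle_at(A, B, C), angle_at(B, C, A), angle_at(C, A, B)]
print(angles, sum(angles))  # three 90-degree angles, summing to 270
```

Nothing here contradicts logic, of course: the Euclidean theorem about angle sums simply fails to apply once the model (flat space) no longer fits the surface, which is the point being made about logic-as-model.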

I certainly don't mean that we should expect to find a married bachelor anytime soon, though I remain sceptical, since the meanings of the words 'bachelor' and 'married' can shift in real life. It also needs to be said that all reasoning (including my train of thought as I write this entry) is captured by logic - if not logic, then grammar, or syntax. Anything illogical is very likely to be meaningless, since logic plays an important part in our thoughts (though not everything illogical is meaningless - just think of how comedy often makes use of logical contradictions).

Hence, it's not crazy to be sceptical about whether logic tells us everything about possibility and necessity. But if we reject logic altogether, none of what I've just written should hold any force, since it relies on reasoning.

*'This sentence is false.' - as Tarski argued, this paradox remains as long as we use a natural language containing self-referring terms.
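The footnote's point can be made concrete with a tiny consistency check: under bivalent semantics the Liar sentence's truth value would have to equal the truth value of "the Liar is false", and neither classical assignment satisfies that. This is just an illustrative sketch of why bivalence fails here, not a rendering of Tarski's own apparatus.

```python
# The Liar sentence L says: "L is false".
# Under classical (bivalent) semantics, L's truth value must equal the
# truth value of what it asserts, i.e. value == (not value).
consistent_assignments = [v for v in (True, False) if v == (not v)]
print(consistent_assignments)  # [] -- no classical truth value works
```

Both candidate values fail the check, so the sentence simply has no consistent classical truth value.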

Friday, 16 August 2013

Follow your heart... not!

You're in a dilemma.*

Then your friends and family give you advice. Or in the case where you asked for advice, they give you advice. 

More often than not, they offer advice of the following sort: 

"Follow your heart!"
  
"Just do what you think is right."

"Be yourself."

I don't mean to challenge the good intentions of these people. They usually mean well, as far as I can tell. What I challenge is the content of these sorts of advice, and why most of the time they don't really help us make better decisions. 

Here's an example: 

For some reason I managed to miss a pre-booked train service to a very important event. It was a very expensive ticket. I have only two options: (i) buy a new ticket, which will require me to cancel a whole month's social activities and go on a cereal diet, or (ii) board the next train without a ticket. Suppose I know I run virtually no risk of being penalised for choosing (ii), since I am aware of a way of evading ticket inspectors. What then does it mean, exactly, to follow your heart? There are two obvious possibilities: 

(a) Follow your immediate desire and board the train without a ticket (i.e. choice (ii)).
(b) Follow your conscience and buy a new ticket (i.e. choice (i)).

If we take the advice 'follow your heart' seriously**, it seems to assume that we have a single mental disposition ('the heart') with an opinion on exactly what to do. But is your heart your desire, or your conscience? If we shift the scenario to one where you have to choose between two friends (suppose they're in a fight), does the heart tell you to choose friend A, who has always been there for you, or friend B, who's not so reliable but whom you'd really like to spend more time with? It seems, then, that it is impossible (and meaningless) to 'follow your heart', since you still have to choose which 'heart' to follow.

I'm not suggesting that there is always a perfect solution to these dilemmas (if it were that easy, they wouldn't be dilemmas), but the point is that it tends to be an over-simplification to think that some sub-conscious decision has already been made while we are weighing the options to a problem. It seems more accurate to describe the mind (for the purpose of resolving dilemmas) as a collection of dispositions, each competing for dominance. Examples of such dispositions are the disposition to be moral and the disposition to protect one's own interests. At times one disposition prevails over another, and in those circumstances decisions are made quickly. At other times these dispositions disagree and are equally strong, resulting in a stalemate that puts us in a dilemma. This may resemble the Freudian analysis in terms of id and superego to some extent, but in the case of trilemmas or quadrilemmas, I think singling out these 'threads' of disposition in our minds is a fine tool for decision analysis. Dispositions can also be thought of as values or duties, since these things likewise have a tendency to conflict and surely have causal powers over our decision-making.

No doubt this is merely a model for understanding or analysing the state of the human mind as it considers a decision. Under this view, however, there seems to be little room for such advice to have any significant value. Why give advice if it doesn't help? But if advice is indeed 'a dangerous gift', as J. R. R. Tolkien put it, this sort is probably more harmless than the rest.


*I really like the Cantonese translation 'intersection/junction' (交叉點). Dilemmas are also real fun to think about as value tests, e.g. "would you rather eat a spoonful of cow faeces or lose your permanent job?" This isn't really relevant to the whole entry... it's just something fun to share.
**Many times I've come across the remark that this 'method' of serious consideration is just another form of wordplay or nit-picking (鑽牛角尖) that is a complete waste of time. I find it hard to understand how one can see no value at all in contemplating the meaning of expressions, when misinterpretation of language is so often the cause of conflict, confusion and problems. If you still think this is an utter waste of time, you may want to give this a try. 

Thursday, 8 August 2013

Paradigm Shift

If I hadn't been reorganising the layout of my blog today, I probably wouldn't have re-read and pondered the entries I wrote some years ago. Although it wasn't entirely unexpected, I was initially rather shocked by some of the things I had written, which reflected the views I held at the time (see here): 

(1) I argued that Hong Kong's functional constituencies serve a function that is comparable to the UK House of Lords, viz. supplying expertise to a piece of legislation. 

(2) By arguing against a radical reform of Hong Kong's political system, I was inclining towards the general pro-establishment view that we shouldn't be dogmatically rushing into universal suffrage.

(3) I clearly thought political stability was more important than securing universal suffrage as a political goal, as I thought the sort of political instability generated by legislative delays would ultimately damage Hong Kong's economy.

No doubt my beliefs today are different, if not completely contradicting my old views:

Functional constituencies, instead of facilitating a business environment favourable to a laissez-faire economy, now seem to me more like obstacles hindering the implementation of fair welfare measures. Alternatively, functional constituencies appear to me as "pockets of power" responsible for phenomena indicative of market failure, economic inequality and alleged government-business collusion. Instead of seeing universal suffrage as a radical constitutional overhaul that we should remain sceptical about, I now see it as a necessary political solution to the economic, social and welfare woes observed in Hong Kong. I no longer see "pleasing Beijing" even as a purely realpolitik policy for securing economic benefits and political stability for Hong Kong.

This sort of change in perspective, or paradigm shift (though I'm not sure Thomas Kuhn would approve of my use of the phrase), may have been triggered by the emergence of new evidence, by more rigorous contemplation, or simply by some remarkable personal experience.

If I were asked to give a reason for my "paradigm shift", I would say that the new events and evidence that have arisen require a better explanation, and that new claims and judgments (and with them a whole new paradigm) come along with these explanations. This reason is only acceptable on the condition that my old views were based on reason; had they been based simply on personal taste, I would stand accused of being irrational and inconsistent. In that case, maybe I shouldn't have been so shocked after all.  

Sunday, 4 August 2013

On Receptivity

Perhaps one of the greatest qualities a person can have, aside from being knowledgeable, clever or morally saintly, is to be receptive. To be receptive is to be open, in the sense that a receptive person is open to new ideas, practices, and cultures. To be receptive is to be willing to consider novel perspectives, and to be prepared to accept the possibility that these perspectives are better or even (if it makes sense) truer. However, I would like to think of receptivity as something more complex: a sort of Aristotelian virtue, a golden mean that lies between two vices. The first vice lies in being excessively open: if you are too indiscriminate in adopting novel ideas and practices, your individuality will soon be eroded by the mass influx of ideas, and you will end up hopelessly overwhelmed and confused. For one, being excessively indiscriminate about what you accept encourages you to hold multiple sets of inconsistent beliefs. At the other extreme, if you are too sceptical about what you choose to accept, you fail to benefit from what you would otherwise gain from exposure to new perspectives. Hence, to be receptive is really to strike the appropriate balance between indiscriminate acceptance and excessive scepticism.

By employing reason as a tool, we determine which ideas ought to be kept at bay and which deserve at least some serious consideration; with good use of reason, we master the virtue of receptivity.  

「學而不思則罔, 思而不學則殆。」-論語
"He who learns but does not think is lost; he who thinks but does not learn is in peril." - Analects

But I haven't yet said anything about the advantages of being receptive. I hold the view that receptivity can be thought of as an essential element of the "root of wisdom" (慧根), i.e. as holding the key to wisdom. To be receptive is to be humble: to admit that one cannot know everything, and that one cannot claim access to the absolute truth. A receptive person never ceases to listen, to ask questions, or to learn. A receptive person acknowledges that he or she is always a student with much to learn from others and from Nature, and assumes the position of a teacher only with care and caution. Hence, receptivity is also a guard against arrogance and stubbornness. If one is willing to accept that there are things one does not know (or cannot know) and that there may be better ideas out there, then naturally one becomes more sceptical and reflective about one's current beliefs and stances. For very good reasons we want to be reflective and sceptical: without reflection we would naturally cling to our old beliefs, which with the passage of time tend to grow increasingly inconsistent with our phenomenal experience, and ultimately crystallise into dogmas. Hence, receptivity is essential to intellectual, moral and spiritual progress, for it encourages us to reflect on and challenge our own beliefs.

Tuesday, 23 July 2013

Be a chicken - it's okay

Every now and then, we experience scenarios where our peers pressure us into doing things we feel uncomfortable or unsure about: be it pulling off a huge prank, jumping down a whole flight of stairs, or downing a whole pint of beer in ten seconds. Less juvenile examples include being pressured into singing or giving a speech in public.

At some point, someone is bound to say, "c'mon, don't be a chicken!" or "man up!". 

The response is usually one of embarrassment mixed with frustration, and not unusually a pinch of anger, where we end up either succumbing to peer pressure or suffering the mockery of friends (and sometimes enemies).

While it is glaringly obvious once mentioned, we do sometimes forget that our ability to feel fear or discomfort is a protective mechanism that we as Homo sapiens have developed through evolution. Fear plays an important role in preventing us from injuring or even killing ourselves, and this applies both physically and socially: a fear of singing or dancing in public can sometimes be justified by the likelihood that the very act will destroy every ounce of respect the audience has for the performer. 

This by no means suggests that fear is rational - we can still have quite irrational fears of things such as spiders, heights, and small spaces. What I'm suggesting is that we should bear in mind that fear is sometimes useful, and that we need to take that into consideration when deciding whether an action should be taken. 

Try the Aristotelian idea of the golden mean as a tool for consideration: both excessive bravery (recklessness) and its deficiency (cowardice) are undesirable vices, while the 'optimal', virtuous amount of bravery is courage, which stands as a mean between the two. The golden mean can vary enormously across circumstances, and hence if you buy Aristotle's idea, it is not always desirable not to be a chicken. For those interested in a similar idea in Chinese philosophy, the Confucian "Middle Way" (中庸之道) does pretty much the same job - if you're looking for some consolation from Chinese philosophy. 

So relax if you manage to get yourself into one of these familiar scenarios again - it is sometimes okay to be a chicken.


Some Thoughts About Morality

Occasionally, at our most 'philosophical' moments, we discover that we hold some grossly inconsistent and irreconcilable beliefs. In response, we may choose to ignore the inconsistency, assuring ourselves that there isn't actually one and that with the passage of time the knot will untie itself. Alternatively - and this is ideally what philosophers do - we can reflect upon our reasoning habits and find a way to untie the knot, metaphorically speaking. Naturally (and very likely) this is going to be time-consuming, but philosophers are disposed to think that the consequences of untying the knot are worth the time spent untying it.

One of these 'knots', I'd like to think, is the issue of morality; more specifically, it is the task of explaining the origins of morality, and related to that, the question of why we should be moral at all. 

Where does 'good' and 'bad' come from? 
Why should we be 'good'?

Philosophy textbooks, and introductory books on ethics, abound with attempts from the history of philosophy to address these two problems; but in my opinion few have been satisfactory in offering a view that is at least not blatantly inconsistent with what we feel or (think we) know. One such inconsistency is that between God and morality: if we don't believe in the existence of God (as referred to in the Bible), why should we be moral? Since an afterlife and judgment are not likely to occur, why should we be good? And if we think there is a good reason to be moral, why does the thesis that God exists face so many difficulties in explaining why so much evil exists (the Problem of Evil), in justifying omnipotence and omniscience, and in accounting for significant scientific evidence (e.g. the evidence for evolution)?

Do we act morally because we fear punishment from karma or God? 
  
My personal sympathies are with the view that the personal, all-powerful, metaphysically independent-from-nature 'God' described by the Bible and classical theism fails to provide an apt description of reality. Some may label this view atheism, but that is not the issue here; my concern is that if I hold that the Biblical God does not exist, how should I make sense of morality? Why should we be moral?

A common-sense response to this problem is what I would call the conscience reply (a good Chinese equivalent is 良心). The reply runs as follows: since we feel 'bad' about doing immoral things, such as stealing money from an elderly woman's purse, that 'bad feeling' alone suggests that we ought to act in ways consistent with our conscience. From this perspective, sympathy can perhaps be understood as one of the feelings we get when our conscience is at work; another example of these 'conscience-feelings' is guilt.  

Does having guilt or sympathy necessarily mean that we should act according to these feelings?
What about evil psychopaths who seem to completely lack a conscience? 

Before we dismiss this reply on the grounds that it offers no proper reason to act morally, and that many people have a 'twisted conscience' that prevents them from seeing which actions are moral and which aren't, I must remark on its significance. First, it offers us evidence (by introspection) that humans have an intuitive capacity to distinguish between moral and immoral actions. This, I would argue, provides a basis for resisting the typical moral theories (like textbook Kantian ethics) which attempt to ground morality in universal and objective reason. In other words, it seems that whether an action is moral should be judged using our feelings, not reason. Similar arguments have been made by David Hume. 

Second, the conscience reply is itself evidence that we don't always act morally out of fear of punishment, judgment, or going to Hell. We may act morally according to feelings, but those feelings are not necessarily fear. Alternatively, one may suggest, consistently with the conscience reply, that we act morally (sometimes, if not always) out of instinct.

Now entertain this (to some, perhaps repulsive) idea for a moment: suppose the moral conscience is a survival mechanism that Homo sapiens developed through evolution. Just like the ability to feel pain and fear, or even the ability to use language, our moral conscience helped humans live peacefully and survive effectively in social groups as we evolved. Our conscience inclines us to protect and defend the weak, and to act in ways which benefit our social group as a whole. It motivates us to sacrifice ourselves for the greater good, to give ourselves up if we judge that doing so would save the lives of others we care about. Another way of seeing this is that the sense of duty we may feel towards our parents, a friend, or a spouse forms part of this conscience. 

A common sense of duty may have been important for survival: when the individuals in a social group A all act according to their conscience, group A has a higher chance of survival than a group B which is more 'immoral' and less keen on duty on the whole. 

Does this best explain where morality comes from? 
Does that mean it's not always desirable to be moral?

In Thoughts About Morality Part II, I shall examine the significance and implications of such an evolutionary theory of morality. 



Friday, 14 June 2013

'Fetch' analogy, meta-thinking, and the is-ought gap

The Game of 'Fetch'
In a typical game of 'fetch', the master tosses an object, typically a stick, some distance away, and the dog responds by retrieving it. Once the dog retrieves the stick and returns it to the master, a round of 'fetch' is completed, and the master tosses the stick again and again for many more rounds until he or she feels bored enough to stop. It is worth noting that the dog typically takes much longer than the master to reach the stage of boredom. In the pursuit of the stick, the dog derives utility and purpose (I assume so; it would explain the pleasure dogs display when they chase the stick). Arguably, only when the dog realises the tediousness of the game and its absence of intrinsic meaning does it cease to take interest. Such behaviour finds its analogous counterpart in the human pursuit of worldly goals - the accumulation of wealth, power, fame, knowledge, physical beauty, and so on - where the 'stick' can take infinitely many forms. The human, like the dog, does not at first realise the emptiness of such pursuits, until in due course some occurrence or train of thought leads to the painful realisation that they have been in vain. It is at this point that the human begins to reflect more warily on the meaning and purpose of his or her actions, and of life in general. Insofar as we carry on with our lives without reflecting, we feel that our lives have been exciting and worthwhile; but once we start reflecting and realise that we have been playing an intrinsically meaningless game of 'fetch', some of us enter an existential crisis and resort to various means of resolving it (like religion, or simply ignoring the whole matter).

The 'Pleasure' solution
While the comparison between the dog playing fetch and the human pursuing worldly goals may seem to paint a pessimistic picture of the human condition, perhaps the game of fetch is not, on the whole, an utter waste of time. For 'fetch' yields the dog utility and perhaps a false sense of purpose, and it is hard to see how the dog would fare any better in the attainment of utility by not participating in the game. For the human condition, this means that realising the hollowness of pursuing material or impermanent aims (such as vanity) need not cast us into depression, so long as we are happy with what we do. This, of course, relies on the assumption that utility, or happiness, is the sole and final aim of living (for both the human and the dog). Nor does the idea that 'pleasure is the only thing that matters' relieve us of all our restlessness, for it gives us no good reason not to simply embark on a drunken pleasure-cruise. We want our lives to be meaningful and purposeful (for some reason), and hedonism just won't cut it; more or less, the 'fetch complex' - the existential problem - stays with us.

The Is-ought gap
Perhaps we can retrieve some comfort from David Hume's is-ought gap. The idea is that there is a gap between what is the case and what ought to be the case: you can't infer what you ought or ought not to do from what is the case alone. Even if our lifelong pursuits really are like a dog's stick-chasing - intrinsically meaningless and usually in vain - that alone says nothing about whether we should keep chasing sticks, or whether we should stop. It doesn't make an actual improvement to the human condition, but at least it means we don't have to act differently. We don't have to become pessimists, or give up our lifelong dreams, just because we see a mirror of our own lives in dogs chasing down sticks in parks.

Costs of meta-thinking
From this analogy also arise the 'costs' of meta-thinking. Meta-thinking can be understood as 'thinking from outside the framework', or self-reflection: to reflect upon the purpose and nature of one's life, or of specific actions, as the human does in coming to terms with the fact that his or her life-goals ultimately amount to the vain pursuit of nothing significant. Such meta-thinking does appear to bring a sort of 'deeper meaning' into one's understanding and perception of life, as it did for the dog and the human. But here is the worry: the consequence of meta-thinking, whether the process is voluntary or not, is to strip the being (dog or human) of the utility and purpose that it had been enjoying from the metaphorical game of fetch; unless the being chooses to ignore the conclusions of the meta-thought, of course. An existential crisis cripples a man, stripping him of his energy and purposefulness. At the end of the day, the question to ask is this: would the being have been better off had there been no engagement in any meta-thinking from start to finish? In other words, would the dog be better off playing a perpetual game of fetch, without contemplating the nature of the game itself? Is it really the case that the unexamined life is not worth living, or is it rather that 'foolishness is bliss' (糊塗是福)? Philosophers are generally inclined to agree with the former, but it is not clear that there is always a good reason to, since we can never be sure whether knowing the truth brings us overall more pleasure or more pain.

Wednesday, 20 February 2013

Thinking about meaning

Meaning – in terms of meaning of linguistic expressions, rather than meaning of life – has always been a puzzling notion in philosophy (even though the meaning of life is an equally if not more puzzling notion). The central question to ask is, ‘what is meaning?’. If we choose to express the question differently, we would be asking ‘what does meaning mean?’. This seems to immediately collapse into a sort of circularity problem. How do we even begin answering the question of ‘what does meaning mean’? Since it is beyond my intellectual capacity to survey and analyse all proposed theories of meaning to date, here in this informal setting I attempt to draft an answer to this question by combining intuition and a couple of my own ideas.

Perhaps a good approach to finding out what meaning is would be to look at where meaning comes from. Whenever we utter any linguistic expression, we first form a thought relating to that expression in our minds. For example, before I say ‘I want that ice-cream!’, the thought of ‘I want that ice-cream!’ must precede it. Moreover, this preceding thought cannot be just any vague thought: it must be the thought containing the intention to turn itself into the corresponding linguistic expression. On this view, it would seem that meaning comes from the intentional thought that the linguistic expression is used to express. When the meaning of the linguistic expression is intended to be conveyed to an observer (assuming the person is not speaking to herself), the process of conveyance (some would call it ‘communication’) is only successful if the observer (be it a listener or a reader) can correctly infer or interpret that intended thought from the given piece of linguistic expression. This view can even explain peculiar circumstances in communication such as sarcasm: a sarcastic ‘Lovely weather!’ succeeds precisely when the listener infers the speaker’s actual thought, namely that the weather is awful.

One characteristic of this view is that it presumes private meaning to be possible. What I mean by ‘private meaning’ is that the meaning of a linguistic expression does not depend, for its existence, on there being any observer apart from the person expressing it. This notion is intuitive: people have always been known to ‘speak to themselves’, or to write journals documenting their own thoughts. In the absence of any observers, the linguistic expression in question would still make sense (and hence be meaningful) to the speaker or writer herself.

For an observer who doesn’t know any French, we would say that she doesn’t understand the meaning of the sentence ‘la neige est blanche’, as she would not have the means to infer the intentional thought of the sentence. In this formulation of ‘meaning’, to understand the meaning of a sentence is to be able to infer successfully the intentional thought of the sentence.

With hindsight, this theory greatly resembles what philosophers have called the ‘ideational theory of meaning’, which instead understands meaning as ‘ideas’. Amongst the many objections faced by the ideational theory, I find really interesting the one which points out that there is an aspect of meaning which is inter-subjective and social - e.g. the meaning of the word ‘dog’ is common to all English speakers - whereas an idea is private and subjective. This objection seems to apply to the ‘intentional thought’ theory of meaning as well; but does it really hit? Since we cannot be completely sure that everyone shares identical ‘thoughts’ of a ‘dog’, shouldn’t treating such thoughts as subjective (but inferable) be a much more modest and prudent move?

Thursday, 6 September 2012

Worldliness, Detached-ness, and other thoughts

A question that has been bothering me for the past several days is whether one should ever worry about his or her inclination being too 'worldly' (入世) or too detached (出世). This is not a new problem - but it has recently re-emerged in my thoughts. I use the words 'worldliness' and 'detached-ness' in a very particular sense. I first came across this distinction in Fung Yu-lan's A Short History of Chinese Philosophy, where Fung described the Confucian strand of philosophy as more worldly and the Daoist as more detached. A worldly philosophy, generally speaking, places more emphasis on acting for ends which do not lie beyond death. Conversely, a detached philosophy would place more weight on the kind of development which affects our 'life' after death. It would seem that some metaphysical position (on the purpose of life, the state of 'life' after death, etc.) must be assumed in order to know - with some confidence - how one should appropriately act in life, and hence whether one should incline more towards worldliness or detached-ness.

A person with worldly inclinations would perhaps place a higher value on gaining respect, developing healthy social ties, and fulfilling social (and civic) duties through political participation and charity. In contrast, a person with 'detached' inclinations would, in some sense, be more 'solipsistic', or even egoistic: one's own pleasure, moral development, and the maximisation of one's moral and mental potential would be priorities of the highest importance in that person's life. I associate the 'worldly' life with normality, 'common-ness', and vanity - for it is easy for any person to see that all political and social endeavours will ultimately be worth NOTHING, once one's life ends and with the passage of time. Such endeavours are only temporarily meaningful, conditional upon the observer's existence and upon the observer seeing them from a 'worldly' perspective, rather than from one that is transcendent and, if I may, 'God-like': from which history is like a flowing river, where all human efforts are merely vain attempts to make tiny impressions upon the river-bed, and shall ultimately be erased by time. Yet neither does the 'detached' life sound absolutely attractive - how should one know whether Truth, the ultimate 'Realisation', or inner peace is actually attainable, and not in fact simply a fantasy of philosophers? And - already knowing how hollow and empty the vanities of fame, wealth and rank are - how should one be able to embrace such vanities again without deceiving oneself?

And if possible, how should one reconcile these two extreme inclinations? There are other problems which flow from the original question. Is it really better to be an unhappy Socrates than a happy fool? CAN analysing the grammar and semantics of words help humans decide how to live well? Do morality and social values (virtues & vices) only matter because we can - put simply - put ourselves in other people's shoes?



Friday, 17 August 2012

Arrogance, Free Will and Nature

Sometimes we become very sure of ourselves, believing that we have in our lives achieved some greatness, and even that we are superior to others. We may believe that such an opinion - that we are great or superior - is justified, qualified by certain deeds that we have performed or certain virtues that we have displayed. But are we ever truly worthy of holding such an opinion?

I question this because it has crossed my mind that our actions and behaviour are never, in the strictest sense, controlled by us. The decisions we make to execute particular actions - for example, to read a book this evening - are ultimately consequences of circumstances. Circumstances such as our upbringing, our birth, and the particular people and events which appear in our lives all play a role in shaping our every action and thought. While it may seem to us that we have total control over what we think and what we will, the idea that this feeling is merely an illusion can sound very compelling, though perhaps at first counter-intuitive. Just think: every thought that you have ever thought must have been a consequence of an array of factors (e.g. events or individuals which inspired you to develop ideas, people who taught you language, your birth which gave you your capacity to think and develop ideas, etc.), none of which you can strictly say is your very own (even this very thought). Thus, it would always be foolish to genuinely believe that we are entirely worthy of what we have thought, created or done. This is not to say that acting arrogantly, or acting as if you are very proud of yourself, has no practical social benefits - nor am I suggesting that it does - but merely that believing you are a self-made man, or that you deserve all the glory for any of your actions, is foolish.

None of our achievements are truly ever our very own - none of them would have existed without the faculties (e.g. our mind) Nature has endowed us with, and without the very particular circumstances that Nature has brought about. This Nature of which I speak is the natural order of things, which some refer to as God. I refrain from referring to It as God for fear of that word's misleading connotations.

Ultimately, I hope this demonstrates how being humble and valuing humility as a virtue is wiser than being arrogant and too sure of oneself.

Saturday, 19 May 2012

On truth in politics

How many of us can claim that our beliefs have remained constant and unchanged throughout our entire lives? I would guess the answer to that question is very few, if not none at all. For the insistence on holding the same belief in the light of new, conflicting evidence would be a kind of dogmatism. If dogmatism prevails, science would never progress and debates would be ultimately fruitless. In the most extreme form of dogmatism, nothing can be learnt from any enquiry or reasoning activity. Hence, it seems that there is nothing wrong or problematic about revising our beliefs when we come across new and credible evidence which is inconsistent with what we originally believed. In fact, it seems to be the rational thing to do.

If this conclusion holds true, then the question of what evidence we choose to expose ourselves to is highly important. One may ask the following question: ‘Surely if one is to maximise the chances of attaining the truth, one must be exposed to ALL available evidence?’ That is a valid point. However, faced with the physical constraint of possessing finite resources and time, and given that the sum of all knowledge in the world is of such an unimaginably huge quantity (if not infinite), how is it realistically possible for us to expose ourselves to ALL available evidence?

If it is established that possessing complete awareness of all available evidence is impossible, then the overall balance of the evidence we access becomes an important issue. If it is not possible to become aware of and to comprehend all the literature against, say, wealth redistribution, then the least one can do is ensure that one is exposed to the evidence provided by both sides of the debate. Whether any debate can be simplified into two clearly distinct sides is another issue, but staying open-minded in what we read and listen to seems the best approach to preventing ourselves from falsely believing in lies.

To judge what is true and what is false in politics is an extremely difficult task. In contrast to philosophy or the natural sciences, in politics there is a huge interest in deceiving, and in creating exaggerated, if not downright false, reports. A false report of a scandal, or the taking of a certain individual’s words out of context, for instance, can be of extreme benefit to a particular party. Even statistics can be interpreted - or should I say distorted - in ways which advance a particular political end. How can we judge whether we are learning, or whether we are being brainwashed by false ‘knowledge’?
‘Reason’ may be the typical response to this question. But is reason infallible? Even the cleverest person in the world, it seems, is vulnerable to making mistakes. Even IF we concede that the cleverest person in the world is capable of avoiding all possible mistakes in reasoning, it would seem that the average person must at least be vulnerable to some error in judgment. I understand that excess scepticism can debilitate action; but it seems that if we wish to hold true beliefs, or simply to avoid holding false ones, we must be sceptics to some extent. I also acknowledge that being overly sceptical can prevent us from fully committing to a cause; but perhaps this is only because a full commitment, when lacking a balanced knowledge ‘around a subject’, is rarely an embodiment of rationality. What this further entails is that perhaps one should always think carefully before making any claims about what one knows or believes with certainty, particularly when the belief in question concerns politics.

Tuesday, 1 May 2012

Is there any reason to think that God is omnipotent?

Omnipotence, typically understood as the property of being all-powerful, is one of the defining properties of God in the classical theistic conception. Here, I shall examine two common objections raised against the view that God is omnipotent. Although the objections may sound persuasive at first glance, I shall argue that neither of them is an effective knock-down argument against the omnipotence of God. Hence I shall attempt to sustain the view that it is reasonable to think that God is omnipotent.
Since the conception of God is hardly univocal, I shall limit this discussion to the God of classical theism – the God who is omnipotent, omniscient, omnibenevolent and depends on nothing for His own existence. 
In this brief entry I shall examine two distinct objections of the following type: the first objection aims to demonstrate that omnipotence as a concept is incoherent, and the second objection that omnipotence cannot be reconciled with God’s other divine attributes.
The first objection is often formulated as a question. Forms such as this have often been advanced by the sceptic: ‘Can God create a rock so heavy that He Himself is unable to lift it?’ This creates a paradox for theists: either way, God cannot be omnipotent, because (1) if He can create such a rock, He is not powerful enough to lift it; and (2) if He cannot create such a rock, His power to create is limited.
The most satisfactory response known to this paradox is to offer an alternative definition of ‘omnipotence’. On this definition, God is not plainly ‘all-powerful’; God is ‘all-powerful, but bounded by the laws of logic’. The second step of this response is then to point out that ‘a rock so heavy that an omnipotent being cannot lift it’ is a logically incoherent concept, and hence one cannot reasonably ask God to create such an object. It would be akin to asking God to create, for instance, a married bachelor or a square circle - all logically impossible ‘objects’.
I find this response valid. The alternative - held by Descartes - that God is all-powerful and unlimited by logic is itself an incomprehensible view, and hence unsustainable. It would be meaningless to employ reason and logic in arguing for the existence of God; for how could any comprehensible conclusion be reached if God were beyond the realms of reason? Moreover, the ‘heavy rock’ does not seem to be a conceivable concept, and it seems justified to describe it as a logically impossible one.
The second objection which I plan to look at attacks the incoherence between omnipotence and another divine attribute: omnibenevolence, or all-goodness. Again, the objection is most efficiently phrased as a question: can God do evil? This seems to place the proponents of theism in a dilemma: if God is omnipotent, then it seems that God can do evil; yet, being all-good, it seems that God cannot do evil. Thus, the objection appears to force theists to reject one of the two attributes.
One possible response to this objection is to employ some analysis of the idea of omnibenevolence. The property of being all-good does not necessarily have to be an intrinsic property in the sense that it can limit God’s power; God can simply be all-good because all the actions which stem from God are good, in a somewhat Sartre-esque sense. While this may entail problems which come with the metaphysical view that an external, objective value of ‘good’ exists beyond God, it does demonstrate that God is indeed all-powerful in his nature. I admit that this response is not ideal as it cannot demonstrate that God is all-powerful in effect; but it seems to be more sustainable than its alternatives e.g. redefining ‘omnipotence’ as ‘all-powerful but bound by the laws of logic and morality’.
But is it reasonable to think that God is omnipotent? Although some reasonable theistic responses have been elucidated, admittedly, I don’t think my defence of theism has completely removed the force of its critics’ objections. It is, I think, no longer possible nor consistent to hold ‘omnipotence’ in its original, unqualified definition.

Tuesday, 18 May 2010

Statistical Analysis on UK Elections 1997-2010

An Overview
This brief analytical study of the statistics from 4 recent UK general elections (1997-2010) may have many implications for the entry-level politics student. The 2010 general election saw the end of the dominance of the Labour Party (and New Labour?) as well as the Liberal Democrats' first successful bid for government as a result of a hung parliament – only the second to occur since 1945. Statistics from these 4 general elections also reveal an increasing share of the vote for minor parties (suggesting the end of the 2-party system?), as well as the lowest turnout percentages since the Second World War (voter apathy! Or an improving standard of living?). No new or major thesis is being put forward in this analysis, but it is hoped that these numbers will help readers understand more about the UK system of elections (and government).



The Labour Party

Of the four general elections in 1997, 2001, 2005, and 2010, the Labour Party was relatively successful in producing majorities of seats. Although the party saw a gradually declining majority from 1997 to 2005 (from 179 to 66), in 1997 and 2001, under the leadership of Tony Blair, it was still able to produce “landslide” majorities of 179 and 167 respectively. The Labour Party won every general election under Tony Blair, whereas under Gordon Brown the party lost the 2010 election with only 258 seats, 67 short of a majority.

In none of the four general elections was the Labour Party able to secure a majority of the popular vote: it was most successful in 1997, securing 43.2% of the vote, and least successful in 2010, securing only 29%. Of its “victorious” years, Labour’s mandate (i.e. right to govern) was weakest in 2005, when it won only 35.3% of the vote.

The Conservative Party

In contrast to Labour, the Conservatives only won 1 out of the 4 elections between 1997 and 2010. Its victory in 2010, however, was not entirely a satisfactory one, as it was still 20 seats short of winning a majority despite winning 36.1% of the popular vote (more than Labour’s 35.3% in the 2005 g.e.). The 2010 general election was David Cameron’s first victory as the leader of the Conservative party.

As opposition
The Conservatives retained a minimum of 30.7% of the popular vote (in 1997) across the 3 general elections in which they were the largest opposition party. Popular support, measured in votes, was highest in 2005 at 32.4% under the leadership of Michael Howard, having increased by roughly 1 percentage point at each election since 1997 while the Conservatives were in opposition. The Conservatives fought those 3 general elections under 3 different leaders: John Major in 1997, William Hague in 2001, and Michael Howard in 2005. The 2010 general election under David Cameron saw a 3.8 percentage point increase in the Conservatives' share of the popular vote.

The Liberal Democrats
An increasing share of the popular vote for the Liberal Democrats can be observed across the four general elections of 1997-2010. Beginning with only 16.8% of the vote in 1997 under the leadership of Paddy Ashdown, the Liberal Democrats managed to win nearly a quarter of the popular vote, 23%, in 2010 under Nick Clegg: an increase of roughly 6.2 percentage points. The biggest increase between elections, however, occurred at the 2005 general election, which saw the Liberal Democrats’ share of the popular vote rise roughly 3.8 percentage points, from 18.3% to 22.1%, in four years.

In terms of seats, the Liberal Democrats actually saw a fall in 2010, despite consistent increases over the years 1997-2005. Seats won by the Lib Dems in the House of Commons fell from 62 in 2005 to 57 in 2010, even though they won a bigger share (by approx. 1 percentage point) of the popular vote. This was, however, still a considerable improvement on the 1997 result, when the Lib Dems managed to win only 46 seats under Paddy Ashdown.

Within the period, Charles Kennedy was the only leader to lead the Lib Dems in more than one election (2001 and 2005). Nick Clegg was the only leader who managed to lead the Liberal Democrats into a (coalition) government.

Minor Parties
Minor parties (excl. Labour/Conservatives/Liberal Democrats) saw their share of the popular vote peak in 2010, at a significant 11.9%. This contrasts with 1997, when small parties contributed only 9.3% of the total vote. Compared with historical statistics - the average popular vote share for minor parties from 1945 to 1992 was only 3.35% - the increase at recent general elections has been significant. A few case studies are also worth noting: while in 2005 UKIP failed to gain any seats despite winning 2.2% of the vote, the Green Party gained its first House of Commons seat in 2010 with merely 1% of the vote. Also, in 2010, 35% of voters supported a party other than Labour or the Conservatives.


Turnout
Turnout percentages for the four recent general elections were also considerably low. Turnouts from 1945 to 1992 averaged a respectable 76.7%, but the closest figure to that in the ensuing four general elections was the 71.4% of 1997. Most notorious was the turnout in 2001, when only 59% of the registered electorate voted. Between 2001 and 2010 a gradual improvement could be witnessed, with 65.1% of the electorate voting in the 2010 general election.

Proportionality
The FPTP (First-Past-the-Post) system used for UK general elections is known for creating strikingly disproportionate results. In 2005, Labour won 55% of the seats in the House of Commons with a mere 35.3% of the popular vote. In 2010, the Conservatives won 36.1% of the popular vote, yet managed to win only 46% of the seats - short of a majority. Between 2005 and 2010, the Liberal Democrats saw an increase of about 1 percentage point in their share of the popular vote, but a fall of 8% in their share of seats. UKIP won 2.2% of the vote in 2005 with no seats, while the Green Party won one seat with only 1% of the vote in 2010.
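The gap between vote share and seat share quoted above is easy to quantify. A minimal Python sketch using only the figures stated in this analysis (the "seat bonus" label is my own shorthand for this quick comparison, not standard psephological terminology):

```python
# Vote share vs seat share under FPTP, using the percentages quoted above.
# "Seat bonus" = seat share minus vote share, in percentage points.
results = [
    ("Labour", 2005, 35.3, 55.0),
    ("Conservative", 2010, 36.1, 46.0),
]

for party, year, votes_pct, seats_pct in results:
    bonus = seats_pct - votes_pct
    print(f"{year} {party}: {votes_pct}% of votes -> {seats_pct}% of seats "
          f"({bonus:+.1f} pts)")
```

On these figures, Labour's 2005 bonus (about +19.7 points) is roughly double the Conservatives' 2010 bonus (about +9.9 points), which is the disproportionality the paragraph describes.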

Sunday, 16 May 2010

On Democracy in Hong Kong

Hong Kong is a unique place where you can find such a high degree of economic freedom at the same time that the common man does not possess the political right to vote for the Head of Government. The Wall Street Journal and the Heritage Foundation have ranked Hong Kong as the world’s freest economy for 15 consecutive years, and in terms of GDP per capita Hong Kong is certainly one of the most prosperous and well-developed cities in the world. Yet does it make any sense that people should enjoy economic prosperity and freedom while possessing no genuine political rights?

Firstly, it is probably a good idea to clarify to what extent residents of Hong Kong are deprived of their political rights. Hong Kong’s unicameral legislature, the Legislative Council, is made up of 30 directly elected members and 30 “functional constituencies” (“elected” by trade and labour unions in Hong Kong). Hong Kong’s Head of Government, the Chief Executive (currently Donald Tsang), is elected by a committee of around 400 electors, many of whom are firmly in the grip of the government in China.

Obviously, compared to other notable democracies like the USA and the UK, Hong Kong residents have little influence in politics: Americans get to vote for their President once every four years, on top of Congressional elections which happen once every two years; Britons cast a vote once every four to five years (depending on when the incumbent Prime Minister wishes to call a general election) for an MP (Member of Parliament) belonging to a party, and the leader of the party holding a majority of the seats in the House of Commons becomes the Prime Minister.

However, it is far from ideal (according to many leading “Democrats” in HK) to use the USA or UK model in its exact form for Hong Kong. The reason for the USA’s frequent elections is the Founding Fathers’ intent to create a system of checks and balances in a federal system (where individual states pool their sovereignty in the federal government, instead of a central government giving power to local governments), so that no alliance of interests could take control of the whole system of government in a country so diverse and large. In a region where efficiency is held in such high priority, greater checks and balances would only serve to magnify the legislative inefficiencies of the current system. Being small and having a relatively homogeneous society, what Hong Kong needs is a flexible government, rather than one which has to work with its hands tied. Just imagine how Hong Kong people would react if it took two decades to pass healthcare reform.

In my opinion, the UK’s system of government is more appropriate for Hong Kong, though like the U.S. system it is neither perfect nor exactly the best fit. The UK has a bicameral legislature, which means that it has two separate Houses of Parliament. Like Hong Kong’s legislature, not all of the UK Parliament is elected; the upper chamber, the House of Lords, is made up of appointees called Peers/Lords, and as a collective the House of Lords has the legislative power to delay bills for up to 12 months. Hong Kong’s “functional constituencies” can be seen in much the same light. Even though “functional constituencies” are not elected, they serve the function of providing expertise on a piece of legislation, and of ensuring that some laws are not passed merely to win the votes of the public. Policies like tax hikes, building airports and laying down railway tracks are often unpopular measures which have to be pushed forward for the benefit of all. As Plato argued, a ship can only sail properly when it is guided by a captain, and not by its whole crew. Democracy shouldn’t be blindly pursued as a social objective; rather, it should be considered as a means to improve the welfare of society.

I am not arguing that Hong Kong has the perfect political system, or that it is sufficiently democratic. I agree that “functional constituencies”, despite their contributions, need to be reformed and reduced in power; the electors voting for the Chief Executive need to be increased in both numbers and diversity, and people should be allowed more say in determining who are the electors. Lastly, the entire system should be made more accountable and democratic through devices like referendums and focus groups.

It must also be understood that Hong Kong isn’t as democratic as many liberals would like it to be because of its relationship with the PRC (People’s Republic of China). Hong Kong’s current position as an economic powerhouse and an international financial centre is both safeguarded and upheld by the PRC’s own international economic presence. Whilst every effort should be made to resist concessions on Hong Kong’s present political freedom, pushing the boundaries too far by demanding radical constitutional reforms will only result in severe and lasting damage to Hong Kong’s present economic success. Laissez-faire policies and huge financial transactions don’t mean much under unstable politics. Politicians and political activists should be wise enough to realise this.

Friday, 9 April 2010

In Defence of the Yuan

The Sino-US currency crisis has ascended to another level.

On the 24th of March 2010, Harvard historian Niall Ferguson called on the U.S. Treasury to label China, among other Asian economies, a currency manipulator. Most open economic sources, notably the Economist's "Big Mac" index, indicate that the Yuan is indeed - undeniably - undervalued, pretty much contradicting President Hu's opinion last week. Yet while there is plenty of evidence to confirm that the Yuan IS undervalued, U.S. Treasury Secretary Geithner has today curiously delayed publication of the currency report in which many had speculated the U.S. would brand China a currency manipulator --- surprise, surprise? ..so what the hell is going on?
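The "Big Mac" index mentioned above rests on a simple purchasing-power-parity idea: compare the exchange rate that would equalise burger prices in two countries with the actual market rate. A rough sketch of the calculation, with illustrative placeholder numbers rather than the Economist's actual 2010 data:

```python
# Big Mac index sketch. All numbers below are ILLUSTRATIVE placeholders,
# not the actual 2010 figures.
price_us_usd = 3.58   # hypothetical U.S. Big Mac price, in USD
price_cn_cny = 12.5   # hypothetical Chinese Big Mac price, in CNY
actual_rate = 6.83    # hypothetical market exchange rate, CNY per USD

# PPP-implied rate: the rate at which a Big Mac would cost the same
# in both countries.
implied_rate = price_cn_cny / price_us_usd

# A negative result means the Yuan is undervalued against the dollar.
valuation = (implied_rate - actual_rate) / actual_rate
print(f"PPP-implied rate: {implied_rate:.2f} CNY/USD")
print(f"Yuan valuation vs USD: {valuation:+.0%}")
```

With these placeholder prices the implied rate is far below the market rate, which is exactly the sense in which the index calls the Yuan "undervalued".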

Asking several questions would probably clear things up:
--- Is the Yuan really undervalued? ---
(This is a pretty certain yes)

--- Did the Chinese manipulate the Yuan? ---
(Not that obvious - certainly China's cheap labour has contributed to China's trade surplus by creating very cheap exports - but capital restrictions imposed by the Chinese government do create a lot of distortions which would depress the Yuan. So to a certain extent yes. )

--- If the Chinese did manipulate the Yuan, to what extent is it "the Chinese reaping the fruits of the Americans"? ---
(The mainstream thought is that the Chinese have artificially suppressed the Yuan so that Chinese exports to the U.S. are cheaper (hence more of them) and goods imported into China from the U.S. are more expensive (hence fewer) - creating more real income for the Chinese at the U.S.'s expense, as well as the mirror-image trade surplus and deficit between China and the U.S.
However, there are at least 3 main factors that we should probably take into account before jumping to this conclusion.
Firstly, although the U.S. does import a huge amount from China, China essentially lacks raw materials, and in order to sustain its own production it must import raw materials from the U.S.; an undervalued Yuan makes those imports more expensive, so the net benefit to China's trade is not as great as commonly perceived - net trade values are often ignored in such calculations. Also, the U.S. trade deficit had long existed before the Yuan was considered an undervalued currency - so an undervalued Yuan is perhaps not quite as significant for the U.S. economy as the U.S. right-wing media has portrayed it to be.

Secondly, China has now overtaken Japan as the biggest buyer of U.S. Treasuries - the enormous foreign reserves built up by those purchases are one of the reasons why the Yuan is currently so undervalued. Even though China is in the process of replacing U.S. Treasuries with gold reserves, the easiest way to let the Yuan appreciate would be for China to sell its U.S. Treasuries. And surely that would not be so good for the U.S. economy... not when Treasury yields sky-rocket and investors are "crowded out" by high interest rates.

Thirdly, perhaps a minor point - but many big retailers, Walmart, Carrefour, etc., have been vocally against Yuan appreciation, as it would lead to an inevitable increase in their input costs. Undeniably, U.S. consumers are benefiting from a greater choice of goods, as cheap Chinese exports provide them with cheaper necessities (despite questionable quality); there are always two sides to a coin when it comes to currencies. But then you probably can't blame governments and big businesses for not taking consumer welfare into account - as usual.

Maybe a fourth point: even if China were depressing its currency to boost its exports, doing so necessarily creates a negative externality somewhere else - holding the Yuan down means printing Yuan to buy dollars, and we all know the inflation that follows can bring about very deadly consequences. China's inflation rate can therefore guide us as to how seriously undervalued the Yuan is. In my opinion, there is perhaps too much eagerness to pressure the Chinese government into appreciating the Yuan - as soon as their inflation rate shows signs of going through the roof, revaluing the Yuan will become necessary and almost automatic. )

Of course, all of this is history in the making - never in the annals of economics has a still largely state-controlled economy had such an interdependent love-hate relationship with such a free-market economy - not even between the USSR and the U.S.A. during the Cold War... whatever the outcome, it will surely be one that we cannot afford to ignore.

Sunday, 28 February 2010

Eating Dogs.

A health hazard warning may be necessary for dog-lovers who are about to read the following.

The question is simple: Is it wrong to eat dogs?

Most people will answer, "Well not really - but would you eat one?"

Personally I rather enjoy the company of dogs, and I would refuse to eat one even if I entered a restaurant and it were the only dish on the menu (I think I'd turn around and leave). I think I speak for a lot of people when I say dog-eating is quite a disgusting thing to do; to some it would almost be like eating your own brother or friend.

Yet, on some level it doesn't make a lot of sense when we complain about people who eat dogs. How are dogs different from other animals? Just because cows, pigs and sheep aren't "man's best friend", are we allowed to eat them? Just because chickens and turkeys don't perform fancy tricks like dogs do, is it justified to eat them?

Some vegetarians may argue: Don't eat meat at all, because it's wrong altogether!
That's debatable, but it still doesn't solve the dog-specific problem we are looking at here: if we are meat-eaters and we justify eating meat on the usual grounds (e.g. humans are born carnivores; survival of the fittest; I don't care about animals), does that mean it is also permissible to eat a dog? And if it is not wrong to eat a dog, aren't we being inconsistent when we complain that people who eat dogs are immoral?

Perhaps the biggest reason why we find eating dogs disgusting is that there is an "emotional bridge" between humans and dogs. To some extent, there is a relationship. People who keep animals like chickens or pigs as pets tend not to eat them. It is an element of human emotion that we feel disgusted when we harm things we love for a comparatively trivial end (e.g. satisfying our stomachs), and human emotions constitute a part of morality. This is the very same reason why most (I genuinely hope I could replace "most" with "all") would say eating a human being is wrong.

Of course, this argument rests on a lot of challengeable assumptions. For example, someone may eat a dog under very different circumstances and for very different reasons. But for now, we can perhaps settle on this conclusion: eating dog is wrong - yet that is an inconsistent judgement for a meat-eater to hold at the same time.