The East Asian Financial Crisis of 1997 (A brief overview)

From the 1960s to the 1990s the performance of the East Asian economies was spectacular. Their share of global GDP rose steadily against the rest of the world, almost doubling over 30 years to around 22.5% by 1990. However, there were apparent weaknesses in their financial structures which partly explain the 1997 East Asian financial crisis.

Success story

Until 1997 the countries of East Asia enjoyed very high growth rates.

The ingredients for their success included, but were not limited to: high saving and investment rates, a strong emphasis on education, a stable macroeconomic environment free from high inflation or deep slumps, and a high share of trade in GDP.

Weaknesses

The main weaknesses that became apparent were:

Productivity – rapid growth of inputs but little increase in productivity (output per unit of input)

Banking regulation – the poor state of banking regulation

Legal framework – the lack of a good legal framework for dealing with companies in trouble

The Crisis

During the 1990s there was heavy speculation on the Thai baht as property markets overheated. After running out of foreign reserves, Thailand had to float its currency, cutting the peg to the US dollar. The crisis began on July 2, 1997 with the collapse of the Thai baht. The sharp drop in the currency was followed by speculation against the currencies of Malaysia, Indonesia, the Philippines and South Korea. All of the afflicted countries sought assistance from the IMF apart from Malaysia.

Debt-to-GDP ratios rose drastically as exchange rates fell. Interest rates of 32% in the Philippines and 65% in Indonesia during the crisis did nothing to calm investors, and both countries saw a greater percentage fall in GNP and in their exchange rate (against the US dollar) than South Korea.

It is not entirely clear what sparked the crisis. The first major corporate failure came in Korea when Hanbo Steel collapsed under huge debts. This was soon followed by Sammi Steel and Kia Motors, which meant problems for the local merchant banks that had borrowed from foreign banks.

Local banks were hit by: withdrawal of foreign funds, substantial foreign exchange losses, a sharp increase in NPLs (non-performing loans) and losses on equity holdings.

Pilbeam argued that the crisis was due to the banking system and its relationship with the government and corporate structure, rather than to macroeconomic fundamentals.

The countries wanted high economic growth and so applied pressure on firms to make large investments and on domestic banks to lend to those firms. The domestic banks failed to assess the risks properly and financed the loans with funds from residents, foreign banks and investors, and the loans were also guaranteed by the government. Overall the banking system was weak, with poor regulation, low capital ratios, poor screening of borrowers and corruption. Thus banks lent to firms without regard for profitability, which explains the large number of non-performing loans.

The crisis was ‘V-shaped’: after the sharp output contraction in 1998, growth returned in 1999 as the depreciated currencies spurred higher exports.

External Factors

Japan’s recession through the 1990s – kept interest rates low, which allowed East Asian economies to finance excessive investment projects cheaply.

50% devaluation of the Chinese currency in 1994 – partly explains the loss of cost competitiveness of the East Asian economies.

Financial deregulation and moral hazard – attracted foreign banks to invest, all the more so because of the government guarantees that allowed for the banks’ reckless lending.

Sharp appreciation of the dollar from mid-1995 – led to a deterioration in the competitiveness of economies that had pegged their currencies to the dollar. Given the time lags between exchange rate movements and trade, this began to affect their export performance in 1996/97.

The IMF

Objectives: prevention of default, prevention of a free-falling exchange rate, prevention of inflation, maintenance of fiscal discipline, restoration of investor confidence, structural reform of the financial sector and banking system, structural reform of the corporate sector, rebuilding of foreign reserves, and limiting the decline of output.

At their disposal they had the following tools: bank closures, fiscal/monetary discipline and structural reform of the banking and corporate sector.

Did the policies work?

The policies were harsh and arguably deepened the crisis.

Closure of banks reduced credit and created panic among international investors, which led to bank runs.

Higher interest rates and tighter fiscal policy reduced output. The tight fiscal policy was viewed as harsh considering most countries were already running a fiscal surplus.

There is an argument that knowing the IMF was there to help could have worsened the problem of moral hazard.

Post crisis?

Output fell and fiscal positions moved into deficit. The current account improved, due to a fall in imports and a rise in exports after the devaluations. Savings increased and investment fell. Overall the recession was short and the economies recovered quickly, largely because of the boost to exports caused by the devaluations.

Insurance companies and risk neutrality

The requirement for the insurance market to be efficient is that the least risk-averse agent bears all the risk. In practice this is the insurance company, which takes on the risk and gives the (more risk-averse) insured agents a more certain outcome.

The market will also have to be in equilibrium, which means that two conditions have to be met. The first is the break-even condition: no contract makes negative profits. If a contract makes a loss, the company may have to withdraw it or, depending on the profit made from other contracts, shut down. The second is the absence of unexploited opportunities for profit: if such opportunities exist, rival companies can exploit them by offering a better contract.

To be efficient, the insurance market will also have to be perfectly competitive so that firms offer fair insurance (premium = probability of loss × size of loss). If a firm does not offer fair insurance, another company can undercut it and take all its customers.

Finally, the market needs perfect information so that firms can offer the correct premium to each individual. Otherwise they have to offer a pooling equilibrium, because they cannot distinguish individual risk types. This means low-risk people effectively pay for part of the high-risk candidates’ insurance as well as their own, which is not Pareto efficient. Pareto efficiency is when the principal cannot be made better off without making the agent worse off. This condition must be met for the market to be efficient; if it is not, the market is at a point where both parties could be made better off.
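To make the cross-subsidy concrete, here is a minimal sketch (in Python, with purely hypothetical probabilities, loss size and pool composition) of the fair premium for a low-risk and a high-risk type, and the single pooling premium an insurer would have to charge if it could not tell the types apart.

```python
# Minimal sketch (hypothetical numbers): fair premiums vs. a pooling premium.
# Fair premium = probability of loss x size of loss.

loss = 10_000                 # insured loss if the bad outcome occurs
p_low, p_high = 0.01, 0.10    # loss probabilities of the two risk types
share_low = 0.8               # fraction of the insured pool that is low risk

fair_low = p_low * loss       # what a low-risk customer "should" pay (100)
fair_high = p_high * loss     # what a high-risk customer "should" pay (1000)

# Without information on individual types, the insurer charges one premium
# that breaks even on the average risk of the whole pool.
p_pool = share_low * p_low + (1 - share_low) * p_high
pool_premium = p_pool * loss  # 280

print(f"fair premiums: low={fair_low:.0f}, high={fair_high:.0f}")
print(f"pooling premium: {pool_premium:.0f}")
print(f"cross-subsidy paid by each low-risk customer: {pool_premium - fair_low:.0f}")
```

The low-risk customers pay more than their fair premium and the high-risk customers pay less, which is exactly the cross-subsidy described above.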

Insurance markets meet most of the requirements above and so, to an extent, are efficient, although perfect information is of course never fully achievable. Insurance companies reduce the amount of moral hazard and adverse selection in the market by, for example, making candidates fill out forms that void contracts if the information proves to be false, and by offering no-claims bonuses.

To discuss whether risk neutrality is a necessary and sufficient condition for insurance to take place, we need to look at what would happen in the three possible cases: a risk-neutral, a risk-averse and a risk-loving insurer.

We can start by looking at a risk-neutral insurance company. In reality, large organisations such as insurance companies tend to be risk neutral for two main reasons: each risk is small relative to the organisation’s size, and because they write so many contracts the individual risks tend, on average, to cancel each other out. A risk-neutral insurance company earns its profit from the fact that the value of the premiums it receives is greater than or equal to the expected value of the losses.

Even insurance companies that take on purely risky customers will still be risk neutral, as they raise premiums to offset the extra risk. For example, some specialist holiday insurance companies may only take on candidates who already have pre-existing medical conditions and find it difficult to get insured elsewhere. These companies offset the extra risk by charging much higher premiums. Again, the value of the premiums they receive will be greater than or equal to the expected value of the losses.
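A small simulation sketch (hypothetical probability, loss size and loading) illustrates why a large book of policies makes this work: the realised average loss per policy settles close to the expected loss, so a premium at or slightly above the expected value of the loss is enough to break even.

```python
# Minimal sketch (hypothetical numbers): over many independent policies the
# realised average loss converges towards the expected loss p * L, so a
# risk-neutral insurer charging premium >= p * L breaks even on average.
import random

random.seed(0)
p, loss = 0.05, 10_000        # probability and size of each insured loss
premium = p * loss * 1.05     # expected loss plus a 5% loading

for n in (100, 10_000, 1_000_000):
    claims = sum(loss for _ in range(n) if random.random() < p)
    avg_loss = claims / n
    print(f"n={n:>9}: average loss per policy = {avg_loss:8.1f}, "
          f"profit per policy = {premium - avg_loss:7.1f}")
```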

An insurance company cannot be risk averse because of the nature of the insurance market. If a company were risk averse, it would charge premiums much higher than the expected value of the loss. In the presence of perfect information and perfect competition, customers would not choose to insure with this company, as there would be better alternatives elsewhere. Therefore, if an insurance company were risk averse, it would attract no customers and would soon fail.

Finally, an insurance company will not survive in the long run if it is risk loving. A risk-loving firm would accept lower premiums than a risk-neutral firm would and hope that the gamble pays off. Since its premiums would be less than its expected losses, it would be expected to make a loss overall, and eventually it would be making negative profits and have to shut down (unless it were very, very lucky!).

So, for all of these reasons, it is necessary for an insurance company to be risk neutral for insurance to take place. It is also a sufficient condition: a risk-neutral insurer will, at a minimum, break even and so can continue to exist in a perfectly competitive market, and it can offer a competitive premium and survive in a market with perfect information. Being risk neutral also allows the insurance company to be efficient, as it will be meeting the break-even condition.

Project crashing – a few considerations

Firstly, what exactly is “crashing” an activity? Put simply, crashing an activity is the act of shortening the overall project duration by reducing the time of a critical activity. A critical activity is an activity which must start and finish on time for the overall project to be completed on time.
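To make the idea of a critical activity concrete, here is a minimal sketch (in Python, with a hypothetical four-activity network) that runs a forward and a backward pass and flags the activities with zero slack; only crashing these shortens the overall project.

```python
# Minimal sketch (hypothetical network): identify critical activities,
# i.e. those with zero slack, via a forward and a backward pass.

# activity: (duration in days, list of predecessors)
activities = {
    "A": (3, []),
    "B": (2, ["A"]),
    "C": (4, ["A"]),
    "D": (1, ["B", "C"]),
}

# Forward pass: earliest start / earliest finish.
es, ef = {}, {}
for act in activities:                      # insertion order already respects precedence here
    dur, preds = activities[act]
    es[act] = max((ef[p] for p in preds), default=0)
    ef[act] = es[act] + dur

project_duration = max(ef.values())

# Backward pass: latest finish / latest start.
lf, ls = {}, {}
for act in reversed(list(activities)):
    dur, _ = activities[act]
    successors = [a for a, (_, preds) in activities.items() if act in preds]
    lf[act] = min((ls[s] for s in successors), default=project_duration)
    ls[act] = lf[act] - dur

critical = [a for a in activities if ls[a] - es[a] == 0]
print(f"project duration: {project_duration} days")
print(f"critical activities: {critical}")   # crashing only these shortens the project
```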

There are many practical factors which a company should take into account before deciding to crash an activity. Also, depending on the type of company and project, these factors may vary.

The first consideration should be whether crashing an activity will make you any money. Even if the direct savings from finishing early do not outweigh the cost of crashing, crashing may still make you money by freeing up your and your team’s time to focus on other projects.
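A rough back-of-the-envelope check along these lines, with entirely hypothetical figures, might look like the following.

```python
# Minimal sketch (hypothetical figures): does crashing pay for itself per day saved?
crash_cost_per_day = 800       # overtime / extra resources needed to save one day
direct_saving_per_day = 500    # penalty avoided or bonus earned per day saved
freed_day_value = 400          # value of a freed team-day redeployed to other projects

net_per_day = direct_saving_per_day + freed_day_value - crash_cost_per_day
print(f"net benefit per day crashed: {net_per_day}")  # positive -> crashing pays
```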

We also need to know whether workers are available to do the overtime. If so, are they happy to do it, and will working the overtime make them less productive in the following days of the project? If the project involves heavy manual labour, for example, then working extra days may tire the workers out and make them less productive. Pushing the workers to do more may also increase the risk of injuries that could cause further delays. If you need to bring in extra resources to crash the project, they may not be familiar with the task at hand, so you may need to train them and they may be less efficient than the current resources.

If an activity is crashed, there may be an increased risk of missing a deadline because of the reduced slack. Missing deadlines may hamper client relations and so be detrimental to the business. Finishing a project early may also be detrimental: for instance, if a company were running a stock audit for a retailer, finishing early might make it appear as if corners had been cut in the counting.

Another practical factor you would need to look at is whether the materials are available to crash an activity. If the materials will only be delivered at a certain time, then crashing may not be an option.

If the company is working on multiple projects, crashing one activity may mean that other projects are delayed. There may also be legal considerations to take into account, for example the maximum hours of the working week for staff.

Finally, before crashing an activity, a company should look at whether there are other options that are more cost- or time-effective than crashing. Fast tracking is one viable alternative, where you overlap tasks which were initially scheduled sequentially. You could also make other activities more efficient, for example by splitting a long task into shorter intervals, as people may work harder over a short period than if they have to pace themselves to work a full day.

Are the assumptions of the Marshallian and Hicksian consumer optimisation problems plausible?

The assumptions made implicitly in the two models relate to the basic axioms of consumer theory which I have briefly described below.

Completeness – “for any pair of bundles, our consumer can say if one of the pair is preferred to the other or is indifferent.” (M. Hoskins, p. 3)

Transitivity – “if APB and BPC then APC”. This also works with indifference: if A/B (A indifferent to B) and B/C, then A/C. (M. Hoskins, p. 4)

Continuity of preference – “for any bundle of goods, this requires another bundle ‘near it’ such that these two bundles are indifferent to each other.” “This requires the divisibility of goods and allows us to define the marginal rate of substitution between x and y.” (M. Hoskins, p. 4)

Non-satiation – “consumers always prefer more goods to fewer. This implies that indifference curves slope down from left to right.” (M. Hoskins, p. 5)

Convexity of preference – “people prefer a mixture of bundles.” “This leads to diminishing marginal rates of substitution.” (M. Hoskins, p. 7)

The assumptions are largely the same for both the Hicksian and Marshallian consumer optimisation problems. We assume that both optimisation problems have the properties of completeness and transitivity, as otherwise we would not have a well-defined utility function to work from. These two properties, however, can seem unrealistic when related to the real world. There may be situations in which decision making is extremely difficult, so a consumer may not be able to decide whether one bundle of goods is preferred to another or whether they are indifferent, rendering us unable to analyse the situation economically using this model. The models also assume that the consumer knows the utility they will receive from a product, which is not always the case. The transitivity assumption eliminates the possibility of intersecting indifference curves. At any one point in time this could be true: for example, today I may prefer an apple to a pear and a pear to an orange, and therefore I would prefer an apple to an orange. However, it does not take the time horizon into account; tomorrow my preferences may change, and neither model allows for that.

Continuity of preference is also assumed by both models, and both assume that at the optimum the marginal rate of substitution equals the price ratio (P1/P2). This, however, ignores quantity discounts: if you order in bulk you are likely to get more for your money.

The non-satiation property also will not hold in reality. Consumption is defined per period, and there is usually an upper limit on consumption of most goods in a given time frame. This assumption also links in with the other assumption made in both models that X1* > 0 and X2* > 0 (non-negativity). If there are two goods, for example carrots and peas, and the consumer does not like carrots, he does not prefer to have more carrots, because he does not like them. And what if one of the goods were pollution, which it would be rational to prefer none of?

The last of the basic axioms I have listed is convexity of preference, which again, for the reasons given before, may not hold in reality. People may prefer a mixture of bundles in some cases, but not in others. Where they do prefer a mixture, diminishing marginal rates of substitution do make sense in reality. For example, if the two goods were coke and water (and you preferred a mixture of the two) and you had loads of water and no coke, then eventually the enjoyment you would gain from an extra glass of water would fall. You would therefore be prepared to give up more water in exchange for a given amount of coke.

Although the two optimisation problems share all the assumptions above, they also have individual assumptions. The Marshallian problem specifically assumes that the consumer is a utility maximiser. This is in line with the rational theory of the consumer, who wants to maximise utility subject to his wealth. This is not always true in reality, as it depends on the type of consumer. Some consumers may well try to maximise their utility even if it means exhausting their income; for example, people who are hooked on drugs may spend all of their income on drugs, which they believe gives them their maximum utility. Other consumers will want to save some of their income and in doing so happily forfeit the extra utility.

Finally, the Hicksian optimisation problem assumes that the consumer targets a level of utility and finds the cheapest way to reach it. I think this is probably truer of reality than the Marshallian approach; however, again, it depends entirely on the consumer. Less wealthy consumers may decide they want a certain product and then search the internet and the high street for the best deal. A consumer who is a billionaire, however, may not have a target utility, as money is not a worry. He may also not adhere to price ratios: if a good increases in price, he may buy more of it as a symbol of status.
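For reference, the two problems discussed above can be written compactly in standard notation, where p1 and p2 are prices, m is income and ū is the target utility; at an interior optimum both deliver the same tangency condition between the marginal rate of substitution and the price ratio.

```latex
% Marshallian (primal) problem: maximise utility subject to the budget constraint.
\max_{x_1, x_2} \; u(x_1, x_2) \quad \text{s.t.} \quad p_1 x_1 + p_2 x_2 = m

% Hicksian (dual) problem: minimise expenditure subject to a target utility level.
\min_{x_1, x_2} \; p_1 x_1 + p_2 x_2 \quad \text{s.t.} \quad u(x_1, x_2) = \bar{u}

% At an interior optimum both problems give the same tangency condition:
\mathit{MRS} = \frac{\partial u / \partial x_1}{\partial u / \partial x_2} = \frac{p_1}{p_2}
```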

A comparison of the Gini and Atkinson Measures of Inequality

The Gini and Atkinson Indexes are both measures of inequality. The Gini measure however is a positive measure, whilst the Atkinson is a normative measure. A positive measure is purely statistical whereas a normative measure is “based on an explicit formulation of social welfare and the loss incurred from unequal distribution.”[1]

The Gini coefficient measures inequality using the values of a frequency distribution and is derived from the Lorenz curve framework. If the Gini coefficient is zero, the Lorenz curve coincides with the line of equality and everyone has the same income. The more the curve deviates from the line of equality, the higher the inequality. Gini coefficients can be used to compare income distributions over time, making it possible to see how inequality changes over a period independently of absolute incomes.[2]

The main weakness of the index as a measure of income distribution is that it cannot differentiate between different kinds of inequality. Theoretically, two Lorenz curves could intersect one another (showing different patterns of income inequality) but still result in a similar Gini coefficient. The Gini coefficient also measures relative, not absolute, wealth. Therefore, when the Gini coefficient of a developing country rises, this can be misleading: the change in measured inequality may be down to things such as structural change or immigration, and a rising Gini coefficient suggests increasing inequality even when the number of people in absolute poverty is actually decreasing. Economies with similar incomes but different income distributions can also have the same Gini index. For example, in one economy 50% of people have no income and the other 50% have equal incomes, giving a Gini index of 0.5; in another, the poorest 75% of people have 25% of the income while the richest 25% have 75% of it, which also gives a Gini index of 0.5.[3] Clearly the Gini coefficient will not always capture the whole picture of inequality within an economy.
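As a quick check on the two stylised economies above, here is a minimal sketch (four-person versions of each economy) using the mean-absolute-difference form of the Gini coefficient; both come out at 0.5 despite the very different distributions.

```python
# Minimal sketch: Gini coefficient via the mean absolute difference,
# G = (sum over all ordered pairs of |y_i - y_j|) / (2 * n^2 * mean income),
# applied to four-person versions of the two economies described above.

def gini(incomes):
    n = len(incomes)
    mean = sum(incomes) / n
    abs_diffs = sum(abs(yi - yj) for yi in incomes for yj in incomes)
    return abs_diffs / (2 * n * n * mean)

economy_a = [0, 0, 100, 100]        # half have nothing, half share income equally
economy_b = [25/3, 25/3, 25/3, 75]  # poorest 75% share 25% of income, richest 25% have 75%

print(round(gini(economy_a), 3))    # 0.5
print(round(gini(economy_b), 3))    # 0.5
```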

Because of these limitations of the Gini coefficient, other measures can be used in combination with it or separately; for example, welfare-based measures such as the Atkinson index can be utilised.

The Atkinson measure allows different parts of the income distribution to carry different levels of sensitivity to inequality. It incorporates a sensitivity parameter which allows the researcher to attach more or less weight to inequality at different points in the distribution. When creating the measure, Atkinson was especially concerned that the Gini measure could not place different weights on particular income brackets.[4]
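In its standard form, with incomes y1, …, yn, mean income μ and inequality-aversion parameter ε ≥ 0, the Atkinson index is written as follows.

```latex
% Atkinson index for incomes y_1, ..., y_n with mean \mu and aversion parameter \varepsilon:
A_{\varepsilon} = 1 - \frac{1}{\mu}\left( \frac{1}{n} \sum_{i=1}^{n} y_i^{\,1-\varepsilon} \right)^{\frac{1}{1-\varepsilon}},
\qquad \varepsilon \ge 0,\ \varepsilon \ne 1

% Limiting case \varepsilon = 1 (uses the geometric mean):
A_{1} = 1 - \frac{1}{\mu}\left( \prod_{i=1}^{n} y_i \right)^{1/n}
```

The larger ε is, the more weight the index places on the lower end of the distribution; with ε = 0 there is no aversion to inequality and the index equals zero.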

 

In his paper ‘On the Measurement of Inequality’, Anthony B. Atkinson compared inequality in seven advanced and five developing countries. He compared three conventional measures (one of which was the Gini coefficient) against the Atkinson index computed with three different weights, with the weight increasing on the poorer end of the scale. The data highlighted the differences between the measures. With a weight of ε = 2 attached to the lower end of the income scale, the Atkinson measure disagreed with the Gini coefficient in 17 cases. With ε = 1, a smaller degree of inequality aversion, the Atkinson measure still disagreed in 5 cases. The results show that the conclusions reached about income inequality depend on the level of inequality aversion. The reason lies in the distribution of income: in developing countries, incomes are more equal at the lower end of the scale and less equal at the top than in advanced countries, so as inequality aversion increases, more weight is placed on the lower end of the scale and the rankings shift.[5]

The main difference between the two measures is therefore this sensitivity parameter. The parameter makes the Atkinson measure subjective, whereas the Gini measure is objective.[6] While the two may yield similar results when a low level of inequality aversion is used in the Atkinson measure, by varying the parameter the results can clearly differ greatly. It is subjective because the user can choose which subgroups to weight more heavily than others. The Atkinson index is also subgroup consistent, which means that if inequality within a subgroup declines, ceteris paribus, then overall inequality declines. It is also decomposable, which means total inequality can be broken down into a weighted average of the inequality that exists within the subgroups. These are two properties the Gini coefficient does not hold. The Gini coefficient gives equal weight to the entire distribution, whereas the Atkinson index gives more weight to the lower end of the distribution, meaning it accounts more fully for things such as income poverty and illiteracy.[7]