Research Essays

This section includes essays on several research subjects I have undertaken, driven either by my consultancy work or by personal interest.

________________________________________________________

ENDING POVERTY BY 2030? WHAT ABOUT MEASUREMENTS?


Introduction:

During the World Bank Spring Meetings 2014, an END POVERTY circle was repeatedly stamped on the floor, signalling how much this has become a global goal. The dream of a world free of poverty, inscribed at the entrance of the WB headquarters in Washington D.C., was taken to a new level in April 2013 when President Jim Yong Kim announced the twin goals to the international community: first, to end global poverty by reducing the share of people living in extreme poverty to 3% of the global population; second, to boost shared prosperity, understood as increasing the average incomes of the bottom 40% of the population in each country. Poverty reduction has no doubt always been the overall mission of the WB, but this adoption represents an unprecedented shift: for the first time the WB has endorsed specific poverty targets to guide its work. Reiterating these claims, the international community of the post-2015 agenda took an even bolder step, endorsing the total eradication of extreme poverty by 2030. Complete eradication of poverty is surely more aspirational than realistic, but however overoptimistic we may be, it clearly states that this is an absolute priority.

The kind of civilization we have built depends on the way we do our national accounts and construct statistics, which ultimately reflect our aspirations and the values we assign to things. Setting goals helps project a global vision, but it also creates a context for action and policies towards poverty reduction. Setting objectives, however, always implies assessing progress, so much of our capacity to fulfil these ambitions will depend on measurement, and particularly on the concepts, data and indicators of poverty we use. Disaggregating these universal statements into metric analysis will largely determine future achievements.

 

GLOBAL POVERTY ASSESSMENT: PROGRESS SHOWS THE WAY


The overall consensus in the international community that endorsed the MDGs in 2000 has contributed to progress in poverty reduction never witnessed before in human history. Global poverty fell fast: in 2010 only 22% of the world’s people lived on less than US$1.25 a day, compared with 52% in 1980, though this still comprised 1.2 billion people. Half a billion people have been dragged out of the orbit of poverty, over 2 billion people have gained access to improved sources of drinking water, child mortality has been reduced by 41% and malaria deaths have fallen by a quarter.

We are now hitting the 2015 MDG deadline, and the international community has dedicated its recent efforts to understanding what has shifted and what the prospects are for the future. This reconciliation of past and future has produced a dense, diffuse, multi-donor debate. What can be disentangled from it? One of the main contributions of the MDG agenda was to set indicators that project a vision of progress, so it is not surprising that it became essentially a monitoring exercise. One of the findings of this stocktaking was that measuring poverty continues to be a barrier to effective policymaking. The availability, frequency and quality of poverty monitoring data remain low. There are serious challenges with national household surveys, particularly in Sub-Saharan Africa, where institutional, political and financial obstacles hamper data collection, analysis and public access. The need to improve household survey programs is urgent, and the availability of accurate, timely data is critical. Reliable statistics for monitoring development remain inadequate in many poor countries. Building statistical capacity in those countries demands increased and well-coordinated financial and technical support from development partners, complemented by country ownership and governments’ commitment to spur the institutional changes needed to ensure the sustainability of capacity-building efforts. Without this overall effort, future accountability for global commitments will be dampened.

 

METRICS MATTER, BUT CAN GDP AND GROWTH STILL ADD UP?


If measuring poverty is a priority, then surely, in this increasingly performance-oriented society of ours, metrics matter. But one question comes to mind: will it be possible to replicate past achievements in the future using the same measurements and the same old-fashioned path? The MDG agenda gives no answer. Although efficient in setting simple headcount measures as poverty targets, it was silent on the recommended path out of poverty, implicitly assuming that fast growth would automatically drag a portion of the population out of poverty. But will GDP-based measures still fulfil their prophecy?

For decades, classic GDP indicators have been the most widely used measure of economic activity. Inspired by Keynesian economics after the Great Depression, governments took control of the economy using GDP as the statistic to describe its state. To this day it remains, in mainstream economics, the basis of the standard system of national statistics, representing wealth by aggregating all goods and services produced in the economy. Governments, policymakers and aid agencies have obsessively pursued GDP growth as a benchmark of economic development. If what we measure reflects our aspirations, what have we been striving for in this pursuit? The straightforward answer is pure market-based production. GDP was so widely used because it had the advantage of capturing in a single number the monetary valuation of aggregated goods and services produced in the market, so no wonder it was essential for monitoring economic activity. But it became so handy that some were wrongly tempted to use it as an indicator of societal well-being, mis-measuring our lives.

One of the biggest misconceptions about GDP is that it has often been treated as a measure of economic well-being although it represents only market-based activity. What we measure affects what we do, so if we have the wrong metrics we will strive for the wrong things. Relying on GDP and growth as a guarantee of societal well-being thus involves several misconceptions. It means targeting production, while income and consumption are in fact more suitable for measuring living standards. It means tracking the economy as a whole rather than focusing on the household perspective, which is again more pertinent for considerations of living standards. There are several other limitations of GDP as an indicator of economic well-being and social progress, particularly where the poor are concerned. How meaningful are growth and GDP for describing poverty if averaging income through GDP per capita gives no information on the bottom of the wealth distribution, where the poor typically are? As the agenda shifts towards shared prosperity, focusing on the bottom 40% of the population, classic GDP measures that look at the overall population and implicitly take the richest as the benchmark of wealth will not serve.

Beyond these technicalities there are also wider conceptual issues. Do GDP accounting systems fit the idiosyncrasies of developing countries? Looking for example at Africa, with the highest incidence of poverty, does this accounting fit a continent where 80% of the labour force remains in the informal sector and most household income relies on family work? How can GDP be suitable if it neglects non-market activities such as home production? Surely it may be highly understated because of the informal sector and uncounted family work. Similarly, one can question the relevance of Africa’s very high growth rates of the last decade, based on export-led natural resource exploitation, considering that most of these profits are in fact repatriated. There are also issues in the nuts and bolts of the GDP accounting exercise. In developing countries, where the government is still the largest producer in the economy, the value of public production is usually badly measured. As public goods are typically free, with no price attached, the traditional solution is to measure them by the inputs used to produce them rather than by outputs. This not only ignores the productivity of public services but also neglects a crucial dimension of poverty reduction: the quality of public service delivery. Additional issues arise from pricing. Price signals have to be interpreted with care in temporal and spatial comparisons, which sometimes do not suit the features of developing countries’ economies. In the African context, for example, prices fluctuate sharply in space and time due to seasonal patterns, supply issues such as deficiencies in transport and logistics, or regional production patterns. Prices also tend to deviate from society’s underlying valuation, for instance due to quality changes. For a number of reasons, then, prices may not provide a useful vehicle for aggregation, considering that ideally they should be accounting devices that remain unchanged while we observe quantities of goods.

So it is true that flawed or biased statistics can lead us to incorrect inferences, and that GDP is the wrong indicator for measuring well-being. But how much has this quest for GDP growth contributed to reducing poverty?

No doubt growth has dragged many poor people out of poverty through broad-based economic growth that generates more and better-paid jobs, functioning as the structural transformation that creates a middle class. But it is very likely that growth had its impact on the easy-to-reach poor positioned just below the poverty line. Empirically, the explanation comes from distributions of consumption, which typically take a shape reflecting a concentration of the population around the middle of the income distribution, with a thinning of the population density in the two tails. Economic growth scales up the consumption levels of all persons in the population (under the assumption of unchanged inequality), making poverty fall as growth progresses. However, because the majority of the population is concentrated around the middle of the consumption distribution, the fraction of the population lifted out of poverty by economic growth will progressively decline. Growth reduces poverty because it moves the large number of people who live on consumption levels near the average (while relatively few live on very high or very low consumption levels), but once poverty reduction has passed what can be called the saturation point associated with the mass of people concentrated in the middle of the consumption distribution, it will reach ever fewer people even if the pace of growth remains unchanged. That is why, after the big push, growth may have diminishing returns on poverty, meaning that the pace of poverty reduction may be lower in the future. (M. Ravallion, 2013)
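
This saturation logic is easy to see in a small simulation. The sketch below is illustrative only: it assumes a lognormal consumption distribution (an assumption of the sketch, not of the sources cited) and distribution-neutral growth, and tracks the headcount below a fixed line.

```python
import numpy as np

rng = np.random.default_rng(0)
c = rng.lognormal(mean=1.0, sigma=0.9, size=100_000)  # daily consumption, $
z = 1.25                                              # poverty line, $/day
g = 0.05                                              # annual growth rate

prev = None
for year in range(0, 30, 5):
    headcount = np.mean(c * (1 + g) ** year < z)
    drop = "" if prev is None else f" (down {prev - headcount:.1%})"
    print(f"year {year:2d}: headcount = {headcount:.1%}{drop}")
    prev = headcount
# The headcount keeps falling, but each successive five-year block of the
# same growth rate lifts fewer people out of poverty: diminishing returns.
```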

 

WB growth simulations based on several scenarios provide interesting insights. A country growing at a steady rate under the assumption of unchanging inequality may not reduce poverty at a constant rate; the only way a constant rate of poverty decline can be delivered is if growth, in fact, accelerates over time. And if it is difficult to accept that global poverty would decline at a constant rate all the way through to 2030, one should also not expect poverty decline to display a straight-line trajectory. Recent findings show that poverty reduction is scattered, not a uniform process. Indeed, in many countries there are resistant poverty pockets of individuals who are alienated from the development process, excluded, discriminated against or simply trapped in poverty, and whose poverty tends to be insensitive to growth.

Business as usual will not deliver the previous results: despite the striking linearity in the decline of the global poverty headcount since the early 1980s, the future path may well show significantly uneven progress. (WB report, 2014)

 

Undoubtedly there is a virtually mechanical relationship between growth and poverty reduction, but additional poverty reduction will only be possible with distributional policies aimed at the bottom of the income distribution, i.e. the poorest. Growth will not be a panacea, but the idea of shared prosperity still retains the emphasis on growth, now measured by national household surveys based on income or consumption instead of national accounts. The current agenda shifts attention to the growth of the average income (or consumption) of the bottom 40 percent of the people in a given country rather than the previous overall average income. It changes the tone dramatically, clearly stating inclusive growth as a priority.
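
As a concrete sketch of the shared prosperity indicator, the snippet below computes the growth of the bottom-40 mean across two survey rounds; the "surveys" are simulated and every number is illustrative.

```python
import numpy as np

def bottom40_mean(incomes: np.ndarray) -> float:
    """Mean income of the poorest 40% of the distribution."""
    cut = np.quantile(incomes, 0.40)
    return incomes[incomes <= cut].mean()

rng = np.random.default_rng(1)
round_1 = rng.lognormal(1.0, 0.8, 50_000)           # baseline survey
round_2 = round_1 * rng.normal(1.06, 0.10, 50_000)  # follow-up, ~6% mean growth

overall = round_2.mean() / round_1.mean() - 1
shared = bottom40_mean(round_2) / bottom40_mean(round_1) - 1
print(f"overall mean growth:   {overall:.1%}")
print(f"bottom-40 mean growth: {shared:.1%}")
# If bottom-40 growth lags overall growth, growth is not being shared.
```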

 

In the aftermath of the financial crisis our society and economy have changed, and our measures have not kept pace; measures such as GDP have misrepresented wealth and well-being. In the future, however thrilling poverty eradication may be, we should not rely on the omnipresent effect of growth to fulfil this vision. Recent achievements did represent a remarkable reduction in absolute poverty, but in the future, business as usual, with poverty reduction driven mostly by growth, plain macroeconomic stability or post-conflict catching-up effects, will no longer be sustainable. It will be more meaningful to address specific issues such as security and climate change in an integrated fashion with poverty analysis than to boost growth on its own, demanding knowledge-based and assertive policy-driven efforts that tackle extreme poverty with surgical precision.

 

ZERO POVERTY? THE FUTURE WILL BE THE RESULT OF PAST KNOWLEDGE.  

 


Eradicating poverty is not only bold and complex but also the ultimate challenge. To look successfully into the future we always need a starting point, so the first step is surely to understand what we have learned so far. Now more than ever, the international community has accumulated a deep and robust understanding of poverty. Tracking the path of how we arrived here, and surveying this privileged deposit of knowledge, helps us understand how poverty analysis has evolved in concept and measurement, contributing to the recent remarkable poverty reduction achievements, but it also helps us find new solutions for the future and analyse poverty in its new forms.

 

Understanding Poverty: The Concept

To study poverty we first have to understand the concept. Poverty is frequently seen as the defining characteristic of underdevelopment, making its elimination the main purpose of economic development. Yet it is far less obvious and consensual what poverty actually means. Defining poverty has always been a complex task, mainly because it is above all a multi-dimensional concept. For some, poverty is the state of being without, often associated with need, hardship and lack of resources across a wide range of circumstances. For others, poverty is a subjective and comparative term. For some it is a moral issue attached to an evaluative judgement, and for a few others, such as economists, it has to be scientifically established to be analysed. There is a diversity of opinions and approaches that can be used to define poverty, very far from delivering a unique definition.

 

Definitions of Poverty:

·          Poverty is the state of having little or no money and few, or no, material possessions.

·          To be impoverished is to lack or be denied adequate resources to participate meaningfully in society.

·          Poverty is the state of being deprived of the essentials of well-being such as adequate housing, food, sufficient income, employment, access to required social services and social status.

·          Poverty is a situation in which a person or household lacks the resources necessary to be able to consume a certain minimum basket of goods. The basket consists either of food, clothing, housing and other essentials (moderate poverty) or of food alone (extreme poverty).

·          Poverty is the condition of possessing an income insufficient to maintain a minimal standard of living.

·          Definitions of poverty are culturally specific, and thus relative to the social norms and expectations endemic to a given nation-state. However, the condition of absolute poverty (i.e. lacking the income to maintain a minimum diet) is acknowledged worldwide.

Sources: www.hsph.harvard.edu/thegeocodingproject/webpage/monograph/glossary.htm; www.undp.org/rbec/nhdr/1996/georgia/glossary.htm; www.econ100.com/eu5e/open/glossary; www.education.eku.edu/Faculty_Staff/resorc/TheOrphanTrain_KParrett.htm

Who better to answer what poverty is than poor people themselves? Following criticisms that top-down public social policies hardly reached the poor, the Participatory Poverty Assessments (PPAs) promoted by the World Bank have focused on the personal experience of poor people, reporting their daily life and showing the human face of poverty. A review of 43 PPAs showed that poor people report their impoverished status mainly in terms of material deprivation, e.g. low incomes, lack of or unstable employment, shortage of food and inadequate housing, combined with inadequate access to health services and clean water. However, they also give weight to non-material social and psychological factors such as insecurity and loneliness; social and political conflict; and lack of autonomy or exclusion from decision making.

Following this trend, poverty, initially defined as material deprivation, has increasingly been systematised in hierarchical levels. These start at a narrow and concrete level, focusing mainly on income and consumption patterns, and progressively incorporate other factors such as social spending or public expenditure on education, housing and infrastructure, and assets including land, livestock and housing or consumer durables such as radios (Baulch 1996). They end in a broader and more holistic view of poverty that encompasses psychological features and general human conditions such as self-esteem and self-respect, dignity and vulnerability, or, as Amartya Sen put it, the enlargement of people’s choices and freedoms.

 

Measuring Poverty: Literature, Axiomatics and Indicators

If finding a consensual and unique definition of poverty has always been a difficult task, it is not surprising that measuring it has been an even more delicate and complex process that involves considerable technical issues and theoretical assumptions.

In the economics literature, the first efforts to study poverty were based on social welfare functions used to measure the living standards of the population. Their main advantage was statistical aggregation: summing the welfare of individuals turned the distribution into a single number that provided a judgment about overall welfare. However, by aggregating the social welfare of the whole population into one statistic, this approach failed to isolate the poor within the overall population. In the 1970s, research, although influenced by a compelling axiomatic framework on inequality measurement, was also motivated by very practical considerations regarding poverty measurement, such as the best way to identify and aggregate the poor. This led to a flourishing literature on new indices and to the debate on the construction of poverty lines (Bourguignon, Cowell et al.). The new trend was to develop poverty measures that focus specifically on the poor, identifying them through thresholds called poverty lines. The use of poverty lines allowed, for the first time, narrowing the study down to the poor, facilitating their identification and subsequent aggregation into meaningful measures. Amartya Sen and Deaton were prominent contributors, particularly in the aggregation step, which led to what became known as Sen’s measure:

 

Figure 1: Sen Measure

S(x; z) = H [ I + (1 − I) Gp ]

 

Where:

x is the income distribution, z is the poverty line

H is the Headcount ratio or frequency of the poor

I is the income gap ratio or the average normalized shortfall among the poor

Gp is the Gini coefficient among the poor
Sen’s measure captures not only the frequency reflected in the headcount but also the depth and distribution of poverty among the poor. Grounded in inequality analysis, it included for the first time the Gini coefficient in poverty measurement. But despite this breakthrough in the focus on relative deprivation, the measure was difficult to use in empirical applications. In fact, Sen’s measure is a normalized weighted sum of shortfalls, selecting weights based on the rank order of incomes among the poor. Sen’s paper served more as a theoretical discussion, with more of a mind-opening effect than empirical application. The reason was that the measure is not decomposable across subgroups, and thus not useful for regional data, which hampered the construction of poverty profiles. When it comes to subgroups, the measure always falls back on H, which is not very informative. Sen’s axiomatics were sound, but a more broadly applicable measure was needed.
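
For readers who want to see the definitions above at work, here is a minimal implementation of Sen’s measure on a toy income vector; the data and the line are invented for illustration.

```python
import numpy as np

def gini(x: np.ndarray) -> float:
    """Gini coefficient via the mean-absolute-difference formula."""
    n = len(x)
    return np.abs(x[:, None] - x[None, :]).sum() / (2 * n * n * x.mean())

def sen_index(y: np.ndarray, z: float) -> float:
    """Sen's measure S = H * (I + (1 - I) * Gp)."""
    poor = y[y < z]
    H = len(poor) / len(y)        # headcount ratio
    I = np.mean((z - poor) / z)   # income-gap ratio among the poor
    Gp = gini(poor)               # Gini among the poor
    return H * (I + (1 - I) * Gp)

y = np.array([0.5, 0.8, 1.0, 1.1, 2.0, 3.5, 4.0, 6.0])  # toy incomes, $/day
print(f"Sen index at z = 1.25: {sen_index(y, 1.25):.3f}")
```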

 

 

 

Presented in 1984, the FGT measures have become the standard for international evaluations of poverty, with a great impact on theoretical work but above all on policy applications. Many factors contributed to the FGT measures becoming the most widely used measures of poverty. Their simple structure based on powers of normalized shortfalls facilitates communication with policymakers. One fundamental advantage, however, is that they allow evaluating poverty across subgroups of the population in a coherent way, thanks to the sound axiomatic properties of additive decomposability and subgroup consistency. The initial uptake came from this unique practical advantage, but further research has shown that FGT indices are closely linked to stochastic dominance, which not only enhances predictability and robustness of results but also provides a unifying structure linking poverty, inequality and well-being.

 

 

 

 

 

Figure 2: Foster-Greer-Thorbecke Poverty Measures

P_α(y; z) = (1/N) Σ_{i=1}^{k} ((z − y_i) / z)^α

Where y_i is the income of individual i, z is the poverty line, N is the total population, k is the number of poor people (those with y_i < z) and α is a parameter that represents the degree of aversion to inequality among the poor.

This measure has the advantage of splitting into three aggregate measures through the simple change of the exponent:

If α=0, the measure gives the Headcount index (incidence): the proportion of people below the poverty line.

If α=1, it gives the Poverty Gap index (intensity): the average shortfall of the poor’s incomes from the poverty line, averaged over the whole population.

If α=2, it gives the Severity index (inequality), which weights incomes below the poverty line convexly and so captures the inequality of incomes among the poor; incomes further from the poverty line carry more weight.
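
A minimal implementation of the family follows, with the α = 0 case handled separately (in floating-point arithmetic 0**0 evaluates to 1, which would wrongly count the non-poor); incomes and the line are illustrative.

```python
import numpy as np

def fgt(y: np.ndarray, z: float, alpha: float) -> float:
    """FGT poverty measure P_alpha for incomes y and poverty line z."""
    if alpha == 0:
        return float(np.mean(y < z))       # headcount: share below the line
    gap = np.clip((z - y) / z, 0.0, None)  # normalized shortfall, 0 for non-poor
    return float(np.mean(gap ** alpha))

y = np.array([0.6, 0.9, 1.1, 1.2, 1.8, 2.5, 4.0, 7.0])  # toy incomes, $/day
z = 1.25
for alpha, name in [(0, "headcount"), (1, "poverty gap"), (2, "severity")]:
    print(f"P_{alpha} ({name}): {fgt(y, z, alpha):.3f}")
```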

 

 

 

 

Absolute Poverty Analysis

 

The study of absolute poverty has two fundamental steps. First, we choose the welfare indicator, traditionally income or consumption per capita, with consumption usually the favourite indicator in developing countries. Consumption is typically a better measure of current living standards because it is less volatile and more easily measured than income, given that most of the poor in developing countries are engaged in the informal sector or family work. The second step is to construct the poverty line. It is a complex exercise, but simplifying, we ultimately need to answer two philosophical questions: what is the adequate minimum level of well-being below which a person is considered poor in a specific local context, and how can we find the minimum amount of money that corresponds to that level of well-being? There are two main methods currently used to construct national absolute poverty lines: the Food Energy Intake method (FEI) and the Cost of Basic Needs (CBN). The FEI method is based on the assumption that as income (or expenditure) rises, food energy intake also rises; the poverty line is the level of expenditure z associated with a given minimum adequate calorie intake. One advantage of this approach is its parsimony, as it does not require any information about the prices of goods consumed. However, it relies on an assumed relationship between household expenditure and food energy that may not hold over time, so it does not allow comparisons across time and regions. FEI findings may thus be seriously flawed and should not be used unless alternatives are unfeasible. (Ravallion and Bidani, 1994)
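
In rough terms, an FEI line can be obtained by fitting the calorie-expenditure relationship and inverting it at the caloric requirement. The sketch below does this on simulated data; the functional form and all numbers are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
# Simulated household data: monthly expenditure and daily calorie intake.
expenditure = rng.lognormal(3.5, 0.5, 2_000)
calories = 600 * np.log(expenditure) + rng.normal(0, 150, 2_000)

# OLS fit: calories = a + b * log(expenditure); polyfit returns [slope, intercept].
b, a = np.polyfit(np.log(expenditure), calories, 1)

# Invert the fit at the 2,100 kcal requirement to get the FEI line.
z_fei = np.exp((2_100 - a) / b)
print(f"FEI poverty line: {z_fei:.0f} per month (local currency)")
```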

The CBN method is the favourite. Although slightly more complex in terms of calculation, requiring extra data on prices and detailed consumption data, this in turn makes it more rigorous. The approach stipulates a bundle of goods associated with a minimum standard of welfare and reflecting local consumption patterns. This is the tricky part: how do we define an adequate bundle that represents basic needs? Usually the method is to take the average nutritional requirement for an individual to be in good health (often approximated as 2,100 calories per person per day) as the benchmark for compiling a basket of goods that reflects the local diet near the poverty line.[1] The cost of this basket is then estimated based on the prices of local foods, which gives the food poverty line. The overall poverty line additionally includes a nonfood component, corresponding mostly to the costs of housing, clothing or electricity. In the absence of an objective caloric requirement it is difficult to establish what the adequate or basic need for non-food components is, and indeed there is no consensus or best practice for estimating the non-food poverty line. One option is to arbitrarily stipulate a bundle that reflects basic non-food consumption and then price it accordingly.[2] There are no normative criteria and different methodologies are found in practice, but typically it is set according to food demand behaviour in each sub-group of the population, by looking at the non-food spending of people in a neighbourhood of the food poverty line. Thus, in the Cost of Basic Needs approach, the poverty line is estimated as the cost of a basic bundle of goods corresponding to a low-cost adequate food diet plus basic non-food consumption requirements.
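
A deliberately simplified CBN calculation might look like the following; the bundle composition, prices and non-food share are invented for illustration.

```python
# Each item: (share of the 2,100 kcal target, price per 1,000 kcal) -- assumed.
food_bundle = {
    "maize":    (0.50, 0.20),
    "beans":    (0.20, 0.45),
    "oil":      (0.15, 0.30),
    "veg/fish": (0.15, 0.80),
}
KCAL_TARGET = 2_100  # kcal per person per day

# Food line: cost of the bundle scaled to the caloric requirement.
food_line = sum(share * KCAL_TARGET / 1_000 * price
                for share, price in food_bundle.values())

# Non-food allowance: here taken as an assumed 30% budget share observed
# among households spending near the food poverty line.
nonfood_share = 0.30
poverty_line = food_line / (1 - nonfood_share)
print(f"food line: {food_line:.2f}/day, total line: {poverty_line:.2f}/day")
```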

 

The use of poverty lines allowed narrowing the study down to the poor and, above all, identifying the poor and aggregating them into meaningful measures, such as the most often quoted poverty indicator, the headcount ratio, or the FGT measures. Although these measurements are the most popular and the most used in poverty analysis, they are limited money metrics that focus only on income/consumption and do not tackle the multi-dimensional side of poverty. Nevertheless, measurements of absolute poverty have spread because of their simplicity, as it is very difficult to use subjective dimensions of poverty like self-esteem, dignity or psychological features as defining indicators. Measuring the welfare of an individual or household is not an easy task, but it becomes feasible if one restricts the concept to material or economic welfare. By doing so, however, a panoply of non-material factors that influence happiness and satisfaction is subtracted from the analysis for the sake of practicality. Still, despite its limitations, the absolute poverty approach overcame real constraints in poverty analysis by addressing the identification and aggregation of the poor, which were fundamental for producing the national poverty profiles and household poverty statistics crucial for targeting the poor and implementing poverty reduction policies in recent decades.

 

Relative Poverty Analysis: Comparing Poverty across Countries

Although absolute poverty analysis was a huge revolution that provided robust national poverty profiles for the first time, these techniques cannot be used to compare two or more countries. If absolute poverty is more likely to be a priority on the agenda of national governments, relative poverty is more palatable to international forums eager to compare and rank countries. Monitoring global poverty estimates is not possible without a common poverty line across all countries. Since the World Development Report of 1990, the international $1-a-day PPP poverty line (more precisely, $32.74 per month at 1993 PPP) has been used to measure global poverty, providing a comparable standard of welfare between countries. The World Bank has since updated the initial threshold and currently uses an international poverty line of $1.25 a day in 2005 prices. Deliberately conservative, this line corresponds to the average of the national poverty lines of the 15 poorest developing countries; it intentionally represents a very low standard of living to ensure that it remains anchored to low-income countries (Chen and Ravallion 2010). Constructed to reflect the standards of the poorest, it also accommodates differences in the cost of living through the PPP adjustment, designed to enable comparison of purchasing power across countries and over time. There are several ways to measure PPP exchange rates. One is the Geary-Khamis (GK) method used by the Penn World Tables (PWT), which uses quantity weights to compute international price indices. This technique tends to suit richer countries, as it gives too much weight to their consumption patterns when measuring poverty globally. The EKS method instead tries to correct this bias using an extended version of the bilateral Fisher index, and is widely used for comparing developing countries in particular.

In the last decade nothing has resonated more in the poverty agenda than the “One Dollar a Day” rhetoric. Some say it is too simplistic, and to others it may seem more like an advertising slogan than a real-life benchmark, so what is the magic about it? It is a step forward in monitoring global poverty because it goes beyond national poverty lines, allowing poverty comparison among countries. In assessing the extent of poverty in a given country one naturally focuses on a poverty line considered appropriate for that country, but poverty lines vary across countries with their cost of living: richer countries tend to adopt higher standards of living in defining poverty. The PPP correction is used to tackle this economic gradient. With the PPP adjustment, two people with the same purchasing power over commodities are treated the same way even if they live in different countries, and only under this assumption can one infer whether an individual is poor or not. But why not use a common exchange rate to convert the different poverty lines?

International comparisons of economic aggregates have long recognized that market exchange rates, which tend to equate purchasing power only in terms of internationally traded goods, are deceptive, given that some commodities are not traded; this includes services but also many goods, including some food staples. Furthermore, there is likely to be a systematic effect, known in the literature as the “Balassa-Samuelson effect”: low real wages in developing countries mean that labour-intensive non-traded goods tend to be relatively cheap. This is also the now widely accepted explanation for an empirical finding known as the “Penn effect”, that GDP comparisons based on market exchange rates tend to understate the real incomes of developing countries. By the same token, market exchange rates overstate the extent of poverty in the world. For all these reasons, global economic measurement, including poverty measurement, uses Purchasing Power Parity (PPP) rates rather than market exchange rates.
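
To make the contrast concrete, here is a hedged sketch, with invented rates and incomes, of how converting at the market exchange rate instead of PPP inflates the headcount.

```python
import numpy as np

line_usd_day = 1.25   # international line, 2005 PPP dollars per day
ppp_rate = 520        # local currency units per PPP dollar (assumed)
market_rate = 950     # LCU per USD at the market exchange rate (assumed)

incomes_lcu = np.array([400, 600, 700, 900, 1_500, 3_000])  # LCU per day

poor_ppp = np.mean(incomes_lcu / ppp_rate < line_usd_day)
poor_fx = np.mean(incomes_lcu / market_rate < line_usd_day)
print(f"headcount at PPP rate:    {poor_ppp:.0%}")
print(f"headcount at market rate: {poor_fx:.0%}")
# With cheap non-traded goods, the market rate understates local purchasing
# power and overstates poverty, as the text notes.
```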

 

The beyond-2015 agenda has sparked discussion on the best way to measure poverty globally, in which the “One Dollar a Day” debate unavoidably took the stage. There are several views on whether this frugal line adequately embraces current standards for defining poverty. Martin Ravallion, the architect of the $1-a-day measures, argues that the $1.25 poverty line is necessary and useful, but suggests that a higher line, set at double the $1.25 a day, can be used to gauge sensitivity. He has also proposed a new measure called “weakly relative poverty” that combines absolute and relative poverty to adjust, over time or across countries, for differences in the costs of avoiding social exclusion and relative deprivation.
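
A sketch of the weakly relative idea: the line is absolute below a floor and rises with mean consumption above it. The schedule used here, z = max($1.25, $0.60 + m/3), follows the calibration Ravallion and Chen have published, but treat the constants as illustrative.

```python
def weakly_relative_line(mean_consumption: float) -> float:
    """Poverty line in 2005 PPP $/day given a country's mean consumption m."""
    return max(1.25, 0.60 + mean_consumption / 3)

for m in (1.5, 2.0, 4.0, 10.0):
    print(f"mean {m:5.2f} $/day -> line {weakly_relative_line(m):.2f} $/day")
# For poor countries the absolute floor binds; richer countries face a higher
# line, reflecting the cost of avoiding relative deprivation there.
```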

 

“It is widely agreed that eliminating extreme poverty in the world should take priority in thinking about our development goals going forward. The ‘$1 a day’ poverty line is a simple metric for monitoring progress toward that goal. It was chosen in 1990 as a typical line for low-income countries (as explained in Dollar a day revisited). By this measure, poverty in the world as a whole is judged by a common standard anchored to the national lines found in the poorest countries. On updated data, the current value of this international line is $1.25 a day at 2005 purchasing-power parity. Today about 1.2 billion people in the world live in households with consumption per person below this frugal line. Thankfully, the world has made progress in bringing this count down; 1.9 billion people lived below $1.25 a day in 1990.”

Martin Ravallion is the Edmond D. Villani Professor of Economics at Georgetown University, Washington DC, and former director of the World Bank’s research department.

 

 

Other contributions argue for higher international poverty lines, since richer countries tend to use higher lines because food bundles are more expensive and the allowances made for non-food needs are more generous (Lant Pritchett, 2013). Others advocate internationally coordinated national poverty lines (Stephan Klasen, 2013) or measures that include both the headcount and the depth of multidimensional deprivation (Sabina Alkire, 2013). Others focus on relative poverty, which allows distinguishing across different types of poor people (Amanda Lenhardt and Andrew Shepherd, 2013). Finally, there is also the potential contribution of Big Data to poverty measurement in light of the “Data Revolution” (Emmanuel Letouzé, 2013).

Whatever the eventual revisions, setting the global poverty line at $1.25 per person per day in real terms means focusing on the standards of the world’s poorest.

 

 

WHY THE SHIFT TOWARDS THE EXTREME POOR? THE CHRONIC, THE TRANSITORY AND THE VULNERABLE.


After the staggering poverty reduction witnessed in the last decade, the agenda has now shifted to those still left behind, the poorest. The reason comes from the evidence that although growth has pushed the easy-to-reach poor near the poverty line out of poverty, it may not be sufficient to benefit those lagging far below the line, namely the extreme poor and the marginalised. As growth-based poverty reduction may have diminishing returns in the future, growth may not be able to tackle the frictional poverty that remains at very low consumption levels. Indeed, in many developing countries the poverty of certain subgroups of the population is relatively insensitive to overall rising income levels, so as poverty declines it may become relatively more difficult to reduce poverty in hard-to-reach geographic pockets or among population groups that are somehow excluded from broader economic participation. These pockets of poverty emerge for a variety of reasons, such as geographic remoteness, patterns of social stratification or discrimination, as well as market failures that generate poverty traps. They can also be difficult to reach because they are affected by conflict or climate change, or because they are simply trapped in poverty by failures in access to credit, land or key markets, or by low levels of education, skills or health that prevent them from lifting themselves out. The extreme poor, or so-called ultra-poor, suffer from severe material deprivation and several vulnerabilities, are socially excluded and/or belong to a minority, and are commonly children, the elderly, the sick and, above all, women.

As overall poverty levels fall and these pockets come to represent the majority of those who remain poor, progress in further reducing poverty will not only be slower but will demand addressing poverty in all its forms. The rhetoric of extreme poverty ultimately means that there are several types of poverty that need to be addressed differently. A further stretch in poverty conceptualization is thus necessary, one that goes beyond classic absolute poverty and leads us to considerations about its depth and severity. Extreme poverty also tends to be as severe as it is resilient over time, so it is highly correlated with chronic poverty. Addressing extreme poverty means not only focusing on the most precarious type of poverty but also recognizing that it is the most persistent over time, so tackling it ultimately requires looking at issues of transitory and chronic poverty.

According to the CPRC, chronic poverty is described as “those individuals and households who experience poverty for extended periods of time throughout their lives, usually for five years or more; a poverty that is often intergenerational in nature” (Hulme, 2003a:399; Hulme et al., 2001; Sen and Hulme, 2004).

 

 

 

 

Who are the Chronically Poor?

“Most spend their whole life in poverty, and their children — if they survive the early years of life — are likely to be as poor as themselves. They suffer multiple deprivations, not only little income but poor health, dying an early (and preventable) death. If they reach old age, their remaining years are often miserable ones marked by chronic illness. They are often trapped in environmentally-stressed regions, remote from infrastructure and markets. Many live in chronically-deprived countries (CDCs) marked by geographical disadvantage, inequality, war and political turmoil, and there is some overlap with the “bottom billion” discussed by Professor Paul Collier. However, many others live in countries experiencing economic growth at a national level, but with great regional or social inequality. For example, we estimate that perhaps one third of the world’s chronically poor people live in India alone. Within huge countries like India and China, there is enormous variation: several populous Indian states are larger than most African countries and suffer widespread persistent poverty and intractable development problems.

A key point to understand is that most chronically poor people are working. They are not ‘unproductive’. Even if they are at a stage in their life-cycle when they might be expected not to be working — whether childhood or old age — many will be forced through hardship to engage in economic activity of some kind. Processes of exploitation and exclusion keep many millions in poverty by limiting access to assets, services and positive social relationships.

Many slide into chronic poverty after a shock or series of shocks (e.g. ill health and injury, natural disasters, violence, economic collapse) that they cannot recover from. These are not very different from what drives poverty in general: but when shocks are severe and/or repeated, when people have few private or collective assets to ‘fall back’ on, and when institutional support (social protection, basic services, conflict resolution) is ineffective, such processes are likely to trap people in chronic poverty.”

Source: Policy Brief CPRC

 

 

 

 

The chronically poor today number nearly half a billion people. The gravity of this kind of poverty lies not only in the numbers but mostly in its duration, usually long stretches of a life or even an entire lifetime, and in its contagious effect on future generations. These features make chronic poverty the most complex and most resistant kind of poverty, demanding distinct or additional policy responses. Furthermore, tackling chronic poverty also means understanding poverty as a process over time. See figure 3 for the difference between the chronically poor and the transitory poor.

Figure 3: The Chronically Poor, Transitory Poor and Non-Poor


  Source: CPRC 2005

 

Previous static measurements of absolute poverty, focusing on one point in time, provide no insight into phenomena associated with extreme poverty such as transitory and chronic poverty, neglecting the movements of individuals who fall in and out of poverty and households chronically trapped in it. Poverty dynamics has increasingly become the alternative empirical tool for studying the movements of the poor, escaping and falling into poverty, providing their transitory path and a tracking system that may show the way in and out of poverty through time. (See Fig. 3) It goes beyond the uni-dimensional approach by additionally looking at poverty duration, poverty severity, poverty dynamics and household vulnerability (Hulme and Shepherd, 2003). This poses additional monitoring challenges, because a traditional household survey normally enables constructing a poverty profile only for that year, and sometimes it takes several years until the next survey. During this gap, monitoring is not possible and precious information crucial to understanding the trajectory of poverty gets lost. Panel datasets are therefore decisive econometric tools for analysing poverty dynamics, as they allow comparing data from the same households over a period of time. Although very costly and difficult to implement consistently, they are fundamental to understanding this feature of extreme poverty, as they allow studying the persistence of poverty over time and promote a better understanding of chronic and transitory poverty.
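
The value of a panel is easy to illustrate: with two waves on the same households, one can cross-tabulate poverty status and separate the chronically poor from those in transit. A minimal sketch on a simulated panel:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
wave1 = rng.lognormal(0.6, 0.7, n)           # consumption per day, wave 1
wave2 = wave1 * rng.lognormal(0.05, 0.4, n)  # idiosyncratic shocks by wave 2
z = 1.25

p1, p2 = wave1 < z, wave2 < z
chronic = np.mean(p1 & p2)    # poor in both waves
escaped = np.mean(p1 & ~p2)   # transitory: moved out of poverty
fell_in = np.mean(~p1 & p2)   # transitory: fell into poverty
print(f"chronic: {chronic:.1%}, escaped: {escaped:.1%}, fell in: {fell_in:.1%}")
# A cross-section at either wave reports only p1 or p2 and hides all of this
# churning, which is exactly what panel data are for.
```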

Furthermore, in future endeavours to reduce extreme poverty it is also fundamental to understand the specific contexts in which the poor live and the challenges they face. Globally, in recent years, conflict has intensified dramatically and in new, complex forms, and climate change has affected all continents without exception. These external shocks expose the poor, who are the most vulnerable to all hazards. Again, poverty dynamics will be important because it can be used to analyse specifically the impact of multiple fragilities on poverty. It is widely recognized that conflicts and climate change can reverse gains made in poverty reduction, throwing large numbers of vulnerable and marginalised households, previously above the poverty line, into poverty. Conflicts and climate change affect the poor and vulnerable disproportionately, especially women, children, the elderly and those recovering from previous shocks. Very often it is those living on the fringe of society without adequate coping mechanisms (savings, insurance, social safety nets or social protection) who are most vulnerable to the impacts of conflict and instability, and most likely to fall into poverty through the consequences of war or environmental shocks.

Thus poverty literature will increasingly take the concepts of vulnerability and economic resilience on board, in recognition of the impact of external shocks on the poor. Technically there are some conceptual changes: poverty becomes a stochastic phenomenon, and the current poverty level of a household may not be a good guide to its expected poverty in the future. In conventional poverty analysis, a household’s observed poverty level is an ex-post measure of its well-being; forward-looking anti-poverty interventions should go beyond cataloguing who is currently poor and who is not, and look at future poverty through household vulnerability assessments.
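
One simple way to operationalize this is to define a household’s vulnerability as its probability of being poor next period. A sketch under an assumed lognormal shock process, with an invented volatility:

```python
from math import erf, log, sqrt

def vulnerability(c_now: float, z: float = 1.25, sigma: float = 0.4) -> float:
    """P(next-period consumption < z) if ln(c_next) ~ N(ln c_now, sigma^2)."""
    x = (log(z) - log(c_now)) / sigma
    return 0.5 * (1 + erf(x / sqrt(2)))  # standard normal CDF

for c in (0.9, 1.3, 2.0, 4.0):
    print(f"consumption {c:.1f} $/day -> vulnerability {vulnerability(c):.0%}")
# A household just above the line today (1.3 $/day) can still face a sizeable
# risk of being poor tomorrow, which a static headcount would miss entirely.
```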

Absolute poverty analysis, which chooses a welfare indicator, identifies the poor through a poverty line and aggregates them, fails to take vulnerability into account: it assumes that the poor live in a “world of certainty”, neglecting the different risks they face and how vulnerable they are to crises or conflict that can throw them into deeper poverty. (Dercon 2005:20)

 

 

“There are many different definitions of vulnerability, but all are consensual about the link between vulnerability and risk. Coudouel and Hentschel (2000:34) argue that vulnerability goes beyond income vulnerability and also incorporates risks related to health, violence and social exclusion. Within the study of vulnerability, on the opposite side, lies the underlying principle of resilience. Indeed, Chambers (1989:1) stated that vulnerability refers not only to exposure to contingencies and stress, but to a state of defencelessness due to a lack of means to cope without damaging loss. Poor coping strategies, such as lack of assets, insurance or safety nets, increase vulnerability in the face of repeated disasters, natural hazards or political instability that can push someone from relative wealth to poverty, and from poverty to extreme poverty or destitution. Wood (2003:455) believes that the poorest cannot apply risk management and strategic preparation for the future to ensure their security.”

 

 

Definitions of Vulnerability

Vulnerability assessments are relevant to the study of extreme poverty because they allow understanding the impact of external shocks on the poor by profiling the scale and intensity of the risks the poor face. This approach allows accounting for damage to assets such as crops, livestock or infrastructure, as well as identifying the coping strategies available, such as assets, insurance and safety nets, that demonstrate resilience. It may also show the impact on different groups of poor people, e.g. the chronically poor. It allows policy-makers to understand the dynamics of vulnerability in fragile contexts, which is useful for setting policy recommendations in the areas of insurance provision, social protection, human rights and legal protection.

Whether our goal is total poverty eradication or the target of 3% residual poverty, tackling extreme poverty means addressing poverty in all its forms, namely the phenomena of chronic and transitory poverty, vulnerability and resilience.

 

MONITORING POVERTY: THE DATA DRAMA.


Household Survey and National Statistics

No matter how ambitious international global goals may be, the most important efforts towards poverty reduction will no doubt happen at the national level. Successful assessment of poverty progress will only be possible if recipient countries assume ownership of the process, because the responsibility to monitor poverty progress ultimately lies with individual countries. To understand whether poverty has increased or decreased, authorities rely mainly on household surveys that provide the consumption and expenditure data used to construct poverty profiles. Over the past two decades there has been a significant improvement in the availability of consumption data, which has supported the remarkable poverty reduction achievements. Starting from 22 surveys in 1990 (Ravallion, Datt, and van de Walle 1991), the world counts today more than 1,000 household surveys. But despite this privileged accumulation of knowledge, much has been said about aid coordination while little has been done in terms of data harmonization, and there are serious challenges in household surveys that need to be overcome to properly monitor poverty in the future.

One fundamental condition for inferring whether poverty has decreased is to have comparable measures of well-being at multiple points in time. Experience has shown that, despite the improvements, the heterogeneity of instruments (e.g. questionnaires) and methodologies used in surveys can seriously dampen their quality, jeopardizing the ability to compare results rigorously. Lack of consensus in survey design, for example in questionnaires, can have serious consequences for comparability. The questionnaire is the soul of a survey: what we ask is what we get, so changes in questionnaires have dramatic consequences for the capacity to compare poverty over time, and can lead to misleading conclusions and doubt about whether poverty has indeed increased or not. Research in data collection has widely established that factors such as the recall period and the number of food items listed have a large effect on estimated consumption. Findings from Tanzania show that increasing the recall period from one week to two weeks raises the measured poverty rate from 55% to 63% (Beegle 2012); sensitivity analysis thus shows that the longer the recall period of a questionnaire, the higher the poverty estimates. On the other hand, short, collapsed lists of food categories yield greater poverty estimates than long, detailed lists of food consumption. On average, a 7-day recall with a long list of food items performs comparably to more expensive and onerous methods that serve as the gold standard. (Beegle et al., 2012)

The implication is that seemingly innocuous changes in survey design have a significant impact on poverty estimates, so any change in data collection methods should be treated with caution to avoid comparability problems and mis-evaluations of poverty. This does not mean that customizing surveys to the specificities of a country is inadvisable: tailoring questionnaires with a list of foods that reflects local consumption patterns, for example, will improve the quality of food security measures. But arbitrary changes to household surveys over time will result in spurious estimates of change in absolute poverty. The reality is that in many countries there is no systematic effort to collect and distribute survey data. Most household surveys are collected on an ad hoc basis, driven by specific requests of a government or ministry and depending on the availability of donor funding. These very expensive data collection initiatives, dependent on donor funding, tend to be customized to donors’ strategic interests rather than produced systematically in a rigorous way. New instruments and variables can also be introduced simply because of a turnover of cabinet personnel willing to change questionnaires to improve informational content, overlooking the cost to data comparability. Purely administrative issues related to the quality of training, supervision, enumeration and data entry can also undermine the reliability of household survey results. Additionally, as poverty is crucial to the agenda of most developing countries, questionnaires can unsurprisingly also be changed to accommodate desired political prospects. There are also issues of confidentiality that restrict access to survey data even where it has been properly collected and compiled. There is great heterogeneity across countries as to when and to what degree data are made available to analysts outside national statistics offices, which makes even well-produced poverty estimates unaccountable for monitoring purposes.[3] In other situations, the time gap between the fielding of household surveys and the release of the data for analysis makes estimates irrelevant or inconclusive. Many reasons may explain these delays. The lengthy process may be a plausible explanation, but sometimes the release of poverty estimates is also managed according to the political agenda, namely elections, particularly if poverty is likely to increase or stagnate.

Another issue usually overlooked is the timing of the survey. Eighty percent of the world’s poor reside in rural areas, and most of the poor depend on agricultural activities that are typically seasonal. Not surprisingly, in most developing countries welfare fluctuates with seasonal patterns, leaving the poor better off during harvest periods and worse off in the lean season. It follows that if fieldwork occurs during the lean season one year and after the harvest the next, it may create the misperception that poverty has fallen considerably over time when in fact the change reflects the seasonal cycle of poverty rather than genuine improvement.

Comparability challenges arise not only over time but also within a country. Spatial differences in the cost of living can be dramatic inside one country: the cost of living can be double or more in the capital compared with rural areas, and prices usually differ along regional patterns. Failure to accommodate these differences in the cost of living will result in mis-identification of the poor, particularly as concerns the geographic profile of poverty.

For a panoply of reasons, then, there is no consistent and predictable flow of new household data. In some cases the data are unavailable or arrive late; in others they are unreliable because they are the product of design choices and implementation decisions that seriously dampen comparability across countries and over time. These are serious challenges that demand an exhaustive assessment and evaluation of each country’s household data to guarantee proper calculation of global poverty.

 

Over the last 30 years the PovcalNet database has accumulated an impressive stock of more than 1,000 surveys covering 129 countries, yet this corresponds to only 20 to 40 new surveys becoming available annually. One of the big challenges of monitoring global goals is that annual household survey data are needed, but it is not realistic to imagine this happening in the medium term. What do we do meanwhile? One way forward is to improve data collection by exploiting new software and technologies. Traditionally, household surveys were implemented with Pencil-And-Paper Interviewing (PAPI), but advances in mobile technology, namely Computer-Assisted Personal Interviewing (CAPI) software, provide a viable alternative. Comparing the two interviewing methods, CAPI significantly reduces the variance of measured consumption and increases its mean, lowering poverty estimates (Caeyers, Chalmers and De Weerdt, 2011). CAPI’s main advantage is that it shortens the lag between data collection and data analysis, but it also allows automatic checks and quality control at the point of entry.

Cell phones also represent an opportunity for data collection. They will not be able to completely substitute the lengthy LSMS interviews that take 12 months, but they may serve as an annual monitoring tool for quality control and follow-up. There are several advantages: rapid collection of high-frequency, wide-coverage data; cost-effectiveness; flexibility in question formulation; and the minimization of respondent fatigue, which reduces attrition and non-response. (Croke and others 2012) Hybrid initiatives such as the WB’s Listening to Africa, which combine a face-to-face baseline survey with follow-up phone interviews, provide precious data about the dynamics of poverty.

GPS instruments can also be very useful because they allow tracking the extreme poor located in remote areas or disconnected from markets and other services. Measures of distance and land size, normally based on self-reports, can be made rigorous. If confidentiality issues are addressed, for example by collecting data at the enumeration-area level, geocoding can point-track households and improve our understanding of access to services, seasonal migration patterns and real-time vulnerabilities, establishing innovative causal relationships of poverty based on surgically precise data.

BIG DATA. What can the Data Revolution offer?


One feature of the post-2015 agenda is that future poverty reduction will occur in the new context of what has been called Big Data. Driven by digital technology such as the internet, mobile phones, video surveillance and geo-spatial mapping, this Data Revolution has made it possible to produce and process a vast amount of data very quickly, a phenomenon never witnessed before. Global private companies such as Amazon, Google, Facebook and Twitter lead this high-speed, large-scale accumulation of data. Google processes more than 24 petabytes of data per day, a volume thousands of times the quantity of all printed material in the U.S. Library of Congress. Facebook gets more than 10 million new photos uploaded every hour and receives a click on the Like button 3 billion times per day. Amazon patented “item-to-item” collaborative filtering, using correlations among products to foresee customers’ tastes. Google is able to predict global trends of contagious diseases such as flu or Ebola through search queries. Facebook provides a digital trail of users’ preferences and consumer profiles, and Twitter tells us what is on our minds. The amount of stored information grows 4 times faster than the world economy, while the processing power of computers grows nine times faster. This fast and vast data undoubtedly represents a precious opportunity to monitor poverty in the future, with great potential for partnerships between public entities and private companies. One of the challenges ahead is thus to promote efficient public-private strategic collaborations that can realize this unique possibility of using private cellphones, GPS, sensors and even web clicks as tools for monitoring poverty globally.

Big Data is not only about scale and speed but about using the entire population rather than a random sample. Where samples were once taken for granted, it is now possible to use all the data, providing a granular view able to identify subcategories, submarkets and details with exactitude and smaller sampling errors. In the past, using samples to represent large populations was the result of data scarcity, an artifact for working around informational and technological constraints. Where hypotheses used to be defined even before data collection, now we let the data speak for themselves. This new approach favours the what over the why: rather than focusing on causal models, we look at correlations and connections that surface patterns we never suspected existed. This new mindset, adapted to the dynamics of real life, along with the unprecedented scale and velocity of data gathering, represents a precious opportunity to obtain real-time data on poverty: for example, estimating real-time food expenditure from pay-as-you-go mobile top-ups, mapping disease trends such as Ebola as they unfold, or instantly assessing the risks and vulnerabilities the poor face daily.

But this infatuation with Big Data can have shortfalls, and it needs to be handled with caution. No doubt a large sample size reduces sampling error, but it does not eliminate bias. A large but biased sample will produce "precisely wrong statistics": an extremely small sampling error, still reflecting the underlying biases. For example, mobile phone surveys offer the possibility of much faster, cheaper and more frequent data collection, but it is widely known that the sample of mobile phone users in the developing world is likely to be skewed towards wealthier, more educated, younger households, and towards more men than women. One potential pitfall of these tools is that we could miss the very people we most seek to reach, those without access to the new technologies, since the extremely poor are likely to be automatically info-excluded from the sample (Blumenstock and Eagle, 2012). So Big Data may lead us to search for answers where looking is easiest, what statisticians call the "drunkard's search", a classic form of observational bias; the simulation sketched after the anecdote below makes the point concrete.

"A drunkard is looking for his lost key under a streetlight. A policeman asks, "What did you lose?" The man answers, "A key, but I can't find it." The policeman asks, "Do you remember where you lost it?" He replies, "Yes, over there." The confused policeman asks, "Then why don't you look for it over there?" The drunkard answers, "Because there is no light!"
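The "precisely wrong" point is easy to demonstrate by simulation: a huge sample drawn only from phone owners yields a very tight estimate of the wrong quantity, while a small representative sample is noisier but unbiased. The income distribution and the phone-ownership rule below are invented purely for illustration.

```python
import random
import statistics

random.seed(1)

# Toy population: 100,000 households with log-normal income.
population = [random.lognormvariate(0, 1) for _ in range(100_000)]
true_mean = statistics.mean(population)

# Selection rule (pure assumption): only the richer half owns a phone.
cutoff = statistics.median(population)
phone_owners = [y for y in population if y > cutoff]

# A very large sample of phone owners: tiny sampling error, large bias.
big_biased = random.sample(phone_owners, 20_000)
# A small but representative sample: larger sampling error, no bias.
small_fair = random.sample(population, 500)

print(f"true mean income:  {true_mean:.3f}")
print(f"big biased sample: {statistics.mean(big_biased):.3f}")
print(f"small fair sample: {statistics.mean(small_fair):.3f}")
```

Run repeatedly, the biased estimate barely moves yet sits far above the truth, while the fair estimate wanders a little around the correct value: precision is no defence against selection.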

To address these issues there are proposals to blend the convenience of "Big Data" approaches with the statistical rigour of "Small Data" approaches, in what has been called the All Data Revolution (Lazer et al., 2014). The World Bank Group has promoted various initiatives that blend precisely "Big Data" with "Small Data" for poverty estimation. SWIFT (Survey of Well-being via Instant and Frequent Tracking) is one such initiative. Like typical "Small Data" efforts, SWIFT collects data from samples that are representative of the underlying populations of interest. Like typical "Big Data" approaches, SWIFT applies a series of formulas and algorithms, as well as the latest information technology, to cut the time and cost of data collection and poverty estimation. For example, SWIFT does not estimate poverty from consumption or income data, which are time-consuming to collect, but uses formulas to estimate poverty from poverty correlates, which can be collected easily. Furthermore, by embedding the formulas in the SWIFT data management system, the correlates are converted into poverty statistics instantly. To cut the time for data collection and processing further, SWIFT uses Computer-Assisted Personal Interviewing (CAPI) linked to data clouds and, where possible, adopts cell-phone data collection. "Big Data" science is still at an early stage and innovations in this field are rolling out at great speed, but they may yield entirely new solutions for poverty monitoring in the near future.
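A rough sketch of the proxy idea behind approaches of this kind: fit a model of (log) consumption on cheap-to-collect correlates in a representative baseline survey, then convert newly collected correlates into a poverty headcount on the spot. Everything below, the correlates, the coefficients and the poverty line, is simulated for illustration; it is not the World Bank's actual SWIFT model.

```python
import numpy as np

rng = np.random.default_rng(0)
POVERTY_LINE = 1.25  # illustrative daily consumption threshold

# --- "Small Data" step: a representative baseline survey (simulated) ---
n = 2_000
roof = rng.integers(0, 2, n)       # improved roof (0/1)
phone = rng.integers(0, 2, n)      # owns a phone (0/1)
hh_size = rng.integers(1, 10, n)   # household size
log_cons = 0.8 + 0.5 * roof + 0.4 * phone - 0.12 * hh_size + rng.normal(0, 0.4, n)

# Fit log-consumption on the correlates by ordinary least squares.
X = np.column_stack([np.ones(n), roof, phone, hh_size])
beta, *_ = np.linalg.lstsq(X, log_cons, rcond=None)

# --- "Big Data" step: a fast new round collects only the correlates ---
m = 500
X_new = np.column_stack([np.ones(m),
                         rng.integers(0, 2, m),
                         rng.integers(0, 2, m),
                         rng.integers(1, 10, m)])
pred_cons = np.exp(X_new @ beta)  # naive retransformation, fine for a sketch

headcount = float(np.mean(pred_cons < POVERTY_LINE))
print(f"estimated poverty headcount: {headcount:.1%}")
```

The expensive consumption module is asked once; every cheap follow-up round then delivers a poverty estimate the moment the correlates are keyed in.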

Conclusion:

Monitoring global poverty ultimately means producing quality data, the lifeblood of decision-making and the raw material for global accountability in poverty reduction. Without data we cannot know how many men and women still live in poverty, how many escaped it or simply died, how many children need education, how many poor people were affected by natural disasters or conflict, what the prevalence and incidence of diseases are, whether water is polluted, or whether fish stocks in the ocean are dangerously low. To know what we need to know to eradicate poverty involves a deliberate and systematic effort of finding out. It means seeking out high-quality information that can be compared over time, between and within countries, and continuing to do so year after year. It means careful planning and spending money on technical expertise, robust systems and ever-changing technologies. It means building public trust in the data and expanding people's ability to use them. If poverty is our enemy, the best way to beat it is to know all about it; only then may we fulfil the dream of zero poverty.

 

 

[1] This is a delicate task because a given calorie requirement can be met with multiple food baskets, leading to different costs that must accommodate local price levels and resulting in poverty lines that can vary widely (Pradhan and others, 2000; Haughton and Khandker, 2009).

 

[2] Alternatively, another approach is to divide the food component of the poverty line by the average share of food in total household expenditure, thereby scaling up to cover non-food needs (Orshansky, 1963), as illustrated below.
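In symbols, a minimal restatement of the Orshansky idea, where $z_F$ is the food poverty line and $s_F$ the average food share (the numbers are invented for illustration):

$$z = \frac{z_F}{s_F}, \qquad \text{e.g. } z_F = \$1.00 \text{ per day},\; s_F = 0.6 \;\Rightarrow\; z \approx \$1.67 \text{ per day}.$$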

[3] For example, in the case of China access to microdata is restricted, but aggregated data on the distribution of consumption are published in official statistical reports. It is possible to estimate poverty indirectly, albeit under additional assumptions.

_________________________________________________________

ARE SLUMPS HISTORICAL FACTS, POLICY MISTAKES, OR SELF-FULFILLING PROPHECIES?


 

Introduction:

Finance is surely the subject in economics that stirs the most mixed feelings among ordinary people. If we were to ask what mankind's greatest invention is, the financial contract would certainly not be first in line; yet, albeit widely disliked, it has been indispensable to human development for at least 500 years. Although not outspokenly loved, the truth is that when the economy blows warm and tender winds, a well-tuned financial system is what smooths away life's ups and downs: providing borrowers with credit, acting as a safety net by insuring against floods and fires, or transporting savings through time to be consumed when more suitable, making an uncertain world more predictable. But if finance is a magic balloon that fulfils all our material dreams and helps us cope with all our fears, the problem comes when the balloon becomes a bubble and explodes, BANG! In 2008 the bubble burst again, markets crashed and plans for the future were destroyed. While we suffer the pain of a large financial crisis, it is easy to forget that a crash is usually the most efficient, if not divine, way to change the status quo. When everyone is winning, it is simply bad timing to ask, for example, how Goldman Sachs came to pay a $53 million bonus commissioned on exotic funds. But when we have a crash, an anonymous crowd automatically gathers, willing to understand what went wrong and what poison contaminated the whole system. In slumps, blame is usually the best pain-coping device, so the first response is always to identify the villains. Obsessed with the guilty agent, critics normally rush to identify the type of bank, investor or asset that caused the horror, so as to ban or regulate it. But the answer may well lie in the foundations of modern finance itself.

 

History shows the way.

 

Looking to history is always a good way to find answers. Modern finance has been shaped by several slumps, each adding cumulative post-crisis regulations that help explain how we arrived at 2008.

Bubbles and subsequent crashes are not new; they have been haunting humankind for centuries. In 1637 the tulip mania led to a crash in Holland, at the time the centre of financial markets. In 1720 there was the South Sea bubble. In the 19th century, at a rate of roughly one crisis a decade, a series of financial panics (1819, 1825, 1837, 1857, 1873, 1893) hit the USA, shaping the modern financial landscape. There is no point in describing all these slumps extensively, but some of them are fundamental to understanding today's reality because they added crucial components to our financial framework. Contemporary financial markets are centred in the USA but deeply entangled with Britain's, their European counterpart. So it is no coincidence that the crises preceding 2008 always involve the same great titans: the New York Stock Exchange, the Fed, the Treasury, and Britain's and America's giant banks. Indeed, it always starts between the United States and the United Kingdom.


It was not always like that. It is difficult to imagine, but a few years after the USA's Declaration of Independence, in 1790, the young country was a boring blank space in financial terms, with five banks and a few insurers. But Alexander Hamilton, the first Treasury secretary, had the ambition to create a state-of-the-art financial system that would overcome Britain's and Holland's. To that end he founded the First Bank of the United States (BUS), whose shares were sold to the public and which would deal in federal debt, allowing the government to borrow cheaply through new bonds traded on the open market. The bank was such an exciting investment opportunity that the initial share auction was oversubscribed within an hour. The debt-and-bank pillars of any financial system were in place and everything seemed perfect. But two things failed: the bank itself was so massive that it dwarfed other lenders, and it ballooned so fast that it became difficult to back its paper money with hard currency. Besides, crises always show their yin-yang pattern: all good things have a counterpart of bad things, and here it was no different. The BUS offered a deal in which investors, to get hold of, say, a $400 BUS share, had to buy a $25 share certificate and pay three quarters of the remainder not in cash but in federal bonds (the arithmetic is worked out below). This plan was fantastic because it stoked demand for government debt as well as providing healthy safe assets. But one thing that is difficult to ban or regulate is the old friend who goes bad. William Duer, Eton-educated and known as the only Englishman to be responsible for an American financial crash, knowing that investors needed federal bonds to pay for BUS shares, constructed a scheme to corner that market, funded by wealthy friends who lent him money. As credit tightened and rumours spread that Duer's cabal was taking on new debts to repay old ones, the markets went into sharp descent. But what makes this crash important for understanding the 2008 crisis? On the face of it, it is a classic: a bank grows so fast that it cannot sustain itself, and its apparently virtuous procedures rest on a formula that rewards opportunistic behaviour. But the 1792 crash matters not only because it shows that classic errors have been repeated continuously since the 18th century, but because it created a precedent. After the 1720 crisis in France, which left the economy in marasmus for years, Hamilton knew what was at stake and carried out America's first bank bailout: he used public money to buy federal bonds and push up their prices, protecting the banks and speculators that had bought them at inflated prices; he channelled cash to troubled lenders; and he allowed banks with collateral to borrow as much as they wanted (at a penalty rate of 7%, the usury ceiling). Hamilton's plan worked brilliantly and restored confidence.
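The arithmetic of that share deal, restated from the figures above for a single $400 share (a worked example, not an additional historical claim):

$$\underbrace{\$25}_{\text{share certificate}} \;+\; \underbrace{\tfrac{3}{4}\times\$375 = \$281.25}_{\text{paid in federal bonds}} \;+\; \underbrace{\tfrac{1}{4}\times\$375 = \$93.75}_{\text{balance}} \;=\; \$400,$$

so nearly three quarters of the purchase price of a BUS share had to be settled in government paper, which is exactly why demand for federal bonds, and Duer's incentive to corner them, exploded.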

 

We also see that, since the 18th century, the post-crisis blame-and-guilt process has led to an after-slump mechanism of regulation: rules were implemented in New York in April 1792 to protect the naïve and regulate public trading. But one lesson to take on board from 1792 is that, as in a cat-and-mouse game, regulation breeds efforts to get around the law, and this was no exception. Indeed, these regulations unintentionally stimulated the other side of the beast: in response to the aggressive new rules, a group of 24 traders met in Wall Street, under a buttonwood tree as the story goes, to set up their own private trading club. That group was to found the New York Stock Exchange, the greatest risk-taking hub in the world. Ironically, regulation created the monster that would concentrate more risk than ever and would be the epicentre of future major financial crises.

The following crisis, in 1825, taught new lessons and added new components to today's modern finance. Every crisis is preceded by a new hope; in this case it came from the New World, where the newly independent countries of Latin America were a source of potential investments in foreign government bonds and shares of mining firms. London in the 1820s was the leading financial hub, having overtaken Amsterdam, and was flourishing with these exciting new opportunities; but, as in every crisis, enthusiasm became euphoria and the market started trading dodgy assets. In this case the issue was distance. Relying on ship transport, information from the newborn countries was always delayed, scarce and fragmented, which allowed for opportunistic behaviour. The most famous case was MacGregor's scheme, which sold bonds of Poyais, a country that never existed. When investors realised the market was unreliable, the crash came again. This crisis showed the importance of asymmetric information in determining financial crises, but it also gave us something new: the famous megabanks. Analysing the impact of the crisis, the English government realised that the banks that performed better and showed more resilience were the Scottish ones, organised as joint-stock lenders. With this in mind, Westminster waived the rules on ownership restrictions, giving room to what is today called Too Big To Fail; indeed, today five banks hold 75% of the UK market. The idea of pooling banks to make them more robust and resilient was brilliant, but what the 2008 crisis showed is that megabanks also represent a systemic risk to the global economy. Too Big To Fail worked like the Titanic prophecy: along with the greatest ship ever built, they did fail, and in 2009 RBS, the biggest, was the first to fall.

The 1857 slump is worth outlining because it relates to today's context: it was the first global financial crisis. It showed that, as today, the integration of economies is what turns a crash into a global problem. In the 1850s Britain was prospering and exporting to the rest of the world. The economy was integrated, and the big change was that new economic links, based mainly on trade, had formed. One feature of this integration pattern has lasted to this day: America consumed more than it produced, holding in 1857 a current-account deficit of $25 million with Britain and its colonies alone. America, as today, bought more goods than it sold, and Britain bought assets to provide the funds for America's imports, much as China does today. Having the largest economy integrated with the rest of the world in these terms magnifies and amplifies the contagion mechanisms of any crash occurring in America. In 1857 the trigger was a financial innovation, the discount house, which began mushrooming in England as a middleman matching investors with firms but soon started competing with the joint-stock banks and behaving like a bank itself. Because the central bank was an ever-active lender from which they could always draw, the lending continued, making discount houses a vital source of credit for firms. When investors grew suspicious of their balance sheets, and they were right, since in some cases £10,000 of capital was supporting £900,000 of risky loans (see the note below), the crash happened again. The high level of integration did the rest, turning this into the first global financial crisis. The ensuing regulation was based on the recognition that financial safety nets can create excessive risk-taking, so the Bank of England ended emergency lending and reinforced self-insurance of lending activities, obliging institutions to keep their own cash reserves. By refusing emergency cash, the Bank of England secured 50 years of financial calm, promoting prudence in a banking sector stripped of moral hazard.
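To see how fragile the worst discount houses were, the figures quoted above imply (a worked restatement of the numbers in the text):

$$\text{leverage} = \frac{\pounds 900{,}000}{\pounds 10{,}000} = 90, \qquad \text{so a loss of only } \tfrac{1}{90} \approx 1.1\% \text{ of the loan book wipes out the capital entirely.}$$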

In the 20th century Britain and America had very different approaches to banking. The Bank of England was all-powerful, proud of its financial design and a tough overseer of the system. America was the opposite: with a hands-off policy, it was thought that banks could take care of themselves. This spurred the expansion of banking, but also of trust companies, which carried on riskier activities with spicier assets and were lightly regulated compared with traditional banks. The story goes the same way: too much risk accumulated. In this case the bad guys were greedy scammers, Augustus Heinze and Charles Morse, who tried to corner the market in United Copper and precipitated the panic. No big news so far, but the 1907 crisis is important because it is what led to the recognition that the USA needed a proper lender of last resort, and hence to the creation of the Federal Reserve in 1913. Emergency money was needed after all, and Hamilton was back in business!


In 1929 came the financial Big Bang. Markets were booming more than the economy itself, consumer prices were falling, and share prices were piling up gains that dividends did not justify. This posed a great policy dilemma: cool the markets by raising the interest rate, or lower it to help the economy recover? The Federal Reserve raised the rate from 3.5% to 5%, for some a catastrophic error; in truth the increase was too small to stop the market. The London stock market crashed when Clarence Hatry was arrested for fraud. Some 2,000 banks closed, never to open again; the money supply fell by 30% and unemployment rose to 25%. A major reform was implemented to de-risk the system. It was based on a massive injection of public capital, while the Glass-Steagall regulations tried to neutralise future risk with rules separating stock market operations from more mundane lending. A brilliant idea was the Federal Deposit Insurance Corporation (FDIC), a kind of risk-free certificate that banks could advertise as "FDIC insured" by way of guarantee. While in Hamilton's plan the financial system was there to support a stable government, with banks and markets underpinned by public debt, after the Great Depression it was the other way around: the state's job was to ensure that the financial system is stable.

In the 2008-09 crisis a massive bailout was mounted once again, with staggering sums committed to RBS alone. It is true that bailouts are not a mistake, since letting banks of this size fail would be much worse; but the point is that knowing the state will always bail them out has perverse effects, because it exposes the system to moral hazard, giving incentives for over-risky activities.

History has shown that there are no wrong or right solutions. Coping with crises has shaped modern finance with different solutions that can only be successful in context. If megabanks were held to be more resilient in the 19th century, in the 21st century they were responsible for the global meltdown of 2008. Continuous lending from a central bank is good for finance, but it may scale up excessive risk. Emergency bailout money is reassuring and promotes recovery, but unintentionally encourages continued risk-taking. And as well-paid lawyers know very well, regulations always create loopholes ready to be exploited opportunistically. Faced with all these options we can only try, so in a trial-and-error mechanism the after-slump solutions may oscillate between emergency money at one time and self-insurance of lending through cash reserves at another. Each slump has its own explanation and unique dynamic, and that is what makes crises difficult to prevent, but history shows we can identify some common components of a financial crisis. 1) Innovation: there is always a new, spicy financial instrument that becomes an attractive investment opportunity, whether a new market such as Latin America in 1825, the discount houses, the trust companies or exotic bonds. 2) Every crisis is preceded by a bubble: these massive investments become a speculative bubble sustained neither by market fundamentals nor by cash reserves or insurance mechanisms. 3) The crash happens, in most cases, when the fraudulent agents, the bad guys, are caught. One important conclusion is that regulation is fundamental to de-risk the system or restore confidence, but no one can regulate the entire world, and perversely regulations always create exceptional opportunities to corner the market. Unfortunately, no matter how precise we are, new rules will never be able to predict the next bad guy. Another important insight is the fundamental dilemma that policymakers face: to bail out or not? No doubt letting megabanks fail would have catastrophic effects, but knowing there is always a backstop or a lender of last resort creates an incentive to make over-risky investments, exposing the system to moral hazard. History has shown that, as in a vicious circle, being safer may well mean being edgier instead!

 

What do Economic Theories have to say about slumps? Confidence matters.


If slumps normally start with the burst of a bubble on the front stage of Wall Street, nothing is more spectacular than the crash of the reputation of economics itself. 2008 was no different; it destroyed the credibility of the economic thinking that had guided policymakers for a generation. On the eve of the crisis, in September 2007, we all lived in a self-congratulatory mood over the virtues of what was called "The Great Moderation". We enjoyed the greater stability after the 1980s, whereas in the period before, from 1951 to 1979, inflation, interest rates and unemployment had been high and volatile. There was a consensus that central bankers were doing a much better job using New Keynesian monetary theory based on inflation targeting. If we were that wrong, then what does economic theory have to say about slumps? The 20th century traces cyclical swings between classical and Keynesian economics, which disagree fundamentally on how the economy works. Classical economists think the economy is like a rocking horse: it fluctuates as a result of external shocks but then returns to its original point; in other words, the economy self-stabilises. According to Keynes's General Theory, by contrast, the economy is like a boat in the ocean with a broken rudder: it leaves one equilibrium to arrive at another; it has no self-correcting system, and fiscal and monetary policy are required to achieve full employment again.


According to the classical school, fundamentals rule the market. Before the Great Depression the classical explanation of slumps, Arthur Pigou's "Industrial Fluctuations", attributed them to weather, productivity, new inventions, monetary fluctuations, changes in tastes or waves of pessimism. Today the classical approach, personified in Eugene Fama of the Chicago School, still supports what is known as the Efficient Market Hypothesis. Building on this theory, in the 1990s investment banks began to develop new kinds of financial instruments called derivatives. The idea was to split the payments from business ventures into pieces, allowing different kinds of risk to be traded. New exotic financial instruments were developed, and the banks that created them made huge commissions every time they changed hands (in 2006 there was the famous $53.4 million bonus paid to the Goldman Sachs CEO). The efficient-markets view holds that the creation of new derivatives markets was responsible for growth, encouraging firms to invest, creating jobs and sharing risk efficiently. But because regulations were relaxed in the USA in the 1990s (waiving the Glass-Steagall Act) to allow the creation of these markets and exotic derivatives, Keynesians today think instead that this deregulation was responsible for the 2007-08 crisis. As we have seen in the historical review, however, this may be a hasty conclusion, since other bubbles were created precisely because of regulation. No doubt regulations such as the 1933 Glass-Steagall Act after the Great Depression promoted a period of financial calm by separating commercial banks from investment banks, but we have also seen that Wall Street itself was formed precisely as the result of aggressive regulation.

The issue of whether or not to regulate is the final output of a deep and long background of economic thinking. Since classical theory believes the economy stabilises automatically, unemployment exists only in the short run, while economic frictions prevent prices from adjusting; in the long run the economy would adjust to full employment. Keynes instead thought that in the long run we are all dead and that unemployment could be persistent. Before the Great Depression no one contested that the economy would return to full employment, but the long periods of unemployment during the Great Depression were not compatible with classical views. Keynes stepped in, arguing that employment would not adjust automatically: monetary policy and fiscal stimulus were necessary as a way out of a recession. This represented a breakthrough in economic thinking that persists to this day, as western democracies began to recognise a vastly increased role for government in the management of economic affairs. What it meant is that, as Paul Krugman would advocate today, the way out of a slump is government: it should increase spending, and monetary policy should decrease the interest rate to make people's lives easier. The only caveat is that, as Keynes would say, in the long run we are all dead, but future generations are not; financing fiscal deficits by borrowing puts the onus of slumps on future generations, so fiscal profligacy may be a dangerous option, as we saw in 2008. Indeed, if Keynesian fiscal stimulus worked well in restoring full employment in World War II, post-war experiences were less successful. According to the Phillips curve, "managing aggregate demand", as King and Summers would put it, should have restored full employment without causing inflation during the administrations of Kennedy, Johnson and Nixon; but in the 1970s stagflation made Keynesianism lose its way. While classical thought could not explain why free markets fail to re-stabilise and readjust to full employment, Keynesianism could not explain stagflation. In the 1950s Milton Friedman, observing high inflation alongside high unemployment, incompatible with the Keynesian Phillips curve, advocated that consumption depends on wealth rather than current income. Friedman's permanent income theory ran against the Keynesian claim that government spending would increase income and induce consumption, allowing policy to pull the economy out of recession. Instead, Friedman believed that transitory income fluctuations have a minor effect on consumption, and that the unemployment rate depends on real factors (the productivity of workers, the preferences of households, and so on) rather than on aggregate demand. As a solution to the puzzle of persistent unemployment he proposed the natural rate of unemployment, taken to be constant and independent of monetary and fiscal policy in the long run. In persuasive rhetoric these ideas were further formalised by Lucas in rational expectations theory, which holds that the future must be consistent, on average, with what happens now; this became Real Business Cycle theory, a revised version of general equilibrium that, with Lucas, goes beyond one point in time (Walras) to look over an entire infinite future.
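Friedman's critique is usually summarised with the expectations-augmented Phillips curve, in standard textbook notation (added here for clarity, not from the original essay):

$$\pi_t = \pi_t^{e} - \alpha\,(u_t - u^{*}), \qquad \alpha > 0,$$

so once expectations catch up ($\pi_t^{e} = \pi_t$), unemployment returns to the natural rate $u^{*}$: demand management buys lower unemployment only temporarily, and at the price of permanently higher inflation.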


Until 2008 we were living through the revival of the classical rational-expectations revolution; but if academia lost confidence in Keynes because of stagflation, politicians still find fiscal deficits an easy and popular way out of a depression. Economic history may have exposed the flaws of Keynesian theories in academia, but governments still find fiscal deficits a sexy, easy-to-grab policy to induce recovery. Obama launched an $800 billion fiscal stimulus, and in Mervyn King's UK government debt rose to around 80% of GDP by 2014. In Europe, without autonomous national monetary policy, fiscal profligacy is the only salvation policy available, so governments feel even more tempted to use fiscal deficits to fulfil their national political agendas, despite Merkel's efforts to bring public borrowing down to zero. That is what triggered the European debt crisis after the 2008 crash, and no doubt this inconsistency of fiscal policies seriously dampens the future of the Euro zone.

The history of economic thought ultimately shows the response of ideas to important transformational events, and the financial crisis of 2008 represents not only a turning point but a disruptive event as fundamental to rational expectations as the Great Depression was to the classical theories of the roaring twenties. But rushing to tear up theories may be unnecessary and premature, betraying a destructive and dogmatic approach. To understand the 2008 crisis the solution may well lie in combining the classical and Keynesian approaches. From classical economics we can take how individuals behave, how they interact, and how their collective choices determine aggregate outcomes and market efficiency. From Keynes we may take that markets do not always work well and that capitalism sometimes needs guidance; surely we can design ways of correcting the excesses of free-market economies that preserve their best features without adopting the inefficiencies of centrally planned economies. But one crucial subject that modern classical economists neglect, and that is difficult to reconcile with the swings we observe in the value of stocks, is confidence. While for Keynes confidence matters, in what he called the "animal spirits", in the classical school there is no room for market psychology. Lucas's rational expectations insists that confidence cannot independently influence the economy: fundamentals are what rule markets, and guesses about the future consider only factors such as productivity, technology or tastes. We now know (after 2008) that even if classical theory rules the fads and fashions of sentiment out of the equation, consumer and producer confidence matter; as Alan Greenspan stated in 2009, the "innate human propensity to swing between euphoria and fear has a life of its own that goes beyond economic events." Market participants' beliefs about the value of the stock market are crucial, independently of economic activity. A loss of confidence can become a self-fulfilling prophecy and lead to a downward spiral in economic activity. But confidence is more complex than it seems. If it is true that Keynes can be related to the idea of "animal spirits", there is no sound economic reason why confidence should increase just because government spends more borrowed money. What the 2008 crisis showed was that monetary and fiscal policy may have Keynesian effects, but only if households and firms regain confidence in the economy, buying houses and putting money into the stock market again. The truth is that with no confidence the economy may grow but the private sector will not create jobs, which is what is still happening in Europe. Expectations are crucial, and if households remain pessimistic, the loss of confidence will be self-fulfilling.

 

Was the Crisis a Policy Mistake? Does the Fed Work?

 

When the Great Moderation came to a halt in 2008, central bank policy was ruled by New Keynesian approaches influenced by David Hume's quantity theory of money. Since one of the flaws of classical theory is that it separates quantities from money, after the revival of Lucas's real business cycle model in the 1970s the monetary branch also had to be revised, ultimately transforming the quantity theory of money into what came to be called New Keynesian theory. If Lucas showed that monetary and fiscal policy cannot improve welfare even in the short run, the New Keynesians added a twist based on economic frictions to justify how government monetary and fiscal policy can improve people's lives. But let us not be fooled by the confusing name: they are quantity theorists in Keynesian clothes, and indeed believe, following Hume's adjustment process, that the stock of money first affects quantities and only later affects prices. This monetary transmission mechanism, widely understood by central bankers today, led the Fed to respond to the recession in 2009 by pumping money into the economy, although Alan Greenspan (the former Fed chairman) worried that the $800 billion monetary expansion could lead to inflation. Additionally, New Keynesians are also very far from Keynes where employment is concerned: in line with rational expectations, they believe that unemployment can deviate from its natural rate but cannot differ from it permanently. So one may conclude that, while New Keynesians focus on inflation targeting, once again there is no sophisticated explanation of unemployment, which is reduced to a constant around which actual unemployment fluctuates.

As there isn't a son without a father, there is no monetary policy without a central bank. Central bank activity is crucial in a financial crisis because central banks regulate the amount of money flowing around a modern economy, so no wonder that in the spring of 2009 the policies of the Fed, the European Central Bank and "The Old Lady of Threadneedle Street" (the oldest of them, the Bank of England) were in the spotlight. Managing the money supply during a crisis means solving one underlying dilemma: the central bank can fight the recession by lowering the interest rate, but in doing so it may feed inflationary expectations, so that when the economy recovers from depression it does so at the cost of permanently higher inflation. In 2009 some were concerned that this policy would recreate the inflationary build-up of the 1950s and 1960s. Another fundamental issue is the independence of monetary policy from the government's agenda.


For example, the Fed's era of modern monetary policy began only in 1951 with the Accord, in which the Fed gained significant autonomy in setting the interest rate. During the Great Depression and up to World War II, short-term interest rates on Treasury bonds were kept very low by agreement to hold down the cost of borrowing to finance the war, but in 1951 the Fed was freed from this obligation and could conduct monetary policy independently of political influence. The Treasury would have preferred to keep interest rates low to reduce the cost of running additional deficits, particularly when debt already stood at 120% of GDP in 1946. In 1951 the Treasury wanted the Fed to finance government expenditure by printing money, but again this would lead to inflation. Ultimately the critics were correct, as inflation remained high until 1979, when Paul Volcker took control. After that the Fed was much more aggressive in raising the interest rate in response to inflation than before, starting a long period of calm called the Great Moderation. This peaceful period, from 1980 to 2006, was the result not only of improvements in Fed policy, which did learn how to move the interest rate more effectively to prevent inflation, but, some say, also of luck, as there were far fewer shocks to the real economy. Lucky or not, the Fed was also efficient in restoring confidence in the crash of 1987.

Traditional policies to combat recessions rely on fiscal and monetary policy. Fiscal policy is more direct but acts more slowly: it works by increasing the demand for goods, as the government borrows to build roads and bridges or cuts taxes, which ultimately puts more money in households' hands. Fiscal spending, along with monetary policy, increases aggregate demand, causing firms to employ more workers. Traditional monetary policy is often the tool of choice because it is faster, working by stimulating private spending: the central bank lowers the yield on safe assets, with the Fed purchasing Treasury bonds on the open market (in the UK and Europe there are operational differences, but the effect is the same). This injects liquidity into the financial system. As commercial banks try to lend the additional money, interest rates fall and risky businesses become more profitable. When firms and households begin to purchase more goods, employment increases and the economy eventually comes out of recession. The textbook relation below compresses this chain. It has worked well in other recessions, but it was not available to the Fed, the European Central Bank or the Bank of England in 2008.
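The transmission chain can be compressed into the standard textbook IS relation (conventional notation, added here for clarity rather than taken from the essay):

$$Y = C(Y - T) + I(r) + G, \qquad I'(r) < 0,$$

so raising government spending $G$ (or cutting taxes $T$) works on demand directly, while lowering the interest rate $r$ stimulates investment $I$; either way output $Y$, and with it employment, rises.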

 

Are we trapped?


Indeed, what is different about 2008 (with the exception of 1934) is that even with huge packages of fiscal stimulus and emergency money the economy is struggling to recover, and the particular reason is that when the crash happened the interest rate was already close to zero. Traditional monetary policy can therefore go no further, since buying Treasury bills with cash merely exchanges one zero interest rate for another. This is called the liquidity trap, and it is a serious challenge for post-crisis recovery. In the US economy a crisis destroys more jobs, but the economy also reacts more quickly; Europe, which is always slower to recover, may face the daunting effects of the liquidity trap as interest rates converge to zero and, as a rule, cannot become negative (the constraint is stated in one line below). It follows that central banks, their room for action thus limited, mainly pursue a policy of quantitative easing, buying bonds with longer maturities than the classic three-month Treasury bills. This is an alternative monetary policy of expanding the money supply by purchasing a range of other assets, including corporate debt and long-term government bonds. But the bottom line is that the usual monetary policy, lowering interest rates to inject money into the economy and promote recovery, cannot be used. Quantitative easing can be used meanwhile, in the hope that, as the government drives down the return on long-term bonds, households will put their money back into the stock market. We all depend on that hope paying off.
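The trap can be stated in one line (standard notation, added for clarity): with the nominal rate $i$ bounded below by zero, the real rate $r$ cannot fall below minus expected inflation $\pi^{e}$:

$$i \geq 0 \quad\Rightarrow\quad r = i - \pi^{e} \geq -\pi^{e},$$

so when expected inflation is low, conventional policy cannot push the real rate low enough to revive spending, and swapping cash for zero-yield bills changes nothing; hence the resort to buying long-dated assets instead.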

In this complex context, one solution for the future could be to coordinate the Fed, the Bank of England and the ECB to target domestic stock market indices in addition to the traditional domestic interest rate, allowing direct central bank intervention in the stock market.

 

Capitalism is the single most successful engine of growth in human history, responsible for lifting more people out of starvation and misery than any known alternative. It is not a homogeneous block: it comes in different forms and should exist under a well-defined legal code. The challenge for the future is to find this perfect code. The issue is not whether to regulate capitalism; it is how to tame it.
