
Calculating the Unemployment Rate

Recently several news pieces have claimed that if the unemployment rate were calculated as it was during the Great Depression, the current rate would be close to double what it is, and creeping toward the formidable rates of the 1930s.

[Chart: Unemployment rate, 1929-2009]

The first problem with this statement is that there was no official unemployment rate until the 1940s; the rates we cite for that era were reconstructed after the fact. As unemployment ballooned during the Great Depression, a number of ad hoc attempts were made to calculate the rate, and the widely divergent results led private researchers and some state and local governments to experiment with various sampling methods. In 1940 the WPA began publishing statistics on those working (the employed), those looking for work (the unemployed), and those doing something else (hiding under the bed perhaps?) and so not in the equation.*

The second problem with the statement is that it's just not true. Although the BLS has refined its surveys and made its questions more specific, the unemployment formulas have not changed conceptually, and the BLS's own analysis of test data shows that the impact of several sets of changes on the overall numbers is minor.

In 1962 high unemployment and two recessions in three years led to the formation of the President's Committee to Appraise Employment and Unemployment Statistics, led by Robert Gordon and tasked with reassessing the concepts used in gathering labor-market data. The Committee gave high marks to the BLS's integrity and suggested some improvements. For several years the BLS tested new survey techniques before instituting a number of changes in 1967.

Among the most important of these was the requirement that workers must have actively sought employment in the last four weeks in order to be classified as unemployed. A contact at the BLS agrees that some discouraged workers were probably counted as unemployed before this change was made, but the effect of this migration is small. As it generally does, the BLS ran the new definitions alongside the old, in this case for 2.5 years, before adopting the new. Although the test series is not entirely comparable with the new series, the overall unemployment rate in the new series dropped by just one-tenth of a percentage point; within that, the rate for adult men was down three-tenths, up four-tenths for adult women, and off a full point for teenagers. (Maybe they were just being teenagers: the requirement that they give a concrete example of their job search may well have reminded them of their parents and got the blank stare.) The Committee also recognized the need for more detailed data on persons outside the labor force, who are highly sensitive to changes in labor demand, and in 1967 the BLS began collecting information on those who wanted a job although they were not looking for work.

In 1976, in order to provide more information on the hidden unemployed (who would presumably be part of the labor force in a full-employment scenario), the BLS first published the original U1 to U7 tables, which break out marginally attached workers.  These tables were revised in the 1994 redesign (becoming U1 to U6) and the controversial requirement that discouraged workers must have sought work in the prior year was added. This change halved the number of discouraged workers, resulting in a complete break in the time series.

But those workers can still be found in the U-6 series, which is the broadest measure of labor underutilization, and it ain't a pretty sight. Up 4.8 percentage points over the year, U-6 currently covers an ugly 13.5% of the labor force. Update: In February U-6 unemployment rose to 14.8%. There's no need to fool around with the official unemployment rate (U3) to get an accurate picture of how quickly our labor market has deteriorated: the U1 to U6 tables tell the story.
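The distinction between the official rate and the broadest measure reduces to simple ratio arithmetic. Here is a minimal sketch of the two calculations as described above; the function names and the illustrative counts (in millions, roughly the scale of early 2009) are our own, not actual CPS figures.

```python
def u3(unemployed, employed):
    """Official rate: the unemployed as a share of the civilian labor force."""
    labor_force = employed + unemployed
    return 100 * unemployed / labor_force

def u6(unemployed, employed, marginally_attached, part_time_economic):
    """Broadest rate: adds marginally attached workers (including the
    discouraged) and those working part-time for economic reasons.
    The marginally attached are added to both numerator and denominator."""
    labor_force = employed + unemployed
    return 100 * (unemployed + marginally_attached + part_time_economic) / (
        labor_force + marginally_attached
    )

# Illustrative numbers (millions):
print(round(u3(12.5, 141.7), 1))            # official rate, about 8.1
print(round(u6(12.5, 141.7, 2.1, 8.6), 1))  # broadest rate, about 14.8
```

With counts of that rough scale, the two measures land near the 8.1% and 14.8% figures cited for February, which is the point: the gap between U-3 and U-6 is definitional, not a recalculation of the same concept.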

Update 03/14/2009
In response to a reader's comment:

There are three unemployment series available for the early 1930s: Stanley Lebergott’s, Michael Darby’s reworking of the Lebergott series, and the G.H. Moore series, available through NBER. (Michael Darby is the economist who pointed out that the Lebergott series included those on work-relief as unemployed. His series moves them to employed.) We used Moore’s series, which pretty much splits the difference between the other two. When you combine different series, usually necessary for long-term views, the series breaks themselves produce spikes or dips. Splicing the Darby series to the official BLS data makes it look like the unemployment rate jumped in 1940, which we did not want, and Lebergott’s inclusion of those on work relief as unemployed was in line with 1940 census practice.

Here are the yearly averages for the three series:

        Moore   Lebergott   Darby
1929              3.2%       3.2%
1930              8.7%       8.7%
1931             15.9%      15.3%
1932             23.6%      22.9%
1933    23.4%    24.9%      20.6%
1934    19.1%    21.7%      16.0%
1935    17.6%    20.1%      14.2%
1936    14.2%    16.9%       9.9%
1937    12.2%    14.3%       9.1%
1938    18.4%    19.0%      12.5%
1939    16.3%    17.2%      11.3%
1940             14.6%       9.5%

Basically, if you want to evaluate the effect of government work programs, compare the Lebergott series to the Darby series. If you want a more readable trend line (while avoiding accusations of playing politics) use the Moore series.

For more information and some notes on definitions, please see “Employment and Unemployment in the 1930s,” by Vanderbilt economist Robert A. Margo, available here: http://fraser.stlouisfed.org/docs/MeltzerPDFs/maremp93.pdf

Philippa Dunne and Doug Henwood

*There is currently a bit of a fracas over the reconstructed unemployment rates for the period before the official series. Stylish Stanley Lebergott, the BLS economist who put together the most widely used series, categorized workers on emergency relief as unemployed. In the 1980s, data reclassifying these workers as employed were released, a definition in line with current practice and more widely accepted. In the past month or so, those wishing to show that the WPA programs did little to alleviate unemployment have been relying on the unrevised Lebergott series, and those taking the opposite view on the revised data. Of course, if you compare the two series, it appears that between 1934 and 1941 WPA projects took 2 to 3.5 million workers off the unemployment rolls, and shaved the rate by 4 to 7 percentage points.


If We Make it to December

Since we have made it to December, that probably should be February or March, but it's what Merle Haggard wrote. (A shout out to the handsome Hag as he recuperates in Bakersfield.) It's hard to keep up with the onslaught of staggeringly bad economic news coming in these days. Not only that, we're facing the fall-out of Secretary Paulson's bewildering failure to manage the current crisis, or at least to maintain the all-important appearance that he knows what he is doing, and recently revised data indicate that the job market was in even worse shape than previously thought going into this mess. Nevertheless, like the tentatively hopeful laid-off narrator of Haggard's song, we do see some real opportunities for setting our "real" economy on a more fruitful course in the current turmoil.

During the boom there was a good bit of talk about how, as a nation, we can do without a strong manufacturing sector. The cliché became "Michigan is irrelevant," with abuse heaped on the Great Lakes manufacturing states for being beyond repair. Actually, those same states have made real progress in R&D employment, but it has not been enough to offset job losses in the automotive sector. As we re-evaluate our thinking, it's clear that we need our manufacturing base: time to go crawling back to the Midwest. Current research suggests that design teams are more productive when they work closely with those building the products, which undermines the idea that the best course is to design things here to be made elsewhere, another reason to invest in domestic manufacturing. And the idea that accompanied waving goodbye to "dirty" manufacturing work, that there's some sort of financial dark matter working in our favor that could prevent a financial big bang, has really got to go. It's a shame that the terms our scientists come up with to describe true mysteries get abused like that, so let's just say that although the current less awesome/more awful "big bang" wasn't avoided, it's surely a contender for great moments in creative destruction.

Multiplication

There is no question that public spending will be re-shuffled as we come to terms with the economic consequences of the slow erosion of our infrastructure, our manufacturing sector, and, in the longer term, our scientific research funding. We've put together some stats on the economic benefits of shifting more public investment to these areas; there is reason to be hopeful.

Military spending was 3.8% of GDP in 2000, its lowest level since 1940. It rose to 4.7% in 2004, where it stayed until the end of 2006, then rose to 5.1% in the first quarter of 2008, and spiked to 7.4% in the third.  As this graph shows, our economy would look even rockier without this stimulus.

[Chart: Military spending's contribution to GDP growth]

With the federal budget taking on water and the economy in turmoil, military analysts are certain that big spending on big projects, projects made even bigger by cost overruns and delays, will be curtailed. Of course, none of this will turn on a dime, but a shift away from defense spending and toward other public projects that were left to languish is a shift toward greater stimulus. Military spending has an overall economic multiplier of 1.61, which means that for every dollar of direct investment, another 61 cents of economic activity is generated. That's actually pretty low, about equal to spending in the retail sector. Among the broad sectors, the big multipliers are in manufacturing, 2.43, and construction, 2.08. Much of the public money we will spend to avoid a deeper recession will be spent in subsectors of these two sectors with even higher multipliers. It's important to remember that we are also coming off the bubble in residential construction, which has among the highest sub-sector multipliers, 2.27. So, what might we expect from some of the projects in President-elect Obama's quiver?

Sub-sector Economic Multipliers
Highest:
Motor-vehicle manufacturing 2.87
Food and tobacco manufacturing 2.61
Farms 2.33
Residential construction (sub-sector) 2.27
Local government enterprises 2.22
Lowest:
Legal services and real estate 1.49
Warehousing and storage 1.43
Fed banks, credit intermediation, related 1.39
Rankings from 2006: Secretary Paulson not responsible for Federal Reserve banks ranking dead last.
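The multiplier arithmetic above is straightforward. Here is a minimal sketch using the overall multipliers quoted in this entry; it is a back-of-the-envelope illustration, not an input-output model, and the function and table names are ours.

```python
# Overall economic multipliers quoted above (activity per dollar of
# direct spending).
MULTIPLIERS = {
    "military": 1.61,
    "manufacturing": 2.43,
    "construction": 2.08,
    "residential construction": 2.27,
    "local government enterprises": 2.22,
}

def total_activity(direct_dollars, sector):
    """Each dollar of direct spending generates (multiplier - 1) dollars
    of additional activity, for multiplier * direct dollars in total."""
    return direct_dollars * MULTIPLIERS[sector]

# $1.00 of military spending yields $1.61 of total activity,
# i.e. 61 cents of induced activity on top of the direct dollar:
print(total_activity(1.00, "military"))

# Shifting $100 billion from military to manufacturing adds roughly
# $82 billion of extra activity:
print(total_activity(100e9, "manufacturing") - total_activity(100e9, "military"))
```

The second calculation is why the reshuffling matters: the same public dollar does more work in a high-multiplier sector.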

Rocks and gravel

At their 2008 Annual Conference, American Society of Civil Engineers President D. Wayne Klotz declared this the Year of the Civil Engineer. (We had hoped for the Year of the Indie Financial Writer, but it looks like they beat us to the punch.) He probably has a point, at least in terms of revenue. As a nation we got a D in infrastructure in 2005, our most recent marking period, and the ASCE projects that we need to invest $1.6 trillion to get our infrastructure into "good" condition. (For example, the EPA estimates there is currently a $540 billion gap between what communities are spending and what they should be spending on water infrastructure. Disgusting examples of that gap include parasites traced to faulty pipes in a Colorado town. The ASCE estimates that 27% of our bridges are "structurally deficient." For that we have a tragic example: the collapse of the Mississippi River I-35W bridge in 2007. Those with strong stomachs can read more here: http://www.asce.org/reportcard/2005/index.cfm.) President-elect Obama has pledged resources to this sector. State and local government enterprises are labor-intensive (more on that below), and have an overall economic multiplier of 2.22, so the overall stimulus of spending on infrastructure projects is about twice that of spending on defense, and is basically even with residential construction.

Retrofitting: Make mine green

Obama has also promised major efforts in retrofitting and other public projects aimed at a more fuel-efficient economy. This is good news for our workers, since a larger percentage of project capital is spent on labor in such projects than in new construction, where materials and underlying real estate eat up more money. Robert Pollin of the Political Economy Research Institute at UMass Amherst and Bracken Hendricks of the Center for American Progress have researched the economic benefits of a $100 billion package, to be spent over the next two years, aimed at creating jobs laying the foundations for a low-carbon economy. (For those with over-taxed memories, the stimulus checks sent to households beginning in April cost about $100 billion.) $50 billion would be allocated to tax credits to help businesses and homeowners finance retrofits and investments in renewable-energy systems, like geothermal. This is important because it encourages private investment that will create jobs now, offset by lower fuel costs in the future. Direct government spending of $46 billion would be devoted to retrofitting, an expansion of freight rail and mass transit, and building smart electrical grids. Of this, $26 billion would go to retrofitting 20 billion square feet of public buildings, resulting in an estimated energy savings of $5 billion a year. The remaining $4 billion would be set aside for federal loan guarantees to underwrite private credit for investments in renewable energy and building retrofits. Using the Bureau of Economic Analysis's input-output tables, Pollin computes that every $1 million spent on public infrastructure creates about 17 jobs, and on green investments 16.7 jobs, which compares to 14 jobs for tax cuts for household consumption and 11 jobs for military spending.
Putting it all together, he believes that at the very least the program he describes would replace the 800,000 construction jobs we have lost in the last two years, and that it is more likely to create about 2 million jobs, bringing the unemployment rate back into the low-5% range. (More here: http://www.peri.umass.edu/fileadmin/pdf/other_publication_types/peri_report.pdf; a map of some working projects is available here: http://apolloalliance.org/apollo-14 )
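Pollin's jobs-per-dollar comparison also reduces to simple scaling. A minimal sketch using the jobs-per-$1-million figures quoted above; the names and the packaging of the numbers are ours, not from the PERI report.

```python
# Jobs created per $1 million of spending, per the figures quoted above.
JOBS_PER_MILLION = {
    "public infrastructure": 17.0,
    "green investments": 16.7,
    "household tax cuts": 14.0,
    "military spending": 11.0,
}

def jobs_created(spending_dollars, category):
    """Scale spending to millions, then multiply by jobs per million."""
    return spending_dollars / 1e6 * JOBS_PER_MILLION[category]

# A $100 billion green package comes to roughly 1.67 million jobs,
# in the neighborhood of the "about 2 million" cited above:
print(round(jobs_created(100e9, "green investments")))
```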

Michigan, Ohio, Indiana: we can’t make it without them

We have been asked our opinion on the wisdom of advancing bridge loans to the Detroit 3, in addition to the $25 billion already set aside to enable the retooling necessary for producing more fuel-efficient cars. It's unfortunate that this question is devolving into a battle between Democrats and Republicans; there's a lot at stake. Auto manufacturing has the highest overall economic multiplier of any subsector (2.87) and job multipliers between 5 and 6.5 for each primary assembly job. In a report that received wide attention, the Center for Automotive Research suggested that a failure of any one of the Detroit 3 could set off cascading job losses of up to 2.5 million in the first year, as well as heavy hits to state and federal revenues. Some have argued that CAR's numbers are too high. Well, OK then, let's say job losses of 1 million; that's still awfully high. Although it may be true that many of the D3 workers would eventually be picked up by the transplants, throwing the region—MI, OH, and IN have a combined population of about 28 million, or 9.3% of the U.S. total—into that kind of turmoil is too risky right now.
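The job-multiplier arithmetic behind those cascading-loss estimates works like this. A minimal sketch using the 5-to-6.5 range quoted above; the direct-employment figure is purely illustrative, not a CAR number.

```python
def total_jobs_at_risk(assembly_jobs, multiplier):
    """Each primary assembly job supports (multiplier - 1) other jobs
    (suppliers, dealers, local services), for multiplier jobs in total."""
    return assembly_jobs * multiplier

# Illustrative: 250,000 direct assembly jobs, at the low and high ends
# of the quoted multiplier range:
low = total_jobs_at_risk(250_000, 5.0)
high = total_jobs_at_risk(250_000, 6.5)
print(low, high)   # 1250000.0 1625000.0
```

The point of the sketch: even modest direct-employment assumptions put total exposure in the millions once the multiplier is applied, which is why the debate over CAR's exact figure matters less than it seems.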

Some are suggesting bankruptcy is the better way, but we see three major risks there. First, an auto made by a bankrupt company would be a hard sell. Second, restricting funds for new products that may be truly successful, like the Chevy Volt, is throwing in the towel prematurely. And, third, bankruptcy is an unpredictable process and can quickly move to liquidation, which would likely be the end of our domestic automotive industry. Some think that's a good idea. We don't. If we are serious about rebuilding our manufacturing sector, a clear lesson from the current meltdown, we need our domestic auto industry to be whole again.

Looking forward, it’s worth noting that the automotive industry ranks sixth in R&D spending, with annual outlays of about $18 billion.  Recently built shiny R&D centers in the Midwestern manufacturing states may be at odds with the popular stereotype of the Rust Belt, but they’re there, and it’s a smart decision to build on what we have, and develop the synergy between R&D and manufacturing that will support innovation in transportation systems. [More here: http://www.nsf.gov/statistics/seind08/c4/c4s3.htm]

The original plan was to ask for $50 billion, but the request was halved, according to one industry analyst, because the full amount sounded unrealistic.  He added, back in October, that the $50 billion now looked like chump change, and he was referencing Secretary Paulson’s $700 billion bailout, not the $7.4 trillion currently set aside by the Fed for assorted bucket work.  With somewhere between 1 and 3 million good-paying jobs at stake, a relatively modest $25 billion spent heading off yet another crisis sounds pretty good, especially since there is nothing in the TARP legislation that would prohibit this. But we need a real plan from management, and should allow no excuses from anyone. It’s alarming to hear elected officials suggest we can’t make these loans because the auto chiefs will just go back to doing what they have always done. We wouldn’t accept such ineffectual reasoning from our own children, would we?  And we need to allow for the possibility that the importance of product innovation has finally been embraced at the top levels. (http://www.cargroup.org/documents/InnovateorDie_clean.pdf)

Emergent phenomena

Since we’re talking about shuffling funds, we want to make our standard plea for public funding of scientific research at levels adequate to maintain our international leadership. A most galling development in recent years is the assertion that the US would naturally hold the lead, coupled with dwindling public funding of the crucial research behind that leadership.

Here’s one example: Throughout the 20th century, the United States held the undisputed lead in Condensed-matter and materials physics (CMMP). CMMP, the largest physics subfield, comprises both pure and applied research into complex phenomena born of simple things. Although decades often pass between the humble advances in our understanding of those simple things (rocks, ice, snow, water) and the dazzling inventions that rock our world, it is widely accepted within the scientific community that long-ranging CMMP research leads the technological revolution.

Early in the 20th century, parent companies invested in their long-term futures by encouraging high-risk, long-range research in their industrial labs. GE founded its labs in 1900, Bell its labs in 1925, and IBM its T.J. Watson lab in 1945. The results were stunning: X-ray tubes, transistors, lasers, the integrated circuit, and the discovery of cosmic microwave background radiation (matching just what a team of astrophysicists at nearby Princeton had calculated would linger from the Big Bang) were all products of these privately funded labs.

Although private institutions currently provide two-thirds of overall R&D funding, the rush to market pushes them to focus on incremental improvements to existing products; they often concentrate on the D while neglecting the R, and funding of longer-range research has dropped to just 10% of the industrial investment budget. (The NAS cites the research models in many of the new venture-capital-funded start-ups as a big contributor to this mindset.) The federal government remains the largest supporter of CMMP research itself: current funding levels are $600 million a year, roughly flat over the last decade in inflation-adjusted dollars. But the likelihood that a CMMP grant application will receive National Science Foundation funding has dropped from 38% to 22% in the last five years; new investigators face a bleaker 12% chance, down from 28%. And our CMMP PhD awards have fallen 25% over the same period. At the same time, other countries are rapidly increasing funding. In the last decade the number of articles published by U.S. authors in two international scientific journals has just held steady, causing the share of articles by U.S. authors to fall from 31% to 24%. If part of the plan is to maintain our lead in the international scientific community, as well it should be, we can't continue to accept a crippling lack of funding. But we can end with some encouraging news.

Bucky paper: That’s what we’re talking about!

In October, an international team at Florida State University announced it had made significant progress in developing manufacturing techniques that may soon make bucky paper competitive with the top composite materials currently on the market. We'll guess that when many web surfers clicked on the link, only to see that the thin sheet of aggregated carbon nanotubes is virtually indistinguishable from a small sheet of origami paper, they quickly moved on.

But bucky paper has the requisite characteristics of a true materials break-through:

1. Its development is moving at a snail's pace.

2. It is an unexpected side product of a different quest. (In 1985 researchers at Rice University set out to create the same conditions that exist in carbon-creating stars. One "extra character" showed up out of left field: the buckyball, AKA the third form of pure carbon we have discovered. While fooling around with buckyballs, researchers stumbled upon their tendency to stick together and, by filtering them through a fine mesh, produced bucky paper.)

3. Its physical properties are hard to fathom: one-tenth as heavy as steel but potentially 500 times stronger, it conducts electricity like copper and disperses heat like brass (unlike other composites), and it is made from carbon molecules 1/50,000th the width of a human hair.

4. It's a true international collaboration.

Lockheed Martin Missiles chief technologist Les Kramer suggests bucky paper will be a radical technology for aerospace; others call it a possible Holy Grail. (More here: http://www.hpmi.net/)

What’s not to like? Let’s make sure there’s more to come.

by Philippa Dunne · Comments & Context

Banking crises around the world

Having rejected Henry Paulson's rescue plan, it's not clear what Congress—or those in the broad population opposed to a "bailout"—propose to do to keep the financial system from imploding. But a database of systemic banking crises recently assembled by IMF economists Luc Laeven and Fabian Valencia (www.imf.org/external/pubs/cat/longres.cfm?sk=22345.0) provides a useful map of how crises play out and what does and doesn't work.

Laeven and Valencia identify 124 systemic banking crises between 1970 and 2007, and assemble detailed information on 42 of them, representing 37 countries. (Some countries, like Argentina, appear multiple times.)

In almost every case, governments took active measures to mitigate the crisis, so there is no real test of whether rescue schemes actually work; no politician seems willing to face the consequences of letting the chips fall where they may. But the work of Laeven and Valencia does offer some guidance as to what works best.

Dithering Costs
One crucial lesson stands out: speed matters. This is obvious to anyone who followed Japan's dithering in the 1990s; standing aside and hoping the problem goes away is not a good idea. Relatedly, "forbearance"—regulatory indulgence, such as permitting insolvent banks to continue in business—does not work, as has been established in earlier research. As the authors say, "The typical result of forbearance is a deeper hole in the net worth of banks, crippling tax burdens to finance bank bailouts, and even more severe credit supply contraction and economic decline than would have occurred in the absence of forbearance." This suggests that suspending mark-to-market requirements is not a good idea.

Since forbearance does not work, some sort of systemic restructuring is a key component of almost every banking crisis resolution, meaning forced closures, mergers, and nationalizations. Shareholders frequently lose money in systemic restructuring, often lots of it, and are even forced to inject fresh capital. The creation of asset management companies to handle distressed assets is a frequent feature of restructurings, but they do not appear to be terribly successful. More successful are recapitalizations using public money (which can often be partly or even fully recouped through privatization after the crisis passes); recaps seem to result in smaller hits to GDP. But they're not cheap: they average 6% of GDP, which for the U.S. would be about $850 billion.

Total fiscal costs, net of eventual asset recoveries, average 13% of GDP (over $1.8 trillion for the U.S.); the average recovery of public outlays is around 18% of the gross outlay.
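Scaling those percent-of-GDP averages to dollar figures is a one-line calculation. A minimal sketch; the $14 trillion U.S. GDP figure is our rounding assumption for the illustration, not a number from the paper.

```python
# Assumed U.S. GDP, roughly the scale implied by the dollar figures above.
US_GDP = 14.0e12

def cost_in_dollars(pct_of_gdp, gdp=US_GDP):
    """Convert a percent-of-GDP cost into dollars."""
    return pct_of_gdp / 100 * gdp

recap = cost_in_dollars(6)    # average recapitalization cost
total = cost_in_dollars(13)   # average net fiscal cost

print(f"recap: ${recap / 1e9:.0f} billion")    # roughly $840 billion
print(f"total: ${total / 1e12:.2f} trillion")  # roughly $1.82 trillion
```

On that assumed GDP, the averages land close to the $850 billion and $1.8 trillion-plus figures cited above.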

But those who don't want to spend that kind of taxpayer money should consider this: Laeven and Valencia find that "[t]here appears to be a negative correlation between output losses and fiscal costs, suggesting that the cost of a crisis is paid either through fiscal costs or larger output losses." And if the economy goes into the tank, government revenues take a big hit, so what's saved on the expenditure side could well be lost on the revenue side.

Oh, and about half the countries that have experienced crises have had some form of deposit insurance. So merely expanding the FDIC's coverage is not likely to do the trick—and, in any case, it's going to be hard to escape the huge expense of a systemic recapitalization, though using the FDIC might simplify the politics of the rescue.

(A note on the politics of the rescue: an ABC poll shows the public to be far more worried about the economic consequences of the bailout's defeat than Congress seems to be. There's not a lot of enthusiasm for what's seen as handing money over to Wall Street—but if properly structured and sold, say with more cost-recovery prospects for the government and more relief for debtors, a rescue is not as unpopular as some would have it.)

Relevant Examples
Most of the countries in the Laeven/Valencia database are in the developing world, and are of questionable relevance to the U.S. But TLR has taken a closer look at four countries that offer more relevant models: Japan, Korea, Norway, and Sweden. Some major stats for the four and the U.S. are in the table at the end of this entry, as are graphs of some important indicators.

Sweden, now widely seen as a model of swift, bold action, kept its ultimate fiscal costs relatively low—3.6% of GDP at first, almost all of which was recovered through stock and asset sales—but was unable to avoid a deep recession. At the other end of the spectrum, Japan, the model of foot-dragging half-measures, saved no money through its procrastination; its fiscal outlay was 24% of GDP, almost none of which was recovered. And it was unable to avoid recession.

Note, though, that some of the worried talk surrounding the financial market impact of bank bailouts looks misplaced, at least on these models. Three years after the outbreak of crisis, inflation was lower and stock prices higher in all four countries, and government bond yields were lower in all but Japan. It's likely that the deflationary effects of a credit crunch outweigh the inflationary effects of debt finance.

Although the U.S. in 2007 had a lot in common with other countries on the brink of a banking crisis, one thing stands out: the depth of the current account deficit. Of the four comparison countries, only Korea comes close to the U.S. level of red ink. The unweighted average current account deficit of the 42 countries in the Laeven/Valencia database was 3.9% of GDP—compared with 6.2% for the U.S. That suggests that the U.S. has more to deal with than just resolving a banking crisis.

A Better Bailout
So, with the modified Paulson plan dead for now, what might a better bailout scheme look like in light of the Laeven/Valencia historical database?

First, it must be adopted quickly. Perhaps operating through the FDIC would be a way to accomplish that, though the FDIC will almost certainly need to have its coffers copiously refilled.

Second, forbearance would be a bad idea; it does no one any good not to face reality.

Third, purchasing bad assets and turning them over to an asset management corporation is not a promising strategy.

Fourth, recapitalizing the banks should be the heart of any policy; as the authors say, it should be selective, meaning supporting those institutions with hope of revival, and letting the terminal go down.

And fifth, targeted relief for distressed debtors, supported with public funds, has also shown success in earlier banking crises, and should be part of any rescue scheme in the U.S. as well.

Crises like this are manageable. They're expensive and painful to resolve, but even more expensive and painful when left to fester.

—Philippa Dunne & Doug Henwood

Originally published October 1, 2008

banking crises: some stats

                     Japan    Korea   Norway   Sweden     U.S.
start                 1997     1997     1991     1991     2007
fiscal cost
  gross              24.0%    31.2%     2.7%     3.6%
  net                23.9%    23.2%     0.6%     0.2%
output loss          17.6%    50.1%     0.0%    30.6%
minimum growth       -2.0%    -6.9%     2.8%    -1.2%
pre-crisis
  fiscal balance     -5.1%     0.2%     2.5%     3.4%    -2.6%
  public debt       100.5%     8.8%    28.9%    60.1%
  inflation           0.6%     4.9%     4.4%    10.9%     2.6%
  GDP growth          2.8%     7.0%     1.9%     1.0%     2.9%
  current account     1.4%    -4.1%     2.5%    -2.6%    -6.2%
recap costs
  gross               6.6%    19.3%     2.6%     1.9%
  net                 6.5%    15.8%     0.6%     1.5%
three years later
  CPI                -2.5%    -2.1%    -2.0%    -7.3%
  gov bonds           0.1%    -3.2%    -2.7%    -1.0%
  stocks             10.9%    12.1%    53.0%    41.0%
  employment         -1.7%     0.2%     1.3%   -10.6%

All percentage figures except inflation, GDP growth, bond yields, stock prices, and employment are percent of GDP. Gross fiscal cost is total outlays for banking system support; net is after equity and asset sales. Output loss is total deviation from the trend growth rate in GDP from the crisis year through three years after crisis onset, expressed as a percent of trend GDP. Minimum growth is the lowest level of GDP growth reached during the crisis. Pre-crisis stats are for the year before the onset of the crisis. Recap costs are costs of cash, equity, or debt injections or asset purchases to recapitalize the banking system; net is after recovery of these costs through asset sales and the like. Stats labeled "three years later" are changes in indicators three years after crisis onset. "Gov bonds" are bond yields reported by the IMF in its International Financial Statistics database. "Stocks" are based on the stock indexes published in IFS. Stats in the first eleven rows come from "Systemic Banking Crises: A New Database," by Luc Laeven and Fabian Valencia (IMF Working Paper 08/224) and the associated spreadsheets available from www.imf.org/external/pubs/cat/longres.cfm?sk=22345.0. The next four rows are computed by TLR from the IFS database.

by Philippa Dunne · Comments & Context

Whiplash: Trading on the MTS Rollercoaster

Originally published January 12, 2007

There ain't no seat-belt hefty enough to keep you in your seat if you decide to take positions based on evidence in the Monthly Treasury Statements.

Every couple of months, an analyst seizes on a fluctuation, often a wild one, in the Monthly Treasury Statements to make the case that the job market is either far stronger or way weaker than the Bureau of Labor Statistics' estimates suggest. Oh, OK, these remarks do come disproportionately from those on the hunt for evidence that the BLS is underestimating payrolls: witness the recent attention to January's surge in withholding at the federal level, the recent stories focused on another strong showing in March receipts, and the fact that once the "hidden strength" story is out in the markets for a given month, the almost inevitable reversal in the following month never makes it to traders' screens.

by Philippa Dunne · Comments & Context

Donald Kohn’s “Less Alarmist” View

Originally published November 28, 2007

Fed vice-chair Donald Kohn spoke at the Council on Foreign Relations in New York this morning, in a session moderated by Laurence Meyer. Kohn’s prepared remarks are on the Fed’s website at:

http://www.federalreserve.gov/newsevents/speech/kohn20071128a.htm

Unsurprisingly, the text reads mostly like a standard on-the-one-hand/on-the-other analysis of the sort that caused Harry Truman to demand a one-handed economist. Though Kohn would never win any public-speaking awards, there were some subtleties in the delivery that might be meaningful. For example, he drew out the reading of this sentence, as if for emphasis: "Some broader repricing of risk is not surprising or unwelcome in the wake of unusually thin rewards for risk taking in several types of credit over recent years." And he emphasized the starred words in the following passage: "Consequently, we might expect a **moderate** adjustment in the availability of credit to these key spending sectors…. Heightened concerns about larger losses at financial institutions now reflected in various markets have depressed equity prices and **could** induce more intermediaries to adopt a more defensive posture in granting credit, not only for house purchases, but for other uses as well." These emphases suggest that Kohn – who stressed he was speaking for himself only and not for his colleagues – holds a less alarmist view of things than do many market participants.

by Philippa Dunne · Fed Focus