Articles by: Philippa Dunne

The Fed Evokes Shakespeare

There's no shortage of reasons to worry about the health of the U.S. dollar: a still-huge trade deficit, a huge and swelling budget deficit, pervasive credit troubles, a serious recession that may linger, the threat of capital flight, etc. etc. Should we be worried about a currency crash?

That question is ambiguously phrased. It's quite possible the dollar could crash. But how bad would it be if it did? A surprising answer comes from a recent study by Federal Reserve economist Joseph E. Gagnon. The title lays out a broad hint: "Currency Crashes in Industrial Countries: Much Ado About Nothing?" Gagnon's answer is basically, yeah.

Gagnon looks at 19 episodes of sharp currency depreciation (15% or more over four quarters) in 20 industrial countries since 1970. Factors behind the crashes include inflationary policies, large current account deficits, capital outflows, and rising unemployment rates. Sometimes one factor was enough to cause the slide, sometimes it took several. His surprising finding is that crashes were followed by poor outcomes (slow GDP growth, rising bond yields, and falling equity prices) only when inflationary policies prevailed. Even then, the crash itself seemed not to contribute to the poor outcomes; if anything, it mitigated them. The inflation did the damage, not the currency trouble.

The greatest danger from a currency crash came when the central bank was pegging a specific currency value despite inflationary policies. If the exchange rate is flexible or floating, there's considerably less trouble.

It's important to note that inflation must already be underway, and not merely expected, for a currency crash to be truly damaging. The threshold for inflationary risk is an inflation rate more than two standard deviations above the 20-country average, which works out to 7.2 percentage points.
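
To make those two screens concrete, here is a minimal sketch in Python of the definitions as we read them: a crash is a depreciation of 15% or more over four quarters, and an episode counts as inflationary when the inflation rate sits more than two standard deviations above the cross-country average. The exchange-rate and inflation numbers below are invented for illustration; they are not Gagnon's data.

    import statistics

    def is_crash(fx_index):
        """True if the currency fell 15% or more over any four-quarter span."""
        return any(fx_index[i] / fx_index[i - 4] - 1 <= -0.15
                   for i in range(4, len(fx_index)))

    def inflation_threshold(rates):
        """Two standard deviations above the cross-country average."""
        return statistics.mean(rates) + 2 * statistics.stdev(rates)

    # Hypothetical numbers, purely for illustration
    fx = [100, 99, 97, 95, 90, 86, 83, 80]        # quarterly currency index
    own_inflation = 9.0                            # percent
    peers = [2.5, 3.1, 4.0, 2.8, 3.3, 5.0, 2.2, 3.8, 4.4, 2.9,
             3.5, 2.7, 4.1, 3.0, 2.6, 3.9, 4.3, 2.4, 3.2, 3.6]  # 20 countries

    print("crash:", is_crash(fx))
    print("inflationary episode:", own_inflation > inflation_threshold(peers))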

Gagnon also finds that devaluations can successfully stimulate GDP growth and improve the foreign balance with little effect on inflation. He concludes: "Non-inflationary currency crashes uniformly had good outcomes: GDP growth was average to above average, bond yields fell, and real equity prices rose."

We're not conspiracy theorists, sad to say, but a word on what this paper might mean, aside from its interesting conclusions. There must be talk within the Fed about what would happen should the dollar fall hard, and maybe even about whether such a devaluation might be a good move for the U.S. Though no central banker or finance minister would ever let on that he or she is thinking this way, you can imagine the temptation. From all this we'd conclude: avoid betting the farm on dollar strength. But if you lead a dollar-denominated life, you have little to fear from a devaluation. Things might look a little different in Beijing, but that's another story.

The Envelope Please: NBER Study Finds Ratio Between Establishment and Household Surveys to Be Cyclical

The NBER has just released a working paper, "Exploring Differences in Employment between Household and Establishment Data," presenting research and analysis carried out by the Census Bureau and the BLS on the unusually large gap between the two major employment surveys that developed in the late 1990s. By matching individual Unemployment Insurance records with individual respondents in the Household Survey, the authors unearthed the characteristics of the workers most likely to show up in one survey yet be missed by the other, and concluded that most of these workers are on the margins of the income and education spectra. For example, poor recent immigrants working under the table and highly educated consultants might both be missed by the Establishment Survey but included in the Household Survey.

But that example should not give Household Survey enthusiasts false hope. The study demonstrates that divergences between the Current Employment Statistics survey (aka the Establishment Survey or Nonfarm Payrolls) and the Current Population Survey (aka the Household Survey) are a "cyclical phenomenon," with the CES outpacing the CPS during business-cycle expansions and falling back during recessions and the early stages of recoveries. The 60-year history of the ratio between the two surveys, graphed below, shows this clearly. (Take that, Kudlow & Co.) Also note that the ratio failed to rise during the most recent recovery, which seems to underscore the ongoing weakness of employment growth.

[Chart: Ratio of Establishment Survey to Household Survey employment]
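
For readers who want to reproduce a chart along these lines, here is a minimal sketch; it is our own illustration, not the NBER authors' code, and it assumes the FRED series codes PAYEMS (Establishment Survey, total nonfarm payrolls) and CE16OV (Household Survey, civilian employment) along with the pandas_datareader package.

    import pandas_datareader.data as web
    import matplotlib.pyplot as plt

    start = "1948-01-01"
    ces = web.DataReader("PAYEMS", "fred", start)["PAYEMS"]   # Establishment Survey
    cps = web.DataReader("CE16OV", "fred", start)["CE16OV"]   # Household Survey

    # The ratio tends to rise in expansions and fall back in recessions
    ratio = ces / cps
    ratio.plot(title="Ratio of Establishment to Household Survey employment")
    plt.show()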

Based on characteristics of respondents discovered in their study, the authors contend that tight labor markets create a growing number of marginal jobs that often go unreported in the Household Survey, e.g., establishments hiring short-term workers to cover busy periods, which begins to lift the Establishment Survey. As economic conditions continue to improve, workers tend to drop informal jobs (which would be reported in the CPS but not the CES) for formal jobs (that would be reported in the CES), thus widening the gap between the two surveys. These trends then reverse as economic activity falls off, with establishments laying off workers who then turn to informal employment, moving from the CES to the CPS. The graph of the two series since 1994 directly below illustrates this process.

[Chart: Household Survey vs. Establishment Survey employment since 1994]

The unusually large and long-lived gap between the two surveys began in 1998 as the Establishment Survey rose well above the Household Survey, and reversed in 2001, when the Household Survey remained stable as the Establishment Survey fell.  The relative pick-up in the Household Survey that got so much attention in the press was the unwinding of the prior trend and not the beginning of a new one. To put some numbers on it, in comparing UI and CPS data the authors found that jobs counted in the UI but not in the CPS grew by 2.3 million between 1996 and 2001, while jobs counted by the CPS and not by the UI grew by just 600K. Between 2001 and 2003, jobs counted by the CPS but not the UI grew by 800K, while jobs counted in the UI but not the CPS fell by 500K.

So next time any of us, understandably, seeks solace in the Household Survey's strength when the Payroll Survey disappoints, we need to remember that researchers at the NBER, the organization whose Business Cycle Dating Committee officially calls recessions, have determined that such relative strength is indicative of a weakening economy.

NBER Working Paper No. 14805, issued in March 2009, "Exploring Differences in Employment between Household and Establishment Data," by Katharine G. Abraham, John C. Haltiwanger, Kristin Sandusky and James Spletzer, available here (subscription required): http://www.nber.org/papers/w14805.pdf

Prescient TLR calls on three major economic turns

Dramatic slides in housing prices, auto sales, and oil prices have caused—need we point out?—tremendous turmoil over the past year.

We were out in front on all three:

[Chart: Auto Sales]

[Chart: Oil Prices]

[Chart: Housing Prices]

Can you afford *not* to know what we are writing about today? Please call Marni at 877-324-1893 for a trial subscription.


Calculating the Unemployment Rate

Several recent news pieces have claimed that if the unemployment rate were calculated as it was during the Great Depression, the current rate would be nearly double the official figure, creeping toward the formidable rates of the 1930s.

[Chart: Unemployment rate, 1929–2009]

The first problem with this claim is that there was no official unemployment rate until the 1940s; the rates we now cite for the 1930s were reconstructed after the fact. As unemployment ballooned during the Great Depression, a number of ad hoc attempts were made to calculate the rate, and the widely divergent results led private researchers and some state and local governments to experiment with various sampling methods. In 1940 the WPA began publishing statistics on those working (the employed), those looking for work (the unemployed), and those doing something else (hiding under the bed, perhaps?) and so not in the equation.*

The second problem with the claim is that it's just not true. Although the BLS has refined its surveys and made questions more specific, conceptually the unemployment formulas have not changed, and the BLS's own analysis of test data shows that the impacts of the several sets of changes on the overall numbers were minor.

In 1962, high unemployment and two recessions in three years led to the formation of the President's Committee to Appraise Employment and Unemployment Statistics, led by Robert Gordon and tasked with reassessing the concepts used in gathering labor-market data. The Committee gave high marks to the BLS's integrity and suggested some improvements. The BLS spent several years testing new survey techniques before instituting a number of changes in 1967.

Among the most important of these was the requirement that workers must have actively sought employment in the previous four weeks to be classified as unemployed. A contact at the BLS agrees that some discouraged workers were probably counted as unemployed before this change was made, but the effect of this migration is small. As it generally does, the BLS ran the new definitions alongside the old, in this case for two and a half years, before adopting them. Although the test series is not entirely comparable with the new series, the overall unemployment rate in the new series dropped by just one-tenth of a percentage point; within that, the rate for adult men was down three-tenths, up four-tenths for adult women, and off a full point for teenagers. (Maybe they were just being teenagers: the requirement that they give a concrete example of their job search may well have reminded them of their parents and drawn the blank stare.) The Committee also recognized the need for more detailed data on persons outside the labor force, who are highly sensitive to changes in labor demand, and in 1967 the BLS began collecting information on those who wanted a job even though they were not looking for work.

In 1976, in order to provide more information on the hidden unemployed (who would presumably be part of the labor force in a full-employment scenario), the BLS first published the original U-1 to U-7 tables, which break out marginally attached workers. These tables were revised in the 1994 redesign (becoming U-1 to U-6), and the controversial requirement that discouraged workers must have sought work in the prior year was added. This change halved the number of discouraged workers, resulting in a complete break in the time series.

But those workers can still be found in the U-6 series, the broadest measure of labor underutilization, and it ain't a pretty sight. Up 4.8 percentage points over the year, U-6 currently includes an ugly 13.5% of the labor force. Update: In February, U-6 unemployment rose to 14.8%. There's no need to fool around with the official unemployment rate (U-3) to get an accurate picture of how quickly our labor market has deteriorated: the U-1 to U-6 tables tell the story.
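
If you want to watch that gap yourself, here is a minimal sketch; it is our own illustration, assuming the FRED series codes UNRATE (U-3) and U6RATE (U-6, available from 1994) and the pandas_datareader package.

    import pandas_datareader.data as web
    import matplotlib.pyplot as plt

    # U-6 is only available from 1994, when the current definitions took effect
    rates = web.DataReader(["UNRATE", "U6RATE"], "fred", "1994-01-01")
    rates = rates.rename(columns={"UNRATE": "U-3", "U6RATE": "U-6"})
    rates["gap"] = rates["U-6"] - rates["U-3"]   # slack the headline rate misses

    print(rates.tail())
    rates[["U-3", "U-6"]].plot(title="U-3 vs. U-6 unemployment rates")
    plt.show()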

Update 03/14/2009
In response to a reader's comment:

There are three unemployment series available for the early 1930s: Stanley Lebergott's, Michael Darby's reworking of the Lebergott series, and the G.H. Moore series, available through the NBER. (Michael Darby is the economist who pointed out that the Lebergott series counted those on work relief as unemployed; his series moves them to employed.) We used Moore's series, which pretty much splits the difference between the other two. When you combine different series, as is usually necessary for long-term views, the breaks between them produce spikes or dips. Splicing the Darby series onto the official BLS data makes it look like the unemployment rate jumped in 1940, which we did not want; Lebergott's counting of those on work relief as unemployed, on the other hand, was in line with 1940 Census practice.

Here are the yearly averages for the three series:

Year   Moore   Lebergott   Darby
1929    n/a      3.2%       3.2%
1930    n/a      8.7%       8.7%
1931    n/a     15.9%      15.3%
1932    n/a     23.6%      22.9%
1933   23.4%    24.9%      20.6%
1934   19.1%    21.7%      16.0%
1935   17.6%    20.1%      14.2%
1936   14.2%    16.9%       9.9%
1937   12.2%    14.3%       9.1%
1938   18.4%    19.0%      12.5%
1939   16.3%    17.2%      11.3%
1940    n/a     14.6%       9.5%

Basically, if you want to evaluate the effect of government work programs, compare the Lebergott series to the Darby series. If you want a more readable trend line (while avoiding accusations of playing politics), use the Moore series.
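
For anyone who wants to build a long series the same way, here is a minimal sketch of the splice; the 1930s values come from the Moore column in the table above (which covers only 1933 to 1939), and the modern leg assumes FRED's monthly UNRATE averaged to annual figures, our choice of source rather than necessarily the one behind the chart above.

    import pandas as pd
    import pandas_datareader.data as web

    # Moore's yearly averages from the table above (1933-39 only)
    moore = pd.Series({1933: 23.4, 1934: 19.1, 1935: 17.6, 1936: 14.2,
                       1937: 12.2, 1938: 18.4, 1939: 16.3})

    # Official rate: FRED's monthly UNRATE, averaged by year (starts in 1948)
    unrate = web.DataReader("UNRATE", "fred", "1948-01-01")["UNRATE"]
    official = unrate.groupby(unrate.index.year).mean()

    # One long yearly series, with a gap where neither source covers the 1940s
    spliced = pd.concat([moore, official]).sort_index()
    print(spliced.head(10))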

For more information and some notes on definitions, please see “Employment and Unemployment in the 1930s,” by Vanderbilt economist Robert A. Margo, available here: http://fraser.stlouisfed.org/docs/MeltzerPDFs/maremp93.pdf

Philippa Dunne and Doug Henwood

*There is currently a bit of a fracas over the reconstructed unemployment rates for the period before the official series. Stylish Stanley Lebergott, the BLS economist who put together the most widely used series, categorized workers on emergency relief as unemployed. In the 1980s, data reclassifying these workers as employed were released, a definition in line with current practice and more widely accepted. In the past month or so, those wishing to show that the WPA programs did little to alleviate unemployment have been relying on the unrevised Lebergott series, and those taking the opposite view on the revised data. Of course, if you compare the two series, it appears that between 1934 and 1941 WPA projects took 2 to 3.5 million workers off the unemployment rolls and shaved the rate by 4 to 7 percentage points.


Why do they do it?

Among popular topics this holiday season, arguing about performance-based pay is running neck-and-neck with grousing about bonuses, whether too big or too small. In their upcoming paper, "Give & Take: Incentive Framing in Compensation Contract," Judi McLean Parks (Washington University in St. Louis) and James W. Hesford (Cornell University) test their hunch that certain compensation packages may be linked to rising fraud, the losses from which the Association of Certified Fraud Examiners currently estimates at something like $994 billion annually.

Since compensation packages are considered a central tool in managerial control systems, managerial accounting research has long taken an interest in how they influence behavior. A primary foundation of such research is agency theory, a model that assumes agent and principal are both self-interested but pursue divergent goals, that agents will shirk, "if necessary, [with] guile and deceit," and that principals will attempt to control agents through monitoring or by aligning the agents' interests with their own. In theory, performance-based compensation systems are one way to accomplish the latter. This may have worked well at GM in the 1980s, when line workers were put on performance-based pay so that when GM did well, they did well. But when the agents themselves report the results, such contingent packages may instead encourage financial mismanagement and deceit.

In an effort to supplement empirical studies of performance-based pay, and to include penalty contingencies, which are actually quite common, McLean Parks and Hesford undertook a controlled study. Rather than the obvious choice of rats as participants, the authors brought in a random sample of students, paying them for solving anagrams under three compensation packages: flat salary, performance-based bonus, and performance-based penalty. Each student was given a package with instructions, a "high-quality attractive pen" (keep your eye on these), and self-evaluation forms. Once the students turned in their self-scored performance sheets, they threw away their actual work, allowing plenty of opportunity for fraud.

Basic results: those receiving flat salaries were the most honest in their reporting, those on bonus-contingent schedules were less honest, and those on penalty-contingent schedules were the least honest.  Even worse, when no ethics statement was signed, those on penalty-contingent pay were three times as likely as those on salary, and twice as likely as those on bonus-contingent pay, to steal those attractive pens.

The authors also unearthed a concern about the use of ethics statements, such as the attestations all CEOs must sign under Sarbanes-Oxley. Although, overall, 46% of those who did not sign such statements stole their pens while only 29% of those who did sign them "misappropriated assets," the details are more complicated. Those facing performance-based penalties were more likely to misrepresent their performance if they had signed such statements than if they had not. The authors suspect that the existence of the statements themselves suggested to the agents that the principals were weak on apprehending fraud. Why else would they be required to sign such statements?

McLean Parks sums up: “For years we have touted the basic mantra of pay for performance because that's the way you get the best performance. Maybe you get the best performance reported, but what's the underlying performance?"

Not really in the holiday spirit, but you can read the full study, still under review, here:

https://www.business.utah.edu/humis/docs/organization_962_1224879507.pdf
