Sunday, January 15, 2017

Cracks in the anti-behavioral dam?


This is purely my impression, buttressed with some anecdotes; I don't have any systematic data to back this up. But in both papers and casual discussion, I'm seeing macro people taking behavioral ideas more seriously. 

"Behavioral" is a very squishy idea, but basically I think of it as meaning "imperfect use of information". The difficulty with labeling a model "behavioral" is that we don't really know what information is available. This is why I believe there's a fundamental equivalence between behavioral and informational models - for any "informational" model where agents don't know all of the facts, there's an observationally equivalent "behavioral" model where they do observe the facts and just don't make use of them. 

But anyway, in macro, most models use Rational Expectations, so let's think of "behavioral" as just meaning "non-RE". Actually, non-RE models have been kicking around for a long time - for example, Sargent's learning models, or Mankiw and Reis' sticky information models. What seems to be changing (slightly) is that A) younger people seem to be making non-RE models, B) people are recommending non-RE models for policy analysis, and C) the departures from RE are getting more stark. 

Some recent examples I've seen are:

1. The learning approach to New Keynesian models, promulgated by Evans et al., which seems to be solidly mainstream

2. Mike Woodford's response to Neo-Fisherism, which relies crucially on a slight departure from RE

3. Xavier Gabaix's behavioral New Keynesian model, where consumers are short-term thinkers instead of infinitely forward-looking ones

These are all well-established people making these models - they cut their teeth on RE models for years before daring to venture out into behavioral waters. But now I'm starting to see young people doing behavioral stuff as well. A good example, sent to me by Kurt Mitman, is this paper by Kozlowski, Veldkamp, and Venkateswaran, entitled "The Tail that Wags the Economy: Belief-Driven Business Cycles and Persistent Stagnation". 

The basic idea of the paper is that instead of knowing the true PDF of macroeconomic shocks, people re-estimate the distribution every time they see a shock. Not too crazy, right? But that seemingly small departure from RE has big business-cycle implications. 

The reason is tail events. When big shocks are rare, just one of them can change people's whole understanding of how the economy works. How many events like the Great Depression have there been in American history? Really, there are only two since we started keeping national accounts. Two! In 2008 we abruptly went from "There was that one really bad depression one time" to "Whoa, this is a thing that can happen multiple times!". To think that this would have zero impact on agents' beliefs about the economy - which is exactly what RE demands we think - seems implausible. The authors write:
No one knows the true distribution of shocks to the economy. Economists typically assume that agents in their models do know this distribution as a way to discipline beliefs. But assuming that agents do the same kind of real-time estimation that an econometrician would do is equally disciplined and more plausible. For many applications, assuming full knowledge has little effect on outcomes and offers tractability. But for outcomes that are sensitive to tail probabilities, the difference between knowing these probabilities and estimating them with real-time data can be large.
Anyway, to make a long story short, this can produce long economic stagnations, like the one we just had. Taking a gander at the literature review section, I see that these authors aren't the first to use this mechanism in a theory - it looks like it can be traced back to a 2007 AER paper by Lars Hansen. The other similar papers the authors cite, however, all come from 2013 or later, showing that this sort of idea has been gaining currency rapidly in recent years. 
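
Just to make the mechanism concrete, here's a minimal sketch - my own illustration, not the authors' code - of how re-estimating the shock distribution from observed data makes a single tail event matter a lot. The numbers, the kernel-density estimator, and the "disaster" threshold are all assumptions chosen for illustration:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical history: 60 years of ordinary growth shocks, then one crisis.
normal_years = rng.normal(loc=0.02, scale=0.02, size=60)  # ~2% average growth
crisis_year = np.array([-0.08])                           # a 2008-style tail event

def estimated_tail_prob(sample, threshold=-0.05, bandwidth=0.01):
    """Kernel-density estimate of P(shock < threshold), built from observed data."""
    # Each observation contributes a Gaussian kernel; average the left-tail mass.
    return norm.cdf(threshold, loc=sample, scale=bandwidth).mean()

before = estimated_tail_prob(normal_years)
after = estimated_tail_prob(np.concatenate([normal_years, crisis_year]))

print(f"Estimated disaster probability before the crisis: {before:.4f}")
print(f"Estimated disaster probability after one tail event: {after:.4f}")
```

Before the crisis, the agent's estimated disaster probability is essentially zero; one observation pushes it to a permanently higher level, because the crisis year never drops out of the data set. That's the sense in which a single tail event can change people's whole understanding of how the economy works.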

Now, Kozlowski et al. do dodge one important issue: what data set do agents use to estimate the distribution of economic shocks? The data set they use goes back only to World War 2 - it doesn't even include the Great Depression. But even if we go back further than that, we'll miss earlier episodes like the Panic of 1873, when good national accounts just weren't kept at all. Data availability is so recent that there's almost an observational equivalence between assuming that people use all the available data and assuming that people overweight data from their own lifetimes.

If the authors - or some other authors - were to assume that people overweight data from their own lifetimes, as evidence from Malmendier and Nagel suggests, it would have important implications down the line. Instead of people's expectations slowly converging to RE over the decades (centuries?), people would forget the lessons of history and continue being surprised by depressions every 50 or 100 years or so. 
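
If you wanted to model that lifetime overweighting, one crude way to do it - purely my illustration, not Malmendier and Nagel's actual estimator, and with an assumed half-life - is to down-weight observations by how long ago the agent experienced them:

```python
import numpy as np
from scipy.stats import norm

def experience_weighted_tail_prob(shocks, years_ago, half_life=20.0,
                                  threshold=-0.05, bandwidth=0.01):
    """Tail-probability estimate where an observation's weight halves every half_life years."""
    weights = 0.5 ** (np.asarray(years_ago) / half_life)
    weights = weights / weights.sum()
    return float(np.sum(weights * norm.cdf(threshold, loc=np.asarray(shocks), scale=bandwidth)))
```

With a 20-year half-life, a Depression observed 80 years ago gets one-sixteenth the weight of last year's data, so the estimated disaster probability drifts back down as the event recedes from living memory - and people get re-surprised by the next depression.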

For now, macroeconomists don't have to worry about this question. Authors like Kozlowski et al. can frame their papers as quasi-behavioral papers, where RE is limited by data availability, instead of fully behavioral papers where RE is limited by collective forgetting. So these are still only cracks in the anti-behavioral dam, not a full torrential flood.

But my question is this: What happens when people start applying this mechanism to more complicated shock processes? What if the economy has regime switches that last decades? What if there is more than one kind of rare shock (e.g. the Great Inflation of the 70s/80s)? I've seen some people try to model stuff like this, and the end result can come out looking like practically any type of non-rational expectations you can think of. Meanwhile, empirical macro people are starting to pay more attention to survey measures of expectations. And people from behavioral finance are starting to put things like extrapolative expectations into macro models, to explain macro facts. And evidence like that collected by Malmendier and Nagel continues to pile up.

And I should mention casual conversation as well. More and more young macro people that I interact with, including (even especially?) those who run in "freshwater" circles, are saying that behavioral explanations will have to be part of our understanding of how consumption works. Here's an example from a recent blog comment. 

So I wouldn't be surprised to see some more cracks in the anti-behavioral dam in the years to come. Chris House, my old macro prof, proclaimed three years ago that behavioral economics was a dead end and would never have a transformative impact on macro. But seeing papers like Kozlowski et al.'s, I'm thinking that prediction now looks quite ill-timed.


Updates

Just for fun, I'll post some more random behavioral macro papers I see.

"Explaining Consumption Excess Sensitivity with Near-Rationality: Evidence from Large Predetermined Payments", by Kueng

"Mortality Beliefs and Household Finance Puzzles", by Heimer, Myrseth, and Schoenle

"Learning about Consumption Dynamics", by Johannes, Lochstoer, and Mou

"Understanding Uncertainty Shocks and the Role of Black Swans", by Orlik and Veldkamp

"The Liquid Hand-to-Mouth: Evidence from Personal Finance Management Software", by Olafsson and Pagel

Friday, January 13, 2017

The $30k Hypothesis


I wrote a Bloomberg View post about the Permanent Income Hypothesis. Basically, more and more research is piling up showing that it doesn't fit real consumption patterns. Some consumption smoothing takes place, but there's also a substantial amount of hand-to-mouth consuming going on. Most economists I know of have already accepted this fact, and usually chalk the hand-to-mouth behavior up to liquidity constraints (or, less commonly, to precautionary saving).

But a new paper on unemployment insurance expiration casts major doubt on these standard fixes, especially on liquidity constraints as the culprit. A long-anticipated transitory shock - UI expiration - shouldn't produce a big drop in consumption even if people are liquidity constrained. Nor is home production the answer, since unemployed people are already at home long before UI expires. Something else is going on here - either people interpret UI expiration as a (false) signal of the expected duration of unemployment, or they expected Congress to extend UI at the last minute, or they're just short-term thinkers in general. Or something else. I predict that as more and more good consumption data become available, more and more of this short-termist behavior will be observed, putting ever more pressure on people who use standard models of consumption behavior.
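
To see why liquidity constraints alone don't rescue the standard story, here's a minimal sketch with made-up numbers: a household that knows in advance when UI will expire can smooth right through the expiration just by saving, even if it can't borrow at all.

```python
# Hypothetical income path: 6 months of UI at $1,500, then 6 months at $500.
income = [1500] * 6 + [500] * 6

# The PIH-style plan: consume average income every month.
target = sum(income) / len(income)  # $1,000 per month

# Check feasibility under a strict no-borrowing constraint (assets >= 0).
assets, feasible = 0.0, True
for y in income:
    assets += y - target
    if assets < 0:
        feasible = False

print(target, feasible)  # 1000.0 True
```

Because the income drop is anticipated, the household piles up savings while UI is still flowing and runs them down afterward, so the borrowing constraint never binds. That's why the sharp spending drop actually observed at expiration is hard to square with the standard fixes.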

Anyway, as expected, some people came out to defend the good ol' PIH, including my friend David Andolfatto, one of the web's best econ bloggers and a ruthless enforcer of Fed dress codes. David's claim was that the PIH is still useful in some cases, and not in others.

That's fine...IF we know ex ante what the cases are. If it's just an ex post thing - "consumers look like they're completely smoothing in this case, but not in this other case" - then the theory has no predictive power ex ante. How do you know if consumers are going to perfectly smooth in advance, if sometimes they do and sometimes they don't, and you don't know why? Liquidity constraints, which we can probably observe, are one reason to expect PIH not to hold, but they're only one reason - as the Ganong and Noel paper shows, there are other important reasons out there, and we don't know what they are yet.

Here's an example of why I think we can't just be satisfied with the notion that theories work sometimes and not others. Consider a very simple theory of consumption: the $30k Hypothesis. Stated simply, it's the hypothesis that households consume $30,000 a year, every year.

Rigorous statistical tests will reject this hypothesis. But so what? Rigorous statistical tests will reject any leading economic theory, especially as data gets better and better. All theories are wrong (right?). Some households do consume $30k, or close to it. So the $30k hypothesis is obviously right in some cases, wrong in others.
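
Here's a minimal sketch of what I mean, using simulated (not real) consumption data - the distribution and sample size are assumptions for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical sample: annual household consumption, roughly lognormal around $45k.
consumption = rng.lognormal(mean=np.log(45_000), sigma=0.5, size=5_000)

# A rigorous test decisively rejects H0: mean consumption = $30,000...
t_stat, p_value = stats.ttest_1samp(consumption, popmean=30_000)
print(f"t = {t_stat:.1f}, p = {p_value:.3g}")

# ...and yet the hypothesis is "right" for a nontrivial slice of households.
near_30k = np.mean(np.abs(consumption - 30_000) < 3_000)
print(f"Share of households within $3,000 of $30k: {near_30k:.1%}")
```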

So should we use the $30k Hypothesis to inform our policy decisions? How should we know when to use it and when not to? Judgment? Plausibility? Political expedience?

This is a reductio, of course - if you don't impose any systematic restrictions on when to use a theory, you become completely anti-empirical, and priors rule everything. This is also why I'm a little uneasy about Dani Rodrik's idea that economists should rely heavily on judgment to pick which model to use in which situation. With an infinite array of models on the shelf, economists can always find one that supports their desired conclusions. I worry that judgment contains a lot more bias than real information.

Tuesday, January 03, 2017

Scenarios for the future of racial politics in America


If you don't live in a sensory deprivation tank, you probably noticed that the 2016 presidential election was rather racially charged. Many on the Democratic side charged Trump and his voters with racism, white supremacism, etc. Political scientists found that Trump's most ardent supporters were especially likely to score high on what they call "racial resentment" - their term for the belief that black Americans are getting more than they deserve. Meanwhile, the election results were very polarized by race:


Trump's victory was almost entirely furnished by the white vote, while Clinton overwhelmingly won all minorities. This repeated the pattern of 2012.  

Race has always been important in American politics - except for a brief period in the mid 20th century, blacks and Southern whites have always been on opposite sides of the partisan divide. The 1964 Civil Rights Act is widely acknowledged to have spurred the shift of Southern whites from the Democrats to the GOP. Meanwhile, "ethnics" - East and South European immigrants of the early 20th century - voted reliably Democratic until they merged with whites into the modern version of the white racial group.

Many (myself included) also believe that race is important to U.S. political economy. I buy the story that racial divisions are one of the big reasons that America doesn't have as big a welfare state as Europe. One big example of this is the way the GOP has profited from the "line-cutting" narrative - the idea that black Americans (and possibly other groups as well) are getting more than their fair share, "cutting in line" in front of more deserving whites. That narrative has probably damped white support for social safety nets.

So race is really important. But there are three big reasons why racial politics aren't set in stone. First, racial coalitions can change, as when Southern whites and blacks briefly united to support FDR. Second, racial definitions can change, as when "ethnics" joined the white race in the latter half of the 20th century. And third, the salience of race in politics can increase and decrease. So predicting the future of racial politics in the U.S. is no easy task.

Here are the possible scenarios, as I see them. These are extreme scenarios, of course; reality will probably be a lot messier, just as saying "Hispanics vote Democrat" ignores the 29% who voted for Trump. But anyway, here are five futures I can imagine:


1. Scenario 1: Race Loses Salience

This is not a future in which racial divisions vanish or America becomes "colorblind". It simply means that racial divisions would no longer be the main dividing line in American public life. People would largely stop defining their political interests by race. The GOP starts appealing to more nonwhites, and the Democrats start appealing to more whites - maybe because the parties shift their ideologies, or maybe because the racial groups themselves change what they want. Intermarriage helps by blurring the boundaries between races. In this scenario, Americans go back to fighting over economics, or perhaps national security or religion, instead of about race.

(There's a very extreme form of this scenario where race does vanish, and "American" becomes a catch-all racial group that absorbs all the groups. But I consider this extreme version to be pretty unlikely.)


2. Scenario 2: White Expands

This would be a repeat of what happened in the 20th century. Just as Italians, Jews, and Slavs became "white", Asians and Hispanics could come to be regarded as part of the same group as whites. In this scenario, high rates of intermarriage between whites, Asians, and Hispanics, combined with the fact that many Hispanics already identify as white, blur the distinction between the three groups. The new group might be called "white", or it might be called something else.

In this scenario, blacks would be the odd group out, as they ended up being in the 20th century. Since black people are expected to stay at only around 13% of the American population even with continued African immigration, there could be no winning coalition that did not include a very large piece of the new racial majority. Race would lose some (but not all) salience, as it did in the late 20th century, when economic issues joined racial issues as the dividing lines between Republican and Democrat.


3. Scenario 3: All Against Whites

In this scenario, tensions between whites and the other racial groups continue to rise. The GOP gains an increasing share of the white vote, while Asians and Hispanics become even more overwhelmingly Democratic. Asians, Hispanics, and blacks might or might not start to consider themselves a single race, but they would be united politically by their opposition to whites. Since other races are approaching demographic parity with whites, this scenario might see an increasingly racialized but still even split - whites could desert the Dems at about the same rate that they lost demographic heft, leaving the two parties still roughly equal for decades.


4. Scenario 4: White Splits

This is similar to Scenario 3, except that the white racial group would split in two. The dividing line might be education, or perhaps just politics itself. Those who left the white race would simply stop self-identifying as white. They might go back to identifying with their national ancestries ("German-American", "Irish-American", etc.), they might combine with Asians, Hispanics, and/or blacks into a new racial group, or they might create some new category for themselves. Meanwhile, the rump "white" race would simply be the GOP-voting part of the current white race, and would continue to identify as white. The dividing line would still mainly be race, but now the Democrats would have a structural advantage as the percentages of Asians and Hispanics increased.


5. Scenario 5: Politics Becomes Race

This is the weirdest scenario. It's a bit similar to Scenario 4, except that some Asians and Hispanics also leave their races and join the GOP-voting whites both electorally and racially. The nation would still have two big racial blocs, and the electoral dividing line would still be race - so this is different than Scenario 1 - but the American races of the future would in no way resemble the ones we see today. Politics and race would fuse into a single concept. Democrats and Republicans would become like Hutus and Tutsis, Bosniaks and Serbs - not necessarily able to tell each other apart visually, yet deeply believing themselves to be two totally different peoples. As you can see from the aforementioned analogies, I consider this to be a pretty pessimistic scenario.


These five scenarios don't exhaust the possibilities (everyone could start to identify as black!), but they're the only ones that seem to me to have any chance of happening. Actually, I'm not sure about Scenario 1 - it's kind of wishful thinking on my part.

Scenario 2 has the weight of history on its side - it's happened twice before. The white race in America has proven very capable of expanding to take in new entrants, as it did with Germans and Swedes in the 19th century and East and South Europeans in the 20th.

Scenario 3 is most similar to the recent electoral outcomes, so it's sort of a straight-line trend projection of increasing racial polarization. I also consider this to be a pretty pessimistic scenario.

Scenario 4 is a projection of a somewhat less prominent trend - the increasing polarization of the white electorate by education. College has emerged as one of the key institutions of American society, if not the key institution, and there's a chance that skill-biased technological change will make that situation irreversible. A combination of progressive education and the venom of GOP-voting whites could cause liberal whites to simply decide that the white race isn't something they want to be a part of anymore.

Scenario 5 is the projection of yet another trend - the Big Sort. Like-minded Americans are already moving near each other and marrying each other. Social media, and the splintering of mass media in general, could accelerate the trend. Partisanship is virulent in America at the best of times, and it does seem conceivable that it might eventually be even more powerful than race.

So what do you think? Did I miss any plausible scenarios? Which scenario do you think will come to pass? Which will be best for the Democrats, and which will be best for the GOP? How can the parties nudge American society toward their desired scenario? And what would be the consequences of each scenario, for policy, for people's lives, and for the integrity of the nation-state? How should we intellectuals try to steer the populace, if indeed we have any ability to do so? These are the big questions, and they're all beyond my ability to answer just yet.

Sunday, January 01, 2017

Some thoughts on UBI, jobs, and dignity


One of the more interesting arguments these days is between proponents of a universal basic income (UBI) and promoters of policies to help people get jobs, such as a job guarantee (JG). To some extent, these policies aren't really in conflict - it's perfectly possible for the government to mail people monthly checks and try to help them get jobs. But there are some tradeoffs here. First, there's money - both UBI and JG cost money, and more importantly cost real resources, which are always in limited supply. Also, there's political attention/capital/focus - talking up UBI takes time and attention away from talking up JG and other pro-employment policies.

One of the key arguments used by supporters of pro-employment policies - myself included - is that work is essential to many people's sense of self-worth and dignity. There's a more extreme variant of this argument, which says that large-scale government handouts actually destroy dignity throughout society. Josh Barro promotes this more extreme argument in a recent post.

This seems possible, but it's very hard to get evidence about whether welfare payments are actually dignity-destroying. Anyone who goes on welfare probably has other bad stuff happening to them in life, so there's a big endogeneity problem. Meanwhile, time-series analyses of nationwide aggregate happiness before and after welfare policy implementation are unlikely to tell us much. The best way to study this would be to find some natural experiment that made one group of people eligible for a big UBI-style welfare benefit, without allowing switching between groups - for example, payouts to some Native American group might fit the bill. My prior is that handouts are not destructive to dignity and self-worth, as Barro assumes - I predict that they basically have no effect one way or the other. But this is an empirical question worth looking into.

In his own post, Matt Bruenig argues against Barro. His argument, basically, is that many rich people earn passive income, and seem to be doing just fine in the dignity department:
If passive income is so destructive, then you would think that centuries of dedicating one-third of national income to it would have burned society to the ground by now...In 2015, according to PSZ, the richest 1% of people in America received 20.2% of all the income in the nation. Ten points of that 20.2% came from equity income, net interest, housing rents, and the capital component of mixed income...1 in 10 dollars of income produced in this country is paid out to the richest 1% without them having to work for it.
I don't think this constitutes an effective rebuttal of Barro, for the following reasons:

1. "Work" is subjective. Many rich people believe that investing constitutes work (I'd probably beg to differ, but no one listens to me). And founding a successful business, which creates capital gains, certainly requires a lot of work. 

2. Passive income very well might be destructive to the self-worth of the rich, on the margin. In fact, I have known a number of rich kids who inherited their wealth, and devoted their youths to self-destructive pursuits like drug sales and petty crime. It could be that for many rich people, the dignity-destroying effects of unearned income are merely outweighed by the dignity-enhancing effects of high social status and relative position.

3. Many rich people became wealthy through work - either a highly paid profession like CEO, or by starting their own companies. This past work may provide dignity for old rich people, just as retired people of all classes may derive dignity from their years of prior effort.

And Matt's argument certainly doesn't counter the less extreme version of the "jobs and dignity" argument (i.e., the version made by Yours Truly). Even if passive income isn't actively harmful to dignity, it might not be helpful either, in which case pro-employment policies would be more effective than UBI in promoting dignity.

But that said, I do think Matt's policy proposal is a good one:
A national UBI would work very similarly. The US federal government would employ various strategies (mandatory share issuances, wealth taxes, counter-cyclical asset purchases, etc.) to build up a big wealth fund that owns capital assets. Those capital assets would deliver returns. And then the returns would be parceled out as a social dividend.
This is something I've suggested as well. I see it as an insurance policy against the possibility that robots might really render large subsets of human workers obsolete. 

UBI isn't a bad policy. If robots take most of our jobs, it will be an absolutely essential policy. I just don't think it solves the dignity problem. And with Trump winning elections in part by promising to restore dignity, I think Democrats need an issue to counter him, and jobs policy is far more likely than UBI to fit this bill. 

Saturday, December 31, 2016

Who is responsible when an article gets misread?


How much of the responsibility for understanding lies with the writer of an article, and how much with the reader? This is not an easy question to answer. Obviously both sides bear some responsibility. There are articles so baroque and circuitous that to get the point would require an unreasonable amount of time and effort to parse, even for the smartest reader. And there are readers who skim articles so lazily that even the simplest and most clearly written points are lost. Most cases fall somewhere in between. And the fact that writers don't usually get to write their headlines complicates the issue.

See what you think about this one. The other day, Susan Dynarski wrote an op-ed in the New York Times criticizing school vouchers (a subject I've written about myself). Dynarski opens with the observation that economists are generally less supportive of vouchers than they are of most free-market policies:
You might think that most economists agree with this overall approach, because economists generally like free markets. For example, over 90 percent of the members of the University of Chicago’s panel of leading economists thought that ride-hailing services like Uber and Lyft made consumers better off by providing competition for the highly regulated taxi industry. 
But economists are far less optimistic about what an unfettered market can achieve in education. Only a third of economists on the Chicago panel agreed that students would be better off if they all had access to vouchers to use at any private (or public) school of their choice.
Here's the actual poll: 


As you can see, the modal economist opinion is uncertain about whether vouchers would improve educational quality, while the median is between "uncertain" and "agree". This clearly supports Dynarski's statement that economists are "far less optimistic" about vouchers than about Uber and Lyft. 

The headline of the article (which Dynarski of course did not write) might overstate the case a little bit: "Free Market for Education? Economists Generally Don’t Buy It". Whether the IGM survey shows that economists "generally don't buy" vouchers depends on what you think "don't buy" and "generally" mean. It's a little click-bait-y, like most headlines, but in my opinion not too bad. 

Scott Alexander, however, was pretty up in arms about this article. He writes:
By leaving it at “only a third of economists support vouchers”, the article implies that there is an economic consensus against the policy. Heck, it more than implies it – its title is “Free Market For Education: Economists Generally Don’t Buy It”. But its own source suggests that, of economists who have an opinion, a large majority are pro-voucher... 
I think this is really poor journalistic practice and implies the opinion of the nation’s economists to be the opposite of what it really is. I hope the Times prints a correction.
A correction!! Of course no correction will be printed, because no incorrect statements were made. Dynarski said that economists are "far less optimistic" about vouchers than about Uber/Lyft, and this is true. She also reported close to the correct percentage of economists who said they supported the policy in the IGM poll ("a third" for 36%). 

Scott is upset because Dynarski left out other information he considered pertinent - i.e., the breakdown between economists who were "uncertain" and those who "disagree". Scott thinks that information is pertinent because he thinks the article is trying to argue that most economists think vouchers are bad. 

If Dynarski were in fact trying to make that case, then yes, it would have been misleading to omit the breakdown between "uncertain" and "disagree". But she wasn't. In fact, her article was arguing that economists tend to have reservations about vouchers. And she supports her case well with data.

This is a special kind of straw man fallacy. Straw manning is where you present a caricature of your opponent's argument. But there's a particularly insidious kind of straw man where you characterize someone's arguments correctly, but get their thesis wrong. You misread someone's argument, and then criticize them for failing to support your misreading. Other examples of this fallacy might be:

1. You write an article citing Autor et al. to show that the costs of trade can be very high. Someone else says "This doesn't prove autarky is better than free trade!" But of course, you weren't trying to prove that.

2. You write an article arguing that solar is cost-competitive with fossil fuels by pointing out that solar power is expanding rapidly. Someone else says "Solar is still a TINY fraction of global generating capacity!" But of course, you weren't trying to refute that.

3. You write an article saying we shouldn't listen to libertarian calls to dismantle our institutions. Someone else says "Libertarians aren't powerful enough to dismantle our institutions!" But of course, you weren't trying to say they are.

I think Scott is doing this with respect to Dynarski's article. To be fair, his misreading was somewhat assisted by the headline the NYT put on the piece. But once he was reminded of the fact that the headline wasn't Dynarski's, and once he re-read the article itself and realized what its actual thesis was, I think he should have muted his criticism. 

Instead, he doubled down. He argued that most reasonable people, reading the article, would think it was arguing that economists are mostly against vouchers. But his justification for this continues to rely very heavily on the wording of the headline:
First, I feel like you could write exactly the opposite headline. “Public School: Economists Generally Don’t Buy It”... 
Second, the article uses economists “not buying it” as a segue into a description of why economic theory says school choice could be a bad idea... 
In the face of all of this, the New York Times reports the field’s opinion as “Free Market In Education: Economists Generally Don’t Buy It”.
On Twitter, he said: "the actual article is more misleading than the headline." But he appears to say this because he takes the headline - or, more accurately, his reading of it - as defining the thesis that Dynarski is then obligated to defend (when in fact she wrote the piece long before a headline was assigned to it). When he finds that Dynarski doesn't support his reading of a headline she didn't write, it is her article, not the headline, that he calls "misleading".

Of course, the fault here is partly that of the NYT, who used a headline that focused only on one part of Dynarski's article and overstated that part. It's a little harsh for me to say "Come on, man, you should know an article isn't about what its headline says it's about!" Misleading headlines are a problem, it's absolutely true. But after learning that Dynarski didn't write the headline, I think Scott should have been able to then read the article on its own, and go back and evaluate the arguments Dynarski actually makes. It's the refusal to do this that seems to me to constitute a straw-man fallacy.

Anyway, one last point: I think Dynarski is actually wrong that economists are more wary of vouchers than other free-market policies. Yes, economists in general are probably wary of voucher schemes. But they're also a lot more favorable to government intervention in a variety of cases than Dynarski claims. Klein and Stern (2006) have some very broad survey data (much broader than IGM). They find that 67.1% of economists support "government production of schooling" at the k-12 level, with 14.4% uncertain and 17.4% opposed. But they also record strong support for a variety of other interventionist policies, such as income redistribution, various types of regulation, and stabilization policy. On many of these issues, economists are more interventionist than the general public! So I think if Dynarski makes a mistake, it's to characterize economists as being generally pro-free-market. Their ambivalence about vouchers doesn't look very exceptional.

Saturday, December 24, 2016

The Fundamental Fallacy of Pop Economics


The Fundamental Fallacy of Pop Economics (which I get to name, because this is my blog and I can do whatever I want, mwahahaha) is the idea that the President controls economic outcomes.

The Fundamental Fallacy is in operation every time you hear a phrase like "the Bush boom" or "the Obama recovery". It's in effect every time someone asks "how many jobs Obama has created". It's present every time you see charts of economic activity divided up by presidential administration. For example, here's a chart from Salon writer Sean McElwee, using data from a paper by Alan Blinder and Mark Watson:


Blinder and Watson attribute the difference to "shocks to oil prices, total factor productivity, European growth, and consumer expectations of future economic conditions", but McElwee attributes it to progressive economic policy.

Larry Bartels has made headlines with similar analyses about inequality:


But the worst perpetrators of this fallacy tend to be conservative econ/finance commentators. And of these, the worst I've seen is Larry Kudlow. Kudlow is being mooted for chairman of the Council of Economic Advisers -- basically, the president's chief economist. Here's an excerpt from a Kudlow post in December 2007 (!) denying that the economy was in danger: 
The recession debate is over. It’s not gonna happen. Time to move on. At a bare minimum, we are looking at Goldilocks 2.0. (And that’s a minimum). The Bush boom is alive and well. It’s finishing up its sixth splendid year with many more years to come.
Notice how this is actually wrong (there was a mild recession in 2001), but Kudlow explicitly associates economic good fortune with the President's term in office. Here's another, from around the same time:
The GOP...has a positive supply-side message of limited government, lower spending, and lower tax rates....I believe the economic pendulum will soon swing in favor of the GOP. There’s no recession coming. The pessimistas were wrong. It’s not going to happen. At a bare minimum, we are looking at Goldilocks 2.0. (And that’s a minimum)...The Bush boom is alive and well.
Or here's Kudlow on Obama:
You've had so much war on business in the last eight or 10 years…I think that has really damaged the economy and has held businesses back from investing and creating jobs. It will take a while to turn that ship around," Kudlow said of Obama's economic policies.
You see the same kind of President-based magical thinking here. In fact, go back and read Kudlow's commentary over the years, and his whole body of work is shot through with this simple thesis - Republican presidents are great for the economy, Democratic presidents are terrible, etc. Kudlow has ridden the Fundamental Fallacy about as far as it's possible to ride it. 

In a recent post, James Kwak declares that Kudlow is a victim of what he calls "economism" (and which I call "101ism"). He thinks Kudlow is wedded to a vision of an economy where free markets always work best. But I respectfully disagree with James. Kudlow doesn't seem to think about supply and demand, or deadweight loss, or any of that - nothing that would be taught in an econ class. Kudlow's thinking is more instinctive and tribal - it's "Republican President = good economy". It's the idea that if the man in charge comes from Our Team, things must go well, and if it's someone from the Other Team, things are bound to be a disaster. The Fundamental Fallacy doesn't come from Econ 101 - it's far more primal than that, an upwelling of our deepest pack instincts.

So, you may ask, why is the Fundamental Fallacy a fallacy? Three basic reasons:


1. Reason 1: Policy isn't all-powerful. 

Macroeconomic models are not reliable, so it's very hard to get believable numbers for the effects of policies like the Bush tax cuts or Obama's stimulus bill. But most estimates show that the effect of both was very modest - the Bush tax cuts might have increased overall GDP by 0.5-1.5% in the short term, and probably had close to no effect in the long term. Meanwhile, the ARRA's effect on unemployment and growth was probably quite modest. Optimistic estimates have Obama's policy package reducing unemployment by about 0.5-1.5% from 2009 through 2013 - not nothing, but not nearly enough to make the Great Recession go away. And those are the most optimistic, favorable estimates.

Only in (some) econ models does policy have complete control over things like GDP and unemployment. But those models are almost certainly highly misspecified. In reality, policy has institutional constraints - nominal interest rates can't go much below zero, there's a federal debt ceiling, etc. And even more importantly, if policy becomes extreme enough, the models themselves start to lose validity - if you have the government go deeply enough into debt, the fiscal stimulus effect will no longer be the only way in which more government borrowing affects the economy. 

In reality, things like growth and unemployment are often determined by natural forces rather than government decisions. For example, I suspect that the pattern of higher growth during Democratic administrations cited by Blinder & Watson is at least partly endogenous - recessions cause white working-class voters to ignore social/identity issues and vote for Democrats like Clinton in 1992 and Obama in 2008, allowing those Democrats to take credit for the natural as well as the policy-induced parts of the recovery.


2. Reason 2: The President doesn't control policy.

Charts like those of Bartels and Blinder & Watson, as well as buzzwords like Kudlow's "Bush boom", look only at the party of the President. But Congress is often controlled by a different party. Obama and Clinton faced Republican Congresses for much of their time in office, and Reagan faced a Democratic Congress. Even when the President has a Congress of the same party, it's often difficult for him to push through his desired policies - witness Bush's failure to privatize Social Security, or Clinton's failure to enact fiscal stimulus.

Additionally, a lot of power is held by the states. Much of Obama's stimulus bill actually just went to shore up decreases in state spending. Meanwhile, the Fed controls interest rates, and though the President appoints the Fed chair, he has very little control over what that Fed chair subsequently decides to do. 


3. Reason 3: Policy often acts with a lag.

Cutting taxes does relatively little if spending isn't also cut. That's because if tax cuts aren't eventually matched by spending cuts, then the government eventually has to either hike taxes back up or default on its debt. Therefore, if tax cuts don't "starve the beast", their only effect will be through short-run fiscal stimulus. And tax cuts aren't a very efficient form of stimulus.

So, guess what? Tax cuts don't ever seem to lead to spending cuts. "Starve the beast" doesn't work. 

This is just one example of how policy often acts with "long and variable lags". Deregulation is another. Many people believe that Reagan's deregulations led to the boom of the late 1980s, but Carter actually slashed a lot more regulation than Reagan did. It could have taken years for those deregulations to lead to higher growth. 

Any structural policy you want to name - welfare reform, tax cuts, infrastructure spending, research spending, trade treaties - should only have its full effect after a number of years. It takes years for businesses to invest and grow, for trade patterns to shift, and (probably) for worker and consumer behavior to permanently change. Presidents serve only 8 years at most. So even if presidents controlled policy, and even if policy were very effective, we'd still see many presidents getting credit for their predecessors' deeds.


Obviously there are some big exceptions to this. The President can start a war, and wars can make the economy boom (as in WW2 for America) or wreck it utterly (as in WW2 for everyone else). Given enough power, a President could in theory wreak havoc on the economy, as Hugo Chavez did in Venezuela. In poor countries, a strong President like Deng Xiaoping can push through reforms that change a country's entire economic destiny. 

But when a country is already rich, when the President is restrained by checks and balances, and when policy changes are not sweeping or huge - i.e., as in the United States over the past half century - we would be well-advised not to exaggerate the economic impact of the chief executive.

Wednesday, December 14, 2016

Academic signaling and the post-truth world


Lots of people are freaking out about the "post-truth world" and the "war on science". People are blaming Trump, but I think Trump is just a symptom. 

For one thing, rising distrust of science long predates the current political climate; conservative rejection of climate science is a decades-old phenomenon. It's natural for people to want to disbelieve scientific results that would lead to them making less money. And there's always a tribal element to the arguments over how to use scientific results; conservatives accurately perceive that people who hate capitalism tend to over-emphasize scientific results that imply capitalism is fundamentally destructive.

But I think things are worse now than before. The right's distrust of science has reached knee-jerk levels. And on the left, more seem willing to embrace things like anti-vax, and to be overly skeptical of scientific results saying GMOs are safe. 

Why is this happening? Well, tribalism has gotten more severe in America, for whatever reason, and tribal reality and cultural cognition are powerful forces. But I also wonder whether a few of science's wounds might be self-inflicted. The incentives for academic researchers seem like they encourage a large volume of well-publicized spurious results. 

The U.S. university system rewards professors who have done prestigious research in the past. That is what gets you tenure. That is what gets you a high salary. That is what gets you the ability to choose what city you want to work in. Exactly why the system rewards this is not quite clear, but it seems likely that some kind of signaling process is involved - profs with prestigious research records bring more prestige to the universities where they work, which helps increase undergrad demand for education there, etc. 

But for whatever reason, this is the incentive: Do prestigious research. That's the incentive not just at the top of the distribution, but for every top-200 school throughout the nation. And volume is rewarded too. So what we have is tens of thousands of academics throughout the nation all trying to publish, publish, publish. 

As the U.S. population expands, the number of undergraduates expands. Given roughly constant productivity in teaching, this means that the number of professors must expand. Which means there is an ever-increasing army of people out there trying to find and report interesting results. 

But there's no guarantee that the supply of interesting results is infinite. In some fields (currently, materials science and neuroscience), there might be plenty to find, but elsewhere (particle physics, monetary theory) the low-hanging fruit might be picked for now. If there are diminishing returns to overall research labor input at any point in time - and history suggests there are - then this means the standards for publishable results must fall, or America will be unable to provide research professors to teach all of its undergrads.

This might be why we have a replication crisis in psychology (and a quieter replication crisis in medicine, and a replication crisis in empirical economics that no one has even noticed yet). It might be why nutrition science changes its recommendations every few months. It might be a big reason for p-hacking, data mining, and specification search. It might be a reason for the proliferation of untestable theories in high-energy physics, finance, macroeconomics, and elsewhere. And it might be a reason for the flood of banal, jargon-drenched unoriginal work in the humanities.

Almost every graduate student and assistant professor I talk to complains about the amount of bullshit that gets published and popularized in their field. Part of this is the healthy skepticism of science, and part is youthful idealism coming into conflict with messy reality. But part might just be low standards for publication and popularization. 

Now, that's in addition to the incentive to get research funding. Corporate sponsorship of research can obviously bias results. And competition for increasingly scarce grant money gives scientists every incentive to oversell their results to granting agencies. Popularization of research in the media, including overstatement of results, probably helps a lot with that.  

I recall John Cochrane once shrugging at bad macro models, saying something like "Well, assistant profs need to publish." OK, but what's the impact of that on public trust in science? The public knows that a lot of psych research is B.S. They know not to trust the latest nutrition advice. They know macroeconomics basically doesn't work at all. They know the effectiveness of many pharmaceuticals has been oversold. These things have little to do with the tribal warfare between liberals and conservatives, but I bet they contribute a bit to the erosion of trust in science. 

Of course, the media (including yours truly) plays a part in this. I try to impose some quality filters by checking the methodologies of the papers I report on. I'd say I toss out about 25% of my articles because I think a paper's methodology was B.S. And even for the ones I report on, I try to mention important caveats and potential methodological weaknesses. But this is an uphill battle. If a thousand unreliable results come my way, I'm going to end up treating a few hundred of them as real.

So if America's professors are really being incentivized to crank out crap, what's the solution? The obvious move is to decouple research from teaching and limit the number of tenured research professorships nationwide. This is already being done to some extent, as universities rely more on lecturers to teach their classes, but maybe it could be accelerated. Another option is to use MOOCs and other online options to allow one professor to teach many more undergrads. 

Many people have bemoaned both of these developments, but by limiting the number of profs, they might help raise standards for what qualifies as a research finding. That won't fully restore public trust in science - political tribalism is way too powerful a force - but it might help slow its erosion. 

Or maybe I'm completely imagining this, and academic papers are no more full of B.S. than they ever were, and it's all just tribalism and excessive media hype. I'm not sure. But it's a thought, anyway.

Friday, December 09, 2016

Is Twitter a dystopian technology?

"What will the apocalypse look like? The answer, to use a term generally understood but the specifics of which you cannot imagine, and which this document will attempt to describe, is 'warfare'."
- William Bell, Fringe


I've been wondering whether Twitter is a true dystopian technology. Meaning, a technology that makes each user better off, but makes the world worse off as a whole. 

Note that you don't need a dystopian technology to create a dystopia. Joseph Stalin, Mao Zedong, and plenty of others managed to create dystopias using much the same technologies used in much happier, freer places. The question is whether some technologies, merely by existing, push the world toward a bad equilibrium.

How could a technology that's good for each individual be bad for the world? Externalities. For example, suppose that the only fuel we had was so horribly polluting that it would destroy the planet if we used it for even just a few decades. But each individual's choice of whether to use the fuel wouldn't be enough to tip the balance away from or toward planetary destruction. So without some kind of outside authority banning the use of the fuel, or some massive unprecedented outpouring of altruism, the world would be doomed. In fact, if global warming does destroy human civilization, fossil fuels will turn out to have been a dystopian technology.


What are some other examples of dystopian technologies? I'd say the jury is still out on the nuclear bomb. If nuclear deterrence dramatically reduces war and no one ends up using nukes, then it was a good technology. But if nuclear war eventually blows up civilization, it was a dystopian technology. Obviously, if weapons of mass destruction got cheap enough, they'd put society in untenable danger. I'm pretty sure that the ending of The Stars My Destination, in which (SPOILER) a guy runs around tossing out weapons of mass destruction to everyone on the street, won't turn out well.

Another possible example might be the stirrup. Stirrups enabled cavalry to become peerless warriors. For at least a thousand years, cavalry ruled the battlefield, and nomadic tribes from the Huns to the Mongols raided and/or conquered every settled civilization. Not coincidentally, during this millennium (usually called the Middle Ages), human wealth and development didn't advance very much. Then again, when global trade eventually linked the world, horses were an important piece of the equation. So without knowing whether globalization and science would have been possible without stirrups, it's hard to tell whether stirrups made the world worse off. 


These examples illustrate how hard it is to do a cost-benefit accounting for a technology, even with the full benefit of millennia of hindsight. So I don't expect the question I ask in this post to ever have a satisfactory answer. But in any case, it's interesting to ponder.

Is Twitter a dystopian technology? Obviously, Twitter is worth it for hundreds of millions of individuals to use. The costs - harassment, bad feelings, the risk of accidentally tweeting something that harms your career - are considerable, and no doubt have contributed to the company's user growth stagnation. But for many users, those costs are still worth paying. 

But what about the externalities of Twitter? Well, one of the biggest negative externalities is war. Twitter's format is very conducive to firing off quick thoughts without carefully considering the consequences. The brevity of tweets also makes them easily subject to misinterpretation. Imagine if President Trump fired off an intemperate tweet that leads to a nuclear war with China or Russia. It's quite possible that if he were forced to use a longer-format medium like Facebook, the risk of that happening would be much lower. Already Trump has used Twitter to denounce China's military buildup in the South China Sea.


So there's that.

A more insidious way that Twitter could generate negative externalities is to contribute to political polarization. In an earlier post, I described why Twitter is conducive to bad feelings, aggression, perceived aggression, and the creation of aggressive online mobs. If you don't believe me when I say Twitter is a vicious jungle, look at what happened to the Microsoft chatbot that taught itself to tweet by watching other people:


In brief, the features of Twitter that make it conducive to fighting are:

1. Limited length of replies. 

This is the big one. Like the faster-than-light internet in A Fire Upon the Deep, Twitter allows only short, pithy replies. Short replies have no room for self-deprecation, qualifiers, jokes, praise, caveats, or any of the other social lubricants that make debates friendly and collegial. Typically, the only way to get the point across in 140 characters is to be blunt. 

In addition, short replies make it harder to establish a voice when writing; that forces readers to project an imagined voice onto each tweet. Some people will assume the best, but others will assume the worst - that the writer of the tweet is being rude, sarcastic, or aggressive. And since an (assumed) unfriendly tone does more social damage than an (assumed) friendly tone heals, the net result of random errors in tone-interpretation will be to create bad feelings, including resentment, offense, threat perception, and a feeling of persecution.

2. Retweets that make replies impossible.

When I retweet someone, all of my followers can see both her tweet and my added commentary. But the author of the tweet cannot issue a reply that all of my followers automatically see. She can reply, but readers will see her reply only if they scroll down through the thread. 

So imagine if someone tweets "I think Democrats should try to appeal to some Trump voters," and I quote this tweet, adding the comment "This person thinks we should coddle racists." All 46,700 of my followers can see my uncharitable interpretation of her statement. But if she wants to reply "No that's not what I meant at all," then my 46,700 followers will only see her reply if they click on my tweet and scroll through the thread. Most will probably not do this, since it takes time and effort. Instead, they will probably see my uncharitable interpretation, assume it's true, and feel contempt for the person who wrote the original tweet. Also, my followers will now know her handle, so they will be able to tweet mean things directly to her ("Asshole, please delete your account", etc.).

I see this happen all the time.

3. Open mentions.

On Facebook, only people I approve can comment on my posts. I can make posts open, or I can limit replies to only my personal friends. On Twitter, on the other hand, anyone can automatically reply directly to anyone else, unless they have been blocked or muted. Since mentions are the main way that people talk to each other on Twitter, this means that if you want to talk to people on Twitter, you have no choice but to scroll through the replies of anyone who decides to talk to you. There's just no way not to. In addition, there's no way to untag yourself from other people's tweets, as you can do on Facebook.



4. Anonymity

Twitter, like discussion forums - but unlike Facebook - thrives on anonymity. This of course means the platform is full of bots. But more importantly, as we all know, online anonymity allows people to blow off steam, and reduces people's incentive to be friendly and diplomatic.


These four basic features of Twitter's technology encourage aggressive discourse, bad feelings, harassment, and constant rhetorical combat. That's certainly a cost to the user, but it might also encourage partisanship. Each person, in order to feel emotionally safe from the constant attacks that he feels like he's getting on Twitter, might be pushed to join an ideological group - like a prison gang, for protection (this excellent analogy was originally made by Charles Johnson, the nutty right-wing reporter).

Ideological polarization creates few costs for the user. It really doesn't make my online experience much worse to join the BernieBros, or the Alt-Right, or the Social Justice Warriors, or GamerGate, or the Libertarians, or whoever. I sacrifice a little bit of opportunity to say maverick, unorthodox things, and in return I get a whole bunch of people who have my back and are willing to beat off waves of attackers on a daily basis. 


But ideological polarization might be very costly for society. Eventually, if no one listens to the other side, people can come to believe that everyone in the opposing tribe hates them and wants to destroy them. The result can be large-scale social strife, civil war, or simply long-term political dysfunction. 

Many people have remarked that Twitter wielded a huge amount of influence in the 2016 presidential election. That election generated an unprecedented level of bad feelings. You can chalk this up to the candidates themselves, but it seems very likely to me that the constant, unending, bitter Twitter wars were a big part of what made the campaign so unbearable. I myself would occasionally "detox" from campaign Twitter for days at a time, and find myself feeling much better about U.S. politics, about both candidates, and about life and the world in general.

But note that political tribalism can also dramatically raise the cost of quitting Twitter. Even if the online conflict is grueling and unpleasant - and even if it would make everyone happier if the conflict would just stop, or at least quiet down - one can't abandon the battlefield to the enemy. Once you join and commit to a Twitter tribe, you can't just check out, any more than you can desert your platoon in the middle of a war. 


So it's possible that Twitter's existence is adding significantly to America's already worrying polarization problem - causing American society to turn into a new sort of war zone. And if this is indeed the case, it's the four aforementioned basic features of the technology that are doing it.

Maybe the people who run Twitter are smarter than me, but personally I just don't see a way to get rid of any of those 4 features without fundamentally changing the product. And this is a product we know makes money, has a strong network effect, and fills an important niche in the media. Twitter's problems seem inherent to any Twitter-like medium, and society seems like it will always have a need for a Twitter-like medium as long as it's technologically feasible.

So to sum up: Twitter is a great tool, and creates lots of social value. Personally, I have no intention of quitting, and most of the existing users don't seem likely to quit either. But it's possible that by exacerbating polarization and fostering large-scale social conflict, Twitter will end up destroying more social value than it creates.

If this is the case - if I haven't exaggerated Twitter's social costs or underrated its benefits - what's the solution? The government could just ban Twitter, or heavily police it like in China. That seems like it would have very high costs for society, since a government that does that has basically chucked free speech out the window (just look at China). So I think that society's only real way out of Twitter Hell is to innovate its way out. Solve a dystopian tech problem by inventing utopian tech - a new product that fulfills the useful social role of Twitter but avoids the negative externalities, by somehow fostering peaceful, friendly, constructive dialogue. Or modify Twitter in ways that fix its problems, which of course I haven't thought of yet. If stirrup-using horse nomads keep conquering civilization, invent the gun. 

Which is basically a cop-out. "Invent magical new technology that solves the problem." Check. But since the march of technology only goes forward, if you stumble into a dystopian cul-de-sac, there's generally no way out but forward into the unknown.


Tuesday, December 06, 2016

More about the Econ 101 theory of labor markets


I've received many good responses to my post about the "Econ 101" (undifferentiated, competitive, partial-equilibrium, short-run) theory of labor markets. But there are a couple of responses that I got many times, which I think are worth discussing in detail.

To recap: I pointed out that the Econ 101 theory can't simultaneously account for the small labor-market responses to immigration shocks and to minimum wage increases. I said this implies that the Econ 101 theory isn't a good way to think about labor markets, and that instead we should pick another go-to mental model - general equilibrium, or search and matching theory, or monopsony, etc.

Two common responses I got were:

1. "Immigration doesn't just shift labor supply; it shifts labor demand too."

2. "Labor is probably very heterogeneous, so one S-D graph shouldn't have to fit both these facts at once."

The first response is a very good one. I like it a lot. What it means is that we should always think about labor markets in terms of general equilibrium, not partial equilibrium. 

An essential part of partial equilibrium thinking (or as David Andolfatto likes to call it, "Marshall's Scissors") is that the supply and demand curves shift independently of each other - a shock moves one curve while the other stays put. This isn't apparent just from drawing the graphs, but if you think about it, it's obvious: if a single shock - a storm, a policy change, a change in the price of a substitute or complement, etc. - shifts both curves at once, you can't read the effect off the usual one-curve-shift diagram, and depending on the relative sizes of the shifts, the effect on price (or on quantity) becomes indeterminate. Econ 101 doesn't allow you to reason from a price change, but it should allow you to reason from a price change and a quantity change.

Immigration shifts labor demand because of general equilibrium effects. When more labor becomes available in a city - and when the new immigrants themselves start demanding goods and services - more companies move there or start up there, shifting labor demand out. That pushes the price of labor back up toward where it was before the immigrants came. If that shift happens quickly, studies won't show any sizable dip in wages from the influx of immigrants. General equilibrium is harder to think about than partial equilibrium (quick mental exercise: what are the general equilibrium effects of a minimum wage hike?), but in the case of labor, the evidence shows that the simpler theory just won't cut it.
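Here's a tiny numerical sketch of that logic - linear curves and made-up numbers, purely to fix intuition, not anyone's actual model:

```python
# Toy general-equilibrium sketch: an immigration supply shift, with and without
# an offsetting demand shift. All numbers are invented for illustration.

def equilibrium_wage(a, b, c, d):
    # Labor demand: L = a - b*w ; labor supply: L = c + d*w
    return (a - c) / (b + d)

a, b, c, d = 200.0, 5.0, 50.0, 5.0
dL = 20.0                                            # size of the immigrant inflow

w_before = equilibrium_wage(a, b, c, d)              # original equilibrium wage
w_partial = equilibrium_wage(a, b, c + dL, d)        # supply shifts out, demand held fixed
w_general = equilibrium_wage(a + dL, b, c + dL, d)   # firms follow the workers: demand shifts out too

print(w_before, w_partial, w_general)                # 15.0, 13.0, 15.0
```

In this toy case the demand shift exactly undoes the wage drop; in reality, the question is how fast and how fully labor demand adjusts - which is exactly what the empirical studies pick up.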

OK, on to response #2. It might be the case that immigrants don't compete directly with native-born workers, but fill economic niches that previously were mostly left unfilled. It might be that if we sliced the data in just the right way, we could find a slim subgroup of people whose wages are clobbered by immigration. In fact, George Borjas has tried to do exactly this. But any properly trained empirical researcher will recognize this as data mining. 

Meanwhile, some people claim that minimum wage and immigration affect two very different segments of the labor market. That's certainly possible, but both types of studies are usually about low-wage, low-education workers. Many limit themselves to teenagers, and find the same thing. How much more granularity do you want?

The problem with invoking extreme heterogeneity to explain seemingly incommensurate labor market facts is that it turns predictive theory into useless just-so stories. Assuming unobservable characteristics - or mucking around in the data until you find some that seem to work - introduces free parameters into the model, meaning it can never really be compared with data. Heterogeneity assumptions are just the proverbial epicycles. Without some assumptions about which segment of the labor market the theory applies to, the Econ 101 theory becomes totally useless.

So while some degree of labor market heterogeneity is real, the more it gets invoked, the more it weakens the overall theory. Theories should always be penalized for sticking in more parameters.

Anyway, those were two thoughtful and interesting categories of responses. I also got a few that were...well...a little less deep.  ;-)

Saturday, December 03, 2016

An econ theory, falsified


What does it mean to "falsify" a theory? Well, you can't falsify a theory globally - even if it's shown to be false under some conditions, it might hold true far away or in the future. And at a high enough degree of precision, almost every theory can be falsified, since almost every theory is just an approximation of reality.

What falsification really means - or should mean, anyway - is that a theory is shown to not work as well as we'd like it to under a well-known set of conditions. So since people have different expectations for a theory - some demand that theories work with high degrees of quantitative precision, while others only want them to be loose qualitative guides - whether a theory has been falsified will often be a matter of opinion.

But there are some pretty clear-cut cases. One of them is the "Econ 101" theory of the labor market. This is a model we all know very well - it has one labor supply curve and one labor demand curve, one undifferentiated type of labor and one single wage.

OK, so what are some empirical things we know about labor markets? Here are two stylized facts that, while not completely uncontroversial, are pretty one-sided in the literature:

1. A surge of immigration does not have a big immediate negative impact on wages.

2. Modest minimum wage hikes do not have a big immediate negative impact on employment.

George Borjas disputes the first of these, but he's just wrong. A few economists (and MANY pundits) dispute the second, but the consensus among academic economists is pretty solid.

The first fact alone does not falsify the Econ 101 theory of labor markets. It could be the case that short-run labor demand is simply very elastic. Here's a picture of how that would work:


Since labor demand is elastic, the supply shift from a bunch of immigrants showing up in the labor market doesn't have a big effect on wages in this picture.

BUT, this is impossible to reconcile with the second stylized fact. If labor demand is very elastic, minimum wage should have big noticeable negative effects on employment (represented by the amount of green between the blue and red lines on this graph):


By the same token, if you try to explain the second stylized fact by making both labor supply and demand very inelastic, then you contradict the first stylized fact. You just can't explain both of these facts at the same time with this theory. It cannot be done.
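To make the arithmetic of that tension concrete, here's a minimal numerical sketch - linear supply and demand curves around a made-up equilibrium, with invented slopes, nothing calibrated to real data:

```python
# Econ 101 labor market with linear curves around an initial equilibrium:
# demand slope -b, supply slope +d (workers per unit wage). Illustrative only.

W_STAR, L_STAR = 10.0, 100.0   # pre-shock equilibrium wage and employment (arbitrary units)

def wage_drop_from_immigration(b, d, dL):
    # A rightward horizontal supply shift of dL workers lowers the wage by dL/(b+d).
    return dL / (b + d)

def employment_drop_from_min_wage(b, w_min):
    # With a binding wage floor, employment is read off the demand curve.
    return b * (w_min - W_STAR)

scenarios = {"elastic demand": (40.0, 5.0), "inelastic demand and supply": (2.0, 1.0)}
for name, (b, d) in scenarios.items():
    dw = wage_drop_from_immigration(b, d, dL=10.0)     # immigrant inflow = 10% of workforce
    dl = employment_drop_from_min_wage(b, w_min=11.0)  # 10% minimum wage hike
    print(f"{name}: immigration cuts wages {100 * dw / W_STAR:.0f}%, "
          f"min wage hike cuts employment {100 * dl / L_STAR:.0f}%")
```

With elastic demand, immigration barely moves wages but the minimum wage hike wipes out a big chunk of employment; with inelastic curves it's the reverse. Whatever slopes you pick, one of the two stylized facts blows up.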

So the Econ 101 theory of labor supply and labor demand has been falsified. It's just not a useful theory for explaining labor markets in the short term (the long term might be a different story). It's not a good approximation. It doesn't give good qualitative intuition. And it's especially bad for explaining the market for low-wage labor, which is the market that most of the aforementioned studies concentrate on.

What is a better theory of the labor market? Maybe general equilibrium (which might say that immigration creates its own demand). Maybe a model with imperfect competition (which might say that minimum wage reduces monopsony power). Maybe search and matching theory (which might say that frictions make all short-term effects pretty small). Maybe a theory with very heterogeneous types of labor. Maybe something else.

But this theory, this simple Econ 101 short-run partial-equilibrium price theory of undifferentiated labor, has been falsified. If econ pundits, policy advisors, and other public-facing econ folks were scientifically minded, we'd stop using this model in our discussions of labor markets. We'd stop casually throwing out terms like "labor demand" without first thinking very carefully about how that concept should be applied. We'd stop using this framework to think about other policies, like overtime rules, that might affect the labor market.

Sadly, though, I bet that we will not. We will continue using this falsified theory to "organize our thoughts" - i.e., we'll keep treating it as if it were true. So we will continue to make highly questionable policy recommendations. The fact that this theory is such a simple, clear, well-understood tool - so good for "organizing our thinking", even if it doesn't match reality - will keep it in use long after its sell-by date. That's what James Kwak calls "economism", and I call "101ism". Whatever it's called, it's not very scientific.


Updates

In a follow-up post, I think about a couple of common responses to this post.