Posts tagged ‘economic sociology’

Ratings game

| Gabriel |

David Waguespack and Olav Sorenson have an interesting new paper on Hollywood (their earlier Hollywood paper is here) that contributes to the literature on categorization, rankings, and sensemaking that increasingly seems to be the dominant theme in econ soc. The new paper is about MPAA ratings (G, PG, PG13, R, NC17) and finds that, controlling for the salaciousness of the content, the big studios get more lenient ratings than small studios. The exact mechanism through which this occurs is hard to nail down, but it occurs even on the initial submission, so it’s not just that studios continuously edit down and resubmit the movie until they get a PG13 (which is what I would have expected). Thus the finding is similar to some of the extant literature on how private or quasi-private ranking systems can have effects similar to government mandates, but it adds the theoretical twist that rankings can function as a barrier to entry. This kind of thing has been suspected by the industry itself, and in fact I heard the findings discussed on “The Business” in the car and was planning to google the paper only to find that Olav had emailed me a copy while I was in transit.

Aside from the theoretical/substantive interest, there are two methods points worth noting. First, their raw data on salaciousness is a set of three Likert scales: sex, violence, and cussing. The natural thing to do would have been to just treat these as three continuous variables or even sum them to a single index. Of course this would be making the assumption that the effects are additive and linear and that the intervals on the scale are consistent. They avoided this problem by creating a massive dummy set of all combinations of the three scores. Perhaps overkill, but pretty hard to second-guess (unless you’re worried about over-fitting, but they present the parametric models too and everything is consistent). Second, to allow for replication, Olav’s website has a zip with their code and data (the unique salaciousness data, not the IMDB data that is available elsewhere). This is important because, as several studies have shown, “available on request” is usually a myth.
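Their dummy-saturation move is easy to illustrate. Here is a minimal sketch in Python (their replication archive itself is in Stata, and the scores below are invented for illustration): instead of entering the three scales as continuous regressors, build one indicator per observed combination of scores, which imposes no additivity, linearity, or equal-interval assumptions.

```python
# Hypothetical films scored on three Likert scales: (sex, violence, cussing).
# These values are made up; they are not Waguespack and Sorenson's data.
films = [(1, 2, 1), (3, 3, 2), (1, 2, 1), (4, 1, 3), (2, 2, 2)]

# One dummy per combination of scores that actually appears in the data.
combos = sorted(set(films))
dummies = [[int(film == combo) for combo in combos] for film in films]

for film, row in zip(films, dummies):
    print(film, row)
```

Each film gets exactly one 1 in its row, so a regression on these dummies estimates a separate effect for every score profile rather than assuming the scales add up.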

March 22, 2010 at 4:27 am

Age, Period, Flim Flam

| Gabriel |

A few months ago, The Chronicle had a very interesting article on generation gurus, who claim insight into the “millennials,” or as actual social scientists boringly call them, “the 1980s and 1990s birth cohorts.” Lots of organizations, including college admissions boards, are really interested in these gurus’ advice on how to understand the kids these days. (Which reminds me of the obnoxious creative team of Smitty and Kurt, who were brought on to Sterling Cooper to sell Martinson’s coffee to the Pepsi generation).

I remember way back when I was in high school reading a long-form magazine article (The Atlantic?) on Howe and Strauss and I thought it was a great theory, in part because some of the details seemed like they were (or ought to be) true and in part because the generational dialectic struck me as plausible. Basically they say that idealistic generations are followed by cynics who in turn are followed by pragmatic workhorses who are in turn followed by idealists, with the mechanism being that each generation reacts against its parents’ excesses. According to this schema, the reason I grew up listening to Nirvana was as a reaction to the “All you need is love” stuff of the boomers.

When I got all growed up and actually started dealing with, you know, systematic data, I was more than a little disappointed that while cohort change is not always linear, it is basically monotonic and it is definitely not cyclical or dialectical. I’m primarily an orgs guy rather than a people guy, but I’ve still done some moderately extensive age/period/cohort stuff with the GSS and SPPA and on everything I looked at (mostly social attitudes and cultural consumption), there’s absolutely no evidence whatsoever for the Howe and Strauss dialectic. So for instance, if you look at strong preference for opera and classical you first have to limit the data to BA or higher education (less educated people don’t like this music regardless of cohort) and then you see a clear trend that the music is popular with educated people born before 1950 and unpopular with educated people born after 1950. There is no distinction between “boomers” and “gen X” in the data, and in fact older boomers are still into high culture. The only issue that I’m aware of that even vaguely approximates the Howe and Strauss model is abortion attitudes, but a) the cohort effects on abortion attitudes are weak and b) the effects of cohort on other sex/reproduction opinions, like gay marriage, are monotonic.

So given that the empirical evidence for these ideas is so weak, why are college administrators, marketers, etc, so into it? I think the answer has to be that it was facially plausible and more importantly that it was pretty clear. A money quote from the article is:

Amid this complexity, the Millennials message was not only comforting but empowering. “It tickled our ears,” says Palmer H. Muntz, director of admissions and an enrollment-management consultant at Lincoln Christian University, in Illinois. “It packaged today’s youth in a way that we really wanted to see them. It gave us a formula for understanding them.”

This is reminiscent of the argument that John Campbell gave for explaining the popularity of supply side economics. His argument is basically that the idea gained popularity not because it had especially powerful theory or empirics behind it, but because it was comprehensible and gave a tractable guide to action. In theory, Lafferism is contingent on the important question of where the current tax regime lies relative to the curve’s maximum, but in practice this contingency was elided and people took it to mean “always cut taxes.” That is, the appeal of the idea was not so much that we had good reasons to think it reflected reality (or, more specifically, that it was applicable to current circumstances) as that it clearly prescribed action — and, I think it’s worth adding, actions that were desirable in a free-lunch kind of way. In the same way, if you read Howe and Strauss, they are relentlessly positive about the millennials, portraying them as a dialectically-generated reproduction of the go-getters who first stormed the beaches of Normandy and then nested into Levittown. Victory, affluence, swing music, what’s not to love about these kids?
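Campbell’s point about the elided contingency is easy to see with a toy revenue curve. A quick sketch in Python (the quadratic form is my own invention, purely for illustration): whether “cut taxes” raises revenue depends entirely on which side of the curve’s peak you start from.

```python
def revenue(rate):
    """Toy Laffer curve: revenue = rate * taxable base, where the base
    shrinks linearly as the rate rises. The peak is at rate = 0.5."""
    return rate * (1.0 - rate)

# Starting above the peak, a tax cut raises revenue...
print(revenue(0.7), "->", revenue(0.6))
# ...but starting below it, the same cut loses revenue.
print(revenue(0.3), "->", revenue(0.2))
```

The slogan “always cut taxes” is only true on the right half of the curve, which is exactly the contingency that got elided.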

You can see similar wishful thinking in the eagerness of municipal officials to throw consulting contracts at Richard Florida. Florida’s basic shtick is that if Methenburg, PA wants to develop they should just rezone old warehouses and put up a sign reading “Methenburg Arts District,” this will attract artists, who in turn will attract engineers, who in turn will turn Methenburg into the next Silicon Valley. I always imagine after Florida gives his powerpoint, the city councilmen or county selectmen are enthusiastically coming up with ideas about how to be “cool” like Murray Hewitt on Flight of the Conchords. It sounds like a perfect plan: Methenburg gets to be “cool,” we get development, and it doesn’t require either making expenditures or forgoing revenues to any appreciable extent.

If only it were true.

January 19, 2010 at 5:14 am 2 comments

Sampling on the independent variables

| Gabriel |

At Scatterplot, Jeremy notes that in a reader poll, Megan Fox was voted both “worst” and “sexiest” actress. Personally, I’ve always found Megan Fox to be less sexy than a painfully deliberate simulacrum of sexy. The interesting question Jeremy asks is whether this negative association is correlation or causation. My answer is neither: it’s truncation.

What you have to understand is that the question is implicitly about famous actresses. It is quite likely that somewhere in Glendale there is some barista with a headshot by the register who is both fugly and reads lines like a robot. However this person is not famous (and probably not even Taft-Hartleyed). If there is any meritocracy at all in Hollywood, the famous are — on average — going to be desirable in at least one dimension. They may become famous because they are hot or because they are talented, but our friend at the Starbucks on Colorado is staying at the Starbucks on Colorado.

This means that when we ask about the association of acting talent and sexiness amongst the famous, we have truncated data, where people who are low on both dimensions are dropped from the sample entirely. Within the truncated sample there may be a robust negative association, but the causal relationship is very indirect, and it’s not as if having perky breasts directly obstructs the ability to convincingly express emotions (a botoxed face on the other hand …).

You can see this clearly in simulation (code is at the end of the post). I’ve modeled a population of ten thousand aspiring actresses as having two dimensions, body and mind, each of which is drawn from a standard normal. As built in by assumption, there is no correlation between body and mind.

Stars are a subsample of aspirants. Star power is defined as a Poisson centered on the sum of body and mind (and re-centered to avoid negative values). That is, star power is a combination of body, mind, and luck. Only the 10% of aspirants with the most star power become famous. If we now look at the correlation of body and mind among stars, it’s negative.

This is a silly example, but it reflects a serious methodological problem that I’ve seen in the literature and I propose to call “sampling on the independent variable.” You sometimes see this directly in the sample construction when a researcher takes several overlapping datasets and combines them. If the researcher then uses membership in one of the constituent datasets (or something closely associated with it) to predict membership in another of the constituent datasets (or something closely associated with it), the beta is inevitably negative. (I recently reviewed a paper that did this and treated the negative associations as substantive findings rather than methodological artifacts).

Likewise, it is very common for a researcher to rely on prepackaged composite data rather than explicitly creating original composite data. For instance, consider that favorite population of econ soc, the Fortune 500. Fortune defines this population as the top 500 firms ranked by sales. Now imagine decomposing sales by industry. Inevitably, sales in manufacturing will be negatively correlated with sales in retail. However this is an artifact of sample truncation. In the broader population the two types of sales will be positively correlated (at least among multi-divisional firms).

clear
set seed 12345
set obs 10000
gen body=rnormal()
gen mind=rnormal()
*corr in the population
corr body mind
scatter body mind
graph export bodymind_everybody.png, replace
*keep only the stars
*star power: Poisson centered on body+mind (shifted to avoid negative means)
gen talent=body+mind+3
recode talent -100/0=0
gen stardom=rpoisson(talent)
gsort -stardom
keep in 1/1000
*corr amongst stars
corr body mind
scatter body mind
graph export bodymind_stars.png, replace
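The Fortune 500 version of the same artifact can be checked the same way. Here is a quick sketch in Python rather than Stata (numpy, with invented numbers): draw two positively correlated division-level sales figures for a big population of firms, keep the 500 largest by total sales, and the correlation flips sign in the truncated sample.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000  # population of firms

# Manufacturing and retail (log) sales, positively correlated in the population.
cov = [[1.0, 0.3], [0.3, 1.0]]
mfg, retail = rng.multivariate_normal([0, 0], cov, size=n).T

pop_r = np.corrcoef(mfg, retail)[0, 1]

# "Fortune 500": keep only the top 500 firms ranked by total sales.
top = np.argsort(mfg + retail)[-500:]
top_r = np.corrcoef(mfg[top], retail[top])[0, 1]

print(f"population r = {pop_r:.2f}, top-500 r = {top_r:.2f}")
```

The population correlation is +0.3 by construction; among the top 500 it should come out clearly negative, even though nothing causal connects the two divisions.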

January 4, 2010 at 4:51 am 10 comments

I got a “5” on the performativity exam!

| Gabriel |

So apparently high schools are encouraging huge numbers of kids to take advanced placement tests. If you’ve ever seen one of those surveys where 40% of high school kids think Obi-Wan Kenobi was one of the founding fathers, you’ll be able to guess that the outcome is that very few of them pass the test. In a rare triumph of reason in things having to do with education, there is now a backlash against this. I find this interesting for two reasons:

1. I actually know something about this topic. Although I never published it, one of my two youthful forays into ethnography was at a high school college counseling office in the late 90s. One of the main things going on there was the struggle to organize the AP exams. When kids took AP classes they were obligated to take the AP exam in the spring. However in part because the school had a dismal track record of AP passage and in part because the kids themselves had to pay the AP fee, the students resisted taking the exam. This put the counselor in an adversarial relationship with the students as she tried to coerce and cajole them into paying for and taking the exam, which they (but not she) recognized was a humiliating waste of time and money.

2. Apparently the main driver of AP creep is that some high school ratings algorithms count how many kids in a high school take the AP rather than, oh, I don’t know, pass the AP. This of course is consistent with the typical nonsense you see of highly institutionalized sectors measuring performance by inputs rather than outputs. Likewise, here in California the CSU and UC have long had the concept of a weighted GPA where an AP class adds a full letter grade such that a lot of my friends in high school had GPAs of 4.2. High schools pander to these magazine rankings / college admissions criteria by adding more AP classes than they have competent students to fill. This ratings performativity is well familiar to economic sociologists thanks to work done recently on law school ratings by Espeland and Sauder. Likewise, McArdle has had two posts (part 1, part 2) recently on the development and now perfection of CBO-scoring gamesmanship.

December 23, 2009 at 1:50 pm 3 comments


| Gabriel |

On NPR the other day I heard a story about how a lobbyist forged letters to Congress from the NAACP and AAUW opposing the Waxman-Markey cap-and-trade bill. I thought this was amusing on several levels, only the first of which is that apparently the bill wasn’t convoluted and toothless enough to buy off all of the incumbent stakeholders, as some of them hired this guy. The real interest though is that the blatant absurdity of this story heightens the basic dynamics of the bootlegger and Baptist coalition in that in this case the bootlegger was so desperate for a Baptist that he imagined one, much as the too-good-to-be-true quotes conjured by fabulist reporters heighten the absurd genre conventions of journalism.

The bootlegger and Baptist model is a part of public choice theory that argues that policy making often involves a coalition between stakeholders motivated by rent-seeking and ideologues with principled positions. In the titular example, the policy is blue laws, which would be supported both by Baptists who don’t like booze violating the Sabbath and clandestine alcohol entrepreneurs delighted to see demand pushed from legitimate retailers to the black market. We had something close to a literal bootlegger-Baptist model with the Abramoff scandal, in which various gambling interests paid the Christian Coalition to kneecap the competition. Another recent prominent example is that, before being airbrushed out of history for having, ahem, unorthodox political affiliations, Van Jones was best known for “green jobs,” which can be uncharitably described as a bit of political entrepreneurship proposing a grand bargain in which his constituents would get patronage jobs in exchange for supporting green policies.

Although bootlegger-Baptist is an econ model, soc and OB folks independently arrived at this same model by noting that resource dependence on the state is not a pure Tullock lottery, but is contingent on facial legitimacy. If you read chapter 8 of External Control of Organizations you’ll see that it’s not only the bridge between resource dependence and neo-institutionalism, but also a bootlegger-Baptist model avant la lettre.

One of the interesting things is that lately civil rights groups seem to have been the (real or imagined) Baptists of choice, and not just in the anti-Waxman-Markey forgery. So for instance a few weeks ago 72 Democratic Congressmen sent a letter to the FCC opposing net neutrality. It’s not surprising that the blue dogs were among them as you’d expect fiscal conservatives to oppose a new regulation. The interesting thing is that the letter was also signed by most of the Congressional Black Caucus, as well as “the Hispanic Technology and Telecommunications Partnership, the National Association for the Advancement of Colored People (NAACP), the Asian American Justice Center.” Their (plausible) logic was essentially that preventing telecoms from charging content providers would delay the rollout of broadband and therefore maintain the digital divide. So here we have an issue combining rent-seeking telecoms hoping to soak content providers and prevent competition from VOIP forming a coalition with civil rights groups and their legislative allies who have a principled commitment to eliminating inequality in use of technology.

I got total deja vu when I read this as the exact same thing happened a few years ago when Nielsen was attacked by the Don’t Count Us Out Coalition. The backstory is that Nielsen and Arbitron traditionally rely on diaries to collect the audience data that is used to set advertising rates. Unfortunately respondents are too lazy/stupid to complete diaries accurately. In recognition of this problem both Arbitron and Nielsen have been trying to switch to more accurate passive monitoring techniques that aren’t dependent on the diligence and recall of the respondent, but they still use diaries for sweeps.

Nielsen had the bright idea of the Local People Meter project, which would eliminate sweeps diaries in the largest media markets and rely entirely on a large continuous rolling sample using passive monitoring. This implies a substantial improvement in data quality for a large part of the advertising market. This sounds like a good thing but Nielsen found itself attacked by the “Don’t Count Us Out Coalition” which argued that Nielsen was a racist monopoly, mostly on the basis that in one or two of the test markets for LPM they undersampled blacks. The “Coalition” got some serious support in Congress until Nielsen was able to demonstrate that it was just an astroturf* group set up by NewsCorp, which stood to see a ratings drop under the improved technology. (Or more technically, the new technology would reveal that the old technology had been exaggerating the ratings of NewsCorp properties. Peterson and Anand have a great article on a similar dynamic in recorded music sales).


*Given the rather promiscuous way that people throw around the term “astroturf,” it’s necessary to clarify what I mean. I reserve the term “astroturf” exclusively for fax machine and letterhead operations organized by a lobbyist, pr firm, or the like. It is not analytically useful to extend the term to cover things like the tea parties, where elites mobilize ordinary people to come and protest. If you want to distinguish such things from the Platonic ideal of grassroots mobilization fine, call them “fertilizing the grassroots” or something, but astroturf they ain’t. Likewise, it is lazy and slanderous conspiracy-mongering to assume without further evidence that anyone who takes the same position on an issue as a stakeholder must of course be bought by the stakeholder. If you want to echo Orwell and call such people “objectively pro-X” then fine, but that don’t mean the Baptist lacks principled reasons for siding with the bootlegger on a particular issue.

November 4, 2009 at 4:31 am 1 comment

Don said you were the market, and you were

| Gabriel |

AdAge has a report on who drinks different kinds of beer. For instance, it describes Heineken drinkers as “They love their brand badges—a role the distinctive green glass bottle may play—and in fact, this group is attracted to luxury products in general.” Ouch, better hope Betty Draper doesn’t read Don’s copy of AdAge.

Anyway, I mention it because this speaks to cultural capital, which for a long time was about musical taste but is increasingly focused on food. Likewise, there’s some very good niche partitioning literature on beer, which has been pretty salient lately given the advertising blitz for “BL Golden Wheat” (Anheuser-Busch’s brand of hefeweizen).

November 3, 2009 at 4:52 am

Towards a sociology of living death

| Gabriel |

Daniel Drezner had a post a few months ago talking about how international relations scholars of the four major schools would react to a zombie epidemic. Aside from the sheer fun of talking about something as silly as zombies, it has much the same illuminating satiric purpose as “how many X does it take to screw in a lightbulb” jokes. If you have even a cursory familiarity with IR it is well worth reading.

Here’s my humble attempt to do the same for several schools within sociology. Note that I’m not even going to get into the Foucauldian “who’s to say that life is ‘normal’ and living death is ‘deviant'” stuff because, really, it would be too easy. Also, I wrote this post last week and originally planned to save it for Halloween, but I figured I’d move it up given that Zombieland is doing so well with critics and at the box office.

Public Opinion. Consider the statement that “Zombies are a growing problem in society.” Would you:

  1. Strongly disagree
  2. Somewhat disagree
  3. Neither agree nor disagree
  4. Somewhat agree
  5. Strongly agree
  6. Um, how do I know you’re really with NORC and not just here to eat my brain?

Criminology. In some areas (e.g., Pittsburgh, Raccoon City), zombification is now more common than attending college or serving in the military and must be understood as a modal life course event. Furthermore, as seen in audit studies, employers are unwilling to hire zombies and so the mark of zombification has persistent and reverberating effects throughout undeath (at least until complete decomposition and putrefaction). However race trumps humanity as most employers prefer to hire a white zombie over a black human.

Cultural toolkit. Being mindless, zombies have no cultural toolkit. Rather the great interest is understanding how the cultural toolkits of the living develop and are invoked during unsettled times of uncertainty, such as an onslaught of walking corpses. The human being besieged by zombies is not constrained by culture, but draws upon it. Actors can draw upon such culturally-informed tools as boarding up the windows of a farmhouse, shotgunning the undead, or simply falling into panicked blubbering.

Categorization. There’s a kind of categorical legitimacy problem to zombies. Initially zombies were supernaturally animated dead, they were sluggish but relentless, and they sought to eat human brains. In contrast, more recent zombies tend to be infected with a virus that leaves them still living in a biological sense but alters their behavior so as to be savage, oblivious to pain, and nimble. Furthermore even supernatural zombies are not a homogeneous set but encompass varying degrees of decomposition. Thus the first issue with zombies is defining what is a zombie and whether it is commensurable with similar categories (like an inferius in Harry Potter). This categorical uncertainty has effects in that insurance underwriters systematically undervalue life insurance policies against monsters that are ambiguous to categorize (zombies) as compared to those that fall into a clearly delineated category (vampires).

Neo-institutionalism. Saving humanity from the hordes of the undead is a broad goal that is easily decoupled from the means used to achieve it. Especially given that human survivors need legitimacy in order to command access to scarce resources (e.g., shotgun shells, gasoline), it is more important to use strategies that are perceived as legitimate by trading partners (i.e., other terrified humans you’re trying to recruit into your improvised human survival cooperative) than to develop technically efficient means of dispatching the living dead. Although early on strategies for dealing with the undead (panic, “hole up here until help arrives,” “we have to get out of the city,” developing a vaccine, etc) are practiced where they are most technically efficient, once a strategy achieves legitimacy it spreads via isomorphism to technically inappropriate contexts.

Population ecology. Improvised human survival cooperatives (IHSC) demonstrate the liability of newness in that many are overwhelmed and devoured immediately after formation. Furthermore, IHSC demonstrate the essentially fixed nature of organizations as those IHSC that attempt to change core strategy (e.g., from “let’s hole up here until help arrives” to “we have to get out of the city”) show a greatly increased hazard for being overwhelmed and devoured.

Diffusion. Viral zombieism (e.g. Resident Evil, 28 Days Later) tends to start with a single patient zero whereas supernatural zombieism (e.g. Night of the Living Dead, the “Thriller” video) tends to start with all recently deceased bodies rising from the grave. By seeing whether the diffusion curve for zombieism more closely approximates a Bass mixed-influence model or a classic s-curve we can estimate whether zombieism is supernatural or viral, and therefore whether policy-makers should direct grants towards biomedical labs to develop a zombie vaccine or the Catholic Church to give priests a crash course in the neglected art of exorcism. Furthermore marketers can plug plausible assumptions into the Bass model so as to make projections of the size of the zombie market over time, and thus how quickly to start manufacturing such products as brain-flavored Doritos.
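The diffusion diagnostic can be sketched with the Bass model’s closed form. A quick illustration in Python (the parameter values are invented): when external influence p dominates, the rate of new zombification peaks immediately, and when internal influence q dominates, it peaks mid-epidemic, so the location of the peak is informative about the mechanism.

```python
import math

def bass_F(t, p, q):
    """Cumulative proportion zombified at time t under the Bass model."""
    e = math.exp(-(p + q) * t)
    return (1 - e) / (1 + (q / p) * e)

ts = [i * 0.5 for i in range(61)]  # 30 time units in half-unit steps

def peak_time(p, q):
    """Time at which the new-zombie rate f(t) = F'(t) is highest (discretized)."""
    F = [bass_F(t, p, q) for t in ts]
    rates = [b - a for a, b in zip(F, F[1:])]
    return ts[rates.index(max(rates))]

# Supernatural: strong external influence, all corpses at risk at once.
# Viral: negligible external influence, spread by contact with the infected.
print(peak_time(p=0.5, q=0.05), peak_time(p=0.001, q=0.6))
```

The supernatural curve peaks at t = 0 and decays, while the viral curve is the classic S-shape with its steepest growth well into the epidemic.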

Social movements. The dominant debate is the extent to which anti-zombie mobilization represents changes in the political opportunity structure brought on by complete societal collapse as compared to an essentially expressive act related to cultural dislocation and contested space. Supporting the latter interpretation is that zombie hunting militias are especially likely to form in counties that have seen recent increases in immigration. (The finding holds even when controlling for such variables as gun registrations, log distance to the nearest army-administered “safe zone,” etc.).

Family. Zombieism doesn’t just affect individuals, but families. Having a zombie in the family involves an average of 25 hours of care work per week, including such tasks as going to the butcher to buy pig brains, repairing the boarding that keeps the zombie securely in the basement and away from the rest of the family, and washing a variety of stains out of the zombie’s tattered clothing. Almost all of this care work is performed by women and very little of it is done by paid care workers as no care worker in her right mind is willing to be in a house with a zombie.

Applied micro-economics. We combine two unique datasets, the first being military satellite imagery of zombie mobs and the second records salvaged from the wreckage of Exxon/Mobil headquarters showing which gas stations were due to be refueled just before the start of the zombie epidemic. Since humans can use salvaged gasoline either to set the undead on fire or to power vehicles, chainsaws, etc., we have a source of plausibly exogenous heterogeneity in which neighborhoods were more or less hospitable environments for zombies. We show that zombies tended to shuffle towards neighborhoods with low stocks of gasoline. Hence, we find that zombies respond to incentives (just like school teachers, and sumo wrestlers, and crack dealers, and realtors, and hookers, …).

Grounded theory. One cannot fully appreciate zombies by imposing a pre-existing theoretical framework on zombies. Only participant observation can allow one to provide a thick description of the mindless zombie perspective. Unfortunately scientistic institutions tend to be unsupportive of this kind of research. Major research funders reject as “too vague and insufficiently theory-driven” proposals that describe the intention to see what findings emerge from roaming about feasting on the living. Likewise IRB panels raise issues about whether a zombie can give informed consent and whether it is ethical to kill the living and eat their brains.

Ethnomethodology. Zombieism is not so much a state of being as a set of practices and cultural scripts. It is not that one is a zombie but that one does being a zombie such that zombieism is created and enacted through interaction. Even if one is “objectively” a mindless animated corpse, one cannot really be said to be fulfilling one’s cultural role as a zombie unless one shuffles across the landscape in search of brains.

Conversation Analysis.

1  HUMAN:    Hello, (0.5) Uh, I uh, (Ya know) is anyone in there?
2  ZOMBIE1:  Br:ai[ns], =
3  ZOMBIE2:       [Br]:ain[s]
4  ZOMBIE1:              =[B]r:ains
5  HUMAN:    Uh, I uh= li:ke, Hello? =
6  ZOMBIE1:  Br:ai:ns!
7  (0.5)
8  HUMAN:    Die >motherfuckers!<
9  SHOTGUN:  Bang! (0.1) =
10 ZOMBIE1:  Aa:ar:gg[gh!]
11 SHOTGUN:         =[Chk]-Chk, (0.1) Bang!

October 13, 2009 at 4:24 am 21 comments

Uncertainty, the CBO, and health coverage

| Gabriel |

[update. #1. i've been thinking about these ideas for awhile in the context of the original Orszag v. CBO thing, but was spurred to write and post it by these thoughts by McArdle. #2. MR has an interesting post on risk vs uncertainty in the context of securities markets]

Over at OT, Katherine Chen mentions that IRB seems to be a means for universities to try to tame uncertainty. The risk/uncertainty dichotomy is generally a very interesting issue. It played a huge part in the financial crash in that most of the models (and the instruments based on them) were much better at dealing with (routine) risk than with uncertainty (aka, “systemic risk”). Everyone was aware of the uncertainty, but the really sophisticated technologies for risk provided enough comfort to help us ignore that so much was unknowable.

Currently one of the main ways we’re seeing uncertainty in action is with the CBO’s role in health finance reform. The CBO’s cost estimates are especially salient given the poor economy and Orszag/Obama’s framing of the issue as about cost. The CBO’s practice is to score bills based on a) the quantifiable parts of a bill and b) the assumption that the bill will be implemented as written. Of course the qualitative parts of a bill and the possibility of time inconsistency are huge sources of uncertainty about the likely fiscal impact of any legislation. The fun thing is that this is a bipartisan frustration.

When the CBO scored an old version of the bill it said it would be a budget buster, which made Obama’s cost framing look ridiculous and scared the hell out of the blue dogs. This infuriated the pro-reform people who (correctly) noted that the CBO had not included in its estimates that IMAC would “bend the cost curve,” and thus decrease the long-term growth in health expenditures by some unknowable but presumably large amount. That is to say, the CBO balked at the uncertainty inherent in evaluating a qualitative change and so ignored the issue, thereby giving a cost estimate that was biased upwards.

More recently the CBO scored another version of the bill as being reasonably cheap, which goes a long way to repairing the political damage of its earlier estimate. This infuriates anti-reform people who note (correctly) that the bill includes automatic spending cuts and historically Congress has been loath to let automatic spending cuts in entitlements (or for that matter, scheduled tax hikes) go into effect. That is to say, the CBO balked at the uncertainty inherent in considering whether Congress suffers time inconsistency and so ignored the issue, thereby giving a cost estimate that was biased downwards.

That is to say, what looks like a straightforward accounting exercise is only partly knowable, and the really interesting questions are inherently qualitative ones, like whether we trust IMAC to cut costs and whether we trust Congress to stick to a diet. And that’s not even getting into real noodle-scratchers like pricing in the possibility that an initially cost-neutral plan chartered as a GSE would eventually get general fund subsidies or what will happen to the tax base when you factor in that making coverage less tightly coupled to employment should yield improvements in labor productivity.

September 18, 2009 at 5:18 pm

If at first you don’t succeed, try a different specification

| Gabriel |

Cristobal Young (with whom I overlapped at Princeton for a few years) has an article in the last ASR on model uncertainty, with an empirical application to religion and development. This is similar to the issue of publication bias but more complicated and harder to formally model. (You can simulate the model uncertainty problem with respect to control variables, but beyond that it gets intractable.)

In classic publication bias, the assumption is that the model is always the same and it is applied to multiple datasets. This is somewhat realistic in fields like psychology, where many studies are analyses of original experimental data. However in macro-economics and macro-sociology there is just one world, so to a first approximation there is basically one big dataset that people keep analyzing over and over. To a lesser extent this is true of micro literatures that rely heavily on secondary analyses of a few standard datasets (e.g., GSS and NES for public opinion; PSID and Add Health for certain kinds of demography; SPPA for cultural consumption). What changes between these analyses is the models, most notably assumptions about the basic structure (distribution of the dependent variable, error term, etc.), the inclusion of control variables, and the inclusion of interaction terms.

Although Cristobal doesn’t put it like this, my interpretation is that if there were no measurement error this wouldn’t be a bad thing, as it would just involve people groping toward better specifications. However, if there is error then these specifications may end up fitting the error rather than the underlying process. Cristobal shows this pretty convincingly by demonstrating that the analysis is sensitive to the inclusion of data points suspected to be of low quality.
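To see the control-variable version of the problem in action, here’s a minimal simulation (my own sketch, not Cristobal’s code — the sample size, number of candidate controls, and significance threshold are all arbitrary assumptions). There is no true effect of x on y, but an analyst who tries every subset of six irrelevant controls and keeps the most significant specification gets a false positive far more often than the nominal 5 percent:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

def t_stat_for_x(y, x, controls):
    """OLS of y on [constant, x, controls]; return the t-statistic on x."""
    n = len(y)
    X = np.column_stack([np.ones(n), x] + list(controls))
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - X.shape[1])       # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)           # classical OLS covariance
    return beta[1] / np.sqrt(cov[1, 1])

n, n_controls, n_sims = 100, 6, 500
naive_hits = search_hits = 0
for _ in range(n_sims):
    x = rng.standard_normal(n)
    y = rng.standard_normal(n)                  # no true effect of x on y
    Z = rng.standard_normal((n, n_controls))    # six irrelevant controls
    # honest analyst: one pre-specified model (x only)
    if abs(t_stat_for_x(y, x, [])) > 1.96:
        naive_hits += 1
    # specification searcher: tries all 64 control subsets, keeps the best
    best = max(abs(t_stat_for_x(y, x, [Z[:, j] for j in subset]))
               for k in range(n_controls + 1)
               for subset in itertools.combinations(range(n_controls), k))
    if best > 1.96:
        search_hits += 1

print(f"false-positive rate, single model:   {naive_hits / n_sims:.2f}")
print(f"false-positive rate, best of 64 specs: {search_hits / n_sims:.2f}")
```

The 64 specifications are highly correlated (same data each time), so the inflation is milder than 64 independent tests would give, but it is still well above the nominal rate — which is the intuition behind fitting the error rather than the process.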

I think it’s also worth honoring Robert Barro for being willing to cooperate with a young unknown researcher seeking to debunk one of his findings. A lot of established scientists are complete assholes about this kind of thing and not only won’t cooperate but will do all sorts of power plays to prevent publication.

Finally, see this poli sci paper, which does a meta-analysis of that discipline’s two flagship journals and finds a suspicious number of papers that are just barely significant. Although they describe the issue as “publication bias,” I think the issue is really model uncertainty.
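The logic of that kind of diagnostic can be sketched as a caliper test: compare the number of reported test statistics just above the significance threshold to the number just below, which should be roughly even absent manipulation. This is my toy reimplementation, not the paper’s actual code, and the caliper width, fake “literature,” and thresholds are all assumptions for illustration:

```python
import math
import random

def caliper_test(z_stats, z_crit=1.96, width=0.10):
    """Count |z| just over vs. just under the threshold; under the null of
    no bias the split is ~50/50, so an excess just over is suspicious."""
    over = sum(1 for z in z_stats if z_crit < abs(z) <= z_crit + width)
    under = sum(1 for z in z_stats if z_crit - width <= abs(z) <= z_crit)
    n = over + under
    # one-sided exact binomial p-value for "too many just over the bar"
    p = sum(math.comb(n, k) for k in range(over, n + 1)) / 2 ** n
    return over, under, p

# fake literature: honest z-stats plus a cluster nudged just past 1.96
random.seed(1)
honest = [random.gauss(0, 1.5) for _ in range(300)]
massaged = [random.uniform(1.97, 2.05) for _ in range(25)]
over, under, p = caliper_test(honest + massaged)
print(f"just over: {over}, just under: {under}, binomial p = {p:.4f}")
```

On the honest draws alone the two bins come out close to even; the cluster of massaged results piles up in the “just over” bin and the binomial test flags it.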

September 17, 2009 at 3:30 pm

They never did this on Mad Men

| Gabriel |

DDB (the world’s biggest ad agency) is pretty pissed off at its Brazilian office right now. Recently an unsolicited spec ad “for” the World Wildlife Fund showed up in which an entire squadron of commercial jetliners is aimed squarely at the Manhattan skyline as it appeared at 8:45am on 9/11/01 (although in the ad the sky is overcast). AdAge describes it as:

The description of the ad submitted by the agency said “We see two airplanes blowing up the WTC’s Twin Towers…lettering reminds us that the tsunami killed 100 times more people. The film asks us to respect a planet that is brutally powerful.”

Note that this is not just morally odious (at least to Americans both in and out of the ad industry — apparently foreign ad men and ad prize judges don’t feel this as uniformly as we do) but scientifically illiterate as tsunamis aren’t plausibly connected to human activity. (The ad seems to be confusing them with hurricanes, which are plausibly connected to global warming).

Once the ad became notorious in the ad world, various people tried to track down its provenance, with the Brazilian trade magazine Meio & Mensagem finding old entry records for advertising creative competitions showing it came from DDB Brasil, which at that point ’fessed up. Needless to say, neither the WWF nor the DDB parent company is happy about this, and the responsible team at DDB Brasil was fired. To me the whole thing is best summed up in an AdAge op-ed that sees this ad as the extreme manifestation of creative run amok in search of prestige and expression, rather than an old-fashioned sell.

Creative directors are entirely to blame for this state of affairs. The main problem is that most of them got where they are today by, you guessed it, winning creative awards. And guess the No. 1 target they’re driving — and I mean driving — their teams to achieve.

This scandal, and the attribution of the malfeasance to the awards mentality, reminded me of some interesting work lately on how prizes can shape fields. (See the bottom of the post for cites).

In advertising specifically you see a real conflict between ad people who see themselves as basically artists and those who see themselves as salesmen. The former are obviously more aligned with the awards mentality, but the latter have the “Effies” (awards for “effective,” as opposed to merely self-indulgent, marketing). Anyway, as seen in this little case study, some ad agencies are:

  • interested in shock value that will attract the attention of prize juries but alienate many consumers
  • so desperate to win awards that they will create spec ads without the knowledge or consent of the putative client, arrange to have them published, and then submit them in the competition.

Cites for awards literature:

[Update: for a much more pleasant PSA story, see Jay’s post on “Don’t Mess With Texas.”]

September 7, 2009 at 9:20 am
