Posts tagged ‘economic sociology’
| Gabriel |
At Scatterplot, Jeremy notes that in a reader poll, Megan Fox was voted both “worst” and “sexiest” actress. Personally, I’ve always found Megan Fox to be less sexy than a painfully deliberate simulacrum of sexy. The interesting question Jeremy asks is whether this negative association is correlation or causation. My answer is neither: it’s truncation.
What you have to understand is that the question is implicitly about famous actresses. It is quite likely that somewhere in Glendale there is some barista with a headshot by the register who is fugly and reads lines like a robot. However, this person is not famous (and probably not even Taft-Hartleyed). If there is any meritocracy at all in Hollywood, the famous are — on average — going to be desirable in at least one dimension. They may become famous because they are hot or because they are talented, but our friend at the Starbucks on Colorado is staying at the Starbucks on Colorado.
This means that when we ask about the association of acting talent and sexiness amongst the famous, we have a truncated sample from which people who are low on both dimensions have been dropped. Within the truncated sample there may be a robust negative association, but the causal relationship is very indirect, and it’s not as if having perky breasts directly obstructs the ability to convincingly express emotions (a botoxed face, on the other hand …).
You can see this clearly in simulation (code is at the end of the post). I’ve modeled a population of ten thousand aspiring actresses as having two dimensions, body and mind, each drawn independently from a standard normal. As built in by assumption, there is no correlation between body and mind.
Stars are a subsample of aspirants. Star power is defined as a Poisson draw whose mean is the sum of body and mind (shifted up and floored at zero, since a Poisson parameter can’t be negative). That is, star power is a combination of body, mind, and luck. Only the 10% of aspirants with the most star power become famous. If we now look at the correlation of body and mind among stars, it’s negative.
This is a silly example, but it reflects a serious methodological problem that I’ve seen in the literature and I propose to call “sampling on the independent variable.” You sometimes see this directly in the sample construction when a researcher takes several overlapping datasets and combines them. If the researcher then uses membership in one of the constituent datasets (or something closely associated with it) to predict membership in another of the constituent datasets (or something closely associated with it), the beta is inevitably negative. (I recently reviewed a paper that did this and treated the negative associations as substantive findings rather than methodological artifacts).
Likewise, it is very common for a researcher to rely on prepackaged composite data rather than explicitly creating original composite data. For instance, consider that favorite population of econ soc, the Fortune 500. Fortune defines this population as the top 500 firms ranked by sales. Now imagine decomposing sales by industry. Inevitably, sales in manufacturing will be negatively correlated with sales in retail. However this is an artifact of sample truncation. In the broader population the two types of sales will be positively correlated (at least among multi-divisional firms); a sketch of this variant appears after the simulation code below.
clear
set obs 10000
gen body=rnormal()
gen mind=rnormal()
*corr in the population
corr body mind
scatter body mind
graph export bodymind_everybody.png, replace
*keep only the stars
gen talent=body+mind+3
recode talent -100/0=0
gen stardom=rpoisson(talent)
gsort -stardom
keep in 1/1000
*corr amongst stars
corr body mind
scatter body mind
graph export bodymind_stars.png, replace
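The same mechanism drives the Fortune 500 example discussed above. Here’s a minimal sketch along the same lines; the variable names and the common size factor are hypothetical, not drawn from any actual Fortune data.

*minimal sketch of the Fortune 500 variant: manufacturing and retail sales
*share a common firm-size factor, so they correlate positively in the population
clear
set obs 10000
gen size=rnormal()
gen mfg_sales=size+rnormal()
gen retail_sales=size+rnormal()
*corr in the full population of firms (positive by construction)
corr mfg_sales retail_sales
*keep the top 500 firms by total sales, a la Fortune
gen total_sales=mfg_sales+retail_sales
gsort -total_sales
keep in 1/500
*corr within the truncated "Fortune 500" (attenuated or reversed)
corr mfg_sales retail_sales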
| Gabriel |
So apparently high schools are encouraging huge numbers of kids to take Advanced Placement (AP) tests. If you’ve ever seen one of those surveys where 40% of high school kids think Obi Wan Kenobi was one of the founding fathers, you’ll be able to guess the outcome: very few of them pass the test. In a rare triumph of reason in things having to do with education, there is now a backlash against this. I find this interesting for two reasons:
1. I actually know something about this topic. Although I never published it, one of my two youthful forays into ethnography was at a high school college counseling office in the late 90s. One of the main things going on there was the struggle to organize the AP exams. When kids took AP classes they were obligated to take the AP exam in the spring. However, in part because the school had a dismal track record of AP passage and in part because the kids themselves had to pay the AP fee, the students resisted taking the exam. This put the counselor in an adversarial relationship with the students as she tried to coerce and cajole them into paying for and taking the exam, which they (but not she) recognized was a humiliating waste of time and money.
2. Apparently the main driver of AP creep is that some high school ratings algorithms count how many kids in a high school take the AP rather than, oh, I don’t know, pass the AP. This of course is consistent with the typical nonsense you see in highly institutionalized sectors, which measure performance by inputs rather than outputs. Likewise, here in California the CSU and UC have long had the concept of a weighted GPA, where an AP class adds a full letter grade, such that a lot of my friends in high school had GPAs of 4.2. High schools pander to these magazine rankings / college admissions criteria by adding more AP classes than they have competent students to fill. This ratings performativity is familiar to economic sociologists thanks to Espeland and Sauder’s recent work on law school rankings. Likewise, McArdle has had two posts (part 1, part 2) recently on the development and now perfection of CBO-scoring gamesmanship.
| Gabriel |
On NPR the other day I heard a story about how a lobbyist forged letters to Congress from the NAACP and AAUW opposing the Waxman-Markey cap-and-trade bill. I thought this was amusing on several levels, only the first of which is that the bill apparently wasn’t convoluted and toothless enough to buy off all of the incumbent stakeholders, since some of them hired this guy. The real interest though is that the blatant absurdity of this story heightens the basic dynamics of the bootlegger and Baptist coalition: in this case the bootlegger was so desperate for a Baptist that he imagined one, much as the too-good-to-be-true quotes conjured by fabulist reporters heighten the absurd genre conventions of journalism.
The bootlegger and Baptist model is a part of public choice theory that argues that policy making often involves a coalition between stakeholders motivated by rent-seeking and ideologues with principled positions. In the titular example, the policy is blue laws, which would be supported both by Baptists who don’t like booze violating the sabbath and by clandestine alcohol entrepreneurs delighted to see demand pushed from legitimate retailers to the black market. We had something close to a literal bootlegger-Baptist model with the Abramoff scandal, in which various gambling interests paid the Christian Coalition to kneecap the competition. Another recent prominent example is that, before being airbrushed out of history for having, ahem, unorthodox political affiliations, Van Jones was best known for “green jobs,” which can be uncharitably described as a bit of political entrepreneurship proposing a grand bargain in which his constituents would get patronage jobs in exchange for supporting green policies.
Although bootlegger-Baptist is an econ model, soc and OB folks independently arrived at this same model by noting that resource dependence on the state is not a pure Tullock lottery, but is contingent on facial legitimacy. If you read chapter 8 of External Control of Organizations you’ll see that it’s not only the bridge between resource dependence and neo-institutionalism, but also a bootlegger-Baptist model avant la lettre.
One of the interesting things is that lately civil rights groups seem to have been the (real or imagined) Baptists of choice, and not just in the anti-Waxman-Markey forgery. So for instance a few weeks ago 72 Democratic Congressmen sent a letter to the FCC opposing net neutrality. It’s not surprising that the blue dogs were among them, as you’d expect fiscal conservatives to oppose a new regulation. The interesting thing is that the letter was also signed by most of the Congressional Black Caucus, as well as “the Hispanic Technology and Telecommunications Partnership, the National Association for the Advancement of Colored People (NAACP), the Asian American Justice Center.” Their (plausible) logic was essentially that preventing telecoms from charging content providers would delay the rollout of broadband and therefore maintain the digital divide. So here we have rent-seeking telecoms, hoping to soak content providers and forestall competition from VOIP, forming a coalition with civil rights groups and their legislative allies who have a principled commitment to eliminating inequality in the use of technology.
I got total déjà vu when I read this, as the exact same thing happened a few years ago when Nielsen was attacked by the Don’t Count Us Out Coalition. The backstory is that Nielsen and Arbitron traditionally rely on diaries to collect the audience data that is used to set advertising rates. Unfortunately respondents are too lazy/stupid to complete diaries accurately. In recognition of this problem, both Arbitron and Nielsen have been trying to switch to more accurate passive monitoring techniques that aren’t dependent on the diligence and recall of the respondent, but they still use diaries for sweeps.
Nielsen had the bright idea of the Local People Meter project, which would eliminate sweeps diaries in the largest media markets and rely entirely on a large continuous rolling sample using passive monitoring. This implies a substantial improvement in data quality for a large part of the advertising market. This sounds like a good thing, but Nielsen found itself attacked by the “Don’t Count Us Out Coalition,” which argued that Nielsen was a racist monopoly, mostly on the basis that in one or two of the test markets for LPM they undersampled blacks. The “Coalition” got some serious support in Congress until Nielsen was able to demonstrate that it was just an astroturf* group set up by NewsCorp, which stood to see a ratings drop under the improved technology. (Or more technically, the new technology would reveal that the old technology had been exaggerating the ratings of NewsCorp properties. Peterson and Anand have a great article on a similar dynamic in recorded music sales).
*Given the rather promiscuous way that people throw around the term “astroturf,” some clarification is necessary. I reserve the term “astroturf” exclusively for fax machine and letterhead operations organized by a lobbyist, PR firm, or the like. It is not analytically useful to extend the term to cover things like the tea parties, where elites mobilize ordinary people to come and protest. If you want to distinguish such things from the Platonic ideal of grassroots mobilization, fine, call them “fertilizing the grassroots” or something, but astroturf they ain’t. Likewise, it is lazy and slanderous conspiracy-mongering to assume without further evidence that anyone who takes the same position on an issue as a stakeholder must of course be bought by the stakeholder. If you want to echo Orwell and call such people “objectively pro-X” then fine, but that don’t mean the Baptist lacks principled reasons for siding with the bootlegger on a particular issue.
| Gabriel |
AdAge has a report on who drinks different kinds of beer. For instance, it says of Heineken drinkers: “They love their brand badges—a role the distinctive green glass bottle may play—and in fact, this group is attracted to luxury products in general.” Ouch, better hope Betty Draper doesn’t read Don’s copy of AdAge.
Anyway, I mention it because this speaks to cultural capital, which for a long time was about musical taste but is increasingly focused on food. Likewise, there’s some very good niche partitioning literature on beer, which has been pretty salient lately given the advertising blitz for “BL Golden Wheat” (Anheuser-Busch’s brand of hefeweizen).
| Gabriel |
Daniel Drezner had a post a few months ago talking about how international relations scholars of the four major schools would react to a zombie epidemic. Aside from the sheer fun of talking about something as silly as zombies, it has much the same illuminating satiric purpose as “how many X does it take to screw in a lightbulb” jokes. If you have even a cursory familiarity with IR it is well worth reading.
Here’s my humble attempt to do the same for several schools within sociology. Note that I’m not even going to get into the Foucauldian “who’s to say that life is ‘normal’ and living death is ‘deviant’” stuff because, really, it would be too easy. Also, I wrote this post last week and originally planned to save it for Halloween, but I figured I’d move it up given that Zombieland is doing so well with critics and at the box office.
Public Opinion. Consider the statement that “Zombies are a growing problem in society.” Would you:
- Strongly disagree
- Somewhat disagree
- Neither agree nor disagree
- Somewhat agree
- Strongly agree
- Um, how do I know you’re really with NORC and not just here to eat my brain?
Criminology. In some areas (e.g., Pittsburgh, Raccoon City), zombification is now more common than attending college or serving in the military and must be understood as a modal life course event. Furthermore, as seen in audit studies, employers are unwilling to hire zombies and so the mark of zombification has persistent and reverberating effects throughout undeath (at least until complete decomposition and putrefaction). However race trumps humanity, as most employers prefer to hire a white zombie over a black human.
Cultural toolkit. Being mindless, zombies have no cultural toolkit. Rather, the great interest is in understanding how the cultural toolkits of the living develop and are invoked during unsettled times, such as an onslaught of walking corpses. The human being besieged by zombies is not constrained by culture, but draws upon it. Actors can draw upon such culturally-informed tools as boarding up the windows of a farmhouse, shotgunning the undead, or simply falling into panicked blubbering.
Categorization. There’s a kind of categorical legitimacy problem to zombies. Initially zombies were supernaturally animated dead: sluggish but relentless, and seeking to eat human brains. In contrast, more recent zombies tend to be infected with a virus that leaves them still living in a biological sense but alters their behavior so as to be savage, oblivious to pain, and nimble. Furthermore even supernatural zombies are not a homogeneous set but encompass varying degrees of decomposition. Thus the first issue with zombies is defining what is a zombie and whether it is commensurable with similar categories (like an inferius in Harry Potter). This categorical uncertainty has effects in that insurance underwriters systematically undervalue life insurance policies against monsters that are ambiguous to categorize (zombies) as compared to those that fall into a clearly delineated category (vampires).
Neo-institutionalism. Saving humanity from the hordes of the undead is a broad goal that is easily decoupled from the means used to achieve it. Especially given that human survivors need legitimacy in order to command access to scarce resources (e.g., shotgun shells, gasoline), it is more important to use strategies that are perceived as legitimate by trading partners (i.e., other terrified humans you’re trying to recruit into your improvised human survival cooperative) than to develop technically efficient means of dispatching the living dead. Although early on strategies for dealing with the undead (panic, “hole up here until help arrives,” “we have to get out of the city,” developing a vaccine, etc) are practiced where they are most technically efficient, once a strategy achieves legitimacy it spreads via isomorphism to technically inappropriate contexts.
Population ecology. Improvised human survival cooperatives (IHSC) demonstrate the liability of newness in that many are overwhelmed and devoured immediately after formation. Furthermore, IHSC demonstrate the essentially fixed nature of organizations as those IHSC that attempt to change core strategy (e.g., from “let’s hole up here until help arrives” to “we have to get out of the city”) show a greatly increased hazard for being overwhelmed and devoured.
Diffusion. Viral zombieism (e.g. Resident Evil, 28 Days Later) tends to start with a single patient zero whereas supernatural zombieism (e.g. Night of the Living Dead, the “Thriller” video) tends to start with all recently deceased bodies rising from the grave. By seeing whether the diffusion curve for zombieism more closely approximates a Bass mixed-influence model or a classic s-curve we can estimate whether zombieism is supernatural or viral, and therefore whether policy-makers should direct grants towards biomedical labs to develop a zombie vaccine or the Catholic Church to give priests a crash course in the neglected art of exorcism. Furthermore marketers can plug plausible assumptions into the Bass model so as to make projections of the size of the zombie market over time, and thus how quickly to start manufacturing such products as brain-flavored Doritos.
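Here’s a minimal sketch of the contrast in Stata; the Bass parameters p (external influence) and q (internal influence) and the patient-zero seed are purely hypothetical.

*minimal sketch: Bass mixed-influence curve vs a pure internal-influence s-curve
clear
set obs 100
gen t=_n
local p=.01
local q=.4
*closed-form Bass cumulative adoption (market size normalized to 1)
gen F_bass=(1-exp(-(`p'+`q')*t))/(1+(`q'/`p')*exp(-(`p'+`q')*t))
*pure internal influence: logistic s-curve seeded with a single patient zero
gen F_logistic=1/(1+((1-.001)/.001)*exp(-`q'*t))
*the mixed-influence curve takes off immediately (mass rising of the dead);
*the logistic has the long left tail characteristic of viral contagion
line F_bass F_logistic t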
Social movements. The dominant debate is the extent to which anti-zombie mobilization represents changes in the political opportunity structure brought on by complete societal collapse as compared to an essentially expressive act related to cultural dislocation and contested space. Supporting the latter interpretation is that zombie hunting militias are especially likely to form in counties that have seen recent increases in immigration. (The finding holds even when controlling for such variables as gun registrations, log distance to the nearest army-administered “safe zone,” etc.).
Family. Zombieism doesn’t just affect individuals, but families. Having a zombie in the family involves an average of 25 hours of care work per week, including such tasks as going to the butcher to buy pig brains, repairing the boarding that keeps the zombie securely in the basement and away from the rest of the family, and washing a variety of stains out of the zombie’s tattered clothing. Almost all of this care work is performed by women and very little of it is done by paid care workers as no care worker in her right mind is willing to be in a house with a zombie.
Applied micro-economics. We combine two unique datasets, the first being military satellite imagery of zombie mobs and the second records salvaged from the wreckage of Exxon/Mobil headquarters showing which gas stations were due to be refueled just before the start of the zombie epidemic. Since humans can use salvaged gasoline either to set the undead on fire or to power vehicles, chainsaws, etc., we have a source of plausibly exogenous heterogeneity in showing which neighborhoods were more or less hospitable environments for zombies. We show that zombies tended to shuffle towards neighborhoods with low stocks of gasoline. Hence, we find that zombies respond to incentives (just like school teachers, and sumo wrestlers, and crack dealers, and realtors, and hookers, …).
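In Stata terms, the design might look something like this minimal sketch, where every variable, parameter, and of course the data-generating process is hypothetical.

*minimal sketch of the identification strategy with simulated data
clear
set seed 42
set obs 500
*neighborhoods due for refueling get exogenously higher gasoline stocks
gen due_for_refuel=runiform()<.5
gen gas_stock=rnormal()+due_for_refuel
*zombies shuffle away from gasoline-rich neighborhoods
gen zombie_density=rnormal()-.5*gas_stock
*instrument gasoline stocks with the refueling schedule
ivregress 2sls zombie_density (gas_stock = due_for_refuel), robust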
Grounded theory. One cannot fully appreciate zombies by imposing a pre-existing theoretical framework on zombies. Only participant observation can allow one to provide a thick description of the mindless zombie perspective. Unfortunately scientistic institutions tend to be unsupportive of this kind of research. Major research funders reject as “too vague and insufficiently theory-driven” proposals that describe the intention to see what findings emerge from roaming about feasting on the living. Likewise IRB panels raise issues about whether a zombie can give informed consent and whether it is ethical to kill the living and eat their brains.
Ethnomethodology. Zombieism is not so much a state of being as a set of practices and cultural scripts. It is not that one is a zombie but that one does being a zombie such that zombieism is created and enacted through interaction. Even if one is “objectively” a mindless animated corpse, one cannot really be said to be fulfilling one’s cultural role as a zombie unless one shuffles across the landscape in search of brains.
1  HUMAN:   Hello, (0.5) Uh, I uh, (Ya know) is anyone in there?
2  ZOMBIE1: Br:ai[ns], =
3  ZOMBIE2:      [Br]:ain[s]
4  ZOMBIE1:              =[B]r:ains
5  HUMAN:   Uh, I uh= li:ke, Hello? =
6  ZOMBIE1: Br:ai:ns!
7           (0.5)
8  HUMAN:   Die >motherfuckers!<
9  SHOTGUN: Bang! (0.1) =
10 ZOMBIE1: Aa:ar:gg[gh!]
11 SHOTGUN: =[Chk]-Chk, (0.1) Bang!
| Gabriel |
[Update. #1. I’ve been thinking about these ideas for a while in the context of the original Orszag v. CBO thing, but was spurred to write and post them by these thoughts from McArdle. #2. MR has an interesting post on risk vs. uncertainty in the context of securities markets.]
Over at OT, Katherine Chen mentions that IRB seems to be a means for universities to try to tame uncertainty. The risk/uncertainty dichotomy is generally a very interesting issue. It played a huge part in the financial crash in that most of the models, and the instruments based on them, were much better at dealing with (routine) risk than with uncertainty (aka “systemic risk”). Everyone was aware of the uncertainty, but the really sophisticated technologies for risk provided enough comfort to help us ignore that so much was unknowable.
Currently one of the main ways we’re seeing uncertainty in action is with the CBO’s role in health finance reform. The CBO’s cost estimates are especially salient given the poor economy and Orszag/Obama’s framing of the issue as being about cost. The CBO’s practice is to score bills based on a) the quantifiable parts of a bill and b) the assumption that the bill will be implemented as written. Of course the qualitative parts of a bill and the possibility of time inconsistency are huge sources of uncertainty about the likely fiscal impact of any legislation. The fun thing is that this is a bipartisan frustration.
When the CBO scored an old version of the bill it said it would be a budget buster, which made Obama’s cost framing look ridiculous and scared the hell out of the blue dogs. This infuriated the pro-reform people who (correctly) noted that the CBO had not included in its estimates that IMAC would “bend the cost curve,” and thus decrease the long-term growth in health expenditures by some unknowable but presumably large amount. That is to say, the CBO balked at the uncertainty inherent in evaluating a qualitative change and so ignored the issue, thereby giving a cost estimate that was biased upwards.
More recently the CBO scored another version of the bill as being reasonably cheap, which goes a long way to repairing the political damage of its earlier estimate. This infuriates anti-reform people who note (correctly) that the bill includes automatic spending cuts and historically Congress has been loath to let automatic spending cuts in entitlements (or for that matter, scheduled tax hikes) go into effect. That is to say, the CBO balked at the uncertainty inherent in considering whether Congress suffers time inconsistency and so ignored the issue, thereby giving a cost estimate that was biased downwards.
That is to say, what looks like a straightforward accounting exercise is only partly knowable, and the really interesting questions are inherently qualitative ones, like whether we trust IMAC to cut costs and whether we trust Congress to stick to a diet. And that’s not even getting into real noodle-scratchers like pricing in the possibility that an initially cost-neutral plan chartered as a GSE would eventually get general fund subsidies, or what will happen to the tax base when you factor in that making coverage less tightly coupled to employment should yield improvements in labor productivity.
| Gabriel |
Cristobal Young (with whom I overlapped at Princeton for a few years) has an article in the last ASR on model uncertainty, with an empirical application to religion and development. This is similar to the issue of publication bias but more complicated and harder to formally model. (You can simulate the model uncertainty problem with respect to control variables, but beyond that it gets intractable.)
In classic publication bias, the assumption is that the model is always the same and it is applied to multiple datasets. This is somewhat realistic in fields like psychology where many studies are analyses of original experimental data. However in macro-economics and macro-sociology there is just one world, and so to a first approximation there is basically just one big dataset that people keep analyzing over and over. To a lesser extent this is true of micro literatures that rely heavily on secondary analyses of a few standard datasets (e.g., GSS and NES for public opinion; PSID and Add Health for certain kinds of demography; SPPA for cultural consumption). What changes between these analyses is the models, most notably assumptions about the basic structure (distribution of the dependent variable, error term, etc.), the inclusion of control variables, and the inclusion of interaction terms.
Although Cristobal doesn’t put it like this, my interpretation is that if there were no measurement error, this wouldn’t be a bad thing, as it would just involve people groping towards better specifications. However if there is error, then these specifications may just be fitting the error rather than the underlying model. Cristobal shows this pretty convincingly by showing that the analysis is sensitive to the inclusion of data points suspected to be of low quality.
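Here’s a minimal sketch of that point; this is a stylized simulation, not Cristobal’s analysis. A true-null relationship plus a couple of badly measured, high-leverage observations can look like a finding until the suspect data points are dropped.

*minimal sketch: y and x are unrelated by construction
clear
set seed 12345
set obs 100
gen x=rnormal()
gen y=rnormal()
*contaminate two observations with large, correlated measurement errors
replace x=5 in 1/2
replace y=5 in 1/2
*the full sample can yield a "significant" slope
reg y x
*dropping the suspect data points makes the finding vanish
reg y x in 3/100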
I think it’s also worth honoring Robert Barro for being willing to cooperate with a young unknown researcher seeking to debunk one of his findings. A lot of established scientists are complete assholes about this kind of thing and not only won’t cooperate but will do all sorts of power plays to prevent publication.
Finally, see this poli sci paper, which does a meta-analysis of the discipline’s two flagship journals and finds a suspicious number of papers that are just barely significant. Although they describe the issue as “publication bias,” I think the issue is really model uncertainty.
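Their approach is in the spirit of a caliper test. Here’s a minimal sketch of the logic; the z statistics are simulated and the publication filter is purely hypothetical, so none of these numbers come from the paper.

*minimal sketch of a caliper test on (simulated) reported z statistics
clear
set seed 12345
set obs 500
gen z=abs(rnormal(1.5,1))
*hypothetical publication filter: half of nonsignificant results go unpublished
drop if z<1.96 & runiform()<.5
*compare counts in equal-width bands just over and just under the threshold
count if z>=1.96 & z<2.16
local over=r(N)
count if z>=1.76 & z<1.96
local under=r(N)
display "just over 1.96: `over'   just under 1.96: `under'"
*binomial test of the null that results fall on either side at equal rates
bitesti `=`over'+`under'' `over' .5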
| Gabriel |
DDB (the world’s biggest ad agency) is pretty pissed off at its Brazilian office right now. Recently an unsolicited spec ad “for” the World Wildlife Fund showed up in which an entire squadron of commercial jetliners is aimed squarely at the Manhattan skyline as it appeared at 8:45am on 9/11/01 (although in the ad the sky is overcast). AdAge describes it as:
The description of the ad submitted by the agency said “We see two airplanes blowing up the WTC’s Twin Towers…lettering reminds us that the tsunami killed 100 times more people. The film asks us to respect a planet that is brutally powerful.”
Note that this is not just morally odious (at least to Americans both in and out of the ad industry — apparently foreign ad men and ad prize judges don’t feel this as uniformly as we do) but scientifically illiterate as tsunamis aren’t plausibly connected to human activity. (The ad seems to be confusing them with hurricanes, which are plausibly connected to global warming).
Once the ad became notorious in the ad world, various people tried to track down its provenance, with the Brazilian trade magazine Meio & Mensagem finding old entry records for advertising creative competitions showing it came from DDB Brasil, which at that point ‘fessed up. Needless to say, neither the WWF nor the DDB parent company is happy about this, and the responsible team at DDB Brasil was fired. To me the whole thing is best summed up in an AdAge op-ed that sees this ad as the extreme manifestation of creative run amok in search of prestige and expression, rather than an old-fashioned sell.
Creative directors are entirely to blame for this state of affairs. The main problem is that most of them got where they are today by, you guessed it, winning creative awards. And guess the No. 1 target they’re driving — and I mean driving — their teams to achieve.
This scandal, and the attribution of the malfeasance to the awards mentality, reminded me of some interesting work lately on how prizes can shape fields. (See the bottom of the post for cites).
In advertising specifically you see a real conflict between ad people who see themselves as basically artists and those who see themselves as salesmen. The former are obviously more aligned with the awards mentality, but the latter have the “effies” (for “effective,” as opposed to self-indulgent, marketing). Anyway, as seen in this little case study, some ad agencies are:
- interested in shock value that will attract the attention of prize juries but alienate many consumers
- so desperate to win awards that they will create spec ads without the knowledge or consent of the putative client, arrange to have them published, and then submit them in the competition.
Cites for awards literature:
- Anand, N. and B. C. Jones. 2008. “Tournament Rituals, Category Dynamics, and Field Configuration: The Case of the Booker Prize.” Journal of Management Studies 45:1036-1060.
- Anand, N. and Mary R. Watson. 2004. “Tournament Rituals in the Evolution of Fields: The Case of the Grammy Awards.” Academy of Management Journal 47:59-80.
- English, James. 2005. The Economy of Prestige: Prizes, Awards, and the Circulation of Cultural Value. Cambridge, Mass.: Harvard University Press.
- Frey, Bruno S. and Susanne Neckermann. 2008. “Awards: A View from Psychological Economics.” University of Zurich Institute for Empirical Research in Economics Working Paper No. 357.
[Update: for a much more pleasant PSA story, see Jay’s post on “Don’t Mess With Texas.”]
Like a lot of econ soc people nowadays, I’m generally more interested in “open systems” analyses of organizational fields than in anything that opens up the black box of the firm, but two articles (New Yorker and NY Times) make me seriously envy the people who do qualitative case studies. At least read the Times article and if possible the NYer, but here’s an outline:
- In NYC it’s almost impossible to fire a school teacher who has tenure (which they reach after three years). Termination involves an extremely long (the NYer compares it to the OJ trial) and expensive arbitration hearing. Apparently in the last few years a total of 8 teachers have been fired mostly or entirely for poor performance; in general you have to do something like molest a kid or be a serious drunk to get fired, and even then only at the end of a long arbitration process.
- In the past few years, the NYC school chancellor has shut down schools that he judges to be failing. Some of the teachers were hired by other schools but several hundred have either not applied or have been rejected. The chancellor would like to figure out a way to fire these remaining teachers, but because of point 1, most of the teachers in these schools go into the “reserve pool,” continue to be paid, and are sometimes used as subs until they get a permanent job but mostly are kept idle.
- NYC public schools have a hiring freeze, which in practice means that vacant positions can only be filled from the reserve pool.
- Many principals would rather leave the positions vacant than hire the reserve pool teachers.
Regardless of whether you sympathize more with the reserve pool teachers or with the chancellor and principals, you can agree that this is an organization with a serious internal power struggle among stakeholders and a conflicting set of rules, incentives, and expectations. There’s so much going on here that it could fill many a b-school/soc dissertation, but I’ll try to hit a few of the obvious points in a couple hundred words.
The first thing to note is that the principals clearly have a very strong preference (both expressed and revealed) for new workers over displaced incumbents. The young workers are apparently superior both in terms of price and perceived quality. The price thing is simple: since (like most civil servants) school teachers’ salaries are determined by seniority and credentials, it’s much cheaper to hire new entrants than incumbents. (NYC teachers start at $45,530 but with enough seniority and credentials can make up to $100,049). Furthermore there’s a kind of option value to hiring young workers in that they don’t have tenure (yet) and thus the principal can try them out on a probationary basis, whereas once you hire a displaced incumbent you’ll be stuck with them, even if problems manifest immediately.
The quality thing is more complicated. The principals perceive the reserve pool teachers as far below average because they came from failing schools and nobody else wanted to hire them when the schools closed. If you take seriously a preferential attachment/cumulative advantage argument that the schools with the most disadvantaged students and the worst reputations got stuck with the most incompetent teachers then this is an entirely rational inference on the part of the principals. Likewise it makes sense if these were good teachers ex ante but after having spent a few years in these schools they learnt (and cannot be trusted to later unlearn) a shared student/teacher culture of mediocrity. The only way I can think of to argue that the principals’ perception is flawed is to note that status is defined by one’s associations, so it’s plausible to imagine that teachers whose skill and motivation are actually typical of the district would acquire stigma from having worked with stigmatized students. (This latter model implies that family background is such a strong determinant of school outcomes that the schools did not fail the students, but vice versa). Another dimension of stigma could be ageism (which Gary Becker might define operationally as a taste for young workers net of productivity). Thus we can come up with both valid and invalid reasons why principals might think the reserve pool teachers were incompetent. Indeed, they seem to be judged deficient not just relative to the opportunity cost of young workers but in absolute terms — some principals say “they planned to eliminate open positions from their budgets rather than take on teachers they considered undesirable.”
However, it’s not just the perceived undesirability of the incumbents but also the perceived desirability of the new entrants. The poor economy can be seen as a shift in the supply curve for labor, such that at any given price point you’re going to get a higher-quality worker. Simply put, you get a better worker for your $45,530 a year when unemployment is high and it’s a buyer’s labor market, as it is now. Thus by historical standards the applicant pool now has to look really good.
If you were a principal, would you rather hire a recent graduate of Rutgers, maybe even a fired up “Teach for America” participant from Columbia, or a teacher who has been occasionally subbing since the district shut down PS 1373682 because only 5% of the students were reading at grade level? What about when you consider that between seniority and credentials (which are of dubious pedagogical utility) you’d have to pay the lifer about 1.5x – 2x the salary of the kid? The obvious answer is that most principals have an extremely strong preference for the kid and get pretty frustrated when told they have to hire the lifer.
The teachers’ union likes to emphasize the price argument and alleges that the district is trying to push out teachers with a lot of seniority. From one perspective this is an implicit admission that the seniority payscale is decoupled from productivity, and thus in pushing for such a payscale the union itself has created the unintended consequence of giving the district an incentive to push out senior teachers. From another perspective (which I’m assuming the union would be sympathetic to), young teachers are not signing up for the $45,530 a year they get at the start, but for the whole career, an important part of which is the expectation of regular raises. Under this model, the seniority payscale (and for that matter, the comparably generous pension system) is effectively a form of deferred compensation, and it’s not so much that senior teachers are “overpaid” as that they are effectively drawing backpay, and in attempting to cut them loose the district is engaging in time-inconsistent bargaining.
Another issue you see in all this is that classic of 60s org theory, loose coupling:
Several principals — who did not want their names published for fear of angering the administration or the teachers’ union — said they were circumventing the restrictions by offering new teachers jobs as long-term substitutes or hiring them as specialized teachers but placing them in regular classrooms. Some said they planned to eliminate open positions from their budgets rather than take on teachers they considered undesirable, and others said they were holding out in the hope that Mr. Klein would lift the restrictions.
That is, these principals have some slack and autonomy and are using it to evade the rules so as to get more desirable workers, either by hiding the position under a different line-item or simply by keeping it vacant so they keep the option value in the event that the rules change (a case of “regime uncertainty”).
Overall what we’re seeing is a shift in the institutional model of the school district driven by policy entrepreneurs (most notably Klein and Bloomberg), which is embraced by some stakeholders (principals) and resisted by others (teachers). The chancellor describes the change as moving from a system that serves the interests of adults — read, teachers — over students to one that does the reverse, but another way to describe it is a shift from a highly institutionalized model that emphasizes process and rights to one that puts more emphasis on measured results. A corollary of this is a shift, characteristic of the broader economy, from lifetime employment to a system in which the employee’s value to the organization is continually evaluated. What has not yet happened, but what the chancellor would like to see, is a switch from a system that puts its highest priority on avoiding false accusations against teachers (and has substantial due process safeguards to ensure this) even at the expense of thwarting accurate accusations, to one that emphasizes getting an estimate of teacher quality and not worrying so much about cases where the bias goes against the teacher.
| Gabriel |
As anyone who has ever written an empirical paper knows, one of the hardest things is coming up with what can charitably be called a “compelling null” and cynically a “good straw man.” Behold, a gift I bestow (via MR) unto macro economic sociologists of the world polity school. A new NBER theory piece argues that the global institutionalization of child labor bans will delay the actual diffusion of child labor bans in low-income countries. Henceforth, anyone caring to do a world polity paper (or conversely, a public choice / RCT political economy paper) can have that most desirable of things, a “competing predictions” lit review.