## Status, Sorting, and Meritocracy

*September 15, 2010 at 4:51 am* · *gabrielrossman*

Over at OrgTheory, Fabio asked how much turnover we should expect to see in the NRC rankings. In the comments, a few other people and I discussed the analysis of the rankings in Burris 2004 *ASR*. Kieran mentioned the interpretation that the pattern could be entirely a matter of sorting.

To see how plausible this is, I wrote a simulation with 500 grad students, each of whom has a latent amount of talent that can only be observed with some noise. The students are admitted in cohorts of 15 each to 34 PhD-granting departments and are strictly sorted, so the (apparently) best students go to the best schools. There they work on their dissertations, the quality of which is a function of their talent, luck, and (to represent the possibility that top departments teach you more) a parameter proportional to the inverse square root of the department's rank. There is then a job market, with one job line per PhD-granting department, and again, strict sorting (without even an exception for the incest taboo). I then summarize the amount of reproduction as the proportion of top 10 jobs that are taken by grad students from the top 10 schools.
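The model is just two strict sorts with noise in between. Here is a rough sketch of one run of the same logic in Python; the function and parameter names are mine, and the actual Stata code appears below.

```python
import random

def one_run(noise=0.5, learning=1.0, n=500, cohort=15, lines=1, seed=None):
    """One run of the sorting model: admit by noisily observed talent,
    place by dissertation quality, report top-10 closure."""
    rng = random.Random(seed)
    talent = [rng.gauss(0, 1) for _ in range(n)]
    # grad admissions: strict sorting on talent observed with error
    apparent = sorted(((t + rng.gauss(0, noise), i)
                       for i, t in enumerate(talent)), reverse=True)
    dept = {}  # student id -> grad school rank (1 = best)
    for pos, (_, i) in enumerate(apparent):
        dept[i] = 1 + pos // cohort
    # dissertation quality: talent + value added at better schools + luck
    diss = sorted(((t + learning / dept[i] ** 0.5 + rng.gauss(0, noise), i)
                   for i, t in enumerate(talent)), reverse=True)
    # job market: strict sorting on dissertation quality, one queue of jobs
    n_depts = max(dept.values())
    top10_jobs = [i for pos, (_, i) in enumerate(diss[: n_depts * lines])
                  if 1 + pos // lines <= 10]
    # closure: share of top-10 jobs filled by grads of top-10 schools
    return sum(dept[i] <= 10 for i in top10_jobs) / len(top10_jobs)
```

With no noise and no value added the two sorts coincide, so closure is exactly 1; raising the noise loosens the link between the two queues.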

So how plausible is the meritocracy explanation? It turns out it's pretty plausible. This table shows the average closure for the top 10 jobs, averaged over 100 runs each, for several combinations of assumptions. Each cell shows, on average, what proportion of the top 10 jobs we expect to be taken by students from the top 10 schools under the row and column parameters. The rows represent different assumptions about how noisy our observation of talent is when we read an application to grad school or a job search. The columns represent a scaling parameter for how much you learn at different-ranked schools. For instance, if we assume a learning parameter of 1.5, a student at the 4th highest-ranked school would learn 1.5/(4^0.5), or .75. It turns out that unless you assume the noise to be *very* high (something like a unit signal-to-noise ratio or worse), meritocracy is pretty plausible. Furthermore, if you assume that the top schools actually educate grad students better, then meritocracy looks very plausible even if there's a lot of noise.
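The learning parameter works out to a simple formula, q/√r for a rank-r school with scaling parameter q (the notation is mine; the formula is taken from the code below):

```python
# value added from attending a school of rank r, with learning parameter q
def value_added(q: float, rank: int) -> float:
    return q / rank ** 0.5

print(value_added(1.5, 4))  # rank-4 school with q = 1.5 -> 0.75
```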

**P of top 10 jobs taken by students from top 10 schools**
*(rows: noisiness of admissions and diss/job market; columns: how much more you learn at top schools)*

| Noise | 0    | .5   | 1    | 1.5  | 2    |
|------:|-----:|-----:|-----:|-----:|-----:|
| 0     | 1    | 1    | 1    | 1    | 1    |
| .1    | 1    | 1    | 1    | 1    | 1    |
| .2    | 1    | 1    | 1    | 1    | 1    |
| .3    | .999 | 1    | 1    | 1    | 1    |
| .4    | .997 | 1    | 1    | 1    | 1    |
| .5    | .983 | .995 | .999 | 1    | 1    |
| .6    | .966 | .99  | .991 | .999 | .999 |
| .7    | .915 | .96  | .982 | .991 | .995 |
| .8    | .867 | .932 | .963 | .975 | .986 |
| .9    | .817 | .887 | .904 | .957 | .977 |
| 1     | .788 | .853 | .873 | .919 | .95  |

Of course, keep in mind this is all in a world of frictionless planes and perfectly spherical cows. If we assume that lots of people are choosing on other margins, or that there’s not a strict dual queue of positions and occupants (e.g., because searches are focused rather than “open”), then it gets a bit looser. Furthermore, I’m still not sure that the meritocracy model has a good explanation for the fact that academic productivity figures (citation counts, etc) have only a loose correlation with ranking.

Here’s the code. Knock yourself out using different metrics of reproduction, inputting different assumptions, etc.

[Update: also see Jim Moody’s much more elaborate/realistic simulation, which gives similar results].

```
capture program drop socmeritocracy
program define socmeritocracy
	local gre_noise=round(`1',.001)    /* size of error term, relative to standard normal, for apparenttalent=f(talent) */
	local diss_noise=round(`2',.001)   /* size of error term, relative to standard normal, for dissquality=f(talent) */
	local quality=round(`3',.001)      /* scaling parameter for value added (by quality grad school) */
	local cohortsize=round(`4',.001)   /* size of annual graduate cohort (for each program) */
	local facultylines=round(`5',.001) /* number of faculty lines (for each program) */
	local batch `6'
	clear
	quietly set obs 500 /* create 500 BAs applying to grad school */
	quietly gen talent=rnormal() /* draw talent from normal */
	quietly gen apparenttalent=talent + rnormal(0,`gre_noise') /* observe talent w error */
	*grad school admissions follows strict dual queue by apparent talent and dept rank
	gsort -apparenttalent
	quietly gen gradschool=1 + floor(([_n]-1)/`cohortsize')
	lab var gradschool "dept rank of grad school"
	*how much more do you actually learn at prestigious schools
	quietly gen valueadded=`quality'*(1/(gradschool^0.5))
	*how good is dissertation, as f(talent, gschool value added, noise)
	quietly gen dissquality=talent+rnormal(0,`diss_noise') + valueadded
	*job market follows strict dual queue of diss quality and dept rank (no incest taboo/preference)
	gsort -dissquality
	quietly gen placement=1 + floor(([_n]-1)/`facultylines')
	lab var placement "dept rank of 1st job"
	quietly sum gradschool
	quietly replace placement=. if placement>`r(max)' /* those not placed in PhD granting departments do not have research jobs (and may not even have finished PhD) */
	*recode outcomes in a few ways for convenience of presentation
	quietly gen researchjob=placement
	quietly recode researchjob 0/999=1 .=0
	lab var researchjob "finished PhD and has research job"
	quietly gen gschool_type= gradschool
	quietly recode gschool_type 1/10=1 11/999=2 .=3
	quietly gen job_type= placement
	quietly recode job_type 1/10=1 11/999=2 .=3
	quietly gen job_top10= placement
	quietly recode job_top10 1/10=1 11/999=0
	lab def typology 1 "top 10" 2 "lower ranked" 3 "non-research"
	lab val gschool_type job_type typology
	if "`batch'"=="1" {
		quietly tab gschool_type job_type, matcell(xtab)
		local p_reproduction=xtab[1,1]/(xtab[1,1]+xtab[2,1])
		shell echo "`gre_noise' `diss_noise' `quality' `cohortsize' `facultylines' `p_reproduction'" >> socmeritocracyresults.txt
	}
	else {
		twoway (lowess researchjob gradschool), ytitle(Proportion Placed) xtitle(Grad School Rank)
		tab gschool_type job_type, chi2
	}
end

shell echo "gre_noise diss_noise quality cohortsize facultylines p_reproduction" > socmeritocracyresults.txt
forvalues gnoise=0(.1)1 {
	local dnoise=`gnoise'
	forvalues qualitylearning=0(.5)2 {
		forvalues i=1/100 {
			disp "`gnoise' `dnoise' `qualitylearning' 15 1 1 tick `i'"
			socmeritocracy `gnoise' `dnoise' `qualitylearning' 15 1 1
		}
	}
}
insheet using socmeritocracyresults.txt, clear delim(" ")
lab var gre_noise "Noisiness of Admissions and Diss / Job Market"
lab var quality "How Much More Do You Learn at Top Schools"
table gre_noise quality, c(m p_reproduction)
```

Entry filed under: Uncategorized. Tags: simulation, Stata, superstar.
