Archive for October, 2010

Sociology of Living Death Revisited

| Gabriel |

By a pretty wide margin (almost twice as many pageviews as the runner-up), Code and Culture’s most popular post to date was last year’s “Towards a sociology of living death.” If you speak French, also see some more good zombie sociology abstracts (on RCT and Bourdieu) from Denis Colombi. I figured it was worth revisiting this, what with Halloween and the premiere of The Walking Dead on AMC (which is based on a really good comic book and looks to be really good itself). Unfortunately, I could only think of one more entry to add to this literature:

Post-Marxism. Marx himself (in Die Nichtheilige Lebenstot) emphasized the question of under what conditions zombiekind would go from being a class in itself to a class for itself, but most later Marxists have agreed with Hall and Romero’s critique that it is meaningless to talk about class consciousness for entities that lack consciousness of any kind. Rather, post-Marxists prefer to follow the question first elaborated in Gramsci’s Quaderni del Cimitero of understanding the partial class autonomy of zombies as reflected in the dichotomy between traditional (i.e., hegemonic and slow-moving) and organic (i.e., anti-hegemonic and fast-moving) zombieism. However, as Boyle and Karabel noted, ascribing any appreciable socio-economic-political agency to zombies qua zombies (whether organic or traditional) is effectively indistinguishable from simply treating zombies as a class of their own with real class power to reshape society (specifically, into an apocalyptic hellscape).

October 29, 2010 at 4:55 am 1 comment

Misc Links: Stata Networks and Mac SPSS bugfix

| Gabriel |

Two quick links that might be of interest.

  • As probably became inevitable with the creation of Mata, progress marches on in bringing social network analysis to Stata. Specifically, SSC is now hosting “centpow.ado,” which calculates Bonacich centrality and a few related measures directly in Stata. Thanks to Zach Neal of Michigan State for contributing this command. A few more years of this kind of progress and I can do everything entirely within Stata rather than exporting my network data, using “shell” to send the work out to R/igraph, and merging back in (roughly the round trip sketched below).
  • Last week’s Java update for OS X broke some functionality in SPSS (or PASW, or IBM SPSS, or whatever they’re calling it now). If this is a problem for you, here’s some helpful advice on how to fix it. Or you could take my less helpful advice: switch to Stata.
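
For the record, here is roughly what that round trip looks like. This is just a sketch of the old export-and-merge workflow, not centpow syntax; the variable names, file names, and the helper script “bonacich.R” are all hypothetical, and the R script (not shown) would read the edge list, compute Bonacich centrality with igraph’s bonpow(), and write the scores back out as a csv.

* 1. dump the edge list for R to read
outsheet ego alter using edges.csv, comma replace
* 2. hand the computation off to R/igraph via the shell
shell Rscript bonacich.R
* 3. pull the centrality scores back in and merge them onto the node-level data
insheet using centrality.csv, clear
merge 1:1 ego using nodes.dta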

October 25, 2010 at 12:54 pm 1 comment

Or you could just do regressions

| Gabriel |

Over at the “Office Hours” podcast (née Contexts podcast), Jeremy Freese gives an interview about sociology and genetics. Its main theme is that when you have a model characterized by nonlinearity, positive feedback, and other sorts of complexity, you can get misleading results from models with essentially additive assumptions, like the models we use to calculate heritability coefficients. (Heritability is closely analogous to a Pearson correlation coefficient. It is usually calculated from outcome data on fraternal vs. identical twins, using the reasonable assumption that the two kinds of twins share, respectively, half and all of their genes.)
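
A standard quick-and-dirty estimator here is Falconer’s formula, which just doubles the gap between the two twin correlations; the numbers below are made up purely for illustration.

* Falconer's estimate of heritability from twin correlations (illustrative numbers)
local r_mz = 0.70    // outcome correlation among identical twin pairs
local r_dz = 0.45    // outcome correlation among fraternal twin pairs
display 2*(`r_mz' - `r_dz')    // naive h2 = .5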

Jeremy gives the example that if people have small differences in natural endowments, but they specialize in human capital formation in ways that play to their endowments, then this will show up as very high heritability. Jeremy suggests this is misleading since the actual genetic impact on initial endowment is relatively small. I agree in a sense, but in another way, it’s not misleading at all. That is, the heritability coefficient is accurately reflecting that a condition is a predictable consequence of genetics even if the causal mechanism is in some sense social rather than entirely about amino acids.
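
Here is a minimal simulation sketch of that stylized example, with my own toy parameters rather than anything from the interview: endowments differ only a little, and only genetically; people then invest in human capital in proportion to their endowment; and luck does the rest. The naive Falconer estimate on the final outcome comes out high even though the direct genetic effect is small.

* toy twin simulation: small genetic endowments magnified by complementary investment
clear
set obs 20000
set seed 20101020
gen byte mz = _n <= 10000    // half identical pairs, half fraternal pairs
gen g_shared = rnormal()
gen g1 = g_shared    // twin 1's genes
gen g2 = cond(mz, g_shared, 0.5*g_shared + sqrt(0.75)*rnormal())    // r = .5 for fraternal
local b = 5    // strength of complementary human capital investment
gen y1 = (1 + `b')*0.3*g1 + rnormal()    // outcome = endowment + investment + luck
gen y2 = (1 + `b')*0.3*g2 + rnormal()
quietly corr y1 y2 if mz
local r_mz = r(rho)
quietly corr y1 y2 if !mz
local r_dz = r(rho)
display "naive h2 with b = `b': " %4.2f 2*(`r_mz' - `r_dz')    // roughly .76
* rerunning with b = 0 (no complementary investment) gives roughly .08 instead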

This is exactly the same issue as an argument I had with one of my co-authors a few years ago. We were studying how pop songs spread across radio and dividing how much of this was endogenous (stations imitating each other) versus exogenous (stations all imitating something else). The argument was over how to understand the effects of the pop charts published in Billboard and Radio & Records. One of my co-authors argued that these are not radio stations but periodicals and therefore should be considered exogenous to the system of radio stations. The other co-author and I held the position that appearing on the pop charts is an entirely predictable consequence of being played by a lot of radio stations and is therefore endogenous, even if the effect is proximately channeled through something outside the system. I believe this is true in an ontological sense, but it’s also a convenient belief since it’s necessary to make the math work.

Anyway, back to Jeremy’s case: you have a lot of things that are predictable outcomes of genetic endowment, but for the sake of argument we can assume that we are really dealing with a small initial effect that is greatly magnified by a social mechanism. I would submit that under the current set of social circumstances the heritability coefficient as naively measured is very informative. This is sometimes contrasted with how informative it is in the abstract, but if you take gene-environment interdependence (or any complex system) seriously, then “in the abstract” is a meaningless concept. Rather, you can only think about a counterfactual heritability coefficient in a counterfactual social system. This calls out for counterfactual causality logic to see how effects vary on different margins, etc., of the sort developed by Pearl and operationalized for social scientists by Morgan and Winship.

Currently, American social structure allows a lot of self-assignment to different trajectories, including an expensive (at both the personal and societal level) system of “second chances” for people to get back into the academic trajectory, whether or not they show much aptitude for it and whether or not they have sufficient remaining years in the labor market to amortize the human capital expense. As such there is sorting, but it’s fairly subtle and to a substantial extent voluntary. This is the situation Jeremy describes in his stylized example of people voluntarily accruing human capital to complement natural endowments.

We can contrast this with two hypothetical scenarios. In counterfactual A, imagine that we had perfect sorting to match aptitude to development. Think of how the military uses the ASVAB to assign recruits to occupational specialties. Better yet, imagine some perfectly measured and perfectly interpreted genetic screen for aptitudes administered at birth, and on that basis we sent people from daycare onwards into a humanities track, a hard science track, or various blue collar vocational tracks with no opportunity for later transfers between tracks. That is, in this scenario we would see much stronger sorting to match aptitude and career than in the status quo. In counterfactual B, we can imagine that people are again permanently and coercively tracked, but tracking is assigned by a roulette wheel. That is, there would be no association between endowments and later experiences. In these two scenarios we could puzzle out a variety of consequences. Aside from the degradation of freedom taken as an assumption of the counterfactuals, the most obvious implications are that higher sorting would increase the dispersion of various outcome measures and the apparent heritability effect whereas random sorting would decrease outcome dispersion and measured heritability.
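
As a sketch of how the two counterfactuals play out, we can extend the toy twin simulation above and let a sorting parameter run from 0 (the roulette wheel) to 1 (perfect tracking), holding the total amount of human capital investment fixed; the parameters are again entirely made up. Both the spread of the outcome and the naive Falconer estimate rise with the strength of sorting.

* toy comparison of random vs. increasingly meritocratic tracking
foreach s of numlist 0 0.5 1 {
        quietly {
                clear
                set obs 20000
                set seed 20101020
                gen byte mz = _n <= 10000
                gen g_shared = rnormal()
                gen g1 = g_shared
                gen g2 = cond(mz, g_shared, 0.5*g_shared + sqrt(0.75)*rnormal())
                * investment has a fixed variance; s governs how closely it tracks endowment
                gen inv1 = 0.9*(`s'*g1 + sqrt(1 - `s'^2)*rnormal())
                gen inv2 = 0.9*(`s'*g2 + sqrt(1 - `s'^2)*rnormal())
                gen y1 = 0.3*g1 + inv1 + rnormal()
                gen y2 = 0.3*g2 + inv2 + rnormal()
                corr y1 y2 if mz
                local r_mz = r(rho)
                corr y1 y2 if !mz
                local r_dz = r(rho)
                summarize y1
        }
        display "sorting = `s':  SD(outcome) = " %4.2f r(sd) ///
                "  naive h2 = " %4.2f 2*(`r_mz' - `r_dz')
}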

When people talk about heritability coefficients being biased upward, they seem to have in mind something like the random sorting model. This model strikes me as useful only as a thought experiment to establish the lower bound of heritability, since in the real world a Harrison Bergeron dystopia isn’t terribly likely. Rather, we can think of scenarios that are roughly similar to reality but vary on some margin. For instance, we can imagine how various policies (e.g., merit scholarships vs. need-based scholarships) might increase or decrease the sorting of genetic endowment and complementary human capital development on the margin, and by extension what impact this would have on the distribution and covariation of outcomes.

[Update 10/22/2010: On further reflection, I can think of a scenario where a naive reading of heritability coefficients would still strike me as grossly misleading, even if it were reliable, and I would prefer the “random assignment” counterfactual as “true” heritability. Imagine a society that is genetically homogeneous as to skin pigmentation genes, but where having detached earlobes is a social basis for assigning people to work indoors. In this scenario, there would be non-trivial heritability for skin color even though (by assumption) this society has no variance (and hence no heritability) for genes directly affecting pigmentation. Similarly, imagine a society where children without cheek dimples were exposed to ample lead and inadequate iodine, thereby making the undimpled into a hereditary caste of half-wits even though the genes that create dimples have no direct effect on g. I suppose what I’m getting at is that social mechanisms that select on and magnify genetic endowments are one thing, whereas social processes based on completely orthogonal stigma are another.]

October 20, 2010 at 5:18 am 3 comments

fsx.ado, fork of fs.ado (capture ls as macro)

| Gabriel |

[Update: now hosted at ssc, just type “ssc install fsx” into Stata]

Nick Cox’s “fs.ado” command basically lets you capture the output of “ls” as a return macro. This is insanely useful if you want to do things like batch importing or appending.
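
For example (with made-up file names), batch-appending a folder of csv extracts looks something like this, using the r(files) macro that fs, and the fsx fork below, leave behind:

* append every csv in the working directory into one dataset (file names hypothetical)
fsx *.csv
local csvs `r(files)'
tempfile accum
local growing = 0
foreach f of local csvs {
        insheet using "`f'", clear
        if `growing' append using `accum'
        save `accum', replace
        local growing = 1
}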

For better or worse, when run without arguments “fs” shows all files, including files beginning with a dot. That is, “fs” behaves more like the Bash “ls -a” command than like plain “ls,” which hides these files.

Since for most purposes Unix users are better off ignoring these files, I (with Nick’s blessing) wrote a fork of “fs” that by default suppresses the hidden files. The fork is called “fsx” as in “fs for Unix.” I haven’t tested it on Windows yet, but it should run fine there. However, using it on Windows is kind of pointless since Windows computers usually don’t have files beginning with dots unless they have been sharing a file system with Unix (for example, if a Mac user gives a Windows user data on a USB key). If you are interested in seeing the hidden files, you can either use the original “fs” command or use “fsx, all”.

After I write the help file I’ll post this to SSC.

BTW, as a coding note, this file has a lot of escaped quotes. I found this chokes TextMate’s syntax parser but Smultron highlights it correctly.

*! GHR and NJC 1.0 17 October 2010
* forked from fs.ado 1.0.5 (by NJC, Nov 2006)
program fsx, rclass
        syntax [anything] [, All ]
        version 8
        if `"`anything'"' == "" local anything *
        foreach f of local anything {
                if index("`f'", "/") | index("`f'", "\") ///
                 | index("`f'", ":") | inlist(substr("`f'", 1, 1), ".", "~") {
                        ParseSpec `f'
                        local files : dir "`d'" files "`f'"
                }
                else local files : dir . files "`f'"
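                // fsx fork: unless the -all- option is given, skip files whose names begin with a dot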
                local files2 ""
                foreach f of local files {
                    if "`all'"=="all" {
                        local files2 "`files2' `f'"
                    }
                    else {
                        if strpos("`f'",".")!=1 {
                            local files2 "`files2' `f'"
                        }
                    }
                }
                local Files "`Files'`files2' "
        }
        DisplayInCols res 0 2 0 `Files'
        if trim(`"`Files'"') != "" {
                return local files `"`Files'"'
        }
end

program ParseSpec
        args f

        // first we need to strip off directory or folder information

        // if both "/" and "\" occur we want to know where the
        // last occurrence is -- which will be the first in
        // the reversed string
        // if only one of "/" and "\" occurs, index() will
        // return 0 in the other case

        local where1 = index(reverse("`f'"), "/")
        local where2 = index(reverse("`f'"), "\")
        if `where1' & `where2' local where = min(`where1', `where2')
        else                   local where = max(`where1', `where2')

        // map to position in original string and
        // extract the directory or folder
        local where = min(length("`f'"), 1 + length("`f'") - `where')
        local d = substr("`f'", 1, `where')

        // absolute references start with "/" or "\" or "." or "~"
        // or contain ":"
        local abs = inlist(substr("`f'", 1, 1), "/", "\", ".", "~")
        local abs = `abs' | index("`f'", ":")

        // prefix relative references
        if !`abs' local d "./`d'"

        // fix references to root
        else if "`d'" == "/" | "`d'" == "\" {
                local pwd "`c(pwd)'"
                local pwd : subinstr local pwd "\" "/", all
                local d = substr("`pwd'", 1, index("`pwd'","/"))
        }

        // absent filename list
        if "`f'" == "`d'" local f "*"
        else              local f = substr("`f'", `= `where' + 1', .)

        //  return to caller
        c_local f "`f'"
        c_local d "`d'"
end

program DisplayInCols /* sty #indent #pad #wid <list>*/
        gettoken sty    0 : 0
        gettoken indent 0 : 0
        gettoken pad    0 : 0
        gettoken wid    0 : 0

        local indent = cond(`indent'==. | `indent'<0, 0, `indent')
        local pad    = cond(`pad'==. | `pad'<1, 2, `pad')
        local wid    = cond(`wid'==. | `wid'<0, 0, `wid')

        local n : list sizeof 0
        if `n'==0 {
                exit
        }

        foreach x of local 0 {
                local wid = max(`wid', length(`"`x'"'))
        }

        local wid = `wid' + `pad'
        local cols = int((`c(linesize)'+1-`indent')/`wid')

        if `cols' < 2 {
                if `indent' {
                        local col "column(`=`indent'+1)"
                }
                foreach x of local 0 {
                        di as `sty' `col' `"`x'"'
                }
                exit
        }
        local lines = `n'/`cols'
        local lines = int(cond(`lines'>int(`lines'), `lines'+1, `lines'))

        /*
             1        lines+1      2*lines+1     ...  cols*lines+1
             2        lines+2      2*lines+2     ...  cols*lines+2
             3        lines+3      2*lines+3     ...  cols*lines+3
             ...      ...          ...           ...               ...
             lines    lines+lines  2*lines+lines ...  cols*lines+lines

             1        wid
        */

        * di "n=`n' cols=`cols' lines=`lines'"
        forvalues i=1(1)`lines' {
                local top = min((`cols')*`lines'+`i', `n')
                local col = `indent' + 1
                * di "`i'(`lines')`top'"
                forvalues j=`i'(`lines')`top' {
                        local x : word `j' of `0'
                        di as `sty' _column(`col') "`x'" _c
                        local col = `col' + `wid'
                }
                di as `sty'
        }
end

October 18, 2010 at 9:51 am 2 comments

Stata Programming Lecture [updated]

| Gabriel |

I gave my introduction to Stata programming lecture again. This time the lecture was cross-listed between my graduate statistics course and the UCLA ATS faculty seminar series. Here are the lecture notes.

October 14, 2010 at 3:50 pm 3 comments

The Emperor’s New Lunch Counter

| Gabriel |

Malcolm Gladwell’s piece on the Twitter revolution reminds me why I love him so much. Definitely read it, but the basic point is that the kind of weak-tie social connections encouraged by these web 2.0 media are only good for cheap grace, like hitting the “like” button for a Save Darfur page. For serious commitment, as in standing in front of a People’s Liberation Army tank and daring it to run you over, you need strong ties and dense communities — maybe even hierarchy. It’s actually very similar to the Centola, Willer, and Macy “Emperor’s New Clothes” model, in which intrinsically appealing / low-cost ideas will spread through random graphs whereas intrinsically unappealing / high-cost ideas require dense cliques.

There’s also a very good point about how we only believe that Twitter is important in Iran because American opinion leaders (especially journalists) are obsessed with Twitter and so that’s where they went looking for information. That is, a classic example of sampling on the dependent variable or, as Popper would put it, building up a list of white swans.

October 1, 2010 at 2:28 pm

