Archive for August, 2011
This Malawian Life
| Gabriel |
Just a quick tip to check out the current episode of This American Life, which is based on the work of my CCPR colleague Susan Watkins on HIV-related gossip in Malawi. Even if you’re not interested in health or development, it’s very interesting for what it says about social networks, diffusion, statistical discrimination, and concealed stigma. The main issue is that people constantly talk about HIV in an attempt to figure out who has it and is thus an undesirable sex partner, but I also had a few somewhat idiosyncratic interests:
1. Information does not just diffuse through social networks in the usual sense of things that would show up in your edge list or sociomatrix, but also through space (I’m at the clinic next door to the HIV clinic when you pick up your meds) and through ad hoc collections of people temporarily bound together (a bunch of people on a bus all start speculating about the HIV status of a pedestrian). I consider this more evidence for my belief that network contagion as a mechanism for information flow is overrated.
2. A lot of public health programs emphasize the coals-to-Newcastle policy of “encouraging discussion” and “raising awareness.” These policies were driven by cosmopolitan elites, international NGOs, etc. That is, it’s John Meyer “world society” kind of stuff run amok.
- About a year ago our mutual grad student, Tom Hannan, started a new project that synthesizes Susan’s concerns in #2 with some of my recent theoretical/methodological interests.
Margin of Error
| Gabriel |
A few months ago, the Gary Johnson campaign for the GOP primary issued a press release responding to CNN’s decision to exclude him from their debate on grounds of viability, specifically low poll numbers. (h/t, Conor). The Johnson campaign makes some valid points about the validity of early polls, for instance that they are mostly a function of name (*cough* Trump bubble *cough*) recognition. However, they make a common mistake when they talk about sampling error:
While we have had no specific explanation from the debate sponsors, it appears that Gary Johnson’s exclusion was based on some mysterious polling arithmetic. Whatever that arithmetic was, the differences that excluded us while producing invitations for several other less-known candidates would certainly fall within the margin of error of any poll.
Without commenting on the merits of Governor Johnson’s candidacy, there are really two issues.
First, when you are making a decision on the basis of numbers you have to set some threshold, and decisions about cases that fall near the threshold are necessarily arbitrary. If one moves the threshold to accommodate the boundary cases, this just puts other cases near the boundary. Another way to think about this is that CNN’s true threshold for viability could be 4% and they’re just calling it 2% to allow in the boundary cases that are close to 4%. In some ways this is the opposite of the issue that the difference between significance and insignificance is not itself significant. The difference is that in science we have the luxury of saying we should postpone judgment on an issue pending further data collection (which is really what a p-value around .06 or .07 usually means), whereas barring the invention of quantum televisions CNN doesn’t have the option of “maybe” giving Johnson a podium.
Second, “margin of error” is not really a statistical concept so much as a heuristic for making the concept of standard error easier to understand. The heuristic is valid for proportions near 50%, but it breaks down as you get towards extremely high or extremely low proportions. Assuming a proportion of 50% gives an upper-bound estimate for standard error, and while such epistemic humility is a better bias than the alternative, it can occasionally lead us astray. A simple way to understand this is that if a poll has a stated “margin of error” of +/- 3% and the point estimate is 1%, this does not mean that the population proportion could be anywhere from negative 2% to positive 4%, as proportions are necessarily non-negative.
Specifically, the standard error of a proportion is [π(1-π)/(n-1)]^0.5. Do the math plugging in a proportion of 50% and a sample size of about 1200 (both of which are typical for opinion polls) and you get a standard error of about 1.5 points. To get a 95% confidence interval you multiply the standard error by +/- 2, which is where we get the usual margin of error of plus or minus 3 points. Again, that’s around 50%. If instead you plug in 1%, you get a standard error of about 0.3%, which you can double for a margin of error of about +/- 0.6%. In that sense, two candidates who are polling at 1% and 3% are not “in a statistical tie” even though this would be true of two candidates who are polling at 49% and 51%.
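Here’s a minimal back-of-the-envelope sketch in Python (my own illustration, not from the original post) of the same arithmetic, assuming a simple random sample and the usual normal approximation:

```python
import math

def margin_of_error(p, n, z=2):
    """Approximate +/- margin for a sample proportion p and sample size n."""
    se = math.sqrt(p * (1 - p) / (n - 1))
    return z * se

# a typical poll of ~1200 respondents, evaluated at 50% and at 1%
for p in (0.50, 0.01):
    print(f"p = {p:.0%}: margin of error ~ +/- {margin_of_error(p, 1200):.2%}")
# prints roughly +/- 2.9% at 50% and +/- 0.6% at 1%
```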
Social Structures
| Gabriel |
Shortly before ASA, I finished John Levi Martin’s Social Structures and I loved it, loved it, loved it. (Also see thoughts from Paul DiMaggio, Omar Lizardo, Neil Gross, Fabio Rojas, and Science). I find myself hoping I have to prep contemporary theory just so I can inflict it on unsuspecting undergrads. The book is all about emergence and how fairly minor changes in the nature of social mechanisms can create quite different macro social structures.* It’s just crying out for someone to write a companion suite in NetLogo, chapter by chapter. In addition, JLM knows an enormous amount of history, anthropology, and even animal behavior and uses it all very well both to illustrate his points and to show how they work when the friction of reality enters. For instance, he notes that balance theory breaks down to the extent that people have some agency in defining the nature of ties and/or keeping some relations “neutral” rather than forcing them into the ally-versus-enemy dichotomy.**
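As a tiny flavor of what such a companion suite might look like, here’s a minimal sketch (mine, not JLM’s, in Python rather than NetLogo, and with made-up parameters) of classic balance-theory dynamics: start with random positive/negative ties, repeatedly let one tie of an unbalanced triad flip, and the network tends to sort itself into allied factions.

```python
import random
from itertools import combinations

N = 12  # number of agents (arbitrary)
# random signed complete graph: +1 = ally, -1 = enemy
sign = {frozenset(pair): random.choice([-1, 1]) for pair in combinations(range(N), 2)}

def unbalanced_triads():
    """Triads whose product of tie signs is negative (Heider-unbalanced)."""
    return [(i, j, k) for i, j, k in combinations(range(N), 3)
            if sign[frozenset((i, j))] * sign[frozenset((j, k))] * sign[frozenset((i, k))] < 0]

for _ in range(20000):
    bad = unbalanced_triads()
    if not bad:
        break
    i, j, k = random.choice(bad)
    # flip one tie at random; this balances the chosen triad but may unbalance others
    edge = random.choice([frozenset((i, j)), frozenset((j, k)), frozenset((i, k))])
    sign[edge] *= -1

print("unbalanced triads remaining:", len(unbalanced_triads()))
```

Small tweaks to the micro rule (say, letting agents keep some ties neutral instead of forcing the ally/enemy dichotomy) change the macro outcome, which is exactly the kind of sensitivity the book trades on.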
An interesting contrast is Francis Fukuyama’s The Origins of Political Order, which I also liked. The two books are broadly similar in scope, giving a sweeping comparative overview of history that starts with animals and attempts to work up to the early modern era. (There are also some similarities in detail, such as their very similar understandings of the “big man” system and their shared view that domination is more likely in bounded populations). There is an obvious difference of style in that Fukuyama is easier to read and goes into more extended historical discussions, but the more important differences are thematic and theoretical. One such difference is that Fukuyama follows Polybius in seeing the three major socio-political classes as the people, the aristocracy, and the monarch, with the people and the monarch often combining against the aristocracy (as seen in the Roman Revolution and in early modern absolute monarchies). In contrast, JLM’s model tends to see the monarch as just the top aristocrat, though his emphasis on the development of transitivity in command effectively accomplishes some of the same work as the Fukuyama/Polybius model.
The most important difference is that Fukuyama is inspired by Weber whereas JLM uses Simmel, a distinction that becomes especially pronounced as they move from small tribal bands to early modern societies. Fukuyama’s book is fundamentally about the tension between kinship and law as competing organizing principles of society. In Fukuyama’s account both have very old roots and modernity represents the triumph of law. In contrast, JLM sees kinship (and analogous structures like patronage) as the fundamental logic of society, with modernity being similar in kind but grander in scale. In the last chapter and a half JLM discusses the early modern era, and here he sounds a bit more like Fukuyama, but he’s clearly more interested in, for instance, the origins of political parties than in their transformation into modern ideological actors.
In part this is because, as Duncan Watts observed at the “author meets critics” at ASA, JLM is mostly interested in that which can be derived from micro-macro emergence and tends to downplay issues that do not fit into this framework.*** This is seen most clearly in the fact that the book winds down around the year 1800 after noting that (a) institutionalization can partially decouple mature structures from their micro origins and (b) ideology can in effect form a sort of bipartite network structure through which otherwise disconnected factions and patronage structures can be united (usually in order to provide a heuristic through which elites can practice balance theory), as with the formation of America’s original party system of Federalists and Democratic-Republicans, which JLM discusses in detail. Of course, as I said in the “critics” Q&A, at present most politically active Americans have a primarily ideological attachment to their party without things like ward bosses, and perhaps more interestingly, a role for ideology as a bridge is not restricted to the transition from early modern to modern. As is known to any reader of Gibbon, there was a similar pattern in late antiquity, in how esoteric theological disputes over adoptionist Christology and the reconciliation of sinners provided rallying points for core-versus-periphery political struggles in the late Roman empire. Since this is largely a dispute over emphasis, it’s not surprising that JLM was sympathetic to this point, but he noted that there are limits to what ideological affinity can accomplish and that when it comes to costly action you really need micro structures. (He is of course entirely right about this, as seen most clearly in the military importance of unit cohesion, but it’s still interesting that ideology has waxed and patronage waned in the party systems of advanced democracies).
There are a few places in the book where JLM seemed to be arguing from end states back to micro-mechanisms, and I couldn’t tell whether he meant that the micro-mechanisms necessarily exist (i.e., functionalism) or that such demanding specifications of micro-mechanisms implied that the end state was inherently unstable (i.e., emergence). For instance, in chapter three he discusses the exchange of women between patrilineal lineages and notes that if there is not simple reciprocity (usually through cross-cousin marriage) then there must either be some form of generalized reciprocity or else the bottom-ranked male lineages will go extinct. On reading this I was reminded of this classic exchange:
That is, I think it is entirely possible that powerful male lineages could have asymmetric marital exchange with less powerful male lineages, and if the latter are eventually driven into extinction then that sucks for them. (The reason this wouldn’t lead to just a single male-lineage clan is that, as Fukuyama notes, large clans can fissure and tracing descent back past the 5th or 6th generation is usually more political than genealogical). This is the sort of thing that can actually be answered empirically by contrasting Y chromosomes with mitochondrial DNA. For instance, a recent much-publicized study showed that pretty much all ethnically English men carry the Germanic “Frisian Y” chromosome. The authors’ interpretation of this is that a Saxon mass migration displaced the indigenous Romano-British population, but I don’t see how this is at all inconsistent with the older elite-transfer model of the Saxon invasion if we assume that the transplanted foreign elite hoarded women, including indigenous women. A testable implication of the elite-transfer model is that the English would have the same Y as the Danes and Germans but similar mitochondria to the Irish and Welsh. Similarly, a 2003 study showed that 8% of men in East and Central Asia show descent in the male line from Genghis Khan, but nobody has suggested that this reflects a mass migration. Rather, in the 12th and 13th centuries the Mongols used rape and polygamy to impregnate women of many Asian nations and they didn’t really give a damn if this meant the extinction of the indigenous male lineages.
A very minor point, but one that is important to me as a diffusion guy, is that chapter five uses the technical jargon of diffusion in non-standard ways, or, to be more neutral about it, he and I use terms differently. That said, it’s a good chapter; it just needs to be read carefully to avoid semantic confusion.
This post may read like I’m critical of the book, but that’s only because I prefer to react to and puzzle out the book rather than summarize it. What reservations I have are fairly minor and tentative. My overall assessment is that this is a tremendously important book that should be read carefully by anyone interested in social networks, political sociology, social psychology, or economic sociology. For instance, I wish it had been published before my paper with Esparza and Bonacich, as using the chapter on pecking orders would have allowed us to develop the finding about credit ranking networks in more depth. (That and it would have given us a pretext to compare Hollywood celebrities to poultry and small children). Despite the book’s foundation in graph theory, this interest should span the qualitative/quantitative divide: at ASA Randy Collins praised the book enthusiastically and gave a very thoughtful reading, and from personal conversation I know that Alice Goffman was also very impressed. I think this is because JLM’s relentless focus on interaction between people is a much thinner but nonetheless similar approach to the kinds of issues that qualitative researchers tend to engage with. Indeed, at a deep level Social Structures has more in common with ethnography than with anything that uses regression to try to describe society as a series of slope-intercept equations.
————-
* Technically, it’s about weak emergence, not strong emergence. At “author meets critics” JLM was very clear that he rejects the idea of sui generis social facts that have an independent ontological status rather than being just a summary or aggregation of micro structure.
** One of the small delights in the early parts of the book is that he notes how our understanding of network structure is driven in part by the ways we measure and record it. So networks based on observation of proximity are necessarily symmetric, whereas networks based on sociometric surveys highlight the contingent nature of reciprocity; networks based on balance theory tend to be coded positive/negative, whereas matrices emphasize presence/absence and are often sparse; etc. I might add to his observations in this line that the extremely common practice of projecting bipartite networks into unipartite space (as with studies of Hollywood, Broadway, corporate boards, and technical consortia) has its own set of biases, most obviously exaggerating the importance and scalability of cliques (see the short illustration after these footnotes). Also, I’ve previously remarked on a similar issue in Saller’s Personal Patronage as to how we need to be careful about directed ties being euphemistically described as symmetric ties in some of our data.
*** Watts also observed that JLM’s approach is very much a sort of 1960s sociometry and doesn’t use the recent advances in social network analysis driven by the availability of big data about computer-mediated communication (such as Watts’ current work on Twitter). JLM responded with what was essentially a performativity critique of naive reliance on web 2.0 data, noting for instance that Facebook encourages triadic closure, enforces reciprocity, and discourages deletion of old ties.
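To put a little flesh on the bipartite-projection point in the second footnote, here is a minimal sketch (my own toy example with hypothetical cast lists, not data from any of the studies mentioned) using networkx. A single ensemble film turns its entire cast into a clique once the two-mode actor-by-film network is projected onto actors:

```python
import networkx as nx
from networkx.algorithms import bipartite

B = nx.Graph()
actors = ["a1", "a2", "a3", "a4", "a5"]
films = ["film_X", "film_Y"]  # hypothetical titles
B.add_nodes_from(actors, bipartite=0)
B.add_nodes_from(films, bipartite=1)
# one big ensemble film and one two-hander
B.add_edges_from([("a1", "film_X"), ("a2", "film_X"), ("a3", "film_X"), ("a4", "film_X")])
B.add_edges_from([("a4", "film_Y"), ("a5", "film_Y")])

# project the two-mode network onto the actor nodes
P = bipartite.projected_graph(B, actors)
print(sorted(map(sorted, P.edges())))
# a1 through a4 come out as a complete clique from sharing a single film;
# a4-a5 is the only tie generated by the second film
```

In the unipartite projection there is no way to tell a genuine dense cluster of repeated collaborations from the artifact of one large cast, which is the bias the footnote gestures at.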
Misc Links
- One practical application and another practical application of the distinction between a pseudo-random number generator and true randomness (see the short sketch at the end of this list). Fortunately your dataset usually isn’t trying to con you, so a PRNG is probably still good enough for sampling, bootstrapping, and permutation. The distinction is a useful illustration of how strategic action qualitatively changes things.
- In an apparent attempt to make the GRE as useless to admissions committees as the TOEFL, ETS has completely revamped the GRE to be more “practical” and less of your classic abstract ~~IQ test~~ general aptitude measure. I lack the expertise in psychometrics to have an opinion about that change, but what pisses me off is that they abandoned the old scoring system, which means that the new scores are incommensurable with the old scores and people will probably give up trying to interpret the scores and just read the percentiles. This is a problem because percentiles lead to bad decision-making.
- Peter Berger had an interesting post about “the wrong side of history” as a rhetorical trope in politics, with special attention to how this plays out in regards to gay marriage. You heard it here first! On the other hand, he did come up with the whole social constructionism and disenchantment of reality thing so I guess there’s that.
- If you’re thinking of panning a sociology book in a British publication you had better make sure you have your facts straight. (No word on whether ASA filed an AC). I have really mixed feelings on this. On the one hand I loathe libel suits for the chilling effect they can have and how they can be abused (especially in Britain), but on the other hand I can see how pissed I’d be if somebody speciously accused me of academic fraud. It sounds like the reviewer really was negligent in this case, but I worry about whether justice here is worth allowing cases like Lott v. Levitt, where a known fraud used a SLAPP to shout down criticism made in good faith and the critic only prevailed on appeal.
- Career opportunities in performativity (h/t Tyler @ MR)
- Interesting article from Jonathan Last on how in an OEM-ified world counterfeits aren’t necessarily shoddy knock-offs so much as a principal-agent issue of the contractor taking some of the IP rents from the OEM. (He doesn’t put the argument that explicitly, but that’s the gist of it). I’m thinking this is a potentially fruitful line of research for people interested in OEMs, embedded exchange, etc. For instance, who is more vulnerable to this sort of counterfeiting, OEMs with arm’s-length ties (e.g., Nike) or thoroughly embedded ties (e.g., Apple)? This is another way of asking about the trade-offs of greater monitoring against the diminution of the credible threat of exit that comes with mutual dependence. Anyway, interesting stuff to think about for OT folks who are into alliances, contracting, etc.
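Following up on the first link above, here is a minimal sketch (my own illustration, not from the linked pieces) of the PRNG-versus-true-randomness distinction: a seeded pseudo-random generator is fine, and handy for reproducibility, when the data aren’t trying to outguess you, while anything adversarial should draw on the OS entropy pool instead.

```python
import random
import secrets

data = [2.3, 4.1, 3.7, 5.0, 4.4, 2.9]  # made-up toy data

# statistical use: a deterministic PRNG, seeded so the resample is reproducible
rng = random.Random(20110831)
bootstrap_resample = [rng.choice(data) for _ in data]
print("bootstrap resample:", bootstrap_resample)

# adversarial use: draw unpredictable bytes from the operating system instead
print("unpredictable token:", secrets.token_hex(8))
```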
TV Party Tonight!
| Gabriel |
A month or so ago bloggingheads had Alyssa Rosenberg and Peter Suderman (mp3 only), my two favorite politically-informed-but-not-hacks culture bloggers. In the course of their conversation they talked about “recapping” culture, which is where a blogger reacts in about 1000 words to each episode of a tv show, usually the day after it airs. I’m sure there were earlier precedents on Usenet forums, but I associate the development of this genre of criticism with Television Without Pity. TWOP recaps are almost Talmudic exegeses that take as long to read as the show itself takes to watch. There are currently many other recap sites, most notably The Onion’s tv club, and other bloggers do just one or two shows, as Alyssa is currently doing with Breaking Bad and True Blood. It’s a very interesting genre of writing and helps illuminate some theoretical issues with the superstar effect and the demand structure for entertainment.
The superstar effect is of course Sherwin Rosen’s observation that cultural products and cultural workers have a truly ridiculous level of inequality. Rosen first noted that a scope condition is a technology for infinite reproducibility, and this has held up. However, his theoretical mechanism was ordinal selection that was hyper-sensitive to infinitesimal quality differences, and later research has pretty definitively discarded that mechanism. Rather, most everybody now agrees that the superstar effect reflects some kind of cumulative advantage mechanism and the only question is exactly how it works. We know for a fact from Salganik’s music lab work that information cascades are a part of this, but that doesn’t mean that there aren’t also other cumulative advantage mechanisms at work.
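As a toy illustration of what a cumulative advantage mechanism looks like (a minimal sketch of my own, not Rosen’s model or Salganik’s experimental design), imagine identical songs where each new listener picks in proportion to prior plays plus a small constant baseline appeal; even with no quality differences at all, the play counts come out wildly unequal:

```python
import random

n_songs, n_listeners, baseline = 20, 10_000, 1.0
plays = [0] * n_songs

for _ in range(n_listeners):
    # each listener picks in proportion to current popularity plus a small constant
    weights = [p + baseline for p in plays]
    winner = random.choices(range(n_songs), weights=weights)[0]
    plays[winner] += 1

plays.sort(reverse=True)
print("top song:", plays[0], "plays; median song:", plays[n_songs // 2], "plays")
```

Which specific flavor of cumulative advantage is doing the work in real markets (information cascades, network externalities, addiction goods) is exactly the question the rest of this post chews on.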
Probably the first article to propose a cumulative advantage mechanism for the superstar effect was Moshe Adler’s “Stardom and Talent.” Adler is often cited as synthesizing network externalities and the superstar effect, that is, people read him as articulating a model of “watercooler entertainment” where entertainment is a mixed coordination game (aka, “battle of the sexes“) consumed mostly or entirely for its utility in providing topics of conversation. When you see people citing Adler they are usually arguing that cultural consumption is a means to an end of socializing. For example, imagine that (like any sane human being) you find watching golf on tv to be incredibly tedious but you force yourself to watch it so that you have something to chit chat about with your boss, who is a big golfer.
This is a compelling model, but it’s not actually the model Adler proposed, in part because he’s coming from a theoretical background that emphasizes demand (i.e., micro-economics) rather than a tradition that emphasizes homophily (i.e., sociology). What Adler actually wrote is that chit chat is a means of cultivating taste in entertainment addiction goods. Adler starts from the premise that many art forms function as addiction goods (aka, acquired tastes). However, it is often difficult to consume enough of the art to get to a place where the addiction good has positive expected value, and so we use discourse about the art to heighten the addiction and thereby increase the utility of arts consumption. That is, I discuss a tv show with you because it helps me develop my relationship with the tv show, not because it helps me develop my relationship with you. We can see this in a formal setting when people take “[wine / opera / painting] appreciation” classes, where (in price theory terms) the class increases your addiction to the good even more than simply consuming the good would.
Adler’s model seems a bit on the aspy side and, like I said, people often get it backwards when they cite it, perhaps because they forget how weird it is and memory reconstructs the article’s argument to be more intuitive. Nonetheless, I think that Adler’s original model is also pretty compelling. Notably, there’s no reason why the causation has to go only one way. It could be endogenous or it might even be contingent, with “watercooler” for some types of art and “addiction good” for others.
These are subtly different models, and they yield implications that are distinguishable in theory (though they may be hard to disentangle in practice). In particular, I’m thinking that we can use Omar Lizardo’s argument about the different types of network ties supported by high culture versus pop culture. Omar argues that since pop culture forms a more universal social lubricant it should be (and in fact is) associated more with weak ties, whereas high culture is tricky enough that it relies more on strong ties.
If we extrapolate this out, we can interpret it as meaning that the “watercooler” network externality effect (i.e., the common misreading of Adler) is a mechanism that supports cumulative advantage for shows that are very accessible and not terribly nuanced. That is, you might watch American Idol in order to have a bunch of two-minute conversations with acquaintances and strangers whom you normally come into contact with anyway. An important corollary is that you wouldn’t normally seek out fellow fans of crap but just make sure that you’re sufficiently familiar with crap to hold your own in a conversation with random people.
In contrast, we can use the “addiction goods” model (i.e., Adler’s actual argument) to explain consumption of less accessible cultural objects of the sort that might sustain an entire dinner’s worth of conversation. The objects might even be so inscrutable that they are difficult to consume without having an interlocutor to help you make sense of them, and so you might either seek out strangers who already consume the object or try to convince a close friend to consume the object as well so you can discuss it together. For instance, if you read the first paragraph of this post and said “I don’t know or care about this Alyssa person but I’m going to click the link because I’m hoping somebody can help me understand what’s the deal with Hank’s mineral collection,” then that would be an illustration of the addiction good model at work. Now if it’s just people who already consume a show finding each other, that’s not cumulative advantage but homophily. However, there is cumulative advantage if you start watching a show because your favorite blogger is recapping it, or if you read a book to participate in a book club, or if you buy your best friend a box set of the first season of Battlestar Galactica so you have someone with whom to discuss the downward spiral of Gaius Baltar. In this sense recapping is a complement to the increasing narrative complexity of popular entertainment, and one way to see this is that people tend to recap shows with a serial rather than episodic structure.