Posts tagged ‘sociology of science’

You Broke Peer Review. Yes, I Mean You

| Gabriel |

I’m as excited as anybody about Sociological Science, since it promises a clean break from the “developmental” model of peer review by moving towards an entirely evaluative model. That is, no more anonymous co-authors making your paper worse with a bunch of non sequiturs or footnotes with hedging disclaimers. (The journal will feature frequent comments and replies, which makes debate about a paper a public dialog rather than a secret hostage negotiation). The thing is, though, that optimistic as I am about the new journal, I don’t think it will replace the incumbent journals overnight, and so we still need to fix review at the incumbent journals.

So how did peer review get broken at the incumbent journals?

jaccuse

You and I broke it.

Your average academic’s attitude towards changes demanded in R&R is like the Goofy cartoon “Motor Mania.”

In this cartoon Goofy is a meek pedestrian dodging aggressive drivers but as soon as he gets behind the wheel himself he drives like a total asshole. Similarly, as authors we all feel harassed by peer reviewers who try to turn our paper into the paper they would have written, but then as reviewers ourselves we develop an attitude of “yer doin it wrong!!!” and start demanding they cite all our favorite articles and with our favorite interpretations of those articles. (Note that in the linked post, Chen is absolutely convinced that she understands citations correctly and the author has gotten them wrong out of carelessness, without even considering the possibility that the interpretive flaw could be on her end or that there might be a reasonable difference of opinions).

So fixing peer review doesn’t begin with you, the author, yelling at your computer “FFS reviewer #10, maybe that’s how you would have done it, but it’s not your paper” (and then having a meeting with your co-authors that goes something like this):

And then spending the next few months doing revisions that feel like this:

And finally summarizing the changes in a response memo that sounds like this:

Nor, realistically, can fixing peer review happen from the editors telling you to go ahead and ignore comments 2, 5, and 6 of reviewer #6. First, it would be an absurd amount of work for the editors to adjudicate the quality of comments. Second, from the editor’s perspective the chief practical problem is recruiting reviewers and getting timely reviews from them and so they don’t want to alienate the reviewers by telling them that half their advice sucks in their cover letter any more than you want to do that in your response memo.

Rather, fixing peer review has to begin with you, the reviewer, telling yourself “maybe I would have done it another way myself, but it’s not my paper.” You need to adopt a mentality of “is it good how the author did it” rather than “how could this paper be made better” (read: how would I have done it). That is the whole of being a good reviewer, the rest is commentary. That said, here’s the commentary.

Do not brainstorm
Responding to a research question by brainstorming possibly relevant citations or methods is a wonderful and generous thing to do when a colleague or student mentions a new research project but it’s a thoroughly shitty thing to do as a peer reviewer. There are a few reasons why the same behavior is so different in two different contexts.

First, many brainstormed ideas are bad. When I give you advice in my office, you can just quietly ignore the ideas I give you that don’t work or are superfluous. When I give you advice as a peer reviewer there is a strong presumption that you take the advice even if it’s mediocre, which is why almost every published paper has a couple of footnotes along the lines of “for purposes of this paper we assume that water is wet” or “although it has almost nothing to do with this paper, it’s worth noting that Author (YEAR) is pretty awesome.” Of course some suggestions are so terrible that the author can’t take them in good conscience, but in such cases the author needs to spend hours or days per suggestion writing an extremely deferential memo apologizing for not implementing the reviewer’s suggestions.

Second, many brainstormed ideas are confusing. When I give you advice in my office you can ask follow-up questions about how to interpret and implement it. When I give advice as a peer reviewer it’s up to you to hope that you read the entrails in a way that correctly augurs the will of the peer reviewers. As a related point, be as specific as possible. “This paper needs more Bourdieu” is not a terribly useful comment (indeed, “cite this” comments without further justification are usually less about any kind of intellectual content than they are about demanding shibboleths or the recitation of a creedal confession) whereas it might actually be pretty helpful to say “your argument about the role of critics on pages 4-5 should probably be described in terms of restricted field from Bourdieu’s Field of Cultural Production.” (Being specific has the ancillary benefit that it’s costly to the reviewer, which should help you maintain the discipline to thin the mindfart herd stampeding into the authors’ revisions.)

Third, ideas are more valuable at the beginning of a project than at the end of it. When I give you advice about your new project you can use it to shape the way the project develops organically. When I give it to you as a reviewer you can only graft it on after the fact. My suggested specification may check the robustness of your finding or my suggested citation may help you frame your theory in a way that is more appealing, but they can’t help you develop your ideas because that ship has sailed.

That’s not to say that you shouldn’t give an author advice on how to fix problems with the paper. However it is essential to keep in mind that, no matter how highly you think of your own expertise and opinions, the author doesn’t want to hear it. When you give advice, think in terms of “is it so important that these changes be made that I upset the author and possibly delay publication at a crucial career point.” Imagine burning a $20 bill for every demand you make of the author and ask yourself if you’d still make it. Trust me, the author would pay a lot more than $20 to avoid it — and not just because dealing with comments is annoying but because it’s time-consuming and time is money. It usually takes me an amount of time that is at least the equivalent of a course release to turn around an R&R, and at most schools a course release in turn is worth about $10,000 to $30,000 if you’re lucky enough to raise the grants to buy them. If you think about the productivity of science as a sector, then ask yourself if your “I’m just thinking out loud” comment that takes the author a week to respond to is worth a couple thousand dollars to society. I mean, I’ve got tenure so in a sense I don’t care, but I do feel a moral obligation to give society a good value in exchange for the upper middle class living it provides me and I don’t feel like I’m getting society its money’s worth when I spend four months of full-time work to turn around one round of R&R instead of getting to my next paper. This brings me to my next point…

Distinguish demands versus suggestions versus synapses that happened to fire as you were reading the paper
A lot of review comments ultimately boil down to some variation on “this reminds me of this citation” or “this research agenda could go in this direction.” OK, great. Now ask yourself, is it a problem that this paper does not yet do these things or are these just possibilities you want to share with the author? Often as not they’re really just things you want to share with the author but the paper is fine without them. If so, don’t demand that the author do them. Rather just keep it to yourself or clearly demarcate these as optional suggestions that the author may want to consider, possibly for the next paper rather than the revision of this paper.

As a related issue, demonstrate some rhetorical humility. Taking a commanding and indignant tone doesn’t mean you know what you’re talking about. On a recent review I observed, I noticed that one of the reviewers, whose (fairly demanding) comments seemed to reflect a deep understanding of the paper, nonetheless used a lot of phrases like “might,” “consider,” and “could help,” whereas another reviewer who completely missed the point of the paper was prone to phrases like “needs to” and “is missing.”

There’s wrong and then there’s difference of opinion
On quite a few methodological and theoretical issues there is a reasonable range of opinion. Don’t force the author to weigh in on your side. It may very well be appropriate to suggest that the author acknowledge the existence of a debate on this subject (and perhaps briefly explore the implications of the alternative view) but that’s a different thing from expecting that the author completely switch allegiances because error has no rights. Often such demands are tacit rather than explicit, just taking for granted that somebody should use, I don’t know, Luhmann, without considering that the author might be among the many people who if told “you can cite Luhmann or you can take a beating” would ask you “tell me more about this beating? will there be clubs involved?”

Popepiusix_CROP

For instance, consider Petev ASR 2013. The article relies heavily on McPherson et al ASR 2006, which is an extremely controversial article (see here, here, and here). One reaction to this would be to say the McPherson et al paper is refuted and ought not be cited. However Petev summarizes the controversy in footnote 10 and then in footnote 17 explains why his own data is a semi-independent (same dataset, different variables) corroboration of McPherson et al. These footnotes acknowledge a nontrivial debate about one of the article’s literature antecedents and then situate the paper within the debate. No matter what your opinion of McPherson et al 2006, you should be fine with Petev relying upon and supporting it while parenthetically acknowledging the debate about it.

There are also issues of an essentially theoretical nature. I sat on one of my R&Rs for years in large part because I’m using a theory in its original version while briefly discussing how it would be different if we were to use a schism of the theory, while one of the reviewers insists that I rewrite it from the perspective of the schismatic view. Theoretical debates are rarely an issue of decisive refutation or strictly cumulative knowledge; rather, at any given time there’s a reasonable range of opinions, and you shouldn’t demand that the author go with your view but at most that they explore its implications if they were to. Most quants will suggest robustness checks against alternative plausible model specifications without demanding that these alternative models appear in the actual paper’s tables; we should have a similar attitude towards treating robustness or scope conditions to alternative conceptions of theory as something for the footnotes rather than a root-and-branch reconceptualization of the paper.

There are cases where you fall on one side of a theoretical or methodological gulf and the author on another to the extent that you feel that you can’t really be fair. For instance, I can sometimes read the bibliography of a paper, see certain cites, and know instantly that I’m going to hate the paper. Under such circumstances you as the reviewer have to decide if you’re going to engage in what philosophers of science call “the demarcation problem” and sociologists of science call “boundary work” or you’re going to recuse yourself from the review. If you don’t like something but it has an active research program of non-crackpots then you should probably refuse to do the review rather than agreeing and inevitably rejecting. Note that the managing editor will almost always try to convince you to do the review anyway and I’ve never been sure if this is them thinking I’m giving excuses for being lazy and not being willing to let me off the hook, them being lazy about finding a more appropriate reviewer, or an ill-conceived principle that a good paper should be pleasing to all members of the discipline and thus please even a self-disclaimed hostile reader. Notwithstanding the managing editor’s entreaties, be firm about telling him or her, “no, I don’t feel I could be fair to a paper of type X, but please send me manuscripts of type Y or Z in the future.”

Don’t try to turn the author’s theory section into a lit review
moarcites
The author’s theory section should motivate the hypotheses. The theory section is not about demonstrating basic competence or reciting a creedal confession and so it does not need to discuss every book or article ever published on the subject or even just the things important enough to appear on your graduate syllabus or field exam reading list. If “AUTHOR (YEAR)” would not change the way we understand the submission’s hypotheses, then there’s no good reason the author needs to cite it. Yes, that is true even if the “omitted” citation is the most recent thing published on the subject or was written by your favorite grad student who you’re so so proud of and really it’s a shame that her important contribution isn’t cited more widely. If the submission reminds you of a citation that’s relevant to the author’s subject matter, think about whether it would materially affect the argument. If it would, explain how it would affect the argument. If it wouldn’t, then either don’t mention it at all or frame it as an optional suggestion rather than berating the author for being so semi-literate as to allow such a conspicuous literature lacuna.

By materially affect the argument I mostly have in mind the idea that in light of this citation the author would do the analysis or interpret the analysis differently. This is not the same thing as saying “you do three hypotheses, this suggests a fourth.” Rather it’s that this literature shows that doing it that way is ill-conceived and you’re better off doing it this way. It’s simplest if you think about it in terms of methods, where we can imagine a previous cite demonstrating how important it is for this phenomenon that one model censorship, specify a particular form for the dependent variable, or whatever. Be humble in this sort of thing though lest it turn into brainstorming.

Another form of materially affecting the argument would be if the paper is explicitly pitched as novel but it is in fact discussing a well understood problem. It is not necessarily a problem if the article discusses an issue in terms of literature X but does not also review literature Y that is potentially related. However it is a problem if the author says nobody has ever studied issue A in fashion B when there is in fact a large literature from subfield Y that closely parallels what the author is pitching. More broadly, you should call authors on setting up a straw man lit review, one special case of which is “there is no literature.” (Note to authors: be very careful with “this is unprecedented” claims). Again, be humble in how you apply this lest it turn into a pretext for demanding that every article not only motivate its positive contribution, but also be prefaced with an exhaustive review that would be suitable for publication in ARS.

There is one major exception to the rule that a paper should have a theory section and not a lit review, which is when the authors are importing a literature that is likely to be unfamiliar to their audience and so the audience needs more information than usual to get up to speed. Note though that this is an issue best addressed by the reviewers who are unfamiliar with the literature and for whom it is entirely appropriate to say something like “I was previously unfamiliar with quantum toad neuropathology and I suspect other readers will be as well, so I ask that, rather than assuming a working knowledge of this literature, the author please add a bit more background information to situate the article and point to a good review piece or textbook for those who want even more background.” Of course that’s rarely how the “do more lit review” comments go. Rather such comments tend to be from people with a robust knowledge of theory X and they want to ensure that the authors share that knowledge and gavage it into the paper’s front end. I’m speaking from personal experience, as on several occasions I have used theories that are exotic to sociologists, and while several of the reviewers said they were glad to learn about this new-to-them theory and how it fits with more mainstream sociology like peanut butter and chocolate, nobody asked for more background on it. And I’m cool with that since it means my original drafts provided sufficient background info for them to get the gist of the exotic theory and how it was relevant. Of course, I did get lots of “you talk about Bourdieu, but only for ten pages when you could easily go for twenty.” That is, nobody asks to hear more about the thing they didn’t know before, where a little more background would actually help them get up to speed, but everybody wants to yell “play Freebird!” at the material they already know by heart. This is exactly backwards of how it should be.

damnfreebird

Don’t let flattery give you a big head
It is customary for authors to express their gratitude to the reviewers. You might take this to mean, “ahhh, Gabriel’s wrong about R&Rs being broken,” or more likely “that may be true of other reviewers, but I provide good advice since, after all, they thank me for it.” Taking at face value an author who gushes about what a wonderful backseat driver you are is like watching a prisoner of war saying “I would like to thank my captors for providing me with adequate food and humane treatment even as my country engages in unprovoked imperialist aggression against this oppressed people.” Meanwhile he’s blinking “G-E-T-M-E-O-U-T-O-F-H-E-R-E” in Morse code.

Appreciate the constraints imposed on the author by the journal
between-rock-hard-place-aron-ralston
Many journals impose a tight word count. When you ask an author to also discuss this or that, you’re making it very difficult for them to keep their word count. One of the most frustrating things as an author is getting a cover letter from the editor saying “Revise the manuscript to include a half dozen elaborate digressions demanded by the reviewers, but don’t break your word count.”

Some journals demand that authors include certain material and you need to respect that. ASR is obsessed with articles speaking to multiple areas of the discipline. This necessarily means that an article that tries to meet this mandate won’t be exclusively oriented towards your own subfield, and it may very well be that its primary focus is on another literature, with its interest in your own being secondary. Don’t view the author as an incompetent member of your own subfield but as a member of another subfield trying (under duress) to build bridges to yours. Similarly some journals demand implications for social policy or for managers. Even if you would prefer value neutrality (or value-laden but with a different set of values) or think it’s ridiculous to talk as if firms will change their business practices because somebody did a t-test, appreciate that this may be a house rule of the journal and the author is doing the best she can to meet it.

Stand up to the editors
authorsalone
You can be the good guy. Or if necessary, you can demand a coup de grace. But either way you can use your role as a reviewer to push the editors and your fellow reviewers towards giving the authors a more streamlined review process.

First, you can respond to the other parts of the reviews and response memo from the previous round. If you think the criticisms were unfair or that the author responded to them effectively, go ahead and say so. It makes a big difference to the author if she can make explicit that the other reviewers are with her.

Second, you can cajole the editors to make a decision already. In your second-round R&R review tell the editors that there’s never going to be a complete consensus among the reviewers and they should stop dicking the authors around with R&Rs. You can refuse to be the dreaded “new reviewer.” You can refuse to review past the first-round R&R. You can tell the editors that you’re willing to let them treat your issues as a conditional accept adjudicated by them rather than as another R&R that goes back to you for review.

Just as important as being nice, you can tell the editors to give a clean reject. Remember, an R&R does not mean “good but not great” or “honorable mention” but “this could be rewritten to get an accept.” Some flaws (often having to do with sampling or generalizability) are of a nature that they simply can’t be fixed, so even if you like the other aspects of the paper you should just reject. Others may be fixable in principle (often having to do with the lit review or model specification) but in practice doing so would require you to rewrite the paper for the authors, and it benefits nobody for you to appoint yourself anonymous co-author. Hence my last point…

Give decisive rejections

I’ve emphasized how to be nice to the authors by not burdening them with superfluous demands. However it’s equally important to be decisive about things that are just plain wrong. I have a lot of regrets about my actions as a peer reviewer and if I were to go through my old review reports right now I’d probably achieve Augustinian levels of self-reproach. Many of them are of the nature of “I shouldn’t have told that person to cite/try a bunch of things that didn’t really matter because by so doing I was being the kind of spend-the-next-year-on-the-revisions-to-make-the-paper-worse reviewer I myself hate to get.” However, I don’t at all regret, for instance, a recommendation to reject that I wrote in which I pointed out that the micro-mechanisms of the author’s theory were completely incompatible with the substantive processes in the empirical setting and that the quantitative model was badly misspecified. Nor do I regret recommending to reject a paper because it relied on really low quality data and its posited theoretical mechanism was a Rube Goldberg device grounded in a widely cited but definitively debunked paper. Rather my biggest regret as a reviewer is that I noticed a manuscript had a grievous methodological flaw that was almost certainly entirely driving the results but I raised the issue in a hesitant fashion and the editor published the paper anyway. As I’ve acquired more experience on both sides of the peer review process, I’ve realized that being a good peer reviewer isn’t about being nice, nor is it about providing lots of feedback. Rather being a good reviewer is about evaluating which papers are good and which papers are bad and clearly justifying those decisions. I’m honored to serve as a consulting editor for Sociological Science because that is what that journal asks of us, but I also aspire to review like that regardless of what journal I’m reviewing for and I hope you will too. (Especially if you’re reviewing my papers).

November 18, 2013 at 9:07 am 12 comments

Cultural Learnings of Economics for Make Benefit Glorious Discipline of Sociology

| Gabriel |

[Note, if you subscribe by RSS or email you might have gotten an earlier and incomplete version of this that I posted by accident on 5/25/13]

A few weeks ago the political scientist Henry Farrell posted a point-by-point critique of an LA Review of Books essay that was smugly denouncing economics while getting pretty much all its facts about economics dead wrong. (Most notably it confused public choice with game theory in ways that are extremely funny if you have a working knowledge of both literatures). The thing that made me really cringe about the LARB article though, which was written by a non-academic journalist/ social critic, was how if you told me it was written by a sociologist and got through peer review at a soc journal, I would have believed you.

Sociologists love to talk about how obtuse and limited in vision economists are but we often do so with only a vague awareness of how they do things but a pervasive suspicion that whatever they’re doing, it’s probably nefarious. It’s kind of like hearing peasants describe Jews. At this point I wouldn’t be surprised to hear a sociologist claim that economist tears prevent AIDS, or at the very least that they have horns.

The main reason for this is that we tend not to study economics itself, at least not on any kind of systematic basis but rather learn about it by reading polemical criticisms of economics’ excesses and/or intrusions into sociological turf. Which is fair enough since it’s hard enough to learn your own discipline without getting another too, but it does give us a rather particular vantage point that’s not at all emic. So rather than reading stuff that economists tend to consider fundamental we might read specific works in economics that either seem to be internal criticisms that grope towards sociological enlightenment (e.g., Akerlof, Williamson) or we read stuff that tries to reconceptualize sociological phenomena as exchange (e.g., Becker or Posner on sex and the family) and which tends to involve bizarre epi-orbit type arguments (e.g., “rationally maximize bequests”) or simply make bad predictions we can debunk.

(Note though that Pierre Bourdieu is a lot closer to Gary Becker than you’d think based on the kind of mood affiliation heuristic in which we’re supposed to love one and boo and hiss at the other. Not only are both of them known primarily for extending the metaphor of “capital,” but Bourdieu’s theory of gifts is very Becker/Posner-like in seeing gifts as ultimately a calculated exchange).

A slightly more charitable way to put it is that your average sociologist’s understanding of economics is a lot like learning about Gnosticism by reading Against Heresies. Irenaeus had himself read Valentinus and knew enough about Gnosticism to intelligently critique it from the perspective of proto-orthodoxy, but most later Christians and historians knew Gnosticism only through Irenaeus’s arguments against it. In this analogy actually learning and reading economics for yourself is like finding the Nag Hammadi library. Once you’ve translated AER and a Principles textbook out of Coptic, you’ll see that they do indeed say a lot of the things we attribute to them, others of their arguments we characterize uncharitably to the point of being barely recognizable, much of what we think they hold central is actually incidental in their own conception, there’s a lot of stuff they care about which we never noticed, and there’s actually a lot of overlap.

Now mind you, it’s not like economists have a clear understanding of what we do either, with their understanding generally falling into three categories:

  • Homo sociologicus ordinarius – A politically correct ninny with more indignation than expertise
  • Homo sociologicus reticularis – Social network analysts who make cool pictures and have mastered a technical expertise different from but on par with anything economists do
  • Homo particularis sociologicus – A particular colleague or noteworthy scholar who happens to be a sociologist but with their identity and contribution being understood as idiosyncratic rather than disciplinary

On the other hand, the economic folklore about sociology is different in character from our folklore about them, insofar as economists’ views of the other social sciences are like how Bukowski, when asked what he thought about another poet, would always reply “I don’t think about him.” In that sense econ’s ignorant understanding of soc is more like our understanding of anthro than our understanding of economics, since there’s a big difference between having a vague understanding of a discipline that you’re dedicated to critiquing and a vague understanding of a discipline that you mostly just ignore.

May 28, 2013 at 7:28 am 15 comments

wos2tab.pl

| Gabriel |

One of my grad students is doing some citation network analysis, for which the Python script (and .exe wrapper) wos2pajek is very well-suited. (Since most network packages can read “.net” this is a good idea even if you’re not using Pajek).

However the student is also interested in node level attributes, not just the network. Unfortunately WOS queries are field-tagged which is kind of a pain to work with and the grad student horrified me by expressing the willingness to spend weeks reshaping the data by hand in Excel. (Even in grad school your time is a lot more valuable than that). To get the data into tab-delimited text, I modified an earlier script I wrote for parsing field-tagged IMDb files (in my case business.list but most of the film-level IMDb files are structured similarly). The basic approach is to read a file line-by-line and match its contents by field-tag, saving the contents in a variable named after the tag. Then when you get to the new record delimiter (in this case, a blank line), dump the contents to disk and wipe the variables. Note that since the “CR” (cited reference) field has internal carriage returns it would require a little doing to integrate into this script, which is one of the reasons you’re better off relying on wos2pajek for that functionality.

#!/usr/bin/perl
#wos2tab.pl by ghr
#this script converts field-tagged Web Of Science queries to tab-delimited text
#for creating a network from the "CR" field, see wos2pajek
#note, you can use the info extracted by this script to replicate a wos2pajek key and thus merge

use warnings; use strict;
die "usage: wos2tab.pl <wos data>\n" unless @ARGV==1;

my $rawdata = shift(@ARGV);

my $au = ""; #author
my $ti = ""; #title
my $py = ""; #year
my $j9 = ""; #j9 coding of journal title
my $dt = ""; #document type

# to extract another field, work it in along the lines of the existing vars
# each var must be
# 1. declared (and initialized to "") with a "my" statement above
# 2. added to the header in the first "print OUT" statement
# 3. given its own match-and-store "if" block in the while loop
# 4. handled in the blank line match block:
#  4a. added to the record's "print OUT" statement
#  4b. given a clear statement

open(IN, "<$rawdata") or die "error opening $rawdata for reading\n";
open(OUT, ">$rawdata.tsv") or die "error creating $rawdata.tsv\n";
print OUT "au\tdt\tpy\tti\tj9\n";
while (<IN>) {
	if($_ =~ m/^AU/) {
		$au = $_;
		$au =~ s/\015?\012//; #manual chomp
		$au =~ s/^AU //; #drop leading tag
		$au =~ s/,//; #drop comma -- author only
	}
	if($_ =~ m/^DT/) {
		$dt = $_;
		$dt =~ s/\015?\012//; #manual chomp
		$dt =~ s/^DT //; #drop leading tag
	}
	if($_ =~ m/^TI/) {
		$ti = $_;
		$ti =~ s/\015?\012//; #manual chomp
		$ti =~ s/^TI //; #drop leading tag
	}
	if($_ =~ m/^J9/) {
		$j9 = $_;
		$j9 =~ s/\015?\012//; #manual chomp
		$j9 =~ s/^J9 //; #drop leading tag
	}
	if($_ =~ m/^PY/) {
		$py = $_;
		$py =~ s/\015?\012//; #manual chomp
		$py =~ s/^PY //; #drop leading tag
	}
	
	#when blank line is reached, write out and clear memory 
	if($_=~ /^$/) {
		print OUT "$au\t$dt\t$py\t$ti\t$j9\n";
		$au = "" ;
		$dt = "" ;
		$ti = "" ;
		$py = "" ;
		$j9 = "" ;
	}
}
#flush the final record in case the file doesn't end with a blank line
if(defined($au) && $au ne "") {
	print OUT "$au\t$dt\t$py\t$ti\t$j9\n";
}
close IN;
close OUT;
print "\ndone\n";
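For those who would rather stay in Python (where wos2pajek already lives), the same logic is easy to sketch: buffer values keyed by field tag and emit one row per blank-line-delimited record. This is a minimal illustration, not a drop-in replacement; the sample record is invented for demonstration, and like the Perl script it ignores multi-line fields such as CR.

```python
# Minimal Python sketch of the wos2tab logic: match field tags line by line,
# buffer the values, and emit one row per blank-line-delimited record.

FIELDS = ["AU", "DT", "PY", "TI", "J9"]  # output column order, as in wos2tab.pl

def wos2rows(lines):
    """Parse field-tagged lines into rows of [AU, DT, PY, TI, J9]."""
    rows = []
    rec = {f: "" for f in FIELDS}
    for line in lines:
        line = line.rstrip("\r\n")
        if line == "":  # blank line marks the end of a record
            if any(rec.values()):
                rows.append([rec[f] for f in FIELDS])
            rec = {f: "" for f in FIELDS}
            continue
        tag, _, value = line.partition(" ")
        if tag in FIELDS:
            rec[tag] = value
    if any(rec.values()):  # flush the last record if no trailing blank line
        rows.append([rec[f] for f in FIELDS])
    return rows

# invented sample record for illustration
sample = """AU Rossman, Gabriel
TI Climbing the Charts
PY 2012
J9 PRINCETON U PRESS
DT Book
""".splitlines()

rows = wos2rows(sample)
```

Joining each row with tabs under a header line reproduces the Perl script’s TSV (modulo the Perl script also stripping the comma from AU). Note that continuation lines of multi-line fields start with whitespace rather than a tag, so both versions keep only the first line of each field.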

July 19, 2010 at 2:13 pm 6 comments

Misc Links, etc

| Gabriel |

  • David Grazian has a two part interview with the rock critic Chuck Klosterman at the Contexts podcast (part one and part two). There’s a lot of neat stuff in there about cultural reception, but the thing that really grabbed me was that Klosterman has become cynical about the interview as being anything more than an arena for cultural scripts, a suspicion I’ve shared.
  • There was just a big lawsuit over age discrimination in Hollywood (Variety and KCRW’s The Business). I haven’t seen it mentioned anywhere, but my bet is that at least one Bielby testified given that they published on exactly this issue in this industry and they do the expert witness thing very effectively.
  • Slate has just started a series of stories on how the army used social network analysis to get Saddam Hussein. So far it looks well worth reading but the “never before told” puffery is more than a little exaggerated. I remember hearing this story years ago and when David Petraeus rewrote the army field manual on counter-insurgency he added a (very good) chapter on social network analysis. Furthermore, Mark Bowden tells a similar story (albeit without the formal analysis) in his amazing book on the decline and fall of Pablo Escobar.
  • On a less grim, but equally sociological, note, Slate had a very cute video slide show on how films use class markers to make glamorous actresses look and sound working class.
  • In yet another Slate article, Reihan Salam writes a whimsical tongue-in-cheek rant about how white the Winter Olympics is. I think the article is funny, but what’s really interesting is the comments thread, most of which has completely missed the sarcasm of the article and imagines Salam as some kind of ethnic grievance monger rather than what he actually is, which is a Republican policy wonk with a weird sense of humor. There’s got to be a story in this about framing, political discourse, and all that Bill Gamson type stuff.
  • The Wall Street Journal has an article on autism research that relies heavily on Peter Bearman’s ginormous autism project. It’s a good article but if you’ll permit me my own grievance mongering, I think it’s interesting how the scientific pecking order comes through. The article doesn’t include the words “sociology” or “sociologist,” instead identifying him only as “Dr. Bearman” from “Columbia University.” In contrast, the other experts interviewed for the piece are identified as “a child psychiatrist at the UCLA Center for Autism Research and Treatment” and “a CDC epidemiologist,” or generically as “medical experts.” That is, it seems like the journalist, probably correctly, thought it would detract from the authority of the report to attribute it to somebody who doesn’t own a lab coat.
  • Turns out the reliability problems of the cloud aren’t just an issue with airplanes but, you know, at my desk. For the last few weeks I’ve had a lot of trouble reliably connecting to any Google service from UCLA. (No problems from home). This is just annoying when I want to read my RSS feeds but is a real problem when I’m trying to do things like check my calendar to make appointments. As such I’ve increasingly been migrating my stuff off the cloud and onto local applications on my laptop (which I have with me pretty much all of the time), treating the cloud as little more than a syncing platform. For instance, I access GMail through Mail.app, which lets me compose and read old mail even when I can’t connect to the service. For search I’ve mostly been using Bing for the simple reason that it’s more reliable, even though I prefer Google. The promise of the cloud was supposed to be that you can access your resources from any computer but it’s turning out that I can’t access it from the place I work most. I had been considering getting an ARM netbook running Android or Chrome, but what’s the point if it would turn into a paperweight whenever the server is lagging?

February 23, 2010 at 4:43 am 2 comments

I am shocked–shocked–to find scientists abusing peer review

| Gabriel |

A major climate lab in Britain was hacked (leaked?) last week and a lot of the material was really embarrassing. Stuff along the lines of obstruction of freedom of information requests, smoothing messy data, and using peer review and shunning to freeze out contradictory perspectives. From the WaPo write-up:

“I can’t see either of these papers being in the next IPCC report,” Jones writes. “Kevin and I will keep them out somehow — even if we have to redefine what the peer-review literature is!”

In another, Jones and Mann discuss how they can pressure an academic journal not to accept the work of climate skeptics with whom they disagree. “Perhaps we should encourage our colleagues in the climate research community to no longer submit to, or cite papers in, this journal,” Mann writes.

“I will be emailing the journal to tell them I’m having nothing more to do with it until they rid themselves of this troublesome editor,” Jones replies.

All I can say is:

Most people have been looking at this in terms of the science or politics of climate change, but I’m completely with Robin Hanson in thinking that those are non sequiturs and what’s really interesting about this is the (office) politics of science. I mean, is anyone who has ever been through peer review at all surprised to hear that peer reviewers can be malicious assholes willing to use power plays to effect closure against minority perspectives?

On the other hand, while I think this is an affront to decency, it doesn’t really give me severe problems as a matter of scientific epistemology. Sure, I’d rather that scientists embraced J.S. Mill’s ideal of the marketplace of ideas with a “let me hear you out and then if I’m still unconvinced I’ll give you my good faith rebuttal” attitude. Nonetheless, I’m enough of a Quinean/Kuhnian to think that science isn’t about isolated findings but the big picture, and the dominant perspective is probably still right, even if its adherents aren’t themselves exactly Popperians actively seeking out (and failing to find) evidence against their perspective.

November 23, 2009 at 4:39 pm

Science (esp. econ) made fun

| Gabriel |

In a review essay, Vromen talks about the (whodathunkit) popular book/magazine-column/blog genre of economics-made-fun that’s become a huge hit with the mass audience in the last 5 to 10 years. Although Vromen doesn’t mention it, this can be seen as a special case of the science-can-be-fun genre (e.g., Stephen Jay Gould’s short essays that use things like Hershey bars and Mickey Mouse to explain reasonably complex principles of evolutionary biology).

Vromen makes a careful distinction from the older genre of economists-can-be-funny (currently exemplified by the stand-up economist), which is really a special case of the general genre of scientists doing elaborate satires of their own disciplines for the benefit of their peers. There is an entire journal of this, but my all-time favorite example is a satire of mid-20th century psychology in the form of a review of the literature on when people are willing to pass the salt at the dinner table. Two excerpts from the “references” section should suffice to convince you to click the link and read the whole thing.

  • Festinger, R. “Let’s Give Some Subjects $20 and Some Subjects $1 and See What Happens.” Journal for Predictions Contrary to Common Sense 10, 1956, pp. 1-20.
  • Milgram, R. “An Electrician’s Wiring Guide to Social Science Experiments.” Popular Mechanics 23, 1969, pp. 74-87.

If you don’t remember what Festinger and Milgram actually did in the 50s and 60s this won’t be funny, but if you do it’s hilarious. Hence, the scientists-can-be-funny genre is a self-deprecating genre for an audience of insiders that simultaneously demonstrates the joker’s mastery of the field and the field’s foibles. In contrast, the science-can-be-fun genre is targeted to a mass audience and is about demonstrating the elegance and power of the field. The former inspires humility among practitioners, the latter awe among the yokels.

One of the interesting things about the econ-made-fun literary genre is that it is largely orthogonal to any theoretical distinction within scholarly economics. The most prominent “econ made fun” practitioners span such theoretical areas as applied micro (Levitt), behavioral (Ariely), and Austrian (Cowen). In part because the “econ made fun” genre exploded at about the same time as the Kahneman Nobel and in part because “econ made fun” tends to focus on unusual substantive issues (i.e., anything but financial markets), this has led a lot of people to conflate “econ made fun” and behavioral econ. I’ve heard Steve Levitt referred to as a “behavioral economist” several times. This drives me crazy because, at a theoretical level, behavioral economics is the opposite of applied micro, and in fact Levitt has done important work suggesting that behavioral econ may not generalize very well from the lab to the real world. That people (including people who ought to know better) nonetheless refer to him as a “behavioral economist” suggests to me that in the popular imagination literary genre is vastly more salient than theoretical content.

I myself occasionally do the “sociologists can be funny” genre (see here, here, and here) but these are basically elaborate deadpan in-jokes and I am under no illusions that anyone without a PhD would find them at all funny. I have no idea how to go about writing “sociology can be fun” (this is probably the closest I’ve come) along the lines of Levitt/Dubner or Harford, nor to be honest do I see any other sociologist doing it particularly well. There are plenty of sociologists who try to speak to a mass audience, but the tone tends to be professorial exposition or political exhortation rather than amusement at the surprising intricacy of social life. Fortunately Malcolm Gladwell has an intense and fairly serious interest in sociology and is very talented at making our field look fun.

November 10, 2009 at 4:40 am 1 comment

A Note on the Uses of Official Statistics

| Gabriel |

They are ourselves, I replied; and they see only the shadows of the images which the fire throws on the wall of the den; to these they give names, and if we add an echo which returns from the wall, the voices of the passengers will seem to proceed from the shadows.  — Plato

One of the points I like to stress to my grad students is that data is not an objective (or even unbiased) representation of reality but the result of a social process. The WSJ had a story recently on how we get the “jobs created or saved” figures around the stimulus bill and it makes me want to burn my Stata DVD, take a two-hour shower, and then switch to qualitative methods where at least I know that I would be responsible for any validity problems in my work.

The idea of “jobs created or saved” by a government policy is a meaningful concept in principle but in practice it’s essentially impossible to reckon with any certainty. It’s the kind of problem you might be able to approach empirically if it happened many times and there was some relatively exogenous instrument, but in a single instance you’re probably better off using an answer derived from theory than actually trying to measure it. Nonetheless the political process demands that it be answered empirically and the results are absurd.

The way the government has tried to measure “jobs created or saved” by the stimulus is by simply asking contractors or subcontractors how many jobs were created or saved in their firm by the contract. This involves both false positives of contractors exaggerating the number of jobs they created or saved and false negatives of firms that were not direct beneficiaries of contracts but increased or retained production in expectation of benefiting from the multiplier. In the case covered by the WSJ, a shoe store that sold nine pairs of boots for $100 each to the Army Corps of Engineers didn’t know what else to put and so said they saved nine jobs. When asked about this by the WSJ, the shoe store owner’s daughter/bookkeeper replied:

“The question, I would like to know is: How do you answer that? Did we create zero? Is it creating a job because they have boots and go out and work for the Corps? I would be really curious to hear how somebody does create a job. The formula is out there for anyone to create, and it’s just so difficult,” she said.

Who’d a thunk it, but apparently FA Hayek was reincarnated as a shoe store worker in Kentucky.

(h/t McArdle)

November 4, 2009 at 1:46 pm 4 comments
