Background Readings on Organizational Wokeness

Last year, Charles Lehman and I wrote pieces for the Autumn 2021 issue of City Journal (Charles’s essay, my essay) giving organizational explanations for the rise of woke capital. Here is a list of articles that Charles and I put together of scholarly works, mostly in sociology, that inspired our essays.

Background on Neo-Institutionalism

  • The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields (DiMaggio and Powell 1983)

EEO, Affirmative Action, and DEI

  • The Strength of a Weak State: The Rights Revolution and the Rise of Human Resources Management Divisions (Dobbin and Sutton 1998)
  • Have We Moved Beyond the Civil Rights Revolution? (Skrentny 2014)
  • Legal environments and organizational governance: The expansion of due process in the American workplace (Edelman 1990)
  • Legal Ambiguity and the Politics of Compliance: Affirmative Action Officers’ Dilemma (Edelman et al. 1991)

The Social Construction of Race 

Firm Responses to Activism 

  • The Nixon-in-China Effect: Activism, Imitation, and the Institutionalization of Contentious Practices (Briscoe and Safford 2008)
  • A Political Mediation Model of Corporate Response to Social Movement Activism (King 2008)
  • The Politics of Alignment and the ‘Quiet Transgender Revolution’ in Fortune 500 Corporations, 2008 to 2017 (Ghosh 2021)

Corporate Social Responsibility and Social Control

  • Social Responsibility Messages and Worker Wage Requirements: Field Experimental Evidence from Online Labor Marketplaces (Burbano 2016)
  • Corporate Social Responsibility as an Employee Governance Tool: Evidence from a Quasi-Experiment (Flammer and Luo 2017)

K-12 Education

July 14, 2022 at 6:30 pm

Resampling Approach to Power Analysis

A coauthor and I are doing power analyses based on a pilot test and we quickly realized that it's really hard to calculate power analytically for anything much more exotic than a t-test of means. As I usually do when there's no obvious solution, I decided to just brute force it with a Monte Carlo approach.

The algorithm is as follows:

  1. Start with pilot data
  2. Draw a sample with replacement of the target sample size n and do this trials times
  3. Run the estimation with each of these resampled datasets and see if enough of them achieve significance (with conventional power and alpha this means 80% have p<.05)

In theory, this approach should allow a power analysis for any estimator, no matter how strange. I was really excited about this and figured we could get a methods piece out of it until my coauthor pointed out that Green and MacLeod beat us to it. Nonetheless, I figured it's still worth a blogpost in case you want to do something that doesn't fit within the lme4 package, which is what Green and MacLeod's simr package wraps the above algorithm around. For instance, below I show how to do a power analysis for a stratified sample.

Import pre-test data

In a real workflow you would just use some minor variation on pilot <- read_csv("pilot.csv"). But as an illustration, we can assume a pilot study with a binary treatment, a binary DV, and n=6. Let us assume that 1/3 of the control group are positive for the outcome but 2/3 of the treatment group. Note that a real pilot should be bigger – I certainly wouldn’t trust a pilot with n=6.

library(tidyverse) # loads the dplyr, ggplot2, and tidyr functions used throughout

treatment <- c(0,0,0,1,1,1)
outcome <- c(0,0,1,0,1,1)
pilot <- as.data.frame(cbind(treatment,outcome))

I suggest selecting just the variables you need to save memory given that you’ll be making many copies of the data. In this case it’s superfluous as there are no other variables, but I’m including it here to illustrate the workflow.

pilot_small <- pilot %>% select(treatment,outcome)

Display observed results

As a first step, do the analysis on the pilot data itself. This gives you a good baseline and also helps you see what the estimation object looks like. (Unfortunately this varies considerably for different estimation functions).

est_emp <- glm(outcome ~ treatment, data = pilot, family = "binomial") %>% summary()
z_emp <- est_emp[["coefficients"]][,3]
est_emp
## 
## Call:
## glm(formula = outcome ~ treatment, family = "binomial", data = pilot)
## 
## Deviance Residuals: 
##       1        2        3        4        5        6  
## -0.9005  -0.9005   1.4823  -1.4823   0.9005   0.9005  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)
## (Intercept)  -0.6931     1.2247  -0.566    0.571
## treatment     1.3863     1.7321   0.800    0.423
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 8.3178  on 5  degrees of freedom
## Residual deviance: 7.6382  on 4  degrees of freedom
## AIC: 11.638
## 
## Number of Fisher Scoring iterations: 4

Note that we are most interested in the t or z column. In the case of glm, this is the third column of the coefficient object but for other estimators it may be a different column or you may have to create it manually by dividing the estimates column by the standard error column.
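
For estimators whose summary doesn't report a z or t column at all, you can construct it yourself; here is what that would look like using the glm object above (the column names are the ones summary() reports for glm):

z_manual <- est_emp[["coefficients"]][,"Estimate"] / est_emp[["coefficients"]][,"Std. Error"]
as.vector(z_manual) # same values as z_emp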

Now you need to see how the z column appears when conceptualized as a vector. Look at the values and see the corresponding places in the table. I wrapped it in as.vector() because some estimators give z as a matrix.

as.vector(z_emp)
## [1] -0.5659524  0.8003776

In this case, we can see that z_emp[1] is z for the intercept and z_emp[2] is z for the treatment effect. Obviously the latter is more interesting.

Set range of assumptions for resamples

Now we need to set a range of assumptions for the resampling.

trials is the number of times you want to test each sample size. Higher values for trials are slower but make the results more reliable. I suggest starting with 100 or 1000 for exploratory purposes and then going to 10,000 once you’re pretty sure you have a good value and want to confirm it. The arithmetic is much simpler if you stick with powers of ten.


nrange is a vector of values you want to test out. Note that the z value for your pilot gives you a hint. If it's about 2, you should try values similar to those in the pilot. If it's much smaller than 2, you should try values much bigger.

trials <- 1000 # how many resamples per sample size
nrange <- c(50,60,70,80,90,100) # values of sample size to test

Set up resampled data, do the regressions, and store the results

This is the main part of the script.

It creates the resampled datasets in a list called dflist. The list is initialized empty and dataframes are stored in the list as they are generated.

The z scores and sample size from each resample are stored in the results data frame.

dflist <- list()
k <- 1 # this object keeps track of which row of the results object to write to
results <- matrix(nrow = length(nrange)*trials, ncol=3) %>% data.frame() #adjust ncol value to be length(as.vector(z_emp))+1
colnames(results) <- c('int','treatment','n') # replace the vector with names for as.vector(z_emp) positions followed by "n." The names need not match the names in the regression table but should capture the same concepts.

for (i in 1:length(nrange)) {
  dflist[[i]] <- list()
  for (j in 1:trials) {
    dflist[[i]][[j]] <- sample_n(pilot_small,size=nrange[i],replace=T)
    est <- glm(outcome ~ treatment, data = dflist[[i]][[j]], family = "binomial") %>% summary() # adjust the estimation to be similar to whatever you did in the "test estimation" block of code, just using data=dflist[[i]][[j]] instead of data=pilot
    z <- est[["coefficients"]][,3] # you may need to tweak this line if not using glm
    results[k,] <- c(as.vector(z),nrange[i])
    k <- k +1
  }
}

# create vector summarizing each resample as significant (1) or not significant (0)
results$treatment.stars <- 0
results$treatment.stars[abs(results$treatment)>=1.96] <- 1

Interpret results

Dist of Z by sample size

As an optional first step, plot the distributions of z-scores across resamples by sample size.

results %>% ggplot(mapping = aes(x=treatment)) + 
  geom_density(alpha=0.4) + 
  theme_classic() + 
  facet_wrap(~n)

Number of Significant Resamples for Treatment Effect

Next make a table for what you really want to know, which is how often resamples of a given sample size give you statistical significance. This rate can be interpreted as power.

table(results$n,results$treatment.stars)
##      
##         0   1
##   50  355 645
##   60  252 748
##   70  195 805
##   80  137 863
##   90  105 895
##   100  66 934

As you can see, n=70 seems to give about 80% power. To confirm this and get a more precise value, you’d probably want to run the script again but this time with nrange <- c(67,68,69,70,71,72,73) and trials <- 10000.
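
If you prefer proportions to counts, you can read power directly off the same results object with a grouped mean; this is just another way of summarizing the table above:

results %>%
  group_by(n) %>%
  summarize(power = mean(treatment.stars))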

Stratified Samples

50/50 Strata (Common for RFTs)

Note that you can modify the approach slightly to have stratified resamples. For instance, you might want to ensure an equal number of treatment and control cases in each resample to mirror a 50/50 random assignment design. (This should mostly be an issue for relatively small resamples as for large resamples you are likely to get very close to the ratio in the pilot test just by chance.)

To do this we modify the algorithm by first splitting the pilot data into treatment and control data frames and then sampling separately from each before recombining, but otherwise using the same approach as before.

pilot_control <- pilot_small %>% filter(treatment==0)
pilot_treatment <- pilot_small %>% filter(treatment==1)

trials_5050 <- 1000 # how many resamples per sample size
nrange_5050 <- c(50,60,70,80,90,100) # values of sample size to test

nrange_5050 <- 2 * round(nrange_5050/2) # ensure all values of nrange are even

dflist_5050 <- list()
k <- 1 # this object keeps track of which row of the results object to write to
results_5050 <- matrix(nrow = length(nrange_5050)*trials_5050, ncol=3) %>% data.frame() #adjust ncol value to be length(as.vector(z_emp))+1
colnames(results_5050) <- c('int','treatment','n') # replace the vector with names for as.vector(z_emp) positions followed by "n." The names need not match the names in the regression table but should capture the same concepts.


for (i in 1:length(nrange_5050)) {
  dflist_5050[[i]] <- list()
  for (j in 1:trials_5050) {
    dflist_5050[[i]][[j]] <- rbind(sample_n(pilot_control,size=nrange_5050[i]/2,replace=T),
                                   sample_n(pilot_treatment,size=nrange_5050[i]/2,replace=T))
    est <- glm(outcome ~ treatment, 
               data = dflist_5050[[i]][[j]], 
               family = "binomial") %>% 
               summary()
    z <- est[["coefficients"]][,3]
    results_5050[k,] <- c(as.vector(z),nrange_5050[i])
    k <- k +1
  }
}

results_5050$treatment.stars <- 0
results_5050$treatment.stars[abs(results_5050$treatment)>=1.96] <- 1

Number of Significant Resamples for Treatment Effect With 50/50 Strata

table(results_5050$n,results_5050$treatment.stars)
##      
##         0   1
##   50  330 670
##   60  248 752
##   70  201 799
##   80  141 859
##   90   91 909
##   100  63 937

Not surprisingly, the sample size needed for roughly 80% power is still about n=70.

One Fixed Stratum and the Other Estimated

Or perhaps you know the size of the sample in one stratum and want to test the necessary size of another stratum. Perhaps a power analysis for a hypothesis specific to stratum one gives n1 as its necessary sample size but you want to estimate power for a pooled sample where n=n1+n2. Likewise, you may wish to estimate the necessary size of an oversample. Note that if you already have one stratum in hand you could modify this code to work, but you should just use the data for that stratum, not resamples of it.

For one fixed and one estimated stratum, let's assume our pilot test is departments A and B from the UCBAdmissions dataset, that we know we need n=500 for 80% power on some hypothesis specific to department A, and that we are trying to determine how many cases we need from department B in order to pool and analyze them together. I specify a dummy for gender (male) and a dummy for department A (vs B).

ucb_tidy <- UCBAdmissions %>% 
  as_tibble() %>% 
  uncount(n) %>% # expand the frequency table into one row per applicant
  mutate(male = (Gender=="Male"), 
         admitted = (Admit=="Admitted")) %>% 
  select(male,admitted,Dept)

ucb_A <- ucb_tidy %>% filter(Dept=="A") %>% mutate(depA=1)
ucb_B <- ucb_tidy %>% filter(Dept=="B") %>% mutate(depA=0)

n_A <- 500
trials_B <- 1000 # how many resamples per sample size
nrange_B <- c(50,100,150,200,250,300,350,400,450,500,550,600,650,700,750,800,850,900,950,1000) # values of sample size to test

dflist_B <- list()
k <- 1 # this object keeps track of which row of the results object to write to
results_B <- matrix(nrow = length(nrange_B)*trials_B, ncol=4) %>% data.frame() #adjust ncol value to be length(as.vector(z_emp))+1
colnames(results_B) <- c('int','male','depA','n') # replace the vector with names for as.vector(z_emp) positions followed by "n." The names need not match the names in the regression table but should capture the same concepts.

for (i in 1:length(nrange_B)) {
  dflist_B[[i]] <- list()
  for (j in 1:trials_B) {
    dflist_B[[i]][[j]] <- rbind(sample_n(ucb_B,size=nrange_B[i],replace=T),
                                sample_n(ucb_A,size=n_A,replace=T))
    est <- glm(admitted ~ male + depA, data = dflist_B[[i]][[j]], family = "binomial") %>% summary()
    z <- est[["coefficients"]][,3]
    results_B[k,] <- c(as.vector(z),nrange_B[i])
    k <- k +1
  }
}

# add dummy for dept

results_B$male.stars <- 0
results_B$male.stars[abs(results_B$male)>=1.96] <- 1

Number of Significant Resamples for Gender Effect With Unbalanced Strata

table(results_B$n,results_B$male.stars)
##       
##          0   1
##   50   108 892
##   100  132 868
##   150  128 872
##   200  120 880
##   250  144 856
##   300  133 867
##   350  153 847
##   400  194 806
##   450  181 819
##   500  159 841
##   550  186 814
##   600  168 832
##   650  185 815
##   700  197 803
##   750  201 799
##   800  194 806
##   850  191 809
##   900  209 791
##   950  183 817
##   1000 214 786

This reveals a tricky pattern. We see about 90% power when there are either 50 or 100 cases from department B (i.e., 550-600 total including the 500 from department A). With trials_B <- 1000 it’s a bit noisy but still apparent that the power drops as we add cases from B and then rises again along a U-shaped curve.

Normally you’d expect that more sample size would mean more statistical power because standard error is inversely proportional to the square root of degrees of freedom. The trick is that this assumes nothing happens to β. As it happens, UCBAdmissions is a famous example of Simpson’s Paradox and specifically the gender effects are much stronger for department A …

glm(admitted ~ male, data = ucb_A, family = "binomial") %>% summary()
## 
## Call:
## glm(formula = admitted ~ male, family = "binomial", data = ucb_A)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -1.8642  -1.3922   0.9768   0.9768   0.9768  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept)   1.5442     0.2527   6.110 9.94e-10 ***
## maleTRUE     -1.0521     0.2627  -4.005 6.21e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 1214.7  on 932  degrees of freedom
## Residual deviance: 1195.7  on 931  degrees of freedom
## AIC: 1199.7
## 
## Number of Fisher Scoring iterations: 4

… than the gender effects are for department B.

glm(admitted ~ male, data = ucb_B, family = "binomial") %>% summary()
## 
## Call:
## glm(formula = admitted ~ male, family = "binomial", data = ucb_B)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -1.5096  -1.4108   0.9607   0.9607   0.9607  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)  
## (Intercept)   0.7538     0.4287   1.758   0.0787 .
## maleTRUE     -0.2200     0.4376  -0.503   0.6151  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 769.42  on 584  degrees of freedom
## Residual deviance: 769.16  on 583  degrees of freedom
## AIC: 773.16
## 
## Number of Fisher Scoring iterations: 4

Specifically, department A strongly prefers to admit women whereas department B has only a weak preference for admitting women. The pooled model has a dummy to account for department A generally being much less selective than department B but it tacitly assumes that the gender effect is the same as it has no interaction effect. This means that as we increase the size of the department B resample, we’re effectively flattening the gender slope through compositional shifts towards department B and its weaker preference for women.
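
One way to see this in a single model is to pool the two departments and add an interaction term (an illustrative check, not part of the power analysis itself):

# pooled model with a male x depA interaction; the interaction term captures
# how much stronger the gender effect is in department A than in department B
glm(admitted ~ male*depA, data = rbind(ucb_A, ucb_B), family = "binomial") %>% summary()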

December 3, 2021 at 2:14 pm

Dungeon Crawling Together

I’ve been reading a lot of OSR games and they remind me a lot of the “traditionalist” phase of the genre trajectory model from Lena and Peterson’s 2008 ASR and Lena’s book Banding Together. OSR games are attempts to recreate Dungeons and Dragons as it was played in the 1970s, often by using the OGL (think Creative Commons or GPL) for the 2000s version of the game but then changing the rules to be more like 1970s D&D.

For instance, here are the first two sentences of Labyrinth Lord:

Labyrinth Lord is not new or innovative. This game exists solely as an attempt to help breathe back life into old-school fantasy gaming, to do some small part in expanding its fan base.

And here is the start of the intro to Swords and Wizardry:

In 1974, Gary Gygax (1938-2008) and Dave Arneson (1947-2009) wrote the world’s first fantasy role-playing game, a simple and very flexible set of rules that launched an entirely new genre of gaming. Unfortunately, the original rules are no longer in print, even in electronic format. The books themselves are becoming more expensive by the day, since they are now collector items. Indeed, there is a very good chance that the original game could, effectively, disappear. That’s why this game is published. When you play Swords & Wizardry, you are using those original rules.

The intro to the first edition of OSRIC basically says: we don't expect you to actually play with this book but with your old copies of the 1978 AD&D edition; however, we are writing this so you can create new AD&D-compatible content without getting sued.

The games vary in whether they are attempting to:

  1. recreate a very specific edition of D&D (eg, OSE is a retro-clone of the 1981 “B/X” Moldvay edition, Swords and Wizardry is a retro-clone of the “OD&D” 1974 edition, OSRIC is a retro-clone of “AD&D” from 1978, etc)
  2. take the overall feel of 1970s D&D while using some rules that were invented much later (eg, Basic Fantasy RPG, Five Torches Deep, Labyrinth Lord)
  3. put a distinct twist on the game, whether that's distinctive new rules (Dungeon Crawl Classics), a more metal version of the game (eg, Lamentations of the Flame Princess, Mork Borg), or an entirely different genre of story (Mecha Hack, Mothership, Mutant Crawl Classics)

Approach #2 seemed to dominate early on, in part for the practical reason that people didn’t know how far they could push the OGL, but more recent games tend to follow approach #1 or #3.

Anyway, OSR is a traditionalist stage but you can trace the whole game back and see all the stages.

  • Avant garde: Dave Arneson and Gary Gygax inventing the game as an oral tradition in midwestern wargaming circles
  • Scenes: The publication of OD&D and a distinctive split between the game as played by Midwest wargamers (who had access to the oral tradition) and Caltech students (who only had the incomprehensible published rules and so had to improvise, coming up with an unofficial addendum called Warlock which later influenced the published rules in part because the first revised edition of the official rules was by a Californian).
  • Industry: An explosion of sales in the late 1970s and early 1980s including tie-ins to other media.
  • Tradition: OSR community beginning c 2008

Anyway, this could be a dissertation topic for some grad student but this seems like an obvious mesearch trap and I am always of the opinion that research is not worth doing if it only says “case X (which I care about for reasons other than theory) fits theory Y.” Rather, research ought to say “case X fits theory Y in a way that suggests novel twist Z” and I’m not yet sure if there’s a novel twist here that leads us to reconceptualize Lena’s theory of creative communities rather than just saying “yup, as expected, RPGs are a creative community just like music.” If you find that twist, email me and let me know what you’re doing with it.

November 21, 2021 at 1:01 pm

Some Notes on Online Teaching

I spent basically all of spring quarter working on advice for colleagues as to how to teach online. In fall quarter, I actually did it myself and it was a different experience than the theoretical one. I had a huge learning curve over the course of the quarter — such a big one that if I ever do another online quarter I am probably re-recording most of my lectures even though as you’ll see recording and editing them was a ton of work.

Anyway, here are my notes and reflections on my first term of online teaching.

Making the Videos

The single biggest lesson is that online teaching is far more work than regular teaching. There’s always a lot of work answering emails, rewriting exams, etc, but in traditional teaching for an existing prep the actual lecture is just printing out my notes, trying to remember my microphone and HDMI dongle, and standing in front of an auditorium for two hours and change per week. I get the lecture hall 75 minutes twice a week for ten weeks so that’s about 25 hours of lecture per quarter, and maybe add an hour or two total for walking to the hall and printing out the notes, so about 27 hours spent lecturing per class per quarter. I spent almost that much time per class per week of instruction on the lectures. That is, I spent almost ten times more time on recording lectures in fall quarter than I usually do delivering lectures. And this is on top of the usual amount of work for emails, office hours, etc. (Actually I spent more time than usual on this stuff, but not absurdly more). If you think I’m exaggerating or you’re wondering how I found enough hours in the day to do this for the two lectures I taught in fall quarter, the answer is a) I pre-recorded seven weeks of lecture for one of my classes over the summer, b) I worked 60+ hour weeks all quarter, and c) I shirked a fair amount of service. Now that I have a good workflow I think I could get lecture down to 15 hours a week per class, but that’s still a lot more than 2.5 hours a week per class that it normally is.

A shopping list

Various bits of equipment really help.

  • 10" selfie ring light with stand ($35)
  • Gooseneck USB microphone ($20)
  • HD Webcam ($35)
  • Muslin backdrop, white 6′ x 9′ ($20)
  • Backdrop stand, at least 5′ wide preferably 7′ wide ($30-$40)
  • Filmora ($70)

Total $210-$220, plus tax. The most bang for the buck is in the selfie light. Next most important thing is the microphone as I find the audio recorded from a webcam is soft.

I bought a few more items (a different style microphone and a green screen) but didn’t find them useful so I’m leaving them off my list.

I also use Zoom, PowerPoint, PDF-XChange Editor ($43), and Paint.Net ($7) as part of my workflow but I assume everyone already has Zoom, presentation, PDF, and image editing software.

Mise en scene

It took a surprising amount of work to find a good camera setup. I ended up throwing out and re-recording several entire lectures because they looked like hostage videos, the kind where at the end my captors demand $1 million and a prisoner exchange for my safe release.

Here’s what works for me.

Set the camera at eye level about 18"-24" from where your face will be. Eye level means that if your camera is on an external display you will need to lower the display to its lowest setting and if it is on a laptop you will need to stack books under the laptop. I don’t recommend putting the camera on the mount in the selfie light unless you can work without notes.

Arrange your notes just below the camera so your sight line is more or less at the camera. You will want to scroll frequently so you are always looking at the top of your display and thereby keep your sightline near the camera. I keep my notes in a text editor but you can use Word, Chrome, Acrobat, or whatever makes you comfortable. If you use Powerpoint I suggest you don’t go full screen unless you can just take it in at a glance.

Place the selfie light immediately behind the camera. Make sure it barely clears the screen with your notes.

Place the gooseneck USB microphone as close as possible to you. Don’t worry if it appears in frame.

You don’t realize how wide the 16:9 widescreen aspect ratio is until you try to frame a shot and realize that there is not a single angle in your home or office that doesn’t compromise your privacy, look weird, etc. Hence my recommendation of a backdrop. The other reason I like a backdrop is I like to use pop-up text and this is much easier for you and more legible for the students if there’s a solid background than if there is, say, a bookcase as there was in my early videos.

Set the feet wide on the backdrop. You don’t need the extra few inches of height if you’re recording seated but you do need the added stability if you don’t want to constantly knock it over.

A 5′ backdrop can be 48" from a camera with a 16:9 aspect ratio. Note that this is assuming the camera is framed exactly right. I actually stuck a wooden spoon in my backdrop stand to extend the arm about 6" and make my backdrop about 5’6" wide and give myself a 3" margin of error on either side. I add a bit of backdrop slack on the short arm for balance. If you want to avoid a crude extender like my spoon, you might want a 7′ wide stand which means two stands with a cross-beam instead of one T-shaped stand.

48" of depth is actually pretty tight when you realize that this includes you, the space between you and the camera, and the space between you and the backdrop. There should be at least 18" from your face to the camera and at least a foot beind you to the backdrop. (The space between you and the backdrop helps avoid shadows, which create the hostage video effect. Diffuse light as you get from a selfie light or indirect daylight behind the camera also help with this.)

Set your chair slightly off-center so you appear on camera left. This gives you the option in editing of using the news caster effect of having captions or images appear next to you.

Recording

I use Zoom for recording. I open my personal meeting and record. I’d like to use Windows Camera but it doesn’t let me specify that I want video from the HD webcam and audio from the USB microphone. If you don’t have a USB microphone and aren’t planning to use screenshare, Windows Camera should be fine.

Note that if you share screen during the video that this will change the aspect ratio of the whole video unless the screen or window you are sharing is the same resolution as the camera (probably 1280×720 for a laptop’s webcam or 1920×1080 for an external HD webcam). I learned this the hard way, being very puzzled that there were black bars on every side of a recent video until I realized that halfway through I turned on share screen. Fortunately I was able to crop these bars in Filmora.

I like to use screen share for things that are kind of dynamic by nature, such as walking students through a NetLogo simulation. It hasn’t come up yet in a recorded lecture but I’d also use this to demo RStudio and Stata. If you share your whole screen, make sure to mute notifications and close any windows you don’t want your students to see.

I haven’t had occasion to do this in a recorded lecture, only in office hours, but to get the traditional blackboard experience I open Zoom on my tablet, go to share screen, and choose "share whiteboard." You can record this, but you have to go to [your university].zoom.us/meetings to retrieve the cloud recording.

If I make a mistake or my dog barks at a delivery person walking past or a family member makes a noise, I simply leave a few seconds of silence and start over at some earlier natural break, usually a few lines up in my notes. I later edit out the interruption/mistake and the pause. This is one of the main reasons to edit as it means you don’t need to do a perfect take. This is especially useful if you do long videos since the chance of an interruption or mistake, and the hassle of re-recording, goes up as the video grows longer.

When I complete the recording I copy it to Box but I keep that folder locally synced since trying to do video-editing from on-demand cloud storage is a painful experience. I plan on unsyncing the folder when the term is over.

Editing

As with data cleaning, it makes sense to treat the raw video files as read-only and convert them into clean files ready for distribution. In my class directory on Box I have subfolders for "/rawvideo," "/cleanvideo," and then one for each week of slides, for instance "/01_intro_and_econ."

I open a new file in Filmora and set it to 720p. My feeling is that nobody needs a full HD video of a glorified podcast so the only thing 1080p does is make the finished file take up more space on my hard drive and take longer to upload to the university web site. The worst thing about 1080p is that unless they’re very short, the files are so big that UCLA’s instructional website refuses the upload.

Note that I don’t need to have slides made yet.

I go through the video in Filmora. My first pass takes about 3-4x the runtime of the video and involves the following:

  • Edit out bad takes. I just use the scissors to cut the beginning and end of the bad material then right-click and delete.
  • Use the "titles" function to add pop-up text for key words. I set these in dark grey and put them to the right of me. (Remember, I frame the camera so I am camera left which leaves plenty of space on the right). Since I have a white backdrop the titles are clearly legible. If you have a complicated backdrop like a bookshelf or garden you may need to add a layer of a solid contrasting color below the text and optionally set the contrasting color’s opacity to about 50%. This contrasting color will make the text pop rather than blending into the background.
  • Use the "titles" function to create placeholders for the slides. I leave this in white and it’s just a description of the slide, which I create in PowerPoint as I edit the video. So if I create a histogram on the slides, the placeholder title may say "histogram."

I then finalize the slides in Powerpoint, export them to PDF, and then use PDF-X-Change Editor to convert the slides to a series of PNG files.

It’s now time for my second pass, which mercifully only takes about half the runtime. In this pass I find the placeholder titles and replace them with the PNGs. The PNGs may take the whole frame or I may crop and/or resize them so they take a partial frame which gives a news caster effect.

I then export the video. This takes about half the run-time on my computer but that just means I can’t use my video editing software for that time, it’s not active work for me. This may be faster or slower on your computer (my computer has a fast CPU but the video card is nothing special).

When to upload and when to reveal

It takes a while to upload the video and even longer for the server to process it, so upload well in advance. You can upload a file and then "hide" it until you are ready for the students to see it. (That's true at UCLA, where we use Moodle; presumably it's also true for Blackboard and Canvas.) This implies a question for pre-recorded lectures not faced by either traditional or streaming teaching, which is when to release the lectures. In fall quarter I mostly released lectures the Thursday before they appeared on the syllabus. (I sometimes didn't have them done until Saturday night).

I don’t think I’d do this again. My students didn’t think of it as getting to see the lectures four days early but as only having four days to write their memos and they complained, a lot, about how this wasn’t enough time. From the students’ perspective, it is unreasonable to expect them to do the reading and come to a preliminary understanding of it themselves before I explain to them what the reading is about in lecture.

However I feel it’s an important part of a college education and a reasonable demand of college level work to be able to make a preliminary engagement with a text independently. In addition, I have seen the counterfactual. I have in the past had the homework be about the reading from the previous week and the TAs uniformly reported it was a disaster to discuss lectures from week X and readings from week X-1. Since this is a university not a restaurant, ultimately my perspective is the one that counts and so if I were doing it again I’d release the lectures on the Tuesday they appear on the syllabus. Zero overlap with the period during which they do homework is probably less likely to lead to grievance than a short overlap. If you’re an undergraduate in one of my classes in a subsequent quarter, first of all please stop reading my blog and second of all, you can blame the students in Fall quarter 2020.

Make a trailer

At Jessica Collett’s suggestion, a few weeks in I started recording "trailer" lectures that I post at least a week before the material they cover. The trailers briefly discuss the lecture materials and the reading. They’re pretty similar to the few minutes you might add at the end of a Thursday lecture discussing what to expect from next week’s material. I’m not sure how many of the students watch the trailers but they only take a few minutes (there’s no editing) so why not?

Thumbnails

Kaltura, UCLA’s video vendor, seems to have an artificial intelligence designed to find the single most unflattering frame in the video and set it as the thumbnail. To fix this you will need to do one of two things.

  1. Have eight seconds showing a title card before the lecture starts. Since Kaltura takes the frame 7 seconds in as the thumbnail, this ensures the title card will be the thumbnail.

  2. In CCLE/Moodle’s admin panel, go to media gallery, then "+ Add Media," then select and "publish," then click the ellipsis on the thumbnail, then click the pencil on the thumbnail, and then click the thumbnail tab. You can either "upload thumbnail" (I like to use a memorable figure or graph from my slides) or "auto-generate" which gives you ten choices, at least one of which will not make you look ridiculous. Yes, I agree, it’s ridiculous that they bury "don’t make me look like an ugly dork" so deep in the UX.

Exams

I worry a lot about academic misconduct in remote teaching and both my own experience and reports from peers suggest I am right to worry. I only have anecdata for UCLA, but Berkeley has seen a 400% increase in cheating reports. Anything that takes students away from a bluebook in class is going to make it easier to cheat. But the thing is that cheating is time consuming, clumsy, or both. The only way to write an answer fast is to know the answer. Sure, students can google keywords from the prompt and ctrl-c the first result even faster than they can type, but that’s easy to catch because it typically only loosely resembles the answer and TurnItIn automates this.

My main solution was to keep the exam window tight: not any tighter than it is with a traditional blue book, but no longer either. In the before time when we had bluebooks I gave my students about 70 minutes for 4 short answer questions and a few multiple choice and now I give them 60 minutes for 3 short answer questions. That’s enough time to answer the questions but not enough to research the questions. I have heard colleagues do things like say "you have any one hour period in the next 24 hours to do the exam" which to me feels like leaving a bucket of candy on your porch and a "please take one" sign on Halloween.

One thing I did and would do again is offer an evening seating. A lot of our students are in Asia and business hours in the US are basically sleeping hours in China and Korea. I don’t want students stuck overseas to have to take the exam at what, to them, is 3am. Likewise some Americans have problems with family members using up all the bandwidth during the day or whatever. This requires me to write a second set of questions in case the morning questions leak, but I think it’s worth it.

What I will not be repeating was my attempt to individually watermark and email exam prompts. The idea was I’d be able to trace who uploaded their exam to CourseHero but it was a ton of work and didn’t successfully distribute the exams. It meant having students sign up in advance, which meant dealing with those who didn’t sign up in time. It took about two or three minutes per student to set up, which in a large lecture means an entire work day. Worst of all, the emails didn’t arrive on time. They were all sent on time (I scheduled them the night before) but they didn’t actually arrive in the students’ inboxes on time. At least one was almost an hour late. Never doing that again.

Term paper mistakes to avoid

I decided to assign a term paper in fall quarter, in part because I wanted to give less weight to timed exams. For one of my lectures this was "apply theories from lecture to current events in the field we are studying." For the other it was "apply theories from lecture to Book A or Book B." My TAs very reasonably said "we can’t grade that many papers and stay within a reasonable number of hours given that we have 75 students each." One of my TAs suggested we have students work in pairs and then there would be half as many papers to grade. While I appreciate the TA’s suggestion and would be delighted to work with this TA again, it was a mistake to follow this particular suggestion. Having students work in pairs created a massive logistical burden: recording partners for those who paired themselves up and then pairing off the unpaired. In the class with two books this was further complicated because a) I had to match people who wanted the same book and b) I had to ensure an equal number of papers on each book so the TAs wouldn’t have to read multiple books. And even then my work wasn’t done because then I had to deal with "my partner dropped the class" or "my partner isn’t answering my emails" or "I got a new partner but then my old partner wants to work together again."

In the future I am never assigning group work again unless the project is intrinsically collaborative. If it’s too much grading for the TAs to grade term papers I will either grade enough personally to absorb the excess hours or choose another assignment. Assigning group work cuts down the number of papers but it doesn’t really reduce the total instructional hours.

I’m also not saying "choose one of these books" unless it’s not necessary for the grader to have read the book, there is sufficient time for my TAs to read multiple books, or it’s a seminar where I have already read all the books.

VPN

VPN has proven to be a much bigger problem than usual. This is weird since students always have to deal with VPN and it’s never a totally intuitive technology but it turns out that it helps a lot when they can bring their laptop by the campus tech support lab after class or, if all else fails, just download the papers while they’re on campus. That’s a big escape valve for the few percent of students who can’t get VPN to work and that escape valve is clogged for remote teaching. Rather than deal with a few "I can’t get the readings" emails a week, I eventually just mirrored the papers on the site. What better illustration could you have that piracy is a service problem, not a price problem? The students have already paid for the readings through their tuition but they can’t access them because the VPN paywall is too hard to work reliably at scale (and I suspect it’s not just user error but the VPN server simply has downtime).

January 13, 2021 at 2:37 pm 1 comment

Edudammerung

My prediction for what’s going to come for higher education, and particularly the job market, is that a repeat of 2008-2009 is the best case scenario. As many of you will recall, the 2008-09 academic year job market was surprisingly normal, in part because the lines were approved before the crash. Then there was essentially no job market in the 2009-2010 academic year as private schools saw their endowments crater and public schools saw their funding slashed in response to state budget crises. By the 2010-2011 academic year, there was a more or less normal level of openings, but there was a backlog of extremely talented postdocs or ABDs who delayed defending, which meant that the market was extremely competitive in 2010-2011 and several years thereafter. A repeat of this is the best case scenario for covid-19, but it will probably be worse.

The overall recession will be worse than 2008. In 2008 we realized houses in the desert a two hour drive from town weren’t worth that much when gas went to $5/gallon. This was a relatively small devaluation of underlying assets, but it had huge effects throughout the financial system and then sovereign debt. Covid-19 means a much bigger real shock to the economy. In the short run we are seeing large sectors of the economy either shut down entirely or running at fractional capacity for months until we keep R0 below 1 and for some industries more like a year or two. Not only is this a huge foregone amount of GDP, but it will last long enough that many people and firms will go bankrupt and they won’t bounce back the second quarantine is relaxed. We also have large sectors of the economy that will need to be devalued, especially anything having to do with travel. I also expect to see a trend away from just-in-time inventory and globalization and towards economic autarky and a preference for slack over efficiency. This too will hit GDP. Just as in the post 2008 era, expect to see a weak stock market (which will hit endowments) and weak state budget finances (which will hit publics). We may even see straining of the fiscal capacity of the federal government which could mean less support for both tuition and research.

That’s just “economy to end tomorrow, universities and colleges hardest hit” though. However there are particular reasons to think universities will be distinctly impacted. Universities are not the cruise ship industry, but we’re closer to it than you’d think. In the post-2008 era, universities made up funding shortfalls by treating international students (who as a rule pay full tuition) as a profit center. That is basically gone for the 2020-2021 academic year. The optimistic scenario for covid-19 is that a couple months of quarantine drives down infections to the point that we can switch to test and trace like many Asian countries. Life is 95% normal in Taiwan and that could be us with enough masks, thermometers, tests, and GPS-enabled cell phone apps. However part of the way Taiwan achieves relative normalcy is that on March 19 they banned nearly all foreigners from entering the country and even repatriated citizens are subject to two weeks of quarantine. This is a crucial part of a test and trace regime, as demonstrated by Singapore (which has a similar public health response to Taiwan) switching from test and trace to mass quarantine after new infections from abroad brought infections above the point where test and trace is feasible. The US State Department has already warned Americans abroad to come home or we might not let you back. Travel restrictions are coming and will mean the end of international students for at least one academic year.

Most likely we are looking at 2 to 3 years without a job market, not just the one year with no market as happened post-2008. But as in the post-2008 job market, there will be a backlog of talented fresh PhDs and so it will be absurdly competitive to get a job. During the Cultural Revolution, China suspended college entrance exams as a way to exclude applicants from “bad class backgrounds” (e.g., grandpa was a landlord). When the Cultural Revolution ended China reinstated exams and so there was a huge backlog of students who could now apply, which made the matriculating class of 1977 extraordinarily competitive. There will be no job market until the 2022-2023 academic year, give or take, and then for several years thereafter you’ll need several articles or a book contract to get a flyout at Podunk State.

OK, so we have a few years of nothing, and then a few years of an extremely competitive market, but the PhD inventory should clear by the 2026-2027 academic year, right? And this means that if you’re in the fall 2020 entering graduate cohort or want to apply for the fall 2021 entering graduate cohort you should be fine, right? Well, no. Birth cohorts got appreciably smaller in 2008 which means by the time the covid-19 recession and its PhD backlog clears, there will be a demographic shock to demand for higher education.

My suggestion to all grad students is to emphasize methods so you’ll be competitive on the industry market, which will recover faster than the academic market. That’s right, I really am telling you to learn to code.

April 8, 2020 at 10:02 am 1 comment

Test by batches to save tests

Given the severe shortage of covid testing capacity, creative approaches are needed to expand the effective testing capacity. If the population base rate is relatively low, it can be effective to pool samples and test by batches. Doing so would imply substantial rates of false positives since every member of a test batch would be presumptively sick, but this would still allow three major use cases that will be crucial for relaxing quarantine without allowing the epidemic to go critical so long as testing capacity remains finite:

  1. Quickly clear from quarantine every member of a batch with negative results
  2. Ration tests more effectively for individual diagnoses with either no loss or an identifiable loss in precision but at the expense of delay
  3. Use random sampling to create accurate tracking data with known confidence intervals for the population of the country as a whole or for metropolitan areas

Here is the outline of the algorithm. Assume that one has x people who were exposed to the virus, the virus has an infectiousness rate of 1/x, and you have only one test. Without testing, every one of these x people must quarantine. Under these assumptions, the expected number of actually infected people is 1 out of the x exposed, or more precisely a random draw from a Poisson distribution with a mean of one. This means that 36.8% of the time nobody is infected, 36.8% of the time one person is infected, 18.4% of the time two are infected, and 8% of the time three or more are infected. With only one test, only one individual can be tested and cleared, but if you pool and test them as a batch, over a third of the time you can clear the whole batch.
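
Those percentages are just the Poisson probability mass function with a mean of one, which is easy to check in R:

dpois(0:2, lambda=1)   # P(0), P(1), P(2) infections: 0.368, 0.368, 0.184
1 - ppois(2, lambda=1) # P(3 or more): 0.080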

Alternately, suppose that you have two tests available for testing x people with an expected value of one infection. Divide the x exposed people into batch A and B then test each batch. If nobody is infected both batches will clear. If one person is infected, one batch will clear and the other will not. Even if two people are infected, there is a 50% chance they will both be in the same batch and thus the other batch will clear, and if there are three there is a 25% chance they are all in the same batch, etc. Thus with only two tests there is a 36.8% chance you can clear both batches and a 47.7% chance you can clear one of the two batches. This is just an illustration based on the assumption that the overall pool has a single expected case. The actual probabilities will vary depending on the population mean. The lower the population mean (or base rate), the larger you can afford to make the size of the test batches and still gain useful information.
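
The 36.8% and 47.7% figures can be reproduced the same way, assuming each infection lands in either batch independently with probability 1/2:

p_clear_both <- dpois(0, 1) # nobody is infected, so both batches test negative
p_clear_one <- 2 * sum(dpois(1:100, 1) * 0.5^(1:100)) # all infections land in the same batch
c(p_clear_both, p_clear_one) # about 0.368 and 0.477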

This most basic application of testing pooled samples is sufficient to understand the first use case: to clear batches of people immediately. Use cases could be to clear groups of recently arrived international travelers or to weekly test medical personnel or first responders. There would be substantial false positives, but this is still preferable to a situation where we quarantine the entire population. 

Ideally though we want to get individual diagnoses, which implies applying the test iteratively. Return to the previous example but suppose that we have access to (x/2)+2 tests. We use the first two tests to divide the exposed pool into two test batches. There is a 36.8% chance both batches test negative, in which case no further action is necessary and we can save the remaining x/2 test kits. The modal outcome though (47.7%) is that one of the two test batches tests positive. Since we have x/2 test kits remaining and x/2 people in the positive batch, we can now test each person in the positive batch individually and meanwhile release everyone in the negative batch from quarantine.

There is also a 15.5% chance that both batches test positive, in which case the remaining x/2 test kits will prove inadequate, but if we are repeating this procedure often enough we can borrow kits from the ⅓ of cases where both batches test negative. Thus with testing capacity of approximately ½ the size of the suspected population to be tested, we can test the entire suspected population. A batch test / individual test protocol will slow down testing as some people will need to be tested twice (though their samples only collected once) but allow us to greatly economize on testing capacity.
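
As a back-of-envelope check under the same assumptions, the expected number of test kits used per pool of x people under this two-stage protocol comes in comfortably below x:

x <- 20 # hypothetical pool size; any even value works
p_clear_both <- dpois(0, 1)
p_clear_one <- 2 * sum(dpois(1:100, 1) * 0.5^(1:100))
p_both_positive <- 1 - p_clear_both - p_clear_one
expected_kits <- 2 + p_clear_one*(x/2) + p_both_positive*x # two batch tests plus any follow-up individual tests
expected_kits # about 0.39*x + 2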

Finally, we can use pooled batches to monitor population level infection rates as an indication of when we can ratchet up or ratchet down social distancing without diverting too many tests from clinical or customs applications. Each batch will either test positive or negative and so the survey will only show whether at least one person in the batch was positive, but not how many.

For instance, suppose one collects a thousand nasal swabs from a particular city and divides them into a hundred batches of ten each and then finds that only two of these hundred batches test positive. This is equivalent to a 98% rate of batches having exactly zero infected test subjects. Even though the test batch data are dichotomous, one can infer the mean of a Poisson just from the number of zeroes and so this corresponds to a population mean of about 2%. Although this sounds equivalent to simply testing individuals, the two numbers can diverge considerably. For instance, if half the batches test positive, this implies a population base rate of 70%.
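
Here is one way to do that arithmetic, treating the inferred Poisson mean as infections per batch (divide by the batch size if you want a per-person rate):

batch_size <- 10
lambda_hat <- -log(98/100) # about 0.02 infections per batch when 2 of 100 batches test positive
lambda_hat / batch_size    # implied per-person rate
-log(50/100)               # about 0.69 infections per batch when half of the batches test positive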

[Update]

On Twitter, Rich Davis notes that large batches risk increasing false negatives. This is a crucial empirical question. I lack the bench science knowledge to provide estimates but experts would need to provide empirical estimates before this system were implemented.

March 18, 2020 at 3:01 pm

Drop list elements if

I am doing a simulation in R that generates a list of simulated data objects and then simulates emergent behavior on them. Unfortunately, it chokes on “data” meeting a certain set of conditions and so I need to kick data like that from the list before running it. This is kind of hard to do because deleting a list element messes up the indexing: once you kick a list element, the list is shorter and everything after it shifts down. My solution is to create a vector of dummies flagging which list elements to kick by indexing in ascending order, but then actually kick list elements in descending order. Note that if the length of the list is used elsewhere in your code, you'll need to update it after running this.

FWIW, the substantive issue in my case is diffusion where some of the networks contain isolates, which doesn’t work in the region of parameter space which assumes all diffusion occurs through the network. In my case, deleting these networks is a conservative assumption.

[Screenshot of the R code]

Apologies for posting the code as a screenshot but WordPress seems to really hate recursive loops, regardless of whether I use sourcecode or pre tags.
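
If you'd rather not squint at the screenshot, here is a minimal sketch of the approach described above; simlist and flag_condition() are made-up names standing in for your list of simulated datasets and whatever condition disqualifies an element:

# flag the elements to kick, indexing in ascending order
kick <- sapply(simlist, flag_condition) # TRUE for elements that should be dropped
# then delete in descending order so the remaining indices stay valid
for (i in rev(which(kick))) {
  simlist[[i]] <- NULL
}
# if the length of the list is used elsewhere in the code, update it now
n_sims <- length(simlist)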


April 5, 2019 at 2:43 pm

EU migrants

| Gabriel |

Yesterday the Guardian published a list of 34,361 migrants who had died attempting to reach or settle within Europe. (The list is originally from United for Intercultural Action.) The modal cause of death is shipwreck, but the list also includes suicides, homicides, terrestrial accidents, etc. I was curious about when these deaths occurred, so I converted the PDF list to a csv and made a graph and a few tables.

The graph is noisy, but nonetheless a few trends jump out.

Deaths rose slowly through the 1990s until 2006 and 2007 then dropped. Presumably this reflects declining labor demand in the EU. There is an isolated jump in 2009, but deaths are low in 2008 and 2010.

Deaths spike sharply in 2011, especially March of that year, which coincides with regime collapse in Libya. (Since 2004 Gaddafi had been suppressing migrant traffic as part of a secret deal with Italy). Deaths were low again by late 2011.

The dog that didn’t bark is that migrant deaths were relatively low throughout 2012 and 2013, notwithstanding the Syrian Civil War.

In October 2013 there was a major shipwreck, after which the Italians launched Operation Mare Nostrum, in which the Italian Navy would rescue foundering vessels. For the first few months this seems to have been successful as a humanitarian effort, but eventually the Peltzman Effect took to sea and deaths skyrocketed in the summer of 2014. After this spike (and the budget strain created by the operation), the Italians cancelled Operation Mare Nostrum and deaths decreased briefly.

Mare Nostrum was replaced by Operation Triton, which was a) a pan-European effort and b) less ambitious. The post-Mare Nostrum death lull ended in spring of 2015.

European Union states had widely varying migration policies during 2015, with some enacting restrictionist policies and others pro-migration policies. Although there were many migrant deaths in 2015, they were mostly in the spring. Angela Merkel’s various pro-immigration statements (circa September and October of 2015) do not seem to have yielded a moral hazard effect on deaths, perhaps because this was almost simultaneous with the EU getting an agreement with Turkey to obstruct migrant flows. In any case, migrant deaths were relatively low in the last quarter of 2015 and first quarter of 2016. Deaths were very high in March and April of 2016 and overall 2016 was the worst year for deaths in the EU migration crisis.

In 2017 deaths declined back to 2015 levels, being high in both years but not as high as the peak year of 2016. It is too early to describe trends for 2018 but deaths in the first quarter of 2018 are lower than those of any quarter in 2017.

[Graph: migrant deaths by week]

*http://unitedagainstrefugeedeaths.eu/wp-content/uploads/2014/06/ListofDeathsActual.pdf
*ran PDF through pdf/csv translator then stripped out lines with regex for lines not starting w digit
cd "C:\Users\gabri\Dropbox\Documents\codeandculture\eumigrants\"
import delimited ListofDeathsActual.csv, varnames(1) clear
keep if regexm(found,"^[0-9]")
drop v7-v30
gen date = date(found,"DMY",2019)
format date %td
gen n=real(number)
sum n
gen week=wofd(date)
format week %tw
gen month=mofd(date)
format month %tm
gen year=yofd(date)
save eudeaths.dta, replace
table month, c(sum n)
table year, c(sum n)
collapse (sum) n, by(week)
sort week
lab var n "deaths"
twoway (line n week)
graph export migrants.png, replace

June 21, 2018 at 4:11 pm 4 comments

Networks Reading List

| Gabriel |

In response to my review of Ferguson’s Square and the Tower, several people have asked me what to read to get a good introduction to social networks. First of all, Part I of Ferguson’s book is actually pretty good. I meant it when I said in the review that it’s a pretty good intro to social networks, and in my first draft I went through and enumerated all the concepts he covers besides betweenness and the idea of hierarchy as just a tree network. Here’s the list: degree, sociometry, citation networks, homophily, triadic closure, clustering coefficients, mean path length, small worlds, weak ties as bridges, structural holes, network externalities, social influence, opinion leadership, the Matthew Effect, scale free networks, random graph networks, and lattices. While I would also cover Bonacich centrality / dependence and alpha centrality / status, that’s a very good list of topics and Ferguson does it well. In the review I listed all my issues with the book, which are basically that 1) he’s not good on history/anthropology prior to the early modern era and 2) there’s a lot of conceptual slippage between civil society and social networks as a sort of complement (in the set theory sense) to the state and other hierarchies. However it’s a very well written book that covers a lot of history, including some great historical network studies, and the theory section of the book is a good intro to SNA for the non-specialist.

Anyway, what else would I recommend as the best way to get started with understanding networks, especially for the non-sociologist?

Well obviously, I wrote the best short and fun introduction.

[image: dylan]

My analysis of combat events in the Iliad is what I use to teach undergraduates in economic sociology, and they like it. (Gated Contexts version with great typesetting and art; ungated SocArxiv version with the raw data and code.) This very short and informal paper introduces basic concepts like visualization and nodes vs. edges, and shows the difference between degree centrality (raw connections), betweenness centrality (connections that hold the whole system together), and alpha centrality (top of the pecking order).
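To make those three measures concrete, here is a minimal igraph sketch on a made-up toy graph (not the Iliad data); the vertex names, the edge list, and the alpha value are arbitrary choices for illustration.

library(igraph)

# hypothetical toy network: two triangles (a,b,c) and (e,f,g) joined through a broker d
g <- graph_from_literal(a-b, a-c, b-c, c-d, d-e, e-f, e-g, f-g)

degree(g)                        # raw number of connections
betweenness(g)                   # how often a node sits on the shortest paths between others
alpha_centrality(g, alpha = 0.2) # status: being tied to well-connected nodes
                                 # (alpha kept below 1/largest eigenvalue)

In this toy graph d is tied for the lowest degree but has the highest betweenness, since every path between the two triangles runs through it.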

Social network analysis is as much a method as it is a body of theory, so it can be really helpful to play with some virtual tinker toys to get a tactile sense of how it works: speed it up, slow it down, etc. For this there’s nothing better than playing around in NetLogo. Its model library includes several network models like “giant component” (Erdos-Renyi random graph), preferential attachment, “small world” (Watts and Strogatz ring lattice with random graph elements), and team assembly. Each model in the library has three tabs. The first shows a visualization that you can slow down or speed up and tweak in parameter space. This is an incredibly user-friendly and intuitive way to grok what the parameters are doing and how the algorithm under each model thinks. A second tab provides a well-written summary of the model, along with citations to the primary literature. The third tab provides the raw code, which as you’d expect is a dialect of the Logo language that anyone born in the late 1970s learned in elementary school. I found this language immediately intuitive to read and it only took me two days to write useful code in it, but your mileage may vary. Serious work should probably be done in R (specifically igraph and statnet), but NetLogo is much better for conveying the intuition behind models.
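If you would rather poke at the same canonical models from R, igraph has generators for most of what is in the NetLogo model library; here is a minimal sketch (the parameter values are arbitrary and chosen purely for illustration).

library(igraph)

set.seed(42)
er <- sample_gnp(n = 100, p = 0.05)                              # Erdos-Renyi random graph ("giant component")
pa <- sample_pa(n = 100, m = 2, directed = FALSE)                # preferential attachment (scale-free)
sw <- sample_smallworld(dim = 1, size = 100, nei = 2, p = 0.05)  # Watts-Strogatz ring lattice with rewiring

# compare clustering and mean path length across the three models
sapply(list(er = er, pa = pa, sw = sw),
       function(g) c(clustering = transitivity(g), mean_path = mean_distance(g)))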

Since this post was inspired by Square and the Tower and my main gripe about that is slippage between civil society and social networks, I should mention that the main way to take a social networks approach to civil society in the literature is to follow Putnam in distinguishing between bridging (links between groups) and bonding (links within groups) social capital. TL;DR is don’t ask the monkey’s paw for your society to have social capital without specifying that you want it to have both kinds.
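As a toy illustration of the bridging/bonding distinction, here is a hedged sketch that counts within-group versus between-group ties on a hypothetical two-group graph; the group labels and edge list are made up.

library(igraph)

# hypothetical graph: vertices a-d form group "A", vertices e-h form group "B"
g <- graph_from_literal(a-b, a-c, b-c, c-d, d-e, e-f, e-g, f-g, g-h)
V(g)$group <- c(rep("A", 4), rep("B", 4))

# classify each edge by whether its two endpoints share a group
ep <- ends(g, E(g), names = FALSE)
same_group <- V(g)$group[ep[, 1]] == V(g)$group[ep[, 2]]

sum(same_group)   # bonding ties (within groups)
sum(!same_group)  # bridging ties (between groups); here only the d-e tie bridges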

If you want to get much beyond that, there are some books. For a long time Wasserman and Faust was canonical but it’s now pretty out of date. There are a few newer books that do a good job of it.

The main textbook these days is Matthew O. Jackson’s Social and Economic Networks. It’s kind of ironic that the main textbook is written by an economist, but if Saul of Tarsus could write a plurality of the New Testament, then I guess an economist can write a canonical textbook on social network analysis. It covers a lot of topics, including very technical ones.

I am a big fan of The Oxford Handbook of Analytical Sociology. Analytical sociology isn’t quite the same thing as social networks or complex systems, but there’s a lot of overlap. Sections I (Foundations) and III (Social Dynamics) cover a lot in social networks and related topics like threshold models. (One of my pet peeves is the assumption that networks are the only kind of bottom-up social process, so I like that the OHoAS includes work on models with less restrictive assumptions about structure, which is not just a simplification but sometimes more realistic.)

I’m a big fan of John Levi Martin’s Social Structures. The book divides fairly neatly into a first half that deals with somewhat old school social networks approaches to small group social networks (e.g., kinship moieties) and a second half that emphasizes how patronage is a scalable social structure that eventually gets you to the early modern state.

Aside from that, there are just a whole lot of really interesting journal articles. Bearman, Moody, and Stovel 2004 map the sexual network of high school students and discover an implicit taboo on dating your ex’s partner’s ex. Smith and Papachristos 2016 look at Al Capone’s network and show that you can’t conflate different types of ties, but neither can you ignore some of them; only by treating multiple types of ties as distinct can you understand Prohibition-era organized crime. Hedström, Sandell, and Stern 2000 show that the Swedish social democratic party spread much faster than you’d expect because it didn’t just diffuse from county to county but jumped across the country with traveling activists, which is effectively an empirical demonstration of a theoretical model from Watts and Strogatz 1998.

February 6, 2018 at 12:24 pm

Blue upon blue

| Gabriel |

On Twitter, Dan Lavoie observed that Democrats got more votes for Congress but Republicans got more seats. One complication is that some states effectively had a Democratic run-off rather than a traditional general election. It is certainly true that most Californians wanted a Democratic senator, but not the 100% that the vote shows, since the general was between Harris and Sanchez, both Democrats. That aside, there is a more basic issue: Democrats are just more geographically concentrated than Republicans.

Very few places with appreciable numbers of people are as Republican as New York or San Francisco are Democratic (i.e., about 85%). Among counties with at least 150,000 votes cast, in 2004 only two suburban Dallas counties (Collin County and Denton County) voted over 70% Republican. In 2008 and 2012 only Montgomery County (a Houston suburb) and Utah County (Provo, Utah) were this Republican. By contrast, in 2004 sixteen large counties voted at least 70% Democratic, and 25 counties swung that deep blue in both of Obama’s elections. A lot of big cities that we think of as Republican are really slightly reddish purple. For instance, in 2004 Harris County (Houston, Texas) went 55% for George W. Bush and Dallas was a tie. In 2012 Mitt Romney got 58% in Salt Lake County. The suburbs of these places can be pretty red, but as a rule they are either not nearly as red as San Francisco is blue, not very populous, or both.
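For what it’s worth, the code posted below doesn’t include those county tallies. A hypothetical sketch of how one might count them from the same county-level file would look like the following; note that the pct_rep column is an assumption about the file’s layout (only pct_dem, vote_total, and year appear in the code below), so you may need to compute the Republican share from raw vote counts instead.

# hypothetical sketch: large counties (at least 150,000 votes cast) that voted
# at least 70% for either party in 2004; pct_rep is assumed rather than taken from the code below
elections <- read.csv('https://raw.githubusercontent.com/helloworlddata/us-presidential-election-county-results/master/data/us-presidential-election-county-results-2004-through-2012.csv')
big04 <- elections[elections$year == 2004 & elections$vote_total >= 150000, ]
sum(big04$pct_rep >= 70, na.rm = TRUE)  # deep red large counties
sum(big04$pct_dem >= 70, na.rm = TRUE)  # deep blue large counties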

I think the best way to look at the big picture is to plot the density of Democratic vote shares by county, weighted by county population. Conceptually, this shows you the exposure of voters to red or blue counties.

Update
At Charlie Seguin’s suggestion, I added a dozen lines of code at the end to compare the popular vote to what you’d get if you treated each county as winner-take-all and then aggregated up, weighted by county size. Doing so, it looks like treating counties as winner-take-all gives a cumulative advantage effect to the popular vote winner. Here’s a table summarizing the Democratic popular vote versus the Democratic vote treating counties as a sort of electoral college.
Year   Democratic popular vote (%)   Democratic counties (winner-take-all share)
2004   48.26057                      0.4243547
2008   52.96152                      0.6004451
2012   51.09047                      0.5329050
2016   50.12623                      0.5129522

[Figure: density of Democratic vote share by county, weighted by county population, 2004-2016 (elect.png)]

# working directory and county-level results for 2004 through 2012
setwd('C:/Users/gabri/Dropbox/Documents/codeandculture/blueonblue')
elections <- read.csv('https://raw.githubusercontent.com/helloworlddata/us-presidential-election-county-results/master/data/us-presidential-election-county-results-2004-through-2012.csv')
# flag counties the Democrats carried (used for the winner-take-all comparison at the end)
elections$bluecounty <- ifelse(elections$pct_dem>50, 1, 0)

# 2004: weight each county by its share of the national vote, then plot the
# weighted density of the Democratic vote share
elect04 <- elections[(elections$year==2004 & elections$vote_total>0),]
elect04$weight <- elect04$vote_total/sum(elect04$vote_total)
dens04 <- density(elect04$pct_dem, weights = elect04$weight)
png(filename="elect04.png", width=600, height=600)
plot(dens04, main='2004 Democratic vote, weighted by population', ylab = 'Density, weighted by county population', xlab = "Democratic share of vote")
dev.off()

# same thing for 2008
elect08 <- elections[(elections$year==2008 & elections$vote_total>0),]
elect08$weight <- elect08$vote_total/sum(elect08$vote_total)
dens08 <- density(elect08$pct_dem, weights = elect08$weight)
png(filename="elect08.png", width=600, height=600)
plot(dens08, main='2008 Democratic vote, weighted by population', ylab = 'Density, weighted by county population', xlab = "Democratic share of vote")
dev.off()

# and for 2012
elect12 <- elections[(elections$year==2012 & elections$vote_total>0),]
elect12$weight <- elect12$vote_total/sum(elect12$vote_total)
dens12 <- density(elect12$pct_dem, weights = elect12$weight)
png(filename="elect12.png", width=600, height=600)
plot(dens12, main='2012 Democratic vote, weighted by population', ylab = 'Density, weighted by county population', xlab = "Democratic share of vote")
dev.off()

# 2016 comes from a different source (county-level Trump and Clinton vote counts),
# so compute the two-party Democratic share, weights, and blue-county flag directly
elect16 <- read.csv('http://www-personal.umich.edu/~mejn/election/2016/countyresults.csv')
elect16$sumvotes <- elect16$TRUMP+elect16$CLINTON
elect16$clintonshare <- 100*elect16$CLINTON / (elect16$TRUMP+elect16$CLINTON)
elect16$weight <- elect16$sumvotes / sum(elect16$sumvotes)
elect16$bluecounty <- ifelse(elect16$clintonshare>50, 1, 0)
dens16 <- density(elect16$clintonshare, weights = elect16$weight)
png(filename = "elect16.png", width=600, height=600)
plot(dens16, main='2016 Democratic vote, weighted by population', ylab = 'Density, weighted by county population', xlab = "Democratic share of vote")
dev.off()

png(filename = "elect.png", width=600, height=600)
plot(dens04, main='Democratic vote, weighted by population', ylab = 'Density, weighted by county population', xlab = "Democratic share of vote", col="blue")
lines(dens08)
lines(dens12)
lines(dens16)
dev.off()


# compare the Democratic popular vote (weighted mean of county vote shares) with the
# share implied by treating each county as winner-take-all, weighted by county size
m <- matrix(1:8,ncol=2,byrow = TRUE)   # placeholder values, overwritten below
colnames(m) <- c("Popular vote","Electoral counties")
rownames(m) <- c("2004","2008","2012","2016")
m[1,1] <- weighted.mean(elect04$pct_dem,elect04$weight)
m[1,2] <- weighted.mean(as.numeric(elect04$bluecounty),elect04$weight)
m[2,1] <- weighted.mean(elect08$pct_dem,elect08$weight)
m[2,2] <- weighted.mean(as.numeric(elect08$bluecounty),elect08$weight)
m[3,1] <- weighted.mean(elect12$pct_dem,elect12$weight)
m[3,2] <- weighted.mean(as.numeric(elect12$bluecounty),elect12$weight)
m[4,1] <- weighted.mean(elect16$clintonshare,elect16$weight)
m[4,2] <- weighted.mean(as.numeric(elect16$bluecounty),elect16$weight)
m

 

November 9, 2017 at 11:24 am


