Author Archive

Some Notes on Online Teaching

I spent basically all of spring quarter working on advice for colleagues on how to teach online. In fall quarter I actually did it myself, and it was a very different experience from the theoretical one. I had a huge learning curve over the course of the quarter, so steep that if I ever do another online quarter I would probably re-record most of my lectures, even though, as you’ll see, recording and editing them was a ton of work.

Anyway, here are my notes and reflections on my first term of online teaching.

Making the Videos

The single biggest lesson is that online teaching is far more work than regular teaching. There’s always a lot of work answering emails, rewriting exams, etc., but in traditional teaching for an existing prep the actual lecture is just printing out my notes, trying to remember my microphone and HDMI dongle, and standing in front of an auditorium for two hours and change per week. I get the lecture hall for 75 minutes twice a week for ten weeks, so that’s about 25 hours of lecture per quarter; add an hour or two total for walking to the hall and printing out the notes and it comes to about 27 hours spent lecturing per class per quarter. In fall I spent almost that much time per class per week of instruction on the lectures. That is, I spent almost ten times more time recording lectures in fall quarter than I usually do delivering them. And this is on top of the usual amount of work for emails, office hours, etc. (Actually I spent more time than usual on this stuff, but not absurdly more.)

If you think I’m exaggerating, or you’re wondering how I found enough hours in the day to do this for the two lectures I taught in fall quarter, the answer is a) I pre-recorded seven weeks of lecture for one of my classes over the summer, b) I worked 60+ hour weeks all quarter, and c) I shirked a fair amount of service. Now that I have a good workflow I think I could get lecture down to 15 hours a week per class, but that’s still a lot more than the 2.5 hours a week per class it normally takes.

A shopping list

Various bits of equipment really help.

  • 10" selfie ring light with stand ($35)
  • Gooseneck USB microphone ($20)
  • HD Webcam ($35)
  • Muslin backdrop, white 6′ x 9′ ($20)
  • Backdrop stand, at least 5′ wide preferably 7′ wide ($30-$40)
  • Filmora ($70)

Total $210-$220, plus tax. The most bang for the buck is the selfie light. The next most important item is the microphone, as I find the audio recorded by a webcam is soft.

I bought a few more items (a different style microphone and a green screen) but didn’t find them useful so I’m leaving them off my list.

I also use Zoom, PowerPoint, PDF-XChange Editor ($43), and Paint.Net ($7) as part of my workflow but I assume everyone already has Zoom, presentation, PDF, and image editing software.

Mise en scène

It took a surprising amount of work to find a good camera setup. I ended up throwing out and re-recording several entire lectures because they looked like hostage videos, as if at the end my captors would demand $1 million and a prisoner exchange for my safe release.

Here’s what works for me.

Set the camera at eye level about 18"-24" from where your face will be. Eye level means that if your camera is on an external display you will need to lower the display to its lowest setting and if it is on a laptop you will need to stack books under the laptop. I don’t recommend putting the camera on the mount in the selfie light unless you can work without notes.

Arrange your notes just below the camera so your sightline is more or less at the camera. Scroll frequently so you are always reading from the top of your display, which keeps your sightline near the camera. I keep my notes in a text editor, but you can use Word, Chrome, Acrobat, or whatever makes you comfortable. If you use PowerPoint, I suggest you not go full screen unless you can take each slide in at a glance.

Place the selfie light immediately behind the camera. Make sure it barely clears the screen with your notes.

Place the gooseneck USB microphone as close as possible to you. Don’t worry if it appears in frame.

You don’t realize how wide the 16:9 widescreen aspect ratio is until you try to frame a shot and realize that there is not a single angle in your home or office that doesn’t compromise your privacy, look weird, etc. Hence my recommendation of a backdrop. The other reason I like a backdrop is I like to use pop-up text and this is much easier for you and more legible for the students if there’s a solid background than if there is, say, a bookcase as there was in my early videos.

Set the backdrop stand’s feet wide. You don’t need the extra few inches of height if you’re recording seated, but you do need the added stability if you don’t want to constantly knock it over.

A 5′ backdrop just fills a 16:9 frame when it’s about 48″ from the camera. Note that this assumes the camera is framed exactly right. I actually stuck a wooden spoon in my backdrop stand to extend the arm about 6″, making my backdrop about 5′6″ wide and giving myself a 3″ margin of error on either side. I add a bit of backdrop slack on the short arm for balance. If you want to avoid a crude extender like my spoon, you might want a 7′ wide stand, which means two stands with a cross-beam instead of one T-shaped stand.

48″ of depth is actually pretty tight when you realize that this includes you, the space between you and the camera, and the space between you and the backdrop. There should be at least 18″ from your face to the camera and at least a foot behind you to the backdrop. (The space between you and the backdrop helps avoid shadows, which create the hostage-video effect. Diffuse light, as you get from a selfie light or indirect daylight behind the camera, also helps with this.)

Set your chair slightly off-center so you appear on camera left. This gives you the option in editing of using the newscaster effect of having captions or images appear next to you.

Recording

I use Zoom for recording. I open my personal meeting and record. I’d like to use Windows Camera but it doesn’t let me specify that I want video from the HD webcam and audio from the USB microphone. If you don’t have a USB microphone and aren’t planning to use screenshare, Windows Camera should be fine.

Note that if you share your screen during the video, this will change the aspect ratio of the whole video unless the screen or window you are sharing has the same resolution as the camera (probably 1280×720 for a laptop’s webcam or 1920×1080 for an external HD webcam). I learned this the hard way, being very puzzled that there were black bars on every side of a recent video until I realized that halfway through I had turned on screen share. Fortunately I was able to crop these bars in Filmora.

I like to use screen share for things that are kind of dynamic by nature, such as walking students through a NetLogo simulation. It hasn’t come up yet in a recorded lecture but I’d also use this to demo RStudio and Stata. If you share your whole screen, make sure to mute notifications and close any windows you don’t want your students to see.

I haven’t had occasion to do this in a recorded lecture, only in office hours, but to get the traditional blackboard experience I open Zoom on my tablet, go to share screen, and choose "share whiteboard." You can record this, but you have to go to [your university] to retrieve the cloud recording.

If I make a mistake or my dog barks at a delivery person walking past or a family member makes a noise, I simply leave a few seconds of silence and start over at some earlier natural break, usually a few lines up in my notes. I later edit out the interruption/mistake and the pause. This is one of the main reasons to edit as it means you don’t need to do a perfect take. This is especially useful if you do long videos since the chance of an interruption or mistake, and the hassle of re-recording, goes up as the video grows longer.

When I complete the recording I copy it to Box but I keep that folder locally synced since trying to do video-editing from on-demand cloud storage is a painful experience. I plan on unsyncing the folder when the term is over.

Editing

Like data cleaning, it makes sense to treat the raw video files as read-only and convert them into clean files ready for distribution. In my class directory on Box I have subfolders for "/rawvideo," "/cleanvideo," and then one for each week of slides, for instance "/01_intro_and_econ."

I open a new file in Filmora and set it to 720p. My feeling is that nobody needs a full HD video of a glorified podcast, so the only thing 1080p does is make the finished file take up more space on my hard drive and take longer to upload to the university website. The worst thing about 1080p is that unless they’re very short, the files are so big that UCLA’s instructional website refuses the upload.

Note that I don’t need to have slides made yet.

I go through the video in Filmora. My first pass takes about 3-4x the runtime of the video and involves the following:

  • Edit out bad takes. I just use the scissors to cut the beginning and end of the bad material then right-click and delete.
  • Use the "titles" function to add pop-up text for key words. I set these in dark grey and put them to the right of me. (Remember, I frame the camera so I am camera left which leaves plenty of space on the right). Since I have a white backdrop the titles are clearly legible. If you have a complicated backdrop like a bookshelf or garden you may need to add a layer of a solid contrasting color below the text and optionally set the contrasting color’s opacity to about 50%. This contrasting color will make the text pop rather than blending into the background.
  • Use the "titles" function to create placeholders for the slides. I leave this in white and it’s just a description of the slide, which I create in PowerPoint as I edit the video. So if I create a histogram on the slides, the placeholder title may say "histogram."

I then finalize the slides in PowerPoint, export them to PDF, and then use PDF-XChange Editor to convert the slides to a series of PNG files.

It’s now time for my second pass, which mercifully takes only about half the runtime. In this pass I find the placeholder titles and replace them with the PNGs. The PNGs may take the whole frame, or I may crop and/or resize them so they take a partial frame, which gives a newscaster effect.

I then export the video. This takes about half the runtime on my computer, but that just means I can’t use my video-editing software during that time; it’s not active work for me. This may be faster or slower on your computer (mine has a fast CPU but the video card is nothing special).

When to upload and when to reveal

It takes a while to upload the video and even longer for the server to process it, so upload well in advance. You can upload a file and then "hide" it until you are ready for the students to see it. (That’s true at UCLA, where we use Moodle; presumably it’s also true for Blackboard and Canvas.) This implies a question for pre-recorded lectures not faced by either traditional or streaming teaching: when to release the lectures. In fall quarter I mostly released lectures the Thursday before they appeared on the syllabus. (I sometimes didn’t have them done until Saturday night.)

I don’t think I’d do this again. My students didn’t think of it as getting to see the lectures four days early but as having only four days to write their memos, and they complained, a lot, that this wasn’t enough time. From the students’ perspective, it is unreasonable to expect them to do the reading and come to a preliminary understanding of it themselves before I explain to them what the reading is about in lecture.

However, I feel that being able to make a preliminary engagement with a text independently is an important part of a college education and a reasonable demand of college-level work. In addition, I have seen the counterfactual: in the past I had the homework be about the reading from the previous week, and the TAs uniformly reported it was a disaster to discuss lectures from week X and readings from week X-1. Since this is a university, not a restaurant, ultimately my perspective is the one that counts, so if I were doing it again I’d release the lectures on the Tuesday they appear on the syllabus. Zero overlap with the period during which they do homework is probably less likely to lead to grievance than a short overlap. If you’re an undergraduate in one of my classes in a subsequent quarter, first of all please stop reading my blog, and second of all, you can blame the students of fall quarter 2020.

Make a trailer

At Jessica Collett’s suggestion, a few weeks in I started recording "trailer" lectures that I post at least a week before the material they cover. The trailers briefly discuss the lecture materials and the reading. They’re pretty similar to the few minutes you might add at the end of a Thursday lecture describing what to expect from next week’s material. I’m not sure how many of the students watch the trailers, but they only take a few minutes (there’s no editing), so why not?

Thumbnails

Kaltura, UCLA’s video vendor, seems to have an artificial intelligence designed to find the single most unflattering frame in the video and set it as the thumbnail. To fix this you will need to do one of two things.

  1. Have eight seconds showing a title card before the lecture starts. Since Kaltura takes the frame 7 seconds in as the thumbnail, this ensures the title card will be the thumbnail.

  2. In CCLE/Moodle’s admin panel, go to media gallery, then "+ Add Media," then select and "publish," then click the ellipsis on the thumbnail, then click the pencil on the thumbnail, and then click the thumbnail tab. You can either "upload thumbnail" (I like to use a memorable figure or graph from my slides) or "auto-generate" which gives you ten choices, at least one of which will not make you look ridiculous. Yes, I agree, it’s ridiculous that they bury "don’t make me look like an ugly dork" so deep in the UX.

Exams

I worry a lot about academic misconduct in remote teaching, and both my own experience and reports from peers suggest I am right to worry. I only have anecdata for UCLA, but Berkeley has seen a 400% increase in cheating reports. Anything that takes students away from a bluebook in class is going to make it easier to cheat. But the thing is that cheating is time-consuming, clumsy, or both. The only way to write an answer fast is to know the answer. Sure, students can google keywords from the prompt and ctrl-c the first result even faster than they can type, but that’s easy to catch because it typically only loosely resembles the answer, and TurnItIn automates this.

My main solution was to keep the exam window tight: no tighter than it is with a traditional bluebook, but no longer either. In the before time when we had bluebooks I gave my students about 70 minutes for 4 short-answer questions and a few multiple choice; now I give them 60 minutes for 3 short-answer questions. That’s enough time to answer the questions but not enough to research them. I have heard of colleagues doing things like saying "you have any one-hour period in the next 24 hours to do the exam," which to me feels like leaving a bucket of candy on your porch with a "please take one" sign on Halloween.

One thing I did and would do again is offer an evening sitting. A lot of our students are in Asia, and business hours in the US are basically sleeping hours in China and Korea. I don’t want students stuck overseas to have to take the exam at what is, to them, 3am. Likewise, some Americans have problems with family members using up all the bandwidth during the day or whatever. This requires me to write a second set of questions in case the morning questions leak, but I think it’s worth it.

What I will not be repeating was my attempt to individually watermark and email exam prompts. The idea was that I’d be able to trace who uploaded their exam to CourseHero, but it was a ton of work and didn’t successfully distribute the exams. It meant having students sign up in advance, which meant dealing with those who didn’t sign up in time. It took about two or three minutes per student to set up, which in a large lecture means an entire work day. Worst of all, the emails didn’t arrive on time. They were all sent on time (I scheduled them the night before), but they didn’t actually arrive in the students’ inboxes on time. At least one was almost an hour late. Never doing that again.

Term paper mistakes to avoid

I decided to assign a term paper in fall quarter, in part because I wanted to give less weight to timed exams. For one of my lectures this was "apply theories from lecture to current events in the field we are studying." For the other it was "apply theories from lecture to Book A or Book B." My TAs very reasonably said "we can’t grade that many papers and stay within a reasonable number of hours given that we have 75 students each." One of my TAs suggested we have students work in pairs so there would be half as many papers to grade. While I appreciate the TA’s suggestion and would be delighted to work with this TA again, it was a mistake to follow this particular suggestion. Having students work in pairs created a massive logistical burden of recording partners for those who paired themselves up and then pairing off the unpaired. In the class with two books this was further complicated because a) I had to match people who wanted the same book and b) I had to ensure an equal number of papers on each book so the TAs wouldn’t have to read multiple books. And even then my work wasn’t done, because I then had to deal with "my partner dropped the class" or "my partner isn’t answering my emails" or "I got a new partner but now my old partner wants to work together again."

In the future I am never assigning group work again unless the project is intrinsically collaborative. If it’s too much grading for the TAs to grade term papers I will either grade enough personally to absorb the excess hours or choose another assignment. Assigning group work cuts down the number of papers but it doesn’t really reduce the total instructional hours.

I’m also not saying "choose one of these books" unless it’s not necessary for the grader to have read the book, there is sufficient time for my TAs to read multiple books, or it’s a seminar where I have already read all the books.

VPN

VPN has proven to be a much bigger problem than usual. This is weird, since students always have to deal with VPN and it’s never a totally intuitive technology, but it turns out it helps a lot when they can bring their laptop by the campus tech support lab after class or, if all else fails, just download the papers while they’re on campus. That’s a big escape valve for the few percent of students who can’t get VPN to work, and that escape valve is clogged for remote teaching. Rather than deal with a few "I can’t get the readings" emails a week, I eventually just mirrored the papers on the course site. What better illustration could you have that piracy is a service problem, not a price problem? The students have already paid for the readings through their tuition, but they can’t access them because the VPN paywall is too unreliable at scale (and I suspect it’s not just user error but that the VPN server simply has downtime).

January 13, 2021 at 2:37 pm


My prediction for what’s going to come for higher education, and particularly the job market, is that a repeat of 2008-2009 is the best-case scenario. As many of you will recall, the 2008-09 academic year job market was surprisingly normal, in part because the lines were approved before the crash. Then there was essentially no job market in the 2009-2010 academic year, as private schools saw their endowments crater and public schools saw their funding slashed in response to state budget crises. By the 2010-2011 academic year there was a more or less normal level of openings, but there was a backlog of extremely talented postdocs and ABDs who had delayed defending, which meant that the market was extremely competitive in 2010-2011 and for several years thereafter. A repeat of this is the best-case scenario for covid-19, but it will probably be worse.

The overall recession will be worse than 2008. In 2008 we realized houses in the desert a two hour drive from town weren’t worth that much when gas went to $5/gallon. This was a relatively small devaluation of underlying assets, but it had huge effects throughout the financial system and then sovereign debt. Covid-19 means a much bigger real shock to the economy. In the short run we are seeing large sectors of the economy either shut down entirely or running at fractional capacity for months until we keep R0 below 1 and for some industries more like a year or two. Not only is this a huge foregone amount of GDP, but it will last long enough that many people and firms will go bankrupt and they won’t bounce back the second quarantine is relaxed. We also have large sectors of the economy that will need to be devalued, especially anything having to do with travel. I also expect to see a trend away from just-in-time inventory and globalization and towards economic autarky and a preference for slack over efficiency. This too will hit GDP. Just as in the post 2008 era, expect to see a weak stock market (which will hit endowments) and weak state budget finances (which will hit publics). We may even see straining of the fiscal capacity of the federal government which could mean less support for both tuition and research.

That’s just “economy to end tomorrow, universities and colleges hardest hit” though. There are particular reasons to think universities will be distinctly impacted. Universities are not the cruise ship industry, but we’re closer to it than you’d think. In the post-2008 era, universities made up funding shortfalls by treating international students (who as a rule pay full tuition) as a profit center. That is basically gone for the 2020-2021 academic year. The optimistic scenario for covid-19 is that a couple months of quarantine drives down infections to the point that we can switch to test and trace like many Asian countries. Life is 95% normal in Taiwan, and that could be us with enough masks, thermometers, tests, and GPS-enabled cell phone apps. However, part of the way Taiwan achieves relative normalcy is that on March 19 it banned nearly all foreigners from entering the country, and even repatriated citizens are subject to two weeks of quarantine. This is a crucial part of a test and trace regime, as demonstrated by Singapore (which has a public health response similar to Taiwan’s) switching from test and trace to mass quarantine after new infections from abroad pushed infections above the point where test and trace is feasible. The US State Department has already warned Americans abroad to come home or we might not let you back. Travel restrictions are coming and will mean the end of international students for at least one academic year.

Most likely we are looking at 2 to 3 years without a job market, not just the one year with no market as happened post-2008. But as in the post-2008 job market, there will be a backlog of talented fresh PhDs and so it will be absurdly competitive to get a job. During the Cultural Revolution, China suspended college entrance exams as a way to exclude applicants from “bad class backgrounds” (e.g., grandpa was a landlord). When the Cultural Revolution ended China reinstated exams and so there was a huge backlog of students who could now apply, which made the matriculating class of 1977 extraordinarily competitive. There will be no job market until the 2022-2023 academic year, give or take, and then for several years thereafter you’ll need several articles or a book contract to get a flyout at Podunk State.

OK, so we have a few years of nothing, and then a few years of an extremely competitive market, but the PhD inventory should clear by the 2026-2027 academic year, right? And this means that if you’re in the fall 2020 entering graduate cohort or want to apply for the fall 2021 entering graduate cohort you should be fine, right? Well, no. Birth cohorts got appreciably smaller in 2008 which means by the time the covid-19 recession and its PhD backlog clears, there will be a demographic shock to demand for higher education.

My suggestion to all grad students is to emphasize methods so you’ll be competitive on the industry market, which will recover faster than the academic market. That’s right, I really am telling you to learn to code.

April 8, 2020 at 10:02 am

Test by batches to save tests

Given the severe shortage of covid testing capacity, creative approaches are needed to expand the effective testing capacity. If the population base rate is relatively low, it can be effective to pool samples and test by batches. Doing so would imply substantial rates of false positives, since every member of a positive batch would be presumptively sick, but this would still allow three major use cases that will be crucial for relaxing quarantine without letting the epidemic go critical so long as testing capacity remains finite:

  1. Quickly clear from quarantine every member of a batch with negative results
  2. Ration tests more effectively for individual diagnoses with either no loss or an identifiable loss in precision but at the expense of delay
  3. Use random sampling to create accurate tracking data with known confidence intervals for the population of the country as a whole or for metropolitan areas

Here is the outline of the algorithm. Assume that you have x people who were exposed to the virus, that each exposed person is infected with probability 1/x, and that you have only one test. Without testing, every one of these x people must quarantine. Under these assumptions, the expected number of actually infected people is 1 out of the x exposed, or more precisely a random draw from a Poisson distribution with a mean of one. This means that 36.8% of the time nobody is infected, 36.8% of the time one person is infected, 18.4% of the time two are infected, and 8% of the time three or more are infected. With only one test, only one individual can be tested and cleared, but if you pool the samples and test them as a batch, over a third of the time you can clear the whole batch.
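These Poisson probabilities are easy to check with a few lines of code (a sketch, not part of the original post):

```python
import math

def poisson_pmf(k, lam=1.0):
    """P(exactly k infections) for a Poisson distribution with mean lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

# probabilities of 0, 1, 2, and 3+ infections when the mean is one
p0, p1, p2 = (poisson_pmf(k) for k in range(3))
p3_plus = 1 - (p0 + p1 + p2)

print(f"{p0:.3f} {p1:.3f} {p2:.3f} {p3_plus:.3f}")  # 0.368 0.368 0.184 0.080
```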

Alternately, suppose that you have two tests available for testing x people with an expected value of one infection. Divide the x exposed people into batches A and B, then test each batch. If nobody is infected, both batches will clear. If one person is infected, one batch will clear and the other will not. Even if two people are infected, there is a 50% chance they will both be in the same batch and thus the other batch will clear, and if there are three there is a 25% chance they are all in the same batch, etc. Thus with only two tests there is a 36.8% chance you can clear both batches and a 47.7% chance you can clear one of the two batches. This is just an illustration based on the assumption that the overall pool has a single expected case. The actual probabilities will vary depending on the population mean. The lower the population mean (or base rate), the larger you can afford to make the test batches and still gain useful information.
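The two-batch figures can be verified in closed form; a minimal sketch, assuming the pool’s total infection count is Poisson with mean one, so each random half is Poisson with mean one half:

```python
import math

# Splitting a pool with Poisson(1) infections at random into two equal
# batches gives two independent Poisson(0.5) batches.
p_clear = math.exp(-0.5)                    # P(a given batch is all-negative)

p_both_clear = p_clear ** 2                 # both batches test negative
p_one_clears = 2 * p_clear * (1 - p_clear)  # exactly one batch tests negative
p_none_clear = (1 - p_clear) ** 2           # both batches test positive

print(f"{p_both_clear:.3f} {p_one_clears:.3f} {p_none_clear:.3f}")
# 0.368 0.477 0.155
```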

This most basic application of pooled samples is sufficient for the first use case: clearing batches of people immediately. For example, one could clear groups of recently arrived international travelers, or test medical personnel and first responders weekly. There would be substantial false positives, but this is still preferable to quarantining the entire population.

Ideally, though, we want individual diagnoses, which implies applying the test iteratively. Return to the previous example, but suppose that we have access to (x/2)+2 tests. We use the first two tests to divide the exposed pool into two test batches. There is a 36.8% chance both batches test negative, in which case no further action is necessary and we can save the remaining x/2 test kits. The modal outcome, though (47.7%), is that one of the two test batches tests positive. Since we have x/2 test kits remaining and x/2 people in the positive batch, we can now test each person in the positive batch individually and meanwhile release everyone in the negative batch from quarantine.

There is also a 15.5% chance that both batches test positive, in which case the remaining x/2 test kits will prove inadequate, but if we are repeating this procedure often enough we can borrow kits from the roughly ⅓ of cases where both batches test negative. Thus with testing capacity of approximately ½ the size of the suspected population, we can test the entire suspected population. A batch-test / individual-test protocol will slow down testing, as some people will need to be tested twice (though their samples are only collected once), but it allows us to greatly economize on testing capacity.
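As a sanity check on the claim that roughly half the naive testing capacity suffices, here is a Monte Carlo sketch of the two-stage protocol (the pool size of 40 and the simulation parameters are my own illustrative choices, not from the post):

```python
import random

def mean_tests_used(x, n_sims=100_000, seed=0):
    """Simulate the two-stage protocol for x exposed people, each infected
    independently with probability 1/x: test two pooled batches, then test
    individually everyone in any batch that comes back positive."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_sims):
        size_a = x // 2
        size_b = x - size_a
        batch_a_positive = any(rng.random() < 1 / x for _ in range(size_a))
        batch_b_positive = any(rng.random() < 1 / x for _ in range(size_b))
        used = 2                  # one test per pooled batch
        if batch_a_positive:
            used += size_a        # retest batch A individually
        if batch_b_positive:
            used += size_b        # retest batch B individually
        total += used
    return total / n_sims

m = mean_tests_used(40)
print(round(m, 1))  # roughly 18 tests on average, versus 40 if everyone is tested
```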

Finally, we can use pooled batches to monitor population level infection rates as an indication of when we can ratchet up or ratchet down social distancing without diverting too many tests from clinical or customs applications. Each batch will either test positive or negative and so the survey will only show whether at least one person in the batch was positive, but not how many.

For instance, suppose one collects a thousand nasal swabs from a particular city, divides them into a hundred batches of ten each, and then finds that only two of these hundred batches test positive. That is, 98% of batches have exactly zero infected test subjects. Even though the test batch data are dichotomous, one can infer the mean of a Poisson just from the proportion of zeroes, and here that corresponds to a mean of about 0.02 infections per batch, essentially the naive 2% figure. Although this sounds equivalent to simply testing individuals, the two numbers can diverge considerably. For instance, if half the batches test positive, this implies a mean of about 0.7 infections per batch, much higher than the naive 50% would suggest.
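A sketch of that inference (the function name is mine): since the share of all-negative batches estimates P(0) = e^−λ, the mean infections per batch is λ = −ln(share of negative batches).

```python
import math

def infections_per_batch(n_batches, n_positive):
    """Infer the Poisson mean infections per batch from the share of
    batches that test entirely negative: lambda = -ln(P(zero))."""
    p_zero = (n_batches - n_positive) / n_batches
    return -math.log(p_zero)

print(round(infections_per_batch(100, 2), 3))   # ~0.02 infections per batch
print(round(infections_per_batch(100, 50), 2))  # ~0.69, not the naive 0.50
```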


On Twitter, Rich Davis notes that large batches risk increasing false negatives. This is a crucial empirical question. I lack the bench science knowledge to provide estimates but experts would need to provide empirical estimates before this system were implemented.

March 18, 2020 at 3:01 pm

Drop list elements if

I am doing a simulation in R that generates a list of simulated data objects and then simulates emergent behavior on them. Unfortunately, it chokes on “data” meeting a certain set of conditions, so I need to kick data like that from the list before running it. This is tricky because deleting a list element shifts the indexing: once you kick an element, the list is shorter, and everything after the deleted element moves up by one. My solution is to create a vector of dummies flagging which list elements to kick, indexing in ascending order, but then actually kick list elements in descending order. Note that if the length of the list is used elsewhere in your code, you’ll need to update it after running this.

FWIW, the substantive issue in my case is diffusion: some of the networks contain isolates, which doesn’t work in the region of parameter space that assumes all diffusion occurs through the network. In my case, deleting these networks is a conservative assumption.


Apologies for posting the code as a screenshot but WordPress seems to really hate recursive loops, regardless of whether I use sourcecode or pre tags.
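For readers who can’t see the screenshot, here is the same flag-then-delete pattern sketched in Python rather than the post’s R (the list contents and the "n_isolates" field are stand-ins): deleting in descending index order keeps each deletion from shifting the positions of the elements still to be deleted.

```python
# Stand-in for the list of simulated networks; "n_isolates" is a made-up
# field marking the condition that makes an element unusable.
sims = [{"n_isolates": k % 3} for k in range(10)]

# Pass 1: flag which elements to kick, indexing in ascending order.
kick = [i for i, s in enumerate(sims) if s["n_isolates"] > 0]

# Pass 2: actually kick them in descending order, so earlier deletions
# don't invalidate the indices of later ones.
for i in sorted(kick, reverse=True):
    del sims[i]

print(len(sims))  # 4 of the 10 stand-ins survive
```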



April 5, 2019 at 2:43 pm

EU migrants

| Gabriel |

Yesterday the Guardian published a list of 34,361 migrants who had died attempting to reach or settle within Europe. (The list is originally from United for Intercultural Action.) The modal cause of death is shipwreck, but the list also includes suicides, homicides, terrestrial accidents, etc. I was curious about when these deaths occurred, so I converted the PDF list to a csv and made a graph and a few tables.

The graph is noisy, but nonetheless a few trends jump out.

Deaths rose slowly from the 1990s through a peak in 2006 and 2007, then dropped. Presumably the drop reflects declining labor demand in the EU. There is an isolated jump in 2009, but deaths are low in 2008 and 2010.

Deaths spike sharply in 2011, especially March of that year, which coincides with regime collapse in Libya. (Since 2004 Gaddafi had been suppressing migrant traffic as part of a secret deal with Italy). Deaths were low again by late 2011.

The dog that didn’t bark is that migrant deaths were relatively low throughout 2012 and 2013, notwithstanding the Syrian Civil War.

In October 2013 there was a major shipwreck, after which the Italians launched Operation Mare Nostrum, in which the Italian Navy would rescue foundering vessels. For the first few months this seems to have been successful as a humanitarian effort, but eventually the Peltzman Effect took to sea and deaths skyrocketed in the summer of 2014. After this spike (and the budget strain created by the operation), the Italians cancelled Operation Mare Nostrum and deaths decreased briefly.

Mare Nostrum was replaced by Operation Triton, which was a) a pan-European effort and b) less ambitious. The post-Mare Nostrum death lull ended in spring of 2015.

European Union states had widely varying migration policies during 2015, with some enacting restrictionist policies and others pro-migration policies. Although there were many migrant deaths in 2015, they were mostly in the spring. Angela Merkel’s various pro-immigration statements (circa September and October of 2015) do not seem to have yielded a moral hazard effect on deaths, perhaps because this was almost simultaneous with the EU getting an agreement with Turkey to obstruct migrant flows. In any case, migrant deaths were relatively low in the last quarter of 2015 and first quarter of 2016. Deaths were very high in March and April of 2016 and overall 2016 was the worst year for deaths in the EU migration crisis.

In 2017 deaths declined back to 2015 levels, being high in both years but not as high as the peak year of 2016. It is too early to describe trends for 2018 but deaths in the first quarter of 2018 are lower than those of any quarter in 2017.


*ran PDF through pdf/csv translator then stripped out lines with regex for lines not starting w digit
cd "C:\Users\gabri\Dropbox\Documents\codeandculture\eumigrants\"
import delimited ListofDeathsActual.csv, varnames(1) clear
keep if regexm(found,"^[0-9]")
drop v7-v30
gen date = date(found,"DMY",2019)
format date %td
gen n=real(number)
sum n
gen week=wofd(date)
format week %tw
gen month=mofd(date)
format month %tm
gen year=yofd(date)
save eudeaths.dta, replace
table month, c(sum n)
table year, c(sum n)
collapse (sum) n, by(week)
sort week
lab var n "deaths"
twoway (line n week)
graph export migrants.png, replace

June 21, 2018 at 4:11 pm 4 comments

Networks Reading List

| Gabriel |

In response to my review of Ferguson’s Square and the Tower, several people have asked me what to read to get a good introduction to social networks. First of all, Part I of Ferguson’s book is actually pretty good. I meant it when I said in the review that it’s a pretty good intro to social networks, and in my first draft I went through and enumerated all the concepts he covers besides betweenness and hierarchy being just a tree network. Here’s the list: degree, sociometry, citation networks, homophily, triadic closure, clustering coefficients, mean path length, small worlds, weak ties as bridges, structural holes, network externalities, social influence, opinion leadership, the Matthew Effect, scale free networks, random graph networks, and lattices. While I would also cover Bonacich centrality / dependence and alpha centrality / status, that’s a very good list of topics and Ferguson does it well. I listed all my issues with the book in the review; basically 1) he’s not good on history/anthropology prior to the early modern era and 2) there’s a lot of conceptual slippage between civil society and social networks as a sort of complement (in the set theory sense) to the state and other hierarchies. However, it’s a very well written book that covers a lot of history, including some great historical network studies, and the theory section of the book is a good intro to SNA for the non-specialist.

Anyway, what else would I recommend as the best things to get started with for understanding networks, especially for the non-sociologist?

Well obviously, I wrote the best short and fun introduction.


My analysis of combat events in the Iliad is how I teach undergraduates in economic sociology and they like it. (Gated Contexts version with great typesetting and art, ungated SocArxiv version with the raw data and code). This very short and informal paper introduces basic concepts like visualization and nodes vs edges as well as showing the difference between degree centrality (raw connections), betweenness centrality (connections that hold the whole system together), and alpha centrality (top of the pecking order).

Social networks is as much a method as it is a body of theory so it can be really helpful to play with some virtual tinker toys to get a tactile sense of how it works, speed it up, slow it down, etc. For this there’s nothing better than playing around in NetLogo. There’s a model library including several network models like “giant component” (Erdos-Renyi random graph), preferential attachment, “small world” (Watts and Strogatz ring lattice with random graph elements), and team assembly. Each model in the library has three tabs. The first shows a visualization that you can slow down or speed up and tweak in parameter space. This is an incredibly user-friendly and intuitive way to grok what parameters are doing and how the algorithm under each model thinks. A second tab provides a well-written summary of the model, along with citations to the primary literature. The third tab provides the raw code, which as you’d expect is a dialect of the Logo language that anyone born in the late 1970s learned in elementary school. I found this language immediately intuitive to read and it only took me two days to write useful code in it, but your mileage may vary. Serious work should probably be done in R (specifically igraph and statnet), but NetLogo is much better for conveying the intuition behind models.
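To give a flavor of what a model like “giant component” is doing under the hood, here is an Erdős–Rényi G(n, p) random graph from scratch (a bare-bones Python sketch for illustration, not the NetLogo code):

```python
import random
from itertools import combinations

random.seed(0)
n, p = 50, 0.05  # 50 nodes; each possible tie forms with probability 0.05

# draw each of the n*(n-1)/2 possible edges independently
edges = [(i, j) for i, j in combinations(range(n), 2) if random.random() < p]

# tabulate degree (raw number of connections per node)
degree = {i: 0 for i in range(n)}
for i, j in edges:
    degree[i] += 1
    degree[j] += 1

mean_degree = sum(degree.values()) / n  # expectation is p*(n-1) = 2.45
print(f"{len(edges)} edges, mean degree {mean_degree:.2f}")
```

Sweeping p up and down here is the command-line equivalent of dragging the NetLogo slider through parameter space.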

Since this post was inspired by Square and the Tower and my main gripe about that is slippage between civil society and social networks, I should mention that the main way to take a social networks approach to civil society in the literature is to follow Putnam in distinguishing between bridging (links between groups) and bonding (links within groups) social capital. TL;DR is don’t ask the monkey’s paw for your society to have social capital without specifying that you want it to have both kinds.

If you want to get much beyond that, there are some books. For a long time Wasserman and Faust was canonical but it’s now pretty out of date. There are a few newer books that do a good job of it.

The main textbook these days is Matthew O. Jackson’s Social and Economic Networks. It’s kind of ironic that the main textbook is written by an economist, but if Saul of Tarsus could write a plurality of the New Testament, then I guess an economist can write a canonical textbook on social network analysis. It covers a lot of topics, including very technical ones.

I am a big fan of The Oxford Handbook of Analytical Sociology. Analytical sociology isn’t quite the same thing as social networks or complex systems, but there’s a lot of overlap. Sections I (Foundations) and III (Social Dynamics) cover a lot in social networks and related topics like threshold models. (One of my pet peeves is assuming networks are the only kind of bottom-up social process, so I like that OHoAS includes stuff on models with less restrictive assumptions about structure, which can be not just simpler but sometimes more accurate).

I’m a big fan of John Levi Martin’s Social Structures. The book divides fairly neatly into a first half that deals with somewhat old school social networks approaches to small group social networks (e.g., kinship moieties) and a second half that emphasizes how patronage is a scalable social structure that eventually gets you to the early modern state.

Aside from that, there’s just a whole lot of really interesting journal articles. Bearman, Moody, and Stovel 2004 map the sexual network of high school students and discover an implicit taboo on dating your ex’s partner’s ex. Smith and Papachristos 2016 look at Al Capone’s network and show that you can’t conflate different types of ties, but neither can you ignore some types; only by taking seriously multiple types of ties as distinct can you understand Prohibition-era organized crime. Hedström, Sandell, and Stern 2000 show that the Swedish social democratic party spread much faster than you’d expect because it didn’t just go from county to county, but jumped across the country with traveling activists, which is effectively an empirical demonstration of a theoretical model from Watts and Strogatz 1998.

February 6, 2018 at 12:24 pm 1 comment

Blue upon blue

| Gabriel |

On Twitter, Dan Lavoie observed that Democrats got more votes for Congress but Republicans got more seats. One complication is that some states effectively had a Democratic run-off, not a traditional general. It is certainly true that most Californians wanted a Democratic senator, but not the 100% shown in the vote totals, since the general was between Harris and Sanchez, both Democrats. That aside, there’s a more basic issue: Democrats are just more geographically concentrated than Republicans.

Very few places with appreciable numbers of people are as Republican as New York or San Francisco are Democratic (i.e., about 85%). Among counties with at least 150,000 votes cast, in 2004 only two suburbs of Dallas (Collin County and Denton County) voted over 70% Republican. In 2008 and 2012 only Montgomery County (a Houston suburb) and Utah County (Provo, Utah) were this Republican. By contrast, in 2004 sixteen large counties voted at least 70% Democratic and 25 counties swung deep blue for both of Obama’s elections. A lot of big cities that we think of as Republican are really slightly reddish purple. For instance, in 2004 Harris County (Houston, Texas) went 55% for George W. Bush and Dallas was a tie. In 2012 Mitt Romney got 58% in Salt Lake County. The suburbs of these places can be pretty red, but as a rule they are either not nearly as red as San Francisco is blue, not very populated, or both.

I think the best way to look at the big picture is to plot the density of Democratic vote shares by county, weighted by county population. Conceptually, this shows you the exposure of voters to red or blue counties.

At Charlie Seguin’s suggestion, I added a dozen lines of code to the end to check the difference between the popular vote and what you’d get if you treated each county as winner-take-all then aggregated up weighted by county size. Doing so it looks like treating counties as winner-take-all gives you a cumulative advantage effect for the popular vote winner. Here’s a table summarizing the Democratic popular vote versus the Democratic vote treating counties as a sort of electoral college.
       Popular vote (%)   Electoral counties (share)
2004   48.26057           0.4243547
2008   52.96152           0.6004451
2012   51.09047           0.5329050
2016   50.12623           0.5129522


elections <- read.csv('')
elections$bluecounty <- ifelse(elections$pct_dem>50, 1, 0)

elect04 <- elections[(elections$year==2004 & elections$vote_total>0),]
elect04$weight <- elect04$vote_total/sum(elect04$vote_total)
dens04 <- density(elect04$pct_dem, weights = elect04$weight)
png(filename="elect04.png", width=600, height=600)
plot(dens04, main='2004 Democratic vote, weighted by population', ylab = 'Density, weighted by county population', xlab = "Democratic share of vote")
dev.off()

elect08 <- elections[(elections$year==2008 & elections$vote_total>0),]
elect08$weight <- elect08$vote_total/sum(elect08$vote_total)
dens08 <- density(elect08$pct_dem, weights = elect08$weight)
png(filename="elect08.png", width=600, height=600)
plot(dens08, main='2008 Democratic vote, weighted by population', ylab = 'Density, weighted by county population', xlab = "Democratic share of vote")
dev.off()

elect12 <- elections[(elections$year==2012 & elections$vote_total>0),]
elect12$weight <- elect12$vote_total/sum(elect12$vote_total)
dens12 <- density(elect12$pct_dem, weights = elect12$weight)
png(filename="elect12.png", width=600, height=600)
plot(dens12, main='2012 Democratic vote, weighted by population', ylab = 'Density, weighted by county population', xlab = "Democratic share of vote")
dev.off()

elect16 <- read.csv('')
elect16$sumvotes <- elect16$TRUMP+elect16$CLINTON
elect16$clintonshare <- 100*elect16$CLINTON / (elect16$TRUMP+elect16$CLINTON)
elect16$weight <- elect16$sumvotes / sum(elect16$sumvotes)
elect16$bluecounty <- ifelse(elect16$clintonshare>50, 1, 0)
dens16 <- density(elect16$clintonshare, weights = elect16$weight)
png(filename = "elect16.png", width=600, height=600)
plot(dens16, main='2016 Democratic vote, weighted by population', ylab = 'Density, weighted by county population', xlab = "Democratic share of vote")
dev.off()

png(filename = "elect.png", width=600, height=600)
plot(dens04, main='Democratic vote, weighted by population', ylab = 'Density, weighted by county population', xlab = "Democratic share of vote", col="blue")
lines(dens08, col="red")  # overlay the remaining years (colors arbitrary)
lines(dens12, col="darkgreen")
lines(dens16, col="purple")
dev.off()

m <- matrix(1:8,ncol=2,byrow = TRUE)
colnames(m) <- c("Popular vote","Electoral counties")
rownames(m) <- c("2004","2008","2012","2016")
m[1,1] <- weighted.mean(elect04$pct_dem,elect04$weight)
m[1,2] <- weighted.mean(as.numeric(elect04$bluecounty),elect04$weight)
m[2,1] <- weighted.mean(elect08$pct_dem,elect08$weight)
m[2,2] <- weighted.mean(as.numeric(elect08$bluecounty),elect08$weight)
m[3,1] <- weighted.mean(elect12$pct_dem,elect12$weight)
m[3,2] <- weighted.mean(as.numeric(elect12$bluecounty),elect12$weight)
m[4,1] <- weighted.mean(elect16$clintonshare,elect16$weight)
m[4,2] <- weighted.mean(as.numeric(elect16$bluecounty),elect16$weight)


November 9, 2017 at 11:24 am

Strange Things Are Afoot at the IMDb

| Gabriel |

I was helping a friend check something on IMDb for a paper and so we went to the URL that gives you the raw data. We found it’s in a completely different format than it was last time I checked, about a year ago.

The old data will be available until November 2017. I suggest you grab a complete copy while you still can.

Good news: The data is in a much simpler format, consisting of six wide tables that are tab-separated row/column text files. You’ll no longer need my Perl scripts to convert them from a few dozen files that are a weird mishmash of field-tagged format and the weirdest tab-delimited text you’ve ever seen. Good riddance.
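One upside of the new format is that any off-the-shelf csv/tsv reader handles it; a Python sketch with made-up rows (the column names here are illustrative, check the actual files):

```python
import csv
import io

# stand-in for one of the tab-separated tables (contents made up)
tsv = "tconst\tprimaryTitle\tstartYear\ntt0000001\tCarmencita\t1894\n"

# DictReader treats the first line as the header row
rows = list(csv.DictReader(io.StringIO(tsv), delimiter="\t"))
print(rows[0]["primaryTitle"])
```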

Bad news: It’s hard to use. S3 is designed for developers, not end users. You could download the old version with Chrome or “curl” from the command line. The new version requires you to create an S3 account and, as best I can tell, there’s no way to just use the S3 web interface to get it. There is sample Java code, but it requires supplying your account credentials, which gives me cold-sweat flashbacks to when Twitter changed its API and my R scrape broke. Anyway, the bottom line is you’ll probably need IT to help you with this.

Really bad news: A lot of the files are gone. There’s no country by country release dates, no box offices, no plot keywords, there are only up to three genres, no distributor or production company, etc. These are all things I’ve used in publications.

September 8, 2017 at 2:29 pm 3 comments

The more things change

| Gabriel |

I just listened to a podcast conversation [transcript / audio] between Tyler Cowen and Ben Sasse and very much enjoyed it, but was bothered by one of the senator’s laugh lines, “Turned out sex was really similar in most centuries.”* Now in a sense this is obviously true, since any culture in which sex lost its procreative aspect would only last one generation, and indeed this has happened. But there is still a lot of variation within the scope condition that in pretty much all times and places, sex is procreative at least some of the time. What kinds of sex one has varies enormously over time, as do the kinds and numbers of people one has it with. We can see this over big periods of history and within living memory in our own culture. My discussion will be necessarily detailed, but not prurient.

Dover’s Greek Homosexuality uses detailed interpretation of comedies, legal speeches, pornographic pottery, and similar sources to provide a thorough picture of sexuality in 4th and 5th c BCE Greece, especially among Athenian upper class men, but not limited just to the idiosyncratic and often idealized views of philosophers. There were two big differences with our culture, the most obvious being that with whom you had sex varied over the life course, and the less obvious but equally important one being that what role you played mattered as much as with whom you played it. An aristocratic Athenian male would typically be an eromenos (“beloved” or passive homosexual) in his late teens and when he reached full maturity would be an erastes (“lover” or active homosexual) but also get married to a woman. As long as you stuck to this life course trajectory, no money changed hands, and the eromenos had love but not lust for his erastes, the relationship was honorable. However, for someone to remain an eromenos into full maturity was scandalous, and bearded men who continued to accept passive sexual roles were stigmatized. Interestingly, what exactly is the action that occurs between active and passive varies enormously by source, with philosophers downplaying sex entirely, pornographic pottery suggesting intercrural sex, and Aristophanes joking about anal intercourse (e.g., the best food for a dung beetle).

One thing the sources seem to agree on is that fellatio generally did not occur among Greek men. Dover argues that the avoidance of fellatio, avoidance of prostitution, and age separation of partners all served the purpose of avoiding hubris (assault that degrades status) otherwise implied by one male citizen penetrating another. Generally, our culture’s ubiquity of fellatio, and especially our common assumption that it is less intimate than vaginal intercourse, is exceptional across cultures. This is not only an issue of Greece: fellatio was exceptionally rare in 18th c elite French prostitution (although anal sex was common) and in early 20th c New York City. Interestingly, Dover notes that the women of Lesbos were legendary in Greek culture for heterosexual fellatio. While our culture derives its word for gay women from that island, largely through it being the home of Sappho, the cultural meaning in antiquity was of a fellatrix, though the two meanings made sense in the Greek mind as both relating to women who were especially open to sex of many varieties. This sounds bizarre to us, but as I’ll describe in a bit, it reflects emerging practice in our own culture.

For changes in recent decades, we do not need to rely on measuring the angles of penetration depicted on a kylix or on epithets in old comedy but can go by systematic survey data.** The main finding of Laumann et al’s 1994 Sex in America study was that sex was much more focused on monogamy, marriage, and vaginal intercourse than anyone expected based on Kinsey (who relied on convenience sampling) or popular culture. However, things have changed a lot in the last two decades and in ways much more profound than that my undergrads don’t like rock music. The National Survey of Family Growth 2002-2013 replicates most of the research questions of Laumann et al and finds that sex has gotten much more complicated since the early 1990s. One major finding is a substantial rise in same-sex intercourse. Women born from 1966-1974 are half as likely to have had same-sex intercourse as women born from 1985-1995. In contrast to ancient Athens, this rise in same-sex intercourse is limited to women (and the base rate is much higher), but as in ancient Greece, it is mostly an issue of youthful experimentation that is complementary to heterosexual practice, and on the margin women self-identify as straight or bi, not lesbian. Chandra et al’s analysis of the same data showed a corollary that echoes ancient stereotypes of Lesbians, which is that female experience with same-sex partners is positively correlated with lifetime number of male partners. In addition, Chandra et al found that heterosexual anal intercourse is rising substantially, with about 30% of women aged 18-44 in 2002 having experienced it, almost double what Laumann et al found a decade earlier. This likely reflects influence from pornography, as does the almost universal (~85%) adoption of pubic grooming among women under thirty. However, again, this echoes ancient practice, as Greek women would singe off pubic hair and indeed the punishment for a male adulterer was to be symbolically feminized through pubic depilation and penetration with a radish.

* Sasse elaborated that he meant that in all times and places sex serves a mix of recreation, procreation, and pair-bonding and I think he’s right about that.

** I am not relying on pornography production or usage data as I strongly suspect that pornography follows a zero-inflated over-dispersed count distribution and thus consumption data, especially that showing that pornography is increasingly bizarre, is mostly informative about a relatively small minority of intensive users.
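To illustrate the footnote's point, here is a quick simulation using a zero-inflated geometric as a stand-in for whatever the true distribution is (all parameters are made up): with most people consuming nothing and a skewed count among the rest, aggregate consumption data mostly reflects a small minority of intensive users.

```python
import math
import random

random.seed(42)

def zi_geometric(p_zero=0.8, p=0.1):
    """Zero-inflated geometric draw: most users consume nothing;
    the rest draw a right-skewed count (mean about 9)."""
    if random.random() < p_zero:
        return 0
    u = 1.0 - random.random()  # uniform on (0, 1]
    return int(math.log(u) / math.log(1.0 - p))

users = [zi_geometric() for _ in range(10_000)]
top_decile = sorted(users, reverse=True)[:1_000]
share = sum(top_decile) / sum(users)
print(f"top 10% of users account for {share:.0%} of total consumption")
```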

July 3, 2017 at 10:38 am 11 comments

Medicaid and mortality

| Gabriel |

This morning Spotted Toad picked up on the point in Quinones that a lot of pill mills were funded through Medicaid fraud and so he used Medicaid expansion under Obamacare to see if this led to greater drug overdoses in Medicaid expansion states. In fact he found that in the time since the Medicaid expansion, states that participated in the expansion had faster growth in overdose deaths than states that refused Medicaid expansion. That’s interesting, but I never want to base a trend on just two time points. (FWIW, Toad was analyzing the data as the CDC presents it — the analysis below requires a lot more queries). So I queried the CDC data in more granular detail to check if the trend started with Medicaid expansion. (saved query link, just iterate over year to get annual state-level OD deaths).

As it turns out, I was able to replicate Toad’s finding that Medicaid expansion states (blue) have higher rates and faster growth in fatal drug overdoses than Medicaid holdout states (red), but the two groups of states diverged starting in 2010, well before states began implementing Obamacare’s Medicaid expansion. So there may be a real difference between Medicaid expansion states (which are generally Democratic) and Medicaid holdout states (which are Republican), and the difference may even be some aspect of health policy, but it wasn’t Obamacare Medicaid expansion as the divergence starts too early. (It’s worth noting that Toad updated his own post with my graph as soon as I sent it to him).


Here is the data in Stata format (which you can reconstruct yourself from a series of CDC queries).

Here is the code

cd "~/Documents/codeandculture/cdcdrugmortality"

clear
gen state=""
gen year=.
save drugs19992015.dta, replace

forvalues i=1999/2015 {
 disp "`i'"
 insheet using drugs`i'.txt, clear
 append using drugs19992015.dta, force
 recode year .=`i'
 save drugs19992015, replace
}
drop if state==""


insheet using medicaidholdouts.txt, clear
ren v1 state
save medicaidholdouts.dta, replace

use medicaidholdouts, clear
merge 1:m state using drugs19992015
ren _merge medicaidexpansion 
recode medicaidexpansion 2=1 3=0

list state medicaidexpansion if (state=="Texas" | state=="California") & year==2010

save drugs19992015, replace

collapse (sum)deaths population, by (year medicaidexpansion)
gen cruderate= deaths/population
twoway (line cruderate year if medicaidexpansion==1) (line cruderate year if medicaidexpansion==0) , legend(order(1 "Medicaid expansion" 2 "Holdouts")) ytitle(Weighted Crude Rate of Fatal Drug Overdoses)
graph export medicaidod.png, replace

*have a nice day

And here’s my list of Medicaid holdouts:

North Carolina
South Carolina
South Dakota


March 21, 2017 at 9:16 pm 3 comments
