Author Archive

Edudammerung

My prediction for what’s coming for higher education, and particularly the academic job market, is that a repeat of 2008-2009 is the best-case scenario. As many of you will recall, the 2008-09 academic year job market was surprisingly normal, in part because the lines were approved before the crash. Then there was essentially no job market in the 2009-2010 academic year, as private schools saw their endowments crater and public schools saw their funding slashed in response to state budget crises. By the 2010-2011 academic year there was a more or less normal level of openings, but there was a backlog of extremely talented postdocs and ABDs who had delayed defending, which meant that the market was extremely competitive in 2010-2011 and for several years thereafter. A repeat of this is the best-case scenario for covid-19, but it will probably be worse.

The overall recession will be worse than 2008. In 2008 we realized that houses in the desert a two-hour drive from town weren’t worth that much when gas went to $5/gallon. This was a relatively small devaluation of underlying assets, but it had huge effects throughout the financial system and then sovereign debt. Covid-19 means a much bigger real shock to the economy. In the short run we are seeing large sectors of the economy either shut down entirely or running at fractional capacity for months until we get R0 below 1, and for some industries more like a year or two. Not only is this a huge amount of foregone GDP, but it will last long enough that many people and firms will go bankrupt, and they won’t bounce back the second quarantine is relaxed. We also have large sectors of the economy that will need to be devalued, especially anything having to do with travel. I also expect to see a trend away from just-in-time inventory and globalization and towards economic autarky and a preference for slack over efficiency. This too will hit GDP. Just as in the post-2008 era, expect to see a weak stock market (which will hit endowments) and weak state finances (which will hit publics). We may even see straining of the fiscal capacity of the federal government, which could mean less support for both tuition and research.

That’s just “economy to end tomorrow, universities and colleges hardest hit” though. However, there are particular reasons to think universities will be distinctly impacted. Universities are not the cruise ship industry, but we’re closer to it than you’d think. In the post-2008 era, universities made up funding shortfalls by treating international students (who as a rule pay full tuition) as a profit center. That is basically gone for the 2020-2021 academic year. The optimistic scenario for covid-19 is that a couple months of quarantine drives down infections to the point that we can switch to test and trace like many Asian countries. Life is 95% normal in Taiwan, and that could be us with enough masks, thermometers, tests, and GPS-enabled cell phone apps. However, part of the way Taiwan achieves relative normalcy is that on March 19 it banned nearly all foreigners from entering the country, and even repatriated citizens are subject to two weeks of quarantine. This is a crucial part of a test and trace regime, as demonstrated by Singapore (which has a similar public health response to Taiwan) switching from test and trace to mass quarantine after new infections from abroad pushed case counts above the point where test and trace is feasible. The US State Department has already warned Americans abroad to come home or risk not being let back in. Travel restrictions are coming and will mean the end of international students for at least one academic year.

Most likely we are looking at 2 to 3 years without a job market, not just the one year with no market as happened post-2008. But as in the post-2008 job market, there will be a backlog of talented fresh PhDs and so it will be absurdly competitive to get a job. During the Cultural Revolution, China suspended college entrance exams as a way to exclude applicants from “bad class backgrounds” (e.g., grandpa was a landlord). When the Cultural Revolution ended China reinstated exams and so there was a huge backlog of students who could now apply, which made the matriculating class of 1977 extraordinarily competitive. There will be no job market until the 2022-2023 academic year, give or take, and then for several years thereafter you’ll need several articles or a book contract to get a flyout at Podunk State.

OK, so we have a few years of nothing, and then a few years of an extremely competitive market, but the PhD inventory should clear by the 2026-2027 academic year, right? And this means that if you’re in the fall 2020 entering graduate cohort or want to apply for the fall 2021 entering graduate cohort you should be fine, right? Well, no. Birth cohorts got appreciably smaller in 2008, which means that by the time the covid-19 recession and its PhD backlog clear, there will be a demographic shock to demand for higher education.

My suggestion to all grad students is to emphasize methods so you’ll be competitive on the industry market, which will recover faster than the academic market. That’s right, I really am telling you to learn to code.

April 8, 2020 at 10:02 am

Test by batches to save tests

Given the severe shortage of covid testing capacity, creative approaches are needed to expand the effective testing capacity. If the population base rate is relatively low, it can be effective to pool samples and test by batches. Doing so would imply substantial rates of false positives, since every member of a positive batch would be presumptively sick, but it would still allow three major use cases that will be crucial for relaxing quarantine without letting the epidemic go critical so long as testing capacity remains limited:

  1. Quickly clear from quarantine every member of a batch with negative results
  2. Ration tests more effectively for individual diagnoses with either no loss or an identifiable loss in precision but at the expense of delay
  3. Use random sampling to create accurate tracking data with known confidence intervals for the population of the country as a whole or for metropolitan areas

Here is the outline of the algorithm. Assume that you have x people who were exposed to the virus, the virus has an infectiousness rate of 1/x, and you have only one test. Without testing, every one of these x people must quarantine. Under these assumptions, the expected number of actually infected people is 1 out of the x exposed, or more precisely a random draw from a Poisson distribution with a mean of one. This means that 36.8% of the time nobody is infected, 36.8% of the time one person is infected, 18.4% of the time two are infected, and 8% of the time three or more are infected. With only one test, only one individual can be tested and cleared, but if you pool and test them as a batch, over a third of the time you can clear the whole batch.
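
For anyone who wants to check the arithmetic, here is a minimal R sketch (mine, not part of the original algorithm) of those probabilities, assuming infections among the exposed follow a Poisson distribution with mean one:

dpois(0, lambda = 1)      # ~0.368, nobody infected, so the pooled batch clears
dpois(1, lambda = 1)      # ~0.368, one person infected
dpois(2, lambda = 1)      # ~0.184, two people infected
1 - ppois(2, lambda = 1)  # ~0.080, three or more infected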

Alternatively, suppose that you have two tests available for testing x people with an expected value of one infection. Divide the x exposed people into batches A and B, then test each batch. If nobody is infected, both batches will clear. If one person is infected, one batch will clear and the other will not. Even if two people are infected, there is a 50% chance they will both be in the same batch and thus the other batch will clear, and if there are three there is a 12.5% chance they are all in the same batch, etc. Thus with only two tests there is a 36.8% chance you can clear both batches and a 47.7% chance you can clear one of the two batches. This is just an illustration based on the assumption that the overall pool has a single expected case. The actual probabilities will vary depending on the population mean. The lower the population mean (or base rate), the larger you can afford to make the test batches and still gain useful information.
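
To make the two-batch arithmetic concrete, here is the same kind of sketch (again assuming one expected case in the whole pool, so each batch has a Poisson mean of 0.5; the numbers are illustrative):

p_clear <- dpois(0, 0.5)       # probability a single batch of x/2 people tests negative
p_clear^2                      # ~0.368, both batches clear
2 * p_clear * (1 - p_clear)    # ~0.477, exactly one batch clears
(1 - p_clear)^2                # ~0.155, neither batch clears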

This most basic application of testing pooled samples is sufficient to understand the first use case: clearing batches of people immediately. Use cases could include clearing groups of recently arrived international travelers or weekly testing of medical personnel and first responders. There would be substantial false positives, but this is still preferable to a situation where we quarantine the entire population.

Ideally though we want to get individual diagnoses, which implies applying the test iteratively. Return to the previous example, but suppose that we have access to (x/2)+2 tests. We use the first two tests to divide the exposed pool into two test batches. There is a 36.8% chance both batches test negative, in which case no further action is necessary and we can save the remaining x/2 test kits. The modal outcome though (47.7%) is that one of the two test batches tests positive. Since we have x/2 test kits remaining and x/2 people in the positive batch, we can now test each person in the positive batch individually and meanwhile release everyone in the negative batch from quarantine.

There is also a 15.5% chance that both batches test positive, in which case the remaining x/2 test kits will prove inadequate, but if we are repeating this procedure often enough we can borrow kits from the roughly one-third of cases where both batches test negative. Thus with testing capacity of approximately half the size of the suspected population, we can test the entire suspected population. A batch-test-then-individual-test protocol will slow down testing, as some people will need to be tested twice (though their samples need only be collected once), but it will allow us to greatly economize on testing capacity.
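
Here is a quick simulation sketch of that batch-then-individual protocol under the same stylized assumptions (x exposed people, one expected infection, perfectly accurate tests; the pool size of 20 is arbitrary), to see how many kits it uses on average:

set.seed(1)
x <- 20                                    # size of the exposed pool (illustrative)
kits_used <- replicate(10000, {
  infected <- rpois(x, 1 / x)              # about one expected infection in the pool
  batch <- rep(1:2, length.out = x)        # split the pool into two batches
  tests <- 2                               # one pooled test per batch
  for (b in 1:2) {
    if (sum(infected[batch == b]) > 0) {   # a positive batch gets individual follow-up tests
      tests <- tests + sum(batch == b)
    }
  }
  tests
})
mean(kits_used)                            # averages a bit under x/2, versus x kits for testing everyone individually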

Finally, we can use pooled batches to monitor population level infection rates as an indication of when we can ratchet up or ratchet down social distancing without diverting too many tests from clinical or customs applications. Each batch will either test positive or negative and so the survey will only show whether at least one person in the batch was positive, but not how many.

For instance, suppose one collects a thousand nasal swabs from a particular city, divides them into a hundred batches of ten each, and then finds that only two of these hundred batches test positive. This is equivalent to a 98% rate of batches having exactly zero infected test subjects. Even though the test batch data are dichotomous, one can infer the mean of a Poisson just from the number of zeroes, and so this corresponds to a mean of about 0.02 infections per batch, or about 2%. Although this sounds equivalent to simply testing individuals, the two numbers can diverge considerably. For instance, if half the batches test positive, this implies a mean of about 0.7 infections per batch, a rate of about 70% rather than 50%.
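
A sketch of that zero-count inference (assuming within-batch counts are Poisson; the function name is mine, not from the post): if p0 is the share of batches testing negative, the implied mean number of infections per batch is -log(p0).

implied_batch_mean <- function(prop_negative) -log(prop_negative)
implied_batch_mean(0.98)  # ~0.02 infections per batch of ten, i.e. the ~2% figure above
implied_batch_mean(0.50)  # ~0.69 infections per batch, i.e. the ~70% figure above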

[Update]

On Twitter, Rich Davis notes that large batches risk increasing false negatives. This is a crucial empirical question. I lack the bench science knowledge to estimate how serious it is, but experts would need to provide empirical estimates before this system were implemented.

March 18, 2020 at 3:01 pm

Drop list elements if

I am doing a simulation in R that generates a list of simulated data objects and then simulates emergent behavior on them. Unfortunately, it chokes on “data” meeting a certain set of conditions, so I need to kick data like that from the list before running it. This is harder than it sounds because deleting a list element messes up the indexing: once you kick an element, the list is shorter and everything after it shifts down. My solution is to create a vector of dummies flagging which list elements to kick, indexing in ascending order, but then to actually kick the list elements in descending order. Note that if the length of the list is used elsewhere in your code, you’ll need to update it after running this.

FWIW, the substantive issue in my case is diffusion: some of the simulated networks contain isolates, which doesn’t work in the region of parameter space that assumes all diffusion occurs through the network. In my case, deleting these networks is a conservative assumption.

[screenshot of the R code]

Apologies for posting the code as a screenshot but WordPress seems to really hate recursive loops, regardless of whether I use sourcecode or pre tags.
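
Since the screenshot isn’t copy-pasteable, here is a minimal sketch of the approach described above rather than the original code; simlist and is_bad() are hypothetical stand-ins for the actual list of simulated objects and the condition that makes the simulation choke:

# flag the elements to kick, in ascending order (is_bad() is a stand-in for the real condition)
kick <- which(sapply(simlist, is_bad))
# but delete them in descending order so earlier deletions don't shift later indices
for (i in rev(kick)) {
  simlist[[i]] <- NULL
}
n_sims <- length(simlist)  # update any stored length after kicking elements

(Subsetting the whole list at once, simlist[!sapply(simlist, is_bad)], gets you the same result in one step.)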


April 5, 2019 at 2:43 pm

EU migrants

| Gabriel |

Yesterday the Guardian published a list of 34,361 migrants who had died attempting to reach or settle within Europe. (The list is originally from United for Intercultural Action.) The modal cause of death is shipwreck, but the list also includes suicides, homicides, terrestrial accidents, etc. I was curious about when these deaths occurred, so I converted the PDF list to a csv and made a graph and a few tables.

The graph is noisy, but nonetheless a few trends jump out.

Deaths rose slowly from the 1990s until 2006 and 2007, then dropped. Presumably this reflects declining labor demand in the EU. There is an isolated jump in 2009, but deaths are low in 2008 and 2010.

Deaths spike sharply in 2011, especially March of that year, which coincides with regime collapse in Libya. (Since 2004 Gaddafi had been suppressing migrant traffic as part of a secret deal with Italy). Deaths were low again by late 2011.

The dog that didn’t bark is that migrant deaths were relatively low throughout 2012 and 2013, notwithstanding the Syrian Civil War.

In October 2013 there was a major shipwreck, after which the Italians launched Operation Mare Nostrum, in which the Italian Navy would rescue foundering vessels. For the first few months this seems to have been successful as a humanitarian effort, but eventually the Peltzman Effect took to sea and deaths skyrocketed in the summer of 2014. After this spike (and the budget strain created by the operation), the Italians cancelled Operation Mare Nostrum and deaths decreased briefly.

Mare Nostrum was replaced by Operation Triton, which was a) a pan-European effort and b) less ambitious. The post-Mare Nostrum lull in deaths ended in the spring of 2015.

European Union states had widely varying migration policies during 2015, with some enacting restrictionist policies and others pro-migration policies. Although there were many migrant deaths in 2015, they were mostly in the spring. Angela Merkel’s various pro-immigration statements (circa September and October of 2015) do not seem to have yielded a moral hazard effect on deaths, perhaps because this was almost simultaneous with the EU getting an agreement with Turkey to obstruct migrant flows. In any case, migrant deaths were relatively low in the last quarter of 2015 and first quarter of 2016. Deaths were very high in March and April of 2016 and overall 2016 was the worst year for deaths in the EU migration crisis.

In 2017 deaths declined back to 2015 levels, being high in both years but not as high as the peak year of 2016. It is too early to describe trends for 2018 but deaths in the first quarter of 2018 are lower than those of any quarter in 2017.

[graph: weekly EU migrant deaths (migrants.png)]

*http://unitedagainstrefugeedeaths.eu/wp-content/uploads/2014/06/ListofDeathsActual.pdf
*ran PDF through a pdf/csv translator, then used a regex to strip lines not starting with a digit
cd "C:\Users\gabri\Dropbox\Documents\codeandculture\eumigrants\"
import delimited ListofDeathsActual.csv, varnames(1) clear
keep if regexm(found,"^[0-9]")  // keep only rows where the "found" date field starts with a digit
drop v7-v30
gen date = date(found,"DMY",2019)
format date %td
gen n=real(number)  // number of deaths per incident
sum n
gen week=wofd(date)
format week %tw
gen month=mofd(date)
format month %tm
gen year=yofd(date)
save eudeaths.dta, replace
table month, c(sum n)
table year, c(sum n)
collapse (sum) n, by(week)  // sum deaths by week for the time series graph
sort week
lab var n "deaths"
twoway (line n week)
graph export migrants.png, replace

June 21, 2018 at 4:11 pm

Networks Reading List

| Gabriel |

In response to my review of Ferguson’s Square and the Tower, several people have asked me what to read to get a good introduction to social networks. First of all, Part I of Ferguson’s book is actually pretty good. I meant it when I said in the review that it’s a pretty good intro to social networks, and in my first draft I went through and enumerated all the concepts he covers besides betweenness and hierarchy being just a tree network. Here’s the list: degree, sociometry, citation networks, homophily, triadic closure, clustering coefficients, mean path length, small worlds, weak ties as bridges, structural holes, network externalities, social influence, opinion leadership, the Matthew Effect, scale free networks, random graph networks, and lattices. While I would also cover Bonacich centrality / dependence and alpha centrality / status, that’s a very good list of topics and Ferguson does it well. I listed all my issues with the book, but basically: 1) he’s not good on history/anthropology prior to the early modern era, and 2) there’s a lot of conceptual slippage between civil society and social networks as a sort of complement (in the set theory sense) to the state and other hierarchies. However, it’s a very well written book that covers a lot of history, including some great historical network studies, and the theory section of the book is a good intro to SNA for the non-specialist.

Anyway, what else would I recommend as the best things to get started with for understanding networks, especially for the non-sociologist?

Well obviously, I wrote the best short and fun introduction.

[image: dylan]

My analysis of combat events in the Iliad is how I teach undergraduates in economic sociology and they like it. (Gated Contexts version with great typesetting and art, ungated SocArxiv version with the raw data and code). This very short and informal paper introduces basic concepts like visualization and nodes vs edges as well as showing the difference between degree centrality (raw connections), betweenness centrality (connections that hold the whole system together), and alpha centrality (top of the pecking order).
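
If you want to poke at those three measures yourself, here is a toy igraph sketch (a made-up six-node network, not the Iliad data):

library(igraph)
g <- graph_from_literal(A-B, A-C, A-D, B-C, D-E, E-F)  # small illustrative network
degree(g)                         # raw number of connections
betweenness(g)                    # how often a node lies on shortest paths between others
alpha_centrality(g, alpha = 0.1)  # status-style centrality that weights who you are connected to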

Social networks is as much a method as it is a body of theory so it can be really helpful to play with some virtual tinker toys to get a tactile sense of how it works, speed it up, slow it down, etc. For this there’s nothing better than playing around in NetLogo. There’s a model library including several network models like “giant component” (Erdos-Renyi random graph), preferential attachment, “small world” (Watts and Strogatz ring lattice with random graph elements), and team assembly. Each model in the library has three tabs. The first shows a visualization that you can slow down or speed up and tweak in parameter space. This is an incredibly user-friendly and intuitive way to grok what parameters are doing and how the algorithm under each model thinks. A second tab provides a well-written summary of the model, along with citations to the primary literature. The third tab provides the raw code, which as you’d expect is a dialect of the Logo language that anyone born in the late 1970s learned in elementary school. I found this language immediately intuitive to read and it only took me two days to write useful code in it, but your mileage may vary. Serious work should probably be done in R (specifically igraph and statnet), but NetLogo is much better for conveying the intuition behind models.
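
For the igraph equivalents of a couple of the NetLogo models mentioned above, here is a minimal sketch (parameter values are arbitrary, just for playing around):

library(igraph)
g_random <- sample_gnp(n = 100, p = 0.05)                                  # Erdos-Renyi random graph ("giant component")
g_smallworld <- sample_smallworld(dim = 1, size = 100, nei = 2, p = 0.05)  # Watts-Strogatz rewired ring lattice
g_pa <- sample_pa(n = 100, m = 2, directed = FALSE)                        # preferential attachment / scale free
transitivity(g_smallworld)   # clustering coefficient
mean_distance(g_smallworld)  # mean path length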

Since this post was inspired by Square and the Tower and my main gripe about that is slippage between civil society and social networks, I should mention that the main way to take a social networks approach to civil society in the literature is to follow Putnam in distinguishing between bridging (links between groups) and bonding (links within groups) social capital. TL;DR is don’t ask the monkey’s paw for your society to have social capital without specifying that you want it to have both kinds.

If you want to get much beyond that, there are some books. For a long time Wasserman and Faust was canonical but it’s now pretty out of date. There are a few newer books that do a good job of it.

The main textbook these days is Matthew O. Jackson’s Social and Economic Networks. It’s kind of ironic that the main textbook is written by an economist, but if Saul of Tarsus could write a plurality of the New Testament, then I guess an economist can write a canonical textbook on social network analysis. It covers a lot of topics, including very technical ones.

I am a big fan of The Oxford Handbook of Analytical Sociology. Analytical sociology isn’t quite the same thing as social networks or complex systems, but there’s a lot of overlap. Sections I (Foundations) and III (Social Dynamics) cover a lot in social networks and related topics like threshold models. (One of my pet peeves is assuming networks are the only kind of bottom-up social process, so I like that OHoAS includes models with less restrictive assumptions about structure, which is not just a simplification but sometimes more accurate.)

I’m a big fan of John Levi Martin’s Social Structures. The book divides fairly neatly into a first half that deals with somewhat old school social networks approaches to small group social networks (e.g., kinship moieties) and a second half that emphasizes how patronage is a scalable social structure that eventually gets you to the early modern state.

Aside from that, there’s just a whole lot of really interesting journal articles. Bearman, Moody, and Stovel 2004 map the sexual network of high school students and discover an implicit taboo on dating your ex’s partner’s ex. Smith and Papachristos 2016 look at Al Capone’s network and show that you can’t conflate different types of ties, but neither can you ignore some types; only by taking multiple types of ties seriously as distinct can you understand Prohibition-era organized crime. Hedström, Sandell, and Stern 2000 show that the Swedish social democratic party spread much faster than you’d expect because it didn’t just go from county to county, but jumped across the country with traveling activists, which is effectively an empirical demonstration of a theoretical model from Watts and Strogatz 1998.

February 6, 2018 at 12:24 pm

Blue upon blue

| Gabriel |

On Twitter, Dan Lavoie observed that Democrats got more votes for Congress but Republicans got more seats. One complication is that some states effectively had a Democratic run-off, not a traditional general. It is certainly true that most Californians wanted a Democratic senator, but not 100%, which is what the vote shows, as the general was between Harris and Sanchez, both Democrats. That aside though, there’s a more basic issue, which is that Democrats are just more geographically concentrated than Republicans.

Very few places with appreciable numbers of people are as Republican as New York or San Francisco are Democratic (i.e., about 85%). Among counties with at least 150,000 votes cast, in 2004 only two suburbs of Dallas (Collin County and Denton County) voted over 70% Republican. In 2008 and 2012 only Montgomery County (a Houston suburb) and Utah County (Provo, Utah) were this Republican. By contrast, in 2004 sixteen large counties voted at least 70% Democratic and 25 counties swung deep blue for both of Obama’s elections. A lot of big cities that we think of as Republican are really slightly reddish purple. For instance, in 2004 Harris County (Houston, Texas) went 55% for George W Bush and Dallas was a tie. In 2012 Mitt Romney got 58% in Salt Lake County. The suburbs of these places can be pretty red, but as a rule these suburbs are not nearly as red as San Francisco is blue, not very populated, or both.

I think the best way to look at the big picture is to plot the density of Democratic vote shares by county, weighted by county population. Conceptually, this shows you the exposure of voters to red or blue counties.

Update
At Charlie Seguin’s suggestion, I added a dozen lines of code to the end to check the difference between the popular vote and what you’d get if you treated each county as winner-take-all and then aggregated up, weighted by county size. Doing so, it looks like treating counties as winner-take-all gives you a cumulative advantage effect for the popular vote winner. Here’s a table summarizing the Democratic popular vote (in percent) versus the Democratic vote treating counties as a sort of electoral college (as a proportion).
Year   Popular vote (Dem %)   Electoral counties (Dem share)
2004   48.26057               0.4243547
2008   52.96152               0.6004451
2012   51.09047               0.5329050
2016   50.12623               0.5129522

[graph: elect.png, density of Democratic county vote share weighted by county population, 2004-2016]

setwd('C:/Users/gabri/Dropbox/Documents/codeandculture/blueonblue')
elections <- read.csv('https://raw.githubusercontent.com/helloworlddata/us-presidential-election-county-results/master/data/us-presidential-election-county-results-2004-through-2012.csv')
elections$bluecounty <- ifelse(elections$pct_dem>50, 1, 0)

elect04 <- elections[(elections$year==2004 & elections$vote_total>0),]
elect04$weight <- elect04$vote_total/sum(elect04$vote_total)
dens04 <- density(elect04$pct_dem, weights = elect04$weight)
png(filename="elect04.png", width=600, height=600)
plot(dens04, main='2004 Democratic vote, weighted by population', ylab = 'Density, weighted by county population', xlab = "Democratic share of vote")
dev.off()

elect08 <- elections[(elections$year==2008 & elections$vote_total>0),]
elect08$weight <- elect08$vote_total/sum(elect08$vote_total)
dens08 <- density(elect08$pct_dem, weights = elect08$weight)
png(filename="elect08.png", width=600, height=600)
plot(dens08, main='2008 Democratic vote, weighted by population', ylab = 'Density, weighted by county population', xlab = "Democratic share of vote")
dev.off()

elect12 <- elections[(elections$year==2012 & elections$vote_total>0),]
elect12$weight <- elect12$vote_total/sum(elect12$vote_total)
dens12 <- density(elect12$pct_dem, weights = elect12$weight)
png(filename="elect12.png", width=600, height=600)
plot(dens12, main='2012 Democratic vote, weighted by population', ylab = 'Density, weighted by county population', xlab = "Democratic share of vote")
dev.off()

# 2016 comes from a separate county-level results file (the dataset above stops at 2012)
elect16 <- read.csv('http://www-personal.umich.edu/~mejn/election/2016/countyresults.csv')
elect16$sumvotes <- elect16$TRUMP+elect16$CLINTON
elect16$clintonshare <- 100*elect16$CLINTON / (elect16$TRUMP+elect16$CLINTON)
elect16$weight <- elect16$sumvotes / sum(elect16$sumvotes)
elect16$bluecounty <- ifelse(elect16$clintonshare>50, 1, 0)
dens16 <- density(elect16$clintonshare, weights = elect16$weight)
png(filename = "elect16.png", width=600, height=600)
plot(dens16, main='2016 Democratic vote, weighted by population', ylab = 'Density, weighted by county population', xlab = "Democratic share of vote")
dev.off()

png(filename = "elect.png", width=600, height=600)
plot(dens04, main='Democratic vote, weighted by population', ylab = 'Density, weighted by county population', xlab = "Democratic share of vote", col="blue")
lines(dens08)
lines(dens12)
lines(dens16)
dev.off()


# summary table: Dem popular vote (percent) vs. share of votes cast in Dem-majority counties (proportion)
m <- matrix(1:8,ncol=2,byrow = TRUE)  # placeholder values, overwritten below
colnames(m) <- c("Popular vote","Electoral counties")
rownames(m) <- c("2004","2008","2012","2016")
m[1,1] <- weighted.mean(elect04$pct_dem,elect04$weight)
m[1,2] <- weighted.mean(as.numeric(elect04$bluecounty),elect04$weight)
m[2,1] <- weighted.mean(elect08$pct_dem,elect08$weight)
m[2,2] <- weighted.mean(as.numeric(elect08$bluecounty),elect08$weight)
m[3,1] <- weighted.mean(elect12$pct_dem,elect12$weight)
m[3,2] <- weighted.mean(as.numeric(elect12$bluecounty),elect12$weight)
m[4,1] <- weighted.mean(elect16$clintonshare,elect16$weight)
m[4,2] <- weighted.mean(as.numeric(elect16$bluecounty),elect16$weight)
m


November 9, 2017 at 11:24 am

Strange Things Are Afoot at the IMDb

| Gabriel |

I was helping a friend check something on IMDb for a paper and so we went to the URL that gives you the raw data. We found it’s in a completely different format than it was last time I checked, about a year ago.

The old data will be available until November 2017. I suggest you grab a complete copy while you still can.

Good news: The data is in a much simpler format, being six wide tables that are tab-separated row/column text files. You’ll no longer need my Perl scripts to convert them from a few dozen files that are a weird mishmash of field-tagged format and the weirdest tab-delimited text you’ve ever seen. Good riddance.

Bad news: It’s hard to use. S3 is designed for developers, not end users. You could download the old version with Chrome or “curl” from the command line. The new version requires you to create an S3 account, and as best I can tell, there’s no way to just use the S3 web interface to get it. There is sample Java code, but it requires supplying your account credentials, which gives me cold-sweat flashbacks to when Twitter changed its API and my R scrape broke. Anyway, bottom line being you’ll probably need IT to help you with this.

Really bad news: A lot of the files are gone. There are no country-by-country release dates, no box office figures, no plot keywords, only up to three genres, no distributor or production company, etc. These are all things I’ve used in publications.

September 8, 2017 at 2:29 pm


