Regression to the mean [updated]

May 27, 2010 at 4:37 am

| Gabriel |

I updated* my old script for simulating regression to the mean.

Regression to the mean is the phenomenon whereby, when you have a condition measured before and after a treatment and recruitment into the treatment is conditional on the condition at time zero, you can get artifactual results. For instance, people tend to go into rehab when they hit bottom (i.e., are especially screwed up), so even if rehab were useless you’d expect some people to sober up after a stint in rehab. Likewise, the placebo effect is often understood as something like the “magic feather” in Dumbo, but another component is regression to the mean, which is why you can get a placebo effect with plants. A special case of regression to the mean is the “sophomore slump,” which occurs when you select cases that were high rather than low for treatment.
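To see the mechanism in isolation, here is a minimal stand-alone sketch of the rehab example (in Python rather than the Stata used below; all the names are my own): recruit on a noisy first measurement, apply no treatment whatsoever, and the recruits still appear to improve.

```python
import random

random.seed(0)
n = 100_000
latent = [random.gauss(0, 1) for _ in range(n)]   # true underlying sobriety
obs0 = [t + random.gauss(0, 1) for t in latent]   # noisy intake measurement
# "hitting bottom": recruit everyone who measures below -2 at intake
recruits = [i for i in range(n) if obs0[i] < -2]
# second measurement with the same noise, and no treatment at all
obs1 = {i: latent[i] + random.gauss(0, 1) for i in recruits}
improvement = sum(obs1[i] - obs0[i] for i in recruits) / len(recruits)
# improvement comes out large and positive even though nothing was done
```

The recruits were selected partly on bad luck in the first draw of noise; that bad luck does not recur on remeasurement, so their average "improvement" is well over a standard deviation.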

The code simulates the process for a population of 100,000 agents (a number chosen to be large enough that sampling error is asymptotically zero). Each agent has a latent tendency drawn from a standard normal that is measured at any given time with (specifiable) noise and is sensitive to (specifiable) treatment effects. The program takes the following arguments in order:

  1. Noisiness defined as noise:signal ratio for any given observation. Can take any non-negative value but 0-1 is a reasonable range to play with. Low values indicate a reliable variable (like height) whereas high values indicate an unreliable variable (like mood). At “zero” there is no measurement error and at “one” any given observation is equal parts latent tendency and random instantaneous error.
  2. True effect of the treatment. A range of -1 to +1 is reasonable, but it can take any value: positive, negative, or zero. For raw regression to the mean choose “zero.”
  3. Selection of the cases for treatment. Cases are selected for treatment on the basis of initial measured condition. The parameter defines how far out into the left tail (negative values) or right tail (positive values) the cases are selected. Negative values are “adverse selection” and positive values are “advantageous selection.” Largish absolute values (i.e., +/- 2 sigmas or higher) indicate that the treatment is applied only to a few extreme cases whereas low values indicate that the treatment is applied to a large number of moderate cases.

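The script itself is in Stata (below), but for readers who want to experiment elsewhere, the same three-argument simulation can be sketched in NumPy. This is a rough translation of my own, not the original program, and the function and variable names are mine:

```python
import numpy as np

def reg2mean(noisiness, beta_treatment, recruitment, n=100_000, seed=0):
    """Rough Python analogue of the reg2mean Stata program.
    Returns (bias1, bias0): the average artifactual change in the
    treated and untreated groups, net of the true treatment effect."""
    rng = np.random.default_rng(seed)
    y0_true = rng.standard_normal(n)                        # latent tendency
    y0_obs = y0_true + rng.standard_normal(n) * noisiness   # noisy pre-test
    # negative recruitment = adverse selection (left tail),
    # positive = advantageous selection (right tail)
    treated = y0_obs < recruitment if recruitment < 0 else y0_obs > recruitment
    y1_true = y0_true + treated * beta_treatment
    y1_obs = y1_true + rng.standard_normal(n) * noisiness   # noisy post-test
    bias = (y1_obs - y0_obs) - treated * beta_treatment
    return bias[treated].mean(), bias[~treated].mean()
```

With a useless treatment, full-strength noise, and intense adverse selection, `reg2mean(1, 0, -2)` returns a bias1 of roughly +1.3 standard deviations; with `noisiness=0` the bias vanishes exactly, whatever the true effect.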
After a run the program has in memory the parameters it started with and two output measures: “bias1” is the classic regression to the mean effect and “bias0” is the change in the non-treatment group (which is usually much smaller than bias1). The program gives text output summarizing the data for those parameters. I designed this mode for use in lab pedagogy: let students play with different parameters to see how much bias they get and try to figure out what’s responsible for it.

Alternately, you can batch it and see the big picture. Doing so shows that the true effect doesn’t matter much for the size of the regression to the mean effect (though of course they might be conflated with each other, which is the whole point). What really drives regression to the mean is primarily the noisiness (i.e., low reliability) of the condition measurement and secondarily how intensive the selection is. This is shown below in a surface graph (which is based on simulations where there is no true effect). In this graph width is noisiness, depth is where in the tail agents get recruited, and height/color is the magnitude of the regression to the mean effect.

The first thing to note is that for very reliably measurable conditions (the left side of the graph) there is no regression to the mean effect. No noise, no regression to the mean. So if you take your shortest students (as measured standing up straight with their shoes off) and have them do jumping jacks for a week to stretch them out you’ll find that they are still your shortest students after the exercise. This is true regardless of whether you impose this on the single shortest student or the shorter half of the class.

As you increase the noise (the right side of the graph) you get more regression to the mean, especially as you have more intensive selection (the front and back of the graph). So if you read your students’ midterms and send the low scorers for tutoring you’ll see improvement even if the tutoring is useless, but the effect will be bigger if you do this only for the very worst student than for the whole bottom half of the class. When you have high noise and intense selection (the front right and back right corners of the graph) you get huge regression to the mean effects, on the order of +/- 1.3 standard deviations. The really scary thing is that this is not some simulation fantasy but a realistic scenario. Lots of the outcomes we care about for policy purposes show intense day-to-day variation such that, if anything, assuming error of equal magnitude to the latent tendency is a conservative assumption. Likewise, lots of policy interventions are targeted at extreme cases (whether it be a positive “rookie of the year” or negative “hitting bottom” extreme). This is one reason to expect that programs developed with hard cases will be less effective when applied to a more representative population.
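That ~1.3 figure is not peculiar to the simulation; it falls out of truncated-normal algebra. With no true effect, the expected artifactual change among the treated works out to (noisiness² / σ_obs) × λ(|c| / σ_obs), where σ_obs is the standard deviation of the observed measure, c is the recruitment cutoff, and λ is the inverse Mills ratio. Here is a quick closed-form check using only the standard library (the derivation is mine, not from the original script):

```python
from math import erf, exp, pi, sqrt

def expected_bias(noisiness, cutoff):
    """Expected regression-to-the-mean artifact under adverse selection
    (treat iff observed pre-test < cutoff), with a standard-normal latent
    tendency, independent normal noise, and no true treatment effect."""
    sigma_obs = sqrt(1 + noisiness ** 2)    # sd of the observed pre-test
    a = abs(cutoff) / sigma_obs             # cutoff in observed-sd units
    phi = exp(-a * a / 2) / sqrt(2 * pi)    # standard normal density at a
    Phi = (1 - erf(a / sqrt(2))) / 2        # tail probability P(Z < -a)
    return (noisiness ** 2 / sigma_obs) * (phi / Phi)
```

For noise equal to signal and a cutoff two sigmas out, `expected_bias(1, -2)` comes to about 1.32, the front-right corner of the surface; with zero noise the formula is exactly zero for any cutoff, which is the flat left edge.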

capture log close
log using reg2mean.log, replace

*full do-file (but not the core reg2mean program) depends on gnuplot and gnuplotpm3d.ado
*can get similar results with surface.ado, tddens.ado, by piping to R, or even MS Excel

capture program drop reg2mean
program define reg2mean
	set more off
	if `1'>=0 {
		local noisiness `1'
		/* how bad is our measure of Y, should range 0 (perfect measure) to 1 (1 signal: 1 noise), >1 indicates noise>signal */
	}
	else {
		disp "NOTE: Noisiness must be non-negative. Set to zero for now"
		local noisiness = 0
	}
	local beta_treatment `2'
	/* how effective is the treatment. should range from -.5 (counter-productive) to .5 (pretty good), where 0 means no effect */
	local recruitment `3'
	/* how far out in the tail treatment is allocated, measured in sigmas. For adverse selection use a negative cutoff (e.g., -1); for advantageous selection use a positive one (e.g., 1). Note: the program assumes the median is in the control group */
	quietly drop _all  /*clear any leftovers from a previous run before setting obs*/
	quietly set obs 100000  /*note large number is hard-coded to avoid conflating sampling error with reg2mean effects. */
	gen y_0true=rnormal()
	gen y_0observed=y_0true + (rnormal()*`noisiness')
	gen treatment=0
	*this code defines recruitment
	if `recruitment'<0 {
		quietly replace treatment=1 if y_0observed<`recruitment'
	}
	else {
		quietly replace treatment=1 if y_0observed>`recruitment'
	}
	quietly gen y_1true=y_0true+ (treatment*`beta_treatment')
	quietly gen y_1observed=y_1true+ (rnormal()*`noisiness')
	quietly gen delta_observed=y_1observed-y_0observed
	quietly gen bias=delta_observed - (treatment*`beta_treatment')
	collapse (mean) bias , by (treatment)
	quietly gen noisiness=round(`noisiness',.001)
	quietly gen beta_treatment=round(`beta_treatment',.001)
	quietly gen recruitment=round(`recruitment',.001)
	quietly reshape wide bias, i(noisiness beta_treatment recruitment) j(treatment)
	local treatmentbias = bias1[1]
	local controlbias = bias0[1]
	if `recruitment'<0 {
		disp "You have simulated regression to the mean where the signal:noise ratio is " _newline "1:" float(`noisiness') ", the true effect of the treatment is " float(`2') ", and there is adverse " _newline "selection such that the treatment is allocated if and only if the " _newline "pre-treatment measure of the condition is below " float(`3') " standard deviations."
	}
	else {
		disp "You have simulated regression to the mean where the signal:noise ratio is " _newline "1:" float(`noisiness') ", the true effect of the treatment is " float(`2') ", and there is advantageous " _newline "selection such that the treatment is allocated if and only if the " _newline "pre-treatment measure of the condition is above " float(`3') " standard deviations."
	}
	disp "Net of the true treatment effect, the regression to the mean artifactual " _newline "effect on those exposed to the treatment is about " round(`treatmentbias',.001) ". Furthermore, " _newline "the non-treated group will experience an average change of " round(`controlbias',.001) "."
end

tempname results
tempfile resultsfile
postfile `results' bias0 bias1 noisiness beta_treatment recruitment using "`resultsfile'"

forvalues noi=0(.1)1 {
	forvalues beta=-.5(.25).5 {
		disp "noise    beta      recruitment"
		forvalues recr=-2(.25)2 {
			disp round(`noi',.01) _column(10) round(`beta',.01) _column(20) round(`recr',.01)
			quietly reg2mean `noi' `beta' `recr'
			local bias0 = bias0[1]
			local bias1 = bias1[1]
			post `results' (`bias0') (`bias1') (`noi') (`beta') (`recr')
		}
	}
}

postclose `results'
use `resultsfile', clear
foreach var in bias0 bias1 noisiness beta_treatment recruitment {
	replace `var'=round(`var',.0001)
}
lab var bias0 "artifactual change - nontreatment group"
lab var bias1 "artifactual change - treatment group"
lab var noisiness "measurement error of Y"
lab var beta_treatment "true efficacy of treatment"
lab var recruitment "sigmas out in tail that treatment is recruited"
save reg2mean.dta, replace

keep if beta_treatment==0
gnuplotpm3d noisiness recruitment bias1, title (Regression to the Mean with No True Effect) xlabel(Noisiness) ylabel(Who Gets Treatment) using(r2m_0)
shell open r2m_0.eps

*have a nice day

*The changes are a larger set of agents, integration of postfile, improved handling of macros, specification of selection, interactive mode, and surface plotting (dependent on my Gnuplot pipe).





  • 1. Ulrich  |  June 9, 2010 at 2:49 pm

    I’m thinking »selection bias«, because I’ve never heard your term before. That is, in economics…which field does it stem from?

    • 2. gabrielrossman  |  June 9, 2010 at 4:01 pm

      i wouldn’t say that “selection bias” is exactly a synonym of “regression to the mean” but rather that regression to the mean is a consequence of a special case of selection bias (where exposure to the treatment is related to the pre-treatment measure of the dependent variable).

      anyway, the concept of regression to the mean dates back to Galton’s early work on heredity and is still an important concept in the life sciences, policy analysis, and statistics.

