Mansfield
March 24, 2009 at 9:31 am | gabrielrossman
| Gabriel |
My MDC technique is basically a multilevel version of a much older technique (as in, so old it could have been used for marketing analysis at Sterling Cooper on “Mad Men”) created by Edwin Mansfield. This older technique first does a series of Bass analyses (Mansfield published the equation before Bass did, but the older version is under-theorized, which is why we call it the “Bass” model today). It then treats the coefficients from the first stage as a dataset to be regressed in its own right. Although more recent work supersedes it in several ways, it’s still worth using for diagnostic purposes. However, it’s a pain in the ass to use, as it requires you to run a separate regression for each of your innovations and then aggregate them. As such, I wrote this code to automate it.
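To spell out the algebra the first stage relies on (in the variable names the code below uses: w_adds is per-period adoptions, Nt is cumulative adoptions), each case's adoption curve is fit as a quadratic in cumulative adoptions, and the saturation ceiling nmax falls out as a root of that quadratic, since new adoptions hit zero once the market saturates:

```latex
% First stage, estimated separately for each innovation (case):
\Delta N_t = A + B\,N_t + C\,N_t^2 + \varepsilon_t
% New adoptions stop when cumulative adoption reaches the ceiling,
% so nmax solves A + Bn + Cn^2 = 0; the code takes the root
n_{\max} = \frac{-B - \sqrt{B^2 - 4AC}}{2C}
```

This is exactly what the `gen nmax = (-B - ((B^2)-4*A*C)^0.5) / (2*C)` line computes after the coefficients are read back in.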
Even if for some bizarre reason you’re not particularly interested in diffusion models dating from the Kennedy administration, this code may be interesting for a few reasons:
- It uses the “estout” package not for the (indispensable) usual purpose of making results meet publication style, but for the off-label purpose of creating a meta-analysis dataset.
- It makes extensive (and extremely clumsy) use of shell-based regular expression commands to clean this output. (I am under no illusions that the “awk” code is remotely elegant).
- It saves the list of cluster id values in a local, then attaches them back to the meta-analysis dataset using a loop.
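To see the core move the awk passes are making, here is a minimal sketch with toy input (not real esttab output): esttab's "plain" format pads columns with runs of spaces, and one gsub() pass per pattern turns that into tab-delimited text that insheet can read.

```shell
# Toy version of one cleanup pass: collapse runs of spaces into tabs so
# the file becomes tab-delimited. (Made-up numbers, not real esttab output.)
printf 'Nt       1.23\nNt2     -0.04\n_cons    5.67\n' \
  | awk '{ gsub(" +", "\t"); print $0; }'
```

The real code does the same thing a dozen times over, writing to a tmp file and moving it back after each pass.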
capture program drop mansfield
program define mansfield
*NOTE: dependencies, "vallist" and "estout"
 set more off
 local caseid `1'
 local genre `1'
 sort `caseid'
 by `caseid': drop if _N<5
 vallist `caseid', quoted
 local vals `r(list)'
 shell touch emptyresults
 shell mv emptyresults `genre'results.txt
 foreach case in `vals' {
  disp "`case'"
  quietly reg w_adds Nt Nt2 if `caseid'==`case'
  esttab using `genre'results.txt, plain append
 }
 shell awk '{ gsub(" +b/t", ""); print $0;}' `genre'results.txt > tmp ; mv tmp `genre'results.txt
 shell awk '{ gsub(" +", "\t"); print $0;}' `genre'results.txt > tmp ; mv tmp `genre'results.txt
 shell awk '{ gsub("\n\t.*", ""); print $0;}' `genre'results.txt > tmp ; mv tmp `genre'results.txt
 shell awk '/.+/{print $0}' `genre'results.txt > tmp ; mv tmp `genre'results.txt
 shell awk '{ gsub("^\t.+", ""); print $0;}' `genre'results.txt > tmp ; mv tmp `genre'results.txt
 shell awk '{ gsub("^$", ""); print $0;}' `genre'results.txt > tmp ; mv tmp `genre'results.txt
 shell awk '/.+/{print $0}' `genre'results.txt > tmp ; mv tmp `genre'results.txt
 shell awk '{ gsub("Nt2\t", ""); print $0;}' `genre'results.txt > tmp ; mv tmp `genre'results.txt
 shell awk '{ gsub("_cons\t", ""); print $0;}' `genre'results.txt > tmp ; mv tmp `genre'results.txt
 shell awk '{ gsub("N\t", ""); print $0;}' `genre'results.txt > tmp ; mv tmp `genre'results.txt
 shell awk '{ gsub("Nt\t", "NR\t"); print $0;}' `genre'results.txt > tmp ; mv tmp `genre'results.txt
 shell awk -f mansfield.awk `genre'results.txt > tmp ; mv tmp `genre'results.txt
 insheet using `genre'results.txt, clear
 drop v1 v6
 ren v4 A
 ren v2 B
 ren v3 C
 ren v5 n
 gen b = -C
 gen nmax = (-B - ((B^2)-4*A*C)^0.5) / (2*C)
 gen a = A / nmax
 gen `caseid'=.
 global n=1
 foreach case in `vals' {
  replace `caseid'=`case' in $n/$n
  global n=$n+1
 }
 save `genre'_mansfield.dta, replace
end

*note, text of mansfield.awk follows
*it should be in the same directory as the data and made executable with the command
*"chmod +x mansfield.awk"
*
*BEGIN {
* FS="\n"
* RS="Nt\t"
* ORS=""
*}
*
*{
* x=1
* while ( x<NF ) {
*  print $x "\t"
*  x++
* }
* print $NF "\n"
*}
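For anyone puzzling over what mansfield.awk actually does: it uses a multi-character record separator to break esttab's stacked output into one record per regression, then joins each record's lines with tabs, i.e., it transposes columns of estimates into one row per case for insheet to read. A toy demonstration with made-up numbers (the real file has more lines per case):

```shell
# Toy transpose in the style of mansfield.awk: RS="Nt\t" splits the file
# into one record per regression; each record's newline-separated fields
# are then printed tab-joined, one row per case. The final filter drops
# the empty leading record, as the main code's /.+/ passes do.
printf 'Nt\t1.1\n-0.01\n5.0\n20\nNt\t2.2\n-0.02\n6.0\n25\n' \
  | awk 'BEGIN { FS="\n"; RS="Nt\t"; ORS="" }
         { x=1; while ( x<NF ) { print $x "\t"; x++ } print $NF "\n" }' \
  | awk '/.+/{print $0}'
```

Note that multi-character RS is a gawk/mawk extension rather than POSIX awk, which is part of why this feels so clumsy.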
Entry filed under: Uncategorized. Tags: diffusion, loops, shell, Stata.
1. MDC code « Code and Culture | June 16, 2009 at 5:11 am
[…] already posted code to do the precursor approach by Edwin Mansfield, though I recently learned some matrix syntax that […]
2. The Workflow of Data Analysis Using Stata « Code and Culture | June 29, 2009 at 5:56 am
[…] syntax. Even until now, I’d never understood how to use matrices (which is why this script is so hideously clunky, really, please don’t click the link) but Long has a very clear […]