|Has title=Measuring High-Growth High-Technology Entrepreneurship Ecosystems
|Has author=Ed Egan,
|Has paper status=Published
}}
==Final Version==
*The final version was accepted to Research Policy on May 17th, 2021.
*The 50-day share link is: https://authors.elsevier.com/a/1d8SaB5ASINVf
*The title was changed to "A Framework for Assessing Municipal High-Growth High-Tech Entrepreneurship Policy"
<pdf>File:Egan_(2021)_-_A_Framework_for_Assessing_Municipal_High-Growth_High-Tech_Entrepreneurship_Policy.pdf</pdf>
The BibTeX reference is (pending update with volume and number):
@article{EGAN2021104292,
  title = {A framework for assessing municipal high-growth high-technology entrepreneurship policy},
  journal = {Research Policy},
  pages = {104292},
  year = {2021},
  issn = {0048-7333},
  doi = {https://doi.org/10.1016/j.respol.2021.104292},
  url = {https://www.sciencedirect.com/science/article/pii/S0048733321000937},
  author = {Edward J. Egan},
  keywords = {Entrepreneurship, Ecosystem, Measurement, High-growth high-technology, Venture capital, Ecosystem support organization, Pipeline, Raise rate, Policy cartel},
  abstract = {This paper advances a framework for making rudimentary need, impact, and cost–benefit assessments of municipal high-growth high-tech entrepreneurship policy. The framework views ecosystem support organizations like accelerators, incubators, and hubs as components in a city’s venture pipeline. A component’s pipeline size, raise rate, and cost per raise measure its performance. In total, the framework consists of eight objective and reproducible measures based on quantities and qualities of venture capital investment and 16 definitions of related terms-of-the-art. These measures and definitions are illustrated in 26 real-world policy examples, which assess initiatives in Houston and St. Louis over the last 20 years. The examples reveal an enormous variation in welfare effects, and some policies appear welfare destroying. Many non-profit organizations claim success (and win awards and acclaim) using non-standard measures despite performing at less than half benchmark levels. Policy cartels, which control startup policy in many U.S. cities, also engage in non-market actions to protect their rents.}
}
The final file series was v4-6-2, in:
E:\projects\MeasuringHGHTEcosystems
/bulk/vcdb4
Egan (2021) - A Framework for Assessing Municipal High-Growth High-Tech Entrepreneurship Policy.pdf
 
Production files (sent to ResPol):
*MeasuringHGHTEntrepreneurshipEcosystemsV4-6-2.tex
*MeasuringHGHTEntrepreneurshipEcosystemsV4-6-2-TitlePage.tex
*References.bib
*HoustonPipelineV4.png
*HoustonVCRaiseRateWithBenchmarkV4.png
*econ.bst
==Notice==
The original Measuring HGHT Entrepreneurship Ecosystems paper was broken into two:
*'''A Framework for Assessing Municipal High-Growth High-Technology Entrepreneurship Policy''': This paper now contains the definitions, measures, and examples. It is an inductive, case-example study paper.
*[[Determinants of Future Investment in U.S. Startup Cities]]: The empirical analysis of ESOs is now in this paper!
==2nd R&R==
There were three reviewers for the 2nd R&R, but Reviewer 2 never returned any comments. Reviewer 1 accepted the paper. Reviewer 3 asked for a revision. The editor said:
:"We would be glad to reconsider a resubmitted paper, revised in the light of the referees' comments.
:If you decide to revise the paper, it would be very useful if you could also include an author's response to the referees, listing what changes you have (or have not) made and where.
:If you choose to revise your paper, could you please ensure that it is resubmitted on or before Feb 06, 2021. (If there is too long a gap, referees may have forgotten what they said previously or be unwilling to review the revised paper, causing further delays.)"
Summarizing Reviewer 1's comments (i.e., reading between the lines):
*The writing is currently pretty good: '''it is well done'''... '''the paper is polished'''... '''very nicely done'''.
*The paper works as a whole. The reviewer didn't want anything cut: '''It is a collection of case studies and definitions''', and '''I don't have ... major comments'''.
Reviewer 3's comments are more problematic. As is often the case, I wondered whether the reviewer actually read the paper:
*The paper advances seven new measures, not 15 as the reviewer claims.
*Policy cartels are introduced in section 4.1 (out of 5), and aren't the main focus of the paper per se.
*I never use the entire battery of measures -- different measures are applicable in different contexts.
*A key point of the paper is to stop organizations from self-selecting into measures that they do well on.
*All bar two sentences of the 'substantive material' in the review (i.e., from "First..." to the end) don't mention anything to do with the paper!
It's Reviewer 3's comments that I need to address. Jim kindly reminded me to ascribe only normative motivations to the Reviewer (rather than positive theories). He also said to '''follow the review''' and respond to each point with one of three sentiments:
*Disagree
*Good point but beyond the scope of the paper
*Good point and I address it like this...
:"Dear Reviewer. After carefully considering your commentsHe also reminded me that the median review is a reject, I would like and that this a `good' review based on the offer distribution of reviews. The reviewer does say that the following responsepaper is useful and helpful: I find your suggestions for my beautiful :"The defining of key terms is a useful contribution of this paper to puerile/irrelevant/narcissistic/useless/stupid/all-, as are the identification of-potentially useful metrics of HGHT entrepreneurship. Furthermore, the-above (delete as appropriate), so I'm going to ignore them and, by extension, you. Up yoursexamples are often helpful in highlighting their various applications."
The reviewer listed the following three major flaws:
:"In my view, the major flaws in the study are not (1) considering possible downsides of each individual metric, (2) considering possible downsides in using the entire battery of measures, and (3) testing alternatives to this measurement approach."
I put these as points 1, 2, and 3, added material from the rest of his review (see below), and created the list of bullet-points that I'll address:
#Consider possible downsides of the individual metrics.
#Consider possible downsides in using the entire battery of measures.
#Test alternatives to this measurement approach.
#Discuss the welfare implications of putting more information in the hands of policymakers, including:
##It is not necessarily true that "standardized measurement" improves outcomes, such as policy, overall. Not all frameworks are equal, and any conceived framework is not necessarily better than nothing.
##Measurements are rarely if ever neutral (vis-à-vis behavior). There can be systematic bias due to gaming metrics incentivized by rewards attached to the measurement.
#Consider the relationship between the conceptual phenomenon and how it is operationalized:
##Discuss alternative calculations of a measure when applicable (i.e., for the ranking measure).
##Discuss that measurement is reductive.
##Discuss systematic bias.
#Test the effects of using a framework on various outcomes.
Of these, points 4 and 5 are reasonable suggestions and could be addressed. Point 1 is more problematic but probably possible in some sense. Point 2 could be interpreted as the downside of the framework as a whole (see below) and then could be possible. Points 3 and 6 might be excused by explaining that the editors and I have agreed that this won't be a testing paper. Nevertheless, the paper will show examples of not using the measurement framework throughout.
Also for reference, here are the measures in the latest version of the paper:
*Measure 1 (Apportioned investment and exit value)
*Measure 2 (MOOMI ratio)
*Measure 3 (Pipeline)
*Measure 4 (Raise Rate)
*Measure 5 (Cost per raise)
*Measure 6 (Repeat VC)
*Measure 7 (ESO Expertise)
The old Startup Ranking measure has been dropped. Also, there are two definitions that are close to being measures:
*Definition 5 (Local VC)
*Definition 8 (Expert)
This version also adds Measure 5 (Cost per raise) as a numbered measure.
Measures 1 and 2 provide ways to calculate proxies for return quartiles. Measures 3, 4, and 5 are the central measures of the framework. Measures 6 and 7 are alternative ways to assess the performance of pipeline components (Measure 7 is possible without VC investment data).
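For concreteness, here is a minimal sketch of how the three central measures might be computed for a single pipeline component (an ESO such as an accelerator). The definitions are assumptions inferred from the abstract -- raise rate as the share of a component's pipeline that goes on to raise venture capital, and cost per raise as the component's cost divided by its number of raises -- and all names and numbers are purely illustrative, not taken from the paper.
<pre>
from dataclasses import dataclass

@dataclass
class PipelineComponent:
    """A hypothetical ESO (e.g., an accelerator) viewed as a venture-pipeline component."""
    name: str
    pipeline_size: int    # Measure 3 (Pipeline): startups passing through the component
    raises: int           # of those, the number that go on to raise venture capital
    total_cost: float     # assumed cost basis for Measure 5 (Cost per raise)

    def raise_rate(self) -> float:
        """Measure 4 (Raise Rate), assumed definition: share of pipeline startups that raise VC."""
        return self.raises / self.pipeline_size

    def cost_per_raise(self) -> float:
        """Measure 5 (Cost per raise), assumed definition: cost divided by number of raises."""
        return self.total_cost / self.raises

# Illustrative numbers only -- not from the paper.
eso = PipelineComponent("Example Accelerator", pipeline_size=50, raises=8, total_cost=2_000_000)
print(f"Raise rate:     {eso.raise_rate():.1%}")        # 16.0%
print(f"Cost per raise: ${eso.cost_per_raise():,.0f}")  # $250,000
</pre>
Under these assumed definitions, comparing a component's raise rate against a benchmark (as in HoustonVCRaiseRateWithBenchmarkV4.png from the production files) would give a rudimentary need or impact assessment of the kind the framework describes.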
===RP Constraints===
This paper came from a presentation that I made to the Kauffman UMM Grant Cohort. I originally attempted to add empirics, but this approach necessarily reduced the coverage of material: although the framework is simple and used in practice, it is also on the frontier of research, so there aren't any published academic papers with the empirics. So I opted to break the original submission in two, moving the empirics back out and leaving this as the best attempt I could make at a narrative-based exploration of the whole framework. It is, as a consequence, a very unusual paper. But most people I showed it to were enthusiastic. It is also reference-bait. Outside the review process, some readers were both amused and worried about its snarky tone, which I'm still trying to address.
This paper had a storied resubmission process:
*The deadline for resubmission was June 15th, 2020. Before this deadline, I emailed the editors and offered them either this version of the paper, which contains no empirics, or an empirical paper without examples or definitions. I received no response.
*The first revision of the paper was submitted as an R&R to a Special Issue of Research Policy on June 10th, 2020, with manuscript number RESPOL-D-19-01438R1.
*On September 15th, I sent an email to the editors requesting information but received no response.
*I wrote to the editors again on October 27th, this time using the Elsevier form, to request another update. The last status reported by Elsevier (https://ees.elsevier.com/respol/default.asp) was 'Required Reviews Complete' on October 9th, 2020.
*On October 28th, I received an email saying: "Hello Ed. I hope to get back to you shortly. I have two good reviews and I’m waiting on a third. This most definitely will be another R&R. More soon"
*On November 8th, I got an official email about the paper that said: "We have now received the referees' reports on your paper, copies of which I enclose below for your information. As you will see, the referees make various comments and suggestions for improvement. I have given up on the third reviewer and want to return the paper to you." However, this email only contained one review. I requested clarification and noted that Reviewer 3 had asked for empirics.
*On November 11th, I got an email that said: "Hello Ed, This is strange. The comments were in the comments to the editor. Here they are and they are not worth that much. This special issue has a specific purpose. '''You do not need to run regressions!'''" (Note that the comments are below as Reviewer 1. They indicate that Reviewer 1 accepted the paper.)
*On February 6th, 2021, I submitted the second revision of the paper.
==Research Policy Special Issue==
==Data and Analysis==
The paper uses [[VCDB20]] and [[US Startup City Ranking]], as well as a wealth of old McNair material. Sources include (copied to the project folder unless otherwise noted):
*[[Hubs]]: Hubs Data v2_'16.xlsx
*[[Federal Grant Data]], including NIH, NSF and other grant data, especially SBIR/STTR. Possibly also contract data.
