Evaluating the Effects of Accelerators? Not So Fast


Guest post written by Jared Konczal

Jared Konczal is an analyst in Research and Policy with the Ewing Marion Kauffman Foundation.

If anything good can be said about this agonizingly slow economic recovery, it is that there is growing interest in helping startups launch and succeed, because we know that new firms create new jobs. An increasing number of initiatives seek to support entrepreneurs as they launch their businesses. In this post, I focus on the outcomes of so-called startup accelerator programs. Are they helping new firms succeed and, therefore, having a healthy impact on the economy?

First, let’s define some terms. Many people conflate business incubators with accelerators, so we need to make a distinction between the two. A business incubator, in the purest sense, is an office park or building complex that charges businesses (typically new businesses that cannot afford their own offices) rent in exchange for space, along with some administrative services and infrastructural support. Over time, the term business incubator has accrued negative connotations for not actually helping to start businesses.

The term accelerator was coined to brand groups that, in addition to providing office space (though not all do), have the express focus of “accelerating” a startup from birth to viable company. Susan Cohen, a researcher at the University of North Carolina at Chapel Hill, offers this clarification: accelerators are organizations that provide cohorts of selected nascent ventures with seed investment, usually in exchange for equity, and limited-duration programming that includes extensive mentorship and structured educational components. These programs typically culminate in “demo days” where the ventures pitch to an audience of qualified investors.

What do we know about the effects of accelerators on the startups they take in? I am going to use this story from Grasshopper.com about accelerators and their supposed success as an example. The purpose of this post is to show that the currently available data on accelerators are lacking and that we should not base conclusions on preliminary efforts. As much as we want to support entrepreneurs and accelerators, neither should we overstate their economic benefits with exaggerated numbers.

The Grasshopper.com article presents the following summary statistics for 120 accelerator programs worldwide:

  • 1,436 companies accelerated
  • 69 exits for $979,458,100
  • $1,176,787,411 funding
  • 3,389 jobs created

. . .

One thing that really caught our attention is that, unlike current job creation plans, these jobs haven’t really cost much to create … Thus, the successes of these companies have resulted in the creation of even more jobs at little to no cost.

That accelerators create jobs at little to no cost is a significant claim. Unfortunately, it is simply not backed up by the data the article cites. The source data are credited to Jed Christiansen’s seed-db.com project, which is undertaking the heroic task of collecting data on accelerators and putting it online. There are three issues with the article’s use of the data that I would like to discuss:

(1)   Outliers. I plotted the distributions for a few of the variables from the seed-db data, with the x-axis representing programs numbered 1-120. In each chart there is one dot far outside the rest of the distribution, and it turns out this dot represents the same accelerator every time: Y-Combinator. Indeed, Christiansen himself points out that Y-Combinator is an outlier.
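This kind of check is easy to reproduce. Here is a minimal sketch, assuming the seed-db figures have been exported to a local CSV file; the file name and the “funding” column are my own placeholders, not seed-db’s actual schema:

```python
# Sketch: plot one variable per program so an outlier stands out visually.
# "seed_db_accelerators.csv" and its column names are assumed for
# illustration; seed-db does not necessarily offer such an export.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("seed_db_accelerators.csv")

plt.scatter(range(1, len(df) + 1), df["funding"])
plt.xlabel("Program (numbered 1-120)")
plt.ylabel("Total funding (USD)")
plt.show()
```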

If you drop Y-Combinator from the sample, the summary statistics come out quite differently.

And suddenly, using the article’s reasoning, accelerators look like a $300 million loss, not break-even as the article claims. Or, to put it in terms of jobs, the accelerators have spent more than $130,000 for each job created, which hardly equates to little-to-no cost. These corrected findings are likely not accurate either, by the way, which I will elaborate on shortly; I present them only to compare my calculations against the article’s presentation of the data. I would actually not use the data at all in their current state, even with the outlier removed, because there are other, bigger issues that the article fails to address. The article claims that “despite the data being far from complete, as a number of the fields rely on companies to update their own CrunchBase data, the economic effect of seed accelerators is clear.” I don’t think this is clear at all from the source data:
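To make the arithmetic concrete, here is a minimal sketch of the recalculation. The totals come from the article; the Y-Combinator line items are hypothetical placeholders, since its exact seed-db figures are not reproduced in this post:

```python
# Totals as reported in the Grasshopper.com article (120 programs).
total_exits = 979_458_100
total_funding = 1_176_787_411
total_jobs = 3_389

# Hypothetical placeholders for the Y-Combinator outlier; substitute
# the actual seed-db values before drawing any conclusions.
yc_exits = 400_000_000
yc_funding = 300_000_000
yc_jobs = 1_100

exits = total_exits - yc_exits
funding = total_funding - yc_funding
jobs = total_jobs - yc_jobs

net = exits - funding  # negative means the remaining programs show a loss
print(f"Net return without the outlier: ${net:,}")
print(f"Net loss per job created: ${-net / jobs:,.0f}")
```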

(2)   Missing data. If you scroll through the table of data at http://www.seed-db.com/accelerators, you will notice a lot of zero values: almost half of the programs have no data at all. That is a huge amount of missing information. The gaps may exist because many programs are newly created, or because accelerators and companies purposely withhold data (e.g., for legal investment reasons or intellectual property concerns), meaning data availability simply lags. But none of that changes the fact that these data do not give us a good overall picture of accelerators.
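The scale of the gap is easy to quantify. A minimal sketch, again assuming a hypothetical local export of the seed-db table (the file and column names are mine):

```python
# Sketch: measure how much of the seed-db table is effectively empty.
# The CSV file and column names are assumed for illustration.
import pandas as pd

df = pd.read_csv("seed_db_accelerators.csv")
metrics = ["companies", "exits", "funding", "jobs"]

# seed-db displays zeros rather than blanks, so treat 0 as "no data".
no_data = (df[metrics] == 0).all(axis=1)
print(f"Programs with no data at all: {no_data.mean():.0%}")
print((df[metrics] == 0).mean().rename("share of zeros by column"))
```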

(3)   Guessed data. I disregard the dollar amounts of the exit figures entirely. The front page of seed-db prominently states:

“Exit values - Most values for company exits are guesses, though some come from published reports or the startups themselves. Dots by each exit value indicate the confidence in the value. (High/Medium/Low)”

To its credit, seed-db does not bury this in a footnote somewhere. Still, I would rather it leave exit values out entirely for now; guesses are certainly not reliable.

Even if you believe all of the data currently on seed-db are accurate and usable, major issues remain.

  • The true cost of accelerators is understated by the “funding” statistic, which only captures money put into the companies and reflects neither the in-kind contributions of mentors nor the accelerator’s infrastructural and overhead support.
  • The success of some of the accelerators is likely understated. As seed-db itself points out, exits are not the only dollar measure of success, just the measure it is currently able to obtain. Accelerators typically take equity stakes in the startups they support, and a startup that is never acquired and never buys out that stake may still be generating a return on investment for the accelerator. As seed-db says, it would like to include some information on company valuation.
  • None of these data address the issue of selection bias, which is a significant concern if you are trying to evaluate the success of accelerators. Accelerators by their nature are very selective about the startups they take in; maybe those startups would have turned out the same without the accelerator’s intervention. In addition to encouraging accelerators to submit data on the companies they do select, why not also keep a list of the companies they don’t select? In the internet age, if the unselected companies go on to launch anyway, we should be able to track them down and compare their business outcomes to those of the startups that made it into the accelerators (a sketch of such a comparison follows this list).
  • Seed-db bases much of its data on voluntary CrunchBase profiles. Given the nature of these profile data, they should be checked against other sources of information or verified with the accelerator programs if we want to evaluate accelerators (if this has been done, it is not clear to me from the seed-db website).
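Here is a sketch of what the accepted-versus-rejected comparison suggested above might look like. All names and numbers are hypothetical, and a credible evaluation would need to match the two groups on founder and market characteristics rather than compare raw means:

```python
# Sketch: naively compare outcomes for startups an accelerator accepted
# against rejected applicants that launched anyway. Data are hypothetical.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "accepted": [True, True, True, True, False, False, False, False],
    "revenue_3yr": [1.2e6, 0.0, 3.5e6, 0.5e6, 0.8e6, 0.0, 2.1e6, 0.3e6],
})

print(df.groupby("accepted")["revenue_3yr"].mean())

# A difference-in-means test alone cannot separate the accelerator's
# effect from its skill at picking winners (selection bias).
accepted = df.loc[df["accepted"], "revenue_3yr"]
rejected = df.loc[~df["accepted"], "revenue_3yr"]
print(stats.ttest_ind(accepted, rejected, equal_var=False))
```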

If you don’t believe me about the data issues, you might believe David Cohen of TechStars, who similarly applauds seed-db’s efforts but echoes some of my concerns about the data. I would expect wide variation in the “success” measures of accelerators if concrete data are ever collected. Paul Miller and Kirsten Bound of NESTA have detailed the proliferation of accelerator programs, which could also explain why we see so many blank values in the seed-db data and suggests to me that we will see significant disparities in performance as data become available. Wade Roush of Xconomy says that we are in an unsustainable accelerator bubble that is due to consolidate, leaving only elite programs, though Brad Feld of TechStars might argue that program closures don’t necessarily signal all-out failure. To turn the same critique on myself, however: this is all conjecture, because the data desperately need to be collected and distilled.