Archive for the 'evaluation' Category

Innovation, evidence and industrial strategy

September 1, 2017


[A What Works Centre post that’s good here too.]

Industrial strategy is one of the big issues for the What Works Centre and its local partners, and innovation is one of the main themes of industrial strategies in the UK and around the world.

Public policy plays a number of important roles in supporting innovation — see this debate between Mariana Mazzucato and Stian Westlake for a good intro. And as I wrote back in January, it’s equally important that we understand what the most effective tools are.

The good news for the UK is that we are — slowly — building an evidence base on what works for promoting innovation, as well as other pillars of industrial policy. What’s more, the evidence we have suggests that some current UK programmes work pretty well.

*

Our latest case study summarises Innovate UK’s programmes of support for microbusinesses and SMEs: mainly grants but also loans, awarded on a competitive basis, either to individual firms, or to promote partnerships with other companies or with universities.

Using standard UK administrative data, the evaluators were able to set supported firms alongside similar non-supported companies, then compare how the two groups fared over time. This ‘difference-in-differences’ approach is one of the methods we endorse, as it meets our minimum standards for good evaluation.
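To make that comparison concrete, here is a minimal sketch of a difference-in-differences estimate in Python. Everything in it is illustrative: the file name, the column names and the use of statsmodels are my assumptions, not the evaluators’ actual code or data.

```python
# Illustrative difference-in-differences sketch (not the evaluators' code).
# Assumes a hypothetical firm-level panel with one row per firm per year:
#   firm_id     - firm identifier
#   employment  - outcome of interest
#   treated     - 1 if the firm received support, 0 for comparison firms
#   post        - 1 for years after support was awarded, 0 for years before
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("firm_panel.csv")  # hypothetical administrative panel

# The coefficient on treated:post is the difference-in-differences estimate:
# the change in supported firms' outcomes over and above the change seen
# in similar non-supported firms over the same period.
model = smf.ols("employment ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["firm_id"]}
)
print(model.params["treated:post"])
print(model.summary())
```

In practice evaluators also take care over how the comparison group is selected, but the core of the method is that interaction term: treated versus untreated, before versus after.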

Encouragingly, Innovate UK’s programmes seem to have raised treated firms’ survival prospects (by 14 percentage points), employment (an extra 32 staff on average), and possibly sales too (although this result is less robust). The positive effects are largest for companies aged 2–5 years and those aged 6–19 years. That is, these programmes seem to have helped innovative firms to scale.

*

This is another helpful piece of the industrial strategy puzzle, for several reasons.

First, in our innovation evidence review back in 2015, we found lots of evidence that these kinds of programmes raised firms’ R&D — but rather less evidence on growth impacts further down the line. Now we have good UK evidence of those growth and scaling impacts.

Second, we already know that the UK’s R&D tax credit system is pretty effective in stimulating firms’ patenting. We can now add good evidence on grants and loans alongside that.

Third, we can set these innovation findings alongside other evidence on business support programmes — where again, we have a decent stock of UK evidence, with several programmes (e.g. on export support) showing positive impacts.

Finally, it’s reassuring to see that evidence for these types of innovation support programmes in the UK broadly lines up with what we’ve found for OECD countries as a whole. We’ve had a number of conversations with policymakers worried that innovation programmes are very context-specific, so results from one country won’t generalise to others. This may be true in some cases. But for grants, loans and tax credits, what we know suggests that what works across the OECD also works in the UK.

*

Originally published here on 17 August 2017.

Big data and local growth policy

March 11, 2016

[Embedded tweet.]

I’ve written a couple of posts for the What Works Centre on how to use new data sources, and data science techniques, in designing and evaluating local growth programmes.

In parts of the interweb ‘Big Data’ is now such a cliché that dedicated Twitter bots will dice up offending content – see above. But in local economic development, and urban policy more broadly, researchers and policymakers are only beginning to exploit these resources.

The first post lays out the terrain, concepts and resources. The second post is more focused on evaluation, research design and delivery.

Happy reading!


What I did in New Zealand

August 4, 2015

Matiu / Somes Island. (c) 2015 Max Nathan

Am back from New Zealand and just about over the jetlag. Thanks again to Motu and the Caddanz team for hosting me. I’m already plotting a return trip …

Here’s my talk from the Pathways conference. This is on the economics of migration and diversity, and brings together various projects from the past few years.

Here are slides and audio from my public policy talk at Motu. This looks at the What Works agenda in the UK, particularly the work of the What Works Centre for Local Economic Growth, and some of the opportunities and challenges these institutions face.

Experimenting on yourself

August 29, 2014

A recent post for the What Works Centre that I thought would be good here too.

*

At the What Works Centre we’re keen on experiments. As we explain here, when it comes to impact evaluation, experimental and ‘quasi-experimental’ techniques generally stand the best chance of identifying the causal effect of a policy.

Researchers are also keen to experiment on themselves (or their colleagues). Here’s a great example from the Journal of Economic Perspectives, where the editors have conducted a randomised control trial on the academics who peer-review journal submissions.

Journal editors rely on these anonymous referees, who give their time for free, knowing that others will do the same when they submit their own papers. (For younger academics, being chosen to review papers for a top journal also looks good on your CV.)

Of course, this social contract sometimes breaks down. Reviewers are often late or drop out late in the process, but anonymity means that such bad behaviour rarely leaks out. To deal with this, some journals have started paying reviewers. But is that the most effective solution? To find out, Raj Chetty and colleagues conducted a field experiment on 1,500 reviewers at the Journal of Public Economics (where Chetty is an editor). Here’s the abstract:

We evaluate policies to increase prosocial behavior using a field experiment with 1,500 referees at the Journal of Public Economics. We randomly assign referees to four groups: a control group with a six-week deadline to submit a referee report; a group with a four-week deadline; a cash incentive group rewarded with $100 for meeting the four-week deadline; and a social incentive group in which referees were told that their turnaround times would be publicly posted. We obtain four sets of results.

First, shorter deadlines reduce the time referees take to submit reports substantially. Second, cash incentives significantly improve speed, especially in the week before the deadline. Cash payments do not crowd out intrinsic motivation: after the cash treatment ends, referees who received cash incentives are no slower than those in the four-week deadline group. Third, social incentives have smaller but significant effects on review times and are especially effective among tenured professors, who are less sensitive to deadlines and cash incentives. Fourth, all the treatments have little or no effect on rates of agreement to review, quality of reports, or review times at other journals. We conclude that small changes in journals’ policies could substantially expedite peer review at little cost. More generally, price incentives, nudges, and social pressure are effective and complementary methods of increasing pro-social behavior.
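As an aside, the random assignment step in a trial like this is simple to script. The sketch below is purely illustrative: the referee identifiers, arm labels and equal-sized arms are my assumptions, not the paper’s actual design.

```python
# Illustrative random assignment to four trial arms (not the paper's code).
import random

random.seed(42)  # fix the seed so the assignment is reproducible

referees = [f"referee_{i}" for i in range(1, 1501)]  # 1,500 referees
arms = ["control_6_week", "deadline_4_week", "cash_incentive", "social_incentive"]

# Shuffle, then deal referees round the four arms in turn.
random.shuffle(referees)
assignment = {ref: arms[i % len(arms)] for i, ref in enumerate(referees)}

# Afterwards, compare average turnaround times (and agreement rates, report
# quality, and so on) across arms to estimate each treatment's effect
# relative to the six-week control group.
```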

*

What can we take from this?

First, academics respond well to cash incentives. No surprise there, especially as these referees are all economists.

Second, academics respond well to tight deadlines – this may surprise you. One explanation is that many academics overload themselves and find it hard to prioritise. For such an overworked individual, tightening the deadline may do the prioritisation for them.

Third, the threat of public shame also works – especially for better-paid, more senior people with a reputation to protect (and less need to impress journal editors).

Fourth, this experiment highlights some bigger issues in evaluation generally. One is that understanding the logic chain behind your results is just as important as getting the result in the first place. Rather than resorting to conjecture, it’s important to design your experiment so you can work out what is driving the result. In many cases, researchers can use mixed methods – interviews or participant observation – to help do this.

Another is that context matters. I suspect that some of these results are driven by the power of the journal in question: for economists the JPubE is a top international journal, and many researchers would jump at the chance to help out the editor. A less prestigious publication might have more trouble getting these tools to work. It’s also possible that academics in other fields would respond differently to these treatments. In the jargon, we need to think carefully about the ‘external validity’ of this trial. In this case, further experiments – on sociologists or biochemists, say – would build our understanding of what’s most effective where.


A version of this post originally appeared on the What Works Centre for Local Economic Growth blog.