Posts Tagged ‘whatworks’

Innovation, evidence and industrial strategy

September 1, 2017


[A What Works Centre post that’s good here too.]

Industrial strategy is one of the big issues for the What Works Centre and its local partners, and innovation is one of the main themes of industrial strategies in the UK and around the world.

Public policy plays a number of important roles in supporting innovation — see this debate between Mariana Mazzucato and Stian Westlake for a good intro. And as I wrote back in January, it’s equally important that we understand what the most effective tools are.

The good news for the UK is that we are — slowly — building an evidence base on what works for promoting innovation, as well as other pillars of industrial policy. What’s more, what we have suggests some current UK programmes work pretty well.


Our latest case study summarises Innovate UK’s programmes of support for microbusinesses and SMEs: mainly grants but also loans, awarded on a competitive basis, either to individual firms, or to promote partnerships with other companies or with universities.

Using standard UK administrative data, evaluators were able to set supported firms alongside similar non-supported companies, then compare how the two groups did. This ‘difference-in-differences’ approach is one of the methods we endorse, as it meets our minimum standards for good evaluation.
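The logic of that comparison can be sketched in a few lines. Here is a minimal illustration on made-up firm-level data — the firm numbers, outcomes and effect sizes below are hypothetical, not taken from the Innovate UK evaluation:

```python
# Difference-in-differences on toy firm-level data.
# Each record is (employment before, employment after) the programme.
treated = [(10, 14), (22, 30), (5, 8)]    # supported firms (hypothetical)
control = [(11, 12), (20, 23), (6, 7)]    # similar non-supported firms

def mean_change(group):
    """Average before-to-after change in the outcome for a group of firms."""
    return sum(after - before for before, after in group) / len(group)

# The DiD estimate: the treated group's change, net of the trend
# the comparison group experienced anyway.
did_estimate = mean_change(treated) - mean_change(control)
# Treated firms gained 5 jobs on average, controls about 1.67,
# so the estimated programme effect is roughly 3.33 jobs per firm.
```

In practice evaluators estimate this via a regression with a treated × post interaction term, which lets them add controls and compute standard errors, but the headline number is exactly this double difference.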

Encouragingly, Innovate UK’s programmes seem to have raised treated firms’ survival prospects (by 14 percentage points), employment (an extra 32 staff on average), and possibly sales too (although this result is less robust). These positive effects are biggest for companies aged 2–5 years and those aged 6–19 years. That is, these programmes seem to have helped innovative firms to scale.


This is another helpful piece of the industrial strategy puzzle, for several reasons.

First, in our innovation evidence review back in 2015, we found lots of evidence that these kinds of programmes raised firms’ R&D — but rather less evidence on growth impacts further down the line. Now we have good UK evidence of those growth and scaling impacts.

Second, we already know that the UK’s R&D tax credit system is pretty effective in stimulating firms’ patenting. We can now add good evidence on grants and loans alongside that.

Third, we can set these innovation findings alongside other evidence on business support programmes — where again, we have a decent stock of UK evidence, with several programmes (e.g. on export support) showing positive impacts.

Finally, it’s reassuring to see that evidence for these types of innovation support programmes in the UK broadly lines up with what we’ve found for OECD countries as a whole. We’ve had a number of conversations with policymakers worried that innovation programmes are very context-specific, so results from one country won’t generalise to others. This may be true in some cases. But for grants, loans and tax credits, what we know suggests that what works across the OECD also works in the UK.


Originally published here on 17 August 2017.

Doing broadband better

June 30, 2017


A What Works Centre post I thought would be good here too.


Our new broadband toolkit takes a look at how policymakers (nationally and locally) can increase broadband takeup.

Why do we care about this? From a social point of view, it’s increasingly clear that decent internet access is a citizen right (especially as many public services are being shifted online). From an economic angle, evidence from our broadband review shows that household broadband takeup can have positive effects on house prices, female labour market participation, employment, firm growth, and economic growth. (Household adoption is also strongly linked to firm adoption.)

With that in mind, the toolkit looks at three different areas where public policy can step into the market.


The first toolkit looks at direct public and public-private provision. EU State Aid rules limit what member governments can do here, but even so there is room to provide networks in rural areas. And if Brexit means these rules no longer apply, the UK can (potentially) make big investments.

We find that direct public provision raises broadband takeup, even in countries like the US where the private sector is already active. We also find evidence from an Italian programme that successfully raised takeup in rural areas. Crucially, though, the US evidence suggests that provision on its own is only half as effective as provision combined with info/education on broadband’s benefits.

Frustratingly, it’s hard for us to say much on cost-effectiveness on the basis of the available evidence, although it’s clear that there are huge variations in programme costs across countries. Some of this reflects physical ruggedness (Norway is tougher to pipe than the Netherlands), but there may also be potential to deliver schemes more smartly than the UK currently does.


The second toolkit looks at ‘local loop unbundling’ – essentially, opening up the last mile of broadband networks to all-comers. Currently only BT is obliged to do this in the UK, but in theory, national government could bring cable providers into the scope of the legislation. There’s also the question of access fees and other arrangements.

The argument against LLU is a simple one: firm X has invested in a very expensive network, so why should others get the benefit, and why should X invest more in future if they are forced to open up access? (In BT’s case this is complicated by the company’s history as a state monopoly.) The counter-argument is also simple: the wider benefits to society (via higher broadband takeup) outweigh losses to a single firm, which can in any case be compensated through charging for access.

The available evidence shows a pretty clear impact of LLU on household broadband takeup (it’s less clear for firms). Across the EU, between 2000 and 2010, LLU raised household broadband adoption by 15%. Perhaps surprisingly, the majority of studies also find no evidence that LLU crowds out future investment by the network owner, and one study suggests it may even lead to upgrading. This seems partly explained by relatively high access charges in most countries.

We’d urge some caution here though. For the UK, the positive effects of LLU have decreased over time, as broadband rollout covers more and more of the population. That might change if future technology gets a lot better, but this seems unlikely any time soon. Setting lower access charges would bump up the adoption effect, but also risks discouraging future investment.


The third toolkit looks across other tools that Governments have used to nudge providers and users: incentives (loans, subsidies, tax breaks), information campaigns, and demand-side measures such as buying clubs and bulk purchasing.

Sometimes policy works directly with providers (ISPs); at other times the nudges are applied to users (firms and households). Does it matter which? Not from the point of view of take-up: we find evidence that both kinds of measure can be effective. Specifically, we find that loans to ISPs and demand aggregation measures like buying clubs have the effect of raising downstream takeup. For direct-to-user policies, subsidies, training and providing computers are all effective.

For providers, a cross-OECD study finds that a bundle of producer incentives raises fibre uptake by 10% (from slower copper networks). On the user side, a US study shows that a partial loans scheme for farms raised broadband takeup by 13–14 percentage points.

Again, it’s not easy to figure out cost-effectiveness, but we estimate that typical household programmes cost £1.1-1.3k per extra connection, with the farms scheme above costing around £3-4k per farm connected.

The UK has strongly pushed programmes like this, notably the last government’s SME broadband voucher scheme. Sadly, it has had no proper evaluation (something we’d like to see change for future schemes) – although a user survey was published which (perhaps not surprisingly) found that voucher recipients were very happy with £3k off the cost of a fast broadband line …


Originally posted here on 23 June 2017.

Evidence in a post-experts world

October 11, 2016

(c) The Thick of It / BBC

Something I wrote for the What Works Centre that I thought would be good here too.


The What Works Centre for Local Economic Growth is three years old. So what have we learnt?

Two weeks ago we were in Manchester to discuss the Centre’s progress (the week before we held a similar session in Bristol). These sessions are an opportunity to reflect on what the Centre has been up to, but also to think more broadly about the role of evidence in policy in a post-experts world. In Bristol we asked Nick Pearce, who ran the No 10 Policy Unit under Gordon Brown, to share his thoughts. In Manchester we were lucky to be joined by Diane Coyle, who spoke alongside Henry and Andrew on the platform. Here are my notes from the Manchester event.


Evidence-based policy is more important than ever, Diane pointed out. For cash-strapped local government, evidence helps direct resources into the most effective uses. As devolution rolls on, adopting an evidence-based approach also helps local areas build credibility with central government departments, some of whom remain sceptical about handing over power.

Greater Manchester’s current devolution deal is, in part, the product of a long term project to build an evidence base and develop new ways of working around it.

A lack of good local data exacerbates the problem, as highlighted in the Bean Review. The Review has, happily, triggered legislation currently going through the House of Commons to allow the ONS better access to administrative data. Diane was hopeful that this will start to give a clearer picture of what is going on in local economies in a timely fashion, so the feedback can be used to influence the development of programmes in something closer to real time.

Diane also highlighted the potential of new data sources — information from the web and from social media platforms, for example — to inform city management and to help understand local economies and communities better. We think this is important too; I’ve written about this here and here.


So what have we done to help? Like all of the What Works Centres, we’ve had three big tasks since inception: to systematically review evaluation evidence, to translate those findings into usable policy lessons, and to work with local partners to embed those in everyday practice. (In our case, we’ve also had to start generating new, better evidence, through a series of local demonstrator projects.)

Good quality impact evaluations need to give us some idea about whether the policy in question had the effects we wanted (or had any negative impacts we didn’t want). In practice, we also need process evaluation — which tells us about policy rollout, management and user experience — but with limited budgets, WWCs tend to focus on impact evaluations.

In putting together our evidence reviews, we’ve developed a minimum standard for the evidence that we consider. Impact evaluations need to be able to look at outcomes before and after a policy is implemented, both for the target group and for a comparison group. That feels simple enough, but we’ve found the vast majority of local economic growth evaluations don’t meet this standard.

However, we do have enough studies in play to draw conclusions about more or less effective policies.

The chart above summarises the evidence for employment effects: one of the key economic success measures for LEPs and for local economies.

First, we can see straight away that success rates vary. Active labour market programmes and apprenticeships tend to be pretty effective at raising employment (and at cutting time spent unemployed). By contrast, firm-focused interventions (business advice or access to finance measures) don’t tend to work so well at raising workforce jobs.

Second, some programmes are better at meeting some objectives than others. This matters, since local economic development interventions often have multiple objectives.

For example, the firm-focused policies I mentioned earlier turn out to be much better at raising firm sales and profits than at raising workforce head count. That *might* feed through to more money flowing in the local economy — but if employment is the priority, resources might be better spent elsewhere.

We can also see that complex interventions like estate renewal don’t tend to deliver job gains. However, they work better at delivering other important objectives — not least, improved housing and local environments.

Third, some policies will work best when carefully targeted. Improving broadband access is a good example: SMEs benefit more than larger firms; so do firms with a lot of skilled workers; so do people and firms in urban areas. That gives us some clear steers about where economic development budgets need to be focused.


Fourth, it turns out that some programmes don’t have a strong economic rationale — but then, wider welfare considerations can come into play. For example, if you think of the internet as a basic social right, then we need universal access, not just targeting around economic gains.

This point also applies particularly to area-based interventions such as sports and cultural events and facilities, and to estate renewal. The evidence shows that the net employment, wage and productivity effects of these programmes tend to be very small (although house price effects may be bigger). There are many other good reasons to spend public money on these programmes, just not from the economic development budget.


Back at the event, the Q&A covered both future plans and bigger challenges. In its second phase, the Centre will be producing further policy toolkits (building on the training, business advice and transport kits already published). We’ll also be doing further capacity-building work and — we hope — further pilot projects with local partners.

At the same time, we’ll continue to push for more transparency in evaluation. BEIS is now publishing all its commissioned reports, including comments by reviewers; we’d like to see other departments follow suit.

At the Centre, we’d also like to see wider use of Randomised Control Trials in evaluation. Often this will need to involve ‘what works better’ settings where we test variations of a policy against each other — especially when the existing evidence doesn’t give strong priors. For example, Growth Hubs present an excellent opportunity to do this, at scale, across a large part of the country.

That kind of exercise is difficult for LEPs to organise on their own. So central government will still need to be the co-ordinator — despite devolution. Similarly, Whitehall has important brokering, convening and info-sharing roles, alongside the What Works Centres and others.

Incentives also need to change. We think LEPs should be rewarded not just for running successful programmes — but for running successful evaluations, whether or not they work.

Finally, we and other Centres need to keep pushing the importance of evidence, and to as wide a set of audiences as we can manage. Devolution, especially when new Mayors are involved, should enrich local democracy and the local public conversation. At the same time, the Brexit debate has shown widespread distrust of experts, and the ineffectiveness of much expert language and communication. The long term goal of the Centres — to embed evidence into decision-making — has certainly got harder. But the community of potential evidence users is getting bigger all the time.




Big data and local growth policy

March 11, 2016


I’ve written a couple of posts for the What Works Centre on how to use new data sources, and data science techniques, in designing and evaluating local growth programmes.

In parts of the interweb ‘Big Data’ is now such a cliché that dedicated Twitter bots will dice up offending content – see above. But in local economic development, and urban policy more broadly, researchers and policymakers are only beginning to exploit these resources.

The first post lays out the terrain, concepts and resources. The second post is more focused on evaluation, research design and delivery.

Happy reading!


Experimenting on yourself

August 29, 2014

A recent post for the What Works Centre that I thought would be good here too.


At the What Works Centre we’re keen on experiments. As we explain here, when it comes to impact evaluation, experimental and ‘quasi-experimental’ techniques generally stand the best chance of identifying the causal effect of a policy.

Researchers are also keen to experiment on themselves (or their colleagues). Here’s a great example from the Journal of Economic Perspectives, where the editors have conducted a randomised control trial on the academics who peer-review journal submissions.

Journal editors rely on these anonymous referees, who give their time for free, knowing that others will do the same when they submit their own papers. (For younger academics, being chosen to review papers for a top journal also looks good on your CV.)

Of course, this social contract sometimes breaks down. Reviewers are often late or drop out late in the process, but anonymity means that such bad behaviour rarely leaks out. To deal with this, some journals have started paying reviewers. But is that the most effective solution? To find out, Raj Chetty and colleagues conducted a field experiment on 1,500 reviewers at the Journal of Public Economics (where Chetty is an editor). Here’s the abstract:

We evaluate policies to increase prosocial behavior using a field experiment with 1,500 referees at the Journal of Public Economics. We randomly assign referees to four groups: a control group with a six-week deadline to submit a referee report; a group with a four-week deadline; a cash incentive group rewarded with $100 for meeting the four-week deadline; and a social incentive group in which referees were told that their turnaround times would be publicly posted. We obtain four sets of results.

First, shorter deadlines reduce the time referees take to submit reports substantially. Second, cash incentives significantly improve speed, especially in the week before the deadline. Cash payments do not crowd out intrinsic motivation: after the cash treatment ends, referees who received cash incentives are no slower than those in the four-week deadline group. Third, social incentives have smaller but significant effects on review times and are especially effective among tenured professors, who are less sensitive to deadlines and cash incentives. Fourth, all the treatments have little or no effect on rates of agreement to review, quality of reports, or review times at other journals. We conclude that small changes in journals’ policies could substantially expedite peer review at little cost. More generally, price incentives, nudges, and social pressure are effective and complementary methods of increasing pro-social behavior.
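The random-assignment step of a trial like this is simple to sketch. The code below shuffles a pool of referees and deals them into the four arms described in the abstract; the arm labels and the round-robin allocation are my own illustration, not the paper’s actual procedure:

```python
import random

# The four experimental arms described in the abstract (labels are mine).
ARMS = ["6-week control", "4-week deadline", "cash incentive", "social incentive"]

def assign_arms(referee_ids, seed=0):
    """Shuffle referees, then deal them round-robin into equal-sized arms."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    ids = list(referee_ids)
    rng.shuffle(ids)
    return {ref: ARMS[i % len(ARMS)] for i, ref in enumerate(ids)}

# 1,500 referees split evenly: 375 per arm.
groups = assign_arms(range(1500))
```

Because assignment is random, any systematic difference in turnaround times between arms can be attributed to the treatment rather than to differences between the referees themselves — which is exactly the logic the Centre wants to see in policy evaluation.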


What can we take from this?

First, academics respond well to cash incentives. No surprise there, especially as these referees are all economists.

Second, academics respond well to tight deadlines – this may surprise you. One explanation is that many academics overload themselves and find it hard to prioritise. For such an overworked individual, tightening the deadline may do the prioritisation for them.

Third, the threat of public shame also works – especially for better-paid, more senior people with a reputation to protect (and less need to impress journal editors).

Fourth, this experiment highlights some bigger issues in evaluation generally. One is that understanding the logic chain behind your results is just as important as getting the result in the first place. Rather than resorting to conjecture, it’s important to design your experiment so you can work out what is driving the result. In many cases, researchers can use mixed methods – interviews or participant observation – to help do this.

Another is that context matters. I suspect that some of these results are driven by the power of the journal in question: for economists the JPubE is a top international journal, and many researchers would jump at the chance to help out the editor. A less prestigious publication might have more trouble getting these tools to work. It’s also possible that academics in other fields would respond differently to these treatments. In the jargon, we need to think carefully about the ‘external validity’ of this trial. In this case, further experiments – on sociologists or biochemists, say – would build our understanding of what’s most effective where.


A version of this post originally appeared on the What Works Centre for Local Economic Growth blog.

… and we’re live

October 25, 2013

our London launch

We had the London launch of the What Works Centre yesterday. It went very well – full room, sharp discussion, plus strong contributions from LSE’s Director Craig Calhoun, from BIS and DCLG Ministers Michael Fallon and Kris Hopkins, and from Joanna Killian from Essex.

We’re off to Manchester in a couple of weeks for a second launch session. Details here.

Now the hard work begins

In the meantime you can catch up on what we’re up to here and here.
