Posts Tagged ‘research’

Experimenting on yourself

August 29, 2014

A recent post for the What Works Centre that I thought would be good here too.

*

At the What Works Centre we’re keen on experiments. As we explain here, when it comes to impact evaluation, experimental and ‘quasi-experimental’ techniques generally stand the best chance of identifying the causal effect of a policy.

Researchers are also keen to experiment on themselves (or their colleagues). Here’s a great example, published in the Journal of Economic Perspectives, in which journal editors conducted a randomised controlled trial on the academics who peer-review submissions.

Journal editors rely on these anonymous referees, who give their time for free, knowing that others will do the same when they submit their own papers. (For younger academics, being chosen to review papers for a top journal also looks good on your CV.)

Of course, this social contract sometimes breaks down. Reviewers are often late, or drop out part-way through the process, but anonymity means that such bad behaviour rarely leaks out. To deal with this, some journals have started paying reviewers. But is that the most effective solution? To find out, Raj Chetty and colleagues conducted a field experiment on 1,500 reviewers at the Journal of Public Economics (where Chetty is an editor). Here’s the abstract:

We evaluate policies to increase prosocial behavior using a field experiment with 1,500 referees at the Journal of Public Economics. We randomly assign referees to four groups: a control group with a six-week deadline to submit a referee report; a group with a four-week deadline; a cash incentive group rewarded with $100 for meeting the four-week deadline; and a social incentive group in which referees were told that their turnaround times would be publicly posted. We obtain four sets of results.

First, shorter deadlines reduce the time referees take to submit reports substantially. Second, cash incentives significantly improve speed, especially in the week before the deadline. Cash payments do not crowd out intrinsic motivation: after the cash treatment ends, referees who received cash incentives are no slower than those in the four-week deadline group. Third, social incentives have smaller but significant effects on review times and are especially effective among tenured professors, who are less sensitive to deadlines and cash incentives. Fourth, all the treatments have little or no effect on rates of agreement to review, quality of reports, or review times at other journals. We conclude that small changes in journals’ policies could substantially expedite peer review at little cost. More generally, price incentives, nudges, and social pressure are effective and complementary methods of increasing pro-social behavior.
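To make the design concrete, here’s a minimal, purely illustrative sketch of how a four-arm assignment like the one above might be drawn and summarised. This isn’t the authors’ code – the arm names, referee IDs and turnaround times are all invented for the example.

```python
import random
import statistics

ARMS = ["control_6wk", "deadline_4wk", "cash_100", "social_posting"]

def assign_arms(referee_ids, seed=42):
    """Randomly assign each referee to one of the four treatment arms."""
    rng = random.Random(seed)
    ids = list(referee_ids)
    rng.shuffle(ids)
    # Deal referees out round-robin so the arms end up roughly equal in size.
    return {ref: ARMS[i % len(ARMS)] for i, ref in enumerate(ids)}

def turnaround_by_arm(assignment, review_times):
    """Mean turnaround (days) per arm, and the gap relative to the control arm."""
    by_arm = {arm: [] for arm in ARMS}
    for ref, arm in assignment.items():
        if ref in review_times:  # only referees who actually submitted a report
            by_arm[arm].append(review_times[ref])
    means = {arm: statistics.mean(times) for arm, times in by_arm.items() if times}
    control = means["control_6wk"]
    return {arm: (m, m - control) for arm, m in means.items()}

# Toy usage: 8 referees, made-up review times in days.
assignment = assign_arms(range(1, 9))
times = {1: 40, 2: 25, 3: 28, 4: 45, 5: 22, 6: 30, 7: 35, 8: 27}
print(turnaround_by_arm(assignment, times))
```

The real study, of course, works with 1,500 referees and proper standard errors; the point here is just that the comparison boils down to differences in mean review times across randomly assigned groups.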

*

What can we take from this?

First, academics respond well to cash incentives. No surprise there, especially as these referees are all economists.

Second, academics respond well to tight deadlines – this may surprise you. One explanation is that many academics overload themselves and find it hard to prioritise. For such an overworked individual, tightening the deadline may do the prioritisation for them.

Third, the threat of public shame also works – especially for better-paid, more senior people with a reputation to protect (and less need to impress journal editors).

Fourth, this experiment highlights some bigger issues in evaluation generally. One is that understanding the logic chain behind your results is just as important as getting the result in the first place. Rather than resorting to conjecture, it’s important to design your experiment so you can work out what is driving the result. In many cases, researchers can use mixed methods – interviews or participant observation – to help do this.

Another is that context matters. I suspect that some of these results are driven by the power of the journal in question: for economists the JPubE is a top international journal, and many researchers would jump at the chance to help out the editor. A less prestigious publication might have more trouble getting these tools to work. It’s also possible that academics in other fields would respond differently to these treatments. In the jargon, we need to think carefully about the ‘external validity’ of this trial. In this case, further experiments – on sociologists or biochemists, say – would build our understanding of what’s most effective where.

 

A version of this post originally appeared on the What Works Centre for Local Economic Growth blog.

What Works

September 11, 2013

As some of you will know, LSE, the Centre for Cities and Arup will be running the new What Works Centre on local economic growth.

The Centre will conduct systematic reviews of UK and international research, ranking the most effective interventions, and will work closely with local government, local enterprise partnerships and other ‘users’ to help develop stronger economic policymaking across the UK. As NICE and the EEF already do, it may eventually commission research too.

The Centre has just begun work – we had a great workshop today with a number of our local partners – and we’ll formally launch later in the Autumn. We’ll be part of a network of six working on health, education, ageing, crime reduction and early intervention as well as local economies.

Henry Overman is stepping down from SERC to lead the Centre. I’m becoming one of the Deputy Directors, and will be working at LSE alongside my research-focused role at NIESR. I’ll be leading on the academic workstream, co-ordinating the systematic reviews and demonstrator projects, as well as advising Henry on the Centre’s direction.

We’ll be working with a strong team of academics across the country – in Liverpool, Leeds, Newcastle and Bristol, as well as London. We’ll also team up with New Economy Manchester on capacity-building and demonstrator projects. And we’ll be using the UK-wide networks developed by Centre for Cities and Arup.

Developing a new organisation from scratch is exciting, challenging and a huge amount of work, as I can attest from my early days at the Centre for Cities. Unlike most start-ups, we are very lucky to have secure initial funding. And we have an emerging body of good practice to draw on. But we still have a great deal to do in the months ahead. I look forward to working with many of you as we build out.

Big data and digital firms

July 23, 2013

[Figure: © 2013 NIESR and Growth Intelligence]

I’ve just published some new analysis of the UK’s digital economy, joint with Anna Rosso and Growth Intelligence, and funded by Google. We had a launch session yesterday with Vince Cable – see here for a good write-up by the Guardian.

We’ve done pretty well for media so far: see coverage from the BBC, FT [£], Sky, Telegraph, Independent, Scotsman and Guardian (again) among others, and a nice blog post from Google’s Hal Varian.

*

This is the first phase of a research programme with roots in the resurgence of industrial policy around the world. Like many others, the UK government wants to promote ICT and digital content activities – in the global North at least, this is generally high-value activity, with spillover effects to the rest of the economy.

A big problem is that we have little idea of the true size and nature of these digital companies. That’s because official definitions use SIC codes, which don’t work well for companies doing innovative, high-tech stuff.

To try and fix this, we use big data provided by Growth Intelligence. GI pull in data from the web, social media, news feeds, patents and a range of other sources, and layer this on top of public data from Companies House. That gives a much richer picture of who’s out there, their characteristics and their performance.

Crucially, GI’s data buys us a lot more precision than SIC-based analysis. We can look at industries and at products, services, clients and distribution platforms. For increasingly tech-powered sectors like architecture, that allows us to distinguish ‘digital’ companies producing (say) specialist CAD software from ‘non-digital’ ones making buildings.
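As a rough, hypothetical illustration of what that reclassification looks like in practice – this is not Growth Intelligence’s actual method, and the keyword list, file name and column names are all made up:

```python
import pandas as pd

# Made-up keyword list: terms from a company's website, product descriptions
# or news mentions that suggest genuinely digital activity.
DIGITAL_KEYWORDS = ["software", "saas", "cad", "platform", "analytics", "app"]

def looks_digital(row):
    """Flag a company as digital if its web/product text mentions digital
    products or services, whatever its registered SIC code says."""
    text = f"{row['web_description']} {row['product_keywords']}".lower()
    return any(kw in text for kw in DIGITAL_KEYWORDS)

# Hypothetical file standing in for Companies House records enriched with
# web, social media, news and patent data.
firms = pd.read_csv("companies.csv")
firms["is_digital"] = firms.apply(looks_digital, axis=1)

# Compare against a SIC-only definition (an assumed boolean column): an
# architecture firm selling specialist CAD software is missed by its SIC
# code but picked up by the richer classification.
print(pd.crosstab(firms["sic_is_digital"], firms["is_digital"]))
```

The real classification is far more sophisticated than a keyword match, but the principle – using what firms actually say and sell, rather than how they were registered – is the same.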

*

Overall, we find over 40% more digital companies than official estimates suggest. We also find that digital companies that report revenue or employment are pretty resilient, with faster revenue growth and higher average employment than non-digital companies.

And contrary to the popular sense that it’s all about London start-ups, we find hotspots of digital activity across the country, including some perhaps surprising places like Aberdeen, Middlesbrough and Blackpool.

*

Okay, this is all fascinating stuff for researchers. But what should Government do differently? First, the big data field is still in its early days, and we’d encourage officials to explore how it can complement conventional statistics. Second, better data should lead to better-designed industrial policies. Finding the optimal policy mix, however, is a separate and much harder question to answer.

BIS’ information economy strategy is rightly cautious about hands-on intervention. This NBER paper by Aaron Chatterji, Ed Glaeser and Bill Kerr is a good overview of the wider evidence. Henry Overman and I will be publishing a piece in the Oxford Review of Economic Policy soon too, which puts the case for a more agglomeration-focused approach.

We’ll also be continuing the data analysis, thanks to further support from NESTA. Look out for more mapping and econometric work in the months ahead.
