
New paper: Urban data and city dashboards: Six key issues

Rob Kitchin and Gavin McArdle have published a new Programmable City working paper (no. 21) – Urban data and city dashboards: Six key issues – on SocArXiv today.  It is a pre-print of a chapter that will be published in Kitchin, R., Lauriault, T.P. and McArdle, G. (eds) (forthcoming) Data and the City. Routledge, London.


This chapter considers the relationship between data and the city by critically examining six key issues with respect to city dashboards: epistemology, scope and access, veracity and validity, usability and literacy, use and utility, and ethics.  While city dashboards provide useful tools for evaluating and managing urban services, understanding and formulating policy, and creating public knowledge and counter-narratives, our analysis reveals a number of conceptual and practical shortcomings.  In order for city dashboards to reach their full potential we advocate a number of related shifts in thinking and praxes and put forward an agenda for addressing the issues we highlight.  Our analysis is informed by our endeavours in building the Dublin Dashboard.

Key words: dashboards, cities, access, epistemology, ethics, open data, scope, usability, utility, veracity, validity

Open data talks at Dublinked event

On Thursday two members of the ProgCity team – Rob Kitchin and Tracey Lauriault – presented at the Open Data Summit organized by Dublinked.  Rob presented a paper entitled ‘Open data: An open and shut case’ (see below for slides) and Tracey presented a paper entitled ‘The open data landscape in Ireland.’  It was an excellent event and hopefully the slides of the other talks will be put online as there was a lot of useful insight shared during the presentations and discussion.

Writing for impact: how to craft papers that will be cited

For the past few years I’ve co-taught a professional development course for doctoral students on completing a thesis, getting a job, and publishing.  The course draws liberally on a book I co-wrote with the late Duncan Fuller entitled The Academics’ Guide to Publishing.  One thing we did not really cover in the book was how to write and place pieces that have impact; instead we provided more general advice about getting through the peer review process.

The general careers advice mantra of academia is now ‘publish or perish’.  What is published, and its utility and value, is often somewhat overlooked — if a piece got published it is assumed it must have some inherent value.  And yet a common observation is that most journal articles seem to be barely read, let alone cited.

Both authors and editors want to publish material that is both read and cited, so what is required to produce work that editors are delighted to accept and readers find so useful that they want to cite it in their own work?

A taxonomy of publishing impact

The way I try to explain impact to early career scholars is through a discussion of writing and publishing a paper on airport security (see Figure 1).  Written pieces of work, I argue, generally fall into one of four categories, with the impact of the piece rising as one traverses from Level 1 to Level 4.

Figure 1: Levels of research impact


Level 1: the piece is basically empiricist in nature and makes little use of theory.  For example, I could write an article that provides a very detailed description of security in an airport and how it works in practice.  This might be interesting, but would add little to established knowledge about how airport security works or how to make sense of it.  Generally, such papers appear in trade magazines or national level journals and are rarely cited.

Level 2: the paper uses established theory to make sense of a phenomenon. For example, I could use Foucault’s theories of disciplining, surveillance and biopolitics to explain how airport security works to create docile bodies that passively submit to enhanced screening measures.  Here, I am applying a theoretical frame that might provide a fresh perspective on a phenomenon if it has not been previously applied.  I am not, however, providing new theoretical or methodological tools but drawing on established ones.  As a consequence, the piece has limited utility, essentially constrained to those interested in airport security, and might be accepted in a low-ranking international journal.

Level 3: the paper extends/reworks established theory to make sense of phenomena.  For example, I might argue that since the 1970s when Foucault was formulating his ideas there has been a radical shift in the technologies of surveillance from disciplining systems to capture systems that actively reshape behaviour.  As such, Foucault’s ideas of governance need to be reworked or extended to adequately account for new algorithmic forms of regulating passengers and workers.  My article could provide such a reworking, building on Foucault’s initial ideas to provide new theoretical tools that others can apply to their own case material.  Such a piece will get accepted into high-ranking international journals due to its wider utility.

Level 4: the paper uses the study of a phenomenon to rethink a meta-concept or proposes a radically reworked or new theory.  Here, the focus of attention shifts from how best to make sense of airport security to the meta-concept of governance, using the empirical case material to argue that it is not simply enough to extend Foucault’s thinking; rather, a new way of thinking is required to adequately conceptualize how governance is being operationalised.  Such new thinking tends to be well cited because it can generally be applied to making sense of lots of phenomena, such as the governance of schools, hospitals, workplaces, etc.  Of course, Foucault consistently operated at this level, which is why he is so often reworked at Levels 2 and 3, and is one of the most impactful academics of his generation (cited nearly 42,000 times in 2013 alone).  Writing a Level 4 piece requires a huge amount of skill, knowledge and insight, which is why so few academics work and publish at this level.  Such pieces will be accepted into the very top-ranked journals.

One way to think about this taxonomy is this: generally, those people who are the biggest names in their discipline, or across disciplines, have a solid body of published Level 3 and Level 4 material — this is why they are so well known; they produce material and ideas that have high transfer utility.  Those people who are well known within a sub-discipline generally have a body of Level 2 and Level 3 material.  Those who are barely known outside of their national context generally have Level 1/2 profiles (and also have relatively small bodies of published work).

In my opinion, the majority of papers being published in international journals are Level 2/borderline 3, with some minor extension/reworking that has limited utility beyond making sense of a specific phenomenon, or Level 3/borderline 2, with narrow, timid or uninspiring extension/reworking that constrains the paper’s broader appeal.  Strong, bold Level 3 papers that have wider utility beyond the paper’s focus are less common, and Level 4 papers that really push the boundaries of thought and praxis are relatively rare.  The majority of articles in national level journals tend to be Level 2; and the majority of book chapters in edited collections are Level 1 or 2.  It is not uncommon, in my experience, for authors to think the paper that they have written is a category above its real level (which is why they are often so disappointed with editor and referee reviews).

Does this basic taxonomy of impact work in practice? 

I’ve not done a detailed empirical study, but can draw on two sources of observations.  First, my experience as an editor of two international journals (Progress in Human Geography, Dialogues in Human Geography), and for ten years of another (Social and Cultural Geography), viewing download rates and citation analyses for papers published in those journals.  It is clear from such data that the relationship between level and citation generally holds — those papers that push boundaries and provide new thinking tend to be better cited.  There are, of course, some exceptions, and there are no doubt some Level 4 papers that are quite lowly cited for various reasons (e.g., their arguments are ahead of their time), but generally the cream rises.  Most academics intuitively know this, which is why the most consistent response of referees and editors to submitted papers is to provide feedback that might help shift Level 2/borderline Level 3 papers (which are the most common kind of submission) up to solid Level 3 papers – pieces that provide new ways of thinking and doing and provide fresh perspectives and insights.

Second, by examining my own body of published work.  Figure 2 displays the citation rates of all of my published material (books, papers, book chapters) divided into the four levels.  There are some temporal effects (such as more recently published work not having had time to be cited) and some outliers (in this case, a textbook and a coffee table book) but the relationship is quite clear, especially when just articles are examined (Figure 3) — the rate of citation increases across levels.  (I’ve been fairly brutally honest in categorising my pieces and what’s striking to me personally is proportionally how few Level 3 and 4 pieces I’ve published, which is something for me to work on).

Figures 2 and 3: Citations by level

So what does this all mean? 

Basically, if you want your work to have impact you should try to write articles that meet Level 3 and 4 criteria — that is, produce novel material that provides new ideas, tools and methods that others can apply in their own studies.  Creating such pieces is not easy or straightforward and demands a lot of reading, reflection and thinking, which is why it can be so difficult to get papers accepted into the top journals and why the citation distribution curve is so heavily skewed, with a relatively small number of pieces having nearly all the citations (Figure 4 shows the skewing for my papers; my top cited piece has the same number of cites as the 119 least cited pieces combined).

Figure 4: Skewed distribution of citations


In my experience, papers with zero citations are nearly all Level 1 and 2 pieces.  Those are not the kinds of papers you should be striving to publish if you want some impact from your work.

Rob Kitchin

Four critiques of open data initiatives

by Rob Kitchin

I’ve been a long-time supporter of open data and of providing analytic tools to citizens to enable evidence-informed participation in public debate.  Since 2006, when it was initially established as the Cross-Border Regional Research Observatory, I have been PI on the All-Island Research Observatory (www.airo.ie), a project that provides access to various government datasets in the Republic of Ireland, Northern Ireland and Europe, along with interactive mapping and graphing tools.  The core project team of Justin Gleeson, Aoife Dowling and Eoghan McCarthy have worked hard to leverage datasets out of various agencies and negotiate more favourable licensing terms, add value and insight to these datasets, promote data journalism through collaboration with the Irish Times and Irish Examiner, and provide open access to a couple of thousand datasets through the AIRO datastore.

The arguments concerning the benefits of open data are now reasonably well established and include contentions that open data leads to increased transparency and accountability with respect to public bodies and services; increases the efficiency and productivity of agencies and enhances their governance; promotes public participation in decision making and social innovation; and fosters economic innovation and job and wealth creation (Pollock 2006; Huijboom and Van der Broek 2011; Janssen 2012; Yiu 2012).

Less well examined are the potential problems affecting, and negative consequences of, open data initiatives.  Consequently, as a provocation for Wednesday’s (Nov 13th, 4-6pm) Programmable City open data event I thought it might be useful to outline four critiques of open data, each of which deserves and demands critical attention: open data lacks a sustainable financial model; promotes a politics of the benign and empowers the empowered; lacks utility and usability; and facilitates the neoliberalisation and marketisation of public services.  These critiques do not suggest abandoning the move towards opening data, but contend that open data initiatives need to be much more mindful of what data are being made open, how data are made available, how they are being used, and how they are being funded.

Funding and sustainability

Because, to date, attention has largely focused on the supply side of accessing data and creating open data initiatives, insufficient attention has been paid to the economics of creating sustainably funded initiatives.  Data might be non-rivalrous in nature, meaning that they can be distributed at marginal cost, but the initial copy needs to be paid for, along with on-going data management and customer service (Pollock 2006).  As such, open data might well be a free resource for end-users, but its production and curation is certainly not without significant cost (especially with respect to appropriate technologies and skilled staffing).  In many cases, the data being opened has to date been a major source of revenue for organisations and, in the case of companies, of competitive advantage.  A key question, therefore, centres on how open data projects can be funded sustainably in the absence of a direct revenue stream.

A number of different models have been suggested (see Ferro and Osella 2013), but it is generally acknowledged that securing a stable financial base is best achieved by direct government subvention.  Here, it is argued that such a subvention will be offset by two factors.  First, open data will produce diverse consumer surplus value, generating significant public goods which are worth the investment of public expenditure.  Second, open data will lead to new innovative products that will create new markets, which in turn will produce additional corporate revenue and tax receipts (Pollock 2009).  These tax receipts will be in excess of the additional government costs of opening the data.  This may well be the case with high-value datasets such as mapping and transport data, but it is much less likely with most other datasets.

de Vries et al. (2011) reported that the average apps developer made only $3,000 per year from apps sales, with 80 percent of paid Android apps being downloaded fewer than 100 times.  In addition, they noted that even successful apps, such as MyCityWay, which had been downloaded 40 million times, were not yet generating profits.  Instead, venture capitalists are investing in projects with potential whilst a sustainable business model is sought.  Given austerity and cutbacks across governments, finding the necessary funds to open data is a challenge.  And yet, the consequences of reductions or fluctuations in the financial base of open data services are likely to be a decline in data quality, responsiveness, innovation, and general performance (Pollock 2009).  At present, the jury is still out on whether opening up all public sector data is economically viable and sustainable, especially in the short term.

Politics of the benign and empowering the empowered

Another consequence of focusing on gaining access to the data is to ignore the politics of the data themselves: what the data reveal, how they are used, and in whose interests (Shah 2013).  The open data movement largely seeks to present an image of being politically benign and commonsensical, promoting a belief that opening up data is inherently a good thing in and of itself by democratising data.  For others, making data accessible is just one element of the notion of openness.  Just as important are what the data consist of, how they can be used, and how they can create a more just and equitable society.  If open data merely serves the interests of capital by opening public data for commercial re-use, and further empowers those who are already empowered while disenfranchising others, then it has failed to make society more democratic and open (Gurstein 2011; Shah 2013).

Implicit in most discussions of open data is the assumption that the data are neutral and objective in nature and that everyone has the potential to access and use such data (Gurstein 2011; Johnson 2013).  However, neither is the case.  With respect to open data themselves, as Johnson (2013) contends, a high degree of social privilege and social values are embedded in public sector data with respect to what data are generated, relating to whom and what (especially within domains that function as disciplinary systems, such as social welfare and law enforcement), and whose interests are represented within the data set and whose are excluded.  As such, value structures are inherent in data sets, and these subsequently shape analysis and interpretation and work to propagate injustices and reinforce dominant interests.

Citizens have differential access to the hardware and software required to download and process open data sets, as well as varying levels of the skills required to analyze, contextualize and interpret the data (Gurstein 2011).  And even if some groups have the ability to make compelling sense of the data, they do not necessarily have the contacts needed to gain a public voice and influence a debate, or the political skill to take on a well-resourced and savvy opponent.  As such, assessments of the democratic potential of open data have been overly optimistic, with most users being those with high degrees of technical knowledge and an established political profile (McClean 2011).  Indeed, open data can work to further empower the empowered and to reproduce and deepen power imbalances (Gurstein 2011).  An oft-cited example of the latter is the digitization of land records in Karnataka, India, where an open data project, which was promoted as a ‘pro-poor’ initiative, worked to actively disenfranchise the poor by enabling those with financial resources and skills to access previously restricted data and to re-appropriate their lands (Gurstein 2011; Slee 2012; Donovan 2012).  Far from aiding all citizens, in this case open data facilitated a change in land rights and a transfer of wealth from poor to rich.  In other words, opening data does not mean an inherent democratization of data.  Indeed, open data can function as a tool of disciplinary power (Johnson 2013).

Utility and usability

In a study of a number of different open data projects, Helbig et al. (2012) reported that many are too technically focused, amounting to “little more than websites linked to miscellaneous data files, with no attention to the usability, quality of the content, or consequences of its use.”  The result is a set of open data sites that operate more as data holdings or data dumps, lacking the qualities expected of a well organised and run data infrastructure: clean, high quality, validated and interoperable data that comply with data standards and have appropriate metadata and full record sets (associated documentation); preservation, backup and auditing policies; re-use, privacy and ethics policies; administrative arrangements, management organisation and governance mechanisms; and financial stability and a long-term plan of development and sustainability.  Many sites also lack appropriate tools and contextual materials to support data analysis.  Moreover, the data sets released are often low-hanging fruit, consisting of those that are easy to release and contain non-sensitive data of relatively low utility.  In contrast, data that might be more difficult and demanding to make open, due to issues of sensitivity or because they require more management work to comply with data protection laws, often remain closed (Chignard 2013).

Part of the issue is that many open data sites have been rough and ready responses to an emerging phenomenon.  They have been built by enthusiasts and organisations who have little experience of data archiving or the contextual use of the data being opened.  They have been supported and promoted by hackathons and data dives, which reproduce many of these issues.  As Gordon-McKeon (2013) and Porway (2013) contend, these events, which invite coders and other interested parties to build apps using open data, can do as much harm as good.  Whilst they do focus attention on the data and are good for networking, those doing the coding often have little deep contextual knowledge with regard to what the data refer to, belong to a particular demographic that is not reflective of wider society (e.g., young, educated and tech-orientated), and believe that deep structural problems can be resolved by technological solutions.  In other words, they are “built by a micro-community of casual volunteers, not by people with a deep stake in seeing the project succeed” (Gordon-McKeon 2013).  Further, hackathon-created solutions often remain at version 1.0, with little post-event follow-up, maintenance or development.

Because of these various teething issues, rather than creating a virtuous cycle (where the release of more and more data sets, in more formats, produces growing use, and therefore the release of more data), as assumed by the open data movement, Helbig et al. (2012) note that many sites have low and declining traffic, as they do not encourage use or facilitate users, and are limited by other factors such as data management practices, agency effort and internal politics.  After an initial spark of interest, data use drops quite markedly as the limitations of the data are revealed and users struggle to work out how the data might be profitably analyzed and used.  McClean (2011), for example, notes that analysis arising from open data has had limited impact on political debates, and concludes with respect to COINS (government financial data in the UK) that after “a brief flurry of media interest in mid-2010, in the immediate aftermath of the release, … reports explicitly mentioning COINS are now extremely rare and those members of the press who were most interested obtaining access to it report that it has not proved particularly useful as a driver of journalism.”  Where data are released periodically (e.g., quarterly or annually), usage tends to be cyclical and often tied to specific projects (such as consultancy reports) rather than following a more consistent pattern of use.  In such cases, Helbig et al. (2012) observed a set of negative or balancing feedback loops that slowed both the supply of data and its use, thus further decreasing usage.  Thus, after some initial ‘quick wins’, the danger is that any virtuous cycle shifts from being positive to negative, and the rationale for central government funding of such initiatives is undermined and in due course cut.

Neoliberalisation and marketisation of public services

Jo Bates (2012) argues, “open initiatives such as OGD [open government data] emerge into a historical process, not a neutral terrain.”  As with all political initiatives, the politics of open data are not simply commonsensical or neutral, but rather are underpinned by political and economic ideology.  The open data movement is diverse and made up of a range of constituencies with different agendas and aims, and is not driven by any one party.   However, Bates makes the case that the open data movement, in the UK at least, had little political traction until big business started to actively campaign for open data, and open government initiatives started to fit into programmes of forced austerity and the marketisation of public services.  For her, political parties and business have appropriated the open data movement on “behalf of dominant capitalist interests under the guise of a ‘Transparency Agenda’” (Bates 2012).

In other words, the real agenda of businesses interested in open data is to gain access to expensively produced data at no cost, and thus heavily subsidised infrastructural support from which they can leverage profit, whilst at the same time removing the public sector from the marketplace and weakening its position as the producer of such data.  Indeed, because the income from data/data services disappears on opening data (which is especially acute in trading funds, where data production and management were largely funded by fees with some public subsidy), public sector bodies are more likely to be forced to outsource such services to the private sector on a competitive basis, or to cede data production to the private sector, which they then have to procure (Gurstein 2013).  Here, data services and data derived from public data have to be purchased back by the data creator.  At the same time, the data literacy of the organisation is hollowed out.  Moreover, because open data often concern a body’s own activities, especially when supplemented by key performance indicators, they facilitate public sector reform and reorganisation that promotes a neoliberal, New Public Management ethos and private sector interests (McClean 2011; Longo 2011).

Such processes, Bates (2013) argues, are part of a deliberate political strategy to open up the “provision of almost all public services to competition from private and third sector providers”, with open data about public services enabling “service users to make informed choices within a market for public services based on data-driven applications produced by a range of commercial and non-commercial developers” (original emphasis).  In such cases, the transparency agenda promoted by politicians and businesses is merely a rhetorical discursive device.  If either party were genuinely interested in transparency then it would be equally supportive of the right to information movement (freedom of information) and the work of whistleblowers (Janssen 2012), and also of loosening the shackles of intellectual property rights more broadly (Shah 2013).  Instead, governments and businesses are generally resistant to both.


Open data initiatives hold much promise and value.  They are radically altering access to publicly produced data and making new kinds of analysis possible.  They are creating new forms of transparency and accountability, fostering new forms of social participation and evidence-informed modes of governance, and promoting innovation and wealth generation.  At the same time, much more critical attention needs to be paid to how open data projects are developing as complex socio-technical systems with diverse stakeholders and agendas.  To date, efforts have concentrated on the political and technical work of establishing open data projects, and not enough on studying these discursive and material moves and their consequences.  As a result, we lack detailed case studies of open data projects in action, the assemblages surrounding and shaping them, and the messy, contingent and relational ways in which they unfold.  It is only through such studies that a more complete picture of open data will emerge, one that reveals both the positives and negatives of such projects, and which will provide answers to more normative questions concerning how they should be implemented and to what ends.

This post is a modified extract from a forthcoming book by Rob Kitchin, The Data Revolution: Big Data, Open Data, Data Infrastructures and Their Consequences (Sage, London).


Bates, J. (2012) “This is what modern deregulation looks like”: Co-optation and contestation in the shaping of the UK’s Open Government Data Initiative.  The Journal of Community Informatics 8(2).  http://www.ci-journal.net/index.php/ciej/article/view/845/916 (last accessed 6 February 2013)

Bates, J. (2013) Opening up public data.  SPERI Comment.  May 21st. http://speri.dept.shef.ac.uk/2013/05/21/opening-public-data/ (last accessed 18 September 2013)

Chignard, S. (2013) A brief history of open data.  Paris Tech Review. March 29th. http://www.paristechreview.com/2013/03/29/brief-history-open-data/ (last accessed 18 Sept 2013)

de Vries, M., Kapff, L., Negreiro Achiaga, M., Wauters, P., Osimo, D., Foley, P., Szkuta, K., O’Connor, J., and Whitehouse, D. (2011) Pricing of Public Sector Information Study (POPSIS).  http://epsiplatform.eu/sites/default/files/models.pdf (last accessed 11 August 2013)

Donovan, K. (2012). Seeing like a slum: Towards open, deliberative development. Georgetown Journal of International Affairs, 13(1), 97-104.

Ferro, E. and Osella, M. (2013)  Eight Business Model Archetypes for PSI Re-Use.  “Open Data on the Web” Workshop, 23rd-24th April 2013, Google Campus, Shoreditch, London.  http://www.w3.org/2013/04/odw/odw13_submission_27.pdf (last accessed 13 August 2013)

Gordon-McKeon, S. (2013) Hacking the hackathon.  Shaunagm.net, 10th October. http://www.shaunagm.net/blog/2013/10/hacking-the-hackathon/ (last accessed 21 October 2013)

Gurstein, M. (2011) Open data: Empowering the empowered or effective data use for everyone.  First Monday 16(2) http://www.uic.edu/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/3316/2764 (last accessed 6 February 2013)

Gurstein, M. (2013) Should “Open Government Data” be a product or a service (and why does it matter?)  Gurstein’s Community Informatics, 3 February 2013, http://gurstein.wordpress.com/2013/02/03/is-open-government-data-a-product-or-a-service-and-why-does-it-matter/ (last accessed 6 February 2013)

Helbig, N., Cresswell, A.M., Burke, G.B. and Luna-Reyes, L. (2012) The Dynamics of Opening Government Data: A White Paper.  Centre for Technology in Government, State University of New York, Albany. http://www.ctg.albany.edu/publications/reports/opendata/opendata.pdf

Huijboom, N. and Van der Broek, T. (2011) Open data: an international comparison of strategies.  European Journal of ePractice Nº 12, March/April. http://www.epractice.eu/files/European%20Journal%20epractice%20Volume%2012_1.pdf (last accessed 15 August 2013)

Janssen, K. (2012) Open government data: right to information 2.0 or its rollback version? ICRI Working Paper 8/2012  http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2152566 (last accessed 14 August 2013)

Johnson, J.A. (2013)  From open data to information justice. Paper presented at the Annual Conference of the Midwest Political Science Association April 13, 2013, Chicago, Illinois. http://papers.ssrn.com/abstract=2241092  (last accessed 16 August 2013)

Longo, J. (2011)  #OpenData: Digital-Era Governance Thoroughbred or New Public Management Trojan Horse? PP+G Review 2(2) http://ppgr.files.wordpress.com/2011/05/longo-ostry.pdf (last accessed 16 Sept 2013)

McClean, T. (2011) Not with a bang but a whimper: The politics of accountability and open data in the UK. Paper prepared for the American Political Science Association Annual Meeting. Seattle, Washington, 1-4 September 2011. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1899790 (last accessed 19th August 2013)

Pollock, R. (2006) The value of the public domain.  IPPR. http://www.ippr.org/publication/55/1526/the-value-of-the-public-domain (last accessed 13 August 2013)

Pollock, R. (2009) The economics of public information.  Cambridge Working Papers in Economics 0920. http://www.econ.cam.ac.uk/research/repec/cam/pdf/cwpe0920.pdf (last accessed 13 August 2013)

Porway, J. (2013) You can’t just hack your way to social change.  Harvard Business Review Blog, 7 March 2013 http://blogs.hbr.org/cs/2013/03/you_cant_just_hack_your_way_to.html (last accessed 9 March 2013)

Shah, N. (2013) Big data, people’s lives, and the importance of openness. DMLcentral, June 24th. http://dmlcentral.net/blog/nishant-shah/big-data-peoples-lives-and-importance-openness (last accessed 25 July 2013)

Slee, T. (2012) Seeing like a geek. Crooked Timber. June 25th http://crookedtimber.org/2012/06/25/seeing-like-a-geek/ (last accessed 18 September 2013)

Yiu, C. (2012) A right to data: Fulfilling the promise of open public data in the UK.  Policy Exchange Research Note. http://www.policyexchange.org.uk/publications/category/item/a-right-to-data-fulfilling-the-promise-of-open-public-data-in-the-uk (last accessed 14 August 2013)