Tag Archives: big data

New book: Data and the City

A new book – Data and the City – edited by Rob Kitchin, Tracey Lauriault and Gavin McArdle has been published by Routledge as part of the Regions and Cities series.  The book is one of the outputs from a Progcity workshop in late 2015.

There is a long history of governments, businesses, science and citizens producing and utilizing data in order to monitor, regulate, profit from and make sense of the urban world. Recently, we have entered the age of big data, and now many aspects of everyday urban life are being captured as data and city management is mediated through data-driven technologies.

Data and the City is the first edited collection to provide an interdisciplinary analysis of how this new era of urban big data is reshaping how we come to know and govern cities, and the implications of such a transformation. This book looks at the creation of real-time cities and data-driven urbanism and considers the relationships at play. By taking a philosophical, political, practical and technical approach to urban data, the authors analyse the ways in which data is produced and framed within socio-technical systems. They then examine the constellation of existing and emerging urban data technologies. The volume concludes by considering the social and political ramifications of data-driven urbanism, questioning whom it serves and for what ends. It will be crucial reading for those who wish to understand and conceptualize urban big data, data-driven urbanism and the development of smart cities.

The book includes chapters by Martijn De Waal, Mike Batty, Teresa Scassa, Jim Thatcher and Craig Dalton, Jim Merricks White, Dietmar Offenhuber, Pouria Amirian and Anahid Bassiri, Chris Speed, Deborah Maxwell and Larissa Pschetz, Till Straube, Jo Bates, Evelyn Ruppert, Muki Haklay, as well as the editors.

Data and the City is available in both paperback and hardback and is a companion volume to Code and the City published last year.

CFP: Slow computing: A workshop on resistance in the algorithmic age

Call for Papers

One-day workshop, Maynooth University, Ireland, December 14th, 2017

 Hosted by the Programmable City project at Maynooth University Social Sciences Institute and the Department of Geography

In line with the parallel concepts of slow food (e.g. Miele & Murdoch 2002) or slow scholarship (Mountz et al 2015), ‘slow computing’ (Fraser 2017) is a provocation to resist. The idea of ‘slow computing’ prompts users of contemporary technologies to consider ways of refusing the invitation to enroll in data-grabbing architectures – constituted in complex, overlapping ways by today’s technology services and devices – and of accepting greater levels of inconvenience in pursuit of data security, privacy, and even a degree of isolation from the online worlds of social networks.

The case for slow computing arises from the emerging form and nature of ‘the algorithmic age.’ As is widely noted across the sciences today (e.g. see Boyd & Crawford 2012; Kitchin 2014), the algorithmic age is propelled forward by a wide range of firms and government agencies pursuing the roll-out of data-driven and data-demanding technologies. The effects are varied, differentiated, and heavily debated. However, one obvious effect entails the re-formatting of consumers into data producers who (knowingly or unwittingly) generate millions of data points that technology firms can crunch and manipulate to understand specific markets and society as a whole, not to mention the public and private lives of everyday users. Once these users are dispossessed of the value they help create (Thatcher et al 2016), and then conceivably targeted in nefarious ways by advertisers and political campaigners (e.g. see Winston 2016), the subsequent implications for economic and democratic life are potentially far-reaching.

As such, as we move further into a world of ‘big data’ and the so-called ‘digital economy,’ there is a need to ask how individuals – as well as civil society organizations, small firms, small-scale farmers, and many others – might continue to make appropriate and fruitful use of today’s technologies, but while also trying to avoid becoming another data point in the new data-aggregating market. Does slow computing offer a way to navigate the algorithmic age while taking justice seriously? Can slow computing become a part of diverse strategies or tactics of resistance today? Just what are the possibilities and limitations of slow computing?

This one-day workshop invites participation from scholars, practitioners, artists and others who might be exploring these or other related questions about slow computing. Papers might contain explorations of:

  • Slow computing practices (whether using auto-ethnography, ethnography, or other qualitative or quantitative methodologies);
  • How slow computing technologies could be designed for private or public institutions;
  • The challenges facing actors who try to unplug, shield, or silo data or other products of social life from the digital economy;
  • The socio-political possibilities emerging from efforts to avoid data-grabbing architectures;
  • Efforts to raise awareness about the privacy implications of contemporary data-grabbing technologies.

Confirmed keynote speaker: Prof. Stefania Milan, University of Amsterdam

Those interested in participating should send a proposed title and abstract of no more than 250 words to Dr. Alistair Fraser – alistair.fraser@mu.ie – by September 29th 2017. Informal enquiries about the workshop can also be sent to the co-organizer, Prof. Rob Kitchin: rob.kitchin@mu.ie

Works cited:

Boyd, D. and Crawford, K. 2012. Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society. doi:10.1080/1369118X.2012.678878

Fraser, A. 2017. Land Grab / Data Grab. SocArXiv. May 19. osf.io/preprints/socarxiv/9utyh.

Kitchin, R. 2014. Big Data, new epistemologies and paradigm shifts. Big Data & Society. doi:10.1177/2053951714528481

Miele, M. and Murdoch, J., 2002. The practical aesthetics of traditional cuisines: slow food in Tuscany. Sociologia Ruralis, 42(4), pp.312-328. doi: 10.1111/1467-9523.00219

Mountz, A., Bonds, A., Mansfield, B., Loyd, J., Hyndman, J., Walton-Roberts, M., Basu, R., Whitson, R., Hawkins, R., Hamilton, T. and Curran, W., 2015. For slow scholarship: A feminist politics of resistance through collective action in the neoliberal university. ACME: An International Journal for Critical Geographies, 14(4), pp.1235-1259.

Thatcher, J., O’Sullivan, D. and Mahmoudi, D. 2016. Data colonialism through accumulation by dispossession: New metaphors for daily data. Environment & Planning D: Society & Space 34: 990-1006. doi: 10.1177/0263775816633195

Winston, J. 2016. How the Trump campaign built an identity database and used Facebook ads to win the election. Startup Grind, Nov 18.

Ulysses Workshop “Reshaping Cities through Data and Experiments” – Introduction

[This text is the introduction to the "Reshaping Cities through Data and Experiments" workshop held at Maynooth University on the 30th of May, which was the first part of an Ulysses research exchange between researchers from the Centre de Sociologie de l’Innovation (i3-CSI) at the École des Mines in Paris and researchers from MUSSI-NIRSA at Maynooth University, Ireland. UPDATE: The videos of the presentations are now available as the following separate posts: session 1, session 2, session 3]

Introduction: Why smart cities, why data and experiments


Our aim is to advance the understanding of contemporary cities in relation to urban data and experimentation, creating a link between “The Programmable City” (Maynooth) and “City Experiments” (“CitEx”, Paris). In particular, we want to initiate a transdisciplinary discussion on the theoretical, methodological and empirical issues related to experimental and data-driven approaches to urban development and living. This conversation is vital at a time when cities all over the world – from Singapore to San Francisco, Medellin and Dublin, as we shall see – are increasingly turning into public-private testbeds and living labs, where urban development projects merge with the design of cyber-infrastructures to test new services and new forms of engagement for urban innovation and economic development. These new forms of interaction between algorithms, planning practices and governance processes raise crucial questions for researchers about how everyday life, civic engagement and urban change are shaped in contemporary cities. Our approach is to study smart cities as the unstable and uncertain product of ongoing interactions of data and experiments.

There is, first, a pragmatic reason. In many cases, being responsible for taxpayers’ money, city administrations need to spend their budgets very carefully while thinking about possible futures. This raises a problem of skills, knowledge and expertise: what do public bodies know about available technologies and the state of the art? How should they procure them? How should they test them? Once procured and tested, how can they know that a specific technology would work in actual urban settings? What knowledge do data enable, and what do they obscure? How can a rolled-out service be maintained over time?

Thus, experimentation and data become a way to engage new actors, with new kinds of expertise and skills, in the public realm so as to test projects before committing to large-scale roll-out.

But the pragmatic reason is deeply connected with a theoretical and methodological one. Sociologists of science and technology are fond of saying that the laboratory is now the world. This does not mean that the world should be treated as a mere copy of a laboratory; rather, it is an invitation to expand and unfold the idea of the laboratory from an organizational, technical and political perspective. In terms of the smart city discourse, it involves at least three intertwined issues: a problem related to organizational processes and rationalities (how data and experiments interact with organizational change), a problem related to technological rationalities (data and experiments are not neutral), and a problem related to political rationalities (what are the implications for democracy), all combined and making the smart city discourse complex and undetermined.

Experiments represent a unique place of encounter between theory and practice, allowing us to observe smart urbanism in the making, to look at the dynamic apparatus of practices, infrastructures, knowledge, narratives, bodies, and so on, and to try to distinguish between good and bad ways of combining data and experiments.

This is where our work in Maynooth University and in the Programmable City project on big data assemblages (Kitchin 2014), algorithmic governance (Coletta and Kitchin 2017), smart city development processes (Coletta, Heaphy and Kitchin 2017), hacktivism and civic engagement (Perng and Kitchin 2016) matches the work that David and his colleagues are doing at CSI.


I shall start with a remark: compared to what has been done by colleagues here at MUSSI-NIRSA in Maynooth on cities and data, we have actually done very little. We have only recently become involved in projects on cities and urban settings. As you might know, the CSI is well known in science and technology studies (STS), especially for its contribution to the early laboratory studies. Our CitEx project clearly draws on this background, notably on what we consider to be two important results.

In Laboratory Life, Bruno Latour and Steve Woolgar (1979) examined in minute detail scientists working at the bench, performing experiments, discussing results, and writing publications. What is interesting for us here is to consider the laboratory as a peculiar place: both a controlled environment configured to conduct experiments and to envision their replication and dissemination, and a site designed to elaborate new knowledge and to perform demonstrations. Yet the laboratory is not the only significant site to be investigated. As Michel Callon and his colleagues (1989) clearly emphasized in La science et ses réseaux, scientific facts would be nothing without the crucial part heterogeneous networks play in their production and dissemination. What we learned here is the various ways in which the results of experiments are not only tightly linked to economic networks, but also contribute to performing political orderings. To put it roughly, these are the two main arguments on scientific experiments from which we started to elaborate our CitEx project; these are our basics, so to speak.

This being said, some works on cities and urban settings have already taken place at the CSI, and they directly inspire our ongoing CitEx project. Obviously, the book Paris, the invisible city (Latour and Hermant 1998), which focuses on the heterogeneous infrastructures that make Paris work and stand as a city on a daily basis, is particularly relevant in this regard. Contemporary experiments in urban settings are based on existing infrastructures, dedicated to urban mobility or to data processing and storage, or to both, as is often the case. The study of subway signs in Paris as an immobile informational infrastructure, designed and maintained every day in order to ease the flow of riders, is particularly telling: by shaping both users’ positions and particular conditions of a public space, subway signs participate in the enactment of a specific political ordering (Denis and Pontille 2010). But some experiments may also be focused on the infrastructure itself. This is what we investigated more recently, examining the introduction of a fleet of 50 electric cars as part of a car-sharing system without fixed stations (Laurent and Tironi 2015). Not only were sociotechnical instruments mobilized to explore social and technical uncertainties and to produce public demonstrations, but what was actually tested also changed during the project.

The CitEx project has been elaborated at the crossroads of STS and urban studies because, we argue, experiments are a stimulating research site. Tightly coupled with the production and use of data, experiments constitute a particular entry point to explore how parts of contemporary cities are currently constituted as laboratories to test various new technologies and infrastructures, as well as forms of urban assemblage and modes of government.

This is why we believe the collaboration with Claudio and his colleagues involved in “the programmable city” project will be fruitful and stimulating.

Claudio Coletta and David Pontille


We are grateful to the IRC, Ambassade de France in Ireland and the Maynooth University Social Sciences Institute for their generous support and for making possible this event.


Callon M (1989) La science et ses réseaux: genèse et circulation des faits scientifiques. Éditions La Découverte.

Coletta C and Kitchin R (In press) Algorhythmic governance: Regulating the ‘heartbeat’ of a city using the Internet of Things. Big Data and Society, Special Issue on “Algorithms in Culture”. Pre-print available at https://osf.io/bp7c4/

Coletta, C., Heaphy, L. and Kitchin, R. (2017) From the accidental to articulated smart city: The creation and work of ‘Smart Dublin’. Programmable City Working Paper 29 https://osf.io/preprints/socarxiv/93ga5

Denis J and Pontille D (2010). The Graphical Performation of a Public Space. The Subway Signs and their Scripts, in G. Sonda, C. Coletta, F. Gabbi (eds.) Urban Plots, Organizing Cities. Ashgate, pp. 11-22.

Kitchin R (2014) The data revolution: Big data, open data, data infrastructures and their consequences. Sage.

Laurent B and Tironi M (2015) A field test and its displacements. Accounting for an experimental mode of industrial innovation. CoDesign 11(3–4): 208–221.

Latour B and Woolgar S (1986) Laboratory Life: The Construction of Scientific Facts. Princeton University Press.

Latour B and Hermant E (1998) Paris: Ville Invisible. Éditions La Découverte.

Perng SY and Kitchin R (2016, online first) Solutions and frictions in civic hacking: Collaboratively designing and building a queuing app for an immigration office. Social and Cultural Geography.

New paper: Land grab / data grab

Our colleague from the Geography Department at Maynooth University, Alistair Fraser, has published a fascinating paper as a Progcity working paper (31) – Land grab / data grab. Focusing on the use of big data in food production, he develops two useful conceptual tools, ‘data grab’ and ‘data sovereignty’, using them to explore ‘precision agriculture’ and the notions that data is a ‘new cash crop’ and ‘the new soil’.


Developments in the area of ‘precision agriculture’ are creating new data points (about flows, soils, pests, climate) that agricultural technology providers ‘grab,’ aggregate, compute, and/or sell. Food producers now churn out food and, increasingly, data. ‘Land grabs’ on the horizon in the global south are bound up with the dynamics of data production and grabbing, although researchers have not, as yet, revealed enough about the people and projects caught up in this new arena. Against this backdrop, this paper examines some of the key issues taking shape, while highlighting new frontiers for research and introducing a concept of ‘data sovereignty,’ which food sovereignty practitioners (and others) need to consider.


The limits of social media big data

A new book chapter by Rob Kitchin has been published in The Sage Handbook of Social Media Research Methods, edited by Luke Sloan and Anabel Quan-Haase. The chapter is titled ‘Big data – hype or revolution’ and provides a general introduction to big data, new epistemologies and data analytics, with the latter part focusing on social media data.  The text below is a sample taken from a section titled ‘The limits of social media big data’.

The discussion so far has argued that there is something qualitatively different about big data from small data and that it opens up new epistemological possibilities, some of which have more value than others. In general terms, it has been intimated that big data does represent a revolution in measurement that will inevitably lead to a revolution in how academic research is conducted; that big data studies will replace small data ones. However, this is unlikely to be the case for a number of reasons.

Whilst small data may be limited in volume and velocity, they have a long history of development across science, state agencies, non-governmental organizations and business, with established methodologies and modes of analysis, and a record of producing meaningful answers. Small data studies can be much more finely tailored to answer specific research questions and to explore in detail and in-depth the varied, contextual, rational and irrational ways in which people interact and make sense of the world, and how processes work. Small data can focus on specific cases and tell individual, nuanced and contextual stories.

Big data is often being repurposed to try and answer questions for which it was never designed. For example, geotagged Twitter data have not been produced to provide answers with respect to the geographical concentration of language groups in a city and the processes driving such spatial autocorrelation. We should perhaps not be surprised then that it only provides a surface snapshot, albeit an interesting snapshot, rather than deep penetrating insights into the geographies of race, language, agglomeration and segregation in particular locales. Moreover, big data might seek to be exhaustive, but as with all data they are both a representation and a sample. What data are captured is shaped by: the field of view/sampling frame (where data capture devices are deployed and what their settings/parameters are; who uses a space or media, e.g., who belongs to Facebook); the technology and platform used (different surveys, sensors, lens, textual prompts, layout, etc. all produce variances and biases in what data are generated); the context in which data are generated (unfolding events mean data are always situated with respect to circumstance); the data ontology employed (how the data are calibrated and classified); and the regulatory environment with respect to privacy, data protection and security (Kitchin, 2013, 2014a). Further, big data generally capture what is easy to ensnare – data that are openly expressed (what is typed, swiped, scanned, sensed, etc.; people’s actions and behaviours; the movement of things) – as well as data that are the ‘exhaust’, a by-product, of the primary task/output.

Small data studies then mine gold from working a narrow seam, whereas big data studies seek to extract nuggets through open-pit mining, scooping up and sieving huge tracts of land. These two approaches of narrow versus open mining have consequences with respect to data quality, fidelity and lineage. Given the limited sample sizes of small data, data quality – how clean (error and gap free), objective (bias free) and consistent (few discrepancies) the data are; veracity – the authenticity of the data and the extent to which they accurately (precision) and faithfully (fidelity, reliability) represent what they are meant to; and lineage – documentation that establishes provenance and fit for use; are of paramount importance (Lauriault, 2012). In contrast, it has been argued by some that big data studies do not need the same standards of data quality, veracity and lineage because the exhaustive nature of the dataset removes sampling biases and more than compensates for any errors or gaps or inconsistencies in the data or weakness in fidelity (Mayer-Schonberger and Cukier, 2013). The argument for such a view is that ‘with less error from sampling we can accept more measurement error’ (p.13) and ‘tolerate inexactitude’ (p. 16).

Nonetheless, the warning ‘garbage in, garbage out’ still holds. The data can be biased due to the demographic being sampled (e.g., not everybody uses Twitter) or the data might be gamed or faked through false accounts or hacking (e.g., there are hundreds of thousands of fake Twitter accounts seeking to influence trending and direct clickstream trails) (Bollier, 2010; Crampton et al., 2012). Moreover, the technologies being used and their working parameters can affect the nature of the data. For example, which posts on social media are most read or shared is strongly affected by ranking algorithms, not simply interest (Baym, 2013). Similarly, APIs structure what data are extracted, for example, in Twitter only capturing specific hashtags associated with an event rather than all relevant tweets (Bruns, 2013), with González-Bailón et al. (2012) finding that different methods of accessing Twitter data – search APIs versus streaming APIs – produced quite different sets of results. As a consequence, there is no guarantee that two teams of researchers attempting to gather the same data at the same time will end up with identical datasets (Bruns, 2013). Further, the choice of metadata and variables that are being generated, and which ones are being ignored, paints a particular picture (Graham, 2012). With respect to fidelity there are question marks as to the extent to which social media posts really represent people’s views and the faith that should be placed in them. Manovich (2011: 6) warns that ‘[p]eoples’ posts, tweets, uploaded photographs, comments, and other types of online participation are not transparent windows into their selves; instead, they are often carefully curated and systematically managed’.

There are also issues of access to both small and big data. Small data produced by academia, public institutions, non-governmental organizations and private entities can be restricted in access, limited in use to defined personnel, or available for a fee or under license. Increasingly, however, public institutional and academic data are becoming more open. Big data are, with a few exceptions such as satellite imagery and national security and policing, mainly produced by the private sector. Access is usually restricted behind pay walls and proprietary licensing, limited to ensure competitive advantage and to leverage income through their sale or licensing (CIPPIC, 2006). Indeed, it is somewhat of a paradox that only a handful of entities are drowning in the data deluge (boyd and Crawford, 2012) and companies such as mobile phone operators, app developers, social media providers, financial institutions, retail chains, and surveillance and security firms are under no obligation to share freely the data they collect through their operations. In some cases, a limited amount of the data might be made available to researchers or the public through Application Programming Interfaces (APIs). For example, Twitter allows a few companies to access its firehose (stream of data) for a fee for commercial purposes (and has the latitude to dictate terms with respect to what can be done with such data), but with a handful of exceptions researchers are restricted to a ‘gardenhose’ (c. 10 percent of public tweets), a ‘spritzer’ (c. one percent of public tweets), or to different subsets of content (‘white-listed’ accounts), with private and protected tweets excluded in all cases (boyd and Crawford, 2012). The worry is that the insights that privately owned and commercially sold big data can provide will be limited to a privileged set of academic researchers whose findings cannot be replicated or validated (Lazer et al., 2009).
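The replication worry above can be illustrated with a toy simulation (plain Python, no real Twitter API calls; the stream size and sampling rates are hypothetical stand-ins for the ‘spritzer’): two teams independently drawing a nominal one-percent sample from the same stream end up with datasets that barely overlap.

```python
import random

def sample_stream(tweets, rate, seed):
    """Simulate a rate-limited API sample by independently selecting
    each tweet with the given probability (a crude stand-in for a
    c. 1% 'spritzer' stream)."""
    rng = random.Random(seed)
    return {t for t in tweets if rng.random() < rate}

# A hypothetical population of one million tweet IDs.
tweets = range(1_000_000)

# Two research teams sample the "same" stream at the same nominal rate,
# but their draws are independent (different seeds stand in for
# different connection times / API sessions).
team_a = sample_stream(tweets, rate=0.01, seed=1)
team_b = sample_stream(tweets, rate=0.01, seed=2)

# Jaccard similarity: shared tweets as a fraction of all tweets captured.
overlap = len(team_a & team_b) / len(team_a | team_b)
print(f"Team A: {len(team_a)} tweets, Team B: {len(team_b)} tweets")
print(f"Jaccard overlap between the two samples: {overlap:.4f}")
```

Each team captures roughly the same number of tweets, yet the overlap between their datasets is tiny (on the order of half a percent), so neither team can validate the other’s findings against its own corpus.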

Given the relative strengths and limitations of big and small data it is fair to say that small data studies will continue to be an important element of the research landscape, despite the benefits that might accrue from using big data such as social media data. However, it should be noted that small data studies will increasingly come under pressure to utilize new archiving technologies and to be scaled up within digital data infrastructures, so that they are preserved for future generations, become accessible for re-use and combination with other small and big data, and yield more value and insight through the application of big data analytics.

Rob Kitchin

Video: Data Politics and Internet of Things

In November 2016, CONNECT, The Programmable City and Maynooth University Social Science Institute (MUSSI) invited a panel of international and local experts from different disciplines to explore the broader political, economic and social implications of the Internet of Things.

The panel included Linda Doyle (Trinity College Dublin), Anne Helmond (University of Amsterdam), Aphra Kerr (Maynooth University), Rob Kitchin (Maynooth University), Liz McFall (Open University) and Alison Powell (LSE). The video of the presentations by the panel members and also the discussion afterwards are available to view now.

For more details of the event, please see Science Gallery Dublin’s event page here, or here for a workshop organised earlier in the day.