Author Archives: Rob Kitchin

New paper: Smart cities, urban technocrats, epistemic communities and advocacy coalitions

 

Rob Kitchin, Claudio Coletta, Leighton Evans, Liam Heaphy and Darach MacDonncha have published a new working paper, ‘Smart cities, urban technocrats, epistemic communities and advocacy coalitions’, on SocArXiv. It was prepared for the ‘A New Technocracy’ workshop, University of Amsterdam, March 20-21, 2017.

Abstract
In this paper, we argue that the ideas, ideals and the rapid proliferation of smart city rhetoric and initiatives globally have been facilitated and promoted by three inter-related communities. The first is a new set of ‘urban technocrats’ – chief innovation/technology/data officers, project managers, consultants, designers, engineers, change-management civil servants, and academics – many of whom have become embedded in city administrations. The second is a smart cities ‘epistemic community’; that is, a network of knowledge and policy experts who share a worldview and a common set of normative beliefs, values and practices with respect to addressing urban issues, and who work to help decision-makers identify and deploy technological solutions to city problems. The third is a wider ‘advocacy coalition’ of smart city stakeholders and vested interests who collaborate to promote the uptake and embedding of a smart city approach to urban management and governance. We examine the roles of these new urban technocrats and the multiscalar formation and operation of a smart cities epistemic community and advocacy coalition, detailing a number of institutional networks at global, supra-national, national, and local scales. In the final section, we consider the translation of the ideas and practices of the smart city into the policies and work of city administrations. In particular, we consider what might be termed the ‘last mile problem’: the reasons why, despite a vast and active set of technocrats, epistemic communities and advocacy coalitions, smart city initiatives have yet to be fully mainstreamed and the smart city mission successfully realized in cities across the globe. We illustrate this last mile problem through a discussion of plans to introduce smart lighting in Dublin.

Key words: smart cities, epistemic community, advocacy coalition, technocrats, urban governance, city administration, smart lighting

The paper can be downloaded here

 

New book: Understanding Spatial Media

A new book, Understanding Spatial Media, edited by Rob Kitchin, Tracey Lauriault and Matt Wilson, has been published by Sage. The book started life as a conversation at the launch of the Programmable City project. It includes 22 chapters detailing forms of spatial media and their consequences, including discussions of the geoweb, neogeography, volunteered geographic information, locative media, spatial big data, surveillance, privacy, openness, transparency, etc. Here’s the back cover blurb:

“Over the past decade, a new set of interactive, open, participatory and networked spatial media have become widespread.  These include mapping platforms, virtual globes, user-generated spatial databases, geodesign and architectural and planning tools, urban dashboards and citizen reporting geo-systems, augmented reality media, and locative media.  Collectively these produce and mediate spatial big data and are re-shaping spatial knowledge, spatial behaviour, and spatial politics.

Understanding Spatial Media brings together leading scholars from around the globe to examine these new spatial media, their attendant technologies, spatial data, and their social, economic and political effects.

The 22 chapters are divided into the following sections:

  • Spatial media technologies
  • Spatial data and spatial media
  • The consequences of spatial media

Understanding Spatial Media is the perfect introduction to this fast emerging phenomena for students and practitioners of geography, urban studies, data science, and media and communications.”

Contributors: Britta Ricker, Jeremy Crampton, Mark Graham, Jim Thatcher, Jessa Lingel, Shannon Mattern, Stephen Ervin, Dan Sui, Gavin McArdle, Muki Haklay, Peter Pulsifer, Glenn Brauen, Harvey Miller, Teresa Scassa, Leighton Evans, Sung-Yueh Perng, Mary Francoli, Mike Batty, Francisco Klauser, Sarah Widmar, David Murakami Wood, and Agnieszka Leszczynski.

Thanks to Lev Manovich for permission to use an image from the On Broadway project for the cover.

Details about the book can be found here.

Rob Kitchin

New paper: The (in)security of smart cities: vulnerabilities, risks, mitigation and prevention

Rob Kitchin and Martin Dodge have published a new Programmable City working paper (No. 24), ‘The (in)security of smart cities: vulnerabilities, risks, mitigation and prevention’, on SocArXiv.

Abstract: In this paper we examine the current state of play with regard to the security of smart city initiatives. Smart city technologies are promoted as an effective way to counter and manage uncertainty and urban risks through the effective and efficient delivery of services, yet paradoxically they create new vulnerabilities and threats, including making city infrastructure and services insecure, brittle, and open to extended forms of criminal activity. This paradox has largely been ignored or underestimated by commercial and governmental interests, or tackled through a technically mediated mitigation approach. We identify five forms of vulnerability with respect to smart city technologies, detail the present extent of cyberattacks on networked infrastructure and services, and present a number of illustrative examples. We then adopt a normative approach to explore existing mitigation strategies, suggesting a wider set of systemic interventions (including security-by-design, remedial security patching and replacement, the formation of core security and computer emergency response teams, a change in procurement procedures, and continuing professional development). We discuss how this approach might be enacted and enforced through market-led and regulation/management measures, and examine a more radical preventative approach to security.

Keywords: crime, cyberattacks, mitigation, risk, security, smart cities, urban resilience

Download here

 

New paper: Urban informatics, governmentality and the logics of urban control

Rob Kitchin, Claudio Coletta and Gavin McArdle have published a new Programmable City working paper (No. 25), ‘Urban informatics, governmentality and the logics of urban control’, on SocArXiv.

Abstract: In this paper, we examine the governmentality and logics of urban control enacted through smart city technologies. Several commentators have noted that the implementation of algorithmic forms of urban governance that utilize big data greatly intensifies the extent and frequency of monitoring populations and systems, and shifts the governmental logic from surveillance and discipline to capture and control. In other words, urban governmentality is shifting from subjectification – molding subjects and restricting action – to modulating affects, desires and opinions, and inducing action within prescribed comportments. We examine this contention through two forms of urban informatics and their use in urban governance: city dashboards and urban control rooms. In particular, we draw on empirical analysis of the governmental logics of the Dublin Dashboard, a public, analytical dashboard that displays a wide variety of urban data, and the Dublin Traffic Management and Incident Centre (TMIC) and its use of SCATS (Sydney Coordinated Adaptive Traffic System) to control the flow of traffic in the city. We argue that there is no one governmentality being enacted by smart city technologies; rather, they have mutable logics which are abstract, mobile, dynamic, entangled and contingent, being translated and operationalized in diverse, context-dependent ways. As such, just as disciplinary power never fully supplanted sovereign power, control supplements rather than replaces discipline.


Download here

New paper: Urban science: a short primer

Rob Kitchin has published a new Programmable City working paper (No. 23), ‘Urban science: a short primer’, on SocArXiv.

Abstract: This paper provides a short introductory overview of urban science. It defines urban science, details its practitioners and their aims, sets out its relationship to urban informatics and urban studies, and explains its epistemology and the analysis of urban big data. It then summarizes criticisms of urban science with respect to epistemology, instrumental rationality, data issues, and ethics. It is concluded that urban science research will continue to grow for the foreseeable future, providing a valuable means of making sense of cities, but that it is unlikely to become a new paradigm producing an integrative approach that replaces the diverse philosophical traditions within urban studies.

Download here

 

The limits of social media big data

A new book chapter by Rob Kitchin has been published in The Sage Handbook of Social Media Research Methods, edited by Luke Sloan and Anabel Quan-Haase. The chapter is titled ‘Big data – hype or revolution’ and provides a general introduction to big data, new epistemologies and data analytics, with the latter part focusing on social media data. The text below is a sample taken from a section titled ‘The limits of social media big data’.

The discussion so far has argued that there is something qualitatively different about big data compared with small data, and that it opens up new epistemological possibilities, some of which have more value than others. In general terms, it has been intimated that big data does represent a revolution in measurement that will inevitably lead to a revolution in how academic research is conducted; that big data studies will replace small data ones. However, this is unlikely to be the case for a number of reasons.

Whilst small data may be limited in volume and velocity, they have a long history of development across science, state agencies, non-governmental organizations and business, with established methodologies and modes of analysis, and a record of producing meaningful answers. Small data studies can be much more finely tailored to answer specific research questions and to explore in detail and in depth the varied, contextual, rational and irrational ways in which people interact and make sense of the world, and how processes work. Small data can focus on specific cases and tell individual, nuanced and contextual stories.

Big data are often repurposed to try to answer questions for which they were never designed. For example, geotagged Twitter data were not produced to provide answers with respect to the geographical concentration of language groups in a city and the processes driving such spatial autocorrelation. We should perhaps not be surprised, then, that they provide only a surface snapshot, albeit an interesting one, rather than deep penetrating insights into the geographies of race, language, agglomeration and segregation in particular locales. Moreover, big data might seek to be exhaustive, but as with all data they are both a representation and a sample. What data are captured is shaped by: the field of view/sampling frame (where data capture devices are deployed and what their settings/parameters are; who uses a space or media, e.g., who belongs to Facebook); the technology and platform used (different surveys, sensors, lenses, textual prompts, layouts, etc. all produce variances and biases in what data are generated); the context in which data are generated (unfolding events mean data are always situated with respect to circumstance); the data ontology employed (how the data are calibrated and classified); and the regulatory environment with respect to privacy, data protection and security (Kitchin, 2013, 2014a). Further, big data generally capture what is easy to ensnare – data that are openly expressed (what is typed, swiped, scanned, sensed, etc.; people’s actions and behaviours; the movement of things) – as well as data that are the ‘exhaust’, a by-product, of the primary task/output.

Small data studies then mine gold from working a narrow seam, whereas big data studies seek to extract nuggets through open-pit mining, scooping up and sieving huge tracts of land. These two approaches of narrow versus open mining have consequences with respect to data quality, fidelity and lineage. Given the limited sample sizes of small data, three things are of paramount importance: data quality – how clean (error and gap free), objective (bias free) and consistent (few discrepancies) the data are; veracity – the authenticity of the data and the extent to which they accurately (precision) and faithfully (fidelity, reliability) represent what they are meant to; and lineage – documentation that establishes provenance and fitness for use (Lauriault, 2012). In contrast, it has been argued by some that big data studies do not need the same standards of data quality, veracity and lineage because the exhaustive nature of the dataset removes sampling biases and more than compensates for any errors, gaps or inconsistencies in the data or weaknesses in fidelity (Mayer-Schonberger and Cukier, 2013). The argument for such a view is that ‘with less error from sampling we can accept more measurement error’ (p. 13) and ‘tolerate inexactitude’ (p. 16).
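The trade-off Mayer-Schonberger and Cukier invoke – accepting more measurement error in exchange for less sampling error – can be sketched with a toy simulation (the population, sample sizes and noise levels below are invented purely for illustration):

```python
import random
import statistics

random.seed(42)

# Hypothetical population of 1,000,000 values with true mean ~50.
population = [random.gauss(50, 10) for _ in range(1_000_000)]
true_mean = statistics.mean(population)

# Small, carefully measured sample: 100 exact observations.
small_sample = random.sample(population, 100)

# Big, messy sample: 100,000 observations, each corrupted with
# unbiased measurement noise (sd 5) to mimic dirty, exhaust-style data.
big_sample = [x + random.gauss(0, 5) for x in random.sample(population, 100_000)]

small_err = abs(statistics.mean(small_sample) - true_mean)
big_err = abs(statistics.mean(big_sample) - true_mean)

print(f"small clean sample error: {small_err:.3f}")
print(f"big noisy sample error:   {big_err:.3f}")
```

The big noisy sample typically estimates the mean more accurately, because sampling error shrinks as n grows while unbiased noise averages out. The crucial caveat is that this only holds when the error is unbiased: systematic bias (e.g., a skewed sampling frame) does not cancel, no matter how large the dataset.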

Nonetheless, the warning ‘garbage in, garbage out’ still holds. The data can be biased due to the demographic being sampled (e.g., not everybody uses Twitter), or the data might be gamed or faked through false accounts or hacking (e.g., there are hundreds of thousands of fake Twitter accounts seeking to influence trending and direct clickstream trails) (Bollier, 2010; Crampton et al., 2012). Moreover, the technologies being used and their working parameters can affect the nature of the data. For example, which posts on social media are most read or shared is strongly affected by ranking algorithms, not simply interest (Baym, 2013). Similarly, APIs structure what data are extracted, for example, capturing only tweets with specific hashtags associated with an event rather than all relevant tweets (Bruns, 2013), with González-Bailón et al. (2012) finding that different methods of accessing Twitter data – search APIs versus streaming APIs – produced quite different sets of results. As a consequence, there is no guarantee that two teams of researchers attempting to gather the same data at the same time will end up with identical datasets (Bruns, 2013). Further, the choice of which metadata and variables are generated, and which are ignored, paints a particular picture (Graham, 2012). With respect to fidelity, there are question marks as to the extent to which social media posts really represent people’s views and the faith that should be placed in them. Manovich (2011: 6) warns that ‘[p]eoples’ posts, tweets, uploaded photographs, comments, and other types of online participation are not transparent windows into their selves; instead, they are often carefully curated and systematically managed’.
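The point that two teams attempting to gather the same data can end up with non-identical datasets can be illustrated with a toy simulation of a rate-limited API (a hypothetical ~1% random sampler, not the real Twitter API):

```python
import random

# Hypothetical stream of 200,000 messages; roughly 2% carry a rare tag.
random.seed(7)
stream = ["#rare" if random.random() < 0.02 else "#common"
          for _ in range(200_000)]

def one_percent_sample(msgs, seed):
    """Simulate an API endpoint returning a random ~1% slice of the stream."""
    rng = random.Random(seed)
    return [m for m in msgs if rng.random() < 0.01]

# Two "identical" collection efforts, run independently.
team_a = one_percent_sample(stream, seed=1)
team_b = one_percent_sample(stream, seed=2)

print(len(team_a), len(team_b))                      # compare sample sizes
print(team_a.count("#rare"), team_b.count("#rare"))  # compare rare-tag counts
```

Because each team sees a different random slice, their datasets and their estimates of the rare tag’s prevalence diverge – a mild version of the search-API versus streaming-API discrepancies that González-Bailón et al. report.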

There are also issues of access to both small and big data. Small data produced by academia, public institutions, non-governmental organizations and private entities can be restricted in access, limited in use to defined personnel, or available for a fee or under license. Increasingly, however, public institution and academic data are becoming more open. Big data are, with a few exceptions such as satellite imagery and national security and policing, mainly produced by the private sector. Access is usually restricted behind paywalls and proprietary licensing, limited to ensure competitive advantage and to leverage income through their sale or licensing (CIPPIC, 2006). Indeed, it is something of a paradox that only a handful of entities are drowning in the data deluge (boyd and Crawford, 2012), and companies such as mobile phone operators, app developers, social media providers, financial institutions, retail chains, and surveillance and security firms are under no obligation to share freely the data they collect through their operations. In some cases, a limited amount of the data might be made available to researchers or the public through Application Programming Interfaces (APIs). For example, Twitter allows a few companies to access its firehose (stream of data) for a fee for commercial purposes (and has the latitude to dictate terms with respect to what can be done with such data), but with a handful of exceptions researchers are restricted to a ‘gardenhose’ (c. 10 percent of public tweets), a ‘spritzer’ (c. one percent of public tweets), or to different subsets of content (‘white-listed’ accounts), with private and protected tweets excluded in all cases (boyd and Crawford, 2012). The worry is that the insights that privately owned and commercially sold big data can provide will be limited to a privileged set of academic researchers whose findings cannot be replicated or validated (Lazer et al., 2009).

Given the relative strengths and limitations of big and small data, it is fair to say that small data studies will continue to be an important element of the research landscape, despite the benefits that might accrue from using big data such as social media data. However, small data studies will increasingly come under pressure to utilize new archiving technologies and to be scaled up within digital data infrastructures, so that they are preserved for future generations, become accessible for re-use and combination with other small and big data, and yield more value and insight through the application of big data analytics.

Rob Kitchin