Category Archives: publications

New paper: The (in)security of smart cities: vulnerabilities, risks, mitigation and prevention

Rob Kitchin and Martin Dodge have published a new Programmable City working paper (No. 24), ‘The (in)security of smart cities: vulnerabilities, risks, mitigation and prevention’, on SocArXiv.

Abstract: In this paper we examine the current state of play with regard to the security of smart city initiatives. Smart city technologies are promoted as an effective way to counter and manage uncertainty and urban risks through the effective and efficient delivery of services, yet paradoxically they create new vulnerabilities and threats, including making city infrastructure and services insecure, brittle, and open to extended forms of criminal activity. This paradox has largely been ignored or underestimated by commercial and governmental interests, or tackled through a technically mediated mitigation approach. We identify five forms of vulnerability with respect to smart city technologies, detail the present extent of cyberattacks on networked infrastructure and services, and present a number of illustrative examples. We then adopt a normative approach to explore existing mitigation strategies, suggesting a wider set of systemic interventions (including security-by-design, remedial security patching and replacement, formation of core security and computer emergency response teams, a change in procurement procedures, and continuing professional development). We discuss how this approach might be enacted and enforced through market-led and regulation/management measures, and examine a more radical preventative approach to security.

Keywords: crime, cyberattacks, mitigation, risk, security, smart cities, urban resilience

Download here

New paper: Urban informatics, governmentality and the logics of urban control

Rob Kitchin, Claudio Coletta and Gavin McArdle have published a new Programmable City working paper (No. 25), ‘Urban informatics, governmentality and the logics of urban control’, on SocArXiv.

Abstract: In this paper, we examine the governmentality and the logics of urban control enacted through smart city technologies. Several commentators have noted that the implementation of algorithmic forms of urban governance that utilize big data greatly intensifies the extent and frequency of monitoring populations and systems and shifts the governmental logic from surveillance and discipline to capture and control. In other words, urban governmentality is shifting from subjectification – molding subjects and restricting action – to modulating affects, desires and opinions, and inducing action within prescribed comportments. We examine this contention through an analysis of two forms of urban informatics – city dashboards and urban control rooms – and their use in urban governance. In particular, we draw on empirical analysis of the governmental logics of the Dublin Dashboard, a public, analytical dashboard that displays a wide variety of urban data, and the Dublin Traffic Management and Incident Centre (TMIC) and its use of SCATS (Sydney Coordinated Adaptive Traffic System) to control the flow of traffic in the city. We argue that there is no one governmentality being enacted by smart city technologies; rather, they have mutable logics which are abstract, mobile, dynamic, entangled and contingent, being translated and operationalized in diverse, context-dependent ways. As such, just as disciplinary power never fully supplanted sovereign power, control supplements rather than replaces discipline.


Download here

New paper: Urban science: a short primer

Rob Kitchin has published a new Programmable City working paper (No. 23), ‘Urban science: a short primer’, on SocArXiv.

Abstract: This paper provides a short introductory overview of urban science. It defines urban science, details its practitioners and their aims, sets out its relationship to urban informatics and urban studies, and explains its epistemology and the analysis of urban big data. It then summarizes criticism of urban science with respect to epistemology, instrumental rationality, data issues, and ethics. It is concluded that urban science research will continue to grow for the foreseeable future, providing a valuable means of making sense of cities, but that it is unlikely to become a new paradigm, producing an integrative approach that replaces the diverse philosophical traditions within urban studies.

Download here

The limits of social media big data

A new book chapter by Rob Kitchin has been published in The Sage Handbook of Social Media Research Methods, edited by Luke Sloan and Anabel Quan-Haase. The chapter is titled ‘Big data – hype or revolution’ and provides a general introduction to big data, new epistemologies and data analytics, with the latter part focusing on social media data. The text below is a sample taken from a section titled ‘The limits of social media big data’.

The discussion so far has argued that there is something qualitatively different about big data compared with small data and that it opens up new epistemological possibilities, some of which have more value than others. In general terms, it has been intimated that big data does represent a revolution in measurement that will inevitably lead to a revolution in how academic research is conducted; that big data studies will replace small data ones. However, this is unlikely to be the case for a number of reasons.

Whilst small data may be limited in volume and velocity, they have a long history of development across science, state agencies, non-governmental organizations and business, with established methodologies and modes of analysis, and a record of producing meaningful answers. Small data studies can be much more finely tailored to answer specific research questions and to explore in detail and in-depth the varied, contextual, rational and irrational ways in which people interact and make sense of the world, and how processes work. Small data can focus on specific cases and tell individual, nuanced and contextual stories.

Big data are often repurposed to try to answer questions for which they were never designed. For example, geotagged Twitter data have not been produced to provide answers with respect to the geographical concentration of language groups in a city and the processes driving such spatial autocorrelation. We should perhaps not be surprised then that they only provide a surface snapshot, albeit an interesting snapshot, rather than deep penetrating insights into the geographies of race, language, agglomeration and segregation in particular locales. Moreover, big data might seek to be exhaustive, but as with all data they are both a representation and a sample. What data are captured is shaped by: the field of view/sampling frame (where data capture devices are deployed and what their settings/parameters are; who uses a space or media, e.g., who belongs to Facebook); the technology and platform used (different surveys, sensors, lenses, textual prompts, layouts, etc. all produce variances and biases in what data are generated); the context in which data are generated (unfolding events mean data are always situated with respect to circumstance); the data ontology employed (how the data are calibrated and classified); and the regulatory environment with respect to privacy, data protection and security (Kitchin, 2013, 2014a). Further, big data generally capture what is easy to ensnare – data that are openly expressed (what is typed, swiped, scanned, sensed, etc.; people’s actions and behaviours; the movement of things) – as well as data that are the ‘exhaust’, a by-product, of the primary task/output.

Small data studies then mine gold from working a narrow seam, whereas big data studies seek to extract nuggets through open-pit mining, scooping up and sieving huge tracts of land. These two approaches of narrow versus open mining have consequences with respect to data quality, fidelity and lineage. Given the limited sample sizes of small data, data quality – how clean (error and gap free), objective (bias free) and consistent (few discrepancies) the data are; veracity – the authenticity of the data and the extent to which they accurately (precision) and faithfully (fidelity, reliability) represent what they are meant to; and lineage – documentation that establishes provenance and fit for use – are of paramount importance (Lauriault, 2012). In contrast, it has been argued by some that big data studies do not need the same standards of data quality, veracity and lineage because the exhaustive nature of the dataset removes sampling biases and more than compensates for any errors, gaps or inconsistencies in the data or weakness in fidelity (Mayer-Schönberger and Cukier, 2013). The argument for such a view is that ‘with less error from sampling we can accept more measurement error’ (p. 13) and ‘tolerate inexactitude’ (p. 16).

Nonetheless, the warning ‘garbage in, garbage out’ still holds. The data can be biased due to the demographic being sampled (e.g., not everybody uses Twitter) or the data might be gamed or faked through false accounts or hacking (e.g., there are hundreds of thousands of fake Twitter accounts seeking to influence trending and direct clickstream trails) (Bollier, 2010; Crampton et al., 2012). Moreover, the technologies being used and their working parameters can affect the nature of the data. For example, which posts on social media are most read or shared is strongly affected by ranking algorithms, not simply interest (Baym, 2013). Similarly, APIs structure what data are extracted, for example, in Twitter only capturing specific hashtags associated with an event rather than all relevant tweets (Bruns, 2013), with González-Bailón et al. (2012) finding that different methods of accessing Twitter data – search APIs versus streaming APIs – produced quite different sets of results. As a consequence, there is no guarantee that two teams of researchers attempting to gather the same data at the same time will end up with identical datasets (Bruns, 2013). Further, the choice of metadata and variables that are being generated and which ones are being ignored paints a particular picture (Graham, 2012). With respect to fidelity, there are question marks as to the extent to which social media posts really represent people’s views and the faith that should be placed in them. Manovich (2011: 6) warns that ‘[p]eoples’ posts, tweets, uploaded photographs, comments, and other types of online participation are not transparent windows into their selves; instead, they are often carefully curated and systematically managed’.
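
To make the point about collection methods concrete, the toy Python sketch below simulates how tracking a single hashtag versus drawing a small random sample from the same stream yields datasets of very different size and composition. It is purely illustrative – it does not use the real Twitter API, and the hashtags, weights and sample rate are invented assumptions, not empirical figures.

```python
import random
from collections import Counter

random.seed(42)

# A synthetic stream of messages about one event. Some carry the "official"
# hashtag, some a variant, and half discuss the event with no hashtag at all.
# (All values here are invented for illustration.)
HASHTAGS = ["#event", "#evnt", None]
stream = [
    {"id": i, "hashtag": random.choices(HASHTAGS, weights=[0.4, 0.1, 0.5])[0]}
    for i in range(100_000)
]

# Method A: track a single hashtag, akin to a keyword/hashtag-based query.
tracked = [msg for msg in stream if msg["hashtag"] == "#event"]

# Method B: a random ~1% sample of the whole stream, akin to a rate-limited sample feed.
sampled = [msg for msg in stream if random.random() < 0.01]

overlap = {msg["id"] for msg in tracked} & {msg["id"] for msg in sampled}

print(f"hashtag-tracked collection: {len(tracked):>6} messages (all tagged '#event')")
print(f"1% random sample:           {len(sampled):>6} messages")
print(f"messages in both:           {len(overlap):>6}")
print("hashtag mix in the 1% sample:", Counter(msg["hashtag"] for msg in sampled))
```

In this toy setup the hashtag-tracked collection contains only explicitly tagged messages and misses all untagged discussion of the event, while the random sample mirrors the stream’s overall mix but is tiny; neither is a neutral window onto the full stream, which is the broader point about API-mediated data capture.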

There are also issues of access to both small and big data. Small data produced by academia, public institutions, non-governmental organizations and private entities can be restricted in access, limited in use to defined personnel, or available for a fee or under license. Increasingly, however, data from public institutions and academia are becoming more open. Big data are, with a few exceptions such as satellite imagery and national security and policing, mainly produced by the private sector. Access is usually restricted behind paywalls and proprietary licensing, limited to ensure competitive advantage and to leverage income through their sale or licensing (CIPPIC, 2006). Indeed, it is somewhat of a paradox that only a handful of entities are drowning in the data deluge (boyd and Crawford, 2012) and companies such as mobile phone operators, app developers, social media providers, financial institutions, retail chains, and surveillance and security firms are under no obligations to share freely the data they collect through their operations. In some cases, a limited amount of the data might be made available to researchers or the public through Application Programming Interfaces (APIs). For example, Twitter allows a few companies to access its firehose (stream of data) for a fee for commercial purposes (and has the latitude to dictate terms with respect to what can be done with such data), but with a handful of exceptions researchers are restricted to a ‘gardenhose’ (c. 10 percent of public tweets), a ‘spritzer’ (c. 1 percent of public tweets), or to different subsets of content (‘white-listed’ accounts), with private and protected tweets excluded in all cases (boyd and Crawford, 2012). The worry is that the insights that privately owned and commercially sold big data can provide will be limited to a privileged set of academic researchers whose findings cannot be replicated or validated (Lazer et al., 2009).

Given the relative strengths and limitations of big and small data, it is fair to say that small data studies will continue to be an important element of the research landscape, despite the benefits that might accrue from using big data such as social media data. However, small data studies will increasingly come under pressure to utilize new archiving technologies and to be scaled up within digital data infrastructures, so that they are preserved for future generations, become accessible for re-use and combination with other small and big data, and can yield more value and insight through the application of big data analytics.

Rob Kitchin

New paper: Algorhythmic governance: Regulating the ‘heartbeat’ of a city using the Internet of Things

Claudio Coletta and Rob Kitchin have published a new Programmable City working paper (No. 22) – Algorhythmic governance: Regulating the ‘heartbeat’ of a city using the Internet of Things – which is due to be delivered at the Algorithms in Culture workshop at the University of California, Berkeley, 1-2 December 2016.

It can be downloaded from: OSF, ResearchGate, Academia

Abstract

To date, research examining the socio-spatial effects of smart city technologies has charted how they are reconfiguring the production of space, spatiality and mobility, and how urban space is governed, but has paid little attention to how the temporality of cities is being reshaped by systems and infrastructure that capture, process and act on real-time data. In this paper, we map out the ways in which city-scale Internet of Things infrastructures, and their associated networks of sensors, meters, transponders, actuators and algorithms, are used to measure, monitor and regulate the polymorphic temporal rhythms of urban life. Drawing on Lefebvre (1992 [2004]), and subsequent research, we employ rhythmanalysis in conjunction with Miyazaki’s (2012, 2013a/b) notion of ‘algorhythm’ and nascent work on algorithmic governance to develop a concept of ‘algorhythmic governance’. We then use this framing to make sense of two empirical case studies: a traffic management system and sound monitoring and modelling. Our analysis reveals: (1) how smart city technologies computationally perform rhythmanalysis and undertake rhythm-work that intervenes in space-time processes; (2) three distinct forms of algorhythmic governance, varying on the basis of adaptiveness, immediacy of action, and whether humans are in-, on-, or off-the-loop; and (3) a number of factors that shape how algorhythmic governance works in practice.

Key words: algorhythm, algorithmic governance, rhythmanalysis, Internet of Things, smart cities, time geography

New paper on frictions in civic hacking

Drawing on postcolonial technoscience, and particularly the notion of ‘frictions’, Sung-Yueh Perng and Rob Kitchin analyse how solutions are worked up, challenged and changed in civic hacking events. The paper, published in Social & Cultural Geography, is entitled ‘Solutions and frictions in civic hacking: collaboratively designing and building wait time predictions for an immigration office’. Free eprints are still available via the link: http://www.tandfonline.com/eprint/SSWBCcCech3hezdgIFZp/full. The abstract is pasted below.

Abstract: Smart and data-driven technologies seek to create urban environments and systems that can operate efficiently and effortlessly. Yet, the design and implementation of such technical solutions are full of frictions, producing unanticipated consequences and generating turbulence that foreclose the creation of friction-free city solutions. In this paper, we examine the development of solutions for wait time predictions in the context of civic hacking to argue that a focus on frictions is important for establishing a critical understanding of innovation for urban everyday life. The empirical study adopted an ethnographically informed mobile methods approach to follow how frictions emerge and linger in the design and production of queue predictions developed through the civic hacking initiative, Code for Ireland. In so doing, the paper charts how solutions have to be worked up and strategies re-negotiated when a shared motivation meets different data sources, technical expertise, frames of understanding, urban imaginaries and organisational practices; and how solutions are contingently stabilised in technological, motivational, spatiotemporal and organisational specificities rather than unfolding in a smooth, linear, progressive trajectory.