Code and the City workshop videos: Session 4

If you missed any of the videos from the first three sessions, they are here: Session 1, Session 2 and Session 3.

Session 4: Cities, knowledge classification and ontology

Cities and context: The codification of small areas through geodemographic classification
Alex Singleton, Geography, University of Liverpool

Abstract
Geodemographic classifications group small areas into categories based on shared population and built environment characteristics. This process of “codification” aims to create a common language for describing the salient internal structure of places and, by extension, to enable their comparison across geographic contexts. The typological study of areas is not a new phenomenon, and contemporary geodemographics emerged from research conducted in the 1970s that aimed to provide a new method of targeting deprivation relief funding within the city of Liverpool. This city-level model was later extended to the national context, and became the antecedent of contemporary geodemographic classification. This paper explores the origins of geodemographics, first to illustrate that the coding of areas is not just a contemporary practice, and then extends this discussion to consider how methodological choices influence classification structure. Openness about such methods is argued to be essential if classifications are to engender greater social responsibility.
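
The grouping step at the heart of such classifications can be illustrated with a generic k-means sketch. This is purely illustrative and not drawn from the paper: the variables, figures and cluster count below are invented, and the abstract does not specify which clustering method is used.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Tiny k-means: group small-area profiles into k clusters."""
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(points, k)]
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each area joins its nearest centroid (squared Euclidean distance).
        for i, p in enumerate(points):
            assign[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
        # Update step: each centroid moves to the mean of its assigned areas.
        for c in range(k):
            members = [p for i, p in enumerate(points) if assign[i] == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign

# Illustrative small-area profiles: (share renting, share aged under 30, dwelling density)
areas = [(0.80, 0.70, 0.90), (0.75, 0.65, 0.85), (0.20, 0.30, 0.20), (0.15, 0.25, 0.10)]
labels = kmeans(areas, k=2)
```

The resulting labels are the “codes”: the two inner-city-like profiles fall into one cluster and the two suburban-like profiles into the other, and the methodological choices (variables, distance measure, number of clusters) directly shape the classification structure, as the paper argues.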

The city and the Feudal Internet: Examining institutional materialities
Paul Dourish, Informatics, UC Irvine

Abstract
In “Seeing like a City,” Mariana Valverde turns to urban regulation to counter some of James Scott’s arguments about the homogenizing gaze of high modern statehood. Cities, she notes, are highly regulated, but without the panoptic order that Scott suggests. They operate instead as a splintered patchwork of regulatory boundaries – postal codes, tax assessment districts, business improvement zones, school catchment areas, zoning blocks, sanitation districts, and similar divisions that don’t quite line up. Arguments about online experience and the consequences of the Internet have a similar air to Scott’s analysis of statehood – they posit a world of consistent, compliant, and compatible information systems, in which the free flow of information and the homogenizing gaze of the digital erases boundaries (both for good and ill).

In fact, the organization of the Internet – that is, of our technologically- and historically-specific internet – is one of boundaries, barriers, and fiefdoms. We have erected all sorts of internal barriers to the free flow of information for a range of reasons, including the desire for autonomy and the extraction of tolls and rents. In this talk I want to explore some aspects of the historical specificity of our Internet and consider what this has to tell us about the ways that we talk about code and the city.

Semantic cities: Coded geopolitics and rise of the semantic web
Heather Ford and Mark Graham, Oxford Internet Institute, University of Oxford

Abstract
In 2012, Google rolled out a service called Knowledge Graph which would enable users to have their search query resolved without having to navigate to other websites. So, instead of just presenting users with a diverse list of possible answers to any query, Google selects and frames data about cities, countries and millions of other objects sourced from sites including Wikipedia, the CIA World Factbook and Freebase under its own banner.

For many, this heralded Google’s eventual recognition of the benefits of the Semantic Web: an idea and ideal that the Web could be made more efficient and interconnected when websites share a common framework that would allow data to be shared and reused across application, enterprise, community, and geographic boundaries. This move towards the Semantic Web can be starkly seen in the ways that Wikipedia, as one of the foundations for Google’s Knowledge Graph, has begun to make significant epistemic changes. With a Google-funded project called Wikidata, Wikipedia has begun to use Semantic Web principles to centralise ‘factual’ data across all language versions of the encyclopaedia. For instance, this would mean that the population of a city need only be altered once in Wikidata rather than in all places where it occurs in Wikipedia’s 285 language versions.

For Google, these efficiencies provide a faster experience for users who will stay on their website rather than navigating away. For Wikipedia, such efficiencies promise to centralise the updating process so that data are consistent and so that smaller language Wikipedias can obtain automated assistance in translating essential data for articles more rapidly.

This paper seeks to critically interrogate these changes in the digital architectures and infrastructures of our increasingly augmented cities. What shifts in power result from these changes in digital infrastructures? How are semantic standardisations increasingly encoded into our urban environments and experiences? And what space remains for digital counter-narratives, conflict, and contention?

To tackle those questions, we trace data about two cities as they travel through Google’s algorithms and the Semantic Web platforms of Wikidata and Wikipedia. In each case, we seek to understand how particular reflections of the city are made visible or invisible and how particular publics are given voice or silenced. Doing so leads us to ultimately reflect on how these new alignments of code and content shape how cities are presented, experienced, and brought into being.
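
The single-edit principle the abstract describes can be shown with a toy model. Everything here is illustrative: the item id, function names and population figures are invented for the sketch, and this is not Wikidata's actual data model or API. The point is simply that each language edition renders from one shared fact store, so one central edit propagates everywhere.

```python
# Toy model of centralised 'facts': every language edition renders from one
# shared store, so a figure is edited once rather than in each language version.
# Ids, names and numbers are illustrative; this is not Wikidata's actual API.
facts = {"Q1761": {"label": "Dublin", "population": 553165}}

def render_infobox(item_id, lang):
    """Render one language edition's infobox line from the central store."""
    templates = {"en": "Population: {}", "de": "Einwohnerzahl: {}", "fr": "Population : {}"}
    return templates[lang].format(facts[item_id]["population"])

before = render_infobox("Q1761", "de")        # rendered from the old central value
facts["Q1761"]["population"] = 554554         # one central edit...
after = [render_infobox("Q1761", lang) for lang in ("en", "de", "fr")]  # ...seen by every edition
```

This efficiency is also where the paper locates the politics: whoever controls the central value controls what every edition, and every Knowledge Graph panel built on it, presents as fact.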

Code and the City workshop videos: Session 3

If you missed the first and second sessions of the Code and the City workshop, the embedded links will lead you to them. And now it is time for Session 3!

Session 3: Locative/social media

Digital social interactions in the city: Reflecting on location-based social media
Luigina Ciolfi, Human-Centred Computing, Sheffield Hallam University
Gabriela Avram, University of Limerick

Abstract
Location-based social media increasingly mediates social and interpersonal interactions in urban settings. Such practices become coded in software representing both the log and content of social interactions and the location to which they relate. Therefore a digital “cloud” of social interactions becomes embedded into the physical reality of the city, of its neighbourhoods, public places, cafés, transportation hubs and any other location identified by social media users (by user-initiated “check-ins” or by the content that they generate, such as photographs) and by the tools they use (for example, through automatic geo-tagging). Two sets of issues to be investigated are emerging. The first concerns how such localised interactions populate the algorithms and infrastructures provided by the software: how are the platforms of location-based social media framing people’s perceptions and identifications of locations? How is code both facilitating and representing a set of social interactions relating to various spatial configurations? The second set of issues regards the re-materialisation of such a cloud of interactions in the physical world: could it be made somehow perceivable and/or tangible in the physical world by the way in which certain environments are designed?

Overall, could new approaches to urban planning and environmental design become concerned with accommodating and facilitating these social interactions, just as they do for in-presence, analogue ones?

This paper will attempt to define and discuss these issues drawing both from interaction design and human-computer interaction literature on physical/digital interactions and from two preliminary empirical studies of location-based social media use in two cities.

Feeling place in the city: strange ontologies, Foursquare and location-based social media
Leighton Evans, National University of Ireland Maynooth

Abstract
Certain instances of the use of location-based social media in cities can result in deep understandings of novel locations. The contributions of other users and the information pushed to users when in particular locales can help users rapidly attune themselves to places and achieve an understanding of the place. The use of a computational device and location-based social networking to achieve this understanding indicates an alteration in the achievement of placehood using computational technology. Practices and methods of understanding place can, in some situations, be delegated to the device and application. This paper explores how the moment that place is appreciated as place (that is, as a meaningful existential locale) can be reconciled with the delegation of the epistemologies of placehood to a computational device and location-based social media application. Drawing on data from an ethnographic study of Foursquare users, the phenomenological appreciation of place is understood as co-constituent between the device, application and the mood of the user. Code and computational devices are contextualised as a constant foregrounding presence in the city, and the engagement of the user, device, code and data in understanding place is a moment of revealing that is co-constituent of all these elements. This exploratory paper engages Peter Sloterdijk’s theory of spheres as a framework to understand how these four elements interact, and how that interaction of elements can orient a user to a revealing of the city that can be understood as a phenomenological revealing of place.

Cultural curation and urban interfaces: Locative media as experimental platforms for cultural data
Nanna Verhoeff, Department of Media and Culture Studies, Utrecht University

Abstract
My contribution is concerned with the way in which urban interfaces are used for access to cultural collections – whether institutionally embedded, or bottom-up, participatory collections. Designed in code and exploring affordances of new location-based and/or mobile technologies for urban space-making, these interfaces are thought to be powerful tools for ideals of participatory urban culture. I propose to approach these “projects” as curatorial machines, as urban experimental laboratories for cultural data. This entails a threefold perspective, on curation, on code, and on principles of creative (sometimes artistic or playful) experimentation.

For this, we may remind ourselves of the curatorial project of museal and archival institutions, of preserving, and “caring” for the object, as well as creating new contexts for the object and providing access for an urban public – a field which is very much in transition as a result of current ambitions for new public engagement and ideals of participation, pervasive in all socio-economic and political regions of contemporary culture. Simultaneously we witness the current interest in the principles of data curation as the care for, interaction with, interpretation and visualisation of digital data, as the datafication and codification of culture invades all corners of urban life. The design of interfaces is central to how we can access, work with, and make meaning with digital culture. Departing from the concept of the dispositif in the analysis of interfaces, I propose to bring together the fact that these interfaces are coded and designed with the possibility of (playfully) experimenting with their affordances.

In my approach to this intersection of datafication of, and the proliferation of interfaces for “culture”, I aim to develop heuristic tools for critical evaluation of this phenomenon, broadly bracketed as [urban interfaces] as interfaces of cultural curation.

A window, a message, or a medium? Learning about cities from Instagram
Lev Manovich, Computer Science, The Graduate Center, City University of New York

Abstract
Over the last few years, tens of thousands of researchers in social computing and the computational social sciences have started to use data from social networks and media-sharing services (such as Twitter, Foursquare and Instagram) created by users of mobile platforms. This research uses techniques from statistics, machine learning, and visualization, among others, to analyze all kinds of patterns contained in the data and also (less frequently) to propose new models for understanding the social. Examples include the analysis of information propagation on Twitter, predicting the popularity of photos on Flickr, proposing new sets of city neighborhoods using Foursquare users’ check-ins, and understanding connections between musical genres using listening data from Echonest.

In my talk I will address a fundamental question we face in doing this research: what exactly are we learning when analyzing social media data? Is it a window into real-world social and cultural behaviors, a reflection of the lifestyles of the particular demographics who use mobile platforms and particular network services, or only an artifact of mobile apps? In other words, is social media a “message” or a “medium”?

I will discuss this question using three recent projects from my lab (softwarestudies.com). The projects use large sets of Instagram images and accompanying data together with data science and visualization tools. Phototrails.net (2013) analyzes 2.3 million photos from 13 global cities to investigate how different kinds of events are represented in these photos. The project also investigates whether the universal affordances of the Instagram app (the same interface and same set of filters available to all users) result in a universal digital visual language. Selfiecity.net (2014) analyzes a distinct artifact of mobile platforms – selfies. We compare thousands of selfies to see if the cultural specificity of different places and cultures is preserved in this genre. Finally, our third project compares Instagram photos taken by visitors to a few major modern art museums, asking if photographs of famous works of art differ depending on what these artworks are and where they are situated.

Job: Three year postdoc on the Programmable City project

We’re pleased to announce the advertisement of a three year postdoc position on the Programmable City project.   Full details of the project can be found on the Maynooth University HR page, but essentially the post will study algorithms and code used in smart city initiatives (broadly conceived) from a software studies perspective.  As such, the project will critically examine how software developers translate rules, procedures and policies into a complex architecture of interlinked algorithms that manage and govern how people traverse or interact with urban systems.  It will thus provide an in-depth analysis of how software and data are being produced to aid the regulation of city life in an age of software and ‘big data’. The primary methods will be a selection from those set out in the paper ‘Thinking critically about and researching algorithms’.

We are seeking applications from researchers with an interest in software studies, critical data studies, urban studies, and smart cities to work in an interdisciplinary team. Applicants will:

  • have a keen interest in understanding software from a social science perspective;
  • be a proficient programmer, able to comprehend other developers’ code;
  • have a good, broad range of qualitative data creation and analysis skills;
  • be interested in theory building;
  • have an aptitude to work well in an interdisciplinary team;
  • be prepared to undertake overseas fieldwork;
  • have a commitment to publishing and presenting their work;
  • have a willingness to communicate through new social media;
  • be prepared to archive their data for future re-use by others;
  • be prepared to help organise and attend workshops and conferences.

The closing date is 5th December. See the full job description here for more details.

We would encourage any interested candidates to apply for the post, and readers of the blog to bring it to the attention of those who might be interested, or to circulate it in their networks/social media.

New paper: From a Single Line of Code to an Entire City

A new paper by Rob Kitchin has been posted as open access on SSRN.  From a Single Line of Code to an Entire City: Reframing Thinking on Code and the City is The Programmable City Working Paper 4.

Abstract:
Cities are rapidly becoming composed of digitally-mediated components and infrastructures, their systems augmented and mediated by software, with widespread consequences for how they are managed, governed and experienced. This transformation has been accompanied by critical scholarship that has sought to understand the relationship between code and the city. Whilst this work has produced many useful insights, in this paper I argue that it also has a number of shortcomings. Principal amongst these is that the literatures concerning code and the city have remained quite divided. Studies that focus on code are often narrow in remit, fading out the city, and tend to fetishise and potentially decontextualise code at the expense of the wider socio-technical assemblage within which it is embedded. Studies that focus on the city tend to examine the effects of code, but rarely unpack the constitution and mechanics of the code producing those effects. To provide a more holistic account of the relationship between code and the city I forward two interlinked conceptual frameworks. The first places code within a wider socio-technical assemblage. The second conceives the city as being composed of millions of such assemblages. In so doing, the latter seeks to provide a means of productively building a conceptual and empirical understanding of programmable urbanism that scales from individual lines of code to the complexity of an entire urban system.

Keywords: code, city, software, programmable urbanism, software studies, smart city, urban studies, assemblages

Download

Hype, hubris, hope, heads in the sand, and some very cool stuff: A report on the Web Summit

A chunk of the Programmable City team attended the Web Summit in Dublin last week. I was fortunate to be asked to MC the Machine Stage for Tuesday afternoon (on smart cities/smart cars), and also presented a paper, participated in a panel discussion, and chaired a private panel session, all on smart cities. As was widely reported in the media, it was an enormous event attended by 22,000 people, with 600 speakers across nine stages, and hundreds of stands, many of which changed daily to accommodate them all. No doubt a huge amount of business was conducted, personal networks extended, and thousands of pages of copy for newspapers, magazines and websites filed.

To me what was interesting about the event were the silences as much as what was presented and displayed.  There were loads of very interesting apps and technologies demoed, many of which will have real world impact.  That said, there was also a lot of hype, hubris, hope, self-promotion, buzzwords (to my ear ‘disruption’, ‘smart’, ‘platform’, ‘internet of things’ and ‘use case’ were used a lot), Californian ideology (radical individualism, libertarianism, neoliberal economics, and tech utopianism), and heads in the sand.  In contrast, there was an absence of critical reflection about the following three broad concerns.

Code and the City workshop videos: Session 2

Following on from last week’s videos, we are now into the second session of the Code and the City workshop!

Session 2: Code and mobility

Moving applications: A multilayered approach to mobile computing
Jim Merricks White, National University of Ireland, Maynooth

Abstract
Mobile computing plays an increasingly important role in the way that space is experienced in the city. This has political consequences, both at the micro level of everyday production and consumption, and at the macro level of institutional and political economy. While geographers have explored the ontological role which might be played by hardware, software, data and mapping within this spatial paradigm, there remains little concerted effort to explore mobile computing as a technological system which incorporates all of these socio-technical assemblages. By drawing on adjacent disciplines of science and technology studies (STS) and media and communication studies, this essay proposes a multilayered model for such a holistic inquiry: hardware—software—data(base)—GUI (graphical user interface).

By applying this model to a self-reflexive exploration of the taxi service Hailo and the mobility tracking application Moves, I attempt to demonstrate how it might be put to work as a heuristic tool. Following on from my desire to expose and explore the politics of mobile computing, the model is used to draw attention to the networks of power which make up these mobile computing services.

Digital urbanism in crises: A hopeful monster?
Monika Büscher, with Michael Liegl and Katrina Petersen, Mobilities.Lab, Lancaster University, UK

Abstract
Intersecting mobilities of data, people and resources are an integral part of a new digital urbanism. Thrift speaks of Lifeworld.Inc, a new contexture driven by the entertainment-security sector, where people’s everyday activities, movements, physiological data, thoughts, desires and fears are so richly documented in real time that commercial enterprise as well as urban services (transport, energy, security) can dynamically anticipate and shape them ‘just-in-time’ (2011). While this opens up novel opportunities for more efficiency, comfort, and sustainability in networked urban mobilities, it also provides new leverage for mobilizing disaster response. In a ‘century of disasters’ (eScience 2012), where urbanization has increased vulnerability and climate change contributes to increased frequency and severity of disasters, this opens up a perspicuous site for investigations of post-human practices, phenomenologies and ethics. Big data analytics and information sharing for risk prevention and disaster response can exacerbate the unprecedented surveillance contemporary societies practice (Harding 2014), Kafkaesque transformations of privacy and civil liberties (Solove 2004) and a splintering urbanism (Graham & Marvin 2001). At the heart of these transformations is a digital phenomenology of invisibility, immateriality and ‘intelligence’ that does not lend itself to human control. ‘Smart cities’ may depend on smart citizens (Greenfield 2013), but the technologies contemporary societies produce do not support human intelligence. We report from ‘inside the belly of the beast’ of innovation in mobilizing Lifeworld.Inc data for disaster response (Balka 2006). Drawing on experience from collaborative research and design projects (e.g. http://www.bridgeproject.eu/en), we discuss the relationship between lived cyborg practice, phenomenology and ethics in networked urban mobilities. Using a disaster perspective for a disclosive ethical investigation (Introna 2007) does disclose some potentially disastrous transformations, but it also highlights avenues for alternative, radically careful as well as carefully radical design (Latour 2009).

Abstract urbanism
Matthew Fuller and Graham Harwood, Cultural Studies, Goldsmiths

Abstract
The urban riots of the USA in the late 1960s were some of the most powerful political events of that era. As well as drawing numerous responses from the media, the civil rights movement, black nationalists, and groups such as the Situationist International, the uprisings also triggered a range of research responses, including some of the first computational models of cities. T.C. Schelling’s “Models of Segregation” attempted to provide a logical model for racial segregation and laid much of the groundwork for what later became agent-based modeling. Such work finds contemporary expression, for instance, in the riot and insurgency modeling of J.M. Epstein and others. For the state, such events mark a schizophrenic relationship to the contingency of riot and to how the algorithms play out in such a scenario. How can it govern events that both demonstrate and excite its power and also undermine it? This paper will propose a tracing of the genealogy of such models alongside a reading of other ways of using urban modeling in relation to the urban riots of that era and now. A parallel reference point here will be the work of W. Bunge, a quantitative geographer and spatial theorist. Bunge consistently argued that geometrical patterns and morphological laws express disadvantage and injustice under contemporary capitalism, and that the patterns identified could be remedied by rational methods.

The history of computing, from G.W. Leibniz onwards, tangles with the problematic of developing rational approaches to complex, multi-dimensional problems with a high degree of what J. Law describes as “messiness”. This paper will examine the ways in which rationality, or ratio, is positioned in relation to urban conflict as a means of discussing the relations between the city and software. The paper will develop a discussion of ratio in relation to questions of abstraction, reduction and empiricism. We are especially concerned to find a relationship between abstraction and the empirical that, by working with the materiality of computational systems, recognises, and perhaps works with, the tendency to reduction(ism), but through which modes of abstraction may also work with the highly and complexly empirical.
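
The Schelling model referred to above can be sketched in a few lines. This is a minimal one-dimensional variant; the threshold, neighbourhood radius and move rule are illustrative simplifications, not Schelling's original specification. The logic it demonstrates is his: agents with only a mild preference for like neighbours relocate when discontent, and segregated patterns emerge from these individually modest choices.

```python
import random

def unhappy(ring, i, threshold=0.6, radius=2):
    """An agent is unhappy if fewer than `threshold` of its neighbours share its type."""
    n = len(ring)
    nbrs = [ring[(i + d) % n] for d in range(-radius, radius + 1) if d != 0]
    return sum(1 for t in nbrs if t == ring[i]) / len(nbrs) < threshold

def run(ring, rounds=500, seed=0):
    """Each round, one randomly chosen unhappy agent swaps places with a random agent."""
    rng = random.Random(seed)
    ring = list(ring)
    for _ in range(rounds):
        movers = [i for i in range(len(ring)) if unhappy(ring, i)]
        if not movers:
            break  # everyone content: a (typically clustered) equilibrium
        i = rng.choice(movers)
        j = rng.randrange(len(ring))
        ring[i], ring[j] = ring[j], ring[i]
    return ring

start = ["A", "B"] * 20              # a perfectly mixed ring of 40 agents
end = run(start)
# Count same-type adjacencies as a crude clustering index (0 in the mixed start).
clustering = sum(end[i] == end[(i + 1) % len(end)] for i in range(len(end)))
```

It is exactly this gap between micro-motives and macro-pattern that the agent-based tradition descending from Schelling, and the riot models it informed, set out to formalise.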