Code and the City workshop videos: Session 4

If you missed any of the videos from the first three sessions, they are here: Session 1, Session 2 and Session 3.

Session 4: Cities, knowledge classification and ontology

Cities and context: The codification of small areas through geodemographic classification
Alex Singleton, Geography, University of Liverpool

Abstract
Geodemographic classifications group small areas into categories based on shared population and built-environment characteristics. This process of “codification” aims to create a common language for describing the salient internal structure of places and, by extension, to enable their comparison across geographic contexts. The typological study of areas is not a new phenomenon: contemporary geodemographics emerged from research conducted in the 1970s that aimed to provide a new method of targeting deprivation-relief funding within the city of Liverpool. This city-level model was later extended to the national context and became the antecedent of contemporary geodemographic classification. This paper explores the origins of geodemographics, first to illustrate that the coding of areas is not just a contemporary practice, and then extends this discussion to consider how methodological choices influence classification structure. It is argued that openness about such methods is essential if classifications are to engender greater social responsibility.
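The clustering step at the heart of such a classification is conceptually simple. The sketch below is our illustration rather than Singleton's actual method: it groups small areas into a handful of categories with k-means over standardised, invented census-style variables.

```python
# Illustrative sketch of a geodemographic-style classification:
# cluster small areas on standardised population / built-environment
# variables. The data and variable names here are invented.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical attributes for 1,000 small areas, e.g.
# % aged 65+, % renting, population density, % detached housing.
areas = rng.random((1000, 4))

# Standardise so no single variable dominates the distance metric --
# one of the methodological choices that shapes the final structure.
z_scores = StandardScaler().fit_transform(areas)

# Partition the areas into k groups of similar areas.
kmeans = KMeans(n_clusters=7, n_init=10, random_state=0).fit(z_scores)

# Each small area now carries a single categorical label: the
# "codification" the abstract describes.
print(np.bincount(kmeans.labels_))
```

Real classifications involve many more input variables and a hierarchical labelling stage, but the underlying move is the same: reducing each place to a single code, with the number of clusters and the standardisation scheme among the methodological choices the abstract flags.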

The city and the Feudal Internet: Examining institutional materialities
Paul Dourish, Informatics, UC Irvine

Abstract
In “Seeing Like a City,” Mariana Valverde turns to urban regulation to counter some of James Scott’s arguments about the homogenizing gaze of high-modern statehood. Cities, she notes, are highly regulated, but without the panoptic order that Scott suggests. They operate instead as a splintered patchwork of regulatory boundaries – postal codes, tax assessment districts, business improvement zones, school catchment areas, zoning blocks, sanitation districts, and similar divisions that don’t quite line up. Arguments about online experience and the consequences of the Internet have a similar air to Scott’s analysis of statehood: they posit a world of consistent, compliant, and compatible information systems, in which the free flow of information and the homogenizing gaze of the digital erase boundaries (both for good and ill).

In fact, the organization of the Internet – that is, of our technologically- and historically-specific internet – is one of boundaries, barriers, and fiefdoms. We have erected all sorts of internal barriers to the free flow of information for a range of reasons, including the desire for autonomy and the extraction of tolls and rents. In this talk I want to explore some aspects of the historical specificity of our Internet and consider what this has to tell us about the ways that we talk about code and the city.

Semantic cities: Coded geopolitics and the rise of the semantic web
Heather Ford and Mark Graham, Oxford Internet Institute, University of Oxford

Abstract
In 2012, Google rolled out a service called Knowledge Graph, which enables users to have their search queries resolved without having to navigate to other websites. So, instead of just presenting users with a diverse list of possible answers to a query, Google selects and frames data about cities, countries and millions of other objects – sourced from sites including Wikipedia, the CIA World Factbook and Freebase – under its own banner.

For many, this heralded Google’s eventual recognition of the benefits of the Semantic Web: an idea and ideal that the Web could be made more efficient and interconnected if websites shared a common framework allowing data to be shared and reused across application, enterprise, community, and geographic boundaries. This move towards the Semantic Web can be seen starkly in the significant epistemic changes that Wikipedia, one of the foundations of Google’s Knowledge Graph, has begun to make. Through a Google-funded project called Wikidata, Wikipedia has begun to use Semantic Web principles to centralise ‘factual’ data across all language versions of the encyclopaedia. For instance, the population of a city need only be altered once in Wikidata rather than in every place it occurs across Wikipedia’s 285 language versions.
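To make that centralisation concrete: a city’s population lives as a single statement in Wikidata that any client or language edition can read. The sketch below is our illustration, not anything from the paper; it queries the public Wikidata SPARQL endpoint for London (item Q84) using the population property (P1082). An edit to that one statement would propagate to every consumer.

```python
# Illustrative only: read the single, centralised population statement
# for London (Wikidata item Q84) from the public SPARQL endpoint.
import requests

query = """
SELECT ?population WHERE {
  wd:Q84 wdt:P1082 ?population .   # P1082 = population
}
"""

resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": query, "format": "json"},
    headers={"User-Agent": "example-script/0.1"},  # polite client id
)
resp.raise_for_status()

# Every Wikipedia language version drawing on Wikidata sees this
# same value; changing it once changes it everywhere.
for row in resp.json()["results"]["bindings"]:
    print(row["population"]["value"])
```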

For Google, these efficiencies provide a faster experience for users, who will stay on its website rather than navigating away. For Wikipedia, such efficiencies promise to centralise the updating process so that data are consistent, and so that smaller language Wikipedias can obtain automated assistance in translating essential data for articles more rapidly.

This paper seeks to critically interrogate these changes in the digital architectures and infrastructures of our increasingly augmented cities. What shifts in power result from these changes in digital infrastructures? How are semantic standardisations increasingly encoded into our urban environments and experiences? And what space remains for digital counter-narratives, conflict, and contention?

To tackle those questions, we trace data about two cities as they travel through Google’s algorithms and the Semantic Web platforms of Wikidata and Wikipedia. In each case, we seek to understand how particular reflections of the city are made visible or invisible and how particular publics are given voice or silenced. Doing so leads us to ultimately reflect on how these new alignments of code and content shape how cities are presented, experienced, and brought into being.