Rob Kitchin has published a new Programmable City working paper (no. 20) – Reframing, reimagining and remaking smart cities – on SocArXiv today. It is an introductory framing/provocation essay for the ‘Creating smart cities’ workshop to be hosted at Maynooth University, 5-6 September 2016.
Over the past decade the concept and development of smart cities has unfolded rapidly, with many city administrations implementing smart city initiatives and strategies and a diverse ecology of companies and researchers producing and deploying smart city technologies. In contrast to those who seek to realise the benefits of the smart city vision, critics have highlighted a number of shortcomings, challenges and risks in such endeavours. This short paper outlines a third path, one that aims to realise the benefits of smart city initiatives while recasting the thinking and ethos underpinning them and addressing their deficiencies and limitations. It argues that smart city thinking and initiatives need to be reframed, reimagined and remade in six ways. Three of these concern normative and conceptual thinking with regard to goals, cities and epistemology, and three concern more practical and political thinking and praxes with regard to management/governance, ethics and security, and stakeholders and working relationships. The paper does not seek to be definitive or comprehensive, but rather to provide conceptual and practical suggestions and to stimulate debate about how to productively recast smart urbanism and the creation of smart cities.
Late last week I, and many others I would presume, were left further behind in the digital era at the stroke of a pen. What my monthly bill cheekily termed broadband was officially no longer! In fact I never really had broadband to begin with, reliant as I am on ancient lines of copper which valiantly struggle to connect me to a quaint legacy telephone exchange deep in rural Wexford. Often it has proven more useful as an indicator of wind speed than a delivery method of zeros and ones, with wind-generated friction on the line reducing those precious few minutes of 1.2 Mbps connectivity still further on stormy evenings. Well, in the US the telecoms watchdog, the FCC, has just raised the bar on what can officially be labelled broadband, state-side at least, by redefining the minimum download speed as 25 Mbps. I can but dream!
Today, we will be sharing videos from the Opening talk and First session: Code, coding and interfaces
Code and the city: Reframing the conceptual terrain
Rob Kitchin, NIRSA, National University of Ireland Maynooth
Software has become essential to the functioning of cities. It is deeply and pervasively embedded into the systems and infrastructure of the built environment and in the management and governance of urban societies. Software-enabled technologies and services augment and facilitate how we understand and plan cities, how we manage urban services and utilities, and how we live urban lives. This paper provides an overview of the ways in which software has become an indispensable mediator of urban systems, considers the consequent implications, and makes the case for the study of computational algorithms and how cities are captured in and processed through code.
Session 1: Code, coding and interfaces
Code-crowd: How software repositories express urban life
Adrian Mackenzie, Sociology, Lancaster University
Is code an expression of urban life? This paper analyses around 10 million software repositories on Github.com from the perspective of how they include cities. The methodology here relies on data-intensive work with bodies of code at a number of different levels. It maps the geographies of Github organisations and users to see how location anchors coding work. More experimentally, it tracks how urban spaces, movements and architectures figure in and configure code. The paper’s focus is less on how code shapes cities and more on apprehending code and coding as a way of experientially inhabiting cities. This approach might better highlight how code expresses urban experiences of proximity, mixing, movement, nearness, distance, and location. It might also shed light on the plural forms of spatiality arising from code, particularly as algorithmic processes become more entangled with each other.
Encountering the city at hackathons
Sophia Maalsen and Sung-Yueh Perng, National University of Ireland, Maynooth
The significance of hackathons is growing in two mutually informing ways. On the one hand, there is an increasing use of hackathons to address issues of city governance – Chris Vein, US CTO for government innovation, has described them as ‘sensemaking’ tools for government, encouraging agencies to make use of hackathons and “let the collective energy of the people in the room come together and really take that data and solve things in creative and imaginative ways” (Llewellyn 2012). On the other hand, regular hack nights appear as creative urban spaces where citizens can discuss problems they encounter, which are not necessarily considered by government, and produce solutions to tackle these issues.
In this paper, we explore potential opportunities and tensions, as well as excitement and inattentiveness, emerging as solutions are proposed and pursued. Through this, we reflect upon how such processes translate the city and transform ways of living in places where the solutions are applied. We further ask whether the positive discourse surrounding hackathons is justified or whether there are limits to their ability to deal with the complexity of urban issues.
Interfacing urban intelligence
Shannon Mattern, Media Studies, New School NY
Technology companies, city governments, and design firms – the entities teaming up to construct our highly-networked cities of the future – have prototyped interfaces through which citizens can engage with the smart city. But those prototypes, almost always envisioned as screens of some sort, embody institutional values that aren’t always aligned with those of citizens who rightfully claim a “right to the city.” Based on promotional materials from Cisco, Siemens, IBM, Microsoft, and their smart-city-making counterparts, it seems that one of the chief preoccupations of our future-cities is to reflect their data consumption and hyper-efficient (often “widgetized”) activity back to themselves. We thus see city “control centers” lined with screens that serve in part to visualize, and celebrate, the city’s own supposedly hyper-rational operation. Public-facing interfaces, meanwhile, are typically rendered via schematic mock-ups, with little consideration given to interface design. They’re portrayed as conduits for transit information, commercial and service locations and reviews, and information about cultural resources and tourist attractions; and as portals for gathering user-generated data. Across the board, these interfacing platforms tend to frame their users as sources of data that feed the urban algorithmic machines, and as consumers of data concerned primarily with their own efficient navigation and consumption of the city.
In this talk, I’ll consider how we might design urban interfaces for urban citizens, who have a right to know what’s going on inside “’black boxed’ [urban] control systems” – and even to engage with the operating system as more than mere data-generators or reporters-of-potholes-and-power-outages. In considering what constitutes an ideal urban interface, we need to examine those platforms that already exist, and those that are proposed for future cities. Even the purely hypothetical, the speculative – the “design fiction” – can illuminate what’s possible, technologically, aesthetically, and ideologically; and can allow us to ask ourselves what kind of “public face” we want to front our cities, and, even more importantly, what kinds of intelligence and agency – technological and human – we want our cities to embody.
Do come back next Friday! The next session awaits!
The seminar will focus on the development of smart cities in India. In 2014, the newly elected Indian government announced an ambitious programme of building 100 new smart cities across India. These cities are presented as the answer to the challenges of rural-urban migration, rapid urbanisation, and sustainable development in India. Ayona’s seminar will examine these claims by focussing on two Indian ‘smart cities’ being built from scratch.
You can write down equations that predict what people will do. That’s the huge change. So I have been running the big data conversation … It’s about the fact that you can now understand customers, employees, how we organise, in a quantitative, predictive way for the first time.
Predictive analytics is fervently discussed in the business world, if not yet fully taken up, and increasingly by public services, governments and medical practices seeking to exploit the value hidden in public archives or even in social media. In New York, for example, there is a ‘geek squad’ in the Mayor’s office, seeking to uncover deep and detailed relationships between the people living there and the government, while at the same time realising “how insanely complicated this city is”. Here, an intriguing question remains as to the effectiveness of predictive analytics, the extent to which it can support and facilitate urban life, and the consequences for cities that are immersed in a deep sea of data, predictions and humans.
People like working in a system that is proactive rather than reactive. When we are expecting a patient load everyone knows what their jobs [are], and you are more efficient with your time.
The patients are happy too, because they receive and finish treatment quickly:
Can we find such success when predictive analytics is practised in various forms of urban governance? Moving the discussion back to US cities and using policing as an example: policing work is shifting from reactive to proactive in many cities, at experimental or implementation stages. PredPol is predictive policing software produced by a startup company that has caught a considerable amount of attention from police departments in the US and other parts of the world. Their success as a business, however, is partly to do with their “energetic” marketing strategies, contractual obligations requiring customers to refer the company to other law enforcement agencies, and so on.
Above all, the claims of success made by the company are difficult to sustain on closer examination. The subjects on which the analytics focuses are very specific: burglaries, robberies, vehicle thefts, thefts from vehicles and gun crimes. In other words, the crimes that have “plenty of data to chew on” for making predictions, and that are opportunistic crimes easier to prevent through the presence of patrolling police (more details here).
This brings us further to the “proven” and “continued” aspects of success, which are even more difficult and problematic aspects of policing work when evaluating the “effectiveness” and “success” of predictive policing. To prove that an algorithm performs well, the expectations for which the algorithm is built and tweaked have to be specified, not only for those who build it, but also for the people who will be entangled in the experiments in intended and unintended ways. In this sense, transparency and objectivity in predictive policing are important. Without acknowledging, considering and evaluating how both crimes and everyday life, or both normality and abnormality, are transformed into algorithms, and without disclosing them for validation and consultation, a system of computational criminal justice can turn into, if not witch-hunting, alchemy – let’s put two or more elements into a magical pot, stir them and see what happens! This is further complicated by the knowledge that there are existing and inherent inequalities in crime data, for example in reporting or sentencing, and that the perceived neutrality of algorithms can serve to justify cognitive biases that are well documented in the justice system – biases that could justify the rationale that someone should be treated more harshly because the person is already on a blacklist, without reconsidering how the person got onto such a list in the first place. There is an even darker side to predictive policing when mundane social activities are treated as crime data, as when social network analysis is used to profile and incriminate groups and groupings of individuals.
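The feedback loop between biased records and patrol allocation described above can be made concrete with a toy simulation. This sketch is entirely invented for illustration (the districts, rates and allocation rule are not from any real system): two districts have the same true crime rate, but patrols are allocated in proportion to past recorded counts, so the district that starts with more records keeps accumulating them.

```python
import random

random.seed(42)

# Two districts with the SAME underlying crime rate, but district A
# starts with more *recorded* incidents (e.g. due to historical
# over-reporting or over-policing).
true_rate = 0.3                   # chance a patrol visit records a crime
recorded = {"A": 20, "B": 10}     # biased starting point

for day in range(200):
    # Allocate 10 patrols per day in proportion to past recorded counts.
    total = recorded["A"] + recorded["B"]
    patrols = {d: round(10 * n / total) for d, n in recorded.items()}
    # More patrols in a district means more crimes observed and recorded
    # there, which earns it even more patrols tomorrow.
    for district, n in patrols.items():
        for _ in range(n):
            if random.random() < true_rate:
                recorded[district] += 1

print(recorded)  # district A ends up with far more recorded crime than B
```

Although both districts are equally "criminal" by construction, the allocation rule converts an initial recording bias into a widening gap, which is the feedback dynamic that makes blacklist-style data so problematic.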
This is also a more dynamic and contested field of play, considering that while crime prediction practitioners (coders, private companies, government agencies and so on) appropriate personal data and private social media messages for purposes for which they were not intended, criminals (or activists for that matter) play with social media – if not yet with prediction results obtained by reverse engineering the algorithms – to plan coups, protests, attacks, etc.
For those who want to look further into how predictive policing is set up, proven, run and evaluated, there are ways of opening up the black box, at least partially, to critically reflect upon what exactly it can achieve and how “success” is managed both in computer simulation and in police practices. The chief scientist of PredPol gave a lecture in which, as has been pointed out:
He discusses the mathematics/statistics behind the algorithm and, at one point, invites the audience not to take his word for its accuracy because he is employed by PredPol, but to take the equations discussed and plug in crime data (e.g. Chicago’s open source crime data) to see if the model has any accuracy.
The video of the lecture is here.
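For readers who want a feel for the kind of mathematics discussed in that lecture: PredPol’s published research builds on self-exciting point processes, in which each incident temporarily raises the local crime intensity before it decays back to a background rate. The sketch below is a minimal illustration of that general idea only; the parameter values and event times are invented and do not correspond to PredPol’s actual implementation.

```python
import math

def intensity(t, past_events, mu=0.1, theta=0.5, omega=0.2):
    """Conditional intensity of a self-exciting point process:
    a constant background rate mu plus exponentially decaying
    'aftershock' terms triggered by each past event."""
    return mu + sum(theta * omega * math.exp(-omega * (t - ti))
                    for ti in past_events if ti < t)

# Days on which (invented) incidents occurred in one area:
events = [10.0, 11.0, 12.5]

print(intensity(13.0, events))  # elevated risk just after a cluster
print(intensity(60.0, events))  # long after, risk decays back toward mu
```

The policy implication of such a model is "near-repeat" patrolling: risk is predicted to be highest in the immediate aftermath of recent incidents, which is exactly why the method works best for opportunistic, data-rich crime types.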
Furthermore, RAND provides a review of predictive policing methods and practices across many US cities. The report can be found here; it analyses the advantages gained by various crime prediction methods as well as their limitations. Predictive policing, as shown in the report, is far from a crystal ball, and has various levels of complexity to run and implement, mathematically, computationally and organisationally. Predictions can range from crime mapping to forecasting crime hotspots from certain spatiotemporal characteristics of crimes (see the taxonomy on p. 19). As far as predictions are concerned, they are good as long as crimes in the future look similar to those in the past – in their types, temporality and geographic prevalence – and provided the data is good, which is a big if! Predictions are also better when they are contextualised. Compared with predicting crimes without any help (not even from the intelligence that agents in the field can gather), applying mathematics to the guessing game creates a significant advantage, but the differences among the various methods are not as dramatic. Therefore, one of the crucial messages of reviewing and contextualising predictive methods is that:
It is important to bear in mind that the predictive methods discussed here do not predict where and when the next crime will be committed. Rather, they predict the relative level of risk that a crime will be associated with a particular time and place. The assumption is always that the past is prologue; predictions are made based on the analysis of past data. If the criminal adapts quickly to police interventions, then only data from the recent past will be useful to police departments. (p. 55)
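The RAND point that these methods “predict the relative level of risk” rather than the next crime can be shown with a minimal sketch (the grid-cell identifiers and incident counts below are invented): past incidents per cell are simply turned into a relative-risk ranking, on the assumption that the past is prologue.

```python
from collections import Counter

# Invented history of incidents, tagged by grid cell.
past_incidents = ["c3", "c3", "c7", "c3", "c1", "c7", "c9", "c3"]

counts = Counter(past_incidents)
total = sum(counts.values())

# Relative risk: each cell's share of past incidents, not a forecast
# of where or when the next crime will occur.
relative_risk = {cell: n / total for cell, n in counts.items()}

# Patrols are directed to the highest-risk cells first.
ranked = sorted(relative_risk, key=relative_risk.get, reverse=True)
print(ranked[:2])  # ['c3', 'c7']
```

Notice what the sketch cannot do: if offenders adapt and shift to cell c9, the ranking only catches up once new incidents are recorded, which is the report’s point that only recent data remains useful when criminals respond quickly to interventions.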
Therefore, however automated the process, human and organisational efforts are still required in many areas of practice. Activities such as finding relevant data, preparing it for analysis, and tweaking factors, variables and parameters all require human effort, collaboration as a team, and the transformation of decisions into actions for reducing crime at organisational levels. Similarly, human and organisational efforts are needed again when types and patterns of crime change, targeted crimes shift, and results have to be interpreted and integrated in relation to the changing availability of resources.
Furthermore, the report reviews the issues of privacy, transparency, trust and civil liberties within existing legal and policy frameworks. It becomes apparent that predictions and predictive analytics need careful and mindful design, responding to emerging ethical, legal and social issues (ELSI), since the impacts of predictive policing occur at individual and organisational levels, affecting the day-to-day life of residents, communities and frontline officers. While it is important to maintain and revisit existing legal requirements and frameworks, it is also important to respond to emerging information and data practices, and notions of “obscurity by design” and “procedural data due process” offer ways of rethinking and designing the relationships between privacy, data, algorithms and predictions. Even the term transparency needs further reflection to make progress on what it means in the context of predictive analytics and how it can be achieved, taking into account renewed theoretical, ethical, practical and legal considerations. In this context, “transparent predictions” have been proposed, outlining the importance and potential unintended consequences of rendering prediction processes interpretable to humans and driven by causation rather than correlation. Critical reflections on such a proposal are useful, for example this two-part series – (1)(2) – which further contextualises transparency both in prediction processes and in case-specific situations.
Additionally, IBM has partnered with the New York Fire Department and the Lisbon Fire Brigade. The main goal is to use predictive analytics to make smart cities safer by employing predictions to allocate emergency response resources more effectively. Similarly, crowd behaviours have already been simulated for understanding and predicting how people would react in various crowd events in places such as major traffic and travel terminals, sports and concert venues, shopping malls and streets, busy traffic areas, etc. Simulation tools take into account data generated by sensors, as well as quantified embodied actions, such as walking speeds, body sizes or disabilities, and it is not difficult to imagine that further data and analysis could take advantage of social media, where sentiments and psychological aspects are expected to refine simulation results (a review of simulation tools).
To bring the discussion to a pause: data, algorithms and predictions are quickly turning not only cities but many other places into testbeds, as long as there are sensors (human and nonhuman) and the Internet. Data becomes available and different kinds of tests can then be run to verify ideas and hypotheses. As many articles have pointed out, data and algorithms are flawed, revealing and reinforcing unequal parts and aspects of cities and city lives. Tests and experiments, such as Facebook’s manipulation of user emotions in its experiments, can make cities vulnerable too, when they are run without regard to embodied and emotionally charged humans. There is, therefore, a great deal more to say about data, algorithms and experiments, because the production of data and the experiments that make use of it are always an intervention rather than an evaluation. We will return to these topics in subsequent posts.
After the launch event earlier this week, we are really happy to have Dr Andy Hudson-Smith to discuss Citizens, Data, Virtual Reality and the Internet of Things! Please see the details of our next seminar below.
Time: 16:00 – 18:00, Wednesday, 2 April, 2014
Venue: Room 2.31, 2nd Floor Iontas Building, North Campus, NUI Maynooth (Map)
Abstract
Every day, we create 2.5 quintillion bytes of data — so much that 90% of the data in the world today has been created in the last two years alone. This data comes from everywhere: sensors used to gather climate information, posts to social media sites, digital pictures and videos, purchase transaction records, and cell phone GPS signals, to name a few (IBM, 2013). This data can, compared to traditional data sources, be defined as ‘big’. Cities and urban environments are the main sources of big data: every minute 100,000 tweets are sent globally, Google receives 2,000,000 search requests and users share 684,478 pieces of content on Facebook (Mashable, 2012). An increasing amount of this data stream is geolocated, from check-ins via Foursquare through to tweets and searches via Google Now; the data cities and individuals emit can be collected and viewed to make the data city visible, aiding our understanding not only of how urban systems operate but also opening up the possibility of a real-time view of the city at large (Hudson-Smith, 2013). The talk explores systems such as The City Dashboard (http://www.citydashboard.org) and the rise of the Internet of Things (IoT) in terms of data collection, visualization and analysis. Joining these up creates a move towards the Smart City and, via innovations in IoT and a look towards augmented reality, points to the creation of the ‘Smart Citizen’, the ‘Quantified Self’ and ultimately the Smart City.
Dr Andrew Hudson-Smith is Director of the Centre for Advanced Spatial Analysis (CASA) at The Bartlett, University College London. Andy is a Reader in Digital Urban Systems and Editor-in-Chief of the Future Internet Journal. He is also an elected Fellow of the Royal Society of Arts, a member of the Greater London Authority Smart London Board, and Course Founder of the MRes in Advanced Spatial Analysis and Visualisation and the MSc in Smart Cities at University College London.