In this short position paper we examine some of the dimensions and dynamics of the algorithmic age by considering three broad questions. First, what are the problematic consequences of life mediated by ‘algorithm machines’? Second, how are individuals or groups and associations resisting the problems they encounter? Third, how might the algorithmic age be re-envisioned and re-made in more normative terms? We focus on two key aspects of living with ubiquitous computing, ‘acceleration’ and ‘data grabbing,’ which we contend are two of the most prominent and problematic features of the algorithmic age. We then begin to shed light on the sorts of practices that constitute slow computing responses to these issues. In the conclusion, we make the case for a widescale embrace of slow computing, which we propose is a necessary step for society to make the most of the undeniable opportunities for radical social change emerging from contemporary technological developments.
On May 27th 2015, Cathal Gurrin and Rami Albatal visited the Programmable City Project and delivered a seminar on lifelogging, covering the history of creating lifelogs, technological developments in the field, the current state of the practice and future possibilities for comprehensive personal data.
The talk was extremely well received, and this video of the event should be of interest to anyone interested in lifelogging, the quantified self, personal or wearable technologies, or the emergence and possibilities of personal data.
We are delighted to welcome Rami Albatal and Cathal Gurrin to Maynooth on Wednesday 27th May at 4pm, Iontas Building room 2.31, for the third of our Programmable City seminars this semester. Dr. Rami Albatal is the lead postdoctoral researcher of the Lifelogging team at the Insight Centre for Data Analytics at Dublin City University, and received his Ph.D. in Computer Vision in 2010 from Grenoble University, France. His research focuses on three main areas: lifelogging, computer vision and machine learning. He is currently working on a new generation of Quantified-Self technologies that employ contextual data gathering and analytics, with the goal of building advanced data-driven decision-making, planning and recommendation platforms. Cathal Gurrin is a lecturer at the School of Computing at Dublin City University, Ireland, and an investigator at the Insight Centre for Data Analytics, where he leads a research group of 10 people. He is also a visiting scientist at the University of Tromso, Norway. His research interest is personal analytics and lifelogging (a search engine for the self). He has gathered a digital memory since 2006, including over 15 million wearable camera images and hundreds of millions of other sensor readings. He is the founder of the world’s first dedicated lifelog meetup group.
The session will introduce the topic of lifelogging, explore the current state-of-the-art technology and look forward to a future in which lifelog archives may become commonplace. Current approaches to semantic enrichment will be explored, along with applications and user interfaces. In an era of personal data, activities on Facebook and Twitter, digital photos and many other everyday interactions all leave significant trails of personal data. One aspect of personal data gathering that is receiving increasing attention is the concept of lifelogging. Lifelogging is concerned with utilising sensors to create a large archive of personal data, a surrogate memory for the individual. Applying semantic enrichment and organisational software to this data results in the creation of a lifelog for the individual. Lifelogging has been described as an inevitability and is expected to change life experience for all. Finally, lifelogging raises many societal issues, among them privacy and data security, which will be explored and solutions proposed. This session should be of interest to a wide range of academics and interested parties.
In May 2014 Ubisoft released a new computer game called Watch Dogs. Having sold over 4 million copies in the first week of sales, it is tipped to be the game of the year. In the game, Chicago City is controlled by a central operating system (ctOS). This supercomputer gets a panoptic view of the city using data from cameras and sensor networks. The information obtained is used to manage the city’s infrastructure and technology as well as to maintain a database of personal information about citizens and their activities. In Watch Dogs, a disgruntled computer hacker finds a way to access and hack the ctOS, allowing him to hijack traffic lights, the power grid, bridges and toll gates, rupture water pipes, disable surveillance cameras and access personal information about fellow citizens. The motive for causing mayhem in the city is to find a gang who were involved in his sister’s death and ultimately take down the corrupt system that runs ctOS. In this article, we take a look at some of the real dangers facing today’s cities from malicious hackers.
A Character Accesses City Infrastructure and Data in Watch Dogs
In terms of technology, Chicago, as presented in Watch Dogs, is a smart city. Data is fed into the central operating system and the infrastructure of the city adapts and responds accordingly. Although much of the game is fictional, Watch Dogs draws on existing technologies and echoes what is happening today. For example, Rio de Janeiro has a large control centre which applies data analytics to social media, sensors and surveillance cameras in an attempt to predict and control events taking place in the city. Its mission is to provide a safe environment for citizens. Other cities such as Santander and Singapore have invested in sensor networks to record a range of environmental and traffic conditions at locations across the cities. Earlier this year, Intel and Dublin City Council announced that Dublin is also to get a sensor network for measuring city processes. At present many of these projects are focusing on the technical challenges of configuring hardware, designing standards and collecting, storing and processing data from city-wide sensor networks. Projects are using the data for a range of services to control the city, such as traffic management (guiding motorists to empty parking spaces), energy management (dimming street lights if no one is present) and water conservation (using sensors to determine when city parks need water).
The Internet of Things & Security
The roll out of such smart city technology is enabled through the Internet of Things (IoT), which is essentially a network of objects that communicate and transfer data without requiring human-to-human or human-to-computer interaction. The IoT can range from a pacemaker sending patient information to a physician, to a fridge informing its owner that the milk is low. In the smart city, sensors automatically relay data to a control centre where it is analysed and acted upon.
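To make the pattern concrete, the sketch below shows, in Python, a hypothetical parking-occupancy sensor relaying readings to a control centre. The endpoint URL, sensor name and payload fields are illustrative assumptions; real deployments typically use lightweight messaging protocols such as MQTT or CoAP rather than plain HTTP, but the machine-to-machine pattern is the same.

```python
# A minimal sketch of a smart-city "thing" relaying its readings to a
# control centre with no human in the loop. The URL and field names are
# hypothetical; this is an illustration of the pattern, not a real API.
import json
import random
import time
import urllib.request

CONTROL_CENTRE_URL = "https://control-centre.example.org/readings"  # hypothetical endpoint

def send_reading(sensor_id, occupied):
    """Package one parking-occupancy reading as JSON and relay it to the control centre."""
    payload = json.dumps({
        "sensor_id": sensor_id,
        "occupied": occupied,
        "timestamp": time.time(),
    }).encode("utf-8")
    request = urllib.request.Request(
        CONTROL_CENTRE_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)  # device talks directly to the server

# A parking sensor reporting once a minute, with a random stand-in reading.
while True:
    send_reading("parking-sensor-42", occupied=random.random() < 0.5)
    time.sleep(60)
```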
The Control Centre in Rio de Janeiro
While Watch Dogs raises important moral and ethical issues concerning privacy and the development of a big brother society via smart city technologies, it also raises some interesting questions about the security and reliability of the technology. In Watch Dogs, the main character can control the city infrastructure using a smartphone due to a security weakness in the ctOS. In reality, we have recently seen objects in the IoT being compromised due to weaknesses in hardware security. Baby-monitoring webcams accessed by hackers and demonstrations of how insulin pumps can be compromised are cases that have received media attention. Major vulnerabilities of the IoT were seen in late 2013 and early 2014 when an orchestrated cyber attack saw 100,000 ‘things’ connected to the Internet hacked and used to send malicious spam emails. The hacked ‘things’ included smart TVs, fridges and media centres. Basic security misconfigurations and failures to alter default passwords left devices open to attack.
Even mature internet technologies such as those used in ecommerce websites are vulnerable to hacking. In May this year eBay’s web servers were hacked, leading to the loss of user data. Security flaws in the OpenSSL cryptography library (used to transmit data securely on the Internet) came to light in April 2014 with the ‘Heartbleed’ bug. A vulnerability enabled hackers to access the short-term memory of servers and capture information such as passwords or credit card details of users who had recently interacted with the server. All technologies which can send and receive data are vulnerable to attack and misuse unless strict security protocols are used and kept up-to-date. Unfortunately, as the examples here highlight, it seems that solutions to security issues are only provided after a problem or a breach has been detected. This is because it is often an unknown bug in the code or poor coding practice which provides a way for hackers to access systems remotely. There is a reluctance to invest in thorough testing of technologies for such weaknesses. Development companies seem prepared to risk cyber attacks rather than invest in the resources required to identify problem areas.
Hacking the Smart City
The fact that all systems connected to the Internet appear vulnerable to cyber attacks is very worrying when considered in the context of smart cities. Unlike personal webcams, TVs and fridges, the technology of smart cities forms part of a complex and critical infrastructure which is used to calibrate and control a city. While governments and city authorities are generally slow to publicise attacks on their technological infrastructure, the Israeli government has acknowledged that essential services that run off sensors, such as water, electricity and banking, have been the target of numerous hacking attacks. For example, in 2013 the traffic management system for a main artery in the port city of Haifa was hacked, causing major traffic problems that lasted for several hours. Such malicious hijacking of technology is inconvenient for citizens, costs the city financially and could also have fatal consequences. Last year, it was demonstrated that it was relatively easy to hack the traffic light system in New York City. By sending false signals regarding the traffic flow at particular junctions, the algorithm used to control the traffic light sequence could be outsmarted and fooled into thinking that a particular junction was busy, adjusting the green time of the lights in a particular direction as a result.
City technology is built on legacy systems which have been incrementally updated as technology has changed. Security was often not considered in the original design and was only added afterwards. This makes such legacy systems more vulnerable to exploitation. For example, many of the traffic light coordination systems in cities date from the 1980s, when the main security threat was physical interference. Another problem with city technology lies in the underlying algorithms, which can be purely reactive to the data they receive. If false data is supplied then the algorithm may produce undesirable consequences, as the sketch below illustrates. While the discussion here has focused on sensors embedded in the city, other sources of data, such as social media, are open to the same abuse. In April 2013, the Twitter account of The Associated Press was hacked and a message reporting an attack on President Barack Obama was posted. This led to $136 billion being wiped off the NY stock exchange within minutes. This is an example of humans using bad data to make a bad decision. If the human cognition process is unable to interpret bad data, what hope do pre-programmed computer algorithms have?
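The toy Python controller below shows how a purely reactive algorithm can be gamed: green time is allocated in proportion to reported vehicle counts with no plausibility checks, so falsified counts directly skew the timing plan. The function, counts and timings are invented for illustration and do not describe any real traffic system.

```python
# A toy purely reactive signal controller: green time is split across
# approaches in proportion to reported vehicle counts, with no checks on
# whether the counts are plausible. All values are illustrative.

def allocate_green_time(counts, cycle_seconds=90, minimum_green=10):
    """Split a fixed cycle between approaches according to reported demand."""
    total = sum(counts.values())
    if total == 0:
        # No demand reported: fall back to an even split.
        return {approach: cycle_seconds / len(counts) for approach in counts}
    return {
        approach: max(minimum_green, cycle_seconds * count / total)
        for approach, count in counts.items()
    }

# Genuine readings: similar demand on both approaches, balanced green times.
print(allocate_green_time({"north-south": 12, "east-west": 10}))

# Spoofed readings claiming heavy demand east-west: the controller obligingly
# starves the north-south approach, even though real traffic is unchanged.
print(allocate_green_time({"north-south": 12, "east-west": 300}))
```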
As cities continue to roll out technologies aimed at enhancing the lives of citizens, they are moving towards data-driven forms of governance for both long-term and short-term actions. Whatever type of sensor is collecting data, there is a danger that the data can be biased, corrupted, gamed, contain errors or even be faked through hacking. It is therefore imperative for city officials to question the trustworthiness of data used in decision making. From a technical point of view, the data can be made safer by calibrating the sensors regularly and validating their readings against other sensors. From a security perspective, the hardware needs to be secured, maintained and updated to prevent malicious hacking of the device. Recognising the threat highlighted by Watch Dogs, the US-based Center for Internet Security (CIS) issued a Cyber Alert regarding the game, stating that ‘CIS believes it is likely that a small percentage of Watch Dog players will experiment with compromising computers and electronic systems outside of game play, and this activity will likely affect SLTT (State, Local, Tribal and Territorial) government systems and Department of Transportation (DOT) systems in particular.’
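One simple way to operationalise the cross-validation of sensor readings suggested above is to compare each reading against the median of its neighbours and flag large deviations before they enter a control decision. The Python sketch below is an assumption about how such a check might look, not a description of any deployed system; the threshold and function names are illustrative.

```python
# A rough sketch of cross-validating a sensor reading against nearby sensors
# before it is used in a control decision. The deviation threshold and the
# choice of a median baseline are illustrative assumptions.
from statistics import median

def reading_is_plausible(reading, neighbour_readings, max_deviation=5.0):
    """Flag a reading as suspect if it deviates too far from its neighbours."""
    if not neighbour_readings:
        return True  # nothing to compare against; accept, but with caution
    baseline = median(neighbour_readings)
    return abs(reading - baseline) <= max_deviation

# Example: a traffic-count sensor reporting 95 vehicles per minute while its
# neighbours report 10-14 would be flagged for inspection rather than fed
# straight into the control loop.
print(reading_is_plausible(95, [10, 12, 14]))   # False -> treat as suspect
print(reading_is_plausible(12, [10, 12, 14]))   # True  -> use in the control loop
```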
In other domains, such as the motor industry, there is a move to transfer functions from the human operator to algorithms. For example, automatic braking, parking assistance, distance-based cruise control and pedestrian detection are becoming mainstream in-car technologies in a slow move towards vehicles which drive themselves. It is likely that managing the city will follow the same pattern: incrementally the city will ‘drive’ itself and could ultimately be completely controlled by data-driven algorithms which react to a network of sensors. Although agencies such as the CIS give some advice to minimise the risk of cyber attacks on cities, it seems that hacking of smart city infrastructure is inevitable. The reliance of cities on software and the risks associated with this strategy are well known (Dodge & Kitchin, 2004; Kitchin, 2014). The problem is compounded by the disappearance of the analogue alternative to smart city technologies (Townsend, 2013). This could lead to prolonged recovery from attacks and bugs due to the total reliance on technology to run cities. Cities therefore need to consider the security risks connected to deploying and using sensors to control a city and make decisions. It is also essential that control loops and contingency plans are in place to allow a city to function during a data outage, just as contingency plans are made for handling the loss of other essential services such as power and water.
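As a final illustration, the sketch below shows one possible shape for such a contingency control loop: if sensor data is missing or stale, the controller reverts to a pre-agreed fixed timing plan rather than continuing to act on the last, possibly compromised, readings. The staleness threshold and plan values are hypothetical.

```python
# A minimal sketch of a contingency control loop: fall back to a fixed,
# pre-agreed timing plan whenever recent sensor data is unavailable.
# The 120-second staleness threshold and the plan values are illustrative.
import time

FIXED_PLAN = {"north-south": 45, "east-west": 45}  # fallback green times (seconds)
MAX_AGE_SECONDS = 120

def choose_plan(latest_reading, reactive_plan):
    """Use the data-driven plan only when a sufficiently recent reading exists."""
    if latest_reading is None:
        return FIXED_PLAN  # data outage: no reading at all
    if time.time() - latest_reading["timestamp"] > MAX_AGE_SECONDS:
        return FIXED_PLAN  # reading is stale: do not trust it
    return reactive_plan

# With fresh data the reactive plan is used; during an outage the city
# degrades gracefully to the fixed plan instead of acting on bad data.
fresh = {"timestamp": time.time()}
print(choose_plan(fresh, {"north-south": 60, "east-west": 30}))
print(choose_plan(None, {"north-south": 60, "east-west": 30}))
```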
References
Dodge, M., & Kitchin, R. (2004). Flying through code/space: The real virtuality of air travel. Environment and Planning A, 36(2), 195–211.
Townsend, A. (2013). Smart cities: Big data, civic hackers, and the quest for a new utopia. New York: W.W. Norton & Co.
Rob provides an overview of The Programmable City project in this launch video, which includes the ideas underpinning the research and the prospective case studies. Here are links to the slides and the complete program.
The first published output of the Programmable City project was a working paper by Rob Kitchin, presented at the ‘Smart Urbanism: Utopian Vision or False Dawn’ workshop at the University of Durham, 20-21 June 2013, and published on the Social Science Research Network.
Abstract
‘Smart cities’ is a term that has gained traction in academia, business and government to describe cities that, on the one hand, are increasingly composed of and monitored by pervasive and ubiquitous computing and, on the other, whose economy and governance is being driven by innovation, creativity and entrepreneurship, enacted by smart people. This paper focuses on the former and how cities are being instrumented with digital devices and infrastructure that produce ‘big data’ which enable real-time analysis of city life, new modes of technocratic urban governance, and a re-imagining of cities. The paper details a number of projects that seek to produce a real-time analysis of the city and provides a critical reflection on the implications of big data and smart urbanism.