Tag Archives: social media

Seminar 2: Tweeting the Smart City by Prof. Gillian Rose

We are very excited to announce that our next seminar will feature Professor Gillian Rose (Oxford University), and is jointly organised with the Social Sciences Institute and the Geography Department. The seminar is entitled ‘Tweeting the Smart City: The Affective Enactments of the Smart City on Social Media’; further details are below. We look forward to seeing many of you there!

Time: 13:00 to 14:30, Thursday, 26th October
Venue: Rocque Lab, Rhetoric House, South Campus, Maynooth University (Building #17 on the campus map)
Digital technologies of various kinds are now the means through which many cities are made visible and their spatialities negotiated. From casual snaps shared on Instagram to elaborate photo-realistic visualisations, digital technologies for making, distributing and viewing cities are more and more pervasive. This talk will explore some of the implications of that digital mediation of urban spaces. What forms of urban life are being made visible in these digitally mediated cities, and how? Through what configurations of temporality, spatiality and embodiment? And how should that picturing be theorised? Drawing on recent work on the visualisation of so-called ‘smart cities’ on social media, the lecture will suggest the scale and pervasiveness of digital imagery now means that notions of ‘representation’ have to be rethought. Cities and their inhabitants are increasingly mediated through a febrile cloud of streaming image files; as well as representing cities, this cloud also operationalises particular, affective ways of being urban. The lecture will explore some of the implications of this shift for both theory and method as well as critique.


The limits of social media big data

A new book chapter by Rob Kitchin has been published in The Sage Handbook of Social Media Research Methods, edited by Luke Sloan and Anabel Quan-Haase. The chapter is titled ‘Big data – hype or revolution’ and provides a general introduction to big data, new epistemologies and data analytics, with the latter part focusing on social media data. The text below is a sample taken from a section titled ‘The limits of social media big data’.

The discussion so far has argued that there is something qualitatively different about big data compared with small data and that it opens up new epistemological possibilities, some of which have more value than others. In general terms, it has been intimated that big data represents a revolution in measurement that will inevitably lead to a revolution in how academic research is conducted, and that big data studies will replace small data ones. However, this is unlikely to be the case for a number of reasons.

Whilst small data may be limited in volume and velocity, they have a long history of development across science, state agencies, non-governmental organizations and business, with established methodologies and modes of analysis, and a record of producing meaningful answers. Small data studies can be much more finely tailored to answer specific research questions and to explore in detail and in-depth the varied, contextual, rational and irrational ways in which people interact and make sense of the world, and how processes work. Small data can focus on specific cases and tell individual, nuanced and contextual stories.

Big data is often being repurposed to try and answer questions for which it was never designed. For example, geotagged Twitter data have not been produced to provide answers with respect to the geographical concentration of language groups in a city and the processes driving such spatial autocorrelation. We should perhaps not be surprised then that it only provides a surface snapshot, albeit an interesting snapshot, rather than deep penetrating insights into the geographies of race, language, agglomeration and segregation in particular locales. Moreover, big data might seek to be exhaustive, but as with all data they are both a representation and a sample. What data are captured is shaped by: the field of view/sampling frame (where data capture devices are deployed and what their settings/parameters are; who uses a space or media, e.g., who belongs to Facebook); the technology and platform used (different surveys, sensors, lens, textual prompts, layout, etc. all produce variances and biases in what data are generated); the context in which data are generated (unfolding events mean data are always situated with respect to circumstance); the data ontology employed (how the data are calibrated and classified); and the regulatory environment with respect to privacy, data protection and security (Kitchin, 2013, 2014a). Further, big data generally capture what is easy to ensnare – data that are openly expressed (what is typed, swiped, scanned, sensed, etc.; people’s actions and behaviours; the movement of things) – as well as data that are the ‘exhaust’, a by-product, of the primary task/output.

Small data studies then mine gold from working a narrow seam, whereas big data studies seek to extract nuggets through open-pit mining, scooping up and sieving huge tracts of land. These two approaches of narrow versus open mining have consequences with respect to data quality, fidelity and lineage. Given the limited sample sizes of small data, three properties are of paramount importance: data quality (how clean – error and gap free – objective – bias free – and consistent – few discrepancies – the data are); veracity (the authenticity of the data and the extent to which they accurately (precision) and faithfully (fidelity, reliability) represent what they are meant to); and lineage (documentation that establishes provenance and fitness for use) (Lauriault, 2012). In contrast, it has been argued by some that big data studies do not need the same standards of data quality, veracity and lineage because the exhaustive nature of the dataset removes sampling biases and more than compensates for any errors, gaps or inconsistencies in the data, or weaknesses in fidelity (Mayer-Schonberger and Cukier, 2013). The argument for such a view is that ‘with less error from sampling we can accept more measurement error’ (p. 13) and ‘tolerate inexactitude’ (p. 16).

Nonetheless, the warning ‘garbage in, garbage out’ still holds. The data can be biased due to the demographic being sampled (e.g., not everybody uses Twitter), or the data might be gamed or faked through false accounts or hacking (e.g., there are hundreds of thousands of fake Twitter accounts seeking to influence trending and direct clickstream trails) (Bollier, 2010; Crampton et al., 2012). Moreover, the technologies being used and their working parameters can affect the nature of the data. For example, which posts on social media are most read or shared is strongly affected by ranking algorithms, not simply interest (Baym, 2013). Similarly, APIs structure what data are extracted, for example, Twitter only capturing specific hashtags associated with an event rather than all relevant tweets (Bruns, 2013), with González-Bailón et al. (2012) finding that different methods of accessing Twitter data – search APIs versus streaming APIs – produced quite different sets of results. As a consequence, there is no guarantee that two teams of researchers attempting to gather the same data at the same time will end up with identical datasets (Bruns, 2013). Further, the choice of metadata and variables that are generated, and which ones are ignored, paints a particular picture (Graham, 2012). With respect to fidelity, there are question marks over the extent to which social media posts really represent people’s views and the faith that should be placed in them. Manovich (2011: 6) warns that ‘[p]eoples’ posts, tweets, uploaded photographs, comments, and other types of online participation are not transparent windows into their selves; instead, they are often carefully curated and systematically managed’.
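The point that collection criteria shape the resulting dataset can be illustrated with a toy sketch. The tweets and filtering rules below are invented for illustration and make no real API calls; they simply show how a hashtag-based capture and a broader keyword capture over the same stream yield different datasets, and how both can miss relevant material entirely:

```python
# Invented sample of posts about the same event (no real API involved).
tweets = [
    {"id": 1, "text": "Huge crowds at the march today #protest"},
    {"id": 2, "text": "Stuck in traffic because of the protest"},
    {"id": 3, "text": "#protest turnout estimates vary wildly"},
    {"id": 4, "text": "Police rerouting buses around the demonstration"},
]

# Strategy A: capture only tweets carrying the event hashtag.
by_hashtag = [t for t in tweets if "#protest" in t["text"].lower()]

# Strategy B: capture tweets matching a broader keyword.
by_keyword = [t for t in tweets if "protest" in t["text"].lower()]

print([t["id"] for t in by_hashtag])  # [1, 3]
print([t["id"] for t in by_keyword])  # [1, 2, 3] -- tweet 4 missed by both
```

Two research teams choosing different strategies here would end up with different datasets, and neither would capture tweet 4, which discusses the event without using the tracked terms.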

There are also issues of access to both small and big data. Small data produced by academia, public institutions, non-governmental organizations and private entities can be restricted in access, limited in use to defined personnel, or available for a fee or under license. Increasingly, however, public institution and academic data are becoming more open. Big data are, with a few exceptions such as satellite imagery and national security and policing, mainly produced by the private sector. Access is usually restricted behind paywalls and proprietary licensing, limited to ensure competitive advantage and to leverage income through their sale or licensing (CIPPIC, 2006). Indeed, it is somewhat of a paradox that only a handful of entities are drowning in the data deluge (boyd and Crawford, 2012), and companies such as mobile phone operators, app developers, social media providers, financial institutions, retail chains, and surveillance and security firms are under no obligation to share freely the data they collect through their operations. In some cases, a limited amount of the data might be made available to researchers or the public through Application Programming Interfaces (APIs). For example, Twitter allows a few companies to access its firehose (stream of data) for a fee for commercial purposes (and has the latitude to dictate terms with respect to what can be done with such data), but with a handful of exceptions researchers are restricted to a ‘gardenhose’ (c. 10 percent of public tweets), a ‘spritzer’ (c. one percent of public tweets), or to different subsets of content (‘white-listed’ accounts), with private and protected tweets excluded in all cases (boyd and Crawford, 2012). The worry is that the insights that privately owned and commercially sold big data can provide will be limited to a privileged set of academic researchers whose findings cannot be replicated or validated (Lazer et al., 2009).

Given the relative strengths and limitations of big and small data, it is fair to say that small data studies will continue to be an important element of the research landscape, despite the benefits that might accrue from using big data such as social media data. However, small data studies will increasingly come under pressure to utilize new archiving technologies and to be scaled up within digital data infrastructures, so that they are preserved for future generations, become accessible for re-use and combination with other small and big data, and yield more value and insight through the application of big data analytics.

Rob Kitchin

Emerging Technological Responses in Emergency Management Systems

The advent of discourses around the ‘smart city’, big data, open data and urban analytics, the introduction of ‘smarter technology’ within cities, the sharing of real-time information, and the emergence of social media platforms have had a number of effects on emergency services worldwide. Together they provide opportunities and promises for emergency services regarding efficiency, community engagement and better real-time coordination. Thus, we are seeing a growth in technologically based emergency response. However, such developments are also riddled with broad concerns around privacy, ethics, reliability, accessibility, staff reluctance and fear.

This post considers one recent technological push for the re-invention of the emergency call system (911bot) and another for the sharing of real-time information during a major event (Smartphone Terror Alert).


In recent years, there has been a significant move away from voice calls towards texting and internet-based platforms (e.g., WhatsApp and Twitter) (see Figure 1). This shift is tracked regularly by the International Smartphone Mobility Report, conducted across 12 countries by the data-tracking company Infomate. In 2015, it found that in America the average time spent on voice calls was 6 minutes as opposed to 26 minutes texting, and that worldwide, internet-based platforms were the main form of communication (Infomate, 2015; Shropshire, 2015).



Figure 1: Cell phone Communication. Source: Russell (2015).

In light of this, there is a push by both the private sector and entrepreneurs to utilise mobile phones and social media platforms in new ways, such as within the emergency call system. Within my own field research, I have questioned first responders in Ireland and the US regarding the use of social media and apps as alternatives to the current telephone system. For the most part, this was met with disdain and confusion from first responders. Strong arguments were made against a move away from a call-dominated response system. These included:

a) Difficulty in obtaining relevant and accurate information regarding the event, including changing conditions and situations.

b) Inability to provide the victim or caller with accurate instructions and information.

c) Restrictions in contacting the caller.

d) The system would need an overhaul for it to work, i.e. a dedicated team ensuring that these messages are not missed, and staff training would be required.

e) Call systems are established mechanisms for contacting the emergency services; why change something that works?

f) If you use something like Twitter or Facebook to report an emergency, how do we ensure that it is reported correctly and not just tweeted or messaged to an interface which is not monitored 24/7?

As can be seen in the following conversation with two operational first responders in Dublin, Ireland, they want new technology but are also highly hesitant about its ability to ensure a quick response.

Conversation between researcher and two first responders

R1: See the problem with a tweet and a text, I can’t get any information out of that, like I could tweet and back and then you are waiting for them to send something back, when I have you on the phone, I can question you, “What is it?”, “What is wrong?”, “What is the problem?”.

R2: If you did go with something like [social media platform for emergency call intake], you would have to have the likes of, if you are the tweet man then you would have to be 100% on the phone looking at it

R2: It probably would work if it wasn’t an emergency as such, not a full emergency

R1: I think people need to be re-assured that someone has seen it and really knows what is happening.

R1: Jesus you could have everyone tweeting saying I have a sore stomach and that would register as a call for us so the calls would just get worse and worse. [...] I think if you ring Domino Pizza now, it will know who you are, where you are and your order

R2: They can read the caller ID coming

R1: We haven’t got that

All of these are understandable concerns, but they also illustrate a resistance to innovation that might bring cultural and institutional change, a resistance rooted in highly legitimate fears about effectiveness and reliability. Even so, responders welcome technology with obvious benefits for them, such as the “Domino’s Pizza” caller ID system, but are more reluctant towards innovations such as the 911bot, whose value is overshadowed by fears of inefficiency, information gaps and unreliability. However, the 911bot does potentially address some of these issues within its design.

The 911bot (figure 2) was developed during TechCrunch’s Disrupt Hackathon in New York in 2016. It works through Facebook Messenger, which had a reported 1 billion users in July 2016 (Costine, 2016), to allow users to report an emergency. Initially, one would be forgiven for immediately thinking of the arguments made against a transformation of the current system as presented above. However, the Messenger app already offers location services based on the phone’s GPS; thus, when reporting an incident, your exact location is immediately sent (although you can turn off your GPS signal and restrict your location being shared, when using this bot there is potential for that to be overridden). The person reporting the incident can also send pictures or videos, and the bot can provide information on what you should and shouldn’t do in that situation, such as how to do CPR during a cardiac arrest (Westlake, 2016).

Further, this bot has the potential to feed back the location of the first responders to the reporter. It provides the control room with more accurate information coming from real-time videos and pictures, meaning that it is not relying wholly on information from untrained and scared people. And, most importantly, this system doesn’t stop the control room from interacting with the caller. From the information provided by the developers, it appears that once the messenger sends the request, the control room calls the phone and resumes its role, but with more information. Going forward, this could possibly even be done through FaceTime so that the control room has live interaction with the event prior to the arrival of the first responders. Although the 911bot has only been developed and not deployed, in time and after much consultation and experimentation it could prove very beneficial within emergency response. For instance, if the control room operator can actually see how the person is conducting CPR, can see and hear their breathing, and can see the extent of the injury, fire or road traffic collision in real time, it would inform decision-making that could create better and more efficient responses. However, it would be remiss to discuss this without noting that there are potential privacy issues with the mass use of this type of technology, outside the remit of this post, that would need to be considered.
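To make the reporting flow concrete, here is a minimal sketch of how a bot-mediated report might be assembled into a dispatch record. The function and field names are hypothetical, since the 911bot’s actual implementation is not public; the sketch simply captures the flow described above (auto-attached location, media, guidance, then a callback from the control room):

```python
def handle_incident_report(message):
    """Assemble a dispatch record from a hypothetical chat-bot incident report."""
    report = {
        "location": message.get("location"),      # GPS fix attached by the app
        "description": message.get("text", ""),
        "media": message.get("attachments", []),  # photos/videos from reporter
    }
    # The bot can push first-aid guidance while help is en route.
    if "cardiac" in report["description"].lower():
        report["guidance"] = "CPR instructions sent to reporter"
    # The control room then calls the reporter back with this context in hand.
    report["callback_requested"] = True
    return report

sample = {"location": (53.35, -6.26), "text": "Cardiac arrest at bus stop",
          "attachments": ["photo1.jpg"]}
print(handle_incident_report(sample)["guidance"])
```

The design point is that the text/media channel supplements rather than replaces the voice call: the record carries richer context, but a callback is still requested, addressing objections (a) to (c) above.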


Figure 2: 911bot. Source: 911bot online.

Smartphone Terror Alert

Another new use of mobile technology was the mass terror alert issued on September 17th 2016, after Chelsea, Manhattan was hit with an explosion. The alert (figure 3) was issued by the Office of Emergency Management, the New York Police Department and the FBI through all phone networks. It was received by an unknown number of people and provided information about the key suspect, Ahmad Khan Rahami. The press secretary for New York Mayor Bill de Blasio stated that it was the first use of this alert at a “mass scale”, and as the suspect was caught within 3 hours, it presented the appearance that the alert was effective, with New York’s Police Commissioner stating “it was the future” (Fiegerman, 2016). Yet there is no evidence that the alert had anything to do with the catching of the suspect; the two events could be coincidental.


Figure 3: Smart phone terror Alert. Source: published in Fiegerman (2016).

Further, as Anil Dash asked (quoted in Fiegerman, 2016), how effective was it actually? “Is there evidence that low-information untargeted push notifications help with any kind of crime? Seems they’re more optimised for panic”. This is compounded by the lack of an all-clear alert, which would work to ease tensions and potential panic. We live in a socially constructed risk society (Beck, 1992; 2009), and with innovations such as this, even if the intention is good, the potential for mass panic is created, which raises questions regarding the appropriateness of this mechanism. In this instance, sending an alert with little information, using just a name, makes everyone who could fit that name a potential target, and is an action that could create panic, fear and racial attacks under the illusion of “citizen’s arrest”. However, this system has potential, especially if it were utilised during severe weather events to provide information on evacuation centres and resources, rather than during more sensitive events such as a manhunt. Essentially, though, before it can be deemed thoroughly effective and safe, there needs to be stringent supportive policy and agency and community training to ensure that the response from agencies and communities is coordinated and effective rather than panicked and uninformed. So, I wonder, is this really the future, and indeed, does it need to be the future? Is it already the present, with no reflection on the potential consequences of such a system by the lead federal and local emergency agencies and institutions? I don’t have the answers to these questions, but examining the operational use of this alert, even at its small scale of use, provides opportunities to begin to tease out the danger of a dichotomy between effectiveness and panic, and to explore issues around privacy, fear, reliability and usefulness.

In conclusion, this post has presented two different innovations within emergency management, one being experimented with and one which has been implemented. What is clear is that changes in how we engage with control centres and emergency services are taking place, albeit slowly. One can only hope, especially in relation to the alert system, that the criticisms levelled will be engaged with and solutions sought.


911bot (2016) 911bot. [Online]. Available at: http://www.911bot.online/) (Accessed 9th November 2016).

Beck, U., (1992). Risk Society: Towards a New Modernity. London: Sage.

Beck, U., (2009). World of Risk. Cambridge: Polity Press.

Costine, J. (2016) How Facebook Messenger clawed its way to 1 billion users. [Online].  Available at: https://techcrunch.com/2016/07/20/one-billion-messengers/ (Accessed 8th November 2016).

Fiegerman, S. (2016) The story behind the Smartphone Terror Alert in NYC. [Online]. Available at: http://money.cnn.com/2016/09/19/technology/chelsea-explosion-emergency-alert/ (Accessed 9th November 2016).

Infomate (2015) The International Smartphone Mobility Report [Online]. Available for download at: the International Smartphone Mobility Report (Accessed 7th November 2016).

Russell, D. (2015) We just don’t speak anymore. But we’re ‘talking’ more than ever. [Online].  Available at: http://attentiv.com/we-dont-speak/ (Accessed 9th November 2016).

Shropshire, C. (2015) Americans prefer texting to talking, report says. Chicago Tribune [Online].  Available at: http://www.chicagotribune.com/business/ct-americans-texting-00327-biz-20150326-story.html (Accessed 9th November 2016).

Westlake, A. (2016) Finally, there’s a chat bot for calling 911. [Online].  Available at: http://www.slashgear.com/finally-theres-a-chat-bot-for-calling-911-08439211/ (Accessed 7th November 2016).


Event – Privacy: gathering insights from lawyers and technologists

The roundtable event ‘Privacy: Gathering insights from lawyers and technologists’ is scheduled for Wednesday 1st July 2015. The event will be held at the Phoenix Building, North Campus, Maynooth University, and has been organised by faculty at the University in conjunction with the British and Irish Law Education and Technology Association.

The event will bring technologists, legal practitioners, technology companies and academics together in order to address the common issues faced by the different parties. The goal is to facilitate the communication of differing perspectives in an effort to formulate a unified approach to developing privacy issues.

Confirmed speakers for the event are:

Dara Murphy, TD – Minister for European Affairs and Data Protection.
Helen Dixon – Data Protection Commissioner of Ireland.

Confirmed speakers for the first session of the event, “Privacy in a digital world: notions and understandings of privacy in a digital infrastructure”, are:

Confirmed speakers for the second session of the event, “The Right to be Forgotten, demystified…”, are:

  • Ronan Kennedy, Lecturer in Law, National University of Ireland, Galway.
  • Dr Michael Lang, Lecturer in Information Systems, National University of Ireland, Galway.
  • William Malcolm, Senior Privacy Counsel, Google.
  • Rob Corbet, Technology and Innovation Lawyer, Arthur Cox.
  • Eoin O’Dell, Associate Professor, School of Law, Trinity College Dublin.

For further information and tickets to the event, please visit the project webpage or contact the organisers Maria Murphy or Leighton Evans.

Code and the City workshop videos: Session 3

If you missed the first and second sessions of the Code and the City workshop videos, the embedded links will lead you to them. And now it is time for Session 3!

Session 3: Locative/social media

Digital social interactions in the city: Reflecting on location-based social media
Luigina Ciolfi, Human-Centred Computing, Sheffield Hallam University
Gabriela Avram, University of Limerick

Location-based social media increasingly mediate social and interpersonal interactions in urban settings. Such practices become coded in software representing both the log and content of social interactions and the location to which they relate. Therefore a digital “cloud” of social interactions becomes embedded into the physical reality of the city, of its neighbourhoods, public places, cafés, transportation hubs and any other location identified by social media users (through user-initiated “check-ins” or the content that they generate, such as photographs) and by the tools they use (for example, through automatic geo-tagging). Two sets of issues to be investigated are emerging. The first concerns how such localised interactions populate the algorithms and infrastructures provided by the software: how are the platforms of location-based social media framing people’s perceptions and identifications of locations? How is code both facilitating and representing a set of social interactions relating to various spatial configurations? A second set of issues regards the re-materialisation of such a cloud of interactions in the physical world: could it be made somehow perceivable and/or tangible in the physical world by the way in which certain environments are designed?

Overall, could new approaches to urban planning and environmental design become concerned with accommodating and facilitating these social interactions, as they already do for in-presence, analogue ones?

This paper will attempt to define and discuss these issues drawing both from interaction design and human-computer interaction literature on physical/digital interactions and from two preliminary empirical studies of location-based social media use in two cities.

Feeling place in the city: strange ontologies, Foursquare and location-based social media
Leighton Evans, National University of Ireland Maynooth

Certain instances of the use of location-based social media in cities can result in deep understandings of novel locations. The contributions of other users and the information pushed to users when in particular locales can help users rapidly attune themselves to places and achieve an understanding of the place. The use of a computational device and location-based social networking to achieve this understanding indicates an alteration in the achievement of placehood using computational technology. Practices and methods of understanding place can, in some situations, be delegated to the device and application. This paper explores how the moment that place is appreciated as place (that is, as a meaningful existential locale) can be reconciled with the delegation of the epistemologies of placehood to a computational device and location-based social media application. Drawing on data from an ethnographic study of Foursquare users, the phenomenological appreciation of place is understood as co-constituent between the device, application and the mood of the user. Code and computational devices are contextualised as a constant foregrounding presence in the city, and the engagement of the user, device, code and data in understanding place is a moment of revealing that is co-constituent of all these elements. This exploratory paper engages Peter Sloterdijk’s theory of spheres as a framework to understand how these four elements interact, and how that interaction of elements can orient a user to a revealing of the city that can be understood as a phenomenological revealing of place.

Cultural curation and urban Interfaces: Locative media as experimental platforms for cultural data
Nanna Verhoeff, Department of Media and Culture Studies, Utrecht University

My contribution is concerned with the way in which urban interfaces are used for access to cultural collections – whether institutionally embedded, or bottom-up, participatory collections. Designed in code and exploring affordances of new location-based and/or mobile technologies for urban space-making, these interfaces are thought to be powerful tools for ideals of participatory urban culture. I propose to approach these “projects” as curatorial machines, as urban experimental laboratories for cultural data. This entails a threefold perspective, on curation, on code, and on principles of creative (sometimes artistic or playful) experimentation.

For this, we may remind ourselves of the curatorial project of museal and archival institutions, of preserving, and “caring” for the object, as well as creating new contexts for the object and providing access for an urban public – a field which is very much in transition as a result of current ambitions for new public engagement and ideals of participation, pervasive in all socio-economic and political regions of contemporary culture. Simultaneously we witness the current interest in the principles of data curation as the care for, interaction with, interpretation and visualisation of digital data, as the datafication and codification of culture invades all corners of urban life. Design of interfaces is central in how we can access, work with, and make meaning with digital culture. Departing from the concept of dispositif in the analysis of interfaces, I propose to bring together the fact that the interfaces are coded and designed, to (playfully) experiment with their affordances.

In my approach to this intersection of datafication of, and the proliferation of interfaces for “culture”, I aim to develop heuristic tools for critical evaluation of this phenomenon, broadly bracketed as [urban interfaces] as interfaces of cultural curation.

A Window, a message, or a medium? Learning about cities from Instagram
Lev Manovich, Computer Science, The Graduate Center, City University of New York

Over the last few years, tens of thousands of researchers in social computing and the computational social sciences have started to use data from social networks and media-sharing services (such as Twitter, Foursquare and Instagram) created by users of mobile platforms. The research uses techniques from statistics, machine learning and visualization, among others, to analyze all kinds of patterns contained in this data and also (less frequently) to propose new models for understanding the social. Examples include analysis of information propagation on Twitter, predicting the popularity of photos on Flickr, proposing new sets of city neighborhoods using Foursquare users’ check-ins, and understanding connections between musical genres using listening data from Echonest.

In my talk I will address a fundamental question we face in doing this research: what exactly are we learning when analyzing social media data? Is it a window into real-world social and cultural behaviors, a reflection of the lifestyles of the particular demographics who use mobile platforms and particular network services, or only an artifact of mobile apps? In other words, is social media a “message” or a “medium”?

I will discuss this question using three recent projects from my lab (softwarestudies.com). The projects use large sets of Instagram images and accompanying data together with data science and visualization tools. Phototrails.net (2013) analyzes 2.3 million photos from 13 global cities to investigate how different kinds of events are represented in these photos. The project also investigates whether the universal affordances of the Instagram app (the same interface and same set of filters available to all users) result in a universal digital visual language. Selfiecity.net (2014) analyzes a distinct artifact of mobile platforms: selfies. We compare thousands of selfies to see if the cultural specificity of different places and cultures is preserved in this genre. Finally, our third project compares Instagram photos taken by visitors in a few major modern art museums, asking if photographs of famous works of art differ depending on what these artworks are and where they are situated.