Category Archives: information

Will CovidTracker Ireland work?

The coronavirus pandemic has posed enormous challenges for governments seeking to delay, contain and mitigate its effects. Along with measures within health services, a range of disruptive public health tactics have been adopted to try to limit the spread of the virus and flatten the curve, including social distancing, self-isolation, forbidding social gatherings, limiting travel, enforced quarantining, and lockdowns. Across a number of countries these measures are being supplemented by a range of digital technologies designed to improve their efficiency and effectiveness by harnessing fine-grained, real-time big data. In general, the technologies being developed and rolled out fall into four types: contact tracing, quarantine enforcement/movement permission, pattern and flow modelling, and symptom tracking. The Irish government is pursuing two of these – contact tracing and symptom tracking – merged into a single app, ‘CovidTracker Ireland’. In this short essay, I outline what is known about the Irish approach to developing this app and assess whether it will work effectively in practice.

CovidTracker Ireland

On March 29th 2020 the Health Service Executive (HSE) announced that it hoped to launch a Covid-19 contact tracing app within a matter of days. Few details were given about the proposed app functionality or architecture, other than that it would mimic other tracing apps, such as Singapore’s TraceTogether, using Bluetooth connections to record proximate devices and thus possible contacts, together with additional features for reporting well-being. The HSE made it clear that it would be an opt-in rather than compulsory initiative, that the app would respect privacy and GDPR, being produced in consultation with the Data Protection Commission, and that it would be time-limited to the coronavirus response. It was not stated who would develop the app beyond it being described as a ‘cross-government’ effort.

On April 10th, the HSE revealed more details through a response to questions from Broadsheet.ie, stating that the now named CovidTracker Ireland App will:

  • “help the health service with its efforts in contact tracing for people who are confirmed cases;
  • allow a user to record how well they are feeling, or track their symptoms every day;
  • provide links to advice if the user has symptoms or is feeling unwell;
  • give the user up-to-date information about the virus in Ireland.”

Further, they reiterated that the app ‘will be designed in a way that maximises privacy as well as maximising value for public health. Privacy-by-design is a core principle underpinning the design of the CovidTracker Ireland App – which will operate on a voluntary and fully opt-in basis.’ There was no mention of the approach being taken; however, the use of the HSE logo on the PEPP-PT (Pan-European Privacy-Preserving Proximity Tracing) website indicates that it has adopted that architecture, an initiative that claims seven countries are using its approach, with reportedly another 40 countries involved in discussions.

As of April 22nd the CovidTracker Ireland app is under development, with the HSE stating on April 17th that it was being tested, with a target of launching by early May, when it is planned that some government restrictions will be lifted.

Critique and concerns

From the date it was announced, concerns have been expressed about CovidTracker Ireland, particularly by representatives of Digital Rights Ireland and the Irish Council for Civil Liberties. A key issue has been the lack of transparency and openness in the approach being taken. An app will simply be launched for use without any published details of the approach and architecture adopted, without consultation with stakeholders, and without public piloting or external feedback and assessment.

There are concerns that a centralized, rather than decentralized, approach will be taken, and there is no indication that the underlying code will be open for scrutiny, if not by the public, at least by experts. It is not clear if the app is being developed in-house, or if it has been contracted out to a third-party developer, and if the associated contract includes clauses concerning data ownership, re-use and sale, and intellectual property. There are no details about where data will be stored, who will have access to it, how it will be distributed, or how it will be acted upon. There is unease as to whether the app will be fully compliant with GDPR and fully protect privacy, especially given that a Data Protection Impact Assessment (DPIA), which is legally required before launch, has seemingly not yet been undertaken. Such a DPIA would allow independent experts to assess, validate and provide feedback and advice.

Critics are also concerned that CovidTracker Ireland merges the tasks of contact tracing and symptom tracking, which have been pursued separately in other jurisdictions. Here, two sets of personal information are being tied together: proximate contacts and health measures. This poses a larger potential privacy problem if they are not adequately protected. Moreover, critics are worried that CovidTracker Ireland might become a ‘super app’ that extends beyond its original ambition and goals. Here, the app might enable control creep, wherein it starts to be employed beyond its intended uses, such as for quarantine enforcement/movement permission. For example, Antoin O’Lachtnain of Digital Rights Ireland has speculated that we might eventually end up with an app to monitor covid-19 status that is “mandatory but not compulsory for people who deal with the public or work in a shared space.”

As Simon McGarr argues, the failure to adequately engage with these critiques and to be open and transparent means that “the launch of the app will inevitably be marred by immediately being the subject of questions and misinformation that could have been avoided by simply overcoming the State’s institutional impulse for secrecy.”

Internationally, there is scepticism concerning the method being used for app-based contact tracing and whether the critical conditions needed for successful deployment exist. Bluetooth does not have sufficient resolution to reliably determine proximity of two metres or less, and using a timeframe to denote significant encounters potentially excludes fleeting but meaningful contacts. There are also concerns with respect to representativeness (for example, 28% of people in Ireland do not own a smartphone), data quality, reliability, duping and spoofing, and rule-sets and parameters. The technical limitations are likely to lead to sizeable gaps and a large number of false positives that might produce an unmanageable signal-to-noise ratio, leading to unnecessary self-isolation measures and potentially overloading the testing system.
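
To see why resolution is a problem, consider how such apps typically infer distance: from Bluetooth received signal strength (RSSI), which fluctuates with phone model, orientation, pockets and walls. A minimal sketch of the kind of logic involved (an illustrative path-loss model and thresholds of my own choosing, not the parameters of any actual app):

```python
def estimate_distance(rssi_dbm, tx_power_dbm=-59, path_loss_exp=2.0):
    """Rough distance (metres) from RSSI via the log-distance path-loss
    model. Real-world RSSI can vary by +/-10 dBm with phone model, pocket
    placement and obstacles, so the estimate is coarse at best."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def significant_contact(samples, max_metres=2.0, min_minutes=15):
    """Flag an encounter as a 'contact' if the estimated distance stayed
    within max_metres for at least min_minutes. samples is a list of
    (minute, rssi_dbm) readings for one nearby device."""
    close = [t for t, rssi in samples if estimate_distance(rssi) <= max_metres]
    return len(close) >= min_minutes
```

A few dBm of noise is enough to push the distance estimate across the two-metre line in either direction, which is exactly the false-positive/false-negative problem raised above.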

There is a concern that app-based contact tracing is being rushed to mass roll-out without it being demonstrated that it is fit-for-purpose. Moreover, the app will only be effective in practice if there is a program of extensive testing to confirm whether a person has the virus and whether tracing is required, and if around 60% of the population (c. 80% of smartphone users) participate to ensure reach across those who have been in close contact. Symptom tracking relies on self-reporting, which lacks rigour; as testing has shown, a large proportion of people who were tested because they were experiencing symptoms returned negative results. This is likely to produce a large number of false positives, and it is doubtful that these data should guide contact tracing.
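
The participation arithmetic is worth making explicit (a back-of-envelope sketch using the figures above):

```python
population_share_needed = 0.60   # participation target cited for app efficacy
smartphone_ownership = 1 - 0.28  # 28% of people in Ireland lack a smartphone

# To reach 60% of the whole population, uptake among smartphone
# owners alone must be considerably higher:
required_uptake = population_share_needed / smartphone_ownership
print(f"{required_uptake:.0%}")  # prints 83%
```

In other words, roughly five in six smartphone users would need to install and actively use the app, before accounting for phones that are switched off or have Bluetooth disabled.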

At present, while Ireland is ramping up its testing capability towards 100,000 tests a week, it might need to increase that further. The Edmond J. Safra Center for Ethics at Harvard University suggests that in the United States: “We need to deliver 5 million tests per day by early June to deliver a safe social reopening. This number will need to increase over time (ideally by late July) to 20 million a day to fully remobilize the economy. We acknowledge that even this number may not be high enough to protect public health.” The equivalent rate for Ireland would be 300,000 tests per day. In Singapore, only 12% of people have registered to use the TraceTogether app, which raises doubts as to whether 60% of the population in Ireland will participate, especially since the public are primed to be sceptical given that media coverage about the app has raised issues of privacy, data security and data usage.
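
The 300,000 figure follows from simple population scaling (the approximate 2020 population figures below are my own assumption):

```python
us_tests_per_day = 20_000_000    # Safra Center's full-remobilisation target
us_population = 328_000_000      # approximate 2020 figure (assumption)
ireland_population = 4_900_000   # approximate 2020 figure (assumption)

# Scale the per-capita testing rate to Ireland's population:
ireland_equivalent = us_tests_per_day * ireland_population / us_population
print(f"{ireland_equivalent:,.0f}")  # prints 298,780 – i.e. roughly 300,000
```

That is around twenty times Ireland's 100,000-tests-a-week target, which gives a sense of the scale of the gap.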

Will CovidTracker Ireland work and what needs to happen?

There is unanimous agreement that contact tracing is a cornerstone measure for tackling pandemics. Assuming that the privacy and data protection issues can be adequately dealt with, it would be good to think that CovidTracker Ireland will make a difference to containing the coronavirus and stopping any additional waves of infection.

However, there are reasons to doubt that app-based contact tracing and symptom tracking will make the kind of impact hoped for unless:

  • its technical approach is sound and civil liberties protected;
  • there is testing at sufficient scale that potential cases, including false ones, are dealt with quickly;
  • the government can persuade people to participate in large numbers.

The government might also have to supply smartphones to those that do not own them, as they did in Taiwan. Persuading people to participate will be a particular challenge since the government is not currently being sufficiently transparent in explaining the approach being taken, the app’s intended technical specification, how it will operate in practice, its procedures for oversight, and how it will protect civil liberties.

It is essential that the government follow the guidance of the European Data Protection Board that recommends that strong measures are put in place to protect privacy, data minimization is practised, the source code is published and regularly reviewed, there is clear oversight and accountability, and there is purpose limitation that stops control creep.

If implemented poorly, the app could have a profound chilling effect on public trust and public health measures that might be counterproductive. As a consequence, the Ada Lovelace Institute, a leading UK centre for artificial intelligence research, is advising governments to be cautious, ethical and transparent in their use of app-based contact tracing. Ireland might do well to heed their advice.

Rob Kitchin

Rules of thumb for making decisions on requests for academic work

Last week I posted on my personal blog about the volume of requests I receive to undertake work beyond my allocated load as directed by university managers. Such work includes reviewing for journals and grant agencies, writing references and external examining, serving on advisory boards and working with local communities, and contributing papers to special issues or delivering invited talks.

While undertaking all these tasks is expected of academics as part of their normal load, constituting service work and an important part of the exchange economy of academia (Elden 2008), it is usually undertaken at their discretion. Key questions then are: which requests for labour to accept? What is an acceptable/sensible load? These questions become more pressing as the number of requests increases as an individual’s profile and academic network develop.

Most academics, I sense, find it quite difficult to evaluate and manage requests, and to say ‘no’ to many of them. In part this is because we are trying to balance a sense of obligation to participate in the exchange economy (especially if we know the person making the request), with a strategic approach to career development, and a need for personal well-being. Certainly, I struggle in deciding which requests to accept, and even though I do say ‘no’ to a lot of requests I still feel I take on too much and struggle to deliver on my promises (I typically receive about one new request a day beyond existing commitments).

On social media I was asked about how I go about making decisions on what requests to undertake and how I manage the workload. I didn’t have a ready answer because I’ve never formulated a strategy for decision-making or managing commitments. Instead, I have been using a rough set of unarticulated rules of thumb. It was also suggested that it might be useful for academics to be mentored with regards to dealing with requests.

In an effort to provide some mentoring advice, but also to try to formalise my own rules of thumb, I thought it might be useful to consider how best to deal with external requests for academic labour.

My thoughts below are not intended to be overly prescriptive and I am sure others would give alternative advice based on their own experiences; in this sense, I would see my suggestions as an opening gambit for a wider conversation on how to approach and manage such requests. Moreover, different strategies might suit different individuals depending on how they approach and manage their work.

Rather than deal with every type of request separately I have grouped them into five main types: reviewing, endorsing, evaluating, advising, and contributing. Before discussing how to deal with these five types of request, it is important to note that deciding on a strategy needs to be contextualised by personal circumstance. For example, your institution might have certain expectations with respect to service work (perhaps as part of a tenure evaluation or promotion process), varying with level of seniority or by discipline. Or you might want to maintain a sensible home/work balance! In other words, you need to develop an individual strategy that reflects local and personal expectations.

Reviewing

Usual requests: article review; grant application review; book proposal review; review submitted book manuscript.

Reviewing is an essential part of the exchange economy of academia. When we submit an article or grant application it is evaluated by our peers. In exchange, we should expect to review other people’s papers/applications. As Elden (2008) notes, at the very least then we should expect to review three papers for every one submitted (since our own paper needs three reviews). Requests for peer review are, however, decidedly skewed (by gender, race, seniority, etc.) with some people being asked to review much more than they submit (and some less). For example, I was asked to review 75 papers and 27 grant applications last year. As an academic gains a profile as an expert in an area they can expect to be asked to review more often. In practice, as the number of requests increases it becomes impossible to do them all, so decisions about which ones to do need to be made. My rules of thumb for reviewing are:

  1. Is this a piece of work that I think I can pass reasonable judgement on (i.e., it is directly related to my expertise)? And will I potentially learn something useful from engaging with the work?
  2. Have I published with the journal/received funding from the agency/written a book for a publisher (paying-back on the exchange of review) or am I likely to want to submit a paper/application/book proposal to this journal/grant agency/publisher in the near future (paying forward on the exchange)? In both cases, it’s good practice and strategically sensible to engage in the exchange economy.
  3. Am I already committed to do a number of reviews and can I cope with taking on another one? My limit is generally five open reviews at any one time. I know from being a journal editor that some people set annual quotas (say, 20 per year) and once they have reached that quota refuse all other requests.
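
For what it is worth, the three checks amount to a small decision procedure, which can be written down as follows (my own heuristics, not a prescription):

```python
def accept_review(in_my_expertise, exchange_relationship, open_reviews,
                  max_open=5):
    """Apply the three rules of thumb in order: expertise, the
    pay-back/pay-forward exchange, and current load."""
    if not in_my_expertise:
        return False                 # rule 1: can I pass reasonable judgement?
    if not exchange_relationship:
        return False                 # rule 2: paying back or paying forward?
    return open_reviews < max_open   # rule 3: capacity (my limit is five)
```

Swapping the capacity check for an annual quota (say, 20 per year) gives the alternative strategy some editors report.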

The aim of these rules of thumb is to choose to undertake work that is intellectually productive and that balances career strategy with the ability to deliver, while trying to maintain work/life balance. Also, bear in mind that not delivering on a promised review will potentially negatively impact your reputation (and academia has a strong reputational economy, as the work in the next section highlights).

Endorsing

Usual requests: reference/tenure review; book endorsement. 

Academics are reliant on other academics to endorse their reputation and the quality of their work. When applying for jobs or promotion we need our peers to support our applications. Undertaking endorsing work is a very important service to our colleagues as it directly affects their career prospects, and of course, we hope that our peers will endorse us. My rules of thumb are:

  1. Agree to all reference/tenure reviews if I know the person well.
  2. Agree to reference/tenure reviews if I only know the person a little (as one’s own career develops one can be asked to provide evaluations of other academics that one does not know well personally), but if the request has come from the applicant then suggest that strategically they might want to ask someone who can write a more personalised endorsement (if they want to persist with me, then I do the best job I can based on my knowledge of them and their work).
  3. I generally agree to do all book endorsements so long as I feel the book is in my area of expertise and I know from the author’s other work that I am likely to like the book.

Evaluating

Usual requests: external examining programmes or PhD theses.

External evaluation of new proposed programmes, existing programmes and how they are internally assessed, and judging the merits of a doctoral thesis are all important forms of peer evaluation vital for the functioning of academic departments and institutions. Again, we are reliant on an exchange economy for evaluating programmes and students. However, unlike reviewing a paper or writing a reference letter there is significantly more labour involved. Being the external examiner for a degree programme might take 3-5 days per annum and involve travel to the other institution, plus requests for feedback throughout the year (e.g., signing off on exam questions). Usually, the appointment is for three or more years. Similarly, undertaking a detailed reading of a PhD thesis, writing a comprehensive report, and attending a viva takes a number of days. Agreeing to perform the role of examiner then involves a significant time commitment. My rules of thumb are:

  1. Do not be the external examiner for more than one degree programme at any one time.
  2. Ideally, do not examine more than three or four PhD theses a year (and only accept those that you feel you can pass reasonable judgement on).

Advising

Usual requests: give an interview/advice/survey; appointment to an advisory board.

Requests for advice/interviews can come from a number of sources: students from other universities looking for advice or feedback or expert opinion; journalists looking for expert views; government bodies, companies, community groups seeking help. If a group is seeking a more consistent flow of advice then they might ask you to sit on an advisory board that meets at set intervals. It’s a sign of esteem that these individuals/groups are seeking your counsel, and undertaking the requests can boost reputation and open up new networks and opportunities. They can also take up quite a bit of time, especially some advisory boards. For example, I’ve sat on boards that meet once a year, quarterly, six times a year, and monthly. In all cases, I’ve had to travel to the meetings, with EU-related boards taking up quite a lot of time in travel as well as in meetings. My rules of thumb are:

  1. Agree to interviews and meetings that involve no travel (they will come to my office or it can be done via phone/Skype) and where I feel I have sufficient expertise.
  2. Agree to interviews elsewhere (e.g., radio/TV studios) where I think I have expertise and doing the interview will be strategically/reputationally beneficial.
  3. Largely ignore circulars that lack a personal touch.
  4. Agree to advisory boards where I think I will get some value out of attending or it is strategically useful for my institution (e.g., serving on government boards).
  5. Do not serve on more than five boards at any one time.

I tend to break rules 4 and 5 by agreeing to help colleagues by serving on their project advisory boards – highlighting the problem of personal obligation in dealing with requests.

Contributing

Usual requests: speak at workshop/conference; contribute paper/chapter; write a book; write a book review; be a partner in a grant application; work on project; be a journal editor; be a visiting professor.

Another form of exchange is to contribute to and collaborate on an academic endeavour such as an event, publication or project. While committing to some of these endeavours might be relatively short term, such as attending an event to present an invited talk, others might stretch out over years and significantly impact not only on workload but the direction of one’s research and the nature and location of one’s outputs. Agreeing to take part in contributory requests then needs special attention, especially if they are going to impact on forms of evaluation such as tenure or promotion review or institutional assessments. My rules of thumb:

  1. With regards to invited talks – assuming I can accommodate them in my diary – pick the ones that I think I might get most from (e.g., potentially interesting audience for feedback, introduction to new network, visit a place I’ve not been before). I also want to make sure all my expenses for attending are going to be covered.
  2. With regards to contributing a paper or chapter, I pick those that fit with my publishing strategy. I don’t mind deviating a little if I think it’s an interesting special issue or book idea – publishing in a special issue is likely to lead to a wider readership. Do not take on more than a couple of such commitments a year as they are sizable obligations (it takes quite a bit of time to research and draft the material).
  3. With regards to approaches to write a book, I only take on this task if I genuinely want to write a book on that topic – writing a book is a major commitment of time and effort.
  4. In terms of requests to work on a project, I only agree if I am genuinely interested in both the topic and working with the other researchers. Projects are usually multi-annual commitments. I never agree to collaborate with people I have never met or where I have a sense we might not work well together.
  5. Being the editor of a journal can carry a high-esteem factor and is important disciplinary service, but it is also a multi-annual commitment that involves a significant amount of work (as I know from editing three journals). I would only take on the role if I felt I was a good fit and could commit the necessary time (others might consider the reputational effects worth the effort). It is worth finding out whether your institution would give you a course or administrative buy-out to facilitate the role. I would never edit more than one journal at any one time.

The key question for all these types of contributions is whether the rewards (in terms of learning, personal development, outputs, reputation, etc.) for taking part are going to exceed the costs of participation (time, energy, re-orientation of work, etc.). That requires careful consideration, and the balance always needs to be positive before agreeing to contribute.

Balancing the load

One thing to keep in mind is the need to balance the workload across all five kinds of tasks. Cumulatively they can all add up to a lot (on top of the usual teaching, research and admin load). One useful thing to do is to keep a record of all requests, which ones you have agreed to do, and the associated work and deadlines. This will help in deciding whether new requests can be accommodated and also allow better time management.

Also keep in mind that it is very easy to slip into breaking rules of thumb if the request is from people that you know and you feel you have a personal obligation to agree, or the opportunity seems too good to pass up given its strategic value (such as serving on a prestigious advisory board). In effect, you need to compensate for such unanticipated commitments going forward or risk becoming overloaded. This is precisely where I come unstuck – saying no to colleagues or to high-prestige invites is difficult!

How to say ‘no’

Saying ‘no’ is tough. It’s especially tough in the exchange and reputational economies of academia. But if sanity and health are to be maintained then saying no is vital. There are, of course, different ways of saying no. And saying no can also create opportunities for others.

Always say ‘no’ politely (e.g., Many thanks for thinking of me for undertaking X. I would really like to help but unfortunately I can’t due to existing commitments … Good luck with your endeavour …)

Always say ‘no’ while also trying to be helpful (e.g., Perhaps the following people might be able to help you … [and name some scholars who the requester might not have thought of, perhaps junior scholars who would benefit from starting to review and building their reputation as an expert]). If I get invited to give a talk, for example, and I cannot attend, I try to recommend more junior colleagues who would benefit from being invited to present.

In conclusion

Taking part in the exchange economy of academia is vital work. It is essential that all academics say ‘yes’ to all the kinds of tasks detailed above. At the same time, there needs to be balance across who is involved in the exchanges and personal workloads. While some of that balance needs to be provided by those making requests getting better at distributing work more evenly across academics, some also needs to be made by academics who receive those requests. Hopefully, the rules of thumb I’ve outlined might be helpful for selecting which tasks to take on and reject, and for managing the tasks to which one has committed. I’d welcome any feedback or alternative suggestions – please leave a comment.

Rob Kitchin

Elden, S. (2008) The exchange economy of peer review. Environment and Planning D: Society and Space 26: 951-953. http://journals.sagepub.com/doi/abs/10.1068/d2606eda

New book: Data and the City

A new book – Data and the City – edited by Rob Kitchin, Tracey Lauriault and Gavin McArdle has been published by Routledge as part of the Regions and Cities series. The book is one of the outputs from a Progcity workshop in late 2015.

Description
There is a long history of governments, businesses, science and citizens producing and utilizing data in order to monitor, regulate, profit from and make sense of the urban world. Recently, we have entered the age of big data, and now many aspects of everyday urban life are being captured as data and city management is mediated through data-driven technologies.

Data and the City is the first edited collection to provide an interdisciplinary analysis of how this new era of urban big data is reshaping how we come to know and govern cities, and the implications of such a transformation. This book looks at the creation of real-time cities and data-driven urbanism and considers the relationships at play. By taking a philosophical, political, practical and technical approach to urban data, the authors analyse the ways in which data is produced and framed within socio-technical systems. They then examine the constellation of existing and emerging urban data technologies. The volume concludes by considering the social and political ramifications of data-driven urbanism, questioning whom it serves and for what ends. It will be crucial reading for those who wish to understand and conceptualize urban big data, data-driven urbanism and the development of smart cities.

The book includes chapters by Martijn De Waal, Mike Batty, Teresa Scassa, Jim Thatcher and Craig Dalton, Jim Merricks White, Dietmar Offenhuber, Pouria Amirian and Anahid Bassiri, Chris Speed, Deborah Maxwell and Larissa Pschetz, Till Straube, Jo Bates, Evelyn Ruppert, Muki Haklay, as well as the editors.

Data and the City is available in both paperback and hardback and is a companion volume to Code and the City published last year.

Workshop paper with LERO / School of Business: Perception of Value in Public-Private Ecosystems: Transforming Dublin Docklands through Smart Technologies

On December 9th 2016, Lero organised a workshop called “IoT & Smart City Challenges and Applications” (ICSA 2016), prior to the 2016 International Conference on Information Systems (ICIS 2016), held in Dublin, Ireland from December 11-14. The event featured 12 presentations across three themes, along with panel discussions.

Among these was the following paper on the Dublin Docklands, drawing from our early findings on partnership models for a smart district:

Perception of Value in Public-Private Ecosystems: Transforming Dublin Docklands through Smart Technologies
Olga Ryazanova, Reka Petercsak, Liam Heaphy, Niall Connolly and Brian Donnellan


Abstract:
Our study explores the potential for developing a hybrid business model for public-private ecosystem that emerged around the smart cities project in Dublin Docklands Strategic Development Zone. We focus on stakeholders’ expectations in relation to value creation and value capture, trying to understand to what extent the interests of stakeholder groups are diverse, and whether it is possible to create consensus that delivers economic, social, and environmental value for participants. The findings of this study seek to advance the literature on the business models of hybrid organisations and to test some assumptions of the research on the governance of public-private partnerships.


Emerging Technological Responses in Emergency Management Systems

The advent of discourses around the ‘smart city’, big data, open data, urban analytics, the introduction of ‘smarter technology’ within cities, the sharing of real-time information, and the emergence of social media platforms have had a number of outcomes on emergency services worldwide. Together they provide opportunities and promises for emergency services regarding efficiency, community engagement and better real-time coordination. Thus, we are seeing a growth in technologically based emergency response. However, such developments are also riddled with broad concerns, ranging from privacy and ethics to reliability, accessibility, staff reluctance and fear.

This post considers one recent technological push for the re-invention of the emergency call system (911bot) and another for the sharing of real-time information during a major event (Smartphone Terror Alert).

911bot

In recent years, there has been a significant move away from voice calls towards texting and internet-based platforms (e.g. WhatsApp and Twitter) (see Figure 1). This shift is tracked regularly by the International Smartphone Mobility Report, conducted across 12 countries by the data tracking company Infomate. In 2015, they found that in America the average time spent on voice calls was 6 minutes as opposed to 26 minutes texting, and that worldwide, internet-based platforms were the main form of communication (Infomate, 2015; Shrapshire, 2015).

 


Figure 1: Cell phone Communication. Source: Russell (2015).

In light of this, there is a push by both the private sector and entrepreneurs to utilise mobile phones and  social media platforms in new ways such as within the emergency call system. Within my own field research, I have questioned first responders in Ireland and the US regarding the use of social media and apps as alternative means to the current telephone system.  For the most part, this was met with disdain and confusion from first responders.  Strong arguments were made against a move away from a call-dominated response system. These included:

a) Difficulty in obtaining relevant and accurate information regarding the event, including changing conditions and situations.

b) Not being able to provide the victim or caller with accurate instructions and information.

c) Restrictions in contacting the caller.

d) The system would need an overhaul for it to work, i.e. a dedicated team ensuring that these messages are not missed, and staff training.

e) Call systems are established mechanisms for contacting the emergency services – why change something that works?

f) If you use something like Twitter or Facebook to report an emergency, how do we ensure that it is reported correctly and not just tweeted or messaged to an interface which is not monitored 24/7?

As can be seen in the following conversation with two operational first responders in Dublin, Ireland, they want new technology but are highly hesitant about its ability to ensure a quick response.

Conversation between researcher and two first responders

R1: See the problem with a tweet and a text, I can’t get any information out of that; like, I could tweet back and then you are waiting for them to send something back. When I have you on the phone, I can question you: “What is it?”, “What is wrong?”, “What is the problem?”.

R2: If you did go with something like [social media platform for emergency call intake], you would have to have the likes of, if you are the tweet man then you would have to be 100% on the phone looking at it.

R2: It probably would work if it wasn’t an emergency as such, not a full emergency.

R1: I think people need to be re-assured that someone has seen it and really knows what is happening.

R1: Jesus, you could have everyone tweeting saying “I have a sore stomach” and that would register as a call for us, so the calls would just get worse and worse. [...] I think if you ring Domino’s Pizza now, it will know who you are, where you are and your order.

R2: They can read the caller ID coming.

R1: We haven’t got that.

All of these are understandable concerns, but they also illustrate a resistance to innovation rooted in legitimate fears about effectiveness and reliability, as well as in the cultural and institutional change it might bring. Even so, responders welcome technology with obvious benefits to them, such as the “Domino’s Pizza” caller ID system, while remaining reluctant about innovations such as the 911bot, whose value is overshadowed by fears of inefficiency, information gaps and unreliability. Yet the 911bot’s design does potentially address some of these issues.

The 911bot (Figure 2) was developed during TechCrunch’s Disrupt Hackathon in New York in 2016. It works through Facebook Messenger, which had a reported 1 billion users in July 2016 (Costine, 2016), to allow users to report an emergency. Initially, one would be forgiven for immediately recalling the arguments made above against transforming the current system. However, the Messenger app already offers location services based on the phone’s GPS, so when an incident is reported the user’s exact location is sent immediately (users can normally turn off GPS and restrict location sharing, although when using this bot there is potential for that to be overridden). The person reporting the incident can also send pictures or videos, and the bot can provide information on what you should and shouldn’t do in that situation, such as how to perform CPR during a cardiac arrest (Westlake, 2016).
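To make the reported interaction flow concrete, the intake logic described above — collect a location, any media, and an incident type, then hand a structured report to the control room for a call-back — can be sketched roughly as follows. This is purely an illustrative sketch: the class, function and message-field names are hypothetical and do not reflect the 911bot’s actual implementation or the Messenger API.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# Hypothetical sketch of an emergency-intake chatbot flow of the kind the
# 911bot description implies. None of these names are the real 911bot API.

@dataclass
class EmergencyReport:
    user_id: str
    incident_type: Optional[str] = None
    location: Optional[Tuple[float, float]] = None  # (lat, lon) from the phone's GPS
    media_urls: List[str] = field(default_factory=list)

# Canned guidance of the kind the bot reportedly offers (illustrative only).
FIRST_AID_TIPS = {
    "cardiac arrest": "Start chest compressions, around 100-120 per minute.",
    "fire": "Leave the building and stay low under the smoke.",
}

def handle_message(report: EmergencyReport, message: dict) -> str:
    """Update the report from one incoming chat message and return a reply."""
    if "location" in message:          # messenger platforms can attach GPS coordinates
        report.location = message["location"]
    if "media" in message:             # photos/videos of the scene
        report.media_urls.append(message["media"])
    if "text" in message and report.incident_type is None:
        report.incident_type = message["text"].lower()

    # Once the essentials are in, the report would be dispatched and the
    # control room would call the reporter back.
    if report.location and report.incident_type:
        tip = FIRST_AID_TIPS.get(report.incident_type,
                                 "Help is on the way; keep your phone free.")
        return f"Report sent to the control room. {tip}"
    if report.location is None:
        return "Please share your location so responders can find you."
    return "What is the emergency?"
```

The design choice worth noting is that the bot never replaces the voice call: the structured report simply front-loads location and media so that the call-back from the control room starts with more information than a cold call would.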

Further, this bot has the potential to feed back the location of the first responders to the reporter. It provides the control room with more accurate information, drawn from real-time videos and pictures, meaning operators are not relying wholly on information from untrained and frightened people. Most importantly, the system does not remove interaction between the control room and the caller: from the information provided by the developers, it appears that once the report is sent via Messenger, the control room calls the phone and resumes its usual role, but with more information. Going forward, this could possibly even be done through FaceTime, so that the control room has a live view of the event prior to the arrival of first responders. Although the 911bot has only been developed and not deployed, in time and after much consultation and experimentation it could prove very beneficial to emergency response. For instance, if the control room operator can actually see how a person is performing CPR, see and hear their breathing, or see the extent of an injury, fire or road traffic collision in real time, this would inform decision-making and enable better, more efficient responses. It would be remiss, however, not to note that the mass use of this type of technology raises potential privacy issues, beyond the remit of this post, that would need to be considered.


Figure 2: 911bot. Source: 911bot online.

Smartphone Terror Alert

Another new use of mobile technology was the mass terror alert issued on September 19th 2016, two days after Chelsea, Manhattan was hit by an explosion. The alert (Figure 3) was issued by the Office of Emergency Management, the New York Police Department and the FBI across all phone networks. It was received by an unknown number of people and provided information about the key suspect, Ahmad Khan Rahami. The press secretary for New York Mayor Bill de Blasio stated that it was the first use of this alert at a “mass scale”, and as the suspect was caught within 3 hours, the alert appeared to be effective, with New York’s Police Commissioner stating “it was the future” (Fiegerman, 2016). Yet there is no evidence that the alert had anything to do with catching the suspect; the two events could be coincidental.


Figure 3: Smartphone Terror Alert. Source: published in Fiegerman (2016).

Further, how effective was it actually? As Anil Dash, quoted in Fiegerman (2016), asks: “Is there evidence that low-information untargeted push notifications help with any kind of crime? Seems they’re more optimised for panic”. This is compounded by the lack of an all-clear alert, which would work to ease tensions and potential panic. We live in a socially constructed risk society (Beck, 1992; 2009), and with innovations such as this, however good the intention, the potential for mass panic is created, which raises questions about the appropriateness of the mechanism. In this instance, sending an alert with little information beyond a name makes everyone who could fit that name a potential target, and could generate panic, fear and racial attacks under the illusion of “citizen’s arrest”.

However, the system has potential, especially if it were used during severe weather events to provide information on evacuation centres and resources, rather than during more sensitive events such as a manhunt. Before it can be deemed thoroughly effective and safe, though, there needs to be stringent supporting policy alongside agency and community training, to ensure that the response from agencies and communities alike is coordinated and effective rather than panicked and uninformed. So I wonder: is this really the future, and indeed, does it need to be? Or is it already the present, with little reflection by the lead federal and local emergency agencies and institutions on the potential consequences of such a system? I do not have the answers to these questions, but examining the operational use of this alert, even at its small scale, provides an opportunity to begin teasing out the tension between effectiveness and panic, and to explore issues of privacy, fear, reliability and usefulness.

In conclusion, this post has examined two different innovations within emergency management, one still being experimented with and one already implemented. What is clear is that changes in how we engage with control centres and emergency services are taking place, albeit slowly. One can only hope, especially in relation to the alert system, that the criticisms levelled at it will be engaged with and solutions sought.

 Bibliography

911bot (2016) 911bot. [Online]. Available at: http://www.911bot.online/ (Accessed 9th November 2016).

Beck, U., (1992). Risk Society: Towards a New Modernity. London: Sage.

Beck, U., (2009). World at Risk. Cambridge: Polity Press.

Costine, J. (2016) How Facebook Messenger clawed its way to 1 billion users. [Online].  Available at: https://techcrunch.com/2016/07/20/one-billion-messengers/ (Accessed 8th November 2016).

Fiegerman, S.(2016) The story behind the Smartphone Terror Alert in NYC. [Online].  Available at: http://money.cnn.com/2016/09/19/technology/chelsea-explosion-emergency-alert/ (Accessed 9th November 2016).

Infomate (2015) The International Smartphone Mobility Report [Online]. Available for download at: the International Smartphone Mobility Report (Accessed 7th November 2016).

Russell, D. (2015) We just don’t speak anymore. But we’re ‘talking’ more than ever. [Online].  Available at: http://attentiv.com/we-dont-speak/ (Accessed 9th November 2016).

Shropshire, C. (2015) Americans prefer texting to talking, report says. Chicago Tribune [Online].  Available at: http://www.chicagotribune.com/business/ct-americans-texting-00327-biz-20150326-story.html (Accessed 9th November 2016).

Westlake, A. (2016) Finally, there’s a chat bot for calling 911. [Online].  Available at: http://www.slashgear.com/finally-theres-a-chat-bot-for-calling-911-08439211/ (Accessed 7th November 2016).

 

Call for paper: Data driven cities? Digital urbanism and its proxies

We are organising a special issue on data-driven cities. You can find more details below, and we look forward to your proposals.

 

Tecnoscienza. Italian Journal of Science and Technology Studies

SPECIAL ISSUE

“DATA DRIVEN CITIES? DIGITAL URBANISM AND ITS PROXIES”

Guest editors:

Claudio Coletta (Maynooth University)

Liam Heaphy (Maynooth University)

Sung-Yueh Perng (Maynooth University)

Laurie Waller (Technische Universität München)

Call for papers:

In the last few decades, data and software have become an integral part of urban life, producing a radical transformation of social relations in cities. Contemporary urban environments are characterised by dense arrangements of data, algorithms, mobile devices, and networked infrastructures. Multiple technologies (such as smart metering, sensing networks, GPS transponders, CCTV, induction loops, and mobile apps) are connected with numerous processes (such as institutional data management, data brokering, crowdsourcing, and workflow management), in order to provide sustainable, efficient, and integrated city governance and services. Accordingly, big data and algorithms have started to play a prominent role in informing decision-making and in performing the spatial, material, and temporal dimensions of cities.

Smart city initiatives undertaken globally highlight the purported benefits of partly automated management of public services, new forms of civic engagement enabled by digital infrastructures, and the potential for innovating policy and fostering economic development.

Yet, contributions within STS, Critical Data Studies, Geography, Sociology, Media Studies and Anthropology have contested the neutral and apolitical nature of (big) data and the ahistorical, aspatial, homogenizing vision of cities in favour of an understanding that recognizes the situated multiplicity of actual digital urbanism. The politics of data, data analytics and visualizations perform within specific urban and code assemblages embodying specific versions of real-time and anticipatory governance. At the same time, these studies highlight the risks of dataveillance as well as the corporatisation of governance and technocratic solutionism which, especially coupled with austerity regimes, seem to reinforce inequalities while influencing people’s lives out of the public grasp. Within this context, vested interests interact in a multi-billion global market where corporations, companies and start-ups propose data-driven urban solutions, while public administrations increasingly delegate control over citizens’ data. Also, public institutions and private companies leverage the efforts of open data movements, engaged civic communities and citizen-minded initiatives to find new ways to create public and economic value from urban data.

The aim of this special issue is therefore to encourage critical and transdisciplinary debates on the epistemological and ontological implications of actual data-driven urbanism: its uncertain, fragile, contested, conflicting nature; the different forms of performing and making sense of the urban environment through data and algorithms; the different ways to approach the relationship between data, software and cities; the urban and code assemblages that are produced.

To what extent are cities understandable through data? How do software and space work in urban everyday life and urban management? How do data and policies actually shape each other? What forms of delegation, enrolment and appropriation take place?

We welcome theoretical and empirical contributions critically addressing the following (non-exhaustive) list of topics:

- urban big data, city dashboards;

- data analytics and data brokering;

- IoT based urban services;

- predictive analytics and anticipatory governance;

- civic hacking, open data movements;

- privacy, security and surveillance in data-driven cities;

- crowd, mobility and traffic management;

- sensors, monitoring, mapping and modelling for urban facilities;

- digitization of built environment.

 

Deadline for abstract submissions: June 30th, 2016

Abstracts (in English) with a maximum length of 1000 words should be sent as email attachments to redazione@tecnoscienza.net and carbon copied to the guest editor at datadrivencities@tecnoscienza.net. Notification of acceptance will be communicated by July 2016.

Deadline for full submissions: October 15th, 2016.

Submissions (in English, with a maximum length of 8000 words including notes and references) can be made via the Journal’s submission system at www.tecnoscienza.net, and an electronic copy of the article should be sent to redazione@tecnoscienza.net. The papers will be subject to a blind peer review process. The special issue is expected to be published in 2017.

For further information about the special issue, contact the guest editors at datadrivencities@tecnoscienza.net.

For further information about the Journal please visit the Journal’s web site at http://www.tecnoscienza.net.

Claudio, Liam, Sung-Yueh and Laurie