
There are many interesting lessons to learn in the unfolding saga at on-line sports retailer Wiggle…

Customers first started raising concerns over two weeks ago about orders being placed on their Wiggle accounts (and payments taken) without their knowledge.  Some people also reported that they had been locked out of their accounts.  The company’s initial response was characterised by a complete failure to engage with customers’ concerns.  As of Monday they have publicly acknowledged that there is a problem, but the tone of their communications is still defensive, focusing on the fact that “Our systems remain secure” and “customers’ login details have been acquired outside of Wiggle’s systems.”

The most likely scenario seems to be that, using personal details stolen elsewhere, fraudsters were able to log in to Wiggle accounts where individuals had re-used login details and passwords from other services.  The fraudsters were then able to place orders and change account details (including login details) on these accounts.  Whilst Wiggle seem to be placing great significance on the fact that the data was not stolen from them, and that there was therefore no data breach, that is of little interest or comfort to affected customers.  Moreover, “credential stuffing” attacks such as these are notifiable data protection incidents in their own right (Wiggle has confirmed that it has reported the incident to the ICO).

Clearly there are important lessons here for all of us as consumers, principally about not re-using login details for multiple sites.  The incident also highlights the challenge for on-line retailers in striking the correct balance between security and convenience: it has surprised many people that the fraudsters were able to order goods to be sent to a new address without having to re-enter any card details. But the primary lesson for all organisations is that information security incidents will continue to occur and that you need to be ready to respond quickly when they do.  Critically, that involves having processes in place for investigating reports of suspicious activity in a timely fashion and for communicating effectively with customers.
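One practical defence, both for individuals and for service providers screening new passwords, is to check candidate passwords against known breach corpora.  The sketch below uses the publicly documented Have I Been Pwned “Pwned Passwords” range API, which relies on k-anonymity so the full password hash never leaves your machine; treat it as an illustrative sketch rather than production code.

```python
# Minimal sketch: check whether a password has appeared in known breaches using the
# Have I Been Pwned "Pwned Passwords" range API.  Only the first five characters of
# the SHA-1 hash are sent (k-anonymity), so the password itself never leaves your machine.
# Illustrative only -- add proper error handling and a user agent for real use.
import hashlib
import requests


def pwned_count(password: str) -> int:
    """Return how many times the password appears in the breach corpus (0 = not found)."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0


if __name__ == "__main__":
    # A non-zero count is a strong signal that the password should never be re-used.
    print(pwned_count("password123"))
```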

We blogged back in January about how GDPR fines were starting to bite.  Now, drawing on data from GDPR Enforcement Tracker, we take a first look at the fines that have been issued under GDPR specifically for data breaches.

The database lists 70 fines related to data breaches, ranging in value from €300 to €10m.  Twenty-one countries have levied fines so far, with the greatest number being imposed in Romania (15 fines).  Not all entries include the number of people affected but, from the data available, there is certainly a significant spread in scale, from incidents affecting only one or two individuals to an incident in which the records of 6 million people were compromised.

The distribution is heavily skewed by a few large fines, so the mean value of approximately €250 000 is less informative than the median of around €25 000.  This is perhaps a surprisingly low figure given the maximum fines of 4% of global turnover allowed under GDPR, but it probably reflects a sensible and pragmatic application of the new powers by the various regulatory authorities.  As ever though, it is important to remember that fines may be only a small fraction of the total cost of a data breach to the company: the IBM/Ponemon Institute 2018 Cost of Data Breach Survey found that the largest component of that cost was lost business.
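To illustrate the point about skew, here is a minimal sketch comparing the two summary statistics; the fine amounts used are illustrative placeholders, not the actual Enforcement Tracker entries.

```python
# Minimal sketch: why the median is a better summary than the mean for a heavily
# skewed distribution of fines.  The amounts below are illustrative placeholders,
# not the actual GDPR Enforcement Tracker entries.
import statistics

fines_eur = [300, 2_000, 5_000, 15_000, 25_000, 30_000, 80_000, 200_000, 2_500_000, 10_000_000]

print(f"Mean:   €{statistics.mean(fines_eur):,.0f}")    # dragged upwards by the few very large fines
print(f"Median: €{statistics.median(fines_eur):,.0f}")  # closer to the 'typical' fine
```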

(At the time of writing we are still waiting for the UK Information Commissioner’s Office (ICO) to confirm the level of fines that will be imposed on British Airways and Marriott International.  The ICO announced its intention to fine these firms £183m and £99m respectively, but neither of these amounts has yet been finalised.)

There has been much media coverage today of “Exercise Iris”, an exercise delivered to Scottish Health Boards in March 2018 by the Scottish Government’s Health Protection Division.  The exercise scenario was based on an outbreak of Middle East Respiratory Syndrome (MERS) in Scotland, and media reporting has focused on why the exercise recommendations were not shared and acted upon more widely.  Understandably, people are asking if we might be in a better situation now if the lessons identified two years ago had prompted action across the UK.

Really there are two separate issues here: sharing observations and recommendations from exercises; and turning these recommendations into actions.  We will explore the latter issue first.

The failure to convert “lessons identified” into “lessons learned” is a well-established theme in both the practitioner and academic literature on crisis management.  Time and time again, recommendations are made following an exercise, or indeed an actual incident, but are never followed through.  Looking at the post-exercise report from Exercise Iris, there are two obvious reasons why this may have been the case here: many of the recommendations are rather vague; and, at least in the public version of the report, no deadlines are given for completion.  The situation is further complicated because the actions fall on many different organisations (individual Health Boards, the Scottish Government etc) and it is not clear who was responsible overall for seeing that they were completed.  It is thus not altogether surprising that resources were not immediately made available to address the identified issues.

The information sharing issue is less straightforward.  The Scottish Government has stated that the findings were shared with attendees, which implies that they were not shared with anybody else at the time.  It is reported that the findings were subsequently shared with the UK Government’s New and Emerging Respiratory Virus Threats Advisory Group in June 2019.  One has to be mindful of hindsight bias here: now that we are in a pandemic it seems obvious that these particular findings should have been shared widely; but if everybody shared the findings of every single exercise with everybody else there would simply be information overload and none of the reports would ever get read.  Perhaps the real question is why, once we became aware that we were facing a pandemic, the recommendations were still not widely disseminated.  Maybe all we need is a searchable database of post-exercise reports from across the health services, emergency services, and central and local government, upon which we can draw when we need to.
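As a very rough indication of what such a searchable index might look like, here is a minimal sketch using SQLite’s built-in full-text search; the table layout and the wording of the example record are assumptions for illustration only.

```python
# Minimal sketch of a searchable store of post-exercise reports, using SQLite's
# built-in full-text search (FTS5, available in standard Python builds).  The table
# layout and the example record's wording are assumptions for illustration only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE reports USING fts5(title, organisation, year, findings)")
conn.execute(
    "INSERT INTO reports VALUES (?, ?, ?, ?)",
    ("Exercise Iris", "Scottish Government Health Protection Division", "2018",
     "MERS outbreak scenario exercise -- placeholder text standing in for the real findings."),
)

# Any responder organisation could then search across every report when a threat emerges.
for title, organisation in conn.execute(
        "SELECT title, organisation FROM reports WHERE reports MATCH 'MERS'"):
    print(title, "-", organisation)
```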

In summary then, there should clearly be better mechanisms for sharing findings from exercises throughout the UK (and beyond) but, even where information is shared, it is far from certain that it will be acted upon.

I’m sure I wasn’t the only person to be somewhat surprised at the news that Baroness Dido Harding has been appointed to oversee the implementation of the new NHS Covid-19 app.  Rightly or wrongly, she will always be associated with the massive data breach at TalkTalk in October 2015 and has received significant criticism for her handling of that incident.  As one commentator optimistically phrased it, she may have learnt some useful lessons from that incident.  Hopefully that is true, but it is hardly likely to inspire confidence in a scheme that is already highly controversial.

The news also reminded me of another interesting blog post of ours from last year.  The post summarised findings from a new academic study of the cost to organisations of data breaches.  As well as addressing the main research question the authors also found, somewhat surprisingly, that:

  • The pay of CEOs in firms that had had a data breach increased relative to firms that hadn’t; and
  • Security breaches had no effect on the rate of CEO turnover.

Whilst Baroness Harding did eventually leave TalkTalk, it was not before she famously picked up a substantial bonus.  It is not my intention to criticise individuals; rather, I repeat the story because it suggests that CEOs are not adequately incentivised to manage information security risks.  If CEOs know that their remuneration and career prospects will not be damaged, even by a spectacular data breach, why would they allocate scarce resources to mitigate the risk?

That leads on, finally, to the other big information security story of the week – EasyJet.  The headlines have focused on the total number of customers affected (9 million) but, perhaps more worryingly, it is reported that over 2000 customers had their credit card details compromised.  Given that this incident occurred post-GDPR, EasyJet may be looking at a very significant fine at a time when, with a global pandemic going on and almost no air travel taking place, the company has enough problems already.

An article by Cambridge Risk Solutions, published this week in Continuity Central, looks at whether there is any evidence that firms that follow good practice in business continuity management (BCM) have fared better in the current Covid-19 pandemic.  Specifically, it looks at the impact on the share prices of companies in the FTSE 100 from mid-February to mid-April, to see if those that have adopted BCM have suffered less damage to shareholder value.

Sadly, the results are inconclusive: there is no association between adoption of BCM good practice and falls in share price at any stage during the 8-week period studied.  This could be because the effect is very small and buried in the noise, but the article also considers other possible explanations, including:

  • The possibility that good-practice-based plans were abandoned by senior management when faced with a crisis of such magnitude; or
  • The possibility that such plans were implemented but failed to mitigate the impact on businesses.

Establishing which, if either, of these explanations holds will be vital in learning lessons from this dreadful crisis and in improving the practice of BCM for the future.

You can read the full article here.
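For readers curious how such an association might be tested, here is a minimal sketch; the percentage share-price changes are synthetic placeholders rather than the FTSE 100 data used in the article, and the Mann-Whitney U test is simply one reasonable choice of test, not necessarily the article’s own method.

```python
# Minimal sketch: testing for an association between BCM adoption and share-price falls.
# The percentage changes below are synthetic placeholders, not the FTSE 100 data used in
# the article, and the Mann-Whitney U test is one reasonable choice of test rather than
# necessarily the method the article itself used.
from scipy.stats import mannwhitneyu

bcm_adopters = [-28.0, -31.5, -22.4, -35.1, -19.8, -27.3]   # % change, mid-February to mid-April
non_adopters = [-30.2, -25.7, -33.9, -21.6, -29.4, -26.8]

stat, p_value = mannwhitneyu(bcm_adopters, non_adopters, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")   # a large p-value means no detectable association
```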

Reading the first edition of “The Failure of Risk Management: Why It’s Broken and How to Fix It”, by Douglas Hubbard, back in 2009 was a professional epiphany for me.  Having been working in business continuity management for about five years at that stage, I was aware of the prevalence of many questionable practices in risk management.  But seeing how entrenched these methods were, and how confident people were in their efficacy, I wasn’t sure if I was alone in having doubts.

It was therefore wonderful to come across a book that clearly, but rigorously, explained what was going wrong and, more importantly, provided a clear road map for improvement.  Since then, I have recommended the book to anybody who has attended our training courses, to many of our consulting clients and, basically, to anybody else who would listen.  I was delighted to hear of the release of the second edition, but would it live up to my hopes?

The second edition retains the essential look and feel of the original but has clearly been updated throughout, with many useful references to recent events, particularly in the area of cyber security.  The most obvious addition is a completely new chapter (Chapter 4), laying out a simple approach to making the initial transition to quantitative techniques; this forms a red thread running through the rest of the book.  There is also important new material on a number of topics, principally:

  • Utility theory (Chapter 6);
  • Inconsistency in expert judgements (Chapter 7); and
  • The analysis of near-misses (Chapter 12).

All of this adds up to a slightly longer, but still very readable, book.
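To give a flavour of what “making the transition to quantitative techniques” can mean in practice, here is a minimal sketch of a simple Monte Carlo loss simulation of the general kind the book advocates; the probability and impact figures are assumed example parameters, not taken from the book.

```python
# Minimal sketch of a simple Monte Carlo loss simulation -- the general kind of
# quantitative technique the book advocates.  The probability of occurrence and the
# lognormal impact parameters are assumed example figures, not taken from the book.
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100_000
p_event = 0.10                       # assumed 10% chance the risk materialises in a year
median_impact, sigma = 250_000, 1.0  # assumed lognormal impact if it does

occurs = rng.random(n_trials) < p_event
impacts = np.where(occurs, rng.lognormal(np.log(median_impact), sigma, n_trials), 0.0)

print(f"Expected annual loss: £{impacts.mean():,.0f}")
print(f"Chance of losing more than £1m in a year: {(impacts > 1_000_000).mean():.1%}")
```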

Sadly, the same flawed risk management practices that were highlighted in 2009 are still prevalent today, despite the sustained efforts of Hubbard and others; so the importance of this book has not diminished.  The release of this excellent second edition is very timely and I would thoroughly recommend it to anybody working in any aspect of risk management.  More importantly though, I would also recommend the book to executives and general managers: to paraphrase Georges Clemenceau, risk management is too important to be left to the risk management profession.

As in many parts of the world, here in the UK we have experienced unprecedented events in recent weeks.  Amidst the grim backdrop of the numbers of infections and deaths growing daily, we have seen schools, bars and restaurants closed; sport put on hold; and, finally, a nationwide lock-down.  However, this has all been achieved with a combination of persuasion and emergency legislation, passed specifically to deal with the Covid-19 outbreak, rather than by using the Civil Contingencies Act (CCA).

Passed in 2004, the CCA sought to put emergency planning on a proper footing for the 21st century.  Learning from the experiences of recent events such as the “Millennium Bug”, the fuel protests in 2000, the Foot and Mouth outbreak in 2001 and 9/11, the CCA was designed to provide a flexible framework for planning for and responding to crises in our modern age.  The most visible outcomes from the passing of the CCA were:

  • The designation of Cat 1 (eg the Emergency Services, NHS, Local Authorities, Environment Agency) and Cat 2 (eg ports, airports, railways, utilities) Responders with various statutory duties;
  • The coordination of local planning through Local Resilience Forums (LRFs); and
  • The publication of Community Risk Registers by these LRFs.

But the CCA also contained various emergency powers to enable the Government to deal with extraordinary situations; and it is the failure to make use of any of these powers that is curious at the present time.  Actually this is not an unusual observation: in many cases, when faced with an incident or crisis, organisations ignore the plans that they have documented, tested and exercised in favour of making things up as they go along.  Why?

Some of the reasons that have been observed in other instances of organisations not using their pre-prepared plans when facing a real incident are:

  • The senior management team lack awareness of and/or confidence in the written plans;
  • The plan can only be triggered by certain prescribed events, none of which occur in this particular scenario; and/or
  • The senior management team believe that the incident requires a brand new bespoke solution, rather than any of the generic solutions documented in the plan.

One of the key things we look for when conducting a post-incident debrief with any organisation is examples of where they have not used their pre-prepared plans.  We then explore with them why, in each instance, they chose not to.  As the current crisis subsides, and we can start looking again to the future, it would be very useful for the UK Government to examine why it chose not to utilise the CCA in the Covid-19 outbreak.

Given the heightened risk of cyber incidents in the current Covid-19 crisis, it seems timely to look at the Cyber Security Breaches Survey 2020, published recently by the Department for Digital, Culture, Media and Sport.  Now in its fifth year, the survey covers UK businesses, charities and, for the first time, educational establishments.

In terms of frequency of breaches and attacks, the survey finds little difference from previous years:

  • 46% of businesses (unchanged from last year); and
  • 26% of charities (up from 19% last year)

were aware of a cyber breach or attack in the last 12 months.  Within this overall threat landscape, phishing attacks had increased, whilst malware and other viruses had decreased.  However, looking for the first time at the education sector, the survey found that an astonishing 80% of Further and Higher Education establishments were aware of a breach or attack.

Looking at impacts, the survey found that only 19% of businesses suffering a breach or attack experienced a loss of data or a financial cost.  Even within this small subset who experienced a “material outcome”, the average cost reported was only £3230.  This figure seems extremely low when compared to other data sources and calls into question whether responding organisations had calculated the full cost of their incidents.

The survey also looks at the steps that organisations are taking to manage cyber risk.  Both businesses and charities are more likely to have a written cyber security policy in place (38% and 42% respectively) than in previous years.  Curiously though, given the UK Government’s backing of the scheme, the survey does not specifically ask about Cyber Essentials accreditation.  However, in a slightly worrying revelation, it notes that only 13% of both businesses and charities are even aware of the scheme!

I was delighted to see an article on the BBC website today repeating much of what we said in a blog post back in January!  When we blogged, the number of confirmed cases of Coronavirus globally was doubling roughly every two days; but now the BBC reports that confirmed cases in the UK are doubling only every 3-4 days.  In the current situation we have to be thankful for any good news.

This is also a reminder that our understanding of the pandemic is still changing from day to day as more data emerges – a theme that we first blogged about in January too.  This theme was reinforced again in the news today with reports of a new piece of research that predicted there could be as few as 7000 Coronavirus deaths in the UK.

It is therefore probably timely to update another one of our blog pieces, this time from March 12th, in which we mentioned the progress of the disease in Italy.  Fitting a logistic curve to the data up to that point (at which time Italy had reported about 12 500 cases) suggested that the total number of confirmed cases in Italy would ultimately reach about 40 000 but, sadly, that has not proven to be the case.  Re-fitting the curve two weeks later suggests a much more prolonged and severe wave of infections.

Meanwhile, fitting to data in the UK a week ago suggested a final total of around 50 000 confirmed cases for this pandemic wave.

Once again, this picture changes daily as new information emerges: to reiterate, the predicted total for Italy trebled over the course of two weeks.  However, as can be seen from the graph, the progress of the disease so far is actually running below the fitted curve.  We’ll keep you posted.
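For those interested in how such a fit is produced, here is a minimal sketch using scipy’s curve_fit; the cumulative case counts are generated synthetically for illustration and are not the actual UK or Italian series used in the post.

```python
# Minimal sketch: fitting a logistic growth curve to cumulative confirmed-case counts.
# The "observed" data below are generated synthetically for illustration; they are not
# the actual UK or Italian case series used in the post.
import numpy as np
from scipy.optimize import curve_fit


def logistic(t, K, r, t0):
    """K = final total, r = growth rate, t0 = day of fastest growth."""
    return K / (1.0 + np.exp(-r * (t - t0)))


days = np.arange(30)
rng = np.random.default_rng(0)
observed = logistic(days, 50_000, 0.25, 20) * rng.normal(1.0, 0.05, size=days.size)

# Fit the three parameters; p0 supplies rough starting guesses.
params, _ = curve_fit(logistic, days, observed, p0=[observed.max() * 2, 0.2, days.mean()])
K_hat, r_hat, t0_hat = params
print(f"Predicted final total of confirmed cases: {K_hat:,.0f}")
```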

Two reports have now been completed into the cause of the failure of the spillway at Toddbrook Reservoir in Whaley Bridge on 1st August 2019:

  • The “Toddbrook Reservoir Independent Review Report” by Professor David Balmforth, commissioned by DEFRA; and
  • “Report on the Nature and Root Cause of the Toddbrook Reservoir Auxiliary Spillway Failure on 1st August 2019” by Dr Andy Hughes, commissioned by the Canal & River Trust (CRT).

Both reports are publicly available on-line.  Whilst I don’t pretend to understand the technical details, both authors are quite clear that there were multiple serious defects in the original design of the spillway.  These defects enabled water to flow under the slabs forming the spillway, eroding the fill beneath them and, ultimately, displacing the slabs themselves.

Interestingly though, the authors differ on the significance of the widely-reported maintenance issues at the dam: Balmforth views these as another primary cause of the failure, whereas Hughes sees them as very much a secondary consideration.  Balmforth also mentions a third contributing factor, the failure of the CRT to lower the water level when the severe weather warning was first issued, but is unable to judge whether this could have prevented the outcome on the day.

Whilst Balmforth and Hughes are concerned with specific issues of dam design, the general pattern of multiple “latent incubating defects” in a system is very familiar from studies of previous disasters from Aberfan to the Challenger Space Shuttle.  As in these previous incidents, various people in various different organisations (and indeed members of the public in Whaley Bridge) were aware that there were issues with the dam, but nobody was able to put the pieces together: what Barry Turner called a “Failure of Foresight”.  Turner went on to identify four common features in such failures, two of which are specifically highlighted again in the reports into Toddbrook Reservoir:

  • Division of responsibilities – both organisationally between CRT and DEFRA, and individually between Supervising Engineers and Inspecting Engineers; and
  • Poor intra/inter-organisational communications – in particular, people not having access to drawings and other documents that they needed, and the failure of the most recent Inspecting Engineer’s report to prompt urgent remedial action on the spillway.

Reflecting on the incident at Toddbrook Reservoir, if we can identify and address these sorts of problems in our own organisations then we are one step nearer to preventing a disaster closer to home.