Practical, Cost Effective and award-winning

Business Continuity, Crisis Management & Information Security Solutions


0800 035 1231 (Mon to Fri 9am – 5pm)

Suite 3, The Cotton Mill, Torr Vale Mills, New Mills, Derbyshire, SK22 4HS, UK

I’m sure I wasn’t the only person to be somewhat surprised at the news that Baroness Dido Harding has been appointed to oversee the implementation of the new NHS Covid-19 app.  Rightly or wrongly, she will always be associated with the massive data breach at TalkTalk in October 2015 and has received significant criticism for her handling of that incident.  As one commentator optimistically phrased it, she may have learnt some useful lessons from that incident.  Hopefully that is true, but it is hardly likely to inspire confidence in a scheme that is already highly controversial.

The news also reminded me of another interesting blog post of ours from last year.  The post summarised findings from a new academic study of the cost to organisations of data breaches.  As well as addressing the main research question the authors also found, somewhat surprisingly, that:

  • The pay of CEOs in firms that had had a data breach increased relative to firms that hadn’t; and
  • Security breaches had no effect on the rate of CEO turnover.

Whilst Baroness Harding did eventually leave TalkTalk, it was not before she famously picked up a substantial bonus.  It is not my intention to criticise individuals; rather, I repeat the story because it suggests that CEOs are not adequately incentivised to manage information security risks.  If CEOs know that their remuneration and career prospects will not be damaged, even by a spectacular data breach, why would they allocate scarce resources to mitigating the risk?

That leads on finally to the other big information security story of the week – EasyJet.  The headlines have focused on the total number of customers affected: 9 million.  But perhaps more worryingly, it is reported that over 2,000 customers had their credit card details compromised.  Given that this incident occurred post-GDPR, EasyJet may be looking at a very significant fine at a time when, with a global pandemic going on and almost no air travel taking place, they have enough problems already.

An article by Cambridge Risk Solutions, published this week in Continuity Central, looks at whether there is any evidence that firms that follow good practice in business continuity management (BCM) have fared better in the current Covid-19 pandemic.  Specifically, it looks at the impact on the share prices of companies in the FTSE 100 from mid-February to mid-April, to see if those that have adopted BCM have suffered less damage to shareholder value.

Sadly, the results are inconclusive: there is no association between the adoption of BCM good practice and falls in share price at any stage during the eight-week period studied.  This could be because the effect is very small and buried in the noise, but the article also considers other possible explanations, including:

  • The possibility that good-practice-based plans were abandoned by senior management when faced with a crisis of such magnitude; or
  • Good-practice-based plans were implemented but failed to mitigate the impact on businesses.

The answers to both of these questions will be vital in learning lessons from this dreadful crisis and improving the practice of BCM for the future.
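The kind of comparison described in the article can be sketched as a simple two-sample test.  The figures below are invented purely for illustration (they are not the FTSE 100 data used in the article), and the Mann-Whitney test is one plausible choice of method, not necessarily the one the article itself used:

```python
from scipy.stats import mannwhitneyu

# Hypothetical percentage share-price falls (mid-February to mid-April) for
# two groups of firms -- illustrative numbers only, not the article's data.
bcm_adopters = [-28.1, -31.5, -25.0, -29.8, -33.2, -27.4]
non_adopters = [-30.2, -26.8, -32.0, -28.5, -31.1, -29.0]

# A non-parametric test of whether one group's falls tend to differ from
# the other's.  Small samples and noisy returns mean low statistical power,
# which is one reason a real effect could go undetected.
stat, p_value = mannwhitneyu(bcm_adopters, non_adopters,
                             alternative="two-sided")
print(f"Mann-Whitney U = {stat}, p = {p_value:.3f}")
```

With samples this small and this similar, the test (unsurprisingly) finds no significant difference, which mirrors the inconclusive result reported in the article.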

You can read the full article here.

Reading the first edition of “The Failure of Risk Management: Why it’s Broken and how to Fix it”, by Douglas Hubbard, back in 2009 was a professional epiphany for me.  Having been working in business continuity management for about five years at that stage, I was aware of the prevalence of many questionable practices in risk management.  But seeing how entrenched these methods were, and how confident people were in their efficacy, I wasn’t sure if I was alone in having doubts.

It was therefore wonderful to come across a book that clearly, but rigorously, explained what was going wrong and, more importantly, provided a clear road map for improvement.  Since that time, I have recommended the book to anybody who has attended our training courses, many of our consulting clients and, basically, anybody else who listened.  I was delighted to hear of the release of the second edition, but would it live up to my hopes?

The second edition retains the essential look and feel of the original but has clearly been updated throughout, with many useful references to recent events, particularly in the area of cyber security.  The most obvious addition is a completely new chapter (Chapter 4), laying out a simple approach to making the initial transition to quantitative techniques.  This forms a “red thread” through the rest of the book.  There is also important new material on a number of topics, principally:

  • Utility theory (Chapter 6);
  • Inconsistency in expert judgements (Chapter 7); and
  • The analysis of near-misses (Chapter 12).

All of this adds up to a slightly longer, but still very readable, book.

Sadly, the same flawed risk management practices that were highlighted in 2009 are still prevalent today, despite the sustained efforts of Hubbard and others; so the importance of this book has not diminished.  The release of this excellent second edition is very timely and I would thoroughly recommend it to anybody working in any aspect of risk management.  More importantly though, I would also recommend the book to executives and general managers: to paraphrase Georges Clemenceau, risk management is too important to be left to the risk management profession.

As in many parts of the world, here in the UK we have experienced unprecedented events in recent weeks.  Amidst the grim backdrop of the numbers of infections and deaths growing daily we have seen schools, bars and restaurants closed; sport put on hold; and, finally, a nationwide lock-down.  However this has all been achieved with a combination of persuasion and emergency legislation, passed specifically to deal with the Covid-19 outbreak, rather than using the Civil Contingencies Act (CCA).

Passed in 2004, the CCA sought to put emergency planning on a proper footing for the 21st century.  Learning from the experiences of recent events such as the “Millennium Bug”, the fuel protests in 2000, the Foot and Mouth outbreak in 2001 and 9/11, the CCA was designed to provide a flexible framework for planning for, and responding to, crises in our modern age.  The most visible outcomes from the passing of the CCA were:

  • The designation of Cat 1 (eg the Emergency Services, NHS, Local Authorities, Environment Agency) and Cat 2 (eg ports, airports, railways, utilities) Responders with various statutory duties;
  • The coordination of local planning through Local Resilience Forums (LRFs); and
  • The publication of Community Risk Registers by these LRFs.

But the CCA also contained various emergency powers to enable the Government to deal with extraordinary situations; and it is the failure to make use of any of these powers that is curious at the present time.  Actually this is not an unusual observation: in many cases, when faced with an incident or crisis, organisations ignore the plans that they have documented, tested and exercised in favour of making things up as they go along.  Why?

Some of the reasons that have been observed in other instances of organisations not using their pre-prepared plans when facing a real incident are:

  • The senior management team lack awareness of and/or confidence in the written plans;
  • The plan can only be triggered by certain prescribed events, none of which occur in this particular scenario; and/or
  • The senior management team believe that the incident requires a brand new bespoke solution, rather than any of the generic solutions documented in the plan.

One of the key areas we look for with any organisation when conducting a post-incident debrief is examples of where they have not used their pre-prepared plans.  We then explore with them why, in each instance, they chose not to.  As the current crisis subsides, and we can start looking again to the future, it would be very useful for the UK Government to examine why they chose not to utilise the CCA in the Covid-19 outbreak.

Given the heightened risk of cyber incidents in the current Covid-19 crisis, it seems timely to look at the Cyber Security Breaches Survey 2020, published recently by the Department for Digital, Culture, Media and Sport.  Now in its fifth year, the survey looks at UK businesses, charities and, for the first time, educational establishments.

In terms of frequency of breaches and attacks, the survey finds little difference from previous years:

  • 46% of businesses (unchanged from last year); and
  • 26% of charities (up from 19% last year)

were aware of a cyber breach or attack in the last 12 months.  Within this overall threat landscape, phishing attacks had increased, whilst malware and other viruses had decreased.  However, looking for the first time at the education sector, the survey found that an astonishing 80% of Further and Higher Education establishments were aware of a breach or attack.

Looking at impacts, the survey found that only 19% of businesses suffering a breach or attack experienced a loss of data or financial cost.  Even within this small subset who experienced a “material outcome”, the average cost reported was only £3,230.  This figure seems extremely low when compared with other data sources and calls into question whether responding organisations had calculated the full cost of incidents.

The survey also looks at the steps that organisations are taking to manage cyber risk.  Both businesses and charities are more likely to have a written cyber security policy in place (38% and 42% respectively) than in previous years.  Curiously though, given the UK Government’s backing of the scheme, the survey does not specifically ask about Cyber Essentials accreditation.  However, in a slightly worrying revelation, it notes that only 13% of both businesses and charities are even aware of the scheme!

I was delighted to see an article on the BBC website today repeating much of what we said in a blog post back in January!  When we blogged, the number of confirmed cases of Coronavirus globally was doubling roughly every two days; but now the BBC reports that confirmed cases in the UK are only doubling every 3-4 days.  In the current situation we have to be thankful for any good news.

This is also a reminder that our understanding of the pandemic is still changing from day to day as more data emerges – a theme that we first blogged about in January too.  This theme was reinforced again in the news today with reports of a new piece of research predicting that there could be as few as 7,000 Coronavirus deaths in the UK.

It is therefore probably timely to update another one of our blog pieces, this time from March 12th, in which we mentioned the progress of the disease in Italy.  Fitting a logistic curve to the data up to that point (at which time Italy had reported about 12,500 cases) suggested that the total number of confirmed cases in Italy would ultimately reach about 40,000 but, sadly, that has not proven to be the case.  Re-fitting the curve two weeks later suggests a much more prolonged and severe wave of infections.

Meanwhile, fitting to data in the UK a week ago suggested a final total of around 50,000 confirmed cases for this pandemic wave.
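The curve fitting described in these posts can be sketched as follows.  The data below is synthetic, generated for illustration only (the real counts came from official daily figures), and this three-parameter logistic form is one common parameterisation, not necessarily the exact one used in our analysis:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    # K: final epidemic size; r: growth rate; t0: inflection point (day)
    return K / (1.0 + np.exp(-r * (t - t0)))

# Synthetic cumulative case counts: a 40,000-case logistic wave with a
# little multiplicative noise, standing in for real reported figures.
days = np.arange(0, 31)
rng = np.random.default_rng(0)
cases = logistic(days, 40_000, 0.3, 15) * rng.normal(1.0, 0.02, days.size)

# Fit the three parameters; rough initial guesses keep the optimiser stable.
p0 = [2 * cases[-1], 0.2, days[-1] / 2]
(K, r, t0), _ = curve_fit(logistic, days, cases, p0=p0, maxfev=10_000)
print(f"Projected final size: {K:,.0f} confirmed cases")
```

Note that the fit is only well behaved here because the data spans the inflection point; fitted early in a wave (as in our March 12th post for Italy), the projected final size K is highly uncertain and can change dramatically as new data arrives.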

Once again, this picture changes daily as new information emerges: to reiterate, the predicted total for Italy trebled over the course of two weeks.  However, as can be seen from the graph, so far the progress of the disease is actually below the fitted curve.  We’ll keep you posted.

Two reports have now been completed into the cause of the failure of the spillway at Toddbrook Reservoir in Whaley Bridge on 1st August 2019:

  • The “Toddbrook Reservoir Independent Review Report” by Professor David Balmforth, commissioned by DEFRA; and
  • “Report on the Nature and Root Cause of the Toddbrook Reservoir Auxiliary Spillway Failure on 1st August 2019” by Dr Andy Hughes, commissioned by the Canal & River Trust (CRT).

Both reports are publicly available on-line.  Whilst I don’t pretend to understand the technical details, both authors are quite clear that there were multiple serious defects in the original design of the spillway.  This enabled water to flow under the slabs forming the spillway, eroding the fill beneath them and, ultimately, displacing the slabs themselves.

Interestingly though, the authors differ on the significance of the contribution of widely-reported maintenance issues at the dam: Balmforth views this as another primary cause of the failure, whereas Hughes sees it as very much a secondary consideration.  Balmforth also mentions a third contributing factor, the failure of the CRT to lower the water level when the severe weather warning was first issued; but is unable to judge if this could have prevented the outcome on the day.

Whilst Balmforth and Hughes are concerned with specific issues of dam design, the general pattern of multiple “latent incubating defects” in a system is very familiar from studies of previous disasters from Aberfan to the Challenger Space Shuttle.  As in these previous incidents, various people in various different organisations (and indeed members of the public in Whaley Bridge) were aware that there were issues with the dam, but nobody was able to put the pieces together: what Barry Turner called a “Failure of Foresight”.  Turner went on to identify four common features in such failures, two of which are specifically highlighted again in the reports into Toddbrook Reservoir:

  • Division of responsibilities – both organisationally between CRT and DEFRA, and individually between Supervising Engineers and Inspecting Engineers; and
  • Poor intra/inter organisational communications – in particular people not having access to drawings and other documents that they needed, and the failure of the most recent Inspecting Engineer’s report to prompt urgent remedial action on the spillway.

Reflecting on the incident at Toddbrook Reservoir, if we can identify and address these sorts of problems in our own organisations then we are one step closer to preventing a disaster closer to home.

It’s exactly six weeks since we first blogged about the spread of coronavirus – at that point we warned that the next few weeks were likely to be characterised by considerable uncertainty.  Much has happened since then; just in the last few days we have seen:

  • The formal declaration of a pandemic;
  • A lock-down in Italy;
  • Cancelling of flights from Europe to the US; and
  • Closure of schools in Ireland.

However, we still know comparatively little about the threat that we are dealing with.

There is now reasonable agreement about some of the critical epidemiological details, such as:

  • The reproduction number, R0, lies between 2 and 3; and
  • The incubation period is around 5 days.
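One useful thing these parameters do allow is a rough estimate of how fast unchecked transmission grows.  The sketch below additionally assumes a serial interval (the average gap between successive infections) of about 5 days; that figure is an assumption for illustration, not one of the agreed details above:

```python
import math

SERIAL_INTERVAL = 5.0  # days -- an assumed value, not from the sources above

def doubling_time(r0: float, serial_interval: float = SERIAL_INTERVAL) -> float:
    # In the early exponential phase, growth rate r ~ ln(R0) / serial interval,
    # so case numbers double roughly every ln(2) / r days.
    growth_rate = math.log(r0) / serial_interval
    return math.log(2) / growth_rate

for r0 in (2.0, 2.5, 3.0):
    print(f"R0 = {r0}: cases double roughly every {doubling_time(r0):.1f} days")
```

On these (admittedly crude) assumptions, R0 between 2 and 3 implies unchecked cases doubling every 3 to 5 days.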

But still we are unable to answer critical questions like “How quickly will it spread?”, “Where will it hit next?”, “How bad will it get?” and “What can we do to limit the spread of the disease?”  Absent clarity on these points, it is perhaps not surprising that we see different governments reacting in completely different ways.  Interestingly, research published in the Journal of Clinical Medicine in early February predicted the likelihood of an outbreak in Italy as lower than that in the USA, Canada, the UK or Germany: so why has it happened?

Whilst we may not (yet) understand the cause of the outbreak in Italy, it may still be a useful case study of the effects of a coronavirus outbreak in a country like the UK.  The graph below shows confirmed cases up to today and a simple logistic curve predicting the progress of the disease up to the end of the month.

The chilling observation is that we in the UK are now at the point, in terms of both confirmed cases and deaths, that Italy was at about two weeks ago; but it just seems impossible to predict with any accuracy whether we will now follow a similar trajectory or not.  All the same there must already be lessons being learned at government, industry and firm-level that we can usefully apply in this country: it is vital that we seize this opportunity in the limited time available.

Understandably we are all focused on the growing threat of coronavirus; but that doesn’t mean that other risks have gone away.  In particular this week we saw announcements of high-profile data breaches at Network Rail and Virgin Media.

On Monday it emerged that the email addresses and travel details of about 10,000 people who used free wifi at UK railway stations had been exposed online.  The database, found on Amazon Web Services by a security researcher, included personal contact details and dates of birth.  Then on Thursday it was announced that a database containing details of 900,000 Virgin Media customers and potential customers had been accessible on-line for ten months.  Once again this contained phone numbers, home and email addresses.  It is believed that neither database contained any passwords or financial details.

Whilst the underlying cause of the incidents appears very similar (a failure to properly secure information stored in the cloud), the responses have been quite different.  Virgin Media promptly acknowledged that the information was accessed “on at least one occasion”; apologised to customers; and informed the Information Commissioner’s Office (ICO).  By contrast the wifi provider to Network Rail, C3UK, stated on Monday that “To the best of our knowledge, this database was only accessed by ourselves and the security firm and no information was made publicly available”; and, based on this, they had chosen not to inform the ICO.

It is not clear if C3UK’s approach has provided much reassurance to passengers who may have been affected.  It would appear, though, that their customers, Network Rail and the train operating companies, are not overly impressed.  Network Rail have stated that they have contacted the ICO themselves and had “strongly suggested” to C3UK that it consider reporting the vulnerability; and Greater Anglia said it no longer used C3UK to provide its station wifi.

Understandably, media attention in the UK is focused on the growing threat of Coronavirus and the two recent severe weather events.  However, as well as these ongoing high-profile stories, there has also been a recent spate of food product recalls.

We have been tracking the number of food product recalls on the FSA website for some years now and the figures are pretty low and fairly steady (the figures for electrical product recalls, also shown, actually appear to be falling):

However, despite this, there have been five food product recalls in the last week alone:

  • Lidl – Lupilu baby food
  • Waitrose – Duchy organic almonds
  • Nestle – Ski yogurt
  • Iceland – Vegetable lasagne
  • Coop – Gro sticky toffee pudding

For more details of these recalls go to the FSA website.

How surprised should we be by this: is something amiss?  If the true underlying incidence of recalls is about 50 per year (roughly one a week), we would expect to see a week with five or more recalls about once every five years; so it does seem a little strange.
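That back-of-envelope figure can be checked directly.  If recalls arrive at random at about one per week, the weekly count follows a Poisson distribution, and the chance of seeing five or more in a single week is:

```python
import math

rate = 1.0  # recalls per week, assuming about 50 per year

# P(X >= 5) = 1 - P(X <= 4) for X ~ Poisson(rate)
p_five_or_more = 1 - sum(math.exp(-rate) * rate**k / math.factorial(k)
                         for k in range(5))

years_between = 1 / (p_five_or_more * 52)
print(f"P(5+ recalls in one week) = {p_five_or_more:.4f}")
print(f"Expected roughly once every {years_between:.1f} years")
```

This gives a probability of about 0.4% in any given week, or roughly one such week every five years, as quoted above.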

We don’t want to spread alarm though – this is almost certainly just a random blip. It is important to bear in mind that if you look hard enough you will find some unusual events out there! Nonetheless, if you’re in the manufacturing sector, it might not be a bad idea to give your recall plans a quick refresh.