A number of prominent UK universities are amongst hundreds of organisations globally whose data has been stolen in a ransomware attack on cloud-computing provider Blackbaud. Remarkably, it has emerged that Blackbaud was attacked back in May but waited two months to inform its users. It has also emerged that the company paid an undisclosed ransom in return for “confirmation” that the stolen data had been destroyed. Unsurprisingly, Blackbaud are being widely criticised both for paying a ransom to criminals and for the delay in informing customers. Given their poor handling of the incident, it is debatable how reassured we can be by the company’s claims that:
- “The majority of our customers were not part of this incident”; and
- There is “no reason to believe that the stolen data was or will be misused”.
Universities and charities typically use Blackbaud to manage alumni and donor relations so, in many cases, the personal data stolen is fairly limited. However, there are exceptions: it is reported that the University of York has told its students and alumni that student numbers, addresses, phone numbers, email addresses, and occupation and employer details were among the data stolen.
Whilst the current focus is on the failings of Blackbaud, there are ongoing wider concerns over information security issues within the higher education sector. According to a recent survey by Redscan (to which 86 UK institutions responded):
- Only 54% of university staff had received any information security training; and
- Over half of universities had reported at least one data breach to the Information Commissioner’s Office (ICO).
This tallies with the UK Government’s Cyber Security Breaches Survey 2020, which found that 80% of Further and Higher Education establishments were aware of a breach or attack. Given the value of the intellectual property that universities hold, and the quantity of sensitive personal data they keep on staff and students, these figures are very worrying.
The BBC have published a fascinating bird’s-eye view of a ransomware attack at the University of California San Francisco this week. Acting on a tip-off, the BBC were able to follow the on-line ransom negotiations as they happened, culminating in the payment of $1.14m. We can only speculate, but the willingness of the university to deal with criminals suggests that the data that was being ransomed:
- Had not been properly backed up; and/or
- Had not been anonymised/encrypted.
Of course, followers of our blog will not be surprised to hear of another organisation paying a ransom: we blogged about this trend back in June. The Hiscox Cyber Readiness Report last year found that one in six firms that were targeted paid a ransom of some sort, and this could very well be an underestimate: another survey, by Malwarebytes, put the figure at nearly 40%. It has been widely reported that Travelex ended up paying a ransom of $2.3m following the high-profile attack on their systems at the start of the year. Whilst, on the practical side, a survey by Coveware found that 96% of ransom payments were rewarded with a working decryption tool, there are still profound ethical and reputational issues around paying out to criminals in this way.
Rather than have to make the invidious choice about whether or not to pay a ransom, surely it is better to invest ahead of time in your information security. Follow the link to find out how we can help you to put a robust information security management system in place for your organisation.
There are many interesting lessons to learn in the unfolding saga at on-line sports retailer Wiggle…
Customers first started raising concerns over two weeks ago about orders being placed on their Wiggle accounts (and payments taken) without their knowledge. Some people also reported that they had been locked out of their accounts. The company’s initial response was characterised by a complete failure to engage with customers’ concerns. As of Monday they have publicly acknowledged that there is a problem, but the tone of their communications is still defensive, focusing on the fact that “Our systems remain secure” and “customers’ login details have been acquired outside of Wiggle’s systems.”
The most likely scenario seems to be that, using personal details stolen elsewhere, fraudsters were able to log in to the Wiggle accounts of individuals who had re-used login details and passwords from other services. The fraudsters were then able to place orders and change account details (including login details) on these accounts. Whilst Wiggle seem to be placing great significance on the fact that the data was not stolen from them, and that there was therefore no data breach, that is of little interest or comfort to affected customers. Moreover, “credential stuffing” attacks such as these are a notifiable data protection incident in their own right (Wiggle has confirmed that it has reported the incident to the ICO).
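The mechanics are simple enough to sketch. The snippet below (all names and passwords invented) illustrates why a list of credentials leaked from one service is immediately useful against another, wherever users have re-used the same email/password pair:

```python
# Illustrative sketch of credential stuffing, using entirely fictional data.
# Credentials leaked from a breach of some other service:
leaked_credentials = {
    ("alice@example.com", "hunter2"),
    ("bob@example.com", "correcthorse"),
}

# Accounts held by the retailer now being targeted:
retailer_accounts = {
    "alice@example.com": "hunter2",     # re-used password -> vulnerable
    "bob@example.com": "Un1que!Pass",   # unique password -> safe
}

def stuffing_hits(leaked, accounts):
    """Return the accounts an attacker could access by replaying leaks."""
    return sorted(
        email for email, password in leaked
        if accounts.get(email) == password
    )

print(stuffing_hits(leaked_credentials, retailer_accounts))
# -> ['alice@example.com']
```

Real attacks simply replay millions of such pairs through automated login attempts, which is why unique passwords (and rate-limiting on the retailer's side) defeat them.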
Clearly there are important lessons here for all of us as consumers, principally about not re-using login details for multiple sites. The incident also highlights the challenge for on-line retailers in striking the correct balance between security and convenience: it has surprised many people that the fraudsters were able to order goods to be sent to a new address without having to re-enter any card details. But the primary lesson for all organisations is that information security incidents will continue to occur and that you need to be ready to respond quickly when they do. Critically, that involves having processes in place for investigating reports of suspicious activity in a timely fashion and for communicating effectively with customers.
We blogged back in January about how GDPR fines were starting to bite. Now, drawing on data from GDPR Enforcement Tracker, we take a first look at the fines that have been issued under GDPR specifically for data breaches.
The database lists 70 fines related to data breaches, ranging in value from €300 to €10m. 21 countries have levied fines so far, with the greatest number being imposed in Romania (15 fines). Not all entries include the number of people affected but, from the data available, there is certainly a significant spread in scale: from incidents affecting only one or two individuals to one in which the records of 6 million people were compromised.
The mean value of approximately €250,000 is heavily skewed by a few large fines, so it is perhaps more informative to look at the median value of around €25,000. This is a surprisingly low figure given the maximum fines of 4% of global turnover allowed under GDPR, but probably reflects a sensible and pragmatic application of the new powers by the various regulatory authorities. As ever though, it is important to remember that fines may be only a small fraction of the total cost to a company of a data breach: the IBM/Ponemon Institute 2018 Cost of a Data Breach Study found that the largest component of that cost was lost business.
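To illustrate why the median is more informative than the mean here, the short sketch below uses a made-up set of fines (not the actual Enforcement Tracker data) in which a couple of large outliers drag the mean far above the typical fine:

```python
# Illustrative sketch with invented figures: a few large outliers skew the
# mean of a fine distribution, while the median stays near the typical value.
from statistics import mean, median

# Hypothetical fine amounts in euros (mostly small, two large outliers)
fines = [300, 5_000, 15_000, 25_000, 30_000, 80_000, 500_000, 1_500_000]

print(f"mean:   €{mean(fines):,.0f}")    # pulled up by the two outliers
print(f"median: €{median(fines):,.0f}")  # close to the typical fine
```

With most regulators issuing modest fines and a handful issuing very large ones, exactly this pattern emerges in the real data.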
(At the time of writing we are still waiting for the UK Information Commissioner’s Office (ICO) to confirm the level of fines that will be imposed on British Airways and Marriott International. The ICO announced its intention to fine these firms £183m and £99m respectively, but neither of these amounts has yet been finalised.)
There has been much media coverage today of “Exercise Iris”, an exercise delivered to Scottish Health Boards in March 2018 by the Scottish Government’s Health Protection Division. The exercise scenario was based on an outbreak of Middle East Respiratory Syndrome (MERS) in Scotland, and media reporting has focused on why the exercise recommendations were not shared and acted upon more widely. Understandably, people are asking if we might be in a better situation now if the lessons identified two years ago had prompted action across the UK.
Really there are two separate issues here: sharing observations and recommendations from exercises; and turning these recommendations into actions. We will explore the latter issue first.
The failure to convert “lessons identified” into “lessons learned” is a well-established theme in both the practitioner and academic literature on crisis management. Time and time again, recommendations are made following an exercise, or indeed an actual incident, but are never followed through. Looking at the post-exercise report from Exercise Iris, there are two obvious reasons why this may have been the case here: many of the recommendations are rather vague and, at least in the public version of the report, no deadlines are given for completion. The situation is further complicated because the actions fall on many different organisations (individual Health Boards, the Scottish Government, etc.) and it is not clear who was responsible overall for seeing that actions were completed. It is thus not altogether surprising that resources were not immediately made available to address the identified issues.
The information sharing issue is less straightforward. The Scottish Government has stated that the findings were shared with attendees, which implies that they were not shared with anybody else at the time. It is reported that the findings were subsequently shared with the UK Government’s New and Emerging Respiratory Virus Threats Advisory Group in June 2019. One has to be mindful of hindsight bias here: now that we are in a pandemic it seems obvious that these particular findings should have been shared widely; but if everybody shared the findings of every single exercise with everybody else, there would simply be information overload and none of the reports would ever get read. Perhaps the real question is why, once we became aware that we were facing a pandemic, the recommendations were still not widely disseminated. Maybe all we need is a searchable database of post-exercise reports from across the health services, emergency services, and central and local government, upon which we can draw when we need to.
In summary then, there should clearly be better mechanisms for sharing findings from exercises throughout the UK (and beyond) but, even where information is shared, it is far from certain that it will be acted upon.
I’m sure I wasn’t the only person to be somewhat surprised at the news that Baroness Dido Harding has been appointed to oversee the implementation of the new NHS Covid-19 app. Rightly or wrongly, she will always be associated with the massive data breach at TalkTalk in October 2015 and has received significant criticism for her handling of that incident. As one commentator optimistically phrased it, she may have learnt some useful lessons from that incident. Hopefully that is true, but it is hardly likely to inspire confidence in a scheme that is already highly controversial.
The news also reminded me of another interesting blog post of ours from last year. The post summarised findings from a new academic study of the cost to organisations of data breaches. As well as addressing the main research question the authors also found, somewhat surprisingly, that:
- The pay of CEOs in firms that had had a data breach increased relative to firms that hadn’t; and
- Security breaches had no effect on the rate of CEO turnover.
Whilst Baroness Harding did eventually leave TalkTalk, it was not before she famously picked up a substantial bonus. It is not my intention to criticise individuals, rather I repeat the story because it suggests that CEOs are not adequately incentivised to manage information security risks. If CEOs know that their remuneration and career prospects will not be damaged, even by a spectacular data breach; why would they allocate scarce resources to mitigate the risk?
That leads on finally to the other big information security story of the week – EasyJet. The headlines have focused on the total number, 9 million, of customers affected. But perhaps more worryingly, it is reported that over 2,000 customers had their credit card details compromised. Given that this incident occurred post-GDPR, EasyJet may be looking at a very significant fine at a time when, with a global pandemic going on and almost no air travel taking place, they have enough problems already.
An article by Cambridge Risk Solutions, published this week in Continuity Central, looks at whether there is any evidence that firms that follow good practice in business continuity management (BCM) have fared better in the current Covid-19 pandemic. Specifically, it looks at the impact on the share prices of companies in the FTSE 100 from mid-February to mid-April, to see if those that have adopted BCM have suffered less damage to shareholder value.
Sadly the results are inconclusive: there is no association between adoption of BCM good practice and falls in share price at any stage during the 8-week period studied. This could be because the effect is very small and buried in the noise, but the article also considers other possible explanations, including:
- The possibility that good-practice-based plans were abandoned by senior management when faced with a crisis of such magnitude; or
- The possibility that such plans were implemented but failed to mitigate the impact on businesses.
Exploring both of these possibilities will be vital in learning lessons from this dreadful crisis and improving the practice of BCM for the future.
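For the curious, one way of testing this kind of association can be sketched as a simple permutation test on the difference in mean share-price falls between the two groups. All figures below are entirely synthetic, and the article's actual methodology may well differ:

```python
# Illustrative sketch (synthetic data): did BCM adopters suffer smaller
# share-price falls? A permutation test on the difference in group means.
import random

random.seed(42)

# Percentage share-price falls over the study window (invented figures)
bcm_adopters = [-28.0, -31.5, -25.2, -33.1, -29.8, -27.4]
non_adopters = [-30.2, -27.9, -32.4, -26.8, -31.1, -29.5]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(bcm_adopters) - mean(non_adopters)

# Shuffle the group labels many times and count how often a difference at
# least as large as the observed one arises purely by chance.
pooled = bcm_adopters + non_adopters
n = len(bcm_adopters)
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    if abs(mean(pooled[:n]) - mean(pooled[n:])) >= abs(observed):
        count += 1

p_value = count / trials
print(f"observed difference: {observed:.2f} points, p ~= {p_value:.2f}")
```

A high p-value, as with these invented numbers, is exactly the "no association" outcome the article reports: the difference between the groups is indistinguishable from noise.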
You can read the full article here.
Reading the first edition of “The Failure of Risk Management: Why It’s Broken and How to Fix It”, by Douglas Hubbard, back in 2009 was a professional epiphany for me. Having been working in business continuity management for about five years at that point, I was aware of the prevalence of many questionable practices in risk management. But seeing how entrenched these methods were, and how confident people were in their efficacy, I wasn’t sure if I was alone in having doubts.
It was therefore wonderful to come across a book that clearly, but rigorously, explained what was going wrong and, more importantly, provided a clear road map for improvement. Since that time, I have recommended the book to anybody who has attended our training courses, many of our consulting clients and, basically, anybody else who listened. I was delighted to hear of the release of the second edition, but would it live up to my hopes?
The second edition retains the essential look and feel of the original but has clearly been updated throughout, with many useful references to recent events, particularly in the area of cyber security. The most obvious addition is a completely new chapter (Chapter 4), laying out a simple approach to making the initial transition to quantitative techniques. This forms a “red thread” running through the rest of the book. There is also important new material on a number of topics, principally:
- Utility theory (Chapter 6);
- Inconsistency in expert judgements (Chapter 7); and
- The analysis of near-misses (Chapter 12).
All of this adds up to a slightly longer, but still very readable, book.
Sadly, the same flawed risk management practices that were highlighted in 2009 are still prevalent today, despite the sustained efforts of Hubbard and others, so the importance of this book has not diminished. The release of this excellent second edition is very timely and I would thoroughly recommend it to anybody working in any aspect of risk management. More importantly though, I would also recommend the book to executives and general managers: to paraphrase Georges Clemenceau, risk management is too important to be left to the risk management profession.
As in many parts of the world, here in the UK we have experienced unprecedented events in recent weeks. Amidst the grim backdrop of the numbers of infections and deaths growing daily, we have seen schools, bars and restaurants closed; sport put on hold; and, finally, a nationwide lock-down. However, this has all been achieved with a combination of persuasion and emergency legislation, passed specifically to deal with the Covid-19 outbreak, rather than by using the Civil Contingencies Act (CCA).
Passed in 2004, the CCA sought to put emergency planning on a proper footing for the 21st century. Learning from the experiences of recent events such as the “Millennium Bug”, the fuel protests in 2000, the Foot and Mouth outbreak in 2001 and 9/11, the CCA was designed to provide a flexible framework for planning for and responding to crises in our modern age. The most visible outcomes from the passing of the CCA were:
- The designation of Cat 1 (eg the Emergency Services, NHS, Local Authorities, Environment Agency) and Cat 2 (eg ports, airports, railways, utilities) Responders with various statutory duties;
- The coordination of local planning through Local Resilience Forums (LRFs); and
- The publication of Community Risk Registers by these LRFs.
But the CCA also contained various emergency powers to enable the Government to deal with extraordinary situations, and it is the failure to make use of any of these powers that is curious at the present time. Actually, this is not an unusual observation: in many cases, when faced with an incident or crisis, organisations ignore the plans that they have documented, tested and exercised in favour of making things up as they go along. Why?
Some of the reasons that have been observed in other instances of organisations not using their pre-prepared plans when facing a real incident are:
- The senior management team lack awareness of and/or confidence in the written plans;
- The plan can only be triggered by certain prescribed events, none of which occur in this particular scenario; and/or
- The senior management team believe that the incident requires a brand new bespoke solution, rather than any of the generic solutions documented in the plan.
One of the key areas we look for with any organisation when conducting a post-incident debrief is examples of where they have not used their pre-prepared plans. We then explore with them why, in each instance, they chose not to. As the current crisis subsides, and we can start looking again to the future, it would be very useful for the UK Government to examine why they chose not to utilise the CCA in the Covid-19 outbreak.
Given the heightened risk of cyber incidents in the current Covid-19 crisis, it seems timely to look at the Cyber Security Breaches Survey 2020, published recently by the Department for Digital, Culture, Media and Sport. Now in its fifth year, the survey covers UK businesses, charities and, for the first time, educational establishments.
In terms of frequency of breaches and attacks, the survey finds little difference from previous years:
- 46% of businesses (unchanged from last year); and
- 26% of charities (up from 19% last year)
were aware of a cyber breach or attack in the last 12 months. Within this overall threat landscape, phishing attacks had increased, whilst malware and other viruses had decreased. However, looking for the first time at the education sector, the survey found that an astonishing 80% of Further and Higher Education establishments were aware of a breach or attack.
Looking at impacts, the survey found that only 19% of businesses suffering a breach or attack experienced a loss of data or a financial cost. Even within this small subset who experienced a “material outcome”, the average cost reported was only £3,230. This figure seems extremely low compared with other data sources and calls into question whether responding organisations had calculated the full cost of incidents.
The survey also looks at the steps that organisations are taking to manage cyber risk. Both businesses and charities are more likely to have a written cyber security policy in place (38% and 42% respectively) than in previous years. Curiously though, given the UK Government’s backing of the scheme, the survey does not specifically ask about Cyber Essentials accreditation. However, in a slightly worrying revelation, it notes that only 13% of both businesses and charities are even aware of the scheme!