M. Angela Sasse's research portrait

Forging the strongest links – "Designing usable security that works with and for, rather than against, users and their organisations"

It's often said that the weakest link in the security chain is humans. They are certainly a trial for security experts. People forget, lose, and write down passwords – when they're not choosing them badly in the first place. As the phone-hacking headlines made clear, people targeted by social engineering (or "blagging") can be tricked, bribed, cajoled, and misled into revealing confidential information about the inner workings of the systems they're supposed to protect. And people expose data when they shouldn't, by copying and transferring it (as in the case of the two CDs containing the entire HMRC Child Benefit database that were lost in transit to the National Audit Office) or by losing it outright (as in dozens of cases of lost or stolen laptops and flash drives).

There are two possible responses to this. One is to recognise that for all their foibles, flaws, and inconsistencies, humans have compensating assets that can be enlisted to improve security: they are also flexible, curious, ingenious, intuitive, and courageous. The other is to see only the risk they pose and lock them down as far as possible. Take this approach – analogous to requiring all flyers to wear only a thin layer of plastic wrap, carry no luggage, and sit chained to their seats – and you will spend heavily cleaning up messes, the users will hate you and be motivated to find ways around your rules, and they will be unable to help in any unforeseen emergency. Yet the designers of security systems typically prefer this seemingly logical but actually dangerous conclusion: that users must be strictly controlled and, as the clearly broken element of the security system, fixed through training and education – the equivalent of teaching those bound, plastic-wrapped flyers sphincter exercises.

Since 1999 Angela Sasse has researched and promoted the first approach as the principal theme of her work: designing usable security that works with and for, rather than against, users and their organisations. Her group's overall goal is to produce foundational papers in the area of human-centred security, privacy, identity, and trust.

The problem with the "users are dangerous" approach, as Sasse and Anne Adams (then a PhD student, now a lecturer at the Open University) pointed out in their landmark 1999 paper Users Are Not the Enemy, is that security designed along these lines creates barriers that users must overcome in order to do their jobs and, as a result, a resource-intensive administrative burden for their organisation. When BT hired Sasse and Adams to conduct the study the paper recounts, lost and forgotten passwords were consuming help desk resources at ever-increasing cost to the company.

Security practice has changed distressingly little since then, as Sasse and UCL research fellow Philip Inglesant wrote in a follow-up paper presented at the 2010 CHI conference, The true cost of unusable password policies: password use in the wild. Organisations still routinely require their employees to devise, use, and frequently change passwords according to complicated and inconsistent rules that make passwords difficult to create and remember, forcing users to organise their tasks around the password mechanisms. It is commonplace for security people to say that if users only understood the dangers they would behave differently; Sasse and Inglesant counter-argue that, "If security people only understood the true costs for users and the organisation they would set policies differently".

Usability became a vital component of general software development nearly 20 years ago, when the goal of expanding personal computer sales into the mainstream meant the industry had to adapt to its new market of non-expert users or go bankrupt providing technical support. The security sector has been slow to catch on, and there has been surprisingly little crossover into the design of security products and processes. Even well-established standards of software design are neither implemented in the interfaces used to configure security products nor reflected in the security policies devised by organisations.

Sasse, with Philip Inglesant and researchers from Newcastle University, discusses these failures in the 2010 paper, A Stealth Approach to Usable Security: Helping IT Security Managers to Identify Workable Security Solutions, presented at the New Security Paradigms Workshop. Their conclusion: the problem starts at the top, because CISOs don't know how to apply research findings on the usability and economics of security. Discovering this gap led the group to consider CISOs themselves as users in need of more usable tools and better feedback. The paper accordingly goes on to propose a prototype tool that shows the impact of different decisions, helping CISOs make more informed choices and justify their decisions to senior managers and other stakeholders.

If users pay in inconvenience, aggravation, and lost time, organisations pay both financially (by having to provide the resources to support poorly designed work practices) and in lost productivity and increased, unsuspected vulnerabilities (when users react to onerous demands by devising workarounds). Worst of all, because each unusable security measure is designed in isolation from the others, stressed-out users struggle to comply with the poorly understood aggregate burden of many conflicting requirements. Ironically, as Sasse and Flechais note in "Usable Security: Why Do We Need It? How Do We Get It?", published in the 2005 book Security and Usability (edited by Lorrie Faith Cranor and Simson Garfinkel and published by O'Reilly), in designing phishing scams attackers have paid more attention to human factors in security than security designers have.

Sasse has studied these problems from a variety of angles. In Divide and Conquer: The role of trust and assurance in the design of secure socio-technical systems, presented at NSPW 2005, she and Ivan Flechais (now a lecturer in software engineering at Oxford) argued that since security systems must perforce involve people, dependable security requires an understanding of both people and technology. Key to this strategy is distinguishing between two definitions of "trust": shared values and responsibility between humans (sociology) versus systems from which all vulnerabilities have been removed (computer science). Sasse and Flechais dub the latter "assurance".

Many organisations manage the conflict between assurance and trust by trying to implement systems that do not require employees to be trusted – the "enemy" scenario already shown by Sasse's work to be counter-productive (or the plastic-wrapped flyer). Better, more secure systems, Sasse and Flechais conclude, can be built by combining the social and technological approaches. In 2007, with fellow UCL researcher Cecilia Mascolo, they elaborated on this theme in Integrating Security and Usability Into the Requirements and Design Process, published in Electronic Security and Digital Forensics.

The latter is one of several papers Sasse has authored discussing design principles for more usable security systems. In the "Usable Security" paper, Sasse and Flechais presented AEGIS (for Appropriate and Effective Guidance for Information Security), a model they devised with UCL senior lecturer Stephen Hailes to simplify usable security design for administrators and developers. With Jens Riegelsberger (now a researcher at Google), in Ignore These at Your Peril: Ten principles for trust design, presented at the 3rd (2010) International Conference on Trust and Trustworthy Computing, Sasse noted the lack of progress in the general understanding of trust issues between the early 1990s and 2005 despite a considerable change of focus. In the early 1990s, establishing "trust online" meant encouraging consumers to engage in ecommerce and disclose credit card and other details. By 2005, the techniques ecommerce sites adopted to make their sites "user-friendly" and apparently trustworthy had been adopted by attackers. Over-reliance on any one type of trust signal, they conclude, makes it a likely vector for attack; instead, designers need to understand users' internalised social norms and habits and support a number of different types of signalling.

Often the easiest way to get organisations and industries to focus on a necessary change is to appeal to the bottom line; this is the second strand emanating from that original 1999 paper. Sasse has a substantial body of work studying the costs of poorly designed security, in the field now known as security economics. In The Compliance Budget: Managing Security Behaviour in Organisations, presented at the 2008 New Security Paradigms Workshop, Sasse, in collaboration with PhD student Adam Beautement and Hewlett-Packard researcher Mike Wonham, analysed the user's security burden in economic terms. Security measures, they argued, need to be seen in context with all the other demands on a user's time and attention. The user's ability to comply – the "compliance budget" – is limited and needs to be managed like any other finite corporate resource. Remembering one password may seem simple enough; remembering 6.5 passwords (the average found in the 2010 study of password use), each with its own generation rules and expiry setting, may be overwhelming.
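The compliance-budget idea can be illustrated with a toy model (the task names, effort figures, and budget below are hypothetical, not taken from the paper): each security demand consumes some of a user's finite capacity to comply, and once that capacity is exhausted, further demands are met with workarounds rather than compliance.

```python
# Toy sketch of the "compliance budget" concept (all values hypothetical).
from dataclasses import dataclass

@dataclass
class SecurityTask:
    name: str
    effort: float  # perceived effort this task costs the user

def comply_or_workaround(tasks, budget):
    """Split tasks into those the user complies with and those
    abandoned once the finite compliance budget runs out."""
    complied, skipped = [], []
    remaining = budget
    for task in tasks:
        if task.effort <= remaining:
            remaining -= task.effort
            complied.append(task.name)
        else:
            skipped.append(task.name)  # user resorts to a workaround
    return complied, skipped

tasks = [
    SecurityTask("remember primary password", 1.0),
    SecurityTask("monthly password change", 2.0),
    SecurityTask("encrypt USB stick", 3.0),
    SecurityTask("remember 5 secondary passwords", 5.0),
]
complied, skipped = comply_or_workaround(tasks, budget=6.0)
```

The point of the model is managerial rather than computational: adding one more "cheap" security demand can still push the aggregate burden past the budget, so policies should be costed against the whole set of existing demands, not judged in isolation.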

In another 2008 paper, presented at WEIS, Modelling the Human and Technological Costs and Benefits of USB Memory Stick Security, Sasse, working with Adam Beautement and researchers from HP Labs, Merrill Lynch, and the University of Bath, took an entirely new tack, analysing this common security policy violation with a macroeconomics-inspired model similar to the interest rate models used by central banks. Their basic hypothesis is that quantifying the tension between the threat to confidentiality and the improved availability of data may help organisations make better decisions about how to direct their investment in technology and how to formulate security policies. Use of encryption, for example, rose sharply with increased monitoring and IT support. Ultimately, they found that the trade-off between confidentiality and availability does exist. Users do make cost-effective decisions – but from their own point of view. Helping them to understand and internalise the risks for the organisation is an important step in enlisting users as part of the defence.
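The paper's actual model is considerably more sophisticated, but the underlying cost-benefit tension can be sketched in miniature (all figures and names below are hypothetical, for illustration only): the organisation weighs the productivity benefit of carrying data on a USB stick against the expected cost of a confidentiality breach, with encryption trading a small, certain overhead for the removal of most of that expected cost.

```python
# Hypothetical cost-benefit sketch, not the paper's model (all figures invented).
def expected_net_benefit(p_loss, breach_cost, availability_benefit,
                         encryption_overhead, encrypted):
    """Expected net benefit to the organisation of one USB-stick use."""
    if encrypted:
        # Encryption removes the breach cost but adds user effort.
        return availability_benefit - encryption_overhead
    # Unencrypted: full benefit, but expected breach cost is carried.
    return availability_benefit - p_loss * breach_cost

# With a 1% chance of losing the stick and a large breach cost,
# encryption wins despite its overhead.
plain = expected_net_benefit(0.01, 10_000, 50, 5, encrypted=False)  # 50 - 100 = -50.0
enc = expected_net_benefit(0.01, 10_000, 50, 5, encrypted=True)     # 50 - 5 = 45
```

Note the asymmetry the sketch makes visible: the user personally experiences only the encryption overhead, while the breach cost falls on the organisation, which is why individually "cost-effective" decisions can be collectively poor ones.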

That first 1999 paper helped found two important new areas of security research: human factors, or usability, now often called HCIsecurity, and security economics. Both areas have grown substantially since then, and both are now covered by an increasing number of conferences and academic departments. For HCIsecurity, conference venues include SOUPS (Symposium on Usable Privacy and Security), CHI (ACM Special Interest Group on Computer-Human Interaction), ACSAC (Annual Computer Security Applications Conference), CCS (ACM SIG on Computer and Communications Security), USENIX, and the New Security Paradigms Workshop. For security economics, the annual Workshop on the Economics of Information Security (WEIS) is now ten years old.

This page was last modified on 19 Jun 2013.

M. Angela Sasse
Head of Information Security Research
6.06, Malet Place Engineering
+44 020 7679 7212
+44 020 7387 1397
a.sasse [at] cs.ucl.ac.uk