Publications

  • Proceedings on Privacy Enhancing Technologies
    Apu Kapadia, Steven J. Murdoch (eds.)
    It is our great pleasure to introduce the second issue of PoPETs, an open access journal that publishes articles accepted to the annual Privacy Enhancing Technologies Symposium (PETS).
    Proceedings on Privacy Enhancing Technologies, Volume 2015, Number 2. De Gruyter Open, June 2015. (Journal of the 15th Privacy Enhancing Technologies Symposium, Philadelphia, PA, USA). [ editors' introduction | closing slides ]
  • CHERI: A Hybrid Capability-System Architecture for Scalable Software Compartmentalization
    Robert N.M. Watson, Jonathan Woodruff, Peter G. Neumann, Simon W. Moore, Jonathan Anderson, David Chisnall, Nirav Dave, Brooks Davis, Khilan Gudka, Ben Laurie, Steven J. Murdoch, Robert Norton, Michael Roe, Stacey Son, Munraj Vadera
    CHERI extends a conventional RISC Instruction-Set Architecture, compiler, and operating system to support fine-grained, capability-based memory protection to mitigate memory-related vulnerabilities in C-language TCBs. We describe how CHERI capabilities can also underpin a hardware-software object-capability model for application compartmentalization that can mitigate broader classes of attack. Prototyped as an extension to the open-source 64-bit BERI RISC FPGA soft-core processor, FreeBSD operating system, and LLVM compiler, we demonstrate multiple orders-of-magnitude improvement in scalability, simplified programmability, and resulting tangible security benefits as compared to compartmentalization based on pure Memory-Management Unit (MMU) designs. We evaluate incrementally deployable CHERI-based compartmentalization using several real-world UNIX libraries and applications.
    2015 IEEE Symposium on Security and Privacy, San Jose, CA, US, 18–20 May 2015. [ paper | DOI 10.1109/SP.2015.9 ]
  • Proceedings on Privacy Enhancing Technologies
    Apu Kapadia, Steven J. Murdoch (eds.)
    It is our great pleasure to introduce the first issue of PoPETs, an open access journal that publishes articles accepted to the annual Privacy Enhancing Technologies Symposium (PETS).
    Proceedings on Privacy Enhancing Technologies, Volume 2015, Number 1. De Gruyter Open, April 2015. (Journal of the 15th Privacy Enhancing Technologies Symposium, Philadelphia, PA, USA). [ editors' introduction ]
  • Be Prepared: The EMV Preplay Attack
    Mike Bond, Marios O. Choudary, Steven J. Murdoch, Sergei Skorobogatov, Ross Anderson
    The leading system for smart card-based payments worldwide, EMV (which stands for Europay, MasterCard, and Visa), is widely deployed in Europe and is starting to be introduced in the US as well. Despite this wide deployment, a series of significant weaknesses exposes EMV to the preplay attack. Specifically, weak and defective random number generators and various protocol failures leave the system open to fraud at scale.
    IEEE Security and Privacy, Volume 13, Number 2, pages 56–64, March–April 2015. [ article | DOI 10.1109/MSP.2015.24 ]
  • Optimising node selection probabilities in multi-hop M/D/1 queuing networks to reduce latency of Tor
    Steven Herbert, Steven J. Murdoch, Elena Punskaya
    The expected cell latency for multi-hop M/D/1 queuing networks, where users choose nodes randomly according to some distribution, is derived. It is shown that the resulting optimisation surface is convex, and thus gradient-based methods can be used to find the optimal node assignment probabilities. This is applied to a typical snapshot of the Tor anonymity network at 50% usage, and leads to a reduction in expected cell latency from 11.7 ms using the original method of assigning node selection probabilities to 1.3 ms. It is also shown that even if the usage is not known exactly, the proposed method still leads to an improvement.
    IET Electronics Letters Volume 50, Issue 17, Pages 1205–1207, 14 August 2014. [ paper | DOI 10.1049/el.2014.2136 ]
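    The approach described in this abstract can be illustrated with a toy numerical sketch (this is not the paper's derivation; the node capacities, traffic rate, and step sizes below are made up for illustration). The M/D/1 sojourn time follows the Pollaczek–Khinchine formula, and projected gradient descent over the probability simplex adjusts the node selection probabilities:

```python
def mdl_latency(lam, mu):
    """Mean M/D/1 sojourn time: service time 1/mu plus the
    Pollaczek-Khinchine waiting time rho / (2 * mu * (1 - rho))."""
    rho = lam / mu
    if rho >= 1.0:
        return float("inf")   # unstable queue
    return 1.0 / mu + rho / (2.0 * mu * (1.0 - rho))

def expected_latency(p, mus, total_rate):
    # Node i receives a fraction p[i] of the total traffic.
    return sum(pi * mdl_latency(total_rate * pi, mu)
               for pi, mu in zip(p, mus))

def project_simplex(p):
    """Euclidean projection onto the probability simplex (sort-based)."""
    u = sorted(p, reverse=True)
    css, k, css_k = 0.0, 0, 0.0
    for i, ui in enumerate(u, start=1):
        css += ui
        if ui + (1.0 - css) / i > 0:
            k, css_k = i, css
    theta = (1.0 - css_k) / k
    return [max(pi + theta, 0.0) for pi in p]

def optimise(mus, total_rate, steps=2000, lr=1e-3):
    """Projected gradient descent on the (convex) expected latency."""
    p = [m / sum(mus) for m in mus]   # start bandwidth-proportional
    eps = 1e-7
    for _ in range(steps):
        grad = []
        for i in range(len(p)):
            q = list(p)
            q[i] += eps
            grad.append((expected_latency(q, mus, total_rate)
                         - expected_latency(p, mus, total_rate)) / eps)
        p = project_simplex([pi - lr * g for pi, g in zip(p, grad)])
    return p

mus = [100.0, 50.0, 10.0]                # node capacities, cells/second
baseline = [m / sum(mus) for m in mus]   # Tor-like bandwidth weighting
opt = optimise(mus, total_rate=50.0)
assert expected_latency(opt, mus, 50.0) <= expected_latency(baseline, mus, 50.0)
```

The final assertion mirrors the paper's finding that optimised probabilities beat bandwidth-proportional assignment; the convexity result is what justifies using a gradient method here at all.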
  • Privacy Enhancing Technologies 2014
    Emiliano De Cristofaro, Steven J. Murdoch (eds.)
    Either through a deliberate desire for surveillance or an accidental consequence of design, there are a growing number of systems and applications that record and process sensitive information. As a result, the role of privacy-enhancing technologies becomes increasingly crucial, whether adopted by individuals to avoid intrusion in their private life, or by system designers to offer protection to their users. The 14th Privacy Enhancing Technologies Symposium (PETS 2014) addressed the need for better privacy by bringing together experts in privacy and systems research, cryptography, censorship resistance, and data protection, facilitating the collaboration needed to tackle the challenges faced in designing and deploying privacy technologies.
    14th Privacy Enhancing Technologies Symposium (PETS 2014), Amsterdam, Netherlands, 16–18 July 2014. Published in LNCS 8555, Springer-Verlag. [ DOI 10.1007/978-3-319-08506-7 | papers | opening slides ]
  • EMV: Why Payment Systems Fail
    Ross Anderson, Steven J. Murdoch
    What lessons might we learn from the chip cards used for payments in Europe, now that the U.S. is adopting them too?
    Communications of the ACM Volume 57, Number 6, Pages 24–28, June 2014. [ paper | DOI 10.1145/2602321 | ACM version ]
  • Chip and Skim: cloning EMV cards with the pre-play attack
    Mike Bond, Omar Choudary, Steven J. Murdoch, Sergei Skorobogatov, Ross Anderson
    EMV, also known as “Chip and PIN”, is the leading system for card payments worldwide. It is used throughout Europe and much of Asia, and is starting to be introduced in North America too. Payment cards contain a chip so they can execute an authentication protocol. This protocol requires point-of-sale (POS) terminals or ATMs to generate a nonce, called the unpredictable number, for each transaction to ensure it is fresh. We have discovered two serious problems: a widespread implementation flaw and a deeper, more difficult to fix flaw with the EMV protocol itself. The first flaw is that some EMV implementers have merely used counters, timestamps or home-grown algorithms to supply this nonce. This exposes them to a “pre-play” attack which is indistinguishable from card cloning from the standpoint of the logs available to the card-issuing bank, and can be carried out even if it is impossible to clone a card physically. Card cloning is the very type of fraud that EMV was supposed to prevent. We describe how we detected the vulnerability, a survey methodology we developed to chart the scope of the weakness, evidence from ATM and terminal experiments in the field, and our implementation of proof-of-concept attacks. We found flaws in widely-used ATMs from the largest manufacturers. We can now explain at least some of the increasing number of frauds in which victims are refused refunds by banks which claim that EMV cards cannot be cloned and that a customer involved in a dispute must therefore be mistaken or complicit. The second problem was exposed by the above work. Independent of the random number quality, there is a protocol failure: the actual random number generated by the terminal can simply be replaced by one the attacker used earlier when capturing an authentication code from the card. This variant of the pre-play attack may be carried out by malware in an ATM or POS terminal, or by a man-in-the-middle between the terminal and the acquirer. 
    We explore the design and implementation mistakes that enabled these flaws to evade detection until now: shortcomings of the EMV specification, of the EMV kernel certification process, of implementation testing, formal analysis, and monitoring customer complaints. Finally we discuss countermeasures. More than a year after our initial responsible disclosure of these flaws to the banks, action has only been taken to mitigate the first of them, while we have seen a likely case of the second in the wild, and the spread of ATM and POS malware is making it ever more of a threat.
    2014 IEEE Symposium on Security and Privacy, San Jose, CA, US, 18–21 May 2014. [ paper ]
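    The first flaw in this abstract, the predictable "unpredictable number", can be sketched as follows (a toy model only: real EMV cryptograms use 3DES/AES session keys over APDU data, and the key and field names here are invented for illustration; HMAC-SHA256 stands in for the card's MAC):

```python
import hmac
import hashlib

def card_mac(card_key, amount, date, unpredictable_number):
    """Toy stand-in for the card's authorization cryptogram: a MAC
    over the transaction data, keyed by the card's secret."""
    msg = f"{amount}|{date}|{unpredictable_number:08x}".encode()
    return hmac.new(card_key, msg, hashlib.sha256).hexdigest()[:16]

class WeakTerminal:
    """A flawed terminal whose 'unpredictable number' is a counter."""
    def __init__(self):
        self.counter = 0x00000042
    def next_un(self):
        self.counter += 1
        return self.counter

# Attacker with brief access to the victim's card predicts the next UN
# and harvests the matching authorization code in advance.
card_key = b"victim-card-secret"
terminal = WeakTerminal()
predicted_un = terminal.counter + 1    # trivially predictable
harvested = card_mac(card_key, "100.00", "2012-09-11", predicted_un)

# Later, at the real terminal, the replayed code matches exactly what
# the genuine card would have produced.
actual_un = terminal.next_un()
genuine = card_mac(card_key, "100.00", "2012-09-11", actual_un)
assert harvested == genuine   # the issuer's logs cannot tell them apart
```

The second, protocol-level flaw needs no weak generator at all: in that variant the attacker simply substitutes a previously used number for the terminal's genuinely random one, which this sketch would model by replacing `actual_un` with `predicted_un` in transit.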
  • Capability Hardware Enhanced RISC Instructions: CHERI Instruction-Set Architecture
    Robert N.M. Watson, Peter G. Neumann, Jonathan Woodruff, Jonathan Anderson, David Chisnall, Brooks Davis, Ben Laurie, Simon W. Moore, Steven J. Murdoch, Michael Roe
    This document describes the rapidly maturing design for the Capability Hardware Enhanced RISC Instructions (CHERI) Instruction-Set Architecture (ISA), which is being developed by SRI International and the University of Cambridge. The document is intended to capture our evolving architecture, as it is being refined, tested, and formally analyzed. We are now roughly 70% of the way through our research and development cycle.
    CHERI is a hybrid capability-system architecture that combines new processor primitives with the commodity 64-bit RISC ISA, enabling software to efficiently implement fine-grained memory protection and a hardware-software object-capability security model. These extensions support incrementally adoptable, high-performance, formally based, programmer-friendly underpinnings for fine-grained software decomposition and compartmentalization, motivated by and capable of enforcing the principle of least privilege. The CHERI system architecture purposefully addresses known performance and robustness gaps in commodity ISAs that hinder the adoption of more secure programming models centered around the principle of least privilege. To this end, CHERI blends traditional paged virtual memory with a per-address-space capability model that includes capability registers, capability instructions, and tagged memory that have been added to the 64-bit MIPS ISA via a new capability coprocessor.
    CHERI’s hybrid approach, inspired by the Capsicum security model, allows incremental adoption of capability-oriented software design: software implementations that are more robust and resilient can be deployed where they are most needed, while leaving less critical software largely unmodified, but nevertheless suitably constrained to be incapable of having adverse effects. For example, we are focusing conversion efforts on low-level TCB components of the system: separation kernels, hypervisors, operating system kernels, language runtimes, and userspace TCBs such as web browsers. Likewise, we see early-use scenarios (such as data compression, image processing, and video processing) that relate to particularly high-risk software libraries, which are concentrations of both complex and historically vulnerability-prone code combined with untrustworthy data sources, while leaving containing applications unchanged.
    This report describes the CHERI architecture and design, and provides reference documentation for the CHERI instruction-set architecture (ISA) and potential memory models, along with their requirements. It also documents our current thinking on integration of programming languages and operating systems. Our ongoing research includes two prototype processors employing the CHERI ISA, each implemented as an FPGA soft core specified in the Bluespec hardware description language (HDL), for which we have integrated the application of formal methods to the Bluespec specifications and the hardware-software implementation.
    Technical Report UCAM-CL-TR-850, University of Cambridge, Computer Laboratory, April 2014. [ paper ]
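    The capability model the report describes can be approximated in a short software model (illustrative only: the real ISA enforces these checks in hardware with tagged registers and memory, and the class and permission names below are invented for this sketch). The two properties shown are bounds checking on every access and monotonicity: a derived capability can never hold more rights than its parent.

```python
class CapabilityException(Exception):
    """Stands in for the hardware trap on a capability violation."""

class Capability:
    def __init__(self, base, length, perms):
        self.base, self.length, self.perms = base, length, perms

    def restrict(self, offset, length, perms):
        """Derive a narrower capability; rights may only shrink."""
        if offset + length > self.length or not perms <= self.perms:
            raise CapabilityException("cannot broaden a capability")
        return Capability(self.base + offset, length, perms)

class Memory:
    def __init__(self, size):
        self.data = bytearray(size)

    def load(self, cap, offset):
        if "load" not in cap.perms:
            raise CapabilityException("no load permission")
        if not 0 <= offset < cap.length:
            raise CapabilityException("out of bounds")
        return self.data[cap.base + offset]

    def store(self, cap, offset, value):
        if "store" not in cap.perms:
            raise CapabilityException("no store permission")
        if not 0 <= offset < cap.length:
            raise CapabilityException("out of bounds")
        self.data[cap.base + offset] = value

mem = Memory(1024)
root = Capability(0, 1024, {"load", "store"})
buf = root.restrict(128, 16, {"load", "store"})   # a 16-byte buffer
mem.store(buf, 0, 0xAB)
assert mem.load(buf, 0) == 0xAB
try:
    mem.load(buf, 16)   # one byte past the end: trapped, not silently read
except CapabilityException:
    pass
```

A C-language buffer overflow in this model becomes a trap rather than a silent corruption, which is the memory-protection half of the report; the object-capability compartmentalization builds further mechanisms on these same primitives.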
  • Capability Hardware Enhanced RISC Instructions: CHERI User’s Guide
    Robert N.M. Watson, David Chisnall, Brooks Davis, Wojciech Koszek, Simon W. Moore, Steven J. Murdoch, Peter G. Neumann, Jonathan Woodruff
    The CHERI User’s Guide documents the software environment for the Capability Hardware Enhanced RISC Instructions (CHERI) prototype developed by SRI International and the University of Cambridge. The User’s Guide is targeted at hardware and software developers working with capability-enhanced software. It describes the CheriBSD operating system, a version of the FreeBSD operating system that has been adapted to support userspace capability systems via the CHERI ISA, and the CHERI Clang/LLVM compiler suite. It also describes the earlier Deimos demonstration microkernel.
    Technical Report UCAM-CL-TR-851, University of Cambridge, Computer Laboratory, April 2014. [ paper ]
  • Bluespec Extensible RISC Implementation: BERI Hardware Reference
    Robert N.M. Watson, Jonathan Woodruff, David Chisnall, Brooks Davis, Wojciech Koszek, A. Theodore Markettos, Simon W. Moore, Steven J. Murdoch, Peter G. Neumann, Robert Norton, Michael Roe
    The BERI Hardware Reference documents the Bluespec Extensible RISC Implementation (BERI) developed by SRI International and the University of Cambridge. The reference is targeted at hardware and software developers working with the BERI1 and BERI2 processor prototypes in simulation and synthesized to FPGA targets. We describe how to use the BERI1 and BERI2 processors in simulation, the BERI1 debug unit, the BERI unit-test suite, how to use BERI with Altera FPGAs and Terasic DE4 boards, the 64-bit MIPS and CHERI ISAs implemented by the prototypes, the BERI1 and BERI2 processor implementations themselves, and the BERI Programmable Interrupt Controller (PIC).
    Technical Report UCAM-CL-TR-852, University of Cambridge, Computer Laboratory, April 2014. [ paper ]
  • Bluespec Extensible RISC Implementation: BERI Software Reference
    Robert N.M. Watson, David Chisnall, Brooks Davis, Wojciech Koszek, Simon W. Moore, Steven J. Murdoch, Peter G. Neumann, Jonathan Woodruff
    The BERI Software Reference documents how to build and use FreeBSD on the Bluespec Extensible RISC Implementation (BERI) developed by SRI International and the University of Cambridge. The reference is targeted at hardware and software programmers who will work with BERI or BERI-derived systems.
    Technical Report UCAM-CL-TR-853, University of Cambridge, Computer Laboratory, April 2014. [ paper ]
  • Security Protocols and Evidence: Where Many Payment Systems Fail
    Steven J. Murdoch, Ross Anderson
    As security protocols are used to authenticate more transactions, they end up being relied on in legal proceedings. Designers often fail to anticipate this. Here we show how the EMV protocol – the dominant card payment system worldwide – does not produce adequate evidence for resolving disputes. We propose five principles for designing systems to produce robust evidence. We apply these to other systems such as Bitcoin, electronic banking and phone payment apps. We finally propose specific modifications to EMV that could allow disputes to be resolved more efficiently and fairly.
    Financial Cryptography and Data Security, Barbados, 03–07 March 2014. [ paper | slides ]
  • Quantifying and Measuring Anonymity
    Steven J. Murdoch
    The design of anonymous communication systems is a relatively new field, but the desire to quantify the security these systems offer has been an important topic of research since its beginning. In recent years, anonymous communication systems have evolved from obscure tools used by specialists to mass-market software used by millions of people. In many cases the users of these tools are depending on the anonymity offered to protect their liberty, or more. As such, it is of critical importance that not only can we quantify the anonymity these tools offer, but that the metrics used represent realistic expectations, can be communicated clearly, and the implementations actually offer the anonymity they promise. This paper will discuss how metrics, and the techniques used to measure them, have been developed for anonymous communication tools including low-latency networks and high-latency email systems.
    Data Privacy Management and Autonomous Spontaneous Security with International Workshop on Quantitative Aspects in Security Assurance, Egham, UK, 12–13 September 2013. Published in LNCS 8247, Springer-Verlag. Keynote talk and invited paper. [ paper | slides ]
  • No magic formula
    Steven J. Murdoch
    Without a commitment to transparency and solid knowledge about how the internet works, the UK government is hampering online freedom. It’s time to overhaul the system.
    Index on Censorship, Volume 42, Issue 2, pages 136–139, June 2013. [ article | DOI link ]
  • Internet Censorship and Control
    Steven J. Murdoch, Hal Roberts (eds.)
    The Internet is and has always been a space where participants battle for control. The two core protocols that define the Internet – TCP and IP – are both designed to allow separate networks to connect to each other easily, so that networks that differ not only in hardware implementation (wired vs. satellite vs. radio networks) but also in their politics of control (consumer vs. research vs. military networks) can interoperate easily. It is a feature of the Internet, not a bug, that China – with its extensive, explicit censorship infrastructure – can interact with the rest of the Internet.
    IEEE Internet Computing, Volume 17, Number 3, May 2013. [ open access | introduction ]
  • Towards a Theory of Application Compartmentalisation
    Robert N.M. Watson, Steven J. Murdoch, Khilan Gudka, Jonathan Anderson, Peter G. Neumann, Ben Laurie
    Application compartmentalisation decomposes software applications into sandboxed components, each delegated only the rights it requires to operate. Compartmentalisation is seeing increased deployment in vulnerability mitigation, motivated informally by appeal to the principle of least privilege. Drawing a comparison with capability systems, we consider how a distributed system interpretation supports an argument that compartmentalisation improves application security.
    Twenty-first International Workshop on Security Protocols, Cambridge, UK, 19–20 March 2013. Published in LNCS 8263, Springer-Verlag. [ paper | DOI 10.1007/978-3-642-41717-7_4 ]
  • How Certification Systems Fail: Lessons from the Ware Report
    Steven J. Murdoch, Mike Bond, Ross Anderson
    The heritage of most security certification standards in the banking industry can be traced back to a 1970 report by a task force operating under the auspices of the US Department of Defense. Since then, standards have changed, both in their approach and scope, but what lessons can we learn from the original work?
    IEEE Security and Privacy, Volume 10, Number 6, pages 40–44, November–December 2012. [ accepted version | DOI link to edited version ]
  • Chip and Skim: cloning EMV cards with the pre-play attack
    Mike Bond, Omar Choudary, Steven J. Murdoch, Sergei Skorobogatov, Ross Anderson
    EMV, also known as “Chip and PIN”, is the leading system for card payments worldwide. It is used throughout Europe and much of Asia, and is starting to be introduced in North America too. Payment cards contain a chip so they can execute an authentication protocol. This protocol requires point-of-sale (POS) terminals or ATMs to generate a nonce, called the unpredictable number, for each transaction to ensure it is fresh. We have discovered that some EMV implementers have merely used counters, timestamps or home-grown algorithms to supply this number. This exposes them to a “pre-play” attack which is indistinguishable from card cloning from the standpoint of the logs available to the card-issuing bank, and can be carried out even if it is impossible to clone a card physically (in the sense of extracting the key material and loading it into another card). Card cloning is the very type of fraud that EMV was supposed to prevent. We describe how we detected the vulnerability, a survey methodology we developed to chart the scope of the weakness, evidence from ATM and terminal experiments in the field, and our implementation of proof-of-concept attacks. We found flaws in widely-used ATMs from the largest manufacturers. We can now explain at least some of the increasing number of frauds in which victims are refused refunds by banks which claim that EMV cards cannot be cloned and that a customer involved in a dispute must therefore be mistaken or complicit. Pre-play attacks may also be carried out by malware in an ATM or POS terminal, or by a man-in-the-middle between the terminal and the acquirer. We explore the design and implementation mistakes that enabled the flaw to evade detection until now: shortcomings of the EMV specification, of the EMV kernel certification process, of implementation testing, formal analysis, or monitoring customer complaints. Finally we discuss countermeasures.
    Accompanying invited talk at CHES 2012 (arXiv:1209.2531), Leuven, Belgium, 11 September 2012. [ paper ]
  • CHERI: a research platform deconflating hardware virtualization and protection
    Robert N.M. Watson, Peter G. Neumann, Jonathan Woodruff, Jonathan Anderson, Ross Anderson, Nirav Dave, Ben Laurie, Simon W. Moore, Steven J. Murdoch, Philip Paeps, Michael Roe, Hassen Saidi
    Contemporary CPU architectures conflate virtualization and protection, imposing virtualization-related performance, programmability, and debuggability penalties on software requiring fine-grained protection. First observed in micro-kernel research, these problems are increasingly apparent in recent attempts to mitigate software vulnerabilities through application compartmentalisation. Capability Hardware Enhanced RISC Instructions (CHERI) extend RISC ISAs to support greater software compartmentalisation. CHERI’s hybrid capability model provides fine-grained compartmentalisation within address spaces while maintaining software backward compatibility, which will allow the incremental deployment of fine-grained compartmentalisation in both our most trusted and least trustworthy C-language software stacks. We have implemented a 64-bit MIPS research soft core, BERI, as well as a capability coprocessor, and begun adapting commodity software packages (FreeBSD and Chromium) to execute on the platform.
    Runtime Environments, Systems, Layering and Virtualized Environments (RESoLVE'12), 03 March 2012. [ paper | slides ]
  • Wall 2.0
    Steven J. Murdoch
    The “Great Firewall of China” inherited its name (and technology) from network firewall products, designed to protect a company from attackers on the Internet. Physical firewalls are designed to protect a building from the spread of fire, network firewalls are designed to protect the controlled corporate environment from the more chaotic Internet, and the Great Wall of China was designed to protect from outside invaders. The analogy is clear, but can be misleading – Internet censorship is different in many ways to physical walls.
    The European, 13 August 2011. [ article (English and German) | original (German) ]
  • Might Financial Cryptography Kill Financial Innovation? – The Curious Case of EMV
    Ross Anderson, Mike Bond, Omar Choudary, Steven J. Murdoch, Frank Stajano
    The credit card system has been one of the world’s great successes because of its adaptability. By the mid-1990s, a credit card had become a mechanism for authenticating a transaction by presenting a username (the card number) and a password (the expiry date, plus often a CVV) that was already used in mail order and could be adapted with little fuss to the Internet. Now banks in Europe, and increasingly elsewhere, have moved to the EMV “Chip and PIN” system which uses not just smart cards but also “trusted” hardware. The cryptography supported by this equipment has made some kinds of fraud much rarer – although other kinds have increased, and the jury is still out on the net effect. In the USA in particular, some banks and others oppose EMV on the grounds that it will damage innovation to move to a monolithic and inflexible system.
    We discuss the effects that cryptographic lock-down might have on competition and innovation. We predict that EMV will be adapted to use cards as keys; we have found, for example, that the DDA signature can be used by third parties and expect this to be used when customers use a card to retrieve already-purchased goods such as air tickets. This will stop forged credit cards being used to board airplanes.
    We also investigate whether EMV can be adapted to move towards a world in which people can use bank cards plus commodity consumer electronics to make and accept payments. Can the EMV payment ecology be made more open and competitive, or will it have to be replaced? We have already seen EMV adapted to the CAP system; this was possible because only one bank, the card issuer, had to change its software. It seems the key to innovation is whether its benefits can be made sufficiently local and incremental. We therefore explore whether EMV can be adapted to peer-to-peer payments by making changes solely to the acquirer systems. Finally, we discuss the broader issue of how cryptographic protocols can be made extensible. How can the protocol designer steer between the Scylla of the competition authorities and the Charybdis of the chosen protocol attack?
    Financial Cryptography and Data Security, St Lucia, 28 February–04 March 2011. [ paper ]
  • Impact of Network Topology on Anonymity and Overhead in Low-Latency Anonymity Networks
    Claudia Diaz, Steven J. Murdoch, Carmela Troncoso
    Low-latency anonymous communication networks require padding to resist timing analysis attacks, and dependent link padding has been proven to prevent these attacks with minimal overhead. In this paper we consider low-latency anonymity networks that implement dependent link padding, and examine various network topologies. We find that the choice of the topology has an important influence on the padding overhead and the level of anonymity provided, and that Stratified networks offer the best trade-off between them. We show that fully connected network topologies (Free Routes) are impractical when dependent link padding is used, as they suffer from feedback effects that induce disproportionate amounts of padding; and that Cascade topologies have the lowest padding overhead at the cost of poor scalability with respect to anonymity. Furthermore, we propose a variant of dependent link padding that considerably reduces the overhead at no loss in anonymity with respect to external adversaries. Finally, we discuss how Tor, a deployed large-scale anonymity network, would need to be adapted to support dependent link padding.
    10th Privacy Enhancing Technologies Symposium (PETS 2010), Berlin, Germany, 21–23 July 2010. [ paper | slides ]
  • Destructive Activism: The Double-Edged Sword of Digital Tactics
    Steven J. Murdoch
    So far this book has viewed the empowerment of citizens through digital means as largely positive. However, the ability of the Internet to share information, coordinate action, and launch transnational campaigns can also be used for destructive ends. This chapter describes how some of the tactics adopted by digital activists have been used to disrupt communications, deface or destroy virtual property, organize malicious actions offline, and publish personal information or disinformation. Actions that cause physical harm to human beings or endanger property have yet to be engaged as a tactic of activism, but this chapter will describe how other groups have taken this route. We address physical harm in this chapter because it represents the next frontier of destructive digital activism. We often view digital activism as a series of positive practices that have the power to remedy injustice. However, digital tools—and the very infrastructure of the Internet—are value neutral and can be used for a variety of activities. The tools and practices can thus be seen as a double-edged sword to be used constructively or destructively. This dual nature raises ethical questions that I will address at the end of the chapter.
    In Digital Activism Decoded: The New Mechanics of Change, Mary Joyce, ed., (New York: iDebate Press), 2010. [ chapter | full book | book website | buy from Amazon UK | buy from Amazon US ]
  • Chip and PIN is Broken
    Steven J. Murdoch, Saar Drimer, Ross Anderson, Mike Bond
    EMV is the dominant protocol used for smart card payments worldwide, with over 730 million cards in circulation. Known to bank customers as “Chip and PIN”, it is used in Europe; it is being introduced in Canada; and there is pressure from banks to introduce it in the USA too. EMV secures credit and debit card transactions by authenticating both the card and the customer presenting it through a combination of cryptographic authentication codes, digital signatures, and the entry of a PIN. In this paper we describe and demonstrate a protocol flaw which allows criminals to use a genuine card to make a payment without knowing the card’s PIN, and to remain undetected even when the merchant has an online connection to the banking network. The fraudster performs a man-in-the-middle attack to trick the terminal into believing the PIN verified correctly, while telling the issuing bank that no PIN was entered at all. The paper considers how the flaws arose, why they remained unknown despite EMV’s wide deployment for the best part of a decade, and how they might be fixed. Because we have found and validated a practical attack against the core functionality of EMV, we conclude that the protocol is broken. This failure is significant in the field of protocol design, and also has important public policy implications, in light of growing reports of fraud on stolen EMV cards. Frequently, banks deny such fraud victims a refund, asserting that a card cannot be used without the correct PIN, and concluding that the customer must be grossly negligent or lying. Our attack can explain a number of these cases, and exposes the need for further research to bridge the gap between the theoretical and practical security of bank payment systems.
    2010 IEEE Symposium on Security and Privacy, Oakland, CA, US, 16–19 May 2010. Awarded outstanding paper award by IEEE Security & Privacy Magazine. [ paper | slides | slides (PDF) | FAQ | video | poster ]
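    The man-in-the-middle trick in this abstract can be sketched in a few lines (a toy model only: real EMV exchanges are APDUs, the 0x9000 status word is the genuine "PIN verified" response, but the class names and PIN value here are invented for illustration). The core of the attack is that the wedge answers the terminal's PIN verification itself, so the terminal believes the PIN checked out while the card believes no PIN was ever attempted:

```python
class Card:
    """Genuine card: verifies the PIN and remembers that it was asked."""
    def __init__(self, pin):
        self._pin = pin
        self.pin_tried = False

    def verify_pin(self, pin):
        self.pin_tried = True
        # 0x9000 = success; 0x63Cx = wrong PIN, x tries remaining
        return b"\x90\x00" if pin == self._pin else b"\x63\xc2"

class MitmCard:
    """Wedge between terminal and genuine card: swallows the VERIFY
    command and answers 0x9000, so any PIN appears to verify."""
    def __init__(self, card):
        self.card = card

    def verify_pin(self, pin):
        return b"\x90\x00"   # never forwarded to the real card

genuine = Card(pin="4931")
wedge = MitmCard(genuine)

# Terminal's view: the (wrong) PIN verified correctly.
assert wedge.verify_pin("0000") == b"\x90\x00"
# Card's view: no PIN was entered at all, so it raises no alarm and
# reports a non-PIN verification method to the issuing bank.
assert genuine.pin_tried is False
```

The inconsistency between these two views is exactly what the paper exploits: neither party sees anything anomalous on its own, and only cross-checking the terminal's and the card's records would reveal the fraud.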
  • Verified by Visa and MasterCard SecureCode: or, How Not to Design Authentication
    Steven J. Murdoch, Ross Anderson
    Banks worldwide are starting to authenticate online card transactions using the ‘3-D Secure’ protocol, which is branded as Verified by Visa and MasterCard SecureCode. This has been partly driven by the sharp increase in online fraud that followed the deployment of EMV smart cards for cardholder-present payments in Europe and elsewhere. 3-D Secure has so far escaped academic scrutiny; yet it might be a textbook example of how not to design an authentication protocol. It ignores good design principles and has significant vulnerabilities, some of which are already being exploited. Also, it provides a fascinating lesson in security economics. While other single sign-on schemes such as OpenID, InfoCard and Liberty came up with decent technology they got the economics wrong, and their schemes have not been adopted. 3-D Secure has lousy technology, but got the economics right (at least for banks and merchants); it now boasts hundreds of millions of accounts. We suggest a path towards more robust authentication that is technologically sound and where the economics would work for banks, merchants and customers – given a gentle regulatory nudge.
    Financial Cryptography and Data Security, Tenerife, Canary Islands, 25–28 January 2010. [ paper ]
  • A Case Study on Measuring Statistical Data in the Tor Anonymity Network
    Karsten Loesing, Steven J. Murdoch, Roger Dingledine
    The Tor network is one of the largest deployed anonymity networks, consisting of 1500+ volunteer-run relays and probably hundreds of thousands of clients connecting every day. Its large user-base has made it attractive for researchers to analyze usage of a real deployed anonymity network. The recent growth of the network has also led to performance problems, as well as attempts by some governments to block access to the Tor network. Investigating these performance problems and learning about network blocking is best done by measuring usage data of the Tor network. However, analyzing a live anonymity system must be performed with great care, so that the users’ privacy is not put at risk. In this paper we present a case study of measuring two different types of sensitive data in the Tor network: countries of connecting clients, and exiting traffic by port. Based on these examples we derive general guidelines for safely measuring potentially sensitive data, both in the Tor network and in other anonymity networks.
    Workshop on Ethics in Computer Security Research, Tenerife, Canary Islands, 28 January 2010. [ paper ]
  • Reliability of Chip & PIN evidence in banking disputes
    Steven J. Murdoch
    Smart cards are being increasingly used for payment, having been issued across most of Europe, and they are in the process of being implemented elsewhere. These systems are almost exclusively based on a global standard – EMV (named after its designers: Europay, Mastercard, Visa) – and commonly known as Chip & PIN in the United Kingdom. Consequently, the reliability of the Chip & PIN system, and the evidence it generates, has been an increasingly important aspect of disputes between banks and their customers. A common simplification made by banks when deciding whether to refund a disputed transaction is the assertion that cloned smart cards will be detected, and that the correct PIN must be entered for a transaction to succeed. The reality is more complex, so it can be difficult to distinguish between customer fraud, a third party criminal attack, and customer negligence. This article will discuss the situations which may cause disputed transactions to arise, what may be inferred from the evidence, and the effect of this on banking disputes.
    Digital Evidence and Electronic Signature Law Review, Volume 6, pages 98–115, ISSN 1756-4611, 2009. [ article | alternative link ]
  • Failures of Tamper-Proofing in PIN Entry Devices
    Saar Drimer, Steven J. Murdoch, Ross Anderson
    Bank customers are forced to rely on PIN entry devices in stores and bank branches to protect account details. The authors examined two market-leading devices and found them easy to compromise owing to both their design and the processes used to certify them as secure.
    IEEE Security and Privacy, Volume 7, Number 6, pages 39–45, November–December 2009. [ article | DOI link ]
  • Optimised to fail: Card readers for online banking
    Saar Drimer, Steven J. Murdoch, Ross Anderson
    The Chip Authentication Programme (CAP) has been introduced by banks in Europe to deal with the soaring losses due to online banking fraud. A handheld reader is used together with the customer's debit card to generate one-time codes for both login and transaction authentication. The CAP protocol is not public, and was rolled out without any public scrutiny. We reverse engineered the UK variant of card readers and smart cards and here provide the first public description of the protocol. We found numerous weaknesses that are due to design errors such as reusing authentication tokens, overloading data semantics, and failing to ensure freshness of responses. The overall strategic error was excessive optimisation. There are also policy implications. The move from signature to PIN for authorising point-of-sale transactions shifted liability from banks to customers; CAP introduces the same problem for online banking. It may also expose customers to physical harm.
    Financial Cryptography and Data Security, Rockley, Barbados, 23–26 February 2009. [ paper | slides ]
  • An Improved Clock-skew Measurement Technique for Revealing Hidden Services
    Sebastian Zander, Steven J. Murdoch
    The Tor anonymisation network allows services, such as web servers, to be operated under a pseudonym. In previous work Murdoch described a novel attack to reveal such hidden services by correlating clock skew changes with times of increased load, and hence temperature. Clock skew measurement suffers from two main sources of noise: network jitter and timestamp quantisation error. Depending on the target’s clock frequency the quantisation noise can be orders of magnitude larger than the noise caused by typical network jitter. Quantisation noise limits the previous attacks to situations where a high frequency clock is available. It has been hypothesised that by synchronising measurements to the clock ticks, quantisation noise can be reduced. We show how such synchronisation can be achieved and maintained, despite network jitter. Our experiments show that synchronised sampling significantly reduces the quantisation error and the remaining noise only depends on the network jitter (but not clock frequency). Our improved skew estimates are up to two orders of magnitude more accurate for low-resolution timestamps and up to one order of magnitude more accurate for high-resolution timestamps, when compared to previous random sampling techniques. The improved accuracy not only allows previous attacks to be executed faster and with less network traffic but also opens the door to previously infeasible attacks on low-resolution clocks, including measuring the skew of an HTTP server over the anonymous channel.
    17th USENIX Security Symposium, San Jose, CA, USA, 28 July–01 August 2008. [ paper | slides ]
  • Tools and Technology of Internet Filtering
    Steven J. Murdoch, Ross Anderson
    In 2008 the OpenNet Initiative published the results of their survey of global Internet filtering. This chapter gives an introduction to the concepts and technologies needed to better appreciate the results presented in the rest of the book. A short Internet primer is followed by a description of the different approaches to filtering, and their various advantages and disadvantages. Finally, the role of filtering within a more general censorship regime is discussed.
    The full text of the other introductory chapters is available on the book website. Also available are the results of the survey itself.
    In Access Denied: The Practice and Policy of Global Internet Filtering, Ronald Deibert, John Palfrey, Rafal Rohozinski, Jonathan Zittrain, eds., (Cambridge: MIT Press), 2008. [ chapter | buy from Amazon UK | buy from Amazon US ]
  • Metrics for Security and Performance in Low-Latency Anonymity Systems
    Steven J. Murdoch, Robert N.M. Watson
    In this paper we explore the tradeoffs between security and performance in anonymity networks such as Tor. Using probability of path compromise as a measure of security, we explore the behaviour of various path selection algorithms with a Tor path simulator. We demonstrate that assumptions about the relative expense of IP addresses and cheapness of bandwidth break down if attackers are allowed to purchase access to botnets, giving plentiful IP addresses, but each with relatively poor symmetric bandwidth. We further propose that the expected latency of data sent through a network is a useful performance metric, show how it may be calculated, and demonstrate the counter-intuitive result that Tor's current path selection scheme, designed for performance, both performs well and is good for anonymity in the presence of a botnet-based adversary.
    8th Privacy Enhancing Technologies Symposium (PETS 2008), Leuven, Belgium, 23–25 July 2008. [ paper | slides ]
  • On the Origins of a Thesis
    Steven J. Murdoch
    A PhD thesis typically reads as an idealised narrative: how the author would have performed their research had the results and conclusions been known in advance. This rarely occurs in practice. Failed experiments, unexpected results, and new collaborations frequently change the course of research. This paper describes the course of my thesis, and how its initial topic of distributed databases changed to covert channels, then anonymity, before eventually settling on links between the two. This illustrates concrete benefits from informal interactions, low-overhead collaboration, and flexibility of research project plans.
    International Workshop on Security and Trust Management (keynote), Trondheim, Norway, 16–17 June 2008. [ paper | slides ]
  • Thinking Inside the Box: System-level Failures of Tamper Proofing
    Saar Drimer, Steven J. Murdoch, Ross Anderson
    PIN entry devices (PEDs) are critical security components in EMV smartcard payment systems as they receive a customer's card and PIN. Their approval is subject to an extensive suite of evaluation and certification procedures. In this paper, we demonstrate that the tamper proofing of PEDs is unsatisfactory, as is the certification process. We have implemented practical low-cost attacks on two certified, widely-deployed PEDs – the Ingenico i3300 and the Dione Xtreme. By tapping inadequately protected smartcard communications, an attacker with basic technical skills can expose card details and PINs, leaving cardholders open to fraud. We analyze the anti-tampering mechanisms of the two PEDs and show that, while the specific protection measures mostly work as intended, critical vulnerabilities arise because of the poor integration of cryptographic, physical and procedural protection. As these vulnerabilities illustrate a systematic failure in the design process, we propose a methodology for doing it better in the future. These failures also demonstrate a serious problem with the Common Criteria. So we discuss the incentive structures of the certification process, and show how they can lead to problems of the kind we identified. Finally, we recommend changes to the Common Criteria framework in light of the lessons learned.
    2008 IEEE Symposium on Security and Privacy, Oakland, CA, US, 18–21 May 2008. Awarded the outstanding paper award by IEEE Security & Privacy Magazine. [ paper | slides | extended technical report – UCAM-CL-TR-711 | further information – videos, letters from vendors, FAQ ]
  • Hardened Stateless Session Cookies
    Steven J. Murdoch
    Stateless session cookies allow web applications to alter their behaviour based on user preferences and access rights, without maintaining server-side state for each session. This is desirable because it reduces the impact of denial of service attacks and eases database replication issues in load-balanced environments. The security of existing session cookie proposals depends on the server protecting the secrecy of a symmetric MAC key, which for engineering reasons is usually stored in a database, and thus at risk of accidental leakage or disclosure via application vulnerabilities. In this paper we show that by including a salted iterated hash of the user password in the database, and its pre-image in a session cookie, an attacker with read access to the server is unable to spoof an authenticated session. Even with knowledge of the server’s MAC key the attacker needs a user’s password, which is not stored on the server, to create a valid cookie. By extending an existing session cookie scheme, we maintain all the previous security guarantees, but also preserve security under partial compromise.
    Sixteenth International Workshop on Security Protocols, Cambridge, UK, 16–18 April 2008. [ paper | slides ]
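    The construction summarised above can be illustrated with a minimal sketch (hypothetical code, not the paper's scheme in detail: the choice of SHA-256, the round count, and the cookie layout here are my own arbitrary assumptions). The server stores only the hash of a value whose pre-image lives in the cookie, so a database read, even combined with the MAC key, is not enough to forge a session.

```python
import hashlib
import hmac
import os

def iterated_hash(data: bytes, salt: bytes, rounds: int) -> bytes:
    """Apply SHA-256 repeatedly, starting from salt || data."""
    digest = salt + data
    for _ in range(rounds):
        digest = hashlib.sha256(digest).digest()
    return digest

MAC_KEY = os.urandom(32)   # server-side MAC key; assumed leakable
SALT = os.urandom(16)
ROUNDS = 10_000

# Enrolment: derive the cookie pre-image from the password; the database
# stores only one further hash of it (the value an attacker could read).
password = b"correct horse battery staple"
preimage = iterated_hash(password, SALT, ROUNDS - 1)
db_record = hashlib.sha256(preimage).digest()

def make_cookie(user: str, preimage: bytes) -> bytes:
    payload = user.encode() + b"|" + preimage.hex().encode()
    tag = hmac.new(MAC_KEY, payload, hashlib.sha256).digest()
    return payload + b"|" + tag.hex().encode()

def verify_cookie(cookie: bytes) -> bool:
    payload, _, tag_hex = cookie.rpartition(b"|")
    tag = hmac.new(MAC_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag.hex().encode(), tag_hex):
        return False
    # The decisive check: hashing the cookie's pre-image must reproduce
    # the stored record. Knowing db_record alone does not yield preimage.
    _, _, preimage_hex = payload.partition(b"|")
    return hashlib.sha256(bytes.fromhex(preimage_hex.decode())).digest() == db_record

cookie = make_cookie("alice", preimage)
assert verify_cookie(cookie)
```

A forger holding the database and MAC key can mint a well-MACed cookie, but without the user's password cannot supply a pre-image of the stored hash, so verification fails at the last check.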
  • Shifting Borders
    Steven J. Murdoch, Ross Anderson
    In A Declaration of the Independence of Cyberspace, John Perry Barlow called for communities built around the Internet to be independent of national governments and borders: a Utopian ideal that has failed to materialise. The Internet does have borders, for similar reasons that national boundaries exist: they ease administration, permit collective defence and can be founded in culture.
    While it is true that Internet borders do not have to be the same as political boundaries, the two have naturally mirrored each other. This is hardly a surprise since the Internet was built on the infrastructure of telecommunications companies, often controlled or regulated by nation states.
    Index on Censorship, Volume 36, Issue 4, pages 156–159, November 2007. [ article | DOI link ]
  • Covert channel vulnerabilities in anonymity systems
    Steven J. Murdoch
    The spread of wide-scale Internet surveillance has spurred interest in anonymity systems that protect users' privacy by restricting unauthorised access to their identity. This requirement can be considered as a flow control policy in the well established field of multilevel secure systems. I apply previous research on covert channels (unintended means to communicate in violation of a security policy) to analyse several anonymity systems in an innovative way. This thesis demonstrates how theoretical models and generic methodologies relating to covert channels may be applied to find practical solutions to problems in real-world anonymity systems. These findings confirm the existing hypothesis that covert channel analysis, vulnerabilities and defences developed for multilevel secure systems apply equally well to anonymity systems.
    PhD thesis, Technical Report UCAM-CL-TR-706, University of Cambridge, Computer Laboratory, December 2007. Awarded prize for best PhD thesis by ERCIM security and trust management working group. [ thesis ]
  • Keep Your Enemies Close: Distance Bounding Against Smartcard Relay Attacks
    Saar Drimer, Steven J. Murdoch
    Modern smartcards, capable of sophisticated cryptography, provide a high assurance of tamper resistance and are thus commonly used in payment applications. Although extracting secrets out of smartcards requires resources beyond the means of many would-be thieves, the manner in which they are used can be exploited for fraud. Cardholders authorize financial transactions by presenting the card and disclosing a PIN to a terminal without any assurance as to the amount being charged or who is to be paid, and have no means of discerning whether the terminal is authentic or not. Even the most advanced smartcards cannot protect customers from being defrauded by the simple relaying of data from one location to another. We describe the development of such an attack, and show results from live experiments on the UK's EMV implementation, Chip & PIN. We discuss previously proposed defences, and show that these cannot provide the required security assurances. A new defence based on a distance bounding protocol is described and implemented, which requires only modest alterations to current hardware and software. As far as we are aware, this is the first complete design and implementation of a secure distance bounding protocol. Future smartcard generations could use this design to provide cost-effective resistance to relay attacks, which are a genuine threat to deployed applications. We also discuss the security-economics impact to customers of enhanced authentication mechanisms.
    16th USENIX Security Symposium, Boston, MA, USA, 06–10 August 2007. Awarded prize for best student paper at USENIX Security 2007. [ paper ]
  • Securing Network Location Awareness with Authenticated DHCP
    Tuomas Aura, Michael Roe, Steven J. Murdoch
    Network location awareness (NLA) enables mobile computers to recognize home, work and public networks and wireless hotspots and to behave differently at different locations. The location information is used to change security settings such as firewall rules. Current NLA mechanisms, however, do not provide authenticated location information on all networks. This paper describes a novel mechanism, based on public-key authentication of DHCP servers, for securing NLA at home networks and wireless hotspots. The main contributions of the paper are the requirements analysis, a naming and authorization scheme for network locations, and the extremely simple protocol design. The mobile computer can remember and recognize previously visited networks securely even when there is no PKI available. This is critical because we do not expect the majority of small networks to obtain public-key certificates. The protocol also allows a network administrator to pool multiple, heterogeneous access links, such as a campus network, to one logical network identity. Another major requirement for the protocol was that it must not leak information about the mobile host's identity or affiliation. The authenticated location information can be used to minimize attack surface on the mobile host by making security-policy exceptions specific to a network location.
    3rd International Conference on Security and Privacy in Communication Networks (SecureComm), Nice, France, 17–20 September 2007. [ paper ]
  • Dynamic Host Configuration Protocol
    Tuomas Aura, Michael Roe, Steven J. Murdoch
    Dynamic host configuration protocol (DHCP) is extended in order to assist with secure network location awareness. In an embodiment a DHCP client receives a signed DHCP response message from a DHCP server, the signed message comprising at least a certificate chain having a public key. In that embodiment the DHCP client validates the certificate chain and verifies the signature of the signed message. If this is successful the DHCP client accesses stored settings for use with the server. The stored settings are accessed at least using information about the public key. In some embodiments signed DHCPOFFER messages and signed DHCPACK messages are used. In another embodiment the signed DHCP message comprises a location identifier which is, for example, a domain name system (DNS) suffix of a DHCP server.
    United States Patent, US 8239549 B2, 12 September 2007. Also published as applications US2009/0070474, WO2009/035829A1. [ patent ]
  • Secure Network Location Awareness
    Tuomas Aura, Michael Roe, Steven J. Murdoch
    Secure network location awareness is provided whereby a client is able to use appropriate settings when communicating with an access node of a communications network. In an embodiment a client receives a signed message from the access node, the signed message comprising at least a certificate chain having a public key. In some embodiments the certificate chain may be only a self-signed certificate and in other embodiments the certificate chain is two or more certificates in length. The client validates the certificate chain and verifies the signature of the signed message. If this is successful the client accesses stored settings for use with the access node. The stored settings are accessed at least using information about the public key. In another embodiment the signed message also comprises a location identifier which is, for example, a domain name system (DNS) suffix of the access node.
    United States Patent Application, US 2009/0070582 A1, 12 September 2007. [ patent application ]
  • Sampled Traffic Analysis by Internet-Exchange-Level Adversaries
    Steven J. Murdoch, Piotr Zieliński
    Existing low-latency anonymity networks are vulnerable to traffic analysis, so location diversity of nodes is essential to defend against attacks. Previous work has shown that simply ensuring geographical diversity of nodes does not resist, and in some cases exacerbates, the risk of traffic analysis by ISPs. Ensuring high autonomous-system (AS) diversity can resist this weakness. However, ISPs commonly connect to many other ISPs in a single location, known as an Internet eXchange (IX). This paper shows that IXes are a single point where traffic analysis can be performed. We examine to what extent this is true, through a case study of Tor nodes in the UK. Also, some IXes sample packets flowing through them for performance analysis reasons, and this data could be exploited to de-anonymize traffic. We then develop and evaluate Bayesian traffic analysis techniques capable of processing this sampled data.
    7th Workshop on Privacy Enhancing Technologies, Ottawa, Canada, 20–22 June 2007. Nominated for the 2008 PET workshop award for Outstanding Research in Privacy Enhancing Technologies. [ paper | slides ]
  • Ignoring the Great Firewall of China
    Richard Clayton, Steven J. Murdoch, Robert N.M. Watson
    The so-called "Great Firewall of China" operates, in part, by inspecting Transmission Control Protocol (TCP) packets for keywords that are to be blocked. If the keyword is present, TCP reset packets are sent to both endpoints of the connection, which then close. However, the original packets pass through the firewall unscathed. Therefore, if the endpoints completely ignore the firewall's resets, the connection will proceed unhindered and the firewall will be ineffective. Once one connection has been blocked, the firewall makes further easy-to-evade attempts to block any more connections from the same machine. This latter behaviour of the firewall can be leveraged into a denial-of-service attack on third-party machines.
    I/S: A Journal of Law and Policy for the Information Society, Volume 3, Issue 2, pages 271–296, 2007. Extended version of the PET 2006 paper. [ paper ]
  • Hot or Not: Revealing Hidden Services by their Clock Skew
    Steven J. Murdoch
    Location-hidden services, as offered by anonymity systems such as Tor, allow servers to be operated under a pseudonym. As Tor is an overlay network, servers hosting hidden services are accessible both directly and over the anonymous channel. Traffic patterns through one channel have observable effects on the other, thus allowing a service's pseudonymous identity and IP address to be linked. One proposed solution to this vulnerability is for Tor nodes to provide fixed quality of service to each connection, regardless of other traffic, thus reducing capacity but resisting such interference attacks. However, even if each connection does not influence the others, total throughput would still affect the load on the CPU, and thus its heat output. Unfortunately for anonymity, the effect of temperature on clock skew can be remotely detected through observing timestamps. This attack works because existing abstract models of anonymity-network nodes do not take into account the inevitable imperfections of the hardware they run on. Furthermore, we suggest the same technique could be exploited as a classical covert channel and can even provide geolocation.
    13th ACM Conference on Computer and Communications Security (CCS), Alexandria, Virginia, USA, 30 October–03 November 2006. Also presented at NoVA Sec, 02 November 2006. [ paper | slides | code | ACM version ]
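    The measurement primitive behind this attack, estimating skew as the slope of the remote-versus-local timestamp offset over time, can be sketched as follows (illustrative code, not the paper's implementation; the simulated 50 ppm skew, sampling interval, and jitter figure are arbitrary assumptions):

```python
import random

def estimate_skew_ppm(samples):
    """Least-squares slope of (remote - local) offset against local time.

    samples: list of (local_time_s, remote_time_s) pairs.
    Returns the estimated skew in parts per million.
    """
    n = len(samples)
    xs = [t for t, _ in samples]
    ys = [r - t for t, r in samples]          # observed clock offset
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return 1e6 * cov / var

# Simulate a remote clock running 50 ppm fast, sampled once a second
# for ten minutes with a little measurement jitter.
random.seed(0)
samples = [(t, t * (1 + 50e-6) + random.gauss(0, 1e-4)) for t in range(600)]
skew = estimate_skew_ppm(samples)   # close to the simulated 50 ppm
```

In the attack, modulating the server's CPU load shifts its temperature and hence this slope; repeating the estimate over time and correlating it with the induced load pattern is what links the hidden service to a candidate machine.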
  • Ignoring the Great Firewall of China
    Richard Clayton, Steven J. Murdoch, Robert N.M. Watson
    The so-called "Great Firewall of China" operates, in part, by inspecting TCP packets for keywords that are to be blocked. If the keyword is present, TCP reset packets (viz: with the RST flag set) are sent to both endpoints of the connection, which then close. However, because the original packets are passed through the firewall unscathed, if the endpoints completely ignore the firewall's resets, then the connection will proceed unhindered. Once one connection has been blocked, the firewall makes further easy-to-evade attempts to block further connections from the same machine. This latter behaviour can be leveraged into a denial-of-service attack on third-party machines.
    6th Workshop on Privacy Enhancing Technologies, Cambridge, England, 28–30 June 2006. Published in LNCS 4258, Springer-Verlag. [ paper ]
  • Phish and Chips (Traditional and New Recipes for Attacking EMV)
    Ben Adida, Mike Bond, Jolyon Clulow, Amerson Lin, Steven J. Murdoch, Ross Anderson, Ronald L. Rivest
    This paper surveys existing and new security issues affecting the EMV electronic payments protocol. We first introduce a new price/effort point for the cost of deploying eavesdropping and relay attacks – a microcontroller-based interceptor costing less than $100. We look next at EMV protocol failures in the back-end security API, where we describe two new attacks based on chosen-plaintext CBC weaknesses, and on key separation failures. We then consider future modes of attack, specifically looking at combining the phenomenon of phishing (sending unsolicited messages by email, post or phone to trick users into divulging their account details) with chip card sabotage. Our proposed attacks exploit covert channels through the payments network to allow sabotaged cards to signal back their PINs. We hope these new recipes will enliven the debate about the pros and cons of Chip and PIN at both technical and commercial levels.
    Fourteenth International Workshop on Security Protocols, Cambridge, UK, 27–29 March 2006. [ paper ]
  • Chip and Spin
    Ross Anderson, Mike Bond, Steven J. Murdoch
    The new UK "Chip and PIN" card payments scheme has recently gone live. It has been spun in the media so far as "a safer way to pay" and as "the biggest change to payment since decimalisation". However, the latest fraud figures show that fraud is up, not down – and the Chip and PIN scheme is being blamed. So how secure is it really? And who will benefit most from its introduction? This note briefly considers liability issues, technical shortcomings and management failures.
    Computer Security Journal, Volume 22, Issue 2, pages 1–6, 2006. First published in May 2005. [ paper ]
  • Message Splitting Against the Partial Adversary
    Andrei Serjantov, Steven J. Murdoch
    We review threat models used in the evaluation of anonymity systems' vulnerability to traffic analysis. We then suggest that, under the partial adversary model, if multiple packets have to be sent through these systems, more anonymity can be achieved if senders route the packets via different paths. This is in contrast to the normal technique of using the same path for them all. We comment on the implications of this for message-based and connection-based anonymity systems. We then proceed to examine the only remaining traffic analysis attack – one which considers the entire system as a black box. We show that it is more difficult to execute than the literature suggests, and attempt to empirically estimate the parameters of the Mixmaster and the Mixminion systems needed in order to successfully execute the attack.
    5th Workshop on Privacy Enhancing Technologies, Dubrovnik (Cavtat), Croatia, 30 May–01 June 2005. Published in LNCS 3856, Springer-Verlag. [ paper | data ]
  • Embedding Covert Channels into TCP/IP
    Steven J. Murdoch, Stephen Lewis
    It is commonly believed that steganography within TCP/IP is easily achieved by embedding data in header fields seemingly filled with “random” data, such as the IP identifier, TCP initial sequence number or the least significant bit of the TCP timestamp. We show that this is not the case; these fields naturally exhibit sufficient structure and non-uniformity to be efficiently and reliably differentiated from unmodified ciphertext. Previous work on TCP/IP steganography does not take this into account and, by examining TCP/IP specifications and open source implementations, we have developed tests to detect the use of naïve embedding. Finally, we describe reversible transforms that map block cipher output into TCP ISNs, indistinguishable from those generated by Linux and OpenBSD. The techniques used can be extended to other operating systems. A message can thus be hidden in such a way that an attacker cannot demonstrate its existence without knowledge of a secret key.
    7th Information Hiding Workshop, Barcelona, Catalonia (Spain), 06–08 June 2005. Published in LNCS 3727, Springer-Verlag. [ paper ]
  • Low-Cost Traffic Analysis of Tor
    Steven J. Murdoch, George Danezis
    Tor is the second generation Onion Router, supporting the anonymous transport of TCP streams over the Internet. Its low latency makes it very suitable for common tasks, such as web browsing, but insecure against traffic analysis attacks by a global passive adversary. We present new traffic analysis techniques that allow adversaries with only a partial view of the network to infer which nodes are being used to relay the anonymous streams and therefore greatly reduce the anonymity provided by Tor. Furthermore, we show that otherwise unrelated streams can be linked back to the same initiator. Our attack is feasible for the adversary anticipated by the Tor designers. Our theoretical attacks are backed up by experiments performed on the deployed, albeit experimental, Tor network. Our techniques should also be applicable to any low latency anonymous network. These attacks highlight the relationship between the field of traffic analysis and more traditional computer security issues, such as covert channel analysis. Our research also highlights that the inability to directly observe network links does not prevent an attacker from performing traffic analysis: the adversary can use the anonymising network as an oracle to infer the traffic load on remote nodes in order to perform traffic analysis.
    2005 IEEE Symposium on Security and Privacy, Oakland, California, USA, 08–11 May 2005. Nominated for the 2006 PET workshop award for Outstanding Research in Privacy Enhancing Technologies; awarded the 2006 Computer Laboratory prize for most notable paper. [ paper | code ]
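    The oracle idea at the heart of this attack, probing a relay's latency while an attacker-controlled traffic pattern flows through the victim's stream, can be shown with a toy simulation (illustrative only; the latency and noise figures are invented, and real measurements are far noisier):

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(1)
# Attacker-controlled on/off pattern sent through the victim's stream.
pattern = [random.choice((0, 1)) for _ in range(200)]

def probe_latencies(relay_carries_stream):
    # Probe latency rises slightly when the relay is loaded by the
    # victim's traffic; otherwise only measurement noise remains.
    return [0.05 + (0.02 * p if relay_carries_stream else 0.0)
            + random.gauss(0, 0.005) for p in pattern]

on_path = pearson(pattern, probe_latencies(True))    # strong correlation
off_path = pearson(pattern, probe_latencies(False))  # near zero
```

Ranking relays by this correlation reveals which nodes carry the stream, without the attacker ever observing the network links directly.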
  • Unwrapping the Chrysalis
    Mike Bond, Daniel Cvrcek, Steven J. Murdoch
    We describe our experiences reverse engineering the Chrysalis-ITS Luna CA3, a PKCS#11-compliant cryptographic token. Emissions analysis and security API attacks are viewed by many to be simpler and more efficient than a direct attack on an HSM. But how difficult is it to actually "go in the front door"? We describe how we unpicked the CA3 internal architecture and abused its low-level API to impersonate a CA3 token in its cloning protocol – and extract PKCS#11 private keys in the clear. We quantify the effort involved in developing and applying the skills necessary for such a reverse-engineering attack. In the process, we discover that the Luna CA3 has far more undocumented code and functionality than is revealed to the end-user.
    Technical Report UCAM-CL-TR-592, University of Cambridge, Computer Laboratory, June 2004. Also published in Czech as Bezpečný hardware, který není zase tak bezpečný in Data Security Management, Rok 8, Číslo 5/2004, strany 44–47 and Reverse-engineering kryptografického modulu in Crypto-World, Rok 6, Číslo 9/2004, strany 8–14. [ paper | code ]
  • Covert Channels for Collusion in Online Computer Games
    Steven J. Murdoch, Piotr Zieliński
    Collusion between partners in Contract Bridge is an oft-used example in cryptography papers and an interesting topic for the development of covert channels. In this paper, a different type of collusion is discussed, where the parties colluding are not part of one team, but instead are multiple independent players, acting together in order to achieve a result that none of them are capable of achieving by themselves. Potential advantages and defences against collusion are discussed. Techniques designed for low-probability-of-intercept spread spectrum radio and multilevel secure systems are also applied in developing covert channels suitable for use in games. An example is given where these techniques were successfully applied in practice, in order to win an online programming competition. Finally, suggestions for further work are explored, including exploiting similarities between competition design and the optimisation of voting systems.
    6th Information Hiding Workshop, Toronto, Ontario, Canada, 23–25 May 2004. Published in LNCS 3200, Springer-Verlag. [ paper | slides ]
  • Compounds: a Next-Generation Hierarchical Data Model
    Markus G. Kuhn, Steven J. Murdoch, Piotr Zieliński
    Compounds provide a simple, flexible, hierarchical data model that unifies the advantages of XML and file systems. We originally designed the model for Project Dendros, our distributed, revision-controlled storage system that aims to fully separate the control over data from its storage location. Compounds also provide an excellent extensible and general-purpose data format. A processing framework based on stackable filters allowed us to add rich functionality in a highly modular manner, including access control, compression, encryption, serialization, querying, transformation, remote access, and revision control.
    Microsoft Research Academic Days, Dublin, Ireland, 13–16 April 2004. [ poster ]