Research Talks

  • 4 February 2022, 09:00 (Canceled)
    Daniel Woods, Innsbruck University
    Measuring and managing cyber risk
    Abstract: Improving cybersecurity across society requires more than just designing secure systems; we must also understand the evidence base and incentive structure that leads firms to adopt secure solutions. This talk begins with a systematisation of 30 years of quantitative cyber risk research. I then present an approach to estimating the risk and size of cyber losses that involves reverse-engineering insurance prices. I also present ongoing work on quantifying software security via 0-day exploit prices. The second part of the talk presents evidence about how insurers and lawyers are fundamentally changing how firms respond to cyber incidents.

    Bio: Daniel Woods is currently a Marie Curie Fellow at the University of Innsbruck in the Austrian Alps. He received his PhD from the University of Oxford’s computer science department, during which he visited the University of Tulsa as a Fulbright scholar. He received an MSci in mathematics from the University of Bristol.

    Home Page: https://informationsecurity.uibk.ac.at/people/daniel-woods/

    Google Scholar: https://scholar.google.com/citations?user=Vbr7JG4AAAAJ&hl=en

    Join: https://ucl.zoom.us/j/91500185309?pwd=cEdaM3pLbkl0NkhXR25uUWxBSG92QT09

  • 4 February 2022, 10:30
    Mark Warner, Northumbria University
    I’d prefer not to say: Investigating the effects of information control mechanisms in different online social environments
    Abstract: When signing up to social networking sites, completing online profiles, or sending messages to friends on WhatsApp, people may choose to withhold certain information about themselves or even delete information previously disclosed. Whilst providing users with control over their personal information is clearly important, little is known about the impact these control mechanisms have on users. In this talk, we will explore work conducted as part of my PhD, which investigated the effect of HIV status non-disclosures in dating apps used by gay and bisexual men. This work highlights the potential negative effect non-disclosures can have on user desirability, and how this differs depending on other disclosed characteristics of the user. I will then share findings from a study I recently conducted on message deletion in mobile messaging apps, and highlight similarities in the findings across these two very different online environments.

    Bio: Mark Warner is a Senior Lecturer in Computing and Information Sciences and is part of the social computing research group. He conducts interdisciplinary research at the intersection of human-computer interaction, information security, and crime and policing. Mark was the computer science lead for OMDDAC, a UKRI-funded observatory for monitoring data-driven approaches to COVID-19, and is currently leading a REPHRAIN-funded project developing a proactive online harms intervention tool. In addition to this research, he is an expert advisor for the National Police Chiefs’ Council (NPCC) Vulnerability Knowledge and Practice Programme (VKPP), providing expert advice, consultation, and scrutiny of the programme’s research. He also acts as an expert advisor on the Thames Valley Police data ethics committee, providing guidance on the use of data-driven systems in policing. In 2020 he completed his PhD at UCL’s Interaction Centre, prior to which he gained an MSc in Security Management whilst working as a digital forensics engineer, a career he held for over 10 years.

    Home Page: https://www.northumbria.ac.uk/about-us/our-staff/w/mark-warner/

    Google Scholar: https://scholar.google.co.uk/citations?user=B2MxYPYAAAAJ&hl=en

    Join: https://ucl.zoom.us/j/91456071327?pwd=dkRzYU9DNkdDQlBOT1daelk4TEtrUT09

  • 8 February 2022, 09:00
    Martin Kleppmann, Cambridge University
    Confidentiality, Integrity, and Availability of Collaboration Software
    Abstract: Signal, WhatsApp, and other secure messaging apps have brought end-to-end encryption to billions of users. Unfortunately, many other applications still lack end-to-end security guarantees: in particular, with real-time collaboration software such as Google Docs, Overleaf, Figma, or Trello, we still have to blindly trust cloud services to process the users’ unencrypted data. This is particularly problematic for use cases such as journalistic investigations, medical records, or sensitive negotiations. This talk introduces our group’s research on improving the security characteristics of collaboration software, while retaining the convenience of real-time collaboration. To improve confidentiality, we are applying end-to-end encryption, and using anonymity protocols to provide metadata privacy. To improve integrity, we aim to cryptographically verify that collaborators have consistent views of the shared document. To improve availability, our “local-first” approach ensures that even if the cloud service shuts down or suspends user accounts, users do not lose any data. Our approach is both principled and practical. We are using a wide variety of techniques, including cryptographic protocol design, formal verification of algorithms, carefully optimised data structures, and open source application prototypes, with the goal of making secure collaboration software a practical reality.
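
    A minimal, purely illustrative sketch of the kind of building block that makes local-first collaboration possible: a conflict-free replicated data type (CRDT), whose replicas can be edited independently and merged deterministically without a trusted server. The grow-only counter below is an assumption chosen for brevity, not the data structures described in the talk.

      # Toy illustration of a local-first building block: a conflict-free replicated
      # data type (CRDT). Each replica edits its own copy (e.g. while offline) and
      # replicas converge by merging, with no server deciding the outcome. This
      # grow-only counter is an illustrative assumption, not the talk's own design.

      class GCounter:
          def __init__(self, replica_id):
              self.replica_id = replica_id
              self.counts = {}  # contribution of each replica, keyed by replica id

          def increment(self, n=1):
              self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

          def merge(self, other):
              # Commutative, associative, idempotent: take the max per replica,
              # so any sync order (or repeated syncs) yields the same state.
              for rid, c in other.counts.items():
                  self.counts[rid] = max(self.counts.get(rid, 0), c)

          def value(self):
              return sum(self.counts.values())

      # Two replicas edit independently, then exchange states in any order.
      a, b = GCounter("alice"), GCounter("bob")
      a.increment(2); b.increment(3)
      a.merge(b); b.merge(a)
      assert a.value() == b.value() == 5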

    Bio: Dr. Martin Kleppmann is a research fellow and affiliated lecturer at the University of Cambridge, and author of the bestselling book “Designing Data-Intensive Applications” (O’Reilly Media). He works on distributed systems security and collaboration software. Previously he was a software engineer and entrepreneur, co-founding and selling two startups, and working on large-scale data infrastructure at LinkedIn.

    Home Page: https://martin.kleppmann.com/

    Google Scholar: https://scholar.google.com/citations?user=TbyvU7oAAAAJ&hl=en

    Join: https://ucl.zoom.us/j/95784644175?pwd=NHU3Y3IyUHd2emFnUmR5QXRNb1pXUT09

  • 8 February 2022, 10:30
    Ilia Shumailov, Cambridge University
    Machine Learning in the Context of Computer Security
    Abstract: Machine learning (ML) has proven to be more fragile than previously thought, especially in adversarial settings. A capable adversary can cause ML systems to break at the training, inference, and deployment stages. In this talk, I will cover my recent work on attacking and defending machine learning pipelines; I will describe how otherwise-correct ML components end up being vulnerable because an attacker can break their underlying assumptions. First, with an example of attacks against text preprocessing, I will discuss why a holistic view of the ML deployment is a key requirement for ML security. Second, I will describe how an adversary can exploit the computer systems underlying the ML pipeline to mount availability attacks at both the training and inference stages. At the training stage, I will present data ordering attacks that break stochastic optimisation routines. At the inference stage, I will describe sponge examples that soak up a large amount of energy and take a long time to process. Finally, building on my experience attacking ML systems, I will discuss developing robust defences against ML attacks that take an end-to-end view of the ML pipeline.
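
    As a tiny, hedged illustration of the kind of preprocessing assumption such attacks exploit (an invisible character that leaves the rendered text unchanged while altering what the pipeline actually sees; this toy is not the specific attack presented in the talk):

      # Two strings that render identically but differ by an invisible zero-width
      # space (U+200B), so any component that assumes "looks the same == is the
      # same" (exact matching, keyword filters, cached tokenisations) is broken.
      benign = "open the door"
      adversarial = "open the d\u200boor"      # zero-width space inserted mid-word

      print(adversarial)                       # typically renders just like the benign string
      print(benign == adversarial)             # False: downstream code sees a different input
      print(len(benign), len(adversarial))     # 13 vs 14 characters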

    Bio: Ilia Shumailov holds a BSc in Computer Science from the University of St Andrews and an MPhil in Advanced Computer Science from the University of Cambridge. Since 2017, Ilia has been reading for a PhD in Computer Science under the supervision of Prof Ross Anderson. During his PhD, Ilia has worked on a number of projects spanning the fields of machine learning security, cybercrime analysis, and signal processing.

    Home Page: https://www.cl.cam.ac.uk/~is410/

    Google Scholar: https://scholar.google.co.uk/citations?user=e-YbZyEAAAAJ

    Join: https://ucl.zoom.us/j/93729429419?pwd=dG9hMkY4L05lM1dNZnRtTGxOc2FEZz09

  • 22 February 2022, 09:00
    Arthur Gervais, Imperial College
    How Dark is the Forest? On Blockchain Extractable Value and High-Frequency Trading in Decentralized Finance
    Abstract: Permissionless blockchains such as Bitcoin have excelled at financial services. Yet, opportunistic traders extract monetary value from the mesh of decentralized finance (DeFi) smart contracts through so-called blockchain extractable value (BEV). The recent emergence of centralized BEV relayers portrays BEV as a positive additional revenue source. However, because BEV was quantitatively shown to deteriorate the blockchain’s consensus security, BEV relayers endanger ledger security by incentivizing rational miners to fork the chain. For example, a rational miner with a 10% hashrate will fork Ethereum if a BEV opportunity exceeds 4× the block reward. In this talk, we quantify the BEV danger by deriving the USD extracted from sandwich attacks, liquidations, and decentralized exchange arbitrage. We estimate that over 32 months, BEV yielded 540.54M USD in profit, divided among 11,289 addresses, across 49,691 cryptocurrencies and 60,830 on-chain markets. The highest BEV instance we find amounts to 4.1M USD, 616.6× the Ethereum block reward. Moreover, while the practitioner community has discussed the existence of generalized trading bots, we are, to our knowledge, the first to provide a concrete algorithm. Our algorithm can replace unconfirmed transactions without the need to understand the victim transactions’ underlying logic, and we estimate it would have yielded a profit of 57,037.32 ETH (35.37M USD) over 32 months of past blockchain data.
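
    A minimal sketch of the forking-threshold arithmetic quoted above. The 4× threshold for a 10%-hashrate miner and the 616.6× multiple come from the abstract; the 2 ETH block reward and the ETH price below are illustrative assumptions, not figures from the talk.

      # Illustrative arithmetic only. The 4x-block-reward forking threshold for a
      # 10%-hashrate miner and the 616.6x multiple are quoted in the abstract; the
      # block reward size and ETH price below are assumptions for illustration.
      BLOCK_REWARD_ETH = 2.0      # assumed Ethereum block subsidy at the time
      ETH_USD = 3325.0            # assumed ETH price; purely illustrative
      FORK_MULTIPLE_10PCT = 4.0   # threshold quoted in the abstract

      def forking_is_rational(bev_usd, block_reward_usd, multiple=FORK_MULTIPLE_10PCT):
          # A BEV opportunity above the threshold makes forking profitable in expectation.
          return bev_usd > multiple * block_reward_usd

      block_reward_usd = BLOCK_REWARD_ETH * ETH_USD
      largest_bev_usd = 4_100_000                               # highest BEV instance in the abstract
      print(largest_bev_usd / block_reward_usd)                 # ~616x, matching the quoted multiple
      print(forking_is_rational(largest_bev_usd, block_reward_usd))   # True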

    Bio: Arthur Gervais is a Lecturer (equivalent to Assistant Professor) at Imperial College London. He is passionate about information security and has worked on blockchain-related topics since 2012, with a recent focus on Decentralized Finance (DeFi). He is a co-instructor of the first DeFi MOOC, which attracted over 2,800 students in Fall 2021 (https://defi-learning.org/).

    Home Page: http://arthurgervais.com

    Google Scholar: https://scholar.google.ch/citations?user=jLr_xi4AAAAJ&hl=en

    Join: https://ucl.zoom.us/j/96139024855?pwd=YVhIaktmcmpIRUVrVVdhQlVSSCtJZz09

  • 22 February 2022, 10:30
    Matthew Mirman, ETH Zurich
    Trustworthy Deep Learning: Methods, Systems and Theory
    Abstract: Deep learning models are quickly becoming an integral part of a plethora of high stakes applications, including autonomous driving and health care. As the discovery of vulnerabilities and flaws in these models has become frequent, so has the interest in ensuring their safety, robustness and reliability. My research addresses this need by introducing new core methods and systems that can establish desirable mathematical guarantees of deep learning models. In the first part of my talk I will describe how we leverage abstract interpretation to scale verification to orders of magnitude larger deep neural networks than prior work, at the same time demonstrating the correctness of significantly more properties. I will then show how these techniques can be extended to ensure, for the first time, formal guarantees of probabilistic semantic specifications using generative models. In the second part, I will show how to fuse abstract interpretation with the training phase so as to improve a model’s amenability to certification, allowing us to guarantee orders of magnitude more properties than possible with prior work. Finally, I will discuss exciting theoretical advances which address fundamental questions on the very existence of certified deep learning.
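
    A minimal, hedged sketch of the underlying idea, using the simplest abstract domain (intervals) to push sound bounds through one affine-plus-ReLU layer. The abstract domains, scale, and guarantees discussed in the talk go well beyond this toy, and the network weights and input region below are illustrative assumptions.

      # Abstract interpretation with the interval domain: track a lower/upper bound
      # per neuron and push the whole input box through the layer at once. If the
      # output bounds already separate two classes, the property is certified for
      # every input in the box, without enumerating individual inputs.

      def affine_interval(lo, hi, W, b):
          # Sound bounds for y = W x + b when each x[j] lies in [lo[j], hi[j]].
          out_lo, out_hi = [], []
          for row, bias in zip(W, b):
              out_lo.append(bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row)))
              out_hi.append(bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row)))
          return out_lo, out_hi

      def relu_interval(lo, hi):
          # ReLU is monotone, so applying it to both bounds stays sound.
          return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

      # Illustrative two-neuron layer and input region (e.g. a small perturbation box).
      W, b = [[1.0, -0.5], [-0.3, 0.8]], [0.0, 0.1]
      lo, hi = relu_interval(*affine_interval([0.9, -0.1], [1.1, 0.1], W, b))
      print("class 0 provably beats class 1:", lo[0] > hi[1])   # True for this toy box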

    Bio: Matthew Mirman is a final-year PhD student at ETH Zürich, supervised by Martin Vechev. His main research interests sit at the intersection of programming languages, machine learning, and theory, with applications to creating safe and reliable artificial intelligence systems. Prior to ETH, he completed his B.Sc. and M.Sc. at Carnegie Mellon University, supervised by Frank Pfenning.

    Home Page: http://www.mirman.com/

    Google Scholar: https://scholar.google.com/citations?hl=en&user=ovm4iLwAAAAJ

    Join: https://ucl.zoom.us/j/93903168541?pwd=UEtYcE9tNzlENHJoSXREK3NzUWxSdz09

  • 24 February 2022, 13:30
    Yixin Zou, University of Michigan
    Improving People’s Adoption of Security and Privacy Behaviors
    Abstract: Experts recommend a plethora of advice for staying safe online, yet people still use weak passwords, fall for scams, or ignore software updates. Such inconsistent adoption of protective behaviors is understandable given the need to navigate other priorities and constraints in everyday life. Yet when the actions taken are insufficient to mitigate potential risks, this leaves people – especially those already marginalized – vulnerable to dire consequences, from financial loss to abuse and harassment. In this talk, I share findings from my research on hurdles that prevent people from adopting secure behaviors and solutions that encourage adoption in three domains: designing data breach notifications, informing privacy interface guidelines in regulations, and supporting survivors of tech-enabled abuse. (1) Even small changes in system design can make a big difference. I empirically show consumers’ low awareness of data breaches, rational justifications and biases behind inaction, and how to motivate consumers to change breached passwords through nudges in breach notifications. (2) Public policy is essential in incentivizing companies to implement better data practices, but policymaking needs to be informed by evidence from research. I present a series of user studies that led to a user-tested icon for conveying the “do not sell my personal information” opt-out, now part of the California Consumer Privacy Act (CCPA). (3) Different user groups have different threat models and safety needs, requiring special considerations in developing and deploying interventions. Drawing on findings from focus groups, I discuss how computer security support agents can help survivors of tech-enabled abuse using a trauma-informed approach. Altogether, I highlight the impact of my research on technology design, public policy, and educational efforts. I end the talk by discussing how my interdisciplinary, human-centered approach to solving security and privacy challenges can apply to future work, such as improving expert advice and developing trauma-informed computing systems.

    Bio: Yixin Zou (she/her) is a Ph.D. Candidate at the University of Michigan School of Information. Her research interests span cybersecurity, privacy, and human-computer interaction, with an emphasis on improving people’s adoption of protective behaviors and supporting vulnerable populations (e.g., survivors of intimate partner violence and older adults) in protecting their digital safety. Her research has received a Best Paper Award at the Symposium on Usable Privacy and Security (SOUPS) and two Honorable Mentions at the ACM Conference on Human Factors in Computing Systems (CHI). She has been an invited speaker at the US Federal Trade Commission’s PrivacyCon, and she co-led the research effort that produced the opt-out icon in the California Consumer Privacy Act (CCPA). She has also collaborated with industry partners at NortonLifeLock and Mozilla, and her research at Mozilla has directly influenced the product development of Firefox Monitor. Before joining the University of Michigan, she received a Bachelor’s degree in Advertising from the University of Illinois at Urbana-Champaign.

    Home Page: https://yixinzou.github.io

    Google Scholar: https://scholar.google.com/citations?user=3sEYZIEAAAAJ&hl=en

    Join: https://ucl.zoom.us/j/96802863445?pwd=UUpXSDZCb1Awcnc4R2lvQnpBNmxxUT09

  • 24 February 2022, 15:00
    Pratyush Mishra, UC Berkeley
    Privacy and Scalability for Decentralized Systems
    Abstract: Our existing digital infrastructure requires trust in a small number of centralized entities. The poor fault tolerance and auditability of this architecture have motivated interest in systems like Ethereum that decentralize trust across many nodes by having every node re-execute computations to check their correctness. However, this strategy leads to poor privacy and scalability guarantees. In this talk, I will show how to obtain decentralized trust systems that achieve strong privacy and scalability properties by relying on efficient cryptographic proofs (zkSNARKs). In particular, I will present ZEXE, a system for decentralized private computation where all transactions are indistinguishable from one another, irrespective of the underlying computation. I will then briefly describe a new paradigm for constructing concretely efficient and easy-to-deploy zkSNARKs.

    Bio: Pratyush Mishra is a cryptographer at Aleo. He recently completed his Ph.D. in Computer Science at UC Berkeley. His research interests include computer security and cryptography, with a focus on the theory and practice of succinct cryptographic proof systems, and on efficient systems for secure machine learning. He is a co-author of the arkworks zkSNARK libraries, which are used by several academic and industrial projects.

    Home Page: https://people.eecs.berkeley.edu/~pratyushmishra/

    Google Scholar: https://scholar.google.com/citations?user=URyAEqUAAAAJ&hl=en

    Join: https://ucl.zoom.us/j/91710786002?pwd=eHg5b2VGTWMyTXFJRHUyK2FpZU9mdz09
