Seminars

Upcoming events

  • 10 October 2024, 16:00
    Lisa Chernenko
    Outgroup dehumanisation in Russian and Ukrainian Telegram – language representation and the role of ingroup identity [Zoom] [Live Stream]
    Abstract: Intergroup dehumanisation, understood as a denial of human nature to an outgroup other, represents a pressing concern for today’s society. It hinders empathy and prosocial behaviour, and contributes to between-group aggression. Its consequences are particularly dangerous in the context of international military conflicts, as dehumanisation contributes to support for war and war-related violence, and it usually accompanies genocidal conflicts. This motivated the focus of this study on blatant forms of dehumanisation towards an outgroup defined in political or national terms, with a specific focus on the relations between Ukrainians, Russians, and Belarusians around the time of the Russian invasion of Ukraine in 2022.
    The study draws attention to a previously under-researched aspect of outgroup dehumanisation: the role of ingroup perception. Outgroup dehumanisation involves excluding the outgroup from the community one identifies with, thus reinforcing the boundary between ingroup and outgroup. This highlights the comparative nature of dehumanisation, suggesting its basis might lie more in a comparative ingroup-superiority bias than in an outgroup-inferiority bias. Existing research, however, generally concentrates solely on negative aspects of outgroup perception in dehumanising attitudes. While some studies have gauged dehumanisation through ingroup-outgroup perception differences, they lacked a ground-truth measure for dehumanisation, leaving its comparative nature largely unexamined. Employing a generative Large Language Model, we develop a dataset of Telegram channel posts classified as dehumanising or neutral. Utilising NLP tools, we analyse the role of ingroup-outgroup perception disparities in dehumanisation, specifically addressing its relation to affective polarisation.
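
    The abstract does not spell out the labelling pipeline; purely as a hedged illustration of what machine-assisted labelling of posts can look like, the sketch below uses an off-the-shelf zero-shot classifier. The model choice, labels, and example posts are assumptions, not the authors' actual setup.

    ```python
    # Illustrative stand-in only: the study uses a generative LLM, and its
    # exact pipeline is not described here. This sketch shows the general
    # shape of machine-assisted labelling with an off-the-shelf zero-shot
    # classifier; the model, labels, and posts are assumptions.
    from transformers import pipeline

    classifier = pipeline("zero-shot-classification",
                          model="facebook/bart-large-mnli")

    posts = ["placeholder post one", "placeholder post two"]
    labels = ["dehumanising", "neutral"]

    for post in posts:
        result = classifier(post, candidate_labels=labels)
        print(post, "->", result["labels"][0], round(result["scores"][0], 2))
    ```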

    Bio: Lisa Chernenko is a DPhil candidate in Social Data Science at the Oxford Internet Institute. Her doctoral research is generously funded by the OII Shirley Scholarship and the Dieter Schwarz Foundation stipend. Lisa’s research explores strategies to counteract dehumanisation in online communication, specifically focusing on the phenomenon of re-humanisation. In the aftermath of Russia’s 2022 invasion of Ukraine, she examines linguistic facets of outgroup dehumanisation and mechanics of re-humanisation in relation to Ukrainians and Russians across digital platforms. In addition to her PhD project, Lisa works as an Associate Researcher and a Project Lead for the “Ukraine Case Studies” in the Portulans Institute focusing on information and communication operations with implications for the Russian war in Ukraine and long-term global practices and policies.

FAQs

  • How do I subscribe to seminar announcements?
    You can subscribe to our mailing list by sending an email with subject “subscribe” to infosec-seminars-join (at) ucl.ac.uk. You can also subscribe to our Google Calendar: [ICS] [HTML].
  • In what time zone are the seminars?
    All seminars are on London time (typically at 16:00).
  • Can people not affiliated with UCL attend the seminars?
    Yes, seminars are open to everyone! At the moment, we’re virtual, so you just need to register on Zoom or join the YouTube livestream. When we restart in person, we’ll post more details.
  • How can I learn more about InfoSec research and teaching activities at UCL?
    Check out UCL’s InfoSec research group page. We also run an MSc degree in Information Security and a Centre for Doctoral Training in Cybersecurity, and maintain a blog called Bentham’s Gaze.
  • Any other questions?
    Please email us!

Past Events

2024

  • 3 May 2024, 16:00
    Federico Barbero
    Understanding Long-Range Interactions in Message-Passing Graph Neural Networks and Transformers [Zoom]
    Abstract: In this talk, I will discuss how Message-Passing Graph Neural Networks propagate information across a graph, relating the mechanism to heat equations and spectral graph theory. In particular, I will show how this inevitably leads to undesirable properties when it comes to modelling “long-range interactions” over a domain. More precisely, I will show how “over-smoothing” (latent signals becoming increasingly similar) arises due to the relationship between message-passing and heat equations, and how “over-squashing” (information being compressed in “bottlenecks”) relates to spectral quantities of the graph. I will then go over our recent ICML 2023 and ICLR 2024 works that aim to understand and improve on such issues. Finally, I will connect this direction of research to Transformers, a model that still remains the de facto standard when it comes to modelling long-range interactions over numerous domains.
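
    As a rough, self-contained toy of the over-smoothing phenomenon described above (an illustration, not the speaker's analysis), repeated neighbourhood averaging on a small graph drives all node features to the same value:

    ```python
    # Self-contained toy of the over-smoothing effect described above:
    # repeatedly averaging node features over neighbours (the message-
    # passing / heat-diffusion analogy) drives all node signals towards
    # a common value, so node representations become indistinguishable.
    import numpy as np

    # A path graph 0-1-2-3-4: a natural worst case for long-range signals.
    A = np.zeros((5, 5))
    for i in range(4):
        A[i, i + 1] = A[i + 1, i] = 1
    A += np.eye(5)                          # self-loops, GCN-style
    P = A / A.sum(axis=1, keepdims=True)    # row-normalised propagation

    x = np.random.default_rng(0).normal(size=5)  # initial node features
    for step in range(51):
        if step % 10 == 0:
            print(f"step {step:2d}  feature std = {x.std():.4f}")
        x = P @ x                           # one round of message passing
    # The std shrinks towards 0: that is over-smoothing.
    ```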

    Bio: Federico is a PhD student at the department of computer science of the University of Oxford, supervised by Michael Bronstein. He is currently a research intern at Google DeepMind working on algorithmic reasoning. Previously, he was a research intern at Microsoft Research working on protein folding with large scale geometric Transformers.
  • 29 February 2024, 16:00
    Alberto Sonnino
    Mysticeti: Low-Latency DAG Consensus with Fast Commit Path [Zoom] [Live Stream]
    Abstract: This talk introduces Mysticeti, a Byzantine consensus protocol with low latency and high resource efficiency. It leverages a DAG based on Threshold Clocks and incorporates innovations in pipelining and multiple leaders to reduce latency in the steady state and under crash failures. Mysticeti is the first Byzantine protocol to achieve a WAN latency of 0.5s for consensus commit at a throughput of over 50k TPS, matching the state of the art. Additionally, if time permits, this talk describes a variant of Mysticeti, called Mysticeti-FPC, that incorporates a fast commit path achieving even lower latency by forgoing consensus whenever possible.

    Bio: I am a research scientist at Mysten Labs working on the Sui blockchain. I am also affiliated with the computer science department of University College London (UCL). My research interests are in distributed systems, blockchains, and privacy-enhancing technologies. These days I mostly work on Byzantine fault-tolerant systems for blockchain applications, including consensus protocols, consensus-less (broadcast-based) algorithms, and distributed execution engines. I spend most of my time developing new algorithms to produce more performant distributed systems. A key aspect of my work is to leverage all the resources available to the machine and scale blockchain validators to run on multiple machines. The typical goal of my projects is to go beyond the research stage; I spend considerable effort implementing and evaluating systems to ultimately run them in production.

  • 22 February 2024, 16:00
    Roslyn Fuller
    From Public Square to Online Decisions - Principles of Digital Democracy [Zoom] [Live Stream]
    Abstract: Both social media and artificial intelligence have been hyped as examples of ‘digital democracy’, while significant resources continue to be devoted to combatting the unwanted ‘side effects’ of internet-mediated political participation, such as ‘filter bubbles’, ‘fake news’, and potential election hacking threats. Such conventional views of ‘online democracy’ as corporate ‘safe spaces’ for networking or data trawling are often excessively techno-determinist and rarely involve any meaningful role for popular sovereignty. Ironically, the excessive focus on exclusion and control in these systems makes maintaining their integrity, in particular combatting the key threat to democracy - corruption - much more difficult. This talk intertwines considerations of both the social and technical security of online democracy, in particular how efforts to control political outcomes as well as online speech, rather than channelling citizens’ natural desire to participate into effective decision-making institutions with transparent implementation, are quickly eroding the basis of political life. Starting with the principles of democracy itself (from its Athenian origins to the “Hot Gates” of modern elections), this talk will explore what drives and what hinders democracy, how to develop a virtuous cycle of participation and how digital democracy can actually improve on some of the key issues of offline democracy today.

    Bio: Dr. Roslyn Fuller is the Managing Director of the Solonian Democracy Institute, which researches alternative democratic practices and tracks the emergence of democracy-enabling technologies and companies. She studied law at the Georg-August-University in Germany (2005) before writing her PhD on Democracy and International Law at Trinity College, Dublin (2010). In addition to lecturing in law at Trinity College, the National University of Ireland, and Griffith College, Dr. Fuller ran on a platform of direct digital democracy in the 2016 Irish national elections. The author of several books, including Beasts and Gods: How Democracy Changed Its Meaning and Lost Its Purpose (Zed/Bloomsbury, 2015 - shortlisted for the Eric Hoffer Grand Prize in 2021) and In Defence of Democracy (Polity, 2019), Dr. Fuller also contributes to several newspapers on topics related to democracy and international law, including the Los Angeles Review of Books, The Irish Times, and The Financial Times, among others. Her most recent book, Principles of Digital Democracy: Theory and Case Studies, was published by de Gruyter in 2023.

  • 15 February 2024, 16:00
    Alexandros Efstratiou
    What I see is all that’s real: The dynamics of polarization and misinformation on social media platforms [Zoom]
    Abstract: The advent of Web 2.0 opened up new challenges in understanding the decentralized flow of information and the influence that social media users and communities exert on each other. Though social media has the capacity to expand democratic participation, it can also amplify harms like misinformation and political polarization. In this talk, I will present some of our work demonstrating social psychological phenomena at scale on the Reddit and Twitter platforms. Beginning with a large-scale examination of Reddit’s political sphere, I will discuss the interplay between political echo chamber participation and the subsequent hostility expressed in intergroup interaction contexts. As a follow-up, I will delve into the capacity of more moderate Reddit community members to positively influence their peers. Finally, I will discuss the role of polarization dynamics in the spread of misinformation, specifically focusing on a study of how network segregation enabled the misrepresentation of scientific consensus on COVID-19.

    Bio: Alexandros Efstratiou is a PhD candidate at UCL’s CDT in Cybersecurity and the Department of Computer Science. He is part of UCL’s Information Security Research Group (Isec) and the International Data-driven Research for Advanced Modeling and Analysis (iDRAMA) lab. With a background in social psychology and behavioral science, his work analyzes and uncovers social psychological phenomena on social media at scale, particularly focusing on how they relate to the propagation of online misinformation and the enabling of intergroup polarization.

  • 18 January 2024, 16:00
    Doug Zytko, University of Michigan-Flint
    Computer-Mediated Consent as a Lens to Study and Design for Mitigation of Interpersonal Harm [Zoom] [Live Stream]
    Abstract: The absence of consent - or voluntary agreement - is the defining characteristic of interpersonal harms that occur both online and in-person. While various technological solutions to harm have been devised, there is a conspicuous absence of technologies that mediate consent practices themselves: the ways in which people give, receive, and deny agreement to behavior. This is a significant gap because research shows that harms such as sexual violence are often perpetrated unintentionally due to misperceptions of consent, such as overreliance on nonverbal cues and one’s ability to “sense” what behavior is acceptable. In this talk I first present research into how dating apps currently - and inadvertently - mediate consent to sexual activity as a lens to understand how and why computer-mediated sexual violence occurs. I will then delve into ongoing research into the design of new technologies that deliberately mediate consent exchange to mitigate unintentional harm.

    Bio: Dr. Douglas Zytko is an Associate Professor in the College of Innovation & Technology at the University of Michigan-Flint, where he also directs the PhD program in Computing. His research uses consent as a lens to study and design technologies for computer-mediated sexual violence mitigation and ethical collection and processing of personal data for AI model training. Dr. Zytko’s research into computer-mediated consent has won multiple research awards, including Best Paper, Best Paper Honorable Mention, and Impact Recognition at CSCW. His research has been funded by the National Science Foundation, United States Department of Defense, and industry partners.

2023

  • 7 December 2023, 16:00
    Marco Gutfleisch, Ruhr University Bochum
    Secure Software Engineering: Organisational Challenges and Problems [Zoom](https://ucl.zoom.us/j/97688304453?pwd=ekJ2eVFmd0hnSWUyVUV0U2dhblF5UT09) [Live Stream]
    Abstract: Secure software engineering is often associated with writing secure code or executing penetration tests, but these are just two of hundreds of activities that support secure software development. Several concepts, models, and guidelines exist, yet many development teams still struggle to build security into their daily routines. Although a variety of studies have investigated individual developers within controlled environments, only a few have investigated the effect of security interventions with developers in organisational settings. The main part of the talk focuses on one of the most prominent organisational security concepts: security champions. In organisations, security champions serve as local representatives who encourage and monitor security policies, acting as an extension of the security management team.

    Bio: Since 2019, Marco Gutfleisch has been a usable security researcher at the Chair of Human-Centred Security at the Ruhr University Bochum, and he is currently enrolled in a PhD programme within the Cluster of Excellence CASA. His research focuses on protective measures against large-scale attackers, for which he investigates problems and solutions that could help the software industry develop more secure and usable software while considering different organisational, technical, and human factors. Before starting his PhD, he worked for three years in quality assurance at a large German cybersecurity software company. Influenced by his industrial experience and following the example of his first supervisor, Prof. M. Angela Sasse, his research is characterised by practicality and applicability.


  • 22 November 2023, 16:00
    Maria Santos, UCL CDT Cybersecurity & Information Security
    Post-quantum secure signature schemes from isogenies [Zoom] [Live Stream]
    Abstract: Most public-key cryptography that is deployed in today’s systems is susceptible to attacks by quantum computers. With increasing investment in the development of large-scale quantum computers, it is important to develop cryptography that is secure against both classical and quantum attacks. Considering this, in 2016, NIST began an effort to standardise post-quantum secure key-exchange mechanisms and signature schemes. In this talk, we will focus on signature schemes and introduce SQIsign, the only isogeny-based signature scheme submitted to NIST’s recent alternate call for signatures, which boasts the smallest combined signature and public-key sizes. We will discuss the benefits and drawbacks of SQIsign compared to other post-quantum secure signatures, and present joint work that aims to obtain faster verification for SQIsign.

    Bio: I am currently a fourth-year PhD student, part of the CDT for Cybersecurity and the Information Security group at University College London. My main interests are post-quantum cryptography, specifically isogeny-based protocols, and their applications. My main supervisors are Philipp Jovanovic and Sarah Meiklejohn. Before starting my PhD, I completed my undergraduate and master’s degrees in Mathematics at the University of Cambridge, specialising in Algebraic Number Theory and Elliptic Curves. I am passionate about communicating cryptography and mathematics to others and have written a series of blog posts on isogeny-based cryptography, as well as giving a number of outreach talks to STEM students. Visit my website for more information about these: www.mariascrs.com.


  • 16 November 2023, 16:00
    Angela Sasse
    Guardians of the Digital Galaxy: protecting the cybersecurity workforce [Zoom] [Live Stream]
    Abstract: Over the past 5 years, a growing body of research has suggested that cybersecurity professionals are not exactly a happy and confident lot. CISOs complain about not being understood and supported by company leaders on the one hand, and fail to connect with other organisational functions and employees on the other. Hielscher et al. found that CISOs like the idea of human-centred security but feel powerless to make the necessary changes to security policies and mechanisms. Analysing the discourse on security in organisations and in public forums, Menges et al. describe the relationship status of security experts and non-experts as dysfunctional. The picture is no better with other security specialists: Sundaramurthy et al. report that security operations staff suffer from burnout and stress. Gutfleisch et al. found that security champions in software development teams lack the tools and resources to help their colleagues produce secure code. We also know that cybersecurity experts are increasingly becoming the focus of attackers, and that they and their families can become subject to social engineering attacks, entrapment, and blackmail. Given that we already have a shortage of qualified cybersecurity staff, a reduction in capacity due to stress, burnout, or fear has to be prevented. In this talk, I will discuss how we can prepare cybersecurity professionals better for their job, and what organisations should do to protect them and themselves.

    Bio: M. Angela Sasse is the professor of human-centred technology at UCL, and of human-centred security at Ruhr-University Bochum in Germany. She is a pioneer of usable security research – the 1999 paper “Users are not the Enemy” (co-authored with Anne Adams) is the most cited publication on usable security. She was the Director of the Research Centre on Socio-Technical Security (RISCS) from 2012 to 2017, and co-authored the NCSC CyBOK chapter on Human Factors. In recent years, her focus has been empirical research on how large organisations manage cybersecurity risks. She is a fellow of the Royal Academy of Engineering and the German National Academy of Sciences “Leopoldina”.


  • 1 November 2023, 16:00
    Amirreza Sarencheh, University of Edinburgh
    PEReDi: Privacy-Enhanced, Regulation Friendly and Distributed Central Bank Digital Currencies [Zoom] [Live Stream]
    Abstract: Central Bank Digital Currencies (CBDCs) aspire to offer a digital replacement for physical cash and as such need to tackle two fundamental requirements that are in conflict. On the one hand, it is desired that they are private, so that a financial “panopticon” is avoided, while on the other, they should be regulation-friendly in the sense of facilitating any threshold-limiting, tracing, and counterparty auditing functionality that is necessary to comply with regulations such as Know Your Customer (KYC), Anti-Money Laundering (AML) and Combating the Financing of Terrorism (CFT), as well as financial stability considerations. In this work, we put forth a new model for CBDCs and an efficient construction that, for the first time, fully addresses these issues simultaneously. Moreover, recognizing the importance of avoiding a single point of failure, our construction is distributed so that all its properties can withstand suitably bounded entities getting corrupted by an adversary. Achieving all the above properties efficiently is technically involved; among others, our construction uses suitable cryptographic tools to thwart man-in-the-middle attacks, it showcases a novel traceability mechanism with significant performance gains compared to previously known techniques and, perhaps surprisingly, shows how to obviate Byzantine agreement or broadcast from the optimistic execution path of a payment, something that results in an essentially optimal communication pattern and communication overhead when the sender and receiver are honest. Going beyond “simple” payments, we also discuss how our scheme can facilitate one-off large transfers complying with Know Your Transaction (KYT) disclosure requirements. Our CBDC concept is expressed and realized in the Universal Composition (UC) framework, thereby providing a modular and secure way to embed it within a larger financial ecosystem. The paper is available at https://eprint.iacr.org/2022/974.

    Bio: Amirreza Sarencheh is a third-year cryptography and blockchain Ph.D. candidate at the University of Edinburgh under the supervision of Aggelos Kiayias and Markulf Kohlweiss. With experience in both entrepreneurship and academia, his research interest is providing efficient and secure solutions to challenging real-world problems with a focus on blockchain. While pursuing his Ph.D., he has worked with renowned blockchain companies, including IOG (IOHK) and Polymesh. He has designed novel Central Bank Digital Currency, Decentralized Identity, and Stablecoin systems with a focus on achieving full privacy, comprehensive regulatory insights, and efficiency simultaneously.

  • 26 October 2023, 16:00
    Alberto Sonnino
    Narwhal and Bullshark: DAG-based Mempool and Efficient BFT Consensus [Zoom] [Live Stream]
    Abstract: We propose separating the task of reliable transaction dissemination from transaction ordering to enable high-performance Byzantine fault-tolerant quorum-based consensus. We design and evaluate a mempool protocol, Narwhal, specializing in high-throughput reliable dissemination and storage of causal histories of transactions. Narwhal tolerates an asynchronous network and maintains high performance despite failures. Narwhal is designed to easily scale out using multiple workers at each validator, and we demonstrate that there is no foreseeable limit to the throughput we can achieve. Composing Narwhal with a partially synchronous consensus protocol (Narwhal-HotStuff) yields significantly better throughput even in the presence of faults or intermittent loss of liveness due to asynchrony. However, loss of liveness can result in higher latency. To achieve overall good performance when faults occur, we design Tusk, a zero-message overhead asynchronous consensus protocol, to work with Narwhal. We demonstrate its high performance under a variety of configurations and faults. As a summary of results, on a WAN, Narwhal-HotStuff achieves over 130,000 tx/sec at less than 2-sec latency, compared with 1,800 tx/sec at 1-sec latency for HotStuff. Additional workers increase throughput linearly to 600,000 tx/sec without any latency increase. Tusk achieves 160,000 tx/sec with about 3 seconds latency. Under faults, both protocols maintain high throughput, but Narwhal-HotStuff suffers from increased latency.

    We then present BullShark, the first directed acyclic graph (DAG) based asynchronous Byzantine Atomic Broadcast protocol that is optimized for the common synchronous case. Like previous DAG-based BFT protocols, BullShark requires no extra communication to achieve consensus on top of building the DAG. That is, parties can totally order the vertices of the DAG by interpreting their local view of the DAG edges. Unlike other asynchronous DAG-based protocols, BullShark provides a practical low-latency fast path that exploits synchronous periods and deprecates the need for notoriously complex view-change mechanisms. BullShark achieves this while maintaining all the desired properties of its predecessor DAG-Rider. Namely, it has optimal amortized communication complexity, it provides fairness and asynchronous liveness, and safety is guaranteed even under a quantum adversary. In order to show the practicality and simplicity of our approach, we also introduce a standalone partially synchronous version of BullShark which we evaluate against the state of the art. The implemented protocol is embarrassingly simple (200 LOC on top of an existing DAG-based mempool implementation). It is highly efficient, achieving, for example, 125,000 transactions per second with a 2-second latency for a deployment of 50 parties. In the same setting the state of the art pays a steep 50% latency increase as it optimizes for asynchrony.
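
    As a toy illustration of the "no extra communication" idea above (emphatically not BullShark itself), the sketch below shows how parties holding the same local view of a round-based DAG can derive an identical total order from a purely deterministic rule; the round/author structure and tie-breaking rule are assumptions for illustration.

    ```python
    # Toy illustration only (not BullShark itself): in round-based DAG
    # protocols, every party keeps a local copy of the DAG, and a total
    # order can be derived deterministically from that local view alone,
    # with no extra messages. Vertices here are (round, author) pairs.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Vertex:
        round: int
        author: str
        parents: tuple = ()  # vertices from earlier rounds it references

    def causal_history(v):
        """Everything v depends on, i.e. all vertices reachable from v."""
        seen, stack = set(), [v]
        while stack:
            u = stack.pop()
            if u not in seen:
                seen.add(u)
                stack.extend(u.parents)
        return seen

    def total_order(anchor):
        # Deterministic rule: sort the anchor's causal history by round,
        # breaking ties by author. Honest parties with the same local
        # view compute exactly the same order.
        return sorted(causal_history(anchor), key=lambda u: (u.round, u.author))

    a1, b1, c1 = Vertex(1, "A"), Vertex(1, "B"), Vertex(1, "C")
    a2 = Vertex(2, "A", (a1, b1, c1))
    print([(v.round, v.author) for v in total_order(a2)])
    ```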

    Bio: I received my PhD from University College London (UCL), advised by George Danezis and Jens Groth. During my PhD I co-founded chainspace.io, which built a scalable and privacy-preserving smart contract platform. Chainspace scales by sharding its state among sub-quorums of nodes and supports privacy-preserving smart contracts by separating the contract’s execution logic from its verification through zero-knowledge proofs. The company was built from several academic works such as Chainspace, Byzcuit, and Coconut (the first three chapters of my PhD thesis). We were then acquired by Facebook (now Meta) in February 2019. I then helped design the Novi wallet and the Libra payment system. Designing Libra (later renamed Diem) required numerous research innovations such as the Jolteon consensus protocol, the Carousel leader election protocol, and the Twins testing framework. The project also led to the creation of the open-source and production-ready Diem codebase that became the foundation of Aptos. While at Meta I also co-authored the FastPay consensus-less payment system (the last chapter of my PhD thesis), the Narwhal DAG-based mempool, and the Bullshark consensus protocol. I left Meta in 2022 to commercialize these projects, branded as Sui. Links to papers: https://sonnino.com/papers/narwhal-and-tusk.pdf and https://sonnino.com/papers/bullshark.pdf

  • 27 July 2023, 16:00
    Dr Aydin Abadi
    Earn while You Reveal: Private Set Intersection that Rewards Participants [Zoom] [Live Stream]
    Abstract: Private Set Intersection (PSI) is an elegant cryptographic protocol that allows parties to find the intersection of their private sets without revealing anything beyond the result. PSIs have been used in various privacy-enhancing technologies; for instance, in federated machine learning, combating financial fraud, or COVID-19 contact tracing solutions.
    In this talk, I will highlight two facts about PSIs: (1) a non-empty result always reveals something about the parties’ private input sets, and (2) in various variants of PSI, not all parties necessarily receive or are interested in the result. I will explain a vital research gap; namely, the literature has assumed that the parties who do not receive or are not interested in the result still contribute their private input sets to the PSI for free, although doing so costs them their privacy.
    I will also talk about our multi-party PSI, called “Anesidora”, which fills the aforementioned gap. Anesidora rewards parties who contribute their private input sets to the protocol. It is efficient; it mainly relies on symmetric-key primitives, and its computation and communication complexities are linear in the number of parties and set cardinality. It remains secure even if the majority of parties are corrupted by active colluding adversaries. During the development of Anesidora, we (i) devised the first fair multi-party PSI, called “Justitia”, and (ii) proposed the notion of unforgeable polynomials. In this talk, I will discuss these two notions as well.
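
    Anesidora itself is multi-party and adds rewards and fairness; as background only, here is a minimal sketch of the classic two-party Diffie-Hellman-style PSI idea that underlies many protocols in this family. The modulus, hashing, and single-process simulation are toy assumptions, not the talk's construction.

    ```python
    # Background sketch of a classic two-party Diffie-Hellman-style PSI,
    # simulated in one process. Toy parameters: a real deployment needs a
    # proper prime-order group and hash-to-group, not SHA-256 mapped into
    # Z_p. Each party blinds hashed elements with a secret exponent;
    # double-blinded values collide exactly on common elements.
    import hashlib
    import secrets

    P = 2**127 - 1  # a Mersenne prime, used here as a toy modulus

    def h(element: str) -> int:
        return int.from_bytes(hashlib.sha256(element.encode()).digest(), "big") % P

    a = secrets.randbelow(P - 2) + 1  # party A's secret exponent
    b = secrets.randbelow(P - 2) + 1  # party B's secret exponent
    set_a = {"alice", "bob", "carol"}
    set_b = {"bob", "carol", "dave"}

    # A sends H(x)^a; B raises each to b, yielding H(x)^(ab).
    a_blinded = {pow(h(x), a, P): x for x in set_a}
    a_double = {pow(v, b, P): x for v, x in a_blinded.items()}

    # B sends H(y)^b; A raises each to a, yielding H(y)^(ab).
    b_double = {pow(pow(h(y), b, P), a, P) for y in set_b}

    # Equal double-blinded values identify the common elements.
    print(sorted(x for v, x in a_double.items() if v in b_double))  # ['bob', 'carol']
    ```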

    Bio: Aydin Abadi is a Senior Research Fellow at UCL. His main research interests include information security, privacy, and cryptography, with a focus on (1) developing Privacy Enhancing Technologies, (2) devising solutions to deal with payment fraud, and (3) blockchain technology. Before joining UCL he held a lectureship position at the University of Gloucestershire and before that, he was a Research Associate at the Blockchain Technology Lab, at the University of Edinburgh.

  • 30 June 2023, 14:15 [Hybrid Event, Register]
    Savvas Zannettou, TU Delft
    Understanding and Detecting Hateful Content Using Contrastive Learning
    (ACE-CSR Event)

    Abstract: Indisputably, the Web has revolutionized how people receive, consume, and interact with information. At the same time, unfortunately, the Web offers fertile ground for online harms like the spread of hateful content. There is a pressing need to develop techniques and tools to understand, detect, and mitigate these issues on the Web. In this talk, I will present our work on understanding and detecting hateful content using recent Artificial Intelligence (AI) advancements. The talk will focus on how we can use AI models based on contrastive learning to detect hateful content across multiple modalities (text and images) and understand the spread and evolution of hateful content online.
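
    As a hedged sketch of the contrastive-learning idea (a shared text-image embedding space), the snippet below scores an image against candidate descriptions with a public CLIP model; the model name, file path, and labels are illustrative assumptions rather than the speaker's system.

    ```python
    # Hedged sketch of the contrastive-learning idea: a CLIP-style model
    # embeds text and images into a shared space, so an image can be
    # scored against candidate descriptions by similarity. The model
    # name, file path, and labels are illustrative assumptions, not the
    # speaker's actual system.
    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    image = Image.open("meme.png")  # placeholder input image
    texts = ["a hateful meme", "an innocuous meme"]

    inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        probs = model(**inputs).logits_per_image.softmax(dim=-1)
    print(dict(zip(texts, probs[0].tolist())))
    ```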

    Bio: Savvas Zannettou is an Assistant Professor at Delft University of Technology (TU Delft) and an associated researcher with the Max Planck Institute for Informatics. Before joining TU Delft, he was a Postdoctoral Researcher at Max Planck Institute for Informatics. He obtained his PhD from Cyprus University of Technology in 2020. His research focuses on applying machine learning and data-driven quantitative analysis to understand emerging phenomena on the Web, such as the spread of false information and hateful rhetoric. Also, he is interested in understanding algorithmic recommendations on the Web, their effect on end-users, and to what extent algorithms recommend extreme content. Finally, he is interested in analyzing content moderation systems to understand the effectiveness of moderation interventions on the Web.

  • 30 June 2023, 15:00 [Hybrid Event, Register]
    Gianluca Stringhini, Boston University
    Computational Methods to Measure and Mitigate Online Disinformation
    (ACE-CSR Event)

    Abstract: The Web has allowed disinformation to reach an unprecedented scale, allowing it to become ubiquitous and harm society in multiple ways. To be able to fully understand this phenomenon, we need computational tools able to trace false information, monitoring a plethora of online platforms and analyzing not only textual content but also images and videos. In this talk, I will present my group’s efforts in developing tools to automatically monitor and model online disinformation. These tools allow us to recommend social media posts that should receive soft moderation, to identify false and misleading images posted online, and to detect inauthentic social network accounts that are likely involved in state-sponsored influence campaigns. I will then discuss our research on understanding the potentially unwanted consequences of suspending misbehaving users on social media.

    Bio: Gianluca Stringhini is an Assistant Professor in the Electrical and Computer Engineering Department at Boston University, holding affiliate appointments in the Computer Science Department, in the Faculty of Computing and Data Sciences, in the BU Center for Antiracist Research, and in the Center for Emerging Infectious Diseases Policy & Research. In his research Gianluca applies a data-driven approach to better understand malicious activity on the Internet. Through the collection and analysis of large-scale datasets, he develops novel and robust mitigation techniques to make the Internet a safer place. Over the years, Gianluca has worked on understanding and mitigating malicious activities like malware, online fraud, influence operations, and coordinated online harassment. He received multiple prizes including an NSF CAREER Award in 2020, and his research won multiple Best Paper Awards. Gianluca has published over 100 peer reviewed papers including several in top computer security conferences like IEEE Security and Privacy, CCS, NDSS, and USENIX Security, as well as top measurement, HCI, and Web conferences such as IMC, ICWSM, CHI, CSCW, and WWW.

  • 12 June 2023, 14:00 (in person, 169 Euston Road R103)
    Hans W.A. Hanley, Stanford University
    Online Information Flows and Ecosystems: Understanding the Role of Misinformation and AI-Generated Media

    Abstract: Misinformation, propaganda, and outright lies proliferate on the web, with some of these narratives having dangerous real-world consequences for public health, elections, and individual safety. With the advent of ChatGPT and other generative AI models, individual actors can scale misinformation campaigns to increasingly large degrees. However, despite the known impact of misinformation on online ecosystems and the potential dangers of AI-written content, the research community largely lacks automated and programmatic approaches for tracking narratives and the role of AI-generated news articles. In this work, we first build a system to automatically isolate and analyze the narratives being spread within online ecosystems, utilizing daily scrapes of 3,074 news websites (both mainstream and misinformation), the large language model MPNet, and DP-Means clustering. Second, to understand the impact of AI-written content, we present one of the first large-scale studies of the prevalence of AI-written articles within online news media. Training a DeBERTa-based synthetic news detector and classifying over 12.91 million articles, we find that between January 1, 2022, and April 1, 2023, the relative number of synthetic news articles increased by 79.4% on mainstream websites and by 342% on misinformation sites.
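
    A minimal sketch of the narrative-isolation step, under stated assumptions: MPNet embeddings via the sentence-transformers package and a bare-bones DP-means loop that spawns a new cluster whenever an article embedding is farther than a threshold from every existing centroid. The model name, threshold, and headlines are placeholders, not the paper's actual pipeline.

    ```python
    # Sketch under stated assumptions: MPNet sentence embeddings plus a
    # bare-bones DP-means loop (a new cluster is spawned whenever an
    # article is farther than `lam` from every centroid).
    import numpy as np
    from sentence_transformers import SentenceTransformer

    def dp_means(X, lam, iters=10):
        centroids = [X[0]]
        for _ in range(iters):
            assign = []
            for x in X:
                d = [np.linalg.norm(x - c) for c in centroids]
                if min(d) > lam:          # too far from every narrative:
                    centroids.append(x)   # start a new cluster
                    assign.append(len(centroids) - 1)
                else:
                    assign.append(int(np.argmin(d)))
            assign = np.array(assign)
            centroids = [X[assign == k].mean(axis=0) if np.any(assign == k)
                         else centroids[k] for k in range(len(centroids))]
        return assign.tolist()

    titles = ["placeholder headline one", "placeholder headline two"]
    X = np.asarray(SentenceTransformer("all-mpnet-base-v2").encode(titles))
    print(dp_means(X, lam=1.0))  # cluster id per article
    ```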

    Bio: Hans is a 3rd-year Ph.D. student at Stanford University supervised by Professor Zakir Durumeric, researching in the Empirical Security Research Group. His research focuses on natural language processing, computer security, and the spread of misinformation online. His research is supported by the Meta/Facebook Ph.D. Research Fellowship and the National Science Foundation Graduate Research Fellowship. Hans completed two master’s degrees, in Computer Science and in Statistics, with the Daniel M. Sachs Scholarship at the University of Oxford. He completed his undergraduate degree in Electrical Engineering at Princeton University.


  • 25 May 2023, 16:00
    Joon Sung Park, Stanford University
    Generative Agents: Interactive Simulacra of Human Behavior

    Abstract: Believable proxies of human behavior can empower interactive applications ranging from immersive environments to rehearsal spaces for interpersonal communication to prototyping tools. In this paper, we introduce generative agents: computational software agents that simulate believable human behavior. Generative agents wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day. To enable generative agents, we describe an architecture that extends a large language model to store a complete record of the agent’s experiences using natural language, synthesize those memories over time into higher-level reflections, and retrieve them dynamically to plan behavior. We instantiate generative agents to populate an interactive sandbox environment inspired by The Sims, where end users can interact with a small town of twenty-five agents using natural language. In an evaluation, these generative agents produce believable individual and emergent social behaviors: for example, starting with only a single user-specified notion that one agent wants to throw a Valentine’s Day party, the agents autonomously spread invitations to the party over the next two days, make new acquaintances, ask each other out on dates to the party, and coordinate to show up for the party together at the right time. We demonstrate through ablation that the components of our agent architecture (observation, planning, and reflection) each contribute critically to the believability of agent behavior. By fusing large language models with computational, interactive agents, this work introduces architectural and interaction patterns for enabling believable simulations of human behavior.
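
    A minimal sketch of the retrieval idea described above, assuming memories are scored by a weighted mix of recency, importance, and relevance; the weights, decay rate, and 2-d toy embeddings are illustrative assumptions, not the paper's exact constants.

    ```python
    # Minimal sketch of memory retrieval: score each memory by a weighted
    # mix of recency, importance, and relevance, then surface the best
    # one for planning. Constants are illustrative, not the paper's.
    import math

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm if norm else 0.0

    def score(memory, query_vec, now, w=(1.0, 1.0, 1.0)):
        recency = 0.99 ** (now - memory["t"])      # exponential decay
        importance = memory["importance"] / 10     # e.g. LLM-rated 1..10
        relevance = cosine(memory["vec"], query_vec)
        return w[0] * recency + w[1] * importance + w[2] * relevance

    memories = [
        {"t": 5, "importance": 8, "vec": [0.9, 0.1], "text": "planning the party"},
        {"t": 9, "importance": 2, "vec": [0.1, 0.9], "text": "ate breakfast"},
    ]
    query = [1.0, 0.0]  # toy embedding of "what are my plans today?"
    best = max(memories, key=lambda m: score(m, query, now=10))
    print(best["text"])  # -> "planning the party"
    ```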

    Bio: Joon Sung Park is a third-year computer science PhD student in the Human-Computer Interaction and Natural Language Processing groups at Stanford University, advised by Michael S. Bernstein and Percy Liang. He explores how we can leverage advances in natural language processing and machine learning, such as the development of large language models, to enable new interactive opportunities. His work has won a best paper award at CHI as well as multiple best paper nominations and other paper awards at CHI, CSCW, and ASSETS, and has been reported in venues such as Nature Machine Intelligence and Communications of the ACM. Joon is recognized with the Microsoft Research Ph.D. Fellowship (2022), Stanford School of Engineering Fellowship (2021), and Siebel Scholar Award (2019). He holds a bachelor’s degree in Computer Science from Swarthmore College, and a master’s degree in Computer Science under the supervision of Karrie Karahalios from UIUC.

  • 11 May 2023, 16:00
    Dr Carolina Are, Northumbria University’s Centre for Digital Citizens
    Pole dancing against the algorithm

    Abstract: What can pole dancing teach tech companies about content governance? Dr Carolina Are (a.k.a. @bloggeronpole) is a London-based Italian researcher, activist and blogger with a PhD in content moderation. An Innovation Fellow at Northumbria University’s Centre for Digital Citizens, Carolina is currently leading a project at the intersection of online abuse and online censorship, of which she has both academic and personal experience as a platform governance researcher and as a social media creator herself. Carolina has received direct apologies from Instagram about shadowbanning and has led international protests and campaigns against online censorship. She has published some of the first studies on Instagram’s shadowban of pole dancing, and continues to publish work on de-platforming, online abuse and content moderation. In this talk, she will discuss algorithmic bias against nudity, its relationship with the patriarchy and with whorephobia, sharing insights from her latest studies looking at mass reporting of sex-positive activists and sex workers, as well as tips, gossip and concerns about Big Tech’s power over our bodies.

    Bio: Dr Carolina Are, aka @bloggeronpole, has a PhD in content moderation and is currently working as Innovation Fellow at Northumbria University’s Centre for Digital Citizens. Following her own experiences of censorship on Instagram and TikTok, she has been researching on algorithmic bias against nudity and sexuality on social media, and has published the first study on the shadowbanning of pole dancing in Feminist Media Studies. Her work has been published in Social Media + Society, Media, Culture & Society and Porn Studies, and it has appeared in The New York Times, The Atlantic, The Guardian, The Conversation, the BBC, Wired, the MIT Technology Review. She’s behind various petitions, campaigns and studies to fight for more equal moderation of nudity and sexuality on social media, having been one of the founders of #EveryBODYVisible in 2019 and having created a recent petition against Instagram’s terms of use that was signed by over 100,000 people.

  • 23 March 2023, 16:00
    Christopher Wood, Research Team Lead, Cloudflare Research
    The Path of Privacy-Preserving Measurement

    Abstract: For many applications, measurement is an essential part of improving the end-user’s experience. Measurement might allow applications to tune performance, identify known issues or bugs, or experiment with new features. However, in some cases, such measurements may contain client sensitive information, and refusing to collect such information may impede or even prevent effective measurement. In recent years, a number of technologies for privacy-preserving measurement have been developed to help navigate this tussle. One successful example is Prio, a lightweight form of multiparty computation specifically designed for computing aggregate statistics without revealing any one user’s contribution to the aggregate. In this talk, we will discuss the motivation behind Prio and related MPC techniques in practice, cover its trajectory from research to practice, and highlight open problems in the space of privacy-preserving measurement.
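
    A minimal sketch of the core idea behind Prio-style private aggregation, additive secret sharing across two non-colluding servers; the zero-knowledge validity proofs that are Prio's main contribution are omitted, and parameters are illustrative.

    ```python
    # Minimal sketch of Prio-style private aggregation: each client splits
    # its value into additive shares for two non-colluding servers; each
    # server sees only random-looking shares, yet the two share-sums
    # combine to the true total. Real Prio adds zero-knowledge validity
    # proofs on top, omitted here.
    import secrets

    MOD = 2**61 - 1  # arithmetic is done modulo a prime

    def share(value):
        r = secrets.randbelow(MOD)        # share for server 1: uniform
        return r, (value - r) % MOD       # share for server 2

    client_values = [3, 1, 4, 1, 5]       # e.g. per-user counters
    s1_total = s2_total = 0
    for v in client_values:
        s1, s2 = share(v)
        s1_total = (s1_total + s1) % MOD  # server 1 sums its shares
        s2_total = (s2_total + s2) % MOD  # server 2 sums its shares

    print((s1_total + s2_total) % MOD)    # 14: aggregate, no value revealed
    ```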

    Bio: Christopher Wood is a Research Lead at Cloudflare Research. Outside of Cloudflare, he is co-chair of the TLS and MASQUE working groups at the IETF, as well as the PEARG research group in the IRTF. Before joining Cloudflare, Christopher worked on transport security, privacy, and cryptography engineering at Apple, as well as future Internet architectures at Xerox PARC. His interests lay at the intersection of network protocol design, communications security, privacy, and applied cryptography. At Cloudflare, he leads projects focused on security and privacy enhancements to a variety of systems, protocols, and applications. Christopher holds a Ph.D. in computer science from UC Irvine.

  • 9 March 2023, 16:00
    Prof. Dr. Karola Marky, Ruhr-University Bochum, Germany
    Can we vote online? Or should we stay away from it? An overview of the unique security and human factors challenges in online voting

    Abstract: This talk provides a summary of state-of-the-art online voting protocols, specifically considering security and human factors. Based on recent research, the audience will learn about the unique challenges that researchers and developers face when developing online voting systems that a) offer specific security properties, b) consider the capabilities of the entire voter population, and c) address unique scalability aspects. Further challenges, ranging from societal issues to global threats, will also be presented and discussed.

    Bio: Karola Marky is an associate professor at the Ruhr-University Bochum, Germany, where she leads the Digital Sovereignty Group. Her main research area is Human Factors in Cybersecurity and Privacy, particularly focusing on Digital Sovereignty, i.e., the informational self-determination of individuals in their digital lives. She has made major research contributions in the fields of privacy solutions for IoT devices, (two-factor) authentication, and electronic voting, published at top conferences and journals such as CHI, TOCHI, SOUPS, and USENIX Security.

2022

  • 13 December 2022, 11:00, Hybrid Seminar
    Prof. Ahmad-Reza Sadeghi (TU Darmstadt)
    Pushing the Frontiers of Federated Learning: From Security Applications to Mitigation of Poisoning Attacks

    Abstract: Federated Learning (FL) is a collaborative machine learning approach allowing several parties to jointly train a model without the need to share their private local datasets. FL is an enabling technology that can benefit distributed security-critical applications. Recently, FL has been shown to be susceptible to poisoning attacks, in which an adversary injects manipulated model updates into the federated model aggregation process to destroy or corrupt the resulting predictions, or implant hidden functionalities (aka backdoors).
    In this talk, we present our recent research work and experiences, also with industrial partners, concerning both the utilization of FL in large-scale security applications and the building of FL systems resilient to poisoning attacks. Finally, we discuss the lessons learned and future research directions.
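
    To make the poisoning threat concrete, the hedged sketch below contrasts plain federated averaging with a coordinate-wise median, one generic robust aggregator (not necessarily the defences discussed in the talk); the update values are synthetic.

    ```python
    # Hedged sketch of the poisoning problem: plain federated averaging is
    # skewed by a single malicious update, while a coordinate-wise median
    # (one generic robust aggregator, not necessarily the talk's defence)
    # largely ignores it.
    import numpy as np

    rng = np.random.default_rng(0)
    honest = [np.array([1.0, 1.0]) + rng.normal(0, 0.1, 2) for _ in range(9)]
    poisoned = [np.array([100.0, -100.0])]   # one attacker-crafted update
    updates = honest + poisoned

    fedavg = np.mean(updates, axis=0)        # badly skewed by the outlier
    robust = np.median(updates, axis=0)      # stays close to honest updates
    print("FedAvg:", fedavg.round(2), " median:", robust.round(2))
    ```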

    Bio: Ahmad-Reza Sadeghi is a professor of Computer Science and the head of the System Security Lab at Technical University of Darmstadt, Germany. He has been leading several Collaborative Research Labs with Intel since 2012, and with Huawei since 2019. He has studied both Mechanical and Electrical Engineering and holds a Ph.D. in Computer Science from the University of Saarland, Germany. Prior to academia, he worked in R&D at IT enterprises, including Ericsson Telecommunications. He has contributed continuously to the security and privacy research field. He was Editor-in-Chief of IEEE Security and Privacy Magazine, and has served on a variety of editorial boards such as ACM TODAES, ACM TIOT, and ACM DTRAP. For his influential research on Trusted and Trustworthy Computing he received the renowned German “Karl Heinz Beckurts” award, which honors excellent scientific achievements with high impact on industrial innovations in Germany. In 2018, he received the ACM SIGSAC Outstanding Contributions Award for dedicated research, education, and management leadership in the security community and for pioneering contributions in content protection, mobile security and hardware-assisted security. In 2021, he was honored with the Intel Academic Leadership Award at the USENIX Security conference for his influential research on cybersecurity, in particular on hardware-assisted security. Ahmad is also the recipient of the prestigious Advanced Grant of the European Research Council. https://www.informatik.tu-darmstadt.de/systemsecurity/people_sys/people_details_sys_45184.en.jsp

  • 8 December 2022, 14:00
    Dr Isabel Straw, Emergency Doctor, PhD in Artificial Intelligence in Healthcare
    When brain implants go wrong: The cybersecurity of implanted and interconnected medical technologies

    Abstract: The digitisation of the healthcare sector has potentiated a range of previously unseen clinical syndromes. For the individual patient, ingested and implanted medical technologies can malfunction, manifesting in novel symptoms and signs. On a population level, our reliance on telemedicine and the ‘Internet of Medical Things’ (IoMT) creates new public health challenges, including the increasing prevalence of healthcare cyberattacks. In the field of medicine, technological vulnerabilities must be framed through the lens of clinical consequence, centring patient impact in the development of threat models. In this talk we will consider these issues through the story of a patient who suffered from a malfunctioning Deep Brain Stimulator (DBS). We will consider the implications of malfunctioning medical technologies, the role of programmers and cybersecurity experts in this field, and the gaps in research and guidance that need to be addressed.

    Bio: Isabel specializes in the intersection of Artificial Intelligence (AI), clinical medicine and healthcare inequalities. Alongside her clinical work as an Emergency Doctor, she leads research projects on bias in Medical AI, the cybersecurity of implanted devices and tech-abuse in medical settings. In 2022 she delivered ethical hacking workshops at ‘May Contain Hackers’ in the Netherlands exploring the cybersecurity vulnerabilities of connected medical devices. Additionally, she spoke at the biohacking village of DEFCON (USA) on the topic of malfunctioning Deep Brain Stimulators (DBS). She has experience in international settings both in her clinical work, and in policy settings at the United Nations.
  • 2 December 2022
    Dr Orla Lynskey (London School of Economics)
    The Legal Implications of Synthetic Data
    [Recording upon request]

    Abstract: This seminar will explore the legal implications of artificially generated data (synthetic data). It assesses the implications of synthetic data for law and regulation around three key themes: data access, data privacy, and data quality. While these legal implications are not revolutionary, our analysis suggests that synthetic data may require a recalibration of the balancing of interests found in existing legal frameworks. Furthermore, viewing our data governance frameworks through the lens of synthetic data serves to illuminate the key tensions and ambiguities in these frameworks.

    Bio: https://www.lse.ac.uk/law/people/academic-staff/orla-lynskey
  • 10 November 2022
    Caitlin McGrane (RMIT University, Australia)
    Smartphones, surveillance and risk in women’s everyday lives: perspectives from Australia
    [Recording]

    Abstract: Smartphones are everyday mobile media that most adults use, but using them can also carry privacy and surveillance risks. Women’s uses of smartphones require specific analysis because technological development and use have been dominated by men, and male perspectives have become the default framework through which all people’s uses are understood (Wajcman, 1991). Feminist approaches to technology, such as mobile media (Fortunati, 2009), take women’s relationships to and uses of technology seriously while also remaining critical of how these gendered relations can be experienced unevenly depending on social privilege and marginalisation. This talk focuses on the thoughts, feelings and concerns of four women in relation to their smartphones, and how their smartphones influence their everyday lives. The findings suggest that although participants could recognise the possibility of surveillance risks from their smartphones, there was a range of different responses to these feelings. The talk concludes by offering some potential pathways towards furthering our understanding of how smartphones impact women’s everyday lives and what feminist activism around smartphones and surveillance might entail.

    Bio: Caitlin McGrane is a feminist researcher and online safety expert. She is a PhD candidate in the Digital Ethnography Research Centre at RMIT University. Her doctoral research investigates women’s everyday gendered uses and practices of smartphones, and how mobile media practices influence feminism. She is also the Manager, Policy and Online Safety at Gender Equity Victoria where she leads projects that challenge gender-based harassment and abuse online and in workplaces. Caitlin’s most recent publication is titled ‘Towards an Affirmative Ethics of Women’s Smartphone Uses in Victoria, Australia’ in Australian Feminist Studies.
  • 20 October 2022
    Albrecht Kurze (Chemnitz University of Technology, Germany)
    Learnings from Sensing Home and Guess the Data for IoT Privacy in the Home
    [Recording]

    Abstract: Simple smart home sensors, e.g. for temperature, humidity or light, increasingly collect seemingly inconspicuous data. To investigate how people interact with this type of data, we created the “Sensing Home Kit” and used it in a number of deployments in ‘the home’, including our data-driven method “Guess the Data” for individual and collective data work. The talk will summarize a series of articles on this topic. Our findings show that participants often came up with creative ways to make sense and make use of the sensor data. We confirmed prior work showing that human sensemaking of such sensor data can easily reveal domestic activities. We also found unexpected and unintended uses. The ability to reconstruct behavior, the exposure of sensitive personal data, and the use of sensor data as evidence and for lateral surveillance within the household easily lead to privacy threats. Eventually, this results in a number of wicked implications for collecting and sharing even simple sensor data in the home, even if no evil intention was present upfront, no AI is interpreting the data, and no anonymous “Big Brother” is involved.

    Bio: Albrecht Kurze is a Research Assistant at Chemnitz University of Technology, working at the intersection of human-computer interaction and the Internet of Things. He holds a PhD in computer science, and used to work in interdisciplinary projects with psychology, design and social sciences. He researches how smart, connected technology is used in the home and how a user-centered and participatory design approach can help develop better technologies. He is particularly interested in how sensors, sensor-based smart objects in the home and the interaction with them and their data can benefit users while avoiding undesirable effects, such as privacy infringements.
  • 21 July 2022
    Colin Ife (UCL Alumnus)
    Public PhD Viva: Measuring and Disrupting Malware Distribution Networks: An Interdisciplinary Approach
    [Recording]

    Abstract: Malware Delivery Networks (MDNs) are networks of webpages, servers, devices, and computer files that are used by cybercriminals to proliferate malicious software (or malware) onto victim machines. The business of malware delivery is a complex and multifaceted one that has become increasingly profitable over the last few years. Until very recently, the research community had conducted insightful but isolated studies into the different facets of malicious file distribution, giving a limited picture of the malicious file delivery ecosystem. Using a data-driven and interdisciplinary approach, this research pursues two goals: one, to measure the malicious file delivery ecosystem, bringing prior research into context, and to understand precisely how these malware operations respond to security and law enforcement interventions; and two, taking into account the overlapping research efforts of the information security and crime science communities towards preventing cybercrime, to identify mitigation strategies and intervention points to disrupt this criminal economy more effectively.

    Bio: As a member of UCL’s Information Security Research Group and the SECReT Doctoral Training Centre, Colin Ife attained his Ph.D. in Security Science. The themes of his research centred on malware, internet measurements, and cybercrime. He is currently Threat Intelligence Team Lead at Glasswall.
  • 23 June 2022, Distinguished Seminar
    Thomas Ristenpart (Cornell Tech)
    Mitigating Technology Abuse in Intimate Partner Violence
    [Recording]

    Abstract: In this talk, I’ll overview our work on technology abuse in the context of intimate partner violence (IPV). IPV is a widespread social ill affecting about one in four women and one in ten men at some point in their lives. Via interviews with survivors and professionals, online measurement studies, and reverse engineering of malicious tools, our research has provided the most granular view to date of technology abuse in IPV contexts. This has helped educate our efforts on intervention design, most notably in the form of what we call clinical computer security: direct, expert assistance to help survivors navigate technology abuse. Our work led to establishing the Clinic to End Tech Abuse, which has so far worked to help hundreds of survivors of IPV in New York City. The talk will include content on abuse, including discussion of physical, sexual, and emotional violence.

    Bio: Thomas Ristenpart is an Associate Professor at Cornell Tech and a member of the Computer Science department at Cornell University. His research spans a wide range of computer security topics, with recent focuses including digital privacy and safety in intimate partner violence, mitigating abuse and harassment online, cloud computing security, improvements to authentication mechanisms including passwords, confidentiality and privacy in machine learning, and topics in applied and theoretical cryptography. Homepage: https://tech.cornell.edu/people/thomas-ristenpart/
  • 23 June 2022
    Yvo Desmedt (University of Texas at Dallas)
    Framing and Realistic Secret Sharing
    [Recording]

    Abstract: The application of Game Theory to Secret Sharing has led to Rational Secret Sharing (RSS), which claims that, from an economic viewpoint, it would be irrational for parties to reveal their shares, and so the secret will never be reconstructed! In this presentation we present Realistic Secret Sharing, which we contrast with Rational Secret Sharing. We do not claim that RSS is wrong, but that it is restricted to a limited number of settings. In the presentation we explain when these settings occur and when they do not. In the latter case we have realistic secret sharing, and the secret will be reconstructed! In the second part of this talk, we introduce forensic aspects of secret sharing. Suppose that a dealer makes a legal will and distributes shares to family members using Shamir’s Secret Sharing scheme. Obviously, some of these parties are interested in a preliminary (i.e., before the death of the dealer), unauthorized reconstruction of the secret. When the will is released prematurely, one may want to trace the parties that illegally reconstructed the secret. Unfortunately, such a forensic analysis has no value, because the parties releasing the will can frame others. This talk requires only familiarity with linear algebra. The talk is based on papers published in GameSec 2019 and IEEE Trans. Inf. Forensics Security 2021.
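
    For readers unfamiliar with the scheme referenced in the will example, here is a minimal sketch of Shamir Secret Sharing over a toy prime field; the parameters and secret are illustrative only.

    ```python
    # Background sketch of Shamir Secret Sharing, the scheme in the will
    # example: the secret is the constant term of a random degree-(t-1)
    # polynomial over a prime field; any t shares reconstruct it via
    # Lagrange interpolation, while fewer reveal nothing. Toy parameters.
    import secrets

    P = 2**61 - 1  # prime field modulus

    def make_shares(secret, t, n):
        coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
        def f(x):
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        # Lagrange interpolation evaluated at x = 0.
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    shares = make_shares(secret=424242, t=3, n=5)
    print(reconstruct(shares[:3]))  # any 3 of the 5 shares suffice: 424242
    ```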

    Bio: Yvo Desmedt is the Jonsson Distinguished Professor at the University of Texas at Dallas, an Honorary Professor at University College London, a Fellow of the International Association for Cryptologic Research (IACR) and a Member of the Belgian Royal Academy of Science. He received his Ph.D. (1984, Summa cum Laude) from the University of Leuven, Belgium. He held positions at the Université de Montréal, the University of Wisconsin - Milwaukee (founding director of the Center for Cryptography, Computer and Network Security), and Florida State University (Director of the Laboratory of Security and Assurance in Information Technology). He was BT Chair and Chair of Information Communication Technology at University College London. He has held numerous visiting appointments. He is the Editor-in-Chief of IET Information Security and Chair of the Steering Committee of CANS. He was Program Chair of, e.g., Crypto 1994, the ACM Workshop on Scientific Aspects of Cyber Terrorism 2002, and ISC 2013. He has authored over 200 refereed papers, primarily on cryptography, computer security, and network security. He has made important predictions, such as his 1983 technical description of how cyberattacks could be used against control systems (realized by Stuxnet), and his 1996 prediction that hackers would target Certifying Authorities (DigiNotar was targeted in 2011). He also authored the first paper on Hardware Trojans (Proc. Crypto 1986). He was requested to give feedback on the report by the US Presidential Commission on Critical Infrastructures Protection, on the list of Top 10 Scientific Issues Concerning Development of Human Society (China), and gave feedback on some US NIST standards.
  • 14 June 2022
    Savvas Zannettou (TU Delft)
    Towards Understanding Soft Moderation Interventions on the Web
    [Recording]

    Abstract: The spread of misinformation online is a challenging problem with a substantial societal impact. Motivated by this, social media platforms implement content moderation systems that usually use a combination of AI and human moderators to mitigate the spread of harmful content like misinformation. In this talk, I will provide an overview of content moderation interventions that are applied online by social media platforms and present some of my work that focuses on understanding the use and effectiveness of soft moderation interventions (e.g., the addition of a warning label attached to potentially harmful content) on two social media platforms (Twitter and TikTok).

    Bio: Savvas Zannettou is an Assistant Professor in the Technology, Policy, and Management (TPM) faculty at TU Delft and an associated researcher with the Max Planck Institute for Informatics. Before joining TU Delft, he was a Postdoctoral Researcher at Max Planck Institute for Informatics. Savvas’ research focuses on applying machine learning and data-driven quantitative analysis to understand emerging phenomena on the Web, such as the spread of false information and hateful rhetoric. Also, he is interested in understanding algorithmic recommendations on the Web, their effect on end-users, and to what extent algorithms recommend extreme content. Finally, he is interested in analyzing content moderation systems to understand the effectiveness of moderation interventions on the Web.

  • 26 May 2022
    Megan Knittel, Michigan State University
    The Internet of Things and Intimate Partner Abuse: Examining Prevalence, Risks, and Outcomes
    [Recording]

    Abstract: In this talk, I will begin with a discussion of a recent paper examining prevalence, risk factors, support-seeking, and personal outcomes of Internet of Things (IoT)-mediated intimate partner abuse. We conducted a survey (N=384) using the MTurk platform of adult women living in the United States who self-reported having experienced intimate partner abuse. We found that approximately 20% of women reported experiencing adverse behavior from an intimate partner using an IoT device, with the most common perpetration occurring with personal assistant devices and GPS-enabled devices. Additionally, we found that Internet use skills and privacy/security behavior did not mitigate experiencing violence or adverse outcomes. Finally, our data suggest that experiencing IoT-mediated abuse predicted more severe personal outcomes than non-IoT-mediated abuse. I will discuss the implications of these findings for human-computer interaction design and information policy. For the last part of my talk, I will also discuss preliminary findings from my dissertation. For this work, I am conducting a netnography of online support spaces in conjunction with interviews with survivors to further examine the role of networked homes in experiences of abuse and support-seeking.

    Bio: Megan Knittel is a 4th-year PhD candidate in the Department of Media & Information and the James H. and Mary B. Quello Center for Media & Information Policy at Michigan State University. Her research centers on the role of social computing technologies in experiences of identity-related violence and marginalization. Much of her work is focused on online communities and how these spaces can support collaborative sense-making for the adoption and use of emerging technologies, particularly for marginalized communities and topics. Her dissertation project, “Smart Homes, Smart Harms: Understanding Risks, Impacts, and Support-Seeking in Cases of Internet of Things-Mediated Intimate Partner Violence”, centers on using qualitative methodologies to understand how the sensor-based computing devices that make up the Internet of Things intersect with trajectories of intimate partner abuse, with an emphasis on support-seeking strategies, barriers, and outcomes.

  • 19 May 2022
    Karl Wüst, CISPA
    Platypus: A Central Bank Digital Currency with Unlinkable Transactions and Privacy-Preserving Regulation
    [Recording]

    Abstract: Due to the popularity of blockchain-based cryptocurrencies, the increasing digitalization of payments, and the constantly reducing role of cash in society, central banks have shown an increased interest in deploying central bank digital currencies (CBDCs) that could serve as a digital equivalent of cash. While most recent research on CBDCs focuses on blockchain technology, it is not clear that this choice of technology provides the optimal solution. In particular, the centralized trust model of a CBDC offers opportunities for different designs. This talk presents a design for retail CBDCs that builds on ideas from traditional (centralized) e-cash schemes instead of using a blockchain-based system. This CBDC design, called Platypus, provides strong privacy, high scalability, and an expressive but simple regulation mechanism, which are all critical features for a CBDC. Platypus achieves these properties by adapting techniques similar to those used in anonymous blockchain cryptocurrencies like Zcash, applying them to the e-cash context, and combining them with a novel privacy-preserving regulation mechanism.

    Bio: Karl Wüst has been a tenure-track faculty member at the CISPA Helmholtz Center for Information Security since October 2021. Previously, he completed his PhD at ETH Zurich in the System Security Group. His research interests are broadly in information security with a particular focus on security and privacy aspects of digital currency and smart contract systems as well as some aspects of trustworthy computing. His research combines techniques from cryptography, distributed systems, and trusted hardware to build systems that are practical and balance the trade-off between reducing trust assumptions and high performance.

  • 12 May 2022
    Harel Berger, Ariel University
    Advanced Android malware attacks against ML detection systems
    [Recording]
    Abstract: A growing number of malware detection methods are heavily based on Machine Learning (ML) and Deep Learning techniques. However, these classifiers are often vulnerable to evasion attacks, in which an adversary manipulates a malicious instance so that it evades detection. This study offers a framework that enhances the effectiveness of ML-based malware detection systems in the field of Android application packages (APKs), following previous work in the PDF domain. The framework analyzes different aspects of defenses based on retraining methods for problem-space and feature-space evasion attacks. Several key insights were drawn during this research. The first is the creation of a predictor system that tries to predict whether an evasion attack will be successful. The second is the effect of merging two types of feature sets to address evasion attacks of multiple types.
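
    As background on what a feature-space evasion attack looks like (our own minimal illustration, not the talk's framework): against a linear detector over binary APK features, an attacker can greedily enable the features with the most negative weights until the score drops below the detection threshold.

    ```python
    # Feature-space evasion of a linear malware detector (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=50)                    # detector weights over 50 binary features
    b = -1.0
    x = (rng.random(50) < 0.3).astype(float)   # a hypothetical "malicious" feature vector

    def score(v):                              # positive => classified malicious
        return w @ v + b

    # Greedily set to 1 the absent features with the most negative weights;
    # adding features is usually more behaviour-preserving than removing them.
    for i in np.argsort(w):
        if score(x) <= 0:
            break
        if x[i] == 0 and w[i] < 0:
            x[i] = 1.0

    print("evaded:", score(x) <= 0)
    ```

    Problem-space attacks additionally require that the modified APK remains a valid, functional program, which is what makes them harder both to mount and to defend against.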

    Bio: Harel Berger received his B.Sc. degree in Computer Science from Bar Ilan University, Ramat Gan, Israel, in 2016, and his M.Sc. in Computer Science and Mathematics from Ariel University, Ariel, Israel, in 2018, where he is currently pursuing a Ph.D. in the area of mobile security and network security in the Department of Computer Science. He also received his B.Ed. from Hertzog College in Alon Shvut in 2013.
  • 3 March 2022
    Sahar Abdelnabi, CISPA
    Multi-modal Fact-checking: Out-of-Context Images and How to Catch Them
    [Recording]
    Abstract: Misinformation is now a major problem due to the potentially high risks it poses to our core democratic and societal values and orders. Out-of-context misinformation is one of the easiest and most effective ways used by adversaries to spread viral false stories. In this threat, a real image is re-purposed to support other narratives by misrepresenting its context and/or elements. This talk will present our recent work to establish the first benchmark for multi-modal fact-checking. The internet is the go-to way to verify information using different sources and modalities. Our goal is an inspectable method that automates this time-consuming and reasoning-intensive process by fact-checking the image-caption pairing using Web evidence. We leverage evidence using Web search via one modality, and perform a cycle-consistency check to reason against the other modality. We propose a novel detection model to mimic human fact-checking across the same and different modalities. Our results show that our framework is on a par with average human performance, and significantly outperforms baselines that do not consider external evidence.

    Bio: Sahar Abdelnabi is a PhD candidate at CISPA Helmholtz Center for Information Security, advised by Prof. Dr. Mario Fritz. She performs interdisciplinary research in the broad intersection of natural language processing, computer vision, and machine learning with security. This includes studying the vulnerabilities, limitations, and malicious use of ML models and how to defend against them (e.g., deepfakes, watermarking, and model attribution), in addition to leveraging ML to develop solutions for technical problems with significant social impacts (e.g., misinformation, phishing).
  • 10 February 2022
    Bogdan Kulynych, EPFL
    Disparate Vulnerability to Membership Inference Attacks
    [Recording]
    Abstract: A membership inference attack (MIA) against a machine-learning model enables an attacker to determine whether a given data record was part of the model’s training data or not. This talk will present an in-depth theoretical and empirical study of the phenomenon of disparate vulnerability against MIAs: unequal success rate of MIAs against different population subgroups. On the theoretical side, I will present necessary and sufficient conditions for preventing MIAs, both on average and for population subgroups, using a new notion of distributional generalization. I will also show the connections of disparate vulnerability to algorithmic fairness and to differential privacy. On the practical side, I will show that estimating disparate vulnerability to MIAs by naïvely applying existing attacks can lead to overestimation. I will show which attacks are suitable for estimating disparate vulnerability and provide a statistical framework for doing so reliably. I will present experiments finding statistically significant evidence of disparate vulnerability in realistic settings. More details are in the paper: https://arxiv.org/abs/1906.00389. This is a joint work with Mohammad Yaghini (University of Toronto), Giovanni Cherubin (Alan Turing Institute), Michael Veale (University College London), and Carmela Troncoso (EPFL).
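
    For background on the attacks being estimated (a standard loss-threshold baseline of our own, not the paper's code): membership inference exploits the fact that training members tend to incur lower loss than non-members, so a simple attack thresholds the per-example loss.

    ```python
    # Minimal loss-threshold membership inference baseline (illustrative only).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_in, y_in)   # X_in are "members"

    def loss(X, y):                      # per-example cross-entropy loss
        p = model.predict_proba(X)[np.arange(len(y)), y]
        return -np.log(p + 1e-12)

    tau = np.median(loss(X_in, y_in))    # attack threshold (chosen for illustration)
    member_rate = np.mean(loss(X_in, y_in) < tau)       # true members flagged
    nonmember_rate = np.mean(loss(X_out, y_out) < tau)  # non-members wrongly flagged
    print(member_rate, nonmember_rate)
    ```

    Disparate vulnerability would then be measured by computing such success rates separately per population subgroup rather than in aggregate.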

    Bio: Bogdan Kulynych is a PhD student at EPFL SPRING Lab. His interest is in studying harmful effects of machine-learning, algorithmic, and optimization systems, and, leveraging security and privacy techniques and principles, developing mitigations against these harmful effects.
  • 3 February 2022
    Amir Naseredini, University of Sussex
    Systematic Analysis of Programming Languages and Their Execution Environments for Spectre Attacks
    [Recording]
    Abstract: In this paper, we analyze the security of programming languages and their execution environments (compilers and interpreters) with respect to Spectre attacks. The analysis shows that only 16 out of 42 execution environments have mitigations against at least one Spectre variant, i.e., 26 have no mitigations against any Spectre variant. Using our novel tool Speconnector, we develop Spectre proof-of-concept attacks in 8 programming languages and on code generated by 11 execution environments that were previously not known to be affected. Our results highlight some programming languages that are used to implement security-critical code, but remain entirely unprotected, even three years after the discovery of Spectre.

    Bio: Amir Naseredini is a Ph.D. candidate and an Associate Tutor in the FoSS group at the University of Sussex. After obtaining his doctoral degree, he aims to pursue a career in line with his research interests, in a dynamic research environment spanning pioneering companies and/or academia. Homepage: https://sahnaseredini.github.io/
  • 27 January 2022
    Sandra Deepthy Siby, EPFL
    WebGraph: Capturing Advertising and Tracking Information Flows for Robust Blocking
    [Recording]
    Abstract: Users rely on ad and tracker blocking tools to protect their privacy. Unfortunately, existing ad and tracker blocking tools are susceptible to mutable advertising and tracking content. In this paper, we first demonstrate that a state-of-the-art ad and tracker blocker, AdGraph, is susceptible to such adversarial evasion techniques that are currently deployed on the web. Second, we introduce WebGraph, the first ML-based ad and tracker blocker that detects ads and trackers based on their action rather than their content. By featurizing the actions that are fundamental to advertising and tracking information flows – e.g., storing an identifier in the browser or sharing an identifier with another tracker – WebGraph performs nearly as well as prior approaches, but is significantly more robust to adversarial evasions. In particular, we show that WebGraph achieves comparable accuracy to AdGraph, while significantly decreasing the success rate of an adversary from near-perfect for AdGraph to around 8% for WebGraph. Finally, we show that WebGraph remains robust to sophisticated adversaries that use adversarial evasion techniques beyond those currently deployed on the web.

    Bio: Sandra is a PhD candidate in the Security and Privacy Engineering (SPRING) lab at EPFL. Her research interests are mainly in the areas of network security, web security, and privacy. The overarching theme of her research is to understand what we can learn from analysing meta-data, in the context of security and privacy. She applies this analysis to two use-cases: improving resistance of networking protocols to traffic analysis, and developing automated tracking detection on websites.
  • 20 January 2022
    Bristena Oprisanu – Public PhD Talk
    Evaluating Methods for Privacy-Preserving Data Sharing in Genomics
    [Recording]
    Abstract: The availability of genomic data is often essential to progress in biomedical research, personalized medicine, drug development, etc. However, its extreme sensitivity makes it problematic, if not outright impossible, to publish or share it. In this dissertation, we study and build systems that are geared towards privacy preserving genomic data sharing. We first look at the Matchmaker Exchange, a platform that connects multiple distributed databases through an API and allows researchers to query for genetic variants in other databases through the network. However, queries are broadcast to all researchers that made a similar query in any of the connected databases, which can lead to a reluctance to use the platform, due to loss of privacy or competitive advantage. In order to overcome this reluctance, we propose a framework to support anonymous querying on the platform. Since genomic data’s sensitivity does not degrade over time, we analyze the real-world guarantees provided by the only tool available for long term genomic data storage. We find that the system offers low security when the adversary has access to side information, and we support our claims by empirical evidence. We also study the viability of synthetic data for privacy preserving data sharing. Since for genomic data research, the utility of the data provided is of the utmost importance, we first perform a utility evaluation on generative models for different types of datasets (i.e., financial data, images, and locations). Then, we propose a privacy evaluation framework for synthetic data. We then perform a measurement study assessing state-of-the-art generative models specifically geared for human genomic data, looking at both utility and privacy perspectives. Overall, we find that there is no single approach for generating synthetic data that performs well across the board from both utility and privacy perspectives.

    Bio: Bristena Oprisanu is a PhD Candidate within the Information Security Research Group at UCL. Her research focuses on Enabling Progress in Genomic Research Via Privacy-Preserving Data Sharing, and it is currently sponsored by Google Inc. She is supervised by Dr. Emiliano De Cristofaro and Dr. Christophe Dessimoz. Before this she did an MSc in Information Security at UCL, and an MSci in Mathematics with Economics at UCL as well. Bristena’s research interests include privacy enhancing technologies, applied cryptography, and cryptanalysis. Homepage: https://www.bristenaop.com. Currently, she works for Bitfount, a start-up for federated machine learning.
  • 20 January 2022
    Ania Piotrowska – Public PhD Talk
    Building a private future for the internet with the Nym mixnet
    [Recording]
    Abstract: The Internet was not designed with privacy as a fundamental property at its inception. As a consequence, the lack of privacy exposes billions of people to privacy breaches and mass surveillance. Anonymous communication networks, such as Tor, are vital to maintaining our privacy; however, Tor does not defend against powerful adversaries. For message-based systems, it has been shown that mix networks that re-order (mix) packets can defend against these nation-state-level adversaries. Nym is building a permissionless and incentivised communication infrastructure, which provides full-stack privacy even against corporations and government actors with the capacity to capture all global internet traffic. In this talk, we outline two core components of the Nym design. We’ll start with network-level anonymity, explaining how Nym’s decentralized mixnet (which I designed during my PhD at UCL) offers better metadata protection than VPNs, Tor, or peer-to-peer solutions. Next, we will outline Nym’s anonymous credentials, which allow users to prove the right to use applications and services integrated with the Nym network without unnecessary user identification and tracking.

    Bio: Ania Piotrowska is a co-founder and Head of Research at Nym Technologies, where she contributes to the R&D of the Nym infrastructure. Her research interests span several aspects of security, privacy-enhancing technologies, distributed systems, and anonymous communication (onion routing, mix networks, p2p). She is also interested in blockchain technologies, particularly in the context of the privacy of cryptocurrencies. Ania received her Ph.D. in Computer Science from University College London (Information Security Group) in 2020. Her doctoral thesis, entitled “Low-latency mix networks for anonymous communication”, was completed under the supervision of Prof. George Danezis and Prof. Sarah Meiklejohn. During her Ph.D., she spent a few months as an intern at DeepMind and Chainalysis. Ania obtained her BSc and MSc from Wroclaw University of Technology (Faculty of Fundamental Problems of Technology). She is based in London (GMT). Homepage: https://aniampio.github.io
  • 13 January 2022
    Mohamed Khamis, University of Glasgow
    Security and Privacy in the Age of Ubiquitous Computing
    [Recording]
    Abstract: Today, a thermal camera can be bought for < £150 and used to track the heat traces your fingers produced when entering your password on your keyboard. We recently found that thermal imaging can reveal 100% of PINs entered on smartphones up to 30 seconds after they have been entered. Other ubiquitous sensors are continuously becoming more powerful and affordable. They can now be maliciously exploited even by average non-tech-savvy users. The ubiquity of smartphones can itself be a threat to privacy; with personal data being accessible essentially everywhere, sensitive information can easily become subject to prying eyes. There is a significant increase in the number of novel platforms in which users need to perform secure transactions (e.g., payments in VR stores), yet we still use technologies from the 1960s to secure access to them. Mohamed will talk about the implications of these developments and his work in this area with a focus on the challenges, opportunities, and directions for future work.

    Bio: Dr Mohamed Khamis is a lecturer at the University of Glasgow’s School of Computing Science, where he leads research into Human-centered Security. Mohamed and his team a) investigate how ubiquitous sensors impact privacy, security and safety, and b) design user-centered approaches to overcome these threats. For example, he is currently studying how thermal cameras can be used maliciously to infer sensitive input on touchscreens and keyboards. He also collaborates with Facebook/Meta Reality Labs to uncover how Augmented and Virtual Reality headsets pose significant privacy risks to their users and bystanders in their vicinity. He has 90+ publications in TOCHI, CHI, IMWUT, UIST and other top human-computer interaction and usable security and privacy publication venues. He has served on the program committee of CHI since 2019, and he is an editorial board member of IMWUT and the International Journal on Human-Computer Studies. His research is supported by the UK National Cyber Security Centre, the UK Engineering & Physical Sciences Research Council, PETRAS, REPHRAIN, the Royal Society of Edinburgh and Facebook Reality Labs. Mohamed received his PhD from Ludwig Maximilian University of Munich.

2021

  • 16 December 2021
    Luca De Feo, IBM Research
    Isogenies as a foundation of time delay cryptography
    [Recording]
    Abstract: Time delay cryptography has recently emerged as an alternative to multiparty computation for removing trusted parties from distributed protocols. It is especially attractive in protocols with a large number of participants, as it tends to scale much better than MPC. As an example, Verifiable Delay Functions have only been formalized in 2019, and they are already used or being considered for use in several cryptocurrencies. So far, basically all practical time delay cryptography is based on groups of unknown order, typically RSA groups (with a trusted setup) or ideal class groups of quadratic imaginary number fields. Isogenies of elliptic curves have been used as a foundation for post-quantum cryptography for more than 15 years. In 2019, in a joint work with Masson, Petit and Sanso, we observed that walks in supersingular isogeny graphs could also be used as a foundation for time delay cryptography, although not necessarily in a quantum safe manner. In a recent joint work with Burdges, we introduced a new time delay primitive, named Delay Encryption, and gave the only known instantiation based on the same framework as the isogeny based VDF. In this talk we will review the basic theory of isogenies, explain how they naturally lead to (conjecturally) incompressible sequential computation, and see how they can be combined with pairings to construct time delay primitives. Then, we will discuss the quirks and challenges associated to putting isogeny based delay cryptography into practice.

    Bio: Luca De Feo received his PhD from École Polytechnique (France) in 2010, with a thesis on computer algebra and computational number theory. He then joined Université de Versailles (France) in 2011 as Assistant Professor, where he kept working on computer algebra and cryptography. He is currently employed at IBM Research, where he works on post-quantum cryptography and related topics.
  • 9 December 2021
    Jiahua Xu, UCL
    Decentralized Exchanges (DEX) with Automated Market Maker (AMM) Protocols
    [Recording]
    Abstract: As an integral part of the decentralized finance (DeFi) ecosystem, decentralized exchanges (DEX) with automated market maker (AMM) protocols have gained massive traction with the recently revived interest in blockchain and distributed ledger technology (DLT) in general. Instead of matching the buy and sell sides, AMMs employ a peer-to-pool method and determine asset price algorithmically through a so-called conservation function. To facilitate the improvement and development of AMM-based DEX, we create the first systematization of knowledge in this area. We first establish a general AMM framework describing the economics and formalizing the system’s state-space representation. We then employ our framework to systematically compare the top AMM protocols’ mechanics, illustrating their conservation functions, as well as slippage and divergence loss functions. We further discuss security and privacy concerns, how they are enabled by AMM-based DEX’s inherent properties, and explore mitigating solutions. Finally, we conduct a comprehensive literature review on related work covering both DeFi and conventional market microstructure.
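
    To make the conservation function concrete, here is the textbook constant-product rule x * y = k (as used by Uniswap v2; the sketch is our own illustration, not code from the paper), showing how a swap moves the reserves along the curve and produces slippage.

    ```python
    # Constant-product AMM (x * y = k), ignoring fees; illustrative only.
    x, y = 1000.0, 1000.0        # pool reserves of tokens X and Y
    k = x * y                    # invariant enforced by the conservation function

    def swap_x_for_y(dx):
        """Sell dx of token X into the pool; return the amount of Y received."""
        global x, y
        new_x = x + dx
        new_y = k / new_x        # keep new_x * new_y == k
        dy = y - new_y
        x, y = new_x, new_y
        return dy

    spot = y / x                 # 1.0 Y per X before the trade
    out = swap_x_for_y(100)      # sell 100 X into the pool
    effective = out / 100        # ~0.909 Y per X actually received
    print(1 - effective / spot)  # ~9.1% slippage for this trade size
    ```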

    Bio: Dr. Jiahua Xu is a Lecturer in Financial Computing at UCL, where she teaches Blockchain Technologies and Machine Learning in Finance. She is a researcher at the university’s Centre for Blockchain Technologies, and serves as Programme Director of the MSc Emerging Digital Technologies under the Computer Science Department. Jiahua’s research interests lie primarily in blockchain economics, behavioural finance, and risk management. Jiahua earned her PhD from the University of St. Gallen in Switzerland, her MSc from the University of Mannheim in Germany, and her BA from Fudan University in China. She has visited, and has ongoing research collaborations with, Harvard Business School, Imperial College London, and the Vienna University of Economics and Business.
  • 2 December 2021
    Kostantinos Papadamou, UCL
    Characterizing Abhorrent, Misinformative, and Mistargeted Content on YouTube
    [Recording]
    Abstract: YouTube has revolutionized the way people discover and consume video content. Although YouTube facilitates easy access to hundreds of well-produced educational, entertaining, and trustworthy news videos, abhorrent, misinformative, and mistargeted content is also common. The platform is plagued by various types of inappropriate content, including: 1) disturbing videos targeting young children; 2) hateful and misogynistic content; and 3) pseudoscientific and conspiratorial content. While YouTube’s recommendation algorithm plays a vital role in increasing user engagement and YouTube’s monetization, its role in unwittingly promoting problematic content is not entirely understood. In this talk, I will present our results from three case studies on abhorrent, misinformative, and mistargeted content on YouTube, and I will motivate why it is important to investigate the role of YouTube’s recommendation algorithm in the discovery and dissemination of such content. Specifically, in these case studies we devise various methodologies to detect problematic content, and we use them to simulate the behaviour of users casually browsing YouTube to shed light on: 1) the risks of YouTube media consumption by young children; 2) the role of YouTube’s recommendation algorithm in the dissemination of hateful and misogynistic content, focusing on the Involuntary Celibates (Incels) community; and 3) user exposure to pseudoscientific misinformation on various parts of the platform and how this exposure changes based on the user’s watch history.

    Bio: Dr. Kostantinos Papadamou is a Post-doctoral Researcher at University College London working on the PROACTIVE project as part of REPHRAIN. Kostantinos holds a PhD in Computer Science from the Cyprus University of Technology. In 2018, he was a Research Intern at Telefonica Research for 6 months. His research focuses on applying deep learning and data-driven quantitative analysis to study emerging phenomena in social networks and user-generated video platforms like YouTube. His research interests lie in the fields of social networks analysis, security in social networks, fake news, deep learning, big data analysis, and authentication security.
  • 18 November 2021
    Elissa Redmiles, Max Planck Institute
    Sex, Work, and Technology: Lessons for Internet Governance & Digital Safety
    [Recording]
    Abstract: Sex workers sit at the intersection of multiple marginalized identities and make up a sizable workforce: the UN estimates that at least 42 million sex workers are conducting business across the globe. Sex workers face a unique and significant set of digital, social, political, legal, and safety risks; yet their digital experiences have received little study in the CS and HCI literature. In this talk we will review findings from a 2-year long study examining how sex workers who work in countries where sex work is legal (Germany, Switzerland, the UK) use technology to conduct business and how they have developed digital strategies for staying safe online and offline. We will then describe how these findings can inform broader conversations around internet governance, digital discrimination, and safety protections for other marginalized and vulnerable users whose experiences bisect the digital and physical.

    Bio: Dr. Elissa M. Redmiles is a faculty member and research group leader of the Safety & Society group at the Max Planck Institute for Software Systems. She has additionally served as a consultant and researcher at multiple institutions, including Microsoft Research, Facebook, the World Bank, the Center for Democracy and Technology, and the University of Zurich. Dr. Redmiles uses computational, economic, and social science methods to understand users’ security, privacy, and online safety-related decision-making processes. Her work has been featured in popular press publications such as the New York Times, Scientific American, Rolling Stone, Wired, Business Insider, and CNET and has been recognized with multiple Distinguished Paper Awards at USENIX Security and research awards from Facebook as well as the John Karat Usable Privacy and Security Research Award. Dr. Redmiles received her B.S. (Cum Laude), M.S., and Ph.D. in Computer Science from the University of Maryland.
  • 28 October 2021
    Aydin Abadi, UCL
    Polynomial Representation Is Tricky: Maliciously Secure Private Set Intersection Revisited
    [Recording]
    Abstract: Private Set Intersection protocols (PSIs) allow parties to compute the intersection of their private sets, such that nothing about the sets’ elements beyond the intersection is revealed. PSIs have a variety of applications, primarily in efficiently supporting data sharing in a privacy-preserving manner. At Eurocrypt 2019, Ghosh and Nilges proposed three efficient PSIs based on the polynomial representation of sets and proved their security against active adversaries. In this talk, I will show that these three PSIs are susceptible to several serious attacks. The attacks let an adversary (1) learn the correct intersection while making its victim believe that the intersection is empty, (2) learn a certain element of its victim’s set beyond the intersection, and (3) delete multiple elements of its victim’s input set. I will explain why the proofs did not identify these attacks and discuss how the issues can be rectified. This is a joint work with Steven Murdoch (UCL) and Thomas Zacharias (University of Edinburgh).
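
    For intuition about the polynomial representation of sets (a textbook illustration, not the Ghosh-Nilges construction itself): a set is encoded as the polynomial whose roots are its elements, so the intersection shows up as the gcd of the two encodings. Real PSIs additionally blind these polynomials with random polynomials and work over a large finite field.

    ```python
    # Sets as polynomials: roots encode elements; the gcd exposes the intersection.
    # Textbook illustration only -- actual PSIs randomise these polynomials
    # and operate over a large finite field.
    from sympy import symbols, prod, gcd, roots

    x = symbols('x')

    def encode(s):
        return prod(x - e for e in s)   # polynomial whose roots are exactly s

    A = {1, 3, 5, 7}
    B = {3, 4, 5, 6}

    g = gcd(encode(A), encode(B))       # roots of g are the intersection of A and B
    print(set(roots(g).keys()))         # {3, 5}
    ```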

    Bio: Aydin Abadi is a research fellow at UCL. His research interests include information security, privacy, cryptography, and blockchain technology. Prior to holding this position, he held lectureship and research associate positions at the Universities of Gloucestershire and Edinburgh, respectively.
  • 7 October 2021
    Jaap-Henk Hoepman, Radboud University Nijmegen
    Privacy Is Hard and Seven Other Myths. Achieving Privacy through Careful Design
    [Recording]
    Abstract: Technological developments have made it easier to invade our privacy. Yet technology can also be used to protect privacy. Privacy by design is a methodology that aims to incorporate privacy in the system development cycle from the very start. Careful design makes it possible to make the services that we use in our daily life much more privacy friendly. In this talk I will show how, using concrete examples, debunking several privacy myths along the way, like “we are not collecting personal data”, “we always need to know who you are” and “privacy is hard”. (This talk is based on my book of the same title, which will appear with MIT Press on October 5.)

    Bio: Jaap-Henk Hoepman (1966) is an associate professor at the Digital Security group of the Radboud University, Nijmegen, the Netherlands, working for the iHub, the interdisciplinary research hub on Security, Privacy, and Data Governance. He is also an associate professor in the IT Law section of the Transboundary Legal Studies department of the Faculty of Law of the University of Groningen. Moreover, he is a principal scientist (and former scientific director and co-founder) of the Privacy & Identity Lab. He is a columnist for the Financieele Dagblad (FD, a major Dutch newspaper) and a regular guest on the Dutch national radio news show Nieuws en Co. Jaap-Henk studies privacy by design and privacy-friendly protocols for identity management and the Internet of Things. He speaks on these topics at national and international congresses and publishes papers in (inter)national journals. He also appears in the media as a security and privacy expert, writes about his research in the popular press, and is actively involved in the public debate concerning security and privacy in our society.
  • 23 September 2021
    Beba Cibralic, Georgetown University
    How do we draw the line between permissible and impermissible online influence?
    [Recording]
    Abstract: Since the 2016 Russian influence campaign against the United States, scholars have tried to articulate, in precise terms, why the influence campaign was harmful, wrong, and, according to some, illegal. Some scholars have argued that the influence campaign was an infringement of sovereignty. Others have argued that it undermined the right to self-determination. Stronger still, some have suggested that the campaign might constitute an attack. I contend that none of these frameworks are adequate for explaining the particular wrong of foreign influence, nor are they conceptually satisfying in the context of online influence. I argue that to articulate the wrong of certain kinds of influence, we ought to reframe the conversation so that it is not about “foreign influence” but “pernicious influence”. Instead of focusing on the nationality of the actor, we should focus on the specific features of influence we take to be normatively problematic, such as deception and/or the spread of disinformation. The upshots of this account are that it allows us to talk meaningfully about the connection between domestic and foreign influence, and to draw lines between permissible and impermissible influence, broadly construed.

    Bio: Beba is a Ph.D. candidate in philosophy and Fritz Family Fellow at Georgetown University focusing on applied ethics, social and political philosophy, and social epistemology. Her dissertation examines the ethical, political, and legal status of online influence efforts. Beba is also co-authoring a textbook for MIT Press on the philosophy of machine agency. In 2022, Beba will be a visitor at Cambridge University’s Leverhulme Centre for the Future of Intelligence, and at Australian National University’s Humanising Machine Intelligence Project. Previously, Beba worked as a Semester Research Analyst at the Center for Security and Emerging Technology (CSET), and participated in the Stanford US-Russia Forum, where she worked on US-Russia cyber cooperation. Beba holds an MA in China Studies from Peking University, where she studied as a Yenching Academy Fellow, and a BA in philosophy and political science from Wellesley College (magna cum laude, Phi Beta Kappa).
  • 29 July 2021
    Aydin Abadi, UCL
    Multi-instance Publicly Verifiable Time-lock Puzzle and its Applications
    [Recording]
    Abstract: Time-lock puzzles are elegant protocols that enable a party to lock a message such that no one else can unlock it until a certain time elapses. Nevertheless, existing schemes are not suitable for the case where a server is given multiple instances of a puzzle scheme at once and must unlock them at different points in time. If the schemes are naively used in this setting, the server has to start solving all puzzles as soon as it receives them, which ultimately imposes significant computation costs and demands a high level of parallelisation. In this talk, I will discuss a new generic primitive called “multi-instance time-lock puzzle” that tackles the aforementioned issues by composing a puzzle’s instances. I will also talk about a candidate construction of the primitive, called “chained time-lock puzzle” (C-TLP). Given the composition of the instances, it allows the server to solve puzzles sequentially, without having to run parallel computations on them. C-TLP makes black-box use of a standard time-lock puzzle scheme and is accompanied by a lightweight publicly verifiable algorithm. It is the first time-lock puzzle that offers a combination of the above features. Moreover, I will discuss how C-TLP can be used to build the first “outsourced proofs of retrievability” that can support real-time detection and fair payment while having lower overhead than the state of the art. Also, one can substitute a “verifiable delay function” with C-TLP (in certain cases), to gain much better efficiency.
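
    For context, the classic Rivest-Shamir-Wagner construction that underlies such puzzles (a toy sketch with insecure parameters, not C-TLP itself): the dealer, who knows the factorisation of N, locks quickly, while the solver must perform T inherently sequential squarings.

    ```python
    # Toy RSW time-lock puzzle; parameters are far too small to be secure.
    p, q = 1000003, 1000033           # in practice: large secret primes
    N = p * q
    phi = (p - 1) * (q - 1)
    T = 100_000                       # number of sequential squarings
    secret = 424242

    # Puzzle generation (fast, using the trapdoor phi):
    a = 2
    e = pow(2, T, phi)                # reduce the exponent 2^T mod phi(N)
    mask = pow(a, e, N)               # equals a^(2^T) mod N
    puzzle = (secret + mask) % N      # toy one-time-pad "encryption"

    # Solving (slow: T sequential squarings, no trapdoor available):
    v = a
    for _ in range(T):
        v = v * v % N
    assert (puzzle - v) % N == secret
    ```

    C-TLP composes such instances into a chain so that solving one puzzle feeds into the next, letting the server work through them one after another instead of in parallel.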

    Bio: Aydin Abadi is a research fellow at UCL. Prior to that, he held lectureship and research associate positions at the Universities of Gloucestershire and Edinburgh, respectively. While working at the University of Edinburgh, he was a member of the Blockchain Technology Lab, where he conducted research in blockchain and cryptography and developed several (decentralised) applications. He received a Ph.D. in secure multiparty computation (i.e., private set intersection) from the University of Strathclyde, Glasgow.
  • 15 July 2021
    Steffen Becker and Carina Wiesen, Ruhr-Universität Bochum
    Towards Cognitive Obfuscation - Understanding Cognitive Processes of Hardware Reverse Engineers
    [Recording]
    Abstract: Hardware builds the foundation of our modern digital society with its innumerable interconnected electronic devices, and is realized in the form of integrated circuits (ICs), i.e., microchips, which often perform various security-critical functions. They are, thus, attractive targets for attacks and malicious manipulations. In our talk, we focus on a specific method to understand the inner structures and functionalities of microchips – hardware reverse engineering (HRE) – which is applied for legitimate purposes (e.g., the detection of hardware Trojans), but also to illegitimate ends such as intellectual property infringement or the injection of malicious hardware backdoors. As tools that automate the entire HRE process do not yet exist, hardware reverse engineers are forced to make sense of semi-automated HRE steps that are driven by human problem-solving processes and cognitive factors. Consequently, the success of HRE strongly depends on the analysts’ cognitive processes. However, the underlying cognitive processes and factors in HRE have thus far not gained much attention in the research community, and remain largely unexplored and opaque. In our talk, we will provide an overview of the initial results from our interdisciplinary research project. We present a study with hardware reverse engineers at different levels of expertise (i.e., intermediate and expert) who were asked to complete a realistic HRE task involving the removal of an intellectual property protection mechanism from an unknown chip design. A qualitative analysis of 2,445 detailed log entries led to the creation of a hierarchical HRE taxonomy consisting of 103 unique open codes and an in-depth analysis of applied problem-solving strategies. We discuss our findings in the light of recent literature on problem solving and expertise, and outline ideas for future research on quantifying our exploratory results and on developing novel countermeasures impeding HRE.

    Bio: Carina Wiesen is a research assistant at the Educational Psychology Lab in the Institute of Educational Research at the Ruhr-Universität Bochum (supervised by Prof. Dr. Nikol Rummel and Prof. Dr.-Ing. Christof Paar). Currently she is a Ph.D. candidate in the Cluster of Excellence CASA and associated with the Max Planck Institute for Security and Privacy. Her research focuses on human problem-solving processes in hardware reverse engineering (HRE). In particular, she is strongly interested in exploring how engineers analyze an unknown chip design and in deriving first ideas for the development of novel forms of countermeasures impeding HRE.

    Bio: Steffen Becker is a PhD candidate in the Cluster of Excellence CASA at the Ruhr-Universität Bochum and the Max Planck Institute for Security and Privacy, supervised by Prof. Dr.-Ing. Christof Paar and Prof. Dr. Nikol Rummel. In his research, he aims to render hardware more secure against reverse-engineering-based attacks by studying the human factors involved in reverse engineering. Steffen is also interested in end-user perceptions and behavior regarding security and privacy.
  • 17 June 2021
    Andrew Lewis-Pye, LSE
    Consensus in the Permissionless Setting
    [Recording]
    Abstract: In the distributed computing literature, consensus protocols have traditionally been studied in a setting where all participants are known to each other from the start of the protocol execution. In the parlance of the ‘blockchain’ literature, this is referred to as the permissioned setting. What differentiates the most prominent blockchain protocol Bitcoin from these previously studied protocols is that it operates in a permissionless setting, i.e. it is a protocol for establishing consensus over an unknown network of participants that anybody can join, with as many identities as they like in any role. I’ll talk about recent work with Tim Roughgarden in which we describe a formal framework for the analysis of both permissioned and permissionless systems.

    Bio: Andrew Lewis-Pye is a Professor in the Department of Mathematics at the London School of Economics. Prior to coming to LSE, he was a Royal Society University Research Fellow at the University of Leeds, and a Marie-Curie Fellow at the University of Siena. The bulk of his research has been in Computability Theory and Algorithmic Randomness, but he has also worked in fields as diverse as Network Science, Statistical Mechanics and Population Genetics. His most recent research interests are in cryptocurrencies.
  • 10 June 2021
    Nicolas Christin, Carnegie Mellon University
    Cryptocurrency trading at 10: From “Monopoly money” to billion-dollar derivatives markets
    [Recording]
    Abstract: In a little more than a decade, modern cryptocurrencies have gone from a marginal product used mostly by hobbyists, to a viable alternative currency for fringe markets, to supporting an entire class of financial assets. In this talk, I will start by looking at the early days of spot markets (fiat for cryptocurrency), outlining some of the inherent risks in that ecosystem. I will then discuss how, since 2018, the cryptocurrency trading landscape has evolved to a hybrid ecosystem featuring complex and popular derivatives products. I will present results based on our study of BitMEX, one of the first derivatives platforms for leveraged cryptocurrency trading. BitMEX trades on average over 3 billion dollars’ worth of volume per day, and allows users to go long or short Bitcoin with up to 100x leverage. I will discuss how BitMEX products have become the standard across other cryptocurrency derivatives platforms, such as Binance, FTX, or others, which now feature daily trading volumes that, in aggregate, rival those of the New York Stock Exchange. Through an analysis of on-chain forensics, public liquidation events, and a site-wide chat room, I will describe the diverse ensemble of amateur and professional traders that forms this community, and how derivative trading has impacted cryptocurrency asset prices, notably how it has led to dramatic price movements in the underlying spot markets.
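
    To fix intuition for the leverage figures above (simplified arithmetic of our own, not BitMEX's actual margin formulas, which account for fees and maintenance margin): at 100x leverage, a roughly 1% adverse price move exhausts the posted margin, which is why liquidation cascades can move the underlying spot markets so sharply.

    ```python
    # Simplified liquidation arithmetic for a leveraged long (illustrative only).
    entry_price = 30000.0                  # hypothetical BTC entry price
    leverage = 100
    margin_fraction = 1 / leverage         # 1% of the notional posted as margin

    # Ignoring fees and maintenance margin, the position is wiped out once the
    # adverse move equals the posted margin fraction:
    liquidation_price = entry_price * (1 - margin_fraction)
    print(liquidation_price)               # 29700.0, i.e. a 1% drop
    ```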

    Bio: Nicolas Christin is an Associate Professor (with tenure) at Carnegie Mellon University, jointly appointed in the School of Computer Science and the Department of Engineering and Public Policy. He holds a Ph.D. in Computer Science from the University of Virginia, and was a post-doc at UC Berkeley prior to joining Carnegie Mellon in 2005. His research interests are in computer and information systems security. Most of his work is at the boundary of measurements, systems and policy research. He has most recently focused on security analytics, online crime modeling, and economic and human aspects of computer security. His group’s research won several awards (best paper awards at conferences such as ACM CHI or USENIX Security, IEEE Cybersecurity Award, Allen Newell Award for Research Excellence, …).
  • 20 May 2021
    Tim Roughgarden, Columbia University
    Transaction Fee Mechanism Design for the Ethereum Blockchain: An Economic Analysis of EIP-1559
    [Recording]
    Abstract: EIP-1559 is a proposal to make several tightly coupled changes to the Ethereum blockchain’s transaction fee mechanism, including the introduction of variable-size blocks and a burned base fee that rises and falls with demand. This proposal is slated for deployment in the London fork (scheduled for late summer 2021), and will be the biggest economic change made to a major blockchain to date. In this talk we formalize the problem of designing a transaction fee mechanism, taking into account the many idiosyncrasies of the blockchain setting (ranging from off-chain collusion between miners and users to the ease of money-burning). We then situate the specific mechanism proposed in EIP-1559 in this framework and rigorously interrogate its game-theoretic properties. We also touch on two alternative designs that offer different sets of incentive trade-offs.
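
    The mechanism under analysis can be summarised by EIP-1559’s base fee update rule: the (burned) base fee rises when blocks are fuller than target and falls when they are emptier, by at most 12.5% per block. The sketch below uses the constants from the EIP but is our own simplified illustration (the specification itself uses integer arithmetic).

    ```python
    # EIP-1559 base fee update rule (simplified, floating-point version).
    BASE_FEE_MAX_CHANGE_DENOMINATOR = 8    # constant from the EIP

    def next_base_fee(base_fee, gas_used, gas_target):
        # The fee moves proportionally to how far the block is from target,
        # capped at +/- 1/8 (12.5%) per block.
        delta = (base_fee * (gas_used - gas_target)
                 / gas_target / BASE_FEE_MAX_CHANGE_DENOMINATOR)
        return base_fee + delta

    fee = 100.0   # gwei, illustrative
    print(next_base_fee(fee, gas_used=30_000_000, gas_target=15_000_000))  # 112.5
    print(next_base_fee(fee, gas_used=0,          gas_target=15_000_000))  # 87.5
    ```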

    Bio: Tim Roughgarden is a Professor of Computer Science at Columbia University. Prior to joining Columbia, he spent 15 years on the computer science faculty at Stanford, following a PhD at Cornell and a postdoc at UC Berkeley. His research interests include the many connections between computer science and economics, as well as the design, analysis, applications, and limitations of algorithms. For his research, he has been awarded the ACM Grace Murray Hopper Award, the Presidential Early Career Award for Scientists and Engineers (PECASE), the Kalai Prize in Computer Science and Game Theory, the Social Choice and Welfare Prize, the Mathematical Programming Society’s Tucker Prize, and the EATCS-SIGACT Gödel Prize. He was an invited speaker at the 2006 International Congress of Mathematicians, the Shapley Lecturer at the 2008 World Congress of the Game Theory Society, and a Guggenheim Fellow in 2017. He has written or edited ten books and monographs, including Twenty Lectures on Algorithmic Game Theory (2016), Beyond the Worst-Case Analysis of Algorithms (2020), and the Algorithms Illuminated book series (2017-2020).

    Homepage: https://timroughgarden.org
  • 29 April 2021
    Emiliano De Cristofaro, UCL
    Studying Jerks on the Web: A Socio-Technical Perspective
    [Recording]
    Abstract: Over the past two decades, the world has seen an explosion of data. While in the past controlled experiments, surveys, or compilation of high-level statistics allowed us to gain insights into the problems we explored, the Web has brought about a host of new challenges for researchers hoping to gain an understanding of modern socio-technical behavior. First, even discovering appropriate data sources is not a straightforward task. Next, although the Web enables us to collect highly detailed digital information, there are issues of availability and ephemerality: simply put, researchers have no control over what data a third-party platform collects and exposes, and more specifically, no control over how long that data will remain available. Third, the massive scale and multiple formats the data are available in require creative execution of analysis. Finally, modern socio-technical problems, while related to typical social problems, are fundamentally different, and in addition to posing a research challenge, can also cause disruption in researchers’ personal lives. In this talk, I will discuss how our work has overcome the above challenges. Using concrete examples from our research, I will delve into some of the unique datasets and analyses we have performed, focusing on emerging issues like hate speech, coordinated harassment campaigns, and deplatforming, as well as modeling the influence that Web communities have on the spread of disinformation, weaponized memes, etc. Finally, I will discuss how we can design proactive systems to anticipate and predict online abuse and, if time permits, how the “fringe” information ecosystem exposes researchers to attacks by the very actors they study.

    Bio: Emiliano De Cristofaro is a Professor at University College London (UCL), where he heads the Information Security Research Group, a Faculty Fellow at the Alan Turing Institute, and a co-founder of the iDramaLab. Before moving to London, he was a research scientist at Xerox PARC. He received a PhD in Networked Systems from the University of California, Irvine in 2011. Overall, Emiliano does research in the broad security, safety, and privacy areas. These days he mostly works on tackling problems at the intersection of machine learning and security/privacy/safety, as well as understanding and countering information weaponization via data-driven analysis. In 2013 and 2014, he co-chaired the Privacy Enhancing Technologies Symposium; in 2018, the security and privacy track at WWW and the privacy track at CCS; and in 2020, the Truth and Trust Online (TTO) Conference. He has also received best paper awards from NDSS, ACM IMC, and the Cybersafety workshop.

    Homepage: https://emilianodc.com
  • 25 March 2021
    Joseph Tanega, Vrije Universiteit Brussel
    NFT Art, Digital Asset-Backed Securities, and Universal Constructions in The Mathematical Philosophy of Law and Finance
    Virtual

  • 11 March 2021
    Giovanni Cherubin, The Alan Turing Institute
    Black-box leakage estimation, and some thoughts on its applicability to membership inference and synthetic data
    Virtual

  • 4 March 2021
    Arthur Gervais, Imperial College London
    Flash Loans for Fun and Profit
    Virtual

  • 18 February 2021
    Benjamin Alexander Steer, Queen Mary University
    Moving with the Times: Investigating the Alt-Right Network Gab with Temporal Interaction Graphs
    Virtual

  • 11 February 2021
    Craig Costello, Microsoft
    Finding twin smooth integers for isogeny-based cryptography
    Virtual

2020

  • 17 December 2020
    Michael Veale, UCL Law
    The use and (potential) abuse of privacy-preserving infrastructures
    Virtual
  • 10 December 2020
    Chelsea Komlo, University of Waterloo
    Introducing FROST: Flexible Round-Optimized Schnorr Threshold Signatures
    Virtual
  • 3 December 2020
    Ryan Castelucci, White Ops
    BitCry
    Virtual
  • 26 November 2020
    Arianna Trozze, UCL
    Explaining Prosecution Outcomes for Cryptocurrency-based Financial Crimes
    Virtual
  • 19 November 2020
    Henry Skeoch, UCL
    Cyber-insurance: what is the right price?
    Virtual
  • 12 November 2020
    Antonis Papasavva, UCL
    “Go back to Reddit!”: Detecting Hate and Analyzing Narratives of Online Fringe Communities
    Virtual
  • 5 November 2020
    Alin Tomescu, VMware
    Authenticated Data Structures for Stateless Validation and Transparency Logs
    Virtual
  • 28 May 2020
    Henry Corrigan-Gibbs, EPFL
    Private Information Retrieval with Sublinear Online Time
    Virtual
  • 7 May 2020
    Fabio Pierazzi, King’s College London
    Intriguing Properties of Adversarial ML Attacks in the Problem Space
    Virtual
  • 11 March 2020
    Yang Zhang, CISPA Helmholtz Center for Information Security
    Towards Understanding Privacy Risks of Machine Learning Models
    Malet Place Engineering Building 6.12A
  • 5 March 2020
    Gene Tsudik, UC Irvine
    Reconciling security and real-time constraints for simple IoT devices
    Main Quad Pop Up 101

2019

  • 12 December 2019
    Mathieu Baudet, Facebook Calibra
    LibraBFTv2: Optimistically-linear BFT Consensus with Concrete Latency Bounds
    Roberts 421
  • 21 November 2019
    Ilias Leontiadis, Samsung AI
    Learnings from industrial research on privacy and machine learning on wireless networks
    Roberts 421
  • 8 November 2019
    Ian Goldberg, University of Waterloo
    Walking Onions: Scaling Anonymity Networks while Protecting Users
    Malet Place Engineering Building 1.03
  • 7 November 2019
    Enrico Mariconti, UCL
    “You Know What to Do”: Proactive Detection of YouTube Videos Targeted by Coordinated Hate Attacks
    Drayton House B03 Ricardo LT
  • 31 October 2019
    Bristena Oprisanu, UCL
    How Much Does GenoGuard Really “Guard”? An Empirical Analysis of Long-Term Security for Genomic Data
    Drayton House B03 Ricardo LT
  • 17 October 2019
    Kirill Nikitin, EPFL
    Reducing Metadata Leakage from Encrypted Files and Communication with PURBs
    Drayton House B03 Ricardo LT
  • 10 October 2019
    Grace Cassey, CyLon
    First Steps Towards Building a Cybersecurity Spinout
    Drayton House B03 Ricardo LT
  • 3 October 2019
    Nicolas Kourtellis, Telefonica Research
    Online user tracking and personal data leakage in the big data era
    Roberts Building G06
  • 5 September 2019
    Simon Parkin and Albesa Demjaha, UCL
    “You’ve left me no choices”: Security economics to inform behaviour intervention support in organizations
    Roberts 309
  • 15 August 2019
    Guillermo Suarez de Tangil Rotaeche, King’s College London
    A First Look at the Crypto-Mining Malware Ecosystem: A Decade of Unrestricted Wealth
    Roberts 309
  • 8 August 2019
    Savvas Zannettou, Cyprus University of Technology
    Towards Understanding the Behavior of State-Sponsored Trolls and their Influence on the Web
    Roberts 309
  • 8 August 2019
    Haaroon Yousaf, UCL
    Tracing Transactions Across Cryptocurrency Ledgers
    Roberts 309
  • 1 August 2019
    Simon Parkin, UCL
    Of Two Minds about Two-Factor: Understanding Everyday FIDO U2F Usability through Device Comparison and Experience Sampling
    Roberts 309
  • 25 July 2019
    Matthew Wixey, UCL
    Sound Effects: Exploring Acoustic Cyber-Weapons
    Roberts 309
  • 18 July 2019
    Prof. Dr. Christian Hammer, Uni Potsdam
    Security and Privacy Issues due to Android Intents
    Roberts 309
  • 4 July 2019
    Alexandros Mittos, UCL
    Systematizing Genome Privacy Research: A Privacy-Enhancing Technologies Perspective
    Roberts 309
  • 28 June 2019
    Battista Biggio, University of Cagliari
    Wild Patterns: Ten Years after the Rise of Adversarial Machine Learning
    Alan Turing Institute, Jack Good Meeting Room
  • 27 June 2019
    Colin Ife, UCL
    Waves of Malice: A Longitudinal Measurement of the Malicious File Delivery Ecosystem on the Web
    Roberts 309
  • 12 June 2019
    Houman Homayoun, George Mason University
    Towards Hardware Cybersecurity
    Roberts 309
  • 10 June 2019
    Matthew Wright, Rochester Institute of Technology
    Deep Fingerprinting: Undermining Website Fingerprinting Defenses with Deep Learning
    Roberts 309
  • 30 May 2019
    Nissy Sombatruang, UCL
    The Continued Risks of Public Wi-Fi and Why Users Keep Using It
    Roberts 309
  • 16 May 2019
    Andrei Sabelfeld, Chalmers University of Technology
    Securing IoT Apps
    Roberts 309
  • 9 May 2019
    Ilia Shumailov, University of Cambridge
    Towards Adversarial Sample Detection in Constrained Devices, Key Embedding and Neural Cryptography
    Roberts 309
  • 2 May 2019
    Shi Zhou, UCL
    Twitter Botnet Detection – Star Wars and the Failure of Supervised Learning
    Roberts 309
  • 25 April 2019
    Adria Gascon, Alan Turing Institute
    Privacy-Preserving Data Analysis: Proofs, Algorithms, and Systems
    Roberts 309
  • 28 March 2019
    Enrico Mariconti, UCL
    One Does Not Simply Walk Into Mordor: A PhD Journey in Malicious Behavior Detection
    Roberts 309
  • 28 February 2019
    Jonathan Lusthaus, Oxford University
    Industry of Anonymity: Inside the Business of Cybercrime
    Roberts 309
  • 14 February 2019
    Simon Parkin, UCL
    Device Purchase as an Opportune Moment for Security Behavior Change / Perceptions and Reality of Windows 10 Home Edition Update Features
    Roberts 309
  • 7 February 2019
    Mustafa Al-Bassam, UCL
    Fraud Proofs: Maximising Light Client Security and Scaling Blockchains with Dishonest Majorities
    Roberts 309
  • 31 January 2019
    Vid Simoniti, University of Liverpool
    Deception and Politics Online: A Philosophical Approach
    Roberts 309
  • 29 January 2019
    Alvaro Garcia-Perez, IMDEA Software Institute
    Federated Byzantine Quorum Systems
    Roberts 309
  • 17 January 2019
    Soteris Demetriou, Imperial College London
    Security and Privacy Challenges in User-Facing, Complex, Interconnected Environments
    Roberts 309

2018

  • 17 December 2018
    Nick Spooner, UC Berkeley
    Aurora: Transparent zkSNARKs for R1CS
    Roberts 309
  • 13 December 2018
    Mark Goodwin, Mozilla
    Fixing Revocation: How We Failed and How We’ll Succeed
    Roberts 309
  • 6 December 2018
    Konstantinos Chalkias, R3
    Hash-Based Post-Quantum Signatures Tailored to Blockchains
    Roberts 309
  • 15 November 2018
    Lucky Onwuzurike, UCL
    Measuring and Mitigating Security and Privacy Issues on Android Applications
    Roberts 309
  • 8 November 2018
    Emiliano De Cristofaro, UCL
    On the Origins of Memes by Means of Fringe Web Communities
    Roberts 309
  • 1 November 2018
    Didem Özkul, UCL
    Location (un)intelligence: Politics and limitations of location-based profiling
    Roberts 309
  • 11 October 2018
    Ranjan Pal, University of Cambridge
    Privacy Trading in the Apps and IoT Age: Markets and Computation
    Roberts 309
  • 4 October 2018
    Jonathan Spring, UCL
    Towards Scientific Incident Response
    Roberts 309
  • 30 August 2018
    Apoorvaa Deshpande, Brown University
    Fully Homomorphic NIZK Proofs
    Roberts 421
  • 23 August 2018
    Lucky Onwuzurike, UCL
    A Family of Droids–Android Malware Detection via Behavioral Modeling: Static vs Dynamic Analysis
    Roberts 421
  • 23 August 2018
    Neema Kotonya, UCL
    Of Wines and Reviews: Measuring and Modeling the Vivino Wine Social Network
    Roberts 421
  • 9 August 2018
    Luca Melis, UCL
    Public PhD Talk: Building and Evaluating Privacy-Preserving Data Processing Systems
    Roberts 421
  • 2 August 2018
    Lina Dencik, Cardiff University
    Understanding data in relation to social justice
    Roberts 421
  • 19 July 2018
    Sarah Meiklejohn and Mathilde McBride, UCL
    When technology and policy conflict: Distributed Ledgers and the GDPR right to be forgotten
    Roberts 421
  • 11 July 2018
    Farinaz Koushanfar, UC San Diego
    Deep Learning on Private Data
    MPEB 1.03
  • 9 July 2018
    Lujo Bauer, Carnegie Mellon University
    Back to the Future: From IFTTT to XSS, it’s all about the information-flow lattice
    Malet 1.03
  • 5 July 2018
    Kat J. Cecil, UCL
    Talking whiteness: Black women’s narratives of working in UK Higher Education
    Roberts 421
  • 14 June 2018
    Leonie Tanczer, UCL
    Gender and IoT: Discussing security principles for victims of Internet of Things (IoT)-supported tech abuse
    Roberts 421
  • 7 June 2018
    Gareth Tyson, Queen Mary University of London
    Facebook (A)Live? Are live social broadcasts really broadcasts?
    Roberts 421
  • 31 May 2018
    Ralph Holz, University of Sydney
    Are we there yet? HTTPS security 7 years after DigiNotar
    Roberts 421
  • 17 May 2018
    Andelka Phillips, Trinity College Dublin
    Of Contracts and DNA - Reading the fine print when buying your genetic self online
    Roberts 421
  • 10 May 2018
    Jonathan Spring, UCL
    Meta-Issues in Information Security: Let’s talk about publication bias
    Roberts 421
  • 3 May 2018
    Luca Viganò, King’s College London
    A Formal Approach to Cyber-Physical Attacks
    Roberts 421
  • 30 April 2018
    Jeremy Blackburn, University of Alabama at Birmingham
    Data-driven Research for Advanced Modeling and Analysis or: How I Learned to Stop Worrying and Love the DRAMA
    MPEB 1.20
  • 12 April 2018
    Jonathan Bootle, UCL
    Cryptanalysis of Compact-LWE
    Roberts 421
  • 5 April 2018
    Mustafa Al-Bassam, UCL
    Chainspace: A Sharded Smart Contracts Platform
    Roberts 421
  • 22 March 2018
    Shehar Bano, UCL
    Meta-Issues in Information Security: Ethical Issues in Network Measurement
    Main Quad Pop-Up 102
  • 15 March 2018
    Paul Grubbs, Cornell University
    Message Franking: From Invisible Salamanders to Encryptment
    Main Quad Pop-Up 102
  • 8 March 2018
    Kasper Bonne Rasmussen, Oxford University
    Device Pairing at the Touch of an Electrode
    Main Quad Pop-Up 102
  • 1 March 2018
    Apostolos Pyrgelis, UCL
    Knock Knock, Who’s There? Membership Inference on Aggregate Location Data
    Main Quad Pop-Up 102
  • 1 March 2018
    Kit Smeets, UCL
    Rounded Gaussians - Fast and Secure Constant-Time Sampling for Lattice-Based Crypto
    Main Quad Pop-Up 102
  • 8 February 2018
    Jaya Klara Brekke, Durham University
    Tracing Trustlessness
    Main Quad Pop-Up 102
  • 1 February 2018
    Ben Livshits, Imperial College London
    Research Challenges in a Modern Web Browser
    Main Quad Pop-Up 102
  • 25 January 2018
    Tristan Caulfield, UCL
    Meta-Issues in Information Security: fake news as a security incident
    Main Quad Pop-Up 102
  • 18 January 2018
    Jamie Hayes, UCL
    Adversarial Machine Learning
    Main Quad Pop-Up 102
  • 11 January 2018
    Mark Handley, UCL
    Meltdown and Spectre vulnerabilities: What went wrong?
    Roberts 508

2017

  • 14 December 2017
    Benedikt Bünz, Stanford University
    Bulletproofs: Short Proofs for Confidential Transactions and More
    Roberts 508
  • 7 December 2017
    Luca Melis, UCL
    Differentially Private Mixture of Generative Neural Networks
    Roberts 508
  • 30 November 2017
    Steven Murdoch, UCL
    Working with the media
    Roberts 508
  • 23 November 2017
    Jonathan Bootle, UCL
    Linear-Time Zero-Knowledge Proofs for Arithmetic Circuit Satisfiability
    Roberts 508
  • 16 November 2017
    Alice Hutchings, University of Cambridge
    Cybercrime in the sky
    Roberts 508
  • 9 November 2017
    Mobin Javed, UC Berkeley
    Mining Large-Scale Internet Data to Find Stealthy Abuse
    Roberts 508
  • 3 November 2017
    Alexander Koch, Karlsruhe Institute of Technology
    The Minimum Number of Cards in Practical Card-based Protocols
    MPEB 6.12
  • 26 October 2017
    Vincent Primault, UCL
    Evaluating and Configuring Location Privacy Protection Mechanisms
    Roberts 508
  • 19 October 2017
    Changyu Dong, Newcastle University
    Betrayal, Distrust, and Rationality: Smart Counter-Collusion Contracts for Verifiable Cloud Computing
    Roberts 508
  • 12 October 2017
    Arthur Gervais, ETH Zurich
    On the Security and Scalability of Proof of Work Blockchains
    Roberts 508
  • 5 October 2017
    Raphael Toledo, UCL
    Mix-ORAM: Towards Delegated Shuffles
    Roberts 508
  • 5 October 2017
    Ania Piotrowska, UCL
    AnNotify: A Private Notification Service
    Roberts 508
  • 28 September 2017
    Nicolas Christin, Carnegie Mellon University
    Bridging large-scale data collection and analysis
    Roberts G08
  • 14 September 2017
    Jonathan Spring, UCL
    Practicing a Science of Security: A Philosophy of Science Perspective
    Roberts G08
  • 24 August 2017
    François Labrèche, École Polytechnique de Montréal
    POISED: Spotting Twitter Spam Off the Beaten Paths
    Gordon Street (25)
  • 10 August 2017
    Ian Miers, Johns Hopkins University
    ZCash: past, present, and future of an Anonymous Bitcoin-like Crypto-Currency
    Gordon Street (25)
  • 3 August 2017
    Patrick McCorry, UCL
    Applications of the Blockchain using Cryptography
    Gordon Street (25)
  • 31 July 2017
    Sanaz Taheri Boshrooyeh, Koç University
    Inonymous: Anonymous Invitation-Based System
    Roberts
  • 31 July 2017
    Devris Isler, Koç University
    Threshold Single Password Authentication
    Roberts
  • 20 July 2017
    Prof Adam O’Neill, Georgetown University
    New Results on Secure Outsourced Database Storage
    Gordon Street (25)
  • 13 July 2017
    Apostolos Pyrgelis, UCL
    What Does The Crowd Say About You? Evaluating Aggregation-based Location Privacy
    Gordon Street (25)
  • 6 July 2017
    Prof Negar Kiyavash, UIUC
    Adversarial machine learning: the case of optimal attack strategies against recommendation systems
    Gordon Street (25)
  • 22 June 2017
    Prof Jintai Ding, University of Cincinnati
    Post-Quantum Key Exchange
    Gordon Street (25)
  • 15 June 2017
    Guillermo Suárez-Tangil, UCL
    How to deal with that many apps: towards the use of lightweight techniques on the detection of mobile malware
    Gordon Street (25)
  • 6 June 2017
    Prof Adam Doupé, Arizona State University
    The Effectiveness of Telephone Phishing Scams and Possible Solutions
    MPEB 1.02
  • 1 June 2017
    Marjori Pomarole, Facebook
    Automatic Learning and Enforcement of Authorization Rules in Online Social Networks
    Gordon Street (25)
  • 4 May 2017
    Ruba Abu-Salma, UCL
    Obstacles to the Adoption of Secure Communication Tools
    Gordon Street (25)
  • 27 April 2017
    Anna Squicciarini, Penn State University
    Toward Controlling Malicious Users in Online Social Platforms
    Roberts 309
  • 6 April 2017
    Paul Simmonds, Global Identity Foundation
    Fix digital identity! Stop the bad guys
    Gordon Street (25)
  • 27 March 2017
    Brian Witten, Symantec
    Emerging Security Research at Symantec Research Labs
    MPEB 103
  • 23 March 2017
    Shehar Bano, UCL
    Characterization of Internet Censorship from Multiple Perspectives
    Gordon Street (25)
  • 16 March 2017
    Prof Foteini Baldimtsi, UCL
    TumbleBit: An Untrusted Bitcoin-Compatible Anonymous Payment Hub
    Gordon Street (25)
  • 9 March 2017
    Katriel Cohn-Gordon, Oxford University
    Post-compromise Security and the Signal Protocol
    Gordon Street (25)
  • 2 March 2017
    Hamish, UK Civil Service
    Perspectives on the Investigatory Powers Act
    Gordon Street (25)
  • 23 February 2017
    Mohammad Hajiabadi, UCL
    Limitations of black-box constructions in cryptography
    Gordon Street (25)
  • 16 February 2017
    Joanne Woodage, Royal Holloway, University of London
    Backdoors in Pseudorandom Number Generators: Possibility and Impossibility Results
    Gordon Street (25)
  • 9 February 2017
    Enrico Mariconti, UCL
    MaMaDroid: Detecting Android Malware by Building Markov Chains of Behavioral Models
    Gordon Square (16-18) 101
  • 9 February 2017
    Arman Khouzani, QMUL
    Universally Optimal Design For Minimum Information Leakage
    Gordon Square (16-18) 101
  • 2 February 2017
    Prof Carlos Cid, Royal Holloway, University of London
    A Model for Secure and Mutually Beneficial Software Vulnerability Sharing in Competitive Environments
    Gordon Street (25)
  • 26 January 2017
    Gerard Briscoe, UCL
    Designing Digital Cultures For Preferable Futures
    Gordon Street (25)
  • 19 January 2017
    Vasilios Mavroudis, UCL
    On the Privacy and Security of the Ultrasound Ecosystem
    Gordon Street (25)

2016

  • 7 December 2016
    Lorenzo Cavallaro, Royal Holloway, University of London
    CopperDroid: Automatic Android Malware Analysis and Classification
    Anatomy G29
  • 1 December 2016
    Alexandra Silva, UCL
    Automata learning - infinite alphabets and application to verification
    Gordon Street (25)
  • 24 November 2016
    Peter Scholl, University of Bristol
    Identifying Cheaters in Secure Multi-Party Computation
    Gordon Street (25)
  • 24 November 2016
    Gunes Acar, KU Leuven
    Advanced online tracking: A look into the past and the future
    Gordon Street (25)
  • 17 November 2016
    Christophe Petit, Oxford University
    Post-quantum cryptography based on supersingular isogeny problems?
    Gordon Street (25)
  • 9 November 2016
    Mary Maller, UCL
    Déjà Q All Over Again: Tighter and Broader Reductions of q-Type Assumptions
    Roberts 309
  • 9 November 2016
    Jeremiah Onaolapo, UCL
    Understanding The Use Of Leaked Webmail Credentials
    Roberts 309
  • 3 November 2016
    N Asokan, Aalto University
    Technology Transfer from Security Research Projects: A Personal Perspective
    Gordon Street (25)
  • 27 October 2016
    Apostolos Pyrgelis, UCL
    Privacy-Friendly Mobility Analytics using Aggregate Location Data
    Gordon Street (25)
  • 13 October 2016
    Kostas Chatzikokolakis, LIX, École Polytechnique
    Geo-indistinguishability: A Principled Approach to Location Privacy
    Gordon Street (25)
  • 29 September 2016
    Sune K. Jakobsen, UCL
    Cryptogenography: Anonymity without trust
    Roberts 110
  • 22 September 2016
    Lukasz Olejnik, UCL
    Designing Web with Privacy
    MPEB 1.02
  • 15 September 2016
    Pengfei Wang, National University of Defense Technology
    How Double-Fetch Situations turn into Double-Fetch Vulnerabilities: A Study of Double Fetches in the Linux Kernel
    Engineering Front Executive Suite 103
  • 15 September 2016
    Liqun Chen, HP
    Cryptography in Practice
    Engineering Front Executive Suite 103
  • 5 August 2016
    Sanjay K. Jha, University of New South Wales
    A Changing Landscape: Securing The Internet Of Things (IoT)
    MPEB 1.02
  • 28 July 2016
    Delphine Reinhardt, University of Bonn
    Roberts 110
  • 26 July 2016
    Gilles Barthe, IMDEA Software Institute
    Language-based techniques for cryptography and privacy
    Computer Science Distinguished Lecture, MPEB 1.02*
  • 14 July 2016
    Yvo Desmedt, UCL, UT Dallas
    Internet Voting on Insecure Platforms
    Roberts 110
  • 7 July 2016
    Sebastian Meiser, UCL
    Your Choice MATor(s): Large-scale Quantitative Anonymity Assessment of Tor Path Selection Algorithms against Structural Attacks
    MPEB 1.04
  • 7 July 2016
    Jonathan Bootle, UCL
    How to do Zero Knowledge from Discrete Logs in under 7kB
    MPEB 1.04
  • 30 June 2016
    Raphael Toledo, UCL
    Roberts 110
  • 23 June 2016
    Maura Paterson, Birkbeck
    Algebraic Manipulation Detection Codes and Generalized Difference Families
    Roberts 422
  • 16 June 2016
    Simon Parkin, UCL
    Productive Security: A scalable methodology for analysing employee security behaviours
    Roberts 309
  • 10 June 2016
    Eran Toch, Tel Aviv University
    Not Even Past: Longitudinal Privacy in Online Social Networks
    Roberts 508
  • 9 June 2016
    Ingolf Becker, UCL
    International Comparison of Bank Fraud Reimbursement: Customer Perceptions and Contractual Terms
    Roberts 110
  • 26 May 2016
    David Bernhard, Bristol University
    Ballot Privacy
    Roberts 110
  • 19 May 2016
    Panagiotis Andriotis, UCL
    Digital Forensics: Retrieving Evidence from Mobile Devices
    Roberts 110
  • 12 May 2016
    Prof Aris Pagourtzis, NTUA
    Reliable Message Transmission Despite Limited Knowledge and Powerful Adversaries
    Roberts 110
  • 28 April 2016
    Pyrros Chaidos, UCL
    Efficient Zero-Knowledge Arguments for Arithmetic Circuits in the Discrete Log Setting
    Roberts 110
  • 21 April 2016
    Mirco Musolesi, UCL
    Identity and Identification in the Smartphone Era
    Roberts 110
  • 14 April 2016
    Robin Wilton, The Internet Society
    Trust, Ethics and Autonomy - the ethics of the Internet
    Roberts 422
  • 7 April 2016
    Prof Kenny Paterson, Royal Holloway, University of London
    Cryptographic Vulnerability Disclosure - The Good, The Bad, and The Ugly
    Roberts 110
  • 31 March 2016
    Gábor Gulyás, Inria
    Taking Re-identification Attacks of Social Networks to the Next Level
    Roberts 110
  • 24 March 2016
    Prof Jens Groth, UCL
    Cryptography for Eagles
    Roberts 110
  • 17 March 2016
    Wouter Lueks, University of Nijmegen
    Distributed encryption and applications
    Roberts 110
  • 10 March 2016
    Bingsheng Zhang, Lancaster University
    On Secure E-voting systems — End-2-end Verifiability, Privacy, Scalability, Accountability
    Roberts 110
  • 9 March 2016
    Ben Livshits, Microsoft Research Redmond
    Finding Malware at Web Scale
    CS Seminar, Medawar G02 Watson LT*
  • 3 March 2016
    Prof Aurélien Francillon, Eurecom
    Trust, but verify: why and how to establish trust in embedded devices
    Roberts 110
  • 25 February 2016
    Prof Fabio Massacci, University of Trento
    Cyberinsurance: good for your company, bad for your country?
    MPEB 6.12
  • 18 February 2016
    Luca Melis, UCL
    Efficient Private Statistics with Succinct Sketches
    Roberts 110
  • 18 February 2016
    Sheharbano Khattak, University of Cambridge
    Do You See What I See? Differential Treatment of Anonymous Users
    Roberts 110
  • 15 February 2016
    Cecilie Oerting, UCL
    Shining Light on Darknet: Does anonymity disinhibit user behavior on underground marketplaces?
    Roberts 110
  • 11 February 2016
    Tristan Caulfield, UCL
    Discrete Choice, Social Interaction, and Policy in Encryption Technology Adoption
    Roberts 110
  • 11 February 2016
    Simon Parkin, UCL
    Better the Devil You Know: A User Study of Two CAPTCHAs and a Possible Replacement Technology
    Roberts 110
  • 11 February 2016
    Simon Parkin, UCL
    An Exploratory Study of User Perceptions of Payment Methods in the UK and the US
    Roberts 110
  • 28 January 2016
    Jonathan Spring, UCL
    Avoiding pseudoscience: prudence, logic, and verification in studying information security
    Roberts 110
  • 21 January 2016
    Marcel Keller, Bristol University
    Malicious-for-free OT Extension and Its Application to MPC
    Roberts 110
  • 14 January 2016
    Anil Madhavapeddy, University of Cambridge
    Unikernels: Library operating systems for the masses
    Roberts 110

2015

  • 17 December 2015
    Peter Ryan, University of Luxembourg
    Voting with Transparent Verification and Coercion Mitigation
    MPEB 6.12
  • 10 December 2015
    Kasper Bonne Rasmussen, University of Oxford
    Efficient and Scalable Oblivious User Matching
    Birkbeck B30
  • 3 December 2015
    Bruce Christianson, University of Hertfordshire
    Implementing Impossible Requirements - changing the role of trust in secure systems design
    Torrington (1-19) 115 Galton LT
  • 27 November 2015
    Alexandros Kapravelos, NCSU
    Analyzing and understanding in depth malicious browser extensions
    MPEB 6.12
  • 26 November 2015
    Prof Bhavani Thuraisingham, University of Texas at Dallas
    Cloud-Centric Assured Information Sharing
    Birkbeck B30
  • 19 November 2015
    Sergio Maffeis, Imperial College
    Language-based Web security
    Birkbeck B30
  • 17 November 2015
    Geoffroy Couteau, ENS
    Encryption Switching Protocols
    Roberts 421
  • 12 November 2015
    Seny Kamara, Microsoft Research
    Inference Attacks on Property-Preserving Encrypted Databases
    MPEB 1.02
  • 12 November 2015
    Melissa Chase, Microsoft Research
    Algebraic MACs and Lightweight Anonymous Credentials
    MPEB 1.02
  • 6 November 2015
    Radu Sion, Stony Brook University
    Privacy, Security, and Energy in Modern Clouds. Three Buzzwords in A Boat: The Amusing Adventures of a Naive Academic on Wall Street
    MPEB 6.12
  • 30 October 2015
    Prof Susanne Bødker, Aarhus University
    Experiencing Security
    MPEB 1.02
  • 29 October 2015
    Benoit Libert, ENS Lyon
    Fully secure functional encryption for linear functions from standard assumptions
    Birkbeck B30
  • 15 October 2015
    Thomas Peters, ENS Paris
    Short Group Signatures via Structure-Preserving Signatures: Standard Model Security from Simple Assumptions
    Birkbeck B30
  • 8 October 2015
    Henrik Ziegeldorf, RWTH Aachen University
    Secure and Anonymous Decentralized Bitcoin Mixing
    Birkbeck B30
  • 2 October 2015
    Khilan Gudka, University of Cambridge
    Clean Application Compartmentalization with SOAAP
    Roberts 309
  • 1 October 2015
    Prof Chris Mitchell, Royal Holloway
    Real-world security analyses of OAuth 2.0 and OpenID Connect
    Birkbeck B30
  • 24 September 2015
    Dr. Ana Salagean, Loughborough University
    Higher order differential attacks on stream ciphers
    MPEB 1.02
  • 18 September 2015
    Pyrros Chaidos, UCL
    Short Accountable Ring Signatures Based on DDH
    MPEB 1.02
  • 10 September 2015
    Sarah Meiklejohn, UCL
    Centrally Banked Cryptocurrencies
    Birkbeck B30
  • 27 August 2015
    Odette Beris, UCL
    The Behavioural Security Grid (BSG): Risk and Emotion
    Roberts 422
  • 27 August 2015
    Steve Dodier-Lazaro, UCL
    Appropriation and Principled Security
    Roberts 422
  • 27 August 2015
    Simon Parkin, UCL
    Appropriation of security technologies in the workplace
    Roberts 422
  • 20 August 2015
    Oliver Hohlfeld, RWTH Aachen University
    An Internet Census Taken by an Illegal Botnet
    Roberts 421
  • 20 August 2015
    Dali Kaafar, NICTA
    How smart is our addiction? Some experimental analyses of Security and Privacy in the mobile apps ecosystem
    Roberts 421
  • 13 August 2015
    Steve Dodier-Lazaro, UCL
    Research tools for remote user studies within UCL ISRG
    Roberts 309
  • 30 July 2015
    Ingolf Becker, UCL
    Applying Sentiment Analysis to Identify Different Conceptions of Security and Usability
    MPEB 1.02
  • 30 July 2015
    Kat Krol, UCL
    “Too taxing on the mind!” Authentication grids are not for everyone
    MPEB 1.02
  • 23 July 2015
    Gareth Tyson, Queen Mary
    Is your VPN keeping you safe?
    MPEB 1.02
  • 9 July 2015
    Andreas M. Antonopoulos, University of Nicosia
    Consensus algorithms, blockchain technology and bitcoin
    Roberts G06 Sir Ambrose Fleming LT
  • 2 July 2015
    Gennaro Parlato, University of Southampton
    Security Analysis of Self-Administrated Role-Based Access Control through Program Verification
    South Wing 9 Garwood LT
  • 25 June 2015
    Mauro Migliardi, University of Padova
    Green, Energy-Aware Security? What are we talking about? And Why?
    MPEB 1.02
  • 18 June 2015
    Elisabeth Oswald, University of Bristol
    Making the most of leakage
    Roberts 421
  • 11 June 2015
    Lucky Onwuzurike, UCL
    Danger is My Middle Name - Experimenting with SSL Vulnerabilities on Android Apps
    Torrington (1-19) 115 Galton LT
  • 11 June 2015
    Emiliano De Cristofaro, UCL
    Controlled Data Sharing for Collaborative Predictive Blacklisting
    Torrington (1-19) 115 Galton LT
  • 4 June 2015
    Ben Smith, École Polytechnique
    (Slightly) more practical quantum factoring
    MPEB 1.20
  • 3 June 2015
    Matthew Smith, University of Bonn
    System Security meets Usable Security – Administrators and Developers are humans too
    MPEB 1.02
  • 28 May 2015
    Jamie Hayes, UCL
    Guard Sets for Onion Routing
    MPEB 1.02
  • 28 May 2015
    Angela Sasse, UCL
    Current and emerging attacks on banking systems: report from a practitioner workshop
    MPEB 1.02
  • 21 May 2015
    Sandra Scott-Hayward, Queen’s University Belfast
    Design for deployment of Secure, Robust, and Resilient Software-Defined Networks
    MPEB 1.02
  • 21 May 2015
    Michiel Kosters, Nanyang Technological University
    The last fall degree and an application to HFE
    MPEB 1.02
  • 14 May 2015
    Mariana Raykova, SRI International
    Candidate Indistinguishability Obfuscation and Applications
    Roberts 309
  • 14 May 2015
    Marco Cova, Lastline, Inc.
    Analyzing Malware at Scale
    Roberts 309
  • 12 May 2015
    Luciano Bello, Chalmers Technical University
    Information-flow tracking for web technologies
    MPEB 1.03
  • 7 May 2015
    Martin Albrecht, RHUL
    So, how hard is this LWE thing, anyway?
    MPEB 1.03
  • 30 April 2015
    Steve Brierley, University of Cambridge
    The impact of quantum computing on cryptography
    Roberts 309
  • 23 April 2015
    Prof Mark Ryan, University of Birmingham
    Du-Vote: Remote Electronic Voting with Untrusted Computers
    MPEB 1.03
  • 16 April 2015
    Emiliano De Cristofaro, UCL
    The Genomics Revolution: Innovation Dream or Privacy Nightmare?
    MPEB 1.03
  • 9 April 2015
    Essam Ghadafi, UCL
    Decentralized Traceable Attribute-Based Signatures
    MPEB 1.03
  • 26 March 2015
    Pyrros Chaidos, UCL
    Making Sigma-protocols Non-interactive and Building Referendums without Random Oracles
    MPEB 1.02
  • 19 March 2015
    Paul Burton, University of Bristol
    DataSHIELD: taking the analysis to the data not the data to the analysis
    MPEB 1.03
  • 12 March 2015
    Markulf Kohlweiss, Microsoft Research
    Triple Handshake: Can cryptography, formal methods, and applied security be friends?
    MPEB 1.03
  • 5 March 2015
    J. Clark, G. Eydmann, Wynyard Group
    Wynyard Group – Advanced Crime Analytics for Foreign Fighters Analysis
    MPEB 1.03
  • 26 February 2015
    Nicolas Courtois, UCL
    Bad randoms, key management and how to steal bitcoins
    MPEB 1.03
  • 19 February 2015
    Emil Lupu, Imperial College
    On the Challenges of Detecting and Diagnosing Malicious Data Injections
    MPEB 1.03
  • 12 February 2015
    Ian Goldberg, University of Waterloo
    Ibis: An Overlay Mix Network for Microblogging
    MPEB 1.03
  • 5 February 2015
    David Clark, UCL
    Detecting Malware with Information Complexity
    MPEB 1.03
  • 29 January 2015
    K. Krol, I. Kirlappos, UCL
    Upcoming papers at NDSS Usable Security Workshop (USEC’15)
    MPEB 1.03
  • 22 January 2015
    Ioannis Papagiannis, Facebook
    Uncovering Large Groups of Active Malicious Accounts in Online Social Networks
    MPEB 1.03
  • 19 January 2015
    Ben Livshits, Microsoft Research
    PrePose: Security and Privacy for Gesture-Based Programming
    MPEB 6.12
  • 15 January 2015
    Tristan Caulfield, UCL
    Modelling Security Policy
    MPEB 1.03
  • 12 January 2015
    Alptekin Küpçü, Koç University
    Single Password Authentication
    MPEB 6.12

2014

  • 18 December 2014
    Jon Crowcroft, University of Cambridge
    Can we build a Europe-only cloud, and should we?
    Roberts 110
  • 11 December 2014
    Ian Brown, Oxford Internet Institute
    The feasibility of transatlantic privacy-protective standards for surveillance
    Roberts 110
  • 4 December 2014
    Nik Whitfield, Panaseer
    Adventures in cyber risk metrics and anomaly detection for Insider and APT
    Roberts 110
  • 27 November 2014
    George Danezis, UCL
    An Automated Social Graph De-anonymization Technique
    Roberts 110
  • 20 November 2014
    Vasileios Routsis, UCL
    The evolution of online self-disclosure and privacy ethics: Normalising modern-day surveillance
    Roberts 110
  • 13 November 2014
    Mike Bond, Cryptomathic
    EMV Pre-Play and Relay Attacks - A New Frontier
    Roberts 110
  • 31 October 2014
    Prof Stefan Dziembowski, University of Warsaw
    Bitcoin contracts — digital economy without lawyers?
    MPEB 1.02
  • 16 October 2014
    Emiliano De Cristofaro, UCL
    What’s wrong with the Interwebs? Recent results measuring Web Filtering and Facebook Like Fraud
    Roberts 110
  • 9 October 2014
    Giovanni Vigna, UC Santa Barbara
    Eliciting maliciousness: from exploit toolkits to evasive malware
    Roberts 110
  • 18 September 2014
    Adrian Perrig, ETH Zurich
    PoliCert: A Highly Resilient Public-Key Infrastructure
    MPEB 6.12
  • 12 September 2014
    Andelka Phillips, Oxford University
    Genetic Testing Goes Online: An overview of the industry and the challenges for regulators
    MPEB 6.12
  • 8 September 2014
    Martin Emms, Newcastle University
    Is the future of credit card fraud contactless?
    MPEB 6.12
  • 4 September 2014
    Christophe Petit, UCL
    On the complexity of the elliptic curve discrete logarithm problem for binary curves
    MPEB 6.12
  • 7 August 2014
    Susan E. McGregor, Columbia University
    Communicating Securely, Communicating Security: Information Security Issues for Journalists
    MPEB 6.12
  • 22 July 2014
    Gene Tsudik, UC Irvine
    Elements of Trust in Named-Data Networking
    MPEB 1.02
  • 10 July 2014
    Angela Sasse, UCL
    What security practitioners really think about usability – Insights from 3 case studies
    MPEB 6.12
  • 26 June 2014
    Amir Herzberg, Bar-Ilan University
    AnonPoP: the Anonymous Post-Office Protocol
    MPEB 1.20
  • 26 June 2014
    Srdjan Capkun, ETH Zurich
    Selected Results in Location-Based Security
    MPEB 6.12
  • 12 June 2014
    Steve Dodier-Lazaro, UCL
    Towards systematic application sandboxing on Linux
    MPEB 6.12
  • 29 May 2014
    Ivan Martinovic, Oxford University
    Fasten Your Seatbelts – An Overview and Security Considerations of Next Generation Air Traffic Communication
    MPEB 6.12
  • 15 May 2014
    Odette Beris and Tony Morton, UCL
    Employee Risk Understanding and Compliance: Looking Through a Johari Window
    MPEB 6.12
  • 1 May 2014
    Flavio Garcia, University of Birmingham
    The Pitfalls of Cyber-Security Research: From an Ethical and Legal Perspective
    MPEB 6.12