
Awesome IT 2025

Crime and the digital world

April 18th 2025

Science Park 904

On the 18th of April 2025, the thirteenth edition of Awesome IT will be held. Join us for a day of informative and inspiring talks from experts in a wide range of IT-related fields.


Get your ticket now for €8,50

Speakers



Bas van den Brink


Bas is a consultant on the Forensic Technology team at KPMG and a licensed private investigator. During his time with KPMG, he has participated in forensic investigations for both large corporate and government clients, specializing in the technical facets of eDiscovery. He holds a Bachelor's degree in Artificial Intelligence and a Master's degree in Information Science, both from the University of Amsterdam.


Talk: Finding digital evidence through eDiscovery

In today's digital age, forensic investigations frequently encompass a significant digital component, necessitating the examination of vast datasets to unearth critical evidence. During this talk, we will look into the methodologies employed to tackle these challenges using the Electronic Discovery Reference Model (EDRM). We will explore the various tools utilized throughout the investigative process and discuss the integration and impact of Artificial Intelligence.
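
To make one step of such a pipeline concrete, here is a minimal, hypothetical sketch of early-stage culling: reducing a large document collection to a reviewable subset by date range and keyword hits. The document structure, keywords and data are invented for illustration and are not taken from the talk or from any specific eDiscovery product.

```python
# Hypothetical culling step: keep only documents in scope by date and keywords.
from dataclasses import dataclass
from datetime import date


@dataclass
class Document:
    doc_id: str
    sent: date
    text: str


def cull(documents, keywords, start, end):
    """Return documents inside the date range that mention any keyword."""
    keywords = [k.lower() for k in keywords]
    hits = []
    for doc in documents:
        if not (start <= doc.sent <= end):
            continue
        lowered = doc.text.lower()
        if any(k in lowered for k in keywords):
            hits.append(doc)
    return hits


if __name__ == "__main__":
    corpus = [
        Document("A-001", date(2023, 5, 2), "Please wire the payment to the usual account."),
        Document("A-002", date(2021, 1, 9), "Lunch on Friday?"),
        Document("A-003", date(2023, 6, 1), "Delete the invoice before the audit."),
    ]
    for doc in cull(corpus, ["payment", "invoice", "audit"], date(2023, 1, 1), date(2023, 12, 31)):
        print(doc.doc_id, doc.sent, doc.text)
```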


Meike Kombrink


Meike is currently a forensic image examiner in training at the Netherlands Forensic Institute, while simultaneously finishing her PhD, which focuses on the applicability of steganography detection tools. Before she started these positions, she obtained a Master's in Forensic Science and a Bachelor's in Artificial Intelligence, both at the UvA.


Talk: Finding what is hidden in plain sight

Steganography is the art and science of covert communication. It allows anyone to send a message without a third party suspecting that a message was communicated at all. It has been claimed that steganography has been used by, among others, terrorist groups and for the distribution of Child Sexual Abuse Material (CSAM). It is therefore vital for Law Enforcement Agencies (LEAs) to adequately detect when a message has been hidden and then uncover its contents. Current detection schemes are mainly aimed at binary classification, i.e. steganography is present or it is not. This type of detection is often of limited use, however, since the use of steganography is not criminal in and of itself. Additionally, every steganography tool hides information in a slightly different way, so extracting the information is also tool-specific; knowing only that steganography has been used is therefore not the most helpful answer to the problem.

To address this, we are developing a tool-based detection system that does not merely provide a binary answer, but ranks the candidate software by how likely each tool is to have been used on the input. This gives investigators much more practical leads on what steps to take to extract the information. To improve the forensic interpretation and the explainability of the results, features that can be used for the classification are identified and then combined in a self-learned Bayesian network (also referred to as structure learning), providing a useful investigative tool for the forensic community.
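
As a toy illustration of the tool-ranking idea (not the actual system, which learns a Bayesian network over its features via structure learning), the sketch below scores a few hypothetical steganography tools with a naive Bayes style model; the feature names and probabilities are invented.

```python
# Rank candidate stego tools by how well observed image features fit each one,
# instead of returning a binary "stego / no stego" answer.
import math

# P(feature = True | tool); names and numbers are illustrative only.
FEATURE_LIKELIHOODS = {
    "ToolA": {"lsb_anomaly": 0.90, "palette_reorder": 0.10, "exif_marker": 0.05},
    "ToolB": {"lsb_anomaly": 0.30, "palette_reorder": 0.85, "exif_marker": 0.10},
    "ToolC": {"lsb_anomaly": 0.20, "palette_reorder": 0.15, "exif_marker": 0.80},
}
PRIOR = 1.0 / len(FEATURE_LIKELIHOODS)  # uniform prior over candidate tools


def rank_tools(observed):
    """Return tools sorted by posterior log-score given observed binary features."""
    scores = {}
    for tool, likelihoods in FEATURE_LIKELIHOODS.items():
        log_score = math.log(PRIOR)
        for feature, p_true in likelihoods.items():
            p = p_true if observed.get(feature, False) else 1.0 - p_true
            log_score += math.log(p)
        scores[tool] = log_score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)


if __name__ == "__main__":
    # Features extracted from a suspect image (values invented for the demo).
    observed = {"lsb_anomaly": True, "palette_reorder": False, "exif_marker": False}
    for tool, score in rank_tools(observed):
        print(f"{tool}: log-score {score:.2f}")
```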


Stijn van Lierop


Stijn van Lierop is a forensic data scientist at the Evidence Evaluation and Statistics team of the Netherlands Forensic Institute. He is currently doing a PhD at the AI4Forensics lab of the University of Amsterdam focusing on the development of methods for the detection of deepfakes and synthetic media in forensic casework. He holds a master's degree in Artificial Intelligence from Radboud University and a master's degree in Forensic Science from the University of Amsterdam.


Talk: Is it real or not? Deepfakes and GenAI

Deepfakes and synthetic media can be used to create and manipulate evidence as well as to commit or facilitate criminal activities. As generative models continue to evolve and become increasingly accessible to the public, the development of methods that can distinguish between real and synthetic media is becoming ever more important. In this talk we will look into some recent advancements in AI-based detection of deepfakes and synthetic media, providing an overview of the current state-of-the-art techniques. We will discuss how advanced machine learning models, such as transformers and deep neural networks, are employed for deepfake detection, and examine the strategies used to enhance detection accuracy and performance. In addition to discussing progress in the field, we will address the remaining challenges, including generalizability, robustness, the computational cost of methods, and evasion strategies.
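
As a minimal, hypothetical sketch of the transfer-learning setup many detectors build on, the snippet below attaches a binary real/fake head to a standard CNN backbone. The state-of-the-art systems discussed in the talk use larger models, curated datasets and extensive training, none of which is shown here.

```python
# Toy deepfake-detector skeleton: pretrained-style CNN backbone + 2-class head.
import torch
import torch.nn as nn
from torchvision import models


def build_detector() -> nn.Module:
    """ResNet-18 backbone with a 2-class head (0 = real, 1 = fake)."""
    # weights=None keeps the demo offline; in practice you would start from
    # pretrained weights (e.g. models.ResNet18_Weights.DEFAULT) and fine-tune.
    backbone = models.resnet18(weights=None)
    backbone.fc = nn.Linear(backbone.fc.in_features, 2)
    return backbone


if __name__ == "__main__":
    model = build_detector()
    model.eval()
    batch = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed RGB image
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)
    print("P(real), P(fake):", probs.squeeze().tolist())
```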


Laura Pavias


Laura Pavias has worked since 2023 at the Public Prosecution Service (Openbaar Ministerie) Amsterdam as a senior advisor on digital investigation and cybercrime within the Digital Crime Cluster (Cluster Digitale Criminaliteit, CDC). Before that she worked for the police in various roles, most recently four years as a legal advisor on digital investigation and cybercrime at the Team Digitale Opsporing (Digital Investigation Team) Amsterdam.

Laura advises on the various aspects of digital investigation: both on specific digital questions within criminal investigations and on exploring the legal scope for deploying new methods and techniques within the legal frameworks of criminal procedure and privacy. Bridging the legal, technical and tactical aspects of an investigation is one of the most enjoyable challenges in her field.


Talk: [Dutch only] The trail of a cybercriminal: from hack to conviction

In this talk, Laura takes you through a completed criminal investigation led by the Cyber Public Prosecutors of the Public Prosecution Service Amsterdam in cooperation with the Cybercrime Team of the Amsterdam Police. The case involved, among other things, computer intrusion, the distribution of ransomware, and extortion with payment demanded in cryptocurrency. It ultimately led to the conviction of the suspect, who received a four-year prison sentence, one year of which was suspended. How did the Cybercrime Team get on this suspect's trail? Which (digital) investigative powers could then be deployed to gather evidence against the suspect? This talk offers a unique and interesting look inside a cybercrime case!


Barend Frans


Barend has been working towards a safer Netherlands since 1987. After ten years on the beat as a police officer, he dove into the digital world, in which he has held almost every conceivable role within the police since 1997. One thing has always remained the same: the citizen comes first.

His mission? To make the Netherlands digitally safer and more aware. Not alone, but together with partners, based on mutual trust and shared social responsibility. The police can do a lot, but not everything. Barend believes in the power of collaboration: together we stand stronger in the fight against digital threats!


Talk: [Dutch only] Behind the scenes at the Digital Investigation Team and the Cybercrime Team

Barend takes you into the world of the Digital Investigation Team (Team Digitale Opsporing). What happens behind the scenes? And are cybercriminals really smarter than the police, as is often claimed?

Discover how a team of IT specialists works every day to make the Netherlands (digitally) safer. Barend shows which investigations they work on and how cybercrime teams tackle complex cases. What knowledge and expertise does that require? Dive into the fascinating world of digital investigation and cybercrime!


Domain: TBD

Lukas Snoek & Nils Hulzebosch


Lukas is a data scientist at the Dutch Police, working on (technical) model validation and fairness of ML/AI models. Previously, he did a PhD in cognitive neuroscience at the University of Amsterdam on the use of machine learning for psychology and neuroscience research, followed by a post-doc at the University of Glasgow on deep neural network models of (3D) face perception.

Nils is a data scientist at the Dutch Police, working on developing and validating AI models for operational purposes. Previously, he did a Master's in Artificial Intelligence at the University of Amsterdam.


Talk: "Putting science back in data science: statistical model validation for trustworthy AI" & "Helping detectives find relevant information using AI"

This talk consists of two parts:

Trustworthy AI models should be accurate, fair, and robust, especially in high-stakes domains like law enforcement and medicine. Data science offers a rich repertoire of tools for evaluating models quantitatively, such as cross-validation, an abundance of (fairness) metrics, and sensitivity tests. These tools, however, often lack the rigor and parsimony associated with the field of statistics. In this talk, I argue that the development and application of AI can benefit from statistical methods used in the empirical sciences, offering a rigorous model-validation methodology for developing trustworthy AI.
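
As a toy illustration of this statistical mindset (not the speaker's actual methodology), the sketch below compares two models on identical cross-validation folds and asks whether their accuracy difference exceeds fold-to-fold noise. A simple paired t-test over folds is itself only approximate, since fold scores are not independent, which is exactly the kind of subtlety the talk is about.

```python
# Compare two classifiers on the same repeated CV folds and test the difference.
import numpy as np
from scipy import stats
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)

scores_a = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)
scores_b = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=cv)

diff = scores_b - scores_a
t_stat, p_value = stats.ttest_rel(scores_b, scores_a)  # paired across identical folds
print(f"logistic regression: {scores_a.mean():.3f} +/- {scores_a.std():.3f}")
print(f"random forest:       {scores_b.mean():.3f} +/- {scores_b.std():.3f}")
print(f"mean difference {diff.mean():.3f}, paired t-test p = {p_value:.4f}")
```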

For their investigations, detectives have to search for relevant information in seized electronic devices, such as smartphones and laptops. These devices contain large amounts of chats, documents, images, videos, audio files, and more, and are impossible to go through manually. In this talk, I discuss how TROI (Team Rendement Operationele Informatie) develops applications and AI models to help detectives quickly find relevant data.
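
As a stand-in for the kind of search such applications provide (TROI's actual models are not described in this abstract), the sketch below ranks a handful of invented chat messages against an investigator's query using plain TF-IDF similarity; real systems would use richer, e.g. embedding-based, retrieval.

```python
# Rank text fragments from a (toy) seized-data dump by similarity to a query.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

messages = [
    "Meet me at the harbour at midnight, bring the bags.",
    "Happy birthday! See you at the party on Saturday.",
    "The shipment arrives at the harbour on Thursday night.",
    "Can you send me the holiday photos?",
]

query = "harbour shipment at night"

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(messages)
query_vector = vectorizer.transform([query])

scores = cosine_similarity(query_vector, doc_vectors).ravel()
for score, message in sorted(zip(scores, messages), reverse=True):
    print(f"{score:.2f}  {message}")
```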


Domain: TBD

Lisa Kohl


Lisa Kohl is a tenured researcher in the CWI Cryptology group, Amsterdam. A special focus of her work lies in exploring new directions in secure computation with the goal of developing practical post-quantum secure protocols. Before coming to CWI, she worked as a postdoctoral researcher with Yuval Ishai at Technion. In 2019, she completed her PhD at Karlsruhe Institute of Technology under the supervision of Dennis Hofheinz. During her PhD, she spent eight months in the FACT center at Reichman University (IDC Herzliya) for a research visit with Elette Boyle.


Talk: Secure Multi-Party Computation, or: How to Detect Money Laundering Across Banks without Sharing Data

We live in a world that's driven by data — but as the amount of sensitive information grows, so does the challenge of using it responsibly. Organizations often need to work together, but sharing raw data across different institutions can create major privacy, security, and legal risks. To keep up with the demands of this data economy, we need new ways to collaborate without giving up control over our private data. Secure Multi-Party Computation (MPC) is one of the key technologies making this possible. It allows multiple parties to jointly compute results without revealing their individual data. In this talk, I will dive into the basics of MPC and show how it can be used in real-world scenarios — like helping banks detect money laundering together, without ever having to share sensitive customer information. MPC opens up exciting new possibilities for privacy-preserving collaboration, not just in finance, but across many fields where data privacy matters.
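
As a toy illustration of the core idea, the sketch below uses additive secret sharing to let three hypothetical banks compute the total number of flagged transactions for one customer without any bank revealing its own count. A real protocol (and the money-laundering use case in the talk) involves far more, such as secure comparisons and protection against malicious parties; all names and numbers here are invented.

```python
# Additive secret sharing: the parties learn only the joint total, not the inputs.
import random

MODULUS = 2**61 - 1  # arithmetic is done modulo a large prime


def share(secret: int, n_parties: int) -> list[int]:
    """Split a secret into n additive shares that sum to it modulo MODULUS."""
    shares = [random.randrange(MODULUS) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MODULUS)
    return shares


def reconstruct(shares: list[int]) -> int:
    return sum(shares) % MODULUS


if __name__ == "__main__":
    private_counts = {"BankA": 3, "BankB": 0, "BankC": 5}  # never shared directly
    n = len(private_counts)

    # Each bank shares its count; party i receives one share from every bank.
    received = [[] for _ in range(n)]
    for count in private_counts.values():
        for i, s in enumerate(share(count, n)):
            received[i].append(s)

    # Each party sums its shares locally; only these partial sums are published.
    partial_sums = [sum(col) % MODULUS for col in received]
    print("Joint total of flagged transactions:", reconstruct(partial_sums))  # 8
```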


Program



Rooms: C0.05, C1.110, C0.110 and the Hal.

11:15 - 11:45   Walk-in
11:45 - 12:00   Plenary Welcome
12:00 - 13:00   Barend Frans: [Dutch only] Behind the scenes at the Digital Investigation Team and the Cybercrime Team
12:00 - 13:00   Stijn van Lierop: Is it real or not? Deepfakes and GenAI
13:00 - 14:00   Lunch
14:00 - 15:00   Laura Pavias: [Dutch only] The trail of a cybercriminal: from hack to conviction
14:00 - 15:00   Bas van den Brink: Finding digital evidence through eDiscovery
14:00 - 15:00   Lisa Kohl: Secure Multi-Party Computation, or: How to Detect Money Laundering Across Banks without Sharing Data
15:00 - 15:30   Break
15:30 - 16:30   Lukas Snoek & Nils Hulzebosch: "Putting science back in data science: statistical model validation for trustworthy AI" & "Helping detectives find relevant information using AI"
15:30 - 16:30   Meike Kombrink: Finding what is hidden in plain sight
16:30 - 16:45   Plenary Closing
16:45 - 18:00   Drinks

Partners