
Awesome IT 2025
Crime and the digital world
April 18th 2025
Science Park 904

On the 18th of April 2025, the thirteenth edition of Awesome IT will be held. Join us for a day of informative and inspiring talks from experts in a wide range of IT-related fields.
Speakers
Bas van den Brink

Bas is a consultant on the Forensic Technology team at KPMG and a licensed private investigator. During his time with KPMG, he has participated in forensic investigations for both large corporate and government clients, specializing in the technical facets of eDiscovery. He holds a Bachelor's degree in Artificial Intelligence and a Master's degree in Information Science, both from the University of Amsterdam.
Talk: Finding digital evidence through eDiscovery
In today's digital age, forensic investigations frequently encompass a significant digital component, necessitating the examination of vast datasets to unearth critical evidence. During this talk, we will look into the methodologies employed to tackle these challenges using the Electronic Discovery Reference Model (EDRM). We will explore the various tools utilized throughout the investigative process and discuss the integration and impact of Artificial Intelligence.
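For a sense of what the EDRM's processing and culling stages look like in code, the sketch below deduplicates a document set by content hash and keeps only the items that fall inside a review window and match at least one search term. It is a minimal illustration only: the `Document` fields, keywords, and dates are assumptions for this example, not part of any particular eDiscovery platform.

```python
# Minimal sketch of a culling step in the middle of the EDRM pipeline
# (identification -> preservation -> collection -> processing -> review -> ...).
# All names and values here are illustrative assumptions.
from dataclasses import dataclass
from datetime import date
from hashlib import sha256

@dataclass
class Document:
    path: str
    sent: date
    body: str

def cull(documents, keywords, start, end):
    """Deduplicate by content hash, then keep documents inside the review
    window that mention at least one search term."""
    seen = set()
    for doc in documents:
        digest = sha256(doc.body.encode("utf-8")).hexdigest()
        if digest in seen:
            continue          # exact duplicate: review it only once
        seen.add(digest)
        if not (start <= doc.sent <= end):
            continue          # outside the period under investigation
        if any(k.lower() in doc.body.lower() for k in keywords):
            yield doc         # promoted to the review stage

docs = [
    Document("mail/0001.eml", date(2023, 5, 2), "Please wire the payment to the usual account."),
    Document("mail/0002.eml", date(2023, 5, 2), "Please wire the payment to the usual account."),
    Document("mail/0003.eml", date(2021, 1, 9), "Lunch on Friday?"),
]
for hit in cull(docs, ["payment", "invoice"], date(2023, 1, 1), date(2023, 12, 31)):
    print(hit.path)
```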
Meike Kombrink

Meike is currently a forensic image examiner in training at the Netherlands Forensic Institute, while simultaneously finishing her PhD, which focuses on the applicability of steganography detection tools. Before she started these positions, she acquired a Master's in Forensic Science and a Bachelor's in Artificial Intelligence, both at the UvA.
Talk: Finding what is hidden in plain sight
Steganography is the art and science of covert communication. It allows anyone to send a message without a third party suspecting that a message was communicated. It has been claimed that steganography has been used by (amongst others) terrorist groups and for the distribution of Child Sexual Abuse Material (CSAM). It is therefore vital for Law Enforcement Agencies (LEAs) to adequately detect when a message was hidden and then uncover the contents of said message. Current detection schemes are mainly aimed at binary classification, i.e. steganography is present or it is not. This type of detection, however, is often not very useful, since the use of steganography is not criminal in and of itself. Additionally, every steganography tool hides information in a slightly different way, so the extraction of the information is also tool-specific; knowing only that steganography has been used is therefore not the most helpful approach to this problem.
To fix this issue, we are developing a tool-based detection system that does not merely provide a binary answer, but ranks the potentially used software in order of how likely it is to have been used for the input. This way, investigators get much more practical leads on what steps to take to extract the information. To improve the forensic interpretation and the explainability of the results, features that can be used for the classification are found and then combined in a self-learned Bayesian network (also referred to as structure learning), providing a useful investigative tool for the forensic community.
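As a rough sketch of the ranking idea (not the actual system under development), the example below trains a naive Bayes classifier, standing in here for the self-learned Bayesian network, on a handful of made-up stego-sensitive features and returns the candidate tools ordered by how likely they are to have produced the input.

```python
# Toy ranking of candidate steganography tools instead of a yes/no answer.
# Feature values and tool labels are invented for illustration.
import numpy as np
from sklearn.naive_bayes import GaussianNB

# toy training data: each row describes an image by a few stego-sensitive
# features (e.g. an LSB chi-square statistic, DCT histogram distortion, ...)
X_train = np.array([
    [0.91, 0.10], [0.88, 0.15],   # images produced with "ToolA"
    [0.20, 0.85], [0.25, 0.80],   # images produced with "ToolB"
    [0.05, 0.05], [0.10, 0.08],   # clean cover images
])
y_train = np.array(["ToolA", "ToolA", "ToolB", "ToolB", "clean", "clean"])

model = GaussianNB().fit(X_train, y_train)

def rank_tools(features):
    """Return candidate tools sorted from most to least likely."""
    probs = model.predict_proba([features])[0]
    return sorted(zip(model.classes_, probs), key=lambda t: -t[1])

for tool, p in rank_tools([0.85, 0.12]):
    print(f"{tool}: {p:.2f}")
```

The ranked output serves exactly the purpose described above: it tells an investigator which tool-specific extraction step is worth trying first.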
Stijn van Lierop

Stijn van Lierop is a forensic data scientist at the Evidence Evaluation and Statistics team of the Netherlands Forensic Institute. He is currently doing a PhD at the AI4Forensics lab of the University of Amsterdam focusing on the development of methods for the detection of deepfakes and synthetic media in forensic casework. He holds a master's degree in Artificial Intelligence from Radboud University and a master's degree in Forensic Science from the University of Amsterdam.
Talk: Is it real or not? Deepfakes and GenAI
Deepfakes and synthetic media can be used to create and manipulate evidence as well as commit or facilitate criminal activities. As generative models continue to evolve and become increasingly accessible to the public, the development of methods that can distinguish between real and synthetic media is becoming ever more important. In this talk we will look into some recent advancements in AI-based detection of deepfakes and synthetic media, providing an overview of the current state-of-the-art techniques. We will discuss how advanced machine learning models, such as transformers and deep neural networks, are employed for deepfake detection, and examine the strategies used to enhance detection accuracy and performance. In addition to discussing progress in the field, we will address the remaining challenges, including generalizability, robustness, the computational cost of methods, and evasion strategies.
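For readers unfamiliar with how such detectors are usually framed, the sketch below sets up a generic real-versus-synthetic image classifier: a pretrained-style image backbone with a small binary head. The backbone choice, input size, and training loop are illustrative assumptions, not the state-of-the-art methods discussed in the talk.

```python
# Minimal framing of deepfake detection as binary image classification.
# Everything here (backbone, sizes, optimizer) is an assumption for illustration.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=None)                 # swap in a stronger backbone in practice
backbone.fc = nn.Linear(backbone.fc.in_features, 2)      # logits: [real, synthetic]

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(backbone.parameters(), lr=1e-4)

def training_step(images, labels):
    """images: (N, 3, 224, 224) float tensor; labels: (N,) with 0=real, 1=synthetic."""
    optimizer.zero_grad()
    loss = criterion(backbone(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# one dummy batch just to show the shapes involved
loss = training_step(torch.randn(4, 3, 224, 224), torch.tensor([0, 1, 1, 0]))
print(f"batch loss: {loss:.3f}")
```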
Laura Pavias
Openbaar Ministerie
Talk: TBD
TBD
Barend Frans

Barend has been working towards a safer Netherlands since 1987. After ten years on the street as a police officer, he dove into the digital world, a world in which he has held nearly every conceivable role within the police since 1997. But one thing has always stayed the same: the citizen comes first.
His mission? To make the Netherlands digitally safer and more aware. Not alone, but together with partners, on the basis of mutual trust and social responsibility. Because the police can do a lot, but not everything. Barend believes in the power of collaboration: together we stand stronger in the fight against digital threats!
Talk: Achter de schermen bij Team Digitale Opsporing en het Cybercrimeteam (Behind the scenes at the Digital Investigation Team and the Cybercrime Team) [Dutch only]
Barend takes you into the world of the Team Digitale Opsporing (Digital Investigation Team). What happens behind the scenes? And are cybercriminals really smarter than the police, as is often claimed?
Discover how a team of IT specialists works every day to make the Netherlands (digitally) safer. Barend shows which investigations they work on and how cybercrime teams tackle complex cases. What knowledge and expertise does that require? Dive into the fascinating world of digital investigation and cybercrime!
Domain: TBD
Lukas Snoek & Nils Hulzebosch
Lukas is a data scientist at the Dutch Police, working on (technical) model validation and fairness of ML/AI models. Previously, he did a PhD in cognitive neuroscience at the University of Amsterdam on the use of machine learning for psychology and neuroscience research, followed by a post-doc at the University of Glasgow on deep neural network models of (3D) face perception.
Nils is a data scientist at the Dutch Police, working on developing and validating AI models for operational purposes. Previously, he did a Master's in Artificial Intelligence at the University of Amsterdam.
Talk: "Putting science back in data science: statistical model validation for trustworthy AI" & "Helping detectives find relevant information using AI"
This talk consists of two parts:
Trustworthy AI models should be accurate, fair, and robust, especially in high-stakes domains like law enforcement and medicine. Data science offers a rich repertoire of tools to quantitatively evaluate models, like cross-validation, an abundance of (fairness) metrics, and sensitivity tests. These tools, however, often lack the rigor and parsimony associated with the field of statistics. In this talk, I argue that the development and application of AI can benefit from statistical methods used in the empirical sciences, offering a rigorous model validation methodology for developing trustworthy AI.
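A minimal example of this more statistically minded evaluation, using a placeholder dataset and model, is repeated cross-validation with an uncertainty estimate on the score instead of a single headline accuracy:

```python
# Treat model evaluation as an empirical experiment: repeated cross-validation
# plus an uncertainty estimate. Dataset and model are placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)

mean = scores.mean()
sem = scores.std(ddof=1) / np.sqrt(len(scores))   # standard error of the mean
print(f"accuracy: {mean:.3f} ± {1.96 * sem:.3f} (approx. 95% CI)")
```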
For their investigations, detectives have to search for relevant information in seized electronic devices, such as smartphones and laptops. These devices contain large amounts of chats, documents, images, videos, audio files, and more, and are impossible to go through manually. In this talk, I discuss how TROI (Team Rendement Operationele Informatie) develops applications and AI models to help detectives quickly find relevant data.
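As a toy illustration of the retrieval problem (TROI's actual applications and models are not shown here), the sketch below ranks a few invented chat messages against a free-text query using plain TF-IDF similarity; in practice, far richer models and modalities are involved.

```python
# Rank seized chat messages against a free-text query by TF-IDF similarity.
# Messages and query are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

messages = [
    "meet me at the harbour tonight, bring the package",
    "happy birthday! see you at the party",
    "the shipment arrives thursday, same container as last time",
]
query = "drug shipment harbour container"

vectorizer = TfidfVectorizer().fit(messages + [query])
scores = cosine_similarity(vectorizer.transform([query]),
                           vectorizer.transform(messages))[0]

for score, msg in sorted(zip(scores, messages), reverse=True):
    print(f"{score:.2f}  {msg}")
```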
Domain: TBD
Program
| Time | C0.05 | C1.110 | Hal |
|---|---|---|---|
| 11:15 - 11:45 | Walk-in | | |
| 11:45 - 12:00 | Plenary Welcome | | |
| 12:00 - 13:00 | Barend Frans: Achter de schermen bij Team Digitale Opsporing en het Cybercrimeteam [Dutch only] | Stijn van Lierop: Is it real or not? Deepfakes and GenAI | |
| 13:00 - 14:00 | Lunch | | |
| 14:00 - 15:00 | Laura Pavias: TBD | Bas van den Brink: Finding digital evidence through eDiscovery | |
| 15:00 - 15:30 | Break | | |
| 15:30 - 16:30 | Lukas Snoek & Nils Hulzebosch: "Putting science back in data science: statistical model validation for trustworthy AI" & "Helping detectives find relevant information using AI" | Meike Kombrink: Finding what is hidden in plain sight | |
| 16:30 - 16:45 | Plenary Closing | | |
| 16:45 - 18:00 | Drinks | | |