
Awesome IT 2024

March 15th 2024

Pakhuis De Zwijger

On the 15th of March 2024, the thirteenth edition of Awesome IT will be held. Join us for a day of informative and inspiring talks from experts in a wide range of IT-related fields.

Get your ticket now for €13,50



Jeroen van der Laak

Jeroen van der Laak is professor of Computational Pathology at the Department of Pathology of the Radboud University Medical Center in Nijmegen, the Netherlands. He researches the use of machine learning for the analysis of microscopic pathology images, in order to support pathologists in reaching an accurate and reproducible diagnosis or prognosis. He leads projects focusing on breast cancer, gynecological cancer and transplant kidney biopsies, funded by the Dutch Cancer Society, the Dutch Kidney Foundation, the European Union and others. He is coordinator of the Bigpicture project and CSO of the Radboudumc spin-off Aiosyn.

Talk: Deep learning of digitized histopathological slides

Deep learning is a state-of-the-art pattern recognition technique that has proven extremely powerful for the analysis of digitized histopathological slides. The number of studies presenting highly promising results for solving diagnostic tasks in histopathology has grown exponentially over the last few years. Examples are subtyping of lung and skin tumors, breast and prostate cancer grading, and detection of metastases. Unfortunately, few studies so far include an external validation using large, independent cohorts, let alone study the true clinical usefulness in prospective studies. As a result, the balance between promise and hype in public opinion may be skewed. In this talk, I will present some of the current possibilities of AI for histopathology, and discuss potential future developments. I will also address the challenges that have to be overcome before we can deliver true value to pathologists, patients, and the healthcare system.

Domain: AI/Data science

Gerrit Oomens

Gerrit Oomens is program manager IT at the Faculty of Science, where he oversees the development and integration of custom software for the faculty. Fifteen years ago, he came up with and built DataNose and he has not managed to entirely get away from it since then. Nowadays he is responsible for improving data protection and security within the faculty, where he tries to ensure a secure but still somewhat practical way of working. In his free time, Gerrit is the CTO of the Digitaal Informatieplatform Podiumkunsten, a non-profit organization that develops digital infrastructure in the Dutch performing arts sector.

Talk: How to secure the UvA?

Increasingly, universities and other public institutions are targeted by cybercriminals looking to steal data or deploy ransomware. Universities also handle large amounts of sensitive personal data, e.g. in student administration or biomedical research, which poses various privacy risks for the people involved. All of this means there are increasingly stringent rules with regard to information security and privacy in the higher education sector. At the same time, a university is by definition a public institution characterized by a great deal of openness and collaboration with the outside world. These two perspectives are not always easy to unite in practice.

In this talk I will discuss the efforts within the UvA to increase our security and the opposition they encounter within the university. We will look at a number of examples where things don’t quite work out as intended with regard to security, both in policy and implementation, and think about ways to do better. We’ll look into the pitfalls encountered when trying to secure a large organization like the UvA, and the risks posed by information systems such as DataNose as well as AI tools such as online proctoring and large language models.

Domain: Cybersecurity

Daniel Oberski

Daniel Oberski is professor of Social & Health Data Science at Utrecht University, where he heads the Human Data Science group and is involved in a number of collaborative national and international research projects. His team works on applying machine learning and statistical models to the social and clinical sciences, and also develops new algorithms, models, and open-source software to make sense of the related data.

Examples of applications he has worked on include: natural language processing of electronic health records; financial distress; adolescents’ psychological well-being; quality management systems for clinical AI; fake news detection; teaching computers to do systematic literature reviews; and temperate forest ecology.

These topics are clearly very different. Still, they can all be approached with a common mindset: “the joy of playing in other people’s backyards” (paraphrasing John Tukey, statistician and inventor of the word “bit”). In his talk, Daniel will do his best to convey some of this joy to you.

Talk: Data science in the backyard


Domain: Social & Health Data Science

Nathan van der Lei

Nathan studied BSc Econometrics & Operations Research at VU Amsterdam and the joint-degree MSc Computational Science at UvA & VU. While working as a software consultant and data scientist for a variety of clients, Nathan was inspired by the Big Data 4 Small Babies (BD4DB) project at UMC Utrecht, where he applied his data skillset to develop an algorithm that predicts sepsis (an extreme reaction to an infection) in pre-term neonatal babies. Implementing the algorithm in clinical practice, however, proved challenging. Now, working as a machine learning engineer at Expertisecentrum Zorgalgoritmen, Nathan focuses on implementing algorithms in the Dutch healthcare system.

Talk: Implementing machine learning in Dutch Healthcare

Expertisecentrum Zorgalgoritmen is a startup that designs high-quality artificial intelligence as a medical device. The company was founded in 2021 together with 29 hospitals united in the Vereniging Samenwerkende Algemene Ziekenhuizen (SAZ). It provides hospitals with CE-marked machine learning algorithms to support healthcare professionals in their work; the algorithms’ predictions are integrated into electronic health record systems. Expertisecentrum Zorgalgoritmen also guides hospitals in the technical and clinical implementation and supports staff training. Our primary focus lies in the practical implementation of machine learning: we are committed to bringing sophisticated AI solutions (Explainable Boosting Machines) from the drawing board to the real world.

Expertisecentrum Zorgalgoritmen develops and tests a software product called EzaPredictive 1.0, which combines a patient’s current parameters, trends and clinical history to provide information for adequate patient management based on applicable real-world data. The tool shows healthcare professionals predictions of a patient’s expected length of stay in the emergency room (ER), the likelihood of admission from the ER to the hospital, and the expected length of stay in the clinic. The predictions come from comparing the data available at that specific moment for that specific patient in the EHR system to historical data of similar patients and their previously observed outcomes.

In this presentation, Nathan van der Lei will give an overview of AI applications in the Dutch healthcare system, explain the development and implementation process for innovative machine learning tools in a healthcare setting, and explain some of the inner workings and testing of EzaPredictive 1.0. The talk uses the “Guideline AI in healthcare” as a guiding reference. After this talk you will better understand what it takes to develop and implement machine learning algorithms in the Dutch healthcare setting.

Domain: Applied AI

Nienke Duetz & Willemijn Beks

Nienke Duetz and Willemijn Beks both studied Artificial Intelligence and Computer Science at the UvA and the VU. After their studies, they started working as software developers at Navara. This combination of disciplines gives them a strong theoretical background for understanding Large Language Models, while they also see up close how these models can be used for software solutions in industry.

Talk: Building futuristic applications with GPT

Dive into the world of Large Language Models (LLMs) in this engaging presentation. Learn what LLMs are, how they work, and what their role is in today’s technological landscape. Gain insight into how you can integrate these powerful models into your own applications, from theory to practice. Experience a live demonstration in which we walk through a working LLM application: see how a knowledge base and an agent application are powered by LLMs, and experience their potential in real-world scenarios. This presentation is ideal for anyone who wants to deepen their understanding of LLMs and learn how these models can be applied in practice.

Domain: AI

Lynda Hardman

Lynda Hardman is Principal Researcher & Strategist at Centrum Wiskunde & Informatica (CWI), the Dutch national research centre for mathematics and computer science, and part-time full professor of Multimedia Discourse Interaction at Utrecht University. She researches how visualisations can be used to improve the way domain experts interpret and interact with (linked) data. She was president of Informatics Europe in 2016–2017; during her time as a board member, she founded the IE working group Women in Informatics Research and Education. She was named ACM Distinguished Scientist in 2014 and is a Fellow of the British Computer Society.

Talk: Exploring neuroscience literature in Augmented Reality

Maintaining an overview of publications in the neuroscientific field is challenging, in particular in tasks such as investigating relations between brain regions and brain diseases. To support neuroscientists in this challenge, we investigate whether using Augmented Reality can make analyses of the literature more accessible and integrate them into current work practices. We explore a number of questions, such as whether interaction with a large body of literature using topics provides a useful way for neuroscientists to explore and understand specific relationships. Our assumption is that overviews of the correlations among concepts will allow neuroscientists to better understand the gaps in the literature and more quickly identify suitable experiments to carry out. We currently provide functionality to visualize and filter direct and indirect relations and to compare the results of queries. Our visualization work is based on an analysis of the neuroscience publications in PubMed, which provides an association graph among topics involving cognitive functions, genes, proteins, brain diseases and brain regions. We describe our prototype 3D AR implementation, DatAR, and the challenges we face.

Domain: Data Science, Human Centred Data Analytics, Augmented Reality

Kamiel Verhelst & Rene Bruinink

Kamiel Verhelst studied Geo-Information Science at Wageningen University, with a special focus on citizen science and deforestation detection from satellite images. Kamiel started his professional career as a GIS specialist at Geodan (part of Sogelink Netherlands), where he works mostly on data conversion and visualization. More recently, as part of the Research Department, his efforts have shifted towards 3D spatial data, and specifically the challenge of converting various data formats to 3D datasets that are suitable for viewing in browsers.

Rene Bruinink studied Human Geography at Utrecht University, where he obtained a master’s in Geographic Information Systems (GIS). He started his professional career as a crime analyst for the Dutch Police and as a GIS software developer at the NATO Consultation, Command and Control Agency in The Hague, where he was involved in prototyping software solutions for the AWACS Division. He is currently employed at LOCATIQS Group, acting as HR Manager for group members Geodan and GOconnectIT, both leading Geo-IT companies in the Netherlands.

Talk: Location Intelligence and Digital Twins

In this lecture, experts from Geodan will explain the concepts of Location Intelligence and Digital Twins. Location Intelligence in general, and Digital Twin models in particular, have proven to be unique instruments for solving challenges in which geographical or spatial data plays a role. Approximately 80% of all data can somehow be related to a location on earth; location is therefore the binding aspect that can be used to bring together various data sources and analyze them in their interdependent context. The added value of a Digital Twin lies in combining and visualizing large amounts of (spatial) data, after which experts can make more informed decisions. Practical examples will demonstrate how Digital Twins are vital in tackling the complex challenges of today.

Domain: Location Intelligence

Nick van Osta

Nick van Osta graduated from Eindhoven University of Technology in 2017 with a master’s in Biomedical Engineering. He obtained his PhD at the Department of Biomedical Engineering (Maastricht University) under the supervision of Prof. Joost Lumens and Prof. Tammo Delhaas, working on important aspects of patient-specific modelling. Since 2021, he has continued his career as a platform scientist at the same department, further exploring the applicability of the Digital Twin approach.

Talk: The Digital Twin as a virtual mirror of the patient’s heart

Computational models allow us to enhance our understanding of the pump function of the heart. At Maastricht University, the CircAdapt model has been developed: a relatively simple biophysical model used to educate medical students and to conduct academic research. To aid personalized medicine, we developed an optimization strategy, based on sensitivity analysis and uncertainty quantification, to create simulations personalized to clinical data.

In this talk we will look at the general concept of a biophysical model and provide some examples of its applicability in both research and educational settings. Next, we will discuss what it takes to create a Digital Twin of a patient's heart and discuss the hurdles to overcome. Lastly, we present applications of our Digital Twin approach in which we quantify properties of the heart related to disease and even predict treatment.

Domain: Bioinformatics



Programme (rooms: De Grote Zaal, De IJzaal, De Hal)

09:30 - 10:10
10:10 - 10:30  Plenary Welcome
10:30 - 11:30  Nathan van der Lei: Implementing machine learning in Dutch Healthcare
10:30 - 11:30  Jeroen van der Laak: Deep learning of digitized histopathological slides
11:30 - 11:50  First break
11:50 - 12:50  Gerrit Oomens: How to secure the UvA?
11:50 - 12:50  Kamiel Verhelst & Rene Bruinink: Location Intelligence and Digital Twins
12:50 - 13:50
13:50 - 14:50  Daniel Oberski: Data science in the backyard
13:50 - 14:50  Nienke Duetz & Willemijn Beks: Building futuristic applications with GPT
14:50 - 15:10  Second break
15:10 - 16:10  Nick van Osta: The Digital Twin as a virtual mirror of the patient’s heart
15:10 - 16:10  Lynda Hardman: Exploring neuroscience literature in Augmented Reality
16:10 - 16:20  Plenary Closing
16:20 - 17:30