Simulation of the ATLAS L0 Muon Trigger
The ATLAS Muon Trigger plays a central role in the ATLAS physics program as the online selection system for p-p collision events. The trigger system will undergo a major upgrade after the end of LHC Run-3 in 2026.
New detectors will be installed in the innermost station of the spectrometer, with new readout and trigger electronics.
The thesis will focus on the development and validation of the detector and trigger simulation, and on the study and optimization of the performance of the muon trigger algorithms.
Contacts: Stefano Rosati
Analysis of the data from cosmic test stands of the new detectors and trigger electronics
The innermost layer of the barrel muon spectrometer will be equipped with new Resistive Plate Chambers (RPC) during the upgrade starting in 2026, at the end of the LHC Run-3 data-taking period.
Chamber modules to be installed in ATLAS are currently being tested in cosmic-ray test stands at CERN, together with the new readout electronics and the on-detector Data Collector Transmitter boards developed in Rome. The thesis will focus on the analysis of the data collected in the test stands and on understanding the detector and electronics performance.
Contacts: Claudio Luci, Stefano Rosati
Development and test of trigger algorithms for the L0 Muon trigger
The ongoing upgrade of the muon trigger with new detectors and electronics requires the development of novel algorithms combining the information from all muon subdetectors. The thesis will focus on the development of these algorithms and on testing them both on full Geant4 simulations of the ATLAS experiment under High-Luminosity LHC conditions and on real data currently being collected during LHC Run-3.
Contacts: Massimo Corradi, Stefano Rosati
Implementation of a Graph Neural Network on an FPGA for High Luminosity upgrade of the ATLAS Detector
High-energy physics (HEP) experiments at the Large Hadron Collider (LHC) generate an immense volume of data, requiring real-time processing capabilities to extract relevant physics signals efficiently. As the LHC enters the High-Luminosity (HL) era, with an expected increase in instantaneous luminosity by a factor of ten, conventional data acquisition and triggering systems face significant computational challenges. Field-Programmable Gate Arrays (FPGAs) offer a crucial solution due to their inherent parallelism, low-latency processing, and energy efficiency, making them essential for the next-generation upgrades of the ATLAS detector.
Graph Neural Networks (GNNs) have emerged as a powerful tool for pattern recognition in HEP due to their ability to model complex spatial relationships in particle interactions. Unlike conventional convolutional neural networks (CNNs), which operate on regular grid-like data, GNNs can process irregular and non-Euclidean structures, making them particularly suitable for analyzing particle trajectories and detector hits. GNNs have demonstrated superior performance in tracking, jet identification, and anomaly detection, crucial tasks for improving event selection in high-energy physics experiments.
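To make the message-passing idea concrete, here is a toy sketch (not the architecture used in ATLAS, and with no learned weights): each node aggregates features from its neighbours over an irregular edge list, which is exactly what grid-based CNNs cannot do directly.

```python
# Toy message-passing step on an irregular graph (pure-Python sketch).
# Nodes could represent detector hits; edges connect related hits.
# Real GNNs use learned weight matrices and frameworks such as
# PyTorch Geometric; this only illustrates the aggregation concept.

def message_passing(node_features, edges):
    """One round of mean-aggregation message passing.

    node_features: list of per-node feature vectors (lists of floats).
    edges: list of (src, dst) index pairs; messages flow src -> dst.
    """
    n = len(node_features)
    dim = len(node_features[0])
    # Collect incoming messages for each node.
    incoming = [[] for _ in range(n)]
    for src, dst in edges:
        incoming[dst].append(node_features[src])
    # New feature = average of own feature and mean of neighbour features.
    updated = []
    for i in range(n):
        msgs = incoming[i]
        if msgs:
            mean = [sum(v[k] for v in msgs) / len(msgs) for k in range(dim)]
        else:
            mean = [0.0] * dim
        updated.append([(node_features[i][k] + mean[k]) / 2
                        for k in range(dim)])
    return updated

# Example: 3 hits; node 1 aggregates from nodes 0 and 2.
hits = [[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]]
print(message_passing(hits, [(0, 1), (2, 1)]))
# -> [[0.5, 0.0], [0.25, 0.25], [0.0, 0.5]]
```

Because the edge list can be arbitrary, the same code handles events with any number of hits and any connectivity, which is the property that makes GNNs a natural fit for tracking data.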
The primary goal of this project is to implement a hardware-efficient GNN on an FPGA for real-time data processing in the ATLAS detector. This involves designing and optimizing a GNN architecture specifically for FPGA deployment, with a focus on efficient matrix multiplication and sparse graph representation. Additionally, a hardware-friendly inference model will be developed to minimize resource utilization while maintaining high accuracy. The project will integrate the FPGA-based GNN within a simulated detector data pipeline to assess its performance in realistic event processing scenarios. Finally, the implementation will be benchmarked against GPU-based models in terms of latency, power consumption, and inference accuracy.
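A central ingredient of any hardware-friendly inference model is replacing floating-point arithmetic with fixed-point arithmetic. The sketch below (illustrative bit widths, not ATLAS parameters) shows the kind of quantization step applied to network weights before FPGA deployment:

```python
# Sketch: signed fixed-point quantization of neural-network weights,
# the kind of transformation needed before deploying a model on an FPGA.
# Bit widths below are illustrative, not ATLAS design values.

def quantize(x, frac_bits=8, total_bits=16):
    """Map a float to a signed fixed-point integer (Q-format),
    saturating at the representable range."""
    scale = 1 << frac_bits
    lo = -(1 << (total_bits - 1))
    hi = (1 << (total_bits - 1)) - 1
    return max(lo, min(hi, round(x * scale)))

def dequantize(q, frac_bits=8):
    """Recover the approximate float value from the integer code."""
    return q / (1 << frac_bits)

w = 0.7071                      # an example trained weight
q = quantize(w)                 # -> 181, a 16-bit integer
print(q, dequantize(q))         # 181 0.70703125: small rounding error
```

On an FPGA, all multiply-accumulate operations are then performed on these integers, trading a bounded rounding error (at most half a least-significant bit, here 1/512) for far lower resource usage and latency than floating-point logic.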

Contacts: Valerio Ippolito, Davide Fiacco, Giuliano Gustavino, Stefano Giagu
Development and study of the performance of CNN and Graph Neural Networks on hardware accelerators (FPGAs/ACAPs/GPUs) for the High Level trigger of the ATLAS experiment
Development of state-of-the-art DNN/RNN/CNN/GNN neural architectures for the identification of events caused by interesting physics processes in the high-level trigger (software trigger) of the ATLAS experiment. Implementation of the algorithms on hardware coprocessors (FPGA/ACAP/GPU) using next-generation software libraries for optimization and synthesis (Xilinx Vitis AI, Intel OpenVINO), and measurement of their performance compared to conventional algorithms.

Contacts: Stefano Rosati, Stefano Giagu
AI assisted data-compression strategies to optimise real-time data processing bandwidth and physics potential of the ATLAS experiment at the LHC
The ATLAS experiment at the Large Hadron Collider produces roughly a petabyte of raw data per second, far exceeding storage and bandwidth constraints. Efficient data compression strategies are crucial to optimizing real-time processing while preserving essential physics information. This thesis investigates the application of AI-assisted data compression techniques, leveraging machine learning models to identify and retain high-value physics data while reducing redundancy. The approach explores deep learning-based autoencoders, reinforcement learning, and transformer architectures to dynamically adapt compression rates based on real-time detector conditions and event significance. By integrating AI-driven methods into the ATLAS data acquisition pipeline, this work aims to maximize physics discovery potential while ensuring efficient use of computational and storage resources. The proposed system will be validated using both Monte Carlo simulations and real ATLAS detector data, with the goal of contributing to the broader effort of intelligent, adaptive data handling in high-energy physics.
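The autoencoder idea can be illustrated with a deliberately tiny example using fixed, hand-chosen weights (a real autoencoder learns its encoder and decoder from data): event records whose components are redundant are mapped to a lower-dimensional code and reconstructed from it.

```python
# Toy illustration of autoencoder-style compression (pure Python, fixed
# weights, no training). Four-component records that actually carry only
# two independent numbers are encoded to 2 values and rebuilt exactly.
# A real autoencoder learns ENC/DEC from data and is generally lossy.

def matvec(m, v):
    """Multiply matrix m (list of rows) by vector v."""
    return [sum(mi * vi for mi, vi in zip(row, v)) for row in m]

# Data model assumed here: x = (a, b, a + b, a - b).
# The redundancy is what the encoder exploits.
ENC = [[1, 0, 0, 0],            # latent code keeps a
       [0, 1, 0, 0]]            # ... and b
DEC = [[1, 0],                  # a
       [0, 1],                  # b
       [1, 1],                  # a + b
       [1, -1]]                 # a - b

def compress(x):
    return matvec(ENC, x)       # 4 floats -> 2 floats (50% of the size)

def decompress(z):
    return matvec(DEC, z)

x = [2.0, 3.0, 5.0, -1.0]       # a=2, b=3
z = compress(x)
assert decompress(z) == x       # lossless on this structured data
print(z, decompress(z))
```

In practice detector data is only approximately redundant, so a learned autoencoder trades a small, controlled reconstruction error for a large bandwidth reduction; the thesis would study how to keep that error negligible for the physics quantities of interest.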
Contacts: Stefano Giagu