The emergence of Artificial Intelligence and representation learning algorithms over the last ten years has enabled machine learning tools that can adeptly handle higher-dimensional and more complex problems than was previously feasible.

Traditionally, the main challenge in the analysis of HEP data, characterised by large volumes and high dimensionality, is addressed by reducing the dimensionality of the data through a series of analysis steps that operate both on individual collision events and on collections of events. Because of the intrinsic difficulty of expressing the statistical model of the experimental data in equations that can be evaluated, this approach, although it has worked well so far, is not guaranteed to be optimal. Machine learning, and deep learning in particular, provides an extremely powerful method to condense the relevant information contained in the low-level, high-dimensional data into a higher-level, lower-dimensional space, and can supply the missing ingredient to close the gap between the traditional approach and the optimal one.
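As a minimal illustration of this idea (not a description of any specific analysis), the sketch below uses PyTorch, assumed here purely as an example framework, to show how an autoencoder can compress high-dimensional, low-level event features into a compact latent representation; the 200-variable input and layer sizes are hypothetical placeholders.

```python
import torch
import torch.nn as nn

class EventAutoencoder(nn.Module):
    """Toy autoencoder: compresses low-level event features into a small latent space."""
    def __init__(self, n_features=200, latent_dim=8):
        super().__init__()
        # Encoder: high-dimensional, low-level inputs -> compact latent representation
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),
        )
        # Decoder: reconstructs the inputs from the latent representation
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, n_features),
        )

    def forward(self, x):
        z = self.encoder(x)          # condensed, higher-level representation
        return self.decoder(z), z

# Usage: a batch of 32 hypothetical events, each with 200 low-level variables
model = EventAutoencoder()
x = torch.randn(32, 200)
reconstruction, latent = model(x)
loss = nn.functional.mse_loss(reconstruction, x)  # reconstruction objective
```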

The Rome group of the ATLAS experiment is actively involved in the design and development of both state-of-the-art and novel deep learning algorithms and methods for the acquisition, simulation and analysis of LHC data. These activities range from developing novel low-precision ternary and quantised deep neural networks that run in real time on FPGAs for next-generation fast triggers at the HL-LHC, to developing convolutional neural networks, generative adversarial networks and variational auto-encoders for physics-object reconstruction, identification and data augmentation, improving the discovery sensitivity of the ATLAS experiment to new physics beyond the Standard Model.
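As a rough, hedged illustration of the low-precision direction (a generic sketch, not the group's actual FPGA implementation), the example below shows a dense layer with weights quantised to {-1, 0, +1} and trained with a straight-through estimator; the 0.7 threshold factor is a common heuristic, and the layer sizes and 16-variable input are placeholders.

```python
import torch
import torch.nn as nn

class TernaryLinear(nn.Module):
    """Linear layer whose weights are quantised to {-1, 0, +1} in the forward pass."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.05)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        w = self.weight
        # Heuristic threshold: a fraction of the mean absolute full-precision weight
        delta = 0.7 * w.abs().mean()
        w_ternary = torch.where(
            w > delta, torch.ones_like(w),
            torch.where(w < -delta, -torch.ones_like(w), torch.zeros_like(w))
        )
        # Straight-through estimator: ternary weights in the forward pass,
        # gradients flow to the underlying full-precision weights
        w_q = w + (w_ternary - w).detach()
        return nn.functional.linear(x, w_q, self.bias)

# Hypothetical trigger-style classifier on 16 input variables
model = nn.Sequential(TernaryLinear(16, 32), nn.ReLU(), TernaryLinear(32, 1))
scores = model(torch.randn(8, 16))  # batch of 8 candidate events
```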