
The LHC Tier-2 Computing Centre of INFN-Rome

The ATLAS and CMS experiments at the LHC produce a huge amount of data. Indeed, proton-proton collisions in the collider occur at a rate of up to 40 MHz, so that the probability of producing interesting events (which usually have a small cross section) is reasonably high. To select interesting events, appropriate trigger systems act on the detector data, keeping only those events whose kinematics are non-trivial and therefore potentially useful for precision measurements or for the discovery of new physics.

Trigger systems reduce the acquisition rate down to about 300 Hz. Although this amounts to a reduction of many orders of magnitude, the number of events collected each year is still on the order of 5 × 10⁹. Given an average event size between 1 and 2 MB, each experiment produces up to 5 × 10⁹ × 2 × 10⁶ B = 10¹⁶ B/year = 10 PB/year. Physicists must submit their analysis jobs to the system so that the entire data set can be analysed. They do this using the grid: a globally distributed IT infrastructure consisting of several data centres scattered across many countries.
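The estimate above is easy to reproduce; the following sketch simply plugs in the figures quoted in the text (5 × 10⁹ events per year, roughly 2 MB per event), and is only a back-of-the-envelope check, not an official number.

```python
# Back-of-the-envelope estimate of the yearly data volume per experiment,
# using the figures quoted in the text.
events_per_year = 5e9       # events recorded per year
event_size_bytes = 2e6      # ~2 MB per event (upper end of the 1-2 MB range)

volume_bytes = events_per_year * event_size_bytes
volume_pb = volume_bytes / 1e15        # 1 PB = 10^15 B

print(f"yearly volume: {volume_bytes:.0e} B = {volume_pb:.0f} PB")
# -> yearly volume: 1e+16 B = 10 PB
```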

The grid is organised hierarchically: the Tier-0 centre resides at CERN and collects all the data produced by the experiments, distributing them to a few tens of Tier-1 centres. Each Tier-1 centre hosts a fraction of the entire dataset; one of them is managed by the CNAF laboratory in Bologna. From the Tier-1 centres, data are further distributed to hundreds of Tier-2 centres. The Rome section manages one of these Tier-2 centres. Jobs submitted by physicists to the grid are automatically dispatched to the data centres hosting the requested data. Sub-jobs are then executed in parallel on CPUs located in the same data centre, and the results are automatically collected and sent back to the submitter.

The Rome Tier-2 centre has ten water-cooled racks made by Knürr. Each rack is closed on all sides. At the bottom, a heat exchanger fed with 12°C water produced by three Stulz chillers lowers the air temperature inside the rack. Three fans on the back of each rack generate a pressure drop between its front and back: chilled air flows towards the front of the rack, where it is drawn through the servers by their internal fans and expelled at the back, from where it returns to the heat exchanger. In this way, the servers are always kept at a constant temperature (18°C), regardless of the external weather conditions.

In addition, air conditioning is limited to the volume of the racks, and the room temperature can be kept at values comfortable for technicians and physicists. The solution also turned out to be good from the energy point of view. The PUE (Power Usage Effectiveness) of typical data centres, defined as the ratio of the total energy required to run the centre to the energy delivered to the servers, is always greater than 2, and typically 3 or more. The PUE of our data centre is about 1.3: the support services, such as air conditioning and lighting, consume only about 30% of the energy used by the servers themselves. It is considered to be among the greenest data centres in operation. The ten racks house a few hundred servers, including storage servers, for a total of 2 PB of disk space and 2,500 processing cores. In other words, our centre can run up to 2,500 jobs in parallel, giving access to about 20% of the data collected by the experiments each year.
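As a sanity check on these figures, the short sketch below (plain Python, with the energy normalised to that of the servers) computes the overhead implied by a PUE of 1.3; the numbers are just those quoted above.

```python
# PUE = total energy / energy delivered to the IT equipment.
# With PUE = 1.3, support services (cooling, lighting, ...) draw about
# 30% of the IT energy, i.e. roughly 23% of the total energy.
it_energy = 1.0          # normalised energy delivered to the servers
pue = 1.3                # value quoted for the Rome Tier-2 centre

total_energy = pue * it_energy
overhead = total_energy - it_energy

print(f"overhead relative to IT energy: {overhead / it_energy:.0%}")     # 30%
print(f"overhead relative to total:     {overhead / total_energy:.0%}")  # 23%
```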

The UPS also acts as a filter, removing fast transients from the power drawn from the electrical grid. The Internet connection is provided by two redundant high-speed networks, through the routers of the Physics Department of Sapienza University and of GARR. The connection speed reaches 10 Gbps, and we are part of LHCONE, a collection of access locations that effectively serve as entry points into a private network connecting LHC Tier-1/2/3 sites.
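To get a feeling for what a 10 Gbps link means at this scale, here is an illustrative back-of-the-envelope estimate, using only the figures quoted above (2 PB of storage, 10 Gbps link) and ignoring protocol overhead and any competing traffic.

```python
# Order-of-magnitude estimate: time to move the centre's 2 PB of storage
# over a fully dedicated 10 Gbps link (no protocol overhead considered).
storage_bytes = 2e15          # 2 PB of local storage
link_bps = 10e9               # 10 Gbps network connection

transfer_s = storage_bytes * 8 / link_bps
print(f"{transfer_s:.1e} s ≈ {transfer_s / 86400:.0f} days")
# -> 1.6e+06 s ≈ 19 days
```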