
ESRF inaugurates new data centre

03-06-2011

The raw product of any experiment at the ESRF is data, and the amount of data that experiments produce continues to double roughly every 18 months. Ten years ago, storage requirements of 1 petabyte were difficult to imagine; today, the petabyte has become the standard unit for data-intensive facilities like the LHC (15 petabytes/year) or the ESRF (several petabytes/year).
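To illustrate what this growth rate means in practice, here is a minimal sketch based only on the 18-month doubling time quoted above; the starting volume and time span are illustrative assumptions, not ESRF figures.

```python
# Illustration of data volume doubling every 18 months (starting volume
# and time span are assumptions for illustration, not ESRF figures).

def projected_volume(start_pb: float, months: int, doubling_months: float = 18) -> float:
    """Volume after `months` months, doubling every `doubling_months` months."""
    return start_pb * 2 ** (months / doubling_months)

# e.g. 1 PB today grows to roughly 16 PB after 6 years
print(f"{projected_volume(1.0, 72):.1f} PB after 6 years")
```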


The challenge is to manage this flow of data efficiently, within a limited budget and in a properly dimensioned IT infrastructure. With an eye on the future, the ESRF has invested in a second data centre, also a deliverable of the Upgrade Programme, providing a high-quality environment for central data storage, compute power and network electronics, as well as the software needed to access and manage these resources.

The new data centre was inaugurated on 26 May 2011 by Jean Moulin, Chairman of the ESRF Council, Rafael Abela, Chairman of the ESRF Scientific Advisory Committee, and Francesco Sette, ESRF Director General.

Already today, the new data centre is equipped with state-of-the-art file servers capable of storing almost 1 petabyte, a tape-based archiving facility of several petabytes, compute clusters with a peak performance of 15 teraflops and an extensive 10 Gbit/s Ethernet infrastructure. This can easily be extended thanks to pre-installed power, cooling and networking resources.

This inauguration took place two years earlier than initially planned. In view of the urgent need for additional capacity, an earlier plan to host the new data centre in a new building was dropped in 2009 and the existing Central Building data centre was extended instead, at an equivalent cost of 2 M€. The main characteristics of the new centre are a floor surface of 300 m² and 370 kW of power with a 10-minute full-power UPS backup.

The works lasted 18 months and were finished on time and on budget. A unique feature was that all civil construction and installation work was carried out while the existing equipment in the same area remained in operation.

Today, the ESRF has an environment allowing a flexible response to the demands of the scientific community for storage, data analysis capacity and data backup, for many years to come.

 

Additional background information on the new data centre

The high density area

Many of our data analysis algorithms now run simultaneously on a large number of cores. Many-core CPUs are packaged in flat 1U boxes (also called pizza boxes) or blade systems in which the power supply and network are shared via the backplane. To cool such equipment, it is more efficient to bring chilled water close to the racks and to place a local heat exchanger next to the equipment. The gain in efficiency is up to 25% compared with classical air conditioning, where the entire room is cooled. Chilled water is supplied from the false floor and circulates through the cooling racks, where it meets the hot air from the inside of the cube. Cold air is blown out in front of the racks, where it is drawn back in by the fans of the equipment. The cooling power is adjusted automatically to the thermal load in the racks.
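As a back-of-the-envelope illustration of what "cooling power adjusted to the thermal load" involves, the sketch below estimates the chilled-water flow needed for one high-density rack. The rack load and water temperatures are assumptions chosen for illustration, not ESRF specifications.

```python
# Chilled-water flow estimate for an in-rack heat exchanger
# (rack load and water temperatures are illustrative assumptions).

SPECIFIC_HEAT_WATER = 4186   # J/(kg*K)
heat_load_kw = 20            # assumed thermal load of one high-density rack
supply_temp_c = 12           # assumed chilled-water supply temperature
return_temp_c = 18           # assumed return temperature

delta_t = return_temp_c - supply_temp_c
# Q = m_dot * c * delta_T  ->  m_dot = Q / (c * delta_T)
flow_kg_per_s = heat_load_kw * 1000 / (SPECIFIC_HEAT_WATER * delta_t)
flow_l_per_min = flow_kg_per_s * 60  # 1 kg of water is roughly 1 litre

print(f"~{flow_l_per_min:.0f} l/min of chilled water per {heat_load_kw} kW rack")
```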

Tape backup

The robotised tape library has been set up for the backup of scientific data. It can host up to 64 LTO tape drives and 8500 tapes. To load and unload tapes, the library features 8 robots (hands) and 2 lifts. The entire library is fault tolerant, and robots can be changed online.

The library in the new data centre has a sister library in the second computer room, situated in the Control Room building. The data copied to tape comes mainly from the disk storage systems located in the other computer room, which allows disaster recovery in case of a fire in one of the two rooms. Tape drives (and tapes) can be freely set up in either library. There are currently a total of 65 tape drives in the two libraries, managing nearly 6000 tapes with a total volume of about 4 petabytes. With the archiving of in-house research data, this volume will increase rapidly.
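A quick check of these figures, using only the numbers quoted above, gives the average amount of data held per tape:

```python
# Average fill per tape, derived from the figures quoted in the text.
total_volume_tb = 4 * 1000   # ~4 PB currently on tape
tapes_in_use = 6000          # tapes managed across the two libraries

print(f"~{total_volume_tb / tapes_in_use:.2f} TB per tape on average")  # ~0.67 TB
```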

There are 8 dedicated backup servers feeding the two libraries, connected to the tape drives via optical fibre links. The robotics are controlled by dedicated software (ACSLS) and communicate over the network. Some critical data is backed up twice, once in each library.

 

Robotised tape library for the backup of scientific data.

The rack infrastructure

All the new racks are already connected to the network and to the UPS, so they are ready to receive new servers. This kind of provisioning is very efficient and minimises the time needed to install new hardware.

In the past, network cables had to be connected by hand. Now this is reduced to patching at the network panel and a software operation to assign the port to the correct VLAN. Another advantage of the new standardised racks is that the power consumption can now be monitored in real time.
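As an example of what such a software operation can look like, here is a minimal sketch that renders the configuration lines assigning one rack port to a VLAN. The article does not name the switch vendor or tooling, so the IOS-style syntax, interface name and VLAN number are assumptions for illustration only.

```python
# Sketch: generate the configuration lines that would assign a rack port
# to a given VLAN on an IOS-style switch. Interface names and VLAN
# numbers are invented for illustration.

def access_port_config(interface: str, vlan: int, description: str) -> str:
    return "\n".join([
        f"interface {interface}",
        f" description {description}",
        " switchport mode access",
        f" switchport access vlan {vlan}",
    ])

print(access_port_config("GigabitEthernet1/0/12", 210, "rack-B07 new file server"))
```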

Cabling and cooling are clearly separated: network and electricity cables run above the racks for easy access, while the false floor is reserved for cooling.

For the power supply, the Canalis system was chosen to make future changes easier: new equipment can be connected or disconnected while the system remains live.

All equipment receives power from two separate power supplies with two redundant UPS in two separate rooms.

 

Rows of new racks ready to receive new servers.

Data centre network

Given the number of servers and storage systems in the previous data centre, a two-tier network architecture was sufficient. All devices of the former data centre were concentrated on a few network chassis. These chassis provided hardware redundancy and a certain level of fault tolerance, but with some important limitations: no live firmware updates were possible, because upgrades implied downtime of the network service, and new devices were connected one by one, which resulted in a "chaotic" growth of the network cabling. With the new design of the data centre network, these limitations are gone. The network architecture now follows a three-tier layout, which improves redundancy in terms of bandwidth, load balancing, fault tolerance and hot firmware updates.
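For illustration, here is a toy model of what such a three-tier layout with redundant uplinks looks like; only the "three tiers, fully redundant" idea comes from the text, and the switch names and counts are invented.

```python
# Toy model of a three-tier (core / aggregation / access) network with
# redundant uplinks. Switch names and counts are invented for illustration.

topology = {
    # access switch -> its two aggregation uplinks
    "access-rack-01": ["agg-1", "agg-2"],
    "access-rack-02": ["agg-1", "agg-2"],
    # aggregation switch -> its two core uplinks
    "agg-1": ["core-1", "core-2"],
    "agg-2": ["core-1", "core-2"],
}

# Every device below the core keeps at least two upstream paths, so one
# switch can be taken down (e.g. for a firmware update) without an outage.
assert all(len(uplinks) >= 2 for uplinks in topology.values())
print("every non-core switch has redundant uplinks")
```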

A full mesh network is now in place, and cabling is already available in advance at each rack end point. This results in a completely redundant path from every single server (up to 1056 × 1 Gbit/s copper ports for server connections and up to 144 × 10 Gbit/s fibre ports for storage systems). In addition, a large number of clients at beamlines can now be connected at high speed (72 × 10 Gbit/s ports are currently available, and the number of ports can easily be increased).
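A quick sum of the raw edge port capacity quoted above gives an idea of the scale; this sketch uses only the figures from the text and deliberately ignores oversubscription and uplink limits.

```python
# Raw edge port capacity implied by the figures in the text
# (oversubscription and uplink capacity are deliberately ignored).

server_gbps   = 1056 * 1    # 1 Gbit/s copper ports for servers
storage_gbps  = 144 * 10    # 10 Gbit/s fibre ports for storage systems
beamline_gbps = 72 * 10     # 10 Gbit/s ports for beamline clients

total = server_gbps + storage_gbps + beamline_gbps
print(f"{total} Gbit/s of raw edge capacity (~{total / 1000:.1f} Tbit/s)")
```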

 

Top image: Inauguration of the data centre: Cutting the ribbon are Rafael Abela (SAC Chairman), Jean Moulin (Council Chairman) and Francesco Sette (ESRF Director General).