Directly derived from the initial technical specifications, a core system has been built up since 1994 and is continuously growing. NICE consists of the following sub-systems:

  1. The High-speed backbone network
  2. The disk storage servers and their associated tape backup units
  3. The compute clusters
  4. The I/O workstations (now located on the Beamlines)

 

1) The High-speed backbone network

 

The traditional Ethernet network, with 10 Mbps shared by many nodes, was already saturated in the early 1990s. The 155 Mbps ATM network made it possible to set up NICE, but a migration path for the rest of the network soon proved essential to allow fast, virtually unlimited data transfer.

This is why, in 1999, we started deploying 100BASE-T switches with Gigabit inter-switch links, beginning with the beamline backbone network and gradually extending to all the ESRF buildings. Since 2005, 1000BASE-T switches have been operating on the Beamlines, and large multi-10GbE (10 Gigabit Ethernet) highways interconnect the core switches. The first private 10 Gigabit link from a Beamline to the storage servers was introduced in early 2008.

In 2013, the first long-distance 40 Gbps links were set up. Multiple 40 Gbps links now interconnect all the core switches. The storage system houses an internal InfiniBand network (56 Gbps per link).

 

Core device in the main network room

The site's connection to the Internet (via RENATER) is based on a dual 10 Gbit/s link, with redundant firewalls, routers and bandwidth controllers.

 

2) The disk storage systems and their associated tape backup units

The very first storage systems in NICE were RAID7 units from a small American company called Storage Computer. Their capacities ranged from 100 to 320 GigaBytes (GB) per system. All RAID7 systems were decommissioned in the course of 2000.

The next generation of storage systems was based on Sun multiprocessor computers. They provided excellent load stability and offered a high degree of flexibility. All Sun storage systems were decommissioned by the beginning of 2006.

 

In 2002 we switched to Network Appliance (NetApp) FAS940C NAS systems, then to FAS3050C models, then to the FAS6000 series with 10GbE attachments. The total disk capacity of NICE reached 410 TeraBytes (TB) at the end of 2008, then 750 TB in 2010, 1250 TB in 2012 and more than 2 PetaBytes (PB) in 2013. All systems operate in cluster mode for the highest availability.

 


Part of a NetApp disk storage system

As of 2013, a single 1 PB system (the /data/visitor single filesystem) allows our users to store experimental data without having to deal with "disk full" problems. Experimental data is automatically removed, typically one month after the experiment is completed.

In-house data, on the other hand, is kept indefinitely, and the Beamlines manage their own private disk space: a small part of the above 1 PB is used for in-house data, and more than 24 Dedicated Storage Servers (DSS) were introduced in 2013 and 2014. Each DSS is based on a Dell R720 server, provides 50 TB of storage and supports one or several Beamlines.

In 2014, a faster DSS was introduced: a DDN SFA12KX system running GPFS (a parallel file system developed by IBM) over InfiniBand and totalling 1 PB of disk storage. In 2015, a second identical system was added, increasing the total capacity to 2 PB. At the end of 2015, the /data/visitor filesystem was moved from the ageing NetApp system to the DDN/GPFS system.

In early 2016, a total of 4.3 PB of disk storage is available on NICE: 1 PB on NetApp, 1.3 PB on Dell DSS, and 2 PB on DDN/GPFS.

In spring 2016, the DDN/GPFS storage is being increased to 4 PB.

 

The backup systems are based on Linear Tape Open (LTO) technology, from LTO-1 to the latest LTO-5. Two large Oracle/Sun/StorageTek SL8500 tape libraries (see below) are currently in operation, one in each Data Center; for some uses, such as Permanent Data Archiving, data is duplicated on identical tapes in each library. Each library houses up to 8,500 tapes handled by up to 64 tape drives. The backup software in use is Atempo's Time Navigator (TiNa).

In early 2016, LTO technology was abandoned in favour of the proprietary T10000-D format: 8.5 TB per tape, aiming at a maximum of 71 PB per library.

 


One of the Oracle/Sun/StorageTek SL8500 tape libraries

 

3) The compute clusters

As of 2008, fifty HP ProLiant 145 G3 servers (dual quad-core AMD) and forty Sun V20Z, X2200 and X4100 servers participated in the compute clusters, running a large variety of scientific software. Between 2009 and 2012, four BullX blade clusters were added. Each BullX chassis can house up to 18 dual-CPU blades, or 9 dual-CPU+GPU blades. One of the BullX chassis additionally houses an InfiniBand switch for parallel computing (MPI). The CPUs in use are Intel Nehalem and Westmere 6-core models, and the GPUs are NVIDIA Fermi models. Each CPU core has between 2 and 8 GB of RAM.

In early 2013, the cluster was augmented with seven chassis of the Dell C8220 family: 84 CPU units (Intel Sandy Bridge 8-core) and 12 CPU + GPU (NVIDIA) units, all GPUs having InfiniBand interfaces. All CPU cores have 8 GB of RAM. Several months later, the Dell cluster was doubled by the addition of seven more chassis: 80 CPU units (Intel Westmere 10-core) and 16 CPU + GPU (NVIDIA) units.

After several other extensions in 2014 and 2015, a major increase is scheduled for spring 2016. 

 


The Compute clusters in the main Data Center (photo E. Eyer)

 


One of the Dell Blade clusters (photo E. Eyer)

Since January 2011, the compute clusters have been split into a frontend/desktop cluster called 'rnice' and a high-performance compute cluster that is only accessible through OAR, a resource manager and batch scheduler designed for large compute clusters. Users can only connect to 'rnice', from where they submit jobs to OAR.
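For readers unfamiliar with OAR, the sketch below shows what a job submission from 'rnice' might look like. It is only a minimal illustration wrapping the standard oarsub command; the script name, node count and walltime are illustrative assumptions, not an ESRF-specific recipe.

    # Minimal sketch (not the official ESRF workflow): submitting a batch job to OAR
    # from the 'rnice' frontend by wrapping the standard 'oarsub' command.
    # The resource request and script path below are illustrative assumptions.
    import subprocess

    def submit_oar_job(script="./my_analysis.sh", nodes=1, walltime="02:00:00"):
        """Ask OAR for `nodes` nodes during `walltime` and run `script` on them."""
        cmd = [
            "oarsub",
            "-l", f"nodes={nodes},walltime={walltime}",  # resource request
            script,                                       # command to run on the cluster
        ]
        # On success, oarsub typically prints the assigned job id (e.g. "OAR_JOB_ID=123456").
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        print(result.stdout)

    if __name__ == "__main__":
        submit_oar_job()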

 

4) I/O workstations

I/O workstations on the Beamlines have private high-speed links to NICE and allow our visiting scientists to write their data to removable media such as USB disks. Writing to several USB 3.0 disks in parallel is now possible.