
RESources

  • Deadlines:
    HPC/AI:       08/05/25
    DATA:       ~ 18/02/26
    Seminars: ~ 20/02/26
Supercomputing

HPC RESources

The supercomputing (HPC) resources consist of access to the supercomputers that make up the RES, enabling cutting-edge research projects in various branches of science.

The total capacity of the RES supercomputers exceeds 329 PFlop/s, providing over 90 PFlop/s exclusively for RES users.

These supercomputers may consist solely of processors (CPUs) or be hybrids combining processors and graphics accelerators (CPUs + GPUs). This makes it possible to address a wide range of challenges, from traditional supercomputing for complex calculations to the creation of digital twins that simulate and predict complete systems.

Measures of performance and computational capacity

There are several metrics to measure the computational capacity and performance of a supercomputer. Three of them are particularly useful: node hours, Flops, and Flop/s:

  • A node hour is equivalent to using 1 processing node in its entirety for 1 hour. These nodes can consist solely of CPUs or a combination of CPUs and GPUs.

    The computing cores of CPUs are not directly comparable to GPUs, and creating equivalents is non-trivial. For this reason, core hours have lost their utility as they do not account for accelerated systems, making node hours a more relevant metric for computational capacity.

    Since supercomputers have multiple nodes, they can be used simultaneously to achieve higher performance through parallel processing. For example, 432 node hours could mean 72 nodes of a supercomputer working simultaneously for 6 hours (a relationship sketched in the short example after this list).

  • Flops (floating-point operations) are calculations involving real numbers (numbers with a fractional part). Flops serve as a measure of the computational volume of a project and are particularly useful for estimating the size of a project independently of the supercomputer’s configuration.

    Flop/s, on the other hand, represent the number of floating-point operations per second a machine can perform, serving as a measure of the computational power of a supercomputer. For HPC services these are 64-bit Flop/s, while for AI services they are 32-bit Flop/s.


    These two measures are analogous to water volume and flow rate: Flops represent the total volume of water to be transported (the scientific project to be executed), while Flop/s represent the flow rate the pipe can sustain (the power of the supercomputer). A wider pipe (a more powerful supercomputer) will transport the water faster (execute the project more quickly).

    Like node hours, both metrics remain meaningful after the introduction of GPUs in supercomputers. They are even more advantageous because they do not depend on the hardware configuration of each machine’s nodes, only on the number of calculations per second the supercomputer can perform and the volume of the project.
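
A minimal sketch of how these metrics relate, using figures quoted in this document (432 node hours; the 249.44 PFlop/s machine of the example below); the function names are illustrative and not part of any RES tool:

    # Node hours: number of nodes used multiplied by wall-clock hours.
    def node_hours(nodes: int, hours: float) -> float:
        return nodes * hours

    # Project volume (Flop) divided by machine power (Flop/s) gives the run time in seconds,
    # just as water volume divided by flow rate gives the time needed to move the water.
    def runtime_seconds(project_flop: float, machine_flop_per_s: float) -> float:
        return project_flop / machine_flop_per_s

    print(node_hours(72, 6))                     # 432 node hours, as in the example above
    print(runtime_seconds(8.0e22, 249.44e15))    # ~320,700 s (~89 h) for an ~80,000 EFlop project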

Example

A medium/large-sized project on MareNostrum5 could amount to a total of 100,000 node hours. 

The machine has 1,120 compute nodes and a total performance of 249.44 PFlop/s according to the TOP500 benchmark, so each node has an average performance of about 222.71 TFlop/s. Therefore, a project of 100,000 node hours on MN5 Acc would correspond to:

100,000 node hours × 3,600 s/hour × 222.71 TFlop/s per node ≈ 80,177 EFlop

This equivalence allows estimating the computational effort required for a project simply by knowing the performance of its nodes. Moreover, it facilitates comparing this effort across different machines, which is particularly useful in a distributed infrastructure like the RES.
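
The same conversion can be written as a short script; the figures are those quoted in the example (1,120 nodes, 249.44 PFlop/s, 100,000 node hours), and the helper names are illustrative rather than part of any RES tool:

    # Average per-node performance, in TFlop/s (1 PFlop/s = 1,000 TFlop/s).
    def node_performance_tflops(total_pflops: float, num_nodes: int) -> float:
        return total_pflops * 1_000 / num_nodes

    # Total computational volume of a project, in EFlop (1 EFlop = 1,000,000 TFlop).
    def project_volume_eflop(node_hours: float, node_tflops: float) -> float:
        return node_hours * 3_600 * node_tflops / 1_000_000

    per_node = node_performance_tflops(249.44, 1_120)    # ~222.71 TFlop/s per node
    print(project_volume_eflop(100_000, per_node))       # ~80,177 EFlop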

Artificial Intelligence

AI RESources

Artificial Intelligence (AI) resources are a new series of resources offered by the RES, designed to boost research, development, and innovation (R&D&I) in Spain through the use of supercomputing applied to AI projects across various disciplines.

With the incorporation of these services, the RES not only enhances its traditional supercomputing and data exploitation offerings but also provides specialized access to AI tools, making essential capabilities available to the Spanish research community to address current scientific and technological challenges.

What do AI services include?

AI resources consist of highly specialized hardware and software resources designed to support researchers working with AI models:

  • Hardware: High-capacity processing systems, including hybrid nodes with multiple high-performance GPUs, capable of handling everything from small experiments to extensive training of complex AI models.
  • Specialized software: Users will have access to an updated set of tools and libraries for developing AI projects. These include essential ones such as TensorFlow, PyTorch, CUDA Toolkit, Pandas, XGBoost, Scikit-learn, Keras, or NumPy, as well as recommended ones like Theano, Numba, or CuPy (see the short example after this list).
  • Training and technical support: We also offer specialized training and tutorials for users, enabling them to maximize the use of these resources through continuous learning, with support for implementing AI tools in their projects. 
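
As an illustration of how this software stack is typically used, the short sketch below checks whether a GPU is visible to the job and runs a small tensor operation with PyTorch; it assumes PyTorch is already available in the user's environment and is not an official RES recipe:

    import torch

    # Select a GPU if one is visible to the job, otherwise fall back to the CPU.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    print(f"Running on: {device}")

    # A small matrix multiplication to confirm that the selected device works.
    a = torch.randn(1024, 1024, device=device)
    b = torch.randn(1024, 1024, device=device)
    print((a @ b).sum().item())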

 

Quantum Computing

QUANTUM RESources

The QUANTUM resources are Quantum Computing resources for the development and execution of quantum algorithms, integrating advanced technologies in a hybrid computing environment.

With the inclusion of these experimental-type resources, RES aims to expand its range of available computing resources and provide the European research community with more cutting-edge tools.

What do QUANTUM services include?

The currently available QUANTUM resources consist mainly of two types of quantum computers:

  • Digital Quantum Computers: Systems based on various quantum technologies, integrated with supercomputing infrastructures to facilitate the execution of hybrid quantum-classical algorithms.

  • Quantum Emulators: Platforms for quantum simulation via traditional computing that allow the development, testing, and optimization of quantum algorithms without the need for physical quantum hardware.
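
As a purely illustrative example of working with an emulator, the sketch below prepares a two-qubit Bell state and runs it on a classical simulator using Qiskit and its Aer backend; the document does not specify which quantum frameworks are provided, so Qiskit is only an assumed example:

    from qiskit import QuantumCircuit, transpile
    from qiskit_aer import AerSimulator

    # Build a two-qubit circuit that prepares a Bell state.
    qc = QuantumCircuit(2)
    qc.h(0)
    qc.cx(0, 1)
    qc.measure_all()

    # Emulate the circuit on a classical simulator; no quantum hardware is needed.
    sim = AerSimulator()
    counts = sim.run(transpile(qc, sim), shots=1024).result().get_counts()
    print(counts)    # Roughly half '00' and half '11'
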
Data Management

DATA RESources

The RES data resources are designed for the management, storage, and exploitation of large volumes of data, facilitating scientific research across multiple disciplines. This enables handling data from simulations, experiments, and advanced analyses, providing researchers with access to high-performance tools at the RES nodes that offer data management and exploitation services.

The RES nodes feature over 183 PB of storage capacity, of which more than 41 PB are allocated to RES users. This infrastructure not only ensures efficient storage but also provides fast and secure access, essential for complex scientific projects.

What do DATA services include?

Data resources encompass a complete storage and processing infrastructure designed to manage large volumes of information efficiently and securely:

  • Storage systems: Multilevel storage including disk storage for frequent access (both file- and object-based; see the illustrative sketch after this list) and magnetic tape storage for long-term conservation, backups, and efficient archiving of historical data.
  • Virtual infrastructure: Access to customizable virtual machines that allow the creation of specific environments for data processing and analysis, offering flexibility in resource configuration according to the needs of each project.
  • Complementary computing capacity: Limited computational resources for data processing-intensive tasks, enabling complex analyses, transformations, and aggregations of large datasets without the need to create a full project for HPC resources.
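
As a purely hypothetical illustration of object-based access, the sketch below uploads and lists objects through an S3-compatible interface with boto3; the document does not specify the actual storage interfaces, so the endpoint, bucket, and file names here are invented placeholders:

    import boto3

    # Hypothetical S3-compatible endpoint and bucket; not actual RES values.
    s3 = boto3.client("s3", endpoint_url="https://storage.example.org")

    # Upload a result file as an object and list the bucket contents.
    s3.upload_file("results.csv", "my-project-bucket", "runs/results.csv")
    for obj in s3.list_objects_v2(Bucket="my-project-bucket").get("Contents", []):
        print(obj["Key"], obj["Size"])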

 

Data Management Plan

Projects seeking access to the RES DATA resources must submit their own Data Management Plan (DMP) for evaluation.

The DATA resources offered by the RES are exclusively intended for data management and exploitation services, and not for mass storage without corresponding exploitation, which must be reflected in the DMP. 

The minimum size required to carry out a data project is 200 TB, in order to take full advantage of the capabilities of the RES nodes. Although no upper limit is stipulated, submitted projects typically do not exceed 1 PB.

Contact the RES

Any questions? Get in touch with us!

Contact form
