
How to use GPUs in your research through SLICES testbeds and services? An interview with Peter van Daele and Brecht Vermeulen from Imec

With the rise of AI in recent years, driven by products such as ChatGPT, you may want to start using AI in your own products and research as well. SLICES offers a number of testbeds and services that can help you with the early steps; this article introduces them briefly.

At the SLICES portal (https://portal.slices-sc.eu/explore/directory/) you can easily look up which infrastructure is available, including GPUs. Of course, you can also easily sign up for an account to use the infrastructure and services.

Figure 1: SLICES testbed directory at the portal

 

If you select the category ‘GPU’, you will see that four infrastructures currently offer dedicated GPUs for your research:

  • The Virtual Wall (https://doc.ilabt.imec.be/ilabt/virtualwall/) and Grid5000 (https://www.grid5000.fr/w/Users_Home) offer bare metal access to GPUs: you get SSH access to a Linux server that contains GPUs. You have to install all libraries, drivers and software yourself, but this of course gives you the most flexibility.
  • GPULab (https://doc.ilabt.imec.be/ilabt/gpulab/) offers a service in which you can easily launch jobs (Docker containers) with a pre-defined software stack (or your own container) and request a specific number of GPUs and CPU cores and a specific amount of RAM.
  • GPU JupyterHub (https://doc.ilabt.imec.be/ilabt/jupyter) offers a Jupyter notebook environment with predefined notebooks. This is a very easy, fully web-based first step into using GPUs with smaller datasets.

If you are just starting with AI/machine learning, the Jupyter notebooks are the easiest way in. You can choose from pre-defined stacks such as PyTorch, TensorFlow, Data Science, R or Spark. A Jupyter notebook is fully web-based, and you can interactively execute and debug code and visualize datasets.

Figure 2: A Jupyter notebook, an easy way to take your first steps in AI/machine learning
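
As a concrete illustration, here is a minimal sketch of the kind of first cell you might run, assuming you picked the pre-defined PyTorch stack: it checks that a GPU is visible and runs a small computation on it.

# First notebook cell: confirm a GPU is available and use it for a small test.
import torch

print(torch.cuda.is_available())        # True if the notebook got a GPU
print(torch.cuda.get_device_name(0))    # prints the GPU model

x = torch.randn(1024, 1024, device="cuda")  # allocate a matrix on the GPU
y = x @ x                                   # matrix multiplication runs on the GPU
print(y.mean().item())

From there you can load your (smaller) dataset and iterate interactively in further cells.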

For larger datasets or more complex code (or non-Python code), GPULab offers more flexibility than the Jupyter notebooks. Here too you start from a Docker container, but it can contain extra software/binaries, and it typically runs non-interactively, so it can run for a longer duration (e.g. hours or days). You can also launch multiple jobs in parallel.

Figure 3: Example JSON job description for GPULab
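
To give an idea of what such a job could run, here is a minimal sketch of a non-interactive training script that a GPULab container might execute; the model, the random "data" and the file names are purely illustrative and not tied to any specific GPULab stack.

# train.py - sketch of a non-interactive script for a GPULab job.
# The model, "data" and paths are placeholders; adapt them to your own container.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(100):                              # can run unattended for hours
    inputs = torch.randn(64, 784, device=device)      # stand-in for a real dataset
    targets = torch.randint(0, 10, (64,), device=device)
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}", flush=True)  # ends up in the job log

torch.save(model.state_dict(), "model.pt")            # save results before the job finishes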

However, you may want to do research on bare metal hardware, or have full flexibility over software and driver versions. For those cases, the Virtual Wall and Grid5000 infrastructures are ideally suited, and they can be configured with the jFed tool (https://jfed.ilabt.imec.be). Here, you end up with SSH root access to a bare metal server (which can contain GPUs if needed) and it’s up to you to configure everything.

Figure 4: Use jFed to reserve and provision bare metal infrastructure
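
Once you have installed the NVIDIA driver (and optionally a framework) on such a node yourself, a quick sanity check like the sketch below, assuming Python is available on the server, tells you whether the GPUs are actually usable.

# check_gpu.py - quick sanity check on a bare metal node you configured yourself.
import shutil
import subprocess

# nvidia-smi ships with the NVIDIA driver; if it is missing, the driver
# is not installed (correctly) yet.
if shutil.which("nvidia-smi"):
    subprocess.run(["nvidia-smi"], check=True)
else:
    print("nvidia-smi not found - install the NVIDIA driver first")

# Optional: if you also installed PyTorch, confirm it can see the GPUs.
try:
    import torch
    print("GPUs visible to PyTorch:", torch.cuda.device_count())
except ImportError:
    print("PyTorch is not installed on this node (yet)")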

 

In this article we gave a brief overview of the different ways of using GPUs on the SLICES infrastructure. All of this can be done with the same account from the SLICES portal (https://portal.slices-sc.eu), and signing up for an account is free.