Technical issues

This document explains how to set up and use some container systems (Singularity, Docker), and tries to solve some technical issues which can occur when using them.

Home directory and configuration files in casa-distro containers

The virtual machine variant of casa-distro images (VirtualBox) is self-contained and uses an internal user (brainvisa in BrainVISA distributions). The remainder of this section thus applies to the container-based variants of casa-distro (Singularity and Docker).

To run the container, casa-distro (any of the bv, casa_distro and casa_distro_admin commands) uses an environment directory on the host filesystem. It is used to read configuration before starting the container, and to store files which may be shared between the container and the host filesystem.
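
As an illustration, here is a partial sketch of the environment directory layout, limited to the entries mentioned in this document (the exact content may vary between casa-distro versions and environment types):

    <environment_dir>/
        conf/
            casa_distro.json    # environment configuration (see below)
        home/                   # container user home directory (in user environments)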

The environment directory may be a personal user installation (if a user installs BrainVISA, or a developer environment, for themselves), which we call a user environment, or a shared environment (typically, an administrator installs BrainVISA for all users on a network filesystem). In the first case the user has full access to the environment files (they own the directory and files), but in the second case the shared environment directory and files belong to the administrator, and are read-only for the user.

Configuration files

A first configuration file is read by casa-distro, located on the host filesystem at the following location:

<environment_dir>/conf/casa_distro.json

which, inside the container, is seen as:

/casa/host/conf/casa_distro.json

This environment file is complemented by a user file, located in the host system user directory, at the following location:

$HOME/.config/casa-distro/casa_distro_3.json

The mount points configured in the bv graphical tool are saved in this latter file, which is shared across all environments run by the user.
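
Both files are JSON files and can be inspected from the host, for example:

    # per-environment configuration (seen as /casa/host/conf/casa_distro.json in the container)
    cat <environment_dir>/conf/casa_distro.json
    # per-user configuration, shared across all environments run by the user
    cat "$HOME/.config/casa-distro/casa_distro_3.json"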

Home directory

Containers run with the same user as the host system user, in order to have the same permissions and identity, especially regarding file access and ownership. However, they run with a separate user home directory. The reason is to avoid sharing configurations and paths between the host system and the container, which may be incompatible.

  • In “user environments” (those owned by the user, with read-write permissions on the environment directory), the container user home directory is located inside the environment directory. On the host side, this is:

    <environment_dir>/home/
    

    Inside the container, the same directory is seen under:

    /casa/host/home
    
  • In “shared environments”, the user cannot write in the environment directory, thus the user home directory has to be outside of it. casa-distro uses the following location rule:

    $HOME/.local/share/casa-distro/<environment_full_dir>
    

    where <environment_full_dir> is obtained by replacing, in the full path of the environment directory, directory separators (/ on Unix systems or \ on Windows systems) with underscores (_). For example (see also the shell sketch after this example):

    /opt/brainvisa/brainvisa-5.0.0  ->  $HOME/.local/share/casa-distro/opt_brainvisa_brainvisa-5.0.0
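
The rule can be reproduced with a small shell sketch (an illustration only, assuming a Unix host; note that the leading / is dropped, so the result does not start with an underscore):

    # reproduce the location rule described above
    env_dir=/opt/brainvisa/brainvisa-5.0.0
    full_dir=$(echo "${env_dir#/}" | tr '/' '_')
    echo "$HOME/.local/share/casa-distro/$full_dir"
    # -> $HOME/.local/share/casa-distro/opt_brainvisa_brainvisa-5.0.0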
    

Containers and distributed execution

BrainVISA can perform distributed execution, using Soma-Workflow.

In an installed environment, commands are also available via “run scripts”, located in <environment>/host/bin/. If this directory is in the PATH environment variable of the computing resource nodes, then the commands will run the container transparently.
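
For example, a line like the following in the shell profile of the computing resource accounts should be enough (the <environment> path is the installation directory, as above):

    # put the run scripts in the PATH so that commands transparently run the container
    export PATH="<environment>/host/bin:$PATH"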

Alternatively, Soma-Workflow 3.1 adds support for spawning Docker or Singularity (or casa_distro) containers in remote processing. However, it needs some additional configuration on the server side to specify how to run the containers.

For commands run directly through the python command, more work will be required, because the system python is, of course, not overridden by the BrainVISA/Casa run scripts.

Please read Soma-Workflow documentation about it.

Remember that software running this way lives in a container, which is more or less isolated from the host system. To access data, casa_distro will likely need additional directory mount options. They can be specified on the casa_distro command line, or in the container_options item of <casa_distro_environment>/host/conf/casa_distro.json.
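
For instance, an additional data directory could be mounted like this (the /data/study path below is just a hypothetical example, using Singularity bind syntax, and assuming container_options accepts the same content on the command line as in the configuration file, as this section suggests):

    # run a command with an extra mount point passed to the container
    casa_distro run container_options='--bind /data/study:/data/study' brainvisa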

Developing in containers

See also the BrainVisa developers site: https://brainvisa.github.io/

Using Git

Both git and svn are used in BrainVISA project sources. svn will probably be completely replaced with git.

git URLs use https by default. This is OK for anonymous download and update, but https requires manual authentication for each push, which is painful. If you have a GitHub account, you can use ssh with an ssh key instead. See https://brainvisa.github.io/contributing.html
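
For an existing clone, the remote URL can be switched from https to ssh like this (the repository path below is a placeholder):

    # replace the https URL of the 'origin' remote with its ssh equivalent
    git remote set-url origin git@github.com:brainvisa/some-project.git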

Once this is done, git will automatically use ssh. But then ssh needs to be working…

Using ssh

git and ssh may be used either on the host side (since sources are actually stored on the host filesystem), or within the container. As users may have ssh keys and have already registered them in GitHub, they will want to reuse their host ssh keys.

On Linux (or Mac) hosts, it is possible.

Singularity 3 does not allow mounting the host .ssh directory into the already mounted container home directory. So there are two other options:

  1. copy the host .ssh directory into the container home:

    cp -a "/host/$CASA_HOST_HOME/.ssh" ~/
    

    But copying private ssh keys is not recommended for security reasons.

  2. use an ssh agent:
    • install keychain. On Debian-based Linux distributions, this is:

      sudo apt-get install keychain
      
    • add in your host $HOME/.bash_profile file:

      eval $(keychain --eval --agents ssh id_rsa)
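
Once the agent is running, you can check from within the container that the key is usable, for instance with the standard GitHub connection test:

    # should report a successful authentication without asking for a passphrase
    ssh -T git@github.com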
      

Troubleshooting

Typical problems are listed here.

OpenGL is not working, or is slow

with docker

Several options are needed to enable display and OpenGL. Normally casa_distro tries to set them up and should do the best it can.

On machines with nvidia graphics cards and nvidia proprietary drivers, casa_distro will add options to mount the host system drivers and OpenGL libraries into the container in order to have hardware 3D rendering.

Options are set up in the casa_distro.json file, so you can check and edit them. Note that the detection of nvidia drivers is done on the host machine at the time of build workflow creation: if the build workflow is shared across several machines on a network, this config may not suit all machines running the container.

However, it does not seem to work when ssh connections and remote display are involved.

with singularity

There are several ways to use OpenGL in singularity, depending on the host system, the 3D hardware, the X server, the type of user/ssh connection.

Our container images include a software-only Mesa implementation of OpenGL, which can be used if other solutions fail.

Casa-distro tries to use “reasonable” settings but cannot always detect the best option. Thus the user can control the behavior using the opengl option in casa_distro run, casa_distro shell, casa_distro mrun and casa_distro bv_maker subcommands. This option can take several values: auto, container, nv, or software. The default is, of course, auto.

  • auto: performs auto-detection: same as nv if an NVidia device is detected on a host linux system, otherwise same as container, unless we detect a case where that is known to fail (in which case software is used).
  • container: passes no special options to Singularity: the Mesa installed in the container is used.
  • nv: tries to mount the proprietary NVidia driver of the host (linux) system in the container.
  • software: sets LD_LIBRARY_PATH to use a software-only OpenGL rendering. This solution is the slowest, but is a fallback when no other solution works.

There are cases where the nvidia option makes things worse (see ssh connections below). If you ever need to disable the nvidia option, you can add an option opengl=software or opengl=container to run, shell and other subcommands:

casa_distro run gui=1 opengl=software glxinfo

If it is OK, you can set this option in the build workflow casa_distro.json config, under the "container_options" key:

{
    "casa_distro_compatibility": "3",
    "name": "brainvisa-5.0",
    "image": "/home/bilbo/casa_distro/brainvisa-5.0.sif",
    "type": "user",
    "system": "ubuntu-18.04",
    "container_type": "singularity",
    "distro": "brainvisa",
    "container_options": [
        "--softgl"
    ],
    # ...
}

Via an ssh connection:

same host, different user:
xhost + must have been used on the host system. It works, as long as the XAUTHORITY environment variable points to the .Xauthority file from the host user's home directory.

different host:

I personally could not make it work using the nv option. But actually outside of casa-distro or any container, it doesn’t work either. Remote GLX rendering has always been a very delicate thing…

It works for me using the software Mesa rendering (slow). So at this point, using casa_distro actually makes it possible to render OpenGL when the host system cannot (or not directly)…

On MacOS systems

Singularity is not working, it’s just doing nothing

Singularity for Mac is available as a beta at the time this document is written (but with no updates or news in more than a year). It somewhat works, but we sometimes ended up with a “silent” virtual machine which seems to do just nothing. It should work in principle, and sometimes does ;)

We experienced this behaviour on MacOS 10.11 using Singularity Desktop 3.3-beta for Mac. We had to upgrade the system (to 10.15) and then it worked. But then, after a few days, it became silent again for certain users, using certain images… but it still worked for our BrainVISA images…

GUI is not working in singularity

Graphical commands (brainvisa, anatomist, and others…) should run through an X11 server. XQuartz is installed on MacOS systems, but needs to be started, and configured a bit.

  • open XQuartz, either using the desktop / finder icon, or by running an X command such as:

    xhost +
    
  • in the XQuartz preferences menu, go to “security” and check the option to enable network connections (tcp) to the X server

  • quit the server: it needs to be restarted

  • run
    xhost +
    

    to enable other users / apps to use the graphical server (this will start XQuartz, if not already running). Note that this command needs to be run again each time the XQuartz server is stopped / restarted.

  • You should use the opengl=software option in casa_distro, otherwise 3D will likely crash the programs.

  • now graphical applications should run inside singularity containers. 3D hardware is not used, however: rendering uses software rendering, so it is not fast.
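
Putting it together, a typical session might look like this (commands taken from the options described above):

    # allow connections to the X server (starts XQuartz if needed)
    xhost +
    # run a graphical program with software OpenGL rendering
    casa_distro run gui=1 opengl=software anatomist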

VirtualBox images are crashing when booting

I personally had trouble getting the VirtualBox image to actually run on MacOS 10.15. The virtual machine consistently crashed at boot time. After inspecting the logs I found out that the sound card support might be the cause, and I had to use a “fake sound device” in the virtualbox image settings. Then it appeared that all graphics display was notably slow (both 2D and 3D), whatever the video / accelerated 3D support options. And icons and fonts in the virtual machine were microscopic, almost impossible to read, and difficult (if possible at all) to configure in the linux desktop. The “zoom factor x2” option in virtualbox was very handy for that, but if I understand correctly, it reduced the actual resolution by a factor of 2. Apart from these limitations, the software was running.

We have a working install procedure from one of our friends using a Mac here:

On a MacBook Pro, with MacOS 10.15.7 and 16 GB of memory:

  • Install VirtualBox v 6.1

  • Import the Brainvisa VM

  • Disable sound (fake sound device)

  • Guest additions: run the Linux additions after mounting the CD and opening its contents.

  • Shut down the virtual machine, go to its configuration, and in the ‘Display’ section, choose ‘Graphics Controller: VMSVGA’, tick ‘Activate 3D acceleration’ and increase the ‘Video memory’ to 128 MB.

  • set up a shared directory to mount the computer filesystem (my user directory). For this I went into the ‘shared directory’ section of the VM configuration, and asked to have /media/olivier point to my home directory (on my Mac: /Users/olivier).

  • There is a permission issue to fix before accessing /media/olivier. It is fixed by typing the following into a terminal:

    sudo usermod -aG vboxsf $(whoami)
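    # this adds the current user to the vboxsf group; it takes effect after the reboot in the next step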
    
  • reboot the VM.

  • there is still a keyboard mapping issue; it can probably be fixed in the linux desktop config somewhere.

Good to go!

On Windows systems

Installing Singularity on Windows

  • Singularity may be a bit touchy to install on Windows: it needs Windows 10 with the Windows Subsystem for Linux (WSL2), plus other internal options (Hyper-V and related features). It’s possible, but not easy.
  • Once singularity is working, an X server must be installed in order to run graphical programs. Several exist for Windows, several of them free, but most do not support hardware-accelerated 3D. Xming supports hardware acceleration, but has gone commercial: the latest free implementation was released in 2016, and seems to work. Microsoft is possibly working on another implementation.
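
As a hedged illustration (this is a commonly used trick, not specific to casa-distro), with some X servers on Windows the DISPLAY variable has to be set manually inside WSL2 before graphical programs can connect; the address below is the Windows host as seen from WSL2, and may need adapting to your X server configuration:

    # point DISPLAY at the Windows host (WSL2 reads its DNS server from resolv.conf)
    export DISPLAY=$(awk '/nameserver/ {print $2; exit}' /etc/resolv.conf):0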