PyAims tutorial : programming with AIMS in Python language

This tutorial should work with Python 2 (2.6 or higher), and is also compatible with Python 3 (if pyaims is compiled in python3 mode).

This is a Jupyter/IPython notebook:

pyaims_tutorial_nb.ipynb

AIMS is a C++ library, but has python language bindings: PyAIMS. This means that the C++ classes and functions can be used from python. This has many advantages compared to pure C++:

  • Writing python scripts and programs is much easier and faster than C++: there is no tedious, lengthy compilation step.

  • Scripts are more flexible, can be modified on-the-fly, etc.

  • It can be used interactively in a python interactive shell.

  • As pyaims is actually C++ code called from python, it is still fast to execute complex algorithms. There is obviously an overhead to call C++ from python, but once in the C++ layer, it is C++ execution speed.

A few examples of how to use and manipulate the main data structures will be shown here.

The data for the examples in this section can be downloaded here: https://brainvisa.info/download/data/test_data.zip. To use the examples directly, go to the directory where this archive was uncompressed, and run ipython from there. A cleaner alternative, especially if you have no write access to this data directory, is to make a symbolic link to the data_for_anatomist subdirectory:

cd $HOME
mkdir bvcourse
cd bvcourse
ln -s <path_to_data>/data_for_anatomist .
ipython

The same setup can be done in python:

To work smoothly with python2 or python3, let’s use print():

[1]:
from __future__ import print_function
import sys
print(sys.version_info)
sys.version_info(major=3, minor=10, micro=6, releaselevel='final', serial=0)
[2]:
import six
from six.moves.urllib.request import urlopen
import zipfile
import os
import os.path
import tempfile
# let's work in a temporary directory
tuto_dir = tempfile.mkdtemp(prefix='pyaims_tutorial_')
# use test_data.zip if it is already in the current directory,
# otherwise fetch it from the server
older_cwd = os.getcwd()
test_data = os.path.join(older_cwd, 'test_data.zip')
print('old cwd:', older_cwd)
if not os.path.exists(test_data):
    print('downloading test_data.zip...')
    f = urlopen('https://brainvisa.info/download/data/test_data.zip')
    test_data = os.path.join(tuto_dir, 'test_data.zip')
    open(test_data, 'wb').write(f.read())
    f.close()
print('test_data:', test_data)
os.chdir(tuto_dir)
f = zipfile.ZipFile(test_data)
f.extractall()
del f
print('we are working in:', tuto_dir)
old cwd: /casa/host/src/aims/aims-free/5.1/pyaims/doc/sphinx
downloading test_data.zip...
test_data: /tmp/pyaims_tutorial_r405x61j/test_data.zip
we are working in: /tmp/pyaims_tutorial_r405x61j

Using data structures

Module importation

In python, the aimsdata library is available as the soma.aims module.

[3]:
import soma.aims
# the module is actually soma.aims:
vol = soma.aims.Volume(100, 100, 100, dtype='int16')

or:

[4]:
from soma import aims
# the module is available as aims (not soma.aims):
vol = aims.Volume(100, 100, 100, dtype='int16')
# in the following, we will be using this form because it is shorter.

IO: reading and writing objects

Reading operations are accessed via a single soma.aims.read() function, and writing through a single soma.aims.write() function. The soma.aims.read() function reads any object from a given file name, in any supported file format, and returns it:

[5]:
from soma import aims
obj = aims.read('data_for_anatomist/subject01/subject01.nii')
print(obj.getSize())
obj2 = aims.read('data_for_anatomist/subject01/Audio-Video_T_map.nii')
print(obj2.getSize())
obj3 = aims.read('data_for_anatomist/subject01/subject01_Lhemi.mesh')
print(len(obj3.vertex(0)))
assert(obj.getSize() == [256, 256, 124, 1])
assert(obj2.getSize() == [53, 63, 46, 1])
assert(obj3.size() == 1 and len(obj3.vertex(0)) == 33837)
[256, 256, 124, 1]
[53, 63, 46, 1]
33837

The returned object can have various types according to what is found in the disk file(s).

Writing is just as easy. The file name extension generally determines the output format. An object read from a given format can be re-written in any other supported format, provided the format can actually store the object type.

[6]:
from soma import aims
obj2 = aims.read('data_for_anatomist/subject01/Audio-Video_T_map.nii')
aims.write(obj2, 'Audio-Video_T_map.ima')
obj3 = aims.read('data_for_anatomist/subject01/subject01_Lhemi.mesh')
aims.write(obj3, 'subject01_Lhemi.gii')

Exercise

Write a little file format conversion tool
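A minimal solution sketch, using only the aims.read() and aims.write() functions shown above (the convert() helper and the command-line interface are names introduced here for illustration):

```python
#!/usr/bin/env python
# A possible solution for the conversion tool exercise: the input is read
# in whatever supported format it is stored in, and the output format is
# determined by the output file name extension.
import sys


def convert(input_filename, output_filename):
    """Read an object in any supported format and rewrite it in the
    format implied by the extension of output_filename."""
    # imported lazily so the sketch can be inspected without pyaims installed
    from soma import aims
    obj = aims.read(input_filename)
    aims.write(obj, output_filename)


if __name__ == '__main__':
    convert(sys.argv[1], sys.argv[2])
```

For instance, running it with subject01.nii and subject01.ima as arguments would convert the NIfTI volume to GIS format.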

Volumes

Volumes are array-like containers of voxels, plus a set of additional information kept in a header structure. In AIMS, the header structure is generic and extensible, and does not depend on a specific file format. Voxels may have various types, so a specific type of volume should be used for a specific type of voxel. The type of voxel has a code that is used to suffix the Volume type: soma.aims.Volume_S16 for signed 16-bit ints, soma.aims.Volume_U32 for unsigned 32-bit ints, soma.aims.Volume_FLOAT for 32-bit floats, soma.aims.Volume_DOUBLE for 64-bit floats, soma.aims.Volume_RGBA for RGBA colors, etc.

Building a volume

[7]:
# create a 3D volume of signed 16-bit ints, of size 192x256x128
vol = aims.Volume(192, 256, 128, dtype='int16')
# fill it with zeros
vol.fill(0)
# set value 12 at voxel (100, 100, 60)
vol.setValue(12, 100, 100, 60)
# get value at the same position
x = vol.value(100, 100, 60)
print(x)
assert(x == 12)
12
[8]:
# set the voxel size
vol.header()['voxel_size'] = [0.9, 0.9, 1.2, 1.]
print(vol.header())
assert(vol.header() == {'sizeX': 192, 'sizeY': 256, 'sizeZ': 128, 'sizeT': 1, 'voxel_size': [0.9, 0.9, 1.2, 1],
                        'volume_dimension': [192, 256, 128, 1]})
{ 'volume_dimension' : [ 192, 256, 128, 1 ], 'sizeX' : 192, 'sizeY' : 256, 'sizeZ' : 128, 'sizeT' : 1, 'voxel_size' : [ 0.9, 0.9, 1.2, 1 ] }

3D volume: value 12 at voxel (100, 100, 60)

Basic operations

Whole volume operations:

[9]:
# multiplication, addition etc
vol *= 2
vol2 = vol * 3 + 12
print(vol2.value(100, 100, 60))
vol /= 2
vol3 = vol2 - vol - 12
print(vol3.value(100, 100, 60))
vol4 = vol2 * vol / 6
print(vol4.value(100, 100, 60))
assert(vol2.value(100, 100, 60) == 84)
assert(vol3.value(100, 100, 60) == 60)
assert(vol4.value(100, 100, 60) == 168)
84
60
168

Voxel-wise operations:

[10]:
# fill the volume with the distance to voxel (100, 100, 60)
vs = vol.header()['voxel_size']
pos0 = (100 * vs[0], 100 * vs[1], 60 * vs[2]) # in millimeters
for z in range(vol.getSizeZ()):
    for y in range(vol.getSizeY()):
        for x in range(vol.getSizeX()):
            # get current position in an aims.Point3df structure, in mm
            p = aims.Point3df(x * vs[0], y * vs[1], z * vs[2])
            # get position relative to pos0, in mm
            p -= pos0
            # distance: norm of vector p
            dist = int(round(p.norm()))
            # set it into the volume
            vol.setValue(dist, x, y, z)
print(vol.value(100, 100, 60))
# save the volume
aims.write(vol, 'distance.nii')
assert(vol.value(100, 100, 60) == 0)
0

Now look at the distance.nii volume in Anatomist.

Distance example

Exercise

Make a program which loads the image data_for_anatomist/subject01/Audio-Video_T_map.nii and thresholds it so as to keep values above 3.

[11]:
from soma import aims
vol = aims.read('data_for_anatomist/subject01/Audio-Video_T_map.nii')
print(vol.value(20, 20, 20) < 3. and vol.value(20, 20, 20) != 0.)
assert(vol.value(20, 20, 20) < 3. and vol.value(20, 20, 20) != 0.)
for z in range(vol.getSizeZ()):
    for y in range(vol.getSizeY()):
        for x in range(vol.getSizeX()):
            if vol.value(x, y, z) < 3.:
                vol.setValue(0, x, y, z)
print(vol.value(20, 20, 20))
aims.write(vol, 'Audio-Video_T_thresholded.nii')
assert(vol.value(20, 20, 20) == 0.)
True
0.0

Thresholded Audio-Video T-map

Exercise

Make a program to downsample the anatomical image data_for_anatomist/subject01/subject01.nii, keeping one voxel out of two in every direction.

[12]:
from soma import aims
vol = aims.read('data_for_anatomist/subject01/subject01.nii')
# allocate a new volume with half dimensions
vol2 = aims.Volume(vol.getSizeX() // 2, vol.getSizeY() // 2, vol.getSizeZ() // 2, dtype='DOUBLE')
print(vol2.getSizeX())
assert(vol2.getSizeX() == 128)
# set the voxel size to twice what it was in vol
vs = vol.header()['voxel_size']
vs2 = [x * 2 for x in vs]
vol2.header()['voxel_size'] = vs2
for z in range(vol2.getSizeZ()):
    for y in range(vol2.getSizeY()):
        for x in range(vol2.getSizeX()):
            vol2.setValue(vol.value(x*2, y*2, z*2), x, y, z)
print(vol.value(100, 100, 40))
print(vol2.value(50, 50, 20))
aims.write(vol2, 'resampled.nii')
assert(vol.value(100, 100, 40) == 775)
assert(vol2.value(50, 50, 20) == 775.)
128
775
775.0

Downsampled anatomical image

The first thing that comes to mind when running these examples is that they are slow. Indeed, python is an interpreted language, and loops in any interpreted language are slow. In addition, accessing each voxel of the volume individually has the overhead of python/C++ bindings communication. The conclusion is that this kind of example is probably a bit too low-level, and should be done, when possible, by compiled libraries or specialized array-handling libraries. This is the role of numpy.
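The difference can be felt with plain numpy arrays alone (a generic illustration, not pyaims-specific): the same fill operation is run once with explicit python loops and once as a single vectorized call.

```python
import time

import numpy

arr = numpy.zeros((50, 50, 50), dtype=numpy.int16)

# explicit python loops: every element access goes through the interpreter
t0 = time.time()
for z in range(arr.shape[2]):
    for y in range(arr.shape[1]):
        for x in range(arr.shape[0]):
            arr[x, y, z] = 1
loop_time = time.time() - t0

# the same operation as one vectorized numpy call, executed in compiled code
t0 = time.time()
arr[...] = 2
vectorized_time = time.time() - t0

print('loops: %.4fs, vectorized: %.6fs' % (loop_time, vectorized_time))
```

The vectorized version typically runs orders of magnitude faster, which is why the examples below delegate the loops to numpy.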

Accessing AIMS volume voxels as numpy arrays is supported:

[13]:
import numpy
vol.fill(0)
arr = numpy.asarray(vol)
# or:
arr = vol.np
# set value 100 in a whole sub-volume
arr[60:120, 60:120, 40:80] = 100
# note that arr is a shared view to the volume contents,
# modifications will also affect the volume
print(vol.value(65, 65, 42))
print(vol.value(65, 65, 30))
aims.write(vol, "cube.nii")
assert(vol.value(65, 65, 42) == 100)
assert(vol.value(65, 65, 30) == 0)
100
0

We can also use numpy accessors and slicing features directly on a Volume object:

[14]:
vol.fill(0)
vol[60:120, 60:120, 40:80] = 100
print(vol.value(65, 65, 42))
print(vol.value(65, 65, 30))
assert(numpy.all(vol[65, 65, 42, 0] == 100))
assert(numpy.all(vol[65, 65, 30, 0] == 0))
100
0

3D volume containing a cube

Now we can re-write the thresholding example using numpy:

[15]:
from soma import aims
vol = aims.read('data_for_anatomist/subject01/Audio-Video_T_map.nii')
arr = vol.np
arr[numpy.where(arr < 3.)] = 0.
print(vol.value(20, 20, 20))
aims.write(vol, 'Audio-Video_T_thresholded2.nii')
assert(vol.value(20, 20, 20) == 0)
0.0

Here, arr < 3. returns a boolean array with the same size as arr, and numpy.where() returns arrays of coordinates where the specified condition is true.
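The same masking logic can be seen on a small plain numpy array (the values and shape here are arbitrary):

```python
import numpy

arr = numpy.array([[1., 4., 2.],
                   [5., 0., 3.]])

# boolean array of the same shape, True where the condition holds
mask = arr < 3.
print(mask)

# numpy.where() turns the mask into per-axis coordinate arrays
rows, cols = numpy.where(mask)
print(rows, cols)  # coordinates of the values below 3

# assigning through the mask (or through the coordinates) zeroes those values
arr[mask] = 0.
print(arr)
```

Indexing with the boolean mask directly (arr[mask] = 0.) is equivalent to indexing with the coordinate arrays from numpy.where(), and is usually the more concise form.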

The distance example, using numpy, would look like the following:

[16]:
from soma import aims
import numpy
vol = aims.Volume(192, 256, 128, 'S16')
vol.header()['voxel_size'] = [0.9, 0.9, 1.2, 1.]
vs = vol.header()['voxel_size']
pos0 = (100 * vs[0], 100 * vs[1], 60 * vs[2]) # in millimeters
# build arrays of coordinates for x, y, z
x, y, z = numpy.ogrid[0.:vol.getSizeX(), 0.:vol.getSizeY(), 0.:vol.getSizeZ()]
# get coords in millimeters
x *= vs[0]
y *= vs[1]
z *= vs[2]
# relative to pos0
x -= pos0[0]
y -= pos0[1]
z -= pos0[2]
# get norm, using numpy arrays broadcasting
vol[:, :, :, 0] = numpy.sqrt(x**2 + y**2 + z**2)

print(vol.value(100, 100, 60))
assert(vol.value(100, 100, 60) == 0)

# and save result
aims.write(vol, 'distance2.nii')
0

This example looks a bit trickier, since we must build the coordinate arrays, but it is far faster to execute, because all loops are executed in compiled numpy routines. One interesting thing to note is that this code uses the famous “array broadcasting” feature of numpy, where arrays of heterogeneous sizes can be combined, and the “missing” dimensions are extended.
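Broadcasting itself can be demonstrated with tiny arrays (a 2D, pyaims-independent illustration): numpy.ogrid returns arrays with size-1 dimensions, which are virtually extended when the arrays are combined.

```python
import numpy

# open grids: x has shape (3, 1), y has shape (1, 4)
x, y = numpy.ogrid[0:3, 0:4]
print(x.shape, y.shape)

# broadcasting extends the size-1 dimensions: the result has shape (3, 4),
# as if x were repeated along columns and y along rows
dist2 = x ** 2 + y ** 2
print(dist2.shape)
print(dist2)
```

Each element (i, j) of dist2 holds i**2 + j**2, without either input array ever being materialized at full size.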

Copying volumes or volume structure, or building from an array

To make a deep-copy of a volume, use the copy constructor:

[17]:
vol2 = aims.Volume(vol)
vol2[100, 100, 60, 0] = 12
# now vol and vol2 have different values
print('vol.value(100, 100, 60):', vol.value(100, 100, 60))
assert(vol.value(100, 100, 60) == 0)
print('vol2.value(100, 100, 60):', vol2.value(100, 100, 60))
assert(vol2.value(100, 100, 60) == 12)
vol.value(100, 100, 60): 0
vol2.value(100, 100, 60): 12

If you need to build another, different volume, with the same structure and size, don’t forget to copy the header part:

[18]:
vol2 = aims.Volume(vol.getSize(), 'FLOAT')
vol2.copyHeaderFrom(vol.header())
print(vol2.header())
assert(vol2.header() == {'sizeX': 192, 'sizeY': 256, 'sizeZ': 128, 'sizeT': 1, 'voxel_size': [0.9, 0.9, 1.2, 1],
                         'volume_dimension': [192, 256, 128, 1]})
{ 'volume_dimension' : [ 192, 256, 128, 1 ], 'sizeX' : 192, 'sizeY' : 256, 'sizeZ' : 128, 'sizeT' : 1, 'voxel_size' : [ 0.9, 0.9, 1.2, 1 ] }

Important information can reside in the header, like voxel size, or coordinates systems and geometric transformations to other coordinates systems, so it is really very important to carry this information with duplicated or derived volumes.

You can also build a volume from a numpy array:

[19]:
arr = numpy.array(numpy.diag(range(40)), dtype=numpy.float32).reshape(40, 40, 1) \
    + numpy.array(range(20), dtype=numpy.float32).reshape(1, 1, 20)
# WARNING: AIMS used to require an array in Fortran ordering,
# whereas numpy addition always returns a C-ordered array.
# In 5.1 this limitation is gone, however many C++ algorithms will not work
# (and will probably crash) with a C-ordered numpy array. If needed, ask
# for a Fortran ordering this way:
# arr = numpy.array(arr, order='F')
# for today, let's use the C-ordered array:
arr[10, 12, 3] = 25
vol = aims.Volume(arr)
print('vol.value(10, 12, 3):', vol.value(10, 12, 3))
assert(vol.value(10, 12, 3) == 25.)

# data are shared with arr
vol.setValue(35, 10, 15, 2)
print('arr[10, 15, 2]:', arr[10, 15, 2])
assert(arr[10, 15, 2] == 35.0)
arr[12, 15, 1] = 44
print('vol.value(12, 15, 1):', vol.value(12, 15, 1))
assert(vol.value(12, 15, 1) == 44.0)
vol.value(10, 12, 3): 25.0
arr[10, 15, 2]: 35.0
vol.value(12, 15, 1): 44.0

4D volumes

4D volumes work just like 3D volumes. Actually all volumes are 4D in AIMS, but the last dimension is commonly of size 1. In soma.aims.Volume_FLOAT.value and soma.aims.Volume_FLOAT.setValue methods, only the first dimension is mandatory, others are optional and default to 0, but up to 4 coordinates may be used. In the same way, the constructor takes up to 4 dimension parameters:

[20]:
from soma import aims
# create a 4D volume of signed 16-bit ints, of size 30x30x30x4
vol = aims.Volume(30, 30, 30, 4, 'S16')
# fill it with zeros
vol.fill(0)
# set value 12 at voxel (10, 10, 20, 2)
vol.setValue(12, 10, 10, 20, 2)
# get value at the same position
x = vol.value(10, 10, 20, 2)
print(x)
assert(x == 12)
# set the voxel size
vol.header()['voxel_size'] = [0.9, 0.9, 1.2, 1.]
print(vol.header())
assert(vol.header() == {'sizeX': 30, 'sizeY': 30, 'sizeZ': 30, 'sizeT': 4, 'voxel_size': [0.9, 0.9, 1.2, 1],
                        'volume_dimension': [30, 30, 30, 4]})
12
{ 'volume_dimension' : [ 30, 30, 30, 4 ], 'sizeX' : 30, 'sizeY' : 30, 'sizeZ' : 30, 'sizeT' : 4, 'voxel_size' : [ 0.9, 0.9, 1.2, 1 ] }

Similarly, 1D or 2D volumes may be used exactly the same way.

Volume views and subvolumes

A volume can be a view into another volume. This is a way of handling “borders” with Volume:

[21]:
vol = aims.Volume(14, 14, 14, 1, dtype='int16')
print('large volume size:', vol.shape)
vol.fill(0)
# take a view at position (2, 2, 2, 0) and size (10, 10, 10, 1)
view = aims.VolumeView(vol, [2, 2, 2, 0], [10, 10, 10, 1])
print('view size:', view.shape)
assert(view.posInRefVolume() == (2, 2, 2, 0) and view.shape == (10, 10, 10, 1))
view[0, 0, 0, 0] = 45
assert(vol[2, 2, 2, 0] == 45)
# view is actually a regular Volume
print(type(view))
large volume size: (14, 14, 14, 1)
view size: (10, 10, 10, 1)
<class 'soma.aims.Volume_S16'>

Partial IO

Some volume IO formats implemented in the Soma-IO library support partial IO and reading a view inside a larger volume (since pyaims 5.1). Be careful: not all formats support these operations.

[22]:
# get volume dimension on file
import os
os.chdir(tuto_dir)
f = aims.Finder()
f.check('data_for_anatomist/subject01/subject01.nii')
# allocate a large volume
vol = aims.Volume(300, 300, 200, 1, dtype=f.dataType())
vol.fill(0)
view = aims.VolumeView(vol, [20, 20, 20, 0], f.header()['volume_dimension'])
print('view size:', view.shape)
# read into the view. The option "keep_allocation" is important
# in order to prevent re-allocation of the volume.
aims.read('data_for_anatomist/subject01/subject01.nii', object=view, options={'keep_allocation': True})
print(view[100, 100, 64, 0])
assert(view[100, 100, 64, 0] == vol[120, 120, 84, 0])
# otherwise we can read part of a volume
view2 = aims.VolumeView(vol, (100, 100, 40, 0), (50, 50, 50, 1))
aims.read('data_for_anatomist/subject01/subject01.nii', object=view2,
          options={'keep_allocation': True, 'ox': 60, 'sx': 50,
                   'oy': 50, 'sy': 50, 'sz': 50})
print(view2[20, 20, 20, 0], vol[120, 120, 60, 0], view[100, 100, 40, 0])
print(view2.shape, view.shape, vol.shape)
assert(view2[20, 20, 20, 0] == vol[120, 120, 60, 0])
assert(view2[20, 20, 20, 0] == view[100, 100, 40, 0])
# view2 is shifted compared to view, check it
assert(view2[20, 20, 20, 0] == view[80, 70, 20, 0])
view size: (256, 256, 124, 1)
667
94 94 94
(50, 50, 50, 1) (256, 256, 124, 1) (300, 300, 200, 1)

Meshes

Structure

A surface mesh represents a surface as a set of small polygons (generally triangles, but sometimes quads). It has two main components: a vector of vertices (each vertex is a 3D point, with coordinates in millimeters), and a vector of polygons: each polygon is defined by the vertices it links (3 for a triangle). It may also optionally have normals (unit vectors). In our mesh structures, there is one normal for each vertex.

[23]:
from soma import aims
mesh = aims.read('data_for_anatomist/subject01/subject01_Lhemi.mesh')
vert = mesh.vertex()
print('vertices:', len(vert))
assert(len(vert) == 33837)
poly = mesh.polygon()
print('polygons:', len(poly))
assert(len(poly) == 67678)
norm = mesh.normal()
print('normals:', len(norm))
assert(len(norm) == 33837)
vertices: 33837
polygons: 67678
normals: 33837

To build a mesh, we can instantiate an object of type aims.AimsTimeSurface_<n>_VOID, for example soma.aims.AimsTimeSurface_3_VOID, with n being the number of vertices per polygon. VOID means that the mesh embeds no texture (a feature we generally don’t use: we prefer storing textures as separate objects). Then we can add vertices, normals and polygons to the mesh:

[24]:
# build a flying saucer mesh
from soma import aims
import numpy
mesh = aims.AimsTimeSurface(3)
# a mesh has a header
mesh.header()['toto'] = 'a message in the header'
vert = mesh.vertex()
poly = mesh.polygon()
x = numpy.cos(numpy.ogrid[0.: 20] * numpy.pi / 10.) * 100
y = numpy.sin(numpy.ogrid[0.: 20] * numpy.pi / 10.) * 100
z = numpy.zeros(20)
c = numpy.vstack((x, y, z)).transpose()
vert.assign(numpy.vstack((numpy.array([(0., 0., -40.), (0., 0., 40.)]), c)))
pol = numpy.vstack((numpy.zeros(20, dtype=numpy.int32), numpy.ogrid[3: 23], numpy.ogrid[2: 22])).transpose()
pol[19, 1] = 2
pol2 = numpy.vstack((numpy.ogrid[2: 22], numpy.ogrid[3: 23], numpy.ones(20, dtype=numpy.int32))).transpose()
pol2[19, 1] = 2
poly.assign(numpy.vstack((pol, pol2)))
# automatically calculate normals before saving
mesh.updateNormals()
# write the result
aims.write(mesh, 'saucer.gii')

Flying saucer mesh

Modifying a mesh

[25]:
# slightly inflate a mesh
from soma import aims
import numpy
mesh = aims.read('data_for_anatomist/subject01/subject01_Lwhite.mesh')
vert = numpy.asarray(mesh.vertex())
norm = numpy.asarray(mesh.normal())
vert += norm * 2 # push vertices 2mm away along normal
mesh.updateNormals()
aims.write(mesh, 'subject01_Lwhite_semiinflated.mesh')

Now look at both meshes in Anatomist…

Note that this code only works this way from pyaims 4.7, earlier versions had to reassign the coordinates array to the vertices vector of the mesh.

Alternatively, without numpy, we could have written the code like this:

[26]:
mesh = aims.read('data_for_anatomist/subject01/subject01_Lwhite.mesh')
vert = mesh.vertex()
norm = mesh.normal()
for v, n in zip(vert, norm):
    v += n * 2
mesh.updateNormals()
aims.write(mesh, 'subject01_Lwhite_semiinflated.mesh')

Inflated mesh

Handling time

In AIMS, meshes are actually time-indexed dictionaries of meshes. This way a deforming mesh can be stored in the same object. To copy a timestep to another, use the following:

[27]:
from soma import aims
mesh = aims.read('data_for_anatomist/subject01/subject01_Lwhite.mesh')
# mesh.vertex() is equivalent to mesh.vertex(0)
mesh.vertex(1).assign(mesh.vertex(0))
# same for normals and polygons
mesh.normal(1).assign(mesh.normal(0))
mesh.polygon(1).assign(mesh.polygon(0))
print('number of time steps:', mesh.size())
assert(mesh.size() == 2)
number of time steps: 2

Exercise

Make a deforming mesh that goes from the original mesh to 5mm away, by steps of 0.5 mm

[28]:
from soma import aims
import numpy
mesh = aims.read('data_for_anatomist/subject01/subject01_Lwhite.mesh')
vert = numpy.array(mesh.vertex())  # must make an actual copy to avoid modifying timestep 0
norm = numpy.asarray(mesh.normal())
for i in range(1, 10):
    mesh.polygon(i).assign(mesh.polygon())
    vert += norm * 0.5
    mesh.vertex(i).assign(vert)
    # don't bother about normals, we will rebuild them afterwards.
print('number of time steps:', mesh.size())
assert(mesh.size() == 10)
mesh.updateNormals()  # I told you about normals.
aims.write(mesh, 'subject01_Lwhite_semiinflated_time.mesh')
number of time steps: 10

Inflated mesh with timesteps

Textures

A texture is merely a vector of values, each of which is assigned to a mesh vertex, with a one-to-one mapping, in the same order. A texture is also a time-texture.

[29]:
from soma import aims
tex = aims.TimeTexture('FLOAT')
t = tex[0] # time index, inserts on-the-fly
t.reserve(10) # pre-allocates memory
for i in range(10):
    t.append(i / 10.)
print(tex.size())
assert(len(tex) == 1)
print(tex[0].size())
assert(len(tex[0]) == 10)
print(tex[0][5])
assert(tex[0][5] == 0.5)
1
10
0.5

Exercise

Make a time-texture which, at each timestep and vertex of the previous mesh, holds the value of the underlying volume data_for_anatomist/subject01/subject01.nii

[30]:
from soma import aims
import numpy as np

mesh = aims.read('subject01_Lwhite_semiinflated_time.mesh')
vol = aims.read('data_for_anatomist/subject01/subject01.nii')
tex = aims.TimeTexture('FLOAT')
vs = vol.header()['voxel_size']
for i in range(mesh.size()):
    vert = np.asarray(mesh.vertex(i))
    tex[i].assign(np.zeros((len(vert),), dtype=np.float32))
    t = np.asarray(tex[i])
    coords = np.zeros((len(vert), len(vol.shape)), dtype=int)
    coords[:, :3] = np.round(vert / vs).astype(int)
    t[:] = vol[tuple(coords.T)]
aims.write(tex, 'subject01_Lwhite_semiinflated_texture.tex')

Now look at the texture on the mesh (inflated or not) in Anatomist. Compare it to a 3D fusion between the mesh and the MRI volume.

Computed time-texture vs 3D fusion

Bonus: We can do the same for functional data. But in this case we may have a spatial transformation to apply between anatomical data and functional data (which may have been normalized, or acquired in a different referential).

[31]:
from soma import aims
import numpy as np
mesh = aims.read('subject01_Lwhite_semiinflated_time.mesh')
vol = aims.read('data_for_anatomist/subject01/Audio-Video_T_map.nii')
# get header info from anatomical volume
f = aims.Finder()
assert(f.check('data_for_anatomist/subject01/subject01.nii'))
anathdr = f.header()
# get functional -> MNI transformation
m1 = aims.AffineTransformation3d(vol.header()['transformations'][1])
# get anat -> MNI transformation
m2 = aims.AffineTransformation3d(anathdr['transformations'][1])
# make anat -> functional transformation
anat2func = m1.inverse() * m2
# include functional voxel size to get to voxel coordinates
vs = vol.header()['voxel_size']
mvs = aims.AffineTransformation3d(np.diag(vs[:3] + [1.]))
anat2func = mvs.inverse() * anat2func
# now go as in the previous program
tex = aims.TimeTexture('FLOAT')
for i in range(mesh.size()):
    vert = np.asarray(mesh.vertex(i))
    tex[i].assign(np.zeros((len(vert),), dtype=np.float32))
    t = np.asarray(tex[i])
    coords = np.ones((len(vert), len(vol.shape)), dtype=np.float32)
    coords[:, :3] = vert
    # apply matrix anat2func to coordinates array
    coords = np.round(coords.dot(anat2func.toMatrix().T)).astype(int)
    coords[:, 3] = 0
    t[:] = vol[tuple(coords.T)]
aims.write(tex, 'subject01_Lwhite_semiinflated_audio_video.tex')

See how the functional data on the mesh changes across the depth of the cortex. This demonstrates the need to have a proper projection of functional data before dealing with surfacic functional processing.

Buckets

“Buckets” are voxel lists. They are typically used to represent ROIs. A BucketMap is a list of buckets. Each bucket contains a list of voxel coordinates. A BucketMap is represented by the class soma.aims.BucketMap_VOID.

[32]:
from soma import aims
bck_map = aims.read('data_for_anatomist/roi/basal_ganglia.data/roi_Bucket.bck')
print('Bucket map: ', bck_map)
print('Nb buckets: ', bck_map.size())
assert(bck_map.size() == 15)
for i in range(bck_map.size()):
    b = bck_map[i]
    print("Bucket", i, ", nb voxels:", b.size())
    if b.keys():
        print("  Coordinates of the first voxel:", b.keys()[0].list())
assert(bck_map[0].size() == 2314)
assert(bck_map[0].keys()[0] == [108, 132, 44])
Bucket map:  <soma.aims.BucketMap_VOID object at 0x7f68e581ac20>
Nb buckets:  15
Bucket 0 , nb voxels: 2314
  Coordinates of the first voxel: [108, 132, 44]
Bucket 1 , nb voxels: 2119
  Coordinates of the first voxel: [108, 108, 53]
Bucket 2 , nb voxels: 2639
  Coordinates of the first voxel: [100, 122, 52]
Bucket 3 , nb voxels: 1444
  Coordinates of the first voxel: [107, 128, 57]
Bucket 4 , nb voxels: 715
  Coordinates of the first voxel: [107, 112, 67]
Bucket 5 , nb voxels: 2171
  Coordinates of the first voxel: [143, 130, 44]
Bucket 6 , nb voxels: 2154
  Coordinates of the first voxel: [142, 111, 53]
Bucket 7 , nb voxels: 2063
  Coordinates of the first voxel: [149, 128, 52]
Bucket 8 , nb voxels: 1588
  Coordinates of the first voxel: [140, 130, 57]
Bucket 9 , nb voxels: 1012
  Coordinates of the first voxel: [144, 114, 67]
Bucket 10 , nb voxels: 8240
  Coordinates of the first voxel: [114, 136, 44]
Bucket 11 , nb voxels: 2159
  Coordinates of the first voxel: [97, 130, 52]
Bucket 12 , nb voxels: 2931
  Coordinates of the first voxel: [149, 130, 52]
Bucket 13 , nb voxels: 6279
  Coordinates of the first voxel: [112, 145, 50]
Bucket 14 , nb voxels: 6502
  Coordinates of the first voxel: [133, 147, 50]

Graphs

Graphs are data structures that may contain various elements. They can represent sets of smaller structures, and also relations between such structures. The main usage we have for them is to represent ROIs sets, sulci, or fiber bundles. A graph is represented by the class soma.aims.Graph.

A graph contains:

  • properties of any type, like a volume or mesh header.

  • nodes (also called vertices), which represent structured elements (a ROI, a sulcus part, etc), which in turn can store properties, and geometrical elements: buckets, meshes…

  • optionally, relations, which link nodes and can also contain properties and geometrical elements.

Properties

Properties are stored in a dictionary-like way. They can hold almost anything, but a restricted set of types can be saved and loaded. It is exactly the same thing as headers found in volumes, meshes, textures or buckets.

[33]:
from soma import aims
graph = aims.read('data_for_anatomist/roi/basal_ganglia.arg')
print(graph)
assert(repr(graph).startswith("{ '__syntax__' : 'RoiArg', 'RoiArg_VERSION' : '1.0', "
                              "'filename_base' : 'basal_ganglia.data',"))
print('properties:', graph.keys())
assert(len([x in graph.keys()
            for x in ('RoiArg_VERSION', 'filename_base', 'roi.global.bck',
                      'type.global.bck', 'boundingbox_max')]) == 5)
for p, v in graph.items():
    print(p, ':', v)
graph['gudule'] = [12, 'a comment']
{ '__syntax__' : 'RoiArg', 'RoiArg_VERSION' : '1.0', 'filename_base' : 'basal_ganglia.data', 'roi.global.bck' : 'roi roi_Bucket.bck roi_label', 'type.global.bck' : 'roi.global.bck', 'boundingbox_max' : [ 255, 255, 123 ], 'boundingbox_min' : [ 0, 0, 0 ], 'voxel_size' : [ 0.9375, 0.9375, 1.20000004768372 ], 'object_attributes_colors' : <Can't write data of type rc_ptr of map of string_vector of S32 in >, 'aims_objects_table' : <Can't write data of type rc_ptr of map of string_map of string_GraphElementCode in >, 'aims_reader_filename' : 'data_for_anatomist/roi/basal_ganglia.arg', 'aims_reader_loaded_objects' : 3, 'header' : { 'arg_syntax' : 'RoiArg', 'data_type' : 'VOID', 'file_type' : 'ARG', 'object_type' : 'Graph' } }
properties: ('RoiArg_VERSION', 'filename_base', 'roi.global.bck', 'type.global.bck', 'boundingbox_max', 'boundingbox_min', 'voxel_size', 'object_attributes_colors', 'aims_objects_table', 'aims_reader_filename', 'aims_reader_loaded_objects', 'header')
RoiArg_VERSION : 1.0
filename_base : basal_ganglia.data
roi.global.bck : roi roi_Bucket.bck roi_label
type.global.bck : roi.global.bck
boundingbox_max : [255, 255, 123]
boundingbox_min : [0, 0, 0]
voxel_size : [0.9375, 0.9375, 1.2]
object_attributes_colors : <Can't write data of type rc_ptr of map of string_vector of S32 in >
aims_objects_table : <Can't write data of type rc_ptr of map of string_map of string_GraphElementCode in >
aims_reader_filename : data_for_anatomist/roi/basal_ganglia.arg
aims_reader_loaded_objects : 3
header : { 'arg_syntax' : 'RoiArg', 'data_type' : 'VOID', 'file_type' : 'ARG', 'object_type' : 'Graph' }

Note

Only properties declared in a “syntax” file may be saved and re-loaded. Other properties are just not saved.

Vertices

Vertices (or nodes) can be accessed via the vertices() method. Each vertex is also a dictionary-like property set.

[34]:
for v_name in sorted([v['name'] for v in graph.vertices()]):
    print(v_name)
Caude_droit
Caude_gauche
Corps_caude_droit
Corps_caude_gauche
Pallidum_droit
Pallidum_gauche
Putamen_droit_ant
Putamen_droit_post
Putamen_gauche_ant
Putamen_gauche_post
Striatum_ventral_droit
Striatum_ventral_gauche
Thalamus_droit
Thalamus_gauche
v1v2v3

To insert a new vertex, the soma.aims.Graph.addVertex() method should be used:

[35]:
v = graph.addVertex('roi')
print(v)
assert(v.getSyntax() == 'roi')
v['name'] = 'new ROI'
print(v)
assert(v == {'name': 'new ROI'})
{ '__syntax__' : 'roi' }
{ '__syntax__' : 'roi', 'name' : 'new ROI' }

Edges

An edge, or relation, links nodes together. Up to now we have always used binary, unoriented edges. They can be added using the soma.aims.Graph.addEdge() method. Edges are also dictionary-like property sets.

[36]:
v2 = [x for x in graph.vertices() if x['name'] == 'Pallidum_gauche'][0]
if sys.version_info[0] < 3:
    del x  # python2 keeps this intermediate variable allocated: clean it.
e = graph.addEdge(v, v2, 'roi_link')
print(graph.edges())
# get vertices linked by this edge
print(sorted([x['name'] for x in e.vertices()]))
assert(sorted([x['name'] for x in e.vertices()]) == ['Pallidum_gauche', 'new ROI'])
[ { '__syntax__' : 'roi_link' } ]
['Pallidum_gauche', 'new ROI']

Adding meshes or buckets in a graph vertex or relation

Setting meshes or buckets in vertex properties works internally, but for saving and loading, additional consistency must be ensured and internal tables must be updated. To do so, use the soma.aims.GraphManip.storeAims() function:

[37]:
mesh = aims.read('data_for_anatomist/subject01/subject01_Lwhite.mesh')
# store mesh in the 'roi' property of vertex v of graph graph
aims.GraphManip.storeAims(graph, v, 'roi', mesh)

Other examples

There are other examples for pyaims here.

Using algorithms

AIMS contains, in addition to the different data structures used in neuroimaging, a set of algorithms which operate on these structures. Currently only a few of them have Python bindings, because we develop these bindings in a “lazy” way, only when they are needed. The algorithms currently available include data conversion, resampling, thresholding, mathematical morphology, distance maps, the mesher, some mesh generators, and a few others. But most of the algorithms are still only available in C++.

Volume Thresholding

[38]:
from soma import aims, aimsalgo
# read a volume with 2 voxels border
vol = aims.read('data_for_anatomist/subject01/subject01.nii', border=2)
# use a thresholder which will keep values above 600
ta = aims.AimsThreshold(aims.AIMS_GREATER_OR_EQUAL_TO, 600, intype=vol)
print('vol:', vol.getSize())
# use it to make a binary thresholded volume
tvol = ta.bin(vol)
print(tvol.value(0, 0, 0))
assert(tvol.value(0, 0, 0) == 0)
print(tvol.value(100, 100, 50))
assert(tvol.value(100, 100, 50) == 32767)
aims.write(tvol, 'thresholded.nii')
vol: [256, 256, 124, 1]
0
32767

Thresholded T1 MRI
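The binary thresholding above can be mimicked in plain NumPy (a conceptual analogue, not the AimsThreshold API): voxels at or above the threshold become 32767, the S16 value pyaims uses for "true" in binary volumes, and all other voxels become 0.

```python
import numpy as np

# hypothetical stand-in for a small S16 volume
vol = np.array([[0, 300], [600, 1200]], dtype=np.int16)

# keep values >= 600, binary output: 32767 where true, 0 elsewhere,
# mimicking AimsThreshold(aims.AIMS_GREATER_OR_EQUAL_TO, 600).bin(vol)
tvol = np.where(vol >= 600, np.int16(32767), np.int16(0))
print(tvol.tolist())  # [[0, 0], [32767, 32767]]
```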

Warning

Some algorithms require the volumes they process to have a border: a few voxels all around the volume. Indeed, some algorithms may access voxels outside the volume boundaries, which can cause a segmentation fault if the volume has no border. This is the case, for instance, for operations like erosion, dilation, and closing. There is no bounds test at each voxel to detect whether the algorithm reads outside the volume, because it would slow down processing.

In the previous example, a 2-voxel border is added by passing the parameter border=2 to the soma.aims.read function.
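The effect of a border can be sketched with NumPy's np.pad (illustrative only; pyaims allocates and manages the border internally): the padding provides the margin that a neighbourhood operation needs, so reading a 3x3x3 neighbourhood around an edge voxel stays in bounds.

```python
import numpy as np

vol = np.arange(27).reshape(3, 3, 3)
# add a 2-voxel border filled with -1, like
# aims.read(..., border=2) followed by fillBorder(-1)
padded = np.pad(vol, 2, constant_values=-1)
print(vol.shape, '->', padded.shape)  # (3, 3, 3) -> (7, 7, 7)
# a 3x3x3 neighbourhood centred on a corner voxel of the original
# volume now reads border values instead of going out of bounds
corner = padded[1:4, 1:4, 1:4]
print(corner.min())  # -1: border values are read safely
```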

Mathematical morphology

[39]:
# apply 5mm closing
clvol = aimsalgo.AimsMorphoClosing(tvol, 5)
aims.write(clvol, 'closed.nii')

Closing of a thresholded T1 MRI
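Morphological closing is a dilation followed by an erosion. On a 1D binary array it can be sketched with NumPy max/min filters (a conceptual analogue, not the AimsMorphoClosing implementation, which works in millimetres on 3D volumes):

```python
import numpy as np

def dilate(a):
    # binary dilation with a 3-element window (pad with 0 outside)
    p = np.pad(a, 1, constant_values=0)
    return np.max([p[i:i + len(a)] for i in range(3)], axis=0)

def erode(a):
    # binary erosion with a 3-element window (pad with 1 outside)
    p = np.pad(a, 1, constant_values=1)
    return np.min([p[i:i + len(a)] for i in range(3)], axis=0)

a = np.array([0, 0, 1, 1, 0, 1, 1, 0, 0], dtype=np.int16)
closed = erode(dilate(a))
print(closed.tolist())  # [0, 0, 1, 1, 1, 1, 1, 0, 0]: the gap is filled
```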

Mesher

[40]:
m = aimsalgo.Mesher()
mesh = aims.AimsSurfaceTriangle() # create an empty mesh
# the border should be -1
clvol.fillBorder(-1)
# get a smooth mesh of the interface of the biggest connected component
m.getBrain(clvol, mesh)
aims.write(mesh, 'head_mesh.gii')

Head mesh

The above examples make up a simplified version of the head mesh extraction algorithm in VipGetHead, used in the Morphologist pipeline.

Surface generation

The soma.aims.SurfaceGenerator class allows creating simple meshes of predefined shapes: cube, cylinder, sphere, icosahedron, cone, arrow.

[41]:
from soma import aims
center = (50, 25, 20)
radius = 53
mesh1 = aims.SurfaceGenerator.icosahedron(center, radius)
mesh2 = aims.SurfaceGenerator.generate(
    {'type': 'arrow', 'point1': [30, 70, 0],
     'point2': [100, 100, 100], 'radius': 20, 'arrow_radius': 30,
     'arrow_length_factor': 0.7, 'facets': 50})
# get the list of all possible generated objects and parameters:
print(aims.SurfaceGenerator.description())
assert('arrow_length_factor' in aims.SurfaceGenerator.description()[0])
[ { 'arrow_length_factor' : 'relative length of the head', 'arrow_radius' : 'radius of the tail', 'facets' : '(optional) number of facets of the cone section (default: 4)', 'point1' : '3D position of the head', 'point2' : '3D position of the center of the bottom', 'radius' : 'radius of the head', 'type' : 'arrow' }, { 'closed' : '(optional) if non-zero, make polygons for the cone end (default: 0)', 'facets' : '(optional) number of facets of the cone section (default: 4)', 'point1' : '3D position of the sharp end', 'point2' : '3D position of the center of the other end', 'radius' : 'radius of the 2nd end', 'smooth' : '(optional) make smooth normals and shared vertices (default: 0)', 'type' : 'cone' }, { 'center' : '3D position of the center', 'radius' : 'half-length of the edge', 'smooth' : '(optional) make smooth normals and shared vertices (default: 0)', 'type' : 'cube' }, { 'closed' : '(optional) if non-zero, make polygons for the cylinder ends (default: 0)', 'facets' : '(optional) number of facets of the cylinder section (default: 4)', 'point1' : '3D position of the center of the 1st end', 'point2' : '3D position of the center of the 2nd end', 'radius' : 'radius of the 1st end', 'radius2' : '(optional) radius of the 2nd end (default: same as radius)', 'smooth' : '(optional) make smooth normals and shared vertices for the tube part (default: 1)', 'type' : 'cylinder' }, { 'center' : '3D position of the center, may also be specified as \'point1\' parameter', 'facets' : '(optional) number of facets of the sphere. 
May also be specified as \'nfacets\' parameter (default: 225)', 'radius1' : 'radius1', 'radius2' : 'radius2', 'type' : 'ellipse', 'uniquevertices' : '(optional) if set to 1, the pole vertices are not duplicated( default: 0)' }, { 'center' : '3D position of the center', 'radius' : 'radius', 'type' : 'icosahedron' }, { 'center' : '3D position of the center, may also be specified as \'point1\' parameter', 'facets' : '(optional) minimum number of facets of the sphere. (default: 30)', 'radius' : 'radius', 'type' : 'icosphere' }, { 'boundingbox_max' : '3D position of the higher bounding box', 'boundingbox_min' : '3D position of the lower bounding box', 'smooth' : '(optional) make smooth normals and shared vertices (default: 0)', 'type' : 'parallelepiped' }, { 'center' : '3D position of the center, may also be specified as \'point1\' parameter', 'facets' : '(optional) number of facets of the sphere. May also be specified as \'nfacets\' parameter (default: 225)', 'radius' : 'radius', 'type' : 'sphere', 'uniquevertices' : '(optional) if set to 1, the pole vertices are not duplicated( default: 0)' } ]

Generated icosahedron and arrow

Interpolation

Interpolators help to get values at millimeter coordinates in a discrete space (the volume grid), and may mix neighbouring voxel values (typically, linear interpolation).

[42]:
from soma import aims
import numpy as np
# load a functional volume
vol = aims.read('data_for_anatomist/subject01/Audio-Video_T_map.nii')
# get the position of the maximum
maxval = vol.max()
pmax = [p[0] for p in np.where(vol.np == maxval)]
# set pmax in mm
vs = vol.header()['voxel_size']
pmax = [x * y for x,y in zip(pmax, vs)]
# take a sphere of 5mm radius, with about 200 vertices
mesh = aims.SurfaceGenerator.sphere(pmax[:3], 5., 200)
vert = mesh.vertex()
# get an interpolator
interpolator = aims.aims.getLinearInterpolator(vol)
# create a texture for that sphere
tex = aims.TimeTexture_FLOAT()
tx = tex[0]
tx2 = tex[1]
tx.reserve(len(vert))
tx2.reserve(len(vert))
for v in vert:
    tx.append(interpolator.value(v))
    # compare to non-interpolated value
    tx2.append(vol.value(*[int(round(x / y)) for x,y in zip(v, vs)]))
aims.write(tex, 'functional_tex.gii')
aims.write(mesh, 'sphere.gii')

Look at the difference between the two timesteps (interpolated and non-interpolated) of the texture in Anatomist.

Interpolated vs. non-interpolated texture
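What the linear interpolator computes can be illustrated in 1D with plain NumPy (a conceptual sketch, not the pyaims implementation): the value at a non-integer position is a weighted mix of the two nearest voxels.

```python
import numpy as np

def linear_value(vol, x):
    # 1D linear interpolation at position x (in voxel coordinates)
    i = int(np.floor(x))
    t = x - i
    return (1 - t) * vol[i] + t * vol[i + 1]

vol = np.array([0., 10., 20., 30.])
print(linear_value(vol, 1.25))  # 12.5: 75% of vol[1] + 25% of vol[2]
print(linear_value(vol, 2.0))   # 20.0: exactly on a voxel
```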

Types conversion

The Converter_*_* classes allow converting some data structure types to others. Of course, not every type can be converted to every other, but these converters are typically used to convert volumes from one voxel type to another. A "factory" function may help to build the correct converter from the input and output types. For instance, to convert the anatomical volume of the previous examples to float type:

[43]:
from soma import aims
vol = aims.read('data_for_anatomist/subject01/subject01.nii')
print('type of vol:', type(vol))
assert(type(vol) is aims.Volume_S16)
c = aims.Converter(intype=vol, outtype=aims.Volume('FLOAT'))
vol2 = c(vol)
print('type of converted volume:', type(vol2))
assert(type(vol2) is aims.Volume_FLOAT)
print('value of initial volume at voxel (50, 50, 50):', vol.value(50, 50, 50))
assert(vol.value(50, 50, 50) == 57)
print('value of converted volume at voxel (50, 50, 50):', vol2.value(50, 50, 50))
assert(vol2.value(50, 50, 50) == 57.0)
type of vol: <class 'soma.aims.Volume_S16'>
type of converted volume: <class 'soma.aims.Volume_FLOAT'>
value of initial volume at voxel (50, 50, 50): 57
value of converted volume at voxel (50, 50, 50): 57.0
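On the NumPy side, the same kind of voxel-type change is an astype() call on a plain array (shown here without pyaims; with real volumes, the Converter is preferred because it also handles the header):

```python
import numpy as np

# hypothetical S16 volume data
vol_s16 = np.array([[57, 100], [0, -3]], dtype=np.int16)
# convert to float voxels, like Converter(intype=S16, outtype=FLOAT)
vol_f32 = vol_s16.astype(np.float32)
print(vol_f32.dtype, vol_f32[0, 0])  # float32 57.0
```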

Resampling

Resampling allows applying a geometric transformation and/or changing the voxel size. Several types of resampling may be used depending on how values are interpolated between neighbouring voxels (see interpolators): nearest-neighbour (order 0), linear (order 1), and spline resampling with orders 2 to 7 in AIMS.
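The quaternion used below encodes a pi/8 rotation about the z axis. As a sanity check, the unit quaternion (0, 0, sin(a/2), cos(a/2)) corresponds to the 2D rotation matrix of angle a; this can be verified with pure NumPy, independently of the aims.Quaternion class:

```python
import math
import numpy as np

a = math.pi / 8
# rotation about z as a unit quaternion (x, y, z, w)
z, w = math.sin(a / 2), math.cos(a / 2)
# upper-left 2x2 block of the quaternion's rotation matrix
rot = np.array([[1 - 2 * z * z, -2 * z * w],
                [2 * z * w, 1 - 2 * z * z]])
expected = np.array([[math.cos(a), -math.sin(a)],
                     [math.sin(a), math.cos(a)]])
print(np.allclose(rot, expected))  # True: a pi/8 rotation about z
```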

[44]:
from soma import aims, aimsalgo
import math
vol = aims.read('data_for_anatomist/subject01/subject01.nii')
# create an affine transformation matrix
# rotating pi/8 along z axis
tr = aims.AffineTransformation3d(aims.Quaternion([0, 0, math.sin(math.pi / 16), math.cos(math.pi / 16)]))
tr.setTranslation((100, -50, 0))
# get an order 2 resampler for volumes of S16
resp = aims.ResamplerFactory_S16().getResampler(2)
resp.setDefaultValue(-1) # set background to -1
resp.setRef(vol) # volume to resample
# resample into a volume of dimension 200x200x200 with voxel size 1.1, 1.1, 1.5
resampled = resp.doit(tr, 200, 200, 200, (1.1, 1.1, 1.5))
# Note that the header transformations to external referentials have been updated
print(resampled.header()['referentials'])
assert(resampled.header()['referentials']
       == ['Scanner-based anatomical coordinates', 'Talairach-MNI template-SPM'])
import numpy
numpy.set_printoptions(precision=4)
for t in resampled.header()['transformations']:
    print(aims.AffineTransformation3d(t))
aims.write(resampled, 'resampled.nii')
["Scanner-based anatomical coordinates", "Talairach-MNI template-SPM"]
[[ -0.9239  -0.3827   0.     193.2538]
 [  0.3827  -0.9239   0.      34.6002]
 [  0.       0.      -1.      73.1996]
 [  0.       0.       0.       1.    ]]
[[-9.6797e-01 -4.1623e-01  1.0548e-02  2.0329e+02]
 [ 3.8418e-01 -8.9829e-01  3.6210e-02  2.8707e+00]
 [ 3.9643e-03 -2.0773e-02 -1.2116e+00  9.3405e+01]
 [ 0.0000e+00  0.0000e+00  0.0000e+00  1.0000e+00]]

Load the original image and the resampled one in Anatomist. See how the resampled volume has been rotated. Now apply the NIFTI/SPM referential information on both images. They are aligned again, and cursor clicks go to the same location on both volumes, whatever the display referential of each.

Aimsalgo resampling

PyAIMS / PyAnatomist integration

It is possible to use both PyAims and PyAnatomist APIs together in python. See the Pyanatomist / PyAims tutorial.

Finally, clean up the temporary working directory:

[45]:
# cleanup data
import shutil
os.chdir(older_cwd)
shutil.rmtree(tuto_dir)