{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "PyAims tutorial : programming with AIMS in Python language\n", "==========================================================\n", "\n", "This tutorial should work with Python 2 (2.6 or higher), and is also compatible with Python 3 (if pyaims is compiled in python3 mode).\n", "\n", "This is a [Jupyter/Ipython notebook](http://jupyter.org/):\n", "\n", "* [download notebook](pyaims_tutorial_nb.ipynb).\n", "\n", "pyaims_tutorial_nb.ipynb\n", "\n", "AIMS is a C++ library, but has python language bindings: [PyAIMS](https://brainvisa.info/pyaims/sphinx/). This means that the C++ classes and functions can be used from python. \n", "This has many advantages compared to pure C++:\n", "\n", "* Writing python scripts and programs is much easier and faster than C++: there is no fastidious and long compilation step.\n", "* Scripts are more flexible, can be modified on-the-fly, etc\n", "* It can be used interactively in a python interactive shell.\n", "* As pyaims is actually C++ code called from python, it is still fast to execute complex algorithms. \n", " There is obviously an overhead to call C++ from python, but once in the C++ layer, it is C++ execution speed.\n", "\n", "A few examples of how to use and manipulate the main data structures will be shown here.\n", "\n", "The data for the examples in this section can be downloaded here: [https://brainvisa.info/download/data/test_data.zip](https://brainvisa.info/download/data/test_data.zip). \n", "To use the examples directly, users should go to the directory where this archive was uncompressed, and then run ipython from this directory.\n", "A cleaner alternative, especially if no write access is allowed on this data directory, is to make a symbolic link to the *data_for_anatomist* subdirectory\n", "\n", "```bash\n", "cd $HOME\n", "mkdir bvcourse\n", "cd bvcourse\n", "ln -s /data_for_anatomist .\n", "ipython\n", "```\n", "\n", "Doing this in python:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To work smoothly with python2 or python3, let's use print():" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from __future__ import print_function\n", "import sys\n", "print(sys.version_info)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import six\n", "from six.moves.urllib.request import urlopen\n", "import zipfile\n", "import os\n", "import os.path\n", "import tempfile\n", "# let's work in a temporary directory\n", "tuto_dir = tempfile.mkdtemp(prefix='pyaims_tutorial_')\n", "# either we already have test_data.zip in the current directory\n", "# otherwise we fetch it from the server\n", "older_cwd = os.getcwd()\n", "test_data = os.path.join(older_cwd, 'test_data.zip')\n", "print('old cwd:', older_cwd)\n", "if not os.path.exists(test_data):\n", " print('downloading test_data.zip...')\n", " f = urlopen('https://brainvisa.info/download/data/test_data.zip')\n", " test_data = os.path.join(tuto_dir, 'test_data.zip')\n", " open(test_data, 'wb').write(f.read())\n", " f.close()\n", "print('test_data:', test_data)\n", "os.chdir(tuto_dir)\n", "f = zipfile.ZipFile(test_data)\n", "f.extractall()\n", "del f\n", "print('we are working in:', tuto_dir)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "Using data structures\n", "---------------------\n", "\n", "### Module importation\n", "\n", "In python, the aimsdata library is available as the 
[soma.aims](index.html#module-soma.aims) module.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import soma.aims\n", "# the module is actually soma.aims:\n", "vol = soma.aims.Volume(100, 100, 100, dtype='int16')" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "or:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from soma import aims\n", "# the module is available as aims (not soma.aims):\n", "vol = aims.Volume(100, 100, 100, dtype='int16')\n", "# in the following, we will be using this form because it is shorter." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### IO: reading and writing objects\n", "\n", "Reading operations are accessed via a single [soma.aims.read()](pyaims_highlevel.html#soma.aims.read) function, and writing through a single [soma.aims.write()](pyaims_highlevel.html#soma.aims.write) function. \n", "[soma.aims.read()](pyaims_highlevel.html#soma.aims.read) function reads any object from a given file name, in any supported file format, and returns it:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from soma import aims\n", "obj = aims.read('data_for_anatomist/subject01/subject01.nii')\n", "print(obj.getSize())\n", "obj2 = aims.read('data_for_anatomist/subject01/Audio-Video_T_map.nii')\n", "print(obj2.getSize())\n", "obj3 = aims.read('data_for_anatomist/subject01/subject01_Lhemi.mesh')\n", "print(len(obj3.vertex(0)))\n", "assert(obj.getSize() == [256, 256, 124, 1])\n", "assert(obj2.getSize() == [53, 63, 46, 1])\n", "assert(obj3.size() == 1 and len(obj3.vertex(0)) == 33837)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The returned object can have various types according to what is found in the disk file(s).\n", "\n", "Writing is just as easy. The file name extension generally determines the output format. \n", "An object read from a given format can be re-written in any other supported format, provided the format can actually store the object type." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from soma import aims\n", "obj2 = aims.read('data_for_anatomist/subject01/Audio-Video_T_map.nii')\n", "aims.write(obj2, 'Audio-Video_T_map.ima')\n", "obj3 = aims.read('data_for_anatomist/subject01/subject01_Lhemi.mesh')\n", "aims.write(obj3, 'subject01_Lhemi.gii')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "

Exercise

\n", "Write a little file format conversion tool\n", "
\n", "\n", "### Volumes\n", "\n", "Volumes are array-like containers of voxels, plus a set of additional information kept in a header structure. \n", "In AIMS, the header structure is generic and extensible, and does not depend on a specific file format. \n", "Voxels may have various types, so a specific type of volume should be used for a specific type of voxel. \n", "The type of voxel has a code that is used to suffix the Volume type: [soma.aims.Volume_S16](pyaims_lowlevel.html#soma.aims.soma.aims.Volume_S16) for signed 16-bit ints, [soma.aims.Volume_U32](pyaims_lowlevel.html#soma.aims.soma.aims.Volume_U32) \n", "for unsigned 32-bit ints, [soma.aims.Volume_FLOAT](pyaims_lowlevel.html#soma.aims.Volume_FLOAT) for 32-bit floats, [soma.aims.Volume_DOUBLE](pyaims_lowlevel.html#soma.aims.soma.aims.Volume_DOUBLE) for 64-bit floats, [soma.aims.Volume_RGBA](pyaims_lowlevel.html#soma.aims.soma.aims.Volume_RGBA) for RGBA colors, etc.\n", "\n", "\n", "#### Building a volume" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ " # create a 3D volume of signed 16-bit ints, of size 192x256x128\n", "vol = aims.Volume(192, 256, 128, dtype='int16')\n", "# fill it with zeros\n", "vol.fill(0)\n", "# set value 12 at voxel (100, 100, 60)\n", "vol.setValue(12, 100, 100, 60)\n", "# get value at the same position\n", "x = vol.value(100, 100, 60)\n", "print(x)\n", "assert(x == 12)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# set the voxels size\n", "vol.header()['voxel_size'] = [0.9, 0.9, 1.2, 1.]\n", "print(vol.header())\n", "assert(vol.header() == {'sizeX': 192, 'sizeY': 256, 'sizeZ': 128, 'sizeT': 1, 'voxel_size': [0.9, 0.9, 1.2, 1],\n", " 'volume_dimension': [192, 256, 128, 1]})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![3D volume: value 12 at voxel (100, 100 ,60)](images/volume1.png \"3D volume: value 12 at voxel (100, 100 ,60)\")\n", "\n", "\n", "#### Basic operations\n", "\n", "Whole volume operations:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# multiplication, addition etc\n", "vol *= 2\n", "vol2 = vol * 3 + 12\n", "print(vol2.value(100, 100, 60))\n", "vol /= 2\n", "vol3 = vol2 - vol - 12\n", "print(vol3.value(100, 100, 60))\n", "vol4 = vol2 * vol / 6\n", "print(vol4.value(100, 100, 60))\n", "assert(vol2.value(100, 100, 60) == 84)\n", "assert(vol3.value(100, 100, 60) == 60)\n", "assert(vol4.value(100, 100, 60) == 168)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Voxel-wise operations:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# fill the volume with the distance to voxel (100, 100, 60)\n", "vs = vol.header()['voxel_size']\n", "pos0 = (100 * vs[0], 100 * vs[1], 60 * vs[2]) # in millimeters\n", "for z in range(vol.getSizeZ()):\n", " for y in range(vol.getSizeY()):\n", " for x in range(vol.getSizeX()):\n", " # get current position in an aims.Point3df structure, in mm\n", " p = aims.Point3df(x * vs[0], y * vs[1], z * vs[2])\n", " # get relative position to pos0, in voxels\n", " p -= pos0\n", " # distance: norm of vector p\n", " dist = int(round(p.norm()))\n", " # set it into the volume\n", " vol.setValue(dist, x, y, z)\n", "print(vol.value(100, 100, 60))\n", "# save the volume\n", "aims.write(vol, 'distance.nii')\n", "assert(vol.value(100, 100, 60) == 0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now look at the *distance.nii* volume in 
Anatomist.\n", "\n", "![Distance example](images/distance.png \"Distance example\")\n", "\n", "\n", "
\n", "

Exercise

\n", "\n", "Make a program which loads the image *data_for_anatomist/subject01/Audio-Video_T_map.nii* and thresholds it so as to keep values above 3.\n", "
\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from soma import aims\n", "vol = aims.read('data_for_anatomist/subject01/Audio-Video_T_map.nii')\n", "print(vol.value(20, 20, 20) < 3. and vol.value(20, 20, 20) != 0.)\n", "assert(vol.value(20, 20, 20) < 3. and vol.value(20, 20, 20) != 0.)\n", "for z in range(vol.getSizeZ()):\n", " for y in range(vol.getSizeY()):\n", " for x in range(vol.getSizeX()):\n", " if vol.value(x, y, z) < 3.:\n", " vol.setValue(0, x, y, z)\n", "print(vol.value(20, 20, 20))\n", "aims.write(vol, 'Audio-Video_T_thresholded.nii')\n", "assert(vol.value(20, 20, 20) == 0.)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![Thresholded Audio-Video T-map](images/threshold.png \"Thresholded Audio-Video T-map\")\n", "\n", "
\n", "

Exercise

\n", "\n", "Make a program to dowsample the anatomical image *data_for_anatomist/subject01/subject01.nii* and keeps one voxel out of two in every direction.\n", "
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from soma import aims\n", "vol = aims.read('data_for_anatomist/subject01/subject01.nii')\n", "# allocate a new volume with half dimensions\n", "vol2 = aims.Volume(vol.getSizeX() // 2, vol.getSizeY() // 2, vol.getSizeZ() // 2, dtype='DOUBLE')\n", "print(vol2.getSizeX())\n", "assert(vol2.getSizeX() == 128)\n", "# set the voxel size to twice it was in vol\n", "vs = vol.header()['voxel_size']\n", "vs2 = [x * 2 for x in vs]\n", "vol2.header()['voxel_size'] = vs2\n", "for z in range(vol2.getSizeZ()):\n", " for y in range(vol2.getSizeY()):\n", " for x in range(vol2.getSizeX()):\n", " vol2.setValue(vol.value(x*2, y*2, z*2), x, y, z)\n", "print(vol.value(100, 100, 40))\n", "print(vol2.value(50, 50, 20))\n", "aims.write(vol2, 'resampled.nii')\n", "assert(vol.value(100, 100, 40) == 775)\n", "assert(vol2.value(50, 50, 20) == 775.)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![Downsampled anatomical image](images/resampled.png \"Downsampled anatomical image\")\n", "\n", "The first thing that comes to mind when running these examples, is that they are *slow*. \n", "Indeed, python is an interpreted language and loops in any interpreted language are slow. \n", "In addition, accessing individually each voxel of the volume has the overhead of python/C++ bindings communications. \n", "The conclusion is that that kind of example is probably a bit too low-level, and should be done, when possible, by compiled libraries or specialized array-handling libraries. \n", "This is the role of **numpy**.\n", "\n", "Accessing numpy arrays to AIMS volume voxels is supported:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy\n", "vol.fill(0)\n", "arr = numpy.asarray(vol)\n", "# or:\n", "arr = vol.np\n", "# set value 100 in a whole sub-volume\n", "arr[60:120, 60:120, 40:80] = 100\n", "# note that arr is a shared view to the volume contents,\n", "# modifications will also affect the volume\n", "print(vol.value(65, 65, 42))\n", "print(vol.value(65, 65, 30))\n", "aims.write(vol, \"cube.nii\")\n", "assert(vol.value(65, 65, 42) == 100)\n", "assert(vol.value(65, 65, 30) == 0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also use numpy accessors and slicing features directly on a Volume object:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "vol.fill(0)\n", "vol[60:120, 60:120, 40:80] = 100\n", "print(vol.value(65, 65, 42))\n", "print(vol.value(65, 65, 30))\n", "assert(numpy.all(vol[65, 65, 42, 0] == 100))\n", "assert(numpy.all(vol[65, 65, 30, 0] == 0))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![3D volume containing a cube](images/cube.png \"3D volume containing a cube\")\n", "\n", "Now we can re-write the thresholding example using numpy:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from soma import aims\n", "vol = aims.read('data_for_anatomist/subject01/Audio-Video_T_map.nii')\n", "arr = vol.np\n", "arr[numpy.where(arr < 3.)] = 0.\n", "print(vol.value(20, 20, 20))\n", "aims.write(vol, 'Audio-Video_T_thresholded2.nii')\n", "assert(vol.value(20, 20, 20) == 0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here, `arr < 3.` returns a boolean array with the same size as `arr`, and [numpy.where()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html) returns arrays of coordinates 
where the specified condition is true.\n", "\n", "The distance example, using numpy, would look like the following:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from soma import aims\n", "import numpy\n", "vol = aims.Volume(192, 256, 128, 'S16')\n", "vol.header()['voxel_size'] = [0.9, 0.9, 1.2, 1.]\n", "vs = vol.header()['voxel_size']\n", "pos0 = (100 * vs[0], 100 * vs[1], 60 * vs[2]) # in millimeters\n", "# build arrays of coordinates for x, y, z\n", "x, y, z = numpy.ogrid[0.:vol.getSizeX(), 0.:vol.getSizeY(), 0.:vol.getSizeZ()]\n", "# get coords in millimeters\n", "x *= vs[0]\n", "y *= vs[1]\n", "z *= vs[2]\n", "# relative to pos0\n", "x -= pos0[0]\n", "y -= pos0[1]\n", "z -= pos0[2]\n", "# get norm, using numpy arrays broadcasting\n", "vol[:, :, :, 0] = numpy.sqrt(x**2 + y**2 + z**2)\n", "\n", "print(vol.value(100, 100, 60))\n", "assert(vol.value(100, 100, 60) == 0)\n", "\n", "# and save result\n", "aims.write(vol, 'distance2.nii')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This example appears a bit more tricky, since we must build the coordinate arrays, but is way faster to execute, because all loops within the code are executed in compiled routines in numpy. \n", "One interesting thing to note is that this code is using the famous \"array broadcasting\" feature of numpy, where arrays of heterogeneous sizes can be combined, and the \"missing\" dimensions are extended.\n", "\n", "\n", "#### Copying volumes or volume structure, or building from an array\n", "\n", "To make a deep-copy of a volume, use the copy constructor:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "vol2 = aims.Volume(vol)\n", "vol2[100, 100, 60, 0] = 12\n", "# now vol and vol2 have different values\n", "print('vol.value(100, 100, 60):', vol.value(100, 100, 60))\n", "assert(vol.value(100, 100, 60) == 0)\n", "print('vol2.value(100, 100, 60):', vol2.value(100, 100, 60))\n", "assert(vol2.value(100, 100, 60) == 12)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you need to build another, different volume, with the same structure and size, don't forget to copy the header part:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "vol2 = aims.Volume(vol.getSize(), 'FLOAT')\n", "vol2.copyHeaderFrom(vol.header())\n", "print(vol2.header())\n", "assert(vol2.header() == {'sizeX': 192, 'sizeY': 256, 'sizeZ': 128, 'sizeT': 1, 'voxel_size': [0.9, 0.9, 1.2, 1],\n", "                        'volume_dimension': [192, 256, 128, 1]})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Important information can reside in the header, like voxel size, or coordinate systems and geometric transformations to other coordinate systems, \n", "so it is very important to carry this information with duplicated or derived volumes.\n", "\n", "You can also build a volume from a numpy array:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "arr = numpy.array(numpy.diag(range(40)), dtype=numpy.float32).reshape(40, 40, 1) \\\n", "    + numpy.array(range(20), dtype=numpy.float32).reshape(1, 1, 20)\n", "# WARNING: AIMS used to require an array in Fortran ordering,\n", "# whereas the numpy addition always returns a C-ordered array.\n", "# In 5.1 this limitation is gone, however many C++ algorithms will not work\n", "# (and probably crash) with a C-ordered numpy array. If you need a\n", "# Fortran ordering, you can ask for it this way:\n", "# arr = numpy.array(arr, order='F')\n", "# for today, let's use the C ordered array:\n", "arr[10, 12, 3] = 25\n", "vol = aims.Volume(arr)\n", "print('vol.value(10, 12, 3):', vol.value(10, 12, 3))\n", "assert(vol.value(10, 12, 3) == 25.)\n", "\n", "# data are shared with arr\n", "vol.setValue(35, 10, 15, 2)\n", "print('arr[10, 15, 2]:', arr[10, 15, 2])\n", "assert(arr[10, 15, 2] == 35.0)\n", "arr[12, 15, 1] = 44\n", "print('vol.value(12, 15, 1):', vol.value(12, 15, 1))\n", "assert(vol.value(12, 15, 1) == 44.0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 4D volumes\n", "\n", "4D volumes work just like 3D volumes. Actually all volumes are 4D in AIMS, but the last dimension is commonly of size 1. \n", "In [soma.aims.Volume_FLOAT.value](pyaims_lowlevel.html#soma.aims.Volume_FLOAT.value) and [soma.aims.Volume_FLOAT.setValue](pyaims_lowlevel.html#soma.aims.Volume_FLOAT.setValue) methods, only the first dimension is mandatory, \n", "others are optional and default to 0, but up to 4 coordinates may be used. In the same way, the constructor takes up to 4 dimension parameters:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from soma import aims\n", "# create a 4D volume of signed 16-bit ints, of size 30x30x30x4\n", "vol = aims.Volume(30, 30, 30, 4, 'S16')\n", "# fill it with zeros\n", "vol.fill(0)\n", "# set value 12 at voxel (10, 10, 20, 2)\n", "vol.setValue(12, 10, 10, 20, 2)\n", "# get value at the same position\n", "x = vol.value(10, 10, 20, 2)\n", "print(x)\n", "assert(x == 12)\n", "# set the voxel size\n", "vol.header()['voxel_size'] = [0.9, 0.9, 1.2, 1.]\n", "print(vol.header())\n", "assert(vol.header() == {'sizeX': 30, 'sizeY': 30, 'sizeZ': 30, 'sizeT': 4, 'voxel_size': [0.9, 0.9, 1.2, 1],\n", "                        'volume_dimension': [30, 30, 30, 4]})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Similarly, 1D or 2D volumes may be used exactly the same way." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Volume views and subvolumes\n", "\n", "A volume can be a view into another volume. This is a way of handling \"borders\" with Volume:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "vol = aims.Volume(14, 14, 14, 1, dtype='int16')\n", "print('large volume size:', vol.shape)\n", "vol.fill(0)\n", "# take a view at position (2, 2, 2, 0) and size (10, 10, 10, 1)\n", "view = aims.VolumeView(vol, [2, 2, 2, 0], [10, 10, 10, 1])\n", "print('view size:', view.shape)\n", "assert(view.posInRefVolume() == (2, 2, 2, 0) and view.shape == (10, 10, 10, 1))\n", "view[0, 0, 0, 0] = 45\n", "assert(vol[2, 2, 2, 0] == 45)\n", "# view is actually a regular Volume\n", "print(type(view))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Partial IO\n", "\n", "Some volume IO formats implemented in the Soma-IO library support partial IO and reading a view inside a larger volume (since pyaims 5.1). Be careful: not all formats support these operations."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# get volume dimension on file\n", "import os\n", "os.chdir(tuto_dir)\n", "f = aims.Finder()\n", "f.check('data_for_anatomist/subject01/subject01.nii')\n", "# allocate a large volume\n", "vol = aims.Volume(300, 300, 200, 1, dtype=f.dataType())\n", "vol.fill(0)\n", "view = aims.VolumeView(vol, [20, 20, 20, 0], f.header()['volume_dimension'])\n", "print('view size:', view.shape)\n", "# read the view. the otion \"keep_allocation\" is important \n", "# in order to prevent re-allocation of the volume.\n", "aims.read('data_for_anatomist/subject01/subject01.nii', object=view, options={'keep_allocation': True})\n", "print(view[100, 100, 64, 0])\n", "assert(view[100, 100, 64, 0] == vol[120, 120, 84, 0])\n", "# otherwise we can read part of a volume\n", "view2 = aims.VolumeView(vol, (100, 100, 40, 0), (50, 50, 50, 1))\n", "aims.read('data_for_anatomist/subject01/subject01.nii', object=view2, \n", " options={'keep_allocation': True, 'ox': 60, 'sx': 50, \n", " 'oy': 50, 'sy': 50, 'sz': 50})\n", "print(view2[20, 20, 20, 0], vol[120, 120, 60, 0], view[100, 100, 40, 0])\n", "print(view2.shape, view.shape, vol.shape)\n", "assert(view2[20, 20, 20, 0] == vol[120, 120, 60, 0])\n", "assert(view2[20, 20, 20, 0] == view[100, 100, 40, 0])\n", "# view2 is shifted compared to view, check it\n", "assert(view2[20, 20, 20, 0] == view[80, 70, 20, 0])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Meshes\n", "\n", "#### Structure\n", "\n", "A surfacic mesh represents a surface, as a set of small polygons (generally triangles, but sometimes quads). \n", "It has two main components: a vector of vertices (each vertex is a 3D point, with coordinates in millimeters), \n", "and a vector of polygons: each polygon is defined by the vertices it links (3 for a triangle). It also optionally has normals (unit vectors). \n", "In our mesh structures, there is one normal for each vertex.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from soma import aims\n", "mesh = aims.read('data_for_anatomist/subject01/subject01_Lhemi.mesh')\n", "vert = mesh.vertex()\n", "print('vertices:', len(vert))\n", "assert(len(vert) == 33837)\n", "poly = mesh.polygon()\n", "print('polygons:', len(poly))\n", "assert(len(poly) == 67678)\n", "norm = mesh.normal()\n", "print('normals:', len(norm))\n", "assert(len(norm) == 33837)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To build a mesh, we can instantiate an object of type `aims.AimsTimeSurface__VOID`,\n", "for example [soma.aims.AimsTimeSurface_3_VOID](pyaims_lowlevel.html#soma.aims.AimsTimeSurface_3_VOID), with *n* being the number of vertices by polygon. VOID means that the mesh has no texture in it (which we generally don't use, we prefer using texture as separate objects).\n", "Then we can add vertices, normals and polygons to the mesh:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# build a flying saucer mesh\n", "from soma import aims\n", "import numpy\n", "mesh = aims.AimsTimeSurface(3)\n", "# a mesh has a header\n", "mesh.header()['toto'] = 'a message in the header'\n", "vert = mesh.vertex()\n", "poly = mesh.polygon()\n", "x = numpy.cos(numpy.ogrid[0.: 20] * numpy.pi / 10.) * 100\n", "y = numpy.sin(numpy.ogrid[0.: 20] * numpy.pi / 10.) 
* 100\n", "z = numpy.zeros(20)\n", "c = numpy.vstack((x, y, z)).transpose()\n", "vert.assign(numpy.vstack((numpy.array([(0., 0., -40.), (0., 0., 40.)]), c)))\n", "pol = numpy.vstack((numpy.zeros(20, dtype=numpy.int32), numpy.ogrid[3: 23], numpy.ogrid[2: 22])).transpose()\n", "pol[19, 1] = 2\n", "pol2 = numpy.vstack((numpy.ogrid[2: 22], numpy.ogrid[3: 23], numpy.ones(20, dtype=numpy.int32))).transpose()\n", "pol2[19, 1] = 2\n", "poly.assign(numpy.vstack((pol, pol2)))\n", "# automatically calculate normals\n", "mesh.updateNormals()\n", "# write result\n", "aims.write(mesh, 'saucer.gii')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![Flying saucer mesh](images/saucer.png \"Flying saucer mesh\")\n", "\n", "\n", "#### Modifying a mesh" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# slightly inflate a mesh\n", "from soma import aims\n", "import numpy\n", "mesh = aims.read('data_for_anatomist/subject01/subject01_Lwhite.mesh')\n", "vert = numpy.asarray(mesh.vertex())\n", "norm = numpy.asarray(mesh.normal())\n", "vert += norm * 2 # push vertices 2mm away along normal\n", "mesh.updateNormals()\n", "aims.write(mesh, 'subject01_Lwhite_semiinflated.mesh')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now look at both meshes in Anatomist...\n", "\n", "Note that this code only works this way from pyaims 4.7; earlier versions had to reassign the coordinates array to the vertices vector of the mesh.\n", "\n", "Alternatively, without numpy, we could have written the code like this:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mesh = aims.read('data_for_anatomist/subject01/subject01_Lwhite.mesh')\n", "vert = mesh.vertex()\n", "norm = mesh.normal()\n", "for v, n in zip(vert, norm):\n", "    v += n * 2\n", "mesh.updateNormals()\n", "aims.write(mesh, 'subject01_Lwhite_semiinflated.mesh')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![Inflated mesh](images/semi_inflated.png \"Inflated mesh\")\n", "\n", "\n", "#### Handling time\n", "\n", "In AIMS, meshes are actually time-indexed dictionaries of meshes. \n", "This way a deforming mesh can be stored in the same object. \n", "To copy one timestep to another, use the following:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from soma import aims\n", "mesh = aims.read('data_for_anatomist/subject01/subject01_Lwhite.mesh')\n", "# mesh.vertex() is equivalent to mesh.vertex(0)\n", "mesh.vertex(1).assign(mesh.vertex(0))\n", "# same for normals and polygons\n", "mesh.normal(1).assign(mesh.normal(0))\n", "mesh.polygon(1).assign(mesh.polygon(0))\n", "print('number of time steps:', mesh.size())\n", "assert(mesh.size() == 2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "

Exercise

\n", "Make a deforming mesh that goes from the original mesh to 5mm away, by steps of 0.5 mm\n", "
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from soma import aims\n", "import numpy\n", "mesh = aims.read('data_for_anatomist/subject01/subject01_Lwhite.mesh')\n", "vert = numpy.array(mesh.vertex()) # must make an actual copy to avoid modifying timestep 0\n", "norm = numpy.asarray(mesh.normal())\n", "for i in range(1, 10):\n", " mesh.polygon(i).assign(mesh.polygon())\n", " vert += norm * 0.5\n", " mesh.vertex(i).assign(vert)\n", " # don't bother about normals, we will rebuild them afterwards.\n", "print('number of time steps:', mesh.size())\n", "assert(mesh.size() == 10)\n", "mesh.updateNormals() # I told you about normals.\n", "aims.write(mesh, 'subject01_Lwhite_semiinflated_time.mesh')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![Inflated mesh with timesteps](images/semi_inflated_time.png \"Inflated mesh with timesteps\")\n", "\n", "#### Textures\n", "\n", "A texture is merely a vector of values, each of them is assigned to a mesh vertex, with a one-to-one mapping, in the same order.\n", "A texture is also a time-texture." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from soma import aims\n", "tex = aims.TimeTexture('FLOAT')\n", "t = tex[0] # time index, inserts on-the-fly\n", "t.reserve(10) # pre-allocates memory\n", "for i in range(10):\n", " t.append(i / 10.)\n", "print(tex.size())\n", "assert(len(tex) == 1)\n", "print(tex[0].size())\n", "assert(len(tex[0]) == 10)\n", "print(tex[0][5])\n", "assert(tex[0][5] == 0.5)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "

Exercise

\n", "Make a time-texture, with at each time/vertex of the previous mesh, sets the value of the underlying volume *data_for_anatomist/subject01/subject01.nii*\n", "
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from soma import aims\n", "import numpy as np\n", "\n", "mesh = aims.read('subject01_Lwhite_semiinflated_time.mesh')\n", "vol = aims.read('data_for_anatomist/subject01/subject01.nii')\n", "tex = aims.TimeTexture('FLOAT')\n", "vs = vol.header()['voxel_size']\n", "for i in range(mesh.size()):\n", " vert = np.asarray(mesh.vertex(i))\n", " tex[i].assign(np.zeros((len(vert),), dtype=np.float32))\n", " t = np.asarray(tex[i])\n", " coords = np.zeros((len(vert), len(vol.shape)), dtype=int)\n", " coords[:, :3] = np.round(vert / vs).astype(int)\n", " t[:] = vol[tuple(coords.T)]\n", "aims.write(tex, 'subject01_Lwhite_semiinflated_texture.tex')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now look at the texture on the mesh (inflated or not) in Anatomist. Compare it to a 3D fusion between the mesh and the MRI volume.\n", "\n", "![Computed time-texture vs 3D fusion](images/texture.png \"Computed time-texture vs 3D fusion\")\n", "\n", "**Bonus:** We can do the same for functional data. \n", "But in this case we may have a spatial transformation to apply between anatomical data and functional data \n", "(which may have been normalized, or acquired in a different referential)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from soma import aims\n", "import numpy as np\n", "mesh = aims.read('subject01_Lwhite_semiinflated_time.mesh')\n", "vol = aims.read('data_for_anatomist/subject01/Audio-Video_T_map.nii')\n", "# get header info from anatomical volume\n", "f = aims.Finder()\n", "assert(f.check('data_for_anatomist/subject01/subject01.nii'))\n", "anathdr = f.header()\n", "# get functional -> MNI transformation\n", "m1 = aims.AffineTransformation3d(vol.header()['transformations'][1])\n", "# get anat -> MNI transformation\n", "m2 = aims.AffineTransformation3d(anathdr['transformations'][1])\n", "# make anat -> functional transformation\n", "anat2func = m1.inverse() * m2\n", "# include functional voxel size to get to voxel coordinates\n", "vs = vol.header()['voxel_size']\n", "mvs = aims.AffineTransformation3d(np.diag(vs[:3] + [1.]))\n", "anat2func = mvs.inverse() * anat2func\n", "# now go as in the previous program\n", "tex = aims.TimeTexture('FLOAT')\n", "for i in range(mesh.size()):\n", " vert = np.asarray(mesh.vertex(i))\n", " tex[i].assign(np.zeros((len(vert),), dtype=np.float32))\n", " t = np.asarray(tex[i])\n", " coords = np.ones((len(vert), len(vol.shape)), dtype=np.float32)\n", " coords[:, :3] = vert\n", " # apply matrix anat2func to coordinates array\n", " coords = np.round(coords.dot(anat2func.toMatrix().T)).astype(int)\n", " coords[:, 3] = 0\n", " t[:] = vol[tuple(coords.T)]\n", "aims.write(tex, 'subject01_Lwhite_semiinflated_audio_video.tex')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "See how the functional data on the mesh changes across the depth of the cortex. \n", "This demonstrates the need to have a proper projection of functional data before dealing with surfacic functional processing.\n", "\n", "\n", "### Buckets\n", "\n", "\"Buckets\" are voxels lists. They are typically used to represent ROIs.\n", "A BucketMap is a list of Buckets. Each Bucket contains a list of voxels coordinates.\n", "A BucketMap is represented by the class [soma.aims.BucketMap_VOID](pyaims_lowlevel.html#soma.aims.BucketMap_VOID)." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from soma import aims\n", "bck_map=aims.read('data_for_anatomist/roi/basal_ganglia.data/roi_Bucket.bck')\n", "print('Bucket map: ', bck_map)\n", "print('Nb buckets: ', bck_map.size())\n", "assert(bck_map.size() == 15)\n", "for i in range(bck_map.size()):\n", " b = bck_map[i]\n", " print(\"Bucket\", i, \", nb voxels:\", b.size())\n", " if b.keys():\n", " print(\" Coordinates of the first voxel:\", b.keys()[0].list())\n", "assert(bck_map[0].size() == 2314)\n", "assert(bck_map[0].keys()[0] == [108, 132, 44])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Graphs\n", "\n", "Graphs are data structures that may contain various elements. \n", "They can represent sets of smaller structures, and also relations between such structures. \n", "The main usage we have for them is to represent ROIs sets, sulci, or fiber bundles.\n", "A graph is represented by the class [soma.aims.Graph](pyaims_lowlevel.html#soma.aims.Graph).\n", "\n", "A graph contains:\n", "\n", " * properties of any type, like a volume or mesh header.\n", " * nodes (also called vertices), which represent structured elements (a ROI, a sulcus part, etc), \n", " which in turn can store properties, and geometrical elements: buckets, meshes...\n", " * optionally, relations, which link nodes and can also contain properties and geometrical elements.\n", "\n", "#### Properties\n", "\n", "Properties are stored in a dictionary-like way. They can hold almost anything, but a restricted set of types can be saved and loaded. \n", "It is exactly the same thing as headers found in volumes, meshes, textures or buckets." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from soma import aims\n", "graph = aims.read('data_for_anatomist/roi/basal_ganglia.arg')\n", "print(graph)\n", "assert(repr(graph).startswith(\"{ '__syntax__' : 'RoiArg', 'RoiArg_VERSION' : '1.0', \"\n", " \"'filename_base' : 'basal_ganglia.data',\"))\n", "print('properties:', graph.keys())\n", "assert(len([x in graph.keys() \n", " for x in ('RoiArg_VERSION', 'filename_base', 'roi.global.bck', \n", " 'type.global.bck', 'boundingbox_max')]) == 5)\n", "for p, v in graph.items():\n", " print(p, ':', v)\n", "graph['gudule'] = [12, 'a comment']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "

Note

\n", "Only properties declared in a \"syntax\" file may be saved and re-loaded. Other properties are just not saved.\n", "
\n", "\n", "### Vertices\n", "\n", "Vertices (or nodes) can be accessed via the vertices() method. Each vertex is also a dictionary-like properties set." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for v_name in sorted([v['name'] for v in graph.vertices()]):\n", " print(v_name)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To insert a new vertex, the [soma.aims.Graph.addVertex()](pyaims_lowlevel.html#soma.aims.Graph) method should be used:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "v = graph.addVertex('roi')\n", "print(v)\n", "assert(v.getSyntax() == 'roi')\n", "v['name'] = 'new ROI'\n", "print(v)\n", "assert(v == {'name': 'new ROI'})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Edges\n", "\n", "An edge, or relation, links nodes together. Up to now we have always used binary, unoriented, edges. \n", "They can be added using the [soma.aims.Graph.addEdge()](pyaims_lowlevel.html#soma.aims.Graph) method. \n", "Edges are also dictionary-like properties sets." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "v2 = [x for x in graph.vertices() if x['name'] == 'Pallidum_gauche'][0]\n", "if sys.version_info[2] < 3:\n", " del x # python2 keeps this intermediate variable allocated: clean it.\n", "e = graph.addEdge(v, v2, 'roi_link')\n", "print(graph.edges())\n", "# get vertices linked by this edge\n", "print(sorted([x['name'] for x in e.vertices()]))\n", "assert(sorted([x['name'] for x in e.vertices()]) == ['Pallidum_gauche', 'new ROI'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Adding meshes or buckets in a graph vertex or relation\n", "\n", "Setting meshes or buckets in vertices properties is OK internally, \n", "but for saving and loading, additional consistancy must be ensured and internal tables update is required. \n", "Then, use the [soma.aims.GraphManip.storeAims](pyaims_highlevel.html#soma.aims.GraphManip) function:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mesh = aims.read('data_for_anatomist/subject01/subject01_Lwhite.mesh')\n", "# store mesh in the 'roi' property of vertex v of graph graph\n", "aims.GraphManip.storeAims(graph, v, 'roi', mesh)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Other examples\n", "\n", "There are other examples for pyaims [here](../examples).\n", "\n", "\n", "Using algorithms\n", "----------------\n", "\n", "AIMS contains, in addition to the different data structures used in neuroimaging, a set of algorithms which operate on these structures. \n", "Currently only a few of them have Python bindings, because we develop these bindings in a \"lazy\" way, only when they are needed. \n", "The algorithms currently available include data conversion, resampling, thresholding, \n", "mathematical morphology, distance maps, the mesher, some mesh generators, and a few others. 
\n", "But most of the algorithms are still only available in C++.\n", "\n", "\n", "### Volume Thresholding" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from soma import aims, aimsalgo\n", "# read a volume with 2 voxels border\n", "vol = aims.read('data_for_anatomist/subject01/subject01.nii', border=2)\n", "# use a thresholder which will keep values above 600\n", "ta = aims.AimsThreshold(aims.AIMS_GREATER_OR_EQUAL_TO, 600, intype=vol)\n", "print('vol:', vol.getSize())\n", "# use it to make a binary thresholded volume\n", "tvol = ta.bin(vol)\n", "print(tvol.value(0, 0, 0))\n", "assert(tvol.value(0, 0, 0) == 0)\n", "print(tvol.value(100, 100, 50))\n", "assert(tvol.value(100, 100, 50) == 32767)\n", "aims.write(tvol, 'thresholded.nii')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![Thresholded T1 MRI](images/aimsalgo_threshold.png \"Thresholded T1 MRI\")\n", "\n", "
\n", "

Warning

\n", "warning:: Some algorithms need that the volume they process have a **border**: a few voxels all around the volume. \n", " Indeed, some algorithms can try to access voxels outside the boundaries of the volume which may cause a segmentation error if the volume doesn't have a border. \n", " That's the case for example for operations like erosion, dilation, closing. \n", " There's no test in each point to detect if the algorithm tries to access outside the volume because it would slow down the process.\n", "\n", " In the previous example, a 2 voxels border is added by passing a parameter *border=2* to [soma.aims.read](pyaims_highlevel.html#soma.aims.read) function.\n", "
\n", "\n", "### Mathematical morphology" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# apply 5mm closing\n", "clvol = aimsalgo.AimsMorphoClosing(tvol, 5)\n", "aims.write(clvol, 'closed.nii')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![Closing of a thresholded T1 MRI](images/closed.png \"Closing of a thresholded T1 MRI\")\n", "\n", "\n", "### Mesher" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m = aimsalgo.Mesher()\n", "mesh = aims.AimsSurfaceTriangle() # create an empty mesh\n", "# the border should be -1\n", "clvol.fillBorder(-1)\n", "# get a smooth mesh of the interface of the biggest connected component\n", "m.getBrain(clvol, mesh)\n", "aims.write(mesh, 'head_mesh.gii')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![Head mesh](images/head_mesh.png \"Head mesh\")\n", " \n", "The above examples make up a simplified version of the head mesh extraction algorithm in `VipGetHead`, used in the Morphologist pipeline.\n", "\n", "\n", "### Surface generation\n", "\n", "The [soma.aims.SurfaceGenerator](pyaims_algo.html#soma.aims.SurfaceGenerator) allows to create simple meshes of predefined shapes: cube, cylinder, sphere, icosehedron, cone, arrow." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from soma import aims\n", "center = (50, 25, 20)\n", "radius = 53\n", "mesh1 = aims.SurfaceGenerator.icosahedron(center, radius)\n", "mesh2 = aims.SurfaceGenerator.generate(\n", " {'type': 'arrow', 'point1': [30, 70, 0],\n", " 'point2': [100, 100, 100], 'radius': 20, 'arrow_radius': 30,\n", " 'arrow_length_factor': 0.7, 'facets': 50})\n", "# get the list of all possible generated objects and parameters:\n", "print(aims.SurfaceGenerator.description())\n", "assert('arrow_length_factor' in aims.SurfaceGenerator.description()[0])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![Generated icosahedron and arrow](images/surface_generator.png \"Generated icosahedron and arrow\")\n", "\n", "\n", "### Interpolation\n", "\n", "Interpolators help to get values in millimeters coordinates in a discrete space (volume grid), and may allow voxels values mixing (linear interpolation, typically)." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from soma import aims\n", "import numpy as np\n", "# load a functional volume\n", "vol = aims.read('data_for_anatomist/subject01/Audio-Video_T_map.nii')\n", "# get the position of the maximum\n", "maxval = vol.max()\n", "pmax = [p[0] for p in np.where(vol.np == maxval)]\n", "# set pmax in mm\n", "vs = vol.header()['voxel_size']\n", "pmax = [x * y for x,y in zip(pmax, vs)]\n", "# take a sphere of 5mm radius, with about 200 vertices\n", "mesh = aims.SurfaceGenerator.sphere(pmax[:3], 5., 200)\n", "vert = mesh.vertex()\n", "# get an interpolator\n", "interpolator = aims.aims.getLinearInterpolator(vol)\n", "# create a texture for that sphere\n", "tex = aims.TimeTexture_FLOAT()\n", "tx = tex[0]\n", "tx2 = tex[1]\n", "tx.reserve(len(vert))\n", "tx2.reserve(len(vert))\n", "for v in vert:\n", " tx.append(interpolator.value(v))\n", " # compare to non-interpolated value\n", " tx2.append(vol.value(*[int(round(x / y)) for x,y in zip(v, vs)]))\n", "aims.write(tex, 'functional_tex.gii')\n", "aims.write(mesh, 'sphere.gii')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Look at the difference between the two timesteps (interpolated and non-interpolated) of the texture in Anatomist.\n", "\n", "![](images/interpolated.png)\n", "![Interpolated vs not interpolated texture](images/not_interpolated.png \"Interpolated vs not interpolated texture\")\n", "\n", "\n", "### Types conversion\n", "\n", "The `Converter_*_*` classes allow to convert some data structures types to others. \n", "Of course all types cannot be converted to any other, but they are typically used ton convert volumed from a given voxel type to another one. \n", "A \"factory\" function may help to build the correct converter using input and output types. \n", "For instance, to convert the anatomical volume of the previous examples to float type:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from soma import aims\n", "vol = aims.read('data_for_anatomist/subject01/subject01.nii')\n", "print('type of vol:', type(vol))\n", "assert(type(vol) is aims.Volume_S16)\n", "c = aims.Converter(intype=vol, outtype=aims.Volume('FLOAT'))\n", "vol2 = c(vol)\n", "print('type of converted volume:', type(vol2))\n", "assert(type(vol2) is aims.Volume_FLOAT)\n", "print('value of initial volume at voxel (50, 50, 50):', vol.value(50, 50, 50))\n", "assert(vol.value(50, 50, 50) == 57)\n", "print('value of converted volume at voxel (50, 50, 50):', vol2.value(50, 50, 50))\n", "assert(vol2.value(50, 50, 50) == 57.0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Resampling\n", "\n", "Resampling allows to apply a geometric transformation or/and to change voxels size. \n", "Several types of resampling may be used depending on how we interpolate values between neighbouring voxels (see interpolators): \n", "nearest-neighbour (order 0), linear (order 1), spline resampling with order 2 to 7 in AIMS." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from soma import aims, aimsalgo\n", "import math\n", "vol = aims.read('data_for_anatomist/subject01/subject01.nii')\n", "# create an affine transformation matrix\n", "# rotating pi/8 along z axis\n", "tr = aims.AffineTransformation3d(aims.Quaternion([0, 0, math.sin(math.pi / 16), math.cos(math.pi / 16)]))\n", "tr.setTranslation((100, -50, 0))\n", "# get an order 2 resampler for volumes of S16\n", "resp = aims.ResamplerFactory_S16().getResampler(2)\n", "resp.setDefaultValue(-1) # set background to -1\n", "resp.setRef(vol) # volume to resample\n", "# resample into a volume of dimension 200x200x200 with voxel size 1.1, 1.1, 1.5\n", "resampled = resp.doit(tr, 200, 200, 200, (1.1, 1.1, 1.5))\n", "# Note that the header transformations to external referentials have been updated\n", "print(resampled.header()['referentials'])\n", "assert(resampled.header()['referentials'] \n", " == ['Scanner-based anatomical coordinates', 'Talairach-MNI template-SPM'])\n", "import numpy\n", "numpy.set_printoptions(precision=4)\n", "for t in resampled.header()['transformations']:\n", " print(aims.AffineTransformation3d(t))\n", "aims.write(resampled, 'resampled.nii')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Load the original image and the resampled in Anatomist. \n", "See how the resampled has been rotated. Now apply the NIFTI/SPM referential info on both images. \n", "They are now aligned again, and cursor clicks correctly go to the same location on both volume, whatever the display referential for each of them.\n", "\n", "![Aimsalgo resampling](images/aimsalgo_resampled.png \"Aimsalgo resampling\")\n", " \n", "\n", "PyAIMS / PyAnatomist integration\n", "--------------------------------\n", "\n", "It is possible to use both PyAims and PyAnatomist APIs together in python.\n", "See [the Pyanatomist / PyAims tutorial](../../pyanatomist/sphinx/pyanatomist_pyaims_tutorial.html)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "At the end, cleanup the temporary working directory\n", "----------------------------------------------------" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# cleanup data\n", "import shutil\n", "os.chdir(older_cwd)\n", "shutil.rmtree(tuto_dir)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.4" }, "nbsphinx": { "timeout": 120 } }, "nbformat": 4, "nbformat_minor": 2 }