\n",
"
Exercise
\n",
"Make a deforming mesh that goes from the original mesh to 5mm away, by steps of 0.5 mm\n",
"
"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from soma import aims\n",
"import numpy\n",
"mesh = aims.read('data_for_anatomist/subject01/subject01_Lwhite.mesh')\n",
"vert = numpy.array(mesh.vertex()) # must make an actual copy to avoid modifying timestep 0\n",
"norm = numpy.asarray(mesh.normal())\n",
"for i in range(1, 10):\n",
" mesh.polygon(i).assign(mesh.polygon())\n",
" vert += norm * 0.5\n",
" mesh.vertex(i).assign(vert)\n",
" # don't bother about normals, we will rebuild them afterwards.\n",
"print('number of time steps:', mesh.size())\n",
"assert(mesh.size() == 10)\n",
"mesh.updateNormals() # I told you about normals.\n",
"aims.write(mesh, 'subject01_Lwhite_semiinflated_time.mesh')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"#### Textures\n",
"\n",
"A texture is merely a vector of values, each of them is assigned to a mesh vertex, with a one-to-one mapping, in the same order.\n",
"A texture is also a time-texture."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from soma import aims\n",
"tex = aims.TimeTexture('FLOAT')\n",
"t = tex[0] # time index, inserts on-the-fly\n",
"t.reserve(10) # pre-allocates memory\n",
"for i in range(10):\n",
" t.append(i / 10.)\n",
"print(tex.size())\n",
"assert(len(tex) == 1)\n",
"print(tex[0].size())\n",
"assert(len(tex[0]) == 10)\n",
"print(tex[0][5])\n",
"assert(tex[0][5] == 0.5)"
]
},
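{
"cell_type": "markdown",
"metadata": {},
"source": [
"As for volumes and meshes, a texture time step exposes its data as a numpy array (a minimal sketch; as in the exercise solution below, `np.asarray` on a time step gives a view that writes through to the texture):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"arr = np.asarray(tex[0])  # a view on the texture data, no copy\n",
"arr *= 2.  # modifies the texture in-place\n",
"print(tex[0][5])\n",
"assert(tex[0][5] == 1.)"
]
},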
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"
Exercise
\n",
"Make a time-texture, with at each time/vertex of the previous mesh, sets the value of the underlying volume *data_for_anatomist/subject01/subject01.nii*\n",
"
"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from soma import aims\n",
"import numpy as np\n",
"\n",
"mesh = aims.read('subject01_Lwhite_semiinflated_time.mesh')\n",
"vol = aims.read('data_for_anatomist/subject01/subject01.nii')\n",
"tex = aims.TimeTexture('FLOAT')\n",
"vs = vol.header()['voxel_size']\n",
"for i in range(mesh.size()):\n",
" vert = np.asarray(mesh.vertex(i))\n",
" tex[i].assign(np.zeros((len(vert),), dtype=np.float32))\n",
" t = np.asarray(tex[i])\n",
" coords = np.zeros((len(vert), len(vol.shape)), dtype=int)\n",
" coords[:, :3] = np.round(vert / vs).astype(int)\n",
" t[:] = vol[tuple(coords.T)]\n",
"aims.write(tex, 'subject01_Lwhite_semiinflated_texture.tex')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now look at the texture on the mesh (inflated or not) in Anatomist. Compare it to a 3D fusion between the mesh and the MRI volume.\n",
"\n",
"\n",
"\n",
"**Bonus:** We can do the same for functional data. \n",
"But in this case we may have a spatial transformation to apply between anatomical data and functional data \n",
"(which may have been normalized, or acquired in a different referential)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from soma import aims\n",
"import numpy as np\n",
"mesh = aims.read('subject01_Lwhite_semiinflated_time.mesh')\n",
"vol = aims.read('data_for_anatomist/subject01/Audio-Video_T_map.nii')\n",
"# get header info from anatomical volume\n",
"f = aims.Finder()\n",
"assert(f.check('data_for_anatomist/subject01/subject01.nii'))\n",
"anathdr = f.header()\n",
"# get functional -> MNI transformation\n",
"m1 = aims.AffineTransformation3d(vol.header()['transformations'][1])\n",
"# get anat -> MNI transformation\n",
"m2 = aims.AffineTransformation3d(anathdr['transformations'][1])\n",
"# make anat -> functional transformation\n",
"anat2func = m1.inverse() * m2\n",
"# include functional voxel size to get to voxel coordinates\n",
"vs = vol.header()['voxel_size']\n",
"mvs = aims.AffineTransformation3d(np.diag(vs[:3] + [1.]))\n",
"anat2func = mvs.inverse() * anat2func\n",
"# now go as in the previous program\n",
"tex = aims.TimeTexture('FLOAT')\n",
"for i in range(mesh.size()):\n",
" vert = np.asarray(mesh.vertex(i))\n",
" tex[i].assign(np.zeros((len(vert),), dtype=np.float32))\n",
" t = np.asarray(tex[i])\n",
" coords = np.ones((len(vert), len(vol.shape)), dtype=np.float32)\n",
" coords[:, :3] = vert\n",
" # apply matrix anat2func to coordinates array\n",
" coords = np.round(coords.dot(anat2func.toMatrix().T)).astype(int)\n",
" coords[:, 3] = 0\n",
" t[:] = vol[tuple(coords.T)]\n",
"aims.write(tex, 'subject01_Lwhite_semiinflated_audio_video.tex')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"See how the functional data on the mesh changes across the depth of the cortex. \n",
"This demonstrates the need to have a proper projection of functional data before dealing with surfacic functional processing.\n",
"\n",
"\n",
"### Buckets\n",
"\n",
"\"Buckets\" are voxels lists. They are typically used to represent ROIs.\n",
"A BucketMap is a list of Buckets. Each Bucket contains a list of voxels coordinates.\n",
"A BucketMap is represented by the class [soma.aims.BucketMap_VOID](pyaims_lowlevel.html#soma.aims.BucketMap_VOID)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from soma import aims\n",
"bck_map=aims.read('data_for_anatomist/roi/basal_ganglia.data/roi_Bucket.bck')\n",
"print('Bucket map: ', bck_map)\n",
"print('Nb buckets: ', bck_map.size())\n",
"assert(bck_map.size() == 15)\n",
"for i in range(bck_map.size()):\n",
" b = bck_map[i]\n",
" print(\"Bucket\", i, \", nb voxels:\", b.size())\n",
" if b.keys():\n",
" print(\" Coordinates of the first voxel:\", b.keys()[0].list())\n",
"assert(bck_map[0].size() == 2314)\n",
"assert(bck_map[0].keys()[0] == [108, 132, 44])"
]
},
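{
"cell_type": "markdown",
"metadata": {},
"source": [
"The voxel coordinates of a bucket are easy to gather into a numpy array, e.g. to work with them in millimeters (a sketch reusing the bucket map above; it assumes the bucket map header carries a *voxel_size* entry, as volume headers do):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"# stack the voxel coordinates of the first bucket into an array\n",
"coords = np.array([p.list() for p in bck_map[0].keys()])\n",
"print(coords.shape)\n",
"assert(coords.shape == (2314, 3))\n",
"# convert to millimeters using the (assumed) voxel size of the bucket map\n",
"vs = np.asarray(bck_map.header()['voxel_size'])[:3]\n",
"coords_mm = coords * vs\n",
"print('first voxel in mm:', coords_mm[0])"
]
},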
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Graphs\n",
"\n",
"Graphs are data structures that may contain various elements. \n",
"They can represent sets of smaller structures, and also relations between such structures. \n",
"The main usage we have for them is to represent ROIs sets, sulci, or fiber bundles.\n",
"A graph is represented by the class [soma.aims.Graph](pyaims_lowlevel.html#soma.aims.Graph).\n",
"\n",
"A graph contains:\n",
"\n",
" * properties of any type, like a volume or mesh header.\n",
" * nodes (also called vertices), which represent structured elements (a ROI, a sulcus part, etc), \n",
" which in turn can store properties, and geometrical elements: buckets, meshes...\n",
" * optionally, relations, which link nodes and can also contain properties and geometrical elements.\n",
"\n",
"#### Properties\n",
"\n",
"Properties are stored in a dictionary-like way. They can hold almost anything, but a restricted set of types can be saved and loaded. \n",
"It is exactly the same thing as headers found in volumes, meshes, textures or buckets."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from soma import aims\n",
"graph = aims.read('data_for_anatomist/roi/basal_ganglia.arg')\n",
"print(graph)\n",
"assert(repr(graph).startswith(\"{ '__syntax__' : 'RoiArg', 'RoiArg_VERSION' : '1.0', \"\n",
" \"'filename_base' : 'basal_ganglia.data',\"))\n",
"print('properties:', graph.keys())\n",
"assert(len([x in graph.keys() \n",
" for x in ('RoiArg_VERSION', 'filename_base', 'roi.global.bck', \n",
" 'type.global.bck', 'boundingbox_max')]) == 5)\n",
"for p, v in graph.items():\n",
" print(p, ':', v)\n",
"graph['gudule'] = [12, 'a comment']"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"
Note
\n",
"Only properties declared in a \"syntax\" file may be saved and re-loaded. Other properties are just not saved.\n",
"
\n",
"\n",
"### Vertices\n",
"\n",
"Vertices (or nodes) can be accessed via the vertices() method. Each vertex is also a dictionary-like properties set."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"for v_name in sorted([v['name'] for v in graph.vertices()]):\n",
" print(v_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To insert a new vertex, the [soma.aims.Graph.addVertex()](pyaims_lowlevel.html#soma.aims.Graph) method should be used:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"v = graph.addVertex('roi')\n",
"print(v)\n",
"assert(v.getSyntax() == 'roi')\n",
"v['name'] = 'new ROI'\n",
"print(v)\n",
"assert(v == {'name': 'new ROI'})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Edges\n",
"\n",
"An edge, or relation, links nodes together. Up to now we have always used binary, unoriented, edges. \n",
"They can be added using the [soma.aims.Graph.addEdge()](pyaims_lowlevel.html#soma.aims.Graph) method. \n",
"Edges are also dictionary-like properties sets."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"v2 = [x for x in graph.vertices() if x['name'] == 'Pallidum_gauche'][0]\n",
"if sys.version_info[2] < 3:\n",
" del x # python2 keeps this intermediate variable allocated: clean it.\n",
"e = graph.addEdge(v, v2, 'roi_link')\n",
"print(graph.edges())\n",
"# get vertices linked by this edge\n",
"print(sorted([x['name'] for x in e.vertices()]))\n",
"assert(sorted([x['name'] for x in e.vertices()]) == ['Pallidum_gauche', 'new ROI'])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Adding meshes or buckets in a graph vertex or relation\n",
"\n",
"Setting meshes or buckets in vertices properties is OK internally, \n",
"but for saving and loading, additional consistancy must be ensured and internal tables update is required. \n",
"Then, use the [soma.aims.GraphManip.storeAims](pyaims_highlevel.html#soma.aims.GraphManip) function:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"mesh = aims.read('data_for_anatomist/subject01/subject01_Lwhite.mesh')\n",
"# store mesh in the 'roi' property of vertex v of graph graph\n",
"aims.GraphManip.storeAims(graph, v, 'roi', mesh)"
]
},
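{
"cell_type": "markdown",
"metadata": {},
"source": [
"The modified graph, with its new vertex, edge and mesh, can then be saved like any other AIMS object (the output filename below is just an example; the mesh should end up in the companion *.data* directory pointed to by the *filename_base* property):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"aims.write(graph, 'basal_ganglia_modified.arg')"
]
},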
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Other examples\n",
"\n",
"There are other examples for pyaims [here](../examples).\n",
"\n",
"\n",
"Using algorithms\n",
"----------------\n",
"\n",
"AIMS contains, in addition to the different data structures used in neuroimaging, a set of algorithms which operate on these structures. \n",
"Currently only a few of them have Python bindings, because we develop these bindings in a \"lazy\" way, only when they are needed. \n",
"The algorithms currently available include data conversion, resampling, thresholding, \n",
"mathematical morphology, distance maps, the mesher, some mesh generators, and a few others. \n",
"But most of the algorithms are still only available in C++.\n",
"\n",
"\n",
"### Volume Thresholding"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from soma import aims, aimsalgo\n",
"# read a volume with 2 voxels border\n",
"vol = aims.read('data_for_anatomist/subject01/subject01.nii', border=2)\n",
"# use a thresholder which will keep values above 600\n",
"ta = aims.AimsThreshold(aims.AIMS_GREATER_OR_EQUAL_TO, 600, intype=vol)\n",
"print('vol:', vol.getSize())\n",
"# use it to make a binary thresholded volume\n",
"tvol = ta.bin(vol)\n",
"print(tvol.value(0, 0, 0))\n",
"assert(tvol.value(0, 0, 0) == 0)\n",
"print(tvol.value(100, 100, 50))\n",
"assert(tvol.value(100, 100, 50) == 32767)\n",
"aims.write(tvol, 'thresholded.nii')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"\n",
"
Warning
\n",
"warning:: Some algorithms need that the volume they process have a **border**: a few voxels all around the volume. \n",
" Indeed, some algorithms can try to access voxels outside the boundaries of the volume which may cause a segmentation error if the volume doesn't have a border. \n",
" That's the case for example for operations like erosion, dilation, closing. \n",
" There's no test in each point to detect if the algorithm tries to access outside the volume because it would slow down the process.\n",
"\n",
" In the previous example, a 2 voxels border is added by passing a parameter *border=2* to [soma.aims.read](pyaims_highlevel.html#soma.aims.read) function.\n",
"
\n",
"\n",
"### Mathematical morphology"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# apply 5mm closing\n",
"clvol = aimsalgo.AimsMorphoClosing(tvol, 5)\n",
"aims.write(clvol, 'closed.nii')"
]
},
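{
"cell_type": "markdown",
"metadata": {},
"source": [
"Erosion and dilation, mentioned in the warning above, can be chained the same way (a sketch assuming *AimsMorphoErosion* and *AimsMorphoDilation* follow the same calling convention as *AimsMorphoClosing*):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# 2mm erosion followed by 2mm dilation: a morphological opening\n",
"ervol = aimsalgo.AimsMorphoErosion(tvol, 2)\n",
"opvol = aimsalgo.AimsMorphoDilation(ervol, 2)\n",
"aims.write(opvol, 'opened.nii')"
]
},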
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"\n",
"### Mesher"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"m = aimsalgo.Mesher()\n",
"mesh = aims.AimsSurfaceTriangle() # create an empty mesh\n",
"# the border should be -1\n",
"clvol.fillBorder(-1)\n",
"# get a smooth mesh of the interface of the biggest connected component\n",
"m.getBrain(clvol, mesh)\n",
"aims.write(mesh, 'head_mesh.gii')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
" \n",
"The above examples make up a simplified version of the head mesh extraction algorithm in `VipGetHead`, used in the Morphologist pipeline.\n",
"\n",
"\n",
"### Surface generation\n",
"\n",
"The [soma.aims.SurfaceGenerator](pyaims_algo.html#soma.aims.SurfaceGenerator) allows to create simple meshes of predefined shapes: cube, cylinder, sphere, icosehedron, cone, arrow."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from soma import aims\n",
"center = (50, 25, 20)\n",
"radius = 53\n",
"mesh1 = aims.SurfaceGenerator.icosahedron(center, radius)\n",
"mesh2 = aims.SurfaceGenerator.generate(\n",
" {'type': 'arrow', 'point1': [30, 70, 0],\n",
" 'point2': [100, 100, 100], 'radius': 20, 'arrow_radius': 30,\n",
" 'arrow_length_factor': 0.7, 'facets': 50})\n",
"# get the list of all possible generated objects and parameters:\n",
"print(aims.SurfaceGenerator.description())\n",
"assert('arrow_length_factor' in aims.SurfaceGenerator.description()[0])"
]
},
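{
"cell_type": "markdown",
"metadata": {},
"source": [
"The generated meshes can be written to disk, or concatenated first into a single mesh (a sketch assuming *soma.aims.SurfaceManip.meshMerge*, which appends its second argument to the first):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# merge the arrow into the icosahedron, then save the combined mesh\n",
"aims.SurfaceManip.meshMerge(mesh1, mesh2)\n",
"aims.write(mesh1, 'generated_shapes.gii')"
]
},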
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"\n",
"### Interpolation\n",
"\n",
"Interpolators help to get values in millimeters coordinates in a discrete space (volume grid), and may allow voxels values mixing (linear interpolation, typically)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from soma import aims\n",
"import numpy as np\n",
"# load a functional volume\n",
"vol = aims.read('data_for_anatomist/subject01/Audio-Video_T_map.nii')\n",
"# get the position of the maximum\n",
"maxval = vol.max()\n",
"pmax = [p[0] for p in np.where(vol.np == maxval)]\n",
"# set pmax in mm\n",
"vs = vol.header()['voxel_size']\n",
"pmax = [x * y for x,y in zip(pmax, vs)]\n",
"# take a sphere of 5mm radius, with about 200 vertices\n",
"mesh = aims.SurfaceGenerator.sphere(pmax[:3], 5., 200)\n",
"vert = mesh.vertex()\n",
"# get an interpolator\n",
"interpolator = aims.aims.getLinearInterpolator(vol)\n",
"# create a texture for that sphere\n",
"tex = aims.TimeTexture_FLOAT()\n",
"tx = tex[0]\n",
"tx2 = tex[1]\n",
"tx.reserve(len(vert))\n",
"tx2.reserve(len(vert))\n",
"for v in vert:\n",
" tx.append(interpolator.value(v))\n",
" # compare to non-interpolated value\n",
" tx2.append(vol.value(*[int(round(x / y)) for x,y in zip(v, vs)]))\n",
"aims.write(tex, 'functional_tex.gii')\n",
"aims.write(mesh, 'sphere.gii')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Look at the difference between the two timesteps (interpolated and non-interpolated) of the texture in Anatomist.\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"### Types conversion\n",
"\n",
"The `Converter_*_*` classes allow to convert some data structures types to others. \n",
"Of course all types cannot be converted to any other, but they are typically used ton convert volumed from a given voxel type to another one. \n",
"A \"factory\" function may help to build the correct converter using input and output types. \n",
"For instance, to convert the anatomical volume of the previous examples to float type:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from soma import aims\n",
"vol = aims.read('data_for_anatomist/subject01/subject01.nii')\n",
"print('type of vol:', type(vol))\n",
"assert(type(vol) is aims.Volume_S16)\n",
"c = aims.Converter(intype=vol, outtype=aims.Volume('FLOAT'))\n",
"vol2 = c(vol)\n",
"print('type of converted volume:', type(vol2))\n",
"assert(type(vol2) is aims.Volume_FLOAT)\n",
"print('value of initial volume at voxel (50, 50, 50):', vol.value(50, 50, 50))\n",
"assert(vol.value(50, 50, 50) == 57)\n",
"print('value of converted volume at voxel (50, 50, 50):', vol2.value(50, 50, 50))\n",
"assert(vol2.value(50, 50, 50) == 57.0)"
]
},
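{
"cell_type": "markdown",
"metadata": {},
"source": [
"The same factory mechanism works in the other direction, e.g. to convert the float volume back to a 16-bit one (a quick sketch reusing the volumes above; fractional values, if any, would be truncated):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"c2 = aims.Converter(intype=vol2, outtype=aims.Volume('S16'))\n",
"vol3 = c2(vol2)\n",
"print('type of round-trip volume:', type(vol3))\n",
"assert(type(vol3) is aims.Volume_S16)\n",
"assert(vol3.value(50, 50, 50) == 57)"
]
},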
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Resampling\n",
"\n",
"Resampling allows to apply a geometric transformation or/and to change voxels size. \n",
"Several types of resampling may be used depending on how we interpolate values between neighbouring voxels (see interpolators): \n",
"nearest-neighbour (order 0), linear (order 1), spline resampling with order 2 to 7 in AIMS."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from soma import aims, aimsalgo\n",
"import math\n",
"vol = aims.read('data_for_anatomist/subject01/subject01.nii')\n",
"# create an affine transformation matrix\n",
"# rotating pi/8 along z axis\n",
"tr = aims.AffineTransformation3d(aims.Quaternion([0, 0, math.sin(math.pi / 16), math.cos(math.pi / 16)]))\n",
"tr.setTranslation((100, -50, 0))\n",
"# get an order 2 resampler for volumes of S16\n",
"resp = aims.ResamplerFactory_S16().getResampler(2)\n",
"resp.setDefaultValue(-1) # set background to -1\n",
"resp.setRef(vol) # volume to resample\n",
"# resample into a volume of dimension 200x200x200 with voxel size 1.1, 1.1, 1.5\n",
"resampled = resp.doit(tr, 200, 200, 200, (1.1, 1.1, 1.5))\n",
"# Note that the header transformations to external referentials have been updated\n",
"print(resampled.header()['referentials'])\n",
"assert(resampled.header()['referentials'] \n",
" == ['Scanner-based anatomical coordinates', 'Talairach-MNI template-SPM'])\n",
"import numpy\n",
"numpy.set_printoptions(precision=4)\n",
"for t in resampled.header()['transformations']:\n",
" print(aims.AffineTransformation3d(t))\n",
"aims.write(resampled, 'resampled.nii')"
]
},
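{
"cell_type": "markdown",
"metadata": {},
"source": [
"For comparison, the same factory can provide a nearest-neighbour resampler (order 0), which does not mix voxel values and is therefore the right choice for label volumes (a sketch reusing the transformation above):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# order 0: nearest-neighbour resampling, no value mixing\n",
"resp0 = aims.ResamplerFactory_S16().getResampler(0)\n",
"resp0.setDefaultValue(-1)\n",
"resp0.setRef(vol)\n",
"resampled0 = resp0.doit(tr, 200, 200, 200, (1.1, 1.1, 1.5))\n",
"aims.write(resampled0, 'resampled_nn.nii')"
]
},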
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Load the original image and the resampled in Anatomist. \n",
"See how the resampled has been rotated. Now apply the NIFTI/SPM referential info on both images. \n",
"They are now aligned again, and cursor clicks correctly go to the same location on both volume, whatever the display referential for each of them.\n",
"\n",
"\n",
" \n",
"\n",
"PyAIMS / PyAnatomist integration\n",
"--------------------------------\n",
"\n",
"It is possible to use both PyAims and PyAnatomist APIs together in python.\n",
"See [the Pyanatomist / PyAims tutorial](../../pyanatomist/sphinx/pyanatomist_pyaims_tutorial.html)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"At the end, cleanup the temporary working directory\n",
"----------------------------------------------------"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# cleanup data\n",
"import shutil\n",
"os.chdir(older_cwd)\n",
"shutil.rmtree(tuto_dir)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
},
"nbsphinx": {
"timeout": 120
}
},
"nbformat": 4,
"nbformat_minor": 2
}