PyAIMS submodules API

This section documents the soma.aims / soma.aimsalgo submodules written in Python.

SubModule: apctools

.APC (commissure coordinates) IO and other tools

soma.aims.apctools.apcFileTransform(inAPCfilename, outAPCfilename, transform, outimagevoxelsize, imagefile=None)[source]

Transforms the coordinates of .APC file points through a given transformation. It basically reads inAPCfilename, transforms its contents using apcTransform(), then writes the result to outAPCfilename.

soma.aims.apctools.apcRead(filename, imagefile=None)[source]

Read a .APC file

  • filename: string

  • imagefile: string (optional)

    optional filename for the image file from which the AC/PC coordinates are taken. Its header may be used to recover millimeter positions from voxels if they are not specified in the .APC file itself (for older versions of .APC files).

  • returns: dict

the contents of the file as a dictionary, keys being ‘ac’, ‘pc’, ‘ih’ for voxel coordinates, ‘acmm’, ‘pcmm’, ‘ihmm’ for millimeter coordinates, and optionally ‘comment’.

soma.aims.apctools.apcTransform(apcdict, transform, outimagevoxelsize)[source]

Transforms the coordinates of commissure points through a specified transformation.

  • apcdict: dict

Commissure coordinates, as a dictionary with ‘ac’, ‘pc’, ‘ih’ keys for voxel coordinates, and ‘acmm’, ‘pcmm’, ‘ihmm’ for millimeter coordinates

  • transform: AffineTransformation3d object

  • outimagevoxelsize:

    • as string: filename for the image whose voxel size should be used

    • as Volume or any other object with a header() method: voxel_size is taken from its header

• as dict or header object: voxel size is taken from the voxel_size entry of the dictionary

Coordinates are transformed in the apcdict dictionary, which is modified in-place.
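The in-place update described above can be sketched in plain Python. This is a hypothetical simplification, not the actual implementation: `apc_transform_sketch` and the toy transformation are illustration names.

```python
def apc_transform_sketch(apcdict, transform_mm, out_voxel_size):
    # Apply a transformation (here a plain function on mm points) to the
    # commissure coordinates, updating the dict in place as apcTransform() does.
    for key in ('ac', 'pc', 'ih'):
        mm_key = key + 'mm'
        if mm_key not in apcdict:
            continue
        new_mm = transform_mm(apcdict[mm_key])
        apcdict[mm_key] = new_mm
        # voxel coordinates are recomputed from the new mm position
        apcdict[key] = [int(round(c / vs))
                        for c, vs in zip(new_mm, out_voxel_size)]

apc = {'ac': [45, 44, 60], 'acmm': [90.0, 88.0, 120.0]}
shift = lambda p: [c + 10.0 for c in p]   # toy transformation: translation only
apc_transform_sketch(apc, shift, (2.0, 2.0, 2.0))
# apc['acmm'] is now [100.0, 98.0, 130.0], apc['ac'] is [50, 49, 65]
```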

soma.aims.apctools.apcWrite(apcdict, filename)[source]

Writes a .APC file from a dictionary

SubModule: colormaphints

soma.aims.colormaphints.anatomicalColormaps = [('B-W LINEAR', (1.0, 1.0, 1.0)), ('Blue-White', (0.0, 0.0, 1.0)), ('Green-White-linear', (0.0, 1.0, 0.0)), ('Green-White-exponential', (0.0, 1.0, 0.0))]

predefined list of colormaps suitable for anatomical volumes

soma.aims.colormaphints.anatomicalFusionColormaps = [('B-W LINEAR-fusion', (1.0, 1.0, 1.0)), ('Blue-White-fusion', (0.0, 0.0, 1.0)), ('Green-White-linear-fusion', (0.0, 1.0, 0.0))]

predefined list of colormaps suitable for fused anatomical volumes

soma.aims.colormaphints.binaryColormaps = [('BLUE-lfusion', (0.0, 0.0, 1.0)), ('GREEN-lfusion', (0.0, 1.0, 0.0)), ('RED-lfusion', (1.0, 0.0, 0.0)), ('CYAN-lfusion', (0.0, 1.0, 1.0)), ('VIOLET-lfusion', (1.0, 0.0, 1.0)), ('YELLOW-lfusion', (1.0, 1.0, 0.0)), ('WHITE-lfusion', (1.0, 1.0, 1.0))]

predefined list of colormaps suitable for binary volumes

soma.aims.colormaphints.binaryFusionColormaps = [('BLUE-ufusion', (0.0, 0.0, 1.0)), ('GREEN-ufusion', (0.0, 1.0, 0.0)), ('RED-ufusion', (1.0, 0.0, 0.0)), ('CYAN-ufusion', (0.0, 1.0, 1.0)), ('VIOLET-ufusion', (1.0, 0.0, 1.0)), ('YELLOW-ufusion', (1.0, 1.0, 0.0)), ('Black-ufusion', (1.0, 1.0, 1.0))]

predefined list of colormaps suitable for fused binary volumes

soma.aims.colormaphints.checkVolume(vol)[source]

Checks colormap-related clues in a volume, and tries to determine whether it is an anatomical volume, a diffusion volume, a functional volume, or a labels volume. This is determined as “likelihoods” for each class (based on a purely empirical heuristic), mainly from the histogram, voxel type, and voxel sizes.

soma.aims.colormaphints.chooseColormaps(vols)[source]

Automatically chooses distinct colormaps for a list of volumes

  • returns: a list of colormap names. They should be known to Anatomist.
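The idea behind chooseColormaps() can be illustrated with a hedged sketch: assign each volume a colormap from one of the predefined palettes above, avoiding duplicates. `choose_distinct` is a hypothetical helper, not the actual function; the palette names are taken from the anatomicalColormaps list documented above.

```python
# a few colormap names from the anatomicalColormaps list above
anatomical_palette = ['B-W LINEAR', 'Blue-White', 'Green-White-linear']

def choose_distinct(n_volumes, palette=anatomical_palette):
    # pick a not-yet-used colormap for each volume, falling back to the
    # first palette entry once the palette is exhausted
    used, chosen = set(), []
    for _ in range(n_volumes):
        cmap = next((c for c in palette if c not in used), palette[0])
        used.add(cmap)
        chosen.append(cmap)
    return chosen

print(choose_distinct(2))  # ['B-W LINEAR', 'Blue-White']
```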

soma.aims.colormaphints.diffusionColormaps = [('B-W LINEAR', (1.0, 1.0, 1.0)), ('Blue-White', (0.0, 0.0, 1.0)), ('Green-White-linear', (0.0, 1.0, 0.0)), ('Green-White-exponential', (0.0, 1.0, 0.0))]

predefined list of colormaps suitable for diffusion volumes

soma.aims.colormaphints.diffusionFusionColormaps = [('B-W LINEAR-fusion', (1.0, 1.0, 1.0)), ('Blue-White-fusion', (0.0, 0.0, 1.0)), ('Green-White-linear-fusion', (0.0, 1.0, 0.0))]

predefined list of colormaps suitable for fused diffusion volumes

soma.aims.colormaphints.functionalColormaps = [('RED TEMPERATURE', (1.0, 0.5, 0.0)), ('RAINBOW', (1.0, 0.0, 0.0)), ('Blue-Red', (1.0, 0.0, 0.0)), ('actif-ret', (1.0, 1.0, 0.0)), ('Yellow-red', (1.0, 1.0, 0.0))]

predefined list of colormaps suitable for functional volumes

soma.aims.colormaphints.functionalFusionColormaps = [('Rainbow1-fusion', (1.0, 0.0, 0.0)), ('Blue-Red-fusion', (1.0, 0.0, 0.0)), ('Yellow-red-fusion', (1.0, 1.0, 0.0))]

predefined list of colormaps suitable for fused functional volumes

soma.aims.colormaphints.labelsColormaps = [('Blue-Red', (1.0, 0.0, 0.0)), ('Talairach', (0.0, 0.0, 0.0))]

predefined list of colormaps suitable for labels volumes

soma.aims.colormaphints.labelsFusionColormaps = []

predefined list of colormaps suitable for fused labels volumes

soma.aims.colormaphints.twotailColormaps = [('tvalues100-200-100-lfusion', (1.0, 0.0, 0.0)), ('tvalues100-100-100-lfusion', (1.0, 0.0, 0.0))]

predefined list of colormaps suitable for two-tail T-values volumes

soma.aims.colormaphints.twotailFusionColormaps = [('tvalues100-200-100', (1.0, 0.0, 0.0)), ('tvalues100-100-100', (1.0, 0.0, 0.0))]

predefined list of colormaps suitable for fused two-tail T-values volumes

SubModule: filetools

File functions

soma.aims.filetools.cmp(ref_file, test_file, skip_suffixes=None)[source]

Compare files, taking into account their neuroimaging nature. Specific comparison functions will be called for graphs, meshes, images, and CSV files.

soma.aims.filetools.compare_nii_files(file1, file2, thresh=50, out_stream=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>)[source]

Compare nifti files (.nii, .nii.gz)

soma.aims.filetools.compare_text_files(file1, file2, thresh=1e-06)[source]

Compare text files (.txt, .csv, ...) which may contain NaN values.

soma.aims.filetools.filter_header_for_cmp(hdr)[source]

Utility function used by cmp(). Removes the uuid and referential properties from the given header, so that it can be compared with another file which may differ only in those identifiers.
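The filtering idea can be sketched on a plain dict. This is a hypothetical simplification (`filter_header_sketch` is an illustration name, and the exact set of ignored keys is an assumption based on the description above):

```python
def filter_header_sketch(hdr):
    # drop identifier-like keys so two otherwise identical headers compare equal
    ignored = ('uuid', 'referential')
    return {k: v for k, v in hdr.items() if k not in ignored}

hdr = {'uuid': 'abcd-1234', 'voxel_size': [1.0, 1.0, 1.0]}
print(filter_header_sketch(hdr))  # {'voxel_size': [1.0, 1.0, 1.0]}
```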

SubModule: fslTransformation

FSL matrices seem to transform from/to internal referentials, like Aims, but with a different convention:

  • X: right -> left

  • Y: back -> front

  • Z: bottom -> top

which appears to have the Y and Z axes flipped compared to Aims
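The axis-convention difference can be illustrated with a small sketch that flips a millimeter coordinate along Y and Z within the image field of view. This only illustrates the flip described above; the actual fslMatToTrm() conversion involves more (disk referentials, voxel sizes, qform handling), and `flip_yz` is a hypothetical helper name.

```python
def flip_yz(point_mm, dims, voxel_size):
    # flip Y and Z within the image field of view, keeping X unchanged
    x, y, z = point_mm
    return [x,
            (dims[1] - 1) * voxel_size[1] - y,
            (dims[2] - 1) * voxel_size[2] - z]

print(flip_yz([0.0, 0.0, 0.0], (10, 10, 10), (1.0, 1.0, 1.0)))  # [0.0, 9.0, 9.0]
```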

soma.aims.fslTransformation.fslMatToTrm(matfile, srcimage, dstimage)[source]

As far as I have understood:

A FSL transformation goes from the disk referential of the source image to the disk referential of the destination image.

BUT:

if the qform of an image (disk -> “real world”) implies a flip (goes from a direct referential to an indirect one or the contrary), then a flip along X axis is inserted in the matrix, since FSL flirt doesn’t allow flipping.

SubModule: graph_comparison

soma.aims.graph_comparison.rel_flx_max_diff = 0.0001

maximum accepted relative difference between float numbers
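A relative float comparison using this tolerance might look like the following sketch (`floats_close` is a hypothetical helper, not part of the module; the zero-handling is an assumption):

```python
rel_flx_max_diff = 1e-4  # value documented above

def floats_close(a, b, rel=rel_flx_max_diff):
    # relative comparison: difference scaled by the larger magnitude
    scale = max(abs(a), abs(b))
    if scale == 0.0:
        return True   # both exactly zero
    return abs(a - b) / scale <= rel

print(floats_close(1.0, 1.00005))  # True  (relative diff ~5e-5)
print(floats_close(1.0, 1.001))    # False (relative diff ~1e-3)
```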

soma.aims.graph_comparison.same_graphs(ref_graph, test_graph, verbose=False)[source]

Compare two graphs and return whether they are identical. This function is useful for testing and validation purposes. Graph structures are compared, as well as vertex / edge attributes. AIMS objects inside attributes (meshes, buckets, volumes in sulci graphs for instance) are not compared.

Parameters
  • ref_graph (string or Graph object) – reference graph to be compared. A filename may be passed here: in this case the graph is read using aims.read() function.

  • test_graph (string or Graph object) – test graph to be compared. A filename may be passed here: in this case the graph is read using aims.read() function.

  • verbose (bool (optional, default: False)) – if True, messages are printed on the standard output during comparison.

Returns

True if ref and test graphs are identical, or False otherwise.

Return type

bool

SubModule: io_ext

soma.aims.io_ext.aims = <module 'soma.aims' from '/casa/host/build/python/soma/aims/__init__.py'>

IO format readers / writers written in Python for aims.

Currently:

  • Numpy format for matrices

  • YAML format for Object

SubModule: lazy_read_data

This module provides wrappers for Aims readable data types which lazily load when they are used, and can release memory after they are used: LazyReadData

Specialized iterators can help parallelize reading operations, and perform them earlier (before the data is actually used) in an iteration: PreloadIterator, PreloadList.

A specialized version in aimsalgo handles resampling while loading: LazyResampleVolume.

class soma.aims.lazy_read_data.LazyReadData(data_or_filename, allocator_context=None, read_options=None, nops=0, reader=None, **kwargs)[source]

Bases: object

LazyReadData is a data class proxy, which loads the underlying data when used, and is also able to unload it after a given number of operations to release memory.

If the data is used again after release, then it is loaded again.

The aim of this proxy is to carry data references in complex expressions or formulas, while lowering the amount of memory needed to process the expression.

Ex: if we need to add 100 Volumes, the easy way to write it is:

volumes = [aims.read(f) for f in filenames]
res = sum(volumes)

This expression, sum(volumes), uses a complete list of volumes, and thus needs the 100 volumes to be physically in memory before the sum operation actually begins. However, as the sum is performed sequentially, it should be possible to perform the same operation using only memory for 2 volumes.

One solution would use iterators and yield to read data during the for loop, but it would not work in a more “hand-made” expression like this one:

res = vol1 + vol2 + vol3 - vol4 * vol5 + vol6  # etc.

LazyReadData offers a solution to process these expressions:

volumes = [LazyReadData(f, nops=1) for f in filenames]
res = sum(volumes).data

vol1 = LazyReadData(filenames[0], nops=1)
# ...
vol6 = LazyReadData(filenames[5], nops=1)
# etc.
res = vol1 + vol2 + vol3 - vol4 * vol5 + vol6  # etc.
res = res.data  # get actual Volume object

LazyReadData loads the underlying data from its filename whenever any attribute or method of the underlying data is queried on the proxy. Reading is done using aims.read(), thus only AIMS objects are supported; on the other hand, all kinds of AIMS objects can work this way: volumes, meshes, textures, graphs, transformations, etc.

Without specifying the nops parameter, LazyReadData does not save much memory: it just loads data whenever needed, but from then on keeps it in memory until the proxy is actually deleted. nops tells the proxy that, after this number of operations, the data will be released.

Operations in this context are arithmetic operators (+, -, *, /, pow). Other method calls are not counted.

Thus in order to optimize things, nops should be set to the number of times the object will be used in an expression. A kind of pre-parsing of the expression may be needed in order to automate this.

Loading is done in a thread-safe manner (using a lock) so that two (or more) threads accessing data will not trigger several loads.
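The lazy-load / release-after-nops mechanism described above can be sketched with a toy class. This is a hypothetical simplification, not the real LazyReadData: it only supports `+`, uses a plain callable as the loader, and omits most of the real proxying logic.

```python
import threading

class LazySketch:
    """Toy illustration of lazy loading with release after nops operations."""
    def __init__(self, loader, nops=0, data=None):
        self.loader, self.nops, self.data = loader, nops, data
        self._lock = threading.Lock()
        self._ops = 0

    def _load(self):
        with self._lock:        # thread-safe: only one load is triggered
            if self.data is None:
                self.data = self.loader()
            return self.data

    def _op_done(self):
        self._ops += 1
        if self.nops and self._ops >= self.nops:
            self.data = None    # release memory after nops operations

    def __add__(self, other):
        a = self._load()
        b = other._load() if isinstance(other, LazySketch) else other
        result = LazySketch(self.loader, data=a + b)
        self._op_done()
        if isinstance(other, LazySketch):
            other._op_done()
        return result

vols = [LazySketch(lambda i=i: i + 1, nops=1) for i in range(3)]
total = vols[0] + vols[1] + vols[2]
print(total.data)                          # 6
print(all(v.data is None for v in vols))   # True: all inputs were released
```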

Specializing

Subclasses may override the _lazy_read() method to implement a different behavior or load additional data. This method should set self.data with the loaded data and return it.

Another way of specializing the load behavior is to provide a Reader object which could also be a specialized version of soma.aims.Reader.

Parameters
  • data_or_filename (str, Aims object, or LazyReadData) – a LazyReadData can be built from another one (copying its data, filename and other internals), or from a filename, or from an existing AIMS object.

  • allocator_context (AllocatorContext) – passed to aims.read() when data is read.

  • read_options (dict) – passed to aims.read() when data is read

  • nops (int) – number of operations before data is unloaded. 0 means never released.

  • reader (aims.Reader) – pre-built Reader instance, used when more specific reader options are needed. Otherwise a standard reader will be used.

  • kwargs (dict) – if data is an AIMS object, kwargs may include an additional ‘filename’ argument. The rest is passed to aims.read() when data is read.

preloading()[source]

If a threaded load operation has been started (“preloading”), then this method returns True as soon as the operation has started. It still returns True as long as the data is in memory. Its goal is to tell that another load operation is not needed.

class soma.aims.lazy_read_data.PreloadIterator(iterable, npreload=4)[source]

Bases: object

An iterator intended to be used to iterate over sequences of LazyReadData, which performs pre-iterations and pre-loads data before they get used in an actual iteration.

Idea:

When iterating over a list of LazyReadData, data is loaded when accessed, thus at the last moment, sequentially. As data loading can be efficiently threaded, the idea is to use threads to start preloading a number of data items which will be used later in the loop. This parallel loading is somewhat at odds with the lazy loading principle, so the PreloadIterator mixes both approaches. The number of preloaded data items can be specified; the default is the number of processors in the machine. Each preload operation runs in a separate thread.

volumes = [LazyReadData(f, nops=1) for f in filenames]
res = sum(PreloadIterator(volumes, npreload=8))

In the above example, 8 threads will be used to preload the next 8 items in the list from the current iterator position. As the iterator advances, more data preloads will be triggered.
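The preloading idea can be sketched with a small generator that keeps a window of background loads running ahead of the consumer. This is a hedged illustration only: `preload_iter` is a hypothetical name, and the real PreloadIterator works on LazyReadData objects and their locks rather than a plain load callable.

```python
import threading
from collections import deque

def preload_iter(items, load, npreload=4):
    # start loading the next `npreload` items in background threads while
    # the current one is consumed
    results = [None] * len(items)
    pending = deque()

    def start(i):
        t = threading.Thread(target=lambda: results.__setitem__(i, load(items[i])))
        t.start()
        pending.append((i, t))

    nxt = min(npreload, len(items))
    for i in range(nxt):
        start(i)
    while pending:
        i, t = pending.popleft()
        t.join()                 # wait for this item's preload to finish
        if nxt < len(items):     # keep the preload window full
            start(nxt)
            nxt += 1
        yield results[i]

print(list(preload_iter([1, 2, 3], lambda x: x * 10, npreload=2)))  # [10, 20, 30]
```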

Parameters
  • iterable (iterable) – the iterable can be a list, a generator, or an iterator. It should iterate over items which are LazyReadData instances, because it will use their lazy loading mechanism and their threading locks.

  • npreload (int) – number of preloaded data items / number of threads used to preload

class soma.aims.lazy_read_data.PreloadList(iterable=None, npreload=4)[source]

Bases: list

A list which provides a PreloadIterator to iterate over it.

volumes = PreloadList((LazyReadData(f, nops=1) for f in filenames), npreload=8)
res = sum(volumes)

equivalent to:

volumes = [LazyReadData(f, nops=1) for f in filenames]
res = sum(PreloadIterator(volumes, npreload=8))

SubModule: meshSplit

soma.aims.meshSplit.meshSplit(mesh, tex, graph, tex_time_step=0)[source]

Splits a mesh into patches corresponding to a label texture. Patches are organized into a graph.

The graph must preexist, and nodes will be inserted into it.

soma.aims.meshSplit.meshSplit2(mesh, tex, graph, voxel_size=None, tex_time_step=None)[source]

Split mesh according to texture patches

Compared to meshSplit, this version also adds buckets (voxels lists) in each graph node.

Parameters
  • mesh – the mesh to be split (a cortex mesh, for example)

  • tex (aims.TimeTexture_S16) – texture of labels (parcellation of the mesh, labels between 1 and nb_labels, background = 0)

  • graph (Graph) – the graph __syntax__ attribute should be: ‘roi’

  • voxel_size (optional) – if a voxel size is given, a bucket will be built with the specified voxel size to follow the mesh. Otherwise there will be no bucket.

  • tex_time_step (int (optional)) – time step to be used in the texture for regions split. default: 0

Outputs

None – modifies the input graph in place: adds one vertex per texture label (with submeshes and associated buckets), plus an “others” vertex (void).
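The per-label splitting at the heart of meshSplit can be sketched by grouping vertex indices per texture label. This is a hypothetical simplification (`split_by_label` is an illustration name; the real function builds submeshes and graph nodes, not just index lists):

```python
from collections import defaultdict

def split_by_label(labels):
    # group vertex indices per texture label; background (0) is skipped,
    # mirroring the one-patch-per-label organization described above
    patches = defaultdict(list)
    for vertex, label in enumerate(labels):
        if label != 0:
            patches[label].append(vertex)
    return dict(patches)

print(split_by_label([0, 1, 1, 2]))  # {1: [1, 2], 2: [3]}
```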

SubModule: spmnormalizationreader

soma.aims.spmnormalizationreader.readSpmNormalization(matfilename, source=None, destref=None, srcref=None)[source]

Reads a SPM *_sn.mat normalization file and converts it to an Aims AffineTransformation3d. The converted transformation has as source the AIMS referential of the source image, and as destination the template referential of the SPM .mat file. All coordinates are in millimeters.

The source image information may be provided either as its filename, its header object, or the image itself. It should carry the needed information: source volume storage_to_memory transformation matrix, voxel_size, etc. If None is passed as source (the default), then the source image name will be built from the .mat filename and will be read if found.

  • matfilename: string

file name of the *_sn.mat normalization file to be read

  • source: filename (string), or Volume, or volume header (MappingType)

    source image (or its filename or header) used to get the storage_to_memory transformation, voxel size, etc.

  • destref: string or UUID (Uuid)

destination referential for the transformation. If not specified, none will be set. If provided as a symbolic name (‘Talairach-MNI template-SPM’), it will be converted to a UUID string.

  • srcref: string or UUID

source referential for the transformation. If not specified, an attempt will be made to take it from the source image, otherwise it will not be set. If provided as a symbolic name (‘Talairach-MNI template-SPM’), it will be converted to a UUID string.

  • returns: AffineTransformation3d object

    the converted transformation

SubModule: texturetools

soma.aims.texturetools.average_texture(output, inputs)[source]

Create an average gyri texture from a group of subjects.

soma.aims.texturetools.change_wrong_labels(cc_label, label, gyri_tex, mesh_neighbors_vector, cc_tex_label)[source]

After a study of its neighbors, the wrong label is replaced by the correct one.

Parameters
  • cc_label – label of the connected component in cc_tex_label

  • label – label of the associated vertices in the gyri texture

  • gyri_tex (aims time texture S16) –

  • mesh_neighbors_vector (aims.SurfaceManip.surfaceNeighbours(mesh)) –

  • cc_tex_label – texture representing the connected components of label

Returns

  • gyri_tex (aims time texture S16) – new gyri_tex texture, without isolated vertices.

  • winner_label – the correct label.

soma.aims.texturetools.clean_gyri_texture(mesh, gyri_tex)[source]

Clean a gyri texture using connected components.

Parameters
  • mesh (aims time surface) – white mesh associated to gyri_tex

  • gyri_tex (aims time texture S16) – gyri texture as full FreeSurfer parcellation.

Returns

new gyri texture, without isolated vertices.

Return type

gyri_tex (aims time texture S16)

soma.aims.texturetools.connectedComponents(mesh, tex, areas_mode=0)[source]
Parameters
  • mesh

  • tex (aimsTimeTexture_S16) – (one time step) labeled between 1 and LabelsNb, background = 0, ignored_vertex = -1.

  • areas_mode – if 1: compute area measures of the connected components; if 0 (default): no measure.

Returns

  • step_cc (connectedComponentTex: aimsTimeTexture_S16) – time step = LabelsNb, for each time step (label in the tex), texture of the connected components corresponding to this label (background = -1, and connected components = values between 1 and nb_cc).

  • areas_measures (python dictionary) – areas_measures[label] = [16.5, 6.0] (numpy array) if label (in tex) has two connected components 1 and 2 with areas 16.5 and 6.0 respectively; areas are in square mm.

soma.aims.texturetools.extractLabelsFromTexture(tex, labels_list, new_label)[source]
inputs:

tex: labeled texture (from FreeSurfer or another source); labels_list, new_label: you can overwrite numbers (labels_list) with your own number (new_label)

output:

otex: labeled texture with merged regions only

soma.aims.texturetools.find_wrong_labels(mesh, gyriTex)[source]
Parameters
  • mesh

  • gyriTex (gyri texture) –

Returns

wrong_labels – [cctex: connectedComponentTex (aimsTimeTexture_S16), time step = LabelsNb; for each time step (label in the tex), texture of the connected components corresponding to this label (background = -1, connected components = values between 1 and ccNb); areas_measures = python dictionary, areas_measures[label] = [16.5, 6.0] (numpy array) if label (in tex) has two connected components 1 and 2 with areas 16.5 and 6.0 respectively, in square mm]

Return type

list of wrong labels

soma.aims.texturetools.mergeLabelsFromTexture(tex, labels_list, new_label)[source]
inputs:

tex: labeled texture (from FreeSurfer or another source); labels_list, new_label: you can overwrite numbers (labels_list) with your own number (new_label)

output:

otex: labeled texture with merged regions

soma.aims.texturetools.meshDiceIndex(mesh, texture1, texture2, timestep1=0, timestep2=0, labels_table1=None, labels_table2=None)[source]

Dice index calculation between two sets of regions defined by label textures on a common mesh.

  • texture1, texture2: aims.TimeTexture instances; should be int (labels).

  • timestep1, timestep2: timesteps to use in texture1 and texture2.

  • labels_table1, labels_table2: optional label translation tables (dicts or arrays) to translate values of texture1 and/or texture2.

soma.aims.texturetools.mesh_to_polygon_textured_mesh(mesh, poly_tex)[source]
soma.aims.texturetools.nomenclature_to_colormap(hierarchy, labels_list, as_float=True, default_color=[0.3, 0.6, 1.0, 1.0])[source]

Make a colormap from labels and colors of a nomenclature (hierarchy), following a labels_list order.

Parameters
  • hierarchy (Hierarchy object) – nomenclature

  • labels_list (list of strings) – labels with order. The returned colormap will follow this ordering.

  • as_float (bool (optional, default: True)) – if True, colors will be float values in the [0-1] range. If False, they will be int values in the [0-255] range.

  • default_color (list (4 floats) (optional)) – Color used for labels not found in the nomenclature. It is given as floats ([0-1] range).

Returns

colormap – array of colors (4 float values in [0-1] range)

Return type

numpy array
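The nomenclature-to-colormap ordering can be sketched with plain dicts and tuples. This is a hypothetical simplification (`build_colormap` and `colors_by_label` are illustration names; the real function walks a Hierarchy object and returns a numpy array):

```python
def build_colormap(colors_by_label, labels_list,
                   default_color=(0.3, 0.6, 1.0, 1.0)):
    # order nomenclature colors along labels_list, with a default RGBA
    # color for labels missing from the nomenclature
    return [tuple(colors_by_label.get(label, default_color))
            for label in labels_list]

cmap = build_colormap({'precentral': (1.0, 0.0, 0.0, 1.0)},
                      ['precentral', 'unknown'])
print(cmap)  # [(1.0, 0.0, 0.0, 1.0), (0.3, 0.6, 1.0, 1.0)]
```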

soma.aims.texturetools.parcels_surface_features(mesh, texture, tex_index=-1, as_csv_table=False)[source]

Record area and boundary length features on a set of parcels (in a texture).

The mesh should be a single one (single timestep); the texture may have several timesteps. A timestep index can be specified; otherwise all timesteps are recorded and the result is a dict.

The result is a dict, unless as_csv_table is set. In that case it will be a CSV-shaped array.

soma.aims.texturetools.remove_non_principal_connected_components(mesh, tex, trash_label)[source]

Keep only the largest connected component in each label, for a label texture.

Parameters
  • mesh

  • tex (label texture (S16, int)) –

  • trash_label (value to replace non-principal components) –

Returns

out_tex

Return type

label texture

soma.aims.texturetools.set_texture_colormap(texture, colormap, cmap_name='custom', tex_max=None, tex_min=None, tex_index=0, col_mapping='all')[source]

Set a colormap in a texture object header.

The texture object may be any kind of textured object: a TimeTexture instance, or a Volume.

Parameters
  • texture (TimeTexture, Volume...) – The texture object should have a header() method.

  • colormap (array, Volume, or filename) – The colormap may be provided as RGB or RGBA, and as an aims Volume object, or a numpy array, or as an image filename. It should be a 1D colormap (for now at least).

  • cmap_name (str (optional)) – name of the colormap to be used in Anatomist.

  • tex_max (float (optional)) – Max texture value to be mapped to the colormap bounds. It is used to scale the max value of the colormap in Anatomist. If not specified, the texture or volume max will be looked for in the texture object. Used only if col_mapping is “one”.

  • tex_min (float (optional)) – Min texture value to be mapped to the colormap bounds. It is used to scale the min value of the colormap in Anatomist. If not specified, the texture or volume min will be looked for in the texture object. Used only if col_mapping is “one”.

  • tex_index (int (optional)) – Texture index in the textured object

  • col_mapping (str or None (optional)) – “all”: map the full texture range to the colormap bounds (default); “one”: one-to-one mapping between colors and values (int values); “none” or None: don’t force any mapping - anatomist will choose to use a histogram if needed.

soma.aims.texturetools.set_texture_labels(texture, labels, tex_index=0)[source]

Set a labels list or dict in a texture object header.

The texture object may be any kind of textured object: a TimeTexture instance, or a Volume.

Parameters
  • texture (TimeTexture, Volume...) – The texture object should have a header() method.

  • labels (list or dict) – Values are label strings. Keys are ints. It may be either a list (keys are list indices) or a dict.

  • tex_index (int (optional)) – Texture index in the textured object

soma.aims.texturetools.vertex_texture_to_polygon_texture(mesh, tex, allow_cut=False)[source]

Make a “polygon texture” from a vertex-based label texture. A polygon texture has a value for each polygon.

For a given polygon the value is taken as the majority of values on its vertices. If an absolute majority cannot be obtained, the mesh polygons may be cut to avoid losing precision. This is done if allow_cut is True.

When allow_cut is False, the returned value is the polygon texture. It may work on meshes of any polygon size (triangles, quads, segments…)

When allow_cut is True, the returned value is a tuple:
  • polygon texture

  • new mesh with possibly split triangles

It only works for meshes of triangles.
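The allow_cut=False behavior described above (majority vote of vertex labels per polygon) can be sketched as follows. `polygon_texture` is a hypothetical helper name; tie-breaking in the real function may differ.

```python
from collections import Counter

def polygon_texture(polygons, vertex_labels):
    # for each polygon, take the most frequent label among its vertices
    return [Counter(vertex_labels[v] for v in poly).most_common(1)[0][0]
            for poly in polygons]

labels = [5, 5, 7, 7]
print(polygon_texture([(0, 1, 2), (1, 2, 3)], labels))  # [5, 7]
```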

SubModule: volumetools

Volume functions

soma.aims.volumetools.crop_volume(vol, threshold=0, border=0)[source]

Crop the input volume, removing slices filled with values under a given threshold, and keeping a given border.

If no crop actually takes place, the input volume is returned without duplication. If crop is actually performed, then a view into the original volume is returned, sharing the same data block which is not copied.

Transformations in the header are adapted accordingly.

Parameters
  • vol (aims Volume) – volume to be cropped

  • threshold (volume value, optional) – Minimum value over which a slice cannot be cropped (it is supposed to contain real data). The default is 0: only values <= 0 are croppable

  • border (int, optional) – border around the cropped volume: the cropped volume is enlarged by twice this value in each direction, within the limits of the original volume (the bounding box always fits in the original volume). Values in the border are taken from the original volume, the border is not artificially filled with a constant value. The default is 0: no border
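The crop logic along a single axis can be sketched in plain Python: find the first and last positions above the threshold, then enlarge by the border within the original limits. `crop_bounds` is a hypothetical helper, not the actual implementation (which works on 3D volumes and adapts header transformations).

```python
def crop_bounds(values, threshold=0, border=0):
    # positions considered to contain real data (strictly above threshold)
    kept = [i for i, v in enumerate(values) if v > threshold]
    if not kept:
        return None     # everything croppable along this axis
    lo = max(kept[0] - border, 0)              # enlarge by the border...
    hi = min(kept[-1] + border, len(values) - 1)  # ...within original limits
    return lo, hi

print(crop_bounds([0, 0, 3, 5, 0, 0], border=1))  # (1, 4)
```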

soma.aims.volumetools.fill_border_constant(data, value=0, whole=False)[source]

Fill the border of data using a constant value. In aims, a Volume with border is managed as an unallocated view (the visible data) in a larger allocated Volume (the Volume that contains the borders). In order to be filled, the borders must exist; otherwise the function has no effect on the Volume.

Parameters
  • data (Volume_* or rc_ptr_Volume_*) –

  • value (value to fill border with (optional)) – Default is 0.

  • whole (bool (optional)) – For a partially read Volume, forces filling the borders even when they have already been filled with data from the parent full unallocated Volume. Default is False

soma.aims.volumetools.fill_border_median(data, size=(-1, -1, -1, -1), whole=False)[source]

Fill the border of data using the median value computed in the inside border. In aims, a Volume with border is managed as an unallocated view (the visible data) in a larger allocated Volume (the Volume that contains the borders). In order to be filled, the borders must exist; otherwise the function has no effect on the Volume.

Parameters
  • data (Volume_* or rc_ptr_Volume_*) –

  • size (list or Point4dl (optional)) – size of the inside border used to compute the median value. (-1, -1, -1, -1) means that the median value is computed in an inside border whose size equals the outside border size: if the outside border has size (2, 2, 0) in dimensions x, y, z, the inside border also has size (2, 2, 0). Default is (-1, -1, -1, -1).

  • whole (bool (optional)) – For a partially read Volume, forces filling the borders even when they have already been filled with data from the parent full unallocated Volume. Default is False

soma.aims.volumetools.fill_border_mirror(data, whole=False)[source]

Fill the border by mirroring the inside border. In aims, a Volume with border is managed as an unallocated view (the visible data) in a larger allocated Volume (the Volume that contains the borders). In order to be filled, the borders must exist; otherwise the function has no effect on the Volume.

Parameters
  • data (Volume_* or rc_ptr_Volume_*) –

  • whole (bool (optional)) – For a partially read Volume, forces filling the borders even when they have already been filled with data from the parent full unallocated Volume. Default is False
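A 1D analogue of mirror border filling can be sketched as follows (`mirror_fill` is a hypothetical illustration, operating on a plain list rather than an aims Volume):

```python
def mirror_fill(core, border):
    # reflect the edge values of `core` into a border of `border`
    # elements on each side
    if border <= 0:
        return list(core)
    left = core[:border][::-1]
    right = core[-border:][::-1]
    return left + core + right

print(mirror_fill([1, 2, 3], 1))  # [1, 1, 2, 3, 3]
```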

soma.aims.volumetools.fill_border_nearest(data, whole=False)[source]

Fill the border of data using the nearest inside-border voxel value. In aims, a Volume with border is managed as an unallocated view (the visible data) in a larger allocated Volume (the Volume that contains the borders). In order to be filled, the borders must exist; otherwise the function has no effect on the Volume.

Parameters
  • data (Volume_* or rc_ptr_Volume_*) –

  • whole (bool (optional)) – For a partially read Volume, forces filling the borders even when they have already been filled with data from the parent full unallocated Volume. Default is False

SubModule: aimsalgo.lazy_resample_volume

class soma.aimsalgo.lazy_resample_volume.LazyResampleVolume(data_or_filename, allocator_context=None, read_options=None, nops=0, reader=None, dtype=None, transform=None, dims=None, vox_size=(1.0, 1.0, 1.0, 1.0), resampling_order=1, default_value=0, **kwargs)[source]

Bases: soma.aims.lazy_read_data.LazyReadData

A specialized version of aims.LazyReadData dedicated to Volumes, which can perform voxel type conversion and resampling to another space when reading data.

LazyResampleVolume is useful when operations have to be performed on several volumes which are not initially in the same space.

image_names = ['image%02d.nii' % i for i in range(10)]
transf_names = ['transform%02d' % i for i in range(10)]
rvols = [lazy_resample_volume.LazyResampleVolume(
            f, transform=t, nops=1, dims=(256, 256, 200, 1),
            vox_size=(1, 1, 1, 1), dtype='FLOAT')
          for f, t in zip(image_names, transf_names)]
res = sum(rvols) / len(rvols)
Parameters
  • data_or_filename (see LazyReadData) –

  • allocator_context (see LazyReadData) –

  • read_options (see LazyReadData) –

  • nops (see LazyReadData) –

  • reader (see LazyReadData) –

  • dtype (str or type) – may specify a conversion to a specific voxel type

  • transform (str or aims.AffineTransformation3d or list) – Transformations to be applied to the volume when it is read. May be an AffineTransformation3d instance, or a filename (.trm file), or a list of transformations / filenames to be combined (applied right to left, thus matrices are multiplied in left-to-right order).

  • dims (list or tuple of int) – resampled volume dimensions in voxels

  • vox_size (list or tuple of float) – resampled volume voxel sizes

  • resampling_order (int) – interpolation order for the resampling

  • default_value (data type) – default background value for the resampled volume

  • kwargs (see LazyReadData) –

SubModule: aimsalgo.mesh_coordinates_sphere_resampling

soma.aimsalgo.mesh_coordinates_sphere_resampling.draw_sphere(mesh, longitude, latitude)[source]

Draw a sphere

Parameters
  • mesh ((AimsTimeSurface_3_VOID)) – a spherical triangulation of the cortical hemisphere of the subject

  • longitude ((TimeTexture_FLOAT)) – a longitude texture from HipHop mapping that goes with the white_mesh of the subject. This texture indicates the spherical coordinates at each point.

  • latitude ((TimeTexture_FLOAT)) – a latitude texture from HipHop mapping that goes with the white_mesh of the subject. This texture indicates the spherical coordinates at each point.

Returns

sphere_mesh – a spherical triangulation of the subject’s cortical hemisphere, projected onto a sphere

Return type

(AimsTimeSurface_3_VOID)
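The projection draw_sphere() performs can be sketched with plain NumPy (a conceptual illustration, not the aims implementation; the colatitude convention and degree units below are assumptions):

```python
import numpy as np

# Map per-vertex spherical coordinates (longitude, latitude) onto points
# of a sphere centered at 0, as draw_sphere() does from the HipHop
# longitude/latitude textures.
def to_sphere(longitude_deg, latitude_deg, radius=1.0):
    lon = np.radians(longitude_deg)
    lat = np.radians(latitude_deg)
    # latitude measured from the pole (colatitude convention, assumed)
    x = radius * np.sin(lat) * np.cos(lon)
    y = radius * np.sin(lat) * np.sin(lon)
    z = radius * np.cos(lat)
    return np.stack([x, y, z], axis=-1)

# two equatorial points, at longitudes 0 and 90 degrees
pts = to_sphere(np.array([0., 90.]), np.array([90., 90.]))
print(np.round(pts, 6))
```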

soma.aimsalgo.mesh_coordinates_sphere_resampling.polygon_average_sizes(mesh)[source]

Return the average edge length for each triangle of a mesh

Used by refine_sphere_mesh() and sphere_mesh_from_distance_map()

Parameters
  • mesh ((AimsTimeSurface_3_VOID)) – a mesh providing triangular structure

Returns

lengths – average size for each polygon

Return type

(numpy array)

soma.aimsalgo.mesh_coordinates_sphere_resampling.polygon_max_sizes(mesh)[source]

Return the max edge length for each triangle of a mesh

Used by refine_sphere_mesh() and sphere_mesh_from_distance_map()

Parameters
  • mesh ((AimsTimeSurface_3_VOID)) – a mesh providing triangular structure

Returns

lengths – max size for each polygon

Return type

(numpy array)
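The per-triangle edge statistics these two helpers compute can be sketched with plain NumPy (illustrative only; the aims versions operate on AimsTimeSurface objects, not raw arrays):

```python
import numpy as np

# Per-triangle mean and max edge lengths, mirroring the assumed behavior
# of polygon_average_sizes() and polygon_max_sizes().
def triangle_edge_stats(vertices, triangles):
    v = vertices[triangles]                        # (ntri, 3, 3)
    e0 = np.linalg.norm(v[:, 0] - v[:, 1], axis=1)
    e1 = np.linalg.norm(v[:, 1] - v[:, 2], axis=1)
    e2 = np.linalg.norm(v[:, 2] - v[:, 0], axis=1)
    edges = np.stack([e0, e1, e2], axis=1)         # (ntri, 3)
    return edges.mean(axis=1), edges.max(axis=1)

# a single 3-4-5 right triangle
verts = np.array([[0., 0., 0.], [3., 0., 0.], [0., 4., 0.]])
tris = np.array([[0, 1, 2]])
avg, mx = triangle_edge_stats(verts, tris)
print(avg, mx)  # edges are 3, 5, 4 -> mean 4.0, max 5.0
```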

soma.aimsalgo.mesh_coordinates_sphere_resampling.refine_sphere_mesh(init_sphere, avg_dist_texture, current_sphere, target_avg_dist, inversion=False, init_sphere_coords=None, current_sphere_coords=None, dist_texture_is_scaled=True)[source]

Adaptively refine polygons of a sphere mesh according to an average distance map (generally calculated in a different space) and a target length.

This is one single step of the iterative sphere_mesh_from_distance_map().

Polygons where the average distance map value is “too high” are oversampled (divided in 4).

Parameters
Returns

refined_sphere

Return type

(AimsTimeSurface_3_VOID)

soma.aimsalgo.mesh_coordinates_sphere_resampling.resample_mesh_to_sphere(mesh, sphere, longitude, latitude, inversion=False)[source]

Resample a mesh to the sphere.

Parameters
  • mesh ((AimsTimeSurface_3_VOID)) – a spherical triangulation of the subject’s cortical hemisphere

  • sphere ((AimsTimeSurface_3_VOID)) – a sphere mesh with center 0. For example, a spherical mesh of size 100 located in the standard BrainVISA directory can be used.

  • longitude ((TimeTexture_FLOAT)) – a longitude texture from HipHop mapping that goes with the white_mesh of the subject. This texture indicates the spherical coordinates at each point.

  • latitude ((TimeTexture_FLOAT)) – a latitude texture from HipHop mapping that goes with the white_mesh of the subject. This texture indicates the spherical coordinates at each point.

  • inversion (bool) – if True, the longitude coord is inverted (useful for right hemisphere)

Returns

resampled

Return type

(AimsTimeSurface_3_VOID)

soma.aimsalgo.mesh_coordinates_sphere_resampling.resample_texture_to_sphere(mesh, sphere, longitude, latitude, texture, interpolation='linear', inversion=False)[source]

Resample a texture, through its mesh, to the sphere.

Parameters
  • mesh ((AimsTimeSurface_3_VOID)) – a spherical triangulation of the subject’s cortical hemisphere

  • sphere ((AimsTimeSurface_3_VOID)) – a sphere mesh with center 0. For example, a spherical mesh of size 100 located in the standard BrainVISA directory can be used.

  • longitude ((TimeTexture_FLOAT)) – a longitude texture from HipHop mapping that goes with the white_mesh of the subject. This texture indicates the spherical coordinates at each point.

  • latitude ((TimeTexture_FLOAT)) – a latitude texture from HipHop mapping that goes with the white_mesh of the subject. This texture indicates the spherical coordinates at each point.

  • interpolation (string or MeshInterpoler.InterpolationType enum) – resampling interpolation type: “linear” or “nearest_neighbour”

  • inversion (bool) – if True, the longitude coord is inverted (useful for right hemisphere)

Returns

resampled

Return type

(same type as input texture)

soma.aimsalgo.mesh_coordinates_sphere_resampling.sphere(v, u)[source]

Generate a point on a sphere from spherical coordinates.

Parameters
  • radius ((float)) – typically around 100

  • u ((float)) – angle phi (latitude), in radians

  • v ((float)) – angle theta (longitude), in radians

soma.aimsalgo.mesh_coordinates_sphere_resampling.sphere_coordinates(sphere, inversion=False)[source]

Compute spherical coordinates (longitude, latitude) on a sphere.

Parameters
  • sphere ((AimsTimeSurface_3_VOID)) – a sphere mesh: vertices must be on a sphere with center 0.

  • inversion (bool) – if True, the longitude coord is inverted (useful for right hemisphere)

Returns

(longitude, latitude)

Return type

tuple, each element being a TimeTexture_FLOAT
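The inverse mapping, from sphere vertices back to angular coordinates, can be sketched in plain NumPy (a conceptual illustration of what sphere_coordinates() computes per vertex, not the aims implementation; the colatitude convention is an assumption):

```python
import numpy as np

# Recover (longitude, latitude) in degrees from points lying on a sphere
# centered at 0.
def spherical_coords(points):
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    longitude = np.degrees(np.arctan2(y, x)) % 360.0
    # latitude measured from the pole (colatitude convention, assumed)
    latitude = np.degrees(np.arccos(np.clip(z / r, -1.0, 1.0)))
    return longitude, latitude

pts = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
lon, lat = spherical_coords(pts)
print(lon, lat)  # [ 0. 90.  0.] and [90. 90.  0.]
```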

soma.aimsalgo.mesh_coordinates_sphere_resampling.sphere_mesh_from_distance_map(init_sphere, avg_dist_texture, target_avg_dist, inversion=False, dist_texture_is_scaled=True)[source]

Builds a sphere mesh whose vertex density is driven by an average distance map associated with another, initial sphere mesh (generally calculated in a different space), and a target length.

Starting from an icosahedron, this procedure iterates calls to refine_sphere_mesh() until the target_avg_dist criterion is reached everywhere on the mesh.

The initial avg_dist_texture can be (and preferably should be) scaled according to the edge lengths of the init_sphere mesh polygons. In this case it is the ratio of post- to pre-deformation edge lengths.

Use case:

  • get an initial sphere mesh (typically an icosphere)

  • get subject meshes (typically grey/white brain interfaces), which also have coordinate maps (output of the HipHop toolbox for BrainVISA)

  • resample the subject meshes to the initial sphere. The obtained meshes will be very inhomogeneous

  • build an edge length map from these resampled subject meshes

  • use sphere_mesh_from_distance_map() to build an adapted template sphere

Parameters
  • init_sphere ((AimsTimeSurface_3_VOID)) –

  • avg_dist_texture ((TimeTexture_FLOAT)) –

  • target_avg_dist ((float)) –

  • dist_texture_is_scaled ((bool) (optional)) – If True, the avg_dist_texture is considered to be scaled according to the initial sphere triangle edges (see aims.SurfaceManip.meshEdgeLengthRatioTexture). Default: True

Returns

refined_sphere

Return type

(AimsTimeSurface_3_VOID)
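The refinement loop can be illustrated with a one-dimensional analogue (purely conceptual, not the aims API): intervals whose measured length exceeds the target are repeatedly subdivided, just as refine_sphere_mesh() splits triangles into 4 until the target_avg_dist criterion holds everywhere.

```python
# Iteratively halve segments longer than the target, mirroring the
# subdivide-until-criterion-met structure of the sphere refinement.
def refine(lengths, target):
    out = list(lengths)
    while max(out) > target:
        out = [l for seg in out
               for l in ((seg / 2, seg / 2) if seg > target else (seg,))]
    return out

print(refine([8.0, 3.0], 2.0))  # [2.0, 2.0, 2.0, 2.0, 1.5, 1.5]
```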

soma.aimsalgo.mesh_coordinates_sphere_resampling.texture_by_polygon(mesh, texture)[source]

Averages a texture (classically, by vertex) on polygons.

Used by refine_sphere_mesh() and sphere_mesh_from_distance_map()

Parameters
Returns

poly_tex – texture averaged on polygons

Return type

(numpy array)

SubModule: aimsalgo.mesh_skeleton

soma.aimsalgo.mesh_skeleton.mesh_skeleton(mesh, texture, curv_func=None, dist_tex=None, do_timesteps=False, min_cc_size=20, min_branch_size=20, debug_inspect=())[source]

Process a skeleton of an object given as a binary texture.

The current algorithm is rather simple: it erodes vertices iteratively, in a given order, until a vertex is “blocked” based on a curvature-like criterion function. The mesh vertex positions and curvature are not used directly in the algorithm.

Parameters
  • mesh (aims.AimsSurfaceTriangle) – triangular mesh to build the skeleton on

  • texture (aims.TimeTexture (int values)) – input object definition: binary object in a texture, all non-zero values are considered in the object

  • curv_func (function) – “curvature” function which returns a value for each point, deciding whether the point can be removed from the object (eroded), normally based on curvature or “sharpness”. A negative value means that the point cannot be removed. The default function allows removing a point unless it is a “sharp edge”, connected to the object only by one triangle edge.

  • dist_tex (aims.TimeTexture (float values)) – distance-like map inside the object, deciding the priority of eroded points. Points with the lowest values will be processed first. Typically if we want to build the skeleton of a thresholded curvature or a depth potential function (DPF), texture will be this binarized texture, and dist_tex will be the curvature or DPF texture itself.

  • do_timesteps (bool) – if True, the output texture will have one timestep per front propagation iteration

  • min_cc_size (int) – small connected components can be removed afterwards. Such trimming only happens if do_timesteps is False.

  • min_branch_size (int) – small branches can be pruned afterwards. Such trimming only happens if do_timesteps is False.

  • debug_inspect (sequence (preferably set) of ints) – list of vertices for which debug information will be printed on the standard output. Useful to understand what happens there.

Returns

skel_tex – output skeleton texture. Value 0 is the background, 1 is the skeleton. If do_timesteps is True, then one timestep per propagation step will be found in the texture, and value 2 will be used for the object interior (not belonging to the propagation front in the current step)

Return type

aims.TimeTexture_S16

soma.aimsalgo.mesh_skeleton.prune_branches(mesh, texture, min_branch_size=20, neigh=None)[source]

Prune the smallest branches in a skeleton texture

soma.aimsalgo.mesh_skeleton.sharp_curve_func(nvert, ntex, neigh, v, dist_tex)[source]

Default “curvature” function used as “curv_func” in mesh_skeleton().

Returns a negative value if the given vertex should not be removed (eroded). The current implementation freezes a vertex if it has only one neighbor in the object.

soma.aimsalgo.mesh_skeleton.sort_potential(front, texture)[source]

Sort front points list according to texture value

soma.aimsalgo.mesh_skeleton.topo_mark(mesh, texture, neigh=None)[source]

Mark skeleton vertices according to their topological type:

  • 1: end point

  • 2: line point

  • 3: bifurcation
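The classification above can be sketched in pure Python (an assumed rendering of the logic, not the aims implementation): each skeleton vertex is typed by counting its neighbors that also belong to the skeleton.

```python
# Type a skeleton vertex by its number of skeleton neighbors:
# 1 neighbor -> end point (1), 2 -> line point (2), 3+ -> bifurcation (3).
def classify(vertex, skeleton, neighbors):
    n = sum(1 for v in neighbors[vertex] if v in skeleton)
    return 1 if n == 1 else (2 if n == 2 else 3)

# a tiny Y-shaped skeleton: chain 0-1-2 with a branch 1-3
neighbors = {0: {1}, 1: {0, 2, 3}, 2: {1}, 3: {1}}
skeleton = {0, 1, 2, 3}
print([classify(v, skeleton, neighbors) for v in range(4)])  # [1, 3, 1, 1]
```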

soma.aimsalgo.mesh_skeleton.trim_skeleton(mesh, skeleton, min_cc_size=20, min_branch_size=20)[source]

Trim a skeleton texture by removing small connected components and small branches.
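The connected-component half of this trimming can be sketched in pure Python (illustrative only; the aims function works on a mesh texture and also prunes branches):

```python
# Keep only connected components with at least min_cc_size vertices,
# the kind of filtering trim_skeleton() applies to a skeleton texture.
def drop_small_components(components, min_cc_size):
    return [cc for cc in components if len(cc) >= min_cc_size]

components = [[0, 1, 2, 3], [7], [10, 11]]
print(drop_small_components(components, min_cc_size=2))
# [[0, 1, 2, 3], [10, 11]]
```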

SubModule: aimsalgo.meshwatershedtools

This module features the following:

  • generate a reduced profile by basins using the watershed algorithm

  • merge basins if necessary (according to measured criteria validating the basins)

Main dependencies: PyAims library

soma.aimsalgo.meshwatershedtools.watershedWithBasinsMerging(tex, mesh, size_min=0, depth_min=0, tex_threshold=0.01, mode='and')[source]

Generate a texture of merged basins: watershed texture.

The basins are merged according to two criteria:
  1. the size of basins

  2. the depth of basins

Parameters
  • tex ((TimeTexture_S16)) – texture of boundaries between regions

  • mesh ((AimsTimeSurface_3)) – associated mesh

  • size_min ((int)) – size threshold: basins smaller than size_min are candidates for merging

  • depth_min ((float)) – depth threshold: basins shallower than depth_min are candidates for merging

  • tex_threshold ((float)) – threshold on the input intensity texture

  • mode ((str)) –

    two cases:
    1. ‘and’ –> merge a basin with its parent

      if size < size_min and depth < depth_min

    2. ‘or’ –> merge a basin with its parent

      if size < size_min or depth < depth_min

Returns

output_tex – watershed result according to the thresholds indicated by size_min and depth_min

Return type

(TimeTexture_S16)
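The merge decision alone can be sketched in pure Python (a hypothetical rendering of the two modes described above, not the aims watershed itself):

```python
# Decide whether a basin is merged into its parent, depending on mode:
# 'and' requires both criteria to fail, 'or' requires only one.
def should_merge(size, depth, size_min, depth_min, mode='and'):
    if mode == 'and':
        return size < size_min and depth < depth_min
    elif mode == 'or':
        return size < size_min or depth < depth_min
    raise ValueError('mode must be "and" or "or"')

print(should_merge(10, 0.5, size_min=20, depth_min=0.3, mode='and'))  # False
print(should_merge(10, 0.5, size_min=20, depth_min=0.3, mode='or'))   # True
```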

SubModule: aimsalgo.polyfit

Fit a volume with a polynomial.

Inspired by Fitpoly.m, courtesy of Alexandre Vignaud.

soma.aimsalgo.polyfit.apply_poly(volume_like, coefs, mask=None, transformation=None)[source]

Calculate a polynomial over the domain of the supplied mask.

Return a numpy array the same size as the input mask.

transformation (optional) is the projective transformation from the volume to the referential in which the polynomial coefficients were calculated.

soma.aimsalgo.polyfit.fit_poly_coefs(volume, order, mask=None)[source]

Get the coefficients of the polynomial that fits the input data

soma.aimsalgo.polyfit.meshgrid_volume(volume)[source]

Return three arrays containing each voxel’s coordinates.
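The fit/apply workflow of this submodule can be sketched with plain NumPy in 2D (a conceptual illustration under assumed monomial ordering, not the aims implementation): build a design matrix of monomials over the coordinate grid, fit coefficients by least squares (the role of fit_poly_coefs), then evaluate the polynomial over the domain (the role of apply_poly).

```python
import numpy as np

# Monomial design matrix x**i * y**j with i + j <= order.
def design_matrix(x, y, order):
    return np.stack([x**i * y**j
                     for i in range(order + 1)
                     for j in range(order + 1 - i)], axis=1)

# coordinate grid, as meshgrid_volume() would provide for a volume
x, y = np.meshgrid(np.arange(5.), np.arange(5.), indexing='ij')
x, y = x.ravel(), y.ravel()
data = 1.0 + 2.0 * x + 3.0 * y          # a plane: exactly order 1

A = design_matrix(x, y, order=1)        # columns: 1, y, x
coefs, *_ = np.linalg.lstsq(A, data, rcond=None)
fitted = A @ coefs                      # polynomial evaluated on the grid

print(np.round(coefs, 6))  # [1. 3. 2.]
```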

SubModule: aimsalgo.t1mapping

Reconstruct magnetic resonance parameters.

This is mainly a re-implementation of scripts provided by Alexandre Vignaud, plus a few improvement functions (such as mask and B1-map hole filling).

class soma.aimsalgo.t1mapping.BAFIData(amplitude_volume, phase_volume)[source]

Bases: object

B1 map reconstruction class using the VFA (Variable Flip Angle) method.

Pass the BAFI data as two amplitude-phase 4D AIMS volumes.

The last dimension of both arrays represents the different echos.

static correctB0(FA_map, FA_phase, B0_map, tau, echo_time)[source]

Apply B0 correction to a B1 map.

This is a re-implementation of correctB0.m, courtesy of Alexandre Vignaud.

fix_b1_map(b1map, smooth_type='median', gaussian=False, output_median=False)[source]

Fix/improve the B1 map by filling holes, smoothing, and extending it spatially a little so that it can be used on the complete brain.

Parameters
  • b1map (volume) – the B1 map to be corrected, may be the output of self.make_flip_angle_map()

  • smooth_type (str (optional)) – smoothing correction type: ‘median’ (default) or ‘dilated’

  • gaussian (float (optional)) – if non-zero, perform an additional Gaussian filtering with the given standard deviation. Default: 0 (not applied)

  • output_median (bool (optional)) – if set, the output will be a tuple including a 2nd volume: the median-filtered B1 map. Only valid if smooth_type is ‘median’.

Returns

The corrected B1 map. If output_median is set, the return value is a tuple (corrected B1 map, median-filtered B1 map).

make_B0_map()[source]

Build a map of B0 in Hz from BAFI data.

Return the map as a numpy array.

This is a re-implementation of Phase2B0Map.m, courtesy of Alexandre Vignaud.

make_B1_map(B0_correction=False)[source]

Build a map of B1 (in radians) from BAFI data.

Return a numpy array of complex type.

This is a re-implementation of BAFI2B1map.m, courtesy of Alexandre Vignaud.

The method is Yarnykh’s (MRM 57:192-200 (2007)) + Amadon ISMRM2008 (MAFI sequence: simultaneous cartography of B0 and B1).

make_flip_angle_map()[source]

Build a map of actual flip angle (in radians) from BAFI data.

This is a re-implementation of BAFI2FAmap.m (courtesy of Alexandre Vignaud) modified to return only the real flip angle (omitting the phase).

The method is Yarnykh’s (MRM 57:192-200 (2007)) + Amadon ISMRM2008 (MAFI sequence: simultaneous cartography of B0 and B1).

class soma.aimsalgo.t1mapping.GREData2FlipAngles(min_FA_volume, max_FA_volume)[source]

Bases: object

soma.aimsalgo.t1mapping.correct_bias(biased_vol, b1map, dp_gre_low_contrast=None, field_threshold=None)[source]

Apply bias correction on biased_vol according to the B1 map, and possibly a GRE low contrast image.

Without dp_gre_low_contrast image:

\[unbiased\_vol = biased\_vol / b1map\]

(plus improvements)

With dp_gre_low_contrast image:

\[unbiased\_vol = biased\_vol / (lowpass(dp\_gre\_low\_contrast) * b1map)\]

(roughly)

\(lowpass\) is currently a gaussian filter with sigma=8mm.

method: courtesy of Alexandre Vignaud.

ref: ISMRM abstract Mauconduit et al.

All input images are expected to contain transformation information to a common space in their header (1st transformation, normally to the scanner-based referential). They are thus not expected to have the same field of view or voxel size, all are resampled to the biased_vol space.

The returned value is a tuple containing 2 images: the corrected image, and the multiplicative correction field.

Parameters
  • biased_vol (volume) – volume to be corrected

  • b1map (volume) – B1 map as flip angles in degrees, generally returned by BAFIData.make_flip_angle_map. May be improved (holes filled, dilated) using BAFIData.fix_b1_map() which is generally better.

  • dp_gre_low_contrast (volume (optional)) – GRE low contrast image

  • field_threshold (float (optional)) – Threshold for the corrective field before inversion: the biased image will be divided by this field. To avoid too high values, field values under this threshold are clamped. Null values are masked out, so the threshold applies only to non-null values. If not specified, the threshold is 100 if the dp_gre_low_contrast is not provided, and 3000 when dp_gre_low_contrast is used. If field_threshold is 0, then no thresholding is applied.

Returns

  • unbiased_vol (volume) – corrected volume, according to the calculations explained above. The returned image has the same voxel type as the input one (although calculations are performed in float inside the function), and the grey levels are roughly adjusted to the level of the input data (unless this produces overflow, in which case the max value is adjusted to fit in the voxel type).

  • field (volume) – The correction field applied to the image (multiplicatively)
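The division and field-thresholding step can be sketched with plain NumPy (an assumed rendering of the behavior described above, not the aims implementation, and ignoring the resampling and low-pass parts):

```python
import numpy as np

# Divide the biased image by the correction field, clamping low non-null
# field values to field_threshold and masking out null values, as the
# field_threshold parameter describes.
def correct_bias_sketch(biased, field, field_threshold=100.0):
    field = field.astype(float).copy()
    nonzero = field != 0
    if field_threshold > 0:
        # clamp low (non-null) field values to avoid exploding the division
        field[nonzero & (field < field_threshold)] = field_threshold
    unbiased = np.zeros_like(field)
    unbiased[nonzero] = biased[nonzero] / field[nonzero]
    return unbiased, field

biased = np.array([0., 50., 400.])
field = np.array([0., 50., 200.])
unbiased, clamped = correct_bias_sketch(biased, field)
print(unbiased)  # [0.  0.5 2. ]
```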

soma.aimsalgo.t1mapping.correct_sensitivity_map(sensitivity_map, flip_angle_map)[source]

Factor out the effect of the transmit field from a sensitivity map.

soma.aimsalgo.t1mapping.t1mapping_VFA(flip_angle_factor, GRE_data)[source]

Reconstruct a T1 relaxometry map from DESPOT1 data (2 flip angles).

This is a re-implementation of GetT1_VAFI3Dp_Brebis.m, courtesy of Alexandre Vignaud.

SubModule: aimsalgo.texture_cleaning

soma.aimsalgo.texture_cleaning.clean_texture(mesh, tex, labels, ero_dist={'GapMap': 1.5, 'other': 0.5, 'unknown': 7.0}, dilation=1.5, min_cc_size=100.0, max_threads=0)[source]

Clean a labels texture:

  • for each label in the nomenclature:

    • erode by a certain amount, depending on the label, to eliminate small spurious parts

    • dilate by the same amount + dilation mm, to grow back and to connect disconnected parts

    • re-erode back to the original size

  • apply a Voronoi diagram over all regions to fill gaps

  • filter out small disconnected parts (< min_cc_size mm²)

  • set a labels and colors table in the texture

Parameters
  • mesh (Aims mesh) –

  • tex (Aims texture) –

  • labels (dict) – labels map, normally obtained using read_labels()

  • ero_dist (dict) –

  • dilation (float) –

  • min_cc_size (float) –

  • max_threads (int) – 0: all CPU cores; 1: mono-core; 2 or more: that number of worker threads; -n: all but n cores

Returns

otex – cleaned output texture

Return type

Aims texture