referential of saved meshes

Questions about Anatomist manipulation

Moderators: denghien, riviere

tizianod
Posts: 21
Joined: Fri Apr 06, 2012 9:52 am

referential of saved meshes

Post by tizianod »

Hello,

I am using a simple Python script to save into a single mesh file the meshes extracted from the sulci graph generated by the Morphologist pipeline. This works perfectly and in Anatomist I can see the MRI with the extracted mesh perfectly registered.
However, using other programs (like MITK) the mesh is not aligned with the image. I guess the problem is that the mesh is saved in the Aims referential, which is radiological, while the image is displayed in its own referential, which is probably neurological (the original image is in NIfTI format). I read that Anatomist makes this transformation on the fly, which could explain why I have no alignment problems in Anatomist. Now, how can I change the coordinate system of the mesh to match the one of the image? I think I would need the inverse of the transformation that Anatomist uses internally to display the image. I guess it is not just an axis flip, because the origin also changes, right? I tried to run AimsFileInfo but I didn't find the required transformation.

P.S. I am new to BrainVISA, sorry if I made some stupid mistake
riviere
Site Admin
Posts: 1361
Joined: Tue Jan 06, 2004 12:21 pm
Location: CEA NeuroSpin, Saint Aubin, France
Contact:

Re: referential of saved meshes

Post by riviere »

Hi,
Yes that's it: meshes built from segmented images are in the Aims coords system of the source image (origin in the right, top, front voxel of the image). If the image has a transformation to another coords system (say a scanner-based ref in NIFTI file), you can apply it to the mesh.
- either manually, using command lines: run AimsFileInfo on the image, look at the "transformations" field, write a .trm transform file corresponding to it (see the format description here), then use AimsMeshTransform to apply the matrix to the mesh;
- or in a Python script, using something like this:

Code:

from soma import aims

# read the mesh
mesh = aims.read('mesh_to_transform.gii')
# read the image header
f = aims.Finder()
if f.check('source_image.nii'):
    hdr = f.header()
    # get the transform to the scanner-based referential
    if hdr.has_key('transformations'):
        tr = aims.AffineTransformation3d(hdr['transformations'][0])
        # apply the transform (the mesh is modified in place)
        aims.SurfaceManip.meshTransform(mesh, tr)
        # save the resulting mesh
        aims.write(mesh, 'mesh_trans.gii')
After that, the mesh mesh_trans.gii will be in the scanner-based referential (which is what I think FreeSurfer does).
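For the command-line route, a .trm file is plain text: the translation on the first line, then the three rows of the 3x3 rotation part. A small sketch of a helper that writes an AimsFileInfo "transformations" entry (16 values, row-major 4x4) as a .trm file; the helper name, file name, and matrix values are illustrative, not part of the Aims API:

```python
# Write a row-major 4x4 matrix, as printed in the 'transformations' field
# of AimsFileInfo, as a .trm file: first line = translation,
# next three lines = rows of the 3x3 rotation/scaling part.
# write_trm and the example values below are hypothetical.

def write_trm(matrix16, filename):
    m = matrix16
    with open(filename, 'w') as f:
        # translation: the last column of the 4x4 matrix
        f.write('%g %g %g\n' % (m[3], m[7], m[11]))
        # rotation/scaling part, row by row
        for r in range(3):
            f.write('%g %g %g\n' % tuple(m[4 * r:4 * r + 3]))

# example: a matrix copied by hand from an AimsFileInfo output
write_trm([-1, 0, 0, 124.75,
            0, -1, 0, 93.40,
            0, 0, 1, -72.98,
            0, 0, 0, 1], 'to_scanner.trm')
```

The resulting to_scanner.trm can then be passed to AimsMeshTransform.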
Does it help?
tizianod

Re: referential of saved meshes

Post by tizianod »

Yes, perfect! I was indeed close to the solution :)
Thanks a lot for your quick help!
tizianod

Re: referential of saved meshes

Post by tizianod »

I spoke too fast...
The mesh and the image looked well aligned, but in fact they are not perfectly registered. Does the AC-PC transformation play any role in this issue?
I attach my script and what I get. I have a last step in which I convert PLY to VTK with ParaView, but I believe and hope that the problem is not there.
Attachments
printsmall.png (192.75 KiB)
graph_test.py (1.42 KiB)
riviere

Re: referential of saved meshes

Post by riviere »

Well, I don't really know MITK or which coordinate system it uses... Every software has its own conventions, so it's difficult to guess. Transforming to the scanner-based referential was the first idea that came to my mind (and we thought we had figured out that FreeSurfer works in this space for its meshes), but it's just an assumption... Looking at your image, this assumption doesn't seem to be correct...
It could also be disk-space coordinates, or an internal convention specific to MITK...?
The AC-PC transformation should not interfere here: in BrainVisa it is just computed to get a link to other subjects (and, at some points of the segmentation pipeline, to get help from a template), but the data are independent of it.
Or maybe MITK expects a transform to a standard (MNI?) space, which is not present here?
I guess (and hope) that ParaView doesn't change vertex coordinates while converting to the VTK format?
Denis
tizianod

Re: referential of saved meshes

Post by tizianod »

Hi,

The problem is not only related to MITK; the exact same thing happens with MedInria too. Additionally, I do not understand why, in the Morphologist pipeline, the brain mask is registered to the image but the segmented cortex, for example, is not (see attachment). What is the transformation applied there? If I understood correctly, BrainVisa uses the Aims referential, and the transformation from storage to Aims is stored in the "storage_to_memory" matrix saved in the header. So if I want to save the meshes in the same referential as the image, I think I should multiply the mesh coordinates by the inverse of the "storage_to_memory" matrix. Why is this not the case?
medinria.png (144.19 KiB)
riviere

Re: referential of saved meshes

Post by riviere »

Hi,

I guess there are 2 different things here:
- the T1 and brain mask are not displayed at the same position: I guess that somewhere, Morphologist doesn't keep the transformation info between the input image and the output (brain segmentation) one. I will check that. As we used to work with older image formats which did not allow storing such information, at some places we may not have taken care of correctly propagating it through the image processing pipeline. Moreover, Morphologist uses a different mechanism for coordinate systems and transformations (not in the image header itself): each piece of data in the Morphologist pipeline is assigned a referential identified in the BrainVisa database. So it looks OK in Morphologist/Anatomist, and here all images have the same number and size of voxels, so they can be superimposed on a voxel-to-voxel basis.
- what we need to know here is how image voxels are positioned in a millimetric world, where mesh vertex coordinates are also specified. This voxel-to-mm transform is subject to different conventions: it could be the scanner-based coordinate system (what I guessed at first, but it did not seem to work in the previous posts), a software-dependent convention/orientation, or a common inter-subject space (MNI).
The "storage_to_memory" matrix is the transform going from the disk orientation of voxels (the order they are physically stored on disk) to memory space, in our software (Aims) convention. Both sides are in voxels (not mm). If you apply its inverse to a mesh, you will get it in disk-space orientation, and I am not sure that is what MITK or MedInria expect.

Denis
tizianod

Re: referential of saved meshes

Post by tizianod »

Hi Denis,
I agree that there are two different problems:
1) The BrainVisa Morphologist pipeline probably doesn't save the brain segmentations (left cortex, right cortex, etc.) in the same reference frame as the original image (while the brain mask, for example, is in the same reference).
2) The meshes are saved in a world coordinate system (in mm) which may differ from software to software, so a transformation might be necessary.
Regarding the first point, there is probably a transformation saved somewhere in the BrainVISA db, and I guess you are checking that.
Regarding the second point, I found that both MITK and medInria, being ITK-based, share the world coordinate system of ITK, which is the coordinate system of the scanner. The information about this coordinate system (origin and direction) is read directly from the header of the NIfTI file, more exactly from the "qform" field. Indeed, using the command line "fslhd" (FSL tools) to read the NIfTI header I get, for example (only the relevant part):

Code:

tiziano@libido:~/data/nifti$ fslhd 001_mri.nii 

qform_name     Scanner Anat
qform_code     1
qto_xyz:1      -0.997664  0.053355  -0.042661  124.750862
qto_xyz:2      0.053832  0.998499  -0.010116  -93.404778
qto_xyz:3      -0.042057  0.012389  0.999039  -72.978676
qto_xyz:4      0.000000  0.000000  0.000000  1.000000
qform_xorient  Right-to-Left
qform_yorient  Posterior-to-Anterior
qform_zorient  Inferior-to-Superior

And using the ITK function GetOrigin() I get:

Code:

import SimpleITK as sitk
In [25]: input = sitk.ReadImage('/home/tiziano/data/nifti/001_mri.nii')

In [26]: input.GetOrigin()
Out[26]: (-124.75086212158203, 93.40477752685547, -72.97867584228516)

In [27]: input.GetDirection()
Out[27]: 
(0.997663947115257,
 -0.053354775001125815,
 0.04266048102176248,
 -0.0538319930417767,
 -0.9984987627150277,
 0.010116194328540334,
 -0.04205668973606638,
 0.012389061350164798,
 0.9990384106586188)
As you can see, apart from a flip of the X and Y axes, the two transformations are indeed exactly the same.
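The flip can be checked numerically against the outputs above; a small NumPy sketch (the matrices are copied, rounded, from the fslhd and SimpleITK listings in this post):

```python
import numpy as np

# first three rows of qto_xyz, as printed by fslhd
qform = np.array([[-0.997664, 0.053355, -0.042661, 124.750862],
                  [0.053832, 0.998499, -0.010116, -93.404778],
                  [-0.042057, 0.012389, 0.999039, -72.978676]])

# direction (row-major) and origin, as returned by SimpleITK
itk_direction = np.array([0.997664, -0.053355, 0.042660,
                          -0.053832, -0.998499, 0.010116,
                          -0.042057, 0.012389, 0.999038]).reshape(3, 3)
itk_origin = np.array([-124.750862, 93.404778, -72.978676])

# flipping X and Y means negating the first two rows of the transform
flip = np.diag([-1.0, -1.0, 1.0])
print(np.allclose(flip @ qform[:, :3], itk_direction, atol=1e-4))  # True
print(np.allclose(flip @ qform[:, 3], itk_origin, atol=1e-4))      # True
```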
Now, if I run AimsFileInfo on the same file I get:

Code:

tiziano@libido:~/data/nifti$ AimsFileInfo 001_mri.nii 
attributes = {
    'disk_data_type' : 'S16',
    'bits_allocated' : 16,
    'scale_factor' : 2.96996,
    'scale_offset' : 0,
    'data_type' : 'FLOAT',
    'scale_factor_applied' : 0,
    'possible_data_types' : [ 'FLOAT', 'S16', 'DOUBLE' ],
    'cal_min' : 0,
    'cal_max' : 0,
    'freq_dim' : 0,
    'phase_dim' : 0,
    'slice_dim' : 0,
    'slice_code' : 0,
    'slice_start' : 0,
    'slice_end' : 0,
    'slice_duration' : 0,
    'storage_to_memory' : [ 1, 0, 0, 0, 0, -1, 0, 255, 0, 0, -1, 181, 0, 0, 0, 1 ],
    'volume_dimension' : [ 256, 256, 182 ],
    'voxel_size' : [ 1, 1, 1 ],
    'referentials' : [ 'Scanner-based anatomical coordinates', 'Scanner-based anatomical coordinates' ],
    'transformations' : [ [ -0.997664, -0.0533548, 0.0426605, 130.635, 0.053832, -0.998499, 0.0101162, 159.381, -0.0420567, -0.0123891, -0.999039, 111.007, 0, 0, 0, 1 ], [ -0.997664, -0.0533548, 0.0426636, 130.634, 0.053832, -0.998499, 0.0101161, 159.381, -0.0420598, -0.0123891, -0.999039, 111.007, 0, 0, 0, 1 ] ],
    'toffset' : 0,
    'xyz_units' : 2,
    'time_units' : 8,
    'descrip' : '',
    'aux_file' : '',
    'nifti_type' : 1,
    'file_type' : 'NIFTI1',
    'object_type' : 'Volume'
  }

Now, I was expecting to find the same transformation in the first element of the "transformations" array, but the numbers here do not match the ones before (it is sufficient to look at the origin, which is now around (130, 159, 111)). What is this transformation, then? I will now try to apply the transformation from the NIfTI file to the mesh and see if this solves the problem... but I still don't get where the transformation returned by AimsFileInfo comes from.

Thanks a lot for your support, I hope this thread will be useful for the community

Tiziano
riviere

Re: referential of saved meshes

Post by riviere »

OK, so if ITK works in the scanner-based coordinates, applying the corresponding transformation should fix the problem.
But:
a) we found in the previous messages that it didn't work
b) the brain mask seems not to have kept this transformation information, which may explain a)
If you have used my little bit of Python code on the brain segmentation image, try it with the raw T1 image as "source_image" instead: this one should contain the correct transformation and should be OK.

Now, yes, the "transformations" field shown by AimsFileInfo is not the same as the qform in the NIfTI file: they start from different coordinate systems:
- the qform in NIfTI files starts from the voxel space on disk;
- Aims uses an intermediate coordinate system which (normally) always has the same orientation (always axial, with axes in the same directions), with its origin at the center of the first voxel (right, top, front). The transform between disk voxels and this intermediate Aims referential is given by the "storage_to_memory" matrix. So if you combine these two transforms, you get the NIfTI qform. Our meshes are in this intermediate referential, so you should apply the transform as given by AimsFileInfo (which is what my little script does).
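With the numbers from this thread (voxel size is 1 mm, so no extra voxel-to-mm scaling is needed), the composition can be checked numerically; a small NumPy sketch using the AimsFileInfo and fslhd outputs posted above:

```python
import numpy as np

# storage_to_memory (disk voxels -> Aims memory voxels), from AimsFileInfo
s2m = np.array([1, 0, 0, 0,
                0, -1, 0, 255,
                0, 0, -1, 181,
                0, 0, 0, 1], dtype=float).reshape(4, 4)

# first 'transformations' entry (Aims mm -> scanner-based), from AimsFileInfo
aims_to_scanner = np.array(
    [-0.997664, -0.0533548, 0.0426605, 130.635,
     0.053832, -0.998499, 0.0101162, 159.381,
     -0.0420567, -0.0123891, -0.999039, 111.007,
     0, 0, 0, 1]).reshape(4, 4)

# voxel size is 1 mm here, so voxels -> mm is the identity; composing the
# two transforms goes from disk voxels straight to the scanner-based space
qform = aims_to_scanner @ s2m

# qto_xyz as printed by fslhd
fsl_qform = np.array(
    [-0.997664, 0.053355, -0.042661, 124.750862,
     0.053832, 0.998499, -0.010116, -93.404778,
     -0.042057, 0.012389, 0.999039, -72.978676,
     0, 0, 0, 1]).reshape(4, 4)

print(np.allclose(qform, fsl_qform, atol=1e-3))  # True
```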

This intermediate coordinate system may seem spurious and confusing, but as I said, older image formats did not all allow storing a scanner-based coordinate system, and we consider the order in which voxels are stored on disk to be an internal detail of specific formats (some formats impose an order, others do not), so we wanted an abstraction that always gives the same orientation in memory. This makes very simple operations straightforward, such as image1 - image2 (voxel-to-voxel) when image1 and image2 derive from the same acquisition, even if they are stored on disk in different formats (thus possibly with different voxel orders).

Denis
tizianod

Re: referential of saved meshes

Post by tizianod »

Hi,

I finally solved the problem of the meshes: the ITK world is almost the scanner-based anatomical referential, except that, as I showed before, there is additionally a flip of both the X and Y axes. So, to convert from Aims to ITK space, we first apply the Aims-to-scanner transformation and then flip the result. For example, I do this with the following Python script:

Code:

from soma import aims
import sys, os

meshInFile = sys.argv[1]
meshOutFile = os.path.splitext(meshInFile)[0] + '_ITKworld.ply'
origImageFile = sys.argv[2]

# read the mesh
mesh = aims.read(meshInFile)

# read the Aims-to-scanner transform from the image header
f = aims.Finder()
f.check(origImageFile)
hdr = f.header()
tr = aims.AffineTransformation3d(hdr['transformations'][0])

# apply the Aims-to-scanner transform (the mesh is modified in place)
aims.SurfaceManip.meshTransform(mesh, tr)

# flip the X and Y axes to go from the scanner-based space to the ITK world
flipXY = aims.AffineTransformation3d(
    [-1, 0, 0, 0,
      0, -1, 0, 0,
      0, 0, 1, 0,
      0, 0, 0, 1])
aims.SurfaceManip.meshTransform(mesh, flipXY)

# write the transformed mesh
aims.write(mesh, meshOutFile)

Still, I haven't found a solution for the misalignment between the image and the masks (Lcortex, Lskeleton, Lgrey_white, etc.) that I showed in the previous post with the medInria screenshot.
riviere

Re: referential of saved meshes

Post by riviere »

tizianod wrote: "I finally solved the problem of the meshes [...]"
Fine. Good to know...
tizianod wrote: "Still I don't find a solution for the mis-alignment of image and the masks [...]"
This is probably our fault: the segmentation commands probably do not take care of propagating the input image header information to the output images, so the scanner-based transformation gets lost during the process. I will try to have a look at it.
Denis
riviere

Re: referential of saved meshes

Post by riviere »

Hi,
I actually found a header problem in the cortex image output, but not in the other steps (I didn't check every one of them, but the brain mask seems OK).
I have fixed it for the next release.
Denis
tizianod

Re: referential of saved meshes

Post by tizianod »

OK, thanks, and yes, the brain mask was already OK. Is there a workaround for the other masks in the meantime?
riviere

Re: referential of saved meshes

Post by riviere »

As far as I have seen, only the "cortex" images are affected.
Yes, you can force rewriting the images using AimsFileConvert:

Code:

AimsFileConvert Lcortex_subject.nii Lcortex_subject.nii
It will overwrite the image, and should write into the native format header (here, NIfTI) the information stored in the .minf meta-information header of BrainVisa/Aims (which actually contains the correct information).

Denis
tizianod

Re: referential of saved meshes

Post by tizianod »

Hi,

As I already pointed out some time ago, the masks LCortex and RCortex are not in the same space as the original image. Using the AimsFileConvert command as you suggested, I get the masks back in the correct 'world' space thanks to the NIfTI qform transformation. However, the voxel orientation in the cortex masks still differs from that of the original image. Because of this, I have problems applying the mask to the original NIfTI image. Do you know an easy way to rearrange the voxels in the masks in the same order as in the original image?

Thanks a lot

Tiziano