Is it possible to save the fusion image resulting from the overlap between a mask and the original image in NIfTI (.nii) format?
How can we do it?
Thank you
Save the resulting image from the overlap between a mask and the original image
Re: Save the resulting image from the overlap between a mask and the original image
Hi,
Well, it is not possible to save a fusion image from Anatomist, because a fusion image is not an image: it is a mix between two images, each with its own colormap, and the image resolutions and fields of view can differ. Anatomist can produce a (possibly resampled) image, but it does so on the fly, only for the displayed slice; there is no "fused volume" behind it. It should be possible to build one using the Anatomist programming API, but you would get an RGB volume anyway (probably not what you expect).
It is possible to do it using command lines (Aims commands), Python scripting, and/or BrainVisa processes. In the general case you have to take into account the image coordinate systems, the coordinate transformations between them, resampling into a target space with a target resolution, and the masking itself.
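If you go the scripting route, here is a minimal sketch of that general case using nibabel and numpy rather than the Aims tools (file names are placeholders): the mask is resampled onto the grid of the original image using the affines stored in the two NIfTI headers, then applied voxel by voxel.

import nibabel as nib
import numpy as np
from nibabel.processing import resample_from_to

img = nib.load('original.nii')    # original image
mask = nib.load('mask.nii')       # mask, possibly on a different grid

# Resample the mask into the space and resolution of the original image;
# order=0 (nearest neighbour) keeps the mask values unchanged.
mask_on_img = resample_from_to(mask, img, order=0)

# Zero out every voxel that falls outside the mask.
masked = np.asanyarray(img.dataobj) * (np.asanyarray(mask_on_img.dataobj) > 0)

# Save the result with the original image's affine and header.
nib.save(nib.Nifti1Image(masked, img.affine, img.header), 'masked.nii')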
If you are in a "simple" case, where both images share the same field of view and resolution (voxels are directly superimposable), you can use the AimsMask command, for instance (run AimsMask -h for help and options).
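For instance, something along these lines (the option names below follow the usual Aims -i/-m/-o convention and are an assumption; check AimsMask -h for the actual ones):

AimsMask -i original.nii -m mask.nii -o masked.nii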
Denis