Python API for deepsulci module

Sulci labeling API

deepsulci.sulci_labeling.analyse.stats.acc_score(y_true, y_pred)[source]

Accuracy (ACC) score

deepsulci.sulci_labeling.analyse.stats.bacc_score(y_true, y_pred, labels)[source]

Balanced accuracy (BACC) score

deepsulci.sulci_labeling.analyse.stats.esi_score(y_true, y_pred, labels)[source]

Error similarity index (ESI) score
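
A minimal usage sketch of these scoring functions, assuming the inputs are equal-length sequences of label names (the sulcus names below are illustrative placeholders, not prescribed by this API):

    from deepsulci.sulci_labeling.analyse.stats import (
        acc_score, bacc_score, esi_score)

    # Hypothetical per-point labels taken from a labeled sulcal graph.
    y_true = ['S.C._left', 'S.C._left', 'F.C.M._left', 'unknown']
    y_pred = ['S.C._left', 'F.C.M._left', 'F.C.M._left', 'unknown']
    labels = ['S.C._left', 'F.C.M._left']

    print(acc_score(y_true, y_pred))           # ACC
    print(bacc_score(y_true, y_pred, labels))  # BACC over the given labels
    print(esi_score(y_true, y_pred, labels))   # ESI over the given labels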

deepsulci.sulci_labeling.method.cutting.cutting(y_scores, y_vert, bck, threshold, vs=1.0)[source]

Cut an elementary fold according to voxel-wise classification scores

class deepsulci.sulci_labeling.method.unet.UnetSulciLabeling(sulci_side_list, batch_size=3, cuda=0, lr=0.001, momentum=0.9, num_filter=64, translation_file=None, dict_bck2=None, dict_names=None)[source]

3D U-Net for automatic sulci recognition

Parameters:
  • sulci_side_list (list) – list of sulcus names
  • lr (float) – learning rate
  • momentum (float) – momentum for SGD
  • early_stopping (bool) – if True, early stopping with patience=4 is used
  • cuda (int) – index of the GPU to use
  • batch_size (int) – number of samples per batch during learning
  • data_augmentation (bool) – if True, random rotations of the images are applied
  • num_filter (int) – number of initial filters in the U-Net (default: 64)
  • opt (string) – optimizer to use ('Adam' or 'SGD')
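
A construction sketch based on the constructor signature above; the sulcus names are illustrative placeholders, and the training/labeling methods of the class are not documented on this page:

    from deepsulci.sulci_labeling.method.unet import UnetSulciLabeling

    # Hypothetical list of sulcus labels for one hemisphere.
    sulci_side_list = ['S.C._left', 'F.C.M._left', 'unknown']

    model = UnetSulciLabeling(
        sulci_side_list,
        batch_size=3,    # samples per batch
        cuda=0,          # GPU index
        lr=0.001,
        momentum=0.9,
        num_filter=64)   # initial number of U-Net filters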

Pattern classification API

deepsulci.pattern_classification.analyse.stats.balanced_accuracy(y_true, y_pred, labels=None)[source]

Balanced accuracy score

class deepsulci.pattern_classification.method.resnet.ResnetPatternClassification(bounding_box, pattern=None, cuda=-1, names_filter=None, lr=0.0001, momentum=0.9, batch_size=10, dict_bck=None, dict_label=None)[source]

ResNet classifier for pattern classification

class deepsulci.pattern_classification.method.snipe.SnipePatternClassification(pattern=None, names_filter=None, n_opal=10, patch_sizes=[6], num_cpu=1, dict_bck=None, dict_bck_filtered=None, dict_label=None)[source]

SNIPE classifier for pattern classification

deepsulci.pattern_classification.method.snipe.subject_labeling(gfile, dict_bck, translation, mask, vol_size, n_opal, distmap_list, bck_list, proba_list, label_list, patch_sizes)[source]

Label a subject’s sulcal graph (.arg file) for a specific pattern search, using the SNIPE method

class deepsulci.pattern_classification.method.svm.SVMPatternClassification(pattern=None, names_filter=None, C=1, gamma=0.01, trans=[0], dict_bck=None, dict_bck_filtered=None, dict_searched_pattern=None, dict_label=None)[source]

SVM classifier for pattern classification
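
A construction sketch of the three pattern classifiers using the signatures listed above; the pattern name and bounding box values are illustrative assumptions, and the training/prediction methods are not documented on this page:

    from deepsulci.pattern_classification.method.resnet import \
        ResnetPatternClassification
    from deepsulci.pattern_classification.method.snipe import \
        SnipePatternClassification
    from deepsulci.pattern_classification.method.svm import \
        SVMPatternClassification

    # Hypothetical bounding box and pattern name.
    bb = [[30, 80], [150, 200], [20, 70]]

    resnet_clf = ResnetPatternClassification(bb, pattern='PP',
                                             cuda=-1, lr=0.0001)
    snipe_clf = SnipePatternClassification(pattern='PP', n_opal=10,
                                           patch_sizes=[6], num_cpu=1)
    svm_clf = SVMPatternClassification(pattern='PP', C=1, gamma=0.01)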

Deep learning tools

class deepsulci.deeptools.dataset.PatternDataset(gfile_list, pattern, bb, train=True, dict_bck={}, dict_label={}, labels=None)[source]

Pattern dataset (for pattern classification)

class deepsulci.deeptools.dataset.SulciDataset(gfile_list, dict_sulci, train=True, translation_file=None, dict_bck2={}, dict_names={})[source]

Sulci dataset class

deepsulci.deeptools.dataset.apply_bounding_box(points, bb)[source]

Crop points with a bounding box
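
A small sketch of apply_bounding_box, assuming points is an (N, 3) array of coordinates and bb gives [min, max] bounds per axis (the exact bb layout is not documented on this page):

    import numpy as np
    from deepsulci.deeptools.dataset import apply_bounding_box

    points = np.array([[10, 20, 30],
                       [50, 60, 70],
                       [90, 90, 90]])
    bb = np.array([[0, 60], [0, 80], [0, 80]])  # assumed [min, max] per axis

    inside = apply_bounding_box(points, bb)  # crop points to the bounding box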

deepsulci.deeptools.dataset.extract_data(graph, flip=False)[source]

Extract sulci points data from sulcal graphs (.arg files)
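
A sketch of reading a sulcal graph with soma.aims and extracting its point data; the file path is a placeholder and the structure of the returned data is not described on this page:

    from soma import aims
    from deepsulci.deeptools.dataset import extract_data

    graph = aims.read('/path/to/subject_hemi.arg')  # labeled sulcal graph
    data = extract_data(graph)                  # point data from the graph
    data_flip = extract_data(graph, flip=True)  # with the flip option enabled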

deepsulci.deeptools.dataset.random_rotation(center, rot_angle)[source]

Apply a random rotation (random axis and angle) around a given center

deepsulci.deeptools.dataset.rotation_bck(bck, transrot)[source]

Apply a rotation to a bucket (set of voxels)

deepsulci.deeptools.dataset.rotation_matrix(angle, direction, point=None)[source]

Return matrix to rotate about axis defined by point and direction.

>>> R = rotation_matrix(math.pi/2, [0, 0, 1], [1, 0, 0])
>>> numpy.allclose(numpy.dot(R, [0, 0, 0, 1]), [1, -1, 0, 1])
True
>>> angle = (random.random() - 0.5) * (2*math.pi)
>>> direc = numpy.random.random(3) - 0.5
>>> point = numpy.random.random(3) - 0.5
>>> R0 = rotation_matrix(angle, direc, point)
>>> R1 = rotation_matrix(angle-2*math.pi, direc, point)
>>> is_same_transform(R0, R1)
True
>>> R0 = rotation_matrix(angle, direc, point)
>>> R1 = rotation_matrix(-angle, -direc, point)
>>> is_same_transform(R0, R1)
True
>>> I = numpy.identity(4, numpy.float64)
>>> numpy.allclose(I, rotation_matrix(math.pi*2, direc))
True
>>> numpy.allclose(2, numpy.trace(rotation_matrix(math.pi/2,
...                                               direc, point)))
True
deepsulci.deeptools.dataset.unit_vector(data, axis=None, out=None)[source]

Return ndarray normalized by length, i.e. Euclidean norm, along axis.

>>> v0 = numpy.random.random(3)
>>> v1 = unit_vector(v0)
>>> numpy.allclose(v1, v0 / numpy.linalg.norm(v0))
True
>>> v0 = numpy.random.rand(5, 4, 3)
>>> v1 = unit_vector(v0, axis=-1)
>>> v2 = v0 / numpy.expand_dims(numpy.sqrt(numpy.sum(v0*v0, axis=2)), 2)
>>> numpy.allclose(v1, v2)
True
>>> v1 = unit_vector(v0, axis=1)
>>> v2 = v0 / numpy.expand_dims(numpy.sqrt(numpy.sum(v0*v0, axis=1)), 1)
>>> numpy.allclose(v1, v2)
True
>>> v1 = numpy.empty((5, 4, 3))
>>> unit_vector(v0, axis=1, out=v1)
>>> numpy.allclose(v1, v2)
True
>>> list(unit_vector([]))
[]
>>> list(unit_vector([1]))
[1.0]
class deepsulci.deeptools.early_stopping.EarlyStopping(patience=7, verbose=False)[source]

Stops the training early if the validation loss does not improve after a given patience.

Parameters:
  • patience (int) – How long to wait after last time validation loss improved. Default: 7
  • verbose (bool) – If True, prints a message for each validation loss improvement. Default: False
save_checkpoint(val_loss, model)[source]

Saves the model when the validation loss decreases.
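
A training-loop sketch; it assumes the EarlyStopping instance is callable with (val_loss, model) and exposes an early_stop flag, which is the usual pattern for this kind of helper but is not spelled out on this page. The train/evaluate helpers, model, and data loaders are hypothetical:

    from deepsulci.deeptools.early_stopping import EarlyStopping

    early_stopping = EarlyStopping(patience=7, verbose=True)

    for epoch in range(max_epochs):
        train_one_epoch(model, train_loader)    # hypothetical helper
        val_loss = evaluate(model, val_loader)  # hypothetical helper

        early_stopping(val_loss, model)  # assumed: checkpoints on improvement
        if early_stopping.early_stop:    # assumed attribute
            break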

class deepsulci.deeptools.models.Decoder(in_channels, out_channels, interpolate, kernel_size=3, scale_factor=(2, 2, 2), conv_layer_order='crg', num_groups=32, ind=0)[source]

A single module of the decoder path, consisting of an upsampling layer (either a learned ConvTranspose3d or interpolation) followed by a DoubleConv module.

Parameters:
  • in_channels (int) – number of input channels
  • out_channels (int) – number of output channels
  • interpolate (bool) – if True, use nn.Upsample for upsampling; otherwise use a learned ConvTranspose3d (more parameters, and therefore higher GPU memory use and a greater risk of overfitting)
  • kernel_size (int) – size of the convolving kernel
  • scale_factor (tuple) – used as the multiplier for the image H/W/D in case of nn.Upsample or as stride in case of ConvTranspose3d
  • conv_layer_order (string) – determines the order of layers in DoubleConv module. See DoubleConv for more info.
  • num_groups (int) – number of groups for the GroupNorm
class deepsulci.deeptools.models.DoubleConv(in_channels, out_channels, kernel_size=3, order='crg', num_groups=32)[source]

A module consisting of two consecutive convolution layers (e.g. BatchNorm3d+ReLU+Conv3d), producing ‘out_channels // 2’ and ‘out_channels’ output channels respectively. (Conv3d+ReLU+GroupNorm3d) is used by default; this can be changed by providing the ‘order’ argument, e.g. use order=’cbr’ to obtain Conv3d+BatchNorm3d+ReLU. Padded convolutions are used so that the output (H_out, W_out) matches the input (H_in, W_in), which avoids cropping in the decoder path.

Parameters:
  • in_channels (int) – number of input channels
  • out_channels (int) – number of output channels
  • kernel_size (int) – size of the convolving kernel
  • order (string) – determines the order of layers, e.g. ‘cr’ -> conv + ReLU, ‘crg’ -> conv + ReLU + groupnorm
  • num_groups (int) – number of groups for the GroupNorm
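
A shape-check sketch for DoubleConv: because of the padded convolutions, the spatial dimensions of a 5D input (N, C, D, H, W) are preserved while the channel count becomes out_channels:

    import torch
    from deepsulci.deeptools.models import DoubleConv

    block = DoubleConv(in_channels=1, out_channels=64,
                       order='crg', num_groups=32)
    x = torch.randn(1, 1, 32, 32, 32)  # (N, C, D, H, W)
    y = block(x)
    print(y.shape)  # expected: torch.Size([1, 64, 32, 32, 32])
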
class deepsulci.deeptools.models.Encoder(in_channels, out_channels, conv_kernel_size=3, is_max_pool=True, max_pool_kernel_size=(2, 2, 2), conv_layer_order='crg', num_groups=32, ind=0)[source]

A single module of the encoder path, consisting of an optional max pooling layer followed by a DoubleConv module. The MaxPool kernel_size may be set differently from the standard (2, 2, 2), e.g. if the volumetric data is anisotropic (in that case, use a complementary scale_factor in the decoder path).

Parameters:
  • in_channels (int) – number of input channels
  • out_channels (int) – number of output channels
  • conv_kernel_size (int) – size of the convolving kernel
  • is_max_pool (bool) – if True use MaxPool3d before DoubleConv
  • max_pool_kernel_size (tuple) – the size of the window to take a max over
  • conv_layer_order (string) – determines the order of layers in DoubleConv module. See DoubleConv for more info.
  • num_groups (int) – number of groups for the GroupNorm
class deepsulci.deeptools.models.UNet3D(in_channels, out_channels, final_sigmoid, interpolate=True, conv_layer_order='crg', init_channel_number=64)[source]

3D U-Net model from “3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation”

Parameters:
  • in_channels (int) – number of input channels
  • out_channels (int) – number of output segmentation masks; note that out_channels may correspond either to different semantic classes or to different binary segmentation masks. It is up to the user of the class to interpret out_channels and use the proper loss criterion during training (i.e. NLLLoss for multi-class or BCELoss for two-class segmentation, respectively)
  • interpolate (bool) – if True use F.interpolate for upsampling otherwise use ConvTranspose3d
  • final_sigmoid (bool) – if True apply element-wise nn.Sigmoid after the final 1x1x1 convolution, otherwise apply nn.Softmax. MUST be True if nn.BCELoss (two-class) is used to train the model. MUST be False if nn.CrossEntropyLoss (multi-class) is used to train the model.
  • conv_layer_order (string) – determines the order of layers in DoubleConv module. e.g. ‘crg’ stands for Conv3d+ReLU+GroupNorm3d. See DoubleConv for more info.
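
A forward-pass sketch for UNet3D on a random volume; the spatial size is chosen to stay divisible through the pooling levels (the exact network depth is not stated on this page):

    import torch
    from deepsulci.deeptools.models import UNet3D

    net = UNet3D(in_channels=1, out_channels=3, final_sigmoid=False,
                 interpolate=True, init_channel_number=64)
    x = torch.randn(1, 1, 32, 32, 32)  # (N, C, D, H, W)
    with torch.no_grad():
        out = net(x)
    print(out.shape)  # expected: torch.Size([1, 3, 32, 32, 32])
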
deepsulci.deeptools.models.resnet101(**kwargs)[source]

Constructs a ResNet-101 model.

deepsulci.deeptools.models.resnet152(**kwargs)[source]

Constructs a ResNet-152 model.

deepsulci.deeptools.models.resnet18(**kwargs)[source]

Constructs a ResNet-18 model.

deepsulci.deeptools.models.resnet34(**kwargs)[source]

Constructs a ResNet-34 model.

deepsulci.deeptools.models.resnet50(**kwargs)[source]

Constructs a ResNet-50 model.

Patch tools

class deepsulci.patchtools.optimized_patchmatch.OptimizedPatchMatch(patch_size, search_size=[3, 3, 3], border=10, segmentation=True, k=5, j=4)[source]

Optimized PatchMatch algorithm