interfaces.ants.segmentation

AntsJointFusion

Wraps the executable command antsJointFusion.

Examples

>>> from nipype.interfaces.ants import AntsJointFusion
>>> antsjointfusion = AntsJointFusion()
>>> antsjointfusion.inputs.out_label_fusion = 'ants_fusion_label_output.nii'
>>> antsjointfusion.inputs.atlas_image = [ ['rc1s1.nii','rc1s2.nii'] ]
>>> antsjointfusion.inputs.atlas_segmentation_image = ['segmentation0.nii.gz']
>>> antsjointfusion.inputs.target_image = ['im1.nii']
>>> antsjointfusion.cmdline
"antsJointFusion -a 0.1 -g ['rc1s1.nii', 'rc1s2.nii'] -l segmentation0.nii.gz -b 2.0 -o ants_fusion_label_output.nii -s 3x3x3 -t ['im1.nii']"
>>> antsjointfusion.inputs.target_image = [ ['im1.nii', 'im2.nii'] ]
>>> antsjointfusion.cmdline
"antsJointFusion -a 0.1 -g ['rc1s1.nii', 'rc1s2.nii'] -l segmentation0.nii.gz -b 2.0 -o ants_fusion_label_output.nii -s 3x3x3 -t ['im1.nii', 'im2.nii']"
>>> antsjointfusion.inputs.atlas_image = [ ['rc1s1.nii','rc1s2.nii'],
...                                        ['rc2s1.nii','rc2s2.nii'] ]
>>> antsjointfusion.inputs.atlas_segmentation_image = ['segmentation0.nii.gz',
...                                                    'segmentation1.nii.gz']
>>> antsjointfusion.cmdline
"antsJointFusion -a 0.1 -g ['rc1s1.nii', 'rc1s2.nii'] -g ['rc2s1.nii', 'rc2s2.nii'] -l segmentation0.nii.gz -l segmentation1.nii.gz -b 2.0 -o ants_fusion_label_output.nii -s 3x3x3 -t ['im1.nii', 'im2.nii']"
>>> antsjointfusion.inputs.dimension = 3
>>> antsjointfusion.inputs.alpha = 0.5
>>> antsjointfusion.inputs.beta = 1.0
>>> antsjointfusion.inputs.patch_radius = [3,2,1]
>>> antsjointfusion.inputs.search_radius = [3]
>>> antsjointfusion.cmdline
"antsJointFusion -a 0.5 -g ['rc1s1.nii', 'rc1s2.nii'] -g ['rc2s1.nii', 'rc2s2.nii'] -l segmentation0.nii.gz -l segmentation1.nii.gz -b 1.0 -d 3 -o ants_fusion_label_output.nii -p 3x2x1 -s 3 -t ['im1.nii', 'im2.nii']"
>>> antsjointfusion.inputs.search_radius = ['mask.nii']
>>> antsjointfusion.inputs.verbose = True
>>> antsjointfusion.inputs.exclusion_image = ['roi01.nii', 'roi02.nii']
>>> antsjointfusion.inputs.exclusion_image_label = ['1','2']
>>> antsjointfusion.cmdline
"antsJointFusion -a 0.5 -g ['rc1s1.nii', 'rc1s2.nii'] -g ['rc2s1.nii', 'rc2s2.nii'] -l segmentation0.nii.gz -l segmentation1.nii.gz -b 1.0 -d 3 -e 1[roi01.nii] -e 2[roi02.nii] -o ants_fusion_label_output.nii -p 3x2x1 -s mask.nii -t ['im1.nii', 'im2.nii'] -v"
>>> antsjointfusion.inputs.out_label_fusion = 'ants_fusion_label_output.nii'
>>> antsjointfusion.inputs.out_intensity_fusion_name_format = 'ants_joint_fusion_intensity_%d.nii.gz'
>>> antsjointfusion.inputs.out_label_post_prob_name_format = 'ants_joint_fusion_posterior_%d.nii.gz'
>>> antsjointfusion.inputs.out_atlas_voting_weight_name_format = 'ants_joint_fusion_voting_weight_%d.nii.gz'
>>> antsjointfusion.cmdline
"antsJointFusion -a 0.5 -g ['rc1s1.nii', 'rc1s2.nii'] -g ['rc2s1.nii', 'rc2s2.nii'] -l segmentation0.nii.gz -l segmentation1.nii.gz -b 1.0 -d 3 -e 1[roi01.nii] -e 2[roi02.nii]  -o [ants_fusion_label_output.nii, ants_joint_fusion_intensity_%d.nii.gz, ants_joint_fusion_posterior_%d.nii.gz, ants_joint_fusion_voting_weight_%d.nii.gz] -p 3x2x1 -s mask.nii -t ['im1.nii', 'im2.nii'] -v"

Inputs:

[Mandatory]
atlas_image: (a list of items which are a list of items which are an
          existing file name)
        The atlas image (or multimodal atlas images) assumed to be aligned
        to a common image domain.
        argument: ``-g %s...``
atlas_segmentation_image: (a list of items which are an existing file
          name)
        The atlas segmentation images. For performing label fusion the
        number of specified segmentations should be identical to the number
        of atlas image sets.
        argument: ``-l %s...``
target_image: (a list of items which are a list of items which are an
          existing file name)
        The target image (or multimodal target images) assumed to be aligned
        to a common image domain.
        argument: ``-t %s``

[Optional]
mask_image: (an existing file name)
        If a mask image is specified, fusion is only performed in the mask
        region.
        argument: ``-x %s``
patch_metric: ('PC' or 'MSQ')
        Metric to be used in determining the most similar neighborhood
        patch. Options include Pearson's correlation (PC) and mean squares
        (MSQ). Default = PC (Pearson correlation).
        argument: ``-m %s``
out_label_fusion: (a file name)
        The output label fusion image.
        argument: ``%s``
out_label_post_prob_name_format: (a unicode string)
        Optional label posterior probability image file name format.
        requires: out_label_fusion, out_intensity_fusion_name_format
search_radius: (a list of from 1 to 3 items which are any value,
          nipype default value: [3, 3, 3])
        Search radius for similarity measures. Default = 3x3x3. One can also
        specify an image where the value at the voxel specifies the
        isotropic search radius at that voxel.
        argument: ``-s %s``
exclusion_image_label: (a list of items which are a unicode string)
        Specify a label for the exclusion region.
        argument: ``-e %s``
        requires: exclusion_image
patch_radius: (a list of items which are a value of class 'int')
        Patch radius for similarity measures. Default = 2x2x2
        argument: ``-p %s``
args: (a unicode string)
        Additional parameters to the command
        argument: ``%s``
constrain_nonnegative: (a boolean, nipype default value: False)
        Constrain solution to non-negative weights.
        argument: ``-c``
dimension: (3 or 2 or 4)
        This option forces the image to be treated as a specified-
        dimensional image. If not specified, the program tries to infer the
        dimensionality from the input image.
        argument: ``-d %d``
out_atlas_voting_weight_name_format: (a unicode string)
        Optional atlas voting weight image file name format.
        requires: out_label_fusion, out_intensity_fusion_name_format,
          out_label_post_prob_name_format
num_threads: (an integer (int or long), nipype default value: 1)
        Number of ITK threads to use
beta: (a float, nipype default value: 2.0)
        Exponent for mapping intensity difference to the joint error.
        Default = 2.0
        argument: ``-b %s``
exclusion_image: (a list of items which are an existing file name)
        Specify an exclusion region for the given label.
out_intensity_fusion_name_format: (a unicode string)
        Optional intensity fusion image file name format. (e.g.
        "antsJointFusionIntensity_%d.nii.gz")
retain_atlas_voting_images: (a boolean, nipype default value: False)
        Retain atlas voting images. Default = false
        argument: ``-f``
retain_label_posterior_images: (a boolean, nipype default value:
          False)
        Retain label posterior probability images. Requires atlas
        segmentations to be specified. Default = false
        argument: ``-r``
        requires: atlas_segmentation_image
alpha: (a float, nipype default value: 0.1)
        Regularization term added to matrix Mx for calculating the inverse.
        Default = 0.1
        argument: ``-a %s``
verbose: (a boolean)
        Verbose output.
        argument: ``-v``
environ: (a dictionary with keys which are a bytes or None or a value
          of class 'str' and with values which are a bytes or None or a
          value of class 'str', nipype default value: {})
        Environment variables

Outputs:

out_atlas_voting_weight_name_format: (a unicode string)
out_label_fusion: (an existing file name)
out_label_post_prob_name_format: (a unicode string)
out_intensity_fusion_name_format: (a unicode string)

Atropos

Wraps the executable command Atropos.

A finite mixture modeling (FMM) segmentation approach with possibilities for specifying prior constraints. These prior constraints include the specification of a prior label image, prior probability images (one for each class), and/or an MRF prior to enforce spatial smoothing of the labels. Similar algorithms include FAST and SPM.

Examples

>>> from nipype.interfaces.ants import Atropos
>>> at = Atropos()
>>> at.inputs.dimension = 3
>>> at.inputs.intensity_images = 'structural.nii'
>>> at.inputs.mask_image = 'mask.nii'
>>> at.inputs.initialization = 'PriorProbabilityImages'
>>> at.inputs.prior_probability_images = ['rc1s1.nii', 'rc1s2.nii']
>>> at.inputs.number_of_tissue_classes = 2
>>> at.inputs.prior_weighting = 0.8
>>> at.inputs.prior_probability_threshold = 0.0000001
>>> at.inputs.likelihood_model = 'Gaussian'
>>> at.inputs.mrf_smoothing_factor = 0.2
>>> at.inputs.mrf_radius = [1, 1, 1]
>>> at.inputs.icm_use_synchronous_update = True
>>> at.inputs.maximum_number_of_icm_terations = 1
>>> at.inputs.n_iterations = 5
>>> at.inputs.convergence_threshold = 0.000001
>>> at.inputs.posterior_formulation = 'Socrates'
>>> at.inputs.use_mixture_model_proportions = True
>>> at.inputs.save_posteriors = True
>>> at.cmdline
'Atropos --image-dimensionality 3 --icm [1,1] --initialization PriorProbabilityImages[2,priors/priorProbImages%02d.nii,0.8,1e-07] --intensity-image structural.nii --likelihood-model Gaussian --mask-image mask.nii --mrf [0.2,1x1x1] --convergence [5,1e-06] --output [structural_labeled.nii,POSTERIOR_%02d.nii.gz] --posterior-formulation Socrates[1] --use-random-seed 1'
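
The example above relies on tissue priors. When no priors are available, Atropos can instead be initialized from the data itself, e.g. with KMeans or Otsu (see the initialization options below). A minimal sketch that only sets inputs, without asserting the exact command line:

>>> at_kmeans = Atropos()
>>> at_kmeans.inputs.intensity_images = 'structural.nii'
>>> at_kmeans.inputs.mask_image = 'mask.nii'
>>> at_kmeans.inputs.initialization = 'KMeans'
>>> at_kmeans.inputs.number_of_tissue_classes = 3
>>> at_kmeans.inputs.n_iterations = 5
>>> at_kmeans.inputs.convergence_threshold = 0.000001
>>> kmeans_result = at_kmeans.run()                 # doctest: +SKIP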

Inputs:

[Mandatory]
number_of_tissue_classes: (an integer (int or long))
mask_image: (an existing file name)
        argument: ``--mask-image %s``
intensity_images: (a list of items which are an existing file name)
        argument: ``--intensity-image %s...``
initialization: ('Random' or 'Otsu' or 'KMeans' or
          'PriorProbabilityImages' or 'PriorLabelImage')
        argument: ``%s``
        requires: number_of_tissue_classes

[Optional]
likelihood_model: (a unicode string)
        argument: ``--likelihood-model %s``
prior_probability_images: (a list of items which are an existing file
          name)
output_posteriors_name_template: (a unicode string, nipype default
          value: POSTERIOR_%02d.nii.gz)
prior_weighting: (a float)
use_mixture_model_proportions: (a boolean)
        requires: posterior_formulation
maximum_number_of_icm_terations: (an integer (int or long))
        requires: icm_use_synchronous_update
n_iterations: (an integer (int or long))
        argument: ``%s``
posterior_formulation: (a unicode string)
        argument: ``%s``
args: (a unicode string)
        Additional parameters to the command
        argument: ``%s``
dimension: (3 or 2 or 4, nipype default value: 3)
        image dimension (2, 3, or 4)
        argument: ``--image-dimensionality %d``
prior_probability_threshold: (a float)
        requires: prior_weighting
use_random_seed: (a boolean, nipype default value: True)
        use random seed value over constant
        argument: ``--use-random-seed %d``
convergence_threshold: (a float)
        requires: n_iterations
mrf_radius: (a list of items which are an integer (int or long))
        requires: mrf_smoothing_factor
mrf_smoothing_factor: (a float)
        argument: ``%s``
num_threads: (an integer (int or long), nipype default value: 1)
        Number of ITK threads to use
icm_use_synchronous_update: (a boolean)
        argument: ``%s``
out_classified_image_name: (a file name)
        argument: ``%s``
save_posteriors: (a boolean)
environ: (a dictionary with keys which are a bytes or None or a value
          of class 'str' and with values which are a bytes or None or a
          value of class 'str', nipype default value: {})
        Environment variables

Outputs:

classified_image: (an existing file name)
posteriors: (a list of items which are a file name)

BrainExtraction

Wraps the executable command antsBrainExtraction.sh.

Examples

>>> from nipype.interfaces.ants.segmentation import BrainExtraction
>>> brainextraction = BrainExtraction()
>>> brainextraction.inputs.dimension = 3
>>> brainextraction.inputs.anatomical_image ='T1.nii.gz'
>>> brainextraction.inputs.brain_template = 'study_template.nii.gz'
>>> brainextraction.inputs.brain_probability_mask ='ProbabilityMaskOfStudyTemplate.nii.gz'
>>> brainextraction.cmdline
'antsBrainExtraction.sh -a T1.nii.gz -m ProbabilityMaskOfStudyTemplate.nii.gz -e study_template.nii.gz -d 3 -s nii.gz -o highres001_'
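
Within a larger pipeline the interface is usually wrapped in a Node so that outputs such as BrainExtractionBrain and BrainExtractionMask can be connected to downstream interfaces. A minimal sketch using nipype's generic engine (node and workflow names are placeholders):

>>> from nipype import Node, Workflow
>>> from nipype.interfaces.utility import IdentityInterface
>>> inputnode = Node(IdentityInterface(fields=['t1']), name='inputnode')
>>> bex = Node(BrainExtraction(dimension=3,
...                            brain_template='study_template.nii.gz',
...                            brain_probability_mask='ProbabilityMaskOfStudyTemplate.nii.gz'),
...            name='brain_extraction')
>>> wf = Workflow(name='anat_preproc')
>>> wf.connect(inputnode, 't1', bex, 'anatomical_image')
>>> wf.run()                                        # doctest: +SKIP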

Inputs:

[Mandatory]
brain_template: (an existing file name)
        Anatomical template created using e.g. LPBA40 data set with
        buildtemplateparallel.sh in ANTs.
        argument: ``-e %s``
brain_probability_mask: (an existing file name)
        Brain probability mask created using e.g. the LPBA40 data set, whose
        brain masks are warped to the anatomical template and averaged,
        resulting in a probability image.
        argument: ``-m %s``
anatomical_image: (an existing file name)
        Structural image, typically T1. If more than one anatomical image is
        specified, subsequently specified images are used during the
        segmentation process. However, only the first image is used in the
        registration of priors. Our suggestion would be to specify the T1 as
        the first image.
        argument: ``-a %s``

[Optional]
debug: (a boolean)
        If > 0, runs a faster version of the script. Only for testing.
        Implies -u 0. Requires single thread computation for complete
        reproducibility.
        argument: ``-z 1``
dimension: (3 or 2, nipype default value: 3)
        image dimension (2 or 3)
        argument: ``-d %d``
image_suffix: (a unicode string, nipype default value: nii.gz)
        any of standard ITK formats, nii.gz is default
        argument: ``-s %s``
extraction_registration_mask: (an existing file name)
        Mask (defined in the template space) used during registration for
        brain extraction. To limit the metric computation to a specific
        region.
        argument: ``-f %s``
num_threads: (an integer (int or long), nipype default value: 1)
        Number of ITK threads to use
use_floatingpoint_precision: (0 or 1)
        Use floating point precision in registrations (default = 0)
        argument: ``-q %d``
keep_temporary_files: (an integer (int or long))
        Keep brain extraction/segmentation warps, etc. (default = 0).
        argument: ``-k %d``
out_prefix: (a unicode string, nipype default value: highres001_)
        Prefix that is prepended to all output files (default =
        highres001_)
        argument: ``-o %s``
use_random_seeding: (0 or 1)
        Use random number generated from system clock in Atropos (default =
        1)
        argument: ``-u %d``
args: (a unicode string)
        Additional parameters to the command
        argument: ``%s``
environ: (a dictionary with keys which are a bytes or None or a value
          of class 'str' and with values which are a bytes or None or a
          value of class 'str', nipype default value: {})
        Environment variables

Outputs:

BrainExtractionPriorWarped: (an existing file name)
BrainExtractionMask: (an existing file name)
        brain extraction mask
BrainExtractionPrior1Warp: (an existing file name)
BrainExtractionPrior1InverseWarp: (an existing file name)
BrainExtractionBrain: (an existing file name)
        brain extraction image
BrainExtractionGM: (an existing file name)
        segmentation mask with only grey matter
BrainExtractionSegmentation: (an existing file name)
        segmentation mask with CSF, GM, and WM
BrainExtractionCSF: (an existing file name)
        segmentation mask with only CSF
N4Truncated0: (an existing file name)
BrainExtractionInitialAffineFixed: (an existing file name)
BrainExtractionWM: (an existing file name)
        segmentation mask with only white matter
BrainExtractionPrior0GenericAffine: (an existing file name)
BrainExtractionLaplacian: (an existing file name)
BrainExtractionTemplateLaplacian: (an existing file name)
N4Corrected0: (an existing file name)
        N4 bias field corrected image
BrainExtractionTmp: (an existing file name)
BrainExtractionInitialAffineMoving: (an existing file name)
BrainExtractionInitialAffine: (an existing file name)

CorticalThickness

Wraps the executable command antsCorticalThickness.sh.

Examples

>>> from nipype.interfaces.ants.segmentation import CorticalThickness
>>> corticalthickness = CorticalThickness()
>>> corticalthickness.inputs.dimension = 3
>>> corticalthickness.inputs.anatomical_image ='T1.nii.gz'
>>> corticalthickness.inputs.brain_template = 'study_template.nii.gz'
>>> corticalthickness.inputs.brain_probability_mask ='ProbabilityMaskOfStudyTemplate.nii.gz'
>>> corticalthickness.inputs.segmentation_priors = ['BrainSegmentationPrior01.nii.gz',
...                                                 'BrainSegmentationPrior02.nii.gz',
...                                                 'BrainSegmentationPrior03.nii.gz',
...                                                 'BrainSegmentationPrior04.nii.gz']
>>> corticalthickness.inputs.t1_registration_template = 'brain_study_template.nii.gz'
>>> corticalthickness.cmdline
'antsCorticalThickness.sh -a T1.nii.gz -m ProbabilityMaskOfStudyTemplate.nii.gz -e study_template.nii.gz -d 3 -s nii.gz -o antsCT_ -p nipype_priors/BrainSegmentationPrior%02d.nii.gz -t brain_study_template.nii.gz'
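
The distance-prior controls documented below (posterior_formulation and label_propagation) are passed through as plain strings. A hedged sketch, where the label ID and the [lambda,boundaryProbability] values are purely illustrative:

>>> corticalthickness.inputs.posterior_formulation = 'Aristotle[1]'
>>> corticalthickness.inputs.label_propagation = '4[0.5,0.95]'
>>> corticalthickness.inputs.quick_registration = True
>>> ct_result = corticalthickness.run()             # doctest: +SKIP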

Inputs:

[Mandatory]
t1_registration_template: (an existing file name)
        Anatomical *intensity* template (assumed to be skull-stripped). A
        common case is to use the skull-stripped version of the template
        specified with the -e option.
        argument: ``-t %s``
brain_template: (an existing file name)
        Anatomical *intensity* template (possibly created using a population
        data set with buildtemplateparallel.sh in ANTs). This template is
        *not* skull-stripped.
        argument: ``-e %s``
segmentation_priors: (a list of items which are an existing file
          name)
        argument: ``-p %s``
brain_probability_mask: (an existing file name)
        brain probability mask in template space
        argument: ``-m %s``
anatomical_image: (an existing file name)
        Structural *intensity* image, typically T1. If more than one
        anatomical image is specified, subsequently specified images are
        used during the segmentation process. However, only the first image
        is used in the registration of priors. Our suggestion would be to
        specify the T1 as the first image.
        argument: ``-a %s``

[Optional]
prior_segmentation_weight: (a float)
        Atropos spatial prior *probability* weight for the segmentation
        argument: ``-w %f``
b_spline_smoothing: (a boolean)
        Use B-spline SyN for registrations and B-spline exponential mapping
        in DiReCT.
        argument: ``-v``
debug: (a boolean)
        If > 0, runs a faster version of the script. Only for testing.
        Implies -u 0. Requires single thread computation for complete
        reproducibility.
        argument: ``-z 1``
num_threads: (an integer (int or long), nipype default value: 1)
        Number of ITK threads to use
label_propagation: (a unicode string)
        Incorporate a distance prior on the posterior formulation. Should
        be of the form 'label[lambda,boundaryProbability]' where label is a
        value of 1,2,3,... denoting label ID. The label probability for
        anything outside the current label = boundaryProbability * exp(
        -lambda * distanceFromBoundary ) Intuitively, smaller lambda values
        will increase the spatial capture range of the distance prior. To
        apply to all label values, simply omit specifying the label, i.e. -l
        [lambda,boundaryProbability].
        argument: ``-l %s``
out_prefix: (a unicode string, nipype default value: antsCT_)
        Prefix that is prepended to all output files (default = antsCT_)
        argument: ``-o %s``
use_random_seeding: (0 or 1)
        Use random number generated from system clock in Atropos (default =
        1)
        argument: ``-u %d``
posterior_formulation: (a unicode string)
        Atropos posterior formulation and whether or not to use mixture
        model proportions, e.g. 'Socrates[1]' (default) or 'Aristotle[1]'.
        Choose the latter if you want to use the distance priors (see also the
        -l option for label propagation control).
        argument: ``-b %s``
args: (a unicode string)
        Additional parameters to the command
        argument: ``%s``
dimension: (3 or 2, nipype default value: 3)
        image dimension (2 or 3)
        argument: ``-d %d``
image_suffix: (a unicode string, nipype default value: nii.gz)
        any of standard ITK formats, nii.gz is default
        argument: ``-s %s``
segmentation_iterations: (an integer (int or long))
        N4 -> Atropos -> N4 iterations during segmentation (default = 3)
        argument: ``-n %d``
quick_registration: (a boolean)
        If = 1, use antsRegistrationSyNQuick.sh as the basis for
        registration during brain extraction, brain segmentation, and
        (optional) normalization to a template. Otherwise use
        antsRegistrationSyN.sh (default = 0).
        argument: ``-q 1``
extraction_registration_mask: (an existing file name)
        Mask (defined in the template space) used during registration for
        brain extraction.
        argument: ``-f %s``
use_floatingpoint_precision: (0 or 1)
        Use floating point precision in registrations (default = 0)
        argument: ``-j %d``
cortical_label_image: (an existing file name)
        Cortical ROI labels to use as a prior for ATITH.
keep_temporary_files: (an integer (int or long))
        Keep brain extraction/segmentation warps, etc. (default = 0).
        argument: ``-k %d``
max_iterations: (an integer (int or long))
        ANTS registration max iterations (default = 100x100x70x20)
        argument: ``-i %d``
environ: (a dictionary with keys which are a bytes or None or a value
          of class 'str' and with values which are a bytes or None or a
          value of class 'str', nipype default value: {})
        Environment variables

Outputs:

BrainExtractionMask: (an existing file name)
        brain extraction mask
BrainSegmentationN4: (an existing file name)
        N4 corrected image
BrainSegmentationPosteriors: (a list of items which are an existing
          file name)
        Posterior probability images
SubjectToTemplate0GenericAffine: (an existing file name)
        Template to subject inverse affine
CorticalThickness: (an existing file name)
        cortical thickness file
TemplateToSubject1GenericAffine: (an existing file name)
        Template to subject affine
BrainSegmentation: (an existing file name)
        brain segmentation image
TemplateToSubject0Warp: (an existing file name)
        Template to subject warp
BrainVolumes: (an existing file name)
        Brain volumes as text
CorticalThicknessNormedToTemplate: (an existing file name)
        Normalized cortical thickness
ExtractedBrainN4: (an existing file name)
        extracted brain from N4 image
SubjectToTemplate1Warp: (an existing file name)
        Template to subject inverse warp
SubjectToTemplateLogJacobian: (an existing file name)
        Template to subject log jacobian

DenoiseImage

Wraps the executable command DenoiseImage.

Examples

>>> import copy
>>> from nipype.interfaces.ants import DenoiseImage
>>> denoise = DenoiseImage()
>>> denoise.inputs.dimension = 3
>>> denoise.inputs.input_image = 'im1.nii'
>>> denoise.cmdline
'DenoiseImage -d 3 -i im1.nii -n Gaussian -o im1_noise_corrected.nii -s 1'
>>> denoise_2 = copy.deepcopy(denoise)
>>> denoise_2.inputs.output_image = 'output_corrected_image.nii.gz'
>>> denoise_2.inputs.noise_model = 'Rician'
>>> denoise_2.inputs.shrink_factor = 2
>>> denoise_2.cmdline
'DenoiseImage -d 3 -i im1.nii -n Rician -o output_corrected_image.nii.gz -s 2'
>>> denoise_3 = DenoiseImage()
>>> denoise_3.inputs.input_image = 'im1.nii'
>>> denoise_3.inputs.save_noise = True
>>> denoise_3.cmdline
'DenoiseImage -i im1.nii -n Gaussian -o [ im1_noise_corrected.nii, im1_noise.nii ] -s 1'
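
To denoise several images with the same settings, the interface can be wrapped in a MapNode that iterates over input_image; a minimal sketch (the file list is a placeholder):

>>> from nipype import MapNode
>>> denoise_many = MapNode(DenoiseImage(dimension=3, noise_model='Rician'),
...                        iterfield=['input_image'], name='denoise_many')
>>> denoise_many.inputs.input_image = ['im1.nii', 'im2.nii']
>>> denoise_many.run()                              # doctest: +SKIP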

Inputs:

[Mandatory]
input_image: (an existing file name)
        A scalar image is expected as input for noise correction.
        argument: ``-i %s``
save_noise: (a boolean, nipype default value: False)
        True if the estimated noise should be saved to file.
        mutually_exclusive: noise_image

[Optional]
shrink_factor: (an integer (int or long), nipype default value: 1)
        Running noise correction on large images can be time consuming. To
        lessen computation time, the input image can be resampled. The
        shrink factor, specified as a single integer, describes this
        resampling. Shrink factor = 1 is the default.
        argument: ``-s %s``
environ: (a dictionary with keys which are a bytes or None or a value
          of class 'str' and with values which are a bytes or None or a
          value of class 'str', nipype default value: {})
        Environment variables
verbose: (a boolean)
        Verbose output.
        argument: ``-v``
dimension: (2 or 3 or 4)
        This option forces the image to be treated as a specified-
        dimensional image. If not specified, the program tries to infer the
        dimensionality from the input image.
        argument: ``-d %d``
noise_image: (a file name)
        Filename for the estimated noise.
noise_model: ('Gaussian' or 'Rician', nipype default value: Gaussian)
        Employ a Rician or Gaussian noise model.
        argument: ``-n %s``
num_threads: (an integer (int or long), nipype default value: 1)
        Number of ITK threads to use
args: (a unicode string)
        Additional parameters to the command
        argument: ``%s``
output_image: (a file name)
        The output consists of the noise corrected version of the input
        image.
        argument: ``-o %s``

Outputs:

output_image: (an existing file name)
noise_image: (a file name)

JointFusion

Wraps the executable command jointfusion.

Examples

>>> from nipype.interfaces.ants import JointFusion
>>> at = JointFusion()
>>> at.inputs.dimension = 3
>>> at.inputs.modalities = 1
>>> at.inputs.method = 'Joint[0.1,2]'
>>> at.inputs.output_label_image ='fusion_labelimage_output.nii'
>>> at.inputs.warped_intensity_images = ['im1.nii',
...                                      'im2.nii',
...                                      'im3.nii']
>>> at.inputs.warped_label_images = ['segmentation0.nii.gz',
...                                  'segmentation1.nii.gz',
...                                  'segmentation1.nii.gz']
>>> at.inputs.target_image = 'T1.nii'
>>> at.cmdline
'jointfusion 3 1 -m Joint[0.1,2] -tg T1.nii -g im1.nii -g im2.nii -g im3.nii -l segmentation0.nii.gz -l segmentation1.nii.gz -l segmentation1.nii.gz fusion_labelimage_output.nii'
>>> at.inputs.method = 'Joint'
>>> at.inputs.alpha = 0.5
>>> at.inputs.beta = 1
>>> at.inputs.patch_radius = [3,2,1]
>>> at.inputs.search_radius = [1,2,3]
>>> at.cmdline
'jointfusion 3 1 -m Joint[0.5,1] -rp 3x2x1 -rs 1x2x3 -tg T1.nii -g im1.nii -g im2.nii -g im3.nii -l segmentation0.nii.gz -l segmentation1.nii.gz -l segmentation1.nii.gz fusion_labelimage_output.nii'
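
Atlases can also be grouped, weighted, and restricted with the optional -gp/-gpw/-x flags documented below. A hedged sketch in which the group IDs (one per atlas) and group weights (one per group) are purely illustrative:

>>> at.inputs.atlas_group_id = [1, 1, 2]
>>> at.inputs.atlas_group_weights = [2, 1]
>>> at.inputs.exclusion_region = 'mask.nii'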

Inputs:

[Mandatory]
warped_label_images: (a list of items which are an existing file
          name)
        Warped atlas segmentations
        argument: ``-l %s...``
output_label_image: (a file name)
        Output fusion label map image
        argument: ``%s``, position: -1
warped_intensity_images: (a list of items which are an existing file
          name)
        Warped atlas images
        argument: ``-g %s...``
modalities: (an integer (int or long))
        Number of modalities or features
        argument: ``%d``, position: 1
target_image: (a list of items which are an existing file name)
        Target image(s)
        argument: ``-tg %s...``
dimension: (3 or 2 or 4, nipype default value: 3)
        image dimension (2, 3, or 4)
        argument: ``%d``, position: 0

[Optional]
atlas_group_weights: (a list of items which are a value of class
          'int')
        Assign the voting weights to each atlas group
        argument: ``-gpw %d...``
exclusion_region: (an existing file name)
        Specify an exclusion region for the given label.
        argument: ``-x %s``
alpha: (a float, nipype default value: 0.0)
        Regularization term added to matrix Mx for inverse
        requires: method
atlas_group_id: (a list of items which are a value of class 'int')
        Assign a group ID for each atlas
        argument: ``-gp %d...``
num_threads: (an integer (int or long), nipype default value: 1)
        Number of ITK threads to use
beta: (an integer (int or long), nipype default value: 0)
        Exponent for mapping intensity difference to joint error
        requires: method
search_radius: (a list of items which are a value of class 'int')
        Local search radius. Default: 3x3x3
        argument: ``-rs %s``
patch_radius: (a list of items which are a value of class 'int')
        Patch radius for similarity measures, scalar or vector. Default:
        2x2x2
        argument: ``-rp %s``
environ: (a dictionary with keys which are a bytes or None or a value
          of class 'str' and with values which are a bytes or None or a
          value of class 'str', nipype default value: {})
        Environment variables
method: (a unicode string, nipype default value: )
        Select voting method. Options: Joint (Joint Label Fusion). May be
        followed by optional parameters in brackets, e.g., -m Joint[0.1,2]
        argument: ``-m %s``
args: (a unicode string)
        Additional parameters to the command
        argument: ``%s``

Outputs:

output_label_image: (an existing file name)

KellyKapowski

Wraps the executable command KellyKapowski.

Nipype Interface to ANTs’ KellyKapowski, also known as DiReCT.

DiReCT is a registration-based estimate of cortical thickness. It was published in S. R. Das, B. B. Avants, M. Grossman, and J. C. Gee, Registration based cortical thickness measurement, Neuroimage 2009, 45:867–879.

Examples

>>> from nipype.interfaces.ants.segmentation import KellyKapowski
>>> kk = KellyKapowski()
>>> kk.inputs.dimension = 3
>>> kk.inputs.segmentation_image = "segmentation0.nii.gz"
>>> kk.inputs.convergence = "[45,0.0,10]"
>>> kk.inputs.thickness_prior_estimate = 10
>>> kk.cmdline
'KellyKapowski --convergence "[45,0.0,10]" --output "[segmentation0_cortical_thickness.nii.gz,segmentation0_warped_white_matter.nii.gz]" --image-dimensionality 3 --gradient-step 0.025000 --maximum-number-of-invert-displacement-field-iterations 20 --number-of-integration-points 10 --segmentation-image "[segmentation0.nii.gz,2,3]" --smoothing-variance 1.000000 --smoothing-velocity-field-parameter 1.500000 --thickness-prior-estimate 10.000000'
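
Instead of relying only on the hard segmentation, DiReCT can be given explicit tissue probability maps via gray_matter_prob_image and white_matter_prob_image (documented below). A hedged sketch with placeholder file names, e.g. Atropos posteriors:

>>> kk.inputs.gray_matter_prob_image = 'POSTERIOR_02.nii.gz'    # doctest: +SKIP
>>> kk.inputs.white_matter_prob_image = 'POSTERIOR_03.nii.gz'   # doctest: +SKIP
>>> kk_result = kk.run()                                        # doctest: +SKIP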

Inputs:

[Mandatory]
segmentation_image: (an existing file name)
        A segmentation image must be supplied labeling the gray and white
        matters. Default values = 2 and 3, respectively.
        argument: ``--segmentation-image "%s"``

[Optional]
thickness_prior_estimate: (a float, nipype default value: 10)
        Provides a prior constraint on the final thickness measurement in
        mm.
        argument: ``--thickness-prior-estimate %f``
max_invert_displacement_field_iters: (an integer (int or long),
          nipype default value: 20)
        Maximum number of iterations for estimating the invert displacement
        field.
        argument: ``--maximum-number-of-invert-displacement-field-iterations
        %d``
dimension: (3 or 2, nipype default value: 3)
        image dimension (2 or 3)
        argument: ``--image-dimensionality %d``
thickness_prior_image: (an existing file name)
        An image containing spatially varying prior thickness values.
        argument: ``--thickness-prior-image "%s"``
warped_white_matter: (a file name)
        Filename for the warped white matter file.
smoothing_variance: (a float, nipype default value: 1.0)
        Defines the Gaussian smoothing of the hit and total images.
        argument: ``--smoothing-variance %f``
gray_matter_label: (an integer (int or long), nipype default value:
          2)
        The label value for the gray matter label in the segmentation_image.
number_integration_points: (an integer (int or long), nipype default
          value: 10)
        Number of compositions of the diffeomorphism per iteration.
        argument: ``--number-of-integration-points %d``
num_threads: (an integer (int or long), nipype default value: 1)
        Number of ITK threads to use
smoothing_velocity_field: (a float, nipype default value: 1.5)
        Defines the Gaussian smoothing of the velocity field (default =
        1.5). If the b-spline smoothing option is chosen, then this defines
        the isotropic mesh spacing for the smoothing spline (default = 15).
        argument: ``--smoothing-velocity-field-parameter %f``
gradient_step: (a float, nipype default value: 0.025)
        Gradient step size for the optimization.
        argument: ``--gradient-step %f``
white_matter_label: (an integer (int or long), nipype default value:
          3)
        The label value for the white matter label in the
        segmentation_image.
gray_matter_prob_image: (an existing file name)
        In addition to the segmentation image, a gray matter probability
        image can be used. If no such image is supplied, one is created
        using the segmentation image and a variance of 1.0 mm.
        argument: ``--gray-matter-probability-image "%s"``
environ: (a dictionary with keys which are a bytes or None or a value
          of class 'str' and with values which are a bytes or None or a
          value of class 'str', nipype default value: {})
        Environment variables
convergence: (a unicode string, nipype default value: )
        Convergence is determined by fitting a line to the normalized energy
        profile of the last N iterations (where N is specified by the window
        size) and determining the slope which is then compared with the
        convergence threshold.
        argument: ``--convergence "%s"``
use_bspline_smoothing: (a boolean)
        Sets the option for B-spline smoothing of the velocity field.
        argument: ``--use-bspline-smoothing 1``
cortical_thickness: (a file name)
        Filename for the cortical thickness.
        argument: ``--output "%s"``
args: (a unicode string)
        Additional parameters to the command
        argument: ``%s``
white_matter_prob_image: (an existing file name)
        In addition to the segmentation image, a white matter probability
        image can be used. If no such image is supplied, one is created
        using the segmentation image and a variance of 1.0 mm.
        argument: ``--white-matter-probability-image "%s"``

Outputs:

warped_white_matter: (a file name)
        A warped white matter image.
cortical_thickness: (a file name)
        A thickness map defined in the segmented gray matter.

LaplacianThickness

Wraps the executable command LaplacianThickness.

Calculates the cortical thickness from white matter and gray matter segmentation images.

Examples

>>> from nipype.interfaces.ants import LaplacianThickness
>>> cort_thick = LaplacianThickness()
>>> cort_thick.inputs.input_wm = 'white_matter.nii.gz'
>>> cort_thick.inputs.input_gm = 'gray_matter.nii.gz'
>>> cort_thick.cmdline
'LaplacianThickness white_matter.nii.gz gray_matter.nii.gz white_matter_thickness.nii.gz'
>>> cort_thick.inputs.output_image = 'output_thickness.nii.gz'
>>> cort_thick.cmdline
'LaplacianThickness white_matter.nii.gz gray_matter.nii.gz output_thickness.nii.gz'

Inputs:

[Mandatory]
input_wm: (a file name)
        white matter segmentation image
        argument: ``%s``, position: 1
input_gm: (a file name)
        gray matter segmentation image
        argument: ``%s``, position: 2

[Optional]
smooth_param: (a float)
        argument: ``smoothparam=%d``, position: 4
output_image: (a file name)
        name of output file
        argument: ``%s``, position: 3
opt_tolerance: (a float)
        argument: ``optional-laplacian-tolerance=%d``, position: 8
sulcus_prior: (a boolean)
        argument: ``use-sulcus-prior``, position: 7
dT: (a float)
        argument: ``dT=%d``, position: 6
prior_thickness: (a float)
        argument: ``priorthickval=%d``, position: 5
num_threads: (an integer (int or long), nipype default value: 1)
        Number of ITK threads to use
args: (a unicode string)
        Additional parameters to the command
        argument: ``%s``
environ: (a dictionary with keys which are a bytes or None or a value
          of class 'str' and with values which are a bytes or None or a
          value of class 'str', nipype default value: {})
        Environment variables

Outputs:

output_image: (an existing file name)
        Cortical thickness

N4BiasFieldCorrection

Wraps the executable command N4BiasFieldCorrection.

N4 is a variant of the popular N3 (nonparametric nonuniform normalization) retrospective bias correction algorithm. Based on the assumption that the corruption of the low frequency bias field can be modeled as a convolution of the intensity histogram by a Gaussian, the basic algorithmic protocol is to iterate between deconvolving the intensity histogram by a Gaussian, remapping the intensities, and then spatially smoothing this result by a B-spline modeling of the bias field itself. The modifications from and improvements obtained over the original N3 algorithm are described in [Tustison2010].

[Tustison2010] N. Tustison et al., N4ITK: Improved N3 Bias Correction, IEEE Transactions on Medical Imaging, 29(6):1310-1320, June 2010.

Examples

>>> import copy
>>> from nipype.interfaces.ants import N4BiasFieldCorrection
>>> n4 = N4BiasFieldCorrection()
>>> n4.inputs.dimension = 3
>>> n4.inputs.input_image = 'structural.nii'
>>> n4.inputs.bspline_fitting_distance = 300
>>> n4.inputs.shrink_factor = 3
>>> n4.inputs.n_iterations = [50,50,30,20]
>>> n4.cmdline
'N4BiasFieldCorrection --bspline-fitting [ 300 ] -d 3 --input-image structural.nii --convergence [ 50x50x30x20 ] --output structural_corrected.nii --shrink-factor 3'
>>> n4_2 = copy.deepcopy(n4)
>>> n4_2.inputs.convergence_threshold = 1e-6
>>> n4_2.cmdline
'N4BiasFieldCorrection --bspline-fitting [ 300 ] -d 3 --input-image structural.nii --convergence [ 50x50x30x20, 1e-06 ] --output structural_corrected.nii --shrink-factor 3'
>>> n4_3 = copy.deepcopy(n4_2)
>>> n4_3.inputs.bspline_order = 5
>>> n4_3.cmdline
'N4BiasFieldCorrection --bspline-fitting [ 300, 5 ] -d 3 --input-image structural.nii --convergence [ 50x50x30x20, 1e-06 ] --output structural_corrected.nii --shrink-factor 3'
>>> n4_4 = N4BiasFieldCorrection()
>>> n4_4.inputs.input_image = 'structural.nii'
>>> n4_4.inputs.save_bias = True
>>> n4_4.inputs.dimension = 3
>>> n4_4.cmdline
'N4BiasFieldCorrection -d 3 --input-image structural.nii --output [ structural_corrected.nii, structural_bias.nii ]'
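
Correction can be restricted to a region and weighted (e.g. by a white-matter probability map) with the optional mask_image and weight_image inputs documented below; a minimal sketch ('weights.nii' is a placeholder file name):

>>> n4_5 = N4BiasFieldCorrection()
>>> n4_5.inputs.input_image = 'structural.nii'
>>> n4_5.inputs.mask_image = 'mask.nii'
>>> n4_5.inputs.weight_image = 'weights.nii'
>>> n4_5.inputs.copy_header = True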

Inputs:

[Mandatory]
input_image: (a file name)
        input for bias correction. Negative values or values close to zero
        should be processed prior to correction
        argument: ``--input-image %s``
save_bias: (a boolean, nipype default value: False)
        True if the estimated bias should be saved to file.
        mutually_exclusive: bias_image
copy_header: (a boolean, nipype default value: False)
        copy headers of the original image into the output (corrected) file

[Optional]
shrink_factor: (an integer (int or long))
        argument: ``--shrink-factor %d``
bias_image: (a file name)
        Filename for the estimated bias.
mask_image: (a file name)
        image to specify region to perform final bias correction in
        argument: ``--mask-image %s``
convergence_threshold: (a float)
        requires: n_iterations
dimension: (3 or 2 or 4, nipype default value: 3)
        image dimension (2, 3 or 4)
        argument: ``-d %d``
weight_image: (a file name)
        image for relative weighting (e.g. probability map of the white
        matter) of voxels during the B-spline fitting.
        argument: ``--weight-image %s``
num_threads: (an integer (int or long), nipype default value: 1)
        Number of ITK threads to use
bspline_fitting_distance: (a float)
        argument: ``--bspline-fitting %s``
environ: (a dictionary with keys which are a bytes or None or a value
          of class 'str' and with values which are a bytes or None or a
          value of class 'str', nipype default value: {})
        Environment variables
n_iterations: (a list of items which are an integer (int or long))
        argument: ``--convergence %s``
output_image: (a unicode string)
        output file name
        argument: ``--output %s``
args: (a unicode string)
        Additional parameters to the command
        argument: ``%s``
bspline_order: (an integer (int or long))
        requires: bspline_fitting_distance

Outputs:

bias_image: (an existing file name)
        Estimated bias
output_image: (an existing file name)
        Bias-corrected image