interfaces.ants.segmentation

AntsJointFusion

Wraps the executable command antsJointFusion.

Examples

>>> from nipype.interfaces.ants import AntsJointFusion
>>> antsjointfusion = AntsJointFusion()
>>> antsjointfusion.inputs.out_label_fusion = 'ants_fusion_label_output.nii'
>>> antsjointfusion.inputs.atlas_image = [ ['rc1s1.nii','rc1s2.nii'] ]
>>> antsjointfusion.inputs.atlas_segmentation_image = ['segmentation0.nii.gz']
>>> antsjointfusion.inputs.target_image = ['im1.nii']
>>> antsjointfusion.cmdline
"antsJointFusion -a 0.1 -g ['rc1s1.nii', 'rc1s2.nii'] -l segmentation0.nii.gz -b 2.0 -o ants_fusion_label_output.nii -s 3x3x3 -t ['im1.nii']"
>>> antsjointfusion.inputs.target_image = [ ['im1.nii', 'im2.nii'] ]
>>> antsjointfusion.cmdline
"antsJointFusion -a 0.1 -g ['rc1s1.nii', 'rc1s2.nii'] -l segmentation0.nii.gz -b 2.0 -o ants_fusion_label_output.nii -s 3x3x3 -t ['im1.nii', 'im2.nii']"
>>> antsjointfusion.inputs.atlas_image = [ ['rc1s1.nii','rc1s2.nii'],
...                                        ['rc2s1.nii','rc2s2.nii'] ]
>>> antsjointfusion.inputs.atlas_segmentation_image = ['segmentation0.nii.gz',
...                                                    'segmentation1.nii.gz']
>>> antsjointfusion.cmdline
"antsJointFusion -a 0.1 -g ['rc1s1.nii', 'rc1s2.nii'] -g ['rc2s1.nii', 'rc2s2.nii'] -l segmentation0.nii.gz -l segmentation1.nii.gz -b 2.0 -o ants_fusion_label_output.nii -s 3x3x3 -t ['im1.nii', 'im2.nii']"
>>> antsjointfusion.inputs.dimension = 3
>>> antsjointfusion.inputs.alpha = 0.5
>>> antsjointfusion.inputs.beta = 1.0
>>> antsjointfusion.inputs.patch_radius = [3,2,1]
>>> antsjointfusion.inputs.search_radius = [3]
>>> antsjointfusion.cmdline
"antsJointFusion -a 0.5 -g ['rc1s1.nii', 'rc1s2.nii'] -g ['rc2s1.nii', 'rc2s2.nii'] -l segmentation0.nii.gz -l segmentation1.nii.gz -b 1.0 -d 3 -o ants_fusion_label_output.nii -p 3x2x1 -s 3 -t ['im1.nii', 'im2.nii']"
>>> antsjointfusion.inputs.search_radius = ['mask.nii']
>>> antsjointfusion.inputs.verbose = True
>>> antsjointfusion.inputs.exclusion_image = ['roi01.nii', 'roi02.nii']
>>> antsjointfusion.inputs.exclusion_image_label = ['1','2']
>>> antsjointfusion.cmdline
"antsJointFusion -a 0.5 -g ['rc1s1.nii', 'rc1s2.nii'] -g ['rc2s1.nii', 'rc2s2.nii'] -l segmentation0.nii.gz -l segmentation1.nii.gz -b 1.0 -d 3 -e 1[roi01.nii] -e 2[roi02.nii] -o ants_fusion_label_output.nii -p 3x2x1 -s mask.nii -t ['im1.nii', 'im2.nii'] -v"
>>> antsjointfusion.inputs.out_label_fusion = 'ants_fusion_label_output.nii'
>>> antsjointfusion.inputs.out_intensity_fusion_name_format = 'ants_joint_fusion_intensity_%d.nii.gz'
>>> antsjointfusion.inputs.out_label_post_prob_name_format = 'ants_joint_fusion_posterior_%d.nii.gz'
>>> antsjointfusion.inputs.out_atlas_voting_weight_name_format = 'ants_joint_fusion_voting_weight_%d.nii.gz'
>>> antsjointfusion.cmdline
"antsJointFusion -a 0.5 -g ['rc1s1.nii', 'rc1s2.nii'] -g ['rc2s1.nii', 'rc2s2.nii'] -l segmentation0.nii.gz -l segmentation1.nii.gz -b 1.0 -d 3 -e 1[roi01.nii] -e 2[roi02.nii]  -o [ants_fusion_label_output.nii, ants_joint_fusion_intensity_%d.nii.gz, ants_joint_fusion_posterior_%d.nii.gz, ants_joint_fusion_voting_weight_%d.nii.gz] -p 3x2x1 -s mask.nii -t ['im1.nii', 'im2.nii'] -v"

Inputs:

[Mandatory]
target_image: (a list of items which are a list of items which are a
          pathlike object or string representing an existing file)
        The target image (or multimodal target images) assumed to be aligned
        to a common image domain.
        argument: ``-t %s``
atlas_image: (a list of items which are a list of items which are a
          pathlike object or string representing an existing file)
        The atlas image (or multimodal atlas images) assumed to be aligned
        to a common image domain.
        argument: ``-g %s...``
atlas_segmentation_image: (a list of items which are a pathlike
          object or string representing an existing file)
        The atlas segmentation images. For performing label fusion the
        number of specified segmentations should be identical to the number
        of atlas image sets.
        argument: ``-l %s...``

[Optional]
dimension: (3 or 2 or 4)
        This option forces the image to be treated as a specified-
        dimensional image. If not specified, the program tries to infer the
        dimensionality from the input image.
        argument: ``-d %d``
alpha: (a float, nipype default value: 0.1)
        Regularization term added to matrix Mx for calculating the inverse.
        Default = 0.1
        argument: ``-a %s``
beta: (a float, nipype default value: 2.0)
        Exponent for mapping intensity difference to the joint error.
        Default = 2.0
        argument: ``-b %s``
retain_label_posterior_images: (a boolean, nipype default value:
          False)
        Retain label posterior probability images. Requires atlas
        segmentations to be specified. Default = false
        argument: ``-r``
        requires: atlas_segmentation_image
retain_atlas_voting_images: (a boolean, nipype default value: False)
        Retain atlas voting images. Default = false
        argument: ``-f``
constrain_nonnegative: (a boolean, nipype default value: False)
        Constrain solution to non-negative weights.
        argument: ``-c``
patch_radius: (a list of items which are a value of class 'int')
        Patch radius for similarity measures. Default: 2x2x2
        argument: ``-p %s``
patch_metric: ('PC' or 'MSQ')
        Metric to be used in determining the most similar neighborhood
        patch. Options include Pearson's correlation (PC) and mean squares
        (MSQ). Default = PC (Pearson correlation).
        argument: ``-m %s``
search_radius: (a list of from 1 to 3 items which are any value,
          nipype default value: [3, 3, 3])
        Search radius for similarity measures. Default = 3x3x3. One can also
        specify an image where the value at the voxel specifies the
        isotropic search radius at that voxel.
        argument: ``-s %s``
exclusion_image_label: (a list of items which are a unicode string)
        Specify a label for the exclusion region.
        argument: ``-e %s``
        requires: exclusion_image
exclusion_image: (a list of items which are a pathlike object or
          string representing an existing file)
        Specify an exclusion region for the given label.
mask_image: (a pathlike object or string representing an existing
          file)
        If a mask image is specified, fusion is only performed in the mask
        region.
        argument: ``-x %s``
out_label_fusion: (a pathlike object or string representing a file)
        The output label fusion image.
        argument: ``%s``
out_intensity_fusion_name_format: (a unicode string)
        Optional intensity fusion image file name format. (e.g.
        "antsJointFusionIntensity_%d.nii.gz")
out_label_post_prob_name_format: (a unicode string)
        Optional label posterior probability image file name format.
        requires: out_label_fusion, out_intensity_fusion_name_format
out_atlas_voting_weight_name_format: (a unicode string)
        Optional atlas voting weight image file name format.
        requires: out_label_fusion, out_intensity_fusion_name_format,
          out_label_post_prob_name_format
verbose: (a boolean)
        Verbose output.
        argument: ``-v``
num_threads: (an integer (int or long), nipype default value: 1)
        Number of ITK threads to use
args: (a unicode string)
        Additional parameters to the command
        argument: ``%s``
environ: (a dictionary with keys which are a bytes or None or a value
          of class 'str' and with values which are a bytes or None or a
          value of class 'str', nipype default value: {})
        Environment variables

Outputs:

out_label_fusion: (a pathlike object or string representing an
          existing file)
out_intensity_fusion_name_format: (a unicode string)
out_label_post_prob_name_format: (a unicode string)
out_atlas_voting_weight_name_format: (a unicode string)
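
The cmdline examples above show list-valued radius traits rendered as ``x``-joined strings (e.g. ``patch_radius = [3,2,1]`` becomes ``-p 3x2x1``). The sketch below illustrates that formatting convention only; it is not nipype's internal trait-formatting code.

```python
# Minimal sketch (not nipype's implementation) of how list-valued
# radius traits appear on the antsJointFusion command line.
def format_radius(flag, radius):
    """Join integer radii with 'x', as in '-p 3x2x1' or '-s 3x3x3'."""
    return "%s %s" % (flag, "x".join(str(r) for r in radius))

print(format_radius("-p", [3, 2, 1]))  # -p 3x2x1
print(format_radius("-s", [3, 3, 3]))  # -s 3x3x3
```

Note that ``search_radius`` may also be a single image filename (see its description above), in which case the value is passed through unchanged, as in ``-s mask.nii``.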

Atropos

Wraps the executable command Atropos.

A finite mixture modeling (FMM) segmentation approach with possibilities for specifying prior constraints. These prior constraints include the specification of a prior label image, prior probability images (one for each class), and/or an MRF prior to enforce spatial smoothing of the labels. Similar algorithms include FAST and SPM.

Examples

>>> from nipype.interfaces.ants import Atropos
>>> at = Atropos()
>>> at.inputs.dimension = 3
>>> at.inputs.intensity_images = 'structural.nii'
>>> at.inputs.mask_image = 'mask.nii'
>>> at.inputs.initialization = 'PriorProbabilityImages'
>>> at.inputs.prior_probability_images = ['rc1s1.nii', 'rc1s2.nii']
>>> at.inputs.number_of_tissue_classes = 2
>>> at.inputs.prior_weighting = 0.8
>>> at.inputs.prior_probability_threshold = 0.0000001
>>> at.inputs.likelihood_model = 'Gaussian'
>>> at.inputs.mrf_smoothing_factor = 0.2
>>> at.inputs.mrf_radius = [1, 1, 1]
>>> at.inputs.icm_use_synchronous_update = True
>>> at.inputs.maximum_number_of_icm_terations = 1
>>> at.inputs.n_iterations = 5
>>> at.inputs.convergence_threshold = 0.000001
>>> at.inputs.posterior_formulation = 'Socrates'
>>> at.inputs.use_mixture_model_proportions = True
>>> at.inputs.save_posteriors = True
>>> at.cmdline
'Atropos --image-dimensionality 3 --icm [1,1] --initialization PriorProbabilityImages[2,priors/priorProbImages%02d.nii,0.8,1e-07] --intensity-image structural.nii --likelihood-model Gaussian --mask-image mask.nii --mrf [0.2,1x1x1] --convergence [5,1e-06] --output [structural_labeled.nii,POSTERIOR_%02d.nii.gz] --posterior-formulation Socrates[1] --use-random-seed 1'

Inputs:

[Mandatory]
intensity_images: (a list of items which are a pathlike object or
          string representing an existing file)
        argument: ``--intensity-image %s...``
mask_image: (a pathlike object or string representing an existing
          file)
        argument: ``--mask-image %s``
initialization: ('Random' or 'Otsu' or 'KMeans' or
          'PriorProbabilityImages' or 'PriorLabelImage')
        argument: ``%s``
        requires: number_of_tissue_classes
number_of_tissue_classes: (an integer (int or long))

[Optional]
dimension: (3 or 2 or 4, nipype default value: 3)
        image dimension (2, 3, or 4)
        argument: ``--image-dimensionality %d``
prior_probability_images: (a list of items which are a pathlike
          object or string representing an existing file)
prior_weighting: (a float)
prior_probability_threshold: (a float)
        requires: prior_weighting
likelihood_model: (a unicode string)
        argument: ``--likelihood-model %s``
mrf_smoothing_factor: (a float)
        argument: ``%s``
mrf_radius: (a list of items which are an integer (int or long))
        requires: mrf_smoothing_factor
icm_use_synchronous_update: (a boolean)
        argument: ``%s``
maximum_number_of_icm_terations: (an integer (int or long))
        requires: icm_use_synchronous_update
n_iterations: (an integer (int or long))
        argument: ``%s``
convergence_threshold: (a float)
        requires: n_iterations
posterior_formulation: (a unicode string)
        argument: ``%s``
use_random_seed: (a boolean, nipype default value: True)
        use random seed value over constant
        argument: ``--use-random-seed %d``
use_mixture_model_proportions: (a boolean)
        requires: posterior_formulation
out_classified_image_name: (a pathlike object or string representing
          a file)
        argument: ``%s``
save_posteriors: (a boolean)
output_posteriors_name_template: (a unicode string, nipype default
          value: POSTERIOR_%02d.nii.gz)
num_threads: (an integer (int or long), nipype default value: 1)
        Number of ITK threads to use
args: (a unicode string)
        Additional parameters to the command
        argument: ``%s``
environ: (a dictionary with keys which are a bytes or None or a value
          of class 'str' and with values which are a bytes or None or a
          value of class 'str', nipype default value: {})
        Environment variables

Outputs:

classified_image: (a pathlike object or string representing an
          existing file)
posteriors: (a list of items which are a pathlike object or string
          representing a file)
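
To make the finite mixture modeling idea behind Atropos concrete, here is a toy 1-D sketch: each voxel intensity is assigned to the tissue class whose Gaussian likelihood, weighted by its prior, is highest. The class parameters below are hypothetical and this is a conceptual illustration, not the ANTs algorithm (which also includes the MRF spatial prior and EM updates).

```python
import math

def gaussian_pdf(x, mean, var):
    """Univariate Gaussian density."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def classify(intensity, classes):
    """Assign a voxel to the class maximizing prior * likelihood.

    classes: list of (prior_weight, mean, variance) per tissue class.
    Returns the 0-based index of the winning class.
    """
    posteriors = [p * gaussian_pdf(intensity, m, v) for p, m, v in classes]
    return posteriors.index(max(posteriors))

# Two hypothetical tissue classes: dark (mean 30) and bright (mean 80).
classes = [(0.5, 30.0, 100.0), (0.5, 80.0, 100.0)]
print(classify(35.0, classes))  # 0
print(classify(75.0, classes))  # 1
```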

BrainExtraction

Wraps the executable command antsBrainExtraction.sh.

Examples

>>> from nipype.interfaces.ants.segmentation import BrainExtraction
>>> brainextraction = BrainExtraction()
>>> brainextraction.inputs.dimension = 3
>>> brainextraction.inputs.anatomical_image ='T1.nii.gz'
>>> brainextraction.inputs.brain_template = 'study_template.nii.gz'
>>> brainextraction.inputs.brain_probability_mask ='ProbabilityMaskOfStudyTemplate.nii.gz'
>>> brainextraction.cmdline
'antsBrainExtraction.sh -a T1.nii.gz -m ProbabilityMaskOfStudyTemplate.nii.gz -e study_template.nii.gz -d 3 -s nii.gz -o highres001_'

Inputs:

[Mandatory]
anatomical_image: (a pathlike object or string representing an
          existing file)
        Structural image, typically T1. If more than one anatomical image is
        specified, subsequently specified images are used during the
        segmentation process. However, only the first image is used in the
        registration of priors. Our suggestion would be to specify the T1 as
        the first image.
        argument: ``-a %s``
brain_template: (a pathlike object or string representing an existing
          file)
        Anatomical template created using e.g. LPBA40 data set with
        buildtemplateparallel.sh in ANTs.
        argument: ``-e %s``
brain_probability_mask: (a pathlike object or string representing an
          existing file)
        Brain probability mask created using e.g. LPBA40 data set which have
        brain masks defined, and warped to anatomical template and averaged
        resulting in a probability image.
        argument: ``-m %s``

[Optional]
dimension: (3 or 2, nipype default value: 3)
        image dimension (2 or 3)
        argument: ``-d %d``
out_prefix: (a unicode string, nipype default value: highres001_)
        Prefix that is prepended to all output files (default =
        highres001_)
        argument: ``-o %s``
extraction_registration_mask: (a pathlike object or string
          representing an existing file)
        Mask (defined in the template space) used during registration for
        brain extraction. To limit the metric computation to a specific
        region.
        argument: ``-f %s``
image_suffix: (a unicode string, nipype default value: nii.gz)
        any of standard ITK formats, nii.gz is default
        argument: ``-s %s``
use_random_seeding: (0 or 1)
        Use random number generated from system clock in Atropos (default =
        1)
        argument: ``-u %d``
keep_temporary_files: (an integer (int or long))
        Keep brain extraction/segmentation warps, etc (default = 0).
        argument: ``-k %d``
use_floatingpoint_precision: (0 or 1)
        Use floating point precision in registrations (default = 0)
        argument: ``-q %d``
debug: (a boolean)
        If > 0, runs a faster version of the script. Only for testing.
        Implies -u 0. Requires single thread computation for complete
        reproducibility.
        argument: ``-z 1``
num_threads: (an integer (int or long), nipype default value: 1)
        Number of ITK threads to use
args: (a unicode string)
        Additional parameters to the command
        argument: ``%s``
environ: (a dictionary with keys which are a bytes or None or a value
          of class 'str' and with values which are a bytes or None or a
          value of class 'str', nipype default value: {})
        Environment variables

Outputs:

BrainExtractionMask: (a pathlike object or string representing an
          existing file)
        brain extraction mask
BrainExtractionBrain: (a pathlike object or string representing an
          existing file)
        brain extraction image
BrainExtractionCSF: (a pathlike object or string representing an
          existing file)
        segmentation mask with only CSF
BrainExtractionGM: (a pathlike object or string representing an
          existing file)
        segmentation mask with only grey matter
BrainExtractionInitialAffine: (a pathlike object or string
          representing an existing file)
BrainExtractionInitialAffineFixed: (a pathlike object or string
          representing an existing file)
BrainExtractionInitialAffineMoving: (a pathlike object or string
          representing an existing file)
BrainExtractionLaplacian: (a pathlike object or string representing
          an existing file)
BrainExtractionPrior0GenericAffine: (a pathlike object or string
          representing an existing file)
BrainExtractionPrior1InverseWarp: (a pathlike object or string
          representing an existing file)
BrainExtractionPrior1Warp: (a pathlike object or string representing
          an existing file)
BrainExtractionPriorWarped: (a pathlike object or string representing
          an existing file)
BrainExtractionSegmentation: (a pathlike object or string
          representing an existing file)
        segmentation mask with CSF, GM, and WM
BrainExtractionTemplateLaplacian: (a pathlike object or string
          representing an existing file)
BrainExtractionTmp: (a pathlike object or string representing an
          existing file)
BrainExtractionWM: (a pathlike object or string representing an
          existing file)
        segmentation mask with only white matter
N4Corrected0: (a pathlike object or string representing an existing
          file)
        N4 bias field corrected image
N4Truncated0: (a pathlike object or string representing an existing
          file)
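
The many outputs above share a naming pattern implied by the ``out_prefix`` and ``image_suffix`` defaults: each filename is the prefix, the output name, and the suffix. This sketch assumes that convention (inferred from the defaults shown, not taken from the script itself):

```python
# Assumed naming convention for antsBrainExtraction.sh outputs,
# inferred from the out_prefix/image_suffix defaults above.
def output_name(base, prefix="highres001_", suffix="nii.gz"):
    return "%s%s.%s" % (prefix, base, suffix)

print(output_name("BrainExtractionMask"))
# highres001_BrainExtractionMask.nii.gz
print(output_name("BrainExtractionBrain"))
# highres001_BrainExtractionBrain.nii.gz
```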

CorticalThickness

Wraps the executable command antsCorticalThickness.sh.

Examples

>>> from nipype.interfaces.ants.segmentation import CorticalThickness
>>> corticalthickness = CorticalThickness()
>>> corticalthickness.inputs.dimension = 3
>>> corticalthickness.inputs.anatomical_image ='T1.nii.gz'
>>> corticalthickness.inputs.brain_template = 'study_template.nii.gz'
>>> corticalthickness.inputs.brain_probability_mask ='ProbabilityMaskOfStudyTemplate.nii.gz'
>>> corticalthickness.inputs.segmentation_priors = ['BrainSegmentationPrior01.nii.gz',
...                                                 'BrainSegmentationPrior02.nii.gz',
...                                                 'BrainSegmentationPrior03.nii.gz',
...                                                 'BrainSegmentationPrior04.nii.gz']
>>> corticalthickness.inputs.t1_registration_template = 'brain_study_template.nii.gz'
>>> corticalthickness.cmdline
'antsCorticalThickness.sh -a T1.nii.gz -m ProbabilityMaskOfStudyTemplate.nii.gz -e study_template.nii.gz -d 3 -s nii.gz -o antsCT_ -p nipype_priors/BrainSegmentationPrior%02d.nii.gz -t brain_study_template.nii.gz'

Inputs:

[Mandatory]
anatomical_image: (a pathlike object or string representing an
          existing file)
        Structural *intensity* image, typically T1. If more than one
        anatomical image is specified, subsequently specified images are
        used during the segmentation process. However, only the first image
        is used in the registration of priors. Our suggestion would be to
        specify the T1 as the first image.
        argument: ``-a %s``
brain_template: (a pathlike object or string representing an existing
          file)
        Anatomical *intensity* template (possibly created using a population
        data set with buildtemplateparallel.sh in ANTs). This template is
        *not* skull-stripped.
        argument: ``-e %s``
brain_probability_mask: (a pathlike object or string representing an
          existing file)
        brain probability mask in template space
        argument: ``-m %s``
segmentation_priors: (a list of items which are a pathlike object or
          string representing an existing file)
        argument: ``-p %s``
t1_registration_template: (a pathlike object or string representing
          an existing file)
        Anatomical *intensity* template (assumed to be skull-stripped). A
        common case would be where this would be the same template as
        specified in the -e option which is not skull stripped.
        argument: ``-t %s``

[Optional]
dimension: (3 or 2, nipype default value: 3)
        image dimension (2 or 3)
        argument: ``-d %d``
out_prefix: (a unicode string, nipype default value: antsCT_)
        Prefix that is prepended to all output files (default = antsCT_)
        argument: ``-o %s``
image_suffix: (a unicode string, nipype default value: nii.gz)
        any of standard ITK formats, nii.gz is default
        argument: ``-s %s``
extraction_registration_mask: (a pathlike object or string
          representing an existing file)
        Mask (defined in the template space) used during registration for
        brain extraction.
        argument: ``-f %s``
keep_temporary_files: (an integer (int or long))
        Keep brain extraction/segmentation warps, etc (default = 0).
        argument: ``-k %d``
max_iterations: (an integer (int or long))
        ANTS registration max iterations (default = 100x100x70x20)
        argument: ``-i %d``
prior_segmentation_weight: (a float)
        Atropos spatial prior *probability* weight for the segmentation
        argument: ``-w %f``
segmentation_iterations: (an integer (int or long))
        N4 -> Atropos -> N4 iterations during segmentation (default = 3)
        argument: ``-n %d``
posterior_formulation: (a unicode string)
        Atropos posterior formulation and whether or not to use mixture
        model proportions, e.g. 'Socrates[1]' (default) or 'Aristotle[1]'.
        Choose the latter if you want to use the distance priors (see also the
        -l option for label propagation control).
        argument: ``-b %s``
use_floatingpoint_precision: (0 or 1)
        Use floating point precision in registrations (default = 0)
        argument: ``-j %d``
use_random_seeding: (0 or 1)
        Use random number generated from system clock in Atropos (default =
        1)
        argument: ``-u %d``
b_spline_smoothing: (a boolean)
        Use B-spline SyN for registrations and B-spline exponential mapping
        in DiReCT.
        argument: ``-v``
cortical_label_image: (a pathlike object or string representing an
          existing file)
        Cortical ROI labels to use as a prior for ATITH.
label_propagation: (a unicode string)
        Incorporate a distance prior on the posterior formulation. Should
        be of the form 'label[lambda,boundaryProbability]' where label is a
        value of 1,2,3,... denoting label ID. The label probability for
        anything outside the current label = boundaryProbability * exp(
        -lambda * distanceFromBoundary ) Intuitively, smaller lambda values
        will increase the spatial capture range of the distance prior. To
        apply to all label values, simply omit specifying the label, i.e. -l
        [lambda,boundaryProbability].
        argument: ``-l %s``
quick_registration: (a boolean)
        If = 1, use antsRegistrationSyNQuick.sh as the basis for
        registration during brain extraction, brain segmentation, and
        (optional) normalization to a template. Otherwise use
        antsRegistrationSyN.sh (default = 0).
        argument: ``-q 1``
debug: (a boolean)
        If > 0, runs a faster version of the script. Only for testing.
        Implies -u 0. Requires single thread computation for complete
        reproducibility.
        argument: ``-z 1``
num_threads: (an integer (int or long), nipype default value: 1)
        Number of ITK threads to use
args: (a unicode string)
        Additional parameters to the command
        argument: ``%s``
environ: (a dictionary with keys which are a bytes or None or a value
          of class 'str' and with values which are a bytes or None or a
          value of class 'str', nipype default value: {})
        Environment variables

Outputs:

BrainExtractionMask: (a pathlike object or string representing an
          existing file)
        brain extraction mask
ExtractedBrainN4: (a pathlike object or string representing an
          existing file)
        extracted brain from N4 image
BrainSegmentation: (a pathlike object or string representing an
          existing file)
        brain segmentation image
BrainSegmentationN4: (a pathlike object or string representing an
          existing file)
        N4 corrected image
BrainSegmentationPosteriors: (a list of items which are a pathlike
          object or string representing an existing file)
        Posterior probability images
CorticalThickness: (a pathlike object or string representing an
          existing file)
        cortical thickness file
TemplateToSubject1GenericAffine: (a pathlike object or string
          representing an existing file)
        Template to subject affine
TemplateToSubject0Warp: (a pathlike object or string representing an
          existing file)
        Template to subject warp
SubjectToTemplate1Warp: (a pathlike object or string representing an
          existing file)
        Subject to template warp
SubjectToTemplate0GenericAffine: (a pathlike object or string
          representing an existing file)
        Subject to template affine
SubjectToTemplateLogJacobian: (a pathlike object or string
          representing an existing file)
        Subject to template log jacobian
CorticalThicknessNormedToTemplate: (a pathlike object or string
          representing an existing file)
        Normalized cortical thickness
BrainVolumes: (a pathlike object or string representing an existing
          file)
        Brain volumes as text
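
The ``label_propagation`` description above gives the distance prior as boundaryProbability * exp(-lambda * distanceFromBoundary). A small worked sketch of that formula (illustrative only; the parameter values are hypothetical):

```python
import math

def distance_prior(boundary_probability, lam, distance):
    """Label probability outside the label region, per the -l option:
    boundaryProbability * exp(-lambda * distanceFromBoundary)."""
    return boundary_probability * math.exp(-lam * distance)

# Smaller lambda decays more slowly, i.e. a larger spatial capture range.
print(distance_prior(0.5, 1.0, 2.0))  # ~0.068
print(distance_prior(0.5, 0.1, 2.0))  # ~0.409
```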

DenoiseImage

Wraps the executable command DenoiseImage.

Examples

>>> import copy
>>> from nipype.interfaces.ants import DenoiseImage
>>> denoise = DenoiseImage()
>>> denoise.inputs.dimension = 3
>>> denoise.inputs.input_image = 'im1.nii'
>>> denoise.cmdline
'DenoiseImage -d 3 -i im1.nii -n Gaussian -o im1_noise_corrected.nii -s 1'
>>> denoise_2 = copy.deepcopy(denoise)
>>> denoise_2.inputs.output_image = 'output_corrected_image.nii.gz'
>>> denoise_2.inputs.noise_model = 'Rician'
>>> denoise_2.inputs.shrink_factor = 2
>>> denoise_2.cmdline
'DenoiseImage -d 3 -i im1.nii -n Rician -o output_corrected_image.nii.gz -s 2'
>>> denoise_3 = DenoiseImage()
>>> denoise_3.inputs.input_image = 'im1.nii'
>>> denoise_3.inputs.save_noise = True
>>> denoise_3.cmdline
'DenoiseImage -i im1.nii -n Gaussian -o [ im1_noise_corrected.nii, im1_noise.nii ] -s 1'

Inputs:

[Mandatory]
input_image: (a pathlike object or string representing an existing
          file)
        A scalar image is expected as input for noise correction.
        argument: ``-i %s``
save_noise: (a boolean, nipype default value: False)
        True if the estimated noise should be saved to file.
        mutually_exclusive: noise_image

[Optional]
dimension: (2 or 3 or 4)
        This option forces the image to be treated as a specified-
        dimensional image. If not specified, the program tries to infer the
        dimensionality from the input image.
        argument: ``-d %d``
noise_model: ('Gaussian' or 'Rician', nipype default value: Gaussian)
        Employ a Rician or Gaussian noise model.
        argument: ``-n %s``
shrink_factor: (an integer (int or long), nipype default value: 1)
        Running noise correction on large images can be time consuming. To
        lessen computation time, the input image can be resampled. The
        shrink factor, specified as a single integer, describes this
        resampling. Shrink factor = 1 is the default.
        argument: ``-s %s``
output_image: (a pathlike object or string representing a file)
        The output consists of the noise corrected version of the input
        image.
        argument: ``-o %s``
noise_image: (a pathlike object or string representing a file)
        Filename for the estimated noise.
verbose: (a boolean)
        Verbose output.
        argument: ``-v``
num_threads: (an integer (int or long), nipype default value: 1)
        Number of ITK threads to use
args: (a unicode string)
        Additional parameters to the command
        argument: ``%s``
environ: (a dictionary with keys which are a bytes or None or a value
          of class 'str' and with values which are a bytes or None or a
          value of class 'str', nipype default value: {})
        Environment variables

Outputs:

output_image: (a pathlike object or string representing an existing
          file)
noise_image: (a pathlike object or string representing a file)
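
The doctests above show default output names derived from the input: ``im1.nii`` becomes ``im1_noise_corrected.nii`` and (with ``save_noise``) ``im1_noise.nii``. The sketch below reproduces that naming pattern as inferred from the doctest output; it is not nipype's actual filename helper.

```python
import os

def default_outputs(input_image):
    """Default DenoiseImage output names, inferred from the doctests:
    a '_noise_corrected' / '_noise' suffix inserted before the extension."""
    base, ext = os.path.splitext(input_image)
    return base + "_noise_corrected" + ext, base + "_noise" + ext

print(default_outputs("im1.nii"))
# ('im1_noise_corrected.nii', 'im1_noise.nii')
```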

JointFusion

Wraps the executable command jointfusion.

Examples

>>> from nipype.interfaces.ants import JointFusion
>>> at = JointFusion()
>>> at.inputs.dimension = 3
>>> at.inputs.modalities = 1
>>> at.inputs.method = 'Joint[0.1,2]'
>>> at.inputs.output_label_image ='fusion_labelimage_output.nii'
>>> at.inputs.warped_intensity_images = ['im1.nii',
...                                      'im2.nii',
...                                      'im3.nii']
>>> at.inputs.warped_label_images = ['segmentation0.nii.gz',
...                                  'segmentation1.nii.gz',
...                                  'segmentation1.nii.gz']
>>> at.inputs.target_image = 'T1.nii'
>>> at.cmdline
'jointfusion 3 1 -m Joint[0.1,2] -tg T1.nii -g im1.nii -g im2.nii -g im3.nii -l segmentation0.nii.gz -l segmentation1.nii.gz -l segmentation1.nii.gz fusion_labelimage_output.nii'
>>> at.inputs.method = 'Joint'
>>> at.inputs.alpha = 0.5
>>> at.inputs.beta = 1
>>> at.inputs.patch_radius = [3,2,1]
>>> at.inputs.search_radius = [1,2,3]
>>> at.cmdline
'jointfusion 3 1 -m Joint[0.5,1] -rp 3x2x1 -rs 1x2x3 -tg T1.nii -g im1.nii -g im2.nii -g im3.nii -l segmentation0.nii.gz -l segmentation1.nii.gz -l segmentation1.nii.gz fusion_labelimage_output.nii'

Inputs:

[Mandatory]
dimension: (3 or 2 or 4, nipype default value: 3)
        image dimension (2, 3, or 4)
        argument: ``%d``, position: 0
modalities: (an integer (int or long))
        Number of modalities or features
        argument: ``%d``, position: 1
warped_intensity_images: (a list of items which are a pathlike object
          or string representing an existing file)
        Warped atlas images
        argument: ``-g %s...``
target_image: (a list of items which are a pathlike object or string
          representing an existing file)
        Target image(s)
        argument: ``-tg %s...``
warped_label_images: (a list of items which are a pathlike object or
          string representing an existing file)
        Warped atlas segmentations
        argument: ``-l %s...``
output_label_image: (a pathlike object or string representing a file)
        Output fusion label map image
        argument: ``%s``, position: -1

[Optional]
method: (a unicode string, nipype default value: )
        Select voting method. Options: Joint (Joint Label Fusion). May be
        followed by optional parameters in brackets, e.g., -m Joint[0.1,2]
        argument: ``-m %s``
alpha: (a float, nipype default value: 0.0)
        Regularization term added to matrix Mx for inverse
        requires: method
beta: (an integer (int or long), nipype default value: 0)
        Exponent for mapping intensity difference to joint error
        requires: method
patch_radius: (a list of items which are a value of class 'int')
        Patch radius for similarity measures, scalar or vector. Default:
        2x2x2
        argument: ``-rp %s``
search_radius: (a list of items which are a value of class 'int')
        Local search radius. Default: 3x3x3
        argument: ``-rs %s``
exclusion_region: (a pathlike object or string representing an
          existing file)
        Specify an exclusion region for the given label.
        argument: ``-x %s``
atlas_group_id: (a list of items which are a value of class 'int')
        Assign a group ID for each atlas
        argument: ``-gp %d...``
atlas_group_weights: (a list of items which are a value of class
          'int')
        Assign the voting weights to each atlas group
        argument: ``-gpw %d...``
num_threads: (an integer (int or long), nipype default value: 1)
        Number of ITK threads to use
args: (a unicode string)
        Additional parameters to the command
        argument: ``%s``
environ: (a dictionary with keys which are a bytes or None or a value
          of class 'str' and with values which are a bytes or None or a
          value of class 'str', nipype default value: {})
        Environment variables

Outputs:

output_label_image: (a pathlike object or string representing an
          existing file)

KellyKapowski

Link to code

Wraps the executable command KellyKapowski.

Nipype Interface to ANTs’ KellyKapowski, also known as DiReCT.

DiReCT is a registration-based estimate of cortical thickness. It was published in S. R. Das, B. B. Avants, M. Grossman, and J. C. Gee, Registration based cortical thickness measurement, Neuroimage 2009, 45:867–879.

Examples

>>> from nipype.interfaces.ants.segmentation import KellyKapowski
>>> kk = KellyKapowski()
>>> kk.inputs.dimension = 3
>>> kk.inputs.segmentation_image = "segmentation0.nii.gz"
>>> kk.inputs.convergence = "[45,0.0,10]"
>>> kk.inputs.thickness_prior_estimate = 10
>>> kk.cmdline
'KellyKapowski --convergence "[45,0.0,10]" --output "[segmentation0_cortical_thickness.nii.gz,segmentation0_warped_white_matter.nii.gz]" --image-dimensionality 3 --gradient-step 0.025000 --maximum-number-of-invert-displacement-field-iterations 20 --number-of-integration-points 10 --segmentation-image "[segmentation0.nii.gz,2,3]" --smoothing-variance 1.000000 --smoothing-velocity-field-parameter 1.500000 --thickness-prior-estimate 10.000000'
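
The ``--output`` pair in the command above is derived from ``segmentation_image`` when no explicit output names are given. A hypothetical sketch of that derivation, inferred from the doctest output rather than from the nipype source:

```python
def default_kk_outputs(segmentation_image):
    # Hypothetical helper: strip the image extension and append fixed
    # suffixes, reproducing the default output pair seen in the example
    # command line above. Not the actual nipype implementation.
    base = segmentation_image
    for ext in (".nii.gz", ".nii"):
        if base.endswith(ext):
            base = base[: -len(ext)]
            break
    return (base + "_cortical_thickness.nii.gz",
            base + "_warped_white_matter.nii.gz")

default_kk_outputs("segmentation0.nii.gz")
# ('segmentation0_cortical_thickness.nii.gz',
#  'segmentation0_warped_white_matter.nii.gz')
```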

Inputs:

[Mandatory]
segmentation_image: (a pathlike object or string representing an
          existing file)
        A segmentation image must be supplied labeling the gray and white
        matter. Default values = 2 and 3, respectively.
        argument: ``--segmentation-image "%s"``

[Optional]
dimension: (3 or 2, nipype default value: 3)
        image dimension (2 or 3)
        argument: ``--image-dimensionality %d``
gray_matter_label: (an integer (int or long), nipype default value:
          2)
        The label value for the gray matter label in the segmentation_image.
white_matter_label: (an integer (int or long), nipype default value:
          3)
        The label value for the white matter label in the
        segmentation_image.
gray_matter_prob_image: (a pathlike object or string representing an
          existing file)
        In addition to the segmentation image, a gray matter probability
        image can be used. If no such image is supplied, one is created
        using the segmentation image and a variance of 1.0 mm.
        argument: ``--gray-matter-probability-image "%s"``
white_matter_prob_image: (a pathlike object or string representing an
          existing file)
        In addition to the segmentation image, a white matter probability
        image can be used. If no such image is supplied, one is created
        using the segmentation image and a variance of 1.0 mm.
        argument: ``--white-matter-probability-image "%s"``
convergence: (a unicode string, nipype default value: )
        Convergence is determined by fitting a line to the normalized energy
        profile of the last N iterations (where N is specified by the window
        size) and determining the slope which is then compared with the
        convergence threshold.
        argument: ``--convergence "%s"``
thickness_prior_estimate: (a float, nipype default value: 10)
        Provides a prior constraint on the final thickness measurement in
        mm.
        argument: ``--thickness-prior-estimate %f``
thickness_prior_image: (a pathlike object or string representing an
          existing file)
        An image containing spatially varying prior thickness values.
        argument: ``--thickness-prior-image "%s"``
gradient_step: (a float, nipype default value: 0.025)
        Gradient step size for the optimization.
        argument: ``--gradient-step %f``
smoothing_variance: (a float, nipype default value: 1.0)
        Defines the Gaussian smoothing of the hit and total images.
        argument: ``--smoothing-variance %f``
smoothing_velocity_field: (a float, nipype default value: 1.5)
        Defines the Gaussian smoothing of the velocity field (default =
        1.5). If the b-spline smoothing option is chosen, then this defines
        the isotropic mesh spacing for the smoothing spline (default = 15).
        argument: ``--smoothing-velocity-field-parameter %f``
use_bspline_smoothing: (a boolean)
        Sets the option for B-spline smoothing of the velocity field.
        argument: ``--use-bspline-smoothing 1``
number_integration_points: (an integer (int or long), nipype default
          value: 10)
        Number of compositions of the diffeomorphism per iteration.
        argument: ``--number-of-integration-points %d``
max_invert_displacement_field_iters: (an integer (int or long),
          nipype default value: 20)
        Maximum number of iterations for estimating the invert
        displacement field.
        argument: ``--maximum-number-of-invert-displacement-field-iterations
        %d``
cortical_thickness: (a pathlike object or string representing a file)
        Filename for the cortical thickness.
        argument: ``--output "%s"``
warped_white_matter: (a pathlike object or string representing a
          file)
        Filename for the warped white matter file.
num_threads: (an integer (int or long), nipype default value: 1)
        Number of ITK threads to use
args: (a unicode string)
        Additional parameters to the command
        argument: ``%s``
environ: (a dictionary with keys which are a bytes or None or a value
          of class 'str' and with values which are a bytes or None or a
          value of class 'str', nipype default value: {})
        Environment variables

Outputs:

cortical_thickness: (a pathlike object or string representing a file)
        A thickness map defined in the segmented gray matter.
warped_white_matter: (a pathlike object or string representing a
          file)
        A warped white matter image.

LaplacianThickness

Link to code

Wraps the executable command LaplacianThickness.

Calculates the cortical thickness from an anatomical image

Examples

>>> from nipype.interfaces.ants import LaplacianThickness
>>> cort_thick = LaplacianThickness()
>>> cort_thick.inputs.input_wm = 'white_matter.nii.gz'
>>> cort_thick.inputs.input_gm = 'gray_matter.nii.gz'
>>> cort_thick.cmdline
'LaplacianThickness white_matter.nii.gz gray_matter.nii.gz white_matter_thickness.nii.gz'
>>> cort_thick.inputs.output_image = 'output_thickness.nii.gz'
>>> cort_thick.cmdline
'LaplacianThickness white_matter.nii.gz gray_matter.nii.gz output_thickness.nii.gz'

Inputs:

[Mandatory]
input_wm: (a pathlike object or string representing a file)
        white matter segmentation image
        argument: ``%s``, position: 1
input_gm: (a pathlike object or string representing a file)
        gray matter segmentation image
        argument: ``%s``, position: 2

[Optional]
output_image: (a pathlike object or string representing a file)
        name of output file
        argument: ``%s``, position: 3
smooth_param: (a float)
        Sigma of the Laplacian Recursive Image Filter (defaults to 1)
        argument: ``%s``, position: 4
prior_thickness: (a float)
        Prior thickness (defaults to 500)
        argument: ``%s``, position: 5
        requires: smooth_param
dT: (a float)
        Time delta used during integration (defaults to 0.01)
        argument: ``%s``, position: 6
        requires: prior_thickness
sulcus_prior: (a float)
        Positive floating point number for sulcus prior. Authors said that
        0.15 might be a reasonable value
        argument: ``%s``, position: 7
        requires: dT
tolerance: (a float)
        Tolerance to reach during optimization (defaults to 0.001)
        argument: ``%s``, position: 8
        requires: sulcus_prior
num_threads: (an integer (int or long), nipype default value: 1)
        Number of ITK threads to use
args: (a unicode string)
        Additional parameters to the command
        argument: ``%s``
environ: (a dictionary with keys which are a bytes or None or a value
          of class 'str' and with values which are a bytes or None or a
          value of class 'str', nipype default value: {})
        Environment variables

Outputs:

output_image: (a pathlike object or string representing an existing
          file)
        Cortical thickness
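
The chain of ``requires`` constraints above (``tolerance`` requires ``sulcus_prior``, which requires ``dT``, and so on) exists because every argument is purely positional: a later value is only meaningful if all earlier positions are filled. An illustrative sketch of that constraint, assuming only the positional layout documented above (not the nipype implementation):

```python
def laplacian_thickness_cmd(input_wm, input_gm, output_image,
                            smooth_param=None, prior_thickness=None,
                            dT=None, sulcus_prior=None, tolerance=None):
    # Build the positional command line; stop at the first missing
    # optional value, since a later positional argument cannot follow
    # a gap. Sketch only -- not the nipype source.
    args = ["LaplacianThickness", input_wm, input_gm, output_image]
    for value in (smooth_param, prior_thickness, dT,
                  sulcus_prior, tolerance):
        if value is None:
            break
        args.append(str(value))
    return " ".join(args)
```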

N4BiasFieldCorrection

Link to code

Wraps the executable command N4BiasFieldCorrection.

Bias field correction.

N4 is a variant of the popular N3 (nonparametric nonuniform normalization) retrospective bias correction algorithm. Based on the assumption that the corruption of the low frequency bias field can be modeled as a convolution of the intensity histogram by a Gaussian, the basic algorithmic protocol is to iterate between deconvolving the intensity histogram by a Gaussian, remapping the intensities, and then spatially smoothing this result by a B-spline modeling of the bias field itself. The modifications from and improvements obtained over the original N3 algorithm are described in [Tustison2010].

[Tustison2010] N. Tustison et al., N4ITK: Improved N3 Bias Correction, IEEE Transactions on Medical Imaging, 29(6):1310-1320, June 2010.

Examples

>>> import copy
>>> from nipype.interfaces.ants import N4BiasFieldCorrection
>>> n4 = N4BiasFieldCorrection()
>>> n4.inputs.dimension = 3
>>> n4.inputs.input_image = 'structural.nii'
>>> n4.inputs.bspline_fitting_distance = 300
>>> n4.inputs.shrink_factor = 3
>>> n4.inputs.n_iterations = [50,50,30,20]
>>> n4.cmdline
'N4BiasFieldCorrection --bspline-fitting [ 300 ] -d 3 --input-image structural.nii --convergence [ 50x50x30x20 ] --output structural_corrected.nii --shrink-factor 3'
>>> n4_2 = copy.deepcopy(n4)
>>> n4_2.inputs.convergence_threshold = 1e-6
>>> n4_2.cmdline
'N4BiasFieldCorrection --bspline-fitting [ 300 ] -d 3 --input-image structural.nii --convergence [ 50x50x30x20, 1e-06 ] --output structural_corrected.nii --shrink-factor 3'
>>> n4_3 = copy.deepcopy(n4_2)
>>> n4_3.inputs.bspline_order = 5
>>> n4_3.cmdline
'N4BiasFieldCorrection --bspline-fitting [ 300, 5 ] -d 3 --input-image structural.nii --convergence [ 50x50x30x20, 1e-06 ] --output structural_corrected.nii --shrink-factor 3'
>>> n4_4 = N4BiasFieldCorrection()
>>> n4_4.inputs.input_image = 'structural.nii'
>>> n4_4.inputs.save_bias = True
>>> n4_4.inputs.dimension = 3
>>> n4_4.cmdline
'N4BiasFieldCorrection -d 3 --input-image structural.nii --output [ structural_corrected.nii, structural_bias.nii ]'
>>> n4_5 = N4BiasFieldCorrection()
>>> n4_5.inputs.input_image = 'structural.nii'
>>> n4_5.inputs.dimension = 3
>>> n4_5.inputs.histogram_sharpening = (0.12, 0.02, 200)
>>> n4_5.cmdline
'N4BiasFieldCorrection -d 3  --histogram-sharpening [0.12,0.02,200] --input-image structural.nii --output structural_corrected.nii'

Inputs:

[Mandatory]
input_image: (a pathlike object or string representing a file)
        input for bias correction. Negative values or values close to zero
        should be processed prior to correction
        argument: ``--input-image %s``
save_bias: (a boolean, nipype default value: False)
        True if the estimated bias should be saved to file.
        mutually_exclusive: bias_image
copy_header: (a boolean, nipype default value: False)
        copy headers of the original image into the output (corrected) file

[Optional]
dimension: (3 or 2 or 4, nipype default value: 3)
        image dimension (2, 3 or 4)
        argument: ``-d %d``
mask_image: (a pathlike object or string representing a file)
        image to specify region to perform final bias correction in
        argument: ``--mask-image %s``
weight_image: (a pathlike object or string representing a file)
        image for relative weighting (e.g. probability map of the white
        matter) of voxels during the B-spline fitting.
        argument: ``--weight-image %s``
output_image: (a unicode string)
        output file name
        argument: ``--output %s``
bspline_fitting_distance: (a float)
        argument: ``--bspline-fitting %s``
bspline_order: (an integer (int or long))
        requires: bspline_fitting_distance
shrink_factor: (an integer (int or long))
        argument: ``--shrink-factor %d``
n_iterations: (a list of items which are an integer (int or long))
        argument: ``--convergence %s``
convergence_threshold: (a float)
        requires: n_iterations
bias_image: (a pathlike object or string representing a file)
        Filename for the estimated bias.
rescale_intensities: (a boolean, nipype default value: False)
        [NOTE: Only ANTs>=2.1.0]
        At each iteration, a new intensity mapping is calculated and
        applied, but there is nothing which constrains the new intensity
        range to be within certain values. The result is that the range
        can "drift" from the original at each iteration. This option
        rescales to the [min,max] range of the original image intensities
        within the user-specified mask.
        argument: ``-r``
histogram_sharpening: (a tuple of the form: (a float, a float, an
          integer (int or long)))
        Three-value tuple of histogram sharpening parameters (FWHM,
        wienerNoise, numberOfHistogramBins).
        These options describe the histogram sharpening parameters, i.e. the
        deconvolution step parameters described in the original N3
        algorithm.
        The default values have been shown to work fairly well.
        argument: ``--histogram-sharpening [%g,%g,%d]``
num_threads: (an integer (int or long), nipype default value: 1)
        Number of ITK threads to use
args: (a unicode string)
        Additional parameters to the command
        argument: ``%s``
environ: (a dictionary with keys which are a bytes or None or a value
          of class 'str' and with values which are a bytes or None or a
          value of class 'str', nipype default value: {})
        Environment variables

Outputs:

output_image: (a pathlike object or string representing an existing
          file)
        Bias-corrected image
bias_image: (a pathlike object or string representing an existing
          file)
        Estimated bias
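
The ``--convergence`` argument in the N4 examples combines ``n_iterations`` and the optional ``convergence_threshold`` into one bracketed string (``[ 50x50x30x20 ]`` or ``[ 50x50x30x20, 1e-06 ]``). A sketch of that rendering, inferred from the doctest outputs above and not taken from the nipype source:

```python
def format_convergence(n_iterations, convergence_threshold=None):
    # Iterations are 'x'-joined; the threshold, when set, is appended
    # after a comma, matching '[ 50x50x30x20, 1e-06 ]' in the examples.
    # Illustrative sketch only.
    iters = "x".join(str(i) for i in n_iterations)
    if convergence_threshold is None:
        return "[ %s ]" % iters
    return "[ %s, %g ]" % (iters, convergence_threshold)

format_convergence([50, 50, 30, 20], 1e-6)  # '[ 50x50x30x20, 1e-06 ]'
```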