interfaces.io

BIDSDataGrabber

Link to code

BIDS datagrabber module that wraps around pybids to allow arbitrary querying of BIDS datasets.

Examples

By default, the BIDSDataGrabber fetches anatomical and functional images from a project, and makes BIDS entities (e.g. subject) available for filtering outputs.

>>> bg = BIDSDataGrabber()
>>> bg.inputs.base_dir = 'ds005/'
>>> bg.inputs.subject = '01'
>>> results = bg.run() # doctest: +SKIP

User-defined output fields can also be created dynamically to return different types of outputs from the same project. All outputs are filtered on common entities, which can be explicitly defined as infields.

>>> bg = BIDSDataGrabber(infields=['subject'])
>>> bg.inputs.base_dir = 'ds005/'
>>> bg.inputs.subject = '01'
>>> bg.inputs.output_query['dwi'] = dict(datatype='dwi')
>>> results = bg.run() # doctest: +SKIP

Inputs:

[Mandatory]
base_dir: (a pathlike object or string representing an existing
          directory)
        Path to BIDS Directory.
index_derivatives: (a boolean, nipype default value: False)
        Index derivatives/ sub-directory

[Optional]
output_query: (a dictionary with keys which are a unicode string and
          with values which are a dictionary with keys which are any value
          and with values which are any value)
        Queries for outfield outputs
raise_on_empty: (a boolean, nipype default value: True)
        Generate exception if list is empty for a given field
extra_derivatives: (a list of items which are a pathlike object or
          string representing an existing directory)
        Additional derivative directories to index

Outputs:

None

DataFinder

Link to code

Search for paths that match a given regular expression. Allows a less prescriptive approach to gathering input files compared to DataGrabber. Will recursively search any subdirectories by default. This can be limited with the min/max depth options. Matched paths are available in the output ‘out_paths’. Any named groups of captured text from the regular expression are also available as outputs of the same name.

Examples

>>> from nipype.interfaces.io import DataFinder
>>> df = DataFinder()
>>> df.inputs.root_paths = '.'
>>> df.inputs.match_regex = r'.+/(?P<series_dir>.+(qT1|ep2d_fid_T1).+)/(?P<basename>.+)\.nii\.gz'
>>> result = df.run() # doctest: +SKIP
>>> result.outputs.out_paths  # doctest: +SKIP
['./027-ep2d_fid_T1_Gd4/acquisition.nii.gz',
 './018-ep2d_fid_T1_Gd2/acquisition.nii.gz',
 './016-ep2d_fid_T1_Gd1/acquisition.nii.gz',
 './013-ep2d_fid_T1_pre/acquisition.nii.gz']
>>> result.outputs.series_dir  # doctest: +SKIP
['027-ep2d_fid_T1_Gd4',
 '018-ep2d_fid_T1_Gd2',
 '016-ep2d_fid_T1_Gd1',
 '013-ep2d_fid_T1_pre']
>>> result.outputs.basename  # doctest: +SKIP
['acquisition',
 'acquisition',
 'acquisition',
 'acquisition']

Inputs:

[Mandatory]
root_paths: (a list of items which are any value or a unicode string)

[Optional]
match_regex: (a unicode string, nipype default value: (.+))
        Regular expression for matching paths.
ignore_regexes: (a list of items which are any value)
        List of regular expressions, if any match the path it will be
        ignored.
max_depth: (an integer (int or long))
        The maximum depth to search beneath the root_paths
min_depth: (an integer (int or long))
        The minimum depth to search beneath the root paths
unpack_single: (a boolean, nipype default value: False)
        Unpack single results from list

Outputs:

None

DataGrabber

Link to code

Generic datagrabber module that wraps around glob in an intelligent way to grab files for neuroimaging tasks.

Attention

Doesn’t support directories currently

Examples

>>> from nipype.interfaces.io import DataGrabber

Pick all files from current directory

>>> dg = DataGrabber()
>>> dg.inputs.template = '*'

Pick the file dicomdir/123456-1-1.dcm from the current directory

>>> dg.inputs.template = '%s/%s.dcm'
>>> dg.inputs.template_args['outfiles'] = [['dicomdir', '123456-1-1']]

Same thing but with dynamically created fields

>>> dg = DataGrabber(infields=['arg1','arg2'])
>>> dg.inputs.template = '%s/%s.nii'
>>> dg.inputs.arg1 = 'foo'
>>> dg.inputs.arg2 = 'foo'

However, this latter form can be used with iterables and iterfield in a pipeline.

Dynamically created, user-defined input and output fields

>>> dg = DataGrabber(infields=['sid'], outfields=['func','struct','ref'])
>>> dg.inputs.base_directory = '.'
>>> dg.inputs.template = '%s/%s.nii'
>>> dg.inputs.template_args['func'] = [['sid',['f3','f5']]]
>>> dg.inputs.template_args['struct'] = [['sid',['struct']]]
>>> dg.inputs.template_args['ref'] = [['sid','ref']]
>>> dg.inputs.sid = 's1'

Change the template only for the output field struct. The rest use the general template.

>>> dg.inputs.field_template = dict(struct='%s/struct.nii')
>>> dg.inputs.template_args['struct'] = [['sid']]

Inputs:

[Mandatory]
sort_filelist: (a boolean)
        Sort the filelist that matches the template
template: (a unicode string)
        Layout used to get files. Relative to base directory if defined

[Optional]
base_directory: (a pathlike object or string representing an existing
          directory)
        Path to the base directory consisting of subject data.
raise_on_empty: (a boolean, nipype default value: True)
        Generate exception if list is empty for a given field
drop_blank_outputs: (a boolean, nipype default value: False)
        Remove ``None`` entries from output lists
template_args: (a dictionary with keys which are a unicode string and
          with values which are a list of items which are a list of items
          which are any value)
        Information to plug into template

Outputs:

None

DataSink

Link to code

Generic datasink module to store structured outputs

Primarily for use within a workflow. This interface allows arbitrary creation of input attributes. The names of these attributes define the directory structure to create for storage of the files or directories.

The attributes take the following form:

string[[.[@]]string[[.[@]]string]] …

where parts between [] are optional.

An attribute such as contrasts.@con will create a ‘contrasts’ directory to store the results linked to the attribute. If the @ is left out, such as in ‘contrasts.con’, a subdirectory ‘con’ will be created under ‘contrasts’.

The general form of the output is:

'base_directory/container/parameterization/destloc/filename'

where destloc = string[[.[@]]string[[.[@]]string]] and
filename comes from the input to the connect statement.
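
For example, assuming base_directory='out', container='sub-01', parameterization disabled, and a connected file cont1.nii (all names hypothetical), the two attribute styles resolve roughly as:

contrasts.@con  ->  out/sub-01/contrasts/cont1.nii
contrasts.con   ->  out/sub-01/contrasts/con/cont1.nii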

Warning

This is not a thread-safe node because it can write to a common shared location. It will not complain when it overwrites a file.

Note

If both substitutions and regexp_substitutions are used, then substitutions are applied first followed by regexp_substitutions.

This interface cannot be used in a MapNode as the inputs are defined only when the connect statement is executed.
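
As a minimal sketch of the substitution order described above (the patterns and filenames are hypothetical):

>>> ds = DataSink()
>>> ds.inputs.base_directory = 'results_dir'
>>> ds.inputs.substitutions = [('_subject_id_', 'sub-')]  # plain replacement, applied first
>>> ds.inputs.regexp_substitutions = [(r'_run_(\d+)', r'/run-\1')]  # regexp replacement, applied second
>>> ds.run()  # doctest: +SKIP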

Examples

>>> ds = DataSink()
>>> ds.inputs.base_directory = 'results_dir'
>>> ds.inputs.container = 'subject'
>>> ds.inputs.structural = 'structural.nii'
>>> setattr(ds.inputs, 'contrasts.@con', ['cont1.nii', 'cont2.nii'])
>>> setattr(ds.inputs, 'contrasts.alt', ['cont1a.nii', 'cont2a.nii'])
>>> ds.run()  # doctest: +SKIP

To use DataSink in a MapNode, its inputs have to be defined at the time the interface is created.

>>> ds = DataSink(infields=['contrasts.@con'])
>>> ds.inputs.base_directory = 'results_dir'
>>> ds.inputs.container = 'subject'
>>> ds.inputs.structural = 'structural.nii'
>>> setattr(ds.inputs, 'contrasts.@con', ['cont1.nii', 'cont2.nii'])
>>> setattr(ds.inputs, 'contrasts.alt', ['cont1a.nii', 'cont2a.nii'])
>>> ds.run()  # doctest: +SKIP

Inputs:

[Optional]
base_directory: (a pathlike object or string representing a
          directory)
        Path to the base directory for storing data.
container: (a unicode string)
        Folder within base directory in which to store output
parameterization: (a boolean, nipype default value: True)
        store output in parametrized structure
strip_dir: (a pathlike object or string representing a directory)
        path to strip out of filename
substitutions: (a list of items which are a tuple of the form: (a
          unicode string, a unicode string))
        List of 2-tuples reflecting string to substitute and string to
        replace it with
regexp_substitutions: (a list of items which are a tuple of the form:
          (a unicode string, a unicode string))
        List of 2-tuples reflecting a pair of a Python regexp pattern and a
        replacement string. Invoked after string `substitutions`
_outputs: (a dictionary with keys which are a unicode string and with
          values which are any value, nipype default value: {})
remove_dest_dir: (a boolean, nipype default value: False)
        remove dest directory when copying dirs
creds_path: (a unicode string)
        Filepath to AWS credentials file for S3 bucket access; if not
        specified, the credentials will be taken from the AWS_ACCESS_KEY_ID
        and AWS_SECRET_ACCESS_KEY environment variables
encrypt_bucket_keys: (a boolean)
        Flag indicating whether to use S3 server-side AES-256 encryption
bucket: (any value)
        Boto3 S3 bucket for manual override of bucket
local_copy: (a unicode string)
        Copy files locally as well as to S3 bucket

Outputs:

out_file: (any value)
        datasink output

ExportFile

Link to code

Export a file to an absolute path

This interface copies an input file to a named output file. This is useful to save individual files to a specific location, instead of more flexible interfaces like DataSink.

Examples

>>> from nipype.interfaces.io import ExportFile
>>> import os
>>> import os.path as op
>>> ef = ExportFile()
>>> ef.inputs.in_file = "T1.nii.gz"
>>> os.mkdir("output_folder")
>>> ef.inputs.out_file = op.abspath("output_folder/sub1_out.nii.gz")
>>> res = ef.run()
>>> os.path.exists(res.outputs.out_file)
True

Inputs:

[Mandatory]
in_file: (a pathlike object or string representing an existing file)
        Input file name
out_file: (a pathlike object or string representing a file)
        Output file name

[Optional]
check_extension: (a boolean)
        Ensure that the input and output file extensions match
clobber: (a boolean)
        Permit overwriting existing files

Outputs:

out_file: (a pathlike object or string representing an existing file)
        Output file name

FreeSurferSource

Link to code

Generates FreeSurfer subject info from subject directories.

Examples

>>> from nipype.interfaces.io import FreeSurferSource
>>> fs = FreeSurferSource()
>>> fs.inputs.subjects_dir = '.'
>>> fs.inputs.subject_id = 'PWS04'
>>> res = fs.run() # doctest: +SKIP
>>> fs.inputs.hemi = 'lh'
>>> res = fs.run() # doctest: +SKIP

Inputs:

[Mandatory]
subjects_dir: (a pathlike object or string representing an existing
          directory)
        Freesurfer subjects directory.
subject_id: (a unicode string)
        Subject name for whom to retrieve data

[Optional]
hemi: ('both' or 'lh' or 'rh', nipype default value: both)
        Selects hemisphere specific outputs

Outputs:

T1: (a pathlike object or string representing an existing file)
        Intensity normalized whole-head volume
aseg: (a pathlike object or string representing an existing file)
        Volumetric map of regions from automatic segmentation
brain: (a pathlike object or string representing an existing file)
        Intensity normalized brain-only volume
brainmask: (a pathlike object or string representing an existing
          file)
        Skull-stripped (brain-only) volume
filled: (a pathlike object or string representing an existing file)
        Subcortical mass volume
norm: (a pathlike object or string representing an existing file)
        Normalized skull-stripped volume
nu: (a pathlike object or string representing an existing file)
        Non-uniformity corrected whole-head volume
orig: (a pathlike object or string representing an existing file)
        Base image conformed to Freesurfer space
rawavg: (a pathlike object or string representing an existing file)
        Volume formed by averaging input images
ribbon: (a list of items which are a pathlike object or string
          representing an existing file)
        Volumetric maps of cortical ribbons
wm: (a pathlike object or string representing an existing file)
        Segmented white-matter volume
wmparc: (a pathlike object or string representing an existing file)
        Aparc parcellation projected into subcortical white matter
curv: (a list of items which are a pathlike object or string
          representing an existing file)
        Maps of surface curvature
avg_curv: (a list of items which are a pathlike object or string
          representing an existing file)
        Average atlas curvature, sampled to subject
inflated: (a list of items which are a pathlike object or string
          representing an existing file)
        Inflated surface meshes
pial: (a list of items which are a pathlike object or string
          representing an existing file)
        Gray matter/pia mater surface meshes
area_pial: (a list of items which are a pathlike object or string
          representing an existing file)
        Mean area of triangles each vertex on the pial surface is associated
        with
curv_pial: (a list of items which are a pathlike object or string
          representing an existing file)
        Curvature of pial surface
smoothwm: (a list of items which are a pathlike object or string
          representing an existing file)
        Smoothed original surface meshes
sphere: (a list of items which are a pathlike object or string
          representing an existing file)
        Spherical surface meshes
sulc: (a list of items which are a pathlike object or string
          representing an existing file)
        Surface maps of sulcal depth
thickness: (a list of items which are a pathlike object or string
          representing an existing file)
        Surface maps of cortical thickness
volume: (a list of items which are a pathlike object or string
          representing an existing file)
        Surface maps of cortical volume
white: (a list of items which are a pathlike object or string
          representing an existing file)
        White/gray matter surface meshes
jacobian_white: (a list of items which are a pathlike object or
          string representing an existing file)
        Distortion required to register to spherical atlas
graymid: (a list of items which are a pathlike object or string
          representing an existing file)
        Graymid/midthickness surface meshes
label: (a list of items which are a pathlike object or string
          representing an existing file)
        Volume and surface label files
annot: (a list of items which are a pathlike object or string
          representing an existing file)
        Surface annotation files
aparc_aseg: (a list of items which are a pathlike object or string
          representing an existing file)
        Aparc parcellation projected into aseg volume
sphere_reg: (a list of items which are a pathlike object or string
          representing an existing file)
        Spherical registration file
aseg_stats: (a list of items which are a pathlike object or string
          representing an existing file)
        Automated segmentation statistics file
wmparc_stats: (a list of items which are a pathlike object or string
          representing an existing file)
        White matter parcellation statistics file
aparc_stats: (a list of items which are a pathlike object or string
          representing an existing file)
        Aparc parcellation statistics files
BA_stats: (a list of items which are a pathlike object or string
          representing an existing file)
        Brodmann Area statistics files
aparc_a2009s_stats: (a list of items which are a pathlike object or
          string representing an existing file)
        Aparc a2009s parcellation statistics files
curv_stats: (a list of items which are a pathlike object or string
          representing an existing file)
        Curvature statistics files
entorhinal_exvivo_stats: (a list of items which are a pathlike object
          or string representing an existing file)
        Entorhinal exvivo statistics files

IOBase

Link to code

Inputs:

None

Outputs:

None

JSONFileGrabber

Link to code

Datagrabber interface that loads a json file and generates an output for every first-level object

Example

>>> import pprint
>>> from nipype.interfaces.io import JSONFileGrabber
>>> jsonSource = JSONFileGrabber()
>>> jsonSource.inputs.defaults = {'param1': 'overrideMe', 'param3': 1.0}
>>> res = jsonSource.run()
>>> pprint.pprint(res.outputs.get())
{'param1': 'overrideMe', 'param3': 1.0}
>>> jsonSource.inputs.in_file = os.path.join(datadir, 'jsongrabber.txt')
>>> res = jsonSource.run()
>>> pprint.pprint(res.outputs.get())  # doctest: +ELLIPSIS
{'param1': 'exampleStr', 'param2': 4, 'param3': 1.0}

Inputs:

[Optional]
in_file: (a pathlike object or string representing an existing file)
        JSON source file
defaults: (a dictionary with keys which are any value and with values
          which are any value)
        JSON dictionary that sets default outputvalues, overridden by values
        found in in_file

Outputs:

None

JSONFileSink

Link to code

Very simple frontend for storing values into a JSON file. Entries already existing in in_dict will be overridden by matching entries dynamically added as inputs.

Warning

This is not a thread-safe node because it can write to a common shared location. It will not complain when it overwrites a file.

Examples

>>> jsonsink = JSONFileSink(input_names=['subject_id',
...                         'some_measurement'])
>>> jsonsink.inputs.subject_id = 's1'
>>> jsonsink.inputs.some_measurement = 11.4
>>> jsonsink.run() # doctest: +SKIP

Using a dictionary as input:

>>> dictsink = JSONFileSink()
>>> dictsink.inputs.in_dict = {'subject_id': 's1',
...                            'some_measurement': 11.4}
>>> dictsink.run() # doctest: +SKIP

Inputs:

[Optional]
out_file: (a pathlike object or string representing a file)
        JSON sink file
in_dict: (a dictionary with keys which are any value and with values
          which are any value, nipype default value: {})
        input JSON dictionary
_outputs: (a dictionary with keys which are any value and with values
          which are any value, nipype default value: {})

Outputs:

out_file: (a pathlike object or string representing a file)
        JSON sink file

MySQLSink

Link to code

Very simple frontend for storing values into a MySQL database.

Examples

>>> sql = MySQLSink(input_names=['subject_id', 'some_measurement'])
>>> sql.inputs.database_name = 'my_database'
>>> sql.inputs.table_name = 'experiment_results'
>>> sql.inputs.username = 'root'
>>> sql.inputs.password = 'secret'
>>> sql.inputs.subject_id = 's1'
>>> sql.inputs.some_measurement = 11.4
>>> sql.run() # doctest: +SKIP

Inputs:

[Mandatory]
host: (a unicode string, nipype default value: localhost)
        mutually_exclusive: config
        requires: username, password
config: (a pathlike object or string representing a file)
        MySQL Options File (same format as my.cnf)
        mutually_exclusive: host
database_name: (a unicode string)
        Otherwise known as the schema name
table_name: (a unicode string)

[Optional]
username: (a unicode string)
password: (a unicode string)

Outputs:

None

S3DataGrabber

Link to code

Generic datagrabber module that wraps around glob in an intelligent way for neuroimaging tasks to grab files from Amazon S3

Works exactly like DataGrabber, except that you must specify an S3 “bucket” and “bucket_path” to search for your data and a “local_directory” in which to store the data. “local_directory” should be a location on HDFS for Spark jobs. Additionally, “template” uses regex-style formatting rather than the glob-style formatting found in the original DataGrabber.

Examples

>>> s3grab = S3DataGrabber(infields=['subj_id'], outfields=["func", "anat"])
>>> s3grab.inputs.bucket = 'openneuro'
>>> s3grab.inputs.sort_filelist = True
>>> s3grab.inputs.template = '*'
>>> s3grab.inputs.anon = True
>>> s3grab.inputs.bucket_path = 'ds000101/ds000101_R2.0.0/uncompressed/'
>>> s3grab.inputs.local_directory = '/tmp'
>>> s3grab.inputs.field_template = {'anat': '%s/anat/%s_T1w.nii.gz',
...                                 'func': '%s/func/%s_task-simon_run-1_bold.nii.gz'}
>>> s3grab.inputs.template_args = {'anat': [['subj_id', 'subj_id']],
...                                'func': [['subj_id', 'subj_id']]}
>>> s3grab.inputs.subj_id = 'sub-01'
>>> s3grab.run()  # doctest: +SKIP

Inputs:

[Mandatory]
bucket: (a unicode string)
        Amazon S3 bucket where your data is stored
sort_filelist: (a boolean)
        Sort the filelist that matches the template
template: (a unicode string)
        Layout used to get files. Relative to bucket_path if defined. Uses
        regex rather than glob style formatting.

[Optional]
anon: (a boolean, nipype default value: False)
        Use anonymous connection to s3. If this is set to True, boto may
        print a urlopen error, but this does not prevent data from being
        downloaded.
region: (a unicode string, nipype default value: us-east-1)
        Region of s3 bucket
bucket_path: (a unicode string, nipype default value: )
        Location within your bucket for subject data.
local_directory: (a pathlike object or string representing an
          existing directory)
        Path to the local directory for subject data to be downloaded and
        accessed. Should be on HDFS for Spark jobs.
raise_on_empty: (a boolean, nipype default value: True)
        Generate exception if list is empty for a given field
template_args: (a dictionary with keys which are a unicode string and
          with values which are a list of items which are a list of items
          which are any value)
        Information to plug into template

Outputs:

None

SQLiteSink

Link to code

Very simple frontend for storing values into an SQLite database.

Warning

This is not a thread-safe node because it can write to a common shared location. It will not complain when it overwrites a file.

Examples

>>> sql = SQLiteSink(input_names=['subject_id', 'some_measurement'])
>>> sql.inputs.database_file = 'my_database.db'
>>> sql.inputs.table_name = 'experiment_results'
>>> sql.inputs.subject_id = 's1'
>>> sql.inputs.some_measurement = 11.4
>>> sql.run() # doctest: +SKIP

Inputs:

[Mandatory]
database_file: (a pathlike object or string representing an existing
          file)
table_name: (a unicode string)

Outputs:

None

SSHDataGrabber

Link to code

Extension of the DataGrabber module that downloads the file list, and optionally the files, from an SSH server. The SSH operations must not require a username and password, so an SSH agent must be active on the machine where this module is run.

Attention

Doesn’t support directories currently

Examples

>>> from nipype.interfaces.io import SSHDataGrabber
>>> dg = SSHDataGrabber()
>>> dg.inputs.hostname = 'test.rebex.net'
>>> dg.inputs.user = 'demo'
>>> dg.inputs.password = 'password'
>>> dg.inputs.base_directory = 'pub/example'

Pick all files from the base directory

>>> dg.inputs.template = '*'

Pick all files starting with “pop” followed by a number from the current directory

>>> dg.inputs.template_expression = 'regexp'
>>> dg.inputs.template = 'pop[0-9].*'

Same thing but with dynamically created fields

>>> dg = SSHDataGrabber(infields=['arg1','arg2'])
>>> dg.inputs.hostname = 'test.rebex.net'
>>> dg.inputs.user = 'demo'
>>> dg.inputs.password = 'password'
>>> dg.inputs.base_directory = 'pub'
>>> dg.inputs.template = '%s/%s.txt'
>>> dg.inputs.arg1 = 'example'
>>> dg.inputs.arg2 = 'foo'

However, this latter form can be used with iterables and iterfield in a pipeline.

Dynamically created, user-defined input and output fields

>>> dg = SSHDataGrabber(infields=['sid'], outfields=['func','struct','ref'])
>>> dg.inputs.hostname = 'myhost.com'
>>> dg.inputs.base_directory = '/main_folder/my_remote_dir'
>>> dg.inputs.template_args['func'] = [['sid',['f3','f5']]]
>>> dg.inputs.template_args['struct'] = [['sid',['struct']]]
>>> dg.inputs.template_args['ref'] = [['sid','ref']]
>>> dg.inputs.sid = 's1'

Change the template only for the output field struct. The rest use the general template.

>>> dg.inputs.field_template = dict(struct='%s/struct.nii')
>>> dg.inputs.template_args['struct'] = [['sid']]

Inputs:

[Mandatory]
hostname: (a unicode string)
        Server hostname.
base_directory: (a unicode string)
        Path to the base directory consisting of subject data.
sort_filelist: (a boolean)
        Sort the filelist that matches the template
template: (a unicode string)
        Layout used to get files. Relative to base directory if defined

[Optional]
username: (a unicode string)
        Server username.
password: (a string)
        Server password.
download_files: (a boolean, nipype default value: True)
        If false it will return the file names without downloading them
template_expression: ('fnmatch' or 'regexp', nipype default value:
          fnmatch)
        Use either fnmatch or regexp to express templates
ssh_log_to_file: (a unicode string, nipype default value: )
        If set SSH commands will be logged to the given file
raise_on_empty: (a boolean, nipype default value: True)
        Generate exception if list is empty for a given field
drop_blank_outputs: (a boolean, nipype default value: False)
        Remove ``None`` entries from output lists
template_args: (a dictionary with keys which are a unicode string and
          with values which are a list of items which are a list of items
          which are any value)
        Information to plug into template

Outputs:

None

SelectFiles

Link to code

Flexibly collect data from disk to feed into workflows.

This interface uses the {}-based string formatting syntax to plug values (possibly known only at workflow execution time) into string templates and collect files from persistent storage. These templates can also be combined with glob wildcards. The field names in the formatting template (i.e. the terms in braces) will become input fields on the interface, and the keys in the templates dictionary will form the output fields.

Examples

>>> import pprint
>>> from nipype import SelectFiles, Node
>>> templates = {"T1": "{subject_id}/struct/T1.nii",
...              "epi": "{subject_id}/func/f[0,1].nii"}
>>> dg = Node(SelectFiles(templates), "selectfiles")
>>> dg.inputs.subject_id = "subj1"
>>> pprint.pprint(dg.outputs.get())
{'T1': <undefined>, 'epi': <undefined>}

The same thing with dynamic grabbing of specific files:

>>> templates["epi"] = "{subject_id}/func/f{run!s}.nii"
>>> dg = Node(SelectFiles(templates), "selectfiles")
>>> dg.inputs.subject_id = "subj1"
>>> dg.inputs.run = [2, 4]

Inputs:

[Optional]
base_directory: (a pathlike object or string representing an existing
          directory)
        Root path common to templates.
sort_filelist: (a boolean, nipype default value: True)
        When matching multiple files, return them in sorted order.
raise_on_empty: (a boolean, nipype default value: True)
        Raise an exception if a template pattern matches no files.
force_lists: (a boolean or a list of items which are a unicode
          string, nipype default value: False)
        Whether to return outputs as a list even when only one file matches
        the template. Either a boolean that applies to all output fields or
        a list of output field names to coerce to a list

Outputs:

None

XNATSink

Link to code

Generic datasink module that stores structured output fields on an XNAT server.
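
Examples

A minimal sketch of typical usage, assuming XNATSink accepts dynamically added input attributes in the same way as DataSink (the field name 'brain' and all values here are hypothetical):

>>> from nipype.interfaces.io import XNATSink
>>> xs = XNATSink()
>>> xs.inputs.config = 'xnat.cfg'  # hypothetical pyxnat config file
>>> xs.inputs.project_id = 'my_project'
>>> xs.inputs.subject_id = 'sub001'
>>> xs.inputs.experiment_id = 'my_workflow'
>>> xs.inputs.brain = 'brain.nii'  # dynamically added output attribute
>>> xs.run()  # doctest: +SKIP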

Inputs:

[Mandatory]
server: (a unicode string)
        mutually_exclusive: config
        requires: user, pwd
config: (a pathlike object or string representing a file)
        mutually_exclusive: server
project_id: (a unicode string)
        Project in which to store the outputs
subject_id: (a unicode string)
        Set to subject id
experiment_id: (a unicode string)
        Set to workflow name

[Optional]
_outputs: (a dictionary with keys which are a unicode string and with
          values which are any value, nipype default value: {})
user: (a unicode string)
pwd: (a string)
cache_dir: (a pathlike object or string representing a directory)
assessor_id: (a unicode string)
        Option to customize outputs representation in XNAT - assessor level
        will be used with the specified id
        mutually_exclusive: reconstruction_id
reconstruction_id: (a unicode string)
        Option to customize outputs representation in XNAT - reconstruction
        level will be used with the specified id
        mutually_exclusive: assessor_id
share: (a boolean, nipype default value: False)
        Option to share the subjects from the original project instead of
        creating new ones when possible - the created experiments are then
        shared back to the original project

Outputs:

None

XNATSource

Link to code

Generic XNATSource module that wraps around the pyxnat module in an intelligent way for neuroimaging tasks to grab files and data from an XNAT server.

Examples

>>> from nipype.interfaces.io import XNATSource

Pick all files from current directory

>>> dg = XNATSource()
>>> dg.inputs.template = '*'
>>> dg = XNATSource(infields=['project','subject','experiment','assessor','inout'])
>>> dg.inputs.query_template = '/projects/%s/subjects/%s/experiments/%s/assessors/%s/%s_resources/files'
>>> dg.inputs.project = 'IMAGEN'
>>> dg.inputs.subject = 'IMAGEN_000000001274'
>>> dg.inputs.experiment = '*SessionA*'
>>> dg.inputs.assessor = '*ADNI_MPRAGE_nii'
>>> dg.inputs.inout = 'out'
>>> dg = XNATSource(infields=['sid'], outfields=['struct','func'])
>>> dg.inputs.query_template = '/projects/IMAGEN/subjects/%s/experiments/*SessionA*/assessors/*%s_nii/out_resources/files'
>>> dg.inputs.query_template_args['struct'] = [['sid','ADNI_MPRAGE']]
>>> dg.inputs.query_template_args['func'] = [['sid','EPI_faces']]
>>> dg.inputs.sid = 'IMAGEN_000000001274'

Inputs:

[Mandatory]
query_template: (a unicode string)
        Layout used to get files. Relative to base directory if defined
server: (a unicode string)
        mutually_exclusive: config
        requires: user, pwd
config: (a pathlike object or string representing a file)
        mutually_exclusive: server

[Optional]
query_template_args: (a dictionary with keys which are a unicode
          string and with values which are a list of items which are a list
          of items which are any value, nipype default value: {'outfiles':
          []})
        Information to plug into template
user: (a unicode string)
pwd: (a string)
cache_dir: (a pathlike object or string representing a directory)
        Cache directory

Outputs:

None

add_traits()

Link to code

Add traits to a traited class.

All traits are set to Undefined by default.
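
A minimal usage sketch, assuming the nipype signature add_traits(base, names, trait_type=None) and that the modified object is returned:

>>> from nipype.interfaces.base import DynamicTraitedSpec
>>> from nipype.interfaces.io import add_traits
>>> spec = add_traits(DynamicTraitedSpec(), ['subject_id', 'run'])
>>> spec.subject_id  # new traits start out Undefined
<undefined>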

copytree()

Link to code

Recursively copy a directory tree using nipype.utils.filemanip.copyfile()

This is not a thread-safe routine. However, in the case of creating new directories, it checks to see if a particular directory has already been created by another process.
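
A minimal usage sketch, assuming the signature copytree(src, dst, use_hardlink=False) (the directory names are hypothetical):

>>> from nipype.interfaces.io import copytree
>>> copytree('run_outputs', 'archive/run_outputs')  # doctest: +SKIP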

push_file()

Link to code

quote_id()

Link to code

unquote_id()

Link to code