nipype.interfaces.io module

Set of interfaces that allow interaction with data. Currently available interfaces are:

  • DataSource: Generic NIfTI to named NIfTI interface
  • DataSink: Generic named output from interfaces to data store
  • XNATSource: preliminary interface to XNAT

To come: XNATSink

BIDSDataGrabber

Bases: LibraryBaseInterface, IOBase

BIDS datagrabber module that wraps around pybids to allow arbitrary querying of BIDS datasets.

Examples

By default, the BIDSDataGrabber fetches anatomical and functional images from a project, and makes BIDS entities (e.g. subject) available for filtering outputs.

>>> bg = BIDSDataGrabber()
>>> bg.inputs.base_dir = 'ds005/'
>>> bg.inputs.subject = '01'
>>> results = bg.run() 

Dynamically created, user-defined output fields can also be defined to return different types of outputs from the same project. All outputs are filtered on common entities, which can be explicitly defined as infields.

>>> bg = BIDSDataGrabber(infields=['subject'])
>>> bg.inputs.base_dir = 'ds005/'
>>> bg.inputs.subject = '01'
>>> bg.inputs.output_query['dwi'] = dict(datatype='dwi')
>>> results = bg.run() 
Mandatory Inputs:
  • base_dir (a pathlike object or string representing an existing directory) – Path to BIDS Directory.

  • index_derivatives (a boolean) – Index derivatives/ sub-directory. (Nipype default value: False)

Optional Inputs:
  • extra_derivatives (a list of items which are a pathlike object or string representing an existing directory) – Additional derivative directories to index.

  • load_layout (a pathlike object or string representing an existing directory) – Path to load an already saved BIDSLayout.

  • output_query (a dictionary with keys which are a string and with values which are a dictionary with keys which are any value and with values which are any value) – Queries for outfield outputs.

  • raise_on_empty (a boolean) – Generate exception if list is empty for a given field. (Nipype default value: True)

BIDSDataGrabber.input_spec

alias of BIDSDataGrabberInputSpec

BIDSDataGrabber.output_spec

alias of DynamicTraitedSpec

class nipype.interfaces.io.BIDSDataGrabberInputSpec(**kwargs)

Bases: nipype.interfaces.base.specs.DynamicTraitedSpec

DataFinder

Bases: IOBase

Search for paths that match a given regular expression. This allows a less prescriptive approach to gathering input files than DataGrabber. By default, subdirectories are searched recursively; this can be limited with the min_depth/max_depth options. Matched paths are available in the output ‘out_paths’, and any named groups captured by the regular expression are also available as outputs of the same name.

Examples

>>> from nipype.interfaces.io import DataFinder
>>> df = DataFinder()
>>> df.inputs.root_paths = '.'
>>> df.inputs.match_regex = r'.+/(?P<series_dir>.+(qT1|ep2d_fid_T1).+)/(?P<basename>.+)\.nii.gz'
>>> result = df.run() 
>>> result.outputs.out_paths  
['./027-ep2d_fid_T1_Gd4/acquisition.nii.gz',
 './018-ep2d_fid_T1_Gd2/acquisition.nii.gz',
 './016-ep2d_fid_T1_Gd1/acquisition.nii.gz',
 './013-ep2d_fid_T1_pre/acquisition.nii.gz']
>>> result.outputs.series_dir  
['027-ep2d_fid_T1_Gd4',
 '018-ep2d_fid_T1_Gd2',
 '016-ep2d_fid_T1_Gd1',
 '013-ep2d_fid_T1_pre']
>>> result.outputs.basename  
['acquisition',
 'acquisition',
 'acquisition',
 'acquisition']
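
The min_depth and max_depth options can bound the recursive search described above; a minimal sketch with illustrative paths:

>>> df = DataFinder()
>>> df.inputs.root_paths = '/data/study'
>>> df.inputs.match_regex = r'.+\.nii\.gz'
>>> df.inputs.min_depth = 1  # skip matches directly under the root
>>> df.inputs.max_depth = 2  # descend at most two levels
>>> result = df.run() 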
Mandatory Inputs:

root_paths (a list of items which are any value or a string)

Optional Inputs:
  • ignore_regexes (a list of items which are any value) – List of regular expressions, if any match the path it will be ignored.

  • match_regex (a string) – Regular expression for matching paths. (Nipype default value: (.+))

  • max_depth (an integer) – The maximum depth to search beneath the root_paths.

  • min_depth (an integer) – The minimum depth to search beneath the root paths.

  • unpack_single (a boolean) – Unpack single results from list. (Nipype default value: False)

DataFinder.output_spec

alias of DynamicTraitedSpec

DataGrabber

Bases: IOBase

Find files on a filesystem.

Generic datagrabber module that wraps around glob in an intelligent way for neuroimaging tasks to grab files.

Important

Doesn’t support directories currently

Examples

>>> from nipype.interfaces.io import DataGrabber

Pick all files from current directory

>>> dg = DataGrabber()
>>> dg.inputs.template = '*'

Pick file dicomdir/123456-1-1.dcm from the current directory

>>> dg.inputs.template = '%s/%s.dcm'
>>> dg.inputs.template_args['outfiles'] = [['dicomdir', '123456-1-1.dcm']]

Same thing but with dynamically created fields

>>> dg = DataGrabber(infields=['arg1','arg2'])
>>> dg.inputs.template = '%s/%s.nii'
>>> dg.inputs.arg1 = 'foo'
>>> dg.inputs.arg2 = 'foo'

This latter form, however, can be used with iterables and iterfield in a pipeline, as sketched below.
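
A minimal sketch of that pattern, wrapping the grabber in a Node and iterating over one input (the node name and iterable values are illustrative):

>>> from nipype import Node
>>> dg_node = Node(DataGrabber(infields=['arg1', 'arg2'], outfields=['outfiles']),
...                name='grabber')
>>> dg_node.inputs.template = '%s/%s.nii'
>>> dg_node.inputs.sort_filelist = True
>>> dg_node.iterables = ('arg1', ['foo', 'bar'])  # one expansion per value
>>> dg_node.inputs.arg2 = 'foo'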

Dynamically created, user-defined input and output fields

>>> dg = DataGrabber(infields=['sid'], outfields=['func','struct','ref'])
>>> dg.inputs.base_directory = '.'
>>> dg.inputs.template = '%s/%s.nii'
>>> dg.inputs.template_args['func'] = [['sid',['f3','f5']]]
>>> dg.inputs.template_args['struct'] = [['sid',['struct']]]
>>> dg.inputs.template_args['ref'] = [['sid','ref']]
>>> dg.inputs.sid = 's1'

Change the template only for output field struct. The rest use the general template.

>>> dg.inputs.field_template = dict(struct='%s/struct.nii')
>>> dg.inputs.template_args['struct'] = [['sid']]
Mandatory Inputs:
  • sort_filelist (a boolean) – Sort the filelist that matches the template.

  • template (a string) – Layout used to get files. Relative to base directory if defined.

Optional Inputs:
  • base_directory (a pathlike object or string representing an existing directory) – Path to the base directory consisting of subject data.

  • drop_blank_outputs (a boolean) – Remove None entries from output lists. (Nipype default value: False)

  • raise_on_empty (a boolean) – Generate exception if list is empty for a given field. (Nipype default value: True)

  • template_args (a dictionary with keys which are a string and with values which are a list of items which are a list of items which are any value) – Information to plug into template.

DataGrabber.output_spec

alias of DynamicTraitedSpec

DataSink

Bases: IOBase

Generic datasink module to store structured outputs.

Primarily for use within a workflow. This interface allows arbitrary creation of input attributes. The names of these attributes define the directory structure to create for storage of the files or directories.

The attributes take the following form:

string[[.[@]]string[[.[@]]string]] ...

where parts between [] are optional.

An attribute such as contrasts.@con will create a ‘contrasts’ directory to store the results linked to the attribute. If the @ is left out, such as in ‘contrasts.con’, a subdirectory ‘con’ will be created under ‘contrasts’.

The general form of the output is:

'base_directory/container/parameterization/destloc/filename'

destloc = string[[.[@]]string[[.[@]]string]] and filename come from the input to the connect statement.
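
For instance, with base_directory 'results_dir' and container 'subject' (and parameterization omitted for brevity), the attributes used in the examples below would resolve approximately as:

'contrasts.@con'  ->  results_dir/subject/contrasts/<filename>
'contrasts.con'   ->  results_dir/subject/contrasts/con/<filename>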

Warning

This is not a thread-safe node because it can write to a common shared location. It will not complain when it overwrites a file.

Note

If both substitutions and regexp_substitutions are used, then substitutions are applied first followed by regexp_substitutions.

This interface cannot be used in a MapNode as the inputs are defined only when the connect statement is executed.
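
To make this ordering concrete, a brief sketch (patterns and replacements are illustrative): the plain substitutions below are applied to every output path first, and the regexp_substitutions are then applied to the result.

>>> ds = DataSink()
>>> ds.inputs.substitutions = [('_subject_id_', 'sub-')]
>>> ds.inputs.regexp_substitutions = [(r'_run_(\d+)', r'-run\1')]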

Examples

>>> ds = DataSink()
>>> ds.inputs.base_directory = 'results_dir'
>>> ds.inputs.container = 'subject'
>>> ds.inputs.structural = 'structural.nii'
>>> setattr(ds.inputs, 'contrasts.@con', ['cont1.nii', 'cont2.nii'])
>>> setattr(ds.inputs, 'contrasts.alt', ['cont1a.nii', 'cont2a.nii'])
>>> ds.run()  

To use DataSink in a MapNode, its inputs have to be defined at the time the interface is created.

>>> ds = DataSink(infields=['contrasts.@con'])
>>> ds.inputs.base_directory = 'results_dir'
>>> ds.inputs.container = 'subject'
>>> ds.inputs.structural = 'structural.nii'
>>> setattr(ds.inputs, 'contrasts.@con', ['cont1.nii', 'cont2.nii'])
>>> setattr(ds.inputs, 'contrasts.alt', ['cont1a.nii', 'cont2a.nii'])
>>> ds.run()  
Optional Inputs:
  • _outputs (a dictionary with keys which are a string and with values which are any value) – (Nipype default value: {})

  • base_directory (a string) – Path to the base directory for storing data.

  • bucket (any value) – Boto3 S3 bucket for manual override of bucket.

  • container (a string) – Folder within base directory in which to store output.

  • creds_path (a string) – Filepath to AWS credentials file for S3 bucket access; if not specified, the credentials will be taken from the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.

  • encrypt_bucket_keys (a boolean) – Flag indicating whether to use S3 server-side AES-256 encryption.

  • local_copy (a string) – Copy files locally as well as to S3 bucket.

  • parameterization (a boolean) – Store output in parametrized structure. (Nipype default value: True)

  • regexp_substitutions (a list of items which are a tuple of the form: (a string, a string)) – List of 2-tuples reflecting a pair of a Python regexp pattern and a replacement string. Invoked after string substitutions.

  • remove_dest_dir (a boolean) – Remove dest directory when copying dirs. (Nipype default value: False)

  • strip_dir (a string) – Path to strip out of filename.

  • substitutions (a list of items which are a tuple of the form: (a string, a string)) – List of 2-tuples reflecting string to substitute and string to replace it with.

Outputs:

out_file (any value) – Datasink output.

ExportFile

Bases: SimpleInterface

Export a file to an absolute path.

This interface copies an input file to a named output file. This is useful to save individual files to a specific location, instead of more flexible interfaces like DataSink.

Examples

>>> from nipype.interfaces.io import ExportFile
>>> import os
>>> import os.path as op
>>> ef = ExportFile()
>>> ef.inputs.in_file = "T1.nii.gz"
>>> os.mkdir("output_folder")
>>> ef.inputs.out_file = op.abspath("output_folder/sub1_out.nii.gz")
>>> res = ef.run()
>>> os.path.exists(res.outputs.out_file)
True
Mandatory Inputs:
  • in_file (a pathlike object or string representing an existing file) – Input file name.

  • out_file (a pathlike object or string representing a file) – Output file name.

Optional Inputs:
  • check_extension (a boolean) – Ensure that the input and output file extensions match. (Nipype default value: True)

  • clobber (a boolean) – Permit overwriting existing files.

Outputs:

out_file (a pathlike object or string representing an existing file) – Output file name.

FreeSurferSource

Bases: IOBase

Generates FreeSurfer subject information from the subject's directory.

Examples

>>> from nipype.interfaces.io import FreeSurferSource
>>> fs = FreeSurferSource()
>>> #fs.inputs.subjects_dir = '.'
>>> fs.inputs.subject_id = 'PWS04'
>>> res = fs.run() 
>>> fs.inputs.hemi = 'lh'
>>> res = fs.run() 
Mandatory Inputs:
  • subject_id (a string) – Subject name for whom to retrieve data.

  • subjects_dir (a pathlike object or string representing an existing directory) – Freesurfer subjects directory.

Optional Inputs:

hemi (‘both’ or ‘lh’ or ‘rh’) – Selects hemisphere specific outputs. (Nipype default value: both)

Outputs:
  • BA_stats (a list of items which are a pathlike object or string representing an existing file) – Brodmann Area statistics files.

  • T1 (a pathlike object or string representing an existing file) – Intensity normalized whole-head volume.

  • annot (a list of items which are a pathlike object or string representing an existing file) – Surface annotation files.

  • aparc_a2009s_stats (a list of items which are a pathlike object or string representing an existing file) – Aparc a2009s parcellation statistics files.

  • aparc_aseg (a list of items which are a pathlike object or string representing an existing file) – Aparc parcellation projected into aseg volume.

  • aparc_stats (a list of items which are a pathlike object or string representing an existing file) – Aparc parcellation statistics files.

  • area_pial (a list of items which are a pathlike object or string representing an existing file) – Mean area of triangles each vertex on the pial surface is associated with.

  • aseg (a pathlike object or string representing an existing file) – Volumetric map of regions from automatic segmentation.

  • aseg_stats (a list of items which are a pathlike object or string representing an existing file) – Automated segmentation statistics file.

  • avg_curv (a list of items which are a pathlike object or string representing an existing file) – Average atlas curvature, sampled to subject.

  • brain (a pathlike object or string representing an existing file) – Intensity normalized brain-only volume.

  • brainmask (a pathlike object or string representing an existing file) – Skull-stripped (brain-only) volume.

  • curv (a list of items which are a pathlike object or string representing an existing file) – Maps of surface curvature.

  • curv_pial (a list of items which are a pathlike object or string representing an existing file) – Curvature of pial surface.

  • curv_stats (a list of items which are a pathlike object or string representing an existing file) – Curvature statistics files.

  • entorhinal_exvivo_stats (a list of items which are a pathlike object or string representing an existing file) – Entorhinal exvivo statistics files.

  • filled (a pathlike object or string representing an existing file) – Subcortical mass volume.

  • graymid (a list of items which are a pathlike object or string representing an existing file) – Graymid/midthickness surface meshes.

  • inflated (a list of items which are a pathlike object or string representing an existing file) – Inflated surface meshes.

  • jacobian_white (a list of items which are a pathlike object or string representing an existing file) – Distortion required to register to spherical atlas.

  • label (a list of items which are a pathlike object or string representing an existing file) – Volume and surface label files.

  • norm (a pathlike object or string representing an existing file) – Normalized skull-stripped volume.

  • nu (a pathlike object or string representing an existing file) – Non-uniformity corrected whole-head volume.

  • orig (a pathlike object or string representing an existing file) – Base image conformed to Freesurfer space.

  • pial (a list of items which are a pathlike object or string representing an existing file) – Gray matter/pia mater surface meshes.

  • rawavg (a pathlike object or string representing an existing file) – Volume formed by averaging input images.

  • ribbon (a list of items which are a pathlike object or string representing an existing file) – Volumetric maps of cortical ribbons.

  • smoothwm (a list of items which are a pathlike object or string representing an existing file) – Smoothed original surface meshes.

  • sphere (a list of items which are a pathlike object or string representing an existing file) – Spherical surface meshes.

  • sphere_reg (a list of items which are a pathlike object or string representing an existing file) – Spherical registration file.

  • sulc (a list of items which are a pathlike object or string representing an existing file) – Surface maps of sulcal depth.

  • thickness (a list of items which are a pathlike object or string representing an existing file) – Surface maps of cortical thickness.

  • volume (a list of items which are a pathlike object or string representing an existing file) – Surface maps of cortical volume.

  • white (a list of items which are a pathlike object or string representing an existing file) – White/gray matter surface meshes.

  • wm (a pathlike object or string representing an existing file) – Segmented white-matter volume.

  • wmparc (a pathlike object or string representing an existing file) – Aparc parcellation projected into subcortical white matter.

  • wmparc_stats (a list of items which are a pathlike object or string representing an existing file) – White matter parcellation statistics file.

IOBase

Base class for the input/output interfaces defined in this module.

JSONFileGrabber

Bases: IOBase

Datagrabber interface that loads a JSON file and generates an output for every first-level object.

Example

>>> import os
>>> import pprint
>>> from nipype.interfaces.io import JSONFileGrabber
>>> jsonSource = JSONFileGrabber()
>>> jsonSource.inputs.defaults = {'param1': 'overrideMe', 'param3': 1.0}
>>> res = jsonSource.run()
>>> pprint.pprint(res.outputs.get())
{'param1': 'overrideMe', 'param3': 1.0}
>>> jsonSource.inputs.in_file = os.path.join(datadir, 'jsongrabber.txt')
>>> res = jsonSource.run()
>>> pprint.pprint(res.outputs.get())  
{'param1': 'exampleStr', 'param2': 4, 'param3': 1.0}
Optional Inputs:
  • defaults (a dictionary with keys which are any value and with values which are any value) – JSON dictionary that sets default output values, overridden by values found in in_file.

  • in_file (a pathlike object or string representing an existing file) – JSON source file.

JSONFileGrabber.output_spec

alias of DynamicTraitedSpec

JSONFileSink

Bases: IOBase

Very simple frontend for storing values into a JSON file. Entries already existing in in_dict will be overridden by matching entries dynamically added as inputs.

Warning

This is not a thread-safe node because it can write to a common shared location. It will not complain when it overwrites a file.

Examples

>>> jsonsink = JSONFileSink(input_names=['subject_id',
...                         'some_measurement'])
>>> jsonsink.inputs.subject_id = 's1'
>>> jsonsink.inputs.some_measurement = 11.4
>>> jsonsink.run() 

Using a dictionary as input:

>>> dictsink = JSONFileSink()
>>> dictsink.inputs.in_dict = {'subject_id': 's1',
...                            'some_measurement': 11.4}
>>> dictsink.run() 
Optional Inputs:
  • _outputs (a dictionary with keys which are any value and with values which are any value) – (Nipype default value: {})

  • in_dict (a dictionary with keys which are any value and with values which are any value) – Input JSON dictionary. (Nipype default value: {})

  • out_file (a pathlike object or string representing a file) – JSON sink file.

Outputs:

out_file (a pathlike object or string representing a file) – JSON sink file.

MySQLSink

Bases: IOBase

Very simple frontend for storing values into MySQL database.

Examples

>>> sql = MySQLSink(input_names=['subject_id', 'some_measurement'])
>>> sql.inputs.database_name = 'my_database'
>>> sql.inputs.table_name = 'experiment_results'
>>> sql.inputs.username = 'root'
>>> sql.inputs.password = 'secret'
>>> sql.inputs.subject_id = 's1'
>>> sql.inputs.some_measurement = 11.4
>>> sql.run() 
Mandatory Inputs:
  • config (a pathlike object or string representing a file) – MySQL Options File (same format as my.cnf). Mutually exclusive with inputs: host.

  • database_name (a string) – Otherwise known as the schema name.

  • host (a string) – Mutually exclusive with inputs: config. Requires inputs: username, password. (Nipype default value: localhost)

  • table_name (a string)

Optional Inputs:
  • password (a string)

  • username (a string)

class nipype.interfaces.io.ProgressPercentage(filename)

Bases: object

Callable class instance (via the __call__ method) that displays the upload percentage of a file to S3.

S3DataGrabber

Bases: LibraryBaseInterface, IOBase

Pull data from an Amazon S3 Bucket.

Generic datagrabber module that wraps around glob in an intelligent way for neuroimaging tasks to grab files from Amazon S3

Works exactly like DataGrabber, except that you must specify an S3 “bucket” and “bucket_path” to search for your data and a “local_directory” in which to store the data (“local_directory” should be a location on HDFS for Spark jobs). Additionally, “template” uses regex-style formatting rather than the glob style found in the original DataGrabber.

Examples

>>> s3grab = S3DataGrabber(infields=['subj_id'], outfields=["func", "anat"])
>>> s3grab.inputs.bucket = 'openneuro'
>>> s3grab.inputs.sort_filelist = True
>>> s3grab.inputs.template = '*'
>>> s3grab.inputs.anon = True
>>> s3grab.inputs.bucket_path = 'ds000101/ds000101_R2.0.0/uncompressed/'
>>> s3grab.inputs.local_directory = '/tmp'
>>> s3grab.inputs.field_template = {'anat': '%s/anat/%s_T1w.nii.gz',
...                                 'func': '%s/func/%s_task-simon_run-1_bold.nii.gz'}
>>> s3grab.inputs.template_args = {'anat': [['subj_id', 'subj_id']],
...                                'func': [['subj_id', 'subj_id']]}
>>> s3grab.inputs.subj_id = 'sub-01'
>>> s3grab.run()  
Mandatory Inputs:
  • bucket (a string) – Amazon S3 bucket where your data is stored.

  • sort_filelist (a boolean) – Sort the filelist that matches the template.

  • template (a string) – Layout used to get files. Relative to bucket_path if defined. Uses regex rather than glob-style formatting.

Optional Inputs:
  • anon (a boolean) – Use anonymous connection to s3. If this is set to True, boto may print a urlopen error, but this does not prevent data from being downloaded. (Nipype default value: False)

  • bucket_path (a string) – Location within your bucket for subject data. (Nipype default value: "")

  • local_directory (a pathlike object or string representing an existing directory) – Path to the local directory for subject data to be downloaded and accessed. Should be on HDFS for Spark jobs.

  • raise_on_empty (a boolean) – Generate exception if list is empty for a given field. (Nipype default value: True)

  • region (a string) – Region of s3 bucket. (Nipype default value: us-east-1)

  • template_args (a dictionary with keys which are a string and with values which are a list of items which are a list of items which are any value) – Information to plug into template.

S3DataGrabber.output_spec

alias of DynamicTraitedSpec

S3DataGrabber.s3tolocal(s3path, bkt)

SQLiteSink

Bases: LibraryBaseInterface, IOBase

Very simple frontend for storing values into SQLite database.

Warning

This is not a thread-safe node because it can write to a common shared location. It will not complain when it overwrites a file.

Examples

>>> sql = SQLiteSink(input_names=['subject_id', 'some_measurement'])
>>> sql.inputs.database_file = 'my_database.db'
>>> sql.inputs.table_name = 'experiment_results'
>>> sql.inputs.subject_id = 's1'
>>> sql.inputs.some_measurement = 11.4
>>> sql.run() 
Mandatory Inputs:
  • database_file (a pathlike object or string representing an existing file)

  • table_name (a string)

SSHDataGrabber

Bases: LibraryBaseInterface, DataGrabber

Extension of the DataGrabber module that downloads the file list, and optionally the files themselves, from an SSH server. The SSH operation must not require a username and password, so an SSH agent must be active on the machine where this module is run.

Attention

Doesn’t support directories currently

Examples

>>> from nipype.interfaces.io import SSHDataGrabber
>>> dg = SSHDataGrabber()
>>> dg.inputs.hostname = 'test.rebex.net'
>>> dg.inputs.user = 'demo'
>>> dg.inputs.password = 'password'
>>> dg.inputs.base_directory = 'pub/example'

Pick all files from the base directory

>>> dg.inputs.template = '*'

Pick all files starting with “pop” followed by a digit from the base directory

>>> dg.inputs.template_expression = 'regexp'
>>> dg.inputs.template = 'pop[0-9].*'

Same thing but with dynamically created fields

>>> dg = SSHDataGrabber(infields=['arg1','arg2'])
>>> dg.inputs.hostname = 'test.rebex.net'
>>> dg.inputs.user = 'demo'
>>> dg.inputs.password = 'password'
>>> dg.inputs.base_directory = 'pub'
>>> dg.inputs.template = '%s/%s.txt'
>>> dg.inputs.arg1 = 'example'
>>> dg.inputs.arg2 = 'foo'

This latter form, however, can be used with iterables and iterfield in a pipeline.

Dynamically created, user-defined input and output fields

>>> dg = SSHDataGrabber(infields=['sid'], outfields=['func','struct','ref'])
>>> dg.inputs.hostname = 'myhost.com'
>>> dg.inputs.base_directory = '/main_folder/my_remote_dir'
>>> dg.inputs.template_args['func'] = [['sid',['f3','f5']]]
>>> dg.inputs.template_args['struct'] = [['sid',['struct']]]
>>> dg.inputs.template_args['ref'] = [['sid','ref']]
>>> dg.inputs.sid = 's1'

Change the template only for output field struct. The rest use the general template.

>>> dg.inputs.field_template = dict(struct='%s/struct.nii')
>>> dg.inputs.template_args['struct'] = [['sid']]
Mandatory Inputs:
  • base_directory (a string) – Path to the base directory consisting of subject data.

  • hostname (a string) – Server hostname.

  • sort_filelist (a boolean) – Sort the filelist that matches the template.

  • template (a string) – Layout used to get files. Relative to base directory if defined.

Optional Inputs:
  • download_files (a boolean) – If false it will return the file names without downloading them. (Nipype default value: True)

  • drop_blank_outputs (a boolean) – Remove None entries from output lists. (Nipype default value: False)

  • password (a string) – Server password.

  • raise_on_empty (a boolean) – Generate exception if list is empty for a given field. (Nipype default value: True)

  • ssh_log_to_file (a string) – If set SSH commands will be logged to the given file. (Nipype default value: "")

  • template_args (a dictionary with keys which are a string and with values which are a list of items which are a list of items which are any value) – Information to plug into template.

  • template_expression (‘fnmatch’ or ‘regexp’) – Use either fnmatch or regexp to express templates. (Nipype default value: fnmatch)

  • username (a string) – Server username.

SSHDataGrabber.output_spec

alias of DynamicTraitedSpec

SelectFiles

Bases: IOBase

Flexibly collect data from disk to feed into workflows.

This interface uses Python’s {}-based string formatting syntax to plug values (possibly known only at workflow execution time) into string templates and collect files from persistent storage. These templates can also be combined with glob wildcards (*, ?) and character ranges ([...]). The field names in the formatting template (i.e. the terms in braces) will become input fields on the interface, and the keys in the templates dictionary will form the output fields.

Examples

>>> import pprint
>>> from nipype import SelectFiles, Node
>>> templates={"T1": "{subject_id}/struct/T1.nii",
...            "epi": "{subject_id}/func/f[0,1].nii"}
>>> dg = Node(SelectFiles(templates), "selectfiles")
>>> dg.inputs.subject_id = "subj1"
>>> pprint.pprint(dg.outputs.get())
{'T1': <undefined>, 'epi': <undefined>}

Note that SelectFiles does not support lists as inputs for the dynamic fields. Attempts to do so may lead to unexpected results because brackets also express glob character ranges. For example,

>>> templates["epi"] = "{subject_id}/func/f{run}.nii"
>>> dg = Node(SelectFiles(templates), "selectfiles")
>>> dg.inputs.subject_id = "subj1"
>>> dg.inputs.run = [10, 11]

would match f0.nii or f1.nii, not f10.nii or f11.nii.
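
A scalar value, by contrast, formats as intended. A brief sketch with illustrative values:

>>> templates["epi"] = "{subject_id}/func/f{run}.nii"
>>> dg = Node(SelectFiles(templates), "selectfiles")
>>> dg.inputs.subject_id = "subj1"
>>> dg.inputs.run = 10  # resolves to "subj1/func/f10.nii"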

Optional Inputs:
  • base_directory (a pathlike object or string representing an existing directory) – Root path common to templates.

  • force_lists (a boolean or a list of items which are a string) – Whether to return outputs as a list even when only one file matches the template. Either a boolean that applies to all output fields or a list of output field names to coerce to a list. (Nipype default value: False)

  • raise_on_empty (a boolean) – Raise an exception if a template pattern matches no files. (Nipype default value: True)

  • sort_filelist (a boolean) – When matching multiple files, return them in sorted order. (Nipype default value: True)

SelectFiles.output_spec

alias of DynamicTraitedSpec

XNATSink

Bases: LibraryBaseInterface, IOBase

Generic datasink module that takes a directory containing a list of nifti files and provides a set of structured output fields.
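
Examples

XNATSink is typically driven from a workflow connect statement; below is a minimal stand-alone configuration sketch (the server URL and identifiers are illustrative):

>>> from nipype.interfaces.io import XNATSink
>>> sink = XNATSink()
>>> sink.inputs.server = 'https://central.xnat.org'
>>> sink.inputs.user = 'demo'
>>> sink.inputs.pwd = 'password'
>>> sink.inputs.project_id = 'MYPROJECT'
>>> sink.inputs.subject_id = 'subj001'
>>> sink.inputs.experiment_id = 'preproc'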

Mandatory Inputs:
  • config (a pathlike object or string representing a file) – Mutually exclusive with inputs: server.

  • experiment_id (a string) – Set to workflow name.

  • project_id (a string) – Project in which to store the outputs.

  • server (a string) – Mutually exclusive with inputs: config. Requires inputs: user, pwd.

  • subject_id (a string) – Set to subject id.

Optional Inputs:
  • _outputs (a dictionary with keys which are a string and with values which are any value) – (Nipype default value: {})

  • assessor_id (a string) – Option to customize outputs representation in XNAT - assessor level will be used with specified id. Mutually exclusive with inputs: reconstruction_id.

  • cache_dir (a pathlike object or string representing a directory)

  • pwd (a string)

  • reconstruction_id (a string) – Option to customize outputs representation in XNAT - reconstruction level will be used with specified id. Mutually exclusive with inputs: assessor_id.

  • share (a boolean) – Option to share the subjects from the original project instead of creating new ones when possible - the created experiments are then shared back to the original project. (Nipype default value: False)

  • user (a string)

XNATSource

Bases: LibraryBaseInterface, IOBase

Pull data from an XNAT server.

Generic XNATSource module that wraps around the pyxnat module in an intelligent way for neuroimaging tasks to grab files and data from an XNAT server.

Examples

Pick all files from current directory

>>> dg = XNATSource()
>>> dg.inputs.template = '*'
>>> dg = XNATSource(infields=['project','subject','experiment','assessor','inout'])
>>> dg.inputs.query_template = ('/projects/%s/subjects/%s/experiments/%s'
...                             '/assessors/%s/%s_resources/files')
>>> dg.inputs.project = 'IMAGEN'
>>> dg.inputs.subject = 'IMAGEN_000000001274'
>>> dg.inputs.experiment = '*SessionA*'
>>> dg.inputs.assessor = '*ADNI_MPRAGE_nii'
>>> dg.inputs.inout = 'out'
>>> dg = XNATSource(infields=['sid'],outfields=['struct','func'])
>>> dg.inputs.query_template = ('/projects/IMAGEN/subjects/%s/experiments/*SessionA*'
...                             '/assessors/*%s_nii/out_resources/files')
>>> dg.inputs.query_template_args['struct'] = [['sid','ADNI_MPRAGE']]
>>> dg.inputs.query_template_args['func'] = [['sid','EPI_faces']]
>>> dg.inputs.sid = 'IMAGEN_000000001274'
Mandatory Inputs:
  • config (a pathlike object or string representing a file) – Mutually exclusive with inputs: server.

  • query_template (a string) – Layout used to get files. Relative to base directory if defined.

  • server (a string) – Mutually exclusive with inputs: config. Requires inputs: user, pwd.

Optional Inputs:
  • cache_dir (a pathlike object or string representing a directory) – Cache directory.

  • pwd (a string)

  • query_template_args (a dictionary with keys which are a string and with values which are a list of items which are a list of items which are any value) – Information to plug into template. (Nipype default value: {'outfiles': []})

  • user (a string)

XNATSource.output_spec

alias of DynamicTraitedSpec

nipype.interfaces.io.add_traits(base, names, trait_type=None)

Add traits to a traited class.

All traits are set to Undefined by default.
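
A minimal usage sketch, assuming a DynamicTraitedSpec instance (the trait names are illustrative):

>>> from nipype.interfaces.base import DynamicTraitedSpec
>>> from nipype.interfaces.io import add_traits
>>> spec = add_traits(DynamicTraitedSpec(), ['subject_id', 'session'])
>>> spec.subject_id  # newly added traits start as Undefined
<undefined>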

nipype.interfaces.io.capture_provenance()
nipype.interfaces.io.copytree(src, dst, use_hardlink=False)

Recursively copy a directory tree using nipype.utils.filemanip.copyfile()

This is not a thread-safe routine. However, in the case of creating new directories, it checks to see if a particular directory has already been created by another process.
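
A minimal usage sketch (directory names are illustrative; use_hardlink links files instead of copying where the filesystem allows):

>>> from nipype.interfaces.io import copytree
>>> copytree('/data/src_tree', '/data/dst_tree', use_hardlink=True)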

nipype.interfaces.io.push_file(self, xnat, file_name, out_key, uri_template_args)
nipype.interfaces.io.push_provenance()
nipype.interfaces.io.quote_id(string)
nipype.interfaces.io.unquote_id(string)