interfaces.afni.model
Deconvolve
Wraps the executable command 3dDeconvolve.
Performs OLS regression given a 4D neuroimage file and stimulus timings.
For complete details, see the 3dDeconvolve Documentation.
Examples
>>> from nipype.interfaces import afni
>>> deconvolve = afni.Deconvolve()
>>> deconvolve.inputs.in_files = ['functional.nii', 'functional2.nii']
>>> deconvolve.inputs.out_file = 'output.nii'
>>> deconvolve.inputs.x1D = 'output.1D'
>>> stim_times = [(1, 'timeseries.txt', 'SPMG1(4)')]
>>> deconvolve.inputs.stim_times = stim_times
>>> deconvolve.inputs.stim_label = [(1, 'Houses')]
>>> deconvolve.inputs.gltsym = ['SYM: +Houses']
>>> deconvolve.inputs.glt_label = [(1, 'Houses')]
>>> deconvolve.cmdline
"3dDeconvolve -input functional.nii functional2.nii -bucket output.nii -x1D output.1D -num_stimts 1 -stim_times 1 timeseries.txt 'SPMG1(4)' -stim_label 1 Houses -num_glt 1 -gltsym 'SYM: +Houses' -glt_label 1 Houses"
>>> res = deconvolve.run() # doctest: +SKIP
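When a design has several conditions, the numbered tuples for stim_times, stim_label, and glt_label can be built programmatically rather than by hand. A minimal sketch in plain Python (the condition names, timing files, and response model are hypothetical; nothing here is run through AFNI):

```python
# Hypothetical condition names paired with timing files; the tuple
# indices are 1-based, as 3dDeconvolve expects.
conditions = [
    ("Houses", "houses_times.txt"),
    ("Faces", "faces_times.txt"),
]
model = "SPMG1(4)"

stim_times = [(k, fname, model) for k, (label, fname) in enumerate(conditions, start=1)]
stim_label = [(k, label) for k, (label, _) in enumerate(conditions, start=1)]

# One contrast per condition, plus a difference contrast in SYM notation.
gltsym = [f"SYM: +{label}" for label, _ in conditions] + ["SYM: +Houses -Faces"]
glt_label = [(k, g.replace("SYM: ", "").replace(" ", ""))
             for k, g in enumerate(gltsym, start=1)]

print(stim_times[0])  # (1, 'houses_times.txt', 'SPMG1(4)')
print(gltsym[-1])     # SYM: +Houses -Faces
```

These lists would then be assigned to deconvolve.inputs.stim_times and friends exactly as in the doctest above; note that the doctest never sets num_stimts or num_glt, since the interface derives them from the list lengths.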
Inputs:
[Optional]
in_files: (a list of items which are a pathlike object or string
representing an existing file)
filenames of 3D+time input datasets. More than one filename can be
given and the datasets will be auto-catenated in time. You can input
a 1D time series file here, but the time axis should run along the
ROW direction, not the COLUMN direction as in the 'input1D' option.
argument: ``-input %s``, position: 1
sat: (a boolean)
check the dataset time series for initial saturation transients,
which should normally have been excised before data analysis.
argument: ``-sat``
mutually_exclusive: trans
trans: (a boolean)
check the dataset time series for initial saturation transients,
which should normally have been excised before data analysis.
argument: ``-trans``
mutually_exclusive: sat
noblock: (a boolean)
normally, if you input multiple datasets with 'input', then the
separate datasets are taken to be separate image runs that get
separate baseline models. Use this option if you want the program
to consider them all one big run. If any of the input datasets has
only 1 sub-brick, then this option is automatically invoked. If the
auto-catenation feature isn't used, then this option has no effect.
argument: ``-noblock``
force_TR: (a float)
use this value instead of the TR in the 'input' dataset. (It's
better to fix the input using Refit.)
argument: ``-force_TR %f``, position: 0
input1D: (a pathlike object or string representing an existing file)
filename of single (fMRI) .1D time series where time runs down the
column.
argument: ``-input1D %s``
TR_1D: (a float)
TR to use with 'input1D'. This option has no effect if you do not
also use 'input1D'.
argument: ``-TR_1D %f``
legendre: (a boolean)
use Legendre polynomials for null hypothesis (baseline model)
argument: ``-legendre``
nolegendre: (a boolean)
use power polynomials for null hypotheses. Don't do this unless you
are crazy!
argument: ``-nolegendre``
nodmbase: (a boolean)
don't de-mean baseline time series
argument: ``-nodmbase``
dmbase: (a boolean)
de-mean baseline time series (default if 'polort' >= 0)
argument: ``-dmbase``
svd: (a boolean)
use SVD instead of Gaussian elimination (default)
argument: ``-svd``
nosvd: (a boolean)
use Gaussian elimination instead of SVD
argument: ``-nosvd``
rmsmin: (a float)
minimum rms error to reject reduced model (default = 0; don't use
this option normally!)
argument: ``-rmsmin %f``
nocond: (a boolean)
DON'T calculate matrix condition number
argument: ``-nocond``
singvals: (a boolean)
print out the matrix singular values
argument: ``-singvals``
goforit: (an integer (int or long))
use this to proceed even if the matrix has bad problems (e.g.,
duplicate columns, large condition number, etc.).
argument: ``-GOFORIT %i``
allzero_OK: (a boolean)
don't consider all-zero matrix columns to be the type of error that
'goforit' is needed to ignore.
argument: ``-allzero_OK``
dname: (a tuple of the form: (a unicode string, a unicode string))
set environmental variable to provided value
argument: ``-D%s=%s``
mask: (a pathlike object or string representing an existing file)
filename of 3D mask dataset; only data time series from within the
mask will be analyzed; results for voxels outside the mask will be
set to zero.
argument: ``-mask %s``
automask: (a boolean)
build a mask automatically from input data (will be slow for long
time series datasets)
argument: ``-automask``
STATmask: (a pathlike object or string representing an existing file)
build a mask from the provided file, and use this mask for the
purpose of reporting truncation-to-float issues AND for computing
the FDR curves. The actual results are NOT masked with this option
(only with the 'mask' or 'automask' options).
argument: ``-STATmask %s``
censor: (a pathlike object or string representing an existing file)
filename of censor .1D time series. This is a file of 1s and 0s,
indicating which time points are to be included (1) and which are to
be excluded (0).
argument: ``-censor %s``
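A censor .1D file of this form is easy to generate with plain Python. A hedged sketch (the file name, run length, and censored TR indices are made up for illustration):

```python
# Build a censor vector: 1 = keep the time point, 0 = censor it.
n_timepoints = 10
censored = {3, 4, 8}  # hypothetical TR indices flagged for exclusion

censor_vec = [0 if t in censored else 1 for t in range(n_timepoints)]

# 3dDeconvolve expects one value per line in the .1D file.
with open("censor.1D", "w") as f:
    f.write("\n".join(str(v) for v in censor_vec) + "\n")

print(censor_vec)  # [1, 1, 1, 0, 0, 1, 1, 1, 0, 1]
```

The resulting file would be passed as deconvolve.inputs.censor = 'censor.1D'.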
polort: (an integer (int or long))
degree of polynomial corresponding to the null hypothesis [default:
1]
argument: ``-polort %d``
ortvec: (a tuple of the form: (a pathlike object or string
representing an existing file, a unicode string))
this option lets you input a rectangular array of 1 or more baseline
vectors from a file. This method is a fast way to include a lot of
baseline regressors in one step.
argument: ``-ortvec %s %s``
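Since the ortvec file is just a rectangular text array with one baseline vector per column, it can be written with the standard library alone. A sketch, assuming hypothetical nuisance regressors (e.g., motion estimates):

```python
# Hypothetical nuisance regressors, one list per column of the ortvec file.
regressors = [
    [0.1, 0.2, 0.3, 0.2],   # column 1
    [1.0, 0.9, 1.1, 1.0],   # column 2
]

# Transpose to rows (one time point per line), space-separated.
with open("nuisance.1D", "w") as f:
    for row in zip(*regressors):
        f.write(" ".join(f"{v:g}" for v in row) + "\n")
```

The interface takes a (file, label) tuple, e.g. deconvolve.inputs.ortvec = ('nuisance.1D', 'nuisance').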
x1D: (a pathlike object or string representing a file)
specify name for saved X matrix
argument: ``-x1D %s``
x1D_stop: (a boolean)
stop running after writing .xmat.1D file
argument: ``-x1D_stop``
cbucket: (a unicode string)
Name for dataset in which to save the regression coefficients (no
statistics). This dataset will be used in a -xrestore run [not yet
implemented] instead of the bucket dataset, if possible.
argument: ``-cbucket %s``
out_file: (a pathlike object or string representing a file)
output statistics file
argument: ``-bucket %s``
num_threads: (an integer (int or long))
run the program with provided number of sub-processes
argument: ``-jobs %d``
fout: (a boolean)
output F-statistic for each stimulus
argument: ``-fout``
rout: (a boolean)
output the R^2 statistic for each stimulus
argument: ``-rout``
tout: (a boolean)
output the T-statistic for each stimulus
argument: ``-tout``
vout: (a boolean)
output the sample variance (MSE) for each stimulus
argument: ``-vout``
nofdr: (a boolean)
Don't compute the statistic-vs-FDR curves for the bucket dataset.
argument: ``-noFDR``
global_times: (a boolean)
use global timing for stimulus timing files
argument: ``-global_times``
mutually_exclusive: local_times
local_times: (a boolean)
use local timing for stimulus timing files
argument: ``-local_times``
mutually_exclusive: global_times
num_stimts: (an integer (int or long))
number of stimulus timing files
argument: ``-num_stimts %d``, position: -6
stim_times: (a list of items which are a tuple of the form: (an
integer (int or long), a pathlike object or string representing an
existing file, a unicode string))
generate a response model from a set of stimulus times given in
file.
argument: ``-stim_times %d %s '%s'...``, position: -5
stim_label: (a list of items which are a tuple of the form: (an
integer (int or long), a unicode string))
label for kth input stimulus (e.g., Label1)
argument: ``-stim_label %d %s...``, position: -4
requires: stim_times
stim_times_subtract: (a float)
this option means to subtract specified seconds from each time
encountered in any 'stim_times' option. The purpose of this option
is to make it simple to adjust timing files for the removal of
images from the start of each imaging run.
argument: ``-stim_times_subtract %f``
num_glt: (an integer (int or long))
number of general linear tests (i.e., contrasts)
argument: ``-num_glt %d``, position: -3
gltsym: (a list of items which are a unicode string)
general linear tests (i.e., contrasts) using symbolic conventions
(e.g., '+Label1 -Label2')
argument: ``-gltsym 'SYM: %s'...``, position: -2
glt_label: (a list of items which are a tuple of the form: (an
integer (int or long), a unicode string))
general linear test (i.e., contrast) labels
argument: ``-glt_label %d %s...``, position: -1
requires: gltsym
outputtype: ('NIFTI' or 'AFNI' or 'NIFTI_GZ')
AFNI output filetype
args: (a unicode string)
Additional parameters to the command
argument: ``%s``
environ: (a dictionary with keys which are a bytes or None or a value
of class 'str' and with values which are a bytes or None or a
value of class 'str', nipype default value: {})
Environment variables
Outputs:
out_file: (a pathlike object or string representing an existing file)
output statistics file
reml_script: (a pathlike object or string representing an existing
    file)
    automatically generated script to run 3dREMLfit
x1D: (a pathlike object or string representing an existing file)
the saved X matrix
cbucket: (a pathlike object or string representing a file)
output regression coefficients file (if generated)
Remlfit
Wraps the executable command 3dREMLfit.
Performs Generalized least squares time series fit with Restricted Maximum Likelihood (REML) estimation of the temporal auto-correlation structure.
For complete details, see the 3dREMLfit Documentation.
Examples
>>> from nipype.interfaces import afni
>>> remlfit = afni.Remlfit()
>>> remlfit.inputs.in_files = ['functional.nii', 'functional2.nii']
>>> remlfit.inputs.out_file = 'output.nii'
>>> remlfit.inputs.matrix = 'output.1D'
>>> remlfit.inputs.gltsym = [('SYM: +Lab1 -Lab2', 'TestSYM'), ('timeseries.txt', 'TestFile')]
>>> remlfit.cmdline
'3dREMLfit -gltsym "SYM: +Lab1 -Lab2" TestSYM -gltsym "timeseries.txt" TestFile -input "functional.nii functional2.nii" -matrix output.1D -Rbuck output.nii'
>>> res = remlfit.run() # doctest: +SKIP
Inputs:
[Mandatory]
in_files: (a list of items which are a pathlike object or string
representing an existing file)
Read time series dataset
argument: ``-input "%s"``
matrix: (a pathlike object or string representing a file)
the design matrix file, which should have been output from
Deconvolve via the 'x1D' option
argument: ``-matrix %s``
[Optional]
polort: (an integer (int or long))
if no 'matrix' option is given, AND no 'matim' option, create a
matrix with Legendre polynomial regressors up to the specified
order. The default value is 0, which produces a matrix with a
single column of all ones.
argument: ``-polort %d``
mutually_exclusive: matrix
matim: (a pathlike object or string representing a file)
read a standard file as the matrix. You can use only Col as a name
in GLTs with these nonstandard matrix input methods, since the other
names come from the 'matrix' file. These mutually exclusive options
are ignored if 'matrix' is used.
argument: ``-matim %s``
mutually_exclusive: matrix
mask: (a pathlike object or string representing an existing file)
filename of 3D mask dataset; only data time series from within the
mask will be analyzed; results for voxels outside the mask will be
set to zero.
argument: ``-mask %s``
automask: (a boolean, nipype default value: False)
build a mask automatically from input data (will be slow for long
time series datasets)
argument: ``-automask``
STATmask: (a pathlike object or string representing an existing file)
filename of 3D mask dataset to be used for the purpose of reporting
truncation-to-float issues AND for computing the FDR curves. The
actual results are NOT masked with this option (only with the
'mask' or 'automask' options).
argument: ``-STATmask %s``
addbase: (a list of items which are a pathlike object or string
representing an existing file)
file(s) to add baseline model columns to the matrix with this
option. Each column in the specified file(s) will be appended to the
matrix. File(s) must have at least as many rows as the matrix does.
argument: ``-addbase %s``
slibase: (a list of items which are a pathlike object or string
representing an existing file)
similar to 'addbase' in concept, BUT each specified file must have
an integer multiple of the number of slices in the input dataset(s);
then, separate regression matrices are generated for each slice,
with the first column of the file appended to the matrix for the
first slice of the dataset, the second column of the file appended
to the matrix for the second slice, and so on.
Intended to help model physiological noise in FMRI, or other effects
you want to regress out that might change significantly in the
inter-slice time intervals. This will slow the program down, and
make it use a lot more memory (to hold all the matrix stuff).
argument: ``-slibase %s``
slibase_sm: (a list of items which are a pathlike object or string
representing an existing file)
similar to 'slibase', BUT each file must be in slice-major order
(i.e., all slice0 columns come first, then all slice1 columns, etc.).
argument: ``-slibase_sm %s``
usetemp: (a boolean)
write intermediate stuff to disk, to economize on RAM. Using this
option might be necessary to run with 'slibase' and with 'Grid'
values above the default, since the program has to store a large
number of matrices for such a problem: two for every slice and for
every (a,b) pair in the ARMA parameter grid. Temporary files are
written to the directory given in environment variable TMPDIR, or in
/tmp, or in ./ (preference is in that order)
argument: ``-usetemp``
nodmbase: (a boolean)
by default, baseline columns added to the matrix via 'addbase' or
'slibase' or 'dsort' will each have their mean removed (as is done
in Deconvolve); this option turns this centering off
argument: ``-nodmbase``
requires: addbase, dsort
dsort: (a pathlike object or string representing an existing file)
4D dataset to be used as voxelwise baseline regressor
argument: ``-dsort %s``
dsort_nods: (a boolean)
if 'dsort' option is used, this command will output additional
results files excluding the 'dsort' file
argument: ``-dsort_nods``
requires: dsort
fout: (a boolean)
output F-statistic for each stimulus
argument: ``-fout``
rout: (a boolean)
output the R^2 statistic for each stimulus
argument: ``-rout``
tout: (a boolean)
output the T-statistic for each stimulus; if you use 'out_file' and
do not give any of 'fout', 'tout', or 'rout', then the program
assumes 'fout' is activated.
argument: ``-tout``
nofdr: (a boolean)
do NOT add FDR curve data to bucket datasets; FDR curves can take a
long time if 'tout' is used
argument: ``-noFDR``
nobout: (a boolean)
do NOT add baseline (null hypothesis) regressor betas to the
'rbeta_file' and/or 'obeta_file' output datasets.
argument: ``-nobout``
gltsym: (a list of items which are a tuple of the form: (a pathlike
object or string representing an existing file, a unicode string)
or a tuple of the form: (a unicode string, a unicode string))
read a symbolic GLT from input file and associate it with a label.
As in Deconvolve, you can also use the 'SYM:' method to provide the
definition of the GLT directly as a string (e.g., with 'SYM: +Label1
-Label2'). Unlike Deconvolve, you MUST specify 'SYM: ' if providing
the GLT directly as a string instead of from a file
argument: ``-gltsym "%s" %s...``
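Because Remlfit, unlike Deconvolve, requires the explicit 'SYM: ' prefix when a GLT is given as a string rather than a file, a small helper can normalize contrast specifications before assignment. A sketch (the file-extension check is a heuristic for illustration, not part of the interface):

```python
def as_remlfit_glt(spec, label):
    """Pair a contrast spec with a label, forcing the 'SYM: ' prefix
    on symbolic (non-file) specifications."""
    # Treat anything with a file-ish extension as a path to a GLT file;
    # everything else is a symbolic contrast needing the prefix.
    if spec.endswith((".txt", ".1D")):
        return (spec, label)
    if not spec.startswith("SYM: "):
        spec = "SYM: " + spec
    return (spec, label)

gltsym = [as_remlfit_glt("+Lab1 -Lab2", "TestSYM"),
          as_remlfit_glt("timeseries.txt", "TestFile")]
print(gltsym[0])  # ('SYM: +Lab1 -Lab2', 'TestSYM')
```

The resulting list matches what the doctest above assigns to remlfit.inputs.gltsym directly.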
out_file: (a pathlike object or string representing a file)
output dataset for beta + statistics from the REML estimation; also
contains the results of any GLT analysis requested in the Deconvolve
setup, similar to the 'bucket' output from Deconvolve. This dataset
does NOT get the betas (or statistics) of those regressors marked as
'baseline' in the matrix file.
argument: ``-Rbuck %s``
var_file: (a pathlike object or string representing a file)
output dataset for REML variance parameters
argument: ``-Rvar %s``
rbeta_file: (a pathlike object or string representing a file)
output dataset for beta weights from the REML estimation, similar to
the 'cbucket' output from Deconvolve. This dataset will contain all
the beta weights, for baseline and stimulus regressors alike, unless
the '-nobout' option is given -- in that case, this dataset will
only get the betas for the stimulus regressors.
argument: ``-Rbeta %s``
glt_file: (a pathlike object or string representing a file)
output dataset for beta + statistics from the REML estimation, but
ONLY for the GLTs added on the REMLfit command line itself via
'gltsym'; GLTs from Deconvolve's command line will NOT be included.
argument: ``-Rglt %s``
fitts_file: (a pathlike object or string representing a file)
output dataset for REML fitted model
argument: ``-Rfitts %s``
errts_file: (a pathlike object or string representing a file)
output dataset for REML residuals = data - fitted model
argument: ``-Rerrts %s``
wherr_file: (a pathlike object or string representing a file)
dataset for REML residual, whitened using the estimated ARMA(1,1)
correlation matrix of the noise
argument: ``-Rwherr %s``
quiet: (a boolean)
turn off most progress messages
argument: ``-quiet``
verb: (a boolean)
turns on more progress messages, including memory usage progress
reports at various stages
argument: ``-verb``
goforit: (a boolean)
With potential issues flagged in the design matrix, an attempt will
nevertheless be made to fit the model
argument: ``-GOFORIT``
ovar: (a pathlike object or string representing a file)
dataset for OLSQ st.dev. parameter (kind of boring)
argument: ``-Ovar %s``
obeta: (a pathlike object or string representing a file)
dataset for beta weights from the OLSQ estimation
argument: ``-Obeta %s``
obuck: (a pathlike object or string representing a file)
dataset for beta + statistics from the OLSQ estimation
argument: ``-Obuck %s``
oglt: (a pathlike object or string representing a file)
dataset for beta + statistics from 'gltsym' options
argument: ``-Oglt %s``
ofitts: (a pathlike object or string representing a file)
dataset for OLSQ fitted model
argument: ``-Ofitts %s``
oerrts: (a pathlike object or string representing a file)
dataset for OLSQ residuals (data - fitted model)
argument: ``-Oerrts %s``
num_threads: (an integer (int or long), nipype default value: 1)
set number of threads
outputtype: ('NIFTI' or 'AFNI' or 'NIFTI_GZ')
AFNI output filetype
args: (a unicode string)
Additional parameters to the command
argument: ``%s``
environ: (a dictionary with keys which are a bytes or None or a value
of class 'str' and with values which are a bytes or None or a
value of class 'str', nipype default value: {})
Environment variables
Outputs:
out_file: (a pathlike object or string representing a file)
dataset for beta + statistics from the REML estimation (if generated)
var_file: (a pathlike object or string representing a file)
dataset for REML variance parameters (if generated)
rbeta_file: (a pathlike object or string representing a file)
output dataset for beta weights from the REML estimation (if
generated)
glt_file: (a pathlike object or string representing a file)
output dataset for beta + statistics from the REML estimation, but
ONLY for the GLTs added on the REMLfit command line itself via
'gltsym' (if generated)
fitts_file: (a pathlike object or string representing a file)
output dataset for REML fitted model (if generated)
errts_file: (a pathlike object or string representing a file)
output dataset for REML residuals = data - fitted model (if
generated)
wherr_file: (a pathlike object or string representing a file)
dataset for REML residual, whitened using the estimated ARMA(1,1)
correlation matrix of the noise (if generated)
ovar: (a pathlike object or string representing a file)
dataset for OLSQ st.dev. parameter (if generated)
obeta: (a pathlike object or string representing a file)
dataset for beta weights from the OLSQ estimation (if generated)
obuck: (a pathlike object or string representing a file)
dataset for beta + statistics from the OLSQ estimation (if
generated)
oglt: (a pathlike object or string representing a file)
dataset for beta + statistics from 'gltsym' options (if generated)
ofitts: (a pathlike object or string representing a file)
dataset for OLSQ fitted model (if generated)
oerrts: (a pathlike object or string representing a file)
dataset for OLSQ residuals = data - fitted model (if generated)
Synthesize
Wraps the executable command 3dSynthesize.
Reads a '-cbucket' dataset and a '.xmat.1D' matrix from 3dDeconvolve, and synthesizes a fit dataset using user-selected sub-bricks and matrix columns.
For complete details, see the 3dSynthesize Documentation.
Examples
>>> from nipype.interfaces import afni
>>> synthesize = afni.Synthesize()
>>> synthesize.inputs.cbucket = 'functional.nii'
>>> synthesize.inputs.matrix = 'output.1D'
>>> synthesize.inputs.select = ['baseline']
>>> synthesize.cmdline
'3dSynthesize -cbucket functional.nii -matrix output.1D -select baseline'
>>> syn = synthesize.run() # doctest: +SKIP
Inputs:
[Mandatory]
cbucket: (a pathlike object or string representing a file)
Read the dataset output from 3dDeconvolve via the '-cbucket' option.
argument: ``-cbucket %s``
matrix: (a pathlike object or string representing a file)
Read the matrix output from 3dDeconvolve via the '-x1D' option.
argument: ``-matrix %s``
select: (a list of items which are a unicode string)
A list of selected columns from the matrix (and the corresponding
coefficient sub-bricks from the cbucket). Valid types include
'baseline', 'polort', 'allfunc', 'allstim', and 'all'. You can also
provide 'something', where something matches a stim_label from
3dDeconvolve, or 'digits', where digits select matrix columns by
number (starting at 0) or by number ranges of the form '3..7' and
'3-7'.
argument: ``-select %s``
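For bookkeeping on the Python side, the numeric selector strings can be expanded into explicit column indices. This is a convenience sketch only, not part of the interface; 3dSynthesize interprets the selector strings itself:

```python
def expand_select(spec):
    """Expand a numeric column selector like '3..7' or '3-7' into a
    list of 0-based column indices; pass a plain integer through."""
    for sep in ("..", "-"):
        if sep in spec:
            lo, hi = spec.split(sep)
            return list(range(int(lo), int(hi) + 1))
    return [int(spec)]

print(expand_select("3..7"))  # [3, 4, 5, 6, 7]
print(expand_select("2"))     # [2]
```

Either range form denotes the same inclusive span of matrix columns.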
[Optional]
out_file: (a pathlike object or string representing a file)
output dataset prefix name (default 'syn')
argument: ``-prefix %s``
dry_run: (a boolean)
Don't compute the output, just check the inputs.
argument: ``-dry``
TR: (a float)
TR to set in the output. The default value of TR is read from the
header of the matrix file.
argument: ``-TR %f``
cenfill: ('zero' or 'nbhr' or 'none')
Determines how censored time points from the 3dDeconvolve run will
be filled. Valid types are 'zero', 'nbhr' and 'none'.
argument: ``-cenfill %s``
num_threads: (an integer (int or long), nipype default value: 1)
set number of threads
outputtype: ('NIFTI' or 'AFNI' or 'NIFTI_GZ')
AFNI output filetype
args: (a unicode string)
Additional parameters to the command
argument: ``%s``
environ: (a dictionary with keys which are a bytes or None or a value
of class 'str' and with values which are a bytes or None or a
value of class 'str', nipype default value: {})
Environment variables
Outputs:
out_file: (a pathlike object or string representing an existing file)
output file