nipype.pipeline.plugins.multiproc module

Parallel workflow execution via multiprocessing

Support for child processes running as non-daemons, based on http://stackoverflow.com/a/8963618/1183453

class nipype.pipeline.plugins.multiproc.MultiProcPlugin(plugin_args=None)

Bases: nipype.pipeline.plugins.base.DistributedPluginBase

Execute a workflow with multiprocessing, without sending more jobs at once than the system can support.

The plugin_args input to run can be used to control the multiprocessing execution and to define the maximum number of threads and amount of memory that may be used. When those parameters are not specified, the number of threads and the memory available on the system are used.

Resource-consuming nodes should be tagged:

    memory_consuming_node.mem_gb = 8
    thread_consuming_node.n_procs = 16

The default number of threads and amount of memory are set at node creation and are 1 and 0.25 GB, respectively.
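
For instance, a node could be tagged at creation time; the Function interface and the values below are placeholders, not recommendations:

    from nipype import Node
    from nipype.interfaces.utility import Function

    def heavy_work(x):
        return x * 2

    # Declare expected peak usage so the scheduler can account for it;
    # passing mem_gb/n_procs at construction is equivalent to the
    # attribute tagging shown above.
    node = Node(
        Function(input_names=["x"], output_names=["out"], function=heavy_work),
        name="heavy_node",
        mem_gb=8,
        n_procs=16,
    )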

Currently supported options are:

  • non_daemon: boolean flag to execute as non-daemon processes

  • n_procs: maximum number of threads to be executed in parallel

  • memory_gb: maximum memory (in GB) that can be used at once.

  • raise_insufficient: raise an error if the resources requested by a node exceed the maximum n_procs and/or memory_gb (default is True).

  • scheduler: sort jobs topologically ('tsort', the default) or prioritize jobs first by memory consumption and then by number of threads ('mem_thread').

  • mp_context: name of multiprocessing context to use
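
As a usage sketch, these options are passed through plugin_args when running a workflow; "wf" is an assumed, already-built Workflow and the values are illustrative:

    # Run with the MultiProc plugin, capping parallelism and memory.
    wf.run(
        plugin="MultiProc",
        plugin_args={
            "n_procs": 4,                # at most 4 threads in parallel
            "memory_gb": 8,              # at most 8 GB in use at once
            "raise_insufficient": True,  # error if a node requests more
            "scheduler": "mem_thread",   # memory first, then thread count
        },
    )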

nipype.pipeline.plugins.multiproc.process_initializer(cwd)

Initializes the environment of the child process
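
As an illustrative sketch, an initializer like this is typically handed to a process pool so that it runs once in every worker; the pool wiring below is an assumption for demonstration, not nipype's internal setup:

    import os
    from concurrent.futures import ProcessPoolExecutor

    from nipype.pipeline.plugins.multiproc import process_initializer

    # Each worker calls process_initializer(cwd) once at startup, so child
    # processes start in the intended working directory.
    pool = ProcessPoolExecutor(
        max_workers=4,
        initializer=process_initializer,
        initargs=(os.getcwd(),),
    )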

nipype.pipeline.plugins.multiproc.run_node(node, updatehash, taskid)

Execute node.run(), catch and log any errors, and return the result dictionary.

Parameters:
  • node (nipype Node instance) – the node to run

  • updatehash (boolean) – flag for updating hash

  • taskid (int) – an identifier for this task

Returns:
  result – dictionary containing the node runtime results and stats

Return type:
  dictionary
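
A minimal sketch of the documented contract (not the actual nipype source; the partial-result recovery on error is an assumption):

    import traceback

    def run_node_sketch(node, updatehash, taskid):
        # Execute node.run(), trap any error, and return a dictionary
        # keyed by taskid with the runtime results and stats.
        result = {"result": None, "traceback": None, "taskid": taskid}
        try:
            result["result"] = node.run(updatehash=updatehash)
        except Exception:
            result["traceback"] = traceback.format_exc()  # keep error text for logging
            result["result"] = node.result  # best-effort partial result (assumption)
        return result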