Run a registration of a series of 2d images.
mia-2dmyopgt-nonrigid -i <in-file> -o <out-file> [options]
mia-2dmyopgt-nonrigid This program implements the non-linear registration based on Pseudo Ground Truth for motion compensation of a series of myocardial perfusion images given as a data set, as described in Chao Li and Ying Sun, 'Nonrigid Registration of Myocardial Perfusion MRI Using Pseudo Ground Truth', In Proc. Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2009, 165-172, 2009. Note that for this non-linear motion correction a preceding linear registration step is usually required.
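Since this tool only compensates the remaining non-linear motion, it is typically run on a series that has already been pre-aligned by a linear registration step. A minimal sketch of such a two-step pipeline is given below; the first command is only a placeholder for whatever linear alignment tool is used in the workflow, and only the second command refers to this program.
<linear-registration-tool> -i original.set -o prealigned.set
mia-2dmyopgt-nonrigid -i prealigned.set -o registered.set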
File-IO
input perfusion data set
output perfusion data set
file name base for the registered files; the image file type is the same as given in the input data set
Pseudo Ground Truth estimation
spatial neighborhood penalty weight
temporal second derivative penalty weight
correlation threshold for neighborhood analysis
skip images at the beginning of the series, e.g. because they are of other modalities
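For illustration, assume the Pseudo Ground Truth parameters above are exposed as long options named --alpha (spatial weight), --beta (temporal weight), and --rho-thresh (correlation threshold); these option names are assumptions, the authoritative spelling is reported by --help. A call that sets all three explicitly and skips two images at the beginning could then look like:
mia-2dmyopgt-nonrigid -i segment.set -o registered.set -k 2 --alpha 1.0 --beta 1.0 --rho-thresh 0.85
The numeric values are placeholders, not recommended settings.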
Registration
Optimizer used for minimization. For supported plugins see PLUGINS:minimizer/singlecost
start coefficient rate in splines; gets divided by --c-rate-divider with every pass
coefficient rate divider for each pass
start divcurl weight, gets divided by --divcurl-divider with every pass
divcurl weight scaling with each new pass
image cost weight
multi-resolution levels
registration passes
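As a sketch of how the multi-pass parameters interact: with a start coefficient rate of 32, a start divcurl weight of 20, and dividers of 4, the second pass runs with a coefficient rate of 8 and a divcurl weight of 5, and so on for each further pass. Assuming the start values and the number of passes are exposed as --start-c-rate, --start-divcurl, and --passes (these names are assumptions; only --c-rate-divider and --divcurl-divider are named above, check --help for the rest), such a setup could be requested as:
mia-2dmyopgt-nonrigid -i segment.set -o registered.set --start-c-rate 32 --c-rate-divider 4 --start-divcurl 20 --divcurl-divider 4 --passes 3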
verbosity of output, print messages of the given level and higher priorities. Supported priorities starting at the lowest level are:
trace \(hy Function call trace
debug \(hy Debug output
info \(hy Low level messages
message \(hy Normal messages
warning \(hy Warnings
fail \(hy Report test failures
error \(hy Report errors
fatal \(hy Report only fatal errors
print copyright information
print this help
print a short help
print the version number and exit
Maximum number of threads to use for processing. This number should be lower than or equal to the number of logical processor cores in the machine (-1: automatic estimation).
gdas
Gradient descent with automatic step size correction, supported parameters are:
ftolr = 0 (double)
Stop if the relative change of the criterion is below this value. in [0, INF]
max-step = 2 (double)
Maximal absolute step size. in [1, INF]
maxiter = 200 (uint)
Stopping criterion: the maximum number of iterations. in [1, 2147483647]
min-step = 0.1 (double)
Minimal absolute step size. in [1e-10, INF]
xtola = 0.01 (double)
Stop if the inf-norm of the change applied to x is below this value. in [0, INF]
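Minimizer plugins are configured with a parameter string of the form 'name:key=value,key=value,...' that is passed to the optimizer option described under Registration. For example, to use this plugin with a wider step range and a tighter stopping tolerance (values chosen purely for illustration), one could pass:
gdas:min-step=0.01,max-step=4,xtola=0.001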
gdsq
Gradient descent with quadratic step estimation, supported parameters are:
ftolr = 0 (double)
Stop if the relative change of the criterion is below this value. in [0, INF]
gtola = 0 (double)
Stop if the inf-norm of the gradient is below this value. in [0, INF]
maxiter = 100 (uint)
Stopping criterion: the maximum number of iterations. in [1, 2147483647]
scale = 2 (double)
Fallback fixed step size scaling. in [1, INF]
step = 0.1 (double)
Initial step size. in [0, INF]
xtola = 0 (double)
Stop if the inf-norm of the x-update is below this value. in [0, INF]
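For example, a gdsq configuration with a smaller initial step and more iterations would be written as (illustrative values):
gdsq:step=0.01,maxiter=200,ftolr=1e-5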
gsl
optimizer plugin based on the multimin optimizers of the GNU Scientific Library (GSL) https://www.gnu.org/software/gsl/, supported parameters are:
eps = 0.01 (double)
gradient-based optimizers: stop when |grad| < eps; simplex: stop when simplex size < eps. in [1e-10, 10]
iter = 100 (int)
maximum number of iterations. in [1, 2147483647]
opt = gd (dict)
Specific optimizer to be used. Supported values are:
bfgs \(hy Broyden-Fletcher-Goldfarb-Shanno
bfgs2 \(hy Broyden-Fletcher-Goldfarb-Shanno (most efficient version)
cg-fr \(hy Fletcher-Reeves conjugate gradient algorithm
gd \(hy Gradient descent.
simplex \(hy Simplex algorithm of Nelder and Mead
cg-pr \(hy Polak-Ribiere conjugate gradient algorithm
step = 0.001 (double)
initial step size. in [0, 10]
tol = 0.1 (double)
some tolerance parameter. in [0.001, 10]
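For example, selecting the BFGS variant of the GSL minimizers with a tighter gradient tolerance would be written as (illustrative values):
gsl:opt=bfgs2,iter=300,eps=0.001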
nlopt
Minimizer algorithms using the NLOPT library, for a description of the optimizers please see 'http://ab-initio.mit.edu/wiki/index.php/NLopt_Algorithms', supported parameters are:
ftola = 0 (double)
Stopping criterion: the absolute change of the objective value is below this value. in [0, INF]
ftolr = 0 (double)
Stopping criterion: the relative change of the objective value is below this value. in [0, INF]
higher = inf (double)
Higher boundary (equal for all parameters). in [-INF, INF]
local-opt = none (dict)
local minimization algorithm that may be required for the main minimization algorithm. Supported values are:
gn-orig-direct-l \(hy Dividing Rectangles (original implementation, locally biased)
gn-direct-l-noscal \(hy Dividing Rectangles (unscaled, locally biased)
gn-isres \(hy Improved Stochastic Ranking Evolution Strategy
ld-tnewton \(hy Truncated Newton
gn-direct-l-rand \(hy Dividing Rectangles (locally biased, randomized)
ln-newuoa \(hy Derivative-free Unconstrained Optimization by Iteratively Constructed Quadratic Approximation
gn-direct-l-rand-noscale \(hy Dividing Rectangles (unscaled, locally biased, randomized)
gn-orig-direct \(hy Dividing Rectangles (original implementation)
ld-tnewton-precond \(hy Preconditioned Truncated Newton
ld-tnewton-restart \(hy Truncated Newton with steepest-descent restarting
gn-direct \(hy Dividing Rectangles
ln-neldermead \(hy Nelder-Mead simplex algorithm
ln-cobyla \(hy Constrained Optimization BY Linear Approximation
gn-crs2-lm \(hy Controlled Random Search with Local Mutation
ld-var2 \(hy Shifted Limited-Memory Variable-Metric, Rank 2
ld-var1 \(hy Shifted Limited-Memory Variable-Metric, Rank 1
ld-mma \(hy Method of Moving Asymptotes
ld-lbfgs-nocedal \(hy None
ld-lbfgs \(hy Low-storage BFGS
gn-direct-l \(hy Dividing Rectangles (locally biased)
none \(hy don't specify algorithm
ln-bobyqa \(hy Derivative-free Bound-constrained Optimization
ln-sbplx \(hy Subplex variant of Nelder-Mead
ln-newuoa-bound \(hy Derivative-free Bound-constrained Optimization by Iteratively Constructed Quadratic Approximation
ln-praxis \(hy Gradient-free Local Optimization via the Principal-Axis Method
gn-direct-noscal \(hy Dividing Rectangles (unscaled)
ld-tnewton-precond-restart \(hy Preconditioned Truncated Newton with steepest-descent restarting
lower = -inf (double)
Lower boundary (equal for all parameters). in [-INF, INF]
maxiter = 100 (int)
Stopping criterion: the maximum number of iterations. in [1, 2147483647]
opt = ld-lbfgs (dict)
main minimization algorithm. Supported values are:
gn-orig-direct-l \(hy Dividing Rectangles (original implementation, locally biased)
g-mlsl-lds \(hy Multi-Level Single-Linkage (low-discrepancy sequence; requires local gradient-based optimization and bounds)
gn-direct-l-noscal \(hy Dividing Rectangles (unscaled, locally biased)
gn-isres \(hy Improved Stochastic Ranking Evolution Strategy
ld-tnewton \(hy Truncated Newton
gn-direct-l-rand \(hy Dividing Rectangles (locally biased, randomized)
ln-newuoa \(hy Derivative-free Unconstrained Optimization by Iteratively Constructed Quadratic Approximation
gn-direct-l-rand-noscale \(hy Dividing Rectangles (unscaled, locally biased, randomized)
gn-orig-direct \(hy Dividing Rectangles (original implementation)
ld-tnewton-precond \(hy Preconditioned Truncated Newton
ld-tnewton-restart \(hy Truncated Newton with steepest-descent restarting
gn-direct \(hy Dividing Rectangles
auglag-eq \(hy Augmented Lagrangian algorithm with equality constraints only
ln-neldermead \(hy Nelder-Mead simplex algorithm
ln-cobyla \(hy Constrained Optimization BY Linear Approximation
gn-crs2-lm \(hy Controlled Random Search with Local Mutation
ld-var2 \(hy Shifted Limited-Memory Variable-Metric, Rank 2
ld-var1 \(hy Shifted Limited-Memory Variable-Metric, Rank 1
ld-mma \(hy Method of Moving Asymptotes
ld-lbfgs-nocedal \(hy None
g-mlsl \(hy Multi-Level Single-Linkage (requires local optimization and bounds)
ld-lbfgs \(hy Low-storage BFGS
gn-direct-l \(hy Dividing Rectangles (locally biased)
ln-bobyqa \(hy Derivative-free Bound-constrained Optimization
ln-sbplx \(hy Subplex variant of Nelder-Mead
ln-newuoa-bound \(hy Derivative-free Bound-constrained Optimization by Iteratively Constructed Quadratic Approximation
auglag \(hy Augmented Lagrangian algorithm
ln-praxis \(hy Gradient-free Local Optimization via the Principal-Axis Method
gn-direct-noscal \(hy Dividing Rectangles (unscaled)
ld-tnewton-precond-restart \(hy Preconditioned Truncated Newton with steepest-descent restarting
ld-slsqp \(hy Sequential Least-Squares Quadratic Programming
step = 0 (double)
Initial step size for gradient free methods. in [0, INF]
stop = -inf (double)
Stopping criterion: function value falls below this value. in [-INF, INF]
xtola = 0 (double)
Stopping criterion: the absolute change of all x-values is below this value. in [0, INF]
xtolr = 0 (double)
Stopping criterion: the relative change of all x-values is below this value. in [0, INF]
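For example, a plain low-storage BFGS run would be written as 'nlopt:opt=ld-lbfgs,xtola=0.001', while a global algorithm such as g-mlsl-lds additionally needs a local optimizer and finite bounds, e.g. (illustrative values):
nlopt:opt=g-mlsl-lds,local-opt=ld-lbfgs,lower=-10,higher=10,maxiter=300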
Register the perfusion series given in 'segment.set' by using Pseudo Ground Truth estimation. Skip two images at the beginning and otherwise use the default parameters. Store the result in 'registered.set'. mia-2dmyopgt-nonrigid -i segment.set -o registered.set -k 2
Gert Wollny
This software is Copyright (c) 1999\(hy2013 Leipzig, Germany and Madrid, Spain. It comes with ABSOLUTELY NO WARRANTY and you may redistribute it under the terms of the GNU GENERAL PUBLIC LICENSE Version 3 (or later). For more information run the program with the option '--copyright'.