Nabu configuration parameters¶
This file lists all the current configuration parameters available in the configuration file.
dataset¶
Dataset location: either a directory or an HDF5-Nexus file.
location =
Which entry to process in the data HDF5 file. Default is the first entry. It can be a comma-separated list of entries, and/or a wildcard (* for all entries, or things like entry???1).
hdf5_entry =
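For example, assuming the standard [dataset] section of a nabu configuration file (the entry names below are hypothetical):

```ini
[dataset]
location = /data/scan_0001.nx
# Process two specific entries:
hdf5_entry = entry0000, entry0002
# ... or all entries matching a pattern:
# hdf5_entry = entry*
```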
Nexus version to use when browsing the HDF5 dataset. Default is 1.0.
nexus_version = 1.0
Path to a directory where XXX_flats.h5 and XXX_darks.h5 are to be found, where 'XXX' denotes the dataset basename. If these files are found, then reduced flats/darks will be loaded from them. Otherwise, reduced flats/darks will be saved there once computed. If this parameter is left empty, the files are looked for (and saved) either in the .nx file directory or in the output directory. Mind that the HDF5 entry of these files must correspond to the one of the dataset.
darks_flats_dir =
Binning factor in the horizontal dimension when reading the data. The final slices dimensions will be divided by this factor.
binning = 1
Binning factor in the vertical dimension when reading the data. This results in a lesser number of reconstructed slices.
binning_z = 1
Projections subsampling factor: take one projection out of 'projections_subsampling'. The format can be an integer N (take 1 projection out of N), or N:M (take 1 projection out of N, starting with projection number M). For example: 2 (or 2:0) to reconstruct from even projections, 2:1 to reconstruct from odd projections.
projections_subsampling = 1
Projections to exclude from the reconstruction. It can be:
indices = exclude_projections_indices.txt : path to a text file with one integer per line. Each corresponding projection INDEX will be ignored.
angles = exclude_projections_angles.txt : path to a text file with one angle in DEGREES per line. The corresponding angles will be ignored.
angular_range = [a, b] : ignore angles belonging to the angular range [a, b] in degrees, with b included.
exclude_projections =
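As a sketch, the three exclusion forms could be written as follows in the [dataset] section (the file names and angular range below are placeholders):

```ini
[dataset]
# Exclude projections listed (one integer index per line) in a text file:
exclude_projections = indices = exclude_projections_indices.txt
# Or exclude by angle (one angle in degrees per line):
# exclude_projections = angles = exclude_projections_angles.txt
# Or exclude a whole angular range, in degrees (upper bound included):
# exclude_projections = angular_range = [0, 90]
```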
Which metadata to overwrite, separated by a semicolon, and with units. Example: ‘energy = 19 kev; pixel_size = 1.6 um’
overwrite_metadata =
preproc¶
How to perform flat-field normalization. The parameter value can be:
1 or True: enabled.
0 or False: disabled
forced or force-load: perform flatfield regardless of the dataset metadata, by attempting to load darks/flats
force-compute: perform flatfield, ignore all .h5 files containing already computed darks/flats.
flatfield = 1
Whether to correct for flat distortion. If activated, each radio is correlated with its corresponding flat, in order to determine and correct the flat distortion.
flat_distortion_correction_enabled = 0
Advanced parameters for flat distortion correction
flat_distortion_params = tile_size=100; interpolation_kind='linear'; padding_mode='edge'; correction_spike_threshold=None
Whether to normalize frames with Synchrotron Current. This can correct the effect of a beam refill not taken into account by flats.
normalize_srcurrent = 1
Whether to enable the CCD hotspots correction.
ccd_filter_enabled = 0
If ccd_filter_enabled = 1, a median filter is applied on the 3x3 neighborhood of every pixel. If a pixel value exceeds the median value by more than this parameter, then the pixel value is replaced with the median value.
ccd_filter_threshold = 0.04
Apply a coordinate transformation on the raw data at the reading stage. Default (empty) is None. Available are: None, identity (for testing the pipeline), map_xz. The latter method requires two URLs passed via detector_distortion_correction_options: map_x and map_z, pointing to two 2D arrays containing the positions at which each pixel should be interpolated in the raw data.
detector_distortion_correction =
Options for detector_distortion_correction. Example, for map_xz: detector_distortion_correction_options = map_x="silx:./dm.h5?path=/coords_source_x" ; map_z="silx:./dm.h5?path=/coords_source_z". Mind the semicolon separator (;).
detector_distortion_correction_options =
Whether to enable the 'double flat-field' filtering for correcting ring artefacts.
double_flatfield_enabled = 0
Enable high-pass filtering on double flatfield with this value of ‘sigma’
dff_sigma =
Whether to take the logarithm after flat-field normalization and phase retrieval.
take_logarithm = 1
After division by the flat-field, and before taking the logarithm, the data is clipped to this minimum value. Enabled only if take_logarithm=1
log_min_clip = 1e-6
After division by the flat-field, and before taking the logarithm, the data is clipped to this maximum value. Enabled only if take_logarithm=1
log_max_clip = 10.0
Sinogram normalization method. Available methods are: chebyshev, subtraction, division, none. Default is none (no normalization)
sino_normalization =
Path to the file to use when sino_normalization is either 'subtraction' or 'division'. To specify a path within a HDF5 file, the syntax is /path/to/file?path=/entry/data
sino_normalization_file =
Path to the file where some operations should be stored for later use. By default it is ‘xxx_nabu_processes.h5’
processes_file =
Sinogram rings removal method. Default (empty) is None. Available are: None, munch, vo, mean-subtraction, mean-division. See also: sino_rings_options
sino_rings_correction =
Options for sinogram rings correction methods. The parameters are passed as 'name=value', separated by a semicolon (;). The default options are the following:
For munch: sigma=1.0 ; levels=10 ; padding=False
For vo: snr=3.0; la_size=51; sm_size=21; dim=1
For mean-subtraction and mean-division: filter_cutoff=(0, 30)
sino_rings_options =
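For instance, a minimal sketch selecting the 'vo' method and spelling out its default options explicitly (assuming the [preproc] section):

```ini
[preproc]
# Remove sinogram rings with the 'vo' method
sino_rings_correction = vo
# These values are the documented defaults, written out explicitly
sino_rings_options = snr=3.0; la_size=51; sm_size=21; dim=1
```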
Center of rotation to use when rotating the projections (i.e. when 'tilt_correction' is non-empty). By default the center of rotation is the middle of each radio, i.e. ((Nx-1)/2.0, (Ny-1)/2.0).
rotate_projections_center =
Detector tilt correction. Default (empty) means no tilt correction. The following values can be provided; for the automatic estimation methods, the projection images are rotated by the found tilt value:
A scalar value: tilt correction angle in degrees
1d-correlation: auto-detect tilt with the 1D correlation method (fastest, but works best for small tilts)
fft-polar: auto-detect tilt with polar FFT method (slower, but works well on all ranges of tilts)
tilt_correction =
Options for methods computing automatically the detector tilt. The parameters are passed as 'name=value', separated by a semicolon (;), for example: low_pass=1; high_pass=20. Use quotes ('') for string values.
autotilt_options =
phase¶
Phase retrieval method. Available are: Paganin, CTF, None
method = none
Single-distance phase retrieval related parameters
delta/beta ratio for the Paganin/CTF method
delta_beta = 100.0
Unsharp mask strength. The unsharped image is equal to UnsharpedImage = (1 + coeff)*originalPaganinImage - coeff * ConvolvedImage. Setting this coefficient to zero means that no unsharp mask will be applied.
unsharp_coeff = 0
Standard deviation of the Gaussian filter when applying an unsharp mask after the phase filtering. Disabled if set to 0.
unsharp_sigma = 0
Which type of unsharp mask filter to use. Available values are gaussian, laplacian and imagej. Default is gaussian.
unsharp_method = gaussian
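As an illustration, the unsharp mask formula above can be sketched with numpy/scipy. This is a standalone toy, not nabu's implementation; the function name and demo image are hypothetical:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, coeff, sigma):
    """Sketch of: UnsharpedImage = (1 + coeff) * img - coeff * ConvolvedImage."""
    if coeff == 0 or sigma == 0:
        # coeff = 0 or sigma = 0 disables the unsharp mask
        return img
    blurred = gaussian_filter(img, sigma)
    return (1 + coeff) * img - coeff * blurred

# Hypothetical demo image
demo = np.random.rand(32, 32).astype(np.float32)
sharpened = unsharp_mask(demo, coeff=0.5, sigma=2.0)
```

Note that the 'gaussian' variant is sketched here; the 'laplacian' and 'imagej' variants use different convolution kernels.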
Padding type for the filtering step in Paganin/CTF. Available are: mirror, edge, zeros
padding_type = edge
Geometric parameters for CTF phase retrieval. Length units are in meters.
ctf_geometry = z1_v=None; z1_h=None; detec_pixel_size=None; magnification=True
Advanced parameters for CTF phase retrieval.
ctf_advanced_params = length_scale=1e-5; lim1=1e-5; lim2=0.2; normalize_by_mean=True
reconstruction¶
Reconstruction method. Possible values: FBP, HBP, cone, MLEM, none. If value is ‘none’, no reconstruction will be done.
method = FBP
Reconstruction method implementation. The same method can have several implementations. Can be ‘nabu’, ‘corrct’, ‘astra’
implementation =
Use this if you want to override the rotation angles found in the file metadata. The angles are in degrees.
angles_file =
Rotation axis position. It can be a number or the name of an estimation method (empty value means the middle of the detector). The following methods are available to find automatically the Center of Rotation (CoR):
centered : a fast and simple auto-CoR method. It only works when the CoR is not far from the middle of the detector. It does not work for half-tomography.
global : a slow but robust auto-CoR.
sliding-window : semi-automatically find the CoR with a sliding window. You have to specify on which side the CoR is (left, center, right). Please see the ‘cor_options’ parameter.
growing-window : automatically find the CoR with a sliding-and-growing window. You can tune the option with the parameter ‘cor_options’.
sino-coarse-to-fine: Estimate the CoR from the sinogram. Only works for 360-degree scans.
composite-coarse-to-fine: Estimate the CoR from composite multi-angle images. Only works for 360-degree scans.
fourier-angles: Estimate CoR from sino based on an angular correlation analysis. You can tune the option with the parameter ‘cor_options’.
octave-accurate: Legacy port of the Octave 'accurate' COR estimation algorithm. It first estimates the COR with a global Fourier-based correlation, then refines this estimation with a local correlation based on the variance of the difference patches. You can tune the options with the parameter 'cor_options'.
vo: Method from Nghia Vo, based on double-wedge in sinogram Fourier transform (needs algotom python package)
rotation_axis_position = sliding-window
Options for methods finding automatically the rotation axis position. The parameters are passed as 'name=value', separated by a semicolon (;). For example: low_pass=1; high_pass=20. Use quotes ('') for string values. If 'side' is set, it is expected to be either:
'from_file' (to pick the value in the NX file),
or a relative CoR position in pixels (if so, it overrides the value in the NX file), or any of 'left', 'center', 'right', 'all', 'near'. The default value for 'side' is 'from_file'.
cor_options = side='from_file'
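For example, a minimal sketch of CoR estimation settings in the [reconstruction] section (the values below are placeholders):

```ini
[reconstruction]
# Let nabu estimate the CoR with a sliding window, searching on the right side
rotation_axis_position = sliding-window
cor_options = side='right'
# Alternatively, give a relative CoR position in pixels, overriding the NX file:
# cor_options = side=10.5
```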
Which slice to use for estimating the Center of Rotation (CoR). This parameter can be an integer or 'top', 'middle', 'bottom'. If provided, the CoR will be estimated from the corresponding sinogram, and 'cor_options' can contain the parameter 'subsampling'.
cor_slice =
Use this in the case where the axis position is specified for each angle.
axis_correction_file =
A file where each line describes the horizontal and vertical translations of the sample (or detector). The order is ‘horizontal, vertical’. It can be created from a numpy array saved with ‘numpy.savetxt’
translation_movements_file =
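As an illustration, such a file can be created with numpy.savetxt; the file name and drift values below are entirely hypothetical:

```python
import numpy as np

# Hypothetical drift: the sample moves horizontally by up to 2.5 pixels
# over 1800 projections, with no vertical movement.
n_projs = 1800
horizontal = np.linspace(0.0, 2.5, n_projs)
vertical = np.zeros(n_projs)

# One line per projection, columns in the order 'horizontal, vertical'
np.savetxt("translations.txt", np.column_stack([horizontal, vertical]))
```

The resulting file would then be referenced with translation_movements_file = translations.txt.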
Use this if you want to obtain a rotated reconstructed slice. The angle is in degrees.
angle_offset = 0
Filter type for FBP method. Available are: none, ramlak, shepp-logan, cosine, hamming, hann, tukey, lanczos, hilbert
fbp_filter_type = ramlak
Cut-off frequency for Fourier filter used in FBP, in normalized units. Default is the Nyquist frequency 1.0
fbp_filter_cutoff = 1.
In cone-beam geometry, distance (in meters) between the X-ray source and the center of the sample. Default is infinity.
source_sample_dist =
In cone-beam geometry, distance (in meters) between the center of the sample and the detector. Default is read from the input dataset.
sample_detector_dist =
Padding type for FBP. Available are: zeros, edges
padding_type = edges
Whether to enable half-acquisition. Default is auto. You can enable/disable it manually by setting 1 or 0.
enable_halftomo = auto
Whether to mask voxels falling outside of the reconstruction region
clip_outer_circle = 0
If ‘clip_outer_circle’ is enabled, value of the voxels falling outside of the reconstruction region.
outer_circle_value = 0
If set to true, the reconstructed region is centered on the rotation axis, i.e the center of the image will be the rotation axis position.
centered_axis = 1
How many reduction steps will be taken. At least 2. A higher number may increase speed, but may also increase interpolation errors.
hbp_reduction_steps = 2
Number of 'legs': the reconstruction is done by fragments of the whole slice. For very large slices, it can be useful to increase this number so that the reconstruction fits into GPU memory.
hbp_legs = 4
Parameters for sub-volume reconstruction. Indices start at 0, and upper bounds are INCLUDED!
(x, y) are the dimensions of a slice, and z is the 'vertical' axis. By default, the whole volume is reconstructed slice by slice along the z axis.
start_x = 0
end_x = -1
start_y = 0
end_y = -1
start_z = 0
end_z = -1
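For example, a sketch reconstructing only a vertical sub-volume while keeping the full field in (x, y) (the slice indices below are placeholders):

```ini
[reconstruction]
# Reconstruct only slices 100 to 200 along z (both bounds included)
start_z = 100
end_z = 200
# Keep the full extent in (x, y): -1 means 'up to the last index'
start_x = 0
end_x = -1
start_y = 0
end_y = -1
```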
Parameters for iterative algorithms
Number of iterations
iterations = 200
output¶
Directory where the output reconstruction is stored.
location =
File prefix. Optional, by default it is inferred from the scanned dataset.
file_prefix =
Output file format. Available are: hdf5, tiff, jp2, edf, vol
file_format = hdf5
What to do in the case where the output file exists. By default, the output data is never overwritten and the process is interrupted if the file already exists. Set this option to 1 if you want to overwrite the output files.
overwrite_results = 1
Whether to create a single large tiff file for the reconstructed volume.
tiff_single_file = 0
Compression ratio for Jpeg2000 output.
jpeg2000_compression_ratio =
Lower and upper bounds to use when converting from float32 to int. Floating point values are clipped to these (min, max) values before being cast to integer.
float_clip_values =
postproc¶
Whether to compute a histogram of the volume.
output_histogram = 0
Number of bins for the output histogram. Default is one million.
histogram_bins = 1000000
resources¶
Computations distribution method. It can be:
local: run the computations on the local machine
slurm: run the computations through SLURM
preview: reconstruct the slices/volume as quickly as possible, possibly doing some binning.
method = local
Number of GPUs to use.
gpus = 1
For method = local only. List of GPU IDs to use. This parameter overwrites ‘gpus’. If left blank, exactly one GPU will be used, and the best one will be picked.
gpu_id =
pipeline¶
Save intermediate results. This is a list of comma-separated processing steps, for ex: flatfield, phase, sinogram. Each step generates a HDF5 file in the form name_file_prefix.hdf5 (ex. ‘sinogram_file_prefix.hdf5’)
save_steps =
Resume the processing from a previously saved processing step. The corresponding file must exist in the output directory.
resume_from_step =
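As a sketch, a save-then-resume workflow could look like this in the [pipeline] section (the 'sinogram' step name comes from the example above):

```ini
[pipeline]
# First run: save the sinogram processing step alongside the reconstruction
save_steps = sinogram
# Later runs: uncomment to resume from the saved sinogram instead of recomputing it
# resume_from_step = sinogram
```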
File where the intermediate processing steps are written. By default it is empty, and intermediate processing steps are written in the same directory as the reconstructions, with a file prefix, ex. sinogram_mydataset.hdf5.
steps_file =
Level of verbosity of the processing. 0 = terse, 3 = verbose.
verbosity = 2