tomoscan.esrf.volume.edfvolume.EDFVolume#

class tomoscan.esrf.volume.edfvolume.EDFVolume(folder=None, volume_basename=None, data=None, source_scan=None, metadata=None, data_url=None, metadata_url=None, overwrite=False, header=None, start_index=0, data_extension='edf', metadata_extension='txt')#

Bases: TIFFLikeDiskAccessor, VolumeSingleFrameBase

Save volume data to single-frame EDF files and metadata to .txt files

Warning

each file saved under {volume_basename}_{index_zfill6}.edf is considered to be a slice of the volume.

__init__(folder=None, volume_basename=None, data=None, source_scan=None, metadata=None, data_url=None, metadata_url=None, overwrite=False, header=None, start_index=0, data_extension='edf', metadata_extension='txt')#
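
A minimal usage sketch (the output folder, basename, and data below are placeholders, not taken from this page):

import numpy
from tomoscan.esrf.volume.edfvolume import EDFVolume

# hypothetical output folder and basename; 5 slices of 64 x 64 pixels
volume = EDFVolume(
    folder="/tmp/my_volume",
    volume_basename="myacquisition",
    data=numpy.zeros((5, 64, 64), dtype=numpy.float32),
    overwrite=True,
)
# writes myacquisition_000000.edf ... myacquisition_000004.edf (metadata, if any, goes to a .txt file)
volume.save()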

Methods

__init__([folder, volume_basename, data, ...])

browse_data_files([url])

param url: data url. If not provided, will take self.data_url

browse_data_urls([url])

generator on data urls used.

browse_metadata_files([url])

param url: metadata url. If not provided, will take self.metadata_url

browse_slices([url])

generator of 2D numpy arrays, each representing a slice

build_drac_metadata()

build the drac (successor of icat) metadata dict from existing volume metadata.

check_can_provide_identifier()

clear_cache()

remove objects stored in data and metadata

data_file_name_generator(n_frames, data_url)

browse output files for n_frames

data_file_saver_generator(n_frames, ...)

Provide a helper class to dump data frame by frame.

deduce_data_and_metadata_urls(url)

compute data and metadata urls from the 'parent url'. Returns (data_url, metadata_url), each a DataUrl or None

example_defined_from_str_identifier()

example, given as a string, explaining how users can define identifiers from a string

format_data_path_for_data(data_path, index, ...)

Return file path to save the frame at index of the current volume

from_identifier(identifier)

Return the Dataset from an identifier

get_bounding_box([axis])

Return the bounding box covered by the Tomo object. axis is expected to be in (0, 1, 2) or (x==0, y==1, z==2)

get_data_path_pattern_for_data(data_path, ...)

Return file path pattern (and not full path) to load data.

get_identifier()

dataset unique identifier.

get_min_max()

compute min max of the volume.

get_min_max_values([url])

compute min and max over 'data' if it exists, else by browsing the volume slice by slice

get_slice([index, axis, xy, xz, yz, url])

read a single slice of the volume

get_slices(slices)

retrieve a couple of slices along any axis:

get_volume_basename([url])

get_volume_shape([url])

return volume shape as a tuple

load()

load_chunk(chunk[, url])

Load a sub-volume.

load_data([url, store])

load volume data from disk

load_frame(file_name, scheme)

Load a single frame from a file (for volumes saving each frame in a single file)

load_metadata([url, store])

load volume metadata from disk

read_file(file_name)

rtype: tuple

read_n_columns_in_file(file_name, column_indices)

rtype: tuple

read_n_lines_in_file(file_name, line_indices)

rtype: tuple

remove_existing_data_files([url])

Clean any existing files (if overwrite is set and permissions allow) that would be used for saving

save([url])

save volume data and metadata to disk

save_data([url])

save data to the provided url or existing one if none is provided

save_frame(frame, file_name, scheme)

Save a single frame to a file (for volumes saving each frame in a single file)

save_metadata([url])

save metadata to the provided url or existing one if none is provided

select(volume[, xy, xz, yz, axis, index])

select a slice at 'index' along the given axis

select_slices(volume, slices)

Attributes

DEFAULT_DATA_EXTENSION

DEFAULT_DATA_PATH_PATTERN

DEFAULT_DATA_SCHEME

DEFAULT_METADATA_EXTENSION

DEFAULT_METADATA_PATH_PATTERN

DEFAULT_METADATA_SCHEME

EXTENSION

data

data_extension

data_url

extension

rtype: str

header

optional header for the EDF files

metadata

metadata_extension

metadata_url

overwrite

rtype: bool

pixel_size

position

position is provided as a tuple using the same axis reference as the volume data.

skip_existing_data_files_removal

For single-frame volumes, loading is done by reading all the files contained in the folder data_url.file_path().

source_scan

start_index

rtype: int

url

voxel_size

voxel size as (axis 0 dim - aka z, axis 1 dim - aka y, axis 2 dim - aka x)

browse_data_files(url=None)#
Parameters

url – data url. If not provided, will take self.data_url

return a generator going through all the existing files associated to the data volume

browse_data_urls(url=None)#

generator on data urls used.

Parameters

url – data url to be used. If not provided, will take self.data_url

browse_metadata_files(url=None)#
Parameters

url – metadata url. If not provided, will take self.metadata_url

return a generator going through all the existing files associated to the volume metadata

browse_slices(url=None)#

generator of 2D numpy arrays, each representing a slice

Parameters

url – data url to be used. If not provided, will browse self.data if it exists, else self.data_url

Warning

this will read the slices from the data on disk and never use the data property, so before browsing slices you might want to check whether data is already loaded
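
A short sketch of lazy iteration over the slices stored on disk (volume is assumed to be an EDFVolume pointing at existing data):

for slice_2d in volume.browse_slices():
    # each item is a 2D numpy array read from one EDF file
    print(slice_2d.shape)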

build_drac_metadata()#

build the drac (successor of icat) metadata dict from existing volume metadata.

Return type

dict

clear_cache()#

remove objects stored in data and metadata

data_file_name_generator(n_frames, data_url)#

browse output files for n_frames

data_file_saver_generator(n_frames, data_url, overwrite)#

Provide a helper class to dump data frame by frame. For now the only possible interaction is Helper[:] = frame

Parameters
  • n_frames – number of frames the final volume will contain

  • data_url (DataUrl) – url to dump data

  • overwrite (bool) – overwrite existing files?
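
A sketch of frame-by-frame dumping; volume is assumed to be an EDFVolume with a valid data_url and frames any sequence of 2D numpy arrays (both placeholders):

savers = volume.data_file_saver_generator(
    n_frames=len(frames),
    data_url=volume.data_url,
    overwrite=True,
)
for frame_dumper, frame in zip(savers, frames):
    frame_dumper[:] = frame  # the only supported interaction, as noted above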

deduce_data_and_metadata_urls(url)#

compute data and metadata urls from the ‘parent url’. Returns (data_url, metadata_url), each a DataUrl or None

static example_defined_from_str_identifier()#

example, given as a string, explaining how users can define identifiers from a string

Return type

str

format_data_path_for_data(data_path, index, volume_basename)#

Return file path to save the frame at index of the current volume

Return type

str

static from_identifier(identifier)#

Return the Dataset from an identifier

get_bounding_box(axis=None)#

Return the bounding box covered by the Tomo object. axis is expected to be in (0, 1, 2) or (x==0, y==1, z==2)

get_data_path_pattern_for_data(data_path, volume_basename)#

Return file path pattern (and not full path) to load data. For example in edf it can return ‘myacquisition_*.edf’ in order to be handled by

Return type

str

get_identifier()#

dataset unique identifier. Can be, for example, an HDF5 file and an entry from which the dataset can be rebuilt

Return type

EDFVolumeIdentifier
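
A sketch of an identifier round trip using only methods documented on this page (volume is assumed to be an existing EDFVolume):

identifier = volume.get_identifier()              # EDFVolumeIdentifier
rebuilt = EDFVolume.from_identifier(identifier)   # rebuild the volume from its identifier
print(EDFVolume.example_defined_from_str_identifier())  # string form users can build themselves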

get_min_max()#

compute min and max of the volume. Can take some time but avoids loading the full volume in memory

Return type

tuple

get_min_max_values(url=None)#

compute min and max over ‘data’ if it exists, else by browsing the volume slice by slice

Parameters

url – data url to be used. If not provided, will take self.data_url

Return type

tuple

get_slice(index=None, axis=None, xy=None, xz=None, yz=None, url=None)#

read a single slice of the volume
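
A sketch reading the middle slice along axis 0 (volume is assumed to point at existing data on disk):

n_slices = volume.get_volume_shape()[0]
middle_slice = volume.get_slice(index=n_slices // 2, axis=0)  # 2D numpy array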

get_slices(slices)#

retrieve a couple of slices along any axis:

For example, if you want to retrieve slice number 2 of axis 0 and slice number 56 of axis 1:

slices = volume.get_slices(
    (0, 2),
    (1, 56),
)
for (axis, slice), data in slices:
    ...
get_volume_shape(url=None)#

return volume shape as a tuple

property header: dict | None#

optional header for the EDF files

load_chunk(chunk, url=None)#

Load a sub-volume.

Parameters
  • chunk – tuple of slice objects indicating which chunk of the volume has to be loaded.

  • url – data url to be used. If not provided, will take self.data_url
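
A sketch loading a sub-volume; it assumes the volume is at least 2 x 256 x 256 and that the selected region is returned as a numpy array:

sub_volume = volume.load_chunk(
    chunk=(slice(0, 2), slice(0, 256), slice(0, 256)),
)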

load_data(url=None, store=True)#

load volume data from disk
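
A sketch reloading a previously saved volume (the folder and basename are hypothetical; the returned array is an assumption based on the store semantics):

volume = EDFVolume(folder="/tmp/my_volume", volume_basename="myacquisition")
data = volume.load_data(store=True)  # with store=True the array is also kept on volume.data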

load_frame(file_name, scheme)#

Load a single frame from a file (for volumes saving each frame in a single file)

Parameters
  • file_name – path of the file to load the frame from

  • scheme – scheme used to load the data

load_metadata(url=None, store=True)#

load volume metadata from disk

property position: tuple | None#

position is provided as a tuple using the same axis reference as the volume data. It is returned as (axis_0_pos, axis_1_pos, axis_2_pos), which can also be seen as (z_position, y_position, x_position)

remove_existing_data_files(url=None)#

Clean any existing files (if overwrite is set and permissions allow) that would be used for saving

save(url=None, **kwargs)#

save volume data and metadata to disk

save_data(url=None)#

save data to the provided url or existing one if none is provided

save_frame(frame, file_name, scheme)#

Save a single frame to a file (for volumes saving each frame in a single file)

Parameters
  • frame – frame to be saved

  • file_name – path to store the data

  • scheme – scheme to save the data

save_metadata(url=None)#

save metadata to the provided url or existing one if none is provided

static select(volume, xy=None, xz=None, yz=None, axis=None, index=None)#

select a slice at ‘index’ along the given axis

property skip_existing_data_files_removal: bool#

For single-frame volumes, loading is done by reading all the files contained in the folder data_url.file_path(). When saving the data we make sure no leftovers of any previous save remain, using the file pattern. But when we want to save a volume from several threads (one thread saving the n first frames, the next one the n following frames, …) this can be a limitation. In this case we can use ‘ignore_existing_files’, which avoids calling ‘_remove_existing_data_files’.

Return type

bool
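
A hedged sketch of the multi-writer scenario described above; it assumes the property is settable and that start_index offsets the {index_zfill6} numbering (neither is confirmed on this page):

def save_part(frames, first_index):
    part = EDFVolume(
        folder="/tmp/my_volume",            # hypothetical shared output folder
        volume_basename="myacquisition",
        data=frames,
        start_index=first_index,
        overwrite=True,
    )
    part.skip_existing_data_files_removal = True  # do not wipe frames written by other workers
    part.save_data()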

property voxel_size: tuple | None#

voxel size as (axis 0 dim - aka z, axis 1 dim - aka y, axis 2 dim - aka x)