base_source
Module containing the BaseSource class.
BaseSource is the abstract data source class from which all concrete data sources must inherit.
Classes
BaseSource
class BaseSource(
    data_splitter: Optional[DatasetSplitter] = None,
    seed: Optional[int] = None,
    ignore_cols: Optional[Union[str, Sequence[str]]] = None,
    iterable: bool = True,
    modifiers: Optional[dict[str, DataPathModifiers]] = None,
    partition_size: int = 16,
    required_fields: Optional[Any] = None,
    **kwargs: Any,
):
Abstract Base Source from which all other data sources must inherit.
This is used for streaming data in batches as opposed to loading the entire dataset into memory.
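As a sketch of how a concrete source might build on this class (the subclass name, its constructor, and the _get_data hook shown here are illustrative assumptions, not the library's exact abstract interface):

```python
# A minimal sketch, not a verbatim template: the subclass name and the
# _get_data hook are illustrative assumptions (the exact abstract
# interface is defined by the library).
from typing import Any, Optional

import pandas as pd


class InMemorySource(BaseSource):  # hypothetical concrete source
    """Serves rows from an in-memory mapping of key -> dataframe."""

    def __init__(self, frames: dict[str, pd.DataFrame], **kwargs: Any) -> None:
        super().__init__(**kwargs)
        self._frames = frames

    def _get_data(
        self, data_keys: list[str], **kwargs: Any
    ) -> Optional[pd.DataFrame]:
        # Collect data for the requested keys, preserving key order.
        found = [self._frames[k] for k in data_keys if k in self._frames]
        return pd.concat(found, ignore_index=True) if found else None
```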
Arguments
data_splitter
: Deprecated argument, will be removed in a future release. Defaults to None. Not used.
seed
: Random number seed. Used for setting the random seed for all libraries. Defaults to None.
ignore_cols
: Column/list of columns to be ignored from the data. Defaults to None.
modifiers
: Dictionary used for modifying paths/extensions in the dataframe. Defaults to None.
partition_size
: The size of each partition when iterating over the data in a batched fashion.
Attributes
seed
: Random number seed. Used for setting random seed for all libraries.
Ancestors
Subclasses
Variables
is_initialised : bool
- Checks if BaseSource was initialised.
is_task_running : bool
- Returns True if a task is running.
Methods
add_hook
def add_hook(self, hook: DataSourceHook) ‑> None:
Add a hook to the datasource.
apply_ignore_cols
def apply_ignore_cols(self, df: pd.DataFrame) ‑> pandas.core.frame.DataFrame:
Apply ignored columns to dataframe, dropping columns as needed.
Returns
A copy of the dataframe with ignored columns removed, or the original dataframe if this datasource does not specify any ignore columns.
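Conceptually, the behaviour resembles this standalone sketch (the real method reads the ignore columns configured on the instance):

```python
import pandas as pd

def drop_ignored(df: pd.DataFrame, ignore_cols: list[str]) -> pd.DataFrame:
    # Only drop columns that are actually present in the dataframe.
    present = [c for c in ignore_cols if c in df.columns]
    # df.drop returns a copy; if nothing matches, hand back the
    # original dataframe untouched.
    return df.drop(columns=present) if present else df
```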
apply_ignore_cols_iter
def apply_ignore_cols_iter(self, dfs: Iterator[pd.DataFrame]) ‑> collections.abc.Iterator[pandas.core.frame.DataFrame]:
Apply ignored columns to dataframes from iterator.
apply_modifiers
def apply_modifiers(self, df: pd.DataFrame) ‑> pandas.core.frame.DataFrame:
Apply column modifiers to the dataframe.
If no modifiers are specified, returns the dataframe unchanged.
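A hedged usage sketch: it assumes DataPathModifiers behaves like a dict with optional "prefix"/"suffix" keys and that the modifiers mapping is keyed by column name (check the library's DataPathModifiers definition before relying on this):

```python
import pandas as pd

# Assumption: DataPathModifiers is dict-like with optional "prefix"/"suffix"
# keys, and the modifiers mapping is keyed by the column to rewrite.
modifiers = {"image_path": {"prefix": "/mnt/data/"}}
source = InMemorySource(frames={}, modifiers=modifiers)  # hypothetical source from above

df = pd.DataFrame({"image_path": ["a.png", "b.png"]})
df = source.apply_modifiers(df)  # expected: ["/mnt/data/a.png", "/mnt/data/b.png"]
```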
get_data
def get_data(
    self,
    data_keys: SingleOrMulti[str],
    *,
    use_cache: bool = True,
    **kwargs: Any,
) ‑> Optional[pandas.core.frame.DataFrame]:
Get data corresponding to the provided data key(s).
Can be used to return data for a single data key or for multiple at once. If used for multiple, the order of the output dataframe must match the order of the keys provided.
Arguments
data_keys
: Key(s) to get data for. These may be things such as file names, UUIDs, etc.
use_cache
: Whether the cache should be used to retrieve data for these keys. Note that cached data may have some elements, particularly image-related fields such as image data or file paths, replaced with placeholder values when stored in the cache. If a data cache is set on the instance, data will be set in the cache, regardless of this argument.
**kwargs
: Additional keyword arguments.
Returns
A dataframe containing the data, ordered to match the order of keys in data_keys, or None if no data for those keys was available.
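For example (assuming source is an initialised concrete datasource and the keys are file names it knows about):

```python
df = source.get_data(["scan_001.dcm", "scan_002.dcm"], use_cache=True)
if df is None:
    print("no data available for those keys")
else:
    # Row order follows the order of the keys passed in.
    print(df.head())
```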
get_project_db_sqlite_columns
def get_project_db_sqlite_columns(self) ‑> list[str]:
Implement this method to get the required columns.
This is used by the "run on new data only" feature to add data to the task table in the project database.
get_project_db_sqlite_create_table_query
def get_project_db_sqlite_create_table_query(self) ‑> str:
Implement this method to return the required columns and types.
This is used by the "run on new data only" feature. This should be in the format that can be used after a "CREATE TABLE" statement and is used to create the task table in the project database.
partition
def partition(self, iterable: Iterable[_I], partition_size: int = 1) ‑> collections.abc.Iterable[collections.abc.Sequence[~_I]]:
Takes an iterable and yields partitions of size partition_size. The final partition may be smaller than partition_size due to the variable length of the iterable.
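A sketch of this behaviour using itertools (the library's own implementation may differ):

```python
from itertools import islice
from typing import Iterable, Iterator, Sequence, TypeVar

_I = TypeVar("_I")

def partition(iterable: Iterable[_I], partition_size: int = 1) -> Iterator[Sequence[_I]]:
    it = iter(iterable)
    # Pull partition_size items at a time; the last chunk may be shorter.
    while chunk := tuple(islice(it, partition_size)):
        yield chunk

assert list(partition(range(5), 2)) == [(0, 1), (2, 3), (4,)]
```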
remove_hook
def remove_hook(self, hook: DataSourceHook) ‑> None:
Remove a hook from the datasource.
yield_data
def yield_data(
    self,
    data_keys: Optional[SingleOrMulti[str]] = None,
    *,
    use_cache: bool = True,
    partition_size: Optional[int] = None,
    **kwargs: Any,
) ‑> collections.abc.Iterator[pandas.core.frame.DataFrame]:
Yields data in batches from this source.
If data_keys is specified, only yield from that subset of the data. Otherwise, iterate through the whole datasource.
Arguments
data_keys
: An optional list of data keys to use for yielding data. Otherwise, all data in the datasource will be considered. data_keys is always provided when this method is called from the Dataset as part of a task.
use_cache
: Whether the cache should be used to retrieve data for these data points. Note that cached data may have some elements, particularly image-related fields such as image data or file paths, replaced with placeholder values when stored in the cache. If a data cache is set on the instance, data will be set in the cache, regardless of this argument.
partition_size
: The number of data elements to load/yield in each iteration. If not provided, defaults to the partition size configured in the datasource.
**kwargs
: Additional keyword arguments.
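For example, streaming a datasource in batches of 32 rows (assuming source is an initialised concrete datasource):

```python
for batch_df in source.yield_data(partition_size=32):
    # Each batch_df is a pandas DataFrame of up to 32 rows.
    print(len(batch_df))
```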
FileSystemIterableSource
class FileSystemIterableSource(
    path: Union[os.PathLike, str],
    output_path: Optional[Union[os.PathLike, str]] = None,
    iterable: bool = True,
    fast_load: bool = True,
    cache_images: bool = False,
    filter: Optional[FileSystemFilter] = None,
    data_splitter: Optional[DatasetSplitter] = None,
    seed: Optional[int] = None,
    ignore_cols: Optional[Union[str, Sequence[str]]] = None,
    modifiers: Optional[dict[str, DataPathModifiers]] = None,
    partition_size: int = 16,
    required_fields: Optional[Any] = None,
):
Abstract base source that supports iterating over file-based data.
This is used for iterable data sources whose data is stored as files on disk.
Arguments
cache_images
: Whether to cache images in the file system. Defaults to False. This is ignored if fast_load is True.
data_splitter
: Deprecated argument, will be removed in a future release. Defaults to None. Not used.
fast_load
: Whether the data will be loaded in fast mode. This is used to determine whether the data will be iterated over during set-up for schema generation and splitting (where necessary). Only relevant if iterable is True, otherwise it is ignored. Defaults to True.
ignore_cols
: Column/list of columns to be ignored from the data. Defaults to None.
iterable
: Whether the data source is iterable. This is used to determine whether the data source can be used in a streaming context during a task. Defaults to True.
modifiers
: Dictionary used for modifying paths/extensions in the dataframe. Defaults to None.
output_path
: The path at which to save intermediary output files. Defaults to 'preprocessed/'.
partition_size
: The size of each partition when iterating over the data in a batched fashion.
path
: Path to the directory which contains the data files. Subdirectories will be searched recursively.
seed
: Random number seed. Used for setting the random seed for all libraries. Defaults to None.
Attributes
seed
: Random number seed. Used for setting random seed for all libraries.
Raises
ValueError
: If iterable is False, fast_load is False, or cache_images is True.
Subclasses
Variables
file_names : list[str]
- Returns a list of file names in the specified directory. This property accounts for files skipped at runtime by filtering them out of the list of cached file names. Files may get skipped at runtime due to errors or because they don't contain any image data and images_only is True. This allows us to skip these files again more quickly if they are still present in the directory.
is_initialised : bool
- Checks if BaseSource was initialised.
is_task_running : bool
- Returns True if a task is running.
path : pathlib.Path
- Resolved absolute path to data. Provides a consistent version of the path provided by the user which should work regardless of operating system and directory structure.
selected_file_names : list[str]
- Returns a list of selected file names. Selected file names are affected by the selected_file_names_override and new_file_names_only attributes.
selected_file_names_differ : bool
- Returns True if selected_file_names will differ from the default. In particular, returns True iff there is a selected file names override in place and/or there is filtering for new file names only present.
Static methods
get_num_workers
def get_num_workers(file_names: Sequence[str]) ‑> int:
Inherited from:
MultiProcessingMixIn.get_num_workers :
Gets the number of workers to use for multiprocessing.
Ensures that the number of workers is at least 1 and at most equal to MAX_NUM_MULTIPROCESSING_WORKERS. If the number of files is less than MAX_NUM_MULTIPROCESSING_WORKERS, the number of files is used as the number of workers, unless the number of machine cores is also less than MAX_NUM_MULTIPROCESSING_WORKERS, in which case the lower of the two is used.
Arguments
file_names
: The list of file names to load.
Returns
The number of workers to use for multiprocessing.
Methods
add_hook
def add_hook(self, hook: DataSourceHook) ‑> None:
Inherited from:
BaseSource.add_hook :
Add a hook to the datasource.
apply_ignore_cols
def apply_ignore_cols(self, df: pd.DataFrame) ‑> pandas.core.frame.DataFrame:
Inherited from:
BaseSource.apply_ignore_cols :
Apply ignored columns to dataframe, dropping columns as needed.
Returns
A copy of the dataframe with ignored columns removed, or the original dataframe if this datasource does not specify any ignore columns.
apply_ignore_cols_iter
def apply_ignore_cols_iter(self, dfs: Iterator[pd.DataFrame]) ‑> collections.abc.Iterator[pandas.core.frame.DataFrame]:
Inherited from:
BaseSource.apply_ignore_cols_iter :
Apply ignored columns to dataframes from iterator.
apply_modifiers
def apply_modifiers(self, df: pd.DataFrame) ‑> pandas.core.frame.DataFrame:
Inherited from:
BaseSource.apply_modifiers :
Apply column modifiers to the dataframe.
If no modifiers are specified, returns the dataframe unchanged.
clear_file_names_cache
def clear_file_names_cache(self) ‑> None:
Clears the list of selected file names.
This allows the datasource to pick up any new files that have been added to the directory since the last time it was cached.
file_names_iter
def file_names_iter(self, as_strs: bool = False) ‑> Union[collections.abc.Iterator[pathlib.Path], collections.abc.Iterator[str]]:
Iterate over files in a directory, yielding those that match the criteria.
Arguments
as_strs
: By default, files are yielded as Path objects. If this is True, they are yielded as strings instead.
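Usage sketch (assuming source is an initialised FileSystemIterableSource subclass):

```python
for path in source.file_names_iter():  # yields pathlib.Path objects
    print(path.suffix)

for name in source.file_names_iter(as_strs=True):  # yields plain strings
    print(name)
```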
get_data
def get_data(
    self,
    data_keys: SingleOrMulti[str],
    *,
    use_cache: bool = True,
    **kwargs: Any,
) ‑> Optional[pandas.core.frame.DataFrame]:
Inherited from:
BaseSource.get_data :
Get data corresponding to the provided data key(s).
Can be used to return data for a single data key or for multiple at once. If used for multiple, the order of the output dataframe must match the order of the keys provided.
Arguments
data_keys
: Key(s) to get data for. These may be things such as file names, UUIDs, etc.
use_cache
: Whether the cache should be used to retrieve data for these keys. Note that cached data may have some elements, particularly image-related fields such as image data or file paths, replaced with placeholder values when stored in the cache. If a data cache is set on the instance, data will be set in the cache, regardless of this argument.
**kwargs
: Additional keyword arguments.
Returns
A dataframe containing the data, ordered to match the order of keys in data_keys, or None if no data for those keys was available.
get_project_db_sqlite_columns
def get_project_db_sqlite_columns(self) ‑> list[str]:
Returns the required columns to identify a data point.
get_project_db_sqlite_create_table_query
def get_project_db_sqlite_create_table_query(self) ‑> str:
Returns the required columns and types to identify a data point.
The file name is used as the primary key and the last modified date is used to determine if the file has been updated since the last time it was processed. If there is a conflict on the file name, the row is replaced with the new data to ensure that the last modified date is always up to date.
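A hedged sketch of how the returned query might be consumed; the table and column names in the comment are assumptions, as the exact fragment is implementation-defined:

```python
import sqlite3

# The method returns the fragment that follows "CREATE TABLE", e.g. it
# might resemble: 'task_table ("file_name" TEXT PRIMARY KEY, "last_modified" TEXT)'
# (illustrative names only).
con = sqlite3.connect("project.db")
fragment = source.get_project_db_sqlite_create_table_query()
con.execute(f"CREATE TABLE IF NOT EXISTS {fragment}")
con.commit()
```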
partition
def partition(self, iterable: Iterable[_I], partition_size: int = 1) ‑> collections.abc.Iterable[collections.abc.Sequence[~_I]]:
Inherited from:
BaseSource.partition :
Takes an iterable and yields partitions of size partition_size. The final partition may be smaller than partition_size due to the variable length of the iterable.
remove_hook
def remove_hook(self, hook: DataSourceHook) ‑> None:
Inherited from:
BaseSource.remove_hook :
Remove a hook from the datasource.
use_file_multiprocessing
def use_file_multiprocessing(self, file_names: Sequence[str]) ‑> bool:
Inherited from:
MultiProcessingMixIn.use_file_multiprocessing :
Check if file multiprocessing should be used.
Returns True if file multiprocessing has been enabled by the environment variable and the number of workers would be greater than 1, otherwise False. There is no need to use file multiprocessing if we are just going to use one worker - it would be slower than just loading the data in the main process.
Returns True if file multiprocessing should be used, otherwise False.
yield_data
def yield_data(
    self,
    data_keys: Optional[SingleOrMulti[str]] = None,
    *,
    use_cache: bool = True,
    partition_size: Optional[int] = None,
    **kwargs: Any,
) ‑> collections.abc.Iterator[pandas.core.frame.DataFrame]:
Inherited from:
BaseSource.yield_data :
Yields data in batches from this source.
If data_keys is specified, only yield from that subset of the data. Otherwise, iterate through the whole datasource.
Arguments
data_keys
: An optional list of data keys to use for yielding data. Otherwise, all data in the datasource will be considered. data_keys is always provided when this method is called from the Dataset as part of a task.
use_cache
: Whether the cache should be used to retrieve data for these data points. Note that cached data may have some elements, particularly image-related fields such as image data or file paths, replaced with placeholder values when stored in the cache. If a data cache is set on the instance, data will be set in the cache, regardless of this argument.
partition_size
: The number of data elements to load/yield in each iteration. If not provided, defaults to the partition size configured in the datasource.
**kwargs
: Additional keyword arguments.
FileSystemIterableSourceInferrable
class FileSystemIterableSourceInferrable(
    path: Union[os.PathLike, str],
    data_cache: Optional[DataPersister] = None,
    infer_class_labels_from_filepaths: bool = False,
    output_path: Optional[Union[os.PathLike, str]] = None,
    iterable: bool = True,
    fast_load: bool = True,
    cache_images: bool = False,
    filter: Optional[FileSystemFilter] = None,
    data_splitter: Optional[DatasetSplitter] = None,
    seed: Optional[int] = None,
    ignore_cols: Optional[Union[str, Sequence[str]]] = None,
    modifiers: Optional[dict[str, DataPathModifiers]] = None,
    partition_size: int = 16,
    required_fields: Optional[Any] = None,
):
Base source that supports iterating over folder-labelled, file-based data.
This is used for data sources whose data is stored as files on disk, and for which the folder structure (potentially) contains labelling information (e.g. the files are split into "test/", "train/", and "validate/" folders).
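For instance, a layout like this (illustrative) would allow split labels to be inferred from the first directory level under path:

```
path/
├── train/
│   ├── scan_001.dcm
│   └── scan_002.dcm
├── test/
│   └── scan_003.dcm
└── validate/
    └── scan_004.dcm
```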
Arguments
cache_images
: Whether to cache images in the file system. Defaults to False. This is ignored if fast_load is True.
data_cache
: A DataPersister instance to use for data caching.
data_splitter
: Deprecated argument, will be removed in a future release. Defaults to None. Not used.
fast_load
: Whether the data will be loaded in fast mode. This is used to determine whether the data will be iterated over during set-up for schema generation and splitting (where necessary). Only relevant if iterable is True, otherwise it is ignored. Defaults to True.
ignore_cols
: Column/list of columns to be ignored from the data. Defaults to None.
infer_class_labels_from_filepaths
: Whether class labels should be added to the data based on the filepath of the files. Defaults to the first directory within self.path, but can go a level deeper if the data splitter is provided with infer_data_split_labels set to True.
iterable
: Whether the data source is iterable. This is used to determine whether the data source can be used in a streaming context during a task. Defaults to True.
modifiers
: Dictionary used for modifying paths/extensions in the dataframe. Defaults to None.
output_path
: The path at which to save intermediary output files. Defaults to 'preprocessed/'.
partition_size
: The size of each partition when iterating over the data in a batched fashion.
path
: Path to the directory which contains the data files. Subdirectories will be searched recursively.
seed
: Random number seed. Used for setting the random seed for all libraries. Defaults to None.
Attributes
seed
: Random number seed. Used for setting random seed for all libraries.
Raises
ValueError
: If iterable is False, fast_load is False, or cache_images is True.
Subclasses
- DICOMSource
- bitfount.data.datasources.ophthalmology.ophthalmology_base_source._OphthalmologySource
Variables
file_names : list[str]
- Returns a list of file names in the specified directory. This property accounts for files skipped at runtime by filtering them out of the list of cached file names. Files may get skipped at runtime due to errors or because they don't contain any image data and images_only is True. This allows us to skip these files again more quickly if they are still present in the directory.
is_initialised : bool
- Checks if BaseSource was initialised.
is_task_running : bool
- Returns True if a task is running.
path : pathlib.Path
- Resolved absolute path to data. Provides a consistent version of the path provided by the user which should work regardless of operating system and directory structure.
selected_file_names : list[str]
- Returns a list of selected file names. Selected file names are affected by the selected_file_names_override and new_file_names_only attributes.
selected_file_names_differ : bool
- Returns True if selected_file_names will differ from the default. In particular, returns True iff there is a selected file names override in place and/or there is filtering for new file names only present.
Static methods
get_num_workers
def get_num_workers(file_names: Sequence[str]) ‑> int:
Inherited from:
FileSystemIterableSource.get_num_workers :
Gets the number of workers to use for multiprocessing.
Ensures that the number of workers is at least 1 and at most equal to MAX_NUM_MULTIPROCESSING_WORKERS. If the number of files is less than MAX_NUM_MULTIPROCESSING_WORKERS, the number of files is used as the number of workers, unless the number of machine cores is also less than MAX_NUM_MULTIPROCESSING_WORKERS, in which case the lower of the two is used.
Arguments
file_names
: The list of file names to load.
Returns
The number of workers to use for multiprocessing.
Methods
add_hook
def add_hook(self, hook: DataSourceHook) ‑> None:
Inherited from:
FileSystemIterableSource.add_hook :
Add a hook to the datasource.
apply_ignore_cols
def apply_ignore_cols(self, df: pd.DataFrame) ‑> pandas.core.frame.DataFrame:
Inherited from:
FileSystemIterableSource.apply_ignore_cols :
Apply ignored columns to dataframe, dropping columns as needed.
Returns
A copy of the dataframe with ignored columns removed, or the original dataframe if this datasource does not specify any ignore columns.
apply_ignore_cols_iter
def apply_ignore_cols_iter(self, dfs: Iterator[pd.DataFrame]) ‑> collections.abc.Iterator[pandas.core.frame.DataFrame]:
Inherited from:
FileSystemIterableSource.apply_ignore_cols_iter :
Apply ignored columns to dataframes from iterator.
apply_modifiers
def apply_modifiers(self, df: pd.DataFrame) ‑> pandas.core.frame.DataFrame:
Inherited from:
FileSystemIterableSource.apply_modifiers :
Apply column modifiers to the dataframe.
If no modifiers are specified, returns the dataframe unchanged.
clear_file_names_cache
def clear_file_names_cache(self) ‑> None:
Inherited from:
FileSystemIterableSource.clear_file_names_cache :
Clears the list of selected file names.
This allows the datasource to pick up any new files that have been added to the directory since the last time it was cached.
file_names_iter
def file_names_iter(self, as_strs: bool = False) ‑> Union[collections.abc.Iterator[pathlib.Path], collections.abc.Iterator[str]]:
Inherited from:
FileSystemIterableSource.file_names_iter :
Iterate over files in a directory, yielding those that match the criteria.
Arguments
as_strs
: By default, files are yielded as Path objects. If this is True, they are yielded as strings instead.
get_data
def get_data(
    self,
    data_keys: SingleOrMulti[str],
    *,
    use_cache: bool = True,
    **kwargs: Any,
) ‑> Optional[pandas.core.frame.DataFrame]:
Inherited from:
FileSystemIterableSource.get_data :
Get data corresponding to the provided data key(s).
Can be used to return data for a single data key or for multiple at once. If used for multiple, the order of the output dataframe must match the order of the keys provided.
Arguments
data_keys
: Key(s) to get data for. These may be things such as file names, UUIDs, etc.
use_cache
: Whether the cache should be used to retrieve data for these keys. Note that cached data may have some elements, particularly image-related fields such as image data or file paths, replaced with placeholder values when stored in the cache. If a data cache is set on the instance, data will be set in the cache, regardless of this argument.
**kwargs
: Additional keyword arguments.
Returns
A dataframe containing the data, ordered to match the order of keys in data_keys, or None if no data for those keys was available.
get_project_db_sqlite_columns
def get_project_db_sqlite_columns(self) ‑> list[str]:
Inherited from:
FileSystemIterableSource.get_project_db_sqlite_columns :
Returns the required columns to identify a data point.
get_project_db_sqlite_create_table_query
def get_project_db_sqlite_create_table_query(self) ‑> str:
Inherited from:
FileSystemIterableSource.get_project_db_sqlite_create_table_query :
Returns the required columns and types to identify a data point.
The file name is used as the primary key and the last modified date is used to determine if the file has been updated since the last time it was processed. If there is a conflict on the file name, the row is replaced with the new data to ensure that the last modified date is always up to date.
partition
def partition(self, iterable: Iterable[_I], partition_size: int = 1) ‑> collections.abc.Iterable[collections.abc.Sequence[~_I]]:
Inherited from:
FileSystemIterableSource.partition :
Takes an iterable and yields partitions of size partition_size. The final partition may be smaller than partition_size due to the variable length of the iterable.
remove_hook
def remove_hook(self, hook: DataSourceHook) ‑> None:
Inherited from:
FileSystemIterableSource.remove_hook :
Remove a hook from the datasource.
use_file_multiprocessing
def use_file_multiprocessing(self, file_names: Sequence[str]) ‑> bool:
Inherited from:
FileSystemIterableSource.use_file_multiprocessing :
Check if file multiprocessing should be used.
Returns True if file multiprocessing has been enabled by the environment variable and the number of workers would be greater than 1, otherwise False. There is no need to use file multiprocessing if we are just going to use one worker - it would be slower than just loading the data in the main process.
Returns True if file multiprocessing should be used, otherwise False.
yield_data
def yield_data(
    self,
    data_keys: Optional[SingleOrMulti[str]] = None,
    *,
    use_cache: bool = True,
    partition_size: Optional[int] = None,
    **kwargs: Any,
) ‑> collections.abc.Iterator[pandas.core.frame.DataFrame]:
Inherited from:
FileSystemIterableSource.yield_data :
Yields data in batches from this source.
If data_keys is specified, only yield from that subset of the data. Otherwise, iterate through the whole datasource.
Arguments
data_keys
: An optional list of data keys to use for yielding data. Otherwise, all data in the datasource will be considered. data_keys is always provided when this method is called from the Dataset as part of a task.
use_cache
: Whether the cache should be used to retrieve data for these data points. Note that cached data may have some elements, particularly image-related fields such as image data or file paths, replaced with placeholder values when stored in the cache. If a data cache is set on the instance, data will be set in the cache, regardless of this argument.
partition_size
: The number of data elements to load/yield in each iteration. If not provided, defaults to the partition size configured in the datasource.
**kwargs
: Additional keyword arguments.
MultiProcessingMixIn
class MultiProcessingMixIn():
MixIn class for multiprocessing of _get_data.
Subclasses
Variables
- static data_cache : Optional[DataPersister]
- static image_columns : set[str]
- static skipped_files : set[str]
Static methods
get_num_workers
def get_num_workers(file_names: Sequence[str]) ‑> int:
Gets the number of workers to use for multiprocessing.
Ensures that the number of workers is at least 1 and at most equal to MAX_NUM_MULTIPROCESSING_WORKERS. If the number of files is less than MAX_NUM_MULTIPROCESSING_WORKERS, the number of files is used as the number of workers, unless the number of machine cores is also less than MAX_NUM_MULTIPROCESSING_WORKERS, in which case the lower of the two is used.
Arguments
file_names
: The list of file names to load.
Returns
The number of workers to use for multiprocessing.
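A sketch matching the described behaviour (MAX_NUM_MULTIPROCESSING_WORKERS is a library constant; the value below is illustrative):

```python
import os
from typing import Sequence

MAX_NUM_MULTIPROCESSING_WORKERS = 8  # illustrative value; the library defines the real one

def get_num_workers(file_names: Sequence[str]) -> int:
    cores = os.cpu_count() or 1
    # At least 1 worker; otherwise bounded by the file count, the core
    # count, and the configured maximum.
    return max(1, min(len(file_names), cores, MAX_NUM_MULTIPROCESSING_WORKERS))
```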
Methods
use_file_multiprocessing
def use_file_multiprocessing(self, file_names: Sequence[str]) ‑> bool:
Check if file multiprocessing should be used.
Returns True if file multiprocessing has been enabled by the environment variable and the number of workers would be greater than 1, otherwise False. There is no need to use file multiprocessing if we are just going to use one worker - it would be slower than just loading the data in the main process.
Returns True if file multiprocessing should be used, otherwise False.
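A sketch of the described check; the environment-variable name used here is an assumption, not the library's actual flag:

```python
import os
from typing import Sequence

def use_file_multiprocessing(self, file_names: Sequence[str]) -> bool:
    # Hypothetical flag name: consult the library docs for the real one.
    enabled = os.environ.get("FILE_MULTIPROCESSING_ENABLED", "").lower() == "true"
    # With a single worker, multiprocessing would only add overhead.
    return enabled and self.get_num_workers(file_names) > 1
```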