
csv_source

Module containing the CSVSource class.

The CSVSource class handles loading of CSV data.

Classes

CSVSource

class CSVSource(
    path: Union[os.PathLike, AnyUrl, str],
    read_csv_kwargs: Optional[dict[str, Any]] = None,
    modifiers: Optional[dict[str, DataPathModifiers]] = None,
    data_splitter: Optional[DatasetSplitter] = None,
    seed: Optional[int] = None,
    ignore_cols: Optional[Union[str, Sequence[str]]] = None,
    iterable: bool = True,
    partition_size: int = 16,
    required_fields: Optional[Any] = None,
):

Data source for loading CSV files.

Arguments

  • data_splitter: Deprecated; not used and will be removed in a future release. Defaults to None.
  • ignore_cols: Column/list of columns to be ignored from the data. Defaults to None.
  • modifiers: Dictionary used for modifying paths/extensions in the dataframe. Defaults to None.
  • partition_size: The size of each partition when iterating over the data in a batched fashion.
  • path: The path or URL to the CSV file.
  • read_csv_kwargs: Additional arguments to be passed as a dictionary to pandas.read_csv. Defaults to None.
  • seed: Random number seed. Used for setting random seed for all libraries. Defaults to None.

Attributes

  • seed: Random number seed. Used for setting random seed for all libraries.

Variables

  • is_initialised: bool - Checks if BaseSource was initialised.
  • is_task_running: bool - Returns True if a task is running.
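
A minimal construction sketch is shown below. The import statement is only illustrative (based on the module name documented above) and the file path, column names, and pandas options are assumptions, not values taken from this documentation.

    from csv_source import CSVSource  # illustrative import; adjust to your package's actual path

    source = CSVSource(
        path="data/records.csv",                     # local path or URL to the CSV file (hypothetical)
        read_csv_kwargs={"sep": ",", "dtype": str},  # forwarded to pandas.read_csv
        ignore_cols=["internal_id"],                 # column(s) to drop from the data (hypothetical)
        seed=42,                                     # random seed for all libraries
        partition_size=16,                           # batch size used when iterating
    )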

Methods


add_hook

def add_hook(self, hook: DataSourceHook) -> None:

Inherited from:

BaseSource.add_hook:

Add a hook to the datasource.

apply_ignore_cols

def apply_ignore_cols(self, df: pd.DataFrame) -> pandas.core.frame.DataFrame:

Inherited from:

BaseSource.apply_ignore_cols:

Apply ignored columns to dataframe, dropping columns as needed.

Returns

A copy of the dataframe with ignored columns removed, or the original dataframe if this datasource does not specify any ignore columns.
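
As a small illustrative sketch, reusing the source constructed in the example above (where ignore_cols=["internal_id"]); the column names are hypothetical:

    import pandas as pd

    df = pd.DataFrame({"age": [34, 51], "internal_id": ["a1", "b2"]})

    # The returned copy no longer contains the ignored column.
    cleaned = source.apply_ignore_cols(df)
    print(list(cleaned.columns))  # expected: ["age"]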

apply_ignore_cols_iter

def apply_ignore_cols_iter(self, dfs: Iterator[pd.DataFrame]) -> collections.abc.Iterator[pandas.core.frame.DataFrame]:

Inherited from:

BaseSource.apply_ignore_cols_iter:

Apply ignored columns to dataframes from iterator.

apply_modifiers

def apply_modifiers(self, df: pd.DataFrame) -> pandas.core.frame.DataFrame:

Inherited from:

BaseSource.apply_modifiers:

Apply column modifiers to the dataframe.

If no modifiers are specified, returns the dataframe unchanged.
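
A rough sketch of how modifiers might be supplied and applied. The "prefix"/"suffix" keys used below for the DataPathModifiers mappings are assumptions, as are the column names; check the DataPathModifiers documentation for the exact fields.

    import pandas as pd

    # Hypothetical modifiers: prepend a directory to the "image" column and
    # append an extension to the "scan" column (assumed DataPathModifiers fields).
    source_with_modifiers = CSVSource(
        path="data/records.csv",
        modifiers={
            "image": {"prefix": "/mnt/images/"},
            "scan": {"suffix": ".dcm"},
        },
    )

    raw_df = pd.DataFrame({"image": ["img1.png"], "scan": ["scan1"]})
    modified_df = source_with_modifiers.apply_modifiers(raw_df)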

get_data

def get_data(
    self,
    data_keys: SingleOrMulti[str] | SingleOrMulti[int],
    *,
    use_cache: bool = True,
    **kwargs: Any,
) -> Optional[pandas.core.frame.DataFrame]:

Inherited from:

BaseSource.get_data:

Get data corresponding to the provided data key(s).

Can be used to return data for a single data key or for multiple at once. If used for multiple, the order of the output dataframe must match the order of the keys provided.

Arguments

  • data_keys: Key(s) for which to get data. These may be things such as file names, UUIDs, etc.
  • use_cache: Whether the cache should be used to retrieve data for these keys. Note that cached data may have some elements, particularly image-related fields such as image data or file paths, replaced with placeholder values when stored in the cache. If datacache is set on the instance, data will be set in the cache regardless of this argument.
  • **kwargs: Additional keyword arguments.

Returns

A dataframe containing the data, ordered to match the order of keys in data_keys, or None if no data for those keys was available.
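
A usage sketch; the data keys below are hypothetical and depend on how the datasource keys its records (e.g. file names or UUIDs):

    # Single key: returns a dataframe for that record, or None if unavailable.
    row = source.get_data("record_0001")

    # Multiple keys: the returned dataframe preserves the order of the keys.
    batch = source.get_data(["record_0001", "record_0002"], use_cache=False)
    if batch is not None:
        print(len(batch))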

get_project_db_sqlite_columns

def get_project_db_sqlite_columns(self) -> list[str]:

Inherited from:

BaseSource.get_project_db_sqlite_columns:

Implement this method to get the required columns.

This is used by the "run on new data only" feature to add data to the task table in the project database.

get_project_db_sqlite_create_table_query

def get_project_db_sqlite_create_table_query(self) -> str:

Inherited from:

BaseSource.get_project_db_sqlite_create_table_query:

Implement this method to return the required columns and types.

This is used by the "run on new data only" feature. The returned string should be in a format that can be used after a "CREATE TABLE" statement, as it is used to create the task table in the project database.
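
A hedged sketch of what a subclass overriding these two methods might return; the column names and types are purely illustrative assumptions, and the exact SQL wrapping applied by the caller is assumed rather than documented here.

    class MyCSVSource(CSVSource):
        def get_project_db_sqlite_columns(self) -> list[str]:
            # Hypothetical columns tracked in the project database task table.
            return ["file_name", "last_modified"]

        def get_project_db_sqlite_create_table_query(self) -> str:
            # Column definitions intended to be usable after a "CREATE TABLE" statement.
            return "'file_name' TEXT, 'last_modified' TEXT"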

partition

def partition(self, iterable: Iterable[_I], partition_size: int = 1) -> collections.abc.Iterable[collections.abc.Sequence[_I]]:

Inherited from:

BaseSource.partition:

Takes an iterable and yields partitions of size partition_size.

The final partition may be smaller than partition_size if the length of the iterable is not an exact multiple of partition_size.
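
A small sketch of the partitioning behaviour:

    keys = ["a", "b", "c", "d", "e"]
    for chunk in source.partition(keys, partition_size=2):
        print(list(chunk))
    # Yields three partitions: ["a", "b"], ["c", "d"] and the shorter final ["e"].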

remove_hook

def remove_hook(self, hook: DataSourceHook) -> None:

Inherited from:

BaseSource.remove_hook:

Remove a hook from the datasource.

yield_data

def yield_data(
    self,
    data_keys: Optional[SingleOrMulti[str] | SingleOrMulti[int]] = None,
    *,
    use_cache: bool = True,
    partition_size: Optional[int] = None,
    **kwargs: Any,
) -> Iterator[pandas.core.frame.DataFrame]:

Inherited from:

BaseSource.yield_data:

Yields data in batches from this source.

If data_keys is specified, only yield from that subset of the data. Otherwise, iterate through the whole datasource.

Arguments

  • data_keys: An optional list of data keys to use for yielding data. Otherwise, all data in the datasource will be considered. data_keys is always provided when this method is called from the Dataset as part of a task.
  • use_cache: Whether the cache should be used to retrieve data for these data points. Note that cached data may have some elements, particularly image-related fields such as image data or file paths, replaced with placeholder values when stored in the cache. If datacache is set on the instance, data will be set in the cache regardless of this argument.
  • partition_size: The number of data elements to load/yield in each iteration. If not provided, defaults to the partition size configured in the datasource.
  • **kwargs: Additional keyword arguments.
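
A sketch of batched iteration; the data keys are hypothetical:

    # Iterate over the whole datasource in batches of 32 rows.
    for df in source.yield_data(partition_size=32):
        print(df.shape)

    # Restrict iteration to a subset of keys, bypassing the cache.
    for df in source.yield_data(["record_0001", "record_0002"], use_cache=False):
        print(df.shape)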