kedro.io.AbstractDataSet()
|
AbstractDataSet is the base class for all data set implementations. All data set implementations should extend this abstract class and implement the methods marked as abstract. If a specific dataset implementation cannot be used in conjunction with the ParallelRunner, that user-defined dataset should set the attribute _SINGLE_PROCESS = True.
|
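The subclassing pattern described above can be sketched with a stdlib-only stand-in for the Kedro base class; the hook names _load, _save, and _describe follow Kedro's convention, but InMemoryTextDataSet and the base class here are illustrative, not the library's code:

```python
from abc import ABC, abstractmethod
from typing import Any, Dict


class AbstractDataSet(ABC):
    """Stand-in sketch of kedro.io.AbstractDataSet: the public load/save
    methods delegate to abstract hooks that subclasses must implement."""

    def load(self) -> Any:
        return self._load()

    def save(self, data: Any) -> None:
        self._save(data)

    @abstractmethod
    def _load(self) -> Any: ...

    @abstractmethod
    def _save(self, data: Any) -> None: ...

    @abstractmethod
    def _describe(self) -> Dict[str, Any]: ...


class InMemoryTextDataSet(AbstractDataSet):
    # Hypothetical implementation. A dataset that cannot run under the
    # ParallelRunner would set _SINGLE_PROCESS = True, as noted above.
    _SINGLE_PROCESS = True

    def __init__(self) -> None:
        self._data = None

    def _load(self):
        return self._data

    def _save(self, data) -> None:
        self._data = data

    def _describe(self) -> Dict[str, Any]:
        return {"type": "InMemoryTextDataSet"}
```

Instantiating the abstract base directly raises TypeError; only concrete subclasses that implement all three hooks can be constructed.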
kedro.io.AbstractVersionedDataSet(filepath, …)
|
AbstractVersionedDataSet is the base class for all versioned data set implementations.
|
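How a versioned dataset resolves a concrete path from a version string can be sketched as follows; the `<filepath>/<version>/<filename>` layout mirrors Kedro's convention, but this helper is illustrative, not the library's API:

```python
from pathlib import PurePosixPath


def versioned_path(filepath: str, version: str) -> str:
    # Versioned datasets store each version of a file under
    # <filepath>/<version>/<filename>; this mirrors that layout.
    p = PurePosixPath(filepath)
    return str(p / version / p.name)
```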
kedro.io.DataCatalog([data_sets, feed_dict, …])
|
DataCatalog stores instances of AbstractDataSet implementations to provide load and save capabilities from anywhere in the program.
|
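The catalog's role can be sketched with a minimal stand-in: a mapping from dataset names to dataset instances, with load and save delegated by name. Both classes below are simplified illustrations, not Kedro's implementations:

```python
from typing import Any, Dict, Optional


class MemoryDataSet:
    # Minimal stand-in for kedro.io.MemoryDataSet.
    def __init__(self, data: Any = None) -> None:
        self._data = data

    def load(self) -> Any:
        return self._data

    def save(self, data: Any) -> None:
        self._data = data


class DataCatalog:
    # Sketch: holds named dataset instances and delegates load/save
    # by name, as the real catalog does for pipeline nodes.
    def __init__(self, data_sets: Optional[Dict[str, Any]] = None) -> None:
        self._data_sets = dict(data_sets or {})

    def load(self, name: str) -> Any:
        return self._data_sets[name].load()

    def save(self, name: str, data: Any) -> None:
        self._data_sets[name].save(data)
```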
kedro.io.LambdaDataSet(load, save[, exists, …])
|
LambdaDataSet loads and saves data by delegating to user-supplied load and save callables.
|
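A sketch of the wrapper idea, assuming only the (load, save) signature shown above: arbitrary callables are adapted to the dataset interface so ad-hoc I/O can be plugged in without writing a full dataset class. This is an illustration, not Kedro's implementation:

```python
from typing import Any, Callable, Optional


class LambdaDataSet:
    # Sketch: adapts user-supplied callables to the load/save/exists
    # interface shared by all datasets.
    def __init__(
        self,
        load: Callable[[], Any],
        save: Callable[[Any], None],
        exists: Optional[Callable[[], bool]] = None,
    ) -> None:
        self._load_fn = load
        self._save_fn = save
        self._exists_fn = exists

    def load(self) -> Any:
        return self._load_fn()

    def save(self, data: Any) -> None:
        self._save_fn(data)

    def exists(self) -> bool:
        return self._exists_fn() if self._exists_fn else False


# Usage: back the dataset with a plain dict via two lambdas.
store = {}
ds = LambdaDataSet(
    load=lambda: store.get("x"),
    save=lambda d: store.__setitem__("x", d),
)
```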
kedro.io.MemoryDataSet([data, copy_mode])
|
MemoryDataSet loads and saves data from/to an in-memory Python object.
|
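The copy_mode parameter controls how the in-memory object is shared between saves and loads. A stdlib sketch of those semantics, with "deepcopy" assumed as the default for illustration (the real class infers a sensible default from the data type):

```python
import copy
from typing import Any


class MemoryDataSet:
    # Sketch of copy_mode semantics: "deepcopy" fully isolates the stored
    # object, "copy" makes a shallow copy, "assign" shares the same object.
    def __init__(self, data: Any = None, copy_mode: str = "deepcopy") -> None:
        self._copy_mode = copy_mode
        self._data = None
        if data is not None:
            self.save(data)

    def _copy(self, data: Any) -> Any:
        if self._copy_mode == "deepcopy":
            return copy.deepcopy(data)
        if self._copy_mode == "copy":
            return copy.copy(data)
        return data  # "assign": no copy at all

    def load(self) -> Any:
        return self._copy(self._data)

    def save(self, data: Any) -> None:
        self._data = self._copy(data)
```

With "deepcopy", mutating a loaded value cannot corrupt the stored one; "assign" trades that safety for speed on objects that are expensive to copy.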
kedro.io.PartitionedDataSet(path, dataset[, …])
|
PartitionedDataSet loads and saves partitioned file-like data using the underlying dataset definition.
|
kedro.io.IncrementalDataSet(path, dataset[, …])
|
IncrementalDataSet inherits from PartitionedDataSet, which loads and saves partitioned file-like data using the underlying dataset definition.
|
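The incremental idea added on top of partitioning can be sketched as a checkpoint: partitions at or before the checkpoint are skipped on the next load, and confirming a run advances the checkpoint. This hypothetical class keeps the checkpoint in memory purely for illustration (the real class persists it alongside the data):

```python
from typing import Any, Dict


class IncrementalLoader:
    # Hypothetical sketch of the checkpoint behaviour behind
    # IncrementalDataSet: load() returns only partitions newer than the
    # checkpoint; confirm() advances the checkpoint to the newest seen.
    def __init__(self, partitions: Dict[str, Any]) -> None:
        self._partitions = dict(partitions)  # partition id -> data
        self._checkpoint = None  # the real class persists this to storage

    def load(self) -> Dict[str, Any]:
        return {
            pid: data
            for pid, data in sorted(self._partitions.items())
            if self._checkpoint is None or pid > self._checkpoint
        }

    def confirm(self) -> None:
        if self._partitions:
            self._checkpoint = max(self._partitions)
```

Because partition ids sort lexicographically, timestamp-like ids (e.g. "2021-01") make "newer than the checkpoint" a simple string comparison.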
kedro.io.CachedDataSet(dataset[, version, …])
|
CachedDataSet is a dataset wrapper which caches the saved data in memory, so that the user avoids I/O operations with slow storage media.
|
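The wrapper behaviour can be sketched as follows: the first load hits the wrapped dataset, subsequent loads return the cached value, and saves both write through and refresh the cache. CountingDataSet is a hypothetical inner dataset used only to demonstrate the caching; neither class is Kedro's implementation:

```python
from typing import Any


class CountingDataSet:
    # Hypothetical slow dataset that counts how often it is actually read.
    def __init__(self) -> None:
        self.loads = 0
        self._data = "payload"

    def load(self) -> Any:
        self.loads += 1
        return self._data

    def save(self, data: Any) -> None:
        self._data = data


class CachedDataSet:
    # Sketch: wraps another dataset and keeps the last loaded/saved value
    # in memory so repeated loads skip the (slow) wrapped dataset.
    _EMPTY = object()  # sentinel: distinguishes "no cache" from cached None

    def __init__(self, dataset: Any) -> None:
        self._dataset = dataset
        self._cache = self._EMPTY

    def load(self) -> Any:
        if self._cache is self._EMPTY:
            self._cache = self._dataset.load()
        return self._cache

    def save(self, data: Any) -> None:
        self._dataset.save(data)  # write through to the wrapped dataset
        self._cache = data
```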
kedro.io.Version(load, save)
|
This namedtuple is used to provide load and save versions for versioned data sets.
|
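Since Version is a plain namedtuple with load and save fields, it can be mirrored directly with the stdlib; the timestamp-style version string below is an illustrative value, and None conventionally means "use the latest version":

```python
from collections import namedtuple

# Mirrors kedro.io.Version: load and save are version identifiers,
# or None to fall back to the latest available version.
Version = namedtuple("Version", ["load", "save"])

# Load a specific historical version, save under a fresh one.
v = Version(load="2021-01-01T00.00.00.000Z", save=None)
```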