kedro.extras.datasets.spark.DeltaTableDataSet
class kedro.extras.datasets.spark.DeltaTableDataSet(filepath)

``DeltaTableDataSet`` loads data into DeltaTable objects.

Example adding a catalog entry with YAML API:
```yaml
weather@spark:
  type: spark.SparkDataSet
  filepath: data/02_intermediate/data.parquet
  file_format: "delta"

weather@delta:
  type: spark.DeltaTableDataSet
  filepath: data/02_intermediate/data.parquet
```
Example using Python API:
```python
from pyspark.sql import SparkSession
from pyspark.sql.types import IntegerType, StringType, StructField, StructType

from kedro.extras.datasets.spark import DeltaTableDataSet, SparkDataSet

schema = StructType(
    [
        StructField("name", StringType(), True),
        StructField("age", IntegerType(), True),
    ]
)
data = [("Alex", 31), ("Bob", 12), ("Clarke", 65), ("Dave", 29)]
spark_df = SparkSession.builder.getOrCreate().createDataFrame(data, schema)

data_set = SparkDataSet(filepath="test_data", file_format="delta")
data_set.save(spark_df)

deltatable_dataset = DeltaTableDataSet(filepath="test_data")
delta_table = deltatable_dataset.load()

# DeltaTable.update requires a `set` mapping of columns to expressions;
# here we use the columns defined in the schema above.
delta_table.update(condition="name = 'Alex'", set={"age": "age + 1"})
```
Methods
exists() – Checks whether a data set's output already exists by calling the provided _exists() method.
from_config(name, config[, load_version, …]) – Create a data set instance using the configuration provided.
load() – Loads data by delegation to the provided load method.
release() – Release any cached data.
save(data) – Saves data by delegation to the provided save method.
__init__(filepath)

Creates a new instance of ``DeltaTableDataSet``.

Parameters
  filepath (str) – Filepath in POSIX format to a Spark dataframe. When using Databricks and working with data written to mount path points, specify ``filepath``s for (versioned) ``SparkDataSet``s starting with ``/dbfs/mnt``.
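As a sketch of the Databricks note above, a pair of catalog entries pointing at a mount path could look like this; the mount name ``my_mount`` is hypothetical and only for illustration:

```yaml
# Hypothetical Databricks example; "my_mount" is an illustrative mount name.
weather@spark:
  type: spark.SparkDataSet
  filepath: /dbfs/mnt/my_mount/02_intermediate/data.parquet
  file_format: "delta"

weather@delta:
  type: spark.DeltaTableDataSet
  filepath: /dbfs/mnt/my_mount/02_intermediate/data.parquet
```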
exists()

Checks whether a data set's output already exists by calling the provided _exists() method.

Return type
  bool

Returns
  Flag indicating whether the output already exists.

Raises
  DataSetError – when the underlying exists method raises an error.
classmethod from_config(name, config, load_version=None, save_version=None)

Create a data set instance using the configuration provided.

Parameters
  name (str) – Data set name.
  config (Dict[str, Any]) – Data set config dictionary.
  load_version (Optional[str]) – Version string to be used for the load operation if the data set is versioned. Has no effect on the data set if versioning was not enabled.
  save_version (Optional[str]) – Version string to be used for the save operation if the data set is versioned. Has no effect on the data set if versioning was not enabled.

Return type
  AbstractDataSet

Returns
  An instance of an AbstractDataSet subclass.

Raises
  DataSetError – When the function fails to create the data set from its config.
load()

Loads data by delegation to the provided load method.

Return type
  Any

Returns
  Data returned by the provided load method.

Raises
  DataSetError – when the underlying load method raises an error.
release()

Release any cached data.

Return type
  None

Raises
  DataSetError – when the underlying release method raises an error.
save(data)

Saves data by delegation to the provided save method.

Parameters
  data (Any) – The value to be saved by the provided save method.

Return type
  None

Raises
  DataSetError – when the underlying save method raises an error.
  FileNotFoundError – when the save method receives a file instead of a directory, on Windows.
  NotADirectoryError – when the save method receives a file instead of a directory, on Unix.