Metadata-Version: 2.1
Name: datamate
Version: 0.1
Summary: A data organization and compilation system.
Author: Janne Lappalainen & Mason McGill
Description-Content-Type: text/markdown

# Datamate

Datamate is a lightweight data and configuration management framework for structuring data in machine learning projects on a hierarchical filesystem.

Datamate provides a simple framework for working with heterogeneous data by automating
the input and output of arrays and configurations to disk.
It interfaces with the filesystem through pointers to files and directories
that mirror the hierarchical structure on disk.

The core concepts of Datamate are the following:

1. Datamate is an object-oriented framework.
2. For maintainability, it provides nearly seamless input and output to a hierarchical filesystem.
3. For scalability, it automates pathing and composition.
4. For explainability, it offers optional configuration-based management.
5. For interactive prototyping, it offers hierarchical file views in notebooks, pandas integration, configuration diffs, and simultaneous reading and writing.

Typical use cases are:

- eliminating hardcoded absolute paths and filenames
- structuring data preprocessing
- keeping track of configurations for preprocessing, experiments, and analyses
- creating disk caches with minimal overhead code to eliminate bottlenecks, e.g. using a disk cache in your `everything_in_here.ipynb` notebook to skip slow computations when restarting the kernel

# Examples

Datamate's `Directory` instances can point to (processed) data on the disk (relative to a root directory),
allowing seamless I/O.

For example, to store a NumPy array:

```python
>>> import numpy as np
>>> import datamate
>>> datamate.set_root_dir("./data")
>>> directory = datamate.Directory("experiment_01")  # points to ./data/experiment_01
>>> directory.array = np.arange(5)  # creates the directory and writes the array to an HDF5 file
>>> directory
experiment_01/ - Last modified: April 04, 2022 08:24:56
└── array.h5

displaying: 1 directory, 1 files
```

To retrieve the array:

```python
>>> import datamate
>>> datamate.set_root_dir("./data")
>>> directory = datamate.Directory("experiment_01")
>>> directory.array[:]
array([0, 1, 2, 3, 4])
```

More detailed examples can be found in `examples/01. Introduction to Datamate.ipynb`.

# Installation

Using pip:

`pip install datamate`

# Related frameworks

Datamate is adapted from [artisan](https://github.com/MasonMcGill/artisan) with a focus on flexibility in interactive Jupyter notebooks, making configuration and type enforcement optional.

Because cloud-based and relational-database solutions for ML workflows can be unfriendly to
beginners or inflexible, Datamate is based simply on I/O of arrays and configurations on
disk with Pythonic syntax, and it targets interactive, notebook-based workflows.

Datamate plays well with other, more advanced tools for data and experiment management.
For example, combine Datamate with hydra for dynamic configuration-based workflows with command-line integration,
or check out wandb for a full-fledged cloud-based solution to track ML experiments.

Beyond that, object-relational mappers are frameworks that provide APIs to
relational databases. These are particularly useful when handling data over networks,
e.g. when one remote server hosts data accessed by many collaborators.

ML experiment management:

- [hydra](https://github.com/facebookresearch/hydra), for dynamic configuration management with automatic command-line integration.
- [wandb](https://github.com/wandb/wandb), for cloud-based experiment running, data logging, and visualization.

Relational databases, object-relational mapper:

- [datajoint](https://github.com/datajoint/datajoint-python), for scientific workflow management based on relational principles.
- [sqlalchemy](https://github.com/sqlalchemy/sqlalchemy), for fully exposing SQL features and details.
- [PonyORM](https://github.com/ponyorm/pony), for syntactic sugar when accessing SQL databases.

# Contribution

Contributions welcome!
