Metadata-Version: 2.1
Name: configvlm
Version: 0.1.1
Summary: A state-of-the-art tool for Python developers seeking to rapidly and iteratively develop vision and language models within the [`pytorch`](https://pytorch.org/) framework
Author: Leonard Hackel
Author-email: l.hackel@tu-berlin.de
Requires-Python: >=3.10,<4.0
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Requires-Dist: appdirs (>=1.4.4,<2.0.0)
Requires-Dist: bigearthnet-encoder (>=0.3.0,<0.4.0)
Requires-Dist: fvcore (>=0.1.5.post20221221,<0.2.0)
Requires-Dist: lightning-bolts (>=0.6.0.post1,<0.7.0)
Requires-Dist: lmdb (>=1.4.0,<2.0.0)
Requires-Dist: matplotlib (>=3.6.3,<4.0.0)
Requires-Dist: numpy (>=1.24.1,<2.0.0)
Requires-Dist: pytorch-lightning (>=1.9.0,<2.0.0)
Requires-Dist: scikit-learn (>=1.2.1,<2.0.0)
Requires-Dist: timm (>=0.6.12,<0.7.0)
Requires-Dist: torch (>=1.13.1)
Requires-Dist: transformers (>=4.26.0,<5.0.0)
Description-Content-Type: text/markdown

# ConfigVLM

[![DOI](https://zenodo.org/badge/DOI/TODO)](https://doi.org/TODO)
[![License](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/mit-0)
[![CI Pipeline](https://github.com/lhackel-tub/ConfigVLM/actions/workflows/ci.yml/badge.svg)](https://github.com/lhackel-tub/ConfigVLM/actions/workflows/ci.yml)
[![Code Coverage](./coverage.svg)](./.coverage)

<!-- introduction-start -->
The library `ConfigVLM` is a state-of-the-art tool for Python developers seeking to rapidly and
iteratively develop vision and language models within the [`pytorch`](https://pytorch.org/) framework.
This **open-source** library provides a convenient implementation for seamlessly combining models
from two of the most popular [`pytorch`](https://pytorch.org/) libraries,
the highly regarded [`timm`](https://github.com/rwightman/pytorch-image-models) and [`huggingface`🤗](https://huggingface.co/).
With an extensive collection of nearly **1000 vision** and **over 100 language models**,
plus an **additional 120,000** community-uploaded models in the [`huggingface`🤗 model collection](https://huggingface.co/models),
`ConfigVLM` offers a diverse range of model combinations that require minimal implementation effort.
Its vast array of models makes it an unparalleled resource for developers seeking to create
innovative and sophisticated **vision-language models** with ease.
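To make the idea of combining a vision backbone with a language model concrete, here is a minimal sketch in plain `pytorch`. Note that this is **not** ConfigVLM's actual API: the `TinyVLM` class and its tiny encoders are hypothetical stand-ins for a `timm` backbone and a `huggingface`🤗 language model, illustrating only the late-fusion pattern that such a combination follows.

```python
import torch
import torch.nn as nn

class TinyVLM(nn.Module):
    """Toy vision-language model: a vision encoder and a text encoder
    whose pooled features are fused by a small classification head."""

    def __init__(self, vision_dim=64, text_dim=32, num_classes=10, vocab=100):
        super().__init__()
        # stand-in for a timm image backbone
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(16, vision_dim),
        )
        # stand-in for a huggingface text encoder
        self.text = nn.Embedding(vocab, text_dim)
        # fusion head over the concatenated features
        self.head = nn.Linear(vision_dim + text_dim, num_classes)

    def forward(self, image, tokens):
        v = self.vision(image)             # (B, vision_dim)
        t = self.text(tokens).mean(dim=1)  # mean-pool over the token sequence
        return self.head(torch.cat([v, t], dim=-1))

model = TinyVLM()
image = torch.randn(2, 3, 32, 32)          # batch of 2 RGB images
tokens = torch.randint(0, 100, (2, 8))     # batch of 2 token sequences
logits = model(image, tokens)
print(logits.shape)  # torch.Size([2, 10])
```

In practice the two encoders would be swapped for pretrained models from `timm` and `huggingface`🤗; keeping the fusion interface fixed while exchanging the backbones is exactly the kind of iteration the library is designed to streamline.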

Furthermore, `ConfigVLM` boasts a user-friendly interface that streamlines the exchange of model components,
thus providing endless possibilities for the creation of novel models.
Additionally, the package offers **pre-built and throughput-optimized**
[`pytorch dataloaders`](https://pytorch.org/tutorials/beginner/basics/data_tutorial.html) and
[`lightning datamodules`](https://lightning.ai/docs/pytorch/latest/data/datamodule.html),
which enable developers to seamlessly test their models in diverse application areas, such as *Remote Sensing (RS)*.
Moreover, the comprehensive documentation of `ConfigVLM` includes installation instructions,
tutorial examples, and a detailed overview of the framework's interface, ensuring a smooth and hassle-free development experience.
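For readers unfamiliar with the dataloader pattern the package builds on, the following is a minimal sketch using the standard `torch` API. The `ToyRSDataset` class is a hypothetical synthetic stand-in, not one of ConfigVLM's pre-built, throughput-optimized loaders; it only shows the `Dataset`/`DataLoader` interface those loaders implement.

```python
import torch
from torch.utils.data import Dataset, DataLoader

class ToyRSDataset(Dataset):
    """Synthetic stand-in for a remote-sensing image/label dataset."""

    def __init__(self, n=32):
        self.images = torch.randn(n, 3, 32, 32)      # fake image patches
        self.labels = torch.randint(0, 10, (n,))     # fake class labels

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        return self.images[idx], self.labels[idx]

# A DataLoader batches and shuffles samples from the Dataset.
loader = DataLoader(ToyRSDataset(), batch_size=8, shuffle=True, num_workers=0)
images, labels = next(iter(loader))
print(images.shape, labels.shape)  # torch.Size([8, 3, 32, 32]) torch.Size([8])
```

A `lightning` datamodule wraps exactly this kind of loader behind `train_dataloader()`/`val_dataloader()` hooks, which is what lets the same model be tested across application areas with minimal glue code.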

<!-- introduction-end -->

For detailed information, please visit the [publication](TODO:arXiv-Link) or the [documentation](https://lhackel-tub.github.io/ConfigVLM).

`ConfigVLM` is released under the [MIT Software License](https://opensource.org/licenses/mit-0).

