Metadata-Version: 2.1
Name: skrl
Version: 0.7.0
Summary: Modular and flexible library for Reinforcement Learning
Home-page: https://github.com/Toni-SM/skrl
Author: Toni-SM
License: MIT
Keywords: reinforcement learning,machine learning,rl
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Operating System :: OS Independent
Classifier: Intended Audience :: Science/Research
Classifier: Topic :: Scientific/Engineering
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.6
Description-Content-Type: text/markdown
License-File: LICENSE

<p align="center">
  <img width="300" src="docs/source/_static/data/skrl-up-transparent.png">
</p>
<h2 align="center" style="border-bottom: 0 !important;">SKRL - Reinforcement Learning library</h2>
<br>

**skrl** is an open-source modular library for Reinforcement Learning written in Python (using [PyTorch](https://pytorch.org/)) and designed with a focus on readability, simplicity, and transparency of algorithm implementations. In addition to supporting the [OpenAI Gym](https://www.gymlibrary.ml) and [DeepMind](https://github.com/deepmind/dm_env) environment interfaces, it allows loading and configuring [NVIDIA Isaac Gym](https://developer.nvidia.com/isaac-gym/) and [NVIDIA Omniverse Isaac Gym](https://docs.omniverse.nvidia.com/app_isaacsim/app_isaacsim/tutorial_gym_isaac_gym.html) environments. Multiple agents, which may or may not share resources, can be trained simultaneously in the same run by assigning each a scope (a subset of the available environments).

<br>

### Please visit the documentation for usage details and examples

https://skrl.readthedocs.io/en/latest/

<br>

> **Note:** This project is under **active continuous development**. Please make sure you always have the latest version.

<br>

### Citing this library

To cite this library in publications, please use the following reference:

```bibtex
@article{serrano2022skrl,
  title={skrl: Modular and Flexible Library for Reinforcement Learning},
  author={Serrano-Mu{\~n}oz, Antonio and Arana-Arexolaleiba, Nestor and Chrysostomou, Dimitrios and B{\o}gh, Simon},
  journal={arXiv preprint arXiv:2202.03825},
  year={2022}
}
```
