Metadata-Version: 2.1
Name: torch-discounted-cumsum
Version: 1.1.0
Summary: Fast discounted cumulative sums in PyTorch
Home-page: https://www.github.com/toshas/torch-discounted-cumsum
Author: Anton Obukhov
License: BSD
Keywords: pytorch,discounted,cumsum,cumulative,sum,scan,differentiable,reinforcement,learning,rewards,time,series
Platform: UNKNOWN
Requires-Python: >=3.6
Description-Content-Type: text/markdown
License-File: LICENSE_code.md
License-File: LICENSE_doc.md


This package implements an efficient parallel algorithm for computing discounted cumulative sums, 
with differentiable bindings to PyTorch. The discounted `cumsum` operation appears frequently in data science 
domains concerned with time series, most notably Reinforcement Learning (RL), where it is used to compute discounted returns from rewards. 

The traditional sequential algorithm computes the output elements one after another in a loop. For an input of size 
`N`, it performs `O(N)` operations but takes `O(N)` sequential time steps to complete. 
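
For reference, a minimal sequential sketch of the right-directed recurrence `y[i] = x[i] + gamma * y[i + 1]` 
(equivalently, each output sums all later inputs weighted by powers of `gamma`); this illustrates the loop, 
not the package's C++ implementation:

```python
import torch

def discounted_cumsum_right_naive(x: torch.Tensor, gamma: float) -> torch.Tensor:
    # Sequential reference for a 1-D input: O(N) work, O(N) sequential steps.
    y = x.clone()
    for i in range(x.numel() - 2, -1, -1):
        y[i] = x[i] + gamma * y[i + 1]
    return y
```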

The proposed parallel algorithm performs a total of `O(N log N)` operations, but takes only `O(log N)` time to complete, which is a 
worthwhile trade-off in many applications involving large inputs.  
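
To illustrate where the `O(N log N)` work and `O(log N)` depth figures come from, here is a hedged sketch of a 
Hillis-Steele-style doubling scan written with vectorized PyTorch slicing; it is an illustration of the idea, 
not the package's actual CUDA kernel:

```python
import torch

def discounted_cumsum_right_doubling(x: torch.Tensor, gamma: float) -> torch.Tensor:
    # Log-depth sketch: after the step with stride 2**k, y[i] already covers
    # inputs i .. i + 2**(k+1) - 1. Each step does O(N) vectorized work and
    # there are O(log N) steps.
    y = x.clone()
    n = y.numel()
    g = gamma      # equals gamma ** stride at every step
    offset = 1
    while offset < n:
        # Safe in PyTorch: the right-hand side is evaluated before the slice assignment.
        y[: n - offset] = y[: n - offset] + g * y[offset:]
        g = g * g
        offset *= 2
    return y
```

On a GPU, each step's slice update runs in parallel across elements, which is what yields the `O(log N)` time bound.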

Features of the parallel algorithm:
- Running time logarithmic in the input size
- Better numerical precision than sequential algorithms

Features of the package:
- CPU: sequential algorithm in C++
- GPU: parallel algorithm in CUDA
- Gradient computation for both input and gamma
- Batch support for input and gamma
- Both left and right directions of summation supported
- PyTorch bindings (see the usage sketch after this list)
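
A minimal usage sketch, assuming the package exports `discounted_cumsum_right` (with a matching 
`discounted_cumsum_left` for the other direction); consult the project webpage below for the definitive API:

```python
import torch
from torch_discounted_cumsum import discounted_cumsum_right  # assumed export name

x = torch.ones(1, 8)                 # batch of one sequence of length 8
gamma = 0.99
y = discounted_cumsum_right(x, gamma)  # sums accumulated from the right, discounted by gamma
print(y)
```

Moving `x` to the GPU (e.g. `x.cuda()`) dispatches to the parallel CUDA implementation; on the CPU the sequential C++ path is used.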

Find more details and the most up-to-date information on the project webpage:
https://www.github.com/toshas/torch-discounted-cumsum


