Metadata-Version: 2.1
Name: keras-flops
Version: 0.1.0
Summary: FLOPs calculator for neural network architecture written in tensorflow 2.x (tf.keras)
Home-page: https://github.com/tokusumi/keras-flops
License: MIT
Keywords: tensorflow2,flops,profiler
Author: tokusumi
Author-email: tksmtoms@gmail.com
Requires-Python: >=3.6,<4.0
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Dist: tensorflow (>=2.2,<3.0)
Project-URL: Repository, https://github.com/tokusumi/keras-flops
Description-Content-Type: text/markdown

# keras-flops

![](https://github.com/tokusumi/keras-flops/workflows/Tests/badge.svg)

FLOPs calculator for neural network architectures, written in TensorFlow 2.2+ (tf.keras)

This stands on the shoulders of giants, [tf.profiler](https://www.tensorflow.org/api_docs/python/tf/compat/v1/profiler/Profiler). 

## Requirements

* Python 3.6+
* Tensorflow 2.2+

## Installation

This implementation is simple because it stands on the shoulders of giants, [tf.profiler](https://www.tensorflow.org/api_docs/python/tf/compat/v1/profiler/Profiler). Only one function is defined.

Copy and paste [it](https://github.com/tokusumi/keras-flops/blob/master/keras_flops/flops_calculation.py).
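
For reference, that one function boils down to wrapping the model's forward pass in a concrete `tf.function` graph and handing it to the profiler. A minimal sketch (my own paraphrase; the exact code in the repository may differ) looks like:

```python
import tensorflow as tf
from tensorflow.python.profiler.model_analyzer import profile
from tensorflow.python.profiler.option_builder import ProfileOptionBuilder


def get_flops(model, batch_size=None):
    """Count total float operations of a tf.keras model's forward pass."""
    if batch_size is None:
        batch_size = 1
    # Freeze the batch dimension so the profiler sees fully static shapes
    input_signature = [
        tf.TensorSpec(shape=[batch_size] + inp.shape[1:].as_list(), dtype=inp.dtype)
        for inp in model.inputs
    ]
    forward_pass = tf.function(model.call, input_signature=input_signature)
    graph = forward_pass.get_concrete_function().graph
    # tf.profiler counts multiplies and adds separately (FLOPs, not MACs)
    options = ProfileOptionBuilder.float_operation()
    graph_info = profile(graph, options=options)
    return graph_info.total_float_ops
```

The `batch_size` argument matters because the profiler cannot count operations through an unknown (`None`) batch dimension.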

## Example

See the Colab examples [here](https://github.com/tokusumi/keras-flops/tree/master/notebooks) for details.

```python
from tensorflow.keras import Model, Input
from tensorflow.keras.layers import Dense, Flatten, Conv2D, MaxPooling2D, Dropout

from keras_flops import get_flops

# build model
inp = Input((32, 32, 3))
x = Conv2D(32, kernel_size=(3, 3), activation="relu")(inp)
x = Conv2D(64, (3, 3), activation="relu")(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Dropout(0.25)(x)
x = Flatten()(x)
x = Dense(128, activation="relu")(x)
x = Dropout(0.5)(x)
out = Dense(10, activation="softmax")(x)
model = Model(inp, out)

# Calculate FLOPs
flops = get_flops(model, batch_size=1)
print(f"FLOPS: {flops / 10 ** 9:.03} G")
# >>> FLOPS: 0.0338 G
```
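
As a sanity check, the headline number can be reproduced by hand. Counting one multiply and one add per MAC (the convention tf.profiler uses) and ignoring the small contributions from biases, pooling, and softmax, the dominant layers of the model above give:

```python
# Hand-count FLOPs for the example model (valid padding, stride 1).
# One MAC = 2 FLOPs (multiply + add); bias, pooling, and softmax ignored.

# Conv2D(32, 3x3) on 32x32x3 -> 30x30x32 output
conv1 = 2 * (3 * 3 * 3) * 32 * (30 * 30)

# Conv2D(64, 3x3) on 30x30x32 -> 28x28x64 output
conv2 = 2 * (3 * 3 * 32) * 64 * (28 * 28)

# MaxPooling2D(2x2) -> 14x14x64, flattened to 12544 features
dense1 = 2 * (14 * 14 * 64) * 128  # Dense(128)
dense2 = 2 * 128 * 10              # Dense(10)

total = conv1 + conv2 + dense1 + dense2
print(f"{total / 10 ** 9:.03} G")
# >>> 0.0337 G
```

This lands within rounding distance of the 0.0338 G reported above; the ignored bias, pooling, and softmax operations account for the remainder.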

## Support

The following `tf.keras.layers` are supported:

| name | layer | 
| -- | -- |
| Conv | Conv[1D/2D/3D]|
| | Conv[1D/2D]Transpose |
| | DepthwiseConv2D |
| | SeparableConv[1D/2D] |
| Pooling | AveragePooling[1D/2D] |
| | GlobalAveragePooling[1D/2D/3D]|
| | MaxPooling[1D/2D] |
| | GlobalMaxPool[1D/2D/3D] |
| Normalization | BatchNormalization |
| Activation | Softmax |
| Attention | Attention |
| | AdditiveAttention |
| others | Dense |
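
For the convolution rows above, the counts follow the standard formulas. For instance, `SeparableConv2D` splits a full convolution into a depthwise and a pointwise step, which this back-of-the-envelope sketch (my own illustration, not code from the package) makes concrete:

```python
# FLOPs for a 2D convolution layer, counting 2 FLOPs per MAC.

def conv2d_flops(k, c_in, c_out, h_out, w_out):
    # Standard convolution: every output channel sees every input channel
    return 2 * (k * k * c_in) * c_out * h_out * w_out

def separable_conv2d_flops(k, c_in, c_out, h_out, w_out):
    depthwise = 2 * (k * k) * c_in * h_out * w_out  # one filter per channel
    pointwise = 2 * c_in * c_out * h_out * w_out    # 1x1 mixing convolution
    return depthwise + pointwise

# Same shapes as the second conv in the example above
full = conv2d_flops(3, 32, 64, 28, 28)       # 28,901,376
sep = separable_conv2d_flops(3, 32, 64, 28, 28)  # 3,662,848
print(f"separable is {full / sep:.1f}x cheaper")
# >>> separable is 7.9x cheaper
```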

## Not supported

The following `tf.keras.layers` are not supported. Their FLOPs are reported as zero, or as a value smaller than the correct one.

| name | layer | 
| -- | -- |
| Conv | Conv3DTranspose |
| Pooling | AveragePooling3D |
| | MaxPooling3D |
| | UpSampling[1D/2D/3D] |
| Normalization | LayerNormalization |
| RNN | SimpleRNN |
| | LSTM |
| | GRU |
| others | Embedding |
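
For the unsupported RNN layers, a rough estimate can be made by hand instead of relying on the profiler. An LSTM computes four gate matmuls over the concatenated input and hidden state at every timestep, so (ignoring the cheap element-wise gate operations, and counting 2 FLOPs per MAC) a back-of-the-envelope formula is:

```python
def lstm_flops_per_step(input_dim, units):
    # Four gates (input, forget, cell candidate, output), each a matmul
    # over [input, hidden state]; element-wise ops and biases ignored.
    return 2 * 4 * (input_dim + units) * units

# e.g. LSTM(128) over 64-dim inputs, 100 timesteps
per_step = lstm_flops_per_step(64, 128)  # 196,608
total = per_step * 100                   # ~19.7 M FLOPs
```

This is an approximation for gauging the magnitude of what the profiler misses, not a substitute for an exact count.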
