Metadata-Version: 2.1
Name: torchlurk
Version: 0.1.3.1
Summary: A CNN visualization library for pytorch
Home-page: https://github.com/ymentha14/Torchlurk
Author: Yann Mentha
Author-email: yann.mentha@gmail.com
License: UNKNOWN
Description: 
        <p align="center">
        <img src="https://github.com/torchlurk/torchlurk/blob/master/imgs/main_title.png?raw=true" alt="Title icon" width="500"/>
        </p>
        <p align = "center">
           <a href="">
              <img alt="stars" src="https://img.shields.io/github/stars/torchlurk/torchlurk.github.io">
           </a>
           <a href="">
              <img alt="forks" src="https://img.shields.io/github/forks/torchlurk/torchlurk.github.io?color=red">
              </a>
           <a href="">
              <img alt="pypi version" src="https://img.shields.io/badge/pypi-v1.0.1-informational">
           </a>
        
        
           <a href="https://opensource.org/licenses/MIT">
              <img alt="MIT License" src="https://img.shields.io/badge/License-MIT-green.svg">
           </a>
           <a href="">
              <img alt="python_version" src="https://img.shields.io/badge/python-v3.6+-blue.svg">
           </a>
        </p>
        
        <h2 align="center"> Introduction </h2>
        CNNs do a great job at recognizing images (when appropriately trained).
        
        Problems arise when it comes to interpreting the network: although one coded the network in question and knows all the tips and tricks needed to train it efficiently, one may not know **how** it generates its output from a given image.
        
        Torchlurk aims at helping the user in that sense: it provides an interface to visualize a Pytorch network in an efficient yet simple manner, similarly to [Microscope](https://microscope.openai.com/models/alexnet/conv5_1_0).
        
        All you need is the trained Pytorch network and its training set. That's it.
        
        <h2 align="center">  Installation &#9749; </h2>
        Torchlurk is available on PyPI! Just run:
        
            pip install torchlurk
        
        <h2 align="center"> Overview &#9757;</h2>
        <p align="center">
           <img alt="demo gif" src="https://github.com/torchlurk/torchlurk/blob/master/imgs/demo.gif?raw=true" style="display:block;margin-left:auto;margin-right:auto;width:70%">
        </p>   
           
        <br>
        
        <h2 align="center"> Documentation &#128218;</h2>
        Torchlurk has <a href="aadsfasf">online documentation</a> which is regularly updated.
        
        <br>
        
        <h2 align="center"> Quick Start &#8987;</h2>
        
        Your training set should follow the structure below so that the lurker can load your data properly:
        
            .
            ├── src                  
            │   ├── name_class1
            │   │   ├── class1id_1.jpg
            │   │   ├── class1id_2.jpg
            │   │   ├── ...
            │   ├── name_class2
            │   │   ├── class2id_1.jpg
            │   │   ├── class2id_2.jpg
            │   │   ├── ...
            │   ├── ...
        
        <br>
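        This layout matches the class-per-folder convention used by torchvision's `ImageFolder`. As an illustration (not part of the Torchlurk API), a small hypothetical helper can sanity-check that a source directory follows it:
        
        ```python
        from pathlib import Path
        
        def check_dataset_layout(src_dir):
            """Return {class_name: image_count} for a class-per-folder dataset,
            raising if the layout does not match the expected structure."""
            src = Path(src_dir)
            counts = {}
            for class_dir in sorted(p for p in src.iterdir() if p.is_dir()):
                images = [f for f in class_dir.iterdir()
                          if f.suffix.lower() in {".jpg", ".jpeg", ".png"}]
                if not images:
                    raise ValueError(f"no images found in {class_dir}")
                counts[class_dir.name] = len(images)
            if not counts:
                raise ValueError(f"no class folders found in {src}")
            return counts
        ```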
        
        ### 1. Instantiation
        
        ```python
        import torch
        from torchvision import transforms
        
        from torchlurk import Lurk
        
        # load the trained model
        your_model = ModelClass()
        your_model.load_state_dict(torch.load(PATH))
        
        # the preprocessing used for training
        preprocess = transforms.Compose(...)
        
        # and instantiate a lurker
        lurker = Lurk(your_model,
                      preprocess,
                      save_gen_imgs_dir='save/dir',
                      save_json_path='save/dir',
                      imgs_src_dir='source/dir',
                      side_size=224)
        ```
        
        <br>
        
        ### 2. Layer Visualization
        
        The layer visualization is an artificial image generated by gradient descent which aims at maximizing the activation of a given filter: this gives useful insights into the type of texture/colors the filter in question is looking for in input images.
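        The underlying idea can be sketched outside of any library: start from a random input and repeatedly move it along the gradient of the filter's activation. The toy sketch below (illustrative names, not the Torchlurk API) does this in pure Python for a "filter" that is just a dot product, whose activation gradient is simply the weight vector:
        
        ```python
        import random
        
        def maximize_activation(weights, steps=200, lr=0.1):
            """Gradient ascent on a toy linear 'filter' whose activation is
            sum(w * x): the gradient w.r.t. x is just `weights`."""
            x = [random.uniform(-1, 1) for _ in weights]
            for _ in range(steps):
                # move the input along the activation gradient
                x = [xi + lr * wi for xi, wi in zip(x, weights)]
            return x
        
        weights = [1.0, -2.0, 0.5]
        x = maximize_activation(weights)
        activation = sum(w * xi for w, xi in zip(weights, x))  # large and positive
        ```
        
        Real layer visualization does the same with autograd on an image tensor, usually adding regularization (jitter, blurring) so the generated image stays natural-looking.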
        
        ```python
        # compute the layer visualisation for a given set of layers/filters
        lurker.compute_layer_viz(layer_indx=12, filter_indexes=[7])
        # OR compute it for the whole network
        lurker.compute_viz()
        # plot the filters
        lurker.plot_filter_viz(layer_indx=12, filt_indx=7)
        ```
        
        <p align="center">
          <img src="https://github.com/torchlurk/torchlurk/blob/master/imgs/filt_viz.png?raw=true" width=500>
        </p>
        
        <br>
        
        ### 3. Max Activation
        The max activation view shows the top N images in the training set that activate a given filter the most, ranked by average or max activation score.
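        Conceptually this is one pass over the training set, keeping a running top-N per filter. A minimal sketch of that bookkeeping, where the scores stand in for per-image activations from a forward pass:
        
        ```python
        import heapq
        
        def top_n_images(image_scores, n=3):
            """Given {image_name: activation_score}, return the n images
            with the highest scores, best first."""
            return heapq.nlargest(n, image_scores, key=image_scores.get)
        
        scores = {"img_a.jpg": 0.2, "img_b.jpg": 0.9, "img_c.jpg": 0.5, "img_d.jpg": 0.7}
        top_n_images(scores)  # -> ['img_b.jpg', 'img_d.jpg', 'img_c.jpg']
        ```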
        
        
        ```python
        # compute the top activating images
        lurker.compute_top_imgs(compute_max=True, compute_avg=True)
        # plot them
        lurker.plot_top("avg", layer_indx=12, filt_indx=7)
        ```
        <p align="center">
          <img src="https://github.com/torchlurk/torchlurk/blob/master/imgs/plot_top.png?raw=true" width=3000>
        </p>
        
        <br>
        
        #### 3.1 Deconvolution
        <p align="center">
          <img src="https://github.com/torchlurk/torchlurk/blob/master/imgs/deconv.png?raw=true">
        </p>
        
        ```python
        # plot the max activating images along with their cropped areas
        lurker.plot_crop(layer_indx=2,filt_indx=15)
        ```
        <p align="center">
          <img src="https://github.com/torchlurk/torchlurk/blob/master/imgs/deconv_imgs.png?raw=true">
        </p>
        
        <br>
        
        ### 4. Gradients
        
        Guided Grad-CAM is another way to check what a given filter is looking at: it relies on isolating the specific locations in an image that excite a given filter. For more information, check [this article](https://medium.com/@ninads79shukla/gradcam-73a752d368be).
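        The core of Grad-CAM is simple: weight each feature map by the spatial average of the score's gradient over that map, sum the weighted maps, and apply a ReLU. A toy pure-Python sketch of that combination step (illustrative only, not the Torchlurk implementation):
        
        ```python
        def grad_cam(activations, gradients):
            """Combine per-channel 2D activation maps into a class activation map.
            Each channel is weighted by the spatial mean of its gradient map;
            the weighted sum is passed through a ReLU."""
            h, w = len(activations[0]), len(activations[0][0])
            cam = [[0.0] * w for _ in range(h)]
            for act, grad in zip(activations, gradients):
                # channel weight = global average pooling of the gradient map
                weight = sum(sum(row) for row in grad) / (h * w)
                for i in range(h):
                    for j in range(w):
                        cam[i][j] += weight * act[i][j]
            # ReLU keeps only regions that push the score up
            return [[max(0.0, v) for v in row] for row in cam]
        ```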
        
        ```python
        # compute the gradients
        lurker.compute_grads()
        # plot them
        lurker.plot_top("avg", layer_indx=12, filt_indx=7, plot_imgs=False, plot_grads=True)
        ```
        <p align="center">
          <img src="https://github.com/torchlurk/torchlurk/blob/master/imgs/grads.png?raw=true">
        </p>
        
        <br>
        
        ### 5. Histograms
        
        Torchlurk allows you to visualize which training classes activate a filter the most using histograms: a sharply peaked distribution is often associated with a specialized filter.
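        Under the hood this amounts to counting the class label of each top-activating image; a sketch with `collections.Counter` (the labels here are made up):
        
        ```python
        from collections import Counter
        
        def class_histogram(top_image_labels, num_classes=None):
            """Count how often each class appears among a filter's top-activating
            images; a sharply peaked result hints at a specialized filter."""
            return Counter(top_image_labels).most_common(num_classes)
        
        labels = ["dog", "dog", "dog", "cat", "bird"]
        class_histogram(labels)  # -> [('dog', 3), ('cat', 1), ('bird', 1)]
        ```
        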
        ```python
        # display the class histogram for a given filter
        lurker.plot_hist(layer_indx=12, filt_indx=7, hist_type="max", num_classes=12)
        ```
        
        <p align="center">
          <img src="https://github.com/torchlurk/torchlurk/blob/master/imgs/histo.png?raw=true">
        </p>
        
        <br>
        
        ### 6. Serving
        Torchlurk is equipped with a live-update tool which allows you to visualize your computed results while coding.
        
        ```python
        # serve the application on port 5001
        lurker.serve(port=5001)
        ```
        <p align="center">
          <img src="https://github.com/torchlurk/torchlurk/blob/master/imgs/served_tool.jpeg?raw=true">
        </p>
        <p align="center">
          <img src="https://github.com/torchlurk/torchlurk/blob/master/imgs/popup.jpeg?raw=true">
        </p>
        
        ```python
        # stop serving the application
        lurker.end_serve()
        ```
        
        
        Happy Lurking! 
        
        <h1> &#128373;</h1>
        
Platform: UNKNOWN
Classifier: Programming Language :: Python :: 3
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Description-Content-Type: text/markdown
