Metadata-Version: 2.1
Name: mdai
Version: 0.4.2
Summary: MD.ai Python client library
Home-page: https://github.com/mdai/mdai-client-py
Author: MD.ai
Author-email: github@md.ai
License: Apache-2.0
Download-URL: https://github.com/mdai/mdai-client-py/tarball/0.4.2
Description: # MD.ai Python Client Library
        
        **Currently pre-alpha -- API may change significantly in future releases.**
        
        The Python client library is designed to work with the datasets and annotations generated by the [MD.ai](https://www.md.ai/) Medical AI platform.
        
        You can download datasets consisting of images and annotations (exported as a JSON file), create train/validation/test splits, and integrate with various machine learning libraries (e.g., TensorFlow/Keras, Fast.ai) to develop machine learning algorithms.
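        The train/validation/test split step can be sketched with plain Python. This is a minimal, generic illustration using a hypothetical list of study IDs, not the mdai API itself (the library provides its own dataset helpers):

        ```python
        import random

        def train_valid_split(items, valid_fraction=0.2, seed=42):
            """Shuffle a copy of `items` and split it into train/validation lists."""
            items = list(items)
            random.Random(seed).shuffle(items)  # seeded for reproducible splits
            n_valid = int(len(items) * valid_fraction)
            return items[n_valid:], items[:n_valid]

        # Hypothetical study identifiers standing in for a downloaded dataset.
        study_ids = [f"study-{i:03d}" for i in range(10)]
        train_ids, valid_ids = train_valid_split(study_ids)
        ```

        Seeding the shuffle keeps the split stable across runs, which matters when comparing models trained on the same data.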
        
        To get started, check out the examples in the [notebooks section](notebooks), or our [intro to deep learning for medical imaging lessons](https://github.com/mdai/ml-lessons/).
        
        ## Installation
        
        Requires Python 3.6+. Install and update using [pip](https://pip.pypa.io/en/stable/quickstart/):
        
        ```sh
        pip install --upgrade mdai
        ```
        
        ## Documentation
        
        - Overview documentation: https://docs.md.ai/libraries/python/
        - API documentation: coming soon.
        
        ## The MD.ai Annotator
        
        The MD.ai annotator is a powerful web-based application for storing and viewing anonymized medical images (e.g., DICOM) in the cloud, creating annotations collaboratively in real time, and exporting annotations, images, and labels for training. The MD.ai Python client library can be used to download images and annotations, prepare datasets, and then train and evaluate deep learning models.
        
        - MD.ai Documentation and Videos URL: https://docs.md.ai/
        - MD.ai Annotator Example Project URL: https://public.md.ai/annotator/project/aGq4k6NW/workspace
        
        ![MD.ai Annotator](https://docs.md.ai/img/annotator-homepage.png)
        
        ## MD.ai Annotation JSON Format
        
        For more detailed information on the annotation JSON export format, see: https://docs.md.ai/data/json/
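        As a rough illustration of working with such an export, the sketch below groups annotations by label using the standard library. The field names (`datasets`, `annotations`, `labelId`) are simplified assumptions for this example; consult the link above for the actual schema:

        ```python
        import json
        from collections import defaultdict

        # Simplified, hypothetical export structure -- see the docs for the real schema.
        export = json.loads("""
        {
          "datasets": [
            {"name": "Dataset A",
             "annotations": [
               {"labelId": "L_lung",  "StudyInstanceUID": "1.2.3"},
               {"labelId": "L_lung",  "StudyInstanceUID": "1.2.4"},
               {"labelId": "L_heart", "StudyInstanceUID": "1.2.3"}
             ]}
          ]
        }
        """)

        # Count annotations per label across all datasets in the export.
        counts = defaultdict(int)
        for ds in export["datasets"]:
            for ann in ds["annotations"]:
                counts[ann["labelId"]] += 1
        ```

        A per-label count like this is a quick sanity check that an export contains the labels you expect before building training datasets from it.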
        
        ## Example Notebooks
        
        - [HelloWorld Keras Notebook](notebooks/hello-world-keras.ipynb)
        - [HelloWorld TFRecords Notebook](notebooks/hello-world-tfrecords-VGG16.ipynb)
        - [HelloWorld Fast.ai](notebooks/hello-world-fastai.ipynb)
        
        ## Introductory lessons to Deep Learning for medical imaging by [MD.ai](https://www.md.ai)
        
        The following are several Jupyter notebooks covering the basics of downloading and parsing annotation data, and training and evaluating different deep learning models for classification, semantic and instance segmentation, and object detection problems in the medical imaging domain. The notebooks can be run on Google Colab with a GPU (see instructions below).
        
        - Lesson 1. Classification of chest vs. abdominal X-rays using TensorFlow/Keras [Github](https://github.com/mdai/ml-lessons/blob/master/lesson1-xray-images-classification.ipynb) | [Annotator](https://public.md.ai/annotator/project/PVq9raBJ)
        - Lesson 2. Lung X-Rays Semantic Segmentation using UNets. [Github](https://github.com/mdai/ml-lessons/blob/master/lesson2-lung-xrays-segmentation.ipynb) |
          [Annotator](https://public.md.ai/annotator/project/aGq4k6NW/workspace)
        - Lesson 3. RSNA Pneumonia detection using Kaggle data format [Github](https://github.com/mdai/ml-lessons/blob/master/lesson3-rsna-pneumonia-detection-kaggle.ipynb) | [Annotator](https://public.md.ai/annotator/project/LxR6zdR2/workspace)
        - Lesson 3. RSNA Pneumonia detection using MD.ai python client library [Github](https://github.com/mdai/ml-lessons/blob/master/lesson3-rsna-pneumonia-detection-mdai-client-lib.ipynb) | [Annotator](https://public.md.ai/annotator/project/LxR6zdR2/workspace)
        
        ## Contributing
        
        See the [contributing guidelines](CONTRIBUTING.md) for how to set up a development environment and make contributions to mdai.
        
        ## Running Jupyter Notebooks on Colab
        
        It’s easy to run a Jupyter notebook on Google Colab with free (time-limited) GPU use.
        For example, you can add a GitHub notebook path at https://colab.research.google.com/notebook:
        select the "GITHUB" tab and enter the Lesson 1 URL: https://github.com/mdai/ml-lessons/blob/master/lesson1-xray-images-classification.ipynb
        
        To use the GPU, in the notebook menu go to Runtime -> Change runtime type, switch to Python 3, and turn on the GPU. See more [Colab tips](https://www.kdnuggets.com/2018/02/essential-google-colaboratory-tips-tricks.html).
        
        ## Advanced: How to run on Google Cloud Platform with Deep Learning Images
        
        You can also run the notebooks with powerful GPUs on the Google Cloud Platform. In this case, you need to authenticate to the Google Cloud Platform, create a private virtual machine instance running one of Google's Deep Learning images, and import the lessons. See the instructions below.
        
        [GCP Deep Learning Images How-To](running_on_gcp.md)
        
        ---
        
        &copy; 2020 MD.ai, Inc.
        
Platform: UNKNOWN
Classifier: Development Status :: 2 - Pre-Alpha
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Education
Classifier: Intended Audience :: Healthcare Industry
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Scientific/Engineering :: Medical Science Apps.
Classifier: Topic :: Software Development :: Libraries
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Description-Content-Type: text/markdown
