Metadata-Version: 2.1
Name: dask-saturn
Version: 0.2.2
Summary: Dask Cluster objects in Saturn Cloud
Home-page: https://saturncloud.io/
Maintainer: Saturn Cloud Developers
Maintainer-email: dev@saturncloud.io
License: BSD-3-Clause
Project-URL: Documentation, http://docs.saturncloud.io
Project-URL: Source, https://github.com/saturncloud/dask-saturn
Project-URL: Issue Tracker, https://github.com/saturncloud/dask-saturn/issues
Description: # dask-saturn
        Python library for interacting with [Dask](https://dask.org/) clusters in
        [Saturn Cloud](https://www.saturncloud.io/).
        
        Dask-Saturn mimics the API of
        [Dask-Kubernetes](https://github.com/dask/dask-kubernetes), but allows the user
        to interact with clusters created within
        [Saturn Cloud](https://www.saturncloud.io/).
        
        ## Start cluster
        From within a Jupyter notebook, you can start a cluster:
        
        ```python
        from dask_saturn import SaturnCluster
        
        cluster = SaturnCluster()
        cluster
        ```
        
        By default, this will start a Dask cluster with the same settings that you
        have already set in the Saturn UI or in a prior notebook.
        
        To start the cluster with a certain number of workers, use the `n_workers`
        option. Similarly, you can set `scheduler_size`, `worker_size`, and `worker_is_spot`.
        
        > Note: If the cluster is already running, you can't change its settings.
        > Attempting to do so will produce a warning.
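        
        For example, a minimal sketch (the size strings below are placeholders; the
        valid sizes depend on your Saturn installation):
        
        ```python
        from dask_saturn import SaturnCluster
        
        cluster = SaturnCluster(
            n_workers=3,
            scheduler_size="medium",  # placeholder size name
            worker_size="xlarge",     # placeholder size name
            worker_is_spot=False,     # use on-demand (non-spot) workers
        )
        ```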
        
        Use the `autoclose` option to set up a cluster that is tied to the client
        kernel. This functions like a regular Dask `LocalCluster`: when your Jupyter
        kernel dies or is restarted, the Dask cluster will close.
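        
        For example, a minimal sketch assuming `autoclose` takes a boolean flag:
        
        ```python
        from dask_saturn import SaturnCluster
        
        # Close the cluster automatically when this kernel dies or restarts
        cluster = SaturnCluster(autoclose=True)
        ```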
        
        ## Adjust number of workers
        Once you have a cluster, you can interact with it via the Jupyter
        widget or with the `scale` and `adapt` methods.
        
        For example, to manually scale up to 20 workers:
        
        ```python
        cluster.scale(20)
        ```
        
        To create an adaptive cluster that controls its own scaling:
        
        ```python
        cluster.adapt(minimum=1, maximum=20)
        ```
        
        ## Interact with client
        To submit tasks to the cluster, you need access to the
        `Client` object. Instantiate it with the cluster as the only argument:
        
        ```python
        from distributed import Client
        
        client = Client(cluster)
        client
        ```
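        
        With the client in hand, you can submit work using the standard
        `distributed` API. A minimal sketch:
        
        ```python
        def square(x):
            return x ** 2
        
        # Run the function on a worker and fetch the result back to the client
        future = client.submit(square, 10)
        future.result()  # 100
        ```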
        
        ## Close cluster
        
        To terminate all resources associated with a cluster, use the
        `close` method:
        
        ```python
        cluster.close()
        ```
        
        ## Change settings
        
        To update the settings (such as `n_workers`, `worker_size`, `worker_is_spot`, `nthreads`) on an existing cluster, use the `reset` method:
        
        ```python
        cluster.reset(n_workers=3)
        ```
        
        You can also call this without instantiating the cluster first:
        
        ```python
        cluster = SaturnCluster.reset(n_workers=3)
        ```
        
        ## Sync files to workers
        
        When working with distributed Dask clusters, the workers don't share a file system with your client, so files visible on your Jupyter server aren't available on the workers. To move files to the workers, use the `RegisterFiles` plugin and call `sync_files` on any path you want to update on the workers.
        
        For instance, if you have a file structure like:
        ```
        /home/jovyan/project/
        |---- utils/
        |   |---- __init__.py
        |   |---- hello.py
        |
        |---- Untitled.ipynb
        ```
        
        where `hello.py` contains:
        
        ```python
        # utils/hello.py
        def greet():
            return "Hello"
        ```
        
        If the code in `hello.py` changes or you add new files to `utils/`, you'll want to push those changes to the workers. After setting up the `SaturnCluster` and the `Client`, register the `RegisterFiles` plugin with the workers. Then, every time you change files in `utils/`, run `sync_files`. The worker plugin ensures that any new worker that comes up will have all the files you have synced.
        
        ```python
        from dask_saturn import RegisterFiles, sync_files
        
        client.register_worker_plugin(RegisterFiles())
        sync_files(client, "utils")
        
        # If a Python script has changed, restart the workers so they see the changes
        client.restart()
        
        # Import the function and have the workers run it
        from utils.hello import greet
        client.run(greet)
        ```
        
        > TIP: You can always check the state of the file system on your workers by running `client.run(os.listdir)`, as in the sketch below.
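        
        ```python
        import os
        
        # Returns a dict mapping each worker's address to its directory listing
        client.run(os.listdir)
        ```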
        
        ## Development
        
        Create/update a dask-saturn conda environment:
        
        ```sh
        make conda-update
        ```
        
        Set environment variables to run dask-saturn against a local Atlas server:
        
        ```sh
        export BASE_URL=http://dev.localtest.me:8888/
        export SATURN_TOKEN=<JUPYTER_SERVER_SATURN_TOKEN>
        ```
        
Keywords: dask saturn cloud distributed cluster
Platform: UNKNOWN
Classifier: Development Status :: 4 - Beta
Classifier: License :: OSI Approved :: BSD License
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: Topic :: Scientific/Engineering
Classifier: Topic :: System :: Distributed Computing
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Description-Content-Type: text/markdown
