Metadata-Version: 2.1
Name: mriqc-learn
Version: 0.0.2
Summary: Learning on MRIQC-generated image quality metrics (IQMs).
Home-page: https://github.com/nipreps/mriqc-learn
Author: The NiPreps developers
Author-email: nipreps@gmail.com
License: Apache-2.0
Platform: UNKNOWN
Classifier: Development Status :: 2 - Pre-Alpha
Classifier: Intended Audience :: Science/Research
Classifier: Topic :: Scientific/Engineering :: Image Recognition
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Requires-Python: >=3.7
Description-Content-Type: text/x-rst; charset=UTF-8
Provides-Extra: doc
Provides-Extra: docs
Provides-Extra: mem
Provides-Extra: test
Provides-Extra: tests
Provides-Extra: all
License-File: LICENSE

The MRIQC classifier for T1w images
===================================
MRIQC ships with two pre-trained classifiers that predict the image quality
of T1w images.

.. tip::
     You can customize the MRIQC classifier to the T1w images generated at your
     scanning site.


From our preprint `MRIQC: Advancing the Automatic Prediction of Image Quality in MRI from Unseen Sites
<https://doi.org/10.1101/111294>`_:

    *Quality control of MRI is essential for excluding problematic acquisitions and
    avoiding bias in subsequent image processing and analysis. Visual inspection is
    subjective and impractical for large scale datasets. Although automated quality
    assessments have been demonstrated on single-site datasets, it is unclear that
    solutions can generalize to unseen data acquired at new sites. Here, we introduce
    the MRI Quality Control tool (MRIQC), a tool for extracting quality measures and
    fitting a binary (accept/exclude) classifier. Our tool can be run both locally and
    as a free online service via the OpenNeuro.org portal. The classifier is trained on
    a publicly available, multi-site dataset (17 sites, N=1102). We perform model selection
    evaluating different normalization and feature exclusion approaches aimed at maximizing
    across-site generalization and estimate an accuracy of 76%±13% on new sites, using
    leave-one-site-out cross-validation. We confirm that result on a held-out dataset
    (2 sites, N=265) also obtaining a 76% accuracy. Even though the performance of the
    trained classifier is statistically above chance, we show that it is susceptible to
    site effects and unable to account for artifacts specific to new sites. MRIQC performs
    with high accuracy in intra-site prediction, but performance on unseen sites leaves space
    for improvement which might require more labeled data and new approaches to the
    between-site variability. Overcoming these limitations is crucial for a more objective
    quality assessment of neuroimaging data, and to enable the analysis of extremely large
    and multi-site samples.*
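
The leave-one-site-out cross-validation described in the abstract can be
sketched with scikit-learn. This is a minimal illustration on synthetic data:
the features, labels, classifier choice, and site assignments below are
placeholders, not the actual MRIQC training set or released model.

.. code-block:: python

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import RobustScaler

    rng = np.random.default_rng(0)

    # Synthetic stand-in for an IQM table: 300 scans, 10 metrics,
    # acquired at 5 different sites (the "groups" for cross-validation).
    X = rng.normal(size=(300, 10))
    sites = rng.integers(0, 5, size=300)
    # Synthetic binary accept/exclude labels, loosely tied to one metric.
    y = (X[:, 0] + 0.5 * rng.normal(size=300) > 0).astype(int)

    # Robust scaling of the features before fitting the classifier.
    model = make_pipeline(RobustScaler(), RandomForestClassifier(random_state=0))

    # Leave-one-site-out: each fold holds out every scan from one site,
    # approximating performance on data from an unseen acquisition site.
    scores = cross_val_score(model, X, y, groups=sites, cv=LeaveOneGroupOut())
    print(f"Accuracy on held-out sites: {scores.mean():.2f} +/- {scores.std():.2f}")

Averaging accuracy across held-out sites, as above, is what yields the
"accuracy on new sites" figure the abstract reports for the real dataset.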


