15 changes: 15 additions & 0 deletions 4.1.Deploy the model (optimized)/conda_dep_opti.yml
@@ -0,0 +1,15 @@
channels:
- anaconda
- defaults
dependencies:
- pip:
- azureml-defaults
- azure-ml-api-sdk
- torchxrayvision
- pydicom
- openvino-dev
- torch==1.13.1+cpu
- torchvision==0.14.1+cpu
- intel_extension_for_pytorch==1.13.100
- "--index-url https://pypi.org/simple/"
- "--extra-index-url https://download.pytorch.org/whl/cpu"
750 changes: 750 additions & 0 deletions 4.1.Deploy the model (optimized)/deploy-opti-sdk-v1.ipynb

Large diffs are not rendered by default.

63 changes: 63 additions & 0 deletions 4.1.Deploy the model (optimized)/png2dcm.py
@@ -0,0 +1,63 @@
# This script uses PyDicom library (https://pydicom.github.io/) to
# generate a DICOM file from a supplied PNG image.

import pydicom
from pydicom.dataset import Dataset, FileMetaDataset
from PIL import Image
import numpy as np
import zipfile
import io

# Read the PNG image. The commented-out lines below show how to read it instead
# from sample.zip, which is part of the PadChest dataset (the bytes returned by
# zf.read() are wrapped in io.BytesIO so that Image.open() can consume them).
# zf = zipfile.ZipFile("./sample.zip")
# data = io.BytesIO(zf.read("255433269247415893224655601475580025849_j5s1kc.png"))
data = 'sample.png'
image2d = np.array(Image.open(data)).astype(float)
# Scale the pixel values down and store them as unsigned 16-bit integers.
image2d = (image2d/255).astype(np.uint16)

# DICOM file meta information: SOP class/instance UIDs and the transfer syntax.
file_meta = FileMetaDataset()
file_meta.MediaStorageSOPClassUID = "1.2.840.10008.5.1.4.1.1.1"
file_meta.MediaStorageSOPInstanceUID = '2.25.34327501276176110812231595851948283641'
file_meta.ImplementationClassUID = '1.3.6.1.4.1.30071.8'
file_meta.TransferSyntaxUID = pydicom.uid.ExplicitVRLittleEndian

ds = Dataset()
ds.file_meta = file_meta

ds.Rows = image2d.shape[0]
ds.Columns = image2d.shape[1]
ds.NumberOfFrames = 1

ds.PixelSpacing = [1, 1] # in mm
ds.SliceThickness = 1 # in mm

ds.SeriesInstanceUID = pydicom.uid.generate_uid()
ds.StudyInstanceUID = pydicom.uid.generate_uid()

ds.PatientName = "Demo^RSNA2021"
ds.PatientID = "123456"
ds.Modality = "CR"
ds.StudyDate = '20211204'
ds.ContentDate = '20211204'

ds.BitsStored = 16
ds.BitsAllocated = 16
ds.HighBit = 15
ds.PixelRepresentation = 0
ds.PhotometricInterpretation = "MONOCHROME2"
ds.SamplesPerPixel = 1

# Rescale mapping (stored value -> output value) and display window settings.
ds.RescaleIntercept = 900
ds.RescaleSlope = 9
ds.WindowCenter = 2000
ds.WindowWidth = 2000

ds.is_little_endian = True
ds.is_implicit_VR = False

ds.PixelData = image2d.tobytes()

pydicom.dataset.validate_file_meta(ds.file_meta, enforce_standard=True)
ds.save_as("sample_dicom.dcm", write_like_original=False)

3 changes: 3 additions & 0 deletions 4.1.Deploy the model (optimized)/sample.png
(binary image file; preview not available)
57 changes: 57 additions & 0 deletions 4.2.Deploy the model(optimized)_sdk_v2/README.md
@@ -0,0 +1,57 @@
# Deploy the model and model explainability (bonus)
**Deployment scenario:** Submit a DICOM file (x-ray image) to the cloud and get a model prediction in real time.

To deploy the model that was trained in the previous section ([3.Build a model](../3.Build%20a%20model/Readme.md), [training.ipynb](../3.Build%20a%20model/training.ipynb)) as a **web service hosted on Azure Container Instances (ACI)**, open the [deploy-opti-sdk-v2.ipynb](./deploy-opti-sdk-v2.ipynb) Notebook in your Azure ML workspace and follow the steps below:

If you want to deploy locally instead, see [deploy-local-opti-sdk-v2.ipynb](./deploy-local-opti-sdk-v2.ipynb).

## Steps
1. Prepare an entry script.
2. Prepare an inference configuration.
3. Deploy the model you trained before to the cloud.
4. Test the resulting web service.

To simulate a realistic scenario:
* The model to deploy was trained on 16-bit grayscale PNG images from [PadChest](https://pubmed.ncbi.nlm.nih.gov/32877839/).
* The deployed model accepts DICOM images as inputs.

### 1. Prepare an entry script.
To use a model for inferencing, you first need a scoring script. The notebook that sits beside this README has such a script embedded.
The scoring script only needs two functions:
* The `init()` function, which typically loads the model into a global object.
* The `run(input_data)` function, which uses the model to predict a value based on the input data.
* In our case, `input_data` will be a DICOM file.

The scoring script returns the model prediction as a JSON object, which is passed back in the HTTP response.
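For orientation, here is a minimal sketch of such an entry script. It is illustrative only: the model file name, the use of plain `torch.load`, and the preprocessing are assumptions, and the actual notebook uses an optimized (OpenVINO / Intel Extension for PyTorch) inference path.

```python
# score.py -- illustrative sketch; the script embedded in the notebook may differ.
import io
import json
import os

import pydicom
import torch


def init():
    # Runs once when the container starts: load the model into a global object.
    global model
    model_dir = os.getenv("AZUREML_MODEL_DIR", ".")
    model = torch.load(os.path.join(model_dir, "model.pt"), map_location="cpu")  # assumed file name
    model.eval()


def run(input_data):
    # input_data carries the raw bytes of the submitted DICOM file (see step 2).
    ds = pydicom.dcmread(io.BytesIO(input_data))
    pixels = torch.tensor(ds.pixel_array, dtype=torch.float32)[None, None, ...]
    with torch.no_grad():
        prediction = model(pixels)
    # Return a JSON-serializable object; Azure ML wraps it in the HTTP response.
    return json.dumps({"prediction": prediction.squeeze().tolist()})
```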


### 2. Prepare an inference configuration.
We will:
* Create a lightweight environment to deploy the model.
* Use the [AMLRequest](https://docs.microsoft.com/en-us/python/api/azureml-contrib-services/azureml.contrib.services.aml_request?view=azure-ml-py) and [AMLResponse](https://docs.microsoft.com/en-us/python/api/azureml-contrib-services/azureml.contrib.services.aml_response.amlresponse?view=azure-ml-py) classes to access the raw DICOM data.
* Create an inference configuration to deploy the model as a web service using the entry (scoring) script [score.py](./score.py), as sketched below.
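A minimal sketch of this step, using the v1 Python SDK classes that the links above refer to (names are illustrative and the notebook's exact code may differ):

```python
# Illustrative only; environment and entry-script names are assumptions.
from azureml.core import Environment
from azureml.core.model import InferenceConfig

env = Environment.from_conda_specification(
    name="xray-opti-env",            # assumed name
    file_path="conda_dep_opti.yml",  # the environment file added in this folder
)
inference_config = InferenceConfig(entry_script="score.py", environment=env)

# Inside score.py, the raw DICOM bytes are reached by decorating run() with @rawhttp:
#
#   from azureml.contrib.services.aml_request import rawhttp
#   from azureml.contrib.services.aml_response import AMLResponse
#
#   @rawhttp
#   def run(request):
#       if request.method != "POST":
#           return AMLResponse("Only POST is supported", 405)
#       dicom_bytes = request.get_data(False)
#       ...
```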


### 3. Deploy the model to the cloud.
Then, we will deploy the model as an ACI web service, which is exposed as an HTTP endpoint.
* Specify the **deployment configuration**, i.e. the compute resources (CPU or GPU, amount of RAM, etc.) required for your application.
* ***Deploy*** by bringing it all together: i) the model, ii) the environment, iii) the inference configuration (script [score.py](./score.py)), and iv) the deployment configuration, as sketched below.
* Azure ML then automatically deploys the model to the cloud, and you will be able to send data to your model.
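A minimal sketch of the deployment call, again with the v1 SDK classes and with resource sizes and names chosen only for illustration:

```python
# Illustrative only; cpu_cores, memory_gb and the service name are assumptions.
from azureml.core.model import Model
from azureml.core.webservice import AciWebservice

deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=2)

service = Model.deploy(
    workspace=ws,                      # an existing Workspace object
    name="xray-opti-service",          # assumed endpoint name
    models=[model],                    # the model registered in the training step
    inference_config=inference_config,
    deployment_config=deployment_config,
)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)
```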

### 4. Test the resulting web service.
We will load a DICOM file, send it to the web service we have deployed, and display the response.
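A minimal sketch of such a test, assuming the DICOM file produced by `png2dcm.py` and an octet-stream content type (the notebook's exact call may differ):

```python
# Illustrative only; file name and content type are assumptions.
import requests

with open("sample_dicom.dcm", "rb") as f:
    dicom_bytes = f.read()

response = requests.post(
    service.scoring_uri,                                   # or the endpoint URL copied from the studio
    data=dicom_bytes,
    headers={"Content-Type": "application/octet-stream"},  # assumed content type
)
print(response.status_code, response.json())
```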

## Bonus: eXplainable AI (XAI)
The notebook also includes a model usage scenario built around model explainability.

As the adoption of AI in healthcare translates into clinical practice, there is an unmet need to provide doctors with clinically meaningful insights that explain how AI algorithms work. While most AI (Deep Learning) algorithms operate as a **black box** (i.e., they do not provide explanations), here we show how to use common XAI methods (e.g.,
[SHAP](https://shap-lrjball.readthedocs.io/en/latest/generated/shap.DeepExplainer.html) and [M3d-Cam](https://github.com/MECLabTUDA/M3d-Cam)) to verify that the **trained model** is using the expected pixel information from the image.

The [explain.ipynb](./explain.ipynb) Notebook demonstrates:

* How to integrate [SHAP](https://shap-lrjball.readthedocs.io/en/latest/generated/shap.DeepExplainer.html) and [M3d-Cam](https://github.com/MECLabTUDA/M3d-Cam) with trained Deep Learning models (a minimal SHAP sketch follows this list).
* How to load a trained model from a run directly into your code.
* How to access data directly from the datastore (after the [1.Load Data](../1.Load%20Data/README.md) step).
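A minimal sketch of the SHAP part, illustrative only: `model`, `background`, and `x` are assumed to be the trained network, a small batch of training images, and the image(s) to explain, all shaped `(N, 1, H, W)`.

```python
# Illustrative SHAP sketch; the explain.ipynb notebook may differ in detail.
import numpy as np
import shap

explainer = shap.DeepExplainer(model, background)
shap_values = explainer.shap_values(x)   # one attribution map per output class

# Move the channel axis last so the maps can be plotted against the input image.
shap_numpy = [np.moveaxis(s, 1, -1) for s in shap_values]
shap.image_plot(shap_numpy, np.moveaxis(x.numpy(), 1, -1))
```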

![Explainability](./images/explainability_shap.png)
17 changes: 17 additions & 0 deletions 4.2.Deploy the model(optimized)_sdk_v2/conda_dep_opti.yml
@@ -0,0 +1,17 @@
channels:
- anaconda
- defaults
dependencies:
- python=3.9
- pip
- pip:
- azureml-defaults
- azure-ml-api-sdk
- torchxrayvision
- pydicom
- openvino-dev
- torch==1.13.1+cpu
- torchvision==0.14.1+cpu
- intel_extension_for_pytorch==1.13.100
- "--index-url https://pypi.org/simple/"
- "--extra-index-url https://download.pytorch.org/whl/cpu"