GSoC'22 @ TensorFlow

Project Details:

Develop Healthcare examples using TensorFlow

This project is a part of Google Summer of Code 2022.

GSoC'22 @ TensorFlow Project Link

Work-Product Document (Final Report):

Medium

Project Mentor:

Objective

Developing healthcare examples to showcase various use cases of deep learning algorithms and their applications. These sample notebooks will give students as well as researchers an overview of how deep learning algorithms work in real-time scenarios. Further, the trained models will primarily be used in a web inference engine (currently under development) for underfunded medical sectors.

Pseudo-segmentation of Prostate Gland Cancer

Understanding the Problem Statement

Pseudo-segmentation is the process of creating approximate ("fake") mask maps by applying a classification approach to the entire image at the patch level. The whole-slide image is broken down into patches of a fixed size, and each patch is classified. If a patch is found positive, the corresponding region of the original image is masked, thereby creating a pseudo mask map.
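
A minimal sketch of this idea, assuming a trained patch classifier called patch_model and a slide already loaded as a NumPy array (these names and parameters are illustrative, not the project's actual code):

```python
import numpy as np

def pseudo_segment(slide, patch_model, patch_size=256, threshold=0.5):
    """Build a pseudo mask map by classifying fixed-size patches of a slide.

    slide:       H x W x 3 image array.
    patch_model: classifier mapping a batch of patches to positive-class probabilities.
    """
    h, w = slide.shape[:2]
    mask = np.zeros((h, w), dtype=np.uint8)
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patch = slide[y:y + patch_size, x:x + patch_size] / 255.0
            prob = patch_model.predict(patch[np.newaxis], verbose=0).ravel()[0]
            if prob >= threshold:
                # Positive patch: mark the corresponding region of the mask.
                mask[y:y + patch_size, x:x + patch_size] = 1
    return mask
```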

Demo:

Access the deployed App

App: Open in Streamlit

Run the App Locally

  1. Clone the repository.
git clone https://github.com/mayureshagashe2105/GSoC-22-TensorFlow-Resources-and-Notebooks.git
  2. Go to the project directory.
cd GSoC-22-TensorFlow-Resources-and-Notebooks
  3. Check out the localhost branch.
git checkout localhost
  4. Go to the app directory.
cd app
  5. Install the requirements.
pip install -r requirements.txt
  6. Make sure you have the OpenSlide binaries from this link.
  7. Run the following command:
streamlit run 🏠_Home.py

This blog post presents the technical insights behind the developed diagnostic method.

Blog Post:

Medium

Timeline

Week 1 - 2:

Tasks

  • Understanding the structure of the data and the TIFF files.
  • The dataset is hosted on Kaggle and is very large (412 GB). To get started, write a script to download a subset of the data from the Kaggle environment.
  • Perform basic EDA.
  • Design custom data generators to ingest the data at the patch level and benchmark them (see the sketch after this list).
  • Train a baseline model.
  • Create fake segmentation/color maps on the basis of the classification results from the baseline model.
  • Optimize the data generators for level-0 patch extraction.
  • Add write-to-disk functionality to the data generators.
  • Map classification results at a higher resolution to a segmentation map at a lower resolution.
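
A minimal sketch of the patch-level ingestion idea, assuming the slides are read with OpenSlide; the function name and parameters are illustrative, not the project's actual data generators:

```python
import numpy as np
import openslide

def iter_patches(slide_path, patch_size=256, level=0):
    """Yield (x, y, patch) tuples tiling a whole-slide image at the given level."""
    slide = openslide.OpenSlide(slide_path)
    scale = int(slide.level_downsamples[level])
    width, height = slide.level_dimensions[level]
    for y in range(0, height - patch_size + 1, patch_size):
        for x in range(0, width - patch_size + 1, patch_size):
            # read_region expects the location in level-0 coordinates.
            region = slide.read_region((x * scale, y * scale), level,
                                       (patch_size, patch_size))
            patch = np.asarray(region.convert("RGB"), dtype=np.float32) / 255.0
            yield x, y, patch
```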

Week 3-4:

Tasks

  • Benchmarking the input pipeline (a rough timing sketch follows this list).
  • Depicting diagrammatic explanations.
  • Optimizing the patch extraction process.
  • Try to simplify the codebase.
  • Document the approach used.
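
As a rough illustration of how such a benchmark can be run (a minimal sketch; iter_patches here is the hypothetical generator from the earlier sketch, not the project's actual pipeline):

```python
import time

def benchmark_patches(slide_path, n_patches=500):
    """Measure raw patch-extraction throughput in patches per second."""
    start = time.perf_counter()
    count = 0
    for _x, _y, _patch in iter_patches(slide_path, patch_size=256, level=0):
        count += 1
        if count >= n_patches:
            break
    return count / (time.perf_counter() - start)
```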

Week 5:

Tasks

  • MLPs with JAX (batch mode); a minimal example follows this list.
  • CNNs with JAX.
  • ViTs with JAX/Flax.
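
A minimal sketch of an MLP in plain JAX with batching via vmap (the layer sizes and names are illustrative, not the project's notebooks):

```python
import jax
import jax.numpy as jnp

def init_mlp(key, sizes=(784, 128, 10)):
    """Initialise weights and biases for a simple MLP."""
    params = []
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (n_in, n_out)) * 0.01,
                       jnp.zeros(n_out)))
    return params

def forward(params, x):
    """Forward pass for a single example; vmap over x gives the batch mode."""
    for w, b in params[:-1]:
        x = jax.nn.relu(x @ w + b)
    w, b = params[-1]
    return x @ w + b

batched_forward = jax.vmap(forward, in_axes=(None, 0))

params = init_mlp(jax.random.PRNGKey(0))
logits = batched_forward(params, jnp.ones((32, 784)))  # shape (32, 10)
```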

Week 6:

Tasks

  • Figure out how to use ViTs with patch-based learning.
  • Fix the bug in the ViT score function.
  • Use optax for optimizer state handling (see the sketch after this list).
  • Fix the bug causing incorrect accuracy values.
  • Document the ViTs.
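
A minimal sketch of how optax keeps the optimizer state outside the model, reusing the hypothetical params and batched_forward names from the earlier MLP sketch (illustrative, not the project's training loop):

```python
import jax
import optax

def loss_fn(params, x, y):
    logits = batched_forward(params, x)
    return optax.softmax_cross_entropy_with_integer_labels(logits, y).mean()

optimizer = optax.adam(learning_rate=1e-3)
opt_state = optimizer.init(params)  # the optimizer state lives outside the model

@jax.jit
def train_step(params, opt_state, x, y):
    loss, grads = jax.value_and_grad(loss_fn)(params, x, y)
    updates, opt_state = optimizer.update(grads, opt_state, params)
    params = optax.apply_updates(params, updates)
    return params, opt_state, loss
```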

Week 7 - 9:

Tasks:

  • Add docstrings to the ViTs.
  • Add a dropout layer and support for dropout_rng (see the sketch after this list).
  • Add a TensorBoard plugin.
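
A minimal sketch of how a dropout layer can be wired to its own dropout RNG stream in Flax (the module and names are illustrative, not the released ViT code):

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class MLPBlock(nn.Module):
    """Small block showing a dropout layer wired to the 'dropout' RNG stream."""
    features: int = 128
    dropout_rate: float = 0.1

    @nn.compact
    def __call__(self, x, *, deterministic: bool):
        x = nn.Dense(self.features)(x)
        x = nn.relu(x)
        # Dropout draws its randomness from the RNG passed under the 'dropout' name.
        x = nn.Dropout(rate=self.dropout_rate)(x, deterministic=deterministic)
        return nn.Dense(self.features)(x)

model = MLPBlock()
params_rng, dropout_rng = jax.random.split(jax.random.PRNGKey(0))
x = jnp.ones((4, 64))
variables = model.init({"params": params_rng, "dropout": dropout_rng}, x, deterministic=True)
# At train time, pass a fresh dropout RNG so the dropout masks change every step.
out = model.apply(variables, x, deterministic=False, rngs={"dropout": dropout_rng})
```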

Week 10 - 11:

Tasks:

  • Publish a release.

Week 12:

Tasks:

  • Final work-product document write-up.

My Vision Transformer Release:

ViT