
Machine Learning for Medical Imaging

nnU-Net: Powerful Segmentation for Medical Data

Published 7 months ago • 3 min read

Hi Reader,

Welcome to the PYCAD newsletter, where every week you receive doses of machine learning and computer vision techniques and tools to help you learn how to build AI solutions to empower the most vulnerable members of our society, patients.

ML Deep Dive: nnU-Net for Segmentation of Medical Data

One of the best tools to build an automatic segmentation model for 2D and 3D medical data is nnU-Net. Here’s why.

nnU-Net is not just a model architecture; it's a whole process for building the right training pipeline for your dataset.

Let me explain.

nnU-Net is a semantic segmentation method that automatically adapts to a given dataset.

It will analyze the provided training cases and automatically configure a matching U-Net-based segmentation pipeline.

Given a new dataset, nnU-Net will systematically analyze the provided training cases and create a ‘dataset fingerprint’.

nnU-Net then creates several U-Net configurations for each dataset:

  • 2d: a 2D U-Net (for 2D and 3D datasets).
  • 3d_fullres: a 3D U-Net that operates on a high image resolution (for 3D datasets only).
  • 3d_lowres → 3d_cascade_fullres: a 3D U-Net cascade where a first 3D U-Net operates on low-resolution images and a second, high-resolution 3D U-Net refines the predictions of the former (for 3D datasets with large image sizes only).
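To make the "dataset fingerprint" idea concrete, here's a minimal, illustrative sketch (not nnU-Net's actual code) of the kind of statistics such a pipeline collects per dataset, like image shapes, voxel spacings, and intensity distribution, which then drive choices such as patch size, resampling, and normalization:

```python
import numpy as np

def dataset_fingerprint(images, spacings):
    """Toy 'dataset fingerprint': summarize shapes, voxel spacings,
    and intensity statistics across all training cases."""
    all_voxels = np.concatenate([img.ravel() for img in images])
    return {
        "median_shape": np.median([img.shape for img in images], axis=0),
        "median_spacing": np.median(spacings, axis=0),
        "intensity_mean": float(all_voxels.mean()),
        "intensity_std": float(all_voxels.std()),
        # Robust percentiles, useful for clipping/normalizing CT intensities
        "intensity_percentiles": np.percentile(all_voxels, [0.5, 99.5]).tolist(),
    }

# Two fake 3D "volumes" with their voxel spacings (z, y, x) in mm
images = [np.random.randn(40, 256, 256), np.random.randn(60, 256, 256)]
spacings = [(3.0, 0.8, 0.8), (2.5, 0.7, 0.7)]
fp = dataset_fingerprint(images, spacings)
```

The function names and keys here are hypothetical; nnU-Net's real fingerprint is richer, but the principle is the same: measure the data first, then configure the pipeline.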

Upon release, nnU-Net was evaluated on 23 datasets belonging to competitions from the biomedical domain.

Despite competing with handcrafted solutions for each respective dataset, nnU-Net’s fully automated pipeline scored several first places on open leaderboards!

Btw, nnU-Net is the main tool behind TotalSegmentator!

You can find out more about nnU-Net in its original paper. You can train your own nnU-Net model using their GitHub repo.

Cost Effective Deployment Method for Deep Learning Models

Lately I’ve been experimenting with serverless GPUs by building deep learning model endpoints. Specifically, I wanted to try a service that offers per-use billing.

The product that I used is RunPod.

With RunPod you can easily deploy deep learning models that require heavy computation on the GPUs.

You can then turn your deep learning model into an API endpoint that you can run synchronously or asynchronously.

Aaaand you pay by the second!

Meaning that if your model takes 1 second to do the computation, and you call your model 100 times per day, then you only pay for 100 seconds!
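The arithmetic is easy to check. Here's a quick back-of-the-envelope calculation with a hypothetical per-second rate (the real rate depends on which GPU you pick):

```python
# Hypothetical per-second GPU rate; actual pricing depends on the GPU type.
price_per_second = 0.0004   # USD, illustrative only
seconds_per_call = 1        # inference time per request
calls_per_day = 100

# With per-second billing you only pay for the seconds actually used
billed_seconds_per_day = seconds_per_call * calls_per_day   # 100 seconds
daily_cost = billed_seconds_per_day * price_per_second
monthly_cost = daily_cost * 30
```

Compare that with renting a GPU instance that idles 24/7 between those 100 calls, and the appeal is obvious.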

This is an extremely efficient and cost-effective way to do computation in the cloud, which translates into significant savings, especially for startups and small companies on a tight budget.

Here’s a high-level overview of how you deploy your deep learning model on a RunPod endpoint:

  • You build a Docker image that uses your model to do inference. It must contain a RunPod handler file.
  • You push your Docker image to a container registry, public or private (Docker Hub, for example).
  • You create what is called a “Template” on the RunPod platform. This specifies several parameters, such as the name of the Docker image to pull from the container registry.
  • You create an endpoint that uses the previously created Template, specifying the type of GPU you’d like, the number of workers to run, and a few other parameters.
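Putting the steps above together, the handler file inside the Docker image is essentially one Python function that RunPod calls for each job. A minimal sketch follows; the `run_inference` body is a placeholder you'd replace with your own model code, and the worker entrypoint (shown in comments) uses the `runpod` package available in the worker image:

```python
# handler.py -- lives inside the Docker image you push to the registry.

def run_inference(image_url):
    # Placeholder: here you would download the image, run your
    # segmentation model, and return the result (e.g. a mask URL).
    return {"segmentation": f"mask_for::{image_url}"}

def handler(job):
    # RunPod passes each request as a job dict; the payload you send
    # to the endpoint arrives under job["input"].
    image_url = job["input"]["image_url"]
    return run_inference(image_url)

# Inside the worker image, the entrypoint would be:
#   import runpod
#   runpod.serverless.start({"handler": handler})
```

Because the handler is a plain function, you can unit-test your inference logic locally before ever building the image.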

And Voilà! You now have your model deployed as an API that you can call from anywhere and you’re only billed for the time that the API was actually used.
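Calling the deployed endpoint is then an ordinary HTTP request. Here's a sketch of what the synchronous call might look like; the endpoint ID and API key are placeholders, and you should check RunPod's docs for the exact request schema:

```python
ENDPOINT_ID = "your-endpoint-id"     # placeholder
API_KEY = "your-runpod-api-key"      # placeholder

# /runsync blocks until the worker returns the result; the asynchronous
# variant queues the job and returns an ID you poll for later.
url = f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync"
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
payload = {"input": {"image_url": "https://example.com/scan.nii.gz"}}

# To actually send it (requires a real key), e.g. with the requests package:
#   response = requests.post(url, headers=headers, json=payload)
```

The `{"input": ...}` wrapper matches what the handler receives as `job["input"]` on the worker side.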

I believe that most deep learning models in the future will be deployed in this fashion. The value is just so clear!

Tweet (or X) of the Day

Meme of the Day 😂


That's it for this week's edition, I hope you enjoyed it!

Machine Learning for Medical Imaging

by Nour Islam Mokhtari from pycad.co

👉 Learn how to build AI systems for medical imaging domain by leveraging tools and techniques that I share with you! | 💡 The newsletter is read by people from: Nvidia, Baker Hughes, Harvard, NYU, Columbia University, University of Toronto and more!
