
Machine Learning for Medical Imaging

ML for 3D Medical Imaging

Published 10 months ago • 3 min read

Hi Reader,

Welcome to the PYCAD newsletter, where every week you receive doses of machine learning and computer vision techniques and tools to help you learn how to build AI solutions to empower the most vulnerable members of our society, patients.


Neural Networks for 3D Medical Imaging

When working with medical imaging data, you often have to deal with its 3D, volumetric nature.
So how do you build a deep learning solution for this type of data?

Here are 6 neural network architectures that you can use to train a deep learning model on 3D medical data:

3D U-Net:

The U-Net architecture is a powerful model for medical image segmentation, and the 3D U-Net extends the classic U-Net to volumetric segmentation. It consists of an encoding (downsampling) path and a decoding (upsampling) path.

The encoding path captures context in the input image, while the decoding path allows for precise localization. The 3D U-Net is very effective at handling the 3D nature of volumetric images.

You can check the code here.
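To make the encoder/decoder idea concrete, here is a minimal, dependency-free sketch that traces how the spatial size of a volume changes along the two paths (the function name and the 3-level depth are illustrative, not taken from the 3D U-Net paper):

```python
def unet3d_shapes(input_size, depth=3):
    """Trace the spatial size of a 3D volume through a U-Net-style
    encoder (halved at each level, e.g. by 2x2x2 pooling or strided
    convolutions) and decoder (doubled back up by transposed convolutions)."""
    d, h, w = input_size
    encoder = [(d, h, w)]
    for _ in range(depth):
        d, h, w = d // 2, h // 2, w // 2   # downsampling step
        encoder.append((d, h, w))
    decoder = [encoder[-1]]
    for _ in range(depth):
        d, h, w = d * 2, h * 2, w * 2      # upsampling step
        decoder.append((d, h, w))
    return encoder, decoder

encoder, decoder = unet3d_shapes((64, 128, 128))
print(encoder)      # [(64, 128, 128), (32, 64, 64), (16, 32, 32), (8, 16, 16)]
print(decoder[-1])  # (64, 128, 128) -- the decoder restores the input size
```

This arithmetic is also why 3D U-Net inputs are usually padded or cropped so that each dimension is divisible by 2 to the power of the network depth.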
​
V-Net:

The V-Net architecture is another 3D convolutional neural network designed for volumetric image segmentation. Like U-Net, V-Net has an encoder-decoder architecture, but it uses volumetric, full-resolution 3D convolutions, which makes it more computationally expensive than U-Net.

You can find the code here.
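One detail worth knowing: the V-Net paper also popularized the soft Dice loss for volumetric segmentation. Here is a minimal, dependency-free sketch, where flat lists stand in for voxel tensors and `eps` is an illustrative smoothing constant:

```python
def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss as used in V-Net: 1 - 2*|P intersect T| / (|P| + |T|).
    pred and target are flat lists of voxel probabilities / labels."""
    inter = sum(p * t for p, t in zip(pred, target))
    denom = sum(p * p for p in pred) + sum(t * t for t in target)
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

print(dice_loss([1, 0, 1, 1], [1, 0, 1, 1]))           # 0.0 -- perfect overlap
print(round(dice_loss([1, 0, 0], [0, 1, 0]), 4))       # 1.0 -- no overlap
```

The appeal for medical data is that Dice is insensitive to the heavy class imbalance between a small organ or lesion and the large background.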
​
HighResNet:

This architecture is designed to handle the challenges of segmenting structures in 3D medical images. It uses a series of 3D convolutional layers with residual connections. The model is trained end-to-end and can process an entire 3D image at once.

You can find the code here.
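HighResNet keeps feature maps at full resolution and instead grows the receptive field with dilated 3D convolutions inside its residual blocks. A small sketch of the receptive-field arithmetic (the dilation schedule below is illustrative, not the exact published configuration):

```python
def receptive_field(dilations, kernel=3):
    """Receptive field (per axis) of a stack of dilated convolutions:
    each layer adds dilation * (kernel - 1) to the receptive field."""
    rf = 1
    for dilation in dilations:
        rf += dilation * (kernel - 1)
    return rf

# e.g. three groups of residual layers at dilations 1, 2 and 4
print(receptive_field([1] * 6 + [2] * 6 + [4] * 6))  # 85
```

Doubling the dilation per group grows the receptive field quickly without ever downsampling, which is what lets the network see large context while keeping voxel-level resolution.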
​
EfficientNet3D:

This is a 3D adaptation of the EfficientNet architecture, which has been successful in 2D image classification tasks. It is not as commonly used for 3D segmentation as U-Net or V-Net, but it may be worth considering when computational resources are limited, as it is designed to offer a good trade-off between computational cost and performance.

You can find the code here.
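EfficientNet's key idea is compound scaling: depth, width and input resolution are scaled together from a single coefficient phi. A dependency-free sketch (the alpha, beta, gamma defaults are the values reported for 2D EfficientNet; a 3D variant would need to re-balance them, since resolution then has three axes):

```python
def compound_scale(phi, alpha=1.2, beta=1.1, gamma=1.15):
    """EfficientNet-style compound scaling: multiply depth by alpha**phi,
    width by beta**phi and resolution by gamma**phi, with the constants
    chosen so alpha * beta**2 * gamma**2 is roughly 2 (FLOPs about
    double for each increment of phi)."""
    return alpha ** phi, beta ** phi, gamma ** phi

depth, width, resolution = compound_scale(1)
print(depth, width, resolution)  # 1.2 1.1 1.15
```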
​
Attention U-Net:

This is a variation of U-Net that includes an attention mechanism, which lets the network focus on the parts of the image that are most relevant to the task at hand.

You can find the code here.
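The attention gate can be sketched per voxel: skip-connection features are re-weighted by coefficients computed from a coarser gating signal. Here is a minimal scalar version, where the weights `w_x`, `w_g` and `psi` stand in for the learned 1x1x1 convolutions of the actual architecture:

```python
import math

def attention_gate(skip, gate, w_x=1.0, w_g=1.0, psi=1.0):
    """Additive attention gate (Attention U-Net style), per voxel:
    alpha = sigmoid(psi * relu(w_x * x + w_g * g)); return alpha * x."""
    out = []
    for x, g in zip(skip, gate):
        act = max(0.0, w_x * x + w_g * g)           # ReLU
        alpha = 1.0 / (1.0 + math.exp(-psi * act))  # attention coefficient
        out.append(alpha * x)
    return out

print(attention_gate([1.0], [-5.0]))  # [0.5] -- weak gating suppresses the voxel
```

Voxels where the gating signal agrees with the skip features pass through almost unchanged; irrelevant regions are attenuated before the decoder sees them.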
​
DeepMedic:

This is a 3D CNN with dual pathways, one operating at normal resolution and another on a down-sampled version of the input, so it can combine local detail with larger contextual information.

You can find the code here.
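The dual-pathway idea can be sketched on a flat list standing in for a 3D patch: the second pathway sees a wider neighbourhood at lower resolution. DeepMedic down-samples the context pathway by a factor of 3; the simple striding below stands in for proper 3D average pooling, and the patch sizes are illustrative:

```python
def dual_pathway_inputs(volume, center, patch=5, context=15, factor=3):
    """DeepMedic-style inputs around one position: a normal-resolution
    patch, plus a 3x wider context patch down-sampled by `factor`
    so both pathways receive the same number of samples."""
    half, chalf = patch // 2, context // 2
    normal = volume[center - half : center + half + 1]
    context_patch = volume[center - chalf : center + chalf + 1 : factor]
    return normal, context_patch

volume = list(range(30))
normal, ctx = dual_pathway_inputs(volume, center=15)
print(normal)  # [13, 14, 15, 16, 17] -- fine detail around the voxel
print(ctx)     # [8, 11, 14, 17, 20]  -- same-size view of a 3x wider region
```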


MONAI is bringing some game-changing tools to the market

This past week I joined an event held by Kitware Inc. about MONAI and how it can be used to do impressive work in medical imaging.

One of the tools that MONAI offers is generative modeling for medical imaging: a package that lets you train generative models such as diffusion models and VAEs (variational autoencoders), with an easy-to-understand API for building and training them.

One of the questions I asked the presenter was: how much data do we need to get acceptable results?

The answer: a few hundred cases!

This means that if you have, say, 500 patient cases in 3D format, you can already build a generative model that performs relatively well.

Here are some of the features that MONAI Generative Models contains:

  • Network architectures: Diffusion Model, Autoencoder-KL, VQ-VAE, autoregressive transformers, (multi-scale) Patch-GAN discriminator.
  • Diffusion model noise schedulers: DDPM, DDIM, and PNDM.
  • Losses: adversarial losses, spectral losses, and perceptual losses (for 2D and 3D data, using LPIPS, RadImageNet, and 3DMedicalNet pre-trained models).
  • Metrics: Multi-Scale Structural Similarity Index Measure (MS-SSIM) and Fréchet Inception Distance (FID).
  • Inferer classes for Diffusion Models, Latent Diffusion Models, and VQ-VAE + Transformer (compatible with MONAI style), with methods to train, sample synthetic images, and obtain the likelihood of input data.
  • A MONAI-compatible trainer engine (based on Ignite) to train models with reconstruction and adversarial components.
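To give a feel for what the noise schedulers (DDPM and friends) actually do, here is a dependency-free sketch of the forward (noising) process on a single scalar voxel; the two-step beta schedule is purely illustrative:

```python
import math
import random

def ddpm_forward(x0, t, betas, rng=random.Random(0)):
    """DDPM forward process: x_t = sqrt(alpha_bar_t) * x0
    + sqrt(1 - alpha_bar_t) * noise, where alpha_bar_t is the
    cumulative product of (1 - beta) up to step t."""
    alpha_bar = 1.0
    for beta in betas[: t + 1]:
        alpha_bar *= 1.0 - beta
    noise = rng.gauss(0.0, 1.0)
    x_t = math.sqrt(alpha_bar) * x0 + math.sqrt(1.0 - alpha_bar) * noise
    return x_t, alpha_bar

# with betas = [0.1, 0.1], alpha_bar at t=1 is 0.9 * 0.9 = 0.81
_, alpha_bar = ddpm_forward(1.0, t=1, betas=[0.1, 0.1])
print(round(alpha_bar, 10))  # 0.81
```

A diffusion model is trained to predict the added noise and invert this process step by step; the scheduler choice (DDPM vs. DDIM vs. PNDM) mainly changes how the betas are laid out and how many reverse steps sampling needs.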


What'd you think of today's edition?


That's it for this week's edition, I hope you enjoyed it!


Machine Learning for Medical Imaging

by Nour Islam Mokhtari from pycad.co

👉 Learn how to build AI systems for the medical imaging domain by leveraging tools and techniques that I share with you! | 💡 The newsletter is read by people from: Nvidia, Baker Hughes, Harvard, NYU, Columbia University, University of Toronto and more!
