
Machine Learning for Medical Imaging

Deploy PyTorch Models in C++

Published 11 months ago • 2 min read

Hello Reader,

Welcome to another edition of the PYCAD newsletter, where we cover interesting topics in Machine Learning and Computer Vision applied to Medical Imaging. The goal of this newsletter is to help you stay up to date and learn important concepts in this amazing field! I've got some cool insights for you below ↓

​

How to Deploy PyTorch Models in C++

Need to deploy your PyTorch model in a C++ environment? Here's the best way to do it.
​
Use PyTorch's TorchScript, an intermediate representation of a PyTorch model that can be run in a high-performance environment such as C++.
​
TorchScript lets you separate your PyTorch model from Python, so you can run it in a non-Python environment like C++.
​
Let's dive into two main ways of using TorchScript: Tracing and Scripting.
​
Tracing:
​
A straightforward method: you take an example input, run it through your model, and TorchScript records the operations performed during the forward pass. It's quick and easy, but it can fall short for models with data-dependent control flow such as loops and if-else conditions.
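For instance, a minimal tracing sketch could look like this (the model, layer sizes, and file name are just illustrative placeholders):

import torch
import torch.nn as nn

# A toy model with no data-dependent control flow, so tracing alone is enough.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

example_input = torch.rand(1, 10)            # any representative input works
traced_model = torch.jit.trace(model, example_input)

traced_model.save("traced_model.pt")         # loadable from C++ with torch::jit::load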
​
Scripting:
​
This comes in handy when tracing isn't enough. Scripting compiles your model's code directly rather than recording one example run, so dynamic control flow is handled well and your models keep their flexibility.
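As a rough illustration (the module below is made up for this example), scripting keeps the if-else intact:

import torch
import torch.nn as nn

class MyModule(nn.Module):
    def forward(self, x):
        # Data-dependent branch: scripting compiles it, so both paths survive.
        if x.sum() > 0:
            return x * 2
        else:
            return x - 1

scripted_model = torch.jit.script(MyModule())
print(scripted_model.code)                   # the generated TorchScript still contains the if-else
scripted_model.save("scripted_model.pt")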
​
So whether you're new to deep learning or a seasoned pro, taking time to understand TorchScript can lead to more optimized models and open up a whole new world of possibilities for deployment.
​
Below is sample code showing how to combine tracing and scripting to get an intermediate representation of your model.
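The original snippet isn't reproduced here, but a minimal sketch of the idea could look like this (layer sizes and names are illustrative):

import torch
import torch.nn as nn

class Gate(nn.Module):
    def forward(self, x):
        # The data-dependent part of the model: scripting preserves this branch.
        if x.sum() > 0:
            return x
        return -x

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)
        self.gate = torch.jit.script(Gate())  # script the control-flow submodule

    def forward(self, x):
        return self.gate(self.linear(x))

model = MyModel()
example_input = torch.rand(1, 4)

# Tracing the full model inlines the scripted submodule, keeping its if-else.
traced = torch.jit.trace(model, example_input)
traced.save("model.pt")                      # ready to load from C++ with torch::jit::load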
​
Important note:
​
If we had only used tracing and no scripting in this example, the if-else statement inside our model would be lost in the final intermediate representation. Hence, you'd ship an erroneous model to production.
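To make that concrete, here is what happens if an if-else module like the Gate above is traced directly instead of scripted:

import torch
import torch.nn as nn

class Gate(nn.Module):                       # same kind of if-else module as in the sketch above
    def forward(self, x):
        if x.sum() > 0:
            return x
        return -x

gate = Gate()
traced_gate = torch.jit.trace(gate, torch.ones(3))   # PyTorch warns that the branch gets baked in

neg = -torch.ones(3)
print(gate(neg))          # tensor([1., 1., 1.])    -> the if-else flips the sign
print(traced_gate(neg))   # tensor([-1., -1., -1.]) -> only the branch seen during tracing remains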

​

I-JEPA for Self-Supervised Learning

Many of the latest advancements in computer vision have come from self-supervised learning, or SSL for short. A new SSL technique called JEPA aims to help models reason more like humans do. Here's how.
​
SSL techniques so far have focused on comparing the pixels of two images.
​
Sometimes they compare similar and dissimilar images. For example, an image of a cat should be closer to an image of another cat in the embedding space. And an image of a cat should be further from an image of a dog in the embedding space.
​
Other times, they only compare an image and its augmented version.
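As a rough, generic sketch of that idea (the encoder and images are random stand-ins, not any particular SSL method):

import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))

cat_a = torch.rand(1, 3, 32, 32)   # anchor: a cat image
cat_b = torch.rand(1, 3, 32, 32)   # positive: another cat, or an augmented view of cat_a
dog = torch.rand(1, 3, 32, 32)     # negative: a dog image

anchor, positive, negative = encoder(cat_a), encoder(cat_b), encoder(dog)

# Pull the two cats together and push the cat and the dog apart in embedding space.
loss = F.triplet_margin_loss(anchor, positive, negative, margin=1.0)
loss.backward()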
​
The new technique, called I-JEPA (Image-based Joint-Embedding Predictive Architecture), tries to learn by building an internal model of the outside world that compares abstract representations of images rather than raw pixels.
​
Basically, JEPA uses a single context block to predict the representations of various target blocks originating from the same image.
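A highly simplified sketch of that objective might look like the following (the real I-JEPA uses Vision Transformer encoders, a transformer predictor conditioned on target positions, several target blocks, and an EMA-updated target encoder; everything here is only meant to show where the loss lives):

import torch
import torch.nn as nn
import torch.nn.functional as F

dim = 64
context_encoder = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
target_encoder = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
predictor = nn.Linear(dim, dim)

# The target encoder is a frozen copy of the context encoder (EMA-updated in the real method).
target_encoder.load_state_dict(context_encoder.state_dict())
for p in target_encoder.parameters():
    p.requires_grad_(False)

patches = torch.randn(1, 16, dim)      # one image as 16 patch embeddings
context_block = patches[:, :8]         # the single context block the model sees
target_block = patches[:, 8:12]        # a target block whose representation must be predicted

with torch.no_grad():
    target_repr = target_encoder(target_block)           # abstract representations, not pixels

context_repr = context_encoder(context_block).mean(dim=1, keepdim=True)
pred = predictor(context_repr).expand_as(target_repr)    # predict the target representations

loss = F.mse_loss(pred, target_repr)   # the loss is computed in representation space
loss.backward()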
​
Below is an image that gives a clear overview of this.
​
But the key question when it comes to self-supervised learning methods is always: how good are the learned representations when tested on downstream tasks like classification, object detection, ...?
​
Well, JEPA performs quite well!
​
It doesn't just score high on downstream tasks, it is also very efficient to train: it takes fewer epochs to converge compared to other methods.


​

New Course Announcement: Python for Medical Imaging

My brother has released a new course on medical imaging using Python.

The course begins with an introduction to Python, followed by a deep-dive into essential libraries used in the medical imaging domain. You'll then explore MONAI, its unique features, and how it can be used for effective preprocessing and analysis of medical imaging data. Whether you're a healthcare professional, researcher, or computer scientist looking to venture into medical imaging, this course offers the tools and knowledge you need.

You can grab it now with 50% off by clicking the button below.

​


​

What'd you think of today's edition?

​

​

That's it for this week's edition, I hope you enjoyed it!

​

Machine Learning for Medical Imaging

by Nour Islam Mokhtari from pycad.co

👉 Learn how to build AI systems for the medical imaging domain by leveraging tools and techniques that I share with you! | 💡 The newsletter is read by people from: Nvidia, Baker Hughes, Harvard, NYU, Columbia University, University of Toronto and more!
