
Machine Learning for Medical Imaging

Make Your PyTorch Model 9x Faster

Published 9 months ago • 4 min read

Hi Reader,

Welcome to the PYCAD newsletter, where every week you receive doses of machine learning and computer vision techniques and tools to help you learn how to build AI solutions to empower the most vulnerable members of our society, patients.


Top ML Tools and Repos

  • MITK: a free open-source software system for the development of interactive medical image processing software, based on ITK and VTK.
  • DMD: Deep MAT Deformation, from Volumetric Medical Imaging to Organ Surface Reconstruction.
  • SwinMM: Masked Multi-view with Swin Transformers for 3D Medical Image Segmentation.
  • ANTsPyNet: a collection of deep learning architectures and applications ported to the Python language, plus tools for basic medical image processing.


ML Deep Dive: Quantization on x86 CPU with PyTorch

Quantization in deep learning has made it possible to significantly decrease the latency of your models. A recent addition to PyTorch lets you quantize your model specifically for the x86 CPU backend.

Here’s how to use it.

Quick recap of quantization with PyTorch.

The currently recommended way to quantize models in PyTorch is FX graph mode quantization.
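To make the recap concrete, here's a minimal sketch of FX graph mode post-training static quantization; torchvision's resnet50 and the random calibration tensors are placeholders purely for illustration:

```python
import torch
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx
from torchvision.models import resnet50

# Float model in eval mode (post-training static quantization)
model_fp32 = resnet50(weights=None).eval()

# Default qconfig mapping (you can also pass a backend string, e.g. "x86" or "fbgemm")
qconfig_mapping = get_default_qconfig_mapping()

# FX needs example inputs to trace the model
example_inputs = (torch.randn(1, 3, 224, 224),)

# 1) Insert observers
prepared = prepare_fx(model_fp32, qconfig_mapping, example_inputs)

# 2) Calibrate with representative data (random tensors here, for illustration only)
with torch.inference_mode():
    for _ in range(10):
        prepared(torch.randn(1, 3, 224, 224))

# 3) Convert to the final INT8 model
model_int8 = convert_fx(prepared)
```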

Before PyTorch 2.0, the default quantization backend (a.k.a. QEngine) on x86 CPUs was FBGEMM, which leveraged the FBGEMM performance library to achieve the performance speedup.

In the PyTorch 2.0 release, a new quantization backend called x86 was introduced to replace FBGEMM.

The x86 quantization backend offers improved INT8 inference performance over the original FBGEMM backend because it leverages both (switching between the two backends is sketched just after this list):

  • the strengths of FBGEMM, and
  • the Intel oneAPI Deep Neural Network Library (oneDNN) kernels.
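In practice, that switch is mostly a matter of strings; here's a quick sketch (assuming PyTorch 2.0+ running on an x86 CPU):

```python
import torch
from torch.ao.quantization import get_default_qconfig_mapping

# Which INT8 engines this build of PyTorch supports
print(torch.backends.quantized.supported_engines)  # e.g. ['none', 'onednn', 'x86', 'fbgemm']

# New backend: dispatches to FBGEMM or oneDNN kernels under the hood
torch.backends.quantized.engine = "x86"
qconfig_mapping = get_default_qconfig_mapping("x86")

# Old backend, e.g. for an apples-to-apples comparison
# torch.backends.quantized.engine = "fbgemm"
# qconfig_mapping = get_default_qconfig_mapping("fbgemm")
```

The resulting qconfig_mapping then goes into prepare_fx exactly as in the recap sketch above.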

To measure the performance gains with this new backend, several models were tested using the old backend (FBGEMM) and the new backend (x86).

What you're seeing below are speedup ratios: INT8 inference with the FBGEMM backend relative to FP32 (floating point), and INT8 inference with the x86 backend relative to FP32.

[Chart: per-model INT8 speedup over FP32 with the FBGEMM and x86 backends]

In some cases, like the one on the far right for the wide_resnet50_2 model, you get a little over 9x speedup 😲 when using the x86 backend, versus a little over 3x speedup with the FBGEMM backend.

Overall, the results showed a 2.97x geomean speedup over FP32 inference with the x86 backend, compared to a 1.43x speedup with the FBGEMM backend.

Really impressive stuff!
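If you want to get a rough feel for this kind of speedup on your own model, one simple approach is to time the FP32 and INT8 versions on CPU and divide; model_fp32 and model_int8 below refer to the recap sketch above, and the exact numbers will of course depend on your hardware:

```python
import time
import torch

def avg_latency_ms(model, example, n_warmup=10, n_iters=100):
    """Average CPU latency in milliseconds (a rough measurement, not a rigorous benchmark)."""
    with torch.inference_mode():
        for _ in range(n_warmup):      # warm-up runs are excluded from the timing
            model(example)
        start = time.perf_counter()
        for _ in range(n_iters):
            model(example)
    return (time.perf_counter() - start) / n_iters * 1e3

example = torch.randn(1, 3, 224, 224)
t_fp32 = avg_latency_ms(model_fp32, example)
t_int8 = avg_latency_ms(model_int8, example)
print(f"INT8 speedup over FP32: {t_fp32 / t_int8:.2f}x")
```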

👉 Code example and full tutorial can be found here.

Why do you need to know this?

Because in many instances when building ML solutions, you're faced with a scenario where you need:

  • To make your model smaller so it fits into a low-memory device.
  • To make your model run faster.

In these cases, quantization is your go-to approach. Btw, I wrote an article about a small experiment I did with the PyTorch quantization module to test the new x86 backend. You can check it out here.
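And for the "make your model smaller" case specifically, even the one-line dynamic quantization API already stores the weights of Linear layers as INT8. Here's a small sketch on a toy model (the layer sizes are made up for illustration):

```python
import os
import torch
import torch.nn as nn

# Toy model, just to illustrate the size reduction
toy_fp32 = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10))

# Post-training dynamic quantization: Linear weights are stored as INT8
toy_int8 = torch.ao.quantization.quantize_dynamic(toy_fp32, {nn.Linear}, dtype=torch.qint8)

def size_mb(model, path="tmp_weights.pt"):
    """Serialized size of the model's weights in megabytes."""
    torch.save(model.state_dict(), path)
    mb = os.path.getsize(path) / 1e6
    os.remove(path)
    return mb

print(f"fp32: {size_mb(toy_fp32):.2f} MB | int8: {size_mb(toy_int8):.2f} MB")
```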


How I Got my First Job in ML

In 2018 I got my first job as a research and development engineer with a focus on machine learning applications in the automatic visual inspection industry.

The company I worked for was small, but it was building a cutting-edge solution powered by augmented reality to help with the inspection of many different mechanical parts and engines. We had Airbus and Safran as clients.

So how did I get this job?

Well, around 6 months before I started this job, I actually got an internship at this company. There were two of us doing internships.

The company was growing, and very quickly it became clear that they were looking to hire more engineers.

It also became clear that they would probably hire only one of us as a full-time engineer.

I really wanted to get the job so I basically worked a lot, and I mean a lot.

Although my internship was only supposed to involve trying out some ML algorithms and testing them on their datasets, I had another goal in mind.

My goal was to integrate my ML solutions into their product before I finished my internship.

I knew that if I did this, I would have a major advantage and a much higher chance of getting the job.

It was tough, and I worked for very little because I was an intern, but I had my eyes on the bigger goal: a full-time ML engineering job.

So around 3 or 4 months into the internship, I had my first ML solution integrated into their product, and it was an amazing feeling!

I felt that my work was valued.

Around the same time, I started working on a new ML solution for another visual inspection problem they had.

Not long after that, the CTO approached me and told me that they wanted to hire me as a full-time engineer once I finished my internship, which was a 6-month internship.

Why am I sharing this story with you?

Well, to encourage you to go above and beyond when doing any sort of internship. It's one of the best routes to a full-time job, and I haven't seen many people talk about it.

But how do you get an internship in the first place?

Well, that's a story for another time!


Tweet of the Day


Meme of the Day 😂


Build an AI-Powered Medical Imaging Chatbot


This week my brother released a new course at the intersection of medical imaging and machine learning with LLMs and ChatGPT. The course is about how to build an AI-powered chatbot for medical imaging. By the end of this course, you'll be able to build chatbots that can chat with any type of document or web page, specifically for medical imaging and the MONAI documentation. You can check it out here. A 50% discount is on for the next 2 days!


What'd you think of today's edition?


That's it for this week's edition, I hope you enjoyed it!


Machine Learning for Medical Imaging

by Nour Islam Mokhtari from pycad.co

👉 Learn how to build AI systems for the medical imaging domain by leveraging tools and techniques that I share with you! | 💡 The newsletter is read by people from: Nvidia, Baker Hughes, Harvard, NYU, Columbia University, University of Toronto and more!
