
Machine Learning for Medical Imaging

Do you know these ML terms?


Hi Reader,

Welcome to the PYCAD newsletter, where every week you receive a dose of machine learning and computer vision techniques and tools to help you build AI solutions that empower the most vulnerable members of our society: patients.


FLOPS in Machine Learning

When reading ML papers, you’ve probably noticed the term FLOPS. Do you know what it means and how it’s computed?

Here’s how.

“FLOPS” is an acronym that stands for “Floating Point Operations Per Second.”

It is a common measure of computational performance and is often used in the context of high-performance computing (HPC) and machine learning to gauge the speed of computer processors.

Each operation that a processor performs on floating-point numbers, such as addition, subtraction, multiplication, or division, counts as one floating-point operation.

Keep this in mind.

If a processor can perform one million such operations in a second, it would be said to have a performance of one million FLOPS, or one megaflop.

Keep that definition in mind.

There are many different ways to calculate FLOPS, but one commonly used method is to look at:

  • the number of cores in a processor,
  • the clock speed of each core,
  • and the number of operations that each core can perform per clock cycle.

Let’s look at an example.

If a processor has 4 cores, each running at a clock speed of 2 GHz (2 billion cycles per second), and each core can perform 2 operations per clock cycle, the overall performance would be:

4 cores * 2 billion cycles per second/core * 2 operations/cycle = 16 billion FLOPS, or 16 GFLOPS (gigaflops).
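
As a quick sanity check, here is that same arithmetic as a short Python snippet, using the numbers from the example above:

```python
# Theoretical peak FLOPS = cores * clock speed (Hz) * operations per cycle.
cores = 4
clock_hz = 2_000_000_000  # 2 GHz, i.e. 2 billion cycles per second
ops_per_cycle = 2

peak_flops = cores * clock_hz * ops_per_cycle
print(f"{peak_flops / 1e9:.0f} GFLOPS")  # -> 16 GFLOPS
```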

In the context of machine learning, the number of floating-point operations required for training or inference on a model can be seen as a measure of its computational cost. (Papers often write this count as "FLOPs", with a lowercase s, to distinguish it from the per-second rate "FLOPS".)

The calculation is based on the operations performed during forward and backward propagation in the neural network.

For example, if a neural network model uses matrix multiplication in its layers, the number of FLOPs is calculated from the sizes of the matrices being multiplied, since each scalar multiplication and addition inside the matrix product counts as one floating-point operation.
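
To make that concrete, here is a minimal Python sketch of the usual counting convention for a single dense (matrix-multiply) layer. The layer sizes, batch size, and the 3x-forward rule of thumb for a training step are illustrative assumptions, not figures from any specific model:

```python
# FLOPs for multiplying an (m x k) matrix by a (k x n) matrix:
# each of the m * n output entries takes k multiplications and
# (k - 1) additions, usually approximated as 2 * m * k * n FLOPs.
def matmul_flops(m: int, k: int, n: int) -> int:
    return 2 * m * k * n

# Example: a dense layer mapping 1024 features to 512 outputs,
# applied to a batch of 32 inputs.
forward = matmul_flops(32, 1024, 512)
print(f"forward pass: {forward / 1e6:.1f} MFLOPs")  # ~33.6 MFLOPs

# A common rule of thumb: the backward pass costs roughly twice the
# forward pass, so one training step is about 3x the forward FLOPs.
print(f"training step: ~{3 * forward / 1e6:.1f} MFLOPs")
```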

Why do you need to know this?

Because understanding FLOPS helps you determine the computational cost of a machine learning model. This becomes increasingly important with today's large models for natural language as well as computer vision: the more FLOPS your model needs, the more expensive it is to run and maintain.


Quantization: The Key to Efficient Machine Learning Models?

Quantization is a pivotal technique in optimizing machine learning models, especially when deploying them to resource-constrained environments like mobile or embedded devices.

By reducing the numerical precision of the model's parameters, we can significantly decrease both memory requirements and computational load, thereby enhancing the model's efficiency.

How does this affect FLOPS (Floating Point Operations Per Second)?

Quite substantially.

Models with quantized parameters require fewer resources to perform the same computations.

For example, using 8-bit integers instead of 32-bit floating-point numbers reduces the memory size and computational needs by a factor of 4.

This reduction translates to a decrease in the required FLOPS, or IOPS (integer operations per second) once the math runs on integers, allowing for faster and more efficient model predictions.
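
To see where the factor of 4 comes from, here is a minimal NumPy sketch of one common 8-bit approach, affine (scale and zero-point) quantization. The array shape and scheme details are illustrative assumptions; real frameworks expose dedicated quantization APIs for this:

```python
import numpy as np

# Affine (scale + zero-point) quantization of float32 weights to int8.
weights = np.random.randn(1024, 1024).astype(np.float32)

# Map the observed float range onto the 256 available int8 levels.
scale = (weights.max() - weights.min()) / 255.0
zero_point = np.round(-weights.min() / scale) - 128

q = np.clip(np.round(weights / scale + zero_point), -128, 127).astype(np.int8)

# Memory shrinks by 4x: 4 bytes per float32 vs 1 byte per int8.
print(weights.nbytes / q.nbytes)  # -> 4.0

# Dequantize to approximate the originals; the gap is the precision
# lost to quantization, which is what accuracy testing must catch.
recovered = (q.astype(np.float32) - zero_point) * scale
print(np.abs(weights - recovered).max())  # small reconstruction error
```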

However, it's crucial to understand the trade-offs.

Quantization can result in a slight decrease in model accuracy due to the lower numerical precision. As such, careful testing and validation are required.

Nonetheless, with the ever-growing demand for efficient AI, quantization is becoming increasingly important in the ML world, enabling powerful AI capabilities on edge devices.

Why do you need to know this?

With all the privacy issues we're seeing lately around machine learning APIs such as ChatGPT, it is becoming increasingly important to have on-device models. But since most of the best ML models are huge, there is a strong need to shrink them so they fit on low-compute devices such as mobile phones. Quantization is the leading technique for achieving this.



What'd you think of today's edition?


That's it for this week's edition, I hope you enjoyed it!

