Hello Reader, Welcome to another edition of PYCAD newsletter where we cover interesting topics in Machine Learning and Computer Vision applied to Medical Imaging. The goal of this newsletter is to help you stay up-to-date and learn important concepts in this amazing field! I've got some cool insights for you below ↓
Have you heard of LMFlow?
It’s a framework that allows you to easily finetune open source large language models on your own datasets!
Here are the key features supported by the toolkit:
Below you can see the overall system design of LMFlow.
Note: LMFlow is not to be confused with MLflow, which is an MLOps framework.
You can read more about LMFlow in the original paper and its GitHub repo.
​
Much of machine learning research is about going from a mathematical model to an ML implementation. Here’s how to go from a conditional probability to a neural architecture.
​
Let's start by defining a simple conditional probability problem. Consider a supervised learning task where we have input data X and target data Y, and we want to model the conditional probability P(Y | X), meaning the probability of Y given X.
​
A common way to model this in machine learning is to assume that this probability follows some parametric form and then use the data to estimate the parameters of this model.
​
For instance, we could assume that P(Y | X) is a Gaussian distribution with mean µ(X) and standard deviation σ(X). This mean µ(X) and standard deviation σ(X) could be any functions of X, but in order to learn them from data, we often assume they can be parameterized with some parameters θ, and are differentiable with respect to these parameters.
​
This is where neural networks come in. A neural network is just a function approximator that's highly flexible and differentiable, making it suitable to represent these functions µ(X) and σ(X).
​
Let's assume that our neural network is a simple feed-forward network with parameters θ. Then we can write our model as:
​
µ(X; θ) = NN_µ(X; θ)
σ(X; θ) = NN_σ(X; θ)
​
P(Y | X; θ) = N(Y; NN_µ(X; θ), NN_σ(X; θ)^2)
​
Here, NN_µ and NN_σ are two neural networks which take the same input X and share the same parameters θ, and N is the Gaussian distribution. Their outputs represent the mean and standard deviation of the Gaussian distribution of Y given X.
​
To train this model, we would use a method called maximum likelihood estimation (MLE), which aims to find the parameters θ that maximize the likelihood of the observed data.
​
For our Gaussian model with a fixed, constant σ, this reduces to minimizing the mean squared error between Y and NN_µ(X; θ). When σ(X) is learned as well, we instead minimize the full negative log-likelihood, which weights each squared error by the predicted uncertainty.
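To make the MSE connection concrete, here is a quick standalone check (plain Python, my own illustration, not from the paper): with a constant σ, the Gaussian negative log-likelihood equals the MSE rescaled by 1/(2σ²) plus a constant, so minimizing one minimizes the other.

```python
import math

def gaussian_nll(y, mu, sigma):
    """Average negative log-likelihood of y under N(mu, sigma^2)."""
    n = len(y)
    return sum(
        0.5 * math.log(2 * math.pi * sigma**2) + (yi - mi) ** 2 / (2 * sigma**2)
        for yi, mi in zip(y, mu)
    ) / n

def mse(y, mu):
    """Mean squared error between targets y and predictions mu."""
    return sum((yi - mi) ** 2 for yi, mi in zip(y, mu)) / len(y)

y = [1.0, 2.0, 3.0]
mu = [1.1, 1.8, 3.3]
sigma = 1.5

# NLL == MSE / (2 sigma^2) + constant, up to floating-point error
constant = 0.5 * math.log(2 * math.pi * sigma**2)
assert abs(gaussian_nll(y, mu, sigma) - (mse(y, mu) / (2 * sigma**2) + constant)) < 1e-12
```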
​
Below, you can see how we might implement this in code using PyTorch.
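Here is a minimal sketch of such a model (the `GaussianNet` class, toy data, and hyperparameters are my own illustration, not a reference implementation):

```python
import torch
import torch.nn as nn

class GaussianNet(nn.Module):
    """Feed-forward network predicting the mean and standard deviation
    of P(Y | X) for each input X."""
    def __init__(self, in_dim=1, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu_head = nn.Linear(hidden, 1)
        # Predict log(sigma) so that sigma = exp(log_sigma) is always positive.
        self.log_sigma_head = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.body(x)
        return self.mu_head(h), self.log_sigma_head(h).exp()

def gaussian_nll(y, mu, sigma):
    # Negative log-likelihood of y under N(mu, sigma^2),
    # averaged over the batch (up to an additive constant).
    return (torch.log(sigma) + 0.5 * ((y - mu) / sigma) ** 2).mean()

model = GaussianNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy data: y is a noisy linear function of x
x = torch.randn(256, 1)
y = 2.0 * x + 0.1 * torch.randn(256, 1)

for _ in range(100):
    mu, sigma = model(x)
    loss = gaussian_nll(y, mu, sigma)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Predicting log σ instead of σ directly is a common trick to keep the standard deviation positive without constraining the network's output layer.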
​
In this code, we have a neural network that outputs two values for each input: a mean and a standard deviation. The loss function is defined as the negative log-likelihood of the Gaussian distribution, which we try to minimize using gradient descent.
Why would you want to understand this?
Because it builds intuition for how some of the powerful models in generative AI work. For example, variational autoencoders (VAEs) and diffusion models such as Stable Diffusion both model conditional probabilities with neural networks.
Deep Learning for Object Detection using TensorFlow.
Deep Learning for Image Segmentation using Mask R-CNN and TensorFlow.
​
​
That's it for this week's edition. I hope you enjoyed it!
👉 Learn how to build AI systems for medical imaging domain by leveraging tools and techniques that I share with you! | 💡 The newsletter is read by people from: Nvidia, Baker Hughes, Harvard, NYU, Columbia University, University of Toronto and more!