Segmentation in 3D using U-Nets with Delira - A very short introduction¶
Author: Justus Schock, Alexander Moriz
Date: 17.12.2018
This example shows how to use the U-Net implementation in Delira with PyTorch.
Let’s first set up the essential hyperparameters. We will use delira’s Parameters class for this:
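Delira’s Parameters class groups hyperparameters into nested dictionaries, typically split into model and training settings. As a rough, plain-dict sketch of the values involved (apart from the batch size and class count stated in this tutorial, the keys and numbers are illustrative, not delira’s exact schema):

```python
# Sketch of the hyperparameter values (a plain dict; in delira these would
# be wrapped in the Parameters class, and the concrete keys here are
# illustrative assumptions).
fixed_params = {
    "model": {
        "in_channels": 1,   # single-channel T1 MR image
        "num_classes": 4,   # background + 3 tissue types
    },
    "training": {
        "batch_size": 64,                  # as used in this tutorial
        "num_epochs": 10,                  # illustrative value
        "optimizer_params": {"lr": 1e-3},  # learning rate for Adam
    },
}
```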
Since we did not specify any metric, only the CrossEntropyLoss will be calculated for each batch. Because segmentation is essentially a per-voxel classification task, this should be sufficient. We will train our network with a batch size of 64, using Adam as the optimizer of choice.
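To make the loss concrete: cross-entropy treats each voxel as a small classification problem over the 4 classes. A plain numpy sketch of the computation (not the PyTorch implementation):

```python
import numpy as np

def cross_entropy(logits, target):
    """Mean cross-entropy.
    logits: (N, C) float array of raw scores, one row per voxel.
    target: (N,) int array of true class indices."""
    shifted = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # pick the log-probability of the correct class for each voxel
    return -log_probs[np.arange(len(target)), target].mean()

# two example "voxels", each with 4 class scores
logits = np.array([[2.0, 0.0, 0.0, 0.0],
                   [0.0, 2.0, 0.0, 0.0]])
target = np.array([0, 1])   # correct classes
loss = cross_entropy(logits, target)
```

The loss is low when the highest logit matches the target class and grows when it does not.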
Logging and Visualization¶
To get a visualization of our results, we should monitor them somehow.
For logging we will use Visdom. To start a visdom server, you need to execute the following command inside an environment which has visdom installed:
visdom -port=9999
This will start a visdom server on port 9999 of your machine and now we can start to configure our logging environment. To view your results you can open http://localhost:9999 in your browser.
Since a single visdom server can run multiple environments, we need to specify a (unique) name for our environment and need to tell the logger on which port it can find the visdom server.
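A minimal sketch of such a logging configuration, assuming a dict-based setup (the keys and the environment name are hypothetical):

```python
# Hypothetical logging configuration: a unique environment name plus the
# port on which the visdom server from above is listening.
logging_config = {
    "visdom_env": "unet3d_segmentation_example",  # unique per experiment
    "visdom_port": 9999,                          # matches `visdom -port=9999`
}
```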
Data Preparation¶
Loading¶
Next we will create a small train and validation set (in this case they will be the same to show the overfitting capability of the UNet).
Our data is a brain MR-image thankfully provided by the FSL in their introduction.
We first download the data and extract the T1 image and the corresponding segmentation:
Now, we load the image and the mask (they are both 3D), convert them to 32-bit floating point numpy arrays and ensure they have the same shape (i.e. that for each voxel in the image there is a voxel in the mask):
By querying the unique values in the mask, we get the following:
This means, there are 4 classes (background and 3 types of tissue) in our sample.
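The loading and sanity checks can be sketched with numpy alone; here dummy arrays stand in for the real image and mask (in the tutorial they would come from the downloaded files):

```python
import numpy as np

# Dummy stand-ins for the real 3D T1 image and segmentation mask.
image = np.random.rand(4, 4, 4).astype(np.float32)
mask = (np.arange(64).reshape(4, 4, 4) % 4).astype(np.float32)

# Ensure there is one mask voxel per image voxel.
assert image.shape == mask.shape

# Querying the unique values reveals the number of classes.
print(np.unique(mask))  # [0. 1. 2. 3.] -> 4 classes
```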
To load the data, we have to use a Dataset. The following defines a very simple dataset, accepting an image slice, a mask slice and the number of samples. It always returns the same sample until num_samples samples have been returned.
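A plain-Python stand-in for such a dataset might look like this (delira’s real datasets derive from its abstract dataset base class, which is omitted here, and the dict keys are assumptions):

```python
# Minimal sketch of the dataset described above: it keeps returning the
# same (image, mask) pair until `num_samples` items have been served.
class SingleSampleDataset:
    def __init__(self, img, mask, num_samples):
        # the single sample, stored once
        self.sample = {"data": img, "label": mask}
        self.num_samples = num_samples

    def __getitem__(self, index):
        # every index maps to the same sample
        return self.sample

    def __len__(self):
        return self.num_samples

# usage: 500 copies of the same slice pair
dataset = SingleSampleDataset("img_slice", "mask_slice", num_samples=500)
```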
Now, we can finally instantiate our datasets:
Augmentation¶
For Data-Augmentation we will apply a few transformations:
With these transformations we can now wrap our datasets into datamanagers:
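As an illustration of what such a transformation does, here is one typical spatial augmentation, a random mirror, sketched in plain numpy (in the tutorial the actual transforms come from a dedicated augmentation library):

```python
import numpy as np

def random_mirror(volume, rng, p=0.5):
    """Flip a 3D volume along each spatial axis independently
    with probability p (a common, cheap augmentation)."""
    for axis in range(volume.ndim):
        if rng.random() < p:
            volume = np.flip(volume, axis=axis)
    return volume

# usage: augment a small dummy volume with a seeded generator
vol = np.arange(8).reshape(2, 2, 2)
augmented = random_mirror(vol, np.random.default_rng(0))
```

Mirroring only rearranges voxels, so the shape and the set of values stay unchanged.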
Training¶
After we have done that, we can finally specify our experiment and run it. We will therefore use the already implemented UNet3dPytorch:
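Delira’s experiment class drives the training loop internally. To illustrate the overfitting idea from above with something self-contained, here is a toy loop that fits a single sample with plain SGD (not Adam, and not a U-Net; every name and value here is a stand-in):

```python
import numpy as np

# Toy "overfit one sample" loop: a linear model and squared error stand in
# for the network and the segmentation loss.
rng = np.random.default_rng(0)
w = rng.normal(size=3)          # stand-in for the network weights
x = np.array([1.0, 2.0, 3.0])   # the single, repeated training sample
y = 5.0                         # its target

lr = 0.01
for epoch in range(200):
    pred = w @ x                 # forward pass
    grad = 2 * (pred - y) * x    # gradient of the squared error
    w -= lr * grad               # optimizer step (plain SGD here)
```

After a few hundred steps on the same sample the prediction matches the target almost exactly, which is exactly the overfitting behaviour the tutorial demonstrates with the repeated MR slice.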
See Also¶
For a more detailed explanation have a look at:
* the introduction tutorial
* the classification example
* the 2d segmentation example
* the generative adversarial example