dltk.core package

dltk.core.activations module

dltk.core.activations.leaky_relu(inputs, alpha=0.01)[source]

Leaky ReLU activation function

Parameters:
  • inputs (tf.Tensor) – input Tensor
  • alpha (float) – leakiness parameter
Returns:

a leaky ReLU activated tensor

Return type:

tf.Tensor
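
The element-wise computation, f(x) = x for x > 0 and alpha * x otherwise, can be sketched in pure Python (illustrative only; the real function operates on a tf.Tensor):

```python
def leaky_relu(x, alpha=0.01):
    # Pass positive values through unchanged; scale negatives by alpha.
    return [v if v > 0 else alpha * v for v in x]

print(leaky_relu([-2.0, 0.0, 3.0]))
```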

dltk.core.activations.prelu(inputs, alpha_initializer=<tensorflow.python.ops.init_ops.Constant object>)[source]

Parametric ReLU (PReLU) activation function

Parameters:
  • inputs (tf.Tensor) – input Tensor
  • alpha_initializer (tf.Initializer, optional) – initializer for the learnable alpha parameter
Returns:

a PReLU activated tensor

Return type:

tf.Tensor
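
The forward pass is identical to leaky ReLU, except that alpha is a trainable parameter (initialized from alpha_initializer) rather than a fixed constant. A minimal sketch, assuming a scalar alpha:

```python
def prelu(x, alpha=0.25):
    # Same shape as leaky ReLU, but alpha would be learned during training,
    # starting from the value supplied by alpha_initializer.
    return [v if v > 0 else alpha * v for v in x]
```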

dltk.core.losses module

dltk.core.losses.dice_loss(logits, labels, num_classes, smooth=1e-05, include_background=True, only_present=False)[source]

Calculates a smooth Dice coefficient loss from sparse labels.

Parameters:
  • logits (tf.Tensor) – logits prediction for which to calculate the Dice loss
  • labels (tf.Tensor) – sparse labels used for the Dice loss calculation
  • num_classes (int) – number of class labels to evaluate on
  • smooth (float) – smoothing coefficient for the loss computation
  • include_background (bool) – flag to include a loss on the background label or not
  • only_present (bool) – flag to include only labels present in the inputs or not
Returns:

Tensor scalar representing the loss

Return type:

tf.Tensor
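
The per-class smooth Dice loss can be sketched in pure Python on flattened inputs (a simplified illustration that takes probabilities rather than logits and omits the only_present option; the function and parameter names mirror the signature above but the body is an assumption, not DLTK's implementation):

```python
def dice_loss(probs, labels, num_classes, smooth=1e-5, include_background=True):
    # probs: one row of class probabilities per voxel; labels: sparse ints.
    start = 0 if include_background else 1
    losses = []
    for c in range(start, num_classes):
        onehot = [1.0 if l == c else 0.0 for l in labels]
        p = [row[c] for row in probs]
        intersection = sum(pi * li for pi, li in zip(p, onehot))
        # The smoothing term keeps the ratio well-defined for empty classes.
        dice = (2.0 * intersection + smooth) / (sum(p) + sum(onehot) + smooth)
        losses.append(1.0 - dice)
    return sum(losses) / len(losses)
```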

dltk.core.losses.sparse_balanced_crossentropy(logits, labels)[source]

Calculates a class frequency balanced crossentropy loss from sparse labels.

Parameters:
  • logits (tf.Tensor) – logits prediction for which to calculate crossentropy error
  • labels (tf.Tensor) – sparse labels used for crossentropy error calculation
Returns:

Tensor scalar representing the mean loss

Return type:

tf.Tensor
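
A pure-Python sketch of one plausible balancing scheme, weighting each sample's crossentropy term by the inverse frequency of its class (illustrative; the exact weighting used by DLTK may differ, and the real function takes logits rather than probabilities):

```python
import math

def sparse_balanced_crossentropy(probs, labels, num_classes):
    # Inverse-frequency weights make rare classes contribute as much to the
    # mean loss as common ones.
    counts = [labels.count(c) for c in range(num_classes)]
    total = len(labels)
    loss = 0.0
    for p, l in zip(probs, labels):
        weight = total / (num_classes * counts[l])
        loss += -weight * math.log(p[l])
    return loss / total
```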

dltk.core.metrics module

dltk.core.metrics.abs_vol_difference(predictions, labels, num_classes)[source]
Calculates the absolute volume difference for each class between
labels and predictions.
Parameters:
  • predictions (np.ndarray) – predictions
  • labels (np.ndarray) – labels
  • num_classes (int) – number of classes to calculate the absolute volume difference for
Returns:

absolute volume difference per class

Return type:

np.ndarray
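
The metric can be sketched on flattened label maps (a pure-Python illustration; the real function operates on np.ndarray inputs, and the normalisation by the reference volume is an assumption about the exact definition):

```python
def abs_vol_difference(predictions, labels, num_classes):
    # Volume of class c = number of elements labelled c; the absolute
    # difference is normalised by the reference volume from the labels.
    return [abs(predictions.count(c) - labels.count(c)) / float(labels.count(c))
            for c in range(num_classes)]
```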

dltk.core.metrics.crossentropy(predictions, labels, logits=True)[source]

Calculates the crossentropy loss between predictions and labels

Parameters:
  • predictions (np.ndarray) – predictions
  • labels (np.ndarray) – labels
  • logits (bool) – flag whether predictions are logits or probabilities
Returns:

crossentropy error

Return type:

float
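
The logits flag controls whether a softmax is applied before taking the log-likelihood. A pure-Python sketch for per-sample class scores and sparse integer labels (illustrative; the real function works on np.ndarray inputs):

```python
import math

def crossentropy(predictions, labels, logits=True):
    # Numerically stable softmax: subtract the max before exponentiating.
    def softmax(scores):
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        return [e / total for e in exps]

    loss = 0.0
    for scores, l in zip(predictions, labels):
        p = softmax(scores) if logits else scores
        loss += -math.log(p[l])
    return loss / len(labels)
```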

dltk.core.metrics.dice(predictions, labels, num_classes)[source]
Calculates the categorical Dice similarity coefficients for each class
between labels and predictions.
Parameters:
  • predictions (np.ndarray) – predictions
  • labels (np.ndarray) – labels
  • num_classes (int) – number of classes to calculate the Dice coefficient for
Returns:

Dice coefficient per class

Return type:

np.ndarray
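
Unlike dice_loss, this metric compares hard label maps. A pure-Python sketch on flattened inputs (illustrative; the convention of returning 1.0 for a class absent from both inputs is an assumption):

```python
def dice(predictions, labels, num_classes):
    # Hard (categorical) Dice: 2 * |P intersect L| / (|P| + |L|) per class.
    scores = []
    for c in range(num_classes):
        p = [x == c for x in predictions]
        l = [x == c for x in labels]
        intersection = sum(pi and li for pi, li in zip(p, l))
        denom = sum(p) + sum(l)
        scores.append(2.0 * intersection / denom if denom else 1.0)
    return scores
```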

dltk.core.residual_unit module

dltk.core.residual_unit.vanilla_residual_unit_3d(inputs, out_filters, kernel_size=(3, 3, 3), strides=(1, 1, 1), mode='eval', use_bias=False, kernel_initializer=<tensorflow.python.ops.init_ops.VarianceScaling object>, bias_initializer=<tensorflow.python.ops.init_ops.Zeros object>, kernel_regularizer=None, bias_regularizer=None)[source]
Implementation of a 3D residual unit according to [1]. This
implementation supports strided convolutions and automatically handles different input and output filters.

[1] K. He et al. Identity Mappings in Deep Residual Networks. ECCV 2016.

Parameters:
  • inputs (tf.Tensor) – Input tensor to the residual unit. Is required to have a rank of 5 (i.e. [batch, x, y, z, channels]).
  • out_filters (int) – Number of convolutional filters used in the sub units.
  • kernel_size (tuple, optional) – Size of the convolutional kernels used in the sub units
  • strides (tuple, optional) – Convolution strides in (x,y,z) of sub unit 0. Allows downsampling of the input tensor via strided convolutions.
  • mode (str, optional) – One of the tf.estimator.ModeKeys: TRAIN, EVAL or PREDICT
  • use_bias (bool, optional) – Train a bias with each convolution.
  • kernel_initializer (tf.Initializer, optional) – Initialisation of convolution kernels
  • bias_initializer (tf.Initializer, optional) – Initialisation of bias
  • kernel_regularizer (None, optional) – Additional regularisation op
  • bias_regularizer (None, optional) – Additional regularisation op
Returns:

Output of the residual unit

Return type:

tf.Tensor
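
The pre-activation residual scheme of [1], y = h(x) + F(x), can be illustrated with a toy 1D sketch, where the shortcut h is the identity mapping (strided when downsampling, so both branches keep matching shapes) and the convolutional sub-units are replaced by a stand-in function (assumptions, not DLTK's implementation):

```python
def residual_unit_1d(x, stride=1):
    def sub_units(v):
        # Stand-in for the BN -> ReLU -> convolution sub-units F(x).
        return [0.1 * e for e in v]

    shortcut = x[::stride]             # identity mapping, strided if needed
    residual = sub_units(x)[::stride]  # residual branch, same shape
    return [s + r for s, r in zip(shortcut, residual)]
```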

dltk.core.upsample module

dltk.core.upsample.get_linear_upsampling_kernel(kernel_spatial_shape, out_filters, in_filters, trainable=False)[source]
Builds a kernel for linear upsampling with the shape
[kernel_spatial_shape] + [out_filters, in_filters]. Can be set to trainable to potentially learn a better upsampling.
Parameters:
  • kernel_spatial_shape (list or tuple) – Spatial dimensions of the upsampling kernel. Is required to be of rank 2 or 3, (i.e. [dim_x, dim_y] or [dim_x, dim_y, dim_z])
  • out_filters (int) – Number of output filters.
  • in_filters (int) – Number of input filters.
  • trainable (bool, optional) – Flag to set the returned tf.Variable to be trainable or not.
Returns:

Linear upsampling kernel

Return type:

tf.Variable
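
The construction can be sketched in 1D using the standard linear-interpolation kernel formula (the full spatial kernel is the outer product of such 1D kernels; this is an illustrative sketch that may differ in detail from DLTK's implementation):

```python
def linear_kernel_1d(factor):
    # Triangular kernel for a given upsampling factor: size
    # 2*factor - factor % 2, peaked at the centre, weights summing to factor.
    size = 2 * factor - factor % 2
    center = (2 * factor - 1 - factor % 2) / (2.0 * factor)
    return [1.0 - abs(i / float(factor) - center) for i in range(size)]

print(linear_kernel_1d(2))  # [0.25, 0.75, 0.75, 0.25]
```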

dltk.core.upsample.linear_upsample_3d(inputs, strides=(2, 2, 2), use_bias=False, trainable=False, name=u'linear_upsample_3d')[source]
Linear upsampling layer in 3D using strided transpose convolutions. The
upsampling kernel size will be automatically computed to avoid information loss.
Parameters:
  • inputs (tf.Tensor) – Input tensor to be upsampled
  • strides (tuple, optional) – The strides determine the upsampling factor in each dimension.
  • use_bias (bool, optional) – Flag to train an additional bias.
  • trainable (bool, optional) – Flag to set the variables to be trainable or not.
  • name (str, optional) – Name of the layer.
Returns:

Upsampled Tensor

Return type:

tf.Tensor
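
The effect of a strided transpose convolution with a linear kernel can be sketched in 1D (illustrative pure Python; the real layer operates on 5D tensors and handles all three spatial dimensions):

```python
def linear_upsample_1d(x, stride=2):
    # Kernel size 2*stride - stride % 2 guarantees overlapping contributions,
    # so every output position is covered (no information loss between taps).
    size = 2 * stride - stride % 2
    center = (2 * stride - 1 - stride % 2) / (2.0 * stride)
    kernel = [1.0 - abs(i / float(stride) - center) for i in range(size)]

    out = [0.0] * ((len(x) - 1) * stride + size)
    for i, v in enumerate(x):          # transposed convolution: scatter-add
        for t, w in enumerate(kernel):
            out[i * stride + t] += v * w
    crop = (size - stride) // 2        # trim the transpose-convolution border
    return out[crop:len(out) - crop]
```

Interior outputs are exact linear interpolants of the inputs; border values are attenuated because the input is implicitly zero-padded.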

Module contents