dltk.core.modules package

dltk.core.modules.activations module

class dltk.core.modules.activations.PReLU(name='prelu')[source]

Bases: dltk.core.modules.base.AbstractModule

dltk.core.modules.activations.leaky_relu(x, leakiness)[source]

Leaky RELU

Parameters:
  • x (tf.Tensor) – input tensor
  • leakiness (float) – leakiness of RELU
Returns:

Tensor with applied leaky RELU

Return type:

tf.Tensor
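
A minimal sketch of what a leaky ReLU computes, assuming TensorFlow 1.x as used by DLTK; the library's own implementation may differ in detail. PReLU (above) follows the same idea but learns the leakiness as a variable instead of taking it as a fixed argument.

    import tensorflow as tf

    def leaky_relu_sketch(x, leakiness=0.01):
        """max(x, leakiness * x): keeps positive values, scales down negatives."""
        return tf.maximum(x, leakiness * x)

    x = tf.constant([-2.0, -0.5, 0.0, 1.5])
    y = leaky_relu_sketch(x, leakiness=0.1)

    with tf.Session() as sess:
        print(sess.run(y))  # [-0.2  -0.05  0.    1.5 ]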

dltk.core.modules.base module

class dltk.core.modules.base.AbstractModule(name=None)[source]

Bases: object

Superclass for DLTK core modules - strongly inspired by Sonnet: https://github.com/deepmind/sonnet

This class implements a wrapping of tf.make_template for automatic variable sharing. Each subclass needs to implement a _build function used for the template and call this superclass' __init__ to create the template. For the variable sharing to work, variables inside _build have to be created via tf.get_variable instead of tf.Variable.

The created template is automatically called using __call__.

BIAS_COLLECTIONS = ['variables', 'model_variables', 'trainable_variables', 'biases']
MODEL_COLLECTIONS = ['variables', 'model_variables']
MOVING_COLLECTIONS = ['variables', 'model_variables', 'moving_average_variables']
TRAINABLE_COLLECTIONS = ['variables', 'model_variables', 'trainable_variables']
WEIGHT_COLLECTIONS = ['variables', 'model_variables', 'trainable_variables', 'weights']
get_variables(collection='trainable_variables')[source]

Helper to get all variables of a given collection created within this module

Parameters:collection (string, optional) – Identifier of the collection to get variables from. Defaults to tf.GraphKeys.TRAINABLE_VARIABLES
Returns:List of tf.Variables that are part of the collection and within the scope of this module
Return type:list
variable_scope

Getter to access variable scope of the built template
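
The contract above can be illustrated with a small hedged sketch of a custom module: implement _build, create variables with tf.get_variable, and let the superclass __init__ build the template that __call__ invokes. The Scale class and its variable are illustrative only, and the exact base-class behaviour may differ; assumes TensorFlow 1.x.

    import tensorflow as tf
    from dltk.core.modules.base import AbstractModule

    class Scale(AbstractModule):
        """Multiplies its input by a single learned scalar (illustrative only)."""

        def __init__(self, name='scale'):
            super(Scale, self).__init__(name=name)

        def _build(self, x):
            # tf.get_variable (not tf.Variable) so tf.make_template can share it
            gamma = tf.get_variable('gamma', shape=[],
                                    initializer=tf.constant_initializer(1.0))
            return x * gamma

    scale = Scale()
    y1 = scale(tf.ones([2, 3]))   # __call__ runs the template and creates 'gamma'
    y2 = scale(tf.zeros([4, 3]))  # a second call reuses the same 'gamma'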

class dltk.core.modules.base.SaveableModule(name=None)[source]

Bases: dltk.core.modules.base.AbstractModule

classmethod load(path, session)[source]
Parameters:
  • path (string) – Path to load the network from
  • session (tf.Session) – TensorFlow session to load the variables into
Returns:

  • list of input placeholders saved with the network
  • list of outputs produced by the network

output_keys = []
save_metagraph(path, clear_devices=False, **kwargs)[source]
Parameters:
  • path (string) – path to save the metagraph to
  • clear_devices (bool) – flag to toggle whether meta graph saves device placement of tensors
  • kwargs – additional arguments to the module build function
save_model(path, session)[source]

Saves the network to a given path

Parameters:
  • path (string) – Path to the file to save the network in
  • session (tf.Session) – TensorFlow session holding the current variable states
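
A hedged usage sketch of the save/load round trip, based only on the signatures documented above; MyNetwork and the file paths are illustrative placeholders, and the exact path conventions are an assumption. Assumes TensorFlow 1.x.

    import tensorflow as tf

    # Saving: 'net' is some already-built SaveableModule subclass.
    # net = MyNetwork(...)
    # with tf.Session() as sess:
    #     sess.run(tf.global_variables_initializer())
    #     net.save_metagraph('/tmp/my_net.meta')    # graph structure
    #     net.save_model('/tmp/my_net.ckpt', sess)  # current variable values

    # Loading into a fresh session:
    # with tf.Session() as sess:
    #     inputs, outputs = MyNetwork.load('/tmp/my_net.ckpt', sess)
    #     prediction = sess.run(outputs[0], feed_dict={inputs[0]: some_batch})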

dltk.core.modules.batch_normalization module

class dltk.core.modules.batch_normalization.BatchNorm(offset=True, scale=True, decay_rate=0.99, eps=0.001, name='bn')[source]

Bases: dltk.core.modules.base.AbstractModule

Batch normalization module.

This module normalises the input tensor using statistics computed across all but the last dimension. During training, an exponential moving average of these statistics is kept and used at test time.
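
A minimal numerical sketch of the normalisation described above (statistics over all but the last dimension), assuming TensorFlow 1.x; the real module additionally maintains the exponential moving averages used at test time, and the variable names here are illustrative.

    import tensorflow as tf

    x = tf.random_normal([8, 32, 32, 16])         # (batch, x, y, channels)
    mean, var = tf.nn.moments(x, axes=[0, 1, 2])  # per-channel statistics
    x_hat = (x - mean) / tf.sqrt(var + 1e-3)      # eps matches the default above

    # offset (beta) and scale (gamma) are then learned per channel
    beta = tf.get_variable('beta', [16], initializer=tf.zeros_initializer())
    gamma = tf.get_variable('gamma', [16], initializer=tf.ones_initializer())
    y = gamma * x_hat + beta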

dltk.core.modules.bilinear_upsample module

class dltk.core.modules.bilinear_upsample.BilinearUpsample(trainable=False, strides=(2, 2, 2), use_bias=False, name='bilinear_upsampling')[source]

Bases: dltk.core.modules.tranposed_convolution.TransposedConvolution

Bilinear upsampling module

This module builds a bilinear upsampling filter and uses it to upsample the input tensor.
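
A hedged sketch of one common way to construct a separable bilinear interpolation filter for a given stride (as used, e.g., in FCN-style upsampling); the module's exact filter construction may differ. Pure NumPy, for illustration only.

    import numpy as np

    def bilinear_kernel_1d(stride):
        """Triangular kernel of width 2 * stride - stride % 2."""
        size = 2 * stride - stride % 2
        center = (size - 1) / 2.0 if size % 2 == 1 else stride - 0.5
        og = np.arange(size, dtype=np.float32)
        return 1.0 - np.abs(og - center) / stride

    k = bilinear_kernel_1d(2)    # [0.25, 0.75, 0.75, 0.25]
    kernel_2d = np.outer(k, k)   # separable 2-D filter for strides (2, 2)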

dltk.core.modules.convolution module

class dltk.core.modules.convolution.Convolution(out_filters, filter_shape=3, strides=1, dilation_rate=1, padding='SAME', use_bias=False, name='conv')[source]

Bases: dltk.core.modules.base.AbstractModule

Convolution module

This module builds an n-D convolution based on the dimensionality of the input and applies it to the input.
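
A hedged usage sketch based on the constructor signature documented above and the __call__ convention of AbstractModule, assuming TensorFlow 1.x; the tensor shapes are illustrative.

    import tensorflow as tf
    from dltk.core.modules.convolution import Convolution

    x_2d = tf.placeholder(tf.float32, [None, 128, 128, 1])    # (batch, x, y, c)
    x_3d = tf.placeholder(tf.float32, [None, 64, 64, 64, 1])  # (batch, x, y, z, c)

    conv_2d = Convolution(out_filters=16, filter_shape=3, strides=2, name='conv2d')
    conv_3d = Convolution(out_filters=16, filter_shape=3, name='conv3d')

    y_2d = conv_2d(x_2d)  # dimensionality inferred from the input rank
    y_3d = conv_3d(x_3d)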

dltk.core.modules.graph_convolution module

class dltk.core.modules.graph_convolution.GraphConvolution(out_filters, laplacian, k=3, bias='b1', name='gconv')[source]

Bases: dltk.core.modules.base.AbstractModule

Graph Convolution module using Chebyshev polynomials

This module builds a graph convolution using the Chebyshev polynomial filters proposed by Defferrard et al. (2016).

rescale_L(L, lmax=2)[source]

Rescales the Laplacian eigenvalues to lie in [-1, 1].
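
A minimal sketch of the standard Chebyshev rescaling L' = 2 L / lmax - I, which maps eigenvalues from [0, lmax] into [-1, 1] and is assumed to be what rescale_L computes; shown here with SciPy sparse matrices on a toy graph.

    import numpy as np
    import scipy.sparse as sp

    def rescale_laplacian(L, lmax=2):
        n = L.shape[0]
        return (2.0 / lmax) * L - sp.identity(n, format='csr', dtype=L.dtype)

    # toy 3-node chain graph: combinatorial Laplacian L = D - A
    A = sp.csr_matrix(np.array([[0, 1, 0],
                                [1, 0, 1],
                                [0, 1, 0]], dtype=np.float32))
    D = sp.diags(np.asarray(A.sum(axis=1)).ravel())
    L = (D - A).tocsr()
    print(rescale_laplacian(L).toarray())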

dltk.core.modules.linear module

class dltk.core.modules.linear.Linear(out_units, use_bias=True, name='linear')[source]

Bases: dltk.core.modules.base.AbstractModule

Linear layer module

This module builds a linear layer

dltk.core.modules.losses module

dltk.core.modules.losses.dice_loss(logits, labels, num_classes, smooth=1e-05, include_background=True, only_present=False, name='dice_loss', collections=['losses'])[source]

Smooth dice loss

Calculates the smooth dice loss and builds a scalar summary.

Parameters:
  • logits (tf.Tensor) – prediction for which to calculate the error
  • labels (tf.Tensor) – sparse targets with which to calculate the error
  • num_classes (int) – number of class labels to evaluate on
  • smooth (float) – smoothing constant added to avoid division by zero
  • include_background (bool) – flag to include a loss on the background label or not
  • name (string) – name of this operation and summary
  • collections (list or tuple) – list of collections to add the summaries to
Returns:

Tensor representing the loss

Return type:

tf.Tensor
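
A hedged sketch of a smooth dice loss over softmax probabilities and one-hot targets, illustrating the quantity described above; the DLTK implementation may differ, e.g. in how it handles the background label and the only_present flag. Assumes TensorFlow 1.x.

    import tensorflow as tf

    def dice_loss_sketch(logits, labels, num_classes, smooth=1e-5):
        probs = tf.nn.softmax(logits)                      # (batch, ..., classes)
        onehot = tf.one_hot(labels, depth=num_classes)     # sparse -> one-hot
        axes = list(range(1, len(probs.get_shape()) - 1))  # spatial dimensions
        intersection = tf.reduce_sum(probs * onehot, axis=axes)
        denominator = tf.reduce_sum(probs + onehot, axis=axes)
        dice = (2.0 * intersection + smooth) / (denominator + smooth)
        return 1.0 - tf.reduce_mean(dice)                  # scalar loss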

dltk.core.modules.losses.mse(x, y, name='mse', collections=['losses'])[source]

Mean squared error

Calculates the mean squared error loss and builds a scalar summary.

Parameters:
  • x (tf.Tensor) – prediction for which to calculate the error
  • y (tf.Tensor) – targets with which to calculate the error
  • name (string) – name of this operation and summary
  • collections (list or tuple) – list of collections to add the summaries to
Returns:

Tensor representing the loss

Return type:

tf.Tensor

dltk.core.modules.losses.sparse_balanced_crossentropy(logits, labels, name='crossentropy', collections=['losses'])[source]

Balanced crossentropy loss

Calculates a class-balanced crossentropy loss and builds a scalar summary.

Parameters:
  • logits (tf.Tensor) – logit prediction for which to calculate crossentropy error
  • labels (tf.Tensor) – labels used for crossentropy error calculation
  • name (string) – name of this operation and summary
  • collections (list or tuple) – list of collections to add the summaries to
Returns:

Tensor representing the loss

Return type:

tf.Tensor

dltk.core.modules.losses.sparse_crossentropy(logits, labels, name='crossentropy', collections=['losses'])[source]

Crossentropy loss

Calculates the crossentropy loss and builds a scalar summary.

Parameters:
  • logits (tf.Tensor) – logit prediction for which to calculate crossentropy error
  • labels (tf.Tensor) – labels used for crossentropy error calculation
  • name (string) – name of this operation and summary
  • collections (list or tuple) – list of collections to add the summaries to
Returns:

Tensor representing the loss

Return type:

tf.Tensor
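
A minimal sketch of a sparse crossentropy loss with an attached scalar summary, analogous to what the functions above provide; the summary and collection handling here is illustrative rather than the DLTK one. Assumes TensorFlow 1.x.

    import tensorflow as tf

    def sparse_crossentropy_sketch(logits, labels, name='crossentropy'):
        ce = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits,
                                                            labels=labels)
        loss = tf.reduce_mean(ce)
        tf.summary.scalar(name, loss)  # scalar summary tracking the loss
        return loss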

dltk.core.modules.regularization module

dltk.core.modules.regularization.l1_regularization(variables, factor=0.0001, name='l1_regularization', collections=['regularization'])[source]

l1 regularization

Calculates l1 penalty for given variables and constructs a scalar summary

Parameters:
  • variables (list or tuple) – list of variables to calculate the l1 penalty for
  • factor (float) – factor to weight the penalty by
  • name (string) – name of the summary
  • collections (list or tuple) – collections to add the summary to
Returns:

l1 penalty for the variables given

Return type:

tf.Tensor

dltk.core.modules.regularization.l2_regularization(variables, factor=0.0001, name='l2_regularization', collections=['regularization'])[source]

l2 regularization

Calculates l2 penalty for given variables and constructs a scalar summary

Parameters:
  • variables (list or tuple) – list of variables to calculate the l2 penalty for
  • factor (float) – factor to weight the penalty by
  • name (string) – name of the summary
  • collections (list or tuple) – collections to add the summary to
Returns:

l2 penalty for the variables given

Return type:

tf.Tensor
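
A hedged sketch of the l1 and l2 penalties described above; the DLTK functions additionally attach scalar summaries to the given collections. Assumes TensorFlow 1.x.

    import tensorflow as tf

    def l1_penalty(variables, factor=1e-4):
        return factor * tf.add_n([tf.reduce_sum(tf.abs(v)) for v in variables])

    def l2_penalty(variables, factor=1e-4):
        # tf.nn.l2_loss(v) computes sum(v ** 2) / 2
        return factor * tf.add_n([tf.nn.l2_loss(v) for v in variables])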

dltk.core.modules.residual_units module

class dltk.core.modules.residual_units.VanillaResidualUnit(out_filters, kernel_size=3, stride=(1, 1, 1), relu_leakiness=0.01, name='res_unit')[source]

Bases: dltk.core.modules.base.AbstractModule

Vanilla pre-activation residual unit

Pre-activation residual unit as proposed by He, Kaiming, et al. “Identity mappings in deep residual networks.” ECCV, 2016. - https://link.springer.com/chapter/10.1007/978-3-319-46493-0_38
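
A hedged sketch of the pre-activation scheme: two BatchNorm -> leaky ReLU -> Convolution blocks plus an identity skip, using the constructor signatures documented on this page. The __call__ arguments and the handling of strides and changing filter counts are simplified; the real unit covers those cases.

    import tensorflow as tf
    from dltk.core.modules.batch_normalization import BatchNorm
    from dltk.core.modules.convolution import Convolution

    def residual_unit_sketch(x, out_filters, leakiness=0.01):
        # assumes x already has out_filters channels so the skip can be an identity
        orig_x = x
        for i in range(2):
            x = BatchNorm(name='bn_%d' % i)(x)
            x = tf.nn.leaky_relu(x, alpha=leakiness)
            x = Convolution(out_filters, filter_shape=3, name='conv_%d' % i)(x)
        return x + orig_x  # identity skip connection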

dltk.core.modules.summaries module

dltk.core.modules.summaries.image_summary(img, summary_name, collections=None)[source]

Builds an image summary from a tf.Tensor or np.ndarray

If the image is a tf.Tensor, 4D and 5D tensors of the form (batch, x, y, channels) and (batch, x, y, z, channels) are supported. For 5D tensors each middle slice is plotted if the size of the tensor is known; otherwise the first slice is taken.

If the image is a np.ndarray, 3D and 4D arrays of the form (x, y, channels) and (x, y, z, channels) are supported. For 4D arrays each middle slice is plotted if the size of the array is known; otherwise the first slice is taken.

Parameters:
  • img (tf.Tensor or np.ndarray) – image to be plotted
  • summary_name (string) – name of the summary to be produced
  • collections (list or tuple, optional) – list of collections this summary should be added to in addition to tf.GraphKeys.SUMMARIES and image_summaries
Returns:

Tensor produced from tf.summary or Summary object with the plotted image(s)

Return type:

tf.Tensor or tf.Summary

dltk.core.modules.summaries.scalar_summary(x, summary_name, collections=None)[source]

Builds a scalar summary

If x is a tf.Tensor, it creates the summary operation to track x.

If x is a scalar, it creates the tf.Summary object to be written by a summary writer.

If x is a list, tuple or dict, a tf.Summary object is created for each element; the key or index is used for naming.

Parameters:
  • x (tf.Tensor or scalar or list or dict) – scalar data to be plotted
  • summary_name (string) – name of the summary to be produced
  • collections (list or tuple, optional) – list of collections this summary should be added to in addition to tf.GraphKeys.SUMMARIES and image_summaries
Returns:

Tensor produced from tf.summary or Summary object with the summarised data

Return type:

tf.Tensor or tf.Summary
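
A minimal sketch contrasting the two cases described above, assuming TensorFlow 1.x: a graph-side summary op for a tf.Tensor versus a tf.Summary protobuf built directly from a Python scalar; the tags and log directory are illustrative.

    import tensorflow as tf

    loss = tf.reduce_mean(tf.square(tf.random_normal([8])))
    summary_op = tf.summary.scalar('loss', loss)  # tf.Tensor case: summary op

    val_dice = 0.83                               # plain Python scalar case
    summary_proto = tf.Summary(value=[
        tf.Summary.Value(tag='val/dice', simple_value=val_dice)])

    writer = tf.summary.FileWriter('/tmp/logdir')
    with tf.Session() as sess:
        writer.add_summary(sess.run(summary_op), global_step=0)
        writer.add_summary(summary_proto, global_step=0)
    writer.close()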

dltk.core.modules.tranposed_convolution module

class dltk.core.modules.tranposed_convolution.TransposedConvolution(out_filters, strides=(1, 1, 1), filter_shape=None, use_bias=False, name='conv_transposed')[source]

Bases: dltk.core.modules.base.AbstractModule

Transposed convolution module

This module builds a 2D or 3D transposed convolution based on the dimensionality of the input.

Module contents