Segmentation
UNet2dPyTorch
- class UNet2dPyTorch(num_classes, in_channels=1, depth=5, start_filts=64, up_mode='transpose', merge_mode='concat')[source]
Bases: delira.models.abstract_network.AbstractPyTorchNetwork
The UNet2dPyTorch is a convolutional encoder-decoder neural network. Contextual spatial information (from the decoding, expansive pathway) about an input tensor is merged with information representing the localization of details (from the encoding, compressive pathway).
Notes
Differences to the original paper:
1. Padding is used in 3x3 convolutions to prevent loss of border pixels.
2. Merging outputs does not require cropping, due to (1).
3. Residual connections can be used by specifying merge_mode='add'.
4. If non-parametric upsampling is used in the decoder pathway (specified by up_mode='upsample'), an additional 1x1 2d convolution occurs after upsampling to reduce channel dimensionality by a factor of 2. With up_mode='transpose', this channel halving happens within the transpose convolution itself.
References
https://arxiv.org/abs/1505.04597
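Example (a hedged usage sketch: the import path and tensor shapes are assumptions for illustration; spatial dimensions should be divisible by 2**(depth - 1)):
>>> import torch
>>> from delira.models import UNet2dPyTorch  # import path assumed
>>> model = UNet2dPyTorch(num_classes=2, in_channels=1)
>>> x = torch.rand(4, 1, 256, 256)  # (batch, channels, height, width)
>>> pred = model(x)                 # expected shape: (4, 2, 256, 256)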
- _apply(fn)
- _build_model(num_classes, in_channels=3, depth=5, start_filts=64)[source]
Builds the actual model.
Parameters: - num_classes (int) – number of output classes
- in_channels (int) – number of channels for the input tensor (default: 1)
- depth (int) – number of MaxPools in the U-Net (default: 5)
- start_filts (int) – number of convolutional filters for the first conv (affects all other conv-filter numbers too; default: 64)
Notes
The helper functions and classes are defined within this function because delira offers the possibility to save the source code along with the weights, so that the network can be recovered completely without a manually created network instance; for this, the helper functions have to be saved too.
- _get_name()
- _init_kwargs = {}
- _load_from_state_dict(state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs)
Copies parameters and buffers from state_dict into only this module, but not its descendants. This is called on every submodule in load_state_dict(). Metadata saved for this module in the input state_dict is provided as local_metadata. For state dicts without metadata, local_metadata is empty. Subclasses can achieve class-specific backward-compatible loading using the version number at local_metadata.get("version", None).
Note
state_dict is not the same object as the input state_dict to load_state_dict(), so it can be modified.
Parameters: - state_dict (dict) – a dict containing parameters and persistent buffers
- prefix (str) – the prefix for parameters and buffers used in this module
- local_metadata (dict) – a dict containing the metadata for this module
- strict (bool) – whether to strictly enforce that the keys in state_dict with prefix match the names of parameters and buffers in this module
- missing_keys (list of str) – if strict=False, missing keys are added to this list
- unexpected_keys (list of str) – if strict=False, unexpected keys are added to this list
- error_msgs (list of str) – error messages should be added to this list and will be reported together in load_state_dict()
- _named_members(get_members_fn, prefix='', recurse=True)
Helper method for yielding various names + members of modules.
- _register_load_state_dict_pre_hook(hook)
These hooks will be called with the arguments state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs, before state_dict is loaded into self. These arguments are exactly the same as those of _load_from_state_dict.
- _register_state_dict_hook(hook)
These hooks will be called with the arguments self, state_dict, prefix, local_metadata, after the state_dict of self is set. Note that only parameters and buffers of self or its children are guaranteed to exist in state_dict. The hooks may modify state_dict inplace or return a new one.
- _slow_forward(*input, **kwargs)
- _tracing_name(tracing_state)
- _version = 1
- add_module(name, module)
Adds a child module to the current module.
The module can be accessed as an attribute using the given name.
Parameters: - name (string) – name of the child module. The child module can be accessed from this module using the given name
- module (Module) – child module to be added to the module
- apply(fn)
Applies fn recursively to every submodule (as returned by .children()) as well as self. Typical use includes initializing the parameters of a model (see also torch-nn-init).
Parameters: fn (Module -> None) – function to be applied to each submodule
Returns: self
Return type: Module
Example:
>>> def init_weights(m):
...     print(m)
...     if type(m) == nn.Linear:
...         m.weight.data.fill_(1.0)
...         print(m.weight)
>>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
>>> net.apply(init_weights)
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[ 1.,  1.],
        [ 1.,  1.]])
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[ 1.,  1.],
        [ 1.,  1.]])
Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
- buffers(recurse=True)
Returns an iterator over module buffers.
Parameters: recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module.
Yields: torch.Tensor – module buffer
Example:
>>> for buf in model.buffers():
...     print(type(buf.data), buf.size())
<class 'torch.FloatTensor'> (20L,)
<class 'torch.FloatTensor'> (20L, 1L, 5L, 5L)
- children()
Returns an iterator over immediate children modules.
Yields: Module – a child module
- static closure(model, data_dict: dict, optimizers: dict, criterions={}, metrics={}, fold=0, **kwargs)[source]
Closure method performing a single backpropagation step.
Parameters: - model (ClassificationNetworkBasePyTorch) – trainable model
- data_dict (dict) – dictionary containing the data
- optimizers (dict) – dictionary of optimizers to optimize the model's parameters
- criterions (dict) – dict holding the criterions to calculate errors (gradients from different criterions will be accumulated)
- metrics (dict) – dict holding the metrics to calculate
- fold (int) – current fold in cross-validation (default: 0)
- **kwargs – additional keyword arguments
Returns: - dict – metric values (with same keys as the input dict metrics)
- dict – loss values (with same keys as the input dict criterions)
- list – arbitrary number of predictions as torch.Tensor
Raises: AssertionError – if optimizers or criterions are empty or the optimizers are not specified
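A hedged sketch of a single closure call (the dictionary keys 'default', 'CE', 'data' and 'label' are illustrative assumptions, not names mandated by delira):
>>> import torch
>>> from torch import nn, optim
>>> model = UNet2dPyTorch(num_classes=2, in_channels=1)
>>> optimizers = {'default': optim.Adam(model.parameters(), lr=1e-3)}
>>> criterions = {'CE': nn.CrossEntropyLoss()}
>>> data_dict = {'data': torch.rand(2, 1, 64, 64),
...              'label': torch.randint(0, 2, (2, 64, 64))}
>>> metrics, losses, preds = UNet2dPyTorch.closure(
...     model, data_dict, optimizers, criterions=criterions)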
- cpu()
Moves all model parameters and buffers to the CPU.
Returns: self
Return type: Module
- cuda(device=None)
Moves all model parameters and buffers to the GPU.
This also makes associated parameters and buffers different objects. So it should be called before constructing the optimizer if the module will live on GPU while being optimized.
Parameters: device (int, optional) – if specified, all parameters will be copied to that device
Returns: self
Return type: Module
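A generic sketch of the ordering this implies (parameters are moved first, then the optimizer is constructed over the moved parameters):
>>> model = UNet2dPyTorch(num_classes=2, in_channels=1).cuda()
>>> optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # construct after .cuda()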
- double()
Casts all floating point parameters and buffers to double datatype.
Returns: self
Return type: Module
- dump_patches = False
- eval()
Sets the module in evaluation mode.
This has an effect only on certain modules. See the documentation of the particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.
- extra_repr()
Sets the extra representation of the module.
To print customized extra information, you should reimplement this method in your own modules. Both single-line and multi-line strings are acceptable.
- float()
Casts all floating point parameters and buffers to float datatype.
Returns: self
Return type: Module
- forward(x)[source]
Feeds a tensor through the network.
Parameters: x (torch.Tensor) – input tensor
Returns: prediction
Return type: torch.Tensor
- half()
Casts all floating point parameters and buffers to half datatype.
Returns: self
Return type: Module
- load_state_dict(state_dict, strict=True)
Copies parameters and buffers from state_dict into this module and its descendants. If strict is True, then the keys of state_dict must exactly match the keys returned by this module's state_dict() function.
Parameters: - state_dict (dict) – a dict containing parameters and persistent buffers
- strict (bool, optional) – whether to strictly enforce that the keys in state_dict match the keys returned by this module's state_dict() function. Default: True
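A common save/load round trip (a generic PyTorch sketch, not delira-specific; the file name is an assumption):
>>> torch.save(model.state_dict(), 'unet2d_weights.pt')
>>> model.load_state_dict(torch.load('unet2d_weights.pt'))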
- modules()
Returns an iterator over all modules in the network.
Yields: Module – a module in the network
Note
Duplicate modules are returned only once. In the following example, l will be returned only once.
Example:
>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.modules()):
...     print(idx, '->', m)
0 -> Sequential (
  (0): Linear (2 -> 2)
  (1): Linear (2 -> 2)
)
1 -> Linear (2 -> 2)
- named_buffers(prefix='', recurse=True)
Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.
Parameters: - prefix (str) – prefix to prepend to all buffer names
- recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module.
Yields: (string, torch.Tensor) – Tuple containing the name and buffer
Example:
>>> for name, buf in self.named_buffers():
...     if name in ['running_var']:
...         print(buf.size())
- named_children()
Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.
Yields: (string, Module) – Tuple containing a name and child module
Example:
>>> for name, module in model.named_children():
...     if name in ['conv4', 'conv5']:
...         print(module)
- named_modules(memo=None, prefix='')
Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.
Yields: (string, Module) – Tuple of name and module
Note
Duplicate modules are returned only once. In the following example, l will be returned only once.
Example:
>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.named_modules()):
...     print(idx, '->', m)
0 -> ('', Sequential (
  (0): Linear (2 -> 2)
  (1): Linear (2 -> 2)
))
1 -> ('0', Linear (2 -> 2))
- named_parameters(prefix='', recurse=True)
Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.
Parameters: - prefix (str) – prefix to prepend to all parameter names
- recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module.
Yields: (string, Parameter) – Tuple containing the name and parameter
Example:
>>> for name, param in self.named_parameters():
...     if name in ['bias']:
...         print(param.size())
- parameters(recurse=True)
Returns an iterator over module parameters.
This is typically passed to an optimizer.
Parameters: recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module.
Yields: Parameter – module parameter
Example:
>>> for param in model.parameters():
...     print(type(param.data), param.size())
<class 'torch.FloatTensor'> (20L,)
<class 'torch.FloatTensor'> (20L, 1L, 5L, 5L)
- static prepare_batch(batch: dict, input_device, output_device)[source]
Helper function to prepare network inputs and labels (converts them to the correct type and shape and pushes them to the correct devices).
Parameters: - batch (dict) – dictionary containing all the data
- input_device (torch.device) – device for network inputs
- output_device (torch.device) – device for network outputs
Returns: dictionary containing data in correct type and shape and on the correct device
Return type: dict
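A usage sketch, assuming the batch dictionary uses 'data' and 'label' keys (the key names are an assumption for illustration):
>>> import numpy as np
>>> batch = {'data': np.random.rand(4, 1, 64, 64).astype(np.float32),
...          'label': np.random.randint(0, 2, (4, 64, 64))}
>>> batch = UNet2dPyTorch.prepare_batch(batch, torch.device('cpu'),
...                                     torch.device('cpu'))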
- register_backward_hook(hook)
Registers a backward hook on the module.
The hook will be called every time the gradients with respect to module inputs are computed. The hook should have the following signature:
hook(module, grad_input, grad_output) -> Tensor or None
The grad_input and grad_output may be tuples if the module has multiple inputs or outputs. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the input that will be used in place of grad_input in subsequent computations.
Returns: a handle that can be used to remove the added hook by calling handle.remove()
Return type: torch.utils.hooks.RemovableHandle
Warning
The current implementation will not have the presented behavior for complex Modules that perform many operations. In some failure cases, grad_input and grad_output will only contain the gradients for a subset of the inputs and outputs. For such Modules, you should use torch.Tensor.register_hook() directly on a specific input or output to get the required gradients.
- register_buffer(name, tensor)
Adds a persistent buffer to the module.
This is typically used to register a buffer that should not be considered a model parameter. For example, BatchNorm's running_mean is not a parameter, but is part of the persistent state.
Buffers can be accessed as attributes using the given names.
Parameters: - name (string) – name of the buffer. The buffer can be accessed from this module using the given name
- tensor (Tensor) – buffer to be registered
Example:
>>> self.register_buffer('running_mean', torch.zeros(num_features))
- register_forward_hook(hook)
Registers a forward hook on the module.
The hook will be called every time after forward() has computed an output. It should have the following signature:
hook(module, input, output) -> None
The hook should not modify the input or output.
Returns: a handle that can be used to remove the added hook by calling handle.remove()
Return type: torch.utils.hooks.RemovableHandle
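A generic sketch of a hook that logs output shapes (the hook name is illustrative):
>>> def print_shape(module, input, output):
...     print(type(module).__name__, tuple(output.shape))
>>> handle = model.register_forward_hook(print_shape)
>>> _ = model(torch.rand(1, 1, 64, 64))
>>> handle.remove()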
- register_forward_pre_hook(hook)
Registers a forward pre-hook on the module.
The hook will be called every time before forward() is invoked. It should have the following signature:
hook(module, input) -> None
The hook should not modify the input.
Returns: a handle that can be used to remove the added hook by calling handle.remove()
Return type: torch.utils.hooks.RemovableHandle
- register_parameter(name, param)
Adds a parameter to the module.
The parameter can be accessed as an attribute using the given name.
Parameters: - name (string) – name of the parameter. The parameter can be accessed from this module using the given name
- param (Parameter) – parameter to be added to the module
- state_dict(destination=None, prefix='', keep_vars=False)
Returns a dictionary containing a whole state of the module.
Both parameters and persistent buffers (e.g. running averages) are included. Keys are the corresponding parameter and buffer names.
Returns: a dictionary containing a whole state of the module
Return type: dict
Example:
>>> module.state_dict().keys()
['bias', 'weight']
- to(*args, **kwargs)
Moves and/or casts the parameters and buffers.
This can be called as
- to(device=None, dtype=None, non_blocking=False)
- to(dtype, non_blocking=False)
- to(tensor, non_blocking=False)
Its signature is similar to torch.Tensor.to(), but only accepts floating point desired dtypes. In addition, this method will only cast the floating point parameters and buffers to dtype (if given). The integral parameters and buffers will be moved to device, if that is given, but with dtypes unchanged. When non_blocking is set, it tries to convert/move asynchronously with respect to the host if possible, e.g., moving CPU Tensors with pinned memory to CUDA devices.
See below for examples.
Note
This method modifies the module in-place.
Parameters: - device (torch.device) – the desired device of the parameters and buffers in this module
- dtype (torch.dtype) – the desired floating point type of the floating point parameters and buffers in this module
- tensor (torch.Tensor) – Tensor whose dtype and device are the desired dtype and device for all parameters and buffers in this module
Returns: self
Return type: Module
Example:
>>> linear = nn.Linear(2, 2)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]])
>>> linear.to(torch.double)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]], dtype=torch.float64)
>>> gpu1 = torch.device("cuda:1")
>>> linear.to(gpu1, dtype=torch.half, non_blocking=True)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
>>> cpu = torch.device("cpu")
>>> linear.to(cpu)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16)
- train(mode=True)
Sets the module in training mode.
This has an effect only on certain modules. See the documentation of the particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.
Returns: self
Return type: Module
- type(dst_type)
Casts all parameters and buffers to dst_type.
Parameters: dst_type (type or string) – the desired type
Returns: self
Return type: Module
- static weight_init(m)[source]
Initializes weights with xavier_normal and biases with zeros.
Parameters: m (torch.nn.Module) – module to initialize
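Being a static method, the initializer can be applied recursively via apply() (a sketch; whether the constructor already applies it is not stated in this reference):
>>> model = UNet2dPyTorch(num_classes=2, in_channels=1)
>>> model.apply(UNet2dPyTorch.weight_init)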
- zero_grad()
Sets gradients of all model parameters to zero.
UNet3dPyTorch
- class UNet3dPyTorch(num_classes, in_channels=3, depth=5, start_filts=64, up_mode='transpose', merge_mode='concat')[source]
Bases: delira.models.abstract_network.AbstractPyTorchNetwork
The UNet3dPyTorch is a convolutional encoder-decoder neural network. Contextual spatial information (from the decoding, expansive pathway) about an input tensor is merged with information representing the localization of details (from the encoding, compressive pathway).
Notes
Differences to the original paper:
1. Works on 3D data instead of 2D slices.
2. Padding is used in 3x3x3 convolutions to prevent loss of border pixels.
3. Merging outputs does not require cropping, due to (2).
4. Residual connections can be used by specifying merge_mode='add'.
5. If non-parametric upsampling is used in the decoder pathway (specified by up_mode='upsample'), an additional 1x1x1 3d convolution occurs after upsampling to reduce channel dimensionality by a factor of 2. With up_mode='transpose', this channel halving happens within the transpose convolution itself.
References
https://arxiv.org/abs/1505.04597
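Example (a hedged 3D usage sketch: the import path and volume shape are assumptions; spatial dimensions should be divisible by 2**(depth - 1)):
>>> import torch
>>> from delira.models import UNet3dPyTorch  # import path assumed
>>> model = UNet3dPyTorch(num_classes=2, in_channels=3)
>>> volume = torch.rand(1, 3, 64, 64, 64)  # (batch, channels, depth, height, width)
>>> pred = model(volume)                   # expected shape: (1, 2, 64, 64, 64)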
- _apply(fn)
- _build_model(num_classes, in_channels=3, depth=5, start_filts=64)[source]
Builds the actual model.
Parameters: - num_classes (int) – number of output classes
- in_channels (int) – number of channels for the input tensor (default: 1)
- depth (int) – number of MaxPools in the U-Net (default: 5)
- start_filts (int) – number of convolutional filters for the first conv (affects all other conv-filter numbers too; default: 64)
Notes
The helper functions and classes are defined within this function because delira offers the possibility to save the source code along with the weights, so that the network can be recovered completely without a manually created network instance; for this, the helper functions have to be saved too.
- _get_name()
- _init_kwargs = {}
- _load_from_state_dict(state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs)
Copies parameters and buffers from state_dict into only this module, but not its descendants. This is called on every submodule in load_state_dict(). Metadata saved for this module in the input state_dict is provided as local_metadata. For state dicts without metadata, local_metadata is empty. Subclasses can achieve class-specific backward-compatible loading using the version number at local_metadata.get("version", None).
Note
state_dict is not the same object as the input state_dict to load_state_dict(), so it can be modified.
Parameters: - state_dict (dict) – a dict containing parameters and persistent buffers
- prefix (str) – the prefix for parameters and buffers used in this module
- local_metadata (dict) – a dict containing the metadata for this module
- strict (bool) – whether to strictly enforce that the keys in state_dict with prefix match the names of parameters and buffers in this module
- missing_keys (list of str) – if strict=False, missing keys are added to this list
- unexpected_keys (list of str) – if strict=False, unexpected keys are added to this list
- error_msgs (list of str) – error messages should be added to this list and will be reported together in load_state_dict()
- _named_members(get_members_fn, prefix='', recurse=True)
Helper method for yielding various names + members of modules.
- _register_load_state_dict_pre_hook(hook)
These hooks will be called with the arguments state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs, before state_dict is loaded into self. These arguments are exactly the same as those of _load_from_state_dict.
- _register_state_dict_hook(hook)
These hooks will be called with the arguments self, state_dict, prefix, local_metadata, after the state_dict of self is set. Note that only parameters and buffers of self or its children are guaranteed to exist in state_dict. The hooks may modify state_dict inplace or return a new one.
- _slow_forward(*input, **kwargs)
- _tracing_name(tracing_state)
- _version = 1
- add_module(name, module)
Adds a child module to the current module.
The module can be accessed as an attribute using the given name.
Parameters: - name (string) – name of the child module. The child module can be accessed from this module using the given name
- module (Module) – child module to be added to the module
- apply(fn)
Applies fn recursively to every submodule (as returned by .children()) as well as self. Typical use includes initializing the parameters of a model (see also torch-nn-init).
Parameters: fn (Module -> None) – function to be applied to each submodule
Returns: self
Return type: Module
Example:
>>> def init_weights(m):
...     print(m)
...     if type(m) == nn.Linear:
...         m.weight.data.fill_(1.0)
...         print(m.weight)
>>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
>>> net.apply(init_weights)
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[ 1.,  1.],
        [ 1.,  1.]])
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[ 1.,  1.],
        [ 1.,  1.]])
Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
- buffers(recurse=True)
Returns an iterator over module buffers.
Parameters: recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module.
Yields: torch.Tensor – module buffer
Example:
>>> for buf in model.buffers():
...     print(type(buf.data), buf.size())
<class 'torch.FloatTensor'> (20L,)
<class 'torch.FloatTensor'> (20L, 1L, 5L, 5L)
- children()
Returns an iterator over immediate children modules.
Yields: Module – a child module
- static closure(model, data_dict: dict, optimizers: dict, criterions={}, metrics={}, fold=0, **kwargs)[source]
Closure method performing a single backpropagation step.
Parameters: - model (ClassificationNetworkBasePyTorch) – trainable model
- data_dict (dict) – dictionary containing the data
- optimizers (dict) – dictionary of optimizers to optimize the model's parameters
- criterions (dict) – dict holding the criterions to calculate errors (gradients from different criterions will be accumulated)
- metrics (dict) – dict holding the metrics to calculate
- fold (int) – current fold in cross-validation (default: 0)
- **kwargs – additional keyword arguments
Returns: - dict – metric values (with same keys as the input dict metrics)
- dict – loss values (with same keys as the input dict criterions)
- list – arbitrary number of predictions as torch.Tensor
Raises: AssertionError – if optimizers or criterions are empty or the optimizers are not specified
- cpu()
Moves all model parameters and buffers to the CPU.
Returns: self
Return type: Module
- cuda(device=None)
Moves all model parameters and buffers to the GPU.
This also makes associated parameters and buffers different objects. So it should be called before constructing the optimizer if the module will live on GPU while being optimized.
Parameters: device (int, optional) – if specified, all parameters will be copied to that device
Returns: self
Return type: Module
- double()
Casts all floating point parameters and buffers to double datatype.
Returns: self
Return type: Module
- dump_patches = False
- eval()
Sets the module in evaluation mode.
This has an effect only on certain modules. See the documentation of the particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.
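A generic sketch of toggling between modes around inference (standard PyTorch practice, not delira-specific; model and volume as in the class example above):
>>> model.eval()        # disable Dropout, use running BatchNorm statistics
>>> with torch.no_grad():
...     pred = model(volume)
>>> model.train()       # restore training behavior afterwards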
- extra_repr()
Sets the extra representation of the module.
To print customized extra information, you should reimplement this method in your own modules. Both single-line and multi-line strings are acceptable.
- float()
Casts all floating point parameters and buffers to float datatype.
Returns: self
Return type: Module
- forward(x)[source]
Feeds a tensor through the network.
Parameters: x (torch.Tensor) – input tensor
Returns: prediction
Return type: torch.Tensor
- half()
Casts all floating point parameters and buffers to half datatype.
Returns: self
Return type: Module
- load_state_dict(state_dict, strict=True)
Copies parameters and buffers from state_dict into this module and its descendants. If strict is True, then the keys of state_dict must exactly match the keys returned by this module's state_dict() function.
Parameters: - state_dict (dict) – a dict containing parameters and persistent buffers
- strict (bool, optional) – whether to strictly enforce that the keys in state_dict match the keys returned by this module's state_dict() function. Default: True
- modules()
Returns an iterator over all modules in the network.
Yields: Module – a module in the network
Note
Duplicate modules are returned only once. In the following example, l will be returned only once.
Example:
>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.modules()):
...     print(idx, '->', m)
0 -> Sequential (
  (0): Linear (2 -> 2)
  (1): Linear (2 -> 2)
)
1 -> Linear (2 -> 2)
- named_buffers(prefix='', recurse=True)
Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.
Parameters: - prefix (str) – prefix to prepend to all buffer names
- recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module.
Yields: (string, torch.Tensor) – Tuple containing the name and buffer
Example:
>>> for name, buf in self.named_buffers():
...     if name in ['running_var']:
...         print(buf.size())
- named_children()
Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.
Yields: (string, Module) – Tuple containing a name and child module
Example:
>>> for name, module in model.named_children():
...     if name in ['conv4', 'conv5']:
...         print(module)
- named_modules(memo=None, prefix='')
Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.
Yields: (string, Module) – Tuple of name and module
Note
Duplicate modules are returned only once. In the following example, l will be returned only once.
Example:
>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.named_modules()):
...     print(idx, '->', m)
0 -> ('', Sequential (
  (0): Linear (2 -> 2)
  (1): Linear (2 -> 2)
))
1 -> ('0', Linear (2 -> 2))
- named_parameters(prefix='', recurse=True)
Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.
Parameters: - prefix (str) – prefix to prepend to all parameter names
- recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module.
Yields: (string, Parameter) – Tuple containing the name and parameter
Example:
>>> for name, param in self.named_parameters():
...     if name in ['bias']:
...         print(param.size())
- parameters(recurse=True)
Returns an iterator over module parameters.
This is typically passed to an optimizer.
Parameters: recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module.
Yields: Parameter – module parameter
Example:
>>> for param in model.parameters():
...     print(type(param.data), param.size())
<class 'torch.FloatTensor'> (20L,)
<class 'torch.FloatTensor'> (20L, 1L, 5L, 5L)
- static prepare_batch(batch: dict, input_device, output_device)[source]
Helper function to prepare network inputs and labels (converts them to the correct type and shape and pushes them to the correct devices).
Parameters: - batch (dict) – dictionary containing all the data
- input_device (torch.device) – device for network inputs
- output_device (torch.device) – device for network outputs
Returns: dictionary containing data in correct type and shape and on the correct device
Return type: dict
- register_backward_hook(hook)
Registers a backward hook on the module.
The hook will be called every time the gradients with respect to module inputs are computed. The hook should have the following signature:
hook(module, grad_input, grad_output) -> Tensor or None
The grad_input and grad_output may be tuples if the module has multiple inputs or outputs. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the input that will be used in place of grad_input in subsequent computations.
Returns: a handle that can be used to remove the added hook by calling handle.remove()
Return type: torch.utils.hooks.RemovableHandle
Warning
The current implementation will not have the presented behavior for complex Modules that perform many operations. In some failure cases, grad_input and grad_output will only contain the gradients for a subset of the inputs and outputs. For such Modules, you should use torch.Tensor.register_hook() directly on a specific input or output to get the required gradients.
- register_buffer(name, tensor)
Adds a persistent buffer to the module.
This is typically used to register a buffer that should not be considered a model parameter. For example, BatchNorm's running_mean is not a parameter, but is part of the persistent state.
Buffers can be accessed as attributes using the given names.
Parameters: - name (string) – name of the buffer. The buffer can be accessed from this module using the given name
- tensor (Tensor) – buffer to be registered
Example:
>>> self.register_buffer('running_mean', torch.zeros(num_features))
- register_forward_hook(hook)
Registers a forward hook on the module.
The hook will be called every time after forward() has computed an output. It should have the following signature:
hook(module, input, output) -> None
The hook should not modify the input or output.
Returns: a handle that can be used to remove the added hook by calling handle.remove()
Return type: torch.utils.hooks.RemovableHandle
- register_forward_pre_hook(hook)
Registers a forward pre-hook on the module.
The hook will be called every time before forward() is invoked. It should have the following signature:
hook(module, input) -> None
The hook should not modify the input.
Returns: a handle that can be used to remove the added hook by calling handle.remove()
Return type: torch.utils.hooks.RemovableHandle
- register_parameter(name, param)
Adds a parameter to the module.
The parameter can be accessed as an attribute using the given name.
Parameters: - name (string) – name of the parameter. The parameter can be accessed from this module using the given name
- param (Parameter) – parameter to be added to the module
- state_dict(destination=None, prefix='', keep_vars=False)
Returns a dictionary containing a whole state of the module.
Both parameters and persistent buffers (e.g. running averages) are included. Keys are the corresponding parameter and buffer names.
Returns: a dictionary containing a whole state of the module
Return type: dict
Example:
>>> module.state_dict().keys()
['bias', 'weight']
- to(*args, **kwargs)
Moves and/or casts the parameters and buffers.
This can be called as
- to(device=None, dtype=None, non_blocking=False)
- to(dtype, non_blocking=False)
- to(tensor, non_blocking=False)
Its signature is similar to torch.Tensor.to(), but only accepts floating point desired dtypes. In addition, this method will only cast the floating point parameters and buffers to dtype (if given). The integral parameters and buffers will be moved to device, if that is given, but with dtypes unchanged. When non_blocking is set, it tries to convert/move asynchronously with respect to the host if possible, e.g., moving CPU Tensors with pinned memory to CUDA devices.
See below for examples.
Note
This method modifies the module in-place.
Parameters: - device (torch.device) – the desired device of the parameters and buffers in this module
- dtype (torch.dtype) – the desired floating point type of the floating point parameters and buffers in this module
- tensor (torch.Tensor) – Tensor whose dtype and device are the desired dtype and device for all parameters and buffers in this module
Returns: self
Return type: Module
Example:
>>> linear = nn.Linear(2, 2)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]])
>>> linear.to(torch.double)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]], dtype=torch.float64)
>>> gpu1 = torch.device("cuda:1")
>>> linear.to(gpu1, dtype=torch.half, non_blocking=True)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
>>> cpu = torch.device("cpu")
>>> linear.to(cpu)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16)
- train(mode=True)
Sets the module in training mode.
This has an effect only on certain modules. See the documentation of the particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.
Returns: self
Return type: Module
- type(dst_type)
Casts all parameters and buffers to dst_type.
Parameters: dst_type (type or string) – the desired type
Returns: self
Return type: Module
- static weight_init(m)[source]
Initializes weights with xavier_normal and biases with zeros.
Parameters: m (torch.nn.Module) – module to initialize
- zero_grad()
Sets gradients of all model parameters to zero.