dcase_models.model.MLP

class dcase_models.model.MLP(model=None, model_path=None, metrics=['classification'], n_classes=10, n_frames=64, n_freqs=12, hidden_layers_size=[128, 64], dropout_rates=[0.5, 0.5], hidden_activation='relu', l2_reg=1e-05, final_activation='softmax', temporal_integration='mean', **kwargs)[source]

Bases: dcase_models.model.container.KerasModelContainer

KerasModelContainer for a generic MLP model.

Parameters:
n_classes : int, default=10

Number of classes (output dimension).

n_frames : int or None, default=64

Length of the input (number of frames in each sequence). Use None to disable frame-level input and output; in that case the input has shape (None, n_freqs).

n_freqs : int, default=12

Number of frequency bins. The model’s input has shape (n_frames, n_freqs).

hidden_layers_size : list of int, default=[128, 64]

Dimension of each hidden layer. Note that the length of this list defines the number of hidden layers.

dropout_rates : list of float, default=[0.5, 0.5]

List of dropout rates used after each hidden layer. The length of this list must be equal to the length of hidden_layers_size. Use 0.0 (or a negative value) to disable dropout.

hidden_activation : str, default=’relu’

Activation for hidden layers.

l2_reg : float, default=1e-5

Weight of the L2 regularizers. Use 0.0 to disable regularization.

final_activation : str, default=’softmax’

Activation of the last layer.

temporal_integration : {‘mean’, ‘sum’, ‘autopool’}, default=’mean’

Temporal integration operation applied after the last layer.

kwargs

Additional keyword arguments passed to the Dense layers.

Examples

>>> from dcase_models.model.models import MLP
>>> model_container = MLP()
>>> model_container.model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input (InputLayer)           (None, 64, 12)            0
_________________________________________________________________
time_distributed_1 (TimeDist (None, 64, 128)           1664
_________________________________________________________________
dropout_1 (Dropout)          (None, 64, 128)           0
_________________________________________________________________
time_distributed_2 (TimeDist (None, 64, 64)            8256
_________________________________________________________________
dropout_2 (Dropout)          (None, 64, 64)            0
_________________________________________________________________
time_distributed_3 (TimeDist (None, 64, 10)            650
_________________________________________________________________
temporal_integration (Lambda (None, 10)                0
=================================================================
Total params: 10,570
Trainable params: 10,570
Non-trainable params: 0
_________________________________________________________________
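
The architecture can be adjusted through the constructor arguments described above. A minimal sketch (summary outputs omitted):

>>> # Three hidden layers with matching dropout rates
>>> model_container = MLP(n_classes=5, n_frames=128, n_freqs=64,
...                       hidden_layers_size=[256, 128, 64],
...                       dropout_rates=[0.3, 0.3, 0.3])
>>> # Instance-level input: no temporal dimension, input shape (None, 64)
>>> model_container = MLP(n_frames=None, n_freqs=64)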
Attributes:
model : keras.models.Model

Keras model.

__init__(model=None, model_path=None, metrics=['classification'], n_classes=10, n_frames=64, n_freqs=12, hidden_layers_size=[128, 64], dropout_rates=[0.5, 0.5], hidden_activation='relu', l2_reg=1e-05, final_activation='softmax', temporal_integration='mean', **kwargs)[source]

Initialize ModelContainer

Parameters:
model : keras model or similar

Object that defines the model (e.g. keras.models.Model)

model_path : str

Path to the model file

model_name : str

Model name

metrics : list of str

List of metrics used for evaluation

Methods

__init__([model, model_path, metrics, …]) Initialize ModelContainer
build() Builds the Keras model and assigns it to the model attribute.
check_if_model_exists(folder, **kwargs) Checks if the model already exists in the path.
cut_network(layer_where_to_cut) Cuts the network at the layer passed as argument.
evaluate(data_test, **kwargs) Evaluates the keras model using X_test and Y_test.
fine_tuning(layer_where_to_cut[, …]) Create a new model for fine-tuning.
get_available_intermediate_outputs() Return a list of available intermediate outputs.
get_intermediate_output(output_ix_name, inputs) Return the output of the model in a given layer.
get_number_of_parameters() Returns the number of parameters of the model.
load_model_from_json(folder, **kwargs) Loads a model from a model.json file in the path given by folder.
load_model_weights(weights_folder) Loads self.model weights in weights_folder/best_weights.hdf5.
load_pretrained_model_weights([weights_folder]) Loads pretrained weights to self.model weights.
save_model_json(folder) Saves the model to a model.json file in the given folder path.
save_model_weights(weights_folder) Saves self.model weights in weights_folder/best_weights.hdf5.
train(data_train, data_val[, weights_path, …]) Trains the keras model using the given data and parameters.
build()[source]

Builds the Keras model and assigns it to the model attribute.
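
A minimal sketch, assuming (as is usual for this container) that build() is also called by the constructor when no pre-built model is passed:

>>> model_container = MLP(n_classes=5)   # build() is called during initialization
>>> model_container.build()              # rebuild self.model from the current attributes
>>> model_container.model.summary()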

check_if_model_exists(folder, **kwargs)

Checks if the model already exists in the path.

Check if the folder/model.json file exists and includes the same model as self.model.

Parameters:
folder : str

Path to the folder to check.
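
A usage sketch ('./model_folder' is an illustrative path; the boolean return is implied by the description above):

>>> model_container = MLP()
>>> model_container.save_model_json('./model_folder')
>>> exists = model_container.check_if_model_exists('./model_folder')  # True if model.json matches self.model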

cut_network(layer_where_to_cut)

Cuts the network at the layer passed as argument.

Parameters:
layer_where_to_cut : str or int

Layer name (str) or index (int) where to cut the model.

Returns:
keras.models.Model

The cut model.
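
A usage sketch; the layer names correspond to the summary shown above:

>>> model_container = MLP()
>>> cut_model = model_container.cut_network('time_distributed_2')  # cut by layer name
>>> cut_model = model_container.cut_network(3)                     # or by layer index
>>> cut_model.summary()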

evaluate(data_test, **kwargs)

Evaluates the keras model using X_test and Y_test.

Parameters:
X_test : ndarray

3D array with mel-spectrograms of test set. Shape = (N_instances, N_hops, N_mel_bands)

Y_test : ndarray

2D array with the annotations of test set (one hot encoding). Shape (N_instances, N_classes)

scaler : Scaler, optional

Scaler object to be applied if it is not None.

Returns:
float

Evaluation accuracy.

list

list of annotations (ground_truth)

list

list of model predictions
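
A minimal sketch, assuming data_test accepts an (X_test, Y_test) tuple as the parameter descriptions above suggest (random arrays only for illustration):

>>> import numpy as np
>>> X_test = np.random.randn(8, 64, 12)                 # (N_instances, N_hops, N_mel_bands)
>>> Y_test = np.eye(10)[np.random.randint(0, 10, 8)]    # one-hot annotations
>>> model_container = MLP()
>>> results = model_container.evaluate((X_test, Y_test))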

fine_tuning(layer_where_to_cut, new_number_of_classes=10, new_activation='softmax', freeze_source_model=True, new_model=None)

Create a new model for fine-tuning.

Cut the model in the layer_where_to_cut layer and add a new fully-connected layer.

Parameters:
layer_where_to_cut : str or int

Name (str) or index (int) of the layer where to cut the model. This layer is included in the new model.

new_number_of_classes : int

Number of units in the new fully-connected layer (number of classes).

new_activation : str

Activation of the new fully-connected layer.

freeze_source_model : bool

If True, the source model is set to not be trainable.

new_model : Keras Model

If it is not None, this model is added after the cut model. This is useful if you want to add more than a fully-connected layer.
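
A usage sketch for adapting the model to a 5-class problem (the layer name is taken from the summary above):

>>> model_container = MLP(n_classes=10)
>>> model_container.fine_tuning('time_distributed_2',
...                             new_number_of_classes=5,
...                             new_activation='softmax',
...                             freeze_source_model=True)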

get_available_intermediate_outputs()

Return a list of available intermediate outputs.

Return a list of model’s layers.

Returns:
list of str

List of layers names.

get_intermediate_output(output_ix_name, inputs)

Return the output of the model in a given layer.

Cut the model in the given layer and predict the output for the given inputs.

Returns:
ndarray

Output of the model in the given layer.
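
A sketch combining both introspection methods (random input only for illustration; the layer name follows the summary above):

>>> import numpy as np
>>> model_container = MLP()
>>> model_container.get_available_intermediate_outputs()   # list of layer names
>>> X = np.random.randn(1, 64, 12)                          # (batch, n_frames, n_freqs)
>>> emb = model_container.get_intermediate_output('time_distributed_2', X)  # per-frame embeddings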

get_number_of_parameters()

Returns the number of parameters of the model.

load_model_from_json(folder, **kwargs)

Loads a model from a model.json file in the path given by folder. The model is loaded into the self.model attribute.

Parameters:
folder : str

Path to the folder that contains the model.json file.

load_model_weights(weights_folder)

Loads self.model weights from weights_folder/best_weights.hdf5.

Parameters:
weights_folder : str

Path to the folder that contains the weights file.

load_pretrained_model_weights(weights_folder='./pretrained_weights')

Loads pretrained weights into self.model.

Parameters:
weights_folder : str

Path to the folder that contains the pretrained weights file.

save_model_json(folder)

Saves the model to a model.json file in the given folder path.

Parameters:
folder : str

Path to the folder where the model.json file is saved.
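
save_model_json() and load_model_from_json() can be used together to persist and restore the architecture ('./model_folder' is an illustrative path; weights are handled separately):

>>> import os
>>> os.makedirs('./model_folder', exist_ok=True)
>>> model_container = MLP()
>>> model_container.save_model_json('./model_folder')       # writes ./model_folder/model.json
>>> model_container.load_model_from_json('./model_folder')  # restores the architecture into self.model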

save_model_weights(weights_folder)

Saves self.model weights in weights_folder/best_weights.hdf5.

Parameters:
weights_folder : str

Path to save the weights file
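
A weights round trip sketch ('./weights_folder' is an illustrative path):

>>> import os
>>> os.makedirs('./weights_folder', exist_ok=True)
>>> model_container = MLP()
>>> model_container.save_model_weights('./weights_folder')  # writes ./weights_folder/best_weights.hdf5
>>> model_container.load_model_weights('./weights_folder')  # loads it back into self.model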

train(data_train, data_val, weights_path='./', optimizer='Adam', learning_rate=0.001, early_stopping=100, considered_improvement=0.01, losses='categorical_crossentropy', loss_weights=[1], sequence_time_sec=0.5, metric_resolution_sec=1.0, label_list=[], shuffle=True, **kwargs_keras_fit)

Trains the keras model using the given data and parameters.

Parameters:
X_train : ndarray

3D array with mel-spectrograms of train set. Shape = (N_instances, N_hops, N_mel_bands)

Y_train : ndarray

2D array with the annotations of train set (one hot encoding). Shape (N_instances, N_classes)

X_val : ndarray

3D array with mel-spectrograms of validation set. Shape = (N_instances, N_hops, N_mel_bands)

Y_val : ndarray

2D array with the annotations of validation set (one hot encoding). Shape (N_instances, N_classes)

weights_path : str

Path where to save the best weights of the model and the log of the training process

loss_weights : list

List of weights for each loss function (‘categorical_crossentropy’, ‘mean_squared_error’, ‘prototype_loss’)

optimizer : str

Optimizer used to train the model

learning_rate : float

Learning rate used to train the model

batch_size : int

Batch size used in the training process

epochs : int

Number of training epochs

fit_verbose : int

Verbose mode for fit method of Keras model
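
A minimal training sketch, assuming data_train and data_val accept (X, Y) tuples as the parameter descriptions above suggest (random arrays only for illustration; batch_size and epochs are forwarded through kwargs_keras_fit):

>>> import numpy as np
>>> from dcase_models.model.models import MLP
>>> X_train = np.random.randn(100, 64, 12)                 # (N_instances, N_hops, N_mel_bands)
>>> Y_train = np.eye(10)[np.random.randint(0, 10, 100)]    # one-hot annotations
>>> X_val = np.random.randn(20, 64, 12)
>>> Y_val = np.eye(10)[np.random.randint(0, 10, 20)]
>>> model_container = MLP()
>>> model_container.train((X_train, Y_train), (X_val, Y_val),
...                       weights_path='./weights_folder',
...                       optimizer='Adam', learning_rate=0.001,
...                       epochs=50, batch_size=32)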