dcase_models.model.SB_CNN

class dcase_models.model.SB_CNN(model=None, model_path=None, metrics=['classification'], n_classes=10, n_frames_cnn=64, n_freq_cnn=128, filter_size_cnn=(5, 5), pool_size_cnn=(2, 2), n_dense_cnn=64, n_channels=0)[source]

Bases: dcase_models.model.container.KerasModelContainer

KerasModelContainer for SB_CNN model.

J. Salamon and J. P. Bello. “Deep Convolutional Neural Networks and Data Augmentation For Environmental Sound Classification”. IEEE Signal Processing Letters, 24(3), pages 279 - 283. 2017.

Parameters:
n_classes : int, default=10

Number of classes (output dimension).

n_frames_cnn : int or None, default=64

Length of the input (number of frames of each sequence).

n_freq_cnn : int, default=128

Number of frequency bins. The model’s input has shape (n_frames, n_freqs).

filter_size_cnn : tuple, default=(5,5)

Kernel size of the convolutional layers.

pool_size_cnn : tuple, default=(2,2)

Pooling size of the max-pooling layers.

n_dense_cnn : int, default=64

Dimension of the penultimate dense layer.

n_channels : int, default=0

Number of input channels

0 : mono signals (no channel axis).

Input shape = (n_frames_cnn, n_freq_cnn)

1 : mono signals (with an explicit channel axis).

Input shape = (n_frames_cnn, n_freq_cnn, 1)

2 : stereo signals.

Input shape = (n_frames_cnn, n_freq_cnn, 2)

n > 2 : multi-representations.

Input shape = (n_frames_cnn, n_freq_cnn, n_channels)
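
For example, a stereo model can be instantiated as follows (a minimal sketch; the shape noted in the comment assumes the default n_frames_cnn and n_freq_cnn values):

>>> from dcase_models.model.models import SB_CNN
>>> model_container = SB_CNN(n_channels=2)
>>> model_container.model.input_shape   # expected: (None, 64, 128, 2)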

Notes

Code based on Salamon’s implementation https://github.com/justinsalamon/scaper_waspaa2017

Examples

>>> from dcase_models.model.models import SB_CNN
>>> model_container = SB_CNN()
>>> model_container.model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input (InputLayer)           (None, 64, 128)           0
_________________________________________________________________
lambda (Lambda)              (None, 64, 128, 1)        0
_________________________________________________________________
conv1 (Conv2D)               (None, 60, 124, 24)       624
_________________________________________________________________
maxpool1 (MaxPooling2D)      (None, 30, 62, 24)        0
_________________________________________________________________
batchnorm1 (BatchNormalizati (None, 30, 62, 24)        96
_________________________________________________________________
conv2 (Conv2D)               (None, 26, 58, 48)        28848
_________________________________________________________________
maxpool2 (MaxPooling2D)      (None, 6, 29, 48)         0
_________________________________________________________________
batchnorm2 (BatchNormalizati (None, 6, 29, 48)         192
_________________________________________________________________
conv3 (Conv2D)               (None, 2, 25, 48)         57648
_________________________________________________________________
batchnorm3 (BatchNormalizati (None, 2, 25, 48)         192
_________________________________________________________________
flatten (Flatten)            (None, 2400)              0
_________________________________________________________________
dropout1 (Dropout)           (None, 2400)              0
_________________________________________________________________
dense1 (Dense)               (None, 64)                153664
_________________________________________________________________
dropout2 (Dropout)           (None, 64)                0
_________________________________________________________________
out (Dense)                  (None, 10)                650
=================================================================
Total params: 241,914
Trainable params: 241,674
Non-trainable params: 240
_________________________________________________________________
Attributes:
model : keras.models.Model

Keras model.

__init__(model=None, model_path=None, metrics=['classification'], n_classes=10, n_frames_cnn=64, n_freq_cnn=128, filter_size_cnn=(5, 5), pool_size_cnn=(2, 2), n_dense_cnn=64, n_channels=0)[source]

Initialization of the SB-CNN model.

Methods

__init__([model, model_path, metrics, …]) Initialization of the SB-CNN model.
build() Builds the CNN Keras model according to the initialized parameters.
check_if_model_exists(folder, **kwargs) Checks if the model already exists in the path.
cut_network(layer_where_to_cut) Cuts the network at the layer passed as argument.
evaluate(data_test, **kwargs) Evaluates the keras model using X_test and Y_test.
fine_tuning(layer_where_to_cut[, …]) Create a new model for fine-tuning.
get_available_intermediate_outputs() Return a list of available intermediate outputs.
get_intermediate_output(output_ix_name, inputs) Return the output of the model in a given layer.
get_number_of_parameters() Missing docstring here
load_model_from_json(folder, **kwargs) Loads a model from a model.json file in the path given by folder.
load_model_weights(weights_folder) Loads self.model weights in weights_folder/best_weights.hdf5.
load_pretrained_model_weights([weights_folder]) Loads pretrained weights to self.model weights.
save_model_json(folder) Saves the model to a model.json file in the given folder path.
save_model_weights(weights_folder) Saves self.model weights in weights_folder/best_weights.hdf5.
sub_model() Missing docstring here
train(data_train, data_val[, weights_path, …]) Trains the keras model using the data and parameters given as arguments.
build()[source]

Builds the CNN Keras model according to the initialized parameters.
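
build() is normally invoked by __init__() when no pre-built model or model_path is given; a minimal sketch of calling it explicitly to recreate self.model from the stored parameters:

>>> model_container = SB_CNN(n_dense_cnn=128)
>>> model_container.build()          # (re)creates self.model, with dense1 now having 128 units
>>> model_container.model.summary()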

check_if_model_exists(folder, **kwargs)

Checks if the model already exists in the path.

Checks whether the folder/model.json file exists and contains the same model as self.model.

Parameters:
folder : str

Path to the folder to check.
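
A usage sketch (the folder path is hypothetical, and the example assumes the method returns a boolean):

>>> if model_container.check_if_model_exists('./experiments/sb_cnn'):
...     print('model.json in this folder matches self.model')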

cut_network(layer_where_to_cut)

Cuts the network at the layer passed as argument.

Parameters:
layer_where_to_cut : str or int

Layer name (str) or index (int) where to cut the model.

Returns:
keras.models.Model

The cut model.
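
For instance, the network can be truncated at the penultimate dense layer (layer names as printed by model.summary() above):

>>> truncated_model = model_container.cut_network('dense1')
>>> truncated_model.summary()   # model ending at the dense1 layer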

evaluate(data_test, **kwargs)

Evaluates the keras model using X_test and Y_test.

Parameters:
X_test : ndarray

3D array with mel-spectrograms of test set. Shape = (N_instances, N_hops, N_mel_bands)

Y_test : ndarray

2D array with the annotations of test set (one hot encoding). Shape (N_instances, N_classes)

scaler : Scaler, optional

Scaler object to be applied if it is not None.

Returns:
float

Accuracy of the evaluation.

list

list of annotations (ground_truth)

list

list of model predictions
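
A usage sketch following the return values documented above; X_test and Y_test are hypothetical arrays shaped as described, and passing them as an (X, Y) pair is an assumption about the expected form of data_test:

>>> accuracy, ground_truth, predictions = model_container.evaluate((X_test, Y_test))
>>> print(accuracy)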

fine_tuning(layer_where_to_cut, new_number_of_classes=10, new_activation='softmax', freeze_source_model=True, new_model=None)

Create a new model for fine-tuning.

Cuts the model at the layer_where_to_cut layer and adds a new fully-connected layer on top.

Parameters:
layer_where_to_cut : str or int

Name (str) or index (int) of the layer where to cut the model. This layer is included in the new model.

new_number_of_classes : int

Number of units in the new fully-connected layer (number of classes).

new_activation : str

Activation of the new fully-connected layer.

freeze_source_model : bool

If True, the source model is set to not be trainable.

new_model : Keras Model

If it is not None, this model is added after the cut model. This is useful if you want to add more than one fully-connected layer.
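
A fine-tuning sketch that adapts the model to a hypothetical 5-class task, cutting at dense1 and freezing the source layers:

>>> model_container.fine_tuning('dense1',
...                             new_number_of_classes=5,
...                             new_activation='softmax',
...                             freeze_source_model=True)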

get_available_intermediate_outputs()

Return a list of available intermediate outputs.

Returns a list of the model's layer names.

Returns:
list of str

List of layer names.
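
A sketch; the returned names are expected to match the layers printed by model.summary() above:

>>> model_container.get_available_intermediate_outputs()
['input', 'lambda', 'conv1', 'maxpool1', 'batchnorm1', ...]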

get_intermediate_output(output_ix_name, inputs)

Return the output of the model at a given layer.

Cuts the model at the given layer and predicts the output for the given inputs.

Returns:
ndarray

Output of the model in the given layer.
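
For example, embeddings can be taken from the penultimate dense layer (X is a hypothetical batch of mel-spectrograms shaped like the model input):

>>> embeddings = model_container.get_intermediate_output('dense1', X)
>>> embeddings.shape   # expected: (N_instances, 64) with the default n_dense_cnn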

get_number_of_parameters()

Missing docstring here

load_model_from_json(folder, **kwargs)

Loads a model from a model.json file in the path given by folder. The model is loaded into the self.model attribute.

Parameters:
folder : str

Path to the folder that contains the model.json file.

load_model_weights(weights_folder)

Loads self.model weights from weights_folder/best_weights.hdf5.

Parameters:
weights_folder : str

Path to the folder that contains the weights file.

load_pretrained_model_weights(weights_folder='./pretrained_weights')

Loads pretrained weights into self.model.

Parameters:
weights_folder : str

Path to the folder that contains the pretrained weights file.
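
A loading sketch combining the methods above (folder paths are hypothetical):

>>> model_container = SB_CNN()
>>> model_container.load_model_from_json('./experiments/sb_cnn')
>>> model_container.load_model_weights('./experiments/sb_cnn')   # reads best_weights.hdf5 from that folder
>>> # or, alternatively, start from pretrained weights
>>> model_container.load_pretrained_model_weights()              # defaults to './pretrained_weights'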

save_model_json(folder)

Saves the model to a model.json file in the given folder path.

Parameters:
folder : str

Path to the folder in which to save the model.json file.

save_model_weights(weights_folder)

Saves self.model weights in weights_folder/best_weights.hdf5.

Parameters:
weights_folder : str

Path to the folder in which to save the weights file.
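
A saving sketch that persists both the architecture and the current weights to the same (hypothetical) folder:

>>> model_container.save_model_json('./experiments/sb_cnn')      # writes model.json
>>> model_container.save_model_weights('./experiments/sb_cnn')   # writes best_weights.hdf5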

sub_model()[source]

Missing docstring here

train(data_train, data_val, weights_path='./', optimizer='Adam', learning_rate=0.001, early_stopping=100, considered_improvement=0.01, losses='categorical_crossentropy', loss_weights=[1], sequence_time_sec=0.5, metric_resolution_sec=1.0, label_list=[], shuffle=True, **kwargs_keras_fit)

Trains the keras model using the data and parameters given as arguments.

Parameters:
X_train : ndarray

3D array with mel-spectrograms of train set. Shape = (N_instances, N_hops, N_mel_bands)

Y_train : ndarray

2D array with the annotations of train set (one hot encoding). Shape (N_instances, N_classes)

X_val : ndarray

3D array with mel-spectrograms of validation set. Shape = (N_instances, N_hops, N_mel_bands)

Y_val : ndarray

2D array with the annotations of validation set (one hot encoding). Shape (N_instances, N_classes)

weights_path : str

Path where the best weights of the model and the log of the training process are saved.

loss_weights : list

List of weights for each loss function (‘categorical_crossentropy’, ‘mean_squared_error’, ‘prototype_loss’)

optimizer : str

Optimizer used to train the model

learning_rate : float

Learning rate used to train the model

batch_size : int

Batch size used in the training process

epochs : int

Number of training epochs

fit_verbose : int

Verbose mode for fit method of Keras model
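
A training sketch under the assumption that data_train and data_val can be passed as (X, Y) pairs prepared as described above, and that epochs and batch_size are forwarded to Keras through kwargs_keras_fit (folder path is hypothetical):

>>> model_container.train((X_train, Y_train), (X_val, Y_val),
...                       weights_path='./experiments/sb_cnn',
...                       optimizer='Adam', learning_rate=0.001,
...                       epochs=50, batch_size=64)
>>> model_container.load_model_weights('./experiments/sb_cnn')   # reload the best weights found during training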