Running TensorFlow 2.0 code gives "ValueError: tf.function-decorated function tried to create variables on non-first call". What am I doing wrong?

This error occurs because a tf.function-decorated function may only create tf.Variables on its first call (the first trace); creating them again on later calls raises the ValueError. As a workaround, you can run such functions eagerly by adding the following line after importing TensorFlow: tf.config.experimental_run_functions_eagerly(True). Since that API is deprecated (no longer experimental), use tf.config.run_functions_eagerly(True) instead.
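Beyond the eager-execution workaround, the usual fix is to create the variable once, outside the traced function, and only reuse it inside. A minimal sketch (the names v and scale are illustrative, not from the original question):

```python
import tensorflow as tf

# Hypothetical sketch: the error appears when a tf.Variable is created
# inside the tf.function body, because every retrace would try to
# create it again. Creating the variable once, outside the traced
# function, avoids the problem.
v = tf.Variable(1.0)

@tf.function
def scale(x):
    return v * x  # reuses the existing variable; nothing is created here

print(scale(tf.constant(3.0)).numpy())  # 3.0
```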

What is the difference between Flatten() and GlobalAveragePooling2D() in Keras?

That both seem to work doesn’t mean they do the same thing. Flatten will take a tensor of any shape and transform it into a one-dimensional tensor (plus the samples dimension), keeping all values in the tensor. For example, a tensor of shape (samples, 10, 20, 1) will be flattened to (samples, 10 * 20 * … Read more
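The shape difference can be sketched with plain NumPy (this mirrors what the two layers compute, assuming channels-last input):

```python
import numpy as np

# For input of shape (samples, H, W, C), Flatten keeps every value,
# producing (samples, H * W * C), while GlobalAveragePooling2D
# averages over the spatial dimensions, producing (samples, C).
x = np.arange(2 * 10 * 20 * 1, dtype=float).reshape(2, 10, 20, 1)

flattened = x.reshape(x.shape[0], -1)   # like Flatten()
pooled = x.mean(axis=(1, 2))            # like GlobalAveragePooling2D()

print(flattened.shape)  # (2, 200)
print(pooled.shape)     # (2, 1)
```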

How do I get the weights of a layer in Keras?

If you want to get the weights and biases of all layers, you can simply use:

for layer in model.layers:
    print(layer.get_config(), layer.get_weights())

This will print all the relevant information. If you want the weights returned directly as NumPy arrays, you can use:

first_layer_weights = model.layers[0].get_weights()[0]
first_layer_biases = model.layers[0].get_weights()[1]
second_layer_weights = model.layers[1].get_weights()[0]
second_layer_biases = model.layers[1].get_weights()[1]

and so on.
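To make the structure of those return values concrete: for a Dense layer, get_weights() returns a list [kernel, bias], where the kernel has shape (input_dim, units) and the bias has shape (units,). A NumPy sketch (the arrays below are stand-ins, not taken from a real model):

```python
import numpy as np

# Stand-ins for model.layers[0].get_weights()[0] and [1]:
kernel = np.ones((4, 3))  # shape (input_dim, units)
bias = np.zeros(3)        # shape (units,)

# A Dense layer's forward pass with these weights is x @ kernel + bias.
x = np.ones((1, 4))
y = x @ kernel + bias
print(y)  # [[4. 4. 4.]]
```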

What is the difference between an Embedding Layer and a Dense Layer?

An embedding layer is faster, because it is essentially the equivalent of a dense layer that makes simplifying assumptions. Imagine a word-to-embedding layer with these weights:

w = [[0.1, 0.2, 0.3, 0.4], [0.5, 0.6, 0.7, 0.8], [0.9, 0.0, 0.1, 0.2]]

A Dense layer will treat these like actual weights with which to perform matrix multiplication. … Read more
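The equivalence can be checked in NumPy: looking up row i of the embedding matrix gives the same result as pushing a one-hot vector for index i through a Dense layer with the same weights (no bias assumed):

```python
import numpy as np

# The weight matrix from the answer above.
w = np.array([[0.1, 0.2, 0.3, 0.4],
              [0.5, 0.6, 0.7, 0.8],
              [0.9, 0.0, 0.1, 0.2]])

token = 1
embedding_lookup = w[token]        # what an Embedding layer does: a row lookup

one_hot = np.zeros(3)
one_hot[token] = 1.0
dense_output = one_hot @ w         # what a Dense layer computes: a matmul

print(np.allclose(embedding_lookup, dense_output))  # True
```

The lookup skips the multiplication entirely, which is why the embedding layer is faster.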

When does keras reset an LSTM state?

Checking with some tests, I reached the following conclusions, which agree with the documentation and with Nassim’s answer. First, there isn’t a single state in a layer, but one state per sample in the batch; there are batch_size parallel states in such a layer. In a stateful=False case, all the states are … Read more

How do you create a custom activation function with Keras?

Credits to this GitHub issue comment by Ritchie Ng.

# Creating a model
from keras.models import Sequential
from keras.layers import Dense

# Custom activation function
from keras.layers import Activation
from keras import backend as K
from keras.utils.generic_utils import get_custom_objects

def custom_activation(x):
    return (K.sigmoid(x) * 5) - 1

get_custom_objects().update({'custom_activation': Activation(custom_activation)})

# Usage
model = Sequential()
model.add(Dense(32, … Read more
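Independent of the Keras wiring, the activation's math is easy to verify with plain NumPy: since sigmoid(x) lies in (0, 1), the expression (sigmoid(x) * 5) - 1 is bounded in (-1, 4):

```python
import numpy as np

# The same custom activation, written without Keras backend calls.
def custom_activation(x):
    return 1.0 / (1.0 + np.exp(-x)) * 5 - 1

print(custom_activation(0.0))         # 1.5 (sigmoid(0) = 0.5, so 0.5 * 5 - 1)
print(custom_activation(-10.0) > -1)  # True (lower bound -1 is never reached)
print(custom_activation(10.0) < 4)    # True (upper bound 4 is never reached)
```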

Reset weights in Keras layer

Save the initial weights right after compiling the model but before training it:

model.save_weights('model.h5')

and then after training, "reset" the model by reloading the initial weights:

model.load_weights('model.h5')

This gives you an apples-to-apples model for comparing different data sets, and it should be quicker than recompiling the entire model.
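The same snapshot-and-restore pattern can be sketched generically with NumPy (the array and file path are illustrative stand-ins; in Keras the two calls are model.save_weights and model.load_weights):

```python
import numpy as np
import os
import tempfile

# Stand-in for the model's initial weights.
weights = np.arange(12.0).reshape(4, 3)

# Snapshot before training, like model.save_weights('model.h5').
path = os.path.join(tempfile.mkdtemp(), "initial.npy")
np.save(path, weights)

weights = weights + 1.0   # stands in for training updating the weights

# Restore after training, like model.load_weights('model.h5').
weights = np.load(path)
print(np.allclose(weights, np.arange(12.0).reshape(4, 3)))  # True
```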