I want to take a pre-trained VGG16 model in Keras, remove its output layer, add a new output layer with the number of classes suited to my problem, and then fit it to new data. For that I am trying to use the model from here: https://keras.io/applications/#vgg16 , but since it is not Sequential, I cannot simply call model.pop()
. Popping the layer off model.layers and adding a new one does not work either, because predict still expects the old output shape. How would I do this? Is there a way to convert this kind of model to Sequential
?
You can use pop()
on model.layers
and then use model.layers[-1].output
to build the new layers on top.
Example:
from keras.models import Model
from keras.layers import Dense,Flatten
from keras.applications import vgg16
from keras import backend as K
# Load the full VGG16 model, including the classifier (top) layers
model = vgg16.VGG16(weights='imagenet', include_top=True)
model.summary(line_length=150)

# Remove the last two layers ('predictions' and 'fc2') from the layer list
model.layers.pop()
model.layers.pop()
model.summary(line_length=150)

# Attach a new output layer to what is now the last layer (fc1)
new_layer = Dense(10, activation='softmax', name='my_dense')
inp = model.input
out = new_layer(model.layers[-1].output)
model2 = Model(inp, out)
model2.summary(line_length=150)
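If the goal is to train only the new output layer, one option is to freeze the pre-trained layers before compiling. A minimal sketch (the optimizer and loss below are illustrative choices, not part of the original answer):
# Freeze every layer except the newly added Dense layer
for layer in model2.layers[:-1]:
    layer.trainable = False

model2.compile(optimizer='adam',
               loss='categorical_crossentropy',
               metrics=['accuracy'])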
You can also use the include_top=False
option of these models. In that case, if you need to use a Flatten layer, you have to pass the input_shape
as well.
model3 = vgg16.VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
model3.summary(line_length=150)
flatten = Flatten()
new_layer2 = Dense(10, activation='softmax', name='my_dense_2')
inp2 = model3.input
out2 = new_layer2(flatten(model3.output))
model4 = Model(inp2, out2)
model4.summary(line_length=150)
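From there, model4 can be compiled and fitted to the new data as usual. A minimal sketch, where x_train and y_train are hypothetical placeholders for your own images and one-hot labels:
import numpy as np
from keras.applications.vgg16 import preprocess_input

# Hypothetical training data: images of shape (224, 224, 3) and 10 one-hot classes
x_train = preprocess_input(np.random.rand(8, 224, 224, 3) * 255.0)
y_train = np.eye(10)[np.random.randint(0, 10, size=8)]

model4.compile(optimizer='adam',
               loss='categorical_crossentropy',
               metrics=['accuracy'])
model4.fit(x_train, y_train, batch_size=4, epochs=1)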
We can convert the VGG model into a Sequential one like this:
import keras
from keras.models import Sequential
from keras.layers import Dense

# Create VGG model
vgg_model = keras.applications.vgg16.VGG16(weights='imagenet')
# Created model is of type Model
type(vgg_model)
>> keras.engine.training.Model
# Convert it to Sequential
model = Sequential()
for layer in vgg_model.layers:
model.add(layer)
# Now, check the model type - it's Sequential!
type(model)
>> keras.models.Sequential
# Verify the model details
model.summary()
>>
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_15 (InputLayer) (None, 224, 224, 3) 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, 224, 224, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, 224, 224, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, 112, 112, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, 112, 112, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, 112, 112, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, 56, 56, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, 56, 56, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, 28, 28, 256) 0
_________________________________________________________________
block4_conv1 (Conv2D) (None, 28, 28, 512) 1180160
_________________________________________________________________
block4_conv2 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
block4_conv3 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
block4_pool (MaxPooling2D) (None, 14, 14, 512) 0
_________________________________________________________________
block5_conv1 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_conv2 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_conv3 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, 7, 7, 512) 0
_________________________________________________________________
flatten (Flatten) (None, 25088) 0
_________________________________________________________________
fc1 (Dense) (None, 4096) 102764544
_________________________________________________________________
fc2 (Dense) (None, 4096) 16781312
_________________________________________________________________
predictions (Dense) (None, 1000) 4097000
=================================================================
Total params: 138,357,544
Trainable params: 138,357,544
Non-trainable params: 0
_________________________________________________________________
# Now that it's Sequential, we can perform the usual operations.
model.layers.pop()
# Freeze the layers
for layer in model.layers:
layer.trainable = False
# Add a new 'softmax' layer in place of the earlier 'predictions' layer.
model.add(Dense(2, activation='softmax'))
# Check the summary, and yes new layer has been added.
model.summary()
Layer (type) Output Shape Param #
=================================================================
input_15 (InputLayer) (None, 224, 224, 3) 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, 224, 224, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, 224, 224, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, 112, 112, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, 112, 112, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, 112, 112, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, 56, 56, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, 56, 56, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, 28, 28, 256) 0
_________________________________________________________________
block4_conv1 (Conv2D) (None, 28, 28, 512) 1180160
_________________________________________________________________
block4_conv2 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
block4_conv3 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
block4_pool (MaxPooling2D) (None, 14, 14, 512) 0
_________________________________________________________________
block5_conv1 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_conv2 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_conv3 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, 7, 7, 512) 0
_________________________________________________________________
flatten (Flatten) (None, 25088) 0
_________________________________________________________________
fc1 (Dense) (None, 4096) 102764544
_________________________________________________________________
fc2 (Dense) (None, 4096) 16781312
_________________________________________________________________
dense_4 (Dense) (None, 2) 2002
=================================================================
Total params: 134,262,546
Trainable params: 2,002
Non-trainable params: 134,260,544
_________________________________________________________________
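The frozen model can then be compiled and fitted to the new images. A minimal sketch, assuming the new data lives in a hypothetical 'data/train' directory with one sub-folder per class (this path and the training settings are not part of the original answer):
from keras.preprocessing.image import ImageDataGenerator
from keras.applications.vgg16 import preprocess_input

model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# 'data/train' is a hypothetical directory containing one sub-folder per class
train_gen = ImageDataGenerator(preprocessing_function=preprocess_input)
train_flow = train_gen.flow_from_directory('data/train',
                                           target_size=(224, 224),
                                           batch_size=32,
                                           class_mode='categorical')

model.fit_generator(train_flow,
                    steps_per_epoch=train_flow.samples // train_flow.batch_size,
                    epochs=5)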