Finetuning VGG-16 in Keras slow training

I want to fine-tune the last two layers of the VGG model on the LFW dataset. I have removed the original softmax layer and added my own with 19 outputs, since there are 19 classes I am trying to train on. I also want to use the network for a kind of "custom feature extraction".

I set the layers that I want to be non-trainable like this, so that only the final fully connected layers are fine-tuned:

# mark these layers as non-trainable so they keep their pre-trained weights
for layer in model.layers: 
    layer.trainable = False 
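Roughly, the whole head replacement looks like this. This is a simplified sketch (my real model also keeps fc7 trainable and adds a BatchNormalization layer, as the summary below shows); it assumes the Keras 1-style Model(input=..., output=...) signature, and building/loading the pre-trained vgg model is omitted:

from keras.models import Model
from keras.layers import Dense

# freeze the pre-trained layers so they keep their weights
for layer in vgg.layers:
    layer.trainable = False

# replace the original 1000-way softmax with a 19-way one
predictions = Dense(19, activation='softmax', name='fc8')(vgg.layers[-1].output)
model = Model(input=vgg.input, output=predictions)
model.compile(optimizer='sgd', loss='categorical_crossentropy',
              metrics=['accuracy'])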

Using a GPU, each epoch takes hours (around 7,350 s per epoch, as the log below shows) to train the 19 classes, with at least 40 images per class.

Since I don't have many samples, this training performance seems odd to me.

Does anyone know why this is happening?

Here is the log:

Image shape: (224, 224, 3) 
Number of classes: 19 
K.image_dim_ordering: th 

____________________________________________________________________________________________________ 
Layer (type)      Output Shape   Param #  Connected to      
==================================================================================================== 
input_1 (InputLayer)    (None, 3, 224, 224) 0            
____________________________________________________________________________________________________ 
conv1_1 (Convolution2D)   (None, 64, 224, 224) 1792  input_1[0][0]      
____________________________________________________________________________________________________ 
conv1_2 (Convolution2D)   (None, 64, 224, 224) 36928  conv1_1[0][0]      
____________________________________________________________________________________________________ 
pool1 (MaxPooling2D)    (None, 64, 112, 112) 0   conv1_2[0][0]      
____________________________________________________________________________________________________ 
conv2_1 (Convolution2D)   (None, 128, 112, 112) 73856  pool1[0][0]      
____________________________________________________________________________________________________ 
conv2_2 (Convolution2D)   (None, 128, 112, 112) 147584  conv2_1[0][0]      
____________________________________________________________________________________________________ 
pool2 (MaxPooling2D)    (None, 128, 56, 56) 0   conv2_2[0][0]      
____________________________________________________________________________________________________ 
conv3_1 (Convolution2D)   (None, 256, 56, 56) 295168  pool2[0][0]      
____________________________________________________________________________________________________ 
conv3_2 (Convolution2D)   (None, 256, 56, 56) 590080  conv3_1[0][0]      
____________________________________________________________________________________________________ 
conv3_3 (Convolution2D)   (None, 256, 56, 56) 590080  conv3_2[0][0]      
____________________________________________________________________________________________________ 
pool3 (MaxPooling2D)    (None, 256, 28, 28) 0   conv3_3[0][0]      
____________________________________________________________________________________________________ 
conv4_1 (Convolution2D)   (None, 512, 28, 28) 1180160  pool3[0][0]      
____________________________________________________________________________________________________ 
conv4_2 (Convolution2D)   (None, 512, 28, 28) 2359808  conv4_1[0][0]      
____________________________________________________________________________________________________ 
conv4_3 (Convolution2D)   (None, 512, 28, 28) 2359808  conv4_2[0][0]      
____________________________________________________________________________________________________ 
pool4 (MaxPooling2D)    (None, 512, 14, 14) 0   conv4_3[0][0]      
____________________________________________________________________________________________________ 
conv5_1 (Convolution2D)   (None, 512, 14, 14) 2359808  pool4[0][0]      
____________________________________________________________________________________________________ 
conv5_2 (Convolution2D)   (None, 512, 14, 14) 2359808  conv5_1[0][0]      
____________________________________________________________________________________________________ 
conv5_3 (Convolution2D)   (None, 512, 14, 14) 2359808  conv5_2[0][0]      
____________________________________________________________________________________________________ 
pool5 (MaxPooling2D)    (None, 512, 7, 7)  0   conv5_3[0][0]      
____________________________________________________________________________________________________ 
flatten (Flatten)    (None, 25088)   0   pool5[0][0]      
____________________________________________________________________________________________________ 
fc6 (Dense)      (None, 4096)   102764544 flatten[0][0]      
____________________________________________________________________________________________________ 
fc7 (Dense)      (None, 4096)   16781312 fc6[0][0]       
____________________________________________________________________________________________________ 
batchnormalization_1 (BatchNorma (None, 4096)   16384  fc7[0][0]       
____________________________________________________________________________________________________ 
fc8 (Dense)      (None, 19)   77843  batchnormalization_1[0][0]  
==================================================================================================== 
Total params: 134,354,771 
Trainable params: 16,867,347 
Non-trainable params: 117,487,424 
____________________________________________________________________________________________________ 
None 
Train on 1120 samples, validate on 747 samples 
Epoch 1/20 
1120/1120 [==============================] - 7354s - loss: 2.9517 - acc: 0.0714 - val_loss: 2.9323 - val_acc: 0.2316 
Epoch 2/20 
1120/1120 [==============================] - 7356s - loss: 2.8053 - acc: 0.1732 - val_loss: 2.9187 - val_acc: 0.3614 
Epoch 3/20 
1120/1120 [==============================] - 7358s - loss: 2.6727 - acc: 0.2643 - val_loss: 2.9034 - val_acc: 0.3882 
Epoch 4/20 
1120/1120 [==============================] - 7361s - loss: 2.5565 - acc: 0.3071 - val_loss: 2.8861 - val_acc: 0.4016 
Epoch 5/20 
1120/1120 [==============================] - 7360s - loss: 2.4597 - acc: 0.3518 - val_loss: 2.8667 - val_acc: 0.4043 
Epoch 6/20 
1120/1120 [==============================] - 7363s - loss: 2.3827 - acc: 0.3714 - val_loss: 2.8448 - val_acc: 0.4163 
Epoch 7/20 
1120/1120 [==============================] - 7364s - loss: 2.3108 - acc: 0.4045 - val_loss: 2.8196 - val_acc: 0.4244 
Epoch 8/20 
1120/1120 [==============================] - 7377s - loss: 2.2463 - acc: 0.4268 - val_loss: 2.7905 - val_acc: 0.4324 
Epoch 9/20 
1120/1120 [==============================] - 7373s - loss: 2.1824 - acc: 0.4563 - val_loss: 2.7572 - val_acc: 0.4404 
Epoch 10/20 
1120/1120 [==============================] - 7373s - loss: 2.1313 - acc: 0.4732 - val_loss: 2.7190 - val_acc: 0.4471 
Epoch 11/20 
1120/1120 [==============================] - 7440s - loss: 2.0766 - acc: 0.5036 - val_loss: 2.6754 - val_acc: 0.4565 
Epoch 12/20 
1120/1120 [==============================] - 7414s - loss: 2.0323 - acc: 0.5170 - val_loss: 2.6263 - val_acc: 0.4565 
Epoch 13/20 
1120/1120 [==============================] - 7413s - loss: 1.9840 - acc: 0.5420 - val_loss: 2.5719 - val_acc: 0.4592 
Epoch 14/20 
1120/1120 [==============================] - 7414s - loss: 1.9467 - acc: 0.5464 - val_loss: 2.5130 - val_acc: 0.4592 
Epoch 15/20 
1120/1120 [==============================] - 7412s - loss: 1.9039 - acc: 0.5652 - val_loss: 2.4513 - val_acc: 0.4592 
Epoch 16/20 
1120/1120 [==============================] - 7413s - loss: 1.8716 - acc: 0.5723 - val_loss: 2.3906 - val_acc: 0.4578 
Epoch 17/20 
1120/1120 [==============================] - 7415s - loss: 1.8214 - acc: 0.5866 - val_loss: 2.3319 - val_acc: 0.4538 
Epoch 18/20 
1120/1120 [==============================] - 7416s - loss: 1.7860 - acc: 0.5982 - val_loss: 2.2789 - val_acc: 0.4538 
Epoch 19/20 
1120/1120 [==============================] - 7430s - loss: 1.7623 - acc: 0.5973 - val_loss: 2.2322 - val_acc: 0.4538 
Epoch 20/20 
1120/1120 [==============================] - 7856s - loss: 1.7222 - acc: 0.6170 - val_loss: 2.1913 - val_acc: 0.4538 
Accuracy: 45.38% 

The results are not good, and because training takes so long I cannot train it on more data. Any ideas?

Thanks!

Following up on "Marcin Możejko's" answer, here is what to do next: 1. Remove the top (Dense) layers. 2. Compute the network output for your images (so you will have 19 * 40 vectors). 3. Train your new dense part on these vectors. 4. Combine the two networks (CNN and Dense). (Note that it may not give very good results anyway.) –
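A minimal sketch of steps 1-3 (the names X_train, y_train, X_val, y_val are placeholders, the 'flatten' layer name comes from the summary above, and nb_epoch is the Keras 1 spelling of the argument):

from keras.models import Model, Sequential
from keras.layers import Dense

# Steps 1-2: cut the network after 'flatten' and run every image through
# the convolutional base exactly once, caching the resulting vectors.
extractor = Model(input=model.input,
                  output=model.get_layer('flatten').output)
train_feats = extractor.predict(X_train)   # shape: (n_train, 25088)
val_feats = extractor.predict(X_val)

# Step 3: train a small dense classifier on the cached vectors; epochs are
# now cheap because the convolutional pass is no longer repeated.
clf = Sequential()
clf.add(Dense(128, activation='relu', input_dim=25088))
clf.add(Dense(19, activation='softmax'))
clf.compile(optimizer='adam', loss='categorical_crossentropy',
            metrics=['accuracy'])
clf.fit(train_feats, y_train, validation_data=(val_feats, y_val),
        nb_epoch=20, batch_size=32)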

I have thought about it. What you have in mind is to extract the features from the images and then use those features to train a sequential dense model? – Eric

Yes. Just extract the feature vectors from the images and train the dense layers on them. Maybe you will get an acceptable result. –

Answer

Notice that you want to feed ~19 * 40 < 800 examples in order to train 16,867,347 parameters. That is on the order of 2e4 parameters per example, which simply cannot work well. Try removing all the FCN layers (the Dense layers on top) and adding smaller Dense layers with e.g. ~50 neurons each. In my opinion this should help both accuracy and training speed.
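Something along these lines (just a sketch, reusing the 'flatten' layer name from your summary and the Keras 1 Model(input=..., output=...) signature):

from keras.models import Model
from keras.layers import Dense

# keep the convolutional base up to 'flatten' and attach a tiny head:
# 25088 -> 50 -> 19 is ~1.25M parameters instead of the ~120M in fc6-fc8
features = model.get_layer('flatten').output
x = Dense(50, activation='relu')(features)
predictions = Dense(19, activation='softmax')(x)
small_model = Model(input=model.input, output=predictions)
small_model.compile(optimizer='adam', loss='categorical_crossentropy',
                    metrics=['accuracy'])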

Yes, I already tried that, but the performance was poor; the validation accuracy froze at around 20% with at least 20 images per class. So I plan to change my dataset, since LFW has many classes with only one image. Maybe if I take FaceScrub, where each class has more examples, it will work better with the original VGG, obviously taking something like 100 classes with a minimum of 200 images per class, etc... What do you think? Thanks! – Eric

What do you think about the computation time? – Eric

I changed the dataset (now I am using FaceScrub) and I tried your suggestion with 128 neurons per layer, but it is still slow. I think the cost comes from the convolutional layers, since my image size is 224 * 224. Right now my result is "val_loss: 2.4294 - val_acc: 0.8350" classifying 50 classes with 23 images per class. Do I need more data? The loss is decreasing very slowly. – Eric