tensorflow - Fine-tuning VGG-16: slow training in Keras


I'm trying to fine-tune the last two layers of a VGG-16 model on the LFW dataset. I've changed the softmax layer's dimensions by removing the original one and adding a softmax layer with 19 outputs, since there are 19 classes I'm trying to train on. I also want to fine-tune the last fully connected layer in order to build a "custom feature extractor".
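For reference, here is a minimal sketch of what that head swap can look like with the Keras 1 functional API (the layer names fc7 and fc8 come from the model summary below; base_model and the rest are illustrative assumptions, not my exact code):

    from keras.models import Model
    from keras.layers import Dense
    from keras.layers.normalization import BatchNormalization

    # base_model: VGG-16 with pretrained weights and the original
    # 1000-way fc8 softmax still attached (assumption for this sketch).
    x = base_model.get_layer('fc7').output        # last 4096-d FC layer
    x = BatchNormalization()(x)                   # matches the summary below
    predictions = Dense(19, activation='softmax', name='fc8')(x)

    model = Model(input=base_model.input, output=predictions)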

I'm setting the layers I want to be non-trainable like this:

    for layer in model.layers:
        layer.trainable = False
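Note that, as written, this loop freezes every layer, including the new top. A sketch of the intended selective freeze (the summary below shows fc7, the batch norm, and fc8 as the trainable layers, i.e. the last three; treat the index as an assumption):

    # Freeze everything except the last three layers (fc7, batchnorm, fc8),
    # then recompile so the trainable flags take effect.
    for layer in model.layers[:-3]:
        layer.trainable = False
    for layer in model.layers[-3:]:
        layer.trainable = True
    model.compile(optimizer='sgd',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])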

Even using a GPU, it takes me about two hours per epoch (7,350+ seconds, per the log below) to train on 19 classes with a minimum of 40 images per class.

Since I don't have a lot of samples, such slow training seems strange.

Does anyone know why this is happening?

Here is the log:

    image shape:  (224, 224, 3)
    number of classes:  19
    K.image_dim_ordering: th
    ____________________________________________________________________________________________________
    Layer (type)                     Output Shape          Param #     Connected to
    ====================================================================================================
    input_1 (InputLayer)             (None, 3, 224, 224)   0
    ____________________________________________________________________________________________________
    conv1_1 (Convolution2D)          (None, 64, 224, 224)  1792        input_1[0][0]
    ____________________________________________________________________________________________________
    conv1_2 (Convolution2D)          (None, 64, 224, 224)  36928       conv1_1[0][0]
    ____________________________________________________________________________________________________
    pool1 (MaxPooling2D)             (None, 64, 112, 112)  0           conv1_2[0][0]
    ____________________________________________________________________________________________________
    conv2_1 (Convolution2D)          (None, 128, 112, 112) 73856       pool1[0][0]
    ____________________________________________________________________________________________________
    conv2_2 (Convolution2D)          (None, 128, 112, 112) 147584      conv2_1[0][0]
    ____________________________________________________________________________________________________
    pool2 (MaxPooling2D)             (None, 128, 56, 56)   0           conv2_2[0][0]
    ____________________________________________________________________________________________________
    conv3_1 (Convolution2D)          (None, 256, 56, 56)   295168      pool2[0][0]
    ____________________________________________________________________________________________________
    conv3_2 (Convolution2D)          (None, 256, 56, 56)   590080      conv3_1[0][0]
    ____________________________________________________________________________________________________
    conv3_3 (Convolution2D)          (None, 256, 56, 56)   590080      conv3_2[0][0]
    ____________________________________________________________________________________________________
    pool3 (MaxPooling2D)             (None, 256, 28, 28)   0           conv3_3[0][0]
    ____________________________________________________________________________________________________
    conv4_1 (Convolution2D)          (None, 512, 28, 28)   1180160     pool3[0][0]
    ____________________________________________________________________________________________________
    conv4_2 (Convolution2D)          (None, 512, 28, 28)   2359808     conv4_1[0][0]
    ____________________________________________________________________________________________________
    conv4_3 (Convolution2D)          (None, 512, 28, 28)   2359808     conv4_2[0][0]
    ____________________________________________________________________________________________________
    pool4 (MaxPooling2D)             (None, 512, 14, 14)   0           conv4_3[0][0]
    ____________________________________________________________________________________________________
    conv5_1 (Convolution2D)          (None, 512, 14, 14)   2359808     pool4[0][0]
    ____________________________________________________________________________________________________
    conv5_2 (Convolution2D)          (None, 512, 14, 14)   2359808     conv5_1[0][0]
    ____________________________________________________________________________________________________
    conv5_3 (Convolution2D)          (None, 512, 14, 14)   2359808     conv5_2[0][0]
    ____________________________________________________________________________________________________
    pool5 (MaxPooling2D)             (None, 512, 7, 7)     0           conv5_3[0][0]
    ____________________________________________________________________________________________________
    flatten (Flatten)                (None, 25088)         0           pool5[0][0]
    ____________________________________________________________________________________________________
    fc6 (Dense)                      (None, 4096)          102764544   flatten[0][0]
    ____________________________________________________________________________________________________
    fc7 (Dense)                      (None, 4096)          16781312    fc6[0][0]
    ____________________________________________________________________________________________________
    batchnormalization_1 (BatchNorma (None, 4096)          16384       fc7[0][0]
    ____________________________________________________________________________________________________
    fc8 (Dense)                      (None, 19)            77843       batchnormalization_1[0][0]
    ====================================================================================================
    Total params: 134,354,771
    Trainable params: 16,867,347
    Non-trainable params: 117,487,424
    ____________________________________________________________________________________________________
    None
    Train on 1120 samples, validate on 747 samples
    Epoch 1/20
    1120/1120 [==============================] - 7354s - loss: 2.9517 - acc: 0.0714 - val_loss: 2.9323 - val_acc: 0.2316
    Epoch 2/20
    1120/1120 [==============================] - 7356s - loss: 2.8053 - acc: 0.1732 - val_loss: 2.9187 - val_acc: 0.3614
    Epoch 3/20
    1120/1120 [==============================] - 7358s - loss: 2.6727 - acc: 0.2643 - val_loss: 2.9034 - val_acc: 0.3882
    Epoch 4/20
    1120/1120 [==============================] - 7361s - loss: 2.5565 - acc: 0.3071 - val_loss: 2.8861 - val_acc: 0.4016
    Epoch 5/20
    1120/1120 [==============================] - 7360s - loss: 2.4597 - acc: 0.3518 - val_loss: 2.8667 - val_acc: 0.4043
    Epoch 6/20
    1120/1120 [==============================] - 7363s - loss: 2.3827 - acc: 0.3714 - val_loss: 2.8448 - val_acc: 0.4163
    Epoch 7/20
    1120/1120 [==============================] - 7364s - loss: 2.3108 - acc: 0.4045 - val_loss: 2.8196 - val_acc: 0.4244
    Epoch 8/20
    1120/1120 [==============================] - 7377s - loss: 2.2463 - acc: 0.4268 - val_loss: 2.7905 - val_acc: 0.4324
    Epoch 9/20
    1120/1120 [==============================] - 7373s - loss: 2.1824 - acc: 0.4563 - val_loss: 2.7572 - val_acc: 0.4404
    Epoch 10/20
    1120/1120 [==============================] - 7373s - loss: 2.1313 - acc: 0.4732 - val_loss: 2.7190 - val_acc: 0.4471
    Epoch 11/20
    1120/1120 [==============================] - 7440s - loss: 2.0766 - acc: 0.5036 - val_loss: 2.6754 - val_acc: 0.4565
    Epoch 12/20
    1120/1120 [==============================] - 7414s - loss: 2.0323 - acc: 0.5170 - val_loss: 2.6263 - val_acc: 0.4565
    Epoch 13/20
    1120/1120 [==============================] - 7413s - loss: 1.9840 - acc: 0.5420 - val_loss: 2.5719 - val_acc: 0.4592
    Epoch 14/20
    1120/1120 [==============================] - 7414s - loss: 1.9467 - acc: 0.5464 - val_loss: 2.5130 - val_acc: 0.4592
    Epoch 15/20
    1120/1120 [==============================] - 7412s - loss: 1.9039 - acc: 0.5652 - val_loss: 2.4513 - val_acc: 0.4592
    Epoch 16/20
    1120/1120 [==============================] - 7413s - loss: 1.8716 - acc: 0.5723 - val_loss: 2.3906 - val_acc: 0.4578
    Epoch 17/20
    1120/1120 [==============================] - 7415s - loss: 1.8214 - acc: 0.5866 - val_loss: 2.3319 - val_acc: 0.4538
    Epoch 18/20
    1120/1120 [==============================] - 7416s - loss: 1.7860 - acc: 0.5982 - val_loss: 2.2789 - val_acc: 0.4538
    Epoch 19/20
    1120/1120 [==============================] - 7430s - loss: 1.7623 - acc: 0.5973 - val_loss: 2.2322 - val_acc: 0.4538
    Epoch 20/20
    1120/1120 [==============================] - 7856s - loss: 1.7222 - acc: 0.6170 - val_loss: 2.1913 - val_acc: 0.4538
    Accuracy: 45.38%

The results are not good, and I can't train on more data because it takes too long. Any ideas?

Thanks!!!

Please notice that you want to feed ~19 * 40 < 800 examples in order to train 16,867,347 parameters. That is roughly 2e4 parameters per example; this cannot work properly. Try deleting the FC layers (the Dense layers at the top) and putting in smaller Dense layers, e.g. with ~50 neurons each. In my opinion this should help in improving accuracy and speeding up training.
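A minimal sketch of that suggestion, assuming the Keras 1 functional API from the question (the layer name 'flatten' comes from the summary above; the width of 50 and the optimizer are illustrative choices, not a prescription):

    from keras.models import Model
    from keras.layers import Dense

    # Attach a much smaller classifier to the flattened conv features
    # (25088-d): ~1.25M parameters instead of ~120M in fc6 + fc7.
    features = model.get_layer('flatten').output
    x = Dense(50, activation='relu')(features)
    predictions = Dense(19, activation='softmax')(x)

    small_model = Model(input=model.input, output=predictions)
    for layer in small_model.layers[:-2]:   # keep the conv base frozen
        layer.trainable = False
    small_model.compile(optimizer='sgd',
                        loss='categorical_crossentropy',
                        metrics=['accuracy'])

As for the epoch time: freezing layers only skips their weight updates, not the forward pass, so every epoch still pushes every image through all 13 conv layers. Since the conv base is frozen anyway, you could also precompute the 'flatten' features once with model.predict(...) and train just the small Dense head on those, which should cut the per-epoch time dramatically.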

