python - Why does the convolutional network use every 64th image for training? -


I'm looking at this code (Python 3.5 + TensorFlow + TFLearn):

# -*- coding: utf-8 -*-
""" Convolutional Neural Network for MNIST dataset classification task.

References:
    Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. "Gradient-based
    learning applied to document recognition." Proceedings of the IEEE,
    86(11):2278-2324, November 1998.

Links:
    [MNIST Dataset] http://yann.lecun.com/exdb/mnist/
"""

from __future__ import division, print_function, absolute_import

import tflearn
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.normalization import local_response_normalization
from tflearn.layers.estimator import regression

# Data loading and preprocessing
import tflearn.datasets.mnist as mnist
X, Y, testX, testY = mnist.load_data(one_hot=True)
X = X.reshape([-1, 28, 28, 1])
testX = testX.reshape([-1, 28, 28, 1])

# Building convolutional network
network = input_data(shape=[None, 28, 28, 1], name='input')
network = conv_2d(network, 32, 3, activation='relu', regularizer="L2")
network = max_pool_2d(network, 2)
network = local_response_normalization(network)
network = conv_2d(network, 64, 3, activation='relu', regularizer="L2")
network = max_pool_2d(network, 2)
network = local_response_normalization(network)
network = fully_connected(network, 128, activation='tanh')
network = dropout(network, 0.8)
network = fully_connected(network, 256, activation='tanh')
network = dropout(network, 0.8)
network = fully_connected(network, 10, activation='softmax')
network = regression(network, optimizer='adam', learning_rate=0.01,
                     loss='categorical_crossentropy', name='target')

# Training
model = tflearn.DNN(network, tensorboard_verbose=0)
model.fit({'input': X}, {'target': Y}, n_epoch=20,
          validation_set=({'input': testX}, {'target': testY}),
          snapshot_step=100, show_metric=True, run_id='convnet_mnist')

OK, it works. But it seems to use only every 64th image of the set while learning. Why does this happen? What should I do if I have a small set and want the network to use every single image?

Example of the training messages:

Training Step: 1  | time: 2.416s | Adam | epoch: 001 | loss: 0.00000 -- iter: 064/55000
Training Step: 2  | total loss: 0.24470 | time: 4.621s | Adam | epoch: 001 | loss: 0.24470 -- iter: 128/55000
Training Step: 3  | total loss: 0.10852 | time: 6.876s | Adam | epoch: 001 | loss: 0.10852 -- iter: 192/55000
Training Step: 4  | total loss: 0.20421 | time: 9.129s | Adam | epoch: 001 | loss: 0.20421 -- iter: 256/55000

It is not using every 64th image; it is loading batches of 64 images. The reason you see iter increase by 64 each time is that 64 images have been processed per training step, so every image in the set is used once per epoch. Take a look at the documentation of the regression layer (http://tflearn.org/layers/estimator/), where you can set the batch size.
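To see why the log counts up in steps of 64, here is a minimal pure-Python sketch (names like iterate_minibatches are made up for illustration) of how mini-batch training walks through the dataset; every sample is visited once per epoch, batch_size samples per step:

```python
def iterate_minibatches(n_samples, batch_size):
    """Yield (step, images_processed) pairs, mimicking the 'iter' counter
    in TFLearn's training log."""
    step = 0
    for start in range(0, n_samples, batch_size):
        step += 1
        end = min(start + batch_size, n_samples)  # last batch may be smaller
        yield step, end

# With 55000 samples and batch_size=64 the counter goes 64, 128, 192, 256, ...
first_steps = list(iterate_minibatches(55000, 64))[:4]
print(first_steps)  # [(1, 64), (2, 128), (3, 192), (4, 256)]
```

If you really want the network to see one image per step, pass a smaller batch size to the regression layer, e.g. `regression(network, ..., batch_size=1)` as described in the linked docs; note that very small batches usually make training slower and noisier.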

