Updating fc8

As the batch_size increased from 30 to 60, the loss variation pattern changed, but it was still periodic.
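As a rough sanity check, if the periodicity is tied to full passes over the training data, doubling the batch size should roughly halve the period measured in iterations. A small sketch, using the 22000-image training-set size quoted later in this thread (everything else here is illustrative):

```python
# Rough sanity check: if the loss periodicity is tied to passes over the
# training set, doubling batch_size should roughly halve the period measured
# in iterations. 22000 is the training-set size quoted later in this thread.
num_train_images = 22000

for batch_size in (30, 60):
    iters_per_epoch = num_train_images / float(batch_size)
    print("batch_size=%d -> %.0f iterations per epoch" % (batch_size, iters_per_epoch))

# batch_size=30 -> 733 iterations per epoch
# batch_size=60 -> 367 iterations per epoch
```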

Don't worry about decreasing the learning rate; it is relative to the magnitude of the loss, which in the case of Euclidean loss can be huge.
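For a sense of scale, Caffe's EuclideanLoss is the sum of squared differences over the batch divided by 2N, so its value grows with the square of the label magnitude. A small numpy sketch; the batch size, output dimension, and the 220-pixel label range below are made-up illustration values:

```python
import numpy as np

def euclidean_loss(pred, target):
    """Caffe-style EuclideanLoss: 1/(2N) * sum of squared differences."""
    n = pred.shape[0]
    return np.sum((pred - target) ** 2) / (2.0 * n)

rng = np.random.RandomState(0)
pred = rng.rand(30, 28)                       # batch of 30, 28 regression outputs

# Labels in raw pixel coordinates (0..220 here) give a huge loss value...
target_pixels = rng.rand(30, 28) * 220.0
print(euclidean_loss(pred, target_pixels))    # on the order of 1e5

# ...while the same labels normalized to [0, 1] give a small one.
print(euclidean_loss(pred, target_pixels / 220.0))   # on the order of 1
```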

I try to use Caffe to implement the DeepPose proposed in this paper. DeepPose has 3 stages.

I have corrected the labels, changed the input type to float, and randomized the training samples, but the problem is still there.
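Since the input type was changed to float: one common way to feed float, vector-valued regression labels into Caffe is the HDF5Data layer. A minimal sketch of packing data for it; the file names, image size, and joint count are assumptions, not the poster's actual pipeline:

```python
import h5py
import numpy as np

# Dummy data standing in for the real images and joint coordinates.
images = np.random.rand(100, 3, 227, 227).astype(np.float32)  # N x C x H x W
joints = np.random.rand(100, 28).astype(np.float32)           # 14 joints * (x, y)

# The HDF5Data layer expects datasets named after its top blobs
# (here 'data' and 'label') and a text file listing the .h5 files.
with h5py.File('train.h5', 'w') as f:
    f.create_dataset('data', data=images)
    f.create_dataset('label', data=joints)

with open('train_h5_list.txt', 'w') as f:
    f.write('train.h5\n')
```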

Each stage is almost the same as AlexNet (DeepPose replaces the loss layer in AlexNet with a Euclidean loss). The solver settings are:

```
net: "models/lsp/deeppose_train.prototxt"
base_lr: 0.001
lr_policy: "step"
gamma: 0.1
stepsize: 7500
display: 50
max_iter: 36500
momentum: 0.9
weight_decay: 0.0000005
snapshot: 2000
snapshot_prefix: "models/lsp/caffenet_train"
solver_mode: GPU
```

The network output looks like this (every row is identical):

```
array([[ 0.48381898, 0.02326088, 0.02317634, 0.02317682, 0.48248914,
         0.01622555, 0.0161516 , 0.01615119, 0.48646507, 0.03201264,
         0.03185751, 0.03185739, 0.52191395, 0.03508802, 0.03494693,
         0.03494673, 0.52380753, 0.01708153, 0.01701014, 0.01700996,
         0.52726734, 0.02286946, 0.02277863, 0.0227785 , 0.46513146,
         0.02239206, 0.02227863, 0.02227836],
       [ 0.48381898, 0.02326088, 0.02317634, 0.02317682, 0.48248914,
         0.01622555, 0.0161516 , 0.01615119, 0.48646507, 0.03201264,
         0.03185751, 0.03185739, 0.52191395, 0.03508802, 0.03494693,
         0.03494673, 0.52380753, 0.01708153, 0.01701014, 0.01700996,
         0.52726734, 0.02286946, 0.02277863, 0.0227785 , 0.46513146,
         0.02239206, 0.02227863, 0.02227836],
       [ 0.48381898, 0.02326088, 0.02317634, 0.02317682, 0.48248914,
         0.01622555, 0.0161516 , 0.01615119, 0.48646507, 0.03201264,
         0.03185751, 0.03185739, 0.52191395, 0.03508802, 0.03494693,
         0.03494673, 0.52380753, 0.01708153, 0.01701014, 0.01700996,
         0.52726734, 0.02286946, 0.02277863, 0.0227785 , 0.46513146,
         0.02239206, 0.02227863, 0.02227836]])
```

The loss variation looks very strange:

[image: figure_1] https://cloud.githubusercontent.com/assets/7811449/5040242/bab9f878-6be6-11e4-99dc-675424c37

What causes the loss to change periodically?

The period is 2400 iterations. With batch_size 30 that is 2400 * 30 = 72000 images; there are 22000 training images, so one period is equivalent to 72000/22000 = 3.3 epochs.

When you shuffled the training data, did you make sure the labels still align?

[image: figure_1] https://cloud.githubusercontent.com/assets/7811449/5161721/8ee3061a-73eb-11e4-8c93-48e7c7bee80a (period == 2400 iterations)
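On the label-alignment question: one way to make misalignment impossible is to shuffle images and labels with a single shared permutation. A minimal numpy sketch, with placeholder array names and shapes:

```python
import numpy as np

# Placeholder arrays standing in for the real data.
num = 100
images = np.random.rand(num, 3, 227, 227).astype(np.float32)
labels = np.random.rand(num, 28).astype(np.float32)   # 14 joints * (x, y)

# Shuffle both arrays with ONE shared permutation, so that row i of `labels`
# still describes row i of `images` after shuffling.
perm = np.random.permutation(num)
images, labels = images[perm], labels[perm]
```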

@OnlySang I met the same problem recently when I used AlexNet to train a 2-category classifier.

When I used the model to test my images with the Python interface, I always got the same output.
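A quick way to confirm that symptom is to forward a few different inputs through the trained net with pycaffe and compare the outputs. In this sketch the file names and the 'data'/'fc8' blob names are assumptions to adapt to your own model:

```python
import numpy as np
import caffe

caffe.set_mode_cpu()
# Paths are placeholders; substitute your own deploy prototxt and snapshot.
net = caffe.Net('deploy.prototxt', 'snapshot.caffemodel', caffe.TEST)

outputs = []
for _ in range(3):
    # Feed a different random input each time; real images would be loaded
    # and preprocessed here instead.
    net.blobs['data'].data[...] = np.random.rand(*net.blobs['data'].data.shape)
    net.forward()
    outputs.append(net.blobs['fc8'].data.copy())

# If the net has collapsed to a constant prediction, these will all be True.
print(np.allclose(outputs[0], outputs[1]), np.allclose(outputs[1], outputs[2]))
```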
