I0318 00:58:32.322571 2013098752 caffe.cpp:117] Use CPU.
I0318 00:58:32.643163 2013098752 caffe.cpp:121] Starting Optimization
I0318 00:58:32.643229 2013098752 solver.cpp:32] Initializing solver from parameters:
train_net: "examples/hdf5_classification/logreg_auto_train.prototxt"
test_net: "examples/hdf5_classification/logreg_auto_test.prototxt"
test_iter: 250
test_interval: 1000
base_lr: 0.01
display: 1000
max_iter: 10000
lr_policy: "step"
gamma: 0.1
momentum: 0.9
weight_decay: 0.0005
stepsize: 5000
snapshot: 10000
snapshot_prefix: "examples/hdf5_classification/data/train"
solver_mode: CPU
I0318 00:58:32.643333 2013098752 solver.cpp:61] Creating training net from train_net file: examples/hdf5_classification/logreg_auto_train.prototxt
I0318 00:58:32.643465 2013098752 net.cpp:42] Initializing net from parameters:
state {
  phase: TRAIN
}
layer {
  name: "data"
  type: "HDF5Data"
  top: "data"
  top: "label"
  hdf5_data_param {
    source: "examples/hdf5_classification/data/train.txt"
    batch_size: 10
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "data"
  top: "ip1"
  inner_product_param {
    num_output: 2
    weight_filler {
      type: "xavier"
    }
  }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip1"
  bottom: "label"
  top: "accuracy"
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip1"
  bottom: "label"
  top: "loss"
}
I0318 00:58:32.644197 2013098752 layer_factory.hpp:74] Creating layer data
I0318 00:58:32.644219 2013098752 net.cpp:84] Creating Layer data
I0318 00:58:32.644230 2013098752 net.cpp:338] data -> data
I0318 00:58:32.644256 2013098752 net.cpp:338] data -> label
I0318 00:58:32.644269 2013098752 net.cpp:113] Setting up data
I0318 00:58:32.644278 2013098752 hdf5_data_layer.cpp:66] Loading list of HDF5 filenames from: examples/hdf5_classification/data/train.txt
I0318 00:58:32.644327 2013098752 hdf5_data_layer.cpp:80] Number of HDF5 files: 2
I0318 00:58:32.646458 2013098752 net.cpp:120] Top shape: 10 4 (40)
I0318 00:58:32.646502 2013098752 net.cpp:120] Top shape: 10 (10)
I0318 00:58:32.646518 2013098752 layer_factory.hpp:74] Creating layer label_data_1_split
I0318 00:58:32.646538 2013098752 net.cpp:84] Creating Layer label_data_1_split
I0318 00:58:32.646546 2013098752 net.cpp:380] label_data_1_split <- label
I0318 00:58:32.646556 2013098752 net.cpp:338] label_data_1_split -> label_data_1_split_0
I0318 00:58:32.646569 2013098752 net.cpp:338] label_data_1_split -> label_data_1_split_1
I0318 00:58:32.646579 2013098752 net.cpp:113] Setting up label_data_1_split
I0318 00:58:32.646586 2013098752 net.cpp:120] Top shape: 10 (10)
I0318 00:58:32.646595 2013098752 net.cpp:120] Top shape: 10 (10)
I0318 00:58:32.646601 2013098752 layer_factory.hpp:74] Creating layer ip1
I0318 00:58:32.646615 2013098752 net.cpp:84] Creating Layer ip1
I0318 00:58:32.646622 2013098752 net.cpp:380] ip1 <- data
I0318 00:58:32.646664 2013098752 net.cpp:338] ip1 -> ip1
I0318 00:58:32.646689 2013098752 net.cpp:113] Setting up ip1
I0318 00:58:32.652330 2013098752 net.cpp:120] Top shape: 10 2 (20)
I0318 00:58:32.652371 2013098752 layer_factory.hpp:74] Creating layer ip1_ip1_0_split
I0318 00:58:32.652393 2013098752 net.cpp:84] Creating Layer ip1_ip1_0_split
I0318 00:58:32.652407 2013098752 net.cpp:380] ip1_ip1_0_split <- ip1
I0318 00:58:32.652421 2013098752 net.cpp:338] ip1_ip1_0_split -> ip1_ip1_0_split_0
I0318 00:58:32.652467 2013098752 net.cpp:338] ip1_ip1_0_split -> ip1_ip1_0_split_1
I0318 00:58:32.652480 2013098752 net.cpp:113] Setting up ip1_ip1_0_split
I0318 00:58:32.652489 2013098752 net.cpp:120] Top shape: 10 2 (20)
I0318 00:58:32.652498 2013098752 net.cpp:120] Top shape: 10 2 (20)
I0318 00:58:32.652505 2013098752 layer_factory.hpp:74] Creating layer accuracy
I0318 00:58:32.652521 2013098752 net.cpp:84] Creating Layer accuracy
I0318 00:58:32.652534 2013098752 net.cpp:380] accuracy <- ip1_ip1_0_split_0
I0318 00:58:32.652545 2013098752 net.cpp:380] accuracy <- label_data_1_split_0
I0318 00:58:32.652562 2013098752 net.cpp:338] accuracy -> accuracy
I0318 00:58:32.652577 2013098752 net.cpp:113] Setting up accuracy
I0318 00:58:32.652590 2013098752 net.cpp:120] Top shape: (1)
I0318 00:58:32.652642 2013098752 layer_factory.hpp:74] Creating layer loss
I0318 00:58:32.652655 2013098752 net.cpp:84] Creating Layer loss
I0318 00:58:32.652663 2013098752 net.cpp:380] loss <- ip1_ip1_0_split_1
I0318 00:58:32.652672 2013098752 net.cpp:380] loss <- label_data_1_split_1
I0318 00:58:32.652679 2013098752 net.cpp:338] loss -> loss
I0318 00:58:32.652689 2013098752 net.cpp:113] Setting up loss
I0318 00:58:32.652701 2013098752 layer_factory.hpp:74] Creating layer loss
I0318 00:58:32.652716 2013098752 net.cpp:120] Top shape: (1)
I0318 00:58:32.652724 2013098752 net.cpp:122] with loss weight 1
I0318 00:58:32.652740 2013098752 net.cpp:167] loss needs backward computation.
I0318 00:58:32.652746 2013098752 net.cpp:169] accuracy does not need backward computation.
I0318 00:58:32.652753 2013098752 net.cpp:167] ip1_ip1_0_split needs backward computation.
I0318 00:58:32.652760 2013098752 net.cpp:167] ip1 needs backward computation.
I0318 00:58:32.652786 2013098752 net.cpp:169] label_data_1_split does not need backward computation.
I0318 00:58:32.652801 2013098752 net.cpp:169] data does not need backward computation.
I0318 00:58:32.652808 2013098752 net.cpp:205] This network produces output accuracy
I0318 00:58:32.652815 2013098752 net.cpp:205] This network produces output loss
I0318 00:58:32.652825 2013098752 net.cpp:447] Collecting Learning Rate and Weight Decay.
I0318 00:58:32.652833 2013098752 net.cpp:217] Network initialization done.
I0318 00:58:32.652839 2013098752 net.cpp:218] Memory required for data: 528
I0318 00:58:32.652964 2013098752 solver.cpp:154] Creating test net (#0) specified by test_net file: examples/hdf5_classification/logreg_auto_test.prototxt
I0318 00:58:32.652986 2013098752 net.cpp:42] Initializing net from parameters:
state {
  phase: TEST
}
layer {
  name: "data"
  type: "HDF5Data"
  top: "data"
  top: "label"
  hdf5_data_param {
    source: "examples/hdf5_classification/data/test.txt"
    batch_size: 10
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "data"
  top: "ip1"
  inner_product_param {
    num_output: 2
    weight_filler {
      type: "xavier"
    }
  }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip1"
  bottom: "label"
  top: "accuracy"
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip1"
  bottom: "label"
  top: "loss"
}
I0318 00:58:32.653069 2013098752 layer_factory.hpp:74] Creating layer data
I0318 00:58:32.653080 2013098752 net.cpp:84] Creating Layer data
I0318 00:58:32.653090 2013098752 net.cpp:338] data -> data
I0318 00:58:32.653128 2013098752 net.cpp:338] data -> label
I0318 00:58:32.653146 2013098752 net.cpp:113] Setting up data
I0318 00:58:32.653154 2013098752 hdf5_data_layer.cpp:66] Loading list of HDF5 filenames from: examples/hdf5_classification/data/test.txt
I0318 00:58:32.653192 2013098752 hdf5_data_layer.cpp:80] Number of HDF5 files: 1
I0318 00:58:32.654850 2013098752 net.cpp:120] Top shape: 10 4 (40)
I0318 00:58:32.654897 2013098752 net.cpp:120] Top shape: 10 (10)
I0318 00:58:32.654914 2013098752 layer_factory.hpp:74] Creating layer label_data_1_split
I0318 00:58:32.654933 2013098752 net.cpp:84] Creating Layer label_data_1_split
I0318 00:58:32.654943 2013098752 net.cpp:380] label_data_1_split <- label
I0318 00:58:32.654953 2013098752 net.cpp:338] label_data_1_split -> label_data_1_split_0
I0318 00:58:32.654966 2013098752 net.cpp:338] label_data_1_split -> label_data_1_split_1
I0318 00:58:32.654976 2013098752 net.cpp:113] Setting up label_data_1_split
I0318 00:58:32.654985 2013098752 net.cpp:120] Top shape: 10 (10)
I0318 00:58:32.654992 2013098752 net.cpp:120] Top shape: 10 (10)
I0318 00:58:32.655000 2013098752 layer_factory.hpp:74] Creating layer ip1
I0318 00:58:32.655010 2013098752 net.cpp:84] Creating Layer ip1
I0318 00:58:32.655017 2013098752 net.cpp:380] ip1 <- data
I0318 00:58:32.655030 2013098752 net.cpp:338] ip1 -> ip1
I0318 00:58:32.655041 2013098752 net.cpp:113] Setting up ip1
I0318 00:58:32.655061 2013098752 net.cpp:120] Top shape: 10 2 (20)
I0318 00:58:32.655072 2013098752 layer_factory.hpp:74] Creating layer ip1_ip1_0_split
I0318 00:58:32.655148 2013098752 net.cpp:84] Creating Layer ip1_ip1_0_split
I0318 00:58:32.655159 2013098752 net.cpp:380] ip1_ip1_0_split <- ip1
I0318 00:58:32.655170 2013098752 net.cpp:338] ip1_ip1_0_split -> ip1_ip1_0_split_0
I0318 00:58:32.655180 2013098752 net.cpp:338] ip1_ip1_0_split -> ip1_ip1_0_split_1
I0318 00:58:32.655190 2013098752 net.cpp:113] Setting up ip1_ip1_0_split
I0318 00:58:32.655199 2013098752 net.cpp:120] Top shape: 10 2 (20)
I0318 00:58:32.655206 2013098752 net.cpp:120] Top shape: 10 2 (20)
I0318 00:58:32.655213 2013098752 layer_factory.hpp:74] Creating layer accuracy
I0318 00:58:32.655223 2013098752 net.cpp:84] Creating Layer accuracy
I0318 00:58:32.655230 2013098752 net.cpp:380] accuracy <- ip1_ip1_0_split_0
I0318 00:58:32.655237 2013098752 net.cpp:380] accuracy <- label_data_1_split_0
I0318 00:58:32.655251 2013098752 net.cpp:338] accuracy -> accuracy
I0318 00:58:32.655259 2013098752 net.cpp:113] Setting up accuracy
I0318 00:58:32.655267 2013098752 net.cpp:120] Top shape: (1)
I0318 00:58:32.655340 2013098752 layer_factory.hpp:74] Creating layer loss
I0318 00:58:32.655354 2013098752 net.cpp:84] Creating Layer loss
I0318 00:58:32.655361 2013098752 net.cpp:380] loss <- ip1_ip1_0_split_1
I0318 00:58:32.655369 2013098752 net.cpp:380] loss <- label_data_1_split_1
I0318 00:58:32.655378 2013098752 net.cpp:338] loss -> loss
I0318 00:58:32.655388 2013098752 net.cpp:113] Setting up loss
I0318 00:58:32.655397 2013098752 layer_factory.hpp:74] Creating layer loss
I0318 00:58:32.655414 2013098752 net.cpp:120] Top shape: (1)
I0318 00:58:32.655422 2013098752 net.cpp:122] with loss weight 1
I0318 00:58:32.655438 2013098752 net.cpp:167] loss needs backward computation.
I0318 00:58:32.655446 2013098752 net.cpp:169] accuracy does not need backward computation.
I0318 00:58:32.655455 2013098752 net.cpp:167] ip1_ip1_0_split needs backward computation.
I0318 00:58:32.655462 2013098752 net.cpp:167] ip1 needs backward computation.
I0318 00:58:32.655469 2013098752 net.cpp:169] label_data_1_split does not need backward computation.
I0318 00:58:32.655477 2013098752 net.cpp:169] data does not need backward computation.
I0318 00:58:32.655483 2013098752 net.cpp:205] This network produces output accuracy
I0318 00:58:32.655489 2013098752 net.cpp:205] This network produces output loss
I0318 00:58:32.655503 2013098752 net.cpp:447] Collecting Learning Rate and Weight Decay.
I0318 00:58:32.655511 2013098752 net.cpp:217] Network initialization done.
I0318 00:58:32.655517 2013098752 net.cpp:218] Memory required for data: 528
I0318 00:58:32.655547 2013098752 solver.cpp:42] Solver scaffolding done.
I0318 00:58:32.655567 2013098752 solver.cpp:222] Solving
I0318 00:58:32.655575 2013098752 solver.cpp:223] Learning Rate Policy: step
I0318 00:58:32.655583 2013098752 solver.cpp:266] Iteration 0, Testing net (#0)
I0318 00:58:32.683643 2013098752 solver.cpp:315] Test net output #0: accuracy = 0.3736
I0318 00:58:32.683686 2013098752 solver.cpp:315] Test net output #1: loss = 1.00555 (* 1 = 1.00555 loss)
I0318 00:58:32.683846 2013098752 solver.cpp:189] Iteration 0, loss = 0.869394
I0318 00:58:32.683861 2013098752 solver.cpp:204] Train net output #0: accuracy = 0.3
I0318 00:58:32.683871 2013098752 solver.cpp:204] Train net output #1: loss = 0.869394 (* 1 = 0.869394 loss)
I0318 00:58:32.683883 2013098752 solver.cpp:464] Iteration 0, lr = 0.01
I0318 00:58:32.698721 2013098752 solver.cpp:266] Iteration 1000, Testing net (#0)
I0318 00:58:32.701917 2013098752 solver.cpp:315] Test net output #0: accuracy = 0.7848
I0318 00:58:32.701961 2013098752 solver.cpp:315] Test net output #1: loss = 0.590972 (* 1 = 0.590972 loss)
I0318 00:58:32.702014 2013098752 solver.cpp:189] Iteration 1000, loss = 0.54742
I0318 00:58:32.702029 2013098752 solver.cpp:204] Train net output #0: accuracy = 0.7
I0318 00:58:32.702041 2013098752 solver.cpp:204] Train net output #1: loss = 0.54742 (* 1 = 0.54742 loss)
I0318 00:58:32.702051 2013098752 solver.cpp:464] Iteration 1000, lr = 0.01
I0318 00:58:32.718360 2013098752 solver.cpp:266] Iteration 2000, Testing net (#0)
I0318 00:58:32.721529 2013098752 solver.cpp:315] Test net output #0: accuracy = 0.7696
I0318 00:58:32.721562 2013098752 solver.cpp:315] Test net output #1: loss = 0.593946 (* 1 = 0.593946 loss)
I0318 00:58:32.721593 2013098752 solver.cpp:189] Iteration 2000, loss = 0.729569
I0318 00:58:32.721603 2013098752 solver.cpp:204] Train net output #0: accuracy = 0.5
I0318 00:58:32.721613 2013098752 solver.cpp:204] Train net output #1: loss = 0.729569 (* 1 = 0.729569 loss)
I0318 00:58:32.721622 2013098752 solver.cpp:464] Iteration 2000, lr = 0.01
I0318 00:58:32.740182 2013098752 solver.cpp:266] Iteration 3000, Testing net (#0)
I0318 00:58:32.743494 2013098752 solver.cpp:315] Test net output #0: accuracy = 0.77
I0318 00:58:32.743544 2013098752 solver.cpp:315] Test net output #1: loss = 0.591229 (* 1 = 0.591229 loss)
I0318 00:58:32.744209 2013098752 solver.cpp:189] Iteration 3000, loss = 0.406097
I0318 00:58:32.744231 2013098752 solver.cpp:204] Train net output #0: accuracy = 0.8
I0318 00:58:32.744249 2013098752 solver.cpp:204] Train net output #1: loss = 0.406096 (* 1 = 0.406096 loss)
I0318 00:58:32.744266 2013098752 solver.cpp:464] Iteration 3000, lr = 0.01
I0318 00:58:32.764135 2013098752 solver.cpp:266] Iteration 4000, Testing net (#0)
I0318 00:58:32.769110 2013098752 solver.cpp:315] Test net output #0: accuracy = 0.7848
I0318 00:58:32.769170 2013098752 solver.cpp:315] Test net output #1: loss = 0.590972 (* 1 = 0.590972 loss)
I0318 00:58:32.769223 2013098752 solver.cpp:189] Iteration 4000, loss = 0.54742
I0318 00:58:32.769242 2013098752 solver.cpp:204] Train net output #0: accuracy = 0.7
I0318 00:58:32.769255 2013098752 solver.cpp:204] Train net output #1: loss = 0.54742 (* 1 = 0.54742 loss)
I0318 00:58:32.769265 2013098752 solver.cpp:464] Iteration 4000, lr = 0.01
I0318 00:58:32.785846 2013098752 solver.cpp:266] Iteration 5000, Testing net (#0)
I0318 00:58:32.788722 2013098752 solver.cpp:315] Test net output #0: accuracy = 0.7696
I0318 00:58:32.788751 2013098752 solver.cpp:315] Test net output #1: loss = 0.593946 (* 1 = 0.593946 loss)
I0318 00:58:32.788811 2013098752 solver.cpp:189] Iteration 5000, loss = 0.72957
I0318 00:58:32.788833 2013098752 solver.cpp:204] Train net output #0: accuracy = 0.5
I0318 00:58:32.788846 2013098752 solver.cpp:204] Train net output #1: loss = 0.729569 (* 1 = 0.729569 loss)
I0318 00:58:32.788856 2013098752 solver.cpp:464] Iteration 5000, lr = 0.001
I0318 00:58:32.804762 2013098752 solver.cpp:266] Iteration 6000, Testing net (#0)
I0318 00:58:32.808061 2013098752 solver.cpp:315] Test net output #0: accuracy = 0.7856
I0318 00:58:32.808112 2013098752 solver.cpp:315] Test net output #1: loss = 0.59028 (* 1 = 0.59028 loss)
I0318 00:58:32.808732 2013098752 solver.cpp:189] Iteration 6000, loss = 0.415444
I0318 00:58:32.808753 2013098752 solver.cpp:204] Train net output #0: accuracy = 0.9
I0318 00:58:32.808773 2013098752 solver.cpp:204] Train net output #1: loss = 0.415444 (* 1 = 0.415444 loss)
I0318 00:58:32.808786 2013098752 solver.cpp:464] Iteration 6000, lr = 0.001
I0318 00:58:32.827118 2013098752 solver.cpp:266] Iteration 7000, Testing net (#0)
I0318 00:58:32.831614 2013098752 solver.cpp:315] Test net output #0: accuracy = 0.7848
I0318 00:58:32.831657 2013098752 solver.cpp:315] Test net output #1: loss = 0.589454 (* 1 = 0.589454 loss)
I0318 00:58:32.831707 2013098752 solver.cpp:189] Iteration 7000, loss = 0.538038
I0318 00:58:32.831728 2013098752 solver.cpp:204] Train net output #0: accuracy = 0.8
I0318 00:58:32.831745 2013098752 solver.cpp:204] Train net output #1: loss = 0.538037 (* 1 = 0.538037 loss)
I0318 00:58:32.831759 2013098752 solver.cpp:464] Iteration 7000, lr = 0.001
I0318 00:58:32.849634 2013098752 solver.cpp:266] Iteration 8000, Testing net (#0)
I0318 00:58:32.852712 2013098752 solver.cpp:315] Test net output #0: accuracy = 0.7796
I0318 00:58:32.852748 2013098752 solver.cpp:315] Test net output #1: loss = 0.589365 (* 1 = 0.589365 loss)
I0318 00:58:32.852792 2013098752 solver.cpp:189] Iteration 8000, loss = 0.684219
I0318 00:58:32.852840 2013098752 solver.cpp:204] Train net output #0: accuracy = 0.5
I0318 00:58:32.852852 2013098752 solver.cpp:204] Train net output #1: loss = 0.684219 (* 1 = 0.684219 loss)
I0318 00:58:32.852861 2013098752 solver.cpp:464] Iteration 8000, lr = 0.001
I0318 00:58:32.868440 2013098752 solver.cpp:266] Iteration 9000, Testing net (#0)
I0318 00:58:32.871438 2013098752 solver.cpp:315] Test net output #0: accuracy = 0.7816
I0318 00:58:32.871461 2013098752 solver.cpp:315] Test net output #1: loss = 0.589656 (* 1 = 0.589656 loss)
I0318 00:58:32.872109 2013098752 solver.cpp:189] Iteration 9000, loss = 0.421879
I0318 00:58:32.872131 2013098752 solver.cpp:204] Train net output #0: accuracy = 0.9
I0318 00:58:32.872143 2013098752 solver.cpp:204] Train net output #1: loss = 0.421879 (* 1 = 0.421879 loss)
I0318 00:58:32.872153 2013098752 solver.cpp:464] Iteration 9000, lr = 0.001
I0318 00:58:32.889981 2013098752 solver.cpp:334] Snapshotting to examples/hdf5_classification/data/train_iter_10000.caffemodel
I0318 00:58:32.890224 2013098752 solver.cpp:342] Snapshotting solver state to examples/hdf5_classification/data/train_iter_10000.solverstate
I0318 00:58:32.890362 2013098752 solver.cpp:248] Iteration 10000, loss = 0.538933
I0318 00:58:32.890380 2013098752 solver.cpp:266] Iteration 10000, Testing net (#0)
I0318 00:58:32.893728 2013098752 solver.cpp:315] Test net output #0: accuracy = 0.782
I0318 00:58:32.893757 2013098752 solver.cpp:315] Test net output #1: loss = 0.589366 (* 1 = 0.589366 loss)
I0318 00:58:32.893775 2013098752 solver.cpp:253] Optimization Done.
I0318 00:58:32.893786 2013098752 caffe.cpp:134] Optimization Done.
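The test-accuracy trajectory buried in a log like the one above can be pulled out with a small parser. The sketch below is illustrative, not part of Caffe itself: the function name and regular expressions are ours, keyed to the glog line formats visible in this log (`Iteration N, Testing net (#0)` followed by `Test net output #0: accuracy = X`).

```python
import re

def parse_test_accuracies(log_text):
    """Extract (iteration, accuracy) pairs from a Caffe training log.

    Pairs each 'Iteration N, Testing net' line with the
    'Test net output #0: accuracy = X' line that follows it.
    """
    iters = []
    accs = []
    for line in log_text.splitlines():
        m = re.search(r"Iteration (\d+), Testing net", line)
        if m:
            iters.append(int(m.group(1)))
            continue
        m = re.search(r"Test net output #0: accuracy = ([\d.]+)", line)
        if m and len(accs) < len(iters):
            accs.append(float(m.group(1)))
    return list(zip(iters, accs))

# A two-checkpoint excerpt from the log above:
sample = (
    "I0318 00:58:32.655583 2013098752 solver.cpp:266] Iteration 0, Testing net (#0)\n"
    "I0318 00:58:32.683643 2013098752 solver.cpp:315] Test net output #0: accuracy = 0.3736\n"
    "I0318 00:58:32.698721 2013098752 solver.cpp:266] Iteration 1000, Testing net (#0)\n"
    "I0318 00:58:32.701917 2013098752 solver.cpp:315] Test net output #0: accuracy = 0.7848\n"
)

print(parse_test_accuracies(sample))  # [(0, 0.3736), (1000, 0.7848)]
```

Applied to the full log, this yields the accuracy at each `test_interval` checkpoint (0.3736 at iteration 0 rising to roughly 0.78 thereafter), which is convenient for plotting convergence.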