caret and Neural Networks

Instructor: Alessandro Gagliardi
TA: Kevin Perko

install.packages("caret", dependencies = c("Depends", "Suggests"))

Last Time:

  • Principal Components Analysis
  • Eigenvalue Decomposition
  • Multicollinearity
  • Dimensionality Reduction

Questions?

  1. caret
  2. Neural Networks

An introduction to caret

(slides lovingly adapted from Max Kuhn's presentations)

The caret Package

The caret package, short for Classification And REgression Training, contains numerous tools for developing predictive models using the rich set of models available in R. The package focuses on

  • simplifying model training and tuning across a wide variety of modeling techniques
  • pre–processing training data
  • calculating variable importance
  • model visualizations

The package is available at the Comprehensive R Archive Network (CRAN). caret depends on over 25 other packages, although many of these are listed as "suggested" packages and are not automatically loaded when caret is loaded. Packages are loaded individually when a model is trained or used for prediction.


In [1]:
%load_ext rmagic

Test/Training Set Split

We will train on 75% of the data and hold out the remaining 25% for testing:


In [2]:
%%R
require("caret")
require("mlbench")
data(Sonar)
set.seed(107)
inTrain <- createDataPartition(y = Sonar$Class, p = 3/4, list = FALSE)
## The output is a set of integers for the rows of Sonar that belong in the training set.

trainDescr <- Sonar[inTrain,1:60]
testDescr <- Sonar[-inTrain,1:60]

trainClass <- Sonar$Class[inTrain]
print(length(trainClass))
testClass <- Sonar$Class[-inTrain]
print(length(testClass))


Loading required package: caret
Loading required package: lattice
Loading required package: ggplot2
Find out what's changed in ggplot2 with
news(Version == "0.9.3.1", package = "ggplot2")
Loading required package: mlbench
[1] 157
[1] 51

By default, createDataPartition does stratified random splits.
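
Because the split is stratified, the class proportions in the training rows should closely match those in the full data set. A quick check (a sketch, using the objects defined above):

prop.table(table(Sonar$Class))            # class proportions in the full data
prop.table(table(Sonar$Class[inTrain]))   # class proportions in the training rows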

Filtering Predictors

To help avoid multicollinearity, we remove predictors so that no pairwise (absolute) correlation between the remaining predictors exceeds 0.90:


In [3]:
%%R
print(ncol(trainDescr))
trainingCorr <- cor(trainDescr)
highCorr <- findCorrelation(trainingCorr, 0.90)
# returns an index of column numbers for removal

trainDescr <- trainDescr[, -highCorr]
testDescr <- testDescr[, -highCorr]
print(ncol(trainDescr))


[1] 60
[1] 57

Transforming Predictors

The preProcess class can be used to center and scale the predictors, as well as apply other transformations. By default, centering and scaling are done:


In [4]:
%%R
xTrans <- preProcess(trainDescr, method = c("center", "scale"))
trainDescr <- predict(xTrans, trainDescr)
testDescr <- predict(xTrans, testDescr)

To apply PCA to predictors in the training, test or other data, you can use:


In [5]:
%%R 
xTrans <- preProcess(trainDescr, method = "pca")
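
The resulting preProcess object is applied with predict, just as with centering and scaling above; a minimal sketch (the trainPC and testPC names are our own):

trainPC <- predict(xTrans, trainDescr)  # principal component scores for the training set
testPC  <- predict(xTrans, testDescr)   # the same rotation applied to the test set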

Cross-Validation

caret gives us an easy way to do cross-validation during the training of our model.


In [6]:
%%R
trControl <- trainControl(method="cv", number=25)
logFit <- train(x = trainDescr, y = trainClass, 
                method='glm', family=binomial(link="logit"), 
                trControl = trControl)
logFit


Loading required package: class
Generalized Linear Model 

157 samples
 57 predictors
  2 classes: 'M', 'R' 

No pre-processing
Resampling: Cross-Validated (25 fold) 

Summary of sample sizes: 151, 151, 151, 150, 151, 150, ... 

Resampling results

  Accuracy  Kappa  Accuracy SD  Kappa SD
  0.742     0.491  0.189        0.371   

 

In [7]:
%%R -w 960 -h 480 -u px
resampleHist(logFit)


Tuning Models using Resampling

Resampling (e.g. the bootstrap or cross-validation) can also be used to choose the values of model tuning parameters (if any). We come up with a set of candidate values for these parameters and fit a series of models for each tuning-parameter combination: for each combination, we fit $B$ models to the $B$ resamples of the training data. There are also $B$ sets of samples that are not in the resamples; these are predicted with each model, yielding $B$ sets of performance values for each candidate combination. Performance is estimated by averaging the $B$ performance values. A minimal sketch of this idea is shown below.
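
A sketch of the tuning loop, using bootstrap resamples and class::knn directly (the candidate values and B below are illustrative; caret's train does all of this for us):

candidates <- c(5, 7, 9)                   # candidate values of the tuning parameter k
B <- 10                                    # number of resamples
acc <- matrix(NA, nrow = B, ncol = length(candidates))
set.seed(1)
for (b in 1:B) {
  idx <- sample(nrow(trainDescr), replace = TRUE)    # bootstrap resample
  oob <- setdiff(seq_len(nrow(trainDescr)), idx)     # samples not in the resample
  for (j in seq_along(candidates)) {
    pred <- class::knn(trainDescr[idx, ], trainDescr[oob, ],
                       trainClass[idx], k = candidates[j])
    acc[b, j] <- mean(pred == trainClass[oob])       # performance on the held-out samples
  }
}
colMeans(acc)                              # average performance for each candidate value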

Tuning Models using Resampling

As an example, k-Nearest Neighbors has the tuning parameter $k$

We can train over 5 values of $k$: 5, 7, 9, 11, and 13.

$B = 25$ resamples will be used. Because we reuse the trControl object from above, these are the 25 cross-validation folds defined earlier (without a trainControl argument, caret's default is 25 bootstrap iterations).
We use:


In [8]:
%%R
knnFit <- train(x = trainDescr, y = trainClass, trControl = trControl,
                method = "knn", tuneLength = 5)

The train Function


In [9]:
%R print(knnFit)


k-Nearest Neighbors 

157 samples
 57 predictors
  2 classes: 'M', 'R' 

No pre-processing
Resampling: Cross-Validated (25 fold) 

Summary of sample sizes: 151, 151, 150, 151, 150, 151, ... 

Resampling results across tuning parameters:

  k   Accuracy  Kappa  Accuracy SD  Kappa SD
  5   0.8       0.584  0.168        0.349   
  7   0.782     0.547  0.169        0.352   
  9   0.776     0.533  0.177        0.366   
  11  0.775     0.531  0.171        0.356   
  13  0.774     0.532  0.194        0.395   

Accuracy was used to select the optimal model using  the largest value.
The final value used for the model was k = 5. 

The Final Model

Resampling indicated that $k = 5$ is the best value. train then fits a final model with this value and saves it in the returned object:


In [10]:
%%R 
knnFit$finalModel


5-nearest neighbor classification model

Call:
knn3.matrix(x = as.matrix(x), y = y, k = param$k)

Training set class distribution:

 M  R 
84 73 

Other Tuning Values

If you don’t like the default candidate values, you can create your own.


In [11]:
%%R
knnFit <- train(x = trainDescr, y = trainClass, method = "knn", trControl = trControl, 
      tuneGrid = expand.grid(k=seq(1,21,2)))
knnFit


k-Nearest Neighbors 

157 samples
 57 predictors
  2 classes: 'M', 'R' 

No pre-processing
Resampling: Cross-Validated (25 fold) 

Summary of sample sizes: 151, 150, 150, 151, 151, 151, ... 

Resampling results across tuning parameters:

  k   Accuracy  Kappa  Accuracy SD  Kappa SD
  1   0.84      0.678  0.163        0.327   
  3   0.803     0.592  0.147        0.298   
  5   0.81      0.612  0.128        0.256   
  7   0.771     0.532  0.154        0.308   
  9   0.778     0.541  0.146        0.29    
  11  0.779     0.548  0.142        0.282   
  13  0.781     0.552  0.168        0.334   
  15  0.736     0.458  0.166        0.334   
  17  0.724     0.43   0.175        0.357   
  19  0.723     0.426  0.152        0.315   
  21  0.716     0.413  0.15         0.312   

Accuracy was used to select the optimal model using  the largest value.
The final value used for the model was k = 1. 

In [12]:
%%R
plot(knnFit)


Predictions

Since the output of train contains the final model object, you can use its predict methods as usual:


In [13]:
%%R
head(predict(knnFit$finalModel, newdata = testDescr))


     M R
[1,] 1 0
[2,] 0 1
[3,] 0 1
[4,] 1 0
[5,] 0 1
[6,] 0 1

However, predict can have nuanced syntax depending on the model in question. Instead, we can use the caret functions extractPrediction and extractProb, which handle the inconsistent syntax for us.

They can also handle multiple models at once.

Using extractPrediction to Get Class Predictions


In [14]:
%%R
predValues <- extractPrediction(list(
                knnFit,
                logFit),
           testX = testDescr,
           testY = testClass)
testValues <- subset(predValues, dataType == "Test")
str(testValues)


'data.frame':	102 obs. of  5 variables:
 $ obs     : Factor w/ 2 levels "M","R": 2 2 2 2 2 2 2 2 2 2 ...
 $ pred    : Factor w/ 2 levels "M","R": 1 2 2 1 2 2 2 2 2 2 ...
 $ model   : Factor w/ 2 levels "glm","knn": 2 2 2 2 2 2 2 2 2 2 ...
 $ dataType: Factor w/ 2 levels "Test","Training": 1 1 1 1 1 1 1 1 1 1 ...
 $ object  : Factor w/ 2 levels "Object1","Object2": 1 1 1 1 1 1 1 1 1 1 ...

Using extractProb to Get Class Probabilities


In [15]:
%%R
probValues <- extractProb(list(knnFit, logFit),
                          testX = testDescr,
                          testY = testClass)
testProbs <- subset(probValues,
                    dataType == "Test")
str(testProbs)


'data.frame':	102 obs. of  7 variables:
 $ M       : num  1 0 0 1 0 0 0 0 0 0 ...
 $ R       : num  0 1 1 0 1 1 1 1 1 1 ...
 $ obs     : Factor w/ 2 levels "M","R": 2 2 2 2 2 2 2 2 2 2 ...
 $ pred    : Factor w/ 2 levels "M","R": 1 2 2 1 2 2 2 2 2 2 ...
 $ model   : chr  "knn" "knn" "knn" "knn" ...
 $ dataType: chr  "Test" "Test" "Test" "Test" ...
 $ object  : chr  "Object1" "Object1" "Object1" "Object1" ...

Evaluating Performance

For classification models, there are functions to compute the confusion matrix and associated statistics. There are also functions for two–class problems: sensitivity, specificity and so on.

The function confusionMatrix calculates statistics for a data set. The no–information rate (NIR) is estimated as the largest class proportion in the data set. A one–sided statistical test is done to see if the observed accuracy is greater than the NIR.
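
For reference, the NIR is simply the largest class proportion; a one-line sketch using the test classes from above:

max(prop.table(table(testClass)))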

Confusion Matrices and Statistics


In [16]:
%%R
knnPred <- subset(testValues, model == "knn")
confusionMatrix(knnPred$pred, knnPred$obs)


Confusion Matrix and Statistics

          Reference
Prediction  M  R
         M 22  4
         R  5 20
                                         
               Accuracy : 0.8235         
                 95% CI : (0.6913, 0.916)
    No Information Rate : 0.5294         
    P-Value [Acc > NIR] : 1.117e-05      
                                         
                  Kappa : 0.6467         
 Mcnemar's Test P-Value : 1              
                                         
            Sensitivity : 0.8148         
            Specificity : 0.8333         
         Pos Pred Value : 0.8462         
         Neg Pred Value : 0.8000         
             Prevalence : 0.5294         
         Detection Rate : 0.4314         
   Detection Prevalence : 0.5098         
      Balanced Accuracy : 0.8241         
                                         
       'Positive' Class : M              
                                         

In [17]:
%%R
logPred <- subset(testValues, model == "glm")
confusionMatrix(logPred$pred, logPred$obs)


Confusion Matrix and Statistics

          Reference
Prediction  M  R
         M 14  5
         R 13 19
                                          
               Accuracy : 0.6471          
                 95% CI : (0.5007, 0.7757)
    No Information Rate : 0.5294          
    P-Value [Acc > NIR] : 0.06052         
                                          
                  Kappa : 0.3045          
 Mcnemar's Test P-Value : 0.09896         
                                          
            Sensitivity : 0.5185          
            Specificity : 0.7917          
         Pos Pred Value : 0.7368          
         Neg Pred Value : 0.5937          
             Prevalence : 0.5294          
         Detection Rate : 0.2745          
   Detection Prevalence : 0.3725          
      Balanced Accuracy : 0.6551          
                                          
       'Positive' Class : M               
                                          

Neural Networks

History of Neural Networks

  • The Neuron Doctrine (1899)
  • The Model Neuron (1943)
  • Parallel Distributed Processing (1986)
  • Deep Learning

The Neuron Doctrine (~1899)


Santiago Ramon y Cajal (1852-1934)

  • Neurons are the fundamental processing unit of the brain
  • Receive input from many sources
  • Direct output to many other neurons

    The Model Neuron (1943)

    Walter Pitts (1923 - 1969) and Warren McCulloch (1898 - 1969)

    Parallel Distributed Processing (PDP) (1986)

    David Rumelhart (1942 - 2011) and James McClelland (1948 - )

    Deep Learning

    Geoffrey Hinton (1947 - )

    What Are Artificial Neural Networks?

    (this section lovingly adapted from Vincent Cheung and Kevin Cannons)

    • An extremely simplified model of the brain
    • Essentially a function approximator
      • Transforms inputs into outputs to the best of its ability
    • Composed of many "neurons" that co-operate to perform the desired function

    "Neurons," in this case, can be thought of as logistic regressors.

    What Are They Used For?

    • Classification
      • Pattern recognition, feature extraction, image matching
    • Noise Reduction
      • Recognize patterns in the inputs and produce noiseless outputs
    • Prediction
      • Extrapolation based on historical data

    Why Use Neural Networks?

    • Ability to learn
      • NN’s figure out how to perform their function on their own
      • Determine their function based only upon sample inputs
    • Ability to generalize
      • i.e. produce reasonable outputs for inputs it has not been taught how to deal with

    How Do Neural Networks Work?

    • The output of a neuron is a function of the weighted sum of the inputs plus a bias

      $$ Output = f(i_1w_1 + i_2w_2 + \ldots + i_nw_n + bias) $$

    • The function of the entire neural network is simply the computation of the outputs of all the neurons
      • An entirely deterministic calculation
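
    A minimal sketch of a single neuron in R, following the formula above with a logistic activation (the inputs, weights, and bias values are illustrative):

    neuron <- function(inputs, weights, bias) {
        # weighted sum of the inputs plus a bias, passed through a logistic activation
        1 / (1 + exp(-(sum(inputs * weights) + bias)))
    }
    neuron(inputs = c(1, 0, 1), weights = c(0.5, 0.2, 0.8), bias = -1)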

    Activation Functions

    • Applied to the weighted sum of the inputs of a neuron to produce the output
    • Majority of NN’s use sigmoid functions
      • Smooth, continuous, and monotonically increasing (derivative is always positive)
      • Bounded range - but never reaches max or min
    • Consider “ON” to be slightly less than the max and “OFF” to be slightly greater than the min

    Activation Functions

    • The most common sigmoid function used is the logistic function
      • $f(x) = \frac{1}{(1 + e^{-x})}$
      • The calculation of derivatives is important for neural networks, and the logistic function has a very nice derivative:
        • $f'(x) = f(x)(1 - f(x))$
    • Other sigmoid functions also used
      • hyperbolic tangent
      • arctangent
    • The exact nature of the function has little effect on the abilities of the neural network
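
    A quick numerical check of $f'(x) = f(x)(1 - f(x))$ (a sketch; the test point is arbitrary):

    f <- function(x) 1 / (1 + exp(-x))   # the logistic function
    x <- 0.7
    (f(x + 1e-6) - f(x)) / 1e-6          # finite-difference approximation of f'(x)
    f(x) * (1 - f(x))                    # closed form; the two agree closely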

    Where Do The Weights Come From?

    • The weights in a neural network are the most important factor in determining its function
    • Training is the act of presenting the network with some sample data and modifying the weights to better approximate the desired function
    • There are two main types of training
      • Supervised Training
      • Unsupervised Training

    Supervised Training

    • Supplies the neural network with inputs and the desired outputs
    • Response of the network to the inputs is measured
      • The weights are modified to reduce the difference between the actual and desired outputs

    Unsupervised Training

    • Only supplies inputs
    • The neural network adjusts its own weights so that similar inputs cause similar outputs
      • The network identifies the patterns and differences in the inputs without any external assistance

    Where Do The Weights Come From?

    • Epoch
      • One iteration through the process of providing the network with an input and updating the network's weights
      • Typically many epochs are required to train the neural network

    Perceptrons

    • First neural network with the ability to learn
    • Made up of only input neurons and output neurons
    • Input neurons typically have two states: ON and OFF
    • Output neurons use a simple threshold activation function
    • In basic form, can only solve linear problems
      • Limited applications

    How Do Perceptrons Learn?

    • Uses supervised training
    • If the output is not correct, the weights are adjusted according to the formula:
      • $w_{new} = w_{old} + \alpha(desired - output) \times input$
        where $\alpha$ is the learning rate

    Example:
    Given input: $[1, 0, 1]$
    and initial weights: $[0.5, 0.2, 0.8]$

    Assuming Output Threshold = 1.2
    $1 \times 0.5 + 0 \times 0.2 + 1 \times 0.8 = 1.3 > 1.2$

    Assume the output was supposed to be 0, so we update the weights.

    Assume $\alpha = 1$:
    $W_{1_{new}} = 0.5 + 1\times(0-1)\times1 = -0.5$
    $W_{2_{new}} = 0.2 + 1\times(0-1)\times0 = 0.2$
    $W_{3_{new}} = 0.8 + 1\times(0-1)\times1 = -0.2$
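
    The same update reproduced in R (a sketch of the hand calculation above):

    alpha     <- 1
    input     <- c(1, 0, 1)
    weights   <- c(0.5, 0.2, 0.8)
    threshold <- 1.2
    output    <- as.numeric(sum(input * weights) > threshold)  # 1.3 > 1.2, so output = 1
    desired   <- 0
    weights   <- weights + alpha * (desired - output) * input
    weights   # -0.5  0.2 -0.2, matching the hand calculation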

    Multilayer Feedforward Networks

    • Most common neural network
    • An extension of the perceptron
      • Multiple layers
        • The addition of one or more “hidden” layers in between the input and output layers
      • Activation function is not simply a threshold
        • Usually a sigmoid function
      • A general function approximator
        • Not limited to linear problems
    • Information flows in one direction
      • The outputs of one layer act as inputs to the next layer

    XOR Example

    N.B. XOR cannot be solved by logistic regression (a linear model); it exemplifies the difference between a linear and a non-linear model.
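
    A sketch with nnet: a single hidden layer of two units can represent XOR (the data frame and settings below are illustrative, and nnet's random starting weights occasionally land it in a local minimum, so re-running may be needed):

    library(nnet)
    xor.data <- data.frame(x1 = c(0, 0, 1, 1),
                           x2 = c(0, 1, 0, 1),
                           y  = c(0, 1, 1, 0))
    set.seed(2)
    xor.net <- nnet(y ~ x1 + x2, data = xor.data, size = 2, maxit = 1000, trace = FALSE)
    round(predict(xor.net, xor.data), 2)   # should be close to 0, 1, 1, 0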

    Backpropagation

    • Most common method of obtaining the many weights in the network
    • A form of supervised training
    • The basic backpropagation algorithm is based on minimizing the error of the network using the derivatives of the error function
      • Simple
      • Slow
      • Prone to local minima issues

    Backpropagation

    • The most common measure of error is the mean square error:
      $E = (target - output)^2$
    • Partial derivatives of the error with respect to the weights:
      • Output Neurons:
        let: $\delta_j = f'(net_j)(target_j - output_j)$
        $\frac{\partial E}{\partial w_{ij}} = -output_i\delta_j$
        $j$ = output neuron; $i$ = neuron in the last hidden layer
      • Hidden Neurons:
        let: $\delta_j = f'(net_j)\sum_k{\delta_k w_{kj}}$
        $\frac{\partial E}{\partial w_{ij}} = -output_i\delta_j$
        $j$ = hidden neuron; $i$ = neuron in the previous layer; $k$ = neuron in the next layer

    Backpropagation

    • Calculation of the derivatives flows backwards through the network, hence the name, backpropagation
    • These derivatives point in the direction of the maximum increase of the error function
    • A small step (learning rate) in the opposite direction will result in the maximum decrease of the (local) error function:
      $w_{new} = w_{old} - \alpha\frac{\partial E}{\partial w_{old}}$
      where $\alpha$ is the learning rate
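
    A one-pass sketch of these updates for a tiny 2-2-1 network with logistic units (the inputs, weights, target, and $\alpha$ are illustrative; bias updates are omitted for brevity):

    f  <- function(x) 1 / (1 + exp(-x))                   # logistic activation
    x  <- c(1, 0); target <- 1; alpha <- 0.5
    W1 <- matrix(0.1, nrow = 2, ncol = 2); b1 <- c(0, 0)  # input -> hidden weights
    w2 <- c(0.2, -0.3);                    b2 <- 0        # hidden -> output weights

    h   <- as.vector(f(W1 %*% x + b1))                    # hidden-layer outputs
    out <- f(sum(w2 * h) + b2)                            # network output

    delta_out <- out * (1 - out) * (target - out)         # f'(net)(target - output)
    delta_hid <- h * (1 - h) * (w2 * delta_out)           # f'(net) * sum over next layer

    w2 <- w2 + alpha * delta_out * h                      # w_new = w_old - alpha * dE/dw
    W1 <- W1 + alpha * delta_hid %*% t(x)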

    Backpropagation

    • The learning rate is important
      • Too small
        • Convergence extremely slow
      • Too large
        • May not converge
    • Momentum
      • Tends to aid convergence
      • Applies smoothed averaging to the change in weights:
        $\Delta_{new} = \beta \Delta_{old} - \alpha \frac{\partial E}{\partial w_{old}}$
        where $\beta$ is the momentum coefficient, and $w_{new} = w_{old} + \Delta_{new}$
      • Acts as a low-pass filter by reducing rapid fluctuations
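
    A tiny sketch of the momentum update for a single weight (all values are illustrative placeholders):

    alpha   <- 0.1    # learning rate
    beta    <- 0.9    # momentum coefficient
    w       <- 1.0    # current weight
    grad    <- 0.05   # dE/dw at this step (placeholder)
    delta_w <- 0      # smoothed weight change, carried between steps
    delta_w <- beta * delta_w - alpha * grad
    w       <- w + delta_w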

    Local Minima

    • Training is essentially minimizing the mean square error function
      • Key problem is avoiding local minima
      • Traditional techniques for avoiding local minima:
        • Simulated annealing
          • Perturb the weights in progressively smaller amounts
        • Genetic algorithms
          • Use the weights as chromosomes
          • Apply natural selection, mating, and mutations to these chromosomes

    Neural Networks in Practice

    
    
    In [18]:
    %%R
    library(nnet)
    library(devtools)
    source_url('https://gist.github.com/fawda123/7471137/raw/c720af2cea5f312717f020a09946800d55b8f45b/nnet_plot_update.r')
    
    
    
    
    SHA-1 hash of file is 3e535ef9cbcad648f0fffb89a8879e12c09be1e7
    

    Unsupervised

    
    
    In [24]:
    %%R
    eight <- data.frame(X1=c(1, rep(0, 7)), X2=c(0,1,rep(0,6)), X3=c(0,0,1,rep(0,5)), X4=c(0,0,0,1,rep(0,4)), 
                        X5=c(rep(0,4),1,0,0,0), X6=c(rep(0,5),1,0,0), X7=c(rep(0,6),1,0), X8=c(rep(0,7),1))
    eight
    
    
    
    
      X1 X2 X3 X4 X5 X6 X7 X8
    1  1  0  0  0  0  0  0  0
    2  0  1  0  0  0  0  0  0
    3  0  0  1  0  0  0  0  0
    4  0  0  0  1  0  0  0  0
    5  0  0  0  0  1  0  0  0
    6  0  0  0  0  0  1  0  0
    7  0  0  0  0  0  0  1  0
    8  0  0  0  0  0  0  0  1
    
    
    
    In [37]:
    %%R
    library(nnet)
    eight.net <- nnet(x = eight, y = eight, size = 3)
    
    
    
    
    # weights:  59
    initial  value 19.013140 
    iter  10 value 6.037264
    iter  20 value 2.095977
    iter  30 value 0.048524
    iter  40 value 0.000155
    final  value 0.000067 
    converged
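
    To confirm that the 8-3-8 encoder has learned the identity mapping, we can inspect its outputs on the training patterns (a sketch):

    round(predict(eight.net, eight), 2)   # should recover the 8x8 identity pattern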
    
    
    
    In [20]:
    %%R
    plot(eight.net)
    
    
    
    
    Loading required package: scales
    Loading required package: reshape
    Loading required package: plyr
    
    Attaching package: ‘reshape’
    
    The following objects are masked from ‘package:plyr’:
    
        rename, round_any
    
    The following object is masked from ‘package:class’:
    
        condense
    
    
    
    
    In [54]:
    %%R
    plot(eight.net)
    
    
    
    
    
    
    In [41]:
    %%R
    # For input pattern i, sum the bias and the weight from input unit i
    # for each of the three hidden nodes (i.e. each hidden node's net input
    # when only input i is on), using the weights extracted from eight.net.
    wts <- plot.nnet(eight.net, wts.only = TRUE)
    hidden_sums <- function(i) {
        c(sum(wts[['hidden 1 1']][c(1, i + 1)]),
          sum(wts[['hidden 1 2']][c(1, i + 1)]),
          sum(wts[['hidden 1 3']][c(1, i + 1)]))
    }
    t(sapply(c(1:8), hidden_sums))
    
    
    
    
                [,1]       [,2]        [,3]
    [1,]  -53.493462 -78.163640  -97.705215
    [2,]   41.023794  -1.851191    1.150396
    [3,]   25.064062  94.200762  101.144791
    [4,]  -75.822102 104.801736  -84.204842
    [5,]   -1.921834 -98.268453  136.608166
    [6,]   44.748766 140.694407 -113.532087
    [7,]  137.462885 -64.387803  -13.907567
    [8,] -127.439960  87.422128   70.566624
    
    
    
    In [42]:
    %%R
    t(sapply(c(1:8), hidden_sums) > 1) * 1
    
    
    
    
         [,1] [,2] [,3]
    [1,]    0    0    0
    [2,]    1    0    1
    [3,]    1    1    1
    [4,]    0    1    0
    [5,]    0    0    1
    [6,]    1    1    0
    [7,]    1    0    0
    [8,]    0    1    1
    

    Supervised

    
    
    In [138]:
    %%R
    nnet(trainClass ~ ., data=trainDescr, size = 3, decay = 5e-4)
    
    
    
    
    # weights:  187
    initial  value 97.094228 
    iter  10 value 48.353877
    iter  20 value 10.988537
    iter  30 value 5.351126
    iter  40 value 2.336103
    iter  50 value 1.836484
    iter  60 value 1.572561
    iter  70 value 1.422582
    iter  80 value 1.325231
    iter  90 value 1.268751
    iter 100 value 1.228825
    final  value 1.228825 
    stopped after 100 iterations
    a 60-3-1 network with 187 weights
    inputs: V1 V2 V3 V4 V5 V6 V7 V8 V9 V10 V11 V12 V13 V14 V15 V16 V17 V18 V19 V20 V21 V22 V23 V24 V25 V26 V27 V28 V29 V30 V31 V32 V33 V34 V35 V36 V37 V38 V39 V40 V41 V42 V43 V44 V45 V46 V47 V48 V49 V50 V51 V52 V53 V54 V55 V56 V57 V58 V59 V60 
    output(s): trainClass 
    options were - entropy fitting  decay=5e-04
    
    
    
    In [43]:
    %%R
    library(nnet)
    nnet(trainClass ~ ., data=trainDescr, size = 3, decay = 5e-4, trace=FALSE)
    
    
    
    
    a 57-3-1 network with 178 weights
    inputs: V1 V2 V3 V4 V5 V6 V7 V8 V9 V10 V11 V12 V13 V14 V16 V17 V19 V21 V22 V23 V24 V25 V26 V27 V28 V29 V30 V31 V32 V33 V34 V35 V36 V37 V38 V39 V40 V41 V42 V43 V44 V45 V46 V47 V48 V49 V50 V51 V52 V53 V54 V55 V56 V57 V58 V59 V60 
    output(s): trainClass 
    options were - entropy fitting  decay=5e-04
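
    A sketch of using such a fit for prediction (here we refit and keep the model in a variable of our own, sonar.net, so we can call predict on the test set):

    sonar.net <- nnet(trainClass ~ ., data = trainDescr, size = 3,
                      decay = 5e-4, trace = FALSE)
    table(predicted = predict(sonar.net, testDescr, type = "class"),
          observed  = testClass)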
    

    Tuning Models using Resampling

    The number of hidden nodes and the decay can both greatly affect the success of a neural net. caret to the rescue:

    
    
    In [82]:
    %%R
    eights <- rbind(eight, eight, eight, eight, eight, eight, eight, eight)
    eightTrain <- createDataPartition(y = seq(1:nrow(eights)), p = 7/8, list = FALSE)
    trainEight <- eights[eightTrain,]
    testEight <- eights[-eightTrain,]
    is(apply(testEight, 2, factor)[,2])
    
    
    
    
    [1] "character"           "vector"              "data.frameRowLabels"
    [4] "SuperClassMethod"    "EnumerationValue"   
    
    
    
    In [86]:
    %%R
    nnetFit <- train(x = trainEight, y = apply(testEight, 2, factor),
                     method = "nnet", #trace=FALSE,
                     tuneLength = 3)
    plot(nnetFit)
    
    
    
    
    # weights:  11
    initial  value 8.673728 
    iter  10 value 4.999045
    iter  20 value 4.999025
    iter  30 value 4.999004
    iter  40 value 4.998983
    iter  50 value 4.998960
    iter  60 value 4.998937
    iter  70 value 4.998913
    iter  80 value 4.998887
    iter  90 value 4.998860
    iter 100 value 4.998833
    final  value 4.998833 
    stopped after 100 iterations
    # weights:  31
    initial  value 10.609366 
    final  value 5.000000 
    converged
    # weights:  51
    initial  value 16.708649 
    final  value 5.000000 
    converged
    # weights:  11
    initial  value 18.270834 
    iter  10 value 4.881627
    final  value 4.875318 
    converged
    # weights:  31
    initial  value 9.803649 
    iter  10 value 4.779104
    iter  20 value 4.775880
    final  value 4.775879 
    converged
    # weights:  51
    initial  value 8.163138 
    iter  10 value 4.727742
    iter  20 value 4.727095
    iter  20 value 4.727095
    iter  20 value 4.727095
    final  value 4.727095 
    converged
    # weights:  11
    initial  value 25.260746 
    iter  10 value 5.002238
    iter  20 value 5.001798
    iter  30 value 5.001298
    iter  40 value 5.000723
    iter  50 value 5.000053
    iter  60 value 4.999265
    iter  70 value 4.998320
    iter  80 value 4.997166
    iter  90 value 4.995726
    iter 100 value 4.993876
    final  value 4.993876 
    stopped after 100 iterations
    # weights:  31
    initial  value 11.097899 
    iter  10 value 5.005318
    iter  20 value 5.005263
    iter  30 value 5.005208
    iter  40 value 5.005151
    iter  50 value 5.005092
    iter  60 value 5.005032
    iter  70 value 5.004971
    iter  80 value 5.004907
    iter  90 value 5.004842
    iter 100 value 5.004774
    final  value 5.004774 
    stopped after 100 iterations
    # weights:  51
    initial  value 4.646928 
    iter  10 value 4.267219
    iter  20 value 3.518633
    iter  30 value 3.511242
    iter  40 value 3.507777
    iter  50 value 3.506980
    iter  60 value 3.506815
    iter  70 value 3.506137
    iter  80 value 3.505601
    iter  90 value 3.505180
    iter 100 value 3.505060
    final  value 3.505060 
    stopped after 100 iterations
    # weights:  11
    initial  value 17.716435 
    final  value 7.000000 
    converged
    # weights:  31
    initial  value 14.860389 
    final  value 7.000000 
    converged
    # weights:  51
    initial  value 18.894532 
    final  value 7.000000 
    converged
    # weights:  11
    initial  value 10.486567 
    iter  10 value 6.166966
    final  value 6.107471 
    converged
    # weights:  31
    initial  value 10.622843 
    iter  10 value 6.080145
    iter  20 value 6.047568
    iter  30 value 6.045179
    final  value 6.045150 
    converged
    # weights:  51
    initial  value 20.397641 
    iter  10 value 6.065362
    iter  20 value 6.046806
    iter  30 value 6.038947
    iter  40 value 6.038931
    final  value 6.038931 
    converged
    # weights:  11
    initial  value 18.367755 
    iter  10 value 7.003787
    iter  20 value 7.003490
    iter  30 value 7.003159
    iter  40 value 7.002789
    iter  50 value 7.002371
    iter  60 value 7.001894
    iter  70 value 7.001345
    iter  80 value 7.000705
    iter  90 value 6.999949
    iter 100 value 6.999040
    final  value 6.999040 
    stopped after 100 iterations
    # weights:  31
    initial  value 10.628473 
    iter  10 value 7.005315
    iter  20 value 7.005244
    iter  30 value 7.005170
    iter  40 value 7.005094
    iter  50 value 7.005014
    iter  60 value 7.004931
    iter  70 value 7.004843
    iter  80 value 7.004752
    iter  90 value 7.004656
    iter 100 value 7.004554
    final  value 7.004554 
    stopped after 100 iterations
    # weights:  51
    initial  value 23.789082 
    iter  10 value 7.000137
    iter  20 value 6.999516
    iter  30 value 6.998706
    iter  40 value 6.997600
    iter  50 value 6.995996
    iter  60 value 6.993468
    iter  70 value 6.988926
    iter  80 value 6.978673
    iter  90 value 6.939879
    iter 100 value 5.680219
    final  value 5.680219 
    stopped after 100 iterations
    # weights:  11
    initial  value 25.633738 
    final  value 1.999999 
    converged
    # weights:  31
    initial  value 10.080237 
    final  value 1.999993 
    converged
    # weights:  51
    initial  value 17.377625 
    final  value 2.000000 
    converged
    # weights:  11
    initial  value 6.951183 
    iter  10 value 2.467581
    final  value 2.467580 
    converged
    # weights:  31
    initial  value 14.092601 
    iter  10 value 2.334016
    iter  20 value 2.332511
    iter  20 value 2.332511
    iter  20 value 2.332511
    final  value 2.332511 
    converged
    # weights:  51
    initial  value 6.625965 
    iter  10 value 2.257854
    final  value 2.257832 
    converged
    # weights:  11
    initial  value 10.746226 
    iter  10 value 2.000334
    iter  20 value 1.999931
    iter  30 value 1.999457
    iter  40 value 1.998888
    iter  50 value 1.998191
    iter  60 value 1.997314
    iter  70 value 1.996175
    iter  80 value 1.994635
    iter  90 value 1.992436
    iter 100 value 1.989065
    final  value 1.989065 
    stopped after 100 iterations
    # weights:  31
    initial  value 12.589536 
    iter  10 value 2.003197
    iter  20 value 2.003129
    iter  30 value 2.003059
    iter  40 value 2.002985
    iter  50 value 2.002907
    iter  60 value 2.002825
    iter  70 value 2.002739
    iter  80 value 2.002647
    iter  90 value 2.002550
    iter 100 value 2.002447
    final  value 2.002447 
    stopped after 100 iterations
    # weights:  51
    initial  value 18.386290 
    iter  10 value 1.989134
    iter  20 value 1.982939
    iter  30 value 1.969454
    iter  40 value 1.934161
    iter  50 value 1.657039
    iter  60 value 1.592682
    iter  70 value 1.589571
    iter  80 value 1.588750
    iter  90 value 1.588673
    iter 100 value 1.588590
    final  value 1.588590 
    stopped after 100 iterations
    # weights:  11
    initial  value 15.481109 
    final  value 5.999955 
    converged
    # weights:  31
    initial  value 9.886870 
    final  value 5.999996 
    converged
    # weights:  51
    initial  value 15.410244 
    final  value 6.000000 
    converged
    # weights:  11
    initial  value 11.167058 
    iter  10 value 5.611302
    final  value 5.610677 
    converged
    # weights:  31
    initial  value 9.435758 
    iter  10 value 5.523095
    iter  20 value 5.522773
    final  value 5.522773 
    converged
    # weights:  51
    initial  value 24.495347 
    iter  10 value 5.502849
    iter  20 value 5.480956
    final  value 5.480955 
    converged
    # weights:  11
    initial  value 11.593586 
    iter  10 value 6.010395
    iter  20 value 6.010344
    iter  30 value 6.010293
    iter  40 value 6.010241
    iter  50 value 6.010190
    iter  60 value 6.010139
    iter  70 value 6.010087
    iter  80 value 6.010036
    iter  90 value 6.009984
    iter 100 value 6.009932
    final  value 6.009932 
    stopped after 100 iterations
    # weights:  31
    initial  value 9.066718 
    iter  10 value 5.294354
    iter  20 value 4.528664
    iter  30 value 4.497458
    iter  40 value 4.495742
    iter  50 value 4.494949
    iter  60 value 4.494880
    iter  70 value 4.494804
    iter  80 value 4.494757
    iter  90 value 4.494713
    final  value 4.494702 
    converged
    # weights:  51
    initial  value 17.588226 
    iter  10 value 6.002329
    iter  20 value 6.002177
    iter  30 value 6.002007
    iter  40 value 6.001817
    iter  50 value 6.001601
    iter  60 value 6.001354
    iter  70 value 6.001068
    iter  80 value 6.000730
    iter  90 value 6.000324
    iter 100 value 5.999827
    final  value 5.999827 
    stopped after 100 iterations
    # weights:  11
    initial  value 17.727503 
    final  value 7.000000 
    converged
    # weights:  31
    initial  value 14.519545 
    final  value 7.000000 
    converged
    # weights:  51
    initial  value 15.424997 
    final  value 7.000000 
    converged
    # weights:  11
    initial  value 25.985396 
    iter  10 value 6.278687
    final  value 6.269723 
    converged
    # weights:  31
    initial  value 21.975306 
    iter  10 value 6.323554
    iter  20 value 6.198427
    final  value 6.198169 
    converged
    # weights:  51
    initial  value 17.058147 
    iter  10 value 6.196971
    iter  20 value 6.182819
    iter  30 value 6.182382
    final  value 6.182379 
    converged
    # weights:  11
    initial  value 13.526453 
    iter  10 value 6.998069
    iter  20 value 6.996839
    iter  30 value 6.995166
    iter  40 value 6.992752
    iter  50 value 6.988968
    iter  60 value 6.982249
    iter  70 value 6.967576
    iter  80 value 6.918539
    iter  90 value 6.213270
    iter 100 value 4.618646
    final  value 4.618646 
    stopped after 100 iterations
    # weights:  31
    initial  value 8.729255 
    iter  10 value 7.005703
    iter  20 value 7.005660
    iter  30 value 7.005617
    iter  40 value 7.005573
    iter  50 value 7.005529
    iter  60 value 7.005484
    iter  70 value 7.005438
    iter  80 value 7.005391
    iter  90 value 7.005344
    iter 100 value 7.005295
    final  value 7.005295 
    stopped after 100 iterations
    # weights:  51
    initial  value 31.683383 
    iter  10 value 7.004576
    iter  20 value 7.004468
    iter  30 value 7.004352
    iter  40 value 7.004227
    iter  50 value 7.004091
    iter  60 value 7.003944
    iter  70 value 7.003782
    iter  80 value 7.003604
    iter  90 value 7.003406
    iter 100 value 7.003183
    final  value 7.003183 
    stopped after 100 iterations
    # weights:  11
    initial  value 14.618582 
    final  value 7.999983 
    converged
    # weights:  31
    initial  value 14.177186 
    final  value 8.000000 
    converged
    # weights:  51
    initial  value 11.583555 
    final  value 8.000000 
    converged
    # weights:  11
    initial  value 8.990453 
    iter  10 value 6.605450
    final  value 6.574753 
    converged
    # weights:  31
    initial  value 14.336989 
    iter  10 value 6.531526
    iter  20 value 6.510687
    iter  30 value 6.510639
    final  value 6.510638 
    converged
    # weights:  51
    initial  value 22.857913 
    iter  10 value 6.616519
    iter  20 value 6.529219
    iter  30 value 6.443550
    iter  40 value 6.443114
    final  value 6.443114 
    converged
    # weights:  11
    initial  value 10.313407 
    iter  10 value 8.006229
    iter  20 value 8.006132
    iter  30 value 8.006031
    iter  40 value 8.005924
    iter  50 value 8.005811
    iter  60 value 8.005690
    iter  70 value 8.005562
    iter  80 value 8.005425
    iter  90 value 8.005277
    iter 100 value 8.005117
    final  value 8.005117 
    stopped after 100 iterations
    # weights:  31
    initial  value 27.776048 
    iter  10 value 8.001621
    iter  20 value 8.001059
    iter  30 value 8.000358
    iter  40 value 7.999456
    iter  50 value 7.998251
    iter  60 value 7.996562
    iter  70 value 7.994027
    iter  80 value 7.989842
    iter  90 value 7.981806
    iter 100 value 7.961527
    final  value 7.961527 
    stopped after 100 iterations
    # weights:  51
    initial  value 29.203238 
    iter  10 value 7.999132
    iter  20 value 7.998097
    iter  30 value 7.996618
    iter  40 value 7.994329
    iter  50 value 7.990340
    iter  60 value 7.981827
    iter  70 value 7.953952
    iter  80 value 7.436207
    iter  90 value 4.600000
    iter 100 value 4.522029
    final  value 4.522029 
    stopped after 100 iterations
    # weights:  11
    initial  value 10.871622 
    iter  10 value 6.996968
    iter  20 value 6.996864
    iter  30 value 6.996752
    iter  40 value 6.996633
    iter  50 value 6.996505
    iter  60 value 6.996366
    iter  70 value 6.996217
    iter  80 value 6.996055
    iter  90 value 6.995879
    iter 100 value 6.995688
    final  value 6.995688 
    stopped after 100 iterations
    # weights:  31
    initial  value 12.854428 
    final  value 7.000000 
    converged
    # weights:  51
    initial  value 11.720122 
    final  value 7.000000 
    converged
    # weights:  11
    initial  value 8.855860 
    iter  10 value 6.263145
    final  value 6.262857 
    converged
    # weights:  31
    initial  value 12.187882 
    iter  10 value 6.192021
    iter  20 value 6.189036
    final  value 6.189036 
    converged
    # weights:  51
    initial  value 7.239005 
    iter  10 value 6.156232
    iter  20 value 6.155381
    iter  20 value 6.155381
    iter  20 value 6.155381
    final  value 6.155381 
    converged
    # weights:  11
    initial  value 9.751729 
    iter  10 value 6.999946
    iter  20 value 6.999245
    iter  30 value 6.998423
    iter  40 value 6.997445
    iter  50 value 6.996257
    iter  60 value 6.994782
    iter  70 value 6.992898
    iter  80 value 6.990399
    iter  90 value 6.986920
    iter 100 value 6.981732
    final  value 6.981732 
    stopped after 100 iterations
    # weights:  31
    initial  value 19.312728 
    iter  10 value 7.004163
    iter  20 value 7.004030
    iter  30 value 7.003888
    iter  40 value 7.003733
    iter  50 value 7.003566
    iter  60 value 7.003382
    iter  70 value 7.003180
    iter  80 value 7.002955
    iter  90 value 7.002704
    iter 100 value 7.002420
    final  value 7.002420 
    stopped after 100 iterations
    # weights:  51
    initial  value 20.386776 
    iter  10 value 7.004795
    iter  20 value 7.004744
    iter  30 value 7.004691
    iter  40 value 7.004636
    iter  50 value 7.004580
    iter  60 value 7.004521
    iter  70 value 7.004461
    iter  80 value 7.004399
    iter  90 value 7.004334
    iter 100 value 7.004266
    final  value 7.004266 
    stopped after 100 iterations
    # weights:  11
    initial  value 22.265627 
    iter  10 value 15.998731
    iter  20 value 15.998712
    iter  30 value 15.998693
    iter  40 value 15.998673
    iter  50 value 15.998653
    iter  60 value 15.998632
    iter  70 value 15.998610
    iter  80 value 15.998588
    iter  90 value 15.998565
    iter 100 value 15.998541
    final  value 15.998541 
    stopped after 100 iterations
    # weights:  31
    initial  value 21.407812 
    final  value 16.000000 
    converged
    # weights:  51
    initial  value 15.705263 
    iter  10 value 6.315596
    iter  20 value 6.145859
    iter  30 value 6.144857
    iter  40 value 6.144843
    final  value 6.144843 
    converged
    # weights:  11
    initial  value 16.293926 
    iter  10 value 9.534451
    final  value 9.499919 
    converged
    # weights:  31
    initial  value 16.330965 
    iter  10 value 9.688788
    iter  20 value 9.175265
    iter  30 value 9.165602
    final  value 9.165120 
    converged
    # weights:  51
    initial  value 14.528910 
    iter  10 value 9.169283
    iter  20 value 9.108662
    iter  30 value 9.107292
    final  value 9.107290 
    converged
    # weights:  11
    initial  value 17.456383 
    iter  10 value 16.009586
    iter  20 value 16.009536
    iter  30 value 16.009486
    iter  40 value 16.009435
    iter  50 value 16.009384
    iter  60 value 16.009333
    iter  70 value 16.009282
    iter  80 value 16.009230
    iter  90 value 16.009178
    iter 100 value 16.009126
    final  value 16.009126 
    stopped after 100 iterations
    # weights:  31
    initial  value 13.370069 
    iter  10 value 6.435933
    iter  20 value 6.160126
    iter  30 value 6.157456
    iter  40 value 6.154745
    iter  50 value 6.154614
    iter  60 value 6.154309
    final  value 6.154295 
    converged
    # weights:  51
    initial  value 24.108535 
    iter  10 value 6.995909
    iter  20 value 6.346121
    iter  30 value 6.167736
    iter  40 value 6.164743
    iter  50 value 6.156578
    iter  60 value 6.154791
    iter  70 value 6.154623
    iter  80 value 6.154152
    iter  90 value 6.154130
    final  value 6.154089 
    converged
    # weights:  11
    initial  value 16.449487 
    final  value 9.000000 
    converged
    # weights:  31
    initial  value 18.163770 
    final  value 9.000000 
    converged
    # weights:  51
    initial  value 9.078179 
    iter  10 value 8.999803
    iter  20 value 8.999802
    iter  30 value 8.999800
    iter  40 value 8.999798
    iter  50 value 8.999796
    iter  60 value 8.999794
    iter  70 value 8.999792
    iter  80 value 8.999790
    iter  90 value 8.999788
    iter 100 value 8.999786
    final  value 8.999786 
    stopped after 100 iterations
    # weights:  11
    initial  value 21.130651 
    iter  10 value 7.502028
    iter  20 value 7.339857
    iter  20 value 7.339857
    iter  20 value 7.339857
    final  value 7.339857 
    converged
    # weights:  31
    initial  value 15.218695 
    iter  10 value 7.333072
    iter  20 value 7.295429
    iter  30 value 7.294624
    final  value 7.294598 
    converged
    # weights:  51
    initial  value 18.877172 
    iter  10 value 7.528185
    iter  20 value 7.297149
    iter  30 value 7.220824
    iter  40 value 7.220043
    final  value 7.220037 
    converged
    # weights:  11
    initial  value 9.406574 
    iter  10 value 8.970409
    iter  20 value 8.926541
    iter  30 value 8.136637
    iter  40 value 5.076649
    iter  50 value 5.057555
    iter  60 value 5.016040
    iter  70 value 5.013796
    iter  80 value 5.012965
    iter  90 value 5.012937
    final  value 5.012916 
    converged
    # weights:  31
    initial  value 16.774179 
    iter  10 value 9.001849
    iter  20 value 9.001424
    iter  30 value 9.000915
    iter  40 value 9.000295
    iter  50 value 8.999519
    iter  60 value 8.998517
    iter  70 value 8.997169
    iter  80 value 8.995259
    iter  90 value 8.992344
    iter 100 value 8.987375
    final  value 8.987375 
    stopped after 100 iterations
    # weights:  51
    initial  value 17.525798 
    iter  10 value 7.873848
    iter  20 value 5.066852
    iter  30 value 5.013790
    iter  40 value 5.013069
    iter  50 value 5.011775
    iter  60 value 5.011715
    final  value 5.011703 
    converged
    # weights:  11
    initial  value 8.168237 
    iter  10 value 3.999176
    iter  20 value 3.999164
    iter  30 value 3.999152
    iter  40 value 3.999139
    iter  50 value 3.999125
    iter  60 value 3.999112
    iter  70 value 3.999098
    iter  80 value 3.999083
    iter  90 value 3.999068
    iter 100 value 3.999053
    final  value 3.999053 
    stopped after 100 iterations
    # weights:  31
    initial  value 31.068493 
    final  value 4.000000 
    converged
    # weights:  51
    initial  value 22.757584 
    final  value 4.000000 
    converged
    # weights:  11
    initial  value 24.444726 
    iter  10 value 4.175260
    final  value 4.171580 
    converged
    # weights:  31
    initial  value 17.248785 
    iter  10 value 4.056994
    final  value 4.053473 
    converged
    # weights:  51
    initial  value 22.146053 
    iter  10 value 3.998396
    iter  20 value 3.990404
    final  value 3.990404 
    converged
    # weights:  11
    initial  value 19.151843 
    iter  10 value 4.004540
    iter  20 value 4.004377
    iter  30 value 4.004203
    iter  40 value 4.004018
    iter  50 value 4.003819
    iter  60 value 4.003604
    iter  70 value 4.003372
    iter  80 value 4.003120
    iter  90 value 4.002844
    iter 100 value 4.002541
    final  value 4.002541 
    stopped after 100 iterations
    # weights:  31
    initial  value 9.263435 
    iter  10 value 4.004570
    iter  20 value 4.004515
    iter  30 value 4.004458
    iter  40 value 4.004400
    iter  50 value 4.004340
    iter  60 value 4.004278
    iter  70 value 4.004213
    iter  80 value 4.004146
    iter  90 value 4.004077
    iter 100 value 4.004004
    final  value 4.004004 
    stopped after 100 iterations
    # weights:  51
    initial  value 7.736685 
    iter  10 value 3.998352
    iter  20 value 3.997465
    iter  30 value 3.996231
    iter  40 value 3.994396
    iter  50 value 3.991384
    iter  60 value 3.985596
    iter  70 value 3.970647
    iter  80 value 3.889083
    iter  90 value 3.397467
    iter 100 value 3.319148
    final  value 3.319148 
    stopped after 100 iterations
    # weights:  11
    initial  value 19.030289 
    final  value 10.000000 
    converged
    # weights:  31
    initial  value 13.516618 
    final  value 10.000000 
    converged
    # weights:  51
    initial  value 11.345462 
    final  value 10.000000 
    converged
    # weights:  11
    initial  value 12.555017 
    iter  10 value 7.382475
    iter  20 value 7.106459
    final  value 7.106459 
    converged
    # weights:  31
    initial  value 11.793963 
    iter  10 value 7.055665
    iter  20 value 6.816820
    iter  30 value 6.815147
    final  value 6.815146 
    converged
    # weights:  51
    initial  value 14.381127 
    iter  10 value 7.841802
    iter  20 value 7.053894
    iter  30 value 6.781306
    iter  40 value 6.778841
    final  value 6.778777 
    converged
    # weights:  11
    initial  value 16.963772 
    iter  10 value 9.987234
    iter  20 value 9.981880
    iter  30 value 9.973003
    iter  40 value 9.955381
    iter  50 value 9.904226
    iter  60 value 9.246035
    iter  70 value 4.086422
    iter  80 value 4.025535
    final  value 4.025414 
    converged
    # weights:  31
    initial  value 10.140596 
    iter  10 value 10.000948
    iter  20 value 10.000123
    iter  30 value 9.999022
    iter  40 value 9.997475
    iter  50 value 9.995139
    iter  60 value 9.991227
    iter  70 value 9.983479
    iter  80 value 9.962396
    iter  90 value 9.821719
    iter 100 value 5.098902
    final  value 5.098902 
    stopped after 100 iterations
    # weights:  51
    initial  value 15.082355 
    iter  10 value 8.014953
    iter  20 value 4.051294
    iter  30 value 4.023400
    iter  40 value 4.022750
    iter  50 value 4.022660
    iter  60 value 4.022348
    iter  70 value 4.022083
    iter  80 value 4.021937
    iter  90 value 4.021889
    iter 100 value 4.021622
    final  value 4.021622 
    stopped after 100 iterations
    # weights:  11
    initial  value 10.452651 
    iter  10 value 5.998285
    iter  20 value 5.998248
    iter  30 value 5.998210
    iter  40 value 5.998170
    iter  50 value 5.998128
    iter  60 value 5.998084
    iter  70 value 5.998039
    iter  80 value 5.997991
    iter  90 value 5.997942
    iter 100 value 5.997890
    final  value 5.997890 
    stopped after 100 iterations
    # weights:  31
    initial  value 10.581240 
    final  value 5.999939 
    converged
    # weights:  51
    initial  value 16.032967 
    final  value 6.000000 
    converged
    # weights:  11
    initial  value 11.410167 
    iter  10 value 5.528984
    final  value 5.528843 
    converged
    # weights:  31
    initial  value 16.322416 
    iter  10 value 5.452224
    final  value 5.448848 
    converged
    # weights:  51
    initial  value 8.792933 
    iter  10 value 5.410156
    iter  20 value 5.409336
    final  value 5.409335 
    converged
    # weights:  11
    initial  value 8.209523 
    iter  10 value 6.002492
    iter  20 value 6.002241
    iter  30 value 6.001961
    iter  40 value 6.001646
    iter  50 value 6.001290
    iter  60 value 6.000880
    iter  70 value 6.000404
    iter  80 value 5.999840
    iter  90 value 5.999161
    iter 100 value 5.998323
    final  value 5.998323 
    stopped after 100 iterations
    # weights:  31
    initial  value 16.662172 
    iter  10 value 6.002486
    iter  20 value 6.002232
    iter  30 value 6.001945
    iter  40 value 6.001616
    iter  50 value 6.001236
    iter  60 value 6.000788
    iter  70 value 6.000252
    iter  80 value 5.999599
    iter  90 value 5.998781
    iter 100 value 5.997725
    final  value 5.997725 
    stopped after 100 iterations
    # weights:  51
    initial  value 19.960198 
    iter  10 value 5.995669
    iter  20 value 5.993329
    iter  30 value 5.989221
    iter  40 value 5.980296
    iter  50 value 5.949377
    iter  60 value 5.324430
    iter  70 value 4.669766
    iter  80 value 4.661141
    iter  90 value 4.659221
    iter 100 value 4.657522
    final  value 4.657522 
    stopped after 100 iterations
    # weights:  11
    initial  value 15.975060 
    final  value 4.999995 
    converged
    # weights:  31
    initial  value 16.780503 
    final  value 5.000000 
    converged
    # weights:  51
    initial  value 20.878886 
    final  value 5.000000 
    converged
    # weights:  11
    initial  value 22.547605 
    iter  10 value 4.913364
    iter  20 value 4.909096
    iter  20 value 4.909096
    iter  20 value 4.909096
    final  value 4.909096 
    converged
    # weights:  31
    initial  value 20.696130 
    iter  10 value 4.809347
    iter  20 value 4.807352
    final  value 4.807351 
    converged
    # weights:  51
    initial  value 18.020127 
    iter  10 value 4.758789
    iter  20 value 4.756006
    final  value 4.756005 
    converged
    # weights:  11
    initial  value 7.959249 
    iter  10 value 5.001124
    iter  20 value 5.000678
    iter  30 value 5.000171
    iter  40 value 4.999590
    iter  50 value 4.998915
    iter  60 value 4.998120
    iter  70 value 4.997170
    iter  80 value 4.996012
    iter  90 value 4.994567
    iter 100 value 4.992712
    final  value 4.992712 
    stopped after 100 iterations
    # weights:  31
    initial  value 6.948163 
    iter  10 value 5.005423
    iter  20 value 5.005374
    iter  30 value 5.005325
    iter  40 value 5.005275
    iter  50 value 5.005224
    iter  60 value 5.005172
    iter  70 value 5.005118
    iter  80 value 5.005063
    iter  90 value 5.005006
    iter 100 value 5.004948
    final  value 5.004948 
    stopped after 100 iterations
    # weights:  51
    initial  value 17.236488 
    iter  10 value 5.003747
    iter  20 value 5.003685
    iter  30 value 5.003622
    iter  40 value 5.003555
    iter  50 value 5.003485
    iter  60 value 5.003411
    iter  70 value 5.003333
    iter  80 value 5.003251
    iter  90 value 5.003164
    iter 100 value 5.003072
    final  value 5.003072 
    stopped after 100 iterations
    # weights:  11
    initial  value 25.767144 
    final  value 6.999993 
    converged
    # weights:  31
    initial  value 9.134123 
    iter  10 value 6.999763
    iter  20 value 6.999761
    iter  30 value 6.999759
    iter  40 value 6.999756
    iter  50 value 6.999754
    iter  60 value 6.999752
    iter  70 value 6.999750
    iter  80 value 6.999747
    iter  90 value 6.999745
    iter 100 value 6.999742
    final  value 6.999742 
    stopped after 100 iterations
    # weights:  51
    initial  value 23.678110 
    final  value 7.000000 
    converged
    # weights:  11
    initial  value 11.577876 
    iter  10 value 6.283029
    final  value 6.272815 
    converged
    # weights:  31
    initial  value 13.302911 
    iter  10 value 6.215106
    iter  20 value 6.200026
    final  value 6.200003 
    converged
    # weights:  51
    initial  value 12.642082 
    iter  10 value 6.193970
    iter  20 value 6.184632
    iter  30 value 6.184379
    final  value 6.184378 
    converged
    # weights:  11
    initial  value 10.955348 
    iter  10 value 7.008606
    iter  20 value 7.008539
    iter  30 value 7.008471
    iter  40 value 7.008401
    iter  50 value 7.008330
    iter  60 value 7.008259
    iter  70 value 7.008186
    iter  80 value 7.008111
    iter  90 value 7.008035
    iter 100 value 7.007958
    final  value 7.007958 
    stopped after 100 iterations
    # weights:  31
    initial  value 13.086127 
    iter  10 value 6.998996
    iter  20 value 6.998086
    iter  30 value 6.996860
    iter  40 value 6.995117
    iter  50 value 6.992437
    iter  60 value 6.987806
    iter  70 value 6.978052
    iter  80 value 6.947004
    iter  90 value 6.519557
    iter 100 value 4.506551
    final  value 4.506551 
    stopped after 100 iterations
    # weights:  51
    initial  value 7.342172 
    iter  10 value 7.001387
    iter  20 value 7.001017
    iter  30 value 7.000570
    iter  40 value 7.000018
    iter  50 value 6.999314
    iter  60 value 6.998385
    iter  70 value 6.997095
    iter  80 value 6.995179
    iter  90 value 6.992036
    iter 100 value 6.985979
    final  value 6.985979 
    stopped after 100 iterations
    # weights:  11
    initial  value 8.893524 
    iter  10 value 4.997188
    iter  20 value 4.997097
    iter  30 value 4.997001
    iter  40 value 4.996898
    iter  50 value 4.996788
    iter  60 value 4.996670
    iter  70 value 4.996543
    iter  80 value 4.996407
    iter  90 value 4.996259
    iter 100 value 4.996100
    final  value 4.996100 
    stopped after 100 iterations
    # weights:  31
    initial  value 17.936721 
    final  value 5.000000 
    converged
    # weights:  51
    initial  value 6.133998 
    iter  10 value 4.999525
    iter  20 value 4.999516
    iter  30 value 4.999506
    iter  40 value 4.999496
    iter  50 value 4.999485
    iter  60 value 4.999474
    iter  70 value 4.999463
    iter  80 value 4.999451
    iter  90 value 4.999439
    iter 100 value 4.999426
    final  value 4.999426 
    [nnet optimizer trace truncated: repeated blocks of "# weights", "initial value", and
     per-iteration "iter ... value" lines, each ending in "converged" or
     "stopped after 100 iterations"]

    Error in train.default(x = trainEight, y = apply(testEight, 2, factor),  : 
      final tuning parameters could not be determined
    In addition: There were 50 or more warnings (use warnings() to see the first 50)
    
    
    
    The verbose output above is nnet's own optimizer trace; passing trace = FALSE through train()
    silences it, and tuneLength = 5 asks caret to evaluate a finer grid of tuning values:

    In [143]:
    %%R
    # trace = FALSE is passed through to nnet to suppress its per-iteration output;
    # tuneLength = 5 widens the tuning grid over size (hidden units) and decay
    nnetFit <- train(x = trainDescr, y = trainClass,
                     method = "nnet", trace=FALSE,
                     tuneLength = 5)
    plot(nnetFit)
    
    
    
    
    
    
    In [147]:
    %%R
    plot(nnetFit, plotType="level")
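    The tuned network can also be checked against the held-out Sonar test set. A minimal sketch,
    assuming testDescr and testClass from the earlier split are still in the session:

    In [ ]:
    %%R
    # predict classes for the held-out predictors and compare against the true labels
    nnetPred <- predict(nnetFit, newdata = testDescr)
    confusionMatrix(nnetPred, testClass)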
    
    
    
    

    Questions

    
    
    In [123]:
    %%R
    library(caret)
    data(iris)
    
    # stratified 75/25 split on Species
    irisTrain <- createDataPartition(y = iris$Species, p = 3/4, list = FALSE)
    
    trainX <- iris[irisTrain,1:4]
    testX <- iris[-irisTrain,1:4]
    
    trainY <- iris$Species[irisTrain]
    print(length(trainY))
    testY <- iris$Species[-irisTrain]
    print(length(testY))
    
    
    
    
    [1] 114
    [1] 36
    
    
    
    In [135]:
    %%R
    # fit three classifiers with caret's defaults (bootstrap resampling, default tuning grid)
    irisKN <- train(x = trainX, y = trainY, method='knn')                 # k-nearest neighbors
    irisNB <- train(x = trainX, y = trainY, method='nb')                  # naive Bayes
    irisNN <- train(x = trainX, y = trainY, method='nnet', trace=FALSE)   # single-hidden-layer neural net
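
    Each train object records the tuning parameter values that won the resampling comparison.
    A quick sketch for inspecting them, assuming the three fits above completed:

    In [ ]:
    %%R
    # the tuning parameter values caret selected for each model
    print(irisKN$bestTune)
    print(irisNB$bestTune)
    print(irisNN$bestTune)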
    
    
    
    In [136]:
    %%R
    # extractProb returns class probabilities and predictions for both the training data
    # and the supplied test set; keep only the test-set rows for evaluation
    irisProbValues <- extractProb(list(irisKN, irisNB, irisNN),
                              testX = testX,
                              testY = testY)
    irisTestProbs <- subset(irisProbValues, dataType == "Test")
    str(irisTestProbs)
    
    
    
    
    'data.frame':	108 obs. of  8 variables:
     $ setosa    : num  1 1 1 1 1 1 1 1 1 1 ...
     $ versicolor: num  0 0 0 0 0 0 0 0 0 0 ...
     $ virginica : num  0 0 0 0 0 0 0 0 0 0 ...
     $ obs       : Factor w/ 3 levels "setosa","versicolor",..: 1 1 1 1 1 1 1 1 1 1 ...
     $ pred      : Factor w/ 3 levels "setosa","versicolor",..: 1 1 1 1 1 1 1 1 1 1 ...
     $ model     : chr  "knn" "knn" "knn" "knn" ...
     $ dataType  : chr  "Test" "Test" "Test" "Test" ...
     $ object    : chr  "Object1" "Object1" "Object1" "Object1" ...
    
    
    
    In [140]:
    %%R
    irisKNNPred <- subset(irisTestProbs, model == "knn")
    confusionMatrix(irisKNNPred$pred, irisKNNPred$obs)
    
    
    
    
    Confusion Matrix and Statistics
    
                Reference
    Prediction   setosa versicolor virginica
      setosa         12          0         0
      versicolor      0          9         0
      virginica       0          3        12
    
    Overall Statistics
                                              
                   Accuracy : 0.9167          
                     95% CI : (0.7753, 0.9825)
        No Information Rate : 0.3333          
        P-Value [Acc > NIR] : 3.978e-13       
                                              
                      Kappa : 0.875           
     Mcnemar's Test P-Value : NA              
    
    Statistics by Class:
    
                         Class: setosa Class: versicolor Class: virginica
    Sensitivity                 1.0000            0.7500           1.0000
    Specificity                 1.0000            1.0000           0.8750
    Pos Pred Value              1.0000            1.0000           0.8000
    Neg Pred Value              1.0000            0.8889           1.0000
    Prevalence                  0.3333            0.3333           0.3333
    Detection Rate              0.3333            0.2500           0.3333
    Detection Prevalence        0.3333            0.2500           0.4167
    Balanced Accuracy           1.0000            0.8750           0.9375
    
    
    
    In [141]:
    %%R
    irisNaiveBayesPred <- subset(irisTestProbs, model == "nb")
    confusionMatrix(irisNaiveBayesPred$pred, irisNaiveBayesPred$obs)
    
    
    
    
    Confusion Matrix and Statistics
    
                Reference
    Prediction   setosa versicolor virginica
      setosa         12          0         0
      versicolor      0          9         0
      virginica       0          3        12
    
    Overall Statistics
                                              
                   Accuracy : 0.9167          
                     95% CI : (0.7753, 0.9825)
        No Information Rate : 0.3333          
        P-Value [Acc > NIR] : 3.978e-13       
                                              
                      Kappa : 0.875           
     Mcnemar's Test P-Value : NA              
    
    Statistics by Class:
    
                         Class: setosa Class: versicolor Class: virginica
    Sensitivity                 1.0000            0.7500           1.0000
    Specificity                 1.0000            1.0000           0.8750
    Pos Pred Value              1.0000            1.0000           0.8000
    Neg Pred Value              1.0000            0.8889           1.0000
    Prevalence                  0.3333            0.3333           0.3333
    Detection Rate              0.3333            0.2500           0.3333
    Detection Prevalence        0.3333            0.2500           0.4167
    Balanced Accuracy           1.0000            0.8750           0.9375
    
    
    
    In [138]:
    %%R
    irisNNetPred <- subset(irisTestProbs, model == "nnet")
    confusionMatrix(irisNNetPred$pred, irisNNetPred$obs)
    
    
    
    
    Confusion Matrix and Statistics
    
                Reference
    Prediction   setosa versicolor virginica
      setosa         12          0         0
      versicolor      0         10         0
      virginica       0          2        12
    
    Overall Statistics
                                              
                   Accuracy : 0.9444          
                     95% CI : (0.8134, 0.9932)
        No Information Rate : 0.3333          
        P-Value [Acc > NIR] : 1.728e-14       
                                              
                      Kappa : 0.9167          
     Mcnemar's Test P-Value : NA              
    
    Statistics by Class:
    
                         Class: setosa Class: versicolor Class: virginica
    Sensitivity                 1.0000            0.8333           1.0000
    Specificity                 1.0000            1.0000           0.9167
    Pos Pred Value              1.0000            1.0000           0.8571
    Neg Pred Value              1.0000            0.9231           1.0000
    Prevalence                  0.3333            0.3333           0.3333
    Detection Rate              0.3333            0.2778           0.3333
    Detection Prevalence        0.3333            0.2778           0.3889
    Balanced Accuracy           1.0000            0.9167           0.9583
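
    Beyond per-model confusion matrices, the three fits can be compared on their resampling
    performance with caret's resamples(). A minimal sketch, assuming the irisKN, irisNB, and
    irisNN fits above; note that a strictly paired comparison would require fixing the
    resampling indices (e.g. via trainControl(index = ...)) before training:

    In [ ]:
    %%R
    # collect the bootstrap results of the three fits and compare Accuracy and Kappa
    resamps <- resamples(list(knn = irisKN, nb = irisNB, nnet = irisNN))
    summary(resamps)
    bwplot(resamps)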
    

    Discussion