[1] "A1 SH"
ALEXANDER D  CHURCHMAN BIBLE  CROSS H CATTLE  KIMZEY A  LUKE G U
          0               19               0         0        53
NEWBY  NOLAN  Recruit F9  SHANKLE  SHRIMPLIN
   43     44           0       63         43
Source: local data frame [265 x 10]
Facies Formation Depth GR ILD_log10 DeltaPHI PHIND
(fctr) (fctr) (dbl) (dbl) (dbl) (dbl) (dbl)
1 FSiS A1 SH 2793.0 0.2155819 0.018551798 0.5120999 -0.04865478
2 FSiS A1 SH 2793.5 0.2372980 0.005670378 1.5169013 0.07359111
3 FSiS A1 SH 2794.0 0.2584779 -0.007211043 1.6571062 0.16480534
4 FSiS A1 SH 2794.5 0.4474882 -0.020092464 1.4467989 0.17702993
5 FSiS A1 SH 2795.0 0.1386372 -0.054442918 1.3533290 0.21182299
6 FSiS A1 SH 2795.5 0.1222831 -0.101674794 1.4701664 0.22780899
7 FSiS A1 SH 2796.0 0.1155806 -0.127437635 1.8440460 0.33030747
8 FSiS A1 SH 2796.5 0.1673239 -0.148906669 2.0543532 0.32842676
9 FSiS A1 SH 2797.0 0.1174573 -0.153200476 1.9842508 0.33971100
10 FSiS A1 SH 2797.5 0.2005683 -0.191844738 2.1478231 0.38484794
.. ... ... ... ... ... ... ...
Variables not shown: PE (dbl), isMarine (lgl), RELPOS (dbl)
Random Forest
265 samples
9 predictor
4 classes: 'SS', 'CSiS', 'FSiS', 'MS'
No pre-processing
Resampling: Cross-Validated (10 fold)
Summary of sample sizes: 222, 202, 212, 221, 222, 246, ...
Resampling results across tuning parameters:
mtry F1
2 0.5389086
11 0.5839428
21 0.5823181
F1 was used to select the optimal model using the largest value.
The final value used for the model was mtry = 11.
F1 Resample
1 0.3174603 SHANKLE
2 0.7027027 SHRIMPLIN
3 0.8181818 NOLAN
4 0.6800000 LUKE G U
5 0.4736842 CHURCHMAN BIBLE
6 0.5116279 NEWBY
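caret has no built-in multi-class "F1" metric, so the line "F1 was used to select the optimal model" implies a custom summaryFunction was supplied through trainControl. A minimal sketch of what that function might look like — the names f1Summary and fitControl are assumptions for illustration, not the author's code:

```r
library(caret)

# Hypothetical summary function: macro-averaged F1 across classes.
# caret calls this with a data frame holding `pred` and `obs` columns.
f1Summary <- function(data, lev = NULL, model = NULL) {
  f1_per_class <- sapply(lev, function(cl) {
    tp <- sum(data$pred == cl & data$obs == cl)
    fp <- sum(data$pred == cl & data$obs != cl)
    fn <- sum(data$pred != cl & data$obs == cl)
    if (tp == 0) return(0)            # avoid 0/0 when a class is never hit
    prec <- tp / (tp + fp)
    rec  <- tp / (tp + fn)
    2 * prec * rec / (prec + rec)
  })
  c(F1 = mean(f1_per_class))
}

fitControl <- trainControl(method = "cv", number = 10,
                           summaryFunction = f1Summary)
```

Passing metric = "F1" to train() then tells caret to pick the mtry value maximizing the F1 column this function returns.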
[1] "-----------------------------------------------------------------------------"
[1] "A1 LM"
ALEXANDER D  CHURCHMAN BIBLE  CROSS H CATTLE  KIMZEY A  LUKE G U
          0               94               0         0        67
NEWBY  NOLAN  Recruit F9  SHANKLE  SHRIMPLIN
   82     61           0       40         51
Source: local data frame [395 x 10]
Facies Formation Depth GR ILD_log10 DeltaPHI PHIND
(fctr) (fctr) (dbl) (dbl) (dbl) (dbl) (dbl)
1 PS A1 LM 2814.5 -0.3321461 -0.1360252 0.90934700 -0.01574243
2 PS A1 LM 2815.0 -0.7002472 0.1817165 -0.09545440 -1.02850258
3 PS A1 LM 2815.5 -0.8940834 0.4393449 -0.46933399 -1.42533153
4 PS A1 LM 2816.0 -0.6275922 0.7012671 -0.60953883 -1.42062976
5 WS A1 LM 2816.5 -0.2442094 1.0962973 -0.72637620 -1.34728223
6 WS A1 LM 2817.0 0.1557956 1.4569771 -0.77311115 -1.29368211
7 WS A1 LM 2817.5 0.4767111 1.6544922 -0.70300873 -1.28898035
8 WS A1 LM 2818.0 0.4678638 1.7360745 -0.58617136 -1.20152752
9 WS A1 LM 2818.5 0.4984272 1.6802551 -0.28239419 -1.08962552
10 WS A1 LM 2819.0 0.1624981 1.5385594 0.02138298 -0.93258657
.. ... ... ... ... ... ... ...
Variables not shown: PE (dbl), isMarine (lgl), RELPOS (dbl)
Random Forest
395 samples
9 predictor
6 classes: 'FSiS', 'SiSh', 'MS', 'WS', 'D', 'PS'
No pre-processing
Resampling: Cross-Validated (10 fold)
Summary of sample sizes: 344, 355, 328, 334, 313, 301, ...
Resampling results across tuning parameters:
mtry F1
2 0.4552256
11 0.5213630
21 0.4873485
F1 was used to select the optimal model using the largest value.
The final value used for the model was mtry = 11.
F1 Resample
1 0.7750000 SHANKLE
2 0.7000000 SHRIMPLIN
3 0.4166667 NOLAN
4 0.7121212 LUKE G U
5 0.0000000 CHURCHMAN BIBLE
6 0.5243902 NEWBY
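The Resample labels above are well names rather than the usual Fold01..Fold10, which suggests the cross-validation folds hold out one well at a time. One way such folds might be built — a sketch only; df_i and Well.Name come from the traceback at the end of this log, the rest is assumed:

```r
library(caret)

# Leave-one-well-out folds: each fold trains on every row except one well,
# so per-resample metrics get labelled with the held-out well's name.
wells <- unique(as.character(df_i$Well.Name))
folds <- lapply(wells, function(w) which(df_i$Well.Name != w))
names(folds) <- wells

# Supplying `index` overrides caret's own fold construction.
fitControl <- trainControl(method = "cv", index = folds)
```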
[1] "-----------------------------------------------------------------------------"
[1] "B1 SH"
ALEXANDER D  CHURCHMAN BIBLE  CROSS H CATTLE  KIMZEY A  LUKE G U
          0               22               0         0        39
NEWBY  NOLAN  Recruit F9  SHANKLE  SHRIMPLIN
   32     32           0       49         38
Source: local data frame [212 x 10]
Facies Formation Depth GR ILD_log10 DeltaPHI PHIND
(fctr) (fctr) (dbl) (dbl) (dbl) (dbl) (dbl)
1 FSiS B1 SH 2840.0 1.02846063 0.22894833 -1.3806655 1.38632385
2 FSiS B1 SH 2840.5 1.11157158 0.11301555 -0.3992316 2.72256542
3 FSiS B1 SH 2841.0 0.98985425 0.02284561 0.2083228 2.77710589
4 FSiS B1 SH 2841.5 0.49440572 -0.08449957 0.4419975 1.80478092
5 FSiS B1 SH 2842.0 0.30968493 -0.25195803 0.3017927 0.71773289
6 CSiS B1 SH 2842.5 -0.06619106 -0.39365366 0.8158771 0.07359111
7 CSiS B1 SH 2843.0 -0.21257358 -0.42371031 1.0028169 -0.09097066
8 CSiS B1 SH 2843.5 -0.29890173 -0.39365366 1.0962868 0.02093134
9 CSiS B1 SH 2844.0 -0.48523113 -0.32065894 1.1663892 -0.08250749
10 CSiS B1 SH 2844.5 -0.03991728 -0.32924656 1.2832266 -0.08344784
.. ... ... ... ... ... ... ...
Variables not shown: PE (dbl), isMarine (lgl), RELPOS (dbl)
Random Forest
212 samples
9 predictor
3 classes: 'SS', 'CSiS', 'FSiS'
No pre-processing
Resampling: Cross-Validated (10 fold)
Summary of sample sizes: 174, 163, 173, 180, 180, 190, ...
Resampling results across tuning parameters:
mtry F1
2 0.5854924
11 0.5428910
21 0.5476950
F1 was used to select the optimal model using the largest value.
The final value used for the model was mtry = 2.
F1 Resample
1 0.0000000 SHANKLE
2 0.6842105 SHRIMPLIN
3 0.7179487 LUKE G U
4 0.9545455 CHURCHMAN BIBLE
5 0.5000000 NEWBY
6 0.6562500 NOLAN
[1] "-----------------------------------------------------------------------------"
[1] "B1 LM"
ALEXANDER D  CHURCHMAN BIBLE  CROSS H CATTLE  KIMZEY A  LUKE G U
          0               31               0         0        23
NEWBY  NOLAN  Recruit F9  SHANKLE  SHRIMPLIN
   39     20           0       10         18
Source: local data frame [141 x 10]
Facies Formation Depth GR ILD_log10 DeltaPHI PHIND
(fctr) (fctr) (dbl) (dbl) (dbl) (dbl) (dbl)
1 MS B1 LM 2859.0 0.4190697 -0.13173144 -0.30576167 2.7263268
2 MS B1 LM 2859.5 -0.6227664 0.03572703 -0.79647863 0.8343367
3 MS B1 LM 2860.0 -0.9351027 0.18601027 -0.65627378 -0.7435756
4 MS B1 LM 2860.5 -0.9174081 0.36205635 0.04475045 -1.4037034
5 PS B1 LM 2861.0 -1.1141934 0.61539095 0.02138298 -1.4920966
6 PS B1 LM 2861.5 -1.2313530 1.08770971 -0.39923156 -1.4958580
7 PS B1 LM 2862.0 -1.3133916 1.59437892 -0.63290631 -1.5776687
8 PS B1 LM 2862.5 -1.3099063 2.09246051 -0.84321358 -1.6980339
9 PS B1 LM 2863.0 -1.3249199 2.71506251 -0.70300873 -1.8767010
10 PS B1 LM 2863.5 -1.2930160 3.20026268 -0.58617136 -1.9904837
.. ... ... ... ... ... ... ...
Variables not shown: PE (dbl), isMarine (lgl), RELPOS (dbl)
Random Forest
141 samples
9 predictor
5 classes: 'SiSh', 'MS', 'WS', 'D', 'PS'
No pre-processing
Resampling: Cross-Validated (10 fold)
Summary of sample sizes: 123, 131, 118, 121, 102, 110, ...
Resampling results across tuning parameters:
mtry F1
2 0.4281388
11 0.3962963
21 0.3754902
F1 was used to select the optimal model using the largest value.
The final value used for the model was mtry = 2.
F1 Resample
1 0.5000000 SHANKLE
2 0.3888889 SHRIMPLIN
3 0.4347826 LUKE G U
4 0.6451613 CHURCHMAN BIBLE
5 0.0000000 NEWBY
6 0.6000000 NOLAN
[1] "-----------------------------------------------------------------------------"
[1] "B2 SH"
ALEXANDER D  CHURCHMAN BIBLE  CROSS H CATTLE  KIMZEY A  LUKE G U
          0               13               0         0        21
NEWBY  NOLAN  Recruit F9  SHANKLE  SHRIMPLIN
   10     24           0       30         28
Source: local data frame [126 x 10]
Facies Formation Depth GR ILD_log10 DeltaPHI PHIND
(fctr) (fctr) (dbl) (dbl) (dbl) (dbl) (dbl)
1 FSiS B2 SH 2868.0 -0.04983697 1.315281470 -0.8198461 -1.3247138
2 FSiS B2 SH 2868.5 0.12281933 0.868725557 -0.3758641 -0.8150425
3 FSiS B2 SH 2869.0 0.36679019 0.628272372 0.3952626 -0.1831253
4 FSiS B2 SH 2869.5 0.74722388 0.443638677 0.6523048 0.3378303
5 CSiS B2 SH 2870.0 0.69038671 0.233242141 0.4419975 0.1478790
6 CSiS B2 SH 2870.5 0.28073014 0.005670378 0.2083228 -0.3288800
7 CSiS B2 SH 2871.0 0.18689519 -0.247664227 0.2784252 -0.6373157
8 CSiS B2 SH 2871.5 0.16812821 -0.423710308 0.8626121 -0.3514484
9 CSiS B2 SH 2872.0 0.40700517 -0.526761673 1.3299615 0.2955144
10 FSiS B2 SH 2872.5 0.91210529 -0.578287355 1.6103712 0.8437402
.. ... ... ... ... ... ... ...
Variables not shown: PE (dbl), isMarine (lgl), RELPOS (dbl)
Random Forest
126 samples
9 predictor
4 classes: 'SS', 'CSiS', 'FSiS', 'D'
No pre-processing
Resampling: Cross-Validated (10 fold)
Summary of sample sizes: 98, 96, 105, 102, 116, 113, ...
Resampling results across tuning parameters:
mtry F1
2 0.4674603
11 0.4809524
21 0.4809524
F1 was used to select the optimal model using the largest value.
The final value used for the model was mtry = 11.
F1 Resample
1 0.9333333 SHANKLE
2 0.6428571 SHRIMPLIN
3 0.0000000 NOLAN
4 0.8095238 LUKE G U
5 0.0000000 CHURCHMAN BIBLE
6 0.5000000 NEWBY
[1] "-----------------------------------------------------------------------------"
[1] "B2 LM"
ALEXANDER D  CHURCHMAN BIBLE  CROSS H CATTLE  KIMZEY A  LUKE G U
          0               29               0         0        20
NEWBY  NOLAN  Recruit F9  SHANKLE  SHRIMPLIN
   26     21           0        7         16
Source: local data frame [119 x 10]
Facies Formation Depth GR ILD_log10 DeltaPHI PHIND
(fctr) (fctr) (dbl) (dbl) (dbl) (dbl) (dbl)
1 D B2 LM 2882.0 -0.5015852 -0.39794747 -1.4274004 0.7271364
2 D B2 LM 2882.5 -1.0348358 -0.25195803 -1.3572980 -1.2128118
3 D B2 LM 2883.0 -1.3956982 -0.01579866 -0.8198461 -1.7760832
4 PS B2 LM 2883.5 -1.3903362 0.37493777 -0.7030087 -1.7394094
5 PS B2 LM 2884.0 -1.3731778 0.83866891 -0.4927015 -1.6717040
6 PS B2 LM 2884.5 -1.4029369 1.38398238 -0.5394364 -1.5391142
7 PS B2 LM 2885.0 -1.4351089 1.92070824 -0.5160689 -1.4732895
8 PS B2 LM 2885.5 -1.3144640 2.19121807 -0.6329063 -1.4159280
9 PS B2 LM 2886.0 -1.1924785 2.20409949 -0.6095388 -1.3924192
10 PS B2 LM 2886.5 -1.0948902 2.10534193 -0.7497437 -1.4008824
.. ... ... ... ... ... ... ...
Variables not shown: PE (dbl), isMarine (lgl), RELPOS (dbl)
Error in {: task 1 failed - "subscript out of bounds"
Traceback:
1. train(Facies ~ ., data = subset(df_i, select = -c(Well.Name)),
. method = "rf", trControl = fitControl, metric = "F1") # at line 28-29 of file <text>
2. train.formula(Facies ~ ., data = subset(df_i, select = -c(Well.Name)),
. method = "rf", trControl = fitControl, metric = "F1")
3. train(x, y, weights = w, ...)
4. train.default(x, y, weights = w, ...)
5. nominalTrainWorkflow(x = x, y = y, wts = weights, info = trainInfo,
. method = models, ppOpts = preProcess, ctrl = trControl, lev = classLevels,
. ...)
6. foreach(iter = seq(along = resampleIndex), .combine = "c", .verbose = FALSE,
. .packages = pkgs, .errorhandling = "stop") %:% foreach(parm = 1:nrow(info$loop),
. .combine = "c", .verbose = FALSE, .packages = pkgs, .errorhandling = "stop") %op%
. { <body of caret's internal resampling loop (createModel / predictionFunction / summaryFunction dispatch), ~220 lines of deparsed foreach code, omitted> }
7. e$fun(obj, substitute(ex), parent.frame(), e$data)
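The "subscript out of bounds" failure hits the B2 LM subset (119 samples, with rare classes such as SHANKLE at only 7 rows), so a plausible cause is a resample whose held-out well lacks one of the factor levels, breaking a class-indexed lookup during summary or prediction. A defensive wrapper around the call shown in frame 1 of the traceback might look like this — a sketch, assuming df_i and fitControl as named in the traceback:

```r
library(caret)

# Drop factor levels absent from this formation's subset, so class-indexed
# lookups downstream cannot run past the end of the level set.
df_i$Facies <- droplevels(df_i$Facies)

# Keep the per-formation loop alive even if one subset cannot be fit.
fit <- tryCatch(
  train(Facies ~ ., data = subset(df_i, select = -c(Well.Name)),
        method = "rf", trControl = fitControl, metric = "F1"),
  error = function(e) {
    message("train() failed for this formation: ", conditionMessage(e))
    NULL
  }
)
```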