R/evaluation-metrics.R
mean_squared_error.Rd
Accuracy measures for evaluating ordered probability predictions.
mean_squared_error(y, predictions, use.true = FALSE)
mean_absolute_error(y, predictions, use.true = FALSE)
mean_ranked_score(y, predictions, use.true = FALSE)
classification_error(y, predictions)
The MSE, the MAE, the RPS, or the CE of the predictions.
When calling one of mean_squared_error, mean_absolute_error, or mean_ranked_score, predictions must be a matrix of predicted class probabilities, with as many rows as observations in y and as many columns as classes of y.
If use.true == FALSE, the mean squared error (MSE), the mean absolute error (MAE), and the mean ranked probability score (RPS) are computed as follows:
$$MSE = \frac{1}{n} \sum_{i = 1}^n \sum_{m = 1}^M (1 (Y_i = m) - \hat{p}_m (x))^2$$
$$MAE = \frac{1}{n} \sum_{i = 1}^n \sum_{m = 1}^M |1 (Y_i = m) - \hat{p}_m (x)|$$
$$RPS = \frac{1}{n} \sum_{i = 1}^n \frac{1}{M - 1} \sum_{m = 1}^M (1 (Y_i \leq m) - \hat{p}_m^* (x))^2$$
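Spelled out on a toy input, the three use.true == FALSE formulas reduce to a few lines of vectorised base R. The data below are made up purely for illustration; the only structural requirement is that the probability matrix has one row per observation and one column per class:

```r
## Toy example: n = 4 observations, M = 3 ordered classes (illustrative values).
y <- c(1, 2, 3, 2)
p_hat <- matrix(c(0.7, 0.2, 0.1,
                  0.2, 0.6, 0.2,
                  0.1, 0.3, 0.6,
                  0.3, 0.5, 0.2),
                nrow = 4, byrow = TRUE)
M <- ncol(p_hat)

## Indicator matrices 1(Y_i = m) and 1(Y_i <= m).
ind     <- outer(y, seq_len(M), `==`) * 1
cum_ind <- outer(y, seq_len(M), `<=`) * 1

## Cumulative predicted probabilities p*_m(x), row by row.
cum_p <- t(apply(p_hat, 1, cumsum))

mse <- mean(rowSums((ind - p_hat)^2))                # 0.255
mae <- mean(rowSums(abs(ind - p_hat)))               # 0.8
rps <- mean(rowSums((cum_ind - cum_p)^2) / (M - 1))  # 0.06
```

Because the RPS compares cumulative probabilities, a prediction that places mass on a class adjacent to the truth is penalised less than one that misses by two classes, which is what makes it suited to ordered outcomes.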
If use.true == TRUE, the MSE, the MAE, and the RPS are computed as follows (useful for simulation studies):
$$MSE = \frac{1}{n} \sum_{i = 1}^n \sum_{m = 1}^M (p_m (x) - \hat{p}_m (x))^2$$
$$MAE = \frac{1}{n} \sum_{i = 1}^n \sum_{m = 1}^M |p_m (x) - \hat{p}_m (x)|$$
$$RPS = \frac{1}{n} \sum_{i = 1}^n \frac{1}{M - 1} \sum_{m = 1}^M (p_m^* (x) - \hat{p}_m^* (x))^2$$
where:
$$p_m (x) = P(Y_i = m | X_i = x)$$
$$p_m^* (x) = P(Y_i \leq m | X_i = x)$$
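With use.true == TRUE, the indicator matrices are replaced by the true conditional probabilities, which are known in a simulation study. A minimal sketch of the same computation with made-up true and estimated probability matrices:

```r
## Made-up true and estimated probability matrices (n = 2, M = 3).
p_true <- matrix(c(0.6, 0.3, 0.1,
                   0.2, 0.5, 0.3),
                 nrow = 2, byrow = TRUE)
p_hat  <- matrix(c(0.5, 0.4, 0.1,
                   0.3, 0.4, 0.3),
                 nrow = 2, byrow = TRUE)
M <- ncol(p_true)

## MSE against the true probabilities p_m(x).
mse_true <- mean(rowSums((p_true - p_hat)^2))  # 0.02

## RPS against the true cumulative probabilities p*_m(x).
rps_true <- mean(rowSums((t(apply(p_true, 1, cumsum)) -
                          t(apply(p_hat, 1, cumsum)))^2) / (M - 1))  # 0.005
```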
## Generate synthetic data.
set.seed(1986)
data <- generate_ordered_data(100)
sample <- data$sample
Y <- sample$Y
X <- sample[, -1]
## Training-test split.
train_idx <- sample(seq_len(length(Y)), floor(length(Y) * 0.5))
Y_tr <- Y[train_idx]
X_tr <- X[train_idx, ]
Y_test <- Y[-train_idx]
X_test <- X[-train_idx, ]
## Fit ocf on training sample.
forests <- ocf(Y_tr, X_tr)
## Accuracy measures on test sample.
predictions <- predict(forests, X_test)
mean_squared_error(Y_test, predictions$probabilities)
#> [1] 0.5902776
mean_ranked_score(Y_test, predictions$probabilities)
#> [1] 0.1660277
classification_error(Y_test, predictions$classification)
#> [1] 0.48