Package 'pEPA'

Title: Tests of Equal Predictive Accuracy for Panels of Forecasts
Description: Performs tests of equal predictive accuracy for panels of forecasts. Main references: Qu et al. (2024) <doi:10.1016/j.ijforecast.2023.08.001> and Akgun et al. (2024) <doi:10.1016/j.ijforecast.2023.02.001>.
Authors: Krzysztof Drachal [aut, cre] (Faculty of Economic Sciences, University of Warsaw, Poland)
Maintainer: Krzysztof Drachal <[email protected]>
License: GPL-3
Version: 1.2
Built: 2025-03-09 10:23:41 UTC
Source: https://github.com/kdrachal/pepa

Help Index


Computes Test for Cross-Sectional Clusters.

Description

This function computes the test of equal predictive accuracy for cross-sectional clusters. It corresponds to the C^{(1)}_{nT} statistic in the referenced paper by Akgun et al. (2024). The null hypothesis of this test is that a pair of forecasts has the same expected accuracy within each cross-sectional cluster; their predictive accuracy can differ across the clusters, but is the same within each cluster. The test is suitable for situations with cross-sectional independence.

Usage

csc.C1.test(evaluated1,evaluated2,realized,loss.type="SE",cl)

Arguments

evaluated1

same as in pool_av.test, but cross-sections are ordered rowwise

evaluated2

same as in pool_av.test, but cross-sections are ordered rowwise

realized

same as in pool_av.test, but cross-sections are ordered rowwise

loss.type

same as in pool_av.test

cl

vector of the beginning indices of rows for each pre-defined cluster – as a result, always cl[1]=1

Value

an object of class htest, a list containing:

statistic

test statistic

parameter

K, the number of cross-sectional clusters

alternative

alternative hypothesis of the test

p.value

p-value

method

name of the test

data.name

names of the tested data

References

Akgun, O., Pirotte, A., Urga, G., Yang, Z. 2024. Equal predictive ability tests based on panel data with applications to OECD and IMF forecasts. International Journal of Forecasting 40, 202–228.

See Also

pool_av.test, csc.C3.test

Examples

data(forecasts)
y <- t(observed)
# just to save time
y <- y[,1:40]
f.bsr <- matrix(NA,ncol=ncol(y),nrow=56)
f.dma <- f.bsr
# extract prices predicted by BSR rec and DMA methods
for (i in 1:56)
  {
    f.bsr[i,] <- predicted[[i]][1:40,1]
    f.dma[i,] <- predicted[[i]][1:40,9]
  }
# 2 cross-sectional clusters: energy commodities and non-energy commodities
cs.cl <- c(1,9)
t <- csc.C1.test(evaluated1=f.bsr,evaluated2=f.dma,realized=y,loss.type="SE",cl=cs.cl)
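
The returned value is a standard htest object, so its components (listed in the Value section above) can be inspected with the usual accessors. A minimal illustration:

# print the full test summary
print(t)
# individual components of the htest object
t$statistic   # value of the test statistic
t$parameter   # K, the number of cross-sectional clusters
t$p.value     # p-value of the test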

Computes Test for Cross-Sectional Clusters.

Description

This function computes the test of equal predictive accuracy for cross-sectional clusters. It corresponds to the C^{(3)}_{nT} statistic in the referenced paper by Akgun et al. (2024). The null hypothesis of this test is that a pair of forecasts has the same expected accuracy within each cross-sectional cluster; their predictive accuracy can differ across the clusters, but is the same within each cluster. The test allows for strong cross-sectional dependence.

Usage

csc.C3.test(evaluated1,evaluated2,realized,loss.type="SE",cl)

Arguments

evaluated1

same as in pool_av.test, but cross-sections are ordered rowwise

evaluated2

same as in pool_av.test, but cross-sections are ordered rowwise

realized

same as in pool_av.test, but cross-sections are ordered rowwise

loss.type

same as in pool_av.test

cl

vector of the beginning indices of rows for each pre-defined cluster – as a result, always cl[1]=1

Value

an object of class htest, a list containing:

statistic

test statistic

parameter

K, the number of cross-sectional clusters

alternative

alternative hypothesis of the test

p.value

p-value

method

name of the test

data.name

names of the tested data

References

Akgun, O., Pirotte, A., Urga, G., Yang, Z. 2024. Equal predictive ability tests based on panel data with applications to OECD and IMF forecasts. International Journal of Forecasting 40, 202–228.

See Also

pool_av.test, csc.C1.test

Examples

data(forecasts)
y <- t(observed)
# just to reduce computation time restrict to energy commodities only
y <- y[1:8,]
f.bsr <- matrix(NA,ncol=ncol(y),nrow=8)
f.dma <- f.bsr
# extract prices predicted by BSR rec and DMA methods
for (i in 1:8)
  {
    f.bsr[i,] <- predicted[[i]][,1]
    f.dma[i,] <- predicted[[i]][,9]
  }
# 2 cross-sectional clusters: crude oil and other energy commodities
cs.cl <- c(1,4)
t <- csc.C3.test(evaluated1=f.bsr,evaluated2=f.dma,realized=y,loss.type="SE",cl=cs.cl)

Computes Test for Cross-Sectional Clusters.

Description

This function computes the test of equal predictive accuracy for cross-sectional clusters. The null hypothesis of this test is that a pair of forecasts has the same expected accuracy within each cross-sectional cluster; their predictive accuracy can differ across the clusters, but is the same within each cluster. The test is suitable if either K ≥ 2 and the significance level is ≤ 0.08326, or 2 ≤ K ≤ 14 and the significance level is ≤ 0.1, or K ∈ {2, 3} and the significance level is ≤ 0.2, where K denotes the number of cross-sectional clusters.

Usage

csc.test(evaluated1,evaluated2,realized,loss.type="SE",cl,dc=FALSE)

Arguments

evaluated1

same as in pool_av.test, but cross-sections are ordered rowwise

evaluated2

same as in pool_av.test, but cross-sections are ordered rowwise

realized

same as in pool_av.test, but cross-sections are ordered rowwise

loss.type

same as in pool_av.test

cl

vector of the beginning indices of rows for each pre-defined cluster – as a result, always cl[1]=1

dc

logical indicating whether to apply decorrelating clusters; if not specified, dc=FALSE is used

Value

an object of class htest, a list containing:

statistic

test statistic

parameter

K, the number of cross-sectional clusters

alternative

alternative hypothesis of the test

p.value

p-value

method

name of the test

data.name

names of the tested data

References

Qu, R., Timmermann, A., Zhu, Y. 2024. Comparing forecasting performance with panel data. International Journal of Forecasting 40, 918–941.

See Also

pool_av.test

Examples

data(forecasts)
y <- t(observed)
f.bsr <- matrix(NA,ncol=ncol(y),nrow=56)
f.dma <- f.bsr
# extract prices predicted by BSR rec and DMA methods
for (i in 1:56)
  {
    f.bsr[i,] <- predicted[[i]][,1]
    f.dma[i,] <- predicted[[i]][,9]
  }
# 2 cross-sectional clusters: energy commodities and non-energy commodities
cs.cl <- c(1,9)
t <- csc.test(evaluated1=f.bsr,evaluated2=f.dma,realized=y,loss.type="SE",cl=cs.cl)
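
Since csc.test also accepts the dc argument described above, the same comparison can be rerun with decorrelating clusters applied. A sketch on the same data:

# the same test with decorrelating clusters applied (dc=TRUE)
t.dc <- csc.test(evaluated1=f.bsr,evaluated2=f.dma,realized=y,loss.type="SE",cl=cs.cl,dc=TRUE)
t.dc$p.value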

Sample Panel of Commodities Spot Prices.

Description

Observed spot prices of various commodities.

Usage

data(forecasts)

Format

observed is a matrix object whose columns correspond to spot prices of the 56 selected commodities.

Details

The data cover the period between 1996 and 2021 at monthly frequency. Variable names are the same as in the paper by Drachal and Pawłowski (2024). The observed prices were taken from The World Bank (2022).

References

Drachal, K., Pawłowski, M. 2024. Forecasting selected commodities' prices with the Bayesian symbolic regression. International Journal of Financial Studies 12, 34, doi:10.3390/ijfs12020034

The World Bank. 2022. Commodity Markets. https://www.worldbank.org/en/research/commodity-markets

See Also

predicted

Examples

data(forecasts)
# WTI prices
t1 <- observed[,3]
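
The panel dimensions and the date labels can be checked directly. A minimal sketch (the exact dimension names are those stored in the dataset):

# 56 commodities in columns, monthly observations in rows
dim(observed)
# dates are kept as row names (used, e.g., to define time clusters in tc.test)
head(rownames(observed))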

Computes Test for Overall Equal Predictive Ability.

Description

This function computes the test of equal predictive accuracy for the pooled average. It corresponds to the S^{(1)}_{nT} statistic in the referenced paper by Akgun et al. (2024). The null hypothesis of this test is that the pooled average loss is equal in expectation for a pair of forecasts from the two considered methods. The alternative is that the differences do not average out across the cross-sectional and time-series dimensions. The test is suitable for situations with cross-sectional independence.

Usage

pool_av.S1.test(evaluated1,evaluated2,realized,loss.type="SE")

Arguments

evaluated1

same as in pool_av.test

evaluated2

same as in pool_av.test

realized

same as in pool_av.test

loss.type

same as in pool_av.test

Value

an object of class htest, a list containing:

statistic

test statistic

alternative

alternative hypothesis of the test

p.value

p-value

method

name of the test

data.name

names of the tested data

References

Akgun, O., Pirotte, A., Urga, G., Yang, Z. 2024. Equal predictive ability tests based on panel data with applications to OECD and IMF forecasts. International Journal of Forecasting 40, 202–228.

See Also

pool_av.test, pool_av.S3.test

Examples

data(forecasts)
y <- t(observed)
f.bsr <- matrix(NA,ncol=ncol(y),nrow=56)
f.dma <- f.bsr
# extract prices predicted by BSR rec and DMA methods
for (i in 1:56)
  {
    f.bsr[i,] <- predicted[[i]][,1]
    f.dma[i,] <- predicted[[i]][,9]
  }
t <- pool_av.S1.test(evaluated1=f.bsr,evaluated2=f.dma,realized=y,loss.type="SE")

Computes Test for Overall Equal Predictive Ability.

Description

This function computes the test of equal predictive accuracy for the pooled average. It corresponds to the S^{(3)}_{nT} statistic in the referenced paper by Akgun et al. (2024). The null hypothesis of this test is that the pooled average loss is equal in expectation for a pair of forecasts from the two considered methods. The alternative is that the differences do not average out across the cross-sectional and time-series dimensions. The test allows for strong cross-sectional dependence.

Usage

pool_av.S3.test(evaluated1,evaluated2,realized,loss.type="SE")

Arguments

evaluated1

same as in pool_av.test

evaluated2

same as in pool_av.test

realized

same as in pool_av.test

loss.type

same as in pool_av.test

Value

an object of class htest, a list containing:

statistic

test statistic

alternative

alternative hypothesis of the test

p.value

p-value

method

name of the test

data.name

names of the tested data

References

Akgun, O., Pirotte, A., Urga, G., Yang, Z. 2024. Equal predictive ability tests based on panel data with applications to OECD and IMF forecasts. International Journal of Forecasting 40, 202–228.

See Also

pool_av.test, pool_av.S1.test

Examples

data(forecasts)
y <- t(observed)
# just to reduce computation time shorten time-series
y <- y[,1:40]
f.bsr <- matrix(NA,ncol=ncol(y),nrow=56)
f.dma <- f.bsr
# extract prices predicted by BSR rec and DMA methods
for (i in 1:56)
  {
    f.bsr[i,] <- predicted[[i]][1:40,1]
    f.dma[i,] <- predicted[[i]][1:40,9]
  }
t <- pool_av.S3.test(evaluated1=f.bsr,evaluated2=f.dma,realized=y,loss.type="SE")

Computes Test for the Pooled Average.

Description

This function computes the test of equal predictive accuracy for the pooled average. The null hypothesis of this test is that the pooled average loss is equal in expectation for a pair of forecasts from the two considered methods. The alternative hypothesis is that the differences do not average out across the cross-sectional and time-series dimensions.

Usage

pool_av.test(evaluated1,evaluated2,realized,loss.type="SE",J=NULL)

Arguments

evaluated1

matrix of forecasts from the first method, cross-sections are ordered by rows, and time by columns

evaluated2

matrix of forecasts from the second method, cross-sections are ordered by rows, and time by columns

realized

matrix of the observed values, cross-sections are ordered by rows, and time by columns

loss.type

a method to compute the loss function: loss.type="SE" applies squared errors; loss.type="AE" – absolute errors; loss.type="SPE" – squared proportional errors (useful if errors are heteroskedastic); loss.type="ASE" – absolute scaled errors; if loss.type is specified as a numeric value, the loss function exp(loss.type*errors)-1-loss.type*errors is applied (useful when it is more costly to underpredict the realized value than to overpredict it); if not specified, loss.type="SE" is used (a short illustrative sketch follows the argument list below)

J

numeric maximum lag length; if not specified, J=round(T^(1/3)) is used, where T=ncol(realized)
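
A short illustrative sketch of the loss functions described above; the helper loss() below is hypothetical and only mirrors the verbal description of loss.type (the SPE and ASE losses are computed internally by the package and omitted here):

# hypothetical helper: e denotes a vector of forecast errors
loss <- function(e, loss.type="SE") {
  if (is.numeric(loss.type)) {
    # numeric loss.type: exp(loss.type*errors)-1-loss.type*errors
    return(exp(loss.type*e) - 1 - loss.type*e)
  }
  switch(loss.type,
         SE = e^2,     # squared errors
         AE = abs(e))  # absolute errors
}
# default maximum lag length when J is not specified
T.len <- 250                    # stands for T=ncol(realized), an assumed value here
J.default <- round(T.len^(1/3)) # i.e. J=round(T^(1/3))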

Value

an object of class htest, a list containing:

statistic

test statistic

parameter

J, maximum lag length

alternative

alternative hypothesis of the test

p.value

p-value

method

name of the test

data.name

names of the tested data

References

Hyndman, R.J., Koehler, A.B. 2006. Another look at measures of forecast accuracy. International Journal of Forecasting 22, 679–688.

Qu, R., Timmermann, A., Zhu, Y. 2024. Comparing forecasting performance with panel data. International Journal of Forecasting 40, 918–941.

Taylor, S.J. 2005. Asset Price Dynamics, Volatility, and Prediction. Princeton University Press.

Triacca, U. 2024. Comparing Predictive Accuracy of Two Forecasts. https://www.lem.sssup.it/phd/documents/Lesson19.pdf.

Examples

data(forecasts)
y <- t(observed)
f.bsr <- matrix(NA,ncol=ncol(y),nrow=56)
f.dma <- f.bsr
# extract prices predicted by BSR rec and DMA methods
for (i in 1:56)
  {
    f.bsr[i,] <- predicted[[i]][,1]
    f.dma[i,] <- predicted[[i]][,9]
  }
t <- pool_av.test(evaluated1=f.bsr,evaluated2=f.dma,realized=y,loss.type="SE")
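
The documented loss.type and J arguments can also be set explicitly, for instance as follows (a sketch on the same data, with arbitrarily chosen values):

# a numeric loss.type applies exp(loss.type*errors)-1-loss.type*errors,
# while J fixes the maximum lag length instead of the default round(T^(1/3))
t.exp <- pool_av.test(evaluated1=f.bsr,evaluated2=f.dma,realized=y,loss.type=1,J=4)
t.exp$p.value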

Sample Panels of Commodities Spot Prices Forecasts.

Description

Forecasts obtained from various methods applied to various commodities prices.

Usage

data(forecasts)

Format

predicted is a list of forecasts of spot prices of the 56 selected commodities. For each commodity, a matrix of forecasts generated by various methods is provided; columns correspond to the methods.

Details

The forecasts were taken from Drachal and Pawłowski (2024). They cover the period between 1996 and 2021 at monthly frequency. Variable and method names are the same as in that paper, where they are described in detail.

References

Drachal, K., Pawłowski, M. 2024. Forecasting selected commodities' prices with the Bayesian symbolic regression. International Journal of Financial Studies 12, 34, doi:10.3390/ijfs12020034

See Also

observed

Examples

data(forecasts)
# WTI prices predicted by BSR rec method
t2 <- predicted[[3]][,1]
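
The structure of predicted can be inspected in the usual way. A minimal sketch (method labels, if stored, appear as column names of each matrix):

# one matrix of forecasts per commodity
length(predicted)
# forecasts for the 3rd commodity (WTI): rows correspond to time, columns to methods
dim(predicted[[3]])
# method labels, if stored as column names
colnames(predicted[[3]])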

Computes Test for Time Clusters.

Description

This function computes the test of equal predictive accuracy for time clusters. The null hypothesis of this test is that equal predictive accuracy of the two methods holds within each of the time clusters. The test is suitable if either K ≥ 2 and the significance level is ≤ 0.08326, or 2 ≤ K ≤ 14 and the significance level is ≤ 0.1, or K ∈ {2, 3} and the significance level is ≤ 0.2, where K denotes the number of time clusters.

Usage

tc.test(evaluated1,evaluated2,realized,loss.type="SE",cl)

Arguments

evaluated1

same as in pool_av.test

evaluated2

same as in pool_av.test

realized

same as in pool_av.test

loss.type

same as in pool_av.test

cl

vector of the beginning indices of each pre-defined block of time – as a result, always cl[1]=1

Value

an object of class htest, a list containing:

statistic

test statistic

parameter

K, the number of time clusters

alternative

alternative hypothesis of the test

p.value

p-value

method

name of the test

data.name

names of the tested data

References

Qu, R., Timmermann, A., Zhu, Y. 2024. Comparing forecasting performance with panel data. International Journal of Forecasting 40, 918–941.

See Also

pool_av.test

Examples

data(forecasts)
y <- t(observed)
f.bsr <- matrix(NA,ncol=ncol(y),nrow=56)
f.dma <- f.bsr
# extract prices predicted by BSR rec and DMA methods
for (i in 1:56)
  {
    f.bsr[i,] <- predicted[[i]][,1]
    f.dma[i,] <- predicted[[i]][,9]
  }
# 3 time clusters: Jun 1996 -- Nov 2007, Dec 2007 -- Jun 2009, Jul 2009 -- Aug 2021
# rownames(observed)[1] 
# rownames(observed)[139] 
# rownames(observed)[158] 
t.cl <- c(1,139,158)
t <- tc.test(evaluated1=f.bsr,evaluated2=f.dma,realized=y,loss.type="SE",cl=t.cl)
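
The cluster boundaries defined above can be verified against the date labels stored as row names of observed (illustration only):

# dates at which each of the 3 time clusters begins
rownames(observed)[t.cl]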