Title: | Tests of Equal Predictive Accuracy for Panels of Forecasts |
---|---|
Description: | Performs tests of equal predictive accuracy for panels of forecasts. Main references: Qu et al. (2024) <doi:10.1016/j.ijforecast.2023.08.001> and Akgun et al. (2024) <doi:10.1016/j.ijforecast.2023.02.001>. |
Authors: | Krzysztof Drachal [aut, cre] (Faculty of Economic Sciences, University of Warsaw, Poland) |
Maintainer: | Krzysztof Drachal <[email protected]> |
License: | GPL-3 |
Version: | 1.2 |
Built: | 2025-03-09 10:23:41 UTC |
Source: | https://github.com/kdrachal/pepa |
This function computes the test of equal predictive accuracy for cross-sectional clusters. It corresponds to the C1 statistic in the referenced paper by Akgun et al. (2024). The null hypothesis is that a pair of forecasts has the same expected accuracy within each cross-sectional cluster; the predictive accuracy may differ across clusters, but is the same within each cluster. The test is suitable for situations with cross-sectional independence.
csc.C1.test(evaluated1,evaluated2,realized,loss.type="SE",cl)
evaluated1 | same as in pool_av.test |
evaluated2 | same as in pool_av.test |
realized | same as in pool_av.test |
loss.type | same as in pool_av.test |
cl | a vector of indices indicating the first member of each cross-sectional cluster |
An object of class htest, being a list of:
statistic | test statistic |
parameter | |
alternative | alternative hypothesis of the test |
p.value | p-value |
method | name of the test |
data.name | names of the tested data |
Akgun, O., Pirotte, A., Urga, G., Yang, Z. 2024. Equal predictive ability tests based on panel data with applications to OECD and IMF forecasts. International Journal of Forecasting 40, 202–228.
data(forecasts)
y <- t(observed)
# just to save time
y <- y[,1:40]
f.bsr <- matrix(NA,ncol=ncol(y),nrow=56)
f.dma <- f.bsr
# extract prices predicted by BSR rec and DMA methods
for (i in 1:56) {
  f.bsr[i,] <- predicted[[i]][1:40,1]
  f.dma[i,] <- predicted[[i]][1:40,9]
}
# 2 cross-sectional clusters: energy commodities and non-energy commodities
cs.cl <- c(1,9)
t <- csc.C1.test(evaluated1=f.bsr,evaluated2=f.dma,realized=y,loss.type="SE",cl=cs.cl)
This function computes the test of equal predictive accuracy for cross-sectional clusters. It corresponds to the C3 statistic in the referenced paper by Akgun et al. (2024). The null hypothesis is that a pair of forecasts has the same expected accuracy within each cross-sectional cluster; the predictive accuracy may differ across clusters, but is the same within each cluster. The test allows for strong cross-sectional dependence.
csc.C3.test(evaluated1,evaluated2,realized,loss.type="SE",cl)
evaluated1 | same as in pool_av.test |
evaluated2 | same as in pool_av.test |
realized | same as in pool_av.test |
loss.type | same as in pool_av.test |
cl | a vector of indices indicating the first member of each cross-sectional cluster |
An object of class htest, being a list of:
statistic | test statistic |
parameter | |
alternative | alternative hypothesis of the test |
p.value | p-value |
method | name of the test |
data.name | names of the tested data |
Akgun, O., Pirotte, A., Urga, G., Yang, Z. 2024. Equal predictive ability tests based on panel data with applications to OECD and IMF forecasts. International Journal of Forecasting 40, 202–228.
data(forecasts)
y <- t(observed)
# just to reduce computation time, restrict to energy commodities only
y <- y[1:8,]
f.bsr <- matrix(NA,ncol=ncol(y),nrow=8)
f.dma <- f.bsr
# extract prices predicted by BSR rec and DMA methods
for (i in 1:8) {
  f.bsr[i,] <- predicted[[i]][,1]
  f.dma[i,] <- predicted[[i]][,9]
}
# 2 cross-sectional clusters: crude oil and other energy commodities
cs.cl <- c(1,4)
t <- csc.C3.test(evaluated1=f.bsr,evaluated2=f.dma,realized=y,loss.type="SE",cl=cs.cl)
This function computes the test of equal predictive accuracy for cross-sectional clusters. The null hypothesis is that a pair of forecasts has the same expected accuracy within each cross-sectional cluster; the predictive accuracy may differ across clusters, but is the same within each cluster. The test is valid only for certain combinations of the number of clusters and the significance level (the admissible significance level depends on the number of clusters); see Qu et al. (2024) for the exact conditions.
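The role of the cl argument can be illustrated with a small base-R sketch. Assuming, as the examples in this manual suggest, that cl holds the index of the first member of each cluster, cluster membership for all units can be recovered as follows; cluster_membership is a hypothetical helper for illustration, not part of the package:

```r
# Hypothetical helper (not part of pepa): recover cluster membership
# from a vector of first indices, as in the examples where cl = c(1,9)
# splits 56 commodities into energy (units 1-8) and non-energy (9-56).
cluster_membership <- function(cl, n) {
  # findInterval() assigns each unit the index of the cluster whose
  # first member is the largest first index not exceeding it
  findInterval(seq_len(n), cl)
}
m <- cluster_membership(c(1, 9), 56)
table(m)  # cluster 1 has 8 units, cluster 2 has 48 units
```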
csc.test(evaluated1,evaluated2,realized,loss.type="SE",cl,dc=FALSE)
evaluated1 | same as in pool_av.test |
evaluated2 | same as in pool_av.test |
realized | same as in pool_av.test |
loss.type | same as in pool_av.test |
cl | a vector of indices indicating the first member of each cross-sectional cluster |
dc | |
An object of class htest, being a list of:
statistic | test statistic |
parameter | |
alternative | alternative hypothesis of the test |
p.value | p-value |
method | name of the test |
data.name | names of the tested data |
Qu, R., Timmermann, A., Zhu, Y. 2024. Comparing forecasting performance with panel data. International Journal of Forecasting 40, 918–941.
data(forecasts)
y <- t(observed)
f.bsr <- matrix(NA,ncol=ncol(y),nrow=56)
f.dma <- f.bsr
# extract prices predicted by BSR rec and DMA methods
for (i in 1:56) {
  f.bsr[i,] <- predicted[[i]][,1]
  f.dma[i,] <- predicted[[i]][,9]
}
# 2 cross-sectional clusters: energy commodities and non-energy commodities
cs.cl <- c(1,9)
t <- csc.test(evaluated1=f.bsr,evaluated2=f.dma,realized=y,loss.type="SE",cl=cs.cl)
Observed spot prices of various commodities.
data(forecasts)
observed is a matrix object whose columns correspond to the spot prices of the 56 selected commodities. The data cover the period between 1996 and 2021 at monthly frequency. Variable names are the same as in the paper by Drachal and Pawłowski (2024). The observed prices were taken from The World Bank (2022).
Drachal, K., Pawłowski, M. 2024. Forecasting selected commodities' prices with the Bayesian symbolic regression. International Journal of Financial Studies 12, 34, doi:10.3390/ijfs12020034
The World Bank. 2022. Commodity Markets. https://www.worldbank.org/en/research/commodity-markets
data(forecasts)
# WTI prices
t1 <- observed[,3]
This function computes the test of equal predictive accuracy for the pooled average. It corresponds to the S1 statistic in the referenced paper by Akgun et al. (2024). The null hypothesis is that the pooled average losses of the two forecasting methods are equal in expectation. The alternative is that the loss differences do not average out across the cross-sectional and time-series dimensions. The test is suitable for situations with cross-sectional independence.
pool_av.S1.test(evaluated1,evaluated2,realized,loss.type="SE")
evaluated1 | same as in pool_av.test |
evaluated2 | same as in pool_av.test |
realized | same as in pool_av.test |
loss.type | same as in pool_av.test |
An object of class htest, being a list of:
statistic | test statistic |
alternative | alternative hypothesis of the test |
p.value | p-value |
method | name of the test |
data.name | names of the tested data |
Akgun, O., Pirotte, A., Urga, G., Yang, Z. 2024. Equal predictive ability tests based on panel data with applications to OECD and IMF forecasts. International Journal of Forecasting 40, 202–228.
data(forecasts)
y <- t(observed)
f.bsr <- matrix(NA,ncol=ncol(y),nrow=56)
f.dma <- f.bsr
# extract prices predicted by BSR rec and DMA methods
for (i in 1:56) {
  f.bsr[i,] <- predicted[[i]][,1]
  f.dma[i,] <- predicted[[i]][,9]
}
t <- pool_av.S1.test(evaluated1=f.bsr,evaluated2=f.dma,realized=y,loss.type="SE")
This function computes the test of equal predictive accuracy for the pooled average. It corresponds to the S3 statistic in the referenced paper by Akgun et al. (2024). The null hypothesis is that the pooled average losses of the two forecasting methods are equal in expectation. The alternative is that the loss differences do not average out across the cross-sectional and time-series dimensions. The test allows for strong cross-sectional dependence.
pool_av.S3.test(evaluated1,evaluated2,realized,loss.type="SE")
evaluated1 | same as in pool_av.test |
evaluated2 | same as in pool_av.test |
realized | same as in pool_av.test |
loss.type | same as in pool_av.test |
An object of class htest, being a list of:
statistic | test statistic |
alternative | alternative hypothesis of the test |
p.value | p-value |
method | name of the test |
data.name | names of the tested data |
Akgun, O., Pirotte, A., Urga, G., Yang, Z. 2024. Equal predictive ability tests based on panel data with applications to OECD and IMF forecasts. International Journal of Forecasting 40, 202–228.
data(forecasts)
y <- t(observed)
# just to reduce computation time, shorten the time series
y <- y[,1:40]
f.bsr <- matrix(NA,ncol=ncol(y),nrow=56)
f.dma <- f.bsr
# extract prices predicted by BSR rec and DMA methods
for (i in 1:56) {
  f.bsr[i,] <- predicted[[i]][1:40,1]
  f.dma[i,] <- predicted[[i]][1:40,9]
}
t <- pool_av.S3.test(evaluated1=f.bsr,evaluated2=f.dma,realized=y,loss.type="SE")
This function computes the test of equal predictive accuracy for the pooled average. The null hypothesis is that the pooled average losses of the two forecasting methods are equal in expectation. The alternative is that the loss differences do not average out across the cross-sectional and time-series dimensions.
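To make the pooled-average idea concrete, here is a minimal self-contained sketch of a DM-type statistic computed on pooled loss differentials, assuming squared-error loss. This is an illustration only, not the package's implementation: the serial and cross-sectional dependence corrections used by the actual tests are omitted, and all data are simulated.

```r
# Illustrative sketch only (not pepa's implementation): pool the loss
# differentials d[i,t] = L1[i,t] - L2[i,t] over all units and periods,
# then compare their mean to zero with a naive t-type statistic.
set.seed(1)
n <- 5; TT <- 100
realized   <- matrix(rnorm(n * TT), nrow = n)
evaluated1 <- realized + matrix(rnorm(n * TT, sd = 1.0), nrow = n)
evaluated2 <- realized + matrix(rnorm(n * TT, sd = 1.2), nrow = n)
# squared-error loss, as with loss.type="SE"
d <- (evaluated1 - realized)^2 - (evaluated2 - realized)^2
stat <- mean(d) / sqrt(var(as.vector(d)) / (n * TT))
p.value <- 2 * pnorm(-abs(stat))
```

Here a large negative statistic favors the first method; the actual pool_av.test additionally returns an htest object with the fields listed below.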
pool_av.test(evaluated1,evaluated2,realized,loss.type="SE",J=NULL)
evaluated1 | a matrix of forecasts from the first method: rows correspond to cross-sectional units and columns to time periods |
evaluated2 | a matrix of forecasts from the second method, in the same layout as evaluated1 |
realized | a matrix of the realized (observed) values, in the same layout as evaluated1 |
loss.type | a method to compute the loss function; the default, loss.type="SE", applies squared errors |
J | |
An object of class htest, being a list of:
statistic | test statistic |
parameter | |
alternative | alternative hypothesis of the test |
p.value | p-value |
method | name of the test |
data.name | names of the tested data |
Hyndman, R.J., Koehler, A.B. 2006. Another look at measures of forecast accuracy. International Journal of Forecasting 22, 679–688.
Qu, R., Timmermann, A., Zhu, Y. 2024. Comparing forecasting performance with panel data. International Journal of Forecasting 40, 918–941.
Taylor, S. J., 2005. Asset Price Dynamics, Volatility, and Prediction, Princeton University Press.
Triacca, U., 2024. Comparing Predictive Accuracy of Two Forecasts, https://www.lem.sssup.it/phd/documents/Lesson19.pdf.
data(forecasts)
y <- t(observed)
f.bsr <- matrix(NA,ncol=ncol(y),nrow=56)
f.dma <- f.bsr
# extract prices predicted by BSR rec and DMA methods
for (i in 1:56) {
  f.bsr[i,] <- predicted[[i]][,1]
  f.dma[i,] <- predicted[[i]][,9]
}
t <- pool_av.test(evaluated1=f.bsr,evaluated2=f.dma,realized=y,loss.type="SE")
Forecasts obtained from various methods applied to various commodities prices.
data(forecasts)
predicted is a list of forecasts of the spot prices of the 56 selected commodities. For each commodity, a matrix of forecasts generated by various methods is provided; columns correspond to the methods. The forecasts were taken from Drachal and Pawłowski (2024). They cover the period between 1996 and 2021 at monthly frequency. Variable and method names are the same as in that paper, where they are described in detail.
Drachal, K., Pawłowski, M. 2024. Forecasting selected commodities' prices with the Bayesian symbolic regression. International Journal of Financial Studies 12, 34, doi:10.3390/ijfs12020034
data(forecasts)
# WTI prices predicted by BSR rec method
t2 <- predicted[[3]][,1]
This function computes the test of equal predictive accuracy for time clusters. The null hypothesis is that equal predictive accuracy of the two methods holds within each of the time clusters. The test is valid only for certain combinations of the number of time clusters and the significance level (the admissible significance level depends on the number of time clusters); see Qu et al. (2024) for the exact conditions.
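The flavor of the time-cluster approach can be sketched in a few lines of base R: average the loss differentials within each time cluster, then run a simple t-test across the cluster means. This is a conceptual illustration in the spirit of Qu et al. (2024), not the package's exact statistic; the interpretation of cl as the first period of each cluster follows the example below, and the loss differentials are simulated.

```r
# Conceptual sketch only (not pepa's exact statistic): a t-test across
# the within-cluster means of the loss differentials.
set.seed(2)
TT <- 200
d  <- rnorm(TT)            # loss differentials over time (toy data)
cl <- c(1, 81, 141)        # first period of each of 3 time clusters
membership <- findInterval(seq_len(TT), cl)
cluster_means <- tapply(d, membership, mean)
q <- length(cluster_means) # number of time clusters
stat <- mean(cluster_means) / (sd(cluster_means) / sqrt(q))
p.value <- 2 * pt(-abs(stat), df = q - 1)
```

Averaging within clusters before testing is what makes the validity of the procedure depend on the number of clusters and the significance level, as noted above.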
tc.test(evaluated1,evaluated2,realized,loss.type="SE",cl)
evaluated1 | same as in pool_av.test |
evaluated2 | same as in pool_av.test |
realized | same as in pool_av.test |
loss.type | same as in pool_av.test |
cl | a vector of indices indicating the first period of each time cluster |
An object of class htest, being a list of:
statistic | test statistic |
parameter | |
alternative | alternative hypothesis of the test |
p.value | p-value |
method | name of the test |
data.name | names of the tested data |
Qu, R., Timmermann, A., Zhu, Y. 2024. Comparing forecasting performance with panel data. International Journal of Forecasting 40, 918–941.
data(forecasts)
y <- t(observed)
f.bsr <- matrix(NA,ncol=ncol(y),nrow=56)
f.dma <- f.bsr
# extract prices predicted by BSR rec and DMA methods
for (i in 1:56) {
  f.bsr[i,] <- predicted[[i]][,1]
  f.dma[i,] <- predicted[[i]][,9]
}
# 3 time clusters: Jun 1996 -- Nov 2007, Dec 2007 -- Jun 2009, Jul 2009 -- Aug 2021
# rownames(observed)[1]
# rownames(observed)[139]
# rownames(observed)[158]
t.cl <- c(1,139,158)
t <- tc.test(evaluated1=f.bsr,evaluated2=f.dma,realized=y,loss.type="SE",cl=t.cl)