Testing and adjusting for autocorrelation / serial correlation
Unfortunately I'm not able to provide a reproducible example, but hopefully you get the idea regardless. I am running regression analyses where the dependent variable is the DCC (dynamic conditional correlation) of a pair of return series, i.e. two stocks. I'm using dummies to represent shocks in the return series, defined as the worst 1% of observed returns. In sum: DCC = c + 1%Dummy
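For context, here is a minimal sketch with simulated stand-in data of the kind of setup described: a dummy marking the worst 1% of returns regressed against a DCC-type series. All names (ret_sp500, dcc_pair, shock_1pct, fit) are hypothetical placeholders, not taken from the actual analysis.

set.seed(1)
ret_sp500 <- rnorm(1000)                   # stand-in for a daily return series
dcc_pair  <- runif(1000)                   # stand-in for the estimated DCC series
shock_1pct <- as.numeric(ret_sp500 <= quantile(ret_sp500, 0.01))   # worst 1% of observed returns
fit <- lm(dcc_pair ~ shock_1pct)           # DCC = c + 1%Dummy
summary(fit)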
When I run the DurbinWatsonTest I get the output: Autocorrelation: 0.9987, D-W statistic: 0, p-value: 0, HA: rho != 0
- Does this just mean that there is a highly significant presence of autocorrelation?
I also tried dwtest, but that yields NA values for both the p-value and the D-W statistic.
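For reference, both tests are usually run directly on the fitted lm object; a sketch assuming the car and lmtest packages and the hypothetical fit object from above:

library(car)     # durbinWatsonTest()
library(lmtest)  # dwtest()

durbinWatsonTest(fit)   # reports the lag-1 autocorrelation, the D-W statistic and a bootstrapped p-value
dwtest(fit)             # alternative implementation of the Durbin-Watson test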
To correct for autocorrelation I used the code:
library(lmtest)    # coeftest()
library(sandwich)  # vcovHC()

spx10 = lm(bit_sp500 ~ Spx_0.1)
spx10_hc = coeftest(spx10, vcov. = vcovHC(spx10, method = "arellano", type = "HC3"))
- How can I be certain that it had any effect, as I cannot run the D-W test on spx10_hc, nor did the regression output change noticeably. Is it common that a regression with one independent variable changes only ever so slightly when adjusting for autocorrelation?
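For comparison, here is a sketch of HAC (heteroskedasticity- and autocorrelation-consistent) standard errors, which target serial correlation directly; it assumes the sandwich and lmtest packages and reuses the hypothetical fit object from above. Note that robust covariance estimators change only the standard errors and test statistics, never the coefficient estimates themselves, so the point estimates in the output are expected to stay identical.

library(sandwich)  # vcovHAC(), NeweyWest()
library(lmtest)    # coeftest()

coeftest(fit, vcov. = vcovHAC(fit))                                # Andrews-type HAC covariance
coeftest(fit, vcov. = NeweyWest(fit, lag = 4, prewhite = FALSE))   # Newey-West covariance with 4 lags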
