I was flipping through my trusty copy of Hamilton’s Time Series Analysis when I stumbled across the title “Limit Theorems for Serially Dependent Observations.” I was indeed looking for an appropriate central limit theorem, but I realized I did not actually know a formal definition of “serially dependent.” In fact, I am fairly certain that Hamilton does not define the term before he uses it! Usual appeals to Google and Wikipedia failed, so I pieced together a definition from some other books:
Definition. A sequence {x_t} is serially independent if x_t is independent of L^k x_t for all k ≠ 0. Here, L denotes the lag operator, so L^k x_t = x_{t-k}. A sequence that is not serially independent is said to be serially dependent.
This definition comes from Davidson’s Stochastic Limit Theory, a great book on time series. The intuition is that in a serially independent sequence, the random variables at different times are mutually independent: however we group the observations across time, the variables in each group are independent of one another. That is, any dependence on time is only superficial.
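To make the distinction concrete, here is a small sketch (my own illustration, not from Davidson) that contrasts a serially independent sequence (i.i.d. Gaussian noise) with a serially dependent one (an AR(1) process). The lag-1 sample autocorrelation is used as a crude diagnostic; note it only detects linear dependence, so a value near zero does not by itself prove serial independence.

```python
import numpy as np

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation: a rough check for linear serial dependence."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

rng = np.random.default_rng(0)
n = 10_000

# Serially independent: i.i.d. standard normal draws.
iid = rng.standard_normal(n)

# Serially dependent: AR(1) process x_t = 0.8 * x_{t-1} + e_t,
# so x_t is clearly not independent of L^1 x_t = x_{t-1}.
ar1 = np.empty(n)
ar1[0] = rng.standard_normal()
for t in range(1, n):
    ar1[t] = 0.8 * ar1[t - 1] + rng.standard_normal()

print(lag1_autocorr(iid))  # close to 0
print(lag1_autocorr(ar1))  # close to 0.8, the AR coefficient
```

For the i.i.d. sequence, the sample autocorrelation at every lag converges to zero; for the AR(1) process, the lag-k autocorrelation converges to 0.8^k, which is exactly the sort of serial dependence that the limit theorems in Hamilton’s chapter are designed to accommodate.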
Note: The above article is one of a series of topic summaries I am writing to introduce various topics that are not explained particularly well by online resources such as Wikipedia. I’m tagging all of these posts as “Wikipedia.” Please feel free to adapt these summaries for any use, with citation.