107 changes: 43 additions & 64 deletions lectures/additive_functionals.md
In addition to what's in Anaconda, this lecture will need the following libraries.

## Overview

Many economic time series display persistent growth that prevents them from being asymptotically stationary and ergodic.

For example, outputs, prices, and dividends typically display irregular but persistent growth.

Asymptotic stationarity and ergodicity are key assumptions needed to make it possible to learn by applying statistical methods.

But there are good ways to model time series that have persistent growth that still enable statistical learning based on a law of large numbers for an asymptotically stationary and ergodic process.

Thus, {cite}`Hansen_2012_Eca` described two classes of time series models that accommodate growth.

They are

1. **additive functionals**, which display random "arithmetic" growth
1. **multiplicative functionals**, which display random "geometric" growth

These two classes of processes are closely connected.

If a process $\{y_t\}$ is an additive functional and $\phi_t = \exp(y_t)$, then we call $\{\phi_t\}$ a **multiplicative functional**.

In this lecture, we describe both additive functionals and multiplicative functionals.

We also describe and compute decompositions of additive and multiplicative processes into four components:

1. a **constant**
1. a **trend** component
1. an asymptotically **stationary** component
1. a **martingale**

We describe how to construct, simulate, and interpret these components.

More details about these concepts and algorithms can be found in Hansen {cite}`Hansen_2012_Eca` and Hansen and Sargent {cite}`Hans_Sarg_book`.

## A particular additive functional

This lecture focuses on a subclass of these: a scalar process $\{y_t\}_{t=0}^\infty$ whose increments are driven by a Gaussian vector autoregression.

Our special additive functional displays interesting time series behavior while also being easy to construct, simulate, and analyze by using linear state-space tools.

We construct our additive functional from two pieces, the first of which is a **first-order vector autoregression** (VAR).

```{math}
:label: old1_additive_functionals
x_{t+1} = A x_t + B z_{t+1}
```

Here

* $x_t$ is an $n \times 1$ vector,
* $z_{t+1} \sim {\cal N}(0, I)$ is an $m \times 1$ IID Gaussian shock,
* $A$ is a stable $n \times n$ matrix,
* $B$ is an $n \times m$ matrix, and
* $x_0 \sim {\cal N}(\mu_0, \Sigma_0)$ is a random initial condition for $x$

The second piece is an equation that expresses increments of $\{y_t\}_{t=0}^\infty$ as linear functions of a scalar constant $\nu$, the vector $x_t$, and the same Gaussian vector $z_{t+1}$ that appears in the VAR {eq}`old1_additive_functionals`.

In particular,

```{math}
:label: old2_additive_functionals

y_{t+1} - y_{t} = \nu + D x_{t} + F z_{t+1}
```

Here $y_0 \sim {\cal N}(\mu_{y0}, \Sigma_{y0})$ is a random initial condition for $y$.

The nonstationary random process $\{y_t\}_{t=0}^\infty$ displays systematic but random *arithmetic growth*.
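To fix ideas, here is a minimal simulation sketch of the two pieces. All matrices and parameter values below are hypothetical illustrations, not the lecture's choices.

```{code-cell} ipython3
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: n = 2 states, m = 2 shocks
n, m = 2, 2
A = np.array([[0.8, 0.1], [0.0, 0.7]])   # stable VAR matrix
B = np.eye(n)
D = np.array([[0.5, 0.3]])               # 1 x n, so D @ x_t is scalar
F = np.array([[0.2, 0.0]])               # 1 x m
nu = 0.01

T = 200
x = np.zeros(n)          # x_0 (deterministic here for simplicity)
y = 0.0                  # y_0
ys = [y]
for _ in range(T):
    z = rng.standard_normal(m)
    y = y + nu + (D @ x).item() + (F @ z).item()   # increment equation
    x = A @ x + B @ z                              # VAR equation
    ys.append(y)
ys = np.array(ys)
```

The drift $\nu > 0$ makes the simulated path grow on average, while $D x_t$ and $F z_{t+1}$ make the growth irregular.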

### Linear state-space representation

A convenient way to represent our additive functional is to use a [linear state space system](https://python-intro.quantecon.org/linear_models.html).

To do this, we set up state and observation vectors.

$$
\hat{x}_t = \begin{bmatrix} 1 \\ x_t \\ y_t \end{bmatrix}
\quad \text{and} \quad
\hat{y}_t = \begin{bmatrix} x_t \\ y_t \end{bmatrix}
$$

The motion of $\hat{x}_t$ and the observation $\hat{y}_t$ obey

$$
\hat{x}_{t+1} =
\begin{bmatrix} 1 & 0 & 0 \\ 0 & A & 0 \\ \nu & D & 1 \end{bmatrix}
\hat{x}_t +
\begin{bmatrix} 0 \\ B \\ F \end{bmatrix} z_{t+1}
\quad \text{and} \quad
\hat{y}_t =
\begin{bmatrix} 0 & I & 0 \\ 0 & 0 & 1 \end{bmatrix}
\hat{x}_t,
$$

which is a standard linear state space system.

To study it, we could map it into an instance of [LinearStateSpace](https://github.com/QuantEcon/QuantEcon.py/blob/master/quantecon/lss.py) from [QuantEcon.py](http://quantecon.org/quantecon-py).

However, we use custom simulation code that better serves our decomposition analysis, as described below.
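As a sketch of how the stacking works (with the same kind of small hypothetical matrices as before; they are assumptions, not the lecture's values), we can build $\hat{A}$ and $\hat{B}$ and confirm that one step of the stacked system reproduces the original two recursions.

```{code-cell} ipython3
import numpy as np

n, m = 2, 2
A = np.array([[0.8, 0.1], [0.0, 0.7]])
B = np.eye(n)
D = np.array([[0.5, 0.3]])
F = np.array([[0.2, 0.0]])
nu = 0.01

# hat{x}_t = (1, x_t, y_t); the first row keeps the constant equal to 1
A_hat = np.block([
    [1.0, np.zeros((1, n)), 0.0],
    [np.zeros((n, 1)), A, np.zeros((n, 1))],
    [nu, D, 1.0],
])
B_hat = np.vstack([np.zeros((1, m)), B, F])

# One-step check against the original two recursions
x, y, z = np.array([1.0, -0.5]), 0.3, np.array([0.4, -1.2])
x_hat = np.concatenate([[1.0], x, [y]])
x_hat_next = A_hat @ x_hat + B_hat @ z
assert np.allclose(x_hat_next[1:1 + n], A @ x + B @ z)
assert np.isclose(x_hat_next[-1], y + nu + (D @ x).item() + (F @ z).item())
```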

## Dynamics

Let's run some simulations to build intuition.

(addfunc_eg1)=
In doing so we'll assume that $z_{t+1}$ is scalar and that $\tilde x_t$ follows a 4th-order scalar autoregression.

```{math}
:label: ftaf

\tilde x_{t+1} = \phi_1 \tilde x_{t} + \phi_2 \tilde x_{t-1} + \phi_3 \tilde x_{t-2} + \phi_4 \tilde x_{t-3} + \sigma z_{t+1}
```

where the zeros $z$ of the characteristic polynomial

$$
\phi(z) = 1 - \phi_1 z - \phi_2 z^2 - \phi_3 z^3 - \phi_4 z^4
$$

are strictly greater than unity in absolute value.

A zero of $\phi(z)$ satisfies $\phi(z) = 0$.

Let the increment in $\{y_t\}$ obey

$$
y_{t+1} - y_{t} = \nu + \tilde x_{t} + \sigma z_{t+1}
$$

with an initial condition for $y_0$.

While {eq}`ftaf` is not a first order system like {eq}`old1_additive_functionals`, we know that it can be mapped into a first order system.

For an example of such a mapping, see [this example](https://python.quantecon.org/linear_models.html#second-order-difference-equation).

This model can be mapped into the additive functional system definition in {eq}`old1_additive_functionals`--{eq}`old2_additive_functionals` by appropriate selection of the matrices $A, B, D, F$.

As an exercise, try writing down these matrices; correct expressions appear in the code below.
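As a hedged sketch, one consistent selection puts the four lags in the state $x_t = (\tilde x_t, \tilde x_{t-1}, \tilde x_{t-2}, \tilde x_{t-3})$ and uses the companion matrix of the autoregression; the parameter values below are hypothetical placeholders.

```{code-cell} ipython3
import numpy as np

# Hypothetical AR(4) coefficients, shock scale, and drift
phi1, phi2, phi3, phi4 = 0.5, -0.2, 0.0, 0.5
sigma, nu = 0.01, 0.01

A = np.array([[phi1, phi2, phi3, phi4],
              [1.0,  0.0,  0.0,  0.0],
              [0.0,  1.0,  0.0,  0.0],
              [0.0,  0.0,  1.0,  0.0]])   # companion matrix of the AR(4)
B = np.array([[sigma], [0.0], [0.0], [0.0]])
D = np.array([[1.0, 0.0, 0.0, 0.0]])      # picks tilde_x_t out of the state
F = np.array([[sigma]])                   # the same shock drives the increment

# The zeros-outside-the-unit-circle assumption on phi(z) is equivalent to
# the companion eigenvalues lying strictly inside the unit circle
assert np.abs(np.linalg.eigvals(A)).max() < 1
```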

### Simulation

When simulating, we embed our variables into a bigger system.

This system also constructs the components of the decompositions of $y_t$ and of $\exp(y_t)$ proposed by Hansen {cite}`Hansen_2012_Eca`.

The following code computes all of these objects.

(amf_lss)=

Notice the irregular but persistent growth in $y_t$.

### Decomposition

Hansen and Sargent {cite}`Hans_Sarg_book` describe how to construct a decomposition of an additive functional into four parts:

- a constant inherited from initial values $x_0$ and $y_0$
- a linear trend
- a martingale
- an (asymptotically) stationary component

To obtain this decomposition for additive functionals defined by {eq}`old1_additive_functionals` and {eq}`old2_additive_functionals`, we construct the matrices

$$
\begin{aligned}
  H & := F + D (I - A)^{-1} B
  \\
  g & := D (I - A)^{-1}
\end{aligned}
$$

Verify that $y_{t+1} - y_t$ satisfies {eq}`old2_additive_functionals`.
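The construction can also be sanity-checked numerically. The sketch below (with hypothetical matrices, independent of the lecture's `AMF_LSS_VAR` code) builds $g = D(I-A)^{-1}$ and $H = F + gB$ and verifies that every simulated path satisfies $y_t = \nu t + m_t - s_t + (y_0 + s_0)$, where $m_t$ and $s_t$ are defined below.

```{code-cell} ipython3
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical matrices for illustration (n = 2 states, 2 shocks)
n, m_dim = 2, 2
A = np.array([[0.8, 0.1], [0.0, 0.7]])
B = np.eye(n)
D = np.array([[0.5, 0.3]])
F = np.array([[0.2, 0.0]])
nu = 0.01

g = D @ np.linalg.inv(np.eye(n) - A)   # 1 x n
H = F + g @ B                          # 1 x m

T = 50
x, y, mart = np.array([1.0, -0.5]), 0.0, 0.0
const = y + (g @ x).item()                      # y_0 + s_0
for t in range(1, T + 1):
    z = rng.standard_normal(m_dim)
    y += nu + (D @ x).item() + (F @ z).item()   # additive functional
    x = A @ x + B @ z                           # VAR state
    mart += (H @ z).item()                      # martingale m_t
    s = (g @ x).item()                          # stationary component s_t
    assert np.isclose(y, nu * t + mart - s + const)
```

The assertion holds exactly because $H z_{t+1} + g x_t - g x_{t+1} = F z_{t+1} + D x_t$ by construction of $H$ and $g$.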

We introduce the following notation:

- $\tau_t = \nu t$, a linear, deterministic trend
- $m_t = \sum_{j=1}^t H z_j$, a martingale with time $t+1$ increment $H z_{t+1}$
- $s_t = g x_t$, an (asymptotically) stationary component

We want to characterize and simulate components $\tau_t, m_t, s_t$ of the decomposition.

We do this by constructing a [linear state space system](https://python-intro.quantecon.org/linear_models.html) using [LinearStateSpace](https://github.com/QuantEcon/QuantEcon.py/blob/master/quantecon/lss.py) from [QuantEcon.py](http://quantecon.org/quantecon-py).

This will allow us to use the routines in [LinearStateSpace](https://github.com/QuantEcon/QuantEcon.py/blob/master/quantecon/lss.py) to study dynamics.

Under the dynamics in {eq}`old1_additive_functionals` and {eq}`old2_additive_functionals` and with the definitions just given,

$$
\begin{bmatrix}
    1 \\
    t+1 \\
    x_{t+1} \\
    y_{t+1} \\
    m_{t+1}
\end{bmatrix} =
\begin{bmatrix}
    1 & 0 & 0 & 0 & 0 \\
    1 & 1 & 0 & 0 & 0 \\
    0 & 0 & A & 0 & 0 \\
    \nu & 0 & D & 1 & 0 \\
    0 & 0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
    1 \\
    t \\
    x_t \\
    y_t \\
    m_t
\end{bmatrix} +
\begin{bmatrix}
    0 \\
    0 \\
    B \\
    F \\
    H
\end{bmatrix}
z_{t+1}
$$

Define $\tilde{x} := \begin{bmatrix} 1 \\ t \\ x_t \\ y_t \\ m_t \end{bmatrix}$ and $\tilde{y} := \begin{bmatrix} x_t \\ y_t \\ \tau_t \\ m_t \\ s_t \end{bmatrix}$.

We can write this as the linear state space system

$$
\begin{aligned}
Expand All @@ -837,24 +821,20 @@ $$
\end{aligned}
$$

By picking out components of $\tilde y_t$, we can track all variables of interest.
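As a one-step numerical check (again with hypothetical matrices, not the lecture's code), the stacked transition reproduces each component's law of motion.

```{code-cell} ipython3
import numpy as np

# Stacked transition for x_tilde = (1, t, x_t, y_t, m_t), where
# H = F + D (I - A)^{-1} B is the martingale loading
n, m_dim = 2, 2
A = np.array([[0.8, 0.1], [0.0, 0.7]])
B = np.eye(n)
D = np.array([[0.5, 0.3]])
F = np.array([[0.2, 0.0]])
nu = 0.01
g = D @ np.linalg.inv(np.eye(n) - A)
H = F + g @ B

A_tilde = np.block([
    [1.0, 0.0, np.zeros((1, n)), 0.0, 0.0],
    [1.0, 1.0, np.zeros((1, n)), 0.0, 0.0],
    [np.zeros((n, 1)), np.zeros((n, 1)), A, np.zeros((n, 1)), np.zeros((n, 1))],
    [nu, 0.0, D, 1.0, 0.0],
    [0.0, 0.0, np.zeros((1, n)), 0.0, 1.0],
])
B_tilde = np.vstack([np.zeros((1, m_dim)), np.zeros((1, m_dim)), B, F, H])

t, x, y, mart = 7.0, np.array([1.0, -0.5]), 0.3, 0.2
z = np.array([0.4, -1.2])
x_tilde = np.concatenate([[1.0, t], x, [y, mart]])
nxt = A_tilde @ x_tilde + B_tilde @ z

assert np.isclose(nxt[1], t + 1)                                 # time trend
assert np.allclose(nxt[2:2 + n], A @ x + B @ z)                  # VAR state
assert np.isclose(nxt[2 + n], y + nu + (D @ x).item() + (F @ z).item())
assert np.isclose(nxt[-1], mart + (H @ z).item())                # martingale
```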

## Code

The class `AMF_LSS_VAR` mentioned {ref}`above <amf_lss>` provides everything we need to study our additive functional.

In fact, `AMF_LSS_VAR` does more because it allows us to study an associated multiplicative functional as well.

The name hints at this: AMF stands for "additive and multiplicative functional."

Let's use this code (embedded above) to explore the {ref}`example process described above <addfunc_eg1>`.

Running {ref}`the code that first simulated that example <addfunc_egcode>` again and then the method call generates the following plot (modulo randomness).

```{code-cell} ipython3
plot_additive(amf, T)
plt.show()

When we plot multiple realizations of a component in the 2nd, 3rd, and 4th panels, we also plot the population 95% probability coverage sets computed using the LinearStateSpace class.

We simulate many paths, all starting from the *same* non-random initial conditions $x_0, y_0$, as indicated by the shape of the 95% probability coverage shaded areas.

Notice the tell-tale signs in these probability coverage shaded areas:

* the purple one for the martingale component $m_t$ grows with
$\sqrt{t}$
* the green one for the stationary component $s_t$ converges to a constant band

### Associated multiplicative functional

Let $M_t = \exp(y_t)$, where $\{y_t\}$ is our additive functional.

As mentioned above, the process $\{M_t\}$ is called a **multiplicative functional**.


An instance of class `AMF_LSS_VAR` ({ref}`above <amf_lss>`) includes this associated multiplicative functional as an attribute.

The following plots show this multiplicative functional for our example.

Running {ref}`the code that first simulated that example <addfunc_egcode>` again and then the method call in the cell below produces the graph in the next cell.

```{code-cell} ipython3
plot_multiplicative(amf, T)
Expand All @@ -927,7 +906,7 @@ for a limited horizon $T$.

It is interesting to see how the martingale behaves as $T \rightarrow +\infty$.

The following simulation sets $T = 12000$ instead of $150$.

### Peculiar large sample property
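A scalar toy example, not the lecture's code, illustrates the tension this section studies: a positive martingale can have population mean $1$ for every $t$ while almost every path converges to $0$, because the martingale's median $\exp(-t\sigma^2/2)$ shrinks toward zero.

```{code-cell} ipython3
import numpy as np

rng = np.random.default_rng(3)

# Mtilde_T = exp(sigma * sum_{j<=T} z_j - T * sigma**2 / 2) has mean 1
# for every T, but its median exp(-T * sigma**2 / 2) heads to zero
sigma, N = 0.1, 20_000
for T in (10, 100, 400):
    z_sum = rng.standard_normal((N, T)).sum(axis=1)
    M_T = np.exp(sigma * z_sum - T * sigma ** 2 / 2)
    print(T, round(M_T.mean(), 3), round(np.median(M_T), 3))
```

The sample medians fall toward zero as $T$ grows, even though the population mean stays at $1$: the mean is increasingly carried by a vanishing fraction of very large paths.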

Expand Down