Lecture Notes 8

Parameterized expectations algorithm

The Parameterized Expectations Algorithm (PEA hereafter) was introduced by Marcet [1988]. As will become clear in a moment, it may be viewed as a generalized method of undetermined coefficients, in which economic agents learn the decision rule at each step of the algorithm. It therefore has a natural interpretation in terms of learning behavior. The basic idea of this method is to approximate the expectation function of the individuals — rather than attempting to recover directly the decision rules — by a smooth function, in general a polynomial function. Implicit in this approach is the fact that the space spanned by polynomials is dense in the space spanned by all functions, in the sense that

lim_{k→∞} inf_{θ∈R^k} sup_{x∈X} |F_θ(x) − F(x)| = 0

where F is the function to be approximated and F_θ is a kth–order interpolating function parameterized by θ.
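As an illustration of this density property — an added example, not part of the original exposition — the following Matlab fragment fits polynomials of increasing order k to a smooth test function by least squares and reports the sup–norm error, which shrinks as k grows:

% Least-squares polynomial approximation of a smooth test function on [0,1]
F = @(x) exp(-x).*sin(3*x);          % any smooth function to approximate
x = linspace(0,1,201)';
for k = 1:2:9
   X     = x.^(0:k);                 % regressors 1, x, ..., x^k
   theta = X\F(x);                   % least-squares parameter vector
   fprintf('k = %d: sup|F_theta-F| = %g\n',k,max(abs(X*theta-F(x))));
end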

8.1 Basics

The basic idea underlying this approach is to replace expectations by an a priori given function of the state variables of the problem at hand, and then reveal the set of parameters that ensure that the residuals from the Euler equations form a martingale difference sequence (E_t ε_{t+1} = 0). Note that the main difficulty when solving the model is to deal with the integral involved in the expectation; the basic PEA algorithm approximates it by Monte–Carlo simulations. The PEA algorithm may be implemented to solve a large set of models that admit the following general representation

F(E_t(E(y_{t+1}, x_{t+1}, y_t, x_t)), y_t, x_t, ε_t) = 0    (8.1)

where F: R^m × R^{n_y} × R^{n_x} × R^{n_e} → R^{n_x+n_y} describes the model and E: R^{n_y} × R^{n_x} × R^{n_y} × R^{n_x} → R^m defines the transformed variables on which we take expectations. E_t is the standard conditional expectations operator, and ε_t is the set of innovations of the structural shocks that affect the economy. In order to fix notation, let us take the optimal growth model as an example:

λ_t − βE_t[λ_{t+1}(αz_{t+1}k_{t+1}^{α−1} + 1 − δ)] = 0
c_t^{−σ} − λ_t = 0
k_{t+1} − z_t k_t^α + c_t − (1 − δ)k_t = 0
z_{t+1} − ρz_t − ε_{t+1} = 0

In this example, we have y = {c, λ}, x = {k, z} and ε = ε; the function E takes the form

E({c, λ}_{t+1}, {k, z}_{t+1}, {c, λ}_t, {k, z}_t) = λ_{t+1}(αz_{t+1}k_{t+1}^{α−1} + 1 − δ)

while F(·) is given by

F(·) = { λ_t − βE_t[E({c, λ}_{t+1}, {k, z}_{t+1}, {c, λ}_t, {k, z}_t)]
         c_t^{−σ} − λ_t
         k_{t+1} − z_t k_t^α + c_t − (1 − δ)k_t
         z_{t+1} − ρz_t − ε_{t+1} }
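To make the notation concrete, the mapping (8.1) for this example may be coded as follows — a hypothetical illustration, with placeholder parameter values and with next-period variables passed explicitly, as in the displayed F(·):

% Hypothetical literal coding of (8.1) for the optimal growth example:
% y = (c,lambda) and x = (k,z); yp and xp denote period-t+1 values.
alpha = 0.3; beta = 0.95; delta = 0.1; rho = 0.9; sigma = 1;
Efun = @(yp,xp,y,x) yp(2)*(alpha*xp(2)*xp(1)^(alpha-1)+1-delta);
Ffun = @(EE,y,x,xp,e) [y(2)-beta*EE; ...
                       y(1)^(-sigma)-y(2); ...
                       xp(1)-x(2)*x(1)^alpha+y(1)-(1-delta)*x(1); ...
                       xp(2)-rho*x(2)-e];
% at the true solution, Ffun(Et[Efun(.)],y,x,xp,e) = 0 in every period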

The idea of the PEA algorithm is then to replace the expectation function E_t(E(y_{t+1}, x_{t+1}, y_t, x_t)) by a parametric approximation function, Φ(x_t; θ), of the current state variables x_t and a vector of parameters θ, such that the approximated model may be restated as

F(Φ(x_t, θ), y_t, x_t, ε_t) = 0    (8.2)

The problem of the PEA algorithm is then to find a vector θ such that

θ ∈ Argmin_{θ∈Θ} ‖Φ(x_t, θ) − E_t(E(y_{t+1}, x_{t+1}, y_t, x_t))‖^2

that is, the solution satisfies the rational expectations hypothesis. At this point, note that we selected a quadratic norm, but one may also consider other metrics of the form

θ ∈ Argmin_{θ∈Θ} R(x_t, θ)′ Ω R(x_t, θ)

with R(x_t, θ) ≡ Φ(x_t, θ) − E_t(E(y_{t+1}, x_{t+1}, y_t, x_t)) and Ω a weighting matrix. This would then correspond to a GMM type of estimation. One may also consider

θ ∈ Argmin_{θ∈Θ} max{|Φ(x_t, θ) − E_t(E(y_{t+1}, x_{t+1}, y_t, x_t))|}

which would call for LAD estimation methods. The usual practice, however, is to use the standard quadratic norm. Once θ — and therefore the approximation function Φ(x_t, θ) — has been found, equation (8.2) may be used to generate time series for the variables of the model. The algorithm may then be described as follows.

Step 1. Specify a guess for the function Φ(x_t, θ) and an initial vector θ^0. Choose a stopping criterion η > 0 and a sample size T that should be large enough, and draw a sequence {ε_t}_{t=0}^T that will be used during the whole algorithm.

Step 2. At iteration i, and for the given θ^i, simulate, recursively, sequences for {y_t(θ^i)}_{t=0}^T and {x_t(θ^i)}_{t=0}^T.

Step 3. Find θ̂ = G(θ^i) that satisfies

θ̂ ∈ Argmin_{θ∈Θ} (1/T) Σ_{t=0}^{T} ‖E(y_{t+1}(θ), x_{t+1}(θ), y_t(θ), x_t(θ)) − Φ(x_t(θ), θ)‖^2

which just amounts to performing a non–linear least squares regression taking E(y_{t+1}(θ), x_{t+1}(θ), y_t(θ), x_t(θ)) as the dependent variable, Φ(·) as the explanatory function and θ as the parameter to be estimated.

Step 4. Set θ^{i+1} to

θ^{i+1} = γθ̂^i + (1 − γ)θ^i    (8.3)

where γ ∈ (0, 1) is a smoothing parameter. Setting γ low helps convergence, but at the cost of increasing the computational time. As long as good initial conditions can be found and the model is not too non–linear, setting γ close to 1 is sufficient; when dealing with strongly non–linear models — with binding constraints for example — decreasing γ generally helps a lot.

Step 5. If |θ^{i+1} − θ^i| < η then stop, otherwise go back to step 2.

Reading this algorithm, it appears that it may easily be given a learning interpretation. Indeed, each iteration may be interpreted as a learning step, in which the individual uses a rule of thumb as a decision rule and reveals information on the kind of errors he/she makes using this rule of thumb. He/she then corrects the rule — that is, finds another θ — to be used during the next step. It should be noted, however, that nothing guarantees that the algorithm always converges and — if it does — that it delivers a decision rule compatible with the rational expectations hypothesis.¹

¹ For a convergence proof in the case of the optimal growth model, see Marcet and Marshall [1994].
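Schematically, the loop may be coded as follows. This is only a sketch: theta0, gam, eta, maxit, the innovation sequence e, and the helpers simulate_model, realized_E and nlls_fit are placeholders for the model-specific objects made explicit in the examples below.

% Generic PEA loop (sketch; all helper functions are placeholders)
theta = theta0;                               % Step 1: initial guess
for iter = 1:maxit
   [y,x]    = simulate_model(theta,e);        % Step 2: simulate under Phi(x;theta)
   phi      = realized_E(y,x);                % E(y_{t+1},x_{t+1},y_t,x_t) along the path
   thetahat = nlls_fit(phi,x);                % Step 3: non-linear least squares
   thetanew = gam*thetahat+(1-gam)*theta;     % Step 4: damped update (8.3)
   if max(abs(thetanew-theta))<eta, break, end   % Step 5: convergence test
   theta    = thetanew;
end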

At this point, several comments stemming from the implementation of the method are in order. First of all, we need to come up with an interpolating function, Φ(·). How should it be specified? In fact, we are free to choose any functional form we may think of; nevertheless, economic theory may guide us, as well as some constraints imposed by the method — more particularly in step 3. A widely used interpolating function combines the non–linear aspects of the exponential function with some polynomials, such that Φ_j(x, θ) may take the form (where j ∈ {1, . . . , m} refers to a particular expectation)

Φ_j(x, θ) = exp(θ′P(x))

where P(x) is a multivariate polynomial.² One advantage of this interpolating function is that it guarantees positive values for the expectations, which is mostly what is needed in economics. One potential problem with such a functional form is precisely that it uses simple polynomials, which may generate multicolinearity problems during step 3.

² For instance, in the case n_x = 2, P(x_t) may consist of a constant term, x_{1t}, x_{2t}, x_{1t}^2, x_{2t}^2, x_{1t}x_{2t}.

As an example, let us take the simple case in which the state variable is totally exogenous and is an AR(1) process with log–normal innovations:

log(a_t) = ρ log(a_{t−1}) + ε_t

with |ρ| < 1 and ε ∼ N(0, σ). The state variable is then a_t. If we simulate the sequence {a_t}_{t=0}^T with T = 10000 and compute the correlation matrix of {a_t, a_t^2, a_t^3, a_t^4}, we get, for ρ = 0.95 and σ = 0.01,

⎡ 1.0000  0.9998  0.9991  0.9980 ⎤
⎢ 0.9998  1.0000  0.9998  0.9991 ⎥
⎢ 0.9991  0.9998  1.0000  0.9998 ⎥
⎣ 0.9980  0.9991  0.9998  1.0000 ⎦

revealing that multicolinearity problems are likely to occur. As an illustration, assume that we want to approximate the expectation function in the optimal growth model: it will be a function of the capital stock, which is a particularly smooth sequence. Even where the sequence and its square differ noticeably, its successive higher powers differ very little from one another. Hence multicolinearity may occur. One way to circumvent this problem is to rely on orthogonal polynomials, rather than standard polynomials, in the interpolating function.
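The correlation matrix above can be reproduced, up to the particular random draw, with a few lines of Matlab:

% Powers of a persistent, smooth series are almost perfectly correlated
T  = 10000; rho = 0.95; sigma = 0.01;
e  = sigma*randn(T,1);
la = zeros(T,1); la(1) = e(1);
for t = 2:T
   la(t) = rho*la(t-1)+e(t);         % log(a_t) follows an AR(1)
end
a = exp(la);                         % log-normal level of the series
disp(corrcoef([a a.^2 a.^3 a.^4]))   % near-unit off-diagonal correlations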

A second problem that arises in this approach is the selection of initial conditions for θ. This step is crucial for at least three reasons: (i) the problem is fundamentally non–linear, (ii) convergence is not always guaranteed, and (iii) economic theory imposes a set of restrictions — to ensure the positivity of some variables, for example. Therefore, much attention should be paid when imposing an initial value for θ. A third important problem is related to the choice of γ, the smoothing parameter. A too large value may put too much weight on new values for θ and therefore reinforce the forces that lead to divergence of the algorithm. On the contrary, setting γ too close to 0 may be costly in terms of CPU time. It must however be noted that no general rule can be given for these implementation issues; in most cases, one has to guess and try. I shall therefore now report three examples of implementation: the first is the standard optimal growth model, the second the optimal growth model with investment irreversibility, and the last the problem of a household facing borrowing constraints. But before going to the examples, we shall consider a linear example that highlights the similarity between this approach and the undetermined coefficients approach.

8.2 A linear example

Let us consider the simple model

y_t = aE_t y_{t+1} + bx_t
x_{t+1} = (1 − ρ)x̄ + ρx_t + ε_{t+1}

where ε ∼ N(0, σ^2). Finding an expectation function for this model amounts to finding a function Φ(x_t, θ) for E_t(ay_{t+1} + bx_t). Let us make the following guess for the solution:

Φ(x_t, θ) = θ0 + θ1 x_t

In this case, solving the PEA problem amounts to solving

min_{θ0,θ1} (1/N) Σ_{t=1}^{N} (Φ(x_t, θ) − ay_{t+1} − bx_t)^2

The first order conditions for this problem are

(1/N) Σ_{t=1}^{N} (θ0 + θ1 x_t − ay_{t+1} − bx_t) = 0    (8.4)

(1/N) Σ_{t=1}^{N} x_t(θ0 + θ1 x_t − ay_{t+1} − bx_t) = 0    (8.5)

Equation (8.4) can be rewritten as

θ0 + θ1 (1/N) Σ_{t=1}^{N} x_t = a (1/N) Σ_{t=1}^{N} y_{t+1} + b (1/N) Σ_{t=1}^{N} x_t

But, since Φ(x_t, θ) is an approximate solution for the expectation function, the model implies that y_t = E_t(ay_{t+1} + bx_t) = Φ(x_t, θ), such that the former equation rewrites as

θ0 + θ1 (1/N) Σ_{t=1}^{N} x_t = a (1/N) Σ_{t=1}^{N} (θ0 + θ1 x_{t+1}) + b (1/N) Σ_{t=1}^{N} x_t

Asymptotically, we have

lim_{N→∞} (1/N) Σ_{t=1}^{N} x_t = lim_{N→∞} (1/N) Σ_{t=1}^{N} x_{t+1} = x̄

such that this first order condition converges to

θ0 + θ1 x̄ = aθ0 + aθ1 x̄ + bx̄

therefore, rearranging terms, we have

θ0(1 − a) + θ1(1 − a)x̄ = bx̄    (8.6)

Now, let us consider equation (8.5), which can be rewritten as

θ0 (1/N) Σ_{t=1}^{N} x_t + θ1 (1/N) Σ_{t=1}^{N} x_t^2 = a (1/N) Σ_{t=1}^{N} y_{t+1}x_t + b (1/N) Σ_{t=1}^{N} x_t^2

Like for the first condition, we acknowledge that y_t = E_t(ay_{t+1} + bx_t) = Φ(x_t, θ), such that the condition rewrites as

θ0 (1/N) Σ_{t=1}^{N} x_t + θ1 (1/N) Σ_{t=1}^{N} x_t^2 = a (1/N) Σ_{t=1}^{N} (θ0 + θ1 x_{t+1})x_t + b (1/N) Σ_{t=1}^{N} x_t^2    (8.7)

Asymptotically, we have

lim_{N→∞} (1/N) Σ_{t=1}^{N} x_t = x̄   and   lim_{N→∞} (1/N) Σ_{t=1}^{N} x_t^2 = E(x^2) = σ_x^2 + x̄^2

Finally, we have

lim_{N→∞} (1/N) Σ_{t=1}^{N} x_t x_{t+1} = lim_{N→∞} (1/N) Σ_{t=1}^{N} x_t((1 − ρ)x̄ + ρx_t + ε_{t+1})

Since ε is the innovation of the process, we have lim_{N→∞} (1/N) Σ_{t=1}^{N} x_t ε_{t+1} = 0, such that

lim_{N→∞} (1/N) Σ_{t=1}^{N} x_t x_{t+1} = (1 − ρ)x̄^2 + ρE(x^2) = x̄^2 + ρσ_x^2

Hence, (8.7) asymptotically rewrites as

x̄(1 − a)θ0 + ((1 − a)x̄^2 + (1 − aρ)σ_x^2)θ1 = b(x̄^2 + σ_x^2)

We therefore have to solve the system

θ0(1 − a) + θ1(1 − a)x̄ = bx̄
x̄(1 − a)θ0 + ((1 − a)x̄^2 + (1 − aρ)σ_x^2)θ1 = b(x̄^2 + σ_x^2)

Premultiplying the first equation by x̄ and subtracting the result from the second equation leads to

(1 − aρ)θ1 σ_x^2 = bσ_x^2

such that

θ1 = b/(1 − aρ)

Plugging this result into the first equation, we get

θ0 = ab(1 − ρ)x̄/((1 − a)(1 − aρ))

Therefore, asymptotically, the solution is given by

y_t = ab(1 − ρ)x̄/((1 − a)(1 − aρ)) + (b/(1 − aρ))x_t

which corresponds exactly to the solution of the model (see Lecture Notes #1). Asymptotically, the PEA algorithm is therefore nothing other than an undetermined coefficients method.
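This asymptotic result is easily checked numerically. The following script — an added illustration, with arbitrary parameter values — iterates the PEA regression on a simulated path and compares the resulting fixed point with the closed-form coefficients:

% PEA on the linear model: the regression fixed point approaches
% theta0 = a*b*(1-rho)*xbar/((1-a)*(1-a*rho)) and theta1 = b/(1-a*rho)
a = 0.9; b = 1; rho = 0.7; xbar = 1; sig = 0.1; N = 100000;
e = sig*randn(N+1,1);
x = zeros(N+1,1); x(1) = xbar;
for t = 1:N
   x(t+1) = (1-rho)*xbar+rho*x(t)+e(t+1);
end
theta = [0;0];
for iter = 1:1000
   y  = [ones(N+1,1) x]*theta;       % y_t = Phi(x_t;theta)
   Y  = a*y(2:end)+b*x(1:end-1);     % realized a*y_{t+1}+b*x_t
   th = [ones(N,1) x(1:end-1)]\Y;    % OLS step
   if max(abs(th-theta))<1e-10, break, end
   theta = th;
end
th0 = a*b*(1-rho)*xbar/((1-a)*(1-a*rho));
th1 = b/(1-a*rho);
fprintf('PEA: [%g %g]   closed form: [%g %g]\n',theta,th0,th1)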

8.3 Standard PEA solution: the Optimal Growth Model

Let us first recall the type of problem we have at hand. We are about to solve the set of equations

λ_t − βE_t[λ_{t+1}(αz_{t+1}k_{t+1}^{α−1} + 1 − δ)] = 0
c_t^{−σ} − λ_t = 0
k_{t+1} − z_t k_t^α + c_t − (1 − δ)k_t = 0
log(z_{t+1}) − ρ log(z_t) − ε_{t+1} = 0

Our problem will therefore be to get an approximation of the expectation function

βE_t[λ_{t+1}(αz_{t+1}k_{t+1}^{α−1} + 1 − δ)]

In this problem, we have two state variables, k_t and z_t, such that Φ(·) should be a function of both. We will make the guess

Φ(k_t, z_t; θ) = exp(θ0 + θ1 log(k_t) + θ2 log(z_t) + θ3 log(k_t)^2 + θ4 log(z_t)^2 + θ5 log(k_t) log(z_t))
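In the Matlab program reported at the end of this section, this guess corresponds to the following regression row and fitted expectation, where a(i) stores log(z_i):

% excerpt from the program below
X(i,:) = [1 log(k(i)) a(i) log(k(i))*log(k(i)) a(i)*a(i) log(k(i))*a(i)];
lb(i)  = exp(X(i,:)*b0);             % lambda_t = Phi(k_t,z_t;theta)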

From the first equation of the above system we have that, for a given vector θ = {θ0, θ1, θ2, θ3, θ4, θ5}, λ_t(θ) = Φ(k_t(θ), z_t(θ); θ), which enables us to recover

c_t(θ) = λ_t(θ)^{−1/σ}

and therefore to get

k_{t+1}(θ) = z_t k_t(θ)^α − c_t(θ) + (1 − δ)k_t(θ)

We then recover whole sequences {k_t(θ)}_{t=0}^T, {z_t}_{t=0}^T, {λ_t(θ)}_{t=0}^T and {c_t(θ)}_{t=0}^T, which makes it simple to compute a sequence for

ϕ_{t+1}(θ) ≡ βλ_{t+1}(θ)(αz_{t+1}k_{t+1}(θ)^{α−1} + 1 − δ)

Since Φ(k_t, z_t; θ) is an exponential function of a polynomial, we may run the regression

log(ϕ_{t+1}(θ)) = θ0 + θ1 log(k_t(θ)) + θ2 log(z_t) + θ3 log(k_t(θ))^2 + θ4 log(z_t)^2 + θ5 log(k_t(θ)) log(z_t)    (8.8)

to get θ̂. We then set a new value for θ according to the updating scheme (8.3) and restart the process until convergence.

The parameterization used in the Matlab code, given in table 8.1, is totally standard. γ, the smoothing parameter, was set to 1, implying that at each iteration the new θ vector is entirely passed as the new guess in the progression of the algorithm. The stopping criterion was set at η = 1e−6 and T = 20000 data points were used to compute the OLS regression.

Table 8.1: Optimal growth: Parameterization

β     σ   α    δ    ρ    σe
0.95  1   0.3  0.1  0.9  0.01

Initial conditions were set as follows. We first solve the model relying on a log–linear approximation. We then generate a random draw of size T for ε and generate series using the log–linear approximate solution. We then build the needed series to recover a draw for {ϕ_{t+1}(θ)}_{t=0}^T, {k_t(θ)}_{t=0}^T and {z_t(θ)}_{t=0}^T, and run the regression (8.8) to get an initial condition for θ, reported in table 8.2. The algorithm converges after 22 iterations and delivers the final decision rule reported in table 8.2.

Table 8.2: Decision rule

         θ0      θ1       θ2       θ3      θ4      θ5
Initial  0.5386  -0.7367  -0.2428  0.1091  0.2152  -0.2934
Final    0.5489  -0.7570  -0.3337  0.1191  0.1580  -0.1961

When γ is set at 0.75, 31 iterations are needed; 46 for γ = 0.5 and 90 for γ = 0.25. It is worth noting that the final decision rule does differ from the initial conditions, but not by as large an amount as one might have expected, meaning that in this setup — and provided the approximation is good enough³ — certainty equivalence and non–linearities do not play such a great role. In fact, as illustrated in figure 8.1, the capital decision rule does not display much non–linearity. Although particularly simple to implement (see the following Matlab code), this method should be handled with care, as it may be difficult to obtain convergence for some models. Nevertheless, it has another attractive feature: it can handle problems with possibly binding constraints. We provide two examples of such models in the next sections.

³ Note that for the moment we have not made any evaluation of the accuracy of the decision rule. We will undertake such an evaluation in the sequel.

[Figure 8.1: Capital decision rule — k_{t+1} plotted against k_t, both ranging from 2.2 to 3.]

Matlab Code: PEA Algorithm (OGM)

clear all
% sample sizes and algorithm settings
long  = 20000;                % length of the simulated sample
init  = 500;                  % burn-in periods, discarded in the regression
slong = init+long;
T     = init+1:slong-1;       % regression sample, period t
T1    = init+2:slong;         % regression sample, period t+1
tol   = 1e-6;                 % stopping criterion
crit  = 1;
gam   = 1;                    % smoothing parameter

% structural parameters
sigma = 1;
delta = 0.1;
beta  = 0.95;
alpha = 0.3;
ab    = 0;
rho   = 0.9;
se    = 0.01;
param = [ab alpha beta delta rho se sigma long init];

% deterministic steady state
ksy = (alpha*beta)/(1-beta*(1-delta));
yss = ksy^(alpha/(1-alpha));
kss = yss^(1/alpha);
iss = delta*kss;
css = yss-iss;
csy = css/yss;
lss = css^(-sigma);

% draw of the shocks, kept fixed across iterations
randn('state',1);
e    = se*randn(slong,1);
a    = zeros(slong,1);
a(1) = ab+e(1);
for i = 2:slong;
   a(i) = rho*a(i-1)+(1-rho)*ab+e(i);
end

b0 = peaoginit(e,param);      % compute initial conditions

%
% Main Loop
%
iter = 1;
while crit>tol;
   %
   % Simulated path
   %
   k    = zeros(slong+1,1);
   lb   = zeros(slong,1);
   X    = zeros(slong,length(b0));
   k(1) = kss;
   for i = 1:slong;
      X(i,:) = [1 log(k(i)) a(i) log(k(i))*log(k(i)) a(i)*a(i) log(k(i))*a(i)];
      lb(i)  = exp(X(i,:)*b0);   % lambda_t = Phi(k_t,z_t;theta)
      k(i+1) = exp(a(i))*k(i)^alpha+(1-delta)*k(i)-lb(i)^(-1/sigma);
   end
   % realized phi_{t+1} and OLS step on (8.8)
   y    = beta*lb(T1).*(alpha*exp(a(T1)).*k(T1).^(alpha-1)+1-delta);
   bt   = X(T,:)\log(y);
   % updating scheme (8.3)
   b    = gam*bt+(1-gam)*b0;
   crit = max(abs(b-b0));
   b0   = b;
   disp(sprintf('Iteration: %d\tConv. crit.: %g',iter,crit))
   iter = iter+1;
end;
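The helper peaoginit, which computes the initial b0 from the log–linear solution as described above, is not reproduced in these notes. A minimal substitute — a sketch that replaces the log–linear step by a crude constant–saving–rate rule (exact for σ = 1 and δ = 1) while keeping the same calling convention and regression logic — could look as follows (to be saved as peaoginit.m):

function b0 = peaoginit(e,param)
% Sketch of an initializer: simulate a constant-saving-rate rule and
% regress log(phi_{t+1}) on the polynomial regressors of (8.8).
ab=param(1); alpha=param(2); beta=param(3); delta=param(4);
rho=param(5); se=param(6); sigma=param(7); long=param(8); init=param(9);
slong = init+long;
a     = zeros(slong,1);
a(1)  = ab+e(1);
for i = 2:slong;
   a(i) = rho*a(i-1)+(1-rho)*ab+e(i);
end
kss = ((1-beta*(1-delta))/(alpha*beta))^(1/(alpha-1));
k   = zeros(slong+1,1); k(1) = kss;
c   = zeros(slong,1);
s   = alpha*beta;             % saving rate of the sigma=1, delta=1 model
for i = 1:slong;
   y      = exp(a(i))*k(i)^alpha;
   c(i)   = (1-s)*y;
   k(i+1) = s*y+(1-delta)*k(i);
end
lb  = c.^(-sigma);
T   = init+1:slong-1;
T1  = init+2:slong;
lk  = log(k(1:slong));
phi = beta*lb(T1).*(alpha*exp(a(T1)).*k(T1).^(alpha-1)+1-delta);
X   = [ones(slong,1) lk a lk.^2 a.^2 lk.*a];
b0  = X(T,:)\log(phi);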

8.4 PEA and binding constraints: Optimal growth with irreversible investment

We now consider a variation on the previous model: we restrict gross investment to be non–negative in each and every period,

i_t ⩾ 0 ⟺ k_{t+1} ⩾ (1 − δ)k_t    (8.9)

This assumption amounts to assuming that there does not exist a second–hand market for capital. In such a case, the problem of the central planner is to determine consumption and capital accumulation such that utility is maximal:

max_{{c_t, k_{t+1}}_{t=0}^∞} E_0 Σ_{t=0}^∞ β^t (c_t^{1−σ} − 1)/(1 − σ)

s.t.
k_{t+1} = z_t k_t^α − c_t + (1 − δ)k_t
k_{t+1} ⩾ (1 − δ)k_t

Forming the Lagrangean associated with the previous problem, we have

L_t = E_t Σ_{τ=0}^∞ β^τ [ (c_{t+τ}^{1−σ} − 1)/(1 − σ) + λ_{t+τ}(z_{t+τ}k_{t+τ}^α + (1 − δ)k_{t+τ} − c_{t+τ} − k_{t+τ+1}) + µ_{t+τ}(k_{t+τ+1} − (1 − δ)k_{t+τ}) ]

which leads to the following set of first order conditions:

c_t^{−σ} = λ_t    (8.10)
λ_t − µ_t = βE_t[λ_{t+1}(αz_{t+1}k_{t+1}^{α−1} + 1 − δ) − µ_{t+1}(1 − δ)]    (8.11)
k_{t+1} = z_t k_t^α − c_t + (1 − δ)k_t    (8.12)
µ_t(k_{t+1} − (1 − δ)k_t) = 0    (8.13)

The main difference with the previous example is that the central planner now faces a constraint that may be binding in each and every period. This complicates the algorithm a little bit, as we have to find a rule for both the expectation function E_t[ϕ_{t+1}], where

ϕ_{t+1} ≡ β(λ_{t+1}(αz_{t+1}k_{t+1}^{α−1} + 1 − δ) − µ_{t+1}(1 − δ))

and the multiplier µ_t. We then proceed as suggested in Marcet and Lorenzoni [1999]:

1. Compute two sequences for {λ_t(θ)}_{t=0}^T and {k_t(θ)}_{t=0}^T from (8.11) and (8.12) under the assumption that the constraint is not binding — that is, µ_t(θ) = 0. In such a case, we just compute the sequences as in the standard optimal growth model.

2. Test whether, under this assumption, i_t(θ) ⩾ 0. If it is the case, then set µ_t(θ) = 0; otherwise set k_{t+1}(θ) = (1 − δ)k_t(θ), compute c_t(θ) from the resource constraint, and recover µ_t(θ) from (8.11) (see the code fragment below).

Note that, using this procedure, µ_t is just treated as an additional variable which is used to compute a sequence to solve the model, so we do not need to compute its interpolating function explicitly. As far as ϕ_{t+1} is concerned, we use the same interpolating function as in the previous example and therefore run a regression of the type

log(ϕ_{t+1}(θ)) = θ0 + θ1 log(k_t(θ)) + θ2 log(z_t) + θ3 log(k_t(θ))^2 + θ4 log(z_t)^2 + θ5 log(k_t(θ)) log(z_t)    (8.14)

to get θ̂.
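The test in step 2 maps directly into the simulation loop of the Matlab program given at the end of this section; the relevant fragment, where lb(i) holds Φ(k_i, z_i; θ), reads:

% excerpt from the program below: testing the irreversibility constraint
iv = exp(a(i))*k(i)^alpha-lb(i)^(-1/sigma);  % investment implied by mu_t=0
if iv>0;                       % constraint slack: keep the unconstrained plan
   k(i+1) = (1-delta)*k(i)+iv;
   mu(i)  = 0;
else                           % constraint binds: i_t=0, all output consumed
   k(i+1) = (1-delta)*k(i);
   c      = exp(a(i))*k(i)^alpha;
   mu(i)  = c^(-sigma)-lb(i);  % multiplier recovered from (8.11)
end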

Up to the shock, the parameterization we used in the Matlab code, reported in table 8.3, is essentially the same as the one used in the optimal growth model. The shock was artificially assigned a lower persistence and a greater volatility in order to increase the probability that the constraint binds, and therefore to illustrate the potential of this approach. γ, the smoothing parameter, was set to 1. The stopping criterion was set at η = 1e−6 and T = 20000 data points were used to compute the OLS regression.

Table 8.3: Optimal growth with irreversible investment: Parameterization

β     σ   α    δ    ρ    σe
0.95  1   0.3  0.1  0.8  0.14

Initial conditions were set as in the standard optimal growth model: we first solve the model relying on a log–linear approximation, generate a random draw of size T for ε, and generate series using the log–linear approximate solution. We then build the needed series to recover a draw for {ϕ_{t+1}(θ)}_{t=0}^T, {k_t(θ)}_{t=0}^T and {z_t(θ)}_{t=0}^T and run the regression (8.14) to get an initial condition for θ, reported in table 8.4. The algorithm converges after 115 iterations and delivers the final decision rule reported in table 8.4.

Table 8.4: Decision rule

         θ0      θ1       θ2       θ3       θ4       θ5
Initial  0.4620  -0.5760  -0.3909  0.0257   0.0307   -0.0524
Final    0.3558  -0.3289  -0.7182  -0.1201  -0.2168  0.3126

Contrary to the standard optimal growth model, the initial and final rules totally differ, in the sense that the coefficient on the capital stock in the final rule is half that of the initial rule, the coefficient on the shock is double, and the signs of all the quadratic terms are reversed. This should not be surprising, as the initial rule is computed under (i) the certainty equivalence hypothesis and (ii) the assumption that the constraint never binds, whereas the size of the shocks we introduce in the model implies that the constraint binds in 2.8% of the cases. The latter quantity may seem rather small, but it is sufficient to dramatically alter the decisions of the central planner when it acts under rational expectations. This is illustrated by figures 8.2 and 8.3, which respectively report the decision rules for investment, capital and the Lagrange multiplier, and a typical path for investment and the Lagrange multiplier.

[Figure 8.2: Decision rules — four panels: investment against k_t, the simulated distribution of investment, the capital stock k_{t+1} against k_t, and the Lagrange multiplier against k_t.]

[Figure 8.3: Typical investment path — investment and the Lagrange multiplier plotted over time.]

As reflected in the upper right panel of figure 8.2, which reports the simulated distribution of investment, the distribution is highly skewed and exhibits a mode at i_t = 0, revealing the fact that the constraint occasionally binds. This is also illustrated in the lower left panel, which reports the decision rule for the capital stock. As can be seen from this graph, the decision rule is bounded from below by the line (1 − δ)k_t (the grey line on the graph); such situations correspond to a positive Lagrange multiplier, as reported in the lower right panel of the figure.

Matlab Code: PEA Algorithm (Irreversible Investment)

clear all
% sample sizes and algorithm settings
long  = 20000;
init  = 500;
slong = init+long;
T     = init+1:slong-1;
T1    = init+2:slong;
tol   = 1e-6;
crit  = 1;
gam   = 1;

% structural parameters
sigma = 1;
delta = 0.1;
beta  = 0.95;
alpha = 0.3;
ab    = 0;
rho   = 0.8;
se    = 0.125;

% deterministic steady state
kss = ((1-beta*(1-delta))/(alpha*beta))^(1/(alpha-1));
css = kss^alpha-delta*kss;
lss = css^(-sigma);
ysk = (1-beta*(1-delta))/(alpha*beta);
csy = 1-delta/ysk;

%
% Simulation of the shock
%
randn('state',1);
e    = se*randn(slong,1);
a    = zeros(slong,1);
a(1) = ab+e(1);
for i = 2:slong;
   a(i) = rho*a(i-1)+(1-rho)*ab+e(i);
end

%
% Initial guess
%
param = [ab alpha beta delta rho se sigma long init];
b0    = peaoginit(e,param);

%
% Main Loop
%
iter = 1;
while crit>tol;
   %
   % Simulated path
   %
   k    = zeros(slong+1,1);
   lb   = zeros(slong,1);
   mu   = zeros(slong,1);
   X    = zeros(slong,length(b0));
   k(1) = kss;
   for i = 1:slong;
      X(i,:) = [1 log(k(i)) a(i) log(k(i))*log(k(i)) a(i)*a(i) log(k(i))*a(i)];
      lb(i)  = exp(X(i,:)*b0);
      iv     = exp(a(i))*k(i)^alpha-lb(i)^(-1/sigma); % investment if mu_t=0
      if iv>0;                      % constraint not binding
         k(i+1) = (1-delta)*k(i)+iv;
         mu(i)  = 0;
      else                          % constraint binds: i_t=0
         k(i+1) = (1-delta)*k(i);
         c      = exp(a(i))*k(i)^alpha;
         mu(i)  = c^(-sigma)-lb(i); % multiplier from (8.11)
      end
   end
   % realized phi_{t+1} and OLS step on (8.14)
   y    = beta*(lb(T1).*(alpha*exp(a(T1)).*k(T1).^(alpha-1)+1-delta) ...
          -mu(T1)*(1-delta));
   bt   = X(T,:)\log(y);
   b    = gam*bt+(1-gam)*b0;
   crit = max(abs(b-b0));
   b0   = b;
   disp(sprintf('Iteration: %d\tConv. crit.: %g',iter,crit))
   iter = iter+1;
end;

8.5 The Household's Problem with Borrowing Constraints

As a final example, we now consider a consumer who faces a borrowing constraint, such that she solves the program

max_{{c_t}} E_t Σ_{τ=0}^∞ β^τ u(c_{t+τ})

s.t.
a_{t+1} = (1 + r)a_t + ω_t − c_t
a_{t+1} ⩾ a
log(ω_{t+1}) = ρ log(ω_t) + (1 − ρ) log(ω̄) + ε_{t+1}

where a is the borrowing limit (a = 0 in the parameterization below). The first order conditions associated with this problem are:

c_t^{−σ} = λ_t    (8.15)
λ_t = µ_t + β(1 + r)E_t λ_{t+1}    (8.16)
a_{t+1} = (1 + r)a_t + ω_t − c_t    (8.17)
log(ω_{t+1}) = ρ log(ω_t) + (1 − ρ) log(ω̄) + ε_{t+1}    (8.18)
µ_t(a_{t+1} − a) = 0    (8.19)
µ_t ⩾ 0    (8.20)

In order to solve this model, we have to find a rule for both the expectation function E_t[ϕ_{t+1}], where

ϕ_{t+1} ≡ βRλ_{t+1}

with R ≡ 1 + r, and the multiplier µ_t. We propose to follow the same procedure as in the previous example:

1. Compute two sequences for {λ_t(θ)}_{t=0}^T and {a_t(θ)}_{t=0}^T from (8.16) and (8.17) under the assumption that the constraint is not binding — that is, µ_t(θ) = 0.

2. Test whether, under this assumption, a_{t+1}(θ) ⩾ a. If it is the case, then set µ_t(θ) = 0; otherwise set a_{t+1}(θ) = a, compute c_t(θ) from the budget constraint, and recover µ_t(θ) from (8.16) (see the code fragment below).

Note that, using this procedure, µ_t is just treated as an additional variable which is used to compute a sequence to solve the model, so we do not need to compute its interpolating function explicitly. As far as ϕ_{t+1} is concerned, we use the same type of interpolating function as in the previous examples and therefore run a regression of the type

log(ϕ_{t+1}(θ)) = θ0 + θ1 a_t(θ) + θ2 ω_t + θ3 a_t(θ)^2 + θ4 ω_t^2 + θ5 a_t(θ)ω_t    (8.21)

to get θ̂.
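As in the previous example, the test in step 2 maps directly into the simulation loop of the program given below; the relevant fragment, where w(i) stores ω_i, reads:

% excerpt from the program below: testing the borrowing constraint
lb(i) = exp(X(i,:)*b0);                % lambda_t = Phi(a_t,omega_t;theta)
a1    = R*a(i)+w(i)-lb(i)^(-1/sigma);  % next-period wealth if mu_t=0
if a1>ab;                              % constraint slack
   a(i+1) = a1;
   c(i)   = lb(i)^(-1/sigma);
else                                   % constraint binds: a_{t+1}=a
   a(i+1) = ab;
   c(i)   = R*a(i)+w(i)-ab;
   lb(i)  = c(i)^(-sigma);
end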

The parameterization is reported in table 8.5. γ, the smoothing parameter, was set to 1. The stopping criterion was set at η = 1e−6 and T = 20000 data points were used to compute the OLS regression.

Table 8.5: Borrowing constraint: Parameterization

a    β     σ    ρ    σω   R     ω̄
0    0.95  1.5  0.7  0.1  1.04  1

One key issue in this particular problem is related to the initial conditions. Indeed, it is extremely difficult to find a good initial guess, as the only model related to the present one for which we might get an analytical solution is the standard permanent income model. Unfortunately, that model exhibits non–stationary behavior, in the sense that it generates an I(1) process for the level of individual wealth and consumption, and therefore for the marginal utility of wealth. We therefore have to take another route, and propose the following procedure. For a given a_0 and a sequence {ω_t}_{t=0}^T, we generate c_0 = r̃a_0 + ω_0 + η_0, where r̃ > r and η_0 ∼ N(0, σ_η). In practice, we took r̃ = 0.1 and σ_η = 0.1. We then compute a_1 from the law of motion of wealth. If a_1 < a, then a_1 is set to a and c_0 = Ra_0 + ω_0 − a; otherwise c_0 is not modified. We then proceed exactly the same way for all t > 0. We then have in hand sequences for both a_t and c_t, and therefore for λ_t, and we can easily recover ϕ_{t+1} and an initial θ from the regression (8.21) (see table 8.6).

Table 8.6: Decision rule

         θ0      θ1       θ2       θ3      θ4      θ5
Initial  1.6740  -0.6324  -2.1918  0.0133  0.5438  0.2971
Final    1.5046  -0.5719  -2.1792  0.0458  0.7020  0.3159

The algorithm converges after 79 iterations and delivers the final decision rule reported in table 8.6. Note that while the final decision rule effectively differs from the initial one, the difference is not huge, meaning that our initialization procedure is relevant. Figure 8.4 reports the decision rule for consumption in terms of cash–on–hand — that is, the effective amount a household may use to purchase goods (Ra_t + ω_t − a). Figure 8.5 reports the decision rule for wealth accumulation as well as the implied distribution of wealth, which admits a mode at a, revealing that the constraint effectively binds (in 13.7% of the cases).

[Figure 8.4: Consumption decision rule — consumption against cash–on–hand (Ra_t + ω_t − a).]

[Figure 8.5: Wealth accumulation — the wealth decision rule against a_t, and the implied distribution of wealth.]

Matlab Code: PEA Algorithm (Borrowing Constraints)

clear
% algorithm settings and sample sizes
crit  = 1;
tol   = 1e-6;
gam   = 1;
long  = 20000;
init  = 500;
slong = long+init;
T     = init+1:slong-1;
T1    = init+2:slong;

% structural parameters
rw    = 0.7;
sw    = 0.1;


wb    = 0;
beta  = 0.95;
R     = 1/(beta+0.01);
sigma = 1.5;
ab    = 0;                    % borrowing limit (a)

% simulation of the income process
randn('state',1);
e    = sw*randn(slong,1);
w    = zeros(slong,1);
w(1) = wb+e(1);
for i = 2:slong;
   w(i) = rw*w(i-1)+(1-rw)*wb+e(i);
end
w = exp(w);

%
% Initial guess: crude consumption rule c = rt*a+w+noise
%
a  = zeros(slong+1,1);
c  = zeros(slong,1);
lb = zeros(slong,1);
X  = zeros(slong,6);
a(1) = ab;                    % start from the borrowing limit
rt = 0.2;
sc = 0.1;
randn('state',1234567890);
ec = sc*randn(slong,1);
for i = 1:slong;
   X(i,:) = [1 a(i) w(i) a(i)*a(i) w(i)*w(i) a(i)*w(i)];
   c(i)   = rt*a(i)+w(i)+ec(i);
   a1     = R*a(i)+w(i)-c(i);
   if a1>ab;
      a(i+1) = a1;
   else
      a(i+1) = ab;
      c(i)   = R*a(i)+w(i)-ab;
   end
end
lb = c.^(-sigma);
y  = log(beta*R*lb(T1));
b0 = X(T,:)\y;

%
% Main Loop
%
iter = 1;
while crit>tol;
   a  = zeros(slong+1,1);
   c  = zeros(slong,1);
   lb = zeros(slong,1);
   X  = zeros(slong,length(b0));
   a(1) = 0;
   for i = 1:slong;
      X(i,:) = [1 a(i) w(i) a(i)*a(i) w(i)*w(i) a(i)*w(i)];
      lb(i)  = exp(X(i,:)*b0);              % lambda_t = Phi(a_t,omega_t;theta)
      a1     = R*a(i)+w(i)-lb(i)^(-1/sigma);
      if a1>ab;                             % constraint not binding
         a(i+1) = a1;
         c(i)   = lb(i)^(-1/sigma);
      else                                  % constraint binds: a_{t+1}=a
         a(i+1) = ab;
         c(i)   = R*a(i)+w(i)-ab;
         lb(i)  = c(i)^(-sigma);
      end
   end
   % realized phi_{t+1} and OLS step on (8.21)
   y    = log(beta*R*lb(T1));
   b    = X(T,:)\y;
   b    = gam*b+(1-gam)*b0;
   crit = max(abs(b-b0));
   b0   = b;
   disp(sprintf('Iteration: %d\tConv. crit.: %g',iter,crit))
   iter = iter+1;
end;


Bibliography

Marcet, A., Solving Nonlinear Stochastic Models by Parametrizing Expectations, mimeo, Carnegie–Mellon University, 1988.

Marcet, A. and D.A. Marshall, Solving Nonlinear Rational Expectations Models by Parametrized Expectations: Convergence to Stationary Solutions, manuscript, Universitat Pompeu Fabra, Barcelona, 1994.

Marcet, A. and G. Lorenzoni, The Parameterized Expectations Approach: Some Practical Issues, in M. Marimon and A. Scott, editors, Computational Methods for the Study of Dynamic Economies, Oxford: Oxford University Press, 1999.
