:<math>g^{\prime }(a_{1}^{*})=\beta \cdot \frac{h_{\varepsilon }}{h_{1}+h_{\varepsilon }}\in (0,1)\,</math>
As <math>g^{\prime }(a_{1}^{*}) < 1\,</math>, it must be the case that <math>a_{1}^{*} < a^{FB}\,</math>. Likewise, if the agent lives for <math>T\,</math> periods then <math>a_{T}^{*} = 0\,</math>, so effort declines from the first period onwards, but some effort is exerted in the first period and it is increasing in <math>\beta\,</math> (as the future becomes more important). The manager exerts higher effort in the first period because he knows that higher output will be attributed to ability in the second period and so result in a higher wage; but in a rational expectations equilibrium this effort is anticipated by the market, and the manager is forced into exerting it because otherwise the market would downgrade its estimate of his ability.

===Infinite Horizon with fixed ability===

First we define:

:<math>z_{t}\equiv \eta +\varepsilon _{t}=y_{t}-a_{t}^{*}(y^{t-1})\,</math>

Then we apply Bayesian updating as before:

:<math>[\eta |z^{t-1}]\sim N(m_{t},\frac{1}{h_{t}})\,</math>

To give:

:<math>m_{t}(z^{t-1})=\frac{h_{1}}{h_{1}+(t-1)h_{\varepsilon }}\cdot m_{1}+\frac{h_{\varepsilon }}{h_{1}+(t-1)h_{\varepsilon }}\cdot \sum_{s=1}^{t-1}z_{s}\,</math>

and

:<math>h_{t}=h_{1}+(t-1)h_{\varepsilon}\,</math>

Note that <math>h_{t} \rightarrow \infty\,</math>, which implies that as time progresses the market gets an ever better estimate of the manager's ability, and so the manager's effort <math>a_{t}^{*}(y^{t-1}) \rightarrow 0\,</math> (see below).
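The learning dynamics above can be sketched numerically. This is a minimal illustration, not part of the model itself; the parameter values for <math>h_{1}\,</math> and <math>h_{\varepsilon}\,</math> are arbitrary assumptions.

```python
# Fixed-ability Bayesian updating: posterior precision and the weights
# the market places on the prior mean m_1 and on each observation z_s.
# The values h1 = 1.0 and h_eps = 0.5 are illustrative assumptions.
h1, h_eps = 1.0, 0.5

def precision(t):
    """Posterior precision h_t = h_1 + (t - 1) * h_eps."""
    return h1 + (t - 1) * h_eps

def mean_weights(t):
    """Weights on the prior mean m_1 and on each observed z_s in m_t."""
    h_t = precision(t)
    return h1 / h_t, h_eps / h_t

# Precision grows linearly without bound, so the weight on any single
# observation shrinks toward zero: the market's estimate of ability
# becomes arbitrarily sharp as the career progresses.
for t in (1, 10, 100, 1000):
    w_prior, w_obs = mean_weights(t)
    print(t, precision(t), round(w_prior, 4), round(w_obs, 4))
```

Note that the weights always sum to one across the prior and the <math>t-1\,</math> observations, as the Bayesian updating formula requires.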
The manager's ex-ante expected wage is:

:<math>\mathbb{E}[w_{t}(y^{t-1})]=\frac{h_{1}}{h_{t}}\cdot m_{1}+\frac{h_{\varepsilon }}{h_{t}}\cdot \sum_{s=1}^{t-1}\underset{\mathbb{E}z_{s}}{\underbrace{[\overset{\mathbb{E}y_{s}}{\overbrace{m_{1}+a_{s}}}-\mathbb{E}a_{s}^{*}(y^{s-1})]}}+\mathbb{E}a_{t}^{*}(y^{t-1})\,</math>

This can be substituted into the manager's utility function:

:<math>\underset{\{a_{t}(y^{t-1})\}_{t=1}^{\infty }}{\max }\;\sum_{t=1}^{\infty}\beta ^{t-1}[\mathbb{E}w_{t}(y^{t-1})-\mathbb{E}g(a_{t}(y^{t-1}))]\,</math>

which is solved by a first order condition to give:

:<math>g^{\prime }(a_{t}^{*})=\sum_{s=t}^{\infty }\beta ^{s-t}\cdot \alpha_{s}\equiv \gamma _{t}\,</math>

where <math>\alpha _{s}\equiv \frac{h_{\varepsilon }}{h_{s}}\,</math>.

So early in the manager's career he will work hard (though possibly still below first best, depending on the parameterization), and this work 'ethic' will tend to zero as his career progresses.

===Infinite Horizon with varying ability===

For incentives not to disappear there must always be some uncertainty about the manager's ability, so now suppose:

:<math>\eta _{t+1}=\eta _{t}+\delta _{t}\,</math>

where <math>\delta _{t}\sim N(0,\frac{1}{h_{\delta}})\,</math>.

Bayesian updating on the mean is as before:

:<math>m_{t+1}=\mu _{t}m_{t}+(1-\mu _{t})z_{t}\,</math>

where <math>\mu _{t}=\frac{h_{t}}{h_{t}+h_{\varepsilon}}\,</math>.

However, Bayesian updating on the variance (precision) is different:

:<math>h_{t+1}=\frac{(h_{t}+h_{\varepsilon })h_{\delta }}{h_{t}+h_{\varepsilon}+h_{\delta}}\,</math>

Essentially, the shocks prevent the market from ever fully learning the true ability. Looking at how the precision changes over time, we take:

:<math>\frac{\partial h_{t+1}}{\partial h_{t}} = \frac{h_{\delta }^{2}}{(h_{t}+h_{\varepsilon }+h_{\delta })^{2}}\in (0,1)\,</math>

So we must conclude that the variance tends to a steady state and not to zero.
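This convergence can be checked numerically: because the derivative of the precision recursion lies in <math>(0,1)\,</math>, the map is a contraction and <math>h_{t}\,</math> approaches the same limit from any starting point. The sketch below uses illustrative values for <math>h_{\varepsilon}\,</math> and <math>h_{\delta}\,</math> that are assumptions, not from the text.

```python
# Varying-ability precision recursion:
# h_{t+1} = (h_t + h_eps) * h_delta / (h_t + h_eps + h_delta).
# Parameter values are illustrative assumptions.
h_eps, h_delta = 0.5, 2.0

def step(h):
    """One period of the precision recursion."""
    return (h + h_eps) * h_delta / (h + h_eps + h_delta)

def derivative(h):
    """d h_{t+1} / d h_t = h_delta^2 / (h + h_eps + h_delta)^2."""
    return h_delta ** 2 / (h + h_eps + h_delta) ** 2

# Iterate from two very different starting precisions.
h = 1.0
h_alt = 100.0
for _ in range(200):
    h = step(h)
    h_alt = step(h_alt)

# Both paths reach the same fixed point, and the contraction factor
# stays strictly below 1, so precision settles at a finite steady state
# rather than growing without bound as in the fixed-ability case.
print(round(h, 8), round(h_alt, 8), derivative(h) < 1)
```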
This in turn leads to steady state effort, which we can solve for by equating the marginal benefit to the marginal cost of a change:

:<math>g^{\prime }(a^{*})=\beta (1-\mu ^{*})+\beta ^{2}\mu ^{*}(1-\mu^{*})+\beta ^{3}(\mu ^{*})^{2}(1-\mu ^{*})+\cdots =\frac{\beta (1-\mu ^{*})}{1-\beta \mu ^{*}}\,</math>

This in turn leads to Holmstrom's proposition 1: the stationary effort satisfies <math>a^{*}\leq a^{FB}\,</math>, with equality only if <math>\beta =1\,</math>, <math>\frac{1}{h_{\varepsilon }}>0\,</math>, and <math>\frac{1}{h_{\delta }}>0\,</math>.
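The geometric-series step in the first-order condition can be verified numerically. The sketch below uses an arbitrary illustrative value for the steady-state weight <math>\mu ^{*}\,</math>; it is an assumption for the check, not a value derived in the text.

```python
# Check the steady-state first-order condition: the marginal benefit
#   beta(1-mu) + beta^2 mu(1-mu) + beta^3 mu^2 (1-mu) + ...
# is a geometric series summing to beta(1-mu) / (1 - beta*mu).
# beta = 0.9 and mu = 0.6 are illustrative assumptions.
beta, mu = 0.9, 0.6

series = sum(beta ** (k + 1) * mu ** k * (1 - mu) for k in range(10_000))
closed_form = beta * (1 - mu) / (1 - beta * mu)
assert abs(series - closed_form) < 1e-12

# With beta < 1 the stationary incentive is strictly below 1, so the
# stationary effort falls short of first best; at beta = 1 the series
# sums to exactly 1, recovering the first-best marginal condition.
print(round(closed_form, 6))
```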