Holmstrom (1999) - The Firm As A Subeconomy



Reference(s)

  • Holmstrom, Bengt (1999), "The Firm As A Subeconomy", Journal of Law, Economics, and Organization, Vol. 15, No. 1, pp. 74-102.


Abstract

This article explores the economic role of the firm in a market economy. The analysis begins with a discussion and critique of the property rights approach to the theory of the firm as exposited in the recent work by Hart and Moore ('Property Rights and the Nature of the Firm'). It is argued that the Hart-Moore model, taken literally, can only explain why individuals own assets, but not why firms own assets. In particular, the logic of the model suggests that each asset should be free standing in order to provide maximal flexibility for the design of individual incentives. These implications run counter to fact. One of the key features of the modern firm is that it owns essentially all the productive assets that it employs. Employees rarely own any assets; they only contribute human capital. Why is the ownership of assets clustered in firms? This article outlines an answer based on the notion that control over physical assets gives control over contracting rights to those assets. Metaphorically, the firm is viewed as a miniature economy, an 'island' economy, in which asset ownership conveys the CEO the power to define the 'rules of the game', that is, the ability to restructure the incentives of those that accept to do business on (or with) the island. The desire to regulate trade in this fashion stems from contractual externalities characteristic of imperfect information environments. The inability to regulate all trade through a single firm stems from the value of exit rights as an incentive instrument and a tool to discipline the abuse of power.


Free Rider Problem

What follows is essentially a simple elaboration of the basics of the Holmstrom (1982) moral-hazard-in-teams model, but rather than focusing on the principal as a budget breaker, it will primarily be used to discuss monitoring.

The set up is as follows:

  • There are [math]n\;[/math] workers, each identical and risk neutral
  • Each worker exerts effort [math]e_i\;[/math], which is unobservable both to a principal and to the other workers, at a cost of [math]e_i\;[/math]
  • Output is observable to all and is [math]y=y(e_1,\ldots,e_n)\;[/math]
  • There is a sharing contract [math]s_i(y)\;[/math] that denotes the share of output that [math]i\;[/math] gets paid
  • Initially shares are chosen in a partnership without disposal, so that [math]\sum_i s_i(y) = y\;[/math]


The "Team's Problem" is then simply whether [math]s_i\;[/math] can be chosen to induce workers to provide inputs efficiently (i.e. [math]e_i=e^*\; \forall i\;[/math]), that is so that output will maximize total surplus (i.e. [math]y=y(e^*)\;[/math]).


Assuming [math]y\;[/math] is differentiable (which is not innocuous) and strictly concave (diminishing returns to scale, or increasing costs of effort), then [math]e^*\;[/math] is completely characterized by (marginal benefit equals marginal cost):

[math]\frac{\partial y(e^*)}{\partial e_i} = 1 \quad \forall i=1,\ldots,n\;[/math]


Assuming sharing rules are differentiable (which again may not be innocuous), a non-cooperative Nash equilibrium must be characterized by (again marginal benefit equals marginal cost):

[math]\frac{ d s_i(y(e^{NE})) }{d y} \frac{\partial y(e^{NE})}{\partial e_i} = 1 \quad \forall i=1,\ldots,n\;[/math]


So for [math]e^{NE}=e^*\;[/math], it must be that:

[math]\frac{ d s_i(y(e^{NE})) }{d y} = 1 \quad \forall i=1,\ldots,n\;[/math]


But from the budget constraint:

[math]\sum_i s_i(y) = y \quad \therefore \sum_i \frac{ d s_i(y(e^{NE})) }{d y} = 1\;[/math]


The above two equations are clearly inconsistent. The problem is simply that in a partnership each worker cannot (credibly and by self-imposition) face the full social marginal benefit in his maximization; a numerical illustration follows the list below. There are two solutions:

  • The solution presented in Holmstrom 1982 which uses a budget breaker
  • A solution based on Alchian and Demsetz (1972) which uses monitoring
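
To see the free-rider result concretely, here is a minimal numerical sketch (the functional forms [math]y=2\sum_i \sqrt{e_i}\;[/math] and the equal-sharing rule [math]s_i=y/n\;[/math] are illustrative choices, not from the paper):

# Free-rider problem: equal sharing in a partnership vs. the efficient benchmark.
# Illustrative assumptions (not from the paper): y(e) = 2 * sum(sqrt(e_i)), cost of effort = e_i.
import numpy as np

n = 4  # number of workers

# Efficient effort: dy/de_i = 1  =>  1/sqrt(e*) = 1  =>  e* = 1
e_star = 1.0

# Nash effort under equal sharing s_i = y/n:
# (1/n) * dy/de_i = 1  =>  1/(n*sqrt(e)) = 1  =>  e = 1/n^2
e_nash = 1.0 / n**2

def total_surplus(e):
    """Total surplus when every worker exerts effort e."""
    y = 2 * n * np.sqrt(e)
    return y - n * e

print(f"efficient effort  e* = {e_star:.3f}, surplus = {total_surplus(e_star):.3f}")
print(f"Nash effort (y/n) e  = {e_nash:.3f}, surplus = {total_surplus(e_nash):.3f}")
# With n = 4: efficient surplus = 8 - 4 = 4, Nash surplus = 2 - 0.25 = 1.75.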


Budget Breakers

The budget-breaker solution is, loosely, to let [math]\sum_i s_i(y) \le y\;[/math], and then create a sharing rule as follows:

[math]s_i(y) = \begin{cases} \frac{y}{n} \quad &\mbox{if } y \ge y(e^*) \\ \frac{y(e^*)}{n} + y - y(e^*) \quad &\mbox{if } y \lt y(e^*) \end{cases} \;[/math]

This requires the team to credibly commit to destroying output if the efficient output is not reached (and is not renegotiation proof). In equilibrium this isn't a problem, as the first best is achieved. The commitment problem may be overcome by including a principal as a residual claimant.
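
A quick sketch, under the same illustrative functional forms with two workers, checking that under the budget-breaking rule no unilateral deviation from [math]e^*\;[/math] is profitable:

# Budget-breaker rule: s_i(y) = y/n if y >= y(e*), else y(e*)/n + (y - y(e*)).
# Illustrative forms only: y(e) = 2 * sum(sqrt(e_i)), cost of effort = e_i, n = 2.
import numpy as np

n = 2
e_star = 1.0
y_star = 2 * n * np.sqrt(e_star)

def share(y):
    """Budget-breaking sharing rule for one worker."""
    return y / n if y >= y_star else y_star / n + (y - y_star)

def payoff_i(e_i, e_others=e_star):
    """Worker i's payoff when everyone else plays e*."""
    y = 2 * (np.sqrt(e_i) + (n - 1) * np.sqrt(e_others))
    return share(y) - e_i

# Payoff from playing e* vs. the best unilateral deviation on a grid.
grid = np.linspace(0.01, 4.0, 2000)
best_dev = max(payoff_i(e) for e in grid)
print(f"payoff at e*          : {payoff_i(e_star):.4f}")
print(f"best deviation payoff : {best_dev:.4f}")   # no profitable deviation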


Monitoring

The Alchian and Demsetz (1972) solution is to add a principal who monitors the inputs in some fashion (ignoring the problems of how this person is compensated or incentivized). Again, the monitor may be the residual claimant. A long-term reputation game may prevent the monitor from cheating.

The question here is when will an additional performance measure [math]z=z(e_1,\ldots,e_n)\;[/math] add value?

Suppose we rewrite [math]y=e_1 + e_2\;[/math] (with just two team members now), and that the cost of effort [math]c_i(e_i)\;[/math] is now strictly convex (to give an interior solution).


Imagine there is another performance measure:

[math]z = e_1 + \gamma e_2\;[/math]


Restricting to linear sharing rules we have:

[math]s_1(y,z) = \alpha y + \beta z + \delta \;[/math]
[math]s_2(y,z) = (1-\alpha) y - \beta z - \delta \;[/math]


The FOCs for effort (under the sharing contract) are then:

[math]\alpha + \beta = c_1'(e_1)\;[/math]
[math](1-\alpha) - \beta\gamma = c_2'(e_2)\;[/math]


Call the terms on the left the incentive coefficients.

And the workers will respond to changes in [math]\beta\;[/math]:


[math]\frac{d e_1}{d \beta} = \frac{1}{c_1''}\;[/math]
[math]\frac{d e_2}{d \beta} = \frac{- \gamma}{c_2''}\;[/math]


As net output is [math]y-c_1-c_2\;[/math], the total effect of a small change in [math]\beta\;[/math] (evaluated at [math]\beta=0\;[/math], where the FOCs give [math]c_1'=\alpha\;[/math] and [math]c_2'=1-\alpha\;[/math]) is:

[math]\left ( (1-\alpha) \frac{1}{c_1''} + \alpha \frac{- \gamma}{c_2''} \right ) d\beta\;[/math]


Thus [math]z\;[/math] is valuable iff the term in the bracket is non-zero. To see when this holds, evaluate at the best sharing rule that uses [math]y\;[/math] alone (i.e. [math]\beta=0\;[/math] with [math]\alpha\;[/math] chosen optimally, so that [math]\frac{1-\alpha}{c_1''} = \frac{\alpha}{c_2''}\;[/math]); the bracket then equals [math]\frac{\alpha(1-\gamma)}{c_2''}\;[/math], so:

  • If [math]\gamma = 1\;[/math] then [math]z=y\;[/math] and the term is zero - [math]z\;[/math] adds nothing
  • If [math]\gamma \ne 1\;[/math] then [math]z \ne y\;[/math] and the term is non-zero (for [math]\alpha \in (0,1)\;[/math]), so adjusting [math]\beta\;[/math] raises net output


Thus, as long as the measure is not collinear with output, it provides additional information and can be used to strengthen incentives. This extends directly to [math]n\;[/math] workers.
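
A sketch of the argument, assuming quadratic costs [math]c_i(e_i)=\frac{1}{2}k_i e_i^2\;[/math] (an illustrative choice, not from the paper): starting from the best contract that uses [math]y\;[/math] alone, adding [math]z\;[/math] raises net output when [math]\gamma \ne 1\;[/math] and adds nothing when [math]\gamma = 1\;[/math].

# Value of an extra performance measure z = e1 + gamma*e2 alongside y = e1 + e2.
# Illustrative quadratic costs c_i(e_i) = 0.5 * k_i * e_i**2 (not from the paper).
import numpy as np
from scipy.optimize import minimize_scalar, minimize

k1, k2 = 1.0, 2.0

def net_output(alpha, beta, gamma):
    """Net output y - c1 - c2 given the agents' best responses to (alpha, beta)."""
    e1 = (alpha + beta) / k1                 # FOC: alpha + beta = k1*e1
    e2 = ((1 - alpha) - beta * gamma) / k2   # FOC: (1-alpha) - beta*gamma = k2*e2
    return (e1 + e2) - 0.5 * k1 * e1**2 - 0.5 * k2 * e2**2

for gamma in (1.0, 0.5):
    # Best contract that ignores z (beta = 0).
    res_y = minimize_scalar(lambda a: -net_output(a, 0.0, gamma), bounds=(0, 1), method="bounded")
    # Best contract that also uses z.
    res_yz = minimize(lambda x: -net_output(x[0], x[1], gamma), x0=[0.5, 0.0])
    print(f"gamma = {gamma}: surplus with y only = {-res_y.fun:.4f}, "
          f"with y and z = {-res_yz.fun:.4f}")
# With gamma = 1 the extra measure adds nothing; with gamma != 1 it strictly helps.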

Ownership and Incomplete Contracts

Property rights theory, as embodied originally in Grossman and Hart (1986) and crystallized in Hart and Moore (1990), argues that asset ownership is central to the theory of the firm. Following Williamson (1971) and others, the model argues that the threat of hold-up leads to underinvestment in relationship-specific assets, and that integrating complementary assets into a single firm can therefore be efficient.


The Hart-Moore Model

The simplified model is as follows:

  • There are two parties [math]B\;[/math] (a buyer) and [math]S\;[/math] (a seller).
  • There are two dates, 1 and 2.
  • There is a set of assets [math]A\;[/math], that can be allocated to either or both parties, or some outsider (which is never optimal)
  • At [math]t=1\;[/math], [math]B\;[/math] and [math]S\;[/math] make private investments [math]b\;[/math] and [math]s\;[/math].
  • No contract can be written to specify the use (or trades) of the assets at [math]t=2\;[/math]
  • At [math]t=2\;[/math] parties can trade or not. If they trade they generate [math]v(b,s)\;[/math] and split the surplus 50-50 (see below). If they don't they get their outside options, [math]v_b(b|A_b)\;[/math] and [math]v_s(s|A_s)\;[/math].
  • It is assumed that [math]v \ge v_s +v_b\;[/math], and these values are exogenously given


The surplus if they trade is:

[math]v - v_s - v_b\;[/math]


Each party gets the value of their outside option plus half of the surplus:

[math]s_b(b,s|A_b,A_s) = v_b(b|A_b) + \frac{1}{2}\cdot \left ( v(b,s) - v_b(b|A_b) - v_s(s|A_s) \right ) = \frac{1}{2}\cdot \left ( v(b,s) + v_b(b|A_b) - v_s(s|A_s) \right )\;[/math]


The authors assume that with more assets under control, the marginal incentive to invest increases. That is, [math]\frac{\partial v_b}{\partial b}\;[/math] increases as [math]A_b\;[/math] increases, and likewise for [math]s\;[/math]. They also assume complementarities in production, that is [math]\frac{\partial^2 v}{\partial b \partial s} \gt 0\;[/math].


Crucially, they also assume that the marginal contributions of investment to value are higher when the two parties work together. That is [math]\frac{\partial v}{\partial b} \ge \frac{\partial v_b}{\partial b}\;[/math], and likewise for [math]s\;[/math]. This gives us supermodular value functions (see also Topkis's theorem for the continuous case).

Implications of Hart-Moore

The equilibrium level of investment is at or below the efficient level.

From the supermodularity assumption above, we immediately have that [math]B\;[/math]'s private marginal return to investment (the derivative of its share [math]s_b\;[/math]) is weakly below the social marginal return:

[math]\frac{\partial v}{\partial b} \ge \frac{1}{2} \cdot \left(\frac{\partial v}{\partial b} + \frac{\partial v_b}{\partial b} \right ) = \frac{\partial s_b}{\partial b}\;[/math]


As investments are complementary, the less [math]B\;[/math] invests, the less [math]S\;[/math] will invest (and so forth).
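
A numerical sketch of the underinvestment result, with illustrative functional forms and investment costs equal to [math]b\;[/math] and [math]s\;[/math] (none of which are specified in the paper):

# Hart-Moore underinvestment: private vs. socially optimal investment.
# Illustrative forms (not from the paper): v = 4*sqrt(b) + 4*sqrt(s),
# outside options v_b = 2*sqrt(b), v_s = 2*sqrt(s), investment costs b and s.
import numpy as np
from scipy.optimize import minimize_scalar

v   = lambda b, s: 4*np.sqrt(b) + 4*np.sqrt(s)
v_b = lambda b: 2*np.sqrt(b)
v_s = lambda s: 2*np.sqrt(s)

def buyer_payoff(b, s):
    """B's 50:50 split payoff, s_b = v_b + 0.5*(v - v_b - v_s), net of the cost of b."""
    return v_b(b) + 0.5*(v(b, s) - v_b(b) - v_s(s)) - b

def social_surplus(b, s):
    return v(b, s) - b - s

s_fixed = 1.0  # hold S's investment fixed; the comparison is the same for any s
b_private = minimize_scalar(lambda b: -buyer_payoff(b, s_fixed), bounds=(1e-6, 25), method="bounded").x
b_social  = minimize_scalar(lambda b: -social_surplus(b, s_fixed), bounds=(1e-6, 25), method="bounded").x

print(f"B's equilibrium investment : {b_private:.2f}")  # ~2.25
print(f"Socially optimal investment: {b_social:.2f}")   # ~4.00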


It is never optimal to have joint ownership, as this amounts to two vetoes on usage instead of one. Implicit here is that [math]\frac{\partial v_b}{\partial b}\;[/math] increases as the set of assets [math]B\;[/math] solely owns (those counted in [math]A_b\;[/math]) increases, and likewise for [math]s\;[/math]. Thus jointly owned assets could always be reassigned to the sole ownership of one party without weakening either party's incentives to invest, while strengthening at least one party's. Likewise, outside ownership is never optimal, at least if the Shapley value is the bargaining outcome.


Assets that are perfectly complementary should always be owned by the same party, as otherwise one party owning only one of the assets renders both assets worthless outside the relationship.


If investments could be targeted towards enhancing either the inside value, [math]v\;[/math], or the outside value, [math]v_b\;[/math], then there would be a bias towards investing in outside options, which is a form of rent seeking - it sacrifices total value in order to get a bigger share.


Equivalence to Team's Problem

In the Hart-Moore problem the joint output is [math]y = v(s,b)\;[/math], and the parties provide unobserved inputs [math]e_1 = b\;[/math] and [math]e_2 = s\;[/math]. The only difference is in the instruments used to motivate the agents: instead of sharing rules there are asset allocation rules. The asset allocation rules need not be binary (even with only two possible allocations, ex-post agreed-upon lotteries can expand the payoffs to all linear combinations).


However, there is an important distinction: the payoffs in the property rights model are a function not only of the joint output but also of the outside options. In the team's problem a monitor can break the budget; here the market breaks the budget. It is implicitly assumed that the agents can observe each other's outside options (this may be costly and a friction). If this is actually the case then, unless [math]v_b\;[/math] and [math]v_s\;[/math] are collinear with [math]v\;[/math], they are additional performance measures as discussed above.


The special case of:

[math]v = v_b + v_s\; \forall b,s\;[/math]


is a perfectly competitive market for investment - the outside option is the same as the inside option, which gives:

[math]\frac{\partial s_b}{\partial b} = \frac{\partial v }{\partial b } \quad \mbox{and} \quad \frac{\partial s_s }{\partial s} = \frac{\partial v}{\partial s}\;[/math]


And so investment is socially optimal. Perfect market monitoring breaks the budget, and both sides get the full social return.
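
A short sketch of this special case (again with illustrative functional forms): when [math]v = v_b + v_s\;[/math], the 50-50 split hands each party exactly their own contribution, so the private and social optima coincide.

# Special case v = v_b + v_s: the market "breaks the budget" and investment is efficient.
# Illustrative forms (not from the paper): v_b = 2*sqrt(b), v_s = 2*sqrt(s), v = v_b + v_s.
import numpy as np
from scipy.optimize import minimize_scalar

v_b = lambda b: 2*np.sqrt(b)
v_s = lambda s: 2*np.sqrt(s)
v   = lambda b, s: v_b(b) + v_s(s)

s_fixed = 1.0
share_b = lambda b: v_b(b) + 0.5*(v(b, s_fixed) - v_b(b) - v_s(s_fixed))  # reduces to v_b(b)

b_private = minimize_scalar(lambda b: -(share_b(b) - b), bounds=(1e-6, 25), method="bounded").x
b_social  = minimize_scalar(lambda b: -(v(b, s_fixed) - b - s_fixed), bounds=(1e-6, 25), method="bounded").x

print(f"private optimum b = {b_private:.2f}, social optimum b = {b_social:.2f}")  # both ~1.00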


The key point is that in this model the market provides both information and the right to exit a relationship, and hence the investment incentives. Of course, in reality generating the information is not costless, and the bargaining might result in rent seeking.


Problems with this model

There are a number of problems, which the paper considers. These include:

  • Empirical issues:
    • Joint production does occur
    • Different bargaining rules can alter the conclusions
    • Asset specificity only matters on the margin (adding a constant to the joint surplus makes no difference)
  • Why Do Firms Own Assets?
    • The theory is about asset ownership by individuals
    • If human capital is used then we have taken the firm as exogenous
    • We should be asking what activities do firms do, not what assets do they own
    • The model suggests assets should be widely held across individuals, but we observe the opposite - firms own everything!


The paper provides three possible answers to the last point:

  1. Concentrating asset ownership in the firm strengthens the firm's bargaining position with outsiders
  2. It may influence the terms for financing asset purchases (the firm may be a financial intermediary)
  3. A firm can then assign workers to assets in a 'richer and more varied' manner. This makes the firm more responsive to changes that it could not anticipate ex ante.


In the words of Holmstrom:

"My argument is that it allows the firm in internalize many of the externalities
that are associated with incentive design in a world characterized by informational imperfections.
As the theory of second best suggests, an uncoordinated application of the available
incentive instruments will lead to significant externalities. By having access to more instreuments,
the firm can set up a more coherent system of incentives. Often this involves suppressing 
excessively strong incentives on individually measured performance for the benefit of enhancing
the effectiveness of more delicate and subtle instruments aimed at encouraging cooperation 
and other less easily measured activties...
Let me stress that viewing the firm as a subeconomy, which regulates trade according to 
second best principles, does not imply that one firm should own all the assets... 
seperate ownership does allow market based bargaining... [and] the very fact that workers can exit
a firm at will... and that consumers and suppliers can do likewise, limits the firm's ability 
to exploit these constituents."


Regulating Trade Within the Firm

This section bases much of its analysis on the implications of [[Holmstrom Milgrom (1991) - Multi Task Principal Agent Analyses|Holmstrom and Milgrom (1991)]] (the multitask principal-agent problem). With that in mind it sets up three moral hazard models.

Moral hazard stems from imperfect performance measurement. Three ways that measurement can be imperfect are:

  1. [math]x=f(e,\theta)\;[/math] where [math]\theta\;[/math] is realized by nature, and the agent is risk averse (or has limited wealth)
  2. [math]x=\theta e\;[/math] where [math]\theta\;[/math] is observed by the agent prior to exerting effort, [math]c(e)\;[/math] is strictly convex and [math]\mathbb{E}(\theta) =1 \;[/math]
  3. [math]x = e + m\;[/math], where [math]m\;[/math] is the degree to which the agent manipulates [math]x\;[/math], and is privately costly (this can be interpreted as shading on quality)


A simple model

Using the shading model ([math]x = e + m\;[/math]), with private cost:

[math]cost = c(e) + d(m) = \frac{1}{2}e^2 + \frac{1}{2}\lambda m^2\;[/math]


The value of output is:

[math]y = pe\;[/math]


where [math]p\;[/math] is the value of the agent's input(s).


Using linear sharing rules of the form:

[math]s(x) = \alpha x + \beta\;[/math]


The agent's FOCs from [math]s(x) - (c(e) + d(m))\;[/math] are:

[math]\alpha = e \quad \mbox{and} \quad \alpha = m\cdot \lambda \quad \therefore \frac{e}{m} = \lambda\;[/math]


So the ratio above will be the same irrespective of the sharing rule (incentive scheme) that is used (even if it is non-linear).


The total surplus is given by:

[math]S = y - (c(e) + d(m)) = pe - \left( \frac{1}{2}e^2 + \frac{1}{2}\lambda m^2 \right)\;[/math]


The FOCs are:

[math]p = e \quad \mbox{and} \quad \lambda m = 0\;[/math]


As [math]e = \alpha\;[/math] from the agent's FOC, implementing the first-best effort would require [math]\alpha = p\;[/math]. However, substituting the agent's solutions for [math]e\;[/math] and [math]m\;[/math] into [math]S\;[/math] gives:

[math]S = p\alpha - \left( \frac{1}{2} \alpha ^2 + \frac{1}{2}\lambda \left(\frac{\alpha}{\lambda}\right)^2 \right) = p\alpha - \frac{1}{2}\alpha^2 - \frac{\alpha^2}{2\lambda}\;[/math]


Then the best choice is to set:

[math]p - \frac{1}{2} \left (2 \alpha + 2\frac{\alpha}{\lambda} \right ) = 0 \;\therefore\; \alpha = \frac{\lambda p}{1+\lambda}\;[/math]
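
A quick numerical check, for illustrative parameter values, that [math]\alpha = \frac{\lambda p}{1+\lambda}\;[/math] is indeed the surplus-maximizing slope once the agent's responses [math]e=\alpha\;[/math] and [math]m=\alpha/\lambda\;[/math] are substituted in:

# Second-best incentive slope in the shading model: alpha* = lambda*p / (1 + lambda).
from scipy.optimize import minimize_scalar

p, lam = 2.0, 3.0  # illustrative parameter values

def surplus(alpha):
    """Surplus after substituting the agent's responses e = alpha, m = alpha/lam."""
    e, m = alpha, alpha / lam
    return p*e - (0.5*e**2 + 0.5*lam*m**2)

alpha_numeric = minimize_scalar(lambda a: -surplus(a), bounds=(0, p), method="bounded").x
alpha_formula = lam*p / (1 + lam)
print(f"numerical optimum   : {alpha_numeric:.4f}")
print(f"lambda*p/(1+lambda) : {alpha_formula:.4f}")   # both 1.5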


Introducing Multitasking

Now we allow for two tasks, and two performance measures, by changing the cost to:

[math]cost = \frac{1}{2}(e_1 + e_2)^2 + \frac{1}{2}\lambda m^2\;[/math]


and adding the two performance measures:

[math]x_1 = R(e_1) \;\mbox{and}\; x_2 = e_2+m\;[/math]


and changing the Principal's return function to be:

[math]y = p_1 R(e_1)+p_2 e_2\;[/math]


where [math]R(\cdot)\;[/math] is strictly increasing and strictly concave (to prevent corner solutions).


Now the agent maximizes:

[math]\alpha_1 R(e_1) + \alpha_2 (e_2 +m) - \frac{1}{2}(e_1 + e_2)^2 - \frac{1}{2}\lambda m^2\;[/math]


The three FOCs with respect to [math]e_1, e_2, m\;[/math] are:

[math]\alpha_1 R'(e_1) = e_1 + e_2 = \alpha_2\;[/math]
[math]e_2 = \alpha_2 -e_1\;[/math]
[math]m = \frac{\alpha_2}{\lambda}\;[/math]


The deadweight loss from manipulation is immediately apparent from the equations above - the first best would have no shading, so [math]\frac{1}{2}\lambda m^2\;[/math] is the loss, and from the last FOC we have:

[math]\frac{1}{2}\lambda m^2 = \frac{\alpha_2^2}{2\lambda}\;[/math]


Without specifying [math]R(\cdot)\;[/math] above, there is no simple analytical solution for either the first best or the second best. However, as manipulation becomes infinitely costly (i.e. [math]\lambda \to \infty\;[/math]) the principal sets:

[math]\alpha_1 = p_1 \;\mbox{and}\; \alpha_2 = p_2\;[/math]


When manipulation is possible, the second-best solution sets low-powered incentives on [math]e_2\;[/math] to reduce wasteful manipulation. This makes it optimal to set lower-powered incentives on [math]e_1\;[/math] too, as otherwise too much attention would be diverted towards it. As [math]\lambda\;[/math] decreases (manipulation becomes less costly), the incentives become even weaker.
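
A sketch of the second best for a particular choice of [math]R\;[/math] (here [math]R(e)=2\sqrt{e}\;[/math], an illustrative functional form since the text leaves [math]R\;[/math] unspecified), showing that both incentive coefficients weaken as manipulation gets cheaper:

# Second-best multitask contract with R(e1) = 2*sqrt(e1) (an illustrative choice;
# the text leaves R unspecified) and principal value y = p1*R(e1) + p2*e2.
import numpy as np

p1, p2 = 1.0, 2.0

def total_surplus(a1, a2, lam):
    """Total surplus given the agent's (interior) best response to (a1, a2)."""
    if a2 <= 0:
        return -np.inf
    e1 = (a1 / a2) ** 2          # from a1*R'(e1) = a2 with R'(e) = 1/sqrt(e)
    e2 = a2 - e1                 # from e1 + e2 = a2
    m = a2 / lam                 # from a2 = lam*m
    if e2 < 0:                   # keep to interior solutions for this sketch
        return -np.inf
    y = p1 * 2*np.sqrt(e1) + p2 * e2
    return y - 0.5*(e1 + e2)**2 - 0.5*lam*m**2

grid = np.linspace(0.01, 3.0, 300)
for lam in (10.0, 1.0, 0.25):
    best = max((total_surplus(a1, a2, lam), a1, a2) for a1 in grid for a2 in grid)
    _, a1_star, a2_star = best
    print(f"lambda = {lam:5.2f}: alpha1 = {a1_star:.2f}, alpha2 = {a2_star:.2f}")
# Incentives on both tasks weaken as manipulation gets cheaper (lambda falls);
# as lambda -> infinity the coefficients approach (p1, p2) = (1.0, 2.0).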

Holmstrom says:

"...an optimal incentive design should consider not only rewards 
but also instruments for influencing the agent's opportunity cost..."

Firm Boundaries

In this section the paper provides two variants of the earlier models to illustrate some points. Their conclusions are summarized quickly below.

Variant 1

This is a variant of the multitasking model above, but using asset ownership instead. In this variant there is one asset that is used by employee 1 in the production of just one of the two outputs (but not the other), and is not used by employee 2. It turns out that it is optimal not to have employee 1 own the asset. Here balanced incentives, even when low powered, turn out to be better than unbalanced incentives (for all parameter values).

Holmstrom says that here:

"...the logic of integration is not one of asset complementarities as defined in Hart and Moore (1990), but rather one of incentive complementarities caused by contractual externalitities as in Holmstrom and Milgrom (1994)."

Variant 2

A worker works for firm [math]A\;[/math], which has the multitasking model above and a key asset that enables it, and it alone, to produce output [math]y_2\;[/math]. Firm [math]B\;[/math] wants to buy just output [math]y_2\;[/math], but can only contract on the measure [math]x_2\;[/math] (not on [math]x_1\;[/math]). Only firm [math]A\;[/math] can contract with its worker, and firm [math]B\;[/math] cannot observe this contract. With cheap enough manipulation ([math]\lambda\;[/math] low enough), firm [math]B\;[/math] will stop contracting for the output [math]y_2\;[/math] altogether, because the worker will massively misallocate his effort. Firm [math]B\;[/math] can then decide whether it wants to live without [math]y_2\;[/math] or try to buy the key asset.

The problem is that the output prices have measurement costs in them. A price contract does not allow [math]B\;[/math] to specify what it wants from [math]A\;[/math] precisely enough. This contrasts with Bernheim and Whinston (1986) (see [[Baron 2001 - Theories of Strategic Nonmarket Participation|Baron (2001)]] for a write-up and references), where there are two principals and one agent (i.e. a common agency problem).