Bolton Farrell (1990) - Decentralization Duplication And Delay

From edegan.com
Revision as of 19:14, 29 September 2020 by Ed

Reference(s)

  • Bolton, Patrick and Joseph Farrell (1990), "Decentralization, Duplication, and Delay," Journal of Political Economy, 98, pp. 803-826.

Abstract

We argue that although decentralization has advantages in finding low-cost solutions, these advantages are accompanied by coordination problems, which lead to delay or duplication of effort or both. Consequently, decentralization is desirable when there is little urgency or a great deal of private information, but it is strictly undesirable in urgent problems when private information is less important. We also examine the effect of large numbers and find that coordination problems disappear in the limit if distributions are common knowledge.

The Model

Basic Setup

There are two firms [math]A\,[/math] and [math]B\,[/math] both considering entry into a natural-monopoly industry, as follows:

  • Entry requires sinking a cost [math]S\,[/math]
  • The natural monopoly is worth [math]V\,[/math], normalized to [math]1\,[/math]
  • A single entrant will get [math]\lambda\,[/math] if they enter, for a payoff of [math]\lambda - S\,[/math]
  • If both firms enter they each get [math]\mu\,[/math], for a payoff of [math]\mu - S\,[/math]
  • [math]S \sim F(\cdot)\,[/math], with support such that [math]\mu \lt S \lt \lambda\,[/math] for all realizations
  • There are infinitely many periods [math]t\,[/math], discounted by [math]\delta\,[/math], and the per-period strategy space is [math]\{Enter,Wait\}\,[/math]
  • The game ends when one or more firms [math]Enter\,[/math]
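The stage payoffs above can be sketched in a few lines of Python (the parameter values are illustrative choices satisfying [math]\mu \lt S \lt \lambda\,[/math], not values from the paper):

```python
# Stage payoffs of the entry game, with V normalized to 1.
# lam, mu and S are illustrative values satisfying mu < S < lam
# (they are not taken from the paper).

def payoff(enter_own, enter_rival, S, lam=0.9, mu=0.2):
    """One firm's payoff given both firms' entry decisions this period."""
    if enter_own and enter_rival:
        return mu - S      # duplication: both sink S, each earns mu
    if enter_own:
        return lam - S     # sole entrant earns the monopoly share lam
    return 0.0             # waiting earns nothing this period

S = 0.5
assert payoff(True, False, S) > 0   # a lone entrant profits (lam > S)
assert payoff(True, True, S) < 0    # duplicated entry loses money (mu < S)
```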


The General Result

In equilibrium, firms with lower costs enter (weakly) sooner.

Consider two possible realizations of [math]S_A\,[/math] with [math]S_A^1 \lt S_A^2\,[/math].

Suppose that with cost [math]S_A^1\,[/math] firm [math]A\,[/math] puts positive probability on entering at [math]t_1\,[/math], and that with cost [math]S_A^2\,[/math] it puts positive probability on entering at [math]t_2\,[/math].

Then it must be the case that [math]t_1 \le t_2\,[/math]


To see this, assume that [math]A\,[/math] believes that there is:

  • a hazard-rate probability [math]h(t)\,[/math] that [math]B\,[/math] enters at exactly [math]t\,[/math], conditional on not having entered before
  • a probability [math]\alpha(t)\,[/math] that [math]B\,[/math] has not entered prior to [math]t\,[/math], where we denote [math]a(t) = \delta^t \alpha(t)\,[/math]


Then [math]A\,[/math]'s expected payoff from entering at [math]t\,[/math] is:

[math]a(t)(\lambda - S) + a(t)(h(t)(\mu - \lambda)) = a(t)(\lambda - S - h(t)(\lambda -\mu))\,[/math]


For [math]t_1\,[/math] to be preferred under [math]S_A^1\,[/math], and [math]t_2\,[/math] to be preferred under [math]S_A^2\,[/math], it must be true that:

[math]a(t_1)(\lambda - S_A^1 - h(t_1)(\lambda -\mu)) \ge a(t_2)(\lambda - S_A^1 - h(t_2)(\lambda -\mu))\,[/math]


and

[math]a(t_2)(\lambda - S_A^2 - h(t_2)(\lambda -\mu)) \ge a(t_1)(\lambda - S_A^2 - h(t_1)(\lambda -\mu))\,[/math]


So:

[math]a(t_1)(\lambda - S_A^1 - h(t_1)(\lambda -\mu)) + a(t_2)(\lambda - S_A^2 - h(t_2)(\lambda -\mu)) \ge a(t_2)(\lambda - S_A^1 - h(t_2)(\lambda -\mu)) + a(t_1)(\lambda - S_A^2 - h(t_1)(\lambda -\mu))\,[/math]


[math]\therefore (a(t_1) - a(t_2))(S_A^1 - S_A^2) \le 0\,[/math]


As [math]a(t)\,[/math] is strictly decreasing in [math]t\,[/math], and [math]S_A^1 \lt S_A^2\,[/math] by assumption, this requires [math]a(t_1) \ge a(t_2)\,[/math], and therefore [math]t_1 \le t_2\,[/math].

So low-cost firms enter (weakly) sooner, and if just one firm enters it will be the low-cost firm (though when the low-cost firm enters it is still possible that the high-cost firm 'accidentally' enters as well, depending on whether the parameter values allow the sorting process to be complete).

The paper then sets up the Fundamental Difference Equation: it supposes cutoffs [math]S_1\,[/math], [math]S_2\,[/math], and so forth, such that firms with costs between [math]S_{t-1}\,[/math] and [math]S_t\,[/math] enter in period [math]t\,[/math] (provided no previous entry has occurred). Using the indifference of the cutoff firm between periods [math]t\,[/math] and [math]t+1\,[/math] we have:

[math]\lambda - S_t - h(t)(\lambda - \mu) = \delta(1- h(t))(\lambda - S_t - h(t+1)(\lambda - \mu))\,[/math]


Suppose that the hazard rate is non-increasing. As [math]t \to \infty\,[/math], the cutoff type [math]S_t\,[/math] must go to [math]S^{max}\,[/math] (the upper support of [math]F(\cdot)\,[/math]). The fraction of firms that have entered converges to [math]1\,[/math], but is bounded above by [math]1 - (1-h(1))^t\,[/math]. The proof is in the paper.
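The difference equation can be iterated forward numerically. The sketch below assumes [math]F\,[/math] uniform on [math][0.3, 0.7]\,[/math] and illustrative values of [math]\lambda\,[/math], [math]\mu\,[/math] and [math]\delta\,[/math] (none of these are from the paper); given two consecutive cutoffs, each step recovers [math]h(t+1)\,[/math] from the indifference condition and then the next cutoff from [math]F\,[/math]:

```python
# Forward iteration of the fundamental difference equation, assuming
# F uniform on [0.3, 0.7] and illustrative lam, mu, delta. Given the
# cutoffs S_{t-1} and S_t, the equation pins down h(t+1), and F then
# gives the next cutoff S_{t+1}. (A full equilibrium would also pin
# down the first cutoff S_1 by shooting; here S_1 is just a guess.)

lam, mu, delta = 0.9, 0.1, 0.95
s_min, s_max = 0.3, 0.7                # support of F, inside (mu, lam)

F = lambda s: (s - s_min) / (s_max - s_min)
F_inv = lambda u: s_min + u * (s_max - s_min)

def step(S_prev, S_t):
    """Return (h_t, h_next, S_next) from two consecutive cutoffs."""
    h_t = (F(S_t) - F(S_prev)) / (1.0 - F(S_prev))
    enter_now = lam - S_t - h_t * (lam - mu)
    # Solve  enter_now = delta*(1-h_t)*(lam - S_t - h_next*(lam-mu)):
    h_next = (lam - S_t - enter_now / (delta * (1.0 - h_t))) / (lam - mu)
    S_next = F_inv(F(S_t) + h_next * (1.0 - F(S_t)))
    return h_t, h_next, S_next

h_t, h_next, S_next = step(s_min, 0.45)    # S_1 = 0.45 is a guess
assert 0 < h_next < 1                      # a valid hazard rate
assert S_next > 0.45                       # cutoffs increase over time
```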


Two Types

To make the analysis easy we now consider two types [math]S_L\,[/math] and [math]S_H\,[/math], with [math]p(S=S_L) = q\,[/math], such that:

[math]\mu \lt S_L \lt S_H \lt \lambda\,[/math]


The welfare of the first-best outcome (a single firm enters at [math]t=1\,[/math], the low-cost one whenever possible; the three cases are one low and one high, both low, and both high) is then:

[math]W^* = 1 - (2 \cdot S_L(q(1-q)) + S_L(q)^2 + S_H(1-q)^2) = 1 - (1-q)^2S_H - q(2-q)S_L\,[/math]
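A quick numeric sanity check that the two expressions for [math]W^*\,[/math] agree (the parameter values are illustrative):

```python
# Sanity check that the two expressions for W* agree (illustrative values).
q, S_L, S_H = 0.4, 0.3, 0.6

lhs = 1 - (2 * S_L * q * (1 - q) + S_L * q ** 2 + S_H * (1 - q) ** 2)
rhs = 1 - (1 - q) ** 2 * S_H - q * (2 - q) * S_L
assert abs(lhs - rhs) < 1e-12
```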


We work with the equilibria where low-cost types enter in [math]t=1\,[/math], and high-cost types enter strictly afterwards.

One Bayesian equilibrium is then:

If low-cost, enter immediately. If high-cost, then after t=1 enter
in each period with probability p, provided there has been no prior entry.

The parameterization required for this equilibrium to work is:

[math]\mu \le S_L + \delta(1-q)(S_H -S_L) \le \lambda -q(\lambda - \mu) \le S_H \le \lambda\,[/math]


The High Types

When both firms are high types ([math]S=S_H\,[/math]) they play a mixed-strategy waiting game, each entering with probability [math]p\,[/math] in every period. To solve for [math]p\,[/math] we use an indifference condition (necessary for mixing) on the continuation value [math]v\,[/math]:

[math]v = p(\mu - S) + (1-p)(\lambda - S) = (1-p)\delta v \,[/math]

(The first equality says [math]v\,[/math] equals the expected payoff from entering; the second that it equals the expected payoff from waiting. Since [math](1-p)\delta \lt 1\,[/math], the outer equality forces [math]v = 0\,[/math]. Note that this presentation differs from the paper's, but gives the same [math]v\,[/math] and [math]p\,[/math].)


[math]v= 0 \implies p = \frac{\lambda - S}{\lambda - \mu}\,[/math]
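A one-line check of the mixing probability (illustrative values): at [math]p = (\lambda - S)/(\lambda - \mu)\,[/math] the expected payoff from entering is exactly zero, as [math]v = 0\,[/math] requires:

```python
# At p = (lam - S)/(lam - mu) the expected payoff from entering is zero,
# matching v = 0. Illustrative values with mu < S < lam.
lam, mu, S = 0.9, 0.1, 0.6

p = (lam - S) / (lam - mu)
enter_payoff = p * (mu - S) + (1 - p) * (lam - S)

assert 0 < p < 1
assert abs(enter_payoff) < 1e-12       # indifference between Enter and Wait
```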


To calculate the expected social surplus we need [math]q_t\,[/math], the probability that there is no entry before [math]t\,[/math] and exactly one firm enters at [math]t\,[/math], and [math]r_t\,[/math], the probability that there is no entry before [math]t\,[/math] and both firms enter at [math]t\,[/math]:

[math]q_t = (1-p)^{2(t-1)}\cdot 2p(1-p)\,[/math]


[math]r_t = (1-p)^{2(t-1)}\cdot p^2\,[/math]


Then the social surplus created in this game, [math]G\,[/math], is:

[math]W^G = \sum_{t=1}^{\infty} \delta^{t-1}(q_t(1-S) + r_t(1-2S))\,[/math]


[math]\therefore W^G = (1-S) - \underbrace{\left (1 - \frac{p(2-p)}{1-\delta(1-p)^2}\right)(1-S)}_{\mbox{Delay Loss}} - \underbrace{\frac{p^2}{1-\delta(1-p)^2}S}_{\mbox{Duplication Loss}}\,[/math]
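The closed form can be checked against the series directly (illustrative parameter values, with [math]S\,[/math] playing the role of [math]S_H\,[/math]):

```python
# Check the closed form for W^G against the series sum over q_t and r_t
# (illustrative parameters; S plays the role of S_H).
lam, mu, S, delta = 0.9, 0.1, 0.6, 0.8
p = (lam - S) / (lam - mu)

series = sum(
    delta ** (t - 1) * (1 - p) ** (2 * (t - 1))
    * (2 * p * (1 - p) * (1 - S) + p ** 2 * (1 - 2 * S))
    for t in range(1, 2000)
)
denom = 1 - delta * (1 - p) ** 2
closed = p * (2 - p) / denom * (1 - S) - p ** 2 / denom * S
assert abs(series - closed) < 1e-9
```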


There is an inherent trade-off between delay and duplication - both can be expressed in terms of [math]p\,[/math], or just in terms of each other.

The probability of duplication is:

[math]x = \frac{p^2}{2p(1-p) + p^2} = \frac{p}{2-p}\,[/math]


Whereas the mean delay is:

[math]y = \sum_{t=1}^{\infty} (q_t + r_t)t -1 = \frac{1}{p(2-p)} - 1\,[/math]


So:

[math]y = \frac{(x-1)^2}{4x}\,[/math]
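Both reductions, and the trade-off relation between them, are easy to verify numerically (the value of [math]p\,[/math] is illustrative):

```python
# Duplication probability x and mean delay y as functions of p, and
# the trade-off relation between them (illustrative p).
p = 0.375

x = p / (2 - p)                        # probability of duplication
y = 1 / (p * (2 - p)) - 1              # mean delay in periods

assert abs(y - (1 - x) ** 2 / (4 * x)) < 1e-12
```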


By changing the parameters [math]\lambda\,[/math], [math]\mu\,[/math], or [math]S\,[/math], a planner could make this trade-off, allowing less delay at the expense of more duplication, or vice versa.


The Decentralization Result

The full welfare of decentralization, [math]D\,[/math], is:

[math]W^D = q^2(1-2S_L) + 2q(1-q)(1-S_L) + (1-q)^2\cdot \delta W^G\,[/math]


This can be written as the difference from the first-best:

[math]W^* - W^D = \underbrace{q^2 S_L + (1-q)^2\delta \cdot\frac{p^2}{1-\delta(1-p)^2}S_H}_{\mbox{Duplication Loss}} + \underbrace{(1-q)^2\left(1 - \frac{\delta p(2-p)}{1-\delta(1-p)^2}\right)(1-S_H)}_{\mbox{Delay Loss}}\,[/math]
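The closed form for [math]W^D\,[/math] can be checked by Monte Carlo simulation of the equilibrium play (all parameter values are illustrative, and the RNG seed is fixed for reproducibility):

```python
import random

# Monte Carlo check of W^D: low types enter at t=1; if both firms are
# high types they play the mixed waiting game from t=2 onwards, each
# entering with probability p per period. Illustrative parameters.
lam, mu, delta, q = 0.9, 0.1, 0.8, 0.4
S_L, S_H = 0.3, 0.6
p = (lam - S_H) / (lam - mu)

def play(rng):
    a_low, b_low = rng.random() < q, rng.random() < q
    if a_low and b_low:
        return 1 - 2 * S_L             # duplication at t=1
    if a_low or b_low:
        return 1 - S_L                 # single low-cost entrant at t=1
    t = 2                              # both high: waiting game
    while True:
        n = (rng.random() < p) + (rng.random() < p)
        if n:
            return delta ** (t - 1) * (1 - n * S_H)
        t += 1

rng = random.Random(0)
mc = sum(play(rng) for _ in range(200_000)) / 200_000

denom = 1 - delta * (1 - p) ** 2
WG = p * (2 - p) / denom * (1 - S_H) - p ** 2 / denom * S_H
WD = q ** 2 * (1 - 2 * S_L) + 2 * q * (1 - q) * (1 - S_L) + (1 - q) ** 2 * delta * WG
assert abs(mc - WD) < 0.01
```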


Central Planning (with Incomplete Information)

In this model we assume that the central planner has no information whatsoever about cost types and cannot acquire it (for example through a mechanism). However, for some parameter values even a completely ignorant social planner will outperform the decentralized (market) solution, as he can implement an outcome with no delay!

Using a random pick the social planner can get welfare [math]W^R\,[/math]:

[math]W^R = 1 -(qS_L + (1-q)S_H)\,[/math]


Which gives:

[math]W^* - W^R = q(1-q)(S_H - S_L)\,[/math]
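A quick check of the gap between first best and the random pick (illustrative values):

```python
# Random-pick welfare versus first best (illustrative values).
q, S_L, S_H = 0.4, 0.3, 0.6

W_star = 1 - (1 - q) ** 2 * S_H - q * (2 - q) * S_L
W_R = 1 - (q * S_L + (1 - q) * S_H)

assert abs((W_star - W_R) - q * (1 - q) * (S_H - S_L)) < 1e-12
```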


Comparing Results

The comparison is generally fairly complex, but can be done quickly for the two cases where [math]\delta = 0\,[/math] and [math]\delta = 1\,[/math].


For urgent problems, when [math]\delta = 0\,[/math], decentralization gives

[math]W^D (\delta=0) = q^2(1-2S_L) + 2q(1-q)(1-S_L)\,[/math]


or

[math]W^* - W^D (\delta=0) = q^2S_L + (1-q)^2(1-S_H)\,[/math]


Then decentralization is better than random choice iff:

[math]W^* - W^D (\delta=0) \lt W^* - W^R\,[/math]


[math]q^2S_L + (1-q)^2(1-S_H) \lt q(1-q)(S_H - S_L)\,[/math]


This is less likely to hold when [math]S_H\,[/math] and [math]S_L\,[/math] are close (private information is almost unimportant), and more likely to hold when [math]1-S_H\,[/math] is small (the surplus forgone through delay is small - note this is the opposite of what the paper says) and when [math]S_L\,[/math] is large (duplication is costly).
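For a concrete (illustrative) parameterization, the [math]\delta = 0\,[/math] comparison is immediate; here [math]1-S_H\,[/math] is sizable and the random pick wins:

```python
# An illustrative parameterization of the delta = 0 comparison: here
# 1 - S_H is sizable, so the random pick beats decentralization.
q, S_L, S_H = 0.4, 0.3, 0.6

loss_D = q ** 2 * S_L + (1 - q) ** 2 * (1 - S_H)   # W* - W^D at delta = 0
loss_R = q * (1 - q) * (S_H - S_L)                 # W* - W^R

assert loss_D > loss_R            # decentralization is worse when urgent
```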


For non-urgent problems, when [math]\delta = 1\,[/math], the delay loss vanishes (since [math]1-(1-p)^2 = p(2-p)\,[/math]) and decentralization gives:

[math]W^* - W^D (\delta=1) = q^2 S_L + (1-q)^2 \cdot\frac{p^2}{1-(1-p)^2}S_H = q^2 S_L + (1-q)^2 \cdot\frac{p}{2-p}S_H\,[/math]


Substituting in for [math]p\,[/math] we get:

[math]W^* - W^D (\delta=1) = q^2 S_L + (1-q)^2 \cdot\frac{\lambda - S_H}{\lambda + S_H - 2 \mu}S_H\,[/math]


Then decentralization is better than random choice iff:

[math]W^* - W^D (\delta=1) \lt W^* - W^R\,[/math]


[math]q^2 S_L + (1-q)^2 \cdot\frac{\lambda - S_H}{\lambda + S_H - 2 \mu}S_H \lt q(1-q)(S_H - S_L)\,[/math]


As [math]S_H \gt \mu\,[/math], the fraction is less than one, and the whole duplication term is small whenever [math]S_H\,[/math] is close to [math]\lambda\,[/math]; the inequality then holds provided private information matters (i.e. [math]S_H - S_L\,[/math] is not too small) - making decentralization better in non-urgent situations.
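An illustrative [math]\delta = 1\,[/math] parameterization with a large cost spread (so private information matters a lot) shows decentralization beating the random pick; [math]W^G\,[/math] below is the closed form from above evaluated at [math]\delta = 1\,[/math]:

```python
# An illustrative delta = 1 parameterization with a large cost spread:
# private information matters a lot, so decentralization beats the
# random pick. WG is the closed form for W^G evaluated at delta = 1.
lam, mu, q = 0.9, 0.1, 0.5
S_L, S_H = 0.15, 0.85
p = (lam - S_H) / (lam - mu)

W_star = 1 - (1 - q) ** 2 * S_H - q * (2 - q) * S_L
WG = (1 - S_H) - p / (2 - p) * S_H     # W^G at delta = 1
WD = q ** 2 * (1 - 2 * S_L) + 2 * q * (1 - q) * (1 - S_L) + (1 - q) ** 2 * WG
loss_R = q * (1 - q) * (S_H - S_L)     # W* - W^R

assert W_star - WD < loss_R       # decentralization wins when non-urgent
```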


Large Numbers

The paper contains a small section on what happens when there are many firms, all drawn from the same cost distribution.

Suppose that:

  • There are [math]n\,[/math] potential entrants
  • Each with sunk costs [math]S\,[/math] drawn from [math]F(\cdot)\,[/math]
  • The gross benefit to a fraction [math]f\,[/math] of entrants is [math]b(f)\,[/math], where [math]b\,[/math] is continuous and decreasing, with [math]b(0) \ge S^{max}\,[/math] and [math]b(1) \le S^{min}\,[/math]


Because [math]b(F(\cdot))\,[/math] is decreasing there exists a unique cutoff [math]\overline{S}\,[/math] such that [math]b(F(\overline{S})) = \overline{S}\,[/math], so that in the limit as [math]n \to \infty\,[/math], all firms with [math]S \lt \overline{S}\,[/math] enter in the first period and there is no further entry. The proof of this is in the paper.
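The cutoff [math]\overline{S}\,[/math] is a fixed point and can be found by bisection; the sketch below assumes a uniform [math]F\,[/math] and a linear, decreasing [math]b\,[/math] (both illustrative choices, not from the paper):

```python
# Solving b(F(S)) = S by bisection, assuming F uniform on [0.2, 0.8]
# and a linear decreasing b with b(0) >= S_max and b(1) <= S_min
# (both illustrative choices).
s_min, s_max = 0.2, 0.8

F = lambda s: (s - s_min) / (s_max - s_min)
b = lambda f: 0.8 - 0.6 * f            # b(0) = 0.8, b(1) = 0.2

lo, hi = s_min, s_max                  # g(S) = b(F(S)) - S changes sign here
for _ in range(100):
    mid = (lo + hi) / 2
    if b(F(mid)) - mid > 0:            # still below the fixed point
        lo = mid
    else:
        hi = mid
S_bar = (lo + hi) / 2

assert abs(b(F(S_bar)) - S_bar) < 1e-9     # the unique cutoff
```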