Friday, February 26, 2010

In The Papers: Open Source Economics

The economics of open source software is a topic of interest to me.  Without a sound understanding of economics, you might be tempted to believe that collaborative software, in which the end product is given away for free, simply cannot exist in a market-based society and must instead be a proto-socialist endeavor.  Eric S. Raymond wrote a very powerful book, drawing on Hayekian themes, to counter this notion, and there is a basic explanation of how it can be in the self-interest of developers to contribute to open-source software (here’s the trick:  think about the services, not the software).

Michael Schwarz and Yuri Takhteyev take a slightly different tack in Half a Century of Public Software Institutions:  Open Source as a Solution to Hold-Up Problem.

Abstract:

We argue that the intrinsic inefficiency of proprietary software has historically created a space for alternative institutions that provide software as a public good. We discuss several sources of such inefficiency, focusing on one that has not been described in the literature: the underinvestment due to fear of holdup. An inefficient holdup occurs when a user of software must make complementary investments, when the return on such investments depends on future cooperation of the software vendor, and when contracting about a future relationship with the software vendor is not feasible. We also consider how the nature of the production function of software makes software cheaper to develop when the code is open to the end users. Our framework explains why open source dominates certain sectors of the software industry (e.g., the top ten programming languages all have an open source implementation), while being almost non-existent in some other sectors (none of the top ten computer games are open source). We then use our discussion of efficiency to examine the history of institutions for provision of public software from the early collaborative projects of the 1950s to the modern “open source” software institutions. We look at how such institutions have created a sustainable coalition for provision of software as a public good by organizing diverse individual incentives, both altruistic and profit-seeking, providing open source products of tremendous commercial importance, which have come to dominate certain segments of the software industry.

One of the things that the authors point out is that “open source software” is a lot broader than we might imagine.  The average person who has heard the term thinks Linux.  What they don’t usually think of is Apache, the server of choice for more than half of the internet.  They don’t think of BIND or one of the thousands of other vital tools for big corporations, even big corporations which are not themselves open-source organizations (Yahoo!, for example, supports open source development but doesn’t release many such tools).  Another point the authors bring up is that “employees of just five companies (Red Hat, IBM, Novell, Intel and Oracle) jointly contributed 32% of the changes for a recent release of the Linux kernel” (3).  The players have changed over the past five decades, and so have their motives, so a simple understanding of current motives (like the one I alluded to above) doesn’t do enough.  As they point out, even IBM has changed:  it was a big open source company in the 1950s and 1960s, but for a different reason.

The authors’ theory is that “proprietary software causes underinvestment in complementary products and technologies due to the fear of hold up.”  They use this to explain “why open source software dominates some sectors of the industry, while playing [a] negligible role in others” (3).

What they mean by a “hold up” scenario is as follows:  when you purchase a piece of software from Company X, you may buy it for your own use or to further your business.  When companies buy business software, there is the fear of becoming locked in to Company X, unable to go to a competitor.  At that point, Company X basically has a monopoly, in the sense that the cost of switching to Company Y’s offerings is just too high.  Your company has, by then, built a lot of processes and maybe some additional software around Company X’s product, and switching it all over to what Company Y offers would simply be too expensive to consider.  But your company’s business needs will likely change over time, so you need Company X to remain responsive to you.  Unfortunately, “the exact nature and cost of such future modifications often cannot be foreseen ex ante.  Their price and quality must therefore be negotiated ex post” (5).  If you are concerned that Company X will screw you over later, once you need changes and are locked in, you either will not be willing to pay as much for the product (or for your own complementary investments), or you will simply forgo the product.  In either case there is a net loss:  the product and your modifications would have let you serve your customers more easily, but that gain is not worth the anticipated hold-up price.  Instead, you have in-house developers write the software, which is what economists call vertical integration.
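
To make the arithmetic behind that fear concrete, here is a quick back-of-the-envelope sketch in Python.  The function and the numbers are my own toy illustration, not the authors’; the point is simply that an anticipated ex post hold-up price gets subtracted from the expected return and can flip the investment decision from “go” to “don’t bother.”

# Toy model of the hold-up decision (illustrative numbers, not from the paper).
# A firm weighs a complementary investment built around Company X's software
# against the price Company X could extract for future modifications once the
# firm is locked in.

def worth_investing(investment_value, investment_cost, expected_holdup_price):
    """Return the net payoff and whether the complementary investment still
    pays after accounting for the anticipated ex post hold-up price."""
    net_payoff = investment_value - investment_cost - expected_holdup_price
    return net_payoff, net_payoff > 0

# No fear of hold-up: the investment creates 60 in surplus, so it gets made.
print(worth_investing(investment_value=100, investment_cost=40, expected_holdup_price=0))

# Locked-in vendor expected to charge dearly for future changes: the same
# investment now looks like a loser, so it never gets made, and the 60 in
# potential surplus simply disappears.
print(worth_investing(investment_value=100, investment_cost=40, expected_holdup_price=70))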

There are a few other justifications for considering software purchases a potential hold-up problem, and the authors give some examples of these.  After that, they offer their way around the problem:  make the source code available to end users.  By doing that, you ensure that even if the vendor changes the way things work, end users can still make any modifications they require.  Maybe they want Module 49 to do something totally different; they can build their own custom version of the code and make that happen.  It also ensures that Company X cannot exploit its relationship with your firm later on.

So, given this, why are closed, binary-only packages the default for pretty much all software?  Because there are considerations that outweigh the problems listed above.  There is a free-rider problem here:  file sharing.  If I get the source code to a game, I can distribute it to others more easily and allow them to obtain the software without paying for it.  I can do the same with binary software, but there are some protections (licensing provisions and the like) which make it a bit harder.  When, then, should we see binary packages versus open source?  The authors “would especially expect this to be the case for software that does not require large complementary investments, is unlikely to need modifications, and is offered to users that have no capacity to modify software even when the source code is offered to them.  Computer games offer a quintessential example of such software:  they are typically used for a limited period of time, rarely require substantial game-specific complementary investments, offer limited opportunities for useful modifications, and are mostly offered to consumers with no programming skills” (9).  In contrast, web servers, database servers, and the like require significant complementary investments, and as a result are more likely to see the open source solution to the hold-up problem.

The rest of the article is an interesting discussion of various solutions over the past 50 years, from IBM’s SHARE association (a number of IBM clients providing source code for various applications to each other) to ARPANET, and from AT&T (and BSD) to Netscape and GNU.  It’s an entertaining history of some of the economic motives behind business and legal decisions, and it’s worth a read.

[Via http://36chambers.wordpress.com]
