The Agoric Papers


These three papers by Mark S. Miller and K. Eric Drexler appeared in The Ecology of Computation, Bernardo Huberman (ed.), Elsevier Science Publishers/North-Holland, 1988.



Markets and Computation: Agoric Open Systems

Mark S. Miller - Xerox Palo Alto Research Center, 3333 Coyote Hill Road, Palo Alto, CA 94304

K. Eric Drexler - MIT Artificial Intelligence Laboratory, 545 Technology Square, Cambridge, MA 02139

1. Introduction

Like all systems involving goals, resources, and actions, computation can be viewed in economic terms. Computer science has moved from centralized toward increasingly decentralized models of control and action; use of market mechanisms would be a natural extension of this development. The ability of trade and price mechanisms to combine local decisions by diverse parties into globally effective behavior suggests their value for organizing computation in large systems.

This paper examines markets as a model for computation and proposes a framework-agoric systems-for applying the power of market mechanisms to the software domain. It then explores the consequences of this model at a variety of levels. Initial market strategies are outlined which, if used by objects locally, lead to distributed resource allocation algorithms that encourage adaptive modification based on local knowledge. If used as the basis for large, distributed systems, open to the human market, agoric systems can serve as a software publishing and distribution marketplace providing strong incentives for the development of reusable software components. It is argued that such a system should give rise to increasingly intelligent behavior as an emergent property of interactions among software entities and people.

2. Overview of later sections

Section 3
Computation and economic order. Basic characteristics of human markets illuminate the expected nature of computational markets. This section describes some of these characteristics and sketches some of the special issues raised in the context of computation.
Section 4
Foundations. The foundations needed for agoric open systems may be summarized as support for the encapsulation and communication of information, access, and resources. This section describes these foundations and their role in computational markets.
Section 5
Agents and strategies. The foundations of computational markets handle neither resource management (such as processor scheduling and garbage collection) nor market transactions. This section describes the idea of business agents and their use both in replacing centralized resource-allocation algorithms (discussed further by [III]) and in managing complex market behavior.
Section 6
Agoric systems in the large. Large, evolved agoric systems are expected to have valuable emergent properties. This section describes how they can provide a more productive software market in human society-opening major new business opportunities-and how they can further the goal of artificial intelligence.
Section 7
The absence of agoric systems. If market-based computation is a good idea, why has it not yet been developed? This section attempts to show why the current absence of agoric systems is consistent with their being a good idea.
Appendix I
Issues, levels, and scale. Agoric open systems will be large and complex, spanning many levels of scale and complexity. This section surveys how issues such as security, reasoning, and trust manifest themselves at different levels of agoric systems.
Appendix II
Comparison with other systems. Here are reviewed works ranging from those that draw analogies between human society and computational systems to those that explore adaptive computation from an economic point of view.

3. Computation and economic order

The basic features of computational markets are best understood by comparing them with human markets. Many important tradeoffs, such as those between market mechanisms and central planning, have already been examined in the context of human society.

3.1. Market organization

Consider the awesome dimensions of the American community. . .a labor force of 80,000,000. . .11,000,000 business units. . . .Who designed and who now directs this vast production-and-distribution machine? Surely, to solve the intricate problems of resource allocation in a vast economy, central guidance is required. . . .But American economic activity is not directed, planned, or controlled by any economic czar-governmental or private.

"A. A. Alchian and W. R. Allen, 1968 [5]

Two extreme forms of organization are the command economy and the market economy. The former attempts to make economic tradeoffs in a rational, centrally-formulated plan, and to implement that plan through detailed central direction of productive activity. The latter allows economic tradeoffs to be made by local decisionmakers, guided by price signals and constrained by general rules.

The command model has frequently been considered more "rational", since it involves the visible application of reason to the economic problem as a whole. Alternatives have frequently been considered irrational and an invitation to chaos. This viewpoint, however, smacks of the creationist fallacy-it assumes that a coherent result requires a guiding plan. In actuality, decentralized planning is potentially more rational, since it involves more minds taking into account more total information. Further, economic theory shows how coherent, efficient, global results routinely emerge from local market interactions. (The nature and function of prices and of market mechanisms are a notorious source of lay confusion-just as Aristotle threw rocks and yet misunderstood mechanics, so people trade and yet misunderstand markets. Alchian and Allen [5] give a good grounding in the basic concepts and results of economic analysis.)

Should one expect markets to be applicable to processor time, memory space, and computational services inside computers? Steel mills, farms, insurance companies, software firms-even vending machines-all provide their goods and services in a market context; a mechanism that spans so wide a range may well be stretched further.

As will be shown, however, a range of choices lies between pure central planning and the universal fine-grained application of market mechanisms. Computational markets, like human markets, will consist of islands of central direction in a sea of trade.

3.2. Encapsulation and property

The rationale of securing to each individual a known range within which he can decide on his actions is to enable him to make the fullest use of his knowledge. . . .The law tells him what facts he may count on and thereby extends the range within which he can predict the consequences of his actions.

"F. A. Hayek, 1960 [6]

. . .the law ought always to trust people with the care of their own interest, as in their local situations they must generally be able to judge better of it than the legislator can do.

"A. Smith, 1776 [7]

Computer science began, naturally enough, with central planning applied to small, manageable machines. The first programs on the first computers were like Robinson Crusoe on an empty island. They had few problems of coordination, and the complexity of their affairs could (at first) be managed by a single mind.

As the complexity of software grew, programs with multiple subroutines became the equivalent of autocratic households or bureaucracies with extensive division of labor. Increasingly, however, bugs would appear because the right hand would not know what the left hand had planned, and so would modify shared data in unexpected ways.

To combat this problem, modern object-oriented programming (to paraphrase) "secures to each object a known space within which it can decide on its actions, enabling the programmer to make the fullest use of his knowledge. Encapsulation tells him what facts he may count on and thereby extends the range within which he can predict the consequences of his actions". In short, motivated by the need for decentralized planning and division of labor, computer science has reinvented the notion of property rights.

Central direction of data representation and processing has been replaced by decentralized mechanisms, but central direction of resource allocation remains. Rather than "trusting objects with the care of their own interest, in their local situations", the systems programmer attempts to legislate a general solution. These general solutions, however, provide no way to make tradeoffs that take account of the particular merits of particular activities at particular times.

3.3. Tradeoffs through trade

. . .a capacity to find out particular circumstances. . .becomes effective only if possessors of this knowledge are informed by the market which kinds of things or services are wanted, and how urgently they are wanted.

"F. A. Hayek, 1978 [8]

. . .the spontaneous interaction of a number of people, each possessing only bits of knowledge, brings about a state of affairs in which prices correspond to costs, etc., and which could be brought about by deliberate direction only by somebody who possessed the combined knowledge of all those individuals. . . .the empirical observation that prices do tend to correspond to costs was the beginning of our science.

- F. A. Hayek, 1937 [9]

Trusting objects with decisions regarding resource tradeoffs will make sense only if they are led toward decisions that serve the general interest-there is no moral argument for ensuring the freedom, dignity, and autonomy of simple programs. Properly-functioning price mechanisms can provide the needed incentives.

The cost of consuming a resource is an opportunity cost-the cost of giving up alternative uses. In a market full of entities attempting to produce products that will sell for more than the cost of the needed inputs, economic theory indicates that prices typically reflect these costs.

Consider a producer, such as an object that produces services. The price of an input shows how greatly it is wanted by the rest of the system; high input prices (costs) will discourage low-value uses. The price of an output likewise shows how greatly it is wanted by the rest of the system; high output prices will encourage production. To increase (rather than destroy) value as judged by the rest of the system as a whole, a producer need only ensure that the price of its product exceeds the prices (costs) of the inputs consumed. This simple, local decision rule gains its power from the ability of market prices to summarize global information about relative values.

As Nobel Laureate F. A. Hayek observes, ". . .the whole reason for employing the price mechanism is to tell individuals that what they are doing, or can do, has for some reason for which they are not responsible become less or more demanded. . . .The term 'incentives' is often used in this connection with somewhat misleading connotations, as if the main problem were to induce people to exert themselves sufficiently. However, the chief guidance which prices offer is not so much how to act, but what to do."[8] This observation clearly applies to the idea of providing incentives for software; the goal is not to make software sweat, but to guide it in making choices that serve the general interest.

These choices amount to tradeoffs. With finite processing and memory resources, taking one action always precludes taking some other action. With prices and trade, objects will have an incentive to relinquish resources when (and only when) doing so promises to increase their net revenue. By trading to increase their revenue, they will make tradeoffs that allocate resources to higher-value uses.

3.4. Spontaneous order

Modern civilization has given man undreamt of powers largely because, without understanding it, he has developed methods of utilizing more knowledge and resources than any one mind is aware of.

"F. A. Hayek, 1978 [10]

Will prices, trade, and decentralized tradeoffs be valuable in computation? This depends in part on whether central planning mechanisms will be able to cope with tomorrow's computer systems.

Systems are becoming available having performance tradeoffs that are nightmarishly complex compared to those of a von Neumann machine running a single program. The world is becoming populated with hypercubes, Connection Machines, shared-memory multiprocessors, special-purpose systolic arrays, vectorizing supercomputers, neural-net simulators, and millions of personal computers. More and more, these are being linked by local area networks, satellites, phones, packet radio, optical fiber, and people carrying floppy disks. Machines in the personal-computer price range will become powerful multi-processor systems with diverse hardware and software linked to a larger world of even greater diversity. Later, with the eventual development of molecular machines able to direct molecular assembly (the basis of nanotechnology) [11], we can anticipate the development of desktop machines with a computational power greater than that of a billion of today's mainframe computers [12,13].

One might try to assign machine resources to tasks through an operating system using fixed, general rules, but in large systems with heterogeneous hardware and software, this seems doomed to gross inefficiency. Knowledge of tradeoffs and priorities will be distributed among thousands of programmers, and this knowledge will best be embodied in their programs. Computers are becoming too complex for central planning, with its bottlenecks in computation and knowledge acquisition. It seems that we need to apply "methods of utilizing more knowledge and resources than any one mind is aware of." These methods can yield a productive spontaneous order through decentralized planning-through the application of local knowledge and local computational resources to local decisions, guided by non-local market prices. Instead of designing rules that embody fixed decisions, we need to design rules that enable flexible decisionmaking.

Markets are a form of "evolutionary ecosystem" [I], and such systems can be powerful generators of spontaneous order: consider the intricate, undesigned order of the rain forest or the computer industry. The use of market mechanisms can yield orderly systems beyond the ability of any individual to plan, implement, or understand. What is more, the shaping force of consumer choice can make computational market ecosystems serve human purposes, potentially better than anything programmers could plan or understand. This increase in our power to utilize knowledge and resources may prove essential, if we are to harness the power of large computational systems.

3.5. Command and price mechanisms

An economist thinks of the economic system as being co-ordinated by the price mechanism. . . .Within a firm, the description does not fit at all. . . .It is clear that these are alternative methods of co-ordinating production. . . .if production is regulated by price movements. . .why is there any organization?

"R. H. Coase, 1937 [14]

Coase asks, "Why are there firms?". Firms are economic organizations that typically make little use of market mechanisms internally. If reliance on market forces always produced more efficient use of resources, one would expect that systems of individuals interacting as free-lance traders would consistently out-compete firms, which therefore would not exist. In reality, however, firms are viable; analogous results seem likely in computational markets.

Market transactions typically incur higher overhead costs than do transactions inside firms [14,15]. These transaction costs (in both human and computational markets) are associated with advertising, negotiation, accounting, and problems of establishing adequate trust-typically, inside a firm, matching consumers with producers does not require advertising, instructions do not require negotiation, movement of goods does not require invoices and funds transfer, and coworkers share an interest in their joint success. Firms lower the high overhead cost of market transactions among numerous small entities by bundling them together into fewer, larger entities. Not only does this save costs on what are now internal transactions, but by creating larger entities, it raises the size of typical transactions, making relatively fixed per-transaction overhead costs a proportionally smaller burden. (For small enough transactions, even the simplest accounting would be too expensive.)
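The bundling arithmetic is simple. As a minimal numeric sketch (our figures, not the paper's), a fixed per-transaction overhead shrinks as a fraction of larger bundled transactions:

```python
# Illustrative figures only: a fixed per-transaction overhead weighs less,
# in percentage terms, on a larger bundled transaction.
def overhead_fraction(fixed_cost_per_txn, txn_value):
    return fixed_cost_per_txn / txn_value

small = overhead_fraction(0.05, 1.0)     # 5.0% of each 1-unit trade
bundled = overhead_fraction(0.05, 10.0)  # 0.5% of one 10-unit bundle
```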

Similar considerations hold among computational objects. For small enough objects and transactions, the cost of accounting and negotiations will overwhelm any advantages that may result from making flexible, price-sensitive tradeoffs. For large enough objects and transactions, however, these overhead costs will be small in percentage terms; the benefits of market mechanisms may then be worth the cost. At an intermediate scale, negotiation will be too expensive, but accounting will help guide planning. These scale effects will encourage the aggregation of small, simple objects into "firms" with low-overhead rules for division of income among their participants.

Size thresholds for accounting and negotiations will vary with situations and implementation techniques [16]. Market competition will tune these thresholds, providing incentives to choose the most efficient scale on which to apply central-planning methods to computation.

3.6. Can market objects be programmed?

The objects participating in computational markets must initially be much simpler than the human participants in human markets. Can they participate successfully? Human markets are based on intelligent systems, but this does not show the impossibility of basing markets on simple objects-it merely shows that the argument for agoric systems cannot rest on analogy alone. Explicit attention must be paid to the question of the minimal competence and complexity necessary for an object to participate in a market system. (These issues provide another motivation to form computational "firms" and to open computational markets to human participation.)

Experimental double-auction markets on a laboratory scale [17] give some indication of the requirements for market participation. Though involving human beings, some of these experiments have excluded most of the range of human abilities: they have excluded use of natural language (indeed, of any communications channel richer than simple bids and acceptances) and they have replaced goods with abstract tokens, excluding any cultural or historic information about their value. The participants in these markets have performed no sophisticated calculations and have lacked any knowledge of economic theory or of other players' preferences. Yet in this informationally-impoverished environment, these markets rapidly converge to the prices considered optimal by economic theory. Spencer Star [18] has successfully run double-auction markets among software entities employing simple decision procedures, and has achieved comparable efficiency.

Another reason for confidence in the applicability of market mechanisms to computation is the existence of primitive market mechanisms (outlined in this paper and presented in [III]) able to cope with such recognized software problems as garbage collection and processor scheduling. With evidence for the workability of market mechanisms both at this low level and at the sophisticated level of human society, there is reason to expect them to be workable at intermediate levels of sophistication as well.

3.7. Complexity and levels

Large computational ecosystems linked to the human market will have many parts, many aspects, many levels, and great complexity. Failure to recognize the differences among these levels will open many pitfalls. The field of biology suggests how to approach thinking about such systems.

Biological ecosystems obey physical law, but to understand them as ecosystems requires abstractions different from those used in physics. The existence of physics, chemistry, cell biology, physiology, and ecology as separate fields indicates that the concepts needed for understanding biological systems are naturally grouped according to the scale and complexity of phenomena to which they apply. Such a grouping may be called a level. Some issues arise repeatedly at different levels. For example, cells, organs, organisms, and hives all expend effort to maintain a boundary between their internal and external environments, and to bring only selected things across that boundary.

The concepts needed for understanding agoric open systems may likewise be grouped according to different levels, ranging from computational foundations through increasingly complex objects to market systems as a whole. As in biology, there are issues which appear in some form at all levels. Appendix I examines some of these issues, including security, compatibility, degrees of trust, reasoning, and coordination. In considering these issues in computational markets, it will be important to avoid misapplying concepts from one level to a very different level-that is, to avoid the equivalent of trying to analyze biological ecodynamics in terms of conservation of momentum.

The next three sections of this paper examine computational markets at successively higher levels, examining first foundations, then decision-making entities, and finally the emergent properties of large systems.

4. Foundations

Computation takes place in a context that determines what sorts of events can and cannot occur; this context can be viewed as the foundation of the rest of the system. Computational markets will require foundations that permit and forbid the right sorts of events. To simplify this discussion, the following explores foundations that provide for a basic uniformity in the nature of objects and their interactions despite differences in complexity and scale. In real systems, uniform foundations will ease the process of changing scale-dependent decisions and will make possible a unified set of conceptual and software tools spanning different scales.

It should be emphasized, however, that implementation of an agoric system will not demand adoption of a standard programming language. So long as certain constraints are met at the interfaces between objects coded by different parties, the language used inside an object can be freely chosen. The necessary constraints can be imposed by either the language or the operating system.

Computational foundations are frequently expressed in the form of programming language or operating system designs. Programming languages have evolved chiefly to provide abstractions for organizing computation on a small scale-the computation occurring inside a single machine and serving a single user. This has unfortunately led many programming language designers to make decisions (such as providing for global variables or access to arbitrary memory addresses) that make these languages unsuitable for organizing computation on a very large scale. The Actor languages, Argus, the concurrent logic programming languages (such as FCP), and the Mach operating system are examples of systems which have been designed to be extensible to large, open systems. These are covered in this book respectively in [II], [III], [IV], [V], and [VI]. All these projects have arrived at broadly similar models of computation from different directions, suggesting that their common elements will be of general value. This section briefly outlines some of the properties they share-properties which seem important for the implementation of computational markets.

4.1. Information and access

As indicated in Figure 2, the systems capable of supporting open computation all share support for the encapsulation and communication of information and access. Communication of information is fundamental to computation that involves more than a single object. Encapsulation of information involves separating internal state and implementation from external behavior, preventing one object from examining or tampering with the contents of another.

In conventional practice, encapsulation of information increases modularity and conceptual clarity during design, a feature of considerable value. In agoric systems, though, secure encapsulation will be essential during operation. Without security against examination, theft of proprietary information would be rampant, and the rewards for the creation of valuable code and information would be reduced or destroyed. Without security against tampering, objects could not trust each other's future behavior, or even their own. Encapsulation provides a sphere within which an object may act with complete control and predictability.

Encapsulation and communication of access-capability security-ensures that the ability to communicate with an object can only be obtained in certain ways, such as through deliberate communication. With capability security, object A can get access to object B only by:

  • (1) being born with it, when object A's creator already has that access;
  • (2) receiving it in a message (from an object that already has that access); or
  • (3) being the creator of object B.

Capability security is a common foundation for protection in operating systems. It appears to be a flexible and general mechanism able to support a wide variety of policies for providing access protection. In an open system without capability security, every object would have to verify the nature and legitimacy of every message it received, imposing unacceptable overhead on small, simple objects. With capability security, simple objects can "assume" that their messages come from legitimate sources, because their creators can implement policies that limit access to trusted parties.
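As an illustrative sketch of the three acquisition rules listed above (a toy model in Python; the class and method names are our inventions, not constructs from the paper):

```python
class Capability:
    """An unforgeable reference; holding it is the access right."""
    def __init__(self, target):
        self._target = target

    def send(self, message, *caps):
        # Only holders of the capability can invoke the target.
        return self._target.receive(message, *caps)

class Obj:
    def __init__(self, endowed=()):
        # Rule 1: born with the capabilities its creator chose to endow.
        self.caps = list(endowed)

    def receive(self, message, *caps):
        # Rule 2: further capabilities arrive only inside messages.
        self.caps.extend(c for c in caps if isinstance(c, Capability))

    def create(self, endowed=()):
        # Rule 3: a creator holds a capability to the object it creates.
        child = Obj(endowed)
        cap = Capability(child)
        self.caps.append(cap)
        return cap
```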

Together, the above properties yield security while preserving flexibility. Despite the Turing-equivalence of most programming languages, they can nevertheless differ formally and absolutely in their ability to provide for security [25]. How can this be, if one can write an interpreter for a secure language in an insecure one?

Turing-equivalence describes the abilities of a system, but security rests on inabilities-on the inability to violate certain rules. Adding an interpreter on top of a system cannot subtract abilities from the system itself (even if the interpreted language consists of nothing but inabilities, as can easily be arranged). Thus, adding interpreters cannot establish the inabilities needed for security.

The question is not "what functions can be computed?", but "given that I am a computational object, what is my relationship to an already populated computational environment?". Let us call a set of computational objects coded in an insecure programming language "reference level objects", and those which exist on top of a reference-level interpreter "interpreted objects". If the interpreter implements a secure language, then the interpreted objects are protected from each other. Reference level objects, however, can simply ignore the interpreter and wreak havoc on the interpreted objects.

4.2. Ownership and trade

As software systems have evolved toward greater modularity, encapsulation of information and access have become more clean, uniform, and reliable. As has been discussed, encapsulation in software serves the same crucial function as property rights in human affairs: it establishes protected spheres in which entities can plan the use of their resources free of interference from unpredictable external influences. This enables entities to plan and act despite the limited, local nature of most knowledge; it thus permits more effective use of divided knowledge, aiding the division of labor. The value of protected spheres and local knowledge has thus far been the sole motivation for giving software modules "property rights" through encapsulation.

In economic systems, property rights also enable economic entities to accumulate and control the results of their efforts, providing the basis for an incentive system having the desirable evolutionary properties outlined in [I]. In agoric systems, encapsulation will begin to serve this function as well.

Agoric systems also require the encapsulation and communication of computational resources, such as a memory block or a processor time slice. This prevents the evolution of parasitic objects [I], confines the costs of inefficiency to inefficient objects and their customers, and (in suitable implementations) makes performance information available locally.

Encapsulation and communication of resources correspond to ownership and voluntary transfer, the basis of trade.

A familiar systems programming construct which violates encapsulation of resources is the round-robin scheduler. In such a scheduler, the amount of processing power allocated to a process depends simply on the number of other processes. The processing power allocated to a given process will be reduced whenever some other process decides to spawn yet more processes. Under a round-robin scheduler, the processor is treated as a commons; given a diversity of interests, the usual tragedy is to be expected [26].
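A minimal sketch of this commons effect (our illustration): under round-robin, each process's share is 1/N, so any process can dilute all the others simply by spawning.

```python
def round_robin_share(num_processes):
    # The scheduler treats the processor as a commons: equal slices for all.
    return 1.0 / num_processes

share = round_robin_share(4)    # each of 4 processes gets 25%
# One process spawns four more; every other process's share falls
# to 12.5%, at no cost to the spawner.
diluted = round_robin_share(8)
```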

Artsy's paper on "The Design of Fully Open Computing Systems" (FOCS) [22] discusses an approach for an operating system design having the desirable properties specified above. Artsy's use of the term "fully open computing systems" corresponds to what would here be termed "extreme separation of mechanism and policy", where the mechanism is the support of protected transfer of ownership and the verification of ownership on access. All other resource allocation is then provided as user-level policy. Thus, schedulers and memory allocators are completely outside the secure operating system kernel and operate via an ownership-and-trade model. One can, for example, own and trade time-slices of a particular processor. Scheduling is performed at the user level by exchanging such commodities.

Starting from direct ownership of physical computational resources, more abstract models of ownership can be built. For example, a deadline scheduler can be viewed as follows: When a task is to be scheduled in a hard real-time application (i.e., one that must meet real-time deadlines), it should be known beforehand how long it will take and by what time it must be done. When a process wishes to insure that it will be able to schedule a set of such tasks, it can purchase "abstract future time slices"-not specific time slices, but rights to a time slice of a certain duration within a certain period. Since this gives the seller of time slices greater flexibility with respect to other clients, such time slices should cost less than concrete ones. This is like a futures market, but with guaranteed availability-an honest seller of time slices will not obligate himself to sell time slices he may not be able to get. (See also [27].)
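A minimal sketch of a seller of abstract future time slices (our assumptions: a single scheduling period and a fixed discount relative to concrete slices):

```python
from dataclasses import dataclass

@dataclass
class AbstractSlice:
    duration_ms: int    # guaranteed slice length
    period_end_ms: int  # the slice will be delivered some time before this

class TimeSliceSeller:
    def __init__(self, capacity_ms):
        self.capacity_ms = capacity_ms  # time the seller actually owns this period
        self.committed_ms = 0

    def sell(self, duration_ms, period_end_ms, concrete_price):
        # An honest seller never obligates itself to deliver time
        # it may not be able to get.
        if self.committed_ms + duration_ms > self.capacity_ms:
            raise ValueError("refusing to overcommit this period")
        self.committed_ms += duration_ms
        # Freedom in placing the slice is worth a discount to the buyer.
        return AbstractSlice(duration_ms, period_end_ms), 0.8 * concrete_price
```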

4.3. Resource ownership and performance modularity

The activity of a running program may be analyzed in terms of competence and performance. Competence refers to what a program can do given sufficient resources, but without explicit consideration of these resources. Competence includes issues of safety-what the program will not do, and liveness-whether the program will eventually do what it is supposed to, or will instead infinitely loop or deadlock. Performance refers to the resources the program will use, the efficiency with which it will use them, and the time it will take to produce results-precisely those issues ignored by competence. Both these issues have been the subject of formal analysis: the competence aspects of a programming language may be formalized as a programming language semantics and used to analyze safety properties via proofs of partial correctness and liveness properties via proofs of termination. The performance aspects of a program may be formally analyzed via complexity theory and proofs of response time (for real-time programming).

Formalization alone, however, is insufficient for dealing with these issues in large programs-a complex non-modular program in a formalized language will often resist formal (or informal) validation of many important properties; modularity is needed to make analysis tractable. Modularization proceeds by separating interface from implementation, allowing concern with what a module does to be somewhat decoupled from concern with how it does it. Object-oriented programming and abstract data types aid modularization of competence issues, with message protocols serving as an abstract interface for competence effects [28]. Similarly, computational markets will aid modularization of performance issues, with prices serving as an abstract interface for resource costs.

4.4. Currency

For a broad market to emerge from these foundations, a system must provide for ownership and trade not only of basic computational resources, but also of virtual, user defined resources. Such resources can serve as tokens for establishing a system of currency. Public key communications systems [29] enable implementation of a secure banking system; within a mutually trusted hardware subsystem, capability-based security plus unforgeable unique identifiers are sufficient for establishing a public key system without resorting to encryption [25].

Accounting mechanisms have been used in software to some extent. Old time-sharing systems are one of the more familiar models-a fact which may raise grave concerns about the desirability of agoric systems. But using an agoric system would not mean a return to the bad old days of begging for a grant of hundreds of dollars of computer time and storage to edit a medium-sized document late at night, or to perform some now-inexpensive computation. The cost of computers has fallen. It will continue to fall, and personal computers will continue to spread. Aside from overhead (which can be made small), accounting for the costs of computation will not make computation more expensive. Making human beings pay for computer time is not the goal of computational markets.

In agoric systems, objects will charge each other and the machine will charge the objects. Given low enough communications costs and the right sorts of demand, a personal computer could earn money for its owner by serving others, instead of remaining idle. A machine's owner need not pay to use it, since the internal charges and revenues all balance. In a stand-alone computer, currency will simply circulate, incurring a computational overhead but providing internal accounting information which can guide internal decisions.

Inside one machine, one could have the foundations establish an official currency system. No secure way has yet been found to do so between mutually distrustful machines on a network without relying on mutually-trusted, third-party machines serving as banks. In accord with the goal of uniformity, such banks are here suggested as the general model for transfer of currency [25,30]. These banks can maintain accounts for two parties; when party A transfers money to party B, the bank can verify for B that the money has been transferred. (The cost of verification provides an incentive for A and B to establish enough trust to make frequent verification unnecessary.) In this model, it is unnecessary and perhaps impossible to establish any one currency as standard. There will instead be a variety of local currencies with exchange rates among them; it has been argued that this will result in greater monetary stability, and hence in a more efficient market, than one based on a single currency [31].
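A minimal sketch of this trusted-third-party transfer model (account structure and method names are our assumptions):

```python
class Bank:
    """A mutually trusted third party holding accounts for both traders."""
    def __init__(self):
        self.balances = {}

    def deposit(self, account, amount):
        self.balances[account] = self.balances.get(account, 0) + amount

    def transfer(self, payer, payee, amount):
        if self.balances.get(payer, 0) < amount:
            return False
        self.balances[payer] -= amount
        self.deposit(payee, amount)
        return True

    def verify(self, account, at_least):
        # Party B asks the bank to confirm that A's payment arrived.
        return self.balances.get(account, 0) >= at_least
```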

4.5. Open problems

At the foundational level, many open issues remain. Actors and FCP seem to be clean, simple open-systems programming languages, but they have no evident mechanism for dealing with machine failure. Argus is an open-systems language able to deal with this problem, but only by directly providing distributed abortable transactions as a basic mechanism. While such transactions provide much leverage, they are quite complex. A promising line of investigation is the design of a language having the simplicity of Actors or FCP, but which provides mechanisms for failure-handling that enable user-level policy to support Argus-style transactions. Even more satisfying than such a design would be a demonstration that Actors or FCP already have sufficient mechanism.

More central to agoric systems is adequate resource accounting. There is as yet no open-systems language which provides for ownership and trade of basic computational resources while preserving semantic uniformity and supporting the emergence of charging and prices. It seems this has been accomplished in the realm of operating systems design [22], but unfortunately in a way which is not yet amenable to distributed systems. It would be exciting to apply Artsy's work to open-systems oriented operating systems like Mach [IV].

5. Agents and strategies

When a problem needs to be solved frequently, but no single solution is right in all situations, it is desirable to permit the testing and use of many solutions. To allow this freedom, one seeks to separate mechanism and policy, designing foundations that support a wide range of policies via general-purpose mechanisms.

Foundational mechanisms for ownership and trade of information and computational resources allow the choice of policy to be delegated to individual objects; these objects may in turn delegate their choices to other objects, termed agents. Even policies for such fundamental processes as processor scheduling and memory allocation can be so delegated. The following argues that agents at a higher level can accomplish adaptive automatic data structure selection, guide sophisticated code transformation techniques, provide for competition among business agents, and maintain reputation information to guide competition.

5.1. Resource allocation and initial market strategies

Systems programming problems such as garbage collection and processor scheduling have traditionally been addressed in the computational foundations, casting the architect in the role of omniscient central planner. In this approach, the architect imposes a single, system-wide solution based on global aggregate statistics, precluding local choice. In the market approach, however, these problems can be recast in terms of local negotiation among objects. Solutions in this framework also provide objects with price information, allowing them to make profitable use of the resulting flexibility.

This enables programmers to provide objects with specialized resource allocation strategies, but it need not force programmers to attend to this. Objects can delegate these strategic issues to business agents, and a programming environment can provide default agents when the programmer does not specify otherwise.

The companion paper "Incentive Engineering for Computational Resource Management" [III] describes and analyzes initial market strategies which, if followed by a set of business agents, result in distributed algorithms for allocation of processor time, memory space, and communication channels. Initial market strategies (whether these or others) will play a key role in establishing agoric systems: from a traditional programming perspective, they will provide initial policies for garbage collection and processor scheduling; from a market perspective, they will help provide initial resource prices and thus an environment in which more sophisticated entities can begin to operate. Thus, they will help bridge the gap between current practice and computational markets. As markets evolve, the scaffolding provided by initial market strategies may be largely or entirely replaced by other structures.

The initial strategies for processor scheduling are based on an auction in which bids can be automatically escalated to ensure that they are eventually accepted. The initial strategies for memory allocation and garbage collection are based on rent payment, with objects paying retainer fees to objects they wish to retain in memory; objects that are unwanted and hence unable to pay rent are eventually evicted, deallocating their memory space. These approaches raise a variety of issues (including the threat of strategic instabilities stemming from public goods problems) that are addressed in our companion paper. Together, these strategies provide a proof of concept (or at least strong evidence of concept) for the notion of decentralized allocation of computational resources.
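To make the rent-payment strategy concrete, here is a toy eviction cycle (a sketch only; the attribute names are hypothetical, and [III] gives the actual strategies):

```python
def collect_rent(objects, rent_per_unit):
    """One rent cycle: objects that cannot cover their rent out of
    retainer fees are evicted and their memory reclaimed."""
    survivors = []
    for obj in objects:
        rent_due = obj.memory_units * rent_per_unit
        income = sum(obj.retainer_fees.values())  # paid by objects wanting obj kept
        if income >= rent_due:
            survivors.append(obj)
        # else: unwanted and unable to pay, so evicted; space is deallocated
    return survivors
```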

Initial market strategies will provide a programmable system that generates price information, enabling a wide range of choices to be made on economic grounds. For example, processor and memory prices can guide decisions regarding the use of memory to cache recomputable results. Given a rule for estimating the future rate of requests for a result, one should cache the result whenever the cost of storage for the result is less than the rate of requests times the cost of recomputing the result (neglecting a correction for the overhead of caching and caching-decisions). As demand for memory in a system rises, the memory price will rise, and these rules should free the less valuable parts of caches. If the processing price rises, caches should grow. Thus, prices favor tradeoffs through trade.
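The caching rule just stated reduces to a single comparison (variable names ours):

```python
def should_cache(storage_cost_per_period, request_rate, recompute_cost):
    # Cache while storing the result costs less than the recomputation
    # it is expected to save (ignoring caching-decision overhead).
    return storage_cost_per_period < request_rate * recompute_cost
```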

5.2. Business location decisions

Price information can guide a variety of other choices. Many of these resemble business location decisions.

Main "core" memory is a high-performance resource in short supply. Disk is a lower performance resource in more plentiful supply. In an agoric system, core memory will typically be a high-rent (business) district, while disk will typically be a low-rent (residential) district. Commuting from one to the other will take time and cost money. An object that stays in core will pay higher rent, but can provide faster service. To the degree that this is of value, the object can charge more; if increased income more than offsets the higher rent, the object will profit by staying in core. Treating choice of storage medium as a business location problem takes account of considerations-such as the relative value of prompt service-that traditional virtual memory algorithms do not express.

Small objects would have an incentive to "car-pool" in disk-page sized "vehicles". But given the issues described in Section 3.5, a typical object buying resources on the market may occupy many pages. Instead of deciding whether it should be completely in or out of core, such an object might decide how many of its pages should be part of an in-core working set, perhaps relying on a traditional algorithm [32] to dynamically select the in-core pages.

The variety of types of memory also suggests a need for more flexibility than the traditional two-level approach provides. The many kinds of memory differ in many ways: consider fast RAM cache, write-once optical disk, and tape archived in multiple vaults. Memory systems differ with respect to:

  • Latency
  • Storage cost
  • Access cost
  • Reliability
  • Transfer rate
  • Locality structure
  • Predictability of access time
  • Security

Tradeoffs will change as technology changes. To be portable and viable across these changes, programs must be able to adapt.

Much has been written about the need to migrate objects in a distributed system in order to improve locality of reference [30,33,34]. Again, this resembles the problem of choosing a business location. Machines linked by networks resemble cities linked by highways. Different locations have different levels of demand, different business costs, and different travel and communications costs. Various traditional approaches correspond to:

  • staying put and using the phone (as in Mach [VI] and the V kernel [35]),
  • commuting to wherever the momentary demand is (as in Apollo [36]),
  • moving only when there are no local customers (as in the Bishop algorithm [37]),
  • coordinating multiple offices (as in Grapevine [38] and in [39]),
  • and moving where labor costs are lower (load balancing, as in [40]).

If limited to any one of these methods, human societies would suffer terrible inefficiencies. One expects the same for large, diverse software systems. If a system's mechanisms support a range of policies, different objects can select different approaches.

The notion of location in a space is still rare in object-oriented programming (for an exception see [41]). All memory in an ideal von Neumann computer is effectively equidistant, and many real computers approximate this ideal, but in a widely distributed system, differing distances are of real importance. When objects are given an account of the costs of communicating and commuting, they gain a useful notion of distance for making economic decisions.

5.3. Business agents

In a market containing sophisticated, potentially malicious objects, how can simple objects hope to negotiate, compete, and survive? One answer would be to shelter simple, mutually-trusting objects within large, sophisticated objects, building the latter out of the former. This model, however, would preclude turning loose small objects as service-providers on the open market. Other means are required for giving small objects the market sophistication they need.

Just as delegation of tasks to other objects can enable a small, simple object to offer sophisticated services, so delegation can enable it to engage in sophisticated market behavior. In this work's terminology, an object can delegate competence-domain actions to a subcontractor; this corresponds to the normal practice of hierarchical decomposition, which originated with the subroutine. An object can likewise delegate performance-domain actions to an agent; this seems likely to be a normal practice in agoric systems. Simple objects then can make their way in a complex world by being born with service relationships to sophisticated agents (which themselves can be composed of simple objects, born with. . .). Initially, human decisions will establish these relationships; later, specialized agent-providing agents can establish them as part of the process of creating new economic objects. The initial market strategies mentioned in Section 5.1 could be provided by simple agents.

One might object that a simple object and its collection of agents together constitute a complex object. But these objects, though complex in the performance domain, can remain extremely simple in the competence domain. Further, large agents need not burden a simple object with enormous costs; in general a large number of objects would share the agents and their overhead. The object-and-agent approach thus can enable entities of simple competence to compete in the open market.

5.3.1. Data-type agents

In object-oriented programming, one can supply multiple implementations of an abstract data type, all providing the same service through the same protocol, but offering different performance tradeoffs [28]. An example is the lookup table, which may be implemented as an array, linked list, hash table, B-tree, associative memory, or as any of several other devices or data structures. In an object-oriented system, code which uses such an abstract data type is itself generally abstract, being independent of how the data type is implemented; this provides valuable flexibility. In contrast, code which requests an instance of such an abstract data type is usually less abstract, referring directly to a class which provides a particular implementation of that type. The resulting code embodies decisions regarding implementation tradeoffs in a relatively scattered, frozen form.

In a market, agents can unfreeze these decisions: instantiation requests can be sent to a data-type agent, which then provides a suitable subcontractor. In human markets, someone seeking a house can consult a real-estate agent. The real-estate agent specializes in knowing what is available, what tradeoffs are important, and what to ask clients regarding those tradeoffs. Similarly, a lookup table agent could know what lookup table implementations are available, what tradeoffs they embody, and (implicitly, through its protocol) what to ask clients regarding those tradeoffs (e.g., a request might indicate "I will often be randomly indexing into the table"). The agent could also "ask questions" by providing a trial lookup table that gathers usage statistics: once a pattern becomes clear, the agent can transparently switch to a more appropriate implementation. Long term, sporadic sampling of usage patterns can provide a low-overhead mechanism for alerting the agent to needed changes in implementation.
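A minimal sketch of such an agent (the class and threshold are our inventions), switching implementations transparently once statistics reveal a lookup-heavy pattern:

```python
class LookupTableAgent:
    """Supplies a lookup table and transparently switches implementation
    once gathered usage statistics reveal the dominant access pattern."""
    SWITCH_AFTER = 1000  # sample size before committing to an implementation

    def __init__(self):
        self.impl = []          # trial implementation: association list
        self.is_hashed = False
        self.lookups = 0

    def put(self, key, value):
        if self.is_hashed:
            self.impl[key] = value
        else:
            self.impl = [(k, v) for (k, v) in self.impl if k != key]
            self.impl.append((key, value))

    def get(self, key):
        self.lookups += 1
        if not self.is_hashed and self.lookups > self.SWITCH_AFTER:
            # Statistics show heavy random-lookup traffic: switch,
            # invisibly to the client, to a hash-table implementation.
            self.impl = dict(self.impl)
            self.is_hashed = True
        if self.is_hashed:
            return self.impl[key]
        return next(v for (k, v) in self.impl if k == key)
```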

An agent can do more. For example, the relative price of memory and processor time may vary with the time of day or with the state of technology; depending on the cost of different implementations and the cost of switching among them, a change may be profitable. Likewise, the table-user may desire faster responses; again, a change may be profitable.

If a newly-invented lookup table implementation is superior for some uses, it could be advertised (by its advertising agent) to prominent lookup table agents. "Advertising" could include paying these agents to test its performance under different patterns of use, enabling them to determine which of their clients could benefit from switching to it. The new table would soon be used by programs that were coded without knowledge of it, and which started running prior to its creation.

Unrelated agents can interact synergistically. Consider the case of a lookup table with distinct read and write ports and distributed users. As long as there are writers, this lookup table chooses to exist on only one machine (in order to preserve serializable semantics without the complexity of distributed updates). This implementation imposes substantial delays and communication costs on the readers: if all objects dropped access to its write port, the lookup table could transmit static copies of itself to all readers, lowering these costs. The table can represent this cost by charging an artificially high retainer fee for the write port, giving users an incentive to drop this capability and permit a shift to the less expensive implementation. This illustrates how local economic decisions can encourage appropriate performance tradeoffs involving such distinct aspects of the system as garbage collection, network traffic, and representation of data structures.

Given sufficiently powerful partial-evaluation agents [42], a data-type agent could offer to extend an object with a new protocol. For example, the user of a lookup table might frequently look up the values of the first N entries following a particular key. Rather than doing so by running a procedure using the existing protocol, it could offer to pay for partially evaluating the procedure with respect to the lookup table, and add a lookup-next-N request to the table's protocol. This would typically make servicing such requests more efficient; a portion of the resulting savings could be paid as dividends to the object that invested in the partial evaluation.

5.3.2. Managers

Different agents of the same type will have different reputations and pricing structures, and they will compete with each other. An object must select which of these agents to employ. Just as an object may employ a lookup table agent to decide which lookup table to employ, so an object may employ an agent-selection agent to decide which agent to employ. Agent-selection agents are also in competition with each other, but this need not lead to an infinite regress: for example, an object can be born with a fixed agent-selection agent. The system as a whole remains flexible, since different objects (or versions of a single object) will use different agent-selection agents. Those using poor ones will tend to be eliminated by competition.

A generalization of the notion of an agent-selection agent is that of a manager. In addition to the functions already outlined, a manager can set prices, select subcontractors, and negotiate contracts. To select good agents and subcontractors, manager-agents will need to judge reputations.

5.3.3. Reputations

A reputation system may be termed positive if it is based on seeking objects expected to provide good service, and negative if it is based on avoiding those expected to provide bad service. Negative reputation systems fail if effective pseudonyms are cheaply available; positive reputation systems, however, require only that one entity cannot claim the identity of another, a condition met by the identity properties of actors [4,43] and public key systems [29]. Accordingly, computational markets are expected to rely on positive reputation systems.

It would seem that new objects could not acquire positive reputations ("Sorry, you can't get the job unless you show you've done it before."), but they need not have given good service to make one expect good service. For example, a new object can give reason to expect good service-thereby establishing a positive reputation-by posting a cash bond guaranteeing good performance. (This requires, of course, that both parties to the contract trust some third parties to hold the bond and to judge performance.) Despite the idea that software entities cannot make commitments [44], contracts with enforceable penalty clauses provide a way for them to do so.

The demand for reputation information will provide a market for reputation services, analogous to credit rating agencies, investment newsletters, Underwriters Laboratories, the Better Business Bureau, and Consumer Reports. When the correctness and quality of the service can be judged, it seems that an effective reputation service could work as follows. A reputation server approaches a service provider, offering money for service. (The server funds these purchases by selling information about the results.) The reputation agent has an incentive to appear to be a regular customer (to get an accurate sample), and regular customers have an incentive to appear to be reputation agents (to get high quality service). A restaurant reviewer has been quoted as saying "If someone claims to be me, he's not!" [45]. Given unforgeable identities, the best either can do is maintain anonymity. Service providers then have an incentive to provide consistently good service, since their reputation might be at stake at any time. This scenario generalizes to tests of reputation services themselves.

5.3.4. Compilation

Tradeoffs in compilation can often be cast in economic terms. For example, the best choice in a time-space tradeoff will depend on processor and memory costs, and on the value of a prompt result. Another tradeoff is between computation invested in transforming code versus that spent in running the code; this is particularly important in guiding the often computation-intensive task of partial evaluation.

Investment in code transformation is much like other investments in an economy: it involves estimates of future demand, and hence cannot be made by a simple, general algorithm. In a computational market, compilation speculators can estimate demand, invest in program transformations, and share in the resulting savings. Some will overspend and lose investment capital; others will spend in better-adapted ways. Overall, resources will flow toward investors following rules that are well-adapted to usage patterns in the system, thereby allocating resources more effectively. This is an example of the subtlety of evolutionary adaptation: nowhere need these patterns be explicitly represented.

Current programming practice typically sacrifices a measure of structural simplicity and modularity for the sake of efficiency. Some recent compilation technologies [42,46] can make radical, non-local transformations that change not only performance, but complexity measure. Use of such technology could free programmers to concentrate on abstraction and semantics, allowing the structure of the code to more directly express the structure of the problem. This can reduce the tension between modularity and efficiency.

As we later argue, computational markets will encourage the creation of reusable, high-quality modules adapted for composition into larger objects. The resulting composite objects will typically have components paying dividends to different investors, however, imposing internal accounting overhead. Again, there is a tension with efficiency, and again it can be reduced by means of compilation technology. Much compilation involves (invisibly) "violating" internal boundaries-compiling well-separated components into a complex, non-modular piece of code. In an agoric system, a compilation-agent can do the same, while also analyzing and compiling out the overhead of run-time accounting.

A Pareto-preferred compiler is one which performs this transformation so as to guarantee that some component will be better off and none will be worse off. This can be achieved even if the resulting division of income only approximates the original proportions, since the total savings from compilation will result in a greater total income to divide. The expectation of Pareto-preferred results is enough to induce objects to submit to compilation; since multiple results can meet this condition, however, room will remain for negotiation.
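The Pareto-preferredness test itself is simple to state; a minimal sketch, with assumed income figures:

    # Check that a proposed post-compilation income division is
    # Pareto-preferred: no component loses, and at least one gains.
    # Income figures are invented for the example.
    def pareto_preferred(before: dict[str, float],
                         after: dict[str, float]) -> bool:
        assert before.keys() == after.keys()
        no_losers = all(after[k] >= before[k] for k in before)
        some_gain = any(after[k] > before[k] for k in before)
        return no_losers and some_gain

    # Compiling out accounting overhead enlarges the total to be divided:
    before = {"parser": 10.0, "solver": 30.0, "ui": 10.0}   # total 50
    after = {"parser": 11.0, "solver": 34.0, "ui": 10.0}    # total 55
    print(pareto_preferred(before, after))  # True; many other divisions
                                            # also pass, hence negotiation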

5.4. The scandal of idle time[modifier]

Current resource allocation policies leave much to be desired. One sign of this is that most computing resources-including CPUs, disk heads, local area networks, and much more-sit idle most of the time. But such resources have obvious uses, including improving their own efficiency during later use. For example, heavily-used programs can be recompiled through optimizing compilers or partial evaluators; pages on disk can be rearranged to keep files contiguous; objects can be rearranged according to referencing structure to minimize paging [47], and so forth. In a computational market, a set of unused resources would typically have a zero or near-zero price of use, reflecting only the cost of whatever power consumption or maintenance could be saved by genuine idleness or complete shutdown. Almost any use, however trivial, would then be profitable. In practice, contention for use would bid up prices until they reflected the marginal value of use [5]. Idle time is a blatant sign of wasteful resource allocation policies; one suspects that it is just the tip of a very large iceberg.

Terry Stanley [48] has suggested a technique called "post-facto simulation" as a use of idle (or inexpensive) time. It enables a set of objects to avoid the overhead of fine-grained accounting while gaining many of its advantages. While doing real work, they do no accounting and make no attempt to adapt to internal price information; instead, they just gather statistics (at low overhead) to characterize the computation. Later, when processing is cheap and response time demands are absent (i.e., at "night"), they simulate the computation (based on the statistics), but with fine-grained accounting turned on. To simulate the day-time situation, they do not charge for the overhead of this accounting, and proceed using simulated "day" prices. The resulting decisions (regarding choice of data structures, use of partial evaluation, etc.) should improve performance during future "days". This is analogous to giving your best real-time response during a meeting, then reviewing it slowly afterward: by considering what you should have done, you improve your future performance.
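A minimal sketch of post-facto simulation, under assumed operation counts, prices, and candidate data structures (all invented for illustration):

    from collections import Counter

    day_stats = Counter()

    def record(op: str, n: int = 1) -> None:
        """Cheap daytime instrumentation: just count operations."""
        day_stats[op] += n

    # Daytime work (here stood in for by two calls):
    record("lookup", 90_000)
    record("insert", 10_000)

    # Nighttime: replay the counts under fine-grained accounting, using
    # remembered "day" prices. Per-operation costs are invented.
    COSTS = {
        "hash_table": {"lookup": 1.0, "insert": 3.0},
        "sorted_list": {"lookup": 2.0, "insert": 1.0},
    }

    def simulated_bill(structure: str) -> float:
        return sum(COSTS[structure][op] * n for op, n in day_stats.items())

    best = min(COSTS, key=simulated_bill)   # choice adopted for future "days"
    print({s: simulated_bill(s) for s in COSTS}, "->", best)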

6. Agoric systems in the large[modifier]

In describing the idea of market-based computation and some of its implications, this paper has implicitly focused on relatively isolated systems of software performing relatively conventional functions. The following examines two broader issues: how market-based computation could interact with existing markets for software, and how it could be relevant to the goal of artificial intelligence.

6.1. Software distribution markets[modifier]

An agoric open system would provide a computational world in which simple objects can sell services and earn royalties for their creators. This will provide incentives that differ from those of the present world, leading to qualitative differences in software markets.

6.1.1. Charge-per-use markets[modifier]

Perhaps the central problem we face in all of computer science is how we are to get to the situation where we build on top of the work of others rather than redoing so much of it in a trivially different way.

"R. W. Hamming, 1968 [49]

Consider the current software distribution marketplace. Producers typically earn money by charging for copies of their software (and put up with extensive illegal copying). Occasional users must pay as much for software as intense users. Software priced for intense users is expensive enough to discourage purchase by occasional users-even if their uses would be of substantial value to them. Further, high purchase prices discourage many potentially frequent users from trying the software in the first place. (Simply lowering prices would not be more efficient if this lowers revenues for the sellers: with lower expected revenue, less software would be written, including software for which there is a real demand.)

Now consider trying to build and sell a simple program which uses five sophisticated programs as components. Someone might buy it just to gain access to one of its components. How large a license fee, then, should the owners of those components be expected to charge the builder of this simple program? Enough to make the new program cost at least the sum of the costs of the five component programs. Special arrangements might be made in special circumstances, but at the cost of having people judge and negotiate each case. When one considers the goal of building systems from reusable software components, with complex objects making use of one another's services [50], this tendency to sum costs becomes pathological. The peculiar incentive structure of a charge-per-copy market may have been a greater barrier to achieving Hamming's dream than the more obvious technical hurdles.

In hardware markets, it can be better to charge for the use of a device than to sell a copy of it to the user:

Why was [the first Xerox copier] so successful? Two things contributed to the breakthrough, McColough says. . .technical superiority. . .and equally important, the marketing genius of the pricing concept of selling [the use of the copier], not machines. `One aspect without the other wouldn't have worked,' he said. `. . .we couldn't sell the machines outright because they would have been too expensive.'

"Jacobson and Hillkirk, 1986 [51]

Agoric systems will naturally support a charge-per-use market for software. In any market, software producers will attempt to extract substantial charges from high-volume users. With charge per use, however, the charges to be paid by high-volume users will no longer stand in the way of low-volume users; as a result, they will use expensive software that they could not afford today. At the same time, high-volume users will experience a finite marginal price for using software, rather than buying it and paying a zero marginal price for using it; they will cut back on some of their marginal, low-value uses. The overall benefit of numerous low-volume users making high-value use of the software will likely outweigh the loss associated with a few high-volume users cutting back on their low-value uses, yielding a net social benefit. It seems likely that some of this benefit will appear as increased revenues to software producers, encouraging increased software production.
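This argument can be illustrated with a toy demand model; all figures are invented, and value per use is held constant for each user, so the model omits the marginal cutback by high-volume users described above:

    # Three users differing in volume and in value per use (invented figures).
    users = [("occasional", 5, 2.0), ("regular", 50, 1.0), ("intense", 500, 0.5)]
    # (label, uses per year, value per use in currency units)

    COPY_PRICE, USE_PRICE = 100.0, 0.4

    # Charge-per-copy: a user buys only if total use value covers the price.
    copy_revenue = sum(COPY_PRICE for _, uses, value in users
                       if uses * value >= COPY_PRICE)

    # Charge-per-use: a user makes each use that is worth its price.
    use_revenue = sum(uses * USE_PRICE for _, uses, value in users
                      if value >= USE_PRICE)

    print(copy_revenue, use_revenue)  # 100.0 versus 222.0: more users
                                      # served, and higher producer revenue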

In a charge-per-copy market, users face an incentive structure in which they pay nothing to keep using their present software, but must pay a large lump sum if they decide to switch to a competitor. A charge-per-use market will eliminate this artificial barrier to change, encouraging more lively competition among software producers and better adaptation of software to user needs.

By enabling small objects to earn royalties for their creators, charge-per-use markets will encourage the writing, use, and reuse of software components-to do so will finally be profitable. Substantial improvement in programming productivity should result; these improvements will multiply the advantages just described.

6.1.2. Hardware encapsulation[modifier]

This charge-per-use scenario presents a major technical problem: it depends on the ability to truly protect software from illicit copying. True encapsulation would ensure this, but true encapsulation will require a hardware foundation that blocks physical attacks on security. Two approaches seem feasible: either keeping copies in just a few secure sites and allowing access to their services over a network, or developing a technology for providing users with local secure sites to which software can migrate.

In the limit of zero communication costs (in terms of money, delay, and bandwidth limitations), the disincentive for remote computation would vanish. More generally, lower communication costs will make it more practical for objects located on remote machines to offer services to objects on user machines. Remote machines can provide a hardware basis for secure encapsulation and copy protection-they can be physically secured, in a vault if need be. This approach to security becomes more attractive if software can be partitioned into public-domain front-ends (which engage in high-bandwidth interaction with a user), and proprietary back-ends (which perform sophisticated computations), and if bandwidth requirements between front- and back-ends can be minimized.

One system that might lend itself to this approach is an engineering service [13]. The user's machine would hold software for the representation, editing, and display of hardware designs. The back-end system-perhaps an extensive market ecosystem containing objects of diverse functionality and ownership-would provide computation-intensive numerical modeling of designs, heuristics-applying objects (perhaps resembling expert systems) for suggesting and evaluating modifications, and so forth.

Two disadvantages of separating front- and back-ends in this way are communications cost and response time. If hardware encapsulation can be provided on the local user's machine, however, software can migrate there (in encrypted form) and provide services on-site. Opaque boxes are a possible design for such secure hardware:

Imagine a box containing sensors and electronics able to recognize an attempt to violate the box's integrity [52]. In addition, the box contains a processor, dynamic RAM, and a battery. In this RAM is the private key of the manufacturer's public-key encryption key pair [29]; objects encrypted with the public key can migrate to the box and be decrypted internally. If the box detects an attempt to violate its physical integrity, it wipes the dynamic RAM (physically destructive processes are acceptable), deleting the private key and all other sensitive data. All disk storage is outside the box (fast-enough disk erasure would be too violent), so software and other data must be encrypted when written and decrypted when read. The box is termed opaque because no one can see its contents.
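The migration step might be sketched with present-day primitives, here hybrid encryption via the Python cryptography library (RSA-OAEP sealing a symmetric session key); the function names are illustrative, not part of the design above:

    # Hybrid encryption sketch: objects encrypted with the manufacturer's
    # public key can migrate to the box and be decrypted only inside it.
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.fernet import Fernet

    OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # The private key lives only in the box's battery-backed RAM and is
    # wiped on any tamper signal (not modeled here).
    box_private_key = rsa.generate_private_key(public_exponent=65537,
                                               key_size=2048)
    box_public_key = box_private_key.public_key()

    def migrate_to_box(software_object: bytes) -> tuple[bytes, bytes]:
        """Seal an object so that only the opaque box can open it."""
        session_key = Fernet.generate_key()
        sealed_key = box_public_key.encrypt(session_key, OAEP)
        return sealed_key, Fernet(session_key).encrypt(software_object)

    def open_inside_box(sealed_key: bytes, blob: bytes) -> bytes:
        """Runs only inside the box; plaintext never leaves it."""
        session_key = box_private_key.decrypt(sealed_key, OAEP)
        return Fernet(session_key).decrypt(blob)

    sealed, blob = migrate_to_box(b"proprietary back-end object code")
    assert open_inside_box(sealed, blob) == b"proprietary back-end object code"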

Internally, the opaque box would require encapsulation among software objects. This can be done by using a secure operating system [VI], by using capability hardware [53,54,55], or by demanding that objects be written in a secure programming language and either run under a secure interpreter or compiled by a secure compiler [IV,56]. Among other objects, the box would contain one or more branches of external banks, linked to them from time to time by encrypted communications; these banks would handle royalty payments for use of software.

Will greater hardware cost make opaque boxes uncompetitive for personal computer systems? If the added cost is not too many hundreds of dollars, the benefit-greater software availability-will, for many users, be far greater. Opaque boxes can support a charge-per-use market in which copies of software are available for the cost of telecommunications. CD-ROMs full of encrypted software might be sold at a token cost to encourage use.

An intermediate approach becomes attractive if opaque boxes are too expensive for use as personal machines. Applications could be split into front and back-ends as above, but back-ends could run on any available opaque box. These boxes could be located wherever there is sufficient demand, and linked to personal machines via high-bandwidth local networks. People (or software) would find investment in opaque boxes profitable, since their processors would earn revenue. With high enough box-manufacturing costs, this approach merges into the remote-machine scenario; with low enough costs, it merges into the personal-machine scenario.

6.1.3. Inhibiting theft[modifier]

As society embodies more and more of its knowledge and capabilities in software, the theft of this software becomes a growing danger. An environment that encourages the creation of large, capable, stand-alone applications sold on a charge-per-copy basis magnifies this problem, particularly when the stolen software will be used in places beyond the reach of copyright law.

A charge-per-use environment will reduce this problem. It will encourage the development of software systems that are composites of many proprietary packages, each having its security guarded by its creator. Further, it will encourage the creation of systems that are distributed over many machines. The division and distribution of functions will make the problem faced by a thief less like that of stealing a car and more like that of stealing a railroad. Traditional methods of limiting theft (such as military classification) slow progress and inhibit use; computational markets promise to discourage theft while speeding progress and facilitating use.

6.1.4. Integration with the human market[modifier]

It has been shown how an agoric system would use price mechanisms to allocate use of hardware resources among objects. This price information will also support improved decisions regarding hardware purchase: if the market price of a resource inside the system is consistently above the price of purchasing more of the resource on the external market, then incremental expansion is advantageous. Indeed, one can envision scenarios in which software objects recognize a need for new hardware, lease room for it, and buy it as an investment.
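A sketch of this purchase rule; the amortization over a hardware lifetime and the overhead term are assumptions added for the example:

    # Expand capacity when the sustained internal market price of a resource
    # exceeds the amortized cost of buying more of it externally.
    def should_expand(internal_price_per_hour: float, purchase_price: float,
                      lifetime_hours: float, overhead_per_hour: float) -> bool:
        amortized = purchase_price / lifetime_hours + overhead_per_hour
        return internal_price_per_hour > amortized

    # Invented figures: a processor renting internally at 0.12/hour versus
    # roughly 0.096/hour to buy, house, and power a new one.
    print(should_expand(0.12, 2000.0, 26280.0, 0.02))  # True: buy more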

It has been shown how objects in an agoric system would serve human needs, with human minds judging their success. Similarly, when objects are competent to judge success, they can hire humans to serve their needs-for example, to solve a problem requiring human knowledge or insight.

Conway's law states that "Organizations which design systems are constrained to produce systems which are copies of the communications structures of these organizations" (from [57] as quoted in [58]). If so, then software systems developed in a distributed fashion can be expected to resemble the organization of society as a whole. In a decentralized society coordinated by market mechanisms, agoric systems are a natural result.

6.2. The marketplace of mind[modifier]

Artificial intelligence is unnecessary for building an agoric open system and achieving the benefits described here. Building such a system may, however, speed progress in artificial intelligence. Feigenbaum's statement, "In the knowledge lies the power", points out that intelligence is knowledge-intensive; the "knowledge acquisition bottleneck" is recognized as a major hindrance to AI. Stefik has observed [VII] that this knowledge is distributed across society; he calls for a "knowledge medium" in which knowledge contributed by many people could be combined to achieve greater overall intelligence.

Agoric systems should form an attractive knowledge medium. In a large, evolving system, where the participants have great but dispersed knowledge, an important principle is: "In the incentive structure lies the power". In particular, the incentives of a distributed, charge-per-use market can widen the knowledge engineering bottleneck by encouraging people to create chunks of knowledge and knowledge-based systems that work together.

Approaches based on directly buying and selling knowledge [VII,23] suffer from the peculiar incentives of a charge-per-copy market. This problem can be avoided by embodying knowledge in objects which sell knowledge-based services, not knowledge itself. In this way, a given piece of knowledge can be kept proprietary for a time, enabling producers to charge users fees that approach the value the users place on it. This provides an incentive for people to make the knowledge available. But in the long run, the knowledge will spread and competition will drive down the price of the related knowledge-based services-approaching the computational cost of providing them.

Agoric open systems can encourage the development of intelligent objects, but there is also a sense in which the systems themselves will become intelligent. Seeing this entails distinguishing between the idea of intelligence and the ideas of individuality, consciousness, and will. Consider the analogous case of human society.

It can be argued that the most intelligent system now known is human society as a whole. This assertion strikes some people as obvious, but others have a strong feeling that society should be considered less intelligent than an individual person. What might be responsible for these conflicting views?

The argument for the stupidity of society often focuses not on the achievements of society, but on its suboptimal structure or its slow rate of structural change. This seems unfair. Human brains are presumably suboptimal, and their basic structure has changed at a glacial pace over the broad time spans of biological evolution, yet no one argues that society is worse-structured than a brain (what would this mean?), or that its basic structure changes more slowly than that of a brain. Great intelligence need not imply optimal structure, and suboptimal structure does not imply stupidity.

Other arguments for the stupidity of society focus on the behavior of committees, or crowds, or electorates. This also seems unfair. Human beings include not only brains but intestines; our intelligence is not to be judged by the behavior of the latter. Not all parts need be intelligent for a system to be so. Yet other arguments focus on things individuals can do that groups cannot, but one might as well argue that Newton was stupid because he did not speak Urdu. A final argument for the stupidity of society focuses on problems that result when a few individuals who are thought to somehow represent society attempt to direct the actions of the vast number of individuals who actually compose society-that is, the problems of central planning, government, and bureaucracy. This statement of the argument seems an adequate refutation of it.

The argument for society's intelligence is simple: people of diverse knowledge and skills, given overall guidance by the incentives of a market system, can accomplish a range of goals which, if accomplished by an individual, would make that individual a super-human supergenius. The computer industry is a small part of society, yet what individual could equal its accomplishments, or the breadth and speed of its ongoing problem-solving ability?

Still, it is legitimate to ask what it means to speak of the "intelligence" of a diverse, distributed system. In considering an individual, one commonly identifies intelligence with the ability to achieve a wide range of goals through complex information processing. But in agoric systems, as in human society, the component entities will in general have diverse goals, and the system as a whole will typically have no goals [59]. Nonetheless, a similar concept of intelligence can be applied to individuals, societies, and computational markets.

Individuals taking intelligence tests are judged by their ability to achieve goals set by a test-giver using time provided for the purpose. Likewise, the intelligence of a society may be judged by its ability to achieve goals set by individuals, using resources provided for the purpose. In either case, the nature and degree of intelligence may be identified with a combination of the range of goals that can be achieved, the speed with which they can be achieved, and the efficiency of the means employed. By this measure, one may associate kinds and degrees of intelligence not only with individuals, but with corporations, with ad-hoc collections of suppliers and subcontractors, and with the markets and institutions that bring such collections together at need. The idea of intelligence may thus be separated from the ideas of individuality, consciousness, and will.

The notion of intelligence emerging from social interactions is familiar in artificial intelligence: Minsky [60] uses the society metaphor in his recent work on thinking and the mind; Kornfeld and Hewitt [61] use the scientific community as a model for programs incorporating due process reasoning. Human societies demonstrate how distributed pieces of knowledge and competence can be integrated into larger, more comprehensive wholes; this process has been a major study of economics [8] and sociology [63]. Because these social processes (unlike those in the brain) involve the sometimes-intelligible interaction of visible, macroscopic entities, they lend themselves to study and imitation. This paper may thus be seen as proposing a form of multi-agent, societal approach to artificial intelligence.

7. The absence of agoric systems[modifier]

Market-style software systems are a fairly obvious idea and have received some attention. However, in considering any fairly obvious idea with (allegedly) great but unrealized potential, it is wise to ask why that potential has in fact not been realized. When an idea of this sort lends itself neither to formal proof nor to small, convincing demonstrations, the difficulty of making a case for it grows. Support from abstract arguments and analogies can be helpful, as can an examination of the practical issues involved. But in addition, it helps to see whether the idea has been tested and found wanting. Considering this major category of possible negative evidence is an aspect of due-process reasoning.

Why have agoric open systems not been implemented already? In part, because the software community has lacked an immediate, compelling need. Advances have been made, through better programming environments and methodologies (including the encapsulation and communication of information and access), and through tools for making larger structures visible to programmers [64]-all without building markets. These environments and methodologies have extended the programmer's conceptual span of control, enabling one mind or a few closely-coordinated, mutually-trusting minds to build ever larger and more complex programs. These advances have decreased the urgency of enabling extensive cooperation without mutual trust or extensive communications.

Another problem has been the scale-sensitivity of the market approach. In small systems, the overhead of accounting and negotiations is unjustified; further, incremental increases in scale have thus far been possible without markets. Robust service-trading objects must have a certain minimum complexity, or have access to trusted business-agents of a certain minimum complexity. The virtues of markets are greatest in large, diverse systems.

There has, perhaps, also been a cultural factor at work. Large, research-oriented computer networks have focused on academic and government work-that is, toward non-profit use. Further, the academic community already has an informal incentive structure that rewards the creators of useful software in an incremental way, in rough proportion to its usefulness. These reputation-based reward mechanisms facilitate the development of software systems that build on others' work; the differing incentives in the commercial community may be responsible for its greater tendency to build redundant systems from scratch.

These considerations seem sufficient to explain the lack of agoric systems today, while giving reason to expect that they will become desirable as computer systems and networks grow. In the large, open, evolving software systems of the future, the overhead of accounting will be less important than robustness and flexibility. Further, the development of automated programming systems will introduce "programmers" having (initially) a sharply limited ability to plan and comprehend. This will re-emphasize the problem of the "programmer's" span of conceptual control, and increase the need for mechanisms that strengthen localization and system robustness.

8. Conclusions[modifier]

A central challenge of computer science is the coordination of complex systems. In the early days of computation, central planning-at first, by individual programmers-was inevitable. As the field has developed, new techniques have supported greater decentralization and better use of divided knowledge. Chief among these techniques has been object-oriented programming, which in effect gives property rights in data to computational entities. Further advance in this direction seems possible.

Experience in human society and abstract analysis in economics both indicate that market mechanisms and price systems can be surprisingly effective in coordinating actions in complex systems. They integrate knowledge from diverse sources; they are robust in the face of experimentation; they encourage cooperative relationships; and they are inherently parallel in operation. All these properties are of value not just in society, but in computational systems: markets are an abstraction that need not be limited to societies of talking primates.

This paper has examined many of the concrete issues involved in actually creating computational markets, from hardware and software foundations, to initial market strategies for resource management (chiefly in [III]), to the organization of systems of objects and agents able to interact in a market context. As yet, no obstacle to their realization has been found.

Distributed systems based on the charge-per-use sale of software services and computational resources promise a more flexible and effective software market, in which large systems will more often be built from pre-existing parts. With many minds building knowledge and competence into market objects, and with incentives favoring cooperation among these objects, the overall problem-solving ability of the system can be expected to grow rapidly.

On a small scale, central planning makes sense; on a larger scale, market mechanisms make sense. Computer science began in a domain where central planning made sense, and central planning has thus been traditional. It seems likely, however, that some modern computer systems are already large and diverse enough to benefit from decentralized market coordination. As systems grow in scale and complexity, so will the advantages of market-based computational systems.

References[modifier]

Papers referenced with roman numerals can be found in the present volume: Huberman, Bernardo (ed.), Ecology of Computation (Elsevier Science Publishers/North-Holland, 1988).

[I] Miller, Mark S., and Drexler, K. Eric, "Markets and Computation: Agoric Open Systems", this volume.

[II] Miller, Mark S., and Drexler, K. Eric, "Incentive Engineering for Computational Resources", this volume.

[III] Lenat, Douglas B., and Brown, John Seely, "Why AM and EURISKO Appear to Work", this volume.

[IV] Kahn, Kenneth, and Miller, Mark S., "Language Design and Open Systems", this volume.

[V] Rashid, Richard F., "From RIG to Accent to Mach: The Evolution of a Network Operating System", this volume.

[VI] Perlis, Alan, "The Evolution of Software Systems", this volume.

[VII] Stefik, Mark, "The Next Knowledge Medium", this volume.

[1] Dawkins, Richard, "The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe Without Design (W.W. Norton and Company, New York, 1986

[2] Hayek, Friedrich A., "Cosmos and Taxis", in: Law, Legislation and Liberty, Vol. 1: Rules and Order (University of Chicago Press, Chicago, 1973) pp.35-54.

[3] Gardner, Martin, Wheels, Life, and Other Mathematical Amusements (W. H. Freeman, New York, 1983).

[4] Popper, Karl R., "Evolutionary Epistemology", in Miller, David, (ed.), Popper Selections (Princeton University Press, Princeton NJ, 1985) pp.78-86.

[5] Axelrod, Robert, The Evolution of Cooperation (Basic Books, New York, 1984).

[6] Lenat, Douglas B., "The Role of Heuristics in Learning by Discovery: Three Case Studies", in Michalski, Ryszard S., Carbonell, Jaime G., and Mitchell, Tom M. (eds.), Machine Learning: An Artificial Intelligence Approach (Tioga Publishing Company, Palo Alto, CA, 1983) pp.243-306.

[7] Dawkins, Richard, The Selfish Gene (Oxford University Press, New York, 1976).

[8] Wilson, Edward O., Sociobiology (Belknap Press/Harvard University Press, Cambridge, MA, 1975).

[9] Hofstadter, Douglas R., "The Prisoner's Dilemma Computer Tournaments and the Evolution of Cooperation", in Metamagical Themas: Questing for the Essence of Mind and Pattern (Basic Books, New York, 1985) pp.715-716.

[10] Maynard Smith, J., and Price, G.R., "The Logic of Animal Conflict", in Nature (1973) 246, pp.15-18.

[11] Drexler, K. Eric, "Molecular Engineering: An Approach to the Development of General Capabilities for Molecular Manipulation", in: Proceedings of the National Academy of Sciences USA (Sept. 1981) Vol. 78, No. 9, pp. 5275-5278.

[12] Drexler, K. Eric, Engines of Creation (Anchor Press/Doubleday, Garden City, New York, 1986).

[13] Rivest, R., Shamir, A., and Adleman, L., "A Method for Obtaining Digital Signatures and Public-Key Cryptosystems", in: Communications of the ACM (Feb. 1978) Vol. 21, No. 2, pp. 120-126.

[14] Miller, Mark S.; Bobrow, Daniel G.; Tribble, Eric Dean; and Levy, Jacob, "Logical Secrets", in Shapiro, Ehud (ed.), Concurrent Prolog: Collected Papers (MIT Press, Cambridge, MA, 1987) in press.

[15] Hayek, Friedrich A., Denationalisation of Money (The Institute of Economic Affairs, Westminster, London, 1978, Second Edition).

[16] Keynes, John Maynard, The General Theory of Employment, Interest, and Money (Harcourt Brace Jovanovich, San Diego, CA, 1964).

[17] Wickler, Wolfgang, Mimicry in Plants and Animals (World University Library/McGraw-Hill, New York, 1968).

[18] Dawkins, Richard, The Extended Phenotype (Oxford University Press, New York, 1982).

[19] Holland, John H., Holyoak, Keith J., Nisbett, Richard E., and Thagard, Paul R., Induction: Processes of Inference, Learning, and Discovery (MIT Press, Cambridge, MA, 1986).

[20] Ames, Bruce N., Magaw, Renae, and Gold, Lois Swirsky, "Ranking Possible Carcinogenic Hazards", in Science (17 April 1987) Vol. 236, pp. 271-285.

[21] Nisbett, Richard, and Ross, Lee, Human Inference: Strategies and Shortcomings of Social Judgment (Prentice-Hall, Englewood Cliffs, NJ, 1980).

[22] Haase, Kenneth W., Jr., "Discovery Systems", in ECAI '86: The 7th European Conference on Artificial Intelligence (July 1986) Vol. 1, pp. 546-555.

[23] Hardin, Garrett, "The Tragedy of the Commons", in Science (13 December 1968) Vol. 162, pp. 1243-1248.

[24] Tullock, Gordon, The Organization of Inquiry (Duke University Press, Durham, NC, 1966).

[25] Nelson, Theodor, Literary Machines (published by author, version 87.1, 1987; first published 1981. Available from Project Xanadu, 8480 Fredricksburg #8, San Antonio, TX 78229; available as hypertext on disk from Owl International, 14218 NE 21st St., Bellevue, WA 98007).

[26] Drexler, K. Eric, Hypertext Publishing and the Evolution of Knowledge (Foresight Institute, Los Altos, CA, 1986).

[27] Kornfeld, William A., and Hewitt, Carl, "The Scientific Community Metaphor", in IEEE Transactions on Systems, Man, and Cybernetics (IEEE, 1981) SMC-11, pp. 24-33.

[28] Kornfeld, William A., "Using Parallel Processing for Problem Solving" (MIT AI Lab, Cambridge, MA, 1979) MIT-AI-561.

[29] Hayek, Friedrich A., Unemployment and Monetary Policy: Government as a Generator of the "Business Cycle" (Cato Institute, San Francisco, CA, 1979).

[30] Friedman, David, The Machinery of Freedom: Guide to a Radical Capitalism (Harper and Row, New York, 1973).

[31] Friedman, Milton, and Schwartz, Anna, "The Great Contraction", in A Monetary History of the United States, 1867-1960 (Princeton University Press/ National Bureau of Economic Research, 1963) Chap. 7.

[32] McGee, John S., "Predatory Price Cutting: The Standard Oil (NJ) Case", in Journal of Law and Economics (October 1958) 1, 137.

[33] Epstein, Richard A., Takings: Private Property and the Power of Eminent Domain (Harvard University Press, Cambridge, MA, 1985).

[34] Buchanan, James M. and Tullock, Gordon, The Calculus of Consent: Logical Foundations of Constitutional Democracy (University of Michigan Press, Ann Arbor, MI, 1965).

[35] Tullock, Gordon, The Vote Motive (The Institute of Economic Affairs, Westminster, London, 1976).


Comparative Ecology: A Computational Perspective[modifier]

A long-standing dream in the field of artificial intelligence has been to use evolutionary processes to produce systems of greater competence than those we can directly design. This paper compares different evolutionary models-such as biological ecosystems, markets, and EURISKO-with respect to this goal. This comparison suggests that a form of ecosystem here termed a direct market (as opposed to the indirect market of human society) is a promising basis for computational ecosystems. Related papers [I,II] in this book elaborate a vision of direct computational markets termed agoric open systems.

1. Introduction[modifier]

A major problem in making effective use of computers is dealing with complexity. As we design programs of increasing functionality, we find ourselves overwhelmed by their ever greater complexity. It might seem that the complexity of computational systems must be limited by our ability to understand and design them, but this is not so. In the world outside computation are working systems that developed without design-indeed, systems of a complexity far beyond present design capabilities [1].

A patterned system which develops without deliberate design is termed a spontaneous order [2]. Crystals provide a simple example: nowhere in the nature of an element's individual atomic forces is its crystal structure specified, much less designed, yet that structure emerges as a result of those forces. Similar examples include patterns observed in Conway's game of Life [3] and the spiral arms of galaxies. A particularly powerful spontaneous ordering principle is evolution. Among the spontaneous orders in the world, the most intricate and impressive are those-such as human bodies and languages-that have emerged through evolutionary processes.

The goal of this work is to understand how to build systems that will develop increasingly effective (even intelligent) computation through spontaneous ordering processes. In pursuit of this goal, this paper examines several evolutionary systems to determine which properties will serve best when generalized to the computational domain.

Biology provides the most familiar examples of evolutionary processes. A simple generalization from these examples might suggest that evolution is necessarily slow, and that it must proceed by random mutations. But human culture-including technology and scientific knowledge-is also a result of evolutionary processes [2,4]. This shows that evolution can proceed quickly, and that "mutations" in an evolving system can result from thought and design. The essence of evolution is trial and the weeding out of error, but the trials need not be random.

Evolution often proceeds in what, in the biological case, is called an ecosystem. Here, the concept of an ecosystem is generalized to cover any set of evolving, interacting entities operating within a framework of rules.

Ecosystems vary in their ability to solve externally defined problems. One can imagine putting a person or an ecosystem in a box and then presenting problems and contingent rewards through a window in the box. A box full of algae and fish will "solve" a certain narrow set of problems (such as converting light into chemical energy), and will typically pay little attention to the reward. A box containing an intelligent person will solve a different, broader range of problems. A box containing, say, an industrial civilization (with access to algae, fish, and Bell Labs) will solve a vastly greater range of problems. This ability to solve externally posed problems can be taken as a measure of an ecosystem's "intelligence" (see Section 6.2 of [I]).

Ecosystems discussed in this paper include Axelrod's iterated prisoner's dilemma game [5], which can serve as a sort of E. coli of ecosystems, and biological ecosystems, which are familiar enough to serve as a point of reference for the rest. Others discussed are Lenat's EURISKO program [III,6], political ecosystems, and what will here be termed direct and indirect market ecosystems. Markets are the central theme-other ecosystems are compared to markets, and markets themselves are generalized from existing human systems to proposed computational systems.

2. Evolution in ecosystems[modifier]

Evolution proceeds by the variation and selection of replicators. The most familiar example, of course, is biological evolution, in which genes are the replicators [7,8], mutation is the variation mechanism, and a variety of environmental pressures (predation, parasitism, competition for scarce resources) act as selection mechanisms. The following will explore a variety of other ecosystems, the nature of their replicators, and the nature of their processes of variation and selection.

2.1. Ecosystems[modifier]

Ecosystems provide contexts for evolution in two distinct senses. First, the ecosystem's replicators evolve in response to unchanging aspects of their environment such as climate and physical principles. Second, the replicators interact (through predation, competition, cooperation, and so forth), with each replicator providing part of the context for others. When replicators respond to external selective pressure, but not to each other, the result is an evolutionary system, but not an ecosystem. In such a non-ecological system, the subtlety and sophistication of the selective pressures are fixed.

The environmental richness and complexity generated when evolving entities can interact arguably leads to increased richness and complexity of the resulting system: selective pressures are then themselves the results of evolution. Analyzing such a system may be difficult, however. With many complex, multi-way feedback loops possible, can one be confident in the stability or direction of the overall system? This depends on the nature of the forces and feedback.

The analysis of ecosystems frequently involves non-intuitive secondary and tertiary effects. The Axelrod tournament provides a simple example of these effects.

2.2. The Axelrod tournament[modifier]

Robert Axelrod developed an ecosystem in which the entities interact in rounds of iterated prisoner's dilemma games [5]. To understand it, one must first understand the dilemma itself. Instead of the traditional scenario of prisoners under interrogation, Hofstadter illustrates the dilemma with a more intuitive scenario.

Two dealers arrange to trade items. Each dealer agrees to place his item in a bag and leave it at a designated place to be retrieved by the other dealer. The dilemma presents itself when each dealer considers how best to interact with the other-given that they are strangers, will not meet face to face for this exchange, and will never deal with each other in the future. As Hofstadter describes, each dealer reasons as follows:

"`If the [other] dealer brings a full bag, I'll be better off having left an empty bag, because I'll have gotten all that I wanted and given away nothing. If the dealer brings an empty bag, I'll be better off having left an empty bag, because I'll not have been cheated. I'll have gained nothing but lost nothing either. Thus it seems that no matter what the dealer chooses to do, I'm better off leaving an empty bag. So I'll leave an empty bag.' . . .And so both of you, with your impeccable (or impeccable-seeming) logic, leave empty bags, and go away empty handed. How sad, for if you both had just cooperated, you could have each gained something you wanted to have. Does logic prevent cooperation? This is the issue of the Prisoner's Dilemma." [emphasis in the original] [9]

The underlying strategic situation can be made precise in the following fashion: In a single prisoner's dilemma interaction, two players each choose between moves (termed cooperate and defect) in ignorance of the other's choice. If both cooperate, both are rewarded (in Axelrod's case, with a payoff of 3 points). If one cooperates and the other defects, the defector receives an even greater reward (5 points), while the cooperator receives nothing (0 points). If both defect, both are punished (by receiving only 1 point).
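In code, this payoff structure is simply a table (a sketch in Python, with True standing for cooperation and False for defection):

    # Axelrod's payoff values for a single prisoner's dilemma move.
    PAYOFF = {  # (my move, their move) -> my points
        (True, True): 3,    # mutual cooperation
        (True, False): 0,   # exploited cooperator
        (False, True): 5,   # successful defector
        (False, False): 1,  # mutual defection
    }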

In a single move, each player has an incentive to defect regardless of the other player's move, but double-cooperation is better than double-defection. Overall, pairs of players that cooperate earn higher scores than those that do not.

In an iterated prisoner's dilemma game, two players go through a long series of moves, and can base their actions on the history of play. When one expects (and hopes for) further transactions with the other party, simple defection no longer seems as attractive. Indeed, by running a computer tournament, Axelrod showed that the logic of an iterated prisoner's dilemma actually fosters cooperation.

Robert Axelrod ran a Computer Prisoner's Dilemma Tournament based on the above rules. A diverse group of game theorists were invited to submit programs to play against each other in a round-robin of games, each averaging 200 single moves. After the first tournament, Axelrod circulated the results-including the nature of the winning program, judged by cumulative points-and solicited entries for a second tournament.

Axelrod's pair of tournaments may be described as a simple evolutionary ecosystem. The replicators were the programs themselves (or the strategies those programs embody), the variation mechanism was human ingenuity (since programs were modified between tournaments), and the selection criterion during a tournament was simply the number of points earned. Programs interacted with each other in an environment imposing certain rules, and their success depended on each others' behavior. Further, Axelrod went on to simulate the population dynamics of a set of programs, given the assumption that points earned determined the "population density" of that program in the next time period.

In both tournaments a very simple program won. That program was TIT FOR TAT, submitted by psychologist Anatol Rapoport. In the population dynamics simulation, the success of TIT FOR TAT was even more pronounced. Analyzing TIT FOR TAT's success can suggest how to analyze other ecosystems.

2.2.1. The triumph of TIT FOR TAT[modifier]

All sorts of strategies were submitted, including many which used complex reasoning based on past interactions, and one which responded randomly. The success of a strategy depended on whether it was:

  • Nice-never defected first,
  • Retaliatory-responded to defection with defection (thereby punishing defectors), and
  • Forgiving-eventually stopped defecting in response to cooperation.

TIT FOR TAT is the simplest example of a nice, retaliatory, and forgiving strategy. It cooperates on the first move and then does whatever the opposing player did on the previous move.

Other strategies can be classified according to which of the above traits they lack. A strategy which initiates a defection (and thereby is not nice) may be termed a con man, since it is trying to gain at the expense of (and with the foolish cooperation of) its opponent-the simplest con man always defects. A nice strategy which does not defect in response to a defection (and thereby is not retaliatory) may be termed a sucker, since it can be taken advantage of by con men.

Con men have an advantage over TIT FOR TAT in that they can fully exploit suckers, while TIT FOR TAT only cooperates with them. Given the mix of strategies submitted to Axelrod's tournaments, TIT FOR TAT won both. A con man strategy could have won, however, had the initial population included enough suckers. Since con men could have won in this case, how can one claim that TIT FOR TAT is fundamentally more viable than con men? Axelrod's population dynamics simulation helps answer this question.
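These three strategy classes and their pairwise games can be sketched directly; the 200-move length matches the tournament average given above, and the payoff table repeats the one shown earlier:

    PAYOFF = {(True, True): 3, (True, False): 0,
              (False, True): 5, (False, False): 1}

    def tit_for_tat(history):   # history: the opponent's past moves
        return history[-1] if history else True   # nice, retaliatory, forgiving

    def con_man(history):
        return False            # always defects

    def sucker(history):
        return True             # never retaliates

    def play(a, b, rounds=200):
        """Iterated game; each strategy sees only the opponent's history."""
        ha, hb, sa, sb = [], [], 0, 0
        for _ in range(rounds):
            ma, mb = a(hb), b(ha)
            sa += PAYOFF[(ma, mb)]
            sb += PAYOFF[(mb, ma)]
            ha.append(ma)
            hb.append(mb)
        return sa, sb

    print(play(tit_for_tat, con_man))   # (199, 204): one sting, then stalemate
    print(play(tit_for_tat, sucker))    # (600, 600): steady cooperation
    print(play(con_man, sucker))        # (1000, 0): full exploitation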

3. Evolutionarily stable strategies[modifier]

In the population dynamics simulations, a population number is associated with each strategy, to indicate how many "organisms" in the overall population are following the strategy. In each generation, the score received by an organism is the score it would receive in playing a series of one-on-one games with every organism in a representative sample of the total population. At the end of each generation, the total score accumulated by organisms using a given strategy determines how many organisms of that type will exist in the next generation. (A process of this sort-in which success in one round determines influence or existence in the next-distinguishes evolutionary game theory from conventional game theory. Evolutionary game theory need make no assumptions about motives or values, though it is natural to think in terms of a "survival motive" which generates a "success motive".)

As Figure 2 shows, in a population dominated by suckers, a small population of con men have a great advantage, rising to temporary dominance. But in doing so, they drive down the population of suckers and so lose their advantage; they are then driven to extinction by TIT FOR TAT. As shown in the diagram, an environment with enough TIT FOR TAT players to fight off an invasion of con men can support a stable population of suckers.
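A self-contained sketch of such a run: the pairwise scores are those produced by the 200-move games sketched above, and the initial mix (mostly suckers, with small invasions of con men and TIT FOR TAT players) is assumed for illustration.

    # SCORE[a][b]: a's points over 200 moves against b (from the games above).
    SCORE = {
        "tft":    {"tft": 600, "con": 199, "sucker": 600},
        "con":    {"tft": 204, "con": 200, "sucker": 1000},
        "sucker": {"tft": 600, "con": 0,   "sucker": 600},
    }

    pop = {"sucker": 0.90, "con": 0.05, "tft": 0.05}  # assumed initial shares

    for generation in range(60):
        fitness = {s: sum(SCORE[s][t] * pop[t] for t in pop) for s in pop}
        mean = sum(pop[s] * fitness[s] for s in pop)
        pop = {s: pop[s] * fitness[s] / mean for s in pop}  # points -> share
        if generation % 10 == 0:
            print(generation, {s: round(pop[s], 3) for s in pop})
    # Con men surge by exploiting suckers, then crash as suckers dwindle;
    # TIT FOR TAT ends up dominant.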

In this ecosystem, TIT FOR TAT can be termed an evolutionarily stable strategy, or ESS. An ESS is a strategy which, given that it dominates an ecology, cannot be invaded by any other strategy [10,7]. In Maynard Smith's terminology, "ESS" refers to a specific detailed strategy (such as TIT FOR TAT); here, the term will refer to general classes of strategies that share basic characteristics, such as being "nice and retaliatory". The rules governing a given ecology determine the success of different strategic characteristics, and therefore the ESS.

As shown, suckers are not an ESS in Axelrod's system because they can be invaded by con men. Con men are not an ESS except in a trivial sense-a population consisting purely of con men cannot be invaded by a small, scattered group of TIT FOR TAT players. A small but significant number of invaders, however, can cooperate, expand, and completely replace a population of con men. A population dominated by any strategy that is both nice and sufficiently retaliatory cannot be invaded by any strategy that is not. Any population dominated by a strategy that is not both nice and sufficiently retaliatory can be invaded. Therefore, the ESS of the Axelrod tournament is to be nice and retaliatory.

A single population-dynamics process can be described as an ecosystem, but not as an evolutionary ecosystem, since there is no source of variation. Axelrod's series of two tournaments included variation from one to the next, and hence qualifies. To transform it into a better example of an evolutionary ecosystem, imagine a continuing tournament open to outside contestants able to create new strategies and introduce them in small numbers. In any such open, evolving ecosystem, given no special restrictions on allowable variations, one should expect the ecosystem to become populated primarily by an ESS. The properties of such an ESS will often indicate emergent properties of the system. For example, in any open Axelrod ecosystem most moves will be cooperative.

The above discussion of Axelrod's system benefits from hindsight. The game-theory experts who submitted the original strategies knew the rules, and those who entered the second tournament even had the benefit of a bit of hindsight from the first. Nevertheless, in both cases most of the strategies submitted were not nice, and so were not ESSs. The analyses which follow cover much more complex ecosystems, in which the nature of strategies and payoffs is much more subtle.

As a result, many of the points made in the rest of this paper are not settled conclusions, but merely initial hypotheses to be tested. The best testing methodology is that used by Axelrod: run a system and open it to outside contributors. The goal is to understand what properties of a computational ecosystem will result in useful system behavior.

4. Biological and market ecosystems[modifier]

This section compares biological and idealized market ecosystems, focusing on the differing evolutionarily stable strategies they foster. Idealized markets, as defined here, are computationally-inspired abstractions having parallels with human markets but omitting certain of their complexities and problems. The idea of a computational market is not developed in detail in this paper; see [I,II] for a more concrete discussion.

4.1. Rules of the games[modifier]

The above analysis of Axelrod's ecosystem began with an examination of the rules of the game. These rules are the constraints within which organisms of that ecosystem must operate. The only fundamental constraint on either biological or human market ecosystems is physical law: any action not forbidden by physical law is in theory possible for players in either ecosystem. However, in order to analyze these ecosystems, it is useful to consider idealized versions in which players operate under additional constraints.

4.1.1. Foundations of biological ecosystems[modifier]

An example of a physical constraint in biological ecosystems is the conservation of mass and energy. Among animals, the critical resources-biomass and free energy-are downwards conserved: they can be transferred and reduced by the transactions animals are capable of, but not increased. Plants can obtain these resources by photosynthesis; access to sunlight and land area are among their critical limited resources. Biomass and free energy are needed to sustain activity, and can be transferred by predation.

Other constraints in biology (at least if evolutionary history is any guide) include the primary use of proteins for construction of molecular machinery and the use of ribosomes programmed by nucleic acids for their manufacture. These constraints (while not essential to the present discussion) have been shown to limit severely the materials and processes available to biological systems [11,12].

4.1.2. Foundations of idealized markets[modifier]

In the attempt to characterize an idealized biological ecosystem, it is fortunate that the boundary between living and non-living systems is still fairly clear. In contrast, human markets exist in the context of governmental activity and crime-this makes idealization a larger and riskier task. The following will analyze idealized market ecosystems with simple foundational rules of sorts that can be rigorously enforced in a computational context. This analysis draws heavily on analogies with human markets, while omitting certain difficulties that are excluded by the idealization. Section 6.2 will build on the concept of an idealized market, describing a direct market ecosystem with further differences from human markets; again, these differences are inspired by the possibilities of computational systems.

The basic rules of human markets are typically encoded in legal systems and enforced by attempting to catch and punish violators. In a computational setting, these rules can be enforced as unbreakable "physical" laws. In particular, rights of property (or ownership) can be implemented through encapsulation [I,IV,V]; unforgeable currency and trademarks can be implemented through public key systems [13,14]. The computational entities within such a system could no more steal than human entities can travel faster than light. These abilities can provide the foundations for idealized markets.

An idealized market can contain a great variety of resources, analogous to items ranging from land to airplanes to currency tokens. These resources can be owned by individual entities or groups. No one may take such resources from their owner, but an owner may voluntarily transfer them to any other party. When this occurs in a reciprocal exchange, we refer to the transaction as a trade. Exchanges cannot increase the quantity of any resource-resources are locally conserved across transfers. Productive activity, however, can increase supplies of many resources.
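A minimal sketch of owner-controlled, locally conserved resources follows; in a true agoric system this invariant would be enforced by encapsulation rather than by a shared ledger, and all names here are invented:

    class Ledger:
        """One resource; transfers conserve the total quantity."""
        def __init__(self, initial: dict[str, float]):
            self.balances = dict(initial)   # owner -> quantity held

        def transfer(self, owner: str, recipient: str, amount: float) -> None:
            """Voluntary transfer by the owner; no one can take."""
            if amount < 0 or self.balances.get(owner, 0.0) < amount:
                raise ValueError("owner lacks the resource")
            self.balances[owner] -= amount
            self.balances[recipient] = self.balances.get(recipient, 0.0) + amount

    fuel = Ledger({"alice": 10.0})
    fuel.transfer("alice", "bob", 4.0)
    assert sum(fuel.balances.values()) == 10.0   # locally conserved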

By the rules of the game, anyone may produce new airplanes. Were this the case for currency tokens, however, they would be useless as currency. The rules of an idealized market therefore permit manufacture of a given currency only by a corresponding mint [14]. The effects of minting and introducing new currencies into a market ecosystem are complex [15,16], and are ignored in this paper. The following assumes a currency which is both locally and globally conserved.

A key difference between biological and idealized market ecosystems is the ability to establish and use unforgeable identities. Nature abounds in examples of mimicry and in imperfect attempts to avoid being mimicked [17]. A right to trademark is here defined to be one of the rules of idealized markets. Any entity may establish a new trademarked identity and attach it to that entity's product or place of business. No entity may do this with another's trademark.

4.1.3. Variation and selection[modifier]

In biology, the replicators are genes, and the variation mechanism is relatively random mutation of an organism's genetic code-its genotype. This does not mean biological variation is random. An organism's phenotype-its body structure, abilities, and so forth-determines its success [18]. An organism's phenotype is decoded from its genotype through a complex translation process. The encoding of a phenotype determines what other phenotypes are reachable by small mutations of a genotype. Therefore this encoding itself can embody heuristics for plausible phenotypic mutations (for example, leg-lengthening mutations interact with the heuristic rule of embryonic development that says, in effect, "do the same to both sides"). As explained in [III], AM and EURISKO employ computational embodiments of this principle (as do genetic algorithms in classifier systems [19]). Nevertheless, biological variation is essentially short-sighted, limited to incremental changes.

The variation and selection mechanisms of market ecosystems are less constrained to local optimization or hill climbing than are those of biological ecosystems. In biological evolution, temporary changes for the worse (temporary travel downhill) will typically lead to extinction through competition from organisms that have not gone downhill. This, together with the small steps possible through typical mutations, greatly limits the ability to reach distant peaks, however high they may be.

Variation in the human marketplace (as in the computational markets described below) frequently results from invention and design by people (or other entities able to plan ahead) who can design coordinated sets of non-incremental changes. Investors in a market (e.g., venture capital firms, in the human market) can take into account arguments for anticipating future success despite present failure, and invest in crossing a valley to get to a higher hill. Biological variation cannot take such arguments into account. By rewarding successful investors, markets select for entities that can facilitate these large jumps. Design and evolution are sometimes presented as mutually exclusive principles, but market ecosystems make use of both.

4.1.4. Success metrics[modifier]

Success (or "fitness") in an evolutionary process is sometimes defined in terms of long-term survival, but doing so would give little help in analyzing the short term. Also, the goal here is to use evolutionary reasoning to predict the nature of an ecosystem, not to determine what types of creatures will be around at some distant time. For these purposes, a useful criterion of a replicator's success is the magnitude of its ability to affect its environment. This is itself hard to measure, giving reason to seek a metric which is positively correlated with this ability.

In biology, control of biomass and free energy correlates with ability to engage in biological activity. In a market ecosystem, an entity's net worth is the measure of its resources, and hence a rough measure of its potential ability to engage in market activity. The following analyzes strategies for achieving these kinds of success.

4.2. ESSs in biological ecosystems[modifier]

To survive, animals must eat animals or plants; this most often happens through predation. (Here entities which are eaten, whether animals or plants, are termed "prey".) This is not a synergistic or symbiotic process-the incentives are not toward cooperation between predator and prey. If they were, the participants would both try to facilitate the predatory transaction.

________________________________________

Figure 4: Predator and prey. In a biological ecosystem, predators forcibly overcome prey defenses to obtain food. Force and defenses evolve in an arms race, adding overhead to the predatory "transaction"; the lines representing force and defense are accordingly thick. Since attack may come from any direction, defenses are shown surrounding the prey.

________________________________________

Instead, the incentives lead to an arms race in which predators develop better "teeth" and prey develop better "armor". "Teeth" here refers to any mechanism for facilitating predation, such as a cheetah's legs or a dog's nose. "Armor" here refers to any mechanism for avoiding being preyed upon, such as a gazelle's legs or a skunk's scent. An animal without effective "teeth" would starve, and one without effective "armor" would rarely live long enough to reproduce.

Plants are seldom predators, but are often prey. As prey, they develop spines, grit to wear down teeth, and poisons (many are carcinogenic [20]). This has spurred another arms race in which animals have developed large, grinding molars and biochemically complex livers to deal with poisons. Plants compete for light in yet another arms race. This has led to the growth of trees which invest in huge wooden columns for the sake of tallness, to intercept their neighbors' light. Efficient, cooperating plants would instead cover the Earth with grassy or mossy growth; their energy would be stored, not in inert wood, but in sugar, starch, oil, or some other metabolizable reserve.

Predation is a negative-sum relationship: one of the participants benefits, but only at a greater cost to the other. Biological competition is roughly zero-sum in the short term, but spurs a wasteful negative-sum arms race over the long term. Of course, there are many examples of symbiotic, positive-sum relationships in biology, but the basic ESS of biology is one of "teeth and armor".

4.3. ESSs in idealized market ecosystems[modifier]

In order to sustain activity, players in the idealized market must obtain valuable resources, or goods. They can do so through the equivalent of solitary prospecting and manufacture, but the limited competence of any one entity will favor obtaining goods from others-that is, division of labor. Since the rules of the idealized market make it impossible to seize goods by force, one entity can obtain another's goods only by inducing it to engage in a voluntary transaction. An entity which simply gives away goods would steadily lose resources and influence compared to one which does not; such a strategy would not be an ESS. Therefore, the strategy of simply accumulating donated gifts would also not be an ESS.

To induce "gifts", an entity must offer something in exchange. For both sides to want to trade, each must value the goods received more than the goods given. Pair-wise barter deals of immediate mutual benefit are hard to find, and would yield only part of the full potential benefit of trade. Large multi-way deals would yield the full benefit, but are difficult to negotiate. Trade of goods for currency is therefore the expected dominant pattern; currency makes it easier for the equivalent of large multi-way barter deals to occur through separate pair-wise trades.

Each trade in a market can be seen as moving the system toward a condition (one of many possible) in which no transaction that will make both parties better off remains to be done (a condition known as Pareto optimality). Each trade can be seen as a hill-climbing step. Pair-wise barter amounts to hill-climbing across a rough terrain with few available moves; trade in a system with currency and prices amounts to hill-climbing across a smoother terrain with many available moves.
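
A minimal sketch of this hill-climbing view (our toy model, not from the paper): two agents trade a good for currency at a price between their valuations, so each completed trade weakly raises both agents' utility; when no such trade remains, this toy economy has reached a Pareto-optimal point.

```python
# Toy model of trade as hill-climbing (our illustration).  A trade
# occurs only when the buyer values the good more than the seller;
# the price splits the gains, so both utilities weakly increase.

def utility(agent):
    return agent['cash'] + sum(agent['values'][g] for g in agent['goods'])

def trade(seller, buyer, good):
    if good not in seller['goods']:
        return False
    low, high = seller['values'][good], buyer['values'][good]
    if high <= low or buyer['cash'] < high:
        return False                     # no mutually beneficial deal
    price = (low + high) / 2             # split the gains from trade
    seller['goods'].remove(good)
    buyer['goods'].append(good)
    seller['cash'] += price
    buyer['cash'] -= price
    return True

a = {'cash': 10.0, 'goods': ['apple'], 'values': {'apple': 1.0}}
b = {'cash': 10.0, 'goods': [], 'values': {'apple': 3.0}}
before = utility(a), utility(b)
trade(a, b, 'apple')
after = utility(a), utility(b)
assert after[0] >= before[0] and after[1] >= before[1]  # both climbed
```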

In a trade of goods for currency, we term the player paying currency in exchange for goods a consumer, and the player selling goods in exchange for currency a producer. A set of producers competing to provide the same (or very similar) goods we term an industry.

Consumer-producer relationships can be contrasted to predator-prey relationships. Producers, unlike prey, will voluntarily seek out those who want to consume what they have, using advertising, distribution networks, and so forth. Consumers, less surprisingly, will seek out producers (in human markets, they do so by reading advertising, traveling to stores, and so forth). The symbiotic nature of this interaction is shown by the interest each side has in facilitating it. Since trade typically increases the viability of both participants, it also raises the viability of the pair considered together as an entity.

There are many negative-sum pair-wise relationships in even an ideal marketplace-the most common is competition among producers in the same industry. In judging the nature of the market as a whole, however, it is important to note that when producers compete, each is competing to do a better job of cooperating with the rest of the world, of attracting consumers into beneficial trade relationships.

In the absence of perfectly-effective rules to prevent it (which seem difficult to define, even in the computational domain), markets will suffer from fraud. A fraudulent transaction occurs when a producer induces a consumer to pay for an undesired good under false pretenses.

It is worth distinguishing fraudulent trades from those in which (in light of later information) a different alternative would have yielded a better product or a better price. Non-optimal trades are universal, given imperfect knowledge (which will be ubiquitous in computational markets), but this observation would argue against the use of market mechanisms only if someone could find a better way to use imperfect knowledge. Unlike fraudulent trades, non-optimal trades are still symbiotic; they merely fall short of an imagined perfection.

The possibility of fraud, together with the difference in quality among available deals, creates an incentive for consumer wariness. Wary consumers will in turn create an incentive for producers to avoid fraud, and for them to offer high quality (though not necessarily optimal) deals. The resulting ESS is to be "productive and wary"-wary as a consumer and productive and honest as a producer-for many of the same reasons that "nice and retaliatory" is Axelrod's ESS. Given a variety of strategies in a brand-new market ecosystem, one can expect that fraudulent producer strategies will initially profit at the expense of non-wary consumer strategies. As a result, wary consumers will grow in importance, driving out fraudulent producer strategies. These considerations will even drive out honest producer strategies which are noticeably less productive than their competitors. At a finer level of resolution, of course, there will be as many detailed strategies for being productive and wary as there are niches for entities in the market.
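
These dynamics can be sketched as a toy replicator model (our construction; the payoff numbers are arbitrary, and such toy dynamics can cycle in the long run): fraud profits while consumers are mostly naive, wariness spreads because it avoids fraud losses, and rising wariness then makes honesty the better producer strategy.

```python
# Toy replicator dynamics (our construction; payoffs are arbitrary).
# f = fraction of producers that are fraudulent; w = fraction of
# consumers that are wary.

def step(f, w, dt=0.05):
    pay_fraud  = 2.0 * (1 - w)        # cheating pays only against the naive
    pay_honest = 1.0                  # steady margin with any consumer
    pay_wary   = -0.1                 # cost of checking up on producers
    pay_naive  = -1.5 * f             # expected loss to fraud
    f += dt * f * (1 - f) * (pay_fraud - pay_honest)
    w += dt * w * (1 - w) * (pay_wary - pay_naive)
    return f, w

f, w = 0.2, 0.05
for _ in range(400):
    f, w = step(f, w)
print(round(f, 3), round(w, 3))
# fraud first grows against naive consumers, then falls once wariness
# spreads past the point (here w = 0.5) where cheating stops paying
```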

How can a consumer be effectively wary? The producer-consumer relationship is similar to an iterated prisoner's dilemma. If a producer fraudulently sells a worthless product, he is "defecting" on the arrangement. A wary consumer must take the trouble to notice this defection in order to retaliate (for example, by doing business elsewhere and warning others away). Checking for a defection can be expensive, however, and consumers are frequently in non-iterated situations. Reputation agencies like Consumer Reports can lower the cost of wariness and make it more effective by putting the producer in an iterated situation with the community as a whole (see the discussion of reputation agents in [I]). Trademarking of services and products enables producers to establish valuable reputations. The lack of this mechanism in biology [17] contributes to the relative sparseness of symbiosis there.

4.4. Food webs and trade webs[modifier]

Biological and market ecosystems both contain a mixture of symbiotic and negative-sum relationships. This paper argues that biological ecosystems involve more predation, while idealized market ecosystems involve more symbiosis. Indeed, one can make a case that this is so even for human market ecosystems-that biological ecosystems are, overall, dominated by predation, while market ecosystems are, overall, dominated by symbiosis.

In human markets (as in idealized markets) producers within an industry compete, but chains of symbiotic trade connect industry to industry. Competition in biology likewise occurs most often among those occupying the same niche, but here, it is predation that connects niche to niche. Because of the lack of reputations and trademarks, symbiosis in biology occurs most often in situations where the "players" find themselves in a highly-iterated game. In the extreme, the symbiotic system itself becomes so tightly woven that it is considered a single organism-as with lichens composed of fungi and algae, or animals composed of eukaryotic cells containing mitochondria. Predation, of course, links one symbiotic island to the next.

Ecology textbooks show networks of predator-prey relationships-called food webs-because they are important to understanding ecosystems; "symbiosis webs" have found no comparable role. Economics textbooks show networks of trading relationships circling the globe; networks of predatory or negative-sum relationships have found no comparable role. (Even criminal networks typically form cooperative "black markets".) One cannot prove the absence of such spanning symbiotic webs in biology, or of negative-sum webs in the market; these systems are too complicated for any such proof. Instead, the argument here is evolutionary: that the concepts which come to dominate an evolved scientific field tend to reflect the phenomena which are actually relevant for understanding its subject matter.

4.5. Is this picture surprising?[modifier]

Nature is commonly viewed as harmonious and human markets as full of strife, yet the above comparison suggests the opposite. The psychological prominence of unusual phenomena may explain the apparent inversion of the common view. Symbiosis stands out in biology: we have all heard of the unusual relationship between crocodiles and the birds that pluck their parasites, but one hears less about the more common kind of relationship between crocodiles and each of the many animals they eat. Nor, in considering those birds, is one apt to dwell on the predatory relationship of the parasites to the crocodile or of the birds to the parasites. Symbiosis is unusual and interesting; predation is common and boring.

Similarly, fraud and criminality stand out in markets. Newspapers report major instances of fraud and embezzlement, but pay little attention to each day's massive turnover of routinely satisfactory cereal, soap, and gasoline in retail trade. Crime is unusual and interesting; trade is common and boring.

Psychological research indicates that human thought is subject to a systematic bias: vivid and interesting instances are more easily remembered, and easily remembered instances are thought to be more common [21]. Further, the press (and executives) like to describe peaceful competition for customer favor as if it were mortal combat, complete with wounds and rolling heads: again, vividness wins out. These factors go far to explain the common view of market and biological ecosystems.

For contrast, imagine that symbiosis were as fundamental to biology as it is to markets. Crocodiles would not merely have birds to pick their teeth, symbiotic bacteria in their guts, and the like; they would have symbiotes to provide them with orthodontia and tooth crowns, to say nothing of oral surgery, heart surgery, and kidney transplants, as well as shoes, clothing, transportation, housing, entertainment, telecommunications, massage, and psychiatric care.

Likewise, imagine that predation were as fundamental to markets as it is to biology. Instead of confronting occasional incidents of theft in a background of trade, one would be surrounded by neighbors who had stolen their cars from dealers who had mounted an armed assault on factories in Detroit, which in turn had grabbed their parts and equipment by pillaging job-shops in the surrounding countryside. So-called "hostile corporate takeovers" would involve, not purchase of shares of ownership from willing stockholders, but a sudden invasion of offices by an armed gang.

Biological ecosystems have evolved creatures and environments of great beauty and complexity, and they exhibit a grand spontaneous order, but that order is quite different from the synergistic, symbiotic order of the market. If the aim in building computational ecosystems were to maximize their beauty and complexity, biology might be an excellent model. Given the goal of building a computational ecosystem which will organize itself to solve problems, however, one should seek a system that fosters the cooperative use of specialized knowledge and abilities. Market ecosystems seem better suited to this.

4.6. Are markets just biological?[modifier]

It might be objected that the mechanisms which facilitate widespread symbiosis in market ecosystems are achievable within the rules of biological ecosystems. After all, these rules do not forbid organisms from pooling their resources to defend against predators or from establishing reputation and trademark systems. Indeed, this has been done. Through such institutions as laws, courts, banks, and trademarks, talking primates have taken their species through the transition from "nature, red in tooth and claw" to an industrial civilization spanning a planet. Though this has been achieved within the rules of biology, biological rules do not deserve the credit, any more than a machine language deserves credit for the virtues of Lisp.

This comparison of biological and market ecosystems suggests some of the strength of markets as a model for computation. The following examines a simpler computational ecosystem and considers whether market mechanisms would be useful in similar systems.

5. EURISKO and markets[modifier]

Lenat's EURISKO system [6] may be viewed as an ecosystem in which the replicators are interacting, evolving heuristics-that is, evolving, computational rules of thumb. Two kinds of heuristics populate EURISKO: object-level heuristics whose subject domain is some topic to be explored (such as war games or three-dimensional VLSI), and meta-heuristics whose subject domain is heuristics. Variation occurs as meta-heuristics create new heuristics from old ones via mutation, and selection occurs as meta-heuristics judge and determine the "interestingness" of heuristics (a numeric value associated with a heuristic). This quantity determines allocation of processing resources, so heuristics compete for survival and influence based on their degree of "interestingness".

To apply EURISKO to set theory, for example, one would start it with a set of object-level heuristics representing basic concepts of set theory (such as set equality and union), meta-heuristics for creating plausible new concepts out of old ones (e.g., by removing a conjunct from a predicate to weaken it), and meta-heuristics that attempt to capture a sense of what is mathematically interesting (such as an unexpected coincidence). EURISKO's predecessor, AM, started with exactly this, and was able to discover (and judge interesting) in turn: whole numbers, addition, multiplication, factorization, primality, and Goldbach's conjecture.

(There has been some controversy over the sense in which AM can be said to have discovered these concepts, and whether it is a reproducible result. For a discussion of these issues, see [III]. Also, note that many of these results have been reproduced by Ken Haase's EURISKO-like program, CYRANO [22].)

In AM, meta-heuristics mutated and judged object-level heuristics, but did not themselves evolve under each other's influence. This was changed in EURISKO: here both kinds of heuristics exist at the same level of the system and can operate on each other freely. This enabled the mechanisms of variation (including the representation language) and the selective pressures to evolve and adapt with the rest of the system.

5.1. Parasitic loops[modifier]

During a run of EURISKO, however, one meta-heuristic rose to the maximum interestingness value merely by listing itself as a creator of whatever heuristics were judged interesting. As a "creator" of such interesting heuristics, it was itself judged interesting. Lenat reports that EURISKO has, when run for long periods, consistently invented ways to enter infinite loops of one sort or another. This problem may be viewed as the evolution of parasites, of systems that consume resources while returning nothing of value.

Lenat has limited this problem by hand-tuning sets of meta-heuristics, and by placing certain meta-heuristics on a frozen meta-level protected from evolutionary modification. But doing this can solve the problem of parasitism only if all entities which assign interestingness are placed in this frozen meta-level. A major part of the system is thus unable to evolve beyond its initial design, and hence unable to adapt to unforeseen changes within the system. This type of solution loses much of the flexibility sought in the move from AM to EURISKO.

"Interestingness" is a standard of value which can be used to claim system resources; if evolving meta-heuristics are allowed to assert value-and hence to generate claims from nothing-then parasites can evolve. If direct self-reward is forbidden, jointly self-rewarding "conspiracies" would spontaneously arise. For example, if a heuristic is consistently being judged interesting by a particular meta-heuristic, it is an ESS for it to discover some way to feed some of the resulting resources back to that meta-heuristic, that is, to find a way to pay a "kickback" to a judge (not to influence the judge in its favor, but to increase a favorable judge's influence). This problem can also be seen as a "tragedy of the commons" [23]: processing resources are the commons, and since the cost of using them (in terms of forgone alternative uses, etc.) is borne almost exclusively by others, each entity has an incentive to gobble resources wantonly.

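The failure mode is easy to reproduce in miniature. The sketch below (ours, not Lenat's code, with invented names) shows how a meta-heuristic that may freely assert interestingness can mint credit for itself merely by claiming authorship of whatever is judged interesting.

```python
# Toy model (ours) of the parasitic loop: if credit for "interesting"
# heuristics flows to their listed creators, a heuristic that falsely
# lists itself as a creator is rewarded for producing nothing.

class Heuristic:
    def __init__(self, name):
        self.name = name
        self.interestingness = 1.0
        self.creators = []

def credit_creators(h, amount):
    # reward whoever is listed as a creator of an interesting heuristic
    for c in h.creators:
        c.interestingness += amount

parasite = Heuristic('parasite')
useful = Heuristic('useful')
useful.creators.append(parasite)     # parasite *claims* authorship
for _ in range(10):
    credit_creators(useful, 1.0)     # judged interesting -> parasite paid
print(parasite.interestingness)      # 11.0: value asserted from nothing
```
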
5.2. Conservation laws[modifier]

This dilemma can be avoided by imposing restrictions on value-assertion at a simple, foundational level, rather than restricting valuation to a static set of complex heuristics. How can this be accomplished? Biology and markets both have locally-conserved quantities (matter, energy, currency) that are measures of success, and both systems have steadily generated new, genuinely interesting results. Sterile self-reinforcement cannot lead to success, because it cannot bring in resources. This principle can be applied to EURISKO-like systems.

An attractive approach is reward based on a locally-conserved currency, used to pay for services and computational resources. This inhibits parasitism through stable foundations which themselves embody no knowledge of heuristics (other than this argument). In such a system, a heuristic must pay for processing and for services provided by other heuristics. Non-productive loops of mutually-rewarding heuristics then go broke, since (by conservation of currency) a group of heuristics can only gain net funds by receiving them from a solvent entity outside the group-an entity that, presumably, either receives or expects to receive some valuable service. A productive entity may unwittingly support an unproductive process, but competitive pressures will help weed out this behavior.
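
To see the conservation argument in miniature, the sketch below (ours, assuming only that currency can be transferred but never created) shows a two-member kickback loop shuffling the same funds back and forth while rent drains them; with no outside customers, the loop goes broke and is evicted.

```python
# Minimal sketch (ours) of conserved currency defeating parasitism:
# transfers move funds between accounts but never create them, so a
# mutually-rewarding loop with no outside income cannot cover its rent.

class Account:
    def __init__(self, funds):
        self.funds = funds
    def pay(self, other, amount):
        amount = min(amount, self.funds)   # cannot pay what you lack
        self.funds -= amount
        other.funds += amount              # conserved: no minting

landlord = Account(0.0)
loop = [Account(10.0), Account(10.0)]      # two heuristics paying kickbacks
for tick in range(100):
    loop[0].pay(loop[1], 5.0)              # "kickback" to a friendly judge
    loop[1].pay(loop[0], 5.0)              # ...and back again
    for h in loop:
        h.pay(landlord, 1.0)               # rent due each tick
    if all(h.funds <= 0 for h in loop):
        print('loop went broke at tick', tick)  # evicted: collected
        break
```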

The elimination of unsupported, non-productive entities by letting them go broke is a robust result of a foundational constraint, not a chancy result of hand-tuned heuristics. It achieves its robustness and universality in the same manner as many physical principles: it relies on a conservation law, and hence is independent of subsystem boundaries and system structure.

5.3. A market-based EURISKO?[modifier]

Market mechanisms suggest how a EURISKO-like system could operate without level boundaries or protected sets of supervisory heuristics. In a system of this sort, when a heuristic is invoked it charges the user enough to pay the costs of providing service, plus a royalty that rewards the heuristic's creators. As users learn to make discriminating choices, heuristics will compete to maximize their performance per unit of system resources consumed. Meta-heuristics that create new heuristics will earn royalties from their creations to the extent that these creations prove useful. Where several heuristics are responsible for creating a given heuristic, they can be viewed as its "stockholders", splitting royalties (or dividends) according to a prior contractual agreement.
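
In code, the royalty arrangement might look like the following sketch (the names, rate, and shares are invented for illustration; the paper specifies no concrete mechanism): each invocation charges the cost of service plus a royalty, and the royalty is divided among the creating heuristics by their previously negotiated shares.

```python
# Sketch (invented names and rates) of royalty flow in a market-based
# EURISKO-like system: users pay cost plus royalty; the royalty is
# split among creator "stockholders" by prior contractual shares.

balances = {'user': 100.0, 'heuristic': 0.0, 'meta_a': 0.0, 'meta_b': 0.0}
stockholders = {'heuristic': [('meta_a', 0.75), ('meta_b', 0.25)]}

def pay(src, dst, amount):
    balances[src] -= amount
    balances[dst] += amount

def invoke(name, user, cost, royalty_rate=0.10):
    royalty = cost * royalty_rate
    pay(user, name, cost + royalty)              # user pays cost + royalty
    for creator, share in stockholders.get(name, []):
        pay(name, creator, royalty * share)      # dividends to creators

invoke('heuristic', 'user', cost=10.0)
print(balances)   # meta_a earns 0.75 and meta_b 0.25 of the 1.0 royalty
```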

Rules for negotiating the terms of such agreements can themselves evolve; the proper division of rewards will itself be rewarded, since it will lead to the evolution of better subsystems. Being able to evolve better division of rewards is important for a capable learning system (see the discussion of genetic algorithms and connectionism in Appendix II of [I]).

The above has outlined how money circulates among heuristics, and how it is ultimately used to pay for processing resources. Where does this flow of money originate? The next section answers this first by re-introducing a protected meta-level, though one that avoids the above problems, and then by explaining how to remove this meta-level as well.

5.4. External funders and open systems[modifier]

In a closed, market-based EURISKO-like system, heuristics pay for the storage space and processor time they use; the funds collected are then recycled to fund computation. If entities external to the economic system control the re-introduction of funds, then the heuristics within the system will be selected for their effectiveness in solving the problem of meeting the criteria for funds allocation. Ultimately, these criteria represent the problem being posed to the system; they could be simple contingent rewards for solving problems.

The funding agency is outside the system, and so not subject to heuristic evolution. It might seem equivalent to Lenat's protected meta-level, but, unlike EURISKO, a system of this sort can contain a fully-capable set of evolving meta-heuristics. The external funders reward only end results; meta-heuristics inside the system can act as internal investors, accelerating the adaptation of the system to the externally-defined goals. Investors and the activities in which they invest all participate in the same one-level market system. Use of this sort of meta-level avoids freezing the criteria for judging intermediate results, or for judging the judges of intermediate results (this resembles suggestions for funding scientific research [24]).

The unobjectionable role of the external funding agency is clear when the system is considered as part of a broader economy, in which external human users provide the funding and hence the feedback. The evolution of programs is ultimately guided by human judgment of what constitutes good performance [VI]; in a market-based, EURISKO-like system, the "supervisory" heuristics that judge other heuristics would themselves be judged by people. This supervisory position entails no special privilege; it results from their role as entities directly funded by users. The EURISKO experience may also be viewed in this light. Lenat's protected meta-heuristics were not really immune from evolution: Lenat himself varied and selected them with respect to the behaviors they encouraged, and allocated physical processors to those versions of EURISKO which he judged more promising.

In a distributed system consisting of EURISKO-like nodes subcontracting to each other, the external funding agency of any one node can consist of other nodes (of course, this principle does not require separate machines). In an open computational market-one using real dollars and connected to the human market-participating humans may be thought of as the ultimate meta-heuristics. They will be the ultimate source and drain for the flow of funds in the system, and will be the ultimate source of variation (at least initially). However, they are in no sense protected meta-heuristics. The flow of real money, and the provision of actually useful software services, will provide an incentive for humans to work to make the system more useful, and for them to buy the useful services being offered (see "Agoric Systems in the Large" in [I]).

6. Memes and markets: direct and indirect market ecosystems[modifier]

Human cultures evolve. Their replicators are any of the information patterns which make up a culture, and which spread (with variation) from human to human via imitation. By analogy with genes, these replicators are known as memes [7]; they include ideas, beliefs, habits, morals, fashions, designs, techniques, jokes, and more. Any pattern which can spread via imitation is a meme, even if its human host cannot articulate it or is unaware of its existence.

It is important to recognize that the replicators of human culture are memes, not people. The lack of this distinction has led to the unfortunate confusion called "social darwinism". Our ability to change our minds allows cultural evolution to proceed not by selection of humans, but, as Karl Popper says, by "letting our theories die in our stead" [4].

Recognition of the evolutionary nature of human culture has inspired computational proposals for aiding the spread of memes among humans [25,26] and for establishing a memetic ecosystem among software entities [VII]. The memes making up human culture are diverse, as are their variation and selection mechanisms. Rather than studying human culture as a single, generic kind of ecosystem, it makes more sense to view it as composed of several interacting memetic ecosystems.

For example, Karl Popper describes science in evolutionary terms [4]. The replicators of science are theories, and their evolution proceeds through a process of conjecture and refutation, that is, variation and selection. The selection criteria include the requirement that a theory be both falsifiable and not actually falsified. (Falsifiable means that if it were false, it could be refuted by experiment.) In science only falsifiable but true theories are ESSs-any theory which is false either can be refuted, or is not falsifiable, and so is subject to rejection. In memetic systems whose replicators are theories, but which apply other selection criteria, theories which are not true may nevertheless be ESSs. Idealizations of scientific inquiry have also inspired computational ideas and systems [27,28].

6.1. Market memes and the indirect market[modifier]

In this paper, the memetic systems of interest are those that shape activities in markets, here called market memes. They include ideas that shape strategies for production, organization, marketing, investment, and much more. Market memes can be embodied in individuals or in groups such as firms, but their mechanisms of selection are indirect, working through the human brain.

Money flows not to successful market memes, but to their hosts. No matter how much money it brings in, a meme is unable to rent more brain space-indeed, it cannot even protect itself from being displaced. Entities directly interacting with an ideal market can own assets which cannot be seized; memes can own no such assets.

Market memes can replicate by spreading from human to human, but for some, this process is difficult. Complex market memes, such as business management skills or organizational patterns, are hard to communicate without introducing great variation. Biological systems can generate and test many small variations of a genetic pattern, replicating the more successful, but human markets can seldom do the same with organizations.

Meta-market memes are memes responsible for generating new market memes; an example would be an idea for how to educate better entrepreneurs. When their results are successful, however, no reward reliably propagates back to the memes responsible. Since meta-market memes do not receive credit for their efforts, people are led to underinvest in them.

Thus, market memes are able neither to benefit directly from their own successes, nor (in general) to replicate and pass on their successful characteristics. These defects in the system for creating, expanding, and replicating market memes make their evolution a slow and clumsy process. Successful practices are recognized and imitated, but quite imperfectly.

Although institutions such as patents, trade secrets, and copyrights attempt to strengthen feedback loops, there is only an indirect coupling between market forces and the replicators of the human market-this system thus constitutes what has here been called an indirect market. In software, however, it seems possible to achieve a direct market-an ecosystem in which the replicators that dominate the evolutionary process are directly rewarded by market success.

6.2. Direct market ecosystems[modifier]

In a direct market implemented in software, a successful heuristic or strategy can directly acquire more processing power and can replicate itself with small variations if it chooses. In these ways, a direct market resembles the biological ecosystem more than it does human markets. In addition, meta-heuristics can generate new software entities from old ones (that give access to the requisite information) by plausible mutation and recombination of the patterns that embody them. The generation of new entities will generally occur only after the participants have negotiated a division of any later rewards (a portion of their shares will, in turn, propagate back to their own creators). These mechanisms directly reward (and thereby encourage) "meta-market" activities, such as inventing new forms of business.

Direct markets have other advantages over human markets. In human markets rules against theft and extortion are enforced imperfectly through mechanisms such as police, courts, and jails. In software, however, these rules can be like "laws of physics". Human markets are plagued by negative externalities (such as air pollution) resulting from the unowned and non-local nature of many common resources (such as air). In software, it seems that these problems can be largely avoided. The basic resources of computation-processor time, memory space, and communications bandwidth-can be allocated without negative externalities [II]. No commons seem needed in computational ecosystems; computational environments need have no analogues of air, water, lines of sight, or rainforests.

The discussion thus far has assumed that computational markets are "idealized markets", in the sense introduced in Section 4.1.2, operating under only simple, foundational rules, preventing non-voluntary transactions analogous to theft. Human markets, however, operate under a wider range of less rigorously enforced rules, imposed by a variety of legal and regulatory institutions. The next section examines whether such institutions might be of use in computational markets.

7. Computational legal systems and markets[modifier]

Like human markets, direct computational markets will have many problems. Computational markets are enough like human societies that it is worth examining mechanisms-such as law and regulation-used by human societies to try to deal with these problems. However, these analogies must be used with care-there are also many differences between human and computational markets. For example, in proposed computational markets:

  • No negative externalities exist in basic resources (processor, memory, communications), hence there are no problems analogous to pollution.
  • Replicators directly own resources, making the evolutionary process more like "social darwinism" than like the actual evolution of human societies.
  • Participants are not people (or even animate), hence they are not hurt when they go broke.
  • Object encapsulation prevents force absolutely, hence there is no "who will watch the watchers" problem.
  • Only information services are sold, hence there are no depletable inventories of manufactured goods.

Given all these differences, one should not demand that government-like systems proposed for computation be closely patterned on those evolved by human societies. A more appropriate use of the social model is as a source of ideas and analogies, and ideas need not be workable in society to be worth considering in computation. Since computer science has already examined centralized forms of organizations, a promising direction to explore is the other extreme, that of highly decentralized models.

7.1. Remaining problems of direct computational markets[modifier]

Direct computational markets can be built so as to exclude theft and negative externalities; this leaves problems such as fraud, overall fluctuations (such as business cycles and depressions), monopolies, and provision of public goods (goods whose provision will benefit entities that do not purchase them, breaking the usual link between public benefit and producer reward). These problems are fundamentally different from the problem of theft, which can be eliminated in computation via simple local rules such as encapsulation. Eliminating any of the problems just mentioned requires the recognition of complex emergent phenomena:

  • Fraud involves the non-delivery of promised value; its elimination would require (at least) understanding all possible representation languages in which these promises can be expressed, and recognition of the conditions defining the fulfillment and non-fulfillment of all promises.
  • Overall fluctuations are typically measured in terms of economic aggregates such as GNP, which involves collecting considerable information about transactions and determining rules for aggregating them.
  • Monopolies can only be recognized by first determining both what constitutes an industry and what constitutes an "anti-competitive" practice.
  • Public goods situations can be recognized only if benefits can be recognized, along with the absence of ways to reward producers for them through normal market mechanisms.

An official fraud-deterring function must at least understand the "advertising" in the system, which is difficult in computation-every inter-object protocol constitutes a different sub-language, and in general, none is globally understood. As explained in Section 4.3, however, and in Section 5.3.3 of [I], computational markets can themselves deter types of fraud which local entities can recognize, just as they routinely judge the comparative values of different opportunities. For a workable system, it seems that the one essential anti-fraud law is also implementable-to prevent fraudulent use of another's trademark.

In human markets, overall fluctuations of an economy (such as business cycles and depressions) have stimulated the creation of governmental institutions such as federal reserve banks and deficit spending; the rise of monopolies in certain industries has stimulated the creation of anti-trust laws. In both cases, there is controversy regarding whether the problems themselves are intrinsic to human markets or whether they result from political intervention [16,29,30,31,32]. Computational markets will be different enough from human markets that these problems may either be severe, or not occur at all. The game theory of these phenomena is so complex that a priori argument may be fruitless; an experimental methodology seems more promising. The simplest approach may be to build computational markets that lack any foundational mechanisms to prevent these problems, and then observe the results.

A public good is one that, if provided, benefits members of a group independently of whether those members have paid for it. Examples include national defense and residential streets. Consider the incentives of a potential contributor: "If I refuse to pay, the good may be provided anyway, and I will have gotten it for free. If I do pay, my contribution will be such a small fraction of the total that I will not significantly increase the amount of the good being provided, so I will refuse to pay." (This is also known as the free-rider problem, because some entities can get a free ride on the efforts of others.) Thus, if left to the market, public goods will be underproduced, from the standpoint of overall benefit.
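
The free-rider logic can be made concrete with a small worked example (our numbers, chosen only for illustration):

```latex
% Free-rider arithmetic (our illustrative numbers, not from the paper).
% Let n = 10 entities each value the good at v = 5, with total cost
% c = 20 funded by equal contributions of c/n = 2.
%
%   Collective:  nv - c = 10 \cdot 5 - 20 = 30 > 0,
%                so provision is worth its cost overall.
%   Individual:  withholding saves c/n = 2; if the good is provided
%                anyway, the withholder still receives v = 5.  Since
%                v > v - c/n, withholding dominates contributing,
%                and the good is underproduced.
```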

Many public goods problems can be avoided in a computational setting. Ingenuity can convert many seemingly public goods into marketable goods, thereby enabling market incentives to reward appropriate production. Examples include the "Dividend Algorithm" presented in [II], and the difference between selling information and selling information-based services explained in Section 6.2 of [I]. Nevertheless, for true public goods, the problem is intractable in a market framework. In the case of human society, a legal principle has been proposed to deal with this issue.

7.2. Public goods and "takings"[modifier]

Richard Epstein has proposed [33] a legal principle derived from the "takings" clause of the U.S. Constitution, which grants to the federal government the power of eminent domain over private property. The takings clause limits forcible seizure of property, stating ". . .nor shall private property be taken for public use, without just compensation." Epstein argues for the principle that any taking from an entity, including taxation, must be compensated by the return of benefits of equal or greater value. It may readily be seen that, where the taking is indeed of net benefit, full compensation (whether in money, goods, or services) will be possible. Where full compensation is quantitatively impossible, the net cost must exceed the net benefit-and the taking itself is therefore undesirable.
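
The quantitative point admits a compact statement (our formalization, not Epstein's notation):

```latex
% Our formalization, not Epstein's notation.  Let B be the total
% benefit a taking produces and C the total cost it imposes on those
% taken from.
%
%   Full compensation is possible  \iff  B \ge C.
%
% If B \ge C, compensation (in money, goods, or services) can leave
% every party at least as well off, making the taking a weak Pareto
% improvement.  If full compensation is quantitatively impossible,
% then B < C: the taking destroys net value and is undesirable on its
% own terms.
```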

To apply this principle requires a complex evaluation of costs and benefits on a case-by-case basis. Epstein, as a legal scholar writing about human society, proposes the use of legal mechanisms (courts and evolved systems of law) presently unavailable in a computational setting. The closest equivalent of such mechanisms would comprise a complex set of heuristics-so complex that it would have to evolve, rather than be built into the computational foundations as a frozen set of rules. How might complex laws evolve in computation?

7.3. Political ecosystems[modifier]

In human societies, legal systems and governmental activities provide a framework for the market. They can be seen as a meta-level with respect to market activity, but they are not protected from evolutionary forces; instead, they evolve within a political ecosystem with its own mechanisms for the variation and selection of laws and interventions.

In recent years, the tools of economic analysis have been applied to the evolution of policies within democratic political ecosystems [34,35]. This work defines vote and growth motives that parallel the well-known profit motive. All these motives have some claim to be motives to act in the public interest; all, in practice, have their flaws:

  • The profit motive: do what people want, as shown by their willingness to pay for the result (but cheat customers, if your reputation won't catch up with you).
  • The vote motive: do what people want, as shown by their willingness to vote for you and your platform (but lie and sell out to special interests, if it will win votes).
  • The growth motive: do what people want, as shown by their elected leaders' willingness to expand your agency (but do not achieve goals too economically, or your budget will be cut).

In each case, evolutionary forces-not psychology-virtually guarantee behavior in accord with these motives: profitable businesses, vote-winning politicians, and growing agencies will (almost by definition) dominate their fields. Evolution selects for those that act in accord with these motives, whether or not they feel these motives, just as evolution selects for genes that act in accord with the reproduction motive, though genes have no minds or motives at all. And when genes build organisms with minds, those minds may feel a sex motive rather than a reproduction motive. In a like fashion, selection of individuals and ideas could, in a hypothetical world, evolve institutions led by public-spirited executives, politicians, and bureaucrats, all subjectively selfless, but all acting in accord with the profit, vote, and growth motives (if only to keep their positions of influence, to enable future good works).

Analysis of the vote motive shows how socially destructive policies can win elections [34,35], hence the idea of correcting computational markets with computational democracies should be approached warily, at best. Further, it is not immediately obvious how a computational democracy would work. If one were to substitute "one object, one vote" for "one person, one vote", the result would be the immediate creation of vast numbers of otherwise useless voting-objects. One would prefer a system with a better incentive structure.

7.4. Meta-market ecosystems[modifier]

Imagine a system in which computational objects can choose to operate in any one of a number of legal environments, and in which new legal environments can be created at any time. Since environments could be copies of other environments, they can replicate; since they can also vary and be selected, they can evolve. A measure of the evolutionary success of a legal environment is its level of use (objects can vote with their nonexistent feet); one should expect the behavior of evolved systems of this sort to be describable in terms of an "attractiveness motive".

Something of this sort is seen in the human world. There are many human markets, each with its own set of rules, and each interacting and competing with the others. Stock and commodity exchanges, diversified corporations, and nations each employ a different set of rules governing their internal markets. In each case, entities with different internal markets are able to trade and compete with each other. Factoring out the other dimensions, these amount to a system of competing legal systems.

Each of these legal systems would have an incentive to approximate Epstein's system, which allows any action that will benefit all participants. When a public goods situation occurs which involves subscribers of several different systems, it would be settled according to prior treaties-when these have been negotiated (for a discussion of similar notions, see [30]). When such treaties have not been negotiated, the public goods problem may go unsolved, and the participants are left with only simple market rules. The penalty for leaving some public goods unprovided may be minor in a computational market ecosystem; no strong example of a public goods problem has so far been proposed.

Even under the selective pressure of competition, it may not be possible to establish a computational legal system that can enforce Epstein's system well. If so, then the simple, stable, low-overhead rules of the computational market ecosystem will be the system of choice. This system is a simple starting point and enables experimentation with alternatives; experience can show whether any are better.

8. Conclusions[modifier]

Although evolutionary reasoning is most often applied to biological ecosystems, it is also of use in understanding human markets, culture, and politics, and adaptive computational systems such as EURISKO. By assuming that an ecosystem's foundational rules will shape its evolutionarily stable strategies, and that these strategies will dominate behavior in the ecosystem, one can relate foundations to emergent properties-including properties sought by a designer. This paper has examined a variety of evolutionary ecosystems and compared them with "direct, idealized-market ecosystems"; for the purpose of evolving useful computational behavior, the latter have strong advantages. Other papers in this volume explore the implementation and properties of computational markets of this sort in greater depth [I,II].

References[modifier]

Papers referenced with roman numerals can be found in the present volume: Huberman, Bernardo (ed.), Ecology of Computation (Elsevier Science Publishers/North-Holland, 1988).

[I] Miller, Mark S., and Drexler, K. Eric, "Comparative Ecology: A Computational Perspective", this volume.

[II] Miller, Mark S., and Drexler, K. Eric, "Markets and Computation: Agoric Open Systems", this volume.

[III] Liskov, Barbara, "Guardians and Actions: Linguistic Support for Robust, Distributed Programs", this volume.

[IV] Rashid, Richard F., "From RIG to Accent to Mach: The Evolution of a Network Operating System", this volume.

[V] Kahn, Kenneth, and Miller, Mark S., "Language Design and Open Systems", this volume.

[1] Lieberman-Hewitt Algorithm: Lieberman, Henry, and Hewitt, Carl, "A Real-Time Garbage Collector Based on the Lifetimes of Objects", in: Communications of the ACM (June 1983) 26, 6, pp.419-429.

[2] Quarterman, John S., Silberschatz, Abraham, Peterson, James L., "4.2BSD and 4.3BSD as Examples of the UNIX System", in: ACM Computing Surveys (December 1985) Vol. 17 No. 4 pp.379-418.

[3] Rees, Jonathan A., and Adams, Norman I., IV, "T: a Dialect of Lisp or, Lambda: The Ultimate Software Tool", in: Proceedings of the 1982 ACM Symposium on Lisp and Functional Programming (August 1982).

[4] Bishop, Peter B., Computers with a Large Address Space and Garbage Collection (MIT, Cambridge, MA, May 1977) MIT/LCS/TR-178.

[5] Smith, Vernon L., "Experimental Methods in the Political Economy of Exchange", in: Science (10 October 1986) Vol. 234, pp.167-173.

[6] Friedman, Daniel, "On the Efficiency of Experimental Double Auction Markets", in: American Economic Review (March 1984) Vol. 24, No. 1, pp. 60-72.

[7] Theriault, D., Issues in the Design and Implementation of Act 2 (MIT AI Lab, Cambridge, MA., 1983) AI-TR-728.

[8] Kornfeld, William A., "Using Parallel Processing for Problem Solving" (MIT AI Lab, Cambridge, MA, 1979) MIT-AI-561.

[9] Ungar, David Michael, The Design and Evaluation of a High Performance Smalltalk System (MIT Press, Cambridge, MA, 1987).

[10] Axelrod, Robert, The Evolution of Cooperation (Basic Books, New York, 1984).

[11] Agha, Gul, Actors: A Model of Concurrent Computation in Distributed Systems (MIT Press, Cambridge, MA, 1986).

[12] Raiffa, Howard, Decision Analysis: Introductory Lectures on Choices under Uncertainty (Addison-Wesley, Reading, MA, 1970).

[13] Dawkins, Richard, The Selfish Gene (Oxford University Press, New York, 1976).

[14] Hofstadter, Douglas R., "Dilemmas for Superrational Thinkers, Leading Up to a Luring Lottery", in: Metamagical Themas: Questing for the Essence of Mind and Pattern (Basic Books, New York, 1985) pp. 739-755.

[15] Genesereth, M. R., Ginsberg, M. L., and Rosenschein, J. S., Cooperation without Communication (1984) HPP Report 84-41.

[16] Shapiro, Ehud, (ed.), Concurrent Prolog: Collected Papers (MIT Press, Cambridge, MA, 1987) in press.

[17] Hirsh, Susan, Kahn, Kenneth M., and Miller, Mark S., Interming: Unifying Keyword and Positional Notations (Xerox PARC, Palo Alto, CA, 1987) in press.

Incentive Engineering for Computational Resource Management[modifier]

Agoric computation [I,II] will require market-compatible mechanisms for the allocation of processor time and storage space. Recasting processor scheduling as an auction process yields a flexible priority system. Recasting storage management as a system of decentralized market negotiations yields a distributed garbage collection algorithm able to collect unreferenced loops that cross trust boundaries. Algorithms that manage processor time and storage in ways that enable both conventional computation and market-based decision making will be useful in establishing agoric systems: they lie at the boundary between design and evolution. We describe such algorithms in some detail.

1. Introduction[modifier]

In the agoric model of computation [II], market mechanisms-prices and negotiations-coordinate the activity of objects. As in existing markets, these mechanisms will allocate resources in a decentralized fashion, guided by local knowledge and decisions rather than by central planning and direction. Through selective rewards, markets will encourage the evolution of useful and efficient objects.

This model raises questions at several levels, ranging from foundations (how can hardware and software support secure market mechanisms?) to high-level emergent properties (how will evolved computational markets behave?). Here we focus on a question at an intermediate level, asking how the basic computational resources of processor capacity and data storage can be managed in a market-compatible fashion, given suitable foundations. We first examine objectives and constraints for decentralized resource management, and then describe a promising set of initial market strategies. These are algorithms that can help seed a system with a workable initial set of objects. Initial market strategies (or initial strategies, for short) must be compatible with current programming practice and (when widely used) must provide an environment in which resources typically have prices that reflect costs-that is, an environment in which other objects can make economic decisions that make sense.

1.1. The role of initial market strategies[modifier]

Agoric systems will evolve from lesser to greater complexity. To begin evolving, they will need a comparatively simple structure from which to build. The role of initial market strategies is to serve as a sort of scaffolding: they must support the right things and have the right general shape, but they must also be replaceable as construction proceeds. Though they must embody sound engineering principles, they need not themselves be architectural wonders.

For computation to occur, storage and processing power must be allocated and managed-functions that existing algorithms handle [1,2]. To get a computational market started, we will need initial strategies that also enable market-based decision making. To enable conventional computation, the use of an initial strategy by a set of objects must ensure the reliable scheduling of processes and allocation and deallocation of storage. To enable globally-sensible, market-based decision making, the use of an initial strategy by a set of objects must provide an environment in which the price of using an object reflects the real cost of that use.

These initial market strategies, however, cannot use price information to guide their own actions. To behave reliably in a conventional computational sense, they must pursue their goals regardless of cost. If resource prices are to make economic sense, however, objects at some level (perhaps contracting with systems of initial-strategy objects) must be willing to forgo or delay actions which will cost more than their estimated worth. Initial market strategies exist to provide an environment in which more advanced market strategies can function and evolve (even evolving to replace the initial strategies themselves).

Initial programming environments can provide initial market strategies as default parts of object definitions. Though initial strategies will perform functions ordinarily handled by an operating system or language environment, they need not be foundational or uniform across a system. The foundational mechanisms of agoric systems will delegate decisions regarding processor scheduling and storage management to unprivileged objects, enabling them to follow diverse policies. Initial strategies will simply provide policies that work, until better policies emerge.

The initial market strategies described here are intended to serve two purposes: first, to provide a proof-of-concept (or at least evidence-of-concept) for the feasibility of decentralized resource management meeting the constraints we describe; and second, to provide a point of departure for further work-a set of ideas to criticize and improve upon. For several of our choices, straightforward alternatives can likely be found. Many will prove superior.

Throughout the present discussion, "object" should be considered a scale-independent notion: an object should frequently be regarded as a large, running program, such as an expert system or on-line database. Large objects may or may not be themselves composed of objects, and objects in general need not incorporate any notion of class hierarchy or inheritance. Some of the following algorithms have relatively high overhead and are not proposed for use with small objects; large objects might use conventional algorithms for fine-grained internal resource management.

1.2. Auction-based processor scheduling[modifier]

Most objects will need only a fraction of the service of a processor, hence we expect rental to emerge as a major means of acquiring processor time. Since objects will frequently be able to trade off processor use against storage requirements, communications use, or service quality, processor time will have a price relative to these other resources. This price will vary from processor to processor and from moment to moment. If an agoric system is open, extensible, and uses real currency, and if machine owners are alert, then the long-term average price of processor time in the system will reflect the external market price of adding more processors to the system-if it were much higher, the owners could profit by adding processors; if it were much lower, they could profit by removing and selling them.

Since the demand for processor time is apt to fluctuate rapidly, proper incentives will require rapidly fluctuating prices. This can be arranged by auctioning each slice of processor time to the highest-bidding process. The urgency of a task can be reflected in the size of its bid. Auctions can also be used to schedule other resources allocated on a time-slice by time-slice basis, such as communication channels. Again, fluctuating prices can provide incentives for delaying less urgent tasks, leveling loads, and so forth.
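
As a concrete illustration (our design; the second-price rule here is one possibility among many, not a mechanism this paper specifies), each slice can be sold to the highest-bidding process:

```python
# Toy auction-based scheduler (our design).  Each time slice goes to
# the highest-bidding process; the winner pays the runner-up's bid, so
# a task's urgency is expressed directly in the size of its bid.

class Process:
    def __init__(self, name, funds, bid):
        self.name, self.funds, self.bid = name, funds, bid

def run_slices(processes, n_slices):
    for t in range(n_slices):
        live = [p for p in processes if p.funds >= p.bid]
        if not live:
            break                        # no one can afford to run
        live.sort(key=lambda p: p.bid, reverse=True)
        winner = live[0]
        price = live[1].bid if len(live) > 1 else winner.bid
        winner.funds -= price
        print(f'slice {t}: {winner.name} runs, pays {price}')

run_slices([Process('urgent', 10.0, 3.0), Process('batch', 10.0, 1.0)], 3)
```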

In discussing the allocation of processing resources, we describe the allocation of raw processor time. Some objects in an agoric system might not purchase time this way, but might instead purchase interpretation services on a similar (or very different) basis. A system that can provide a market in raw processor time can serve as a foundation for more sophisticated services of this sort.

1.3. Rent-based storage management[modifier]

Because storage needs will frequently be transient, we expect that rental from owners will emerge as a major means of holding storage. As with processor time, storage will have a price relative to other resources, and this price will vary across different media, locations, and times. As with processors, the long-term average price of storage in the system will reflect the external market price of adding more storage to the system, if owners are alert to opportunities for profit. We discuss allocation of raw storage space here, but a system that can provide a market in raw space can serve as a foundation for more sophisticated storage services.

The basic idea of our initial strategy and the emergent garbage collection algorithm is as follows:

  • Landlords own storage space.
  • They charge other objects rents at a rate determined through auction.
  • Referenced objects (consultants) charge referencing objects (clients) retainer fees, using these to pay their rent (and the retainer fees charged by their consultants).
  • When objects fail to pay rent they are evicted (that is, garbage collected).

These arrangements provide a natural way to free storage from unproductive uses. If an object cannot or will not pay its rent, then some other object must be bidding more for the available space (if no object was bidding for the space, its price would be zero). Since an object's income (and hence rent-paying ability) reflects the value placed on the object by its users, eviction of non-paying objects will typically improve the overall usefulness of the contents of storage.

Frequently-used consultants will be able to pay their rent out of their usage fees. Rarely-used (but referenced) consultants can charge their clients retainer fees adequate to cover their rent (and that of any consultants they themselves retain). In these relationships, pointers are bi-directional: a consultant also knows its clients. Unreferenced objects will be unable to earn usage fees or charge retainer fees; they will be unable to pay, and will be evicted, thereby accomplishing garbage collection (or forcing migration to a fixed-entry-price archive). This is the basis of the market sweep approach to garbage collection.
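
The sketch below (a toy rendition of the idea, with arbitrary rents and fees) shows the mechanism end to end: an unreferenced loop of objects earns no usage fees, cannot cover its rent, and is evicted, while an object with paying users persists.

```python
# Toy market sweep (our rendition, arbitrary numbers): objects pay
# rent; clients pay retainer fees to the consultants they reference;
# objects that cannot pay are evicted -- the economic analogue of
# garbage collection.

class Obj:
    def __init__(self, name, funds=0.0, income=0.0):
        self.name, self.funds, self.income = name, funds, income
        self.consultants = []          # objects this one retains

def sweep(objects, rent=1.0, retainer=0.5):
    for o in objects:
        o.funds += o.income            # usage fees from outside users
    for o in objects:
        for c in o.consultants:        # clients pay retainers
            fee = min(retainer, o.funds)
            o.funds -= fee
            c.funds += fee
    evicted = [o for o in objects if o.funds < rent]
    for o in objects:
        if o not in evicted:
            o.funds -= rent            # rent paid to the landlord
    return [o for o in objects if o not in evicted], evicted

# an unreferenced two-object loop, and one consultant with real users
a, b = Obj('loop_a'), Obj('loop_b')
a.consultants.append(b); b.consultants.append(a)
useful = Obj('useful', income=2.0)
live = [a, b, useful]
for _ in range(3):
    live, gone = sweep(live)
    print([o.name for o in gone])      # the loop is evicted; 'useful' stays
```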

Rent-based storage management also allows a generalization of pointer types. Some systems distinguish between the traditional strong pointers and weak pointers [3]. A strong pointer retains a referenced object regardless of the cost: it represents an unbounded commitment to maintaining access. A weak pointer maintains access as long as the object has not been garbage collected, but does not itself cause the object to be retained. Weak pointers are an existing step toward economic storage management: they represent a small value placed on access-in effect, an infinitesimal value. This suggests a generalization in which an object will pay only a bounded amount for continued access to an object. This may be termed a threshold pointer. Thresholds may be set in various ways, for example, by limiting the total retainer fee that will be paid, or the total fee that will be paid in a given time period. When multiple threshold pointers reference an object, their strengths add; thus, they integrate information about the demand for retaining the object in a given region of storage. (As we will see, however, any situation in which a consultant asks retainer fees from multiple clients presents a challenge in incentive engineering-why should a client pay, if others will do so instead?)
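
For concreteness, the following minimal Python sketch shows one way a threshold pointer might be represented; the class and method names are invented for illustration and are not part of the design described above. It captures the two properties just named: a bounded per-period retainer commitment, and the additive strength of multiple threshold pointers.

  class ThresholdPointer:
      """A reference that pays retainer fees for its target only up to a
      bounded amount per period: between a strong pointer (an unbounded
      commitment) and a weak pointer (no commitment)."""
      def __init__(self, target, fee_limit_per_period):
          self.target = target
          self.limit = fee_limit_per_period
          self.paid_this_period = 0.0

      def pay_retainer(self, amount_requested):
          payment = min(amount_requested, self.limit - self.paid_this_period)
          self.paid_this_period += payment
          return payment                     # never exceeds the threshold

      def new_period(self):
          self.paid_this_period = 0.0

  # The strengths of several threshold pointers to one object simply add:
  def total_retainer(pointers, amount_each):
      return sum(p.pay_retainer(amount_each) for p in pointers)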

Differing rents in differing media give objects an incentive to migrate to the most cost-effective locations. If clients offer a premium for fast service and demand service frequently, a consultant might best be located in RAM; if slow service is adequate and demand is low, a consultant might best be located on disk, or on a remote server. Caching decisions can thus be treated as a business location problem.

Rent-based storage management solves a standing problem of distributed garbage collection. Consider a loop of objects, each pointing to the next, each on a different company's machine, and all, collectively, garbage. Garbage collection could be accomplished by migrating objects to collapse the loop onto one machine, thus making its unreferenced nature locally visible [4]. But what if the companies don't fully trust one another? By sending the representation of an object to an untrusted machine, the algorithm would allow encapsulation to be violated, giving away access to critical objects and resources. With rent-based storage management, however, migration is unnecessary: since unreferenced loops have no net income but still must pay rent, they go broke and are evicted.

The problem of unreferenced loops crossing trust boundaries highlights the lack of a notion of payment-for-service in traditional approaches to storage management. Introducing this notion seems essential when hardware and data are separately owned. In its absence, distributed systems will be subject to problems in which one person (or entity) forces the retention of another's storage but has no incentive to free it.

As suggested earlier, we do not expect explicit rental arrangements to be economical for very small objects. The appropriate minimum scale is an open question; the ultimate test of answers to this question will be market success.

1.4. Design constraints

As we have noted, initial market strategies must satisfy various constraints, which fall into two classes. First, they must result in a programmable system; this can most easily be guaranteed by ensuring that they meet the familiar constraints that have evolved in systems programming practice. Second, they must result in a system with market incentives, making possible the evolution of the new programming practices expected in agoric open systems.

Systems programming constraints often guarantee some property regardless of cost-for example, guaranteeing that referenced objects will be retained. Sound market incentives require that all resources used be paid for, since to do otherwise in an evolving system would foster parasitism. These two constraints would seem to be in conflict. To resolve this, we introduce the notion of the well-funded object. An object is well-funded if it has immediate access to ample funds to pay for the computations it spawns. A well-funded object might typically represent a human user and fund some computations serving that user. These computations are required to satisfy traditional systems programming constraints only so long as the well-funded object remains solvent, that is, so long as the user is willing to pay their cost.

The chief systems-programming constraint in processor scheduling is that processes be scheduled-that is, that there be a way to arrange bidding such that a well-funded process can be guaranteed non-starvation and hence eventual execution. The chief market-compatibility constraints in processor scheduling are that processor prices fluctuate to adjust demand to the available supply, that objects be able to make scheduling decisions based on price information, and that opportunities for malicious strategies be limited-for example, that a process not be able to force a high price on a competing process while avoiding that high price itself.

Several systems-programming constraints are important in storage management. First, non-garbage-everything reachable by a chain of strong pointers leading from a well-funded object-must not be collected. Second, garbage-everything that is unreferenced and cannot induce other objects to reference or pay it-should eventually be collected. Finally, overhead costs should be reasonable: bookkeeping storage per object should be bounded and the computational burden of the algorithms should scale roughly linearly (at most) with the number of objects.

Market-compatibility constraints are also important in storage management. Objects should manage retainer-fee relationships such that there is an incentive for clients to pay retainers, lest there be an incentive to shirk. A consultant's total retainer fees (which amount to a price for its availability) should reflect real storage costs, to provide non-initial-strategy clients with a reasonable basis for decisions. Finally, objects should not require unbounded cash reserves to avoid improper garbage collection.

Non-initial-strategy objects need not themselves meet system-programming constraints, since they are free to act in any manner that furthers their market success. They will still typically require reasonable computational costs, smooth interaction with other strategies, and bounded cash reserves. A complex market-level object, however, will be unlikely to point strongly at an object having unpredictable or uncontrollable costs. It must therefore be prepared for such consultants to go away. It may also spawn low-priority processes; some of these may never run.

How can a complex object be prepared for loss of access? Current practice already provides many examples of objects able to deal with the unexpected unavailability of objects they use. Programs are frequently prepared for the breaking of inter-machine or inter-process connections, or for the inability to open an expected file. Files are commonly updated so that they are in a recoverable state even if they should suffer the sudden loss of the updater. Argus provides abortable transactions and exception handling [III]. Additional recovery mechanisms can be expected among complex objects in a market environment.

2. Processor scheduling

This section describes initial market strategies for both sellers and buyers. In processor scheduling, we will term the time-seller (or agent for the seller) an auction house, and a time-buyer (or agent for a buyer) a bidder. A system may have any number of competing sellers.

2.1. Auctioning processor time: the escalator algorithm

A standard approach to scheduling processes uses a "first-come, first-served" queue. A newly-ready process always joins the tail of the queue, and the processor always runs the process at the head of the queue. This ensures that each process will eventually run (regardless of processor demand), guaranteeing what is known as non-starvation or fairness. This mechanism does not enable market trade-offs among the needs of different processes, however. A natural approach to doing so is the "highest-bid, first-served" queue. This corresponds to auctioning time-slices, with the queue corresponding to an auction house. Naïvely applied, this would lead to disaster: if the market price of the processor stays above a process's posted bid, the process will never run, and hence never learn that it needs to raise its bid. This defines a central problem in auctioning processor time.

2.1.1. Auction-house initial strategies

A basic question in an auction-based strategy is the nature of the auction: kinds include the double auction, English auction, Dutch auction, and first-price and second-price sealed-bid auctions [5,6]. In a double auction, sellers offer lower and lower prices while buyers offer higher and higher prices until they meet. In the familiar English auction, buyers bid higher and higher prices until the process plateaus; the seller accepts the highest bid. In a Dutch auction, a seller offers lower and lower prices until a buyer claims the item at the present price. In a first-price sealed-bid auction, fixed bids are submitted, and the highest is accepted; in a second-price sealed-bid auction, the highest is accepted, but the highest bidder pays the amount bid by the second-highest.

These auction institutions have differing applicability to the sale of time slices. The double, English, and Dutch auctions (at least in naïve implementations) require that processes be active while bidding for the very processor they need in order to be active-a major problem. Sealed-bid auctions avoid this problem, but they fail to guarantee non-starvation: if the processor price remains above what a process has bid, it will never be scheduled-and if the process is never scheduled, it cannot raise its bid. Thus, auctioning processor time is a bit like trying to auction wakeup pills to a sleeping crowd.

The approach explored here will be a variant of a sealed-bid auction, but the choice between first- and second-price forms remains. In laboratory experiments with human bidders, second-price sealed-bid auctions are known to give results similar to those of English auctions, and both lead to efficient markets (as does the double auction) [5,6]. In the English auction, the winning bidder pays only slightly more than the second-highest bidder; a second-price sealed-bid auction yields a similar result directly. Dutch and first-price sealed-bid auctions lead to less efficient markets.

First-price sealed-bid auctions give an incentive to guess what the next-highest bid will be, and to bid just slightly more. This strategic guessing serves no useful purpose in a market system. Second-price auctions give an incentive to consider only the question: "At what price would my best decision change from 'buy' to 'don't buy'?" This is the price one should bid, since bidding any other price might result in buying (or not buying) when one should not. Estimating this price means estimating actual value, which serves a decidedly useful purpose in the market system.

We have selected a variant of a second-price, sealed-bid auction for our initial market strategy. It may be called an escalating-bid auction.

This system may be visualized as an auction house full of escalators (admittedly a strange image). A process enters the auction by placing a bid on one of the escalators-the greater the bid, the greater the initial height. Each escalator rises at a different rate, raising its bids at that rate. (A special stationary escalator holds fixed bids.) Together, the initial bid and escalation rate are a form of priority. A processor always runs the highest-bidding process. A house rule sets a maximum allowable initial bid-you can get on only at (say) the first five floors.

Each auction house owns or leases a processor, or a certain fraction of its operating time. Escalator data structures make the highest bid readily available (i.e., each escalator is a priority queue). Each non-stationary escalator is characterized by a rate of escalation, escalationRate, measured in currency units per time unit. At a time t, the value of a bid of zero initial value placed on an escalator at time timeOfBid is simply escalationRate * (t - timeOfBid). A non-zero initial bid of value initialBid is assigned a virtual bid-time, timeOfBid, equal to t - (initialBid/escalationRate), and entered accordingly. Thus, each non-stationary escalator is marked with a fixed escalationRate and holds a current list of bids, sorted in timeOfBid order. Each bid includes its timeOfBid, a suspended process, and access to an expense account. The stationary escalator is a special case; instead of a timeOfBid it records a fixed initialBid. (A negative initialBid is acceptable on a moving escalator. We assume that two idle processes are entered with zero bids on the stationary escalator to avoid accepting a negative bid-value; the first always stands ready to run, at the price set by the second.)
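
For concreteness, the following Python sketch (all names hypothetical) implements the data structures just described: each moving escalator is a priority queue ordered by virtual timeOfBid, and each time slice goes to the highest bidder at the second-highest price. The stationary escalator (zero rate) and expense-account handling are omitted to keep the sketch short.

  import heapq, itertools

  _tiebreak = itertools.count()    # keeps heap entries comparable on ties

  class Escalator:
      """All bids on one escalator rise at the same rate, so the bid with
      the earliest virtual timeOfBid is always the highest; a min-heap on
      timeOfBid is therefore the required priority queue."""
      def __init__(self, escalation_rate):
          self.rate = escalation_rate        # currency units per time unit
          self.heap = []                     # (timeOfBid, tiebreak, process)

      def place_bid(self, process, initial_bid, now):
          time_of_bid = now - initial_bid / self.rate   # virtual bid-time
          heapq.heappush(self.heap, (time_of_bid, next(_tiebreak), process))

      def top_value(self, now):
          return self.rate * (now - self.heap[0][0]) if self.heap else None

  def run_slice(escalators, now):
      """Run the highest bidder, charging it the second-highest bid (which
      may be the newly promoted bid on the winner's own escalator)."""
      live = sorted((e for e in escalators if e.heap),
                    key=lambda e: e.top_value(now), reverse=True)
      if not live:
          return None
      _, _, winner = heapq.heappop(live[0].heap)
      candidates = [e.top_value(now) for e in live[1:2]]
      if live[0].heap:                       # the promoted follower
          candidates.append(live[0].top_value(now))
      price = max(candidates) if candidates else 0.0
      return winner, price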

To place a bid on an escalator, one sends a suspended process, an initial bid, and access to an expense account from which the auction house is to withdraw money. When the bid is placed, the auction house immediately withdraws from the expense account enough funds to cover the worst-case cost of handling that bid.

At the beginning of each time slice, the auction house examines the top bid on each escalator, taking the highest bid among them (and promoting its follower) while noting the second-highest bid (taking into account the newly-promoted bid). It then charges the high bidder the amount of the second-highest bid, and gives the high bidder a slice of processor time. If the highest bidder's expense account fails to cover the (escalated) bid, however, it is removed without running, and a bid equaling the balance of its expense account is entered for this bidder on the stationary escalator.

The escalating-bid auction seems well suited to the processor scheduling problem. It avoids the sleeping-bidder problem and it ensures that a processor can accept a bid at any time-crucial, when the commodity to be sold is as perishable as time itself.

2.1.2. Bidder initial strategies

A sufficient initial strategy for a bidder is simply to place a zero bid on the fastest escalator backed by the bidder's own expense account. Note that, among initial-strategy bidders, the escalator algorithm reduces to a round-robin scheduler. In a slight variation, a negative initialBid can be placed to ensure a delay of at least (- initialBid/escalationRate) until the bidder next runs.

2.1.3. Analysis

In an open system, where total processor capacity and demand will be responsive to market forces, the market price of time on a processor will be bounded. Accordingly, a bid placed on any non-stationary escalator will eventually grow large enough to ensure that it is accepted. Thus, non-starvation would be ensured.

Where too much of the demand is unresponsive to price, other conditions are necessary to ensure non-starvation, such as limiting the maximum initial bid, maxInitialBid, to some fixed value (the "fifth floor") as suggested above. Consider a process Z with a bid on a moving escalator. Z will either run or have its bid escalated past maxInitialBid within a fixed time; at that time, only a finite number of other processes can have bids higher than Z's, and if Z is riding the fastest escalator, no new process can be scheduled to run ahead of it. Thus, the auction house guarantees non-starvation to any process that follows the strategy of always entering a bid on the fastest escalator.

The relationship among bids, prices, and rates of use is simple in certain illustrative cases. Assume a stable price for time-slices, equal to P, and an escalator that raises bids at a rate R per time slice; an object that repeatedly reschedules itself after running by placing a zero initial bid on this escalator will receive a fraction of the processor time roughly equal to R/P. Consider an auction house in which a fixed number of processes repeatedly run and reschedule themselves, placing bids with zero initial values and a fixed distribution across the various escalators; assume further that bids are numerous enough and uniform enough to make second-highest bids approximately equal to highest bids. There will then be a steady-state price for a time-slice (with small fluctuations); this price will equal the sum over the escalators of the number of bids on each times the amount of escalation during a time-slice. (This quantity equals the per-time-slice increase in the sum of the bids, and all the money bid is eventually spent on processor time.) Non-zero initial bids will have an effect roughly like that of different escalation rates, and fluctuating rates of bid-placement will cause fluctuations in processor price.
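
A brief worked example with invented numbers may help. Suppose three bids always ride an escalator rising at 2 currency units per slice, and four ride an escalator rising at 1 unit per slice. By the reasoning above, the steady-state price is

  \[ P \;=\; \sum_e N_e R_e \;=\; 3 \cdot 2 + 4 \cdot 1 \;=\; 10 \text{ units per slice}, \]

and a process that repeatedly reschedules itself with zero bids on the rate-2 escalator receives a fraction of the processor of roughly R/P = 2/10, one slice in five.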

Given fluctuating prices (see Figure 1), faster escalation rates will result in higher average costs per time slice. If scheduled at random times, rapidly-escalating bids will strike the market-price line at nearly random times (random sampling would hold strictly if escalation were infinitely fast). As may be seen, slowly escalating bids are unable to strike the price line at the top of sharp price peaks; they are more likely to strike the down-side of price troughs. Figure 1 also illustrates how a strategy of re-bidding at zero on an escalator after every run will, on the average, use more time-slices during broad troughs than during broad peaks, yielding a cost per time slice that is lower than the average cost; conversely, bids placed on fast escalators will pay a higher than average cost.

The overhead of the escalator algorithm is modest and insensitive to the number of bids being escalated. Assume N is the number of bids on an escalator and M is the number of escalators. Placing or removing a bid is then an operation taking a time proportional to log(N), given a suitable choice of escalator data structure (a priority queue). Finding the highest and second-highest bids by searching the top bids is an operation taking a time proportional to M.

2.1.4. Variations

The simplest auction-house initial strategy provides a fixed set of escalators, as described; more complex strategies could create and delete escalators to suit bidder demand. Other extensions would allow bidding for multiple time-slices as a block (up to some maximum size), or enable refunding payment on unused portions of a time slice (and starting the next full time-slice early). Where multiple processors are equally accessible, a single auction house could serve them all. Finally, the owner of a processor could run an auction procedure for a fraction of the available time slices and an entirely different procedure (perhaps some form of futures market for real-time scheduling) in another.

As described, the simplest bidder initial strategy is to schedule a zero bid on the fastest escalator. A more complex strategy might use a fast escalator only for fast service at a (likely) higher price, or slower escalators for slower service, at a (likely) lower price. A positive initial bid on a slow escalator can speed service while still giving better odds of running at a low price than does a bid on a fast escalator. Tasks of strictly limited value (which need not be completed) can be scheduled on the stationary escalator; they will run only if the price of processor time falls low enough. A regularly-scheduled rebidding agent can be used to implement a very broad class of strategies, taking into account new information from bid to bid.

There are several open issues in this approach to processor scheduling. These include finding procedures for:

  • choosing maxInitialBid (where this parameter is needed),
  • choosing the numbers and rates of escalators, and
  • charging for bid-record storage.

In addition to solving the sleeping-bidder problem peculiar to process scheduling, the escalator algorithm provides a low-overhead auction procedure for allocating other resources that are naturally divided into time slices. For example, parameters for a bidding strategy could be part of a packet traversing a network, enabling the packets to bid for access to communication channels.

2.2. Expense accounts

We have described initial market strategies for the relationship between owners and bidders; we also need strategies among bidders, to ensure that they can pay for processing time. Since bidders typically need processing time in order to satisfy external requests, the initial market strategy should follow the dynamic structure of relationships created by request messages from client objects to their consultants.

When a client requests service from a consultant, we assume the client will pay to satisfy the request. We need an initial strategy that enables consultants to charge and clients to pay, all with a minimum of programmer attention (the following strategy does, however, require that objects distinguish between request and response messages). The initial strategy should itself provide neither profit nor loss, and hence should simply require that consultants charge for their operating costs, and that clients pay for them. This initial market strategy must (as always) interact smoothly with other strategies. The initial strategy must accommodate clients wishing to monitor charges or limit payments, and consultants wishing to charge less or more than their expenses (e.g., to promote a new service or to collect royalties).

The initial strategy is as follows: Each process draws operating expenses from its current expense account. A client includes access to its current expense account in each outgoing request. The consultant then uses this account as its current expense account while satisfying the request. This strategy is identical to the protocol specified in the Act 2 language [7] for passing sponsors. Like expense accounts, these give bounded access to processor time [8].

In a set of objects following this initial market strategy, all computation serving an external request will be paid for by the account contained in that request. Since no computation will be cut off while that account remains solvent, well-funded computations will be completed.

In variations on this strategy, a consultant may charge according to whatever policy it wishes, since it is free to draw funds from the incoming account and to use a different account to pay for its computation. If a consultant requires a minimum sum to complete a computation, it can ensure access to this sum by transferring it to a new account at the outset.

A client may limit its payments to a consultant by sending a new account with the request and placing a limited sum in that account. This is like a threshold pointer, in that the client limits its liability at the risk of cutting off valid computation.

A client may monitor its payments to a consultant by sending a shadow account which passes charges through to an actual account while remembering their sum. When the consultant finishes, the client recovers the record of the total charges and shuts down the shadow account. This enables clients to accumulate cost information to guide further requests.
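
A minimal Python sketch of these arrangements follows; the interfaces are invented for illustration. It shows an ordinary expense account and a pass-through shadow account that remembers the sum of the charges, as just described.

  class Account:
      def __init__(self, balance=0.0):
          self.balance = balance
      def withdraw(self, amount):
          if amount > self.balance:
              raise RuntimeError("account exhausted")  # computation cut off
          self.balance -= amount

  class ShadowAccount:
      """Passes charges through to a real account while remembering their
      sum, letting a client audit a consultant's spending on its behalf."""
      def __init__(self, real_account):
          self.real = real_account
          self.total_charged = 0.0
      def withdraw(self, amount):
          self.real.withdraw(amount)         # pass the charge through...
          self.total_charged += amount       # ...and remember it

  payer = Account(100.0)
  shadow = ShadowAccount(payer)
  # consultant.serve(request, expense_account=shadow)   # hypothetical call
  shadow.withdraw(3.5)                       # consultant draws expenses
  print(shadow.total_charged)                # client recovers the record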

3. Storage management

In rent-based storage management, we again must specify strategies both for the relationships between buyers and sellers (here, renters and landlords) and for the relationships among renters (in their roles as clients and consultants). The latter are complex.

3.1. Renting memory: the rental-auction algorithm

In a fully competitive market for storage space, a landlord (having many competitors) will maximize revenue by seeking full storage utilization, setting its rental price at a level at which supply equals demand (the market-clearing rate). An auction-based initial market strategy can approximate this rather well.

A landlord maintains (or uses) an auction house which keeps two data structures, a bid list and a drop list. The bid list records requests for blocks of storage; each request is associated with an object, a desired quantity of storage (limited to a maximum request, maxBlockRequest, of perhaps 1% of total local storage), and a price bid for acquiring it (bidPrice), per unit of memory per unit of time. Bids are accompanied by deposits to cover handling charges. The drop list records already-leased blocks of storage, each associated with an object, a block size, and a unit rental price at which the object would prefer to release it (dropPrice). The lists are ordered by bidPrice and dropPrice respectively. A running total is kept of the amount of wasted space.

We consider the bid list to also contain an infinite number of bids at zero rental price for atomic blocks of storage to be allocated to a free memory sponge object. The sponge will be allocated memory only when no one has any use for free storage; any memory so allocated is entered on the drop list with a zero price. In a mature agoric open system, the demand for memory space should be enormous at low enough prices. With a charge-per-use policy, there is no bound to the amount of software that would migrate to a machine offering a zero storage price; storage of debugging traces and caching of calculated results would likewise expand at a zero storage price. Therefore, one would not expect to see a zero price or see any memory allocated to the sponge.

Fresh unheld space becomes available at the beginning of operations, when space is vacated, when objects are evicted for nonpayment of rent, or when more memory is purchased and added to the system. The auction house then accepts bids from the top of the bid-list, highest bidPrice first. It continues allocating blocks to bidders until it encounters a bid for a block larger than the remaining unheld space. This bid is shelved, the allocation process stops, and the price of this unsuccessful bid is taken as the rental price of storage for all objects during the next time segment.

If, as expected, the blocks requested total more than the storage available, then the maximum unallocated storage will be smaller than maxBlockRequest. If this is 1% of the total, storage utilization will be at least 99%. For example, consider a computer with ten megabytes of main memory and a memory management unit that maps addresses in 1-kilobyte blocks. Memory to be allocated and traded would consist of integral numbers of these 1-kilobyte blocks (which can be mapped arbitrarily, hence we can ignore fragmentation). One percent of 10 megabytes is 100 kilobytes, so this is maxBlockRequest and the largest amount that can be wasted by the above procedure. We assume that any object needing a block bigger than 100 kilobytes can afford the trouble of acquiring it 100 kilobytes at a time.

When a new bid is placed, its bidPrice is compared to the highest bidPrice on the bid list. If it is lower, it is placed on the bid list; if it is equal to or greater than the highest bidPrice and equal to or less than the lowest dropPrice on the drop list, then it is accepted if it requests a block that can be allocated from unheld space, and otherwise is placed on the bid list. If it is greater than the lowest dropPrice, then room may be freed for it.

In this case, the auction house attempts to identify enough space to accommodate the new bidder, starting with the unheld storage and then proceeding to the held blocks lowest on the drop list. Objects responsible for identified blocks are asked to vacate them or to set a higher dropPrice. On vacating, renters are refunded the unused portion of their rent money. This process stops when (1) enough space has been freed, or (2) a block is encountered having a drop price equal to or greater than the bidPrice of the new bidder. In case (1), the new bidder receives storage space and is placed on the drop list at a dropPrice of its choosing; in case (2), the new bidder is placed at the head of the bid list. In either case, the rental price of storage becomes the bidPrice of the highest unsuccessful bidder.
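
The bookkeeping just described might be sketched as follows; the Python names are hypothetical, and the eviction path that asks holders with low dropPrices to vacate is elided for brevity.

  class RentalAuction:
      def __init__(self, total_space):
          self.unheld = total_space
          self.bids = []         # (bidPrice, size, renter), price-descending
          self.drops = []        # (dropPrice, size, renter), price-ascending
          self.rent_rate = 0.0

      def place_bid(self, renter, size, bid_price):
          self.bids.append((bid_price, size, renter))
          self.bids.sort(key=lambda b: b[0], reverse=True)
          self.allocate_unheld()

      def allocate_unheld(self):
          """Accept bids from the top until one no longer fits; that bid's
          price becomes the rental rate for the next time segment."""
          while self.bids and self.bids[0][1] <= self.unheld:
              price, size, renter = self.bids.pop(0)
              self.unheld -= size
              self.drops.append((renter.drop_price(), size, renter))
              self.drops.sort(key=lambda d: d[0])
          if self.bids:
              self.rent_rate = self.bids[0][0]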

To guarantee that resources used will be paid for (and avoid incentives for the evolution of parasitic software), landlords must require payment for a rent period in advance. This payment should cover the cost of the next billing cycle and include a deposit to cover the cost of deallocating memory and of any special services specified in the lease agreement, such as erasure of vacated space (a sort of cleaning deposit). Rental rates will fluctuate during a rent period, with the length of a rent period varying as the inverse of the average rental rate.

Landlords can accept lease agreements of varying lengths, requiring varying amounts of pre-paid rent to allow objects to tune their storage management overhead. They can likewise agree to provide a period of advance notice before collecting rent, giving the renter time to raise money, find alternative storage, or close out its affairs.

A more complex strategy would offer prompt storage allocation (from pre-emptable cache or unheld storage), charging a premium for this service. Alternatively, this and other services could be provided by renters subletting space in their blocks. A useful service would allow a renter to split off a piece of its storage block and post a new drop-list entry for it, allowing the sale of portions of allocated blocks without the overhead of the auction house procedure.

3.2. The market-sweep algorithms

An initial market strategy for renters is to get space by placing high (perhaps escalating) bids, and to keep it by paying whatever is necessary, so long as funds hold out. The challenge is to have funds hold out while the renter should stay, and eventually run out when the renter should vacate. Since a consultant must pay its rent in order to serve referencing clients, the initial market strategy follows the referencing structure among consultants and their clients. This structure is a directed graph containing cycles and changing over time. As a result, these strategies are more complex than those above.

A consultant must not be evicted if it can be reached through a chain of strong pointers starting from a well-funded object (i.e., non-garbage must not be collected). Many objects will pay their rent out of fees charged for their services, but some objects-though never before used-may be of great value in rare contingencies: consider an object that contains plans for coping with the next terrestrial asteroid impact. Objects that are needed but rarely used must survive by charging their clients retainer fees; an initial strategy must assume this worst case.

A system based on retainer fees must avoid several problems. In one approach, objects would, when charged rent, send alert messages to their clients, asking for the needed sum; these clients would do likewise until a solvent object was found to pay the bill. This system has low capital costs, requiring little cash on hand, but it leads to an explosion of circulating alert messages, and hence to unacceptable transaction costs. In an alternative approach, objects would keep enough cash on hand to cover worst-case rent and retainer fees. This system has minimal transaction costs per rent period, but in the absence of information on worst-case rents and fees, capital requirements are unbounded and garbage collection would be indefinitely postponed.

For a system to be acceptable, transaction and capital costs must both be bounded. Transactions should require on the order of one message per pointer per rent cycle, capital required should be some fixed multiple of per-cycle expenses, and the per-cycle retainer fees paid by a client should be reasonably related to the rent paid by its dependent consultants.

Satisfying these constraints proves to be difficult. The component algorithms of the market-sweep approach include the following:

  • The dividend algorithm, an initial market strategy which provides incentives to pay retainer fees to an object and provides it with estimates of future use. It has strategic properties which should make it useful in contexts outside storage management.
  • The retainer-fee algorithm, an elementary algorithm for billing clients. This provides for cash flow in normal circumstances.
  • The alert algorithm, which provides for fast cash flow when needed to prevent improper eviction. It is an initial strategy aimed at guaranteeing systems programming constraints, but it involves a race which can fail given a sufficiently unfavorable combination of cash reserves, rental notice period, message-passing speed, and message path length.

  • The base-demand algorithm, which provides estimates of future cash requirements to minimize the need for alerts. It is essentially heuristic, aimed at tuning reserves and estimating costs. As a cost estimator, it can underestimate the drop in retainer charges that will occur when a new client points into a cyclic reference structure.

3.2.1. The dividend algorithm

Raising money by simply dividing the total retainer charge equally among several directly-pointing clients won't work; it suffers from the classic public-goods problem: each client has an incentive to shirk, as long as another will pay. To pay to retain a consultant while a competitor uses it without paying is an evolutionarily unstable strategy-clients following it will lose in price competition. Multiple clients can pretend to be single clients (and thus cut their liability) by pointing via a middleman (Figure 2). To make simple retainer-charging schemes of this sort work would require peculiar and uneconomical limitations on object interactions, such as (somehow) preventing one object from serving as a middleman for another, or preventing a consultant from offering service to new clients.

A better alternative is for the consultant to collect a large enough surcharge per use to compensate all clients (or their heirs and assigns) for their earlier retainer payments. This approach converts retainer-payment into a form of investment, to be repaid through dividends raised by imposing a per-use surcharge. To make such investment attractive, dividends must be proportional to the amounts invested, and must include compensation for the risk that a consultant may in reality be used seldom (or not at all), yielding few or no dividends.
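
A minimal sketch of this investment-and-dividend arrangement, in Python with invented interfaces, appears below. It takes the per-use surcharge as given; how that surcharge should be set from estimates of future use is exactly the problem taken up next.

  class DividendAccounts:
      """Records each client's accumulated retainer investment in a
      consultant and repays it out of a per-use surcharge."""
      def __init__(self, surcharge_per_use):
          self.surcharge = surcharge_per_use
          self.invested = {}                 # client -> retainer paid so far

      def pay_retainer(self, client, amount):
          self.invested[client] = self.invested.get(client, 0.0) + amount

      def on_use(self, user_account):
          """Collect the surcharge from the user and distribute it as
          dividends in proportion to each client's investment."""
          total = sum(self.invested.values())
          if total == 0.0:
              return
          user_account.withdraw(self.surcharge)        # hypothetical call
          for client, stake in self.invested.items():
              client.receive_dividend(self.surcharge * stake / total)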

This raises the problem of estimating a consultant's future use rate (or use probability). The lower the expected use rate, the higher the per-use charges and dividends must be to compensate clients for their investment. These use-rate estimates must somehow reflect the client's own judgment, lest clients be unwilling or too-willing to pay.

Future use rates for a consultant could be estimated using the sum, average, or median of future use rates estimated and reported by its clients. But why should these reports be accurate? Mechanisms which yield estimates proportional to the sum or average are clearly unstable: if all clients share equally in retainer payments, high-use-rate clients should estimate an infinite use rate, to drive the per-use surcharge for dividend payments to zero; low-use-rate clients should estimate a zero rate, to maximize their dividends. Use of the median mechanism throws away information (e.g., a single high-use-rate client among several low-use-rate clients will have no effect on the estimate); this presumably leads to wasteful strategic behavior.

Another approach would be to accept bids for the privilege of investing, with the winning bidder asking the smallest surcharge-and-dividend per future use. This approach is stable, since payment is voluntary and will typically be justified at some level of dividend, but it fails to integrate market information effectively. Instead, it encourages clients to give a falsely-high impression of their future use rates, to drive down others' bids and hence their own future usage charges. Thus, the prospective bidder must guess others' actions based on what may be actual disinformation. Further, the special role of the low bidder encourages messy strategic behavior.

One would like an algorithm for collecting retainer payments and paying dividends that has better properties. Ideally, it should give clients an incentive to report accurate use estimates, enabling a synthesis of estimates made by those in the best position to know; further, it should provide incentives for simple strategic behavior in typical situations, and it should be insensitive to issues of entity definition-to whether one treats a buying club as an object or as a collection of objects.

3.2.1.1. Description

This section provides a brief, abstract description of the dividend algorithm in its simplest form. Later sections describe its operation more informally, analyze its properties, and describe a slight modification that yields a more practical version.

Definitions of variables:

3.2.1.2. Basic analysis

Clients evolved under competitive pressures will tend to act so as to minimize the expected net costs of their actions. These costs may be analyzed as follows.

3.2.1.3. Analysis of incentives

If other clients suffer from a systematic bias in their usage estimates, this benefits those that estimate their own use correctly. All the inaccurate clients can be modeled as one large, inaccurate client which (by hypothesis) is in an environment in which the remaining client(s) make correct estimates. Accordingly, its inaccuracy is suboptimal, causing losses. As this is a zero-sum game, those losses accrue to the accurate estimators.

But if a client knows that other clients are systematically submitting inaccurate estimates of their future usage, and knows the direction of their bias, it can bias its own estimate to improve its expected earnings. The direction of optimal bias-whether in the same or opposite direction to the others' bias-depends on additional knowledge of the relative magnitudes of the usage rates involved. It should be rare for a client to have all this knowledge. In a typical round, a client submits a number, then pays what is requested of it. It has no direct way to infer others' estimates or their actual future usage rates.

In addition to the price-sensitivity of demand, the fixed transaction costs of paying retainer fees modify these conclusions somewhat, giving low-rate users an incentive not to participate in the process. If enough do not, the resulting underestimate of usage will give a positive return on investment to the clients that do, covering their transaction costs. A full analysis of optimal behavior seems likely to be complex.

3.2.1.4. Adding a time horizon

This would change the sense of what is being estimated from the number of future uses to the probability of use within a bounded time interval. An object R may at any time announce that future estimates will be entered into new dividend accounts subject to a new function w(t), so long as it pays dividends that result from summing the results of the new accounts with the results of the old accounts (which must be updated according to the old algorithm). This maintains all the incentive properties described above and allows retuning of W in a fair way, at the expense of additional overhead.

This algorithm allows a natural way to account for the time value of money, which may be important, since objects recover their investments only after a delay. If all clients submit low estimates, then all will receive greater dividends when R is used; this corresponds to receiving a return on their investment. For example, if they all bias their estimates by assuming a slightly greater value of W than R uses in its calculations, then the result will be as if they receive a certain rate of interest while their investments are repaid. (This holds on the average, assuming that actual use rates match expected use rates.) If different objects seek different rates of return, strategic considerations become more complex. Time-value-of-money considerations should be small in systems open to the external market, because market interest rates measured in percent per year are tiny per day or second. Long-run interest rates will equilibrate in a connected open system-investment will move toward higher rates, driving them down.

3.2.1.5. Accounting costs

A remaining problem with this algorithm is that R must maintain a dividend account for every client ever charged a retainer. This may be corrected in a way that demonstrates a generally-applicable principle for lowering accounting overhead.

What is important to the incentive structure of an algorithm (in the absence of risk-aversion considerations, as is appropriate with small enough sums of money) is not that actual costs have a certain magnitude, but that average costs do so. Random variations in actual expenses and payments make no difference if the amounts are small and the averages are correct. Accordingly, with proper attention to these points and to conservation of currency, charges and payments may be rounded or made on a statistical basis.

In the present case, we seek a principled way to cut off payments to former clients, cutting short the long, exponential tail of the dividend account. This can be done by freezing the magnitude of the account when it reaches a small-enough level, and then giving the account a suitable half-life for total deletion (using decisions based on random numbers to give a certain probability of deletion per unit time). This leaves the expected payback in all future periods unchanged, but makes the expected cost of maintaining the account asymptotically approach zero.
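
A small sketch, assuming a fixed per-period decay factor, shows the freeze-and-half-life trick; the account fields and deletion hook are hypothetical.

  import random

  def settle_period(account, floor, decay_per_period):
      """Called once per rent cycle. Above the floor the account decays as
      usual; below it, the magnitude is frozen and the account instead
      survives each period with probability decay_per_period. Its expected
      payout in any future period is thus unchanged, while the expected
      cost of keeping it approaches zero."""
      if account.balance > floor:
          account.balance *= decay_per_period
      elif random.random() > decay_per_period:
          account.delete()                   # hypothetical deletion hook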

3.2.1.6. Circulation of usage estimates

The usage estimates a consultant collects from its clients are information it can use in estimating the rate at which it will use its own consultants, thereby propagating usage information through the system. The dividend algorithm thus provides local incentives for the combination and propagation of accurate estimates of future service demand, perhaps making possible sophisticated heuristics for anticipatory resource allocation-heuristics that reflect global conditions through purely local interactions. A conservative initial market strategy, however, might be to base usage estimates initially on some global average of initial-object-usage rates, and later on actual experience.

3.2.1.7. Open problems: the dividend algorithm

Several open problems are associated with the dividend algorithm. These include selecting an appropriate value of W and choosing an initial usage estimate in the absence of prior history.

A particularly interesting problem is the exploration of strategies which rapidly propagate future-usage estimates through a network of objects. If R's clients report increases in their expected usage, then R very likely has good reason to report increases in its expected usage of its consultants. General rules for revising these estimates must be stable in the presence of cyclic reference structures, and stable in the presence of clever, self-interested participants.

3.2.2. Normal money flow: the retainer-fee algorithm

The retainer-fee algorithm is the basic strategy for collecting funds to cover an object's rent and retainer-fee obligations. We earlier described initial market strategies as a sort of scaffolding for building a market. We expect the dividend algorithm just described to be scaffolding of a sort that eventually becomes a structural member; the retainer-fee and other initial-strategy algorithms for storage management seem like scaffolding of a sort one expects to be replaced as the construction proceeds. They raise fewer issues of incentives and strategic stability and are intended chiefly to ensure adequate money flow on a heuristic basis.

The retainer-fee algorithm proceeds as follows. In a given cycle each renter-object R has a number of clients, numberOfClients, and a current balance, currentBalance; it calculates (as discussed below) a desiredBalance (a target cash reserve) and a balanceReserve to be set aside for certain classes of payment. The latter is chosen such that the expected expenses for the next rent cycle can be paid without dipping into the reserve. Section 3.2.4, on the base demand algorithm, will explain how these expenses are estimated.

Since this process always eliminates a client from consideration, it can be iterated (counting the first round) no more than numberOfClients times. Given the addition of a billing charge, the maximum un-reimbursed message cost at any time is numberOfClients times the cost per message, hence the cash on hand needed to follow this protocol is strictly bounded and may be covered by a fixed per-client deposit.
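
The text above does not spell out the request round itself; the following Python sketch is one arrangement consistent with the surrounding description, in which each round asks every remaining client for a share of the shortfall and eliminates those that do not pay in full. All interfaces are invented, and the equal split is an assumption (shares might instead be proportional to usage estimates).

  def collect_retainers(renter, shortfall):
      """Iterated billing round; since each unsuccessful round eliminates
      at least one client, there are at most numberOfClients rounds."""
      clients = list(renter.clients)
      collected = 0.0
      while clients and collected < shortfall:
          share = (shortfall - collected) / len(clients)
          paid_in_full = []
          for c in clients:
              paid = c.pay_retainer(share)   # hypothetical client call
              collected += paid
              if paid >= share:
                  paid_in_full.append(c)
          if len(paid_in_full) == len(clients):
              break                          # shortfall met this round
          clients = paid_in_full             # non-payers are eliminated
      return collected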

If iteration of this request process fails to produce enough money to prevent an improper eviction, further measures must be taken. These are the subject of the alert algorithm.

3.2.2.1. Open problems: the retainer-fee algorithm

Two open problems related to the retainer-fee algorithm involve essentially heuristic choices of parameters. One is the choice of how much cash to keep on hand to meet cash-flow contingencies, another is the choice of the length of a pre-paid rent period. An initial market strategy for the former is presented below as the base-demand algorithm. The latter depends on (at least) the cost of storage rental, the cost of processing rent requests, and the likelihood (as a function of time) that one should vacate storage.

Generation scavenging [9] and the Lieberman-Hewitt garbage collection algorithm [1] both rely on the insight that objects have a high infant mortality-that a good predictor of the longevity of an object is its age. Both check new objects more frequently, thereby collecting more garbage with less effort and cost. Our initial strategy can do likewise simply by pre-paying rent for longer times as the renter ages.

3.2.3. Special money flow: the alert algorithm

If the retainer-fee algorithm fails to produce enough money to prevent R's eviction, R sends each of its clients an alert message. Strong pointing fundamentally means responding to alert messages appropriately: other money-handling procedures can fail, but (with caveats about execution time, notice, and rent periods) an unbroken chain of correct alert-processing objects leading back to a well-funded object will suffice to retain R in storage.

The idea of the alert process is to send requests for funds as far up and out through the chain of client relationships as is needed to find an object able to supply ample money. Failure to collect the needed money is assumed to imply that no solvent entity is willing to pay for continued storage of the renter, which may therefore be garbage-collected.

An algorithm for accomplishing this is described in Figures 4 and 5, and code implementing it is listed in the appendix to this paper. Recipients of alert messages first seek funds through application of the retainer-fee algorithm, then send further alert messages if needed. Propagation of alert messages in endless loops is avoided by giving each a unique identifier; objects refuse all but the first alert message with a given identifier.

All alert messages seek the full funds needed to satisfy the original alerter, plus enough to compensate the participating objects for their handling costs. This enables them to maintain a balance (the alertReserve) adequate to process further alert messages. Maintaining an alertReserve is like keeping a quarter for use in emergencies. Should you ever need to use it, you should use that phone call not only to take care of the emergency, but also to request a new quarter.

When a client pays the requested amount, the recipient sends cancel messages to its other clients, informing them that payment is no longer necessary. Due to asynchrony, however, multiple clients may pay before being cancelled. Further, a payment propagating down a chain of client-consultant references may be met by a corresponding cancellation propagating up the same chain. When a consultant finds it has received an unnecessary payment, it reimburses the payer-client. If the payer itself had merely passed the payment down, then it needs to pass the reimbursement up to its payer-client; thus, it needs to remember which client paid it. Since we require bounded storage costs for the algorithm, the payer needs to know when it can forget this knowledge. This occurs when the payer is reimbursed or thanked for payment-when an alert payment is actually used for rent, thank you messages propagate back up the path the payment came from. This algorithm allows a client to process only a single alert message at a time (another source of potential delay). A client must have enough storage to queue up one message per consultant, and enough cash on hand to send one message per client, hence the resources required to follow this protocol are strictly bounded.
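
A compact sketch of this alert traffic follows (Python, invented interfaces); reimbursement and thank-you messages are omitted for brevity, and collect_retainers is assumed to wrap the billing round sketched earlier.

  class AlertHandler:
      def __init__(self, clients):
          self.clients = clients
          self.seen_alert_ids = set()

      def on_alert(self, alert_id, amount_needed):
          if alert_id in self.seen_alert_ids:
              return 0.0                     # refuse all but the first copy
          self.seen_alert_ids.add(alert_id)
          raised = collect_retainers(self, amount_needed)   # normal means
          for client in self.clients:
              if raised >= amount_needed:
                  client.cancel(alert_id)    # payment no longer necessary
              else:
                  raised += client.on_alert(alert_id, amount_needed - raised)
          return raised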

From the perspective of the dividend algorithm, the alert process may be viewed as another iteration of the retainer-fee request cycle. Accordingly, objects that are prepared to pay alert-message requests promptly will have a competitive advantage over those that are not.

3.2.3.1. Open problems: the alert algorithm

The greatest problem with this alert-message mechanism is the time it requires. Even in the parallel-processing case, the time needed is proportional to the distance from a distressed to a solvent object. In the sense of guaranteeing non-collection of non-garbage objects, it works in the worst case only under the idealized assumption that alert processing times are negligible in comparison to rent pay periods. In practice, this algorithm's range of effectiveness will depend on the real times involved. The outstanding open problem is to develop an algorithm which avoids these delay problems in ensuring that non-garbage objects receive the money they need, or to develop clear (preferably locally-computable) bounds on the correctness of the alert algorithm-that is, to characterize when the algorithm is guaranteed to work.

3.2.4. Cost estimation: the base-demand algorithm

Given the overhead of the alert-message mechanism, one would prefer to minimize its use. To do so requires forestalling emergencies by making sound, conservative estimates of future cash needs. Further, if most objects maintain substantial cash reserves, alert messages need not propagate far to reach a source of funds.

A renter might seek to determine its cash needs for the next cycle by multiplying the last cycle's expenses by a safety margin. Though initially plausible, this approach is unstable in the presence of cyclic client-consultant relationships: objects request money to build up safety margins, and these requests become expenses for their clients; when propagated around a loop, safety-margin multipliers cause an exponential explosion in the cash reserves and requests.

The essential idea of the base-demand algorithm is to circulate estimated-cost information to aid planning, and to do so independently of the more irregular and opportunistic circulation of money. This algorithm operates in parallel with the retainer-fee and alert algorithms just described.

A candidate standard for the adequacy of a desiredBalance is that it call for enough cash on hand to eliminate shortfalls during any rent period, in a steady-state system. This requires determining an upper bound for the rent and retainer requests that may arrive and adding the resulting value to the alertReserve discussed above. One such upper bound consists of R's totalBaseDemand times R's rent period, to account for average expenses, plus the sum of all R's consultants' demand chunks, to account for a worst-case peak in demand (in which, for example, a set of long-period renters all charge retainer fees during one of R's shorter rent periods).
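
Stated as code (Python, with invented field names), this candidate standard is simply:

  def desired_balance(renter):
      average_expenses = renter.total_base_demand * renter.rent_period
      worst_case_peak = sum(c.demand_chunk for c in renter.consultants)
      return average_expenses + worst_case_peak + renter.alert_reserve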

3.2.4.1. Analysis

The dynamic behavior of this system may be visualized in terms of a physical model in which each object's rent obligations are a source of "demand flux lines." When a new renter is introduced in a system with uniform rent periods, the demand-flux lines stemming from its rent extend one consultant-to-client step per rent period until they end in a non-retained funds source-that is, in an entity that pays its obligations out of earnings, capital, and so forth. Where a retained consultant has several clients, the bundle of lines splits but conserves total flux. When an established renter disappears, its associated flux lines suffer a wave of termination, propagating at the same speed.

Figure 6 illustrates several states in a system before and after a new client begins pointing at a looped structure. Where pointing relationships loop, but some pointers enter from the outside (as shown), a certain fraction of demand-flux escapes in each circuit of the loop. This gives the total demand flux an exponential settling behavior in which the equilibrium totalBaseDemand values accurately predict per-cycle expenses. (Non-uniform rent periods change the speed with which lines propagate, but do not change the essential dynamics.)

3.2.4.2. Open problems: the base-demand algorithm

The base-demand algorithm propagates base demand information at an awkwardly slow rate, particularly in the presence of cyclic structures. This can sometimes make emergency cash demands (and resort to the alert algorithm) unavoidable. Better heuristics for determining cash on hand would be desirable.

With the present algorithm, objects in cyclic structures will also report biased cost numbers. In a modified version of the situation shown in Figure 6, if D's estimated use of B is arbitrarily low, then B may report an arbitrarily high base demand estimate to E, although E will actually be charged no more than the sum of A, B, and C's rent. This results in B appearing less competitive than it is. This bias is unpleasant, but at least has stable consequences: if E does decide to retain B, it will be favorably surprised, and hence will have no incentive to immediately reverse its decision.

Many of the problems of this algorithm result from objects participating in cyclic structures of which they are unaware. Finding cycles by propagating full referencing information would violate the privacy of the objects involved. An open problem is to determine how little information about reference structure can be revealed while still alleviating the above problems.

There are also problems with the incentive structure of this algorithm. Information on base demand can be viewed as an indication of the expected storage surcharge for using a consultant. Objects therefore have an incentive to attempt to gain clients by deviating from the algorithm and understating costs. This is similar to the "low-balling" problem in cost-plus contracts-companies may knowingly provide a low estimate of costs while a contractor is being selected; overruns occur later, after enough time and money have been invested in the project to prevent clients from easily switching. A more market-like alternative for estimating future costs might provide rewards and penalties that would avoid this peculiar incentive.

3.3. Applications

How practical are these algorithms for storage management? They are substantially more complex than typical garbage collection algorithms, but this complexity need not be visible to a programmer-it presents a simple interface. In terms of computational resources, though, they are substantially more expensive than typical garbage collection algorithms; this restricts their applicability. One would not use them to allocate and free cons cells in a Lisp system, but one could use them to allocate and free space for large objects-even Lisp systems themselves-which might use conventional garbage collection internally.

In general, algorithms like these will make less sense when objects are small, simple, short-lived, and mutually trusting. They will make more sense when objects are large enough to make their storage costs worth considering (in the sense of "worth the overhead of computing costs and making tradeoffs"). They will make more sense when objects are complex enough to make economic decisions, and long-lived enough for the cost of making those decisions to be amortized over a significant storage time. Finally, some form of market-based storage management seems necessary if objects coded by different groups for different purposes are to make efficient use of machine resources and each other.

Some of the flaws of these algorithms become unimportant if the consequence of evicting an object is merely clearing a copy of it from a local cache or migrating it to a different machine or a different form of long-term storage. If failure to pay rent does not destroy an object, then delays in alert processing can no longer threaten program correctness. Further, large objects are more often candidates for migration than for deletion. Information of the sort circulated by the dividend and base-demand algorithms can help to tune local working-sets of objects in distributed open systems.


4. Initial strategies for trust

Many of the above algorithms make strategic sense only if one object can trust another object to follow them. For example, there are direct financial incentives to embezzle funds or misreport earnings by violating the dividend algorithm, and there are market-share incentives to produce falsely low cost estimates by violating the base-demand algorithm. Further, improper market intelligence (who is using what services?) can be gleaned by comparing alertID values arriving via different consultants. Thus, one needs what may be called initial strategies for trust.

The simplest strategy is for an object to trust whatever existing objects it is initially instructed to trust. This need not lead to great inflexibility or put a great burden on the programmer. Standard initial market strategies for resource management can be provided by a programming environment. In one kind of implementation, a wide range of objects will use instances of the same, small set of initial-strategy objects; these objects will recognize and trust each other, and will be able to interact with other objects in ways that do not assume trust. (Unforgeable identities are an essential foundation for trust.) Thus, use of standard initial strategies can itself be an initial strategy for trust.
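As a minimal sketch of this strategy (in Python; the identity tokens, registry, and method names are illustrative assumptions, not part of the proposal), an initial-strategy object might follow trusting protocols only with peers whose unforgeable identity it was given at creation:

    class InitialStrategyObject:
        """Sketch: trust exactly those peers whose unforgeable identity
        was installed by the programming environment at creation time."""
        def __init__(self, own_id, trusted_ids):
            self.own_id = own_id                    # unforgeable identity token
            self.trusted_ids = frozenset(trusted_ids)

        def trusts(self, peer_id):
            return peer_id in self.trusted_ids

        def interact(self, peer):
            if self.trusts(peer.own_id):
                return "cooperative protocol"       # e.g., follow the dividend algorithm
            return "trust-free protocol"            # assume nothing about the peer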

Other means of building trust are discussed in [II]. They include creating or noticing situations having the characteristics of indefinitely-iterated prisoner's dilemma games [10] (see also the discussion in [I]), use of posted bonds, use of positive-reputation systems, and use of behavior-certification agencies.

5. Probabilistic cash flows

As noted in the discussion of accounting overhead in the dividend algorithm, the incentive structure of an algorithm (in the absence of risk aversion) is determined by its average expected payoffs, which can deviate from its actual payoffs on any given occasion. This principle has general applicability.

5.1. Processor accounting

The overhead of the escalator algorithm may be acceptable at the scale of, say, tasks in the Mach operating system [IV], but not at the finer-grained level of Mach threads, Actor tasks [11], or FCP processes [V]. Scheduling of light-weight processes like these might best be handled by a simple round-robin scheduler, which itself buys time through an auction house. How might these light-weight processes be charged so as to subject them to price incentives and compensate the round-robin process for the time it buys-all at low overhead? One approach is to use probabilistic charging: at random, uniformly-distributed times (a Poisson process with mean interarrival time T), note which light-weight process is currently running and charge its sponsoring account T times the current price of processor time. On the average, the round-robin process receives the market price for time; on the average, each light-weight process pays it. And yet on a typical occasion, a light-weight process will run without being charged, and hence without accounting overhead.
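The following sketch illustrates the scheme (the hooks current_process, price_per_second, and charge are hypothetical; only the Poisson sampling and the charge of T times the current price come from the text):

    import random
    from time import sleep

    T = 0.01  # mean interarrival time between charging samples, in seconds

    def probabilistic_charger(current_process, price_per_second, charge):
        # Exponentially distributed gaps make the sample times a Poisson
        # process; each sample bills T times the current price, so each
        # process's expected charges equal price times the time it actually
        # ran, yet on a typical occasion it runs with no accounting work.
        while True:
            sleep(random.expovariate(1.0 / T))
            proc = current_process()                # whichever process is running now
            charge(proc, T * price_per_second())    # debit its sponsoring account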

5.2. Gambling

A different kind of probabilistic cash flow is gambling, wagering money on a chance event. This too has its place.

Consider an object which has just received an alert message asking for more money than it can pay or raise through retainer-fee requests. Sending an alert message may be expensive, in terms of direct communication costs and costs imposed on clients. It is an elementary result of decision analysis [12] that when X% more money has over X% more utility, for some value of X (which requires that the utility-vs.-money curve somewhere be concave upwards), there exists a fair bet (or one with a small "house percentage") that is rationally worth taking. This can be the case both in alert processing and elsewhere in an agoric system.

To illustrate the principle (albeit with absurd numbers), assume that an object has a balance of $50 and receives an alert message demanding $100. Assume further that the object has 10 clients, and that transmitting an alert costs $1 per message. If the object simply alerts its clients and then pays its bill, it will pay a total of $110. If, however, the object gambles the $50 in a fair bet on a double-or-nothing basis, its expected net payment will be half the net payment that will result if the gamble is won (1/2 * $50) plus half the net payment that will result if the gamble is lost (1/2 * ($50 + $100 + $10)). This equals $105, for an expected savings of $5. Similar bets can be profitable, so long as the house percentage amounts to less than $5. Thus, gambling might profitably be made part of a market strategy for alert processing.
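A few lines of arithmetic confirm the example; the function and its name are ours, not part of the proposal:

    def expected_payment(balance, demand, n_clients, alert_cost, p_win=0.5):
        """Expected net payment if the object stakes its whole balance on a
        double-or-nothing bet before resorting to alerting its clients."""
        win_cost = balance                      # stake spent; the demand is paid from winnings
        lose_cost = balance + demand + n_clients * alert_cost
        return p_win * win_cost + (1.0 - p_win) * lose_cost

    print(expected_payment(50, 100, 10, 1))     # 105.0, versus 110 without gambling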

One can predict that market forces will favor the emergence of rational gambling in agoric systems. To provide gambling services, one can expect lottery objects with substantial cash reserves to appear. These will accept payments of X units of currency together with a request for a greater sum Y, and will return Y with a probability slightly less than X/Y.
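Such a lottery object might be sketched as follows (the class and its interface are illustrative assumptions):

    import random

    class LotteryObject:
        """Accepts X units with a request for Y > X; pays Y with probability
        slightly under X/Y, keeping the difference as its edge."""
        def __init__(self, reserves, edge=0.01):
            self.reserves = reserves
            self.edge = edge

        def wager(self, x, y):
            self.reserves += x
            if random.random() < (x / y) * (1.0 - self.edge):
                self.reserves -= y
                return y        # expected payout: x * (1 - edge), slightly less than x
            return 0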

5.3. Insurance

Another (and more respectable) form of gambling is insurance, or risk pooling. This can be based on a form of trust that will arise naturally in an agoric system.

A set of objects sharing a single program (code, script, class) is like a set of organisms sharing a single genome. It is an elementary result of evolutionary theory [13] that the genes of such organisms (in, say, a colony) will be selected for complete altruism among "individuals". And indeed, colonial polyps often share digestive tracts, and thus all their food.

Objects sharing a script can likewise (with full trust) offer to share cash reserves, in effect insuring one another against temporary shortages and expensive alert processing. In insurance terms, the shared incentives of these objects eliminate the problem of "moral hazard", that is, of insured entities taking uneconomic risks because "the insurance company will pay for any losses". Here, objects care as much about the "insurance company" as about themselves (more accurately, "evolutionary pressures will favor those objects which behave in a manner that can be regarded as `caring' in this way"). Objects of types which abuse this mechanism to prevent proper garbage collection will in general have higher costs and lose in price competition. This is a case in which Hofstadter's "superrationality" [14] and Genesereth's "common behavior assumption" [15] will apply.
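A minimal sketch of such script-level risk pooling (the pool interface is assumed; the text specifies only that instances of one script share reserves):

    class ScriptReserve:
        """Shared cash reserve for all instances of one script. Since the
        instances fully trust one another, covering a member's shortage
        poses no moral-hazard problem: helping the pool is helping oneself."""
        def __init__(self):
            self.balance = 0.0

        def deposit(self, amount):
            self.balance += amount

        def cover(self, shortfall):
            # Pay a member's rent or alert shortfall if reserves allow.
            if self.balance >= shortfall:
                self.balance -= shortfall
                return True
            return False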

6. Conclusions

This paper has explored mechanisms for the allocation of processor time and storage that are compatible both with programming practice and with market mechanisms. Processor scheduling through an auction process yields a flexible, decentralized priority system, allowing a variety of strategies that make tradeoffs involving the speed, certainty, and cost of service. Storage can be managed through auctioning of rental space and decentralized networks of client-consultant relationships. This yields a distributed garbage collection algorithm able both to collect unreferenced loops that cross trust boundaries and to accumulate rough price information to guide economic decisions regarding, for example, local caching in distributed systems.

Some of these algorithms (e.g., for processor scheduling) have per-decision costs comparable to those of non-market mechanisms in current use; others have costs that are much greater. In general, these costs will be acceptable for objects of sufficient size and processes of sufficient duration. The question of the appropriate scale at which to apply market mechanisms can be explored through further study, but will best be settled by experience in actual computational markets. The proposals made here can doubtless be improved upon; they are merely intended to illustrate some of the issues involved in incentive engineering for computational markets, and to provide a starting point for discussion and design. Any advance toward lower costs, greater effectiveness, or better incentive structures will shift tradeoff points in favor of finer-grained application of market mechanisms.

Even heavy overhead costs would leave intact a solid case for market mechanisms in computation. This case rests on the value of doing the right thing (or something like it) with some overhead costs, rather than doing something blatantly wrong with polished efficiency. And when finding the right thing to do requires cooperation, competition, and freewheeling experimentation, the value of decentralized systems with market accountability becomes very great indeed.

Appendix: Code for the alert algorithm

The alert algorithm is the most procedurally intricate of the initial strategies described here, and hence the one least suited to description in English. It is documented here by code written in the programming language FCP [V,16]; this code has not been run. To facilitate object-oriented programming, we use a form of syntactic sugar known as "keyword terms" [17]. A keyword term is distinguished from the familiar positional term by curly braces instead of parentheses, and its arguments are identified by the keyword to the left of the colon rather than by position. All unmentioned keywords are taken to be associated with unbound variables. The keyword term "foo{KTerm but bar:a, baz:b}" denotes a term identical to "KTerm" except that "bar" is associated with "a" and "baz" with "b". Keyword terms can be efficiently translated into positional terms.
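For readers unfamiliar with this sugar, the "but" operation behaves like a functional record update; in Python terms, purely as an analogy (not FCP semantics):

    def but(kterm, **overrides):
        """foo{KTerm but bar:a, baz:b}  ~  but(kterm, bar=a, baz=b)"""
        new = dict(kterm)         # copy the original keyword term
        new.update(overrides)     # replace only the named fields
        return new

    state = {"stateName": "idle", "alertQ": []}
    new_state = but(state, stateName="tryingToPayAlert")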

% mem: the object's main message loop, dispatching on message type
% and on the current alert-processing state.

% Alert
% Not idle, same ID: this alert is already being processed; refuse it.
mem([Msg | Self], State) :-
       alert{alertID:ID, alerter:Alerter} = Msg,
       state{stateName:SName, alertID:ID} = State,
       SName =\= idle |
       Alerter = [refusal{alertID:ID}],
       mem(Self?, State).

% IDs differ: try to pay the new alert from the retainer, queueing it
% if that fails.
mem([Msg | Self], State) :-
       alert{alertID:ID1} = Msg,
       state{stateName:SName, alertID:ID2} = State,
       SName =\= idle,
       ID1 =\= ID2 |
       tryToPayAlert(Msg, Self, NewSelf, State, NewState),
       mem(NewSelf?, NewState?).

% Idle
% alertQ empty
mem([Msg | Self], State) :-
       alert{} = Msg,
       state{stateName:idle, alertQ:[]} = State |
       tryToPayAlert(Msg, Self, NewSelf, State, NewState),
       mem(NewSelf?, NewState?).

% alertQ non-empty: dequeue the next alert, pass it on to all clients,
% and await their payments or refusals.
mem(Self, State) :-
       state{stateName:idle, alertQ:[AlertMsg | AlertMsgs],
             clients:Clients} = State |
       alert{alertID:ID, alerter:Alerter} = AlertMsg,
       alertClients(Clients, alert{AlertMsg but alerter:Self1},
                    NumClients, NewClients),
       NewState = state{State but stateName:tryingToPayAlert,
                        alertID:ID, alerter:Alerter, alertQ:AlertMsgs,
                        count:NumClients?, clients:NewClients?},
       merge(Self?, Self1?, NewSelf),
       mem(NewSelf?, NewState?).
% Cancel
% Trying to pay rent
mem([cancel{alertID:ID} | Self], State) :-
       state{stateName:tryingToPayRent, alertID:ID} = State |
       mem(Self?, State).

% Trying to pay alert: pass the cancellation on to the clients.
mem([cancel{alertID:ID} | Self], State) :-
       state{stateName:tryingToPayAlert, alertID:ID, clients:Clients} = State |
       cancelClients(Clients, cancel{alertID:ID}, NewClients),
       NewState = state{State but stateName:idle, clients:NewClients},
       mem(Self?, NewState?).

% Waiting to be thanked
mem([cancel{alertID:ID} | Self], State) :-
       state{stateName:waitingToBeThanked, alertID:ID} = State |
       mem(Self?, State).

% Ids differ: drop the cancelled alert from the queue.
mem([cancel{alertID:ID1} | Self], State) :-
       state{stateName:SName, alertID:ID2, alertQ:Q} = State,
       SName =\= idle,
       ID1 =\= ID2 |
       forgetAlert(Q?, ID1, NewQ),
       NewState = state{State but alertQ:NewQ?},
       mem(Self?, NewState?).

% Idle
mem([cancel{} | Self], State) :-
       state{stateName:idle, alertQ:[]} = State |
       mem(Self?, State).
% Thank you
% Trying to pay rent: a thankYou here is a protocol error.
mem([Msg | Self], State) :-
       thankYou{alertID:ID} = Msg,
       state{stateName:tryingToPayRent, alertID:ID} = State |
       error(Msg),
       mem(Self?, State).

% Trying to pay alert
mem([Msg | Self], State) :-
       thankYou{alertID:ID} = Msg,
       state{stateName:tryingToPayAlert, alertID:ID} = State |
       error(Msg),
       mem(Self?, State).

% Waiting to be thanked: forward the thanks to the payer and go idle.
mem([Msg | Self], State) :-
       thankYou{alertID:ID} = Msg,
       state{stateName:waitingToBeThanked, alertID:ID, payer:Payer} = State |
       Payer = [Msg],
       NewState = state{State but stateName:idle},
       mem(Self?, NewState?).

% Ids differ
mem([thankYou{alertID:ID1} | Self], State) :-
       state{stateName:SName, alertID:ID2} = State,
       SName =\= idle,
       ID1 =\= ID2 |
       mem(Self?, State).

% Idle
mem([thankYou{} | Self], State) :-
       state{stateName:idle} = State |
       mem(Self?, State).
% Reimbursement
% Trying to pay rent: a reimbursement here is a protocol error.
mem([Msg | Self], State) :-
       reimbursement{alertID:ID} = Msg,
       state{stateName:tryingToPayRent, alertID:ID} = State |
       error(Msg),
       mem(Self?, State).

% Trying to pay alert
mem([Msg | Self], State) :-
       reimbursement{alertID:ID} = Msg,
       state{stateName:tryingToPayAlert, alertID:ID} = State |
       error(Msg),
       mem(Self?, State).

% Waiting to be thanked: pass the reimbursement back to the payer.
mem([Msg | Self], State) :-
       reimbursement{alertID:ID} = Msg,
       state{stateName:waitingToBeThanked, alertID:ID, payer:Payer} = State |
       Payer = [Msg],
       NewState = state{State but stateName:idle},
       mem(Self?, NewState?).

% Ids differ: deposit the check.
mem([Msg | Self], State) :-
       reimbursement{alertID:ID1, check:Check} = Msg,
       state{alertID:ID2} = State,
       ID1 =\= ID2 |
       deposit(Check?, State?, NewState),
       mem(Self?, NewState?).

% Idle
mem([Msg | Self], State) :-
       reimbursement{check:Check} = Msg,
       state{stateName:idle} = State |
       deposit(Check?, State?, NewState),
       mem(Self?, NewState?).
% Payment
% Trying to pay rent: pay the rent with the check, credit the payer,
% and cancel the outstanding alerts.
mem([Msg | Self], State) :-
       payment{alertID:ID, check:Check, payer:Payer} = Msg,
       state{stateName:tryingToPayRent, alertID:ID, clients:Clients} = State |
       payRent(Check?, State?, State1),
       creditPayer(Payer, Check?, State1?, State2),
       Payer = [thankYou{alertID:ID}],
       cancelClients(Clients, cancel{alertID:ID}, NewClients),
       NewState = state{State2? but stateName:idle, clients:NewClients},
       mem(Self?, NewState?).

% Trying to pay alert: forward the payment to the alerter and await thanks.
mem([Msg | Self], State) :-
       payment{alertID:ID, check:Check, payer:Payer} = Msg,
       state{stateName:tryingToPayAlert, alertID:ID, alerter:Alerter,
             clients:Clients} = State |
       creditPayer(Payer, Check?, State?, State1),
       Alerter = [payment{Msg but payer:Self1}],
       cancelClients(Clients, cancel{alertID:ID}, NewClients),
       NewState = state{State1? but stateName:waitingToBeThanked,
                        payer:Payer, clients:NewClients},
       merge(Self?, Self1?, NewSelf),       % Self1 carries the alerter's reply
       mem(NewSelf?, NewState?).

% Waiting to be thanked: the alert is already paid; reimburse this payer.
mem([Msg | Self], State) :-
       payment{alertID:ID, check:Check, payer:Payer} = Msg,
       state{stateName:waitingToBeThanked, alertID:ID} = State |
       Payer = [reimbursement{alertID:ID, check:Check}],
       mem(Self?, State).

% Ids differ
mem([Msg | Self], State) :-
       payment{alertID:ID1, check:Check, payer:Payer} = Msg,
       state{stateName:SName, alertID:ID2} = State,
       SName =\= idle,
       ID1 =\= ID2 |
       Payer = [reimbursement{alertID:ID1, check:Check}],
       mem(Self?, State).

% Idle
mem([Msg | Self], State) :-
       payment{alertID:ID, check:Check, payer:Payer} = Msg,
       state{stateName:idle} = State |
       Payer = [reimbursement{alertID:ID, check:Check}],
       mem(Self?, State).
% Refusal
% Trying to pay rent
% count = 0: every client has refused; liquidate.
mem(Self, State) :-
       state{stateName:tryingToPayRent, count:0} = State |
       liquidate(Self, State).

% count > 0
mem([refusal{alertID:ID} | Self], State) :-
       state{stateName:tryingToPayRent, alertID:ID, count:Count} = State,
       Count > 0 |
       NewCount := Count - 1,
       NewState = state{State but count:NewCount?},
       mem(Self?, NewState?).

% Trying to pay alert
% count = 0: every client has refused; pass the refusal back and liquidate.
mem(Self, State) :-
       state{stateName:tryingToPayAlert, alertID:ID, count:0,
             alerter:Alerter} = State |
       Alerter = [refusal{alertID:ID}],
       NewState = state{State but stateName:idle},
       liquidate(Self, NewState?).

% count > 0
mem([refusal{alertID:ID} | Self], State) :-
       state{stateName:tryingToPayAlert, alertID:ID, count:Count} = State,
       Count > 0 |
       NewCount := Count - 1,
       NewState = state{State but count:NewCount?},
       mem(Self?, NewState?).

% Waiting to be thanked
mem([refusal{alertID:ID} | Self], State) :-
       state{stateName:waitingToBeThanked, alertID:ID} = State |
       mem(Self?, State).

% Ids differ
mem([refusal{alertID:ID1} | Self], State) :-
       state{stateName:SName, alertID:ID2} = State,
       SName =\= idle,
       ID1 =\= ID2 |
       mem(Self?, State).

% Idle
mem([refusal{} | Self], State) :-
       state{stateName:idle} = State |
       mem(Self?, State).

% Other predicates

% tryToPayAlert: try to collect the alerted amount from the retainer;
% pay the alerter if this succeeds, otherwise queue the alert.
tryToPayAlert(AlertMsg, Self, NewSelf, State, NewState) :-
       alert{amount:Amount} = AlertMsg,
       collectRetainer(Amount?, State?, State1, Check, Ok),
       tryToPayAlert1(Ok?, Check?, AlertMsg, Self, NewSelf, AlertsForQ),
       state{alertQ:Q} = State1?,
       append(AlertsForQ?, Q?, NewQ),
       NewState = state{State1? but alertQ:NewQ?}.

tryToPayAlert1(true, Check, AlertMsg, Self, NewSelf, []) :-
       alert{alertID:ID, alerter:Alerter} = AlertMsg,
       Alerter = [payment{alertID:ID, check:Check, payer:Self1}],
       merge(Self?, Self1?, NewSelf).

% The retainer could not cover the alert: queue it, leaving Self unchanged.
tryToPayAlert1(false, _, AlertMsg, Self, NewSelf, [AlertMsg]) :-
       NewSelf = Self.

% alertClients
alertClients([], AlertMsg, 0, []) :-
       alert{alerter:[]} = AlertMsg.

alertClients([Client | Clients], AlertMsg, NumClients,
             [NewClient | NewClients]) :-
       alert{alerter:Alerter} = AlertMsg,
       Client = [alert{AlertMsg but alerter:Alerter1} | NewClient?],
       alertClients(Clients, alert{AlertMsg but alerter:Alerter2},
                    NCMinus1, NewClients),
       NumClients := 1 + NCMinus1,
       merge(Alerter1?, Alerter2?, Alerter).

% cancelClients
cancelClients([], CancelMsg, []).

cancelClients([Client | Clients], CancelMsg, [NewClient | NewClients]) :-
       Client = [CancelMsg | NewClient?],
       cancelClients(Clients, CancelMsg, NewClients).

% forgetAlert
forgetAlert([], ID, []).

forgetAlert([AlertMsg | Q], ID, Q) :-
       alert{alertID:ID, alerter:[]} = AlertMsg | true.

forgetAlert([AlertMsg | Q], ID1, [AlertMsg | NewQ?]) :-
       alert{alertID:ID2} = AlertMsg,
       ID1 =\= ID2 |
       forgetAlert(Q, ID1, NewQ).

Acknowledgments

Note: see the paper "Markets and Computation: Agoric Open Systems" in this book [II] for general discussion, acknowledgments, and comparison with other work.