The Best Program is a Rich API
Pat Gunn
26 July 2009

This essay is released into the public domain.

The growth of computer literacy and the integration of computers into most aspects of modern life have implications for the shape software should take as we move forward. Despite efforts to make software that is universally appealing - software that commands respect and acceptance of its user model from any potential user - the increased scope instead demands that we make software more flexible and powerful, able to deal with the specifics of individuals and their differing digital lives. Likewise, we may expect the children of today, as they mature, to have on average much greater computer savvy and greater needs than the adults of today. For the generation being raised now, basic programming will come to be as important as literacy is today. Even in the present, the commodification of basic programming skill demands we design our applications differently - a program that hoards its data, that is not deeply controllable by other pieces of software, or that does not use common data formats and treats its datafiles as its exclusive domain, is less valuable than one that opens these things up. Provision of a finished product, like most pieces of modern software, may remain useful, even if only as a demonstration of what one might do, but today and in the future, the best program is a rich API.

Early consumer software mimicked things done on paper, allowing obvious improvements from the change in media but not breaking significant new ground. The spreadsheet was a mechanised implementation of accounting ledgers and worksheets, while computer word processors evolved from semi-mechanised publishing processes based around typewriters. Easing repetitive character-manipulation tasks that existed before computers was an idea accessible without requiring the computer revolution to have permeated culture - the deeper changes remained beyond the reach of all but the greatest visionaries (such as Vannevar Bush) until the technology was already in place (France's Minitel network was one of the earliest forward-reaching projects of this type).

Consumer access to electronics and the growth of hobbyist (and later commercial) software markets brought our use of software into a new stage - early consumer software acted as a private service and was often accessed as such, with little expectation of interoperability, often no competition, and very manual functionality. While this change is not yet complete, modern software now must be able to interoperate with other software, sharing data formats and accepting input from other programs (mail merge was an early, much-touted feature of this sort). In the 1990s, technologies like OLE and SOM and APIs like CORBA and COM provided standard ways for applications to embed their components in each other, allowing the compounding of documents. Custom applications written internally for businesses are a very large software market, one requiring powerful and flexible access to application functionality. The shift to the web disrupted most of these technologies, providing new technologies that only recently have begun to work together without explicit effort, and posing challenges to future development. However, it also inspired new kinds of applications - those involving collaboration, location-based function, and distributed or disconnected computation (the emerging cloud computing applications).
These changes are the latest in a series of shifts in the location of computation - thin clients with thick servers vie against peer computation, and the trends have moved back and forth over time. Computation is now inexpensive enough, and the most common technologies have developed far enough, to allow an additional shift in this iteration: the untrustworthy client. The emergence of the untrustworthy client is essential for progress (though it poses a risk to domains that demand a higher degree of order, such as games) - servers can no longer rely on clients doing exactly what they are told, with the expected behaviour of a software client being "just a default" that information vendors can provide. This is easiest on the web, where various browser add-ons (such as Greasemonkey, Adblock, and Ubiquity) change the user experience far from the default without requiring developer knowledge or consent (though benefitting from it when possible). Automating common tasks, integrating and visualising data from multiple sources, and blocking unwanted content or unlocking protected content are now possible for the user, developable through a variety of means and APIs.

The basic API for data access, which at this time is often Atom/HTTP or HTML/HTTP, is the front door for web browsers acting in the typical fashion, but it is also used for these automation tasks. (Atom, and its predecessor RSS, was developed with this in mind, although at present its use is mostly restricted to blogging and news syndication.) While developers have the ability to work around web applications (and other applications) not written with deep collaboration in mind, providing explicit, documented, powerful, and fast APIs for everything an application might do (apart from, perhaps, its main interface) provides the most value to software users - initially through developers, who offer them options on how to mix this data. A sketch of this "front door" idea appears at the end of this section.

Although users might find some parts of an application interesting or useful, they do not expect to benefit from all of it. Often the most interesting functions of an application take the most developer time or sophistication - requiring users to use an application as a whole means they must accept the parts that are less interesting or less developed, while allowing them to mix it as they like with other applications often lets them combine those applications' central strengths. While this is doable with reclusive programs by various means (e.g. Greasemonkey for web applications; hacks or laborious effort by developers to decode things for local apps), these solutions are more fragile than designing applications with the expectation that they will serve as componentry in another app. Even for applications which do not significantly benefit from attachment to others, there is benefit to this type of design - applications involve design decisions that both provide structure for and restrict their user, while a properly layered, API-driven software product (which may or may not be an application) more naturally provides functionality without restriction, offering several levels of development between the full (perhaps theoretical) application and having the developer implement the functionality from scratch (a second sketch below illustrates this layering). The necessary difference between two full applications in the same category is usually much larger than the difference (at an abstract layer) between their API stacks taken as a whole - progressing upwards in complexity from scratch, the point of diminishing returns for flexibility and power usually arrives well before a specific interface is added on top of the APIs.
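To make the "front door" idea concrete, here is a minimal sketch in Python of consuming a site's Atom feed over plain HTTP - the same endpoint a feed reader would poll can drive an automation or remixing task, with no cooperation from the site beyond serving the feed. The feed URL is hypothetical; any Atom endpoint would do.

    import urllib.request
    import xml.etree.ElementTree as ET

    # Atom elements live under this XML namespace.
    ATOM = "{http://www.w3.org/2005/Atom}"

    def entry_titles(feed_url):
        # Fetch the feed over plain HTTP and parse it as XML.
        with urllib.request.urlopen(feed_url) as response:
            root = ET.parse(response).getroot()
        # Each Atom <entry> carries a <title>; collect them all.
        return [entry.findtext(ATOM + "title")
                for entry in root.iter(ATOM + "entry")]

    # Hypothetical endpoint, for illustration only.
    for title in entry_titles("https://example.org/feed.atom"):
        print(title)

The point is not this particular script but what it demonstrates: the same documented, stable endpoint serves both the default browsing experience and whatever a remixer builds on top of it.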
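The layering argument can be illustrated the same way. The sketch below (all names hypothetical) separates a core data layer, a convenience layer, and a thin interface on top; a developer can enter at whichever level suits them rather than driving the finished application from the outside.

    import sys

    def load_records(path):
        # Core layer: raw access to tab-separated records on disk.
        with open(path) as f:
            return [line.rstrip("\n").split("\t") for line in f]

    def records_by_key(path, key_index=0):
        # Convenience layer: the same data, indexed for lookup.
        return {row[key_index]: row for row in load_records(path)}

    def main():
        # The "application" is only a thin shell over the layers below.
        for key, row in sorted(records_by_key(sys.argv[1]).items()):
            print(key, row)

    if __name__ == "__main__":
        main()

Each layer here is usable on its own, which is the point of the diminishing-returns claim above: most of the flexibility and power is available before the final interface is ever written.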
As creativity in "remixing" products grows as a custom among developers, a focus on providing rich APIs to one's software will become more important than making the basic application appealing - we can expect that eventually many people will flock to applications that remix components, replacing the program-style main interfaces that are provided (if and how revenue and economic sustainability will work is an interesting matter, but outside the concern of this document). Regardless of the programmer effort any given development group can gather in an area, there will be significant effort gatherable outside of it that provides functionality the first group is missing. Demand for integration of these functions, whether through modifications to a mostly-complete application or through gluing multiple applications together, suggests that, provided the technology to make this possible is widespread and the basic promise of given components is visible often enough, remixes will dominate and APIs will grow in importance.

Programming is the means by which people reformulate their desires into discrete expressions, and in turn the process by which those expressions become software. The necessary complexity of software (in the general sense) is not very high - programming is not conceptually distant from teaching or giving instructions. Exposure to software components at an early age (meaning programs, different types of data, etc.) makes these abstractions familiar and their use as tools more natural - provided the means is present and remixing develops as we may expect, the benefits of learning to program will approach (or surpass) those of basic tool use, and we may expect everyone to develop some programming ability.

There are two challenges immediately visible for this development: intellectual property and funding, and games and achievement. In the first, we are challenged to find a model that funds the investment of time, creativity, developers, and machine resources in this ecosystem. The first three are to a certain extent subsidised by human nature - people are creative and inclined to share by nature, and provided developers can make a living through other means (e.g. custom integration work), large numbers of part-time developers can make a workable software development community (particularly as those skills become less rare). The last poses a more substantial challenge unless public funding or other means ensures that sufficient resources (servers, disks) are available for the software services being built.

The second major challenge ties into how humans externalise and pose achievement in their lives. People more easily adapt to well-defined external struggles and challenges than to internal or self-defined ones - by making it hard for creators (typically "artists") to set these challenges for their audience and have them kept, these technologies threaten this type of endeavour. As an example, MMORPG developers currently set levels of achievement for certain tasks, using artificial scarcity of products, ranks and social stature, and the like to create an order or story. These tie into parts of human nature and participants' imaginations to make participation in these endeavours enjoyable - it is precisely the authorship of the artist (and their structuring of the authorship of others, if applicable), and the refusal to let people breach those barriers, as an external and mandatory restriction, that allows those games to be fulfilling to their participants.
The same means that, as described above, allow developers to do things without the permission or cooperation of software developers also allow these systems to be destroyed - successful MMORPGs frequently see people writing software to automate (entirely or partly) playing the game, sometimes for real-life profit (which tends to undermine the economy of MMORPGs), sometimes to let people achieve unearned status by replacing what is meant to be human effort with effort enhanced or replaced by another piece of software. Presumably, without full externalisation of the struggle of the game this would not occur, although without that externalisation it is more difficult for people to enjoy games.

Game players thus rely on opposition to their will to win to create a struggle worth winning, and technologies that unbalance that sufficiently destroy games (in times past, owners of devices like the Game Genie for the NES eventually developed the discipline to play through new games for a time before using codes, as they grew towards a greater (intuitive and often unexamined) understanding of how the psychology of gaming works). Countering these capacities enough to discourage people from self-ruining efforts - playing games in ways that simultaneously win and ruin them - will be a challenge in times to come, particularly when these games engage the social instinct.