The Guardian (Manchester); 06 September 1990; Claire Neesham; p. 33
NEXT year the IBM PC and the Apollo workstation will enjoy their tenth birthdays. But, rather than signalling the beginning of the second decade of desktop computing, these birthdays may mark the return of more centralised computer management, and "intelligent terminals".

Few people want to give up the independence personal computing has brought, or the wide range of applications. But the use of workstations in big companies, and the drive to link more of these machines over non-proprietary local area networks, have highlighted some of the problems of making personal computing public. They have also revealed the need for a desktop computer that is as easy to control as a terminal but as flexible as a personal workstation.

Nigel Martin of The Instruction Set says it is controlling the data on distributed machines that is difficult, not linking the hardware together. In fact, networks of thousands of machines are common: Carnegie Mellon University in Pittsburgh, Pennsylvania, has 10,000 devices on its network, while DEC has a massive network linking all its staff worldwide.

The management problems arise from being able to store data locally. In smaller networks data is often just distributed across all the machines. This data is easily accessible, and it is not uncommon for two users to work on the same file at the same time. As the file has to be copied to the local disc, the two copies will diverge and, without careful management, the result will be a lot of inconsistent data.

One way round this problem is to use a fileserver and discless workstations. Sun and other Unix-based workstation suppliers such as DEC and Hewlett-Packard have offered these products for some time. More recently PC firms such as Apricot and Compaq have introduced fileservers and PCs suited to discless operation. As well as simplifying file management, using fileservers and discless personal workstations improves data security.
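The divergence described above is, at bottom, two users updating the same file without coordination; the fileserver remedy is to serialise the updates. A minimal sketch in modern Python using Unix advisory locks, purely as an illustration — the function name is invented here, and the 1990 LAN products worked differently:

```python
# Illustrative sketch (not the 1990 products): serialise writers to a
# shared file with an exclusive advisory lock, so two users editing the
# same file cannot silently produce diverging copies.
import fcntl

def update_exclusively(path, new_text):
    # Open the shared file and take an exclusive advisory lock; a second
    # writer blocks here until the first has finished.
    with open(path, "a+") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # blocks until the lock is free
        try:
            f.seek(0)
            f.truncate()
            f.write(new_text)
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)
```

Because the lock lives with the file on the server, every client that goes through it sees one consistent sequence of updates rather than private diverging copies.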
LAN suppliers are starting to offer file-locking programs which run on the fileserver.

Over the next five years, the California-based research firm Dataquest expects that revenues from the sale of discless PCs (which it terms processing terminals) will grow from $144 million to $807 million. But it admits that so far the market has not lived up to expectations. Dataquest puts this down partly to the lack of suitable software.

However, there are other things that might limit the usefulness of discless workstations on a network. With existing networks there is a limit to the number of discless workstations that can be supported before performance becomes so poor it is impractical to use the machine. This is a result of having to pass files over the network constantly. John Holodnik, a systems engineer in the network group at CMU, says they found that if they had 30 or more discless workstations on a single network, the server or network would grind to a halt.

A research fellow at City University, London, Phil Winterbottom, says another drawback of discless workstations is that they are expensive for what is effectively a terminal. Their extra processing power is available only to the user with the machine on their desk. With existing networking systems it is not possible for network users to take advantage of the power on each other's desktops.

A cheaper alternative to discless workstations is the X terminal. These are glorified graphics terminals that run the X Window System locally. But they have disadvantages. The large number of data packets that have to be passed over the network means there is a limit to the number of terminals that can be used before the network saturates. Also, a recent study of X terminals by the Rutherford Appleton Laboratory concludes that the price of X terminals is unlikely to fall significantly below that of discless workstations, because both require the same expensive components.
There are, however, developments in the X terminal market that begin to blur the distinction between discless workstations and PCs. At Xhibition, a show held in San Jose, California, in July, Dataquest noted the use of a graphics accelerator and optional floating-point unit in a 32-bit terminal from Jupiter, while Micronics had integrated virtual memory into its X terminal's architecture. This convergence of terminal and workstation ideas should result in the device that will be at the front end of the new breed of distributed systems.

An insight into what this could look like comes from AT&T's Bell Labs. The Labs' Unix veterans have developed Plan 9, a distributed computing environment premiered at the UK Unix User Group conference held in London in July. The Plan 9 "front end" is the Gnot. This resembles a workstation, having a high-resolution display, Motorola 68020 processor, four to eight megabytes of memory, keyboard and mouse. But it does not have any disc drives, does not compile programs and has no expansion bus: it is a terminal.

The principle behind the Plan 9 distributed environment is to link users' terminals over high-speed networks to the computing power, which is concentrated in large multi-processor servers, with fileservers for storage. These need not be concentrated in one place. In fact, one of the goals of the Plan 9 team is to incorporate the whole of AT&T Bell Labs' computing (about 30,000 people) into one Plan 9 system covering thousands of CPU and file servers spread throughout the company's various departments.

Providing access to widely distributed computers from a user's desktop is the crux of client/server computing. Organisations such as the Open Software Foundation (with CMU's Mach operating system) and researchers such as Winterbottom's team at City University (Meshix) and a group at Vrije University in Amsterdam (Amoeba) are developing "micro kernel" operating systems that make widely distributed processing possible.
In a client/server system, users should be able to sit at their desktops and have access to the processor that is most suited to running their application, whether it is in the next room or the next city. Ultimately, applications are likely to be split across several processors, including the machine on the user's desktop.

As the Gnot illustrates, the client/server workstation is far from being a dumb terminal. It needs a processor to cope with running parts of programs. The demand for graphical user interfaces means it has to manipulate high-resolution graphics, which requires a lot of memory. There is no need for local data storage, but if distributed systems are to work fast, fibre-optic (FDDI) networks will be essential. System management will be done by a few distributed system managers.

Although personal workstations are not going to disappear overnight, some suppliers are already accepting the need for more specialised client/server devices like Plan 9's Gnot. The generic term for these is yet to be coined. Current suggestions include node, window, and junction. But whatever it is, it is not a PC, a workstation or even a terminal.
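The split described above — a desktop that only displays, calling on a processor elsewhere to do the work — can be sketched in a few lines of modern Python, with the standard library's XML-RPC standing in for the high-speed network and multi-processor server. This is a toy illustration under present-day assumptions, not Plan 9's actual protocol, and the function names are invented:

```python
# Toy client/server split: the "server" side runs the heavy computation;
# the "desktop" side only issues a call and displays the result.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def heavy_compute(n):
    # Stand-in for work better suited to a big shared server.
    return sum(i * i for i in range(n))

# Server side: register the function and serve it on an ephemeral port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
port = server.server_address[1]
server.register_function(heavy_compute)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the "terminal" needs only enough power to make the call
# and show the answer; the computation happens in the server process.
proxy = ServerProxy(f"http://127.0.0.1:{port}")
result = proxy.heavy_compute(1000)
```

The client code is identical whether the server is in the next room or the next city; only the address changes, which is precisely the property client/server systems are after.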