Why word processing is not suitable for text publishing
The history of computing seen through letters rather than numbers
In any historical account, nothing is more striking than the inertia of trajectories and the weakness of the reasons for choosing one possibility over another. Everything could have been different, and at the same time everything is too complicated to change. Word-processing programmes are the most used software in the world and are accepted as a matter of course, when in fact each of their component elements derives from a period with its own technical constraints and vested interests. The result today is that they have limitations that nothing in logic makes insurmountable. A software programme seems to be a pure mathematical potentiality, free to be the best it possibly can, yet it is a text that takes time to write, always grappling with what already exists, and as unwieldy and difficult to reform as the law itself. This article provides an overview of the history of computer word processing from an IT engineer's point of view, embracing that professional peculiarity of finding bugs and taking note of hacks – ingenious tricks that can still inspire today. We shall begin with computing before the existence of the computer and end with the stabilized form of Microsoft Word in the early 2000s. The article's objective is to construct a critical relationship to software in order to grasp its model of texts.
Computerizing text, transmitting and storing characters
Is a musical box a computing object? Computing is not the science of computers; it is the science of information and of its automation, and it can involve mechanical or even human machines. Closer to our bodies, these analogies let us get back to the atomic operations of algorithms, which machines simply execute much faster. Thus, in the simple mechanism of a musical box, a tune is inscribed on a metal cylinder using little pins that represent minimal logical information, 1 or 0 – bits. When the cylinder turns, the pins pluck tuned teeth (or lamellae), which vibrate to play the notes of the musical score. On more elaborate models, the cylinder can be changed to play different tunes. Reproducing a song has a material cost, as it works only on the same model of machine, with the same range of notes and the same distances between the teeth. Compared with a more elaborate system, such objects were not yet truly computing devices.
A mechanical piano (like a pianola) resembles a musical box in that a song is likewise inscribed on a roll in the form of bits. A mechanism reads the information and plays the tune by striking the strings. Here, however, the message is on paper, which in itself is too light to move a piano hammer. The energy required to obtain the expected effect – hearing the music – is decoupled from the reading of the information. This independence means that the device reading the song on paper can be connected to another instrument, for example an organ. This "file" format could be standardized because the piano keyboard, with its 88 keys, already was. Many such rolls were written and sold during the first half of the twentieth century. Major jazz musicians and even classical composers like Stravinsky composed for the pianola, because its power and sound quality were superior to those of gramophones. We should note, however, that this digitization was not optimized, as a separate track on the roll was required for each note.
If one wanted to send a piece of music over a distance, for example along an electrical wire, 88 bits would be needed for each note. For the letters of a text, at least 26 bits, plus the spaces between words, would be needed. This problem was quickly encountered in telegraphy, and thus Morse code (1832) works by combining at most five short and long signals to transmit a letter. From the very start Morse designed devices to read and record messages on paper so they could be checked. The success of the telegraph, helped along by underwater cables, led to a multiplication of techniques for sending signals in parallel without running several wires, i.e. multiplexing. As electrical current travels at nearly the speed of light, information could be transmitted much faster than an operator could key it, and the next problem was to develop a time-division distributor that correctly redistributed the signals received. In 1874, Émile Baudot, a civil servant at the French Post and Telegraph Service, developed an electromechanical machine that answered the pressing requirements of the market of his day: several operators recorded messages, which were transmitted as electrical pulses on a single wire and printed out at the distant destination. Rather than using Morse's sequential code, with its unequal signal lengths for each letter, Baudot adopted binary numbering using 5 bits in parallel, whose order, as shown above, does not seem very mathematical to us today. The vowels are followed by the consonants, with the series interrupted by 2 × 2 opposing control signals, and the overall arrangement was devised for mechanical reasons. The principle of encoding text in 5 bits was used on the Telex network until at least the end of the 1970s in France. The original table shown above already reveals its limitations. There are five positions, which can encode 2⁵ = 32 different characters – enough for the alphabet but not for punctuation and numbers. The FIGS (figures) and LTRS (letters) codes work much like the shift key on a modern keyboard, by letting users pass from one set of characters to another. Why was a code with 6 bits or more not used straightaway? The code drives a printing wheel that presses the characters onto paper, and a wheel with 64 characters would have taken longer to turn than one with 32. Each position on the wheel carried two characters, just as typewriter typebars did.
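To see how such shift codes work in practice, here is a minimal sketch in Python. The tables and code values below are invented miniatures, not Baudot's actual 1874 arrangement, which was dictated by the mechanics of the printing wheel:

```python
# Hypothetical 5-bit tables: the same code means "A" or "1" depending on
# which set the receiver is currently in, exactly as with LTRS and FIGS.
LTRS = {"A": 0b00011, "B": 0b11001, "C": 0b01110, " ": 0b00100}
FIGS = {"1": 0b00011, "2": 0b11001, "3": 0b01110, " ": 0b00100}
SHIFT_FIGS = 0b11011   # switches the receiver to the figures table
SHIFT_LTRS = 0b11111   # switches it back to the letters table

def encode(text: str) -> list[int]:
    """Emit 5-bit codes, inserting a shift code whenever the set changes."""
    codes, in_figs = [], False
    for ch in text:
        table = FIGS if in_figs else LTRS
        if ch not in table:                # character lives in the other set
            in_figs = not in_figs
            table = FIGS if in_figs else LTRS
            codes.append(SHIFT_FIGS if in_figs else SHIFT_LTRS)
        codes.append(table[ch])
    return codes

print([format(c, "05b") for c in encode("AB 12")])
# ['00011', '11001', '00100', '11011', '00011', '11001']
```

The same five bits thus do double duty: six codes suffice for five characters, at the price of one extra shift code whenever the character set changes.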
7-bit coding of 128 characters was introduced in 1963 with ASCII, the American Standard Code for Information Interchange, which still causes problems for the transmission of accented characters in the various European languages: the Scandinavian and Slavic languages, as well as Greek, cannot all fit their characters into 7 or even 8 bits alongside the basic Latin alphabet, and each language produced its own national 8-bit coding system. From the 1990s onwards, Unicode enabled the most important world languages, notably Chinese, to be coded using 16 bits (2¹⁶ = 65,536 characters), but it is still not universally adopted, particularly by Apple and Microsoft, who prefer to maintain a dependency on their own in-house coding systems for different languages.
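The arithmetic is easy to verify with any modern language. A short Python illustration of the three regimes just described – 7-bit ASCII, a national 8-bit code page, Unicode:

```python
text = "café"

# 7-bit ASCII simply has no code for the accented letter:
try:
    text.encode("ascii")
except UnicodeEncodeError as err:
    print("ASCII fails:", err)

# A national 8-bit code page (here Latin-1) fits it in one byte per character:
print(text.encode("latin-1"))   # b'caf\xe9'  -> 4 bytes

# Unicode covers it and Chinese within a single system; serialized as UTF-8,
# characters simply occupy a variable number of bytes:
print(text.encode("utf-8"))     # b'caf\xc3\xa9'   -> 5 bytes
print("汉".encode("utf-8"))     # b'\xe6\xb1\x89'  -> 3 bytes for one character
```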
Telegraphy made it necessary to standardize the coding of alphabets and even of textual phenomena like line breaks or tabulations (see ASCII controls 0 to 31), so that documents structured into paragraphs or columns of data, and not just words, could be transmitted. The lead typesetting machines, Monotype and Linotype, could be equipped with a reader for perforated paper tape and then compose previously recorded books or journals. Paper tape had a long history: amateur computer programmers in the 1970s used teleprinters more often than screens to communicate with their machines. Thus Microsoft's first product, written by Bill Gates and Paul Allen, was a BASIC language interpreter for the Altair 8800 (1975), delivered on paper tape and loaded into the computer through a Teletype Model 33 ASR. These machines could have been used for word processing, since paper tape can be cut and spliced much like film on cinema editing equipment. History has not recorded any experiments in this field, because the telegraph operators who transmit content are not typesetters, while writers want to print out their work first in order to reread it.
Computerizing pages, modelling documents
Word processing did not derive from telegraphy or computing, but from office typewriters. In 1962 IBM began to sell the Selectric, an electric typewriter that was an enormous success because of its speed. The former typebars were replaced by a typeball moved by cables. It was not compatible with ASCII's 128 characters, but in 1964 IBM saw the interest of connecting its typewriter keyboards to a mass storage device (magnetic cards) to store texts for reprinting. Doctor Wang's company (Wang Laboratories) pursued and perfected the idea, first with two tape recorders that could be used to cut, paste and insert pre-written content, then with mini-computers that drove several Selectric machines. The main function expected of a word processor at that time was mass mailing, i.e. adding addresses to personalize template letters. Thus it was not literature but office requirements that drove the development of word processing. Around 1970, one third of working women were secretaries retyping letters for generally male bosses.
The appearance of the microprocessor in 1971 (Intel) opened up the possibility of commercial micro-computing, which owed nothing to telegraphy or typewriters and initially took little interest in text. With the Apple II (1977), computers moved out of laboratories, circles of enthusiasts and large-scale industry into small companies, schools and families. Then came the so-called family computers, such as the ZX 81 or the Commodore 64, which could be used for programming. However, this 1980s flash in the pan, which came at the cost of school computing plans in developed countries, above all served up games machines whose cassettes were easier to pirate than the cartridges of dedicated consoles (Atari 2600). The Apple II could be used to play games but also to work, to programme and to invent. It ran hundreds of software programmes, several of which introduced concepts still in use, such as the first spreadsheet: VisiCalc. This image of an accessible, useful and user-friendly computer had already been devised at Xerox PARC with the Alto (1973), but it was very costly in memory and processor terms. The mouse, the equivalence between screen and paper (WYSIWYG – what you see is what you get), the laser printer, image formats, the Ethernet network: everything had been devised and prototyped there.
1979, Xerox PARC, Office Alto
It is unbelievable how many premonitions of the future arose in this short sequence. The dialogue between the boss and his electronic secretary imitates the anthropomorphism of HAL 9000 in 2001: A Space Odyssey, with the difference that the computer cannot speak but is much better at drawing. Did the developers of that time already realize that personal computers would replace secretaries? Screens did not remain vertical like pages because, although that would have been much more practical for reading and writing, it was less so for watching films. The Alto had a word-processing programme, image editors and, well before the Internet, an (internal) electronic mail programme. For the Macintosh (1984), ten years later, Steve Jobs above all reworked a laboratory project abandoned by a photocopier company, making it accessible thanks to the reduced cost of computer chips.
Current hagiography tends to oppose Steve Jobs (Apple) and Bill Gates (Microsoft) without really measuring their differences and contributions. A technical analysis makes these clearer. Apple was a manufacturer of machines, while Microsoft created software. The Macintosh was able to get ahead in interfaces because it imposed a closed and limited hardware configuration on which software could be optimized, though the screen was small and in black and white (512 × 342 pixels) and the 128K of memory could not be expanded before the next model (Macintosh 512K) came out. Apple used marketing to profit fully from the technical progress it had made. It also wrote software like MacWrite, MacDraw and MacPaint, and initially even built its own printers (ImageWriter and LaserWriter), thus providing a perfect equivalence between screen and paper thanks to complete control of the environment.
In 1981 IBM brought out its first PC, already designed as an open architecture with standard components, which other manufacturers could extend. Despite its success, this was not the computer giant's most profitable branch of activity, and IBM did not prevent copies, which meant that Compaq's first compatible PC could use the same peripheral devices and extension cards as the original IBM, and therefore the same software. The PC could initially run three different operating systems: PC/IX (a kind of UNIX), CP/M-86 and Microsoft's DOS, which quickly won out. With its BASIC language interpreters, Microsoft had already adapted to the diversity of the components of the time: processors, memories, discs. Access to the machine through a command-line interface was the sine qua non for the price to remain reasonable and for the machine to be powerful enough for games to talk directly to the hardware, without a graphics layer in between. However, Microsoft did know about Xerox PARC and had indeed poached its key engineers, including Charles Simonyi, who ran the Word project from 1981 onwards. Apple and Microsoft had the same objective, but not the same number of users, and Microsoft had old character-mode screens to support. Windows 95 finally brought graphic windows to PCs, while Microsoft Word was for a long time the only decent word-processing programme on the Macintosh. The conclusion to this decade was plain to see on all computers with keyboards: word processing became mixed up with desktop publishing. Users found that they could be typographers and layout artists, and paid much more attention to the appearance of their pages, perhaps to the detriment of the text itself.
George R. R. Martin (1948–), the author of the series Game of Thrones, likes to point out that he still writes with WordStar 4.0 on PC-DOS, the dominant word-processing programme in 1985, in character mode. This is not just an artist's whim; he considers that graphic software distracts him from his text. Using a mouse means all clickable controls have to be visible, whereas keyboard shortcuts, memorized and used instinctively by the hands, free the screen and make writers more productive, as all programmers know. The 1980s were the most fertile period for inventions in word processing, with several firms of different origins and intentions competing in the field of PC-compatible products. Everyone knew that the winner's prize would be the business of all companies, schools and homes.
In 1986 the leader was WordPerfect, a programme that came from a Mormon university. It was initially aimed at legal practitioners and was the first (WP 4.2) to include line and paragraph numbering, footnotes and endnotes, and later styles (WP 5.1). These are major functions for anyone producing a text, but at that time WordPerfect was still developed for a niche market, namely the American government and legal practitioners. The programme's speed came from a very interesting data model. The text flows freely and is interspersed with control codes, which communicate the formatting to printers. The flow of characters assumes no structure; there are no paragraphs, lists or sections. Among the most frequent controls, the soft line break is inserted automatically just before the edge of the screen, while the hard line break separates paragraphs. The other controls are of two types: open or paired. Italics and underlining are paired controls. Unlike tree models, all overlaps are allowed, so an italic segment can overlap an underlined one, with the intersecting text printed in italics and underlined. Open controls trigger a behaviour that either stops at the next hard line break or continues until the control is cancelled; they work very well for overall formatting such as centred text or text in columns. From version 5.1 onwards, users could create their own controls, styles, which are shortcuts applying several formats with a single code. This means formatting can be automated, for example changing the format of a title (font, spacing, etc.) without having to change every title in a long document. Styles can also be used for more semantic purposes. The Diccionario Griego-Español was created with WordPerfect, the only software that let the dictionary's creators keep rigorous control of the hierarchy of meanings, the distinction between examples and definitions, and even usage notes. Compared with formalisms that are more common today, like XML, the absence of an implicit tree allows the encoding of parallel phenomena which do not correspond to the division into paragraphs and chapters, such as the narrative functions of a story. The most distinctive and much-missed feature of this software is the reveal codes window, which totally changes the user's relationship to the text by displaying each formatting code. A document thus becomes like a programme that pilots a printer.
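To make the contrast with tree models concrete, here is a toy sketch of such a flat stream in Python. The control names are invented for illustration; real WordPerfect codes were binary values embedded in the character flow:

```python
# A flat stream: plain characters interleaved with paired control codes.
ITAL_ON, ITAL_OFF = "<Ital>", "</Ital>"
UNDL_ON, UNDL_OFF = "<Und>", "</Und>"

# An italic range overlapping an underlined range -- legal in a flat stream,
# impossible in XML, where elements must nest properly.
stream = (list("one ") + [ITAL_ON] + list("two ") + [UNDL_ON] + list("three")
          + [ITAL_OFF] + list(" four") + [UNDL_OFF])

def render(stream):
    """Replay the stream, tracking active formats as a printer driver would."""
    italic = underline = False
    for token in stream:
        if token == ITAL_ON:
            italic = True
        elif token == ITAL_OFF:
            italic = False
        elif token == UNDL_ON:
            underline = True
        elif token == UNDL_OFF:
            underline = False
        else:
            flags = ("i" if italic else "-") + ("u" if underline else "-")
            print(flags, repr(token))      # prints "iu 't'" inside the overlap

render(stream)
```

The intersecting segment ("three") comes out both italic and underlined, something the equivalent XML, `<i>two <u>three</i> four</u>`, is not even allowed to express.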
Microsoft used all the innovations from WordPerfect except the reveal codes window. Its attempts at the latter were not very convincing, and users were actually given less control over their texts. This failure partly results from a formal impossibility within the text model. Firstly, Word distinguishes three levels of formatting: section, paragraph and character. This seems logical enough, but many textual components are transversal. For example, italics can be used to distinguish a word from another language within a paragraph, a quotation several paragraphs long, or the introductory section of a book. Section-level formatting can only modify margins and columns; it is not possible to give prefaces a default setting in italics. A paragraph-level style can set the whole text in italics, but if it is applied to a normal paragraph containing a few words in italics, the typographical distinction between the two is lost. Everything becomes more complicated as soon as a document is longer than a letter. Users no longer realize how many formatting contortions they perform just to obtain what they want, because they are so used to them. A Word file, particularly in its original binary format (*.doc), is a chaotic jumble that only the software itself can understand, as a reveal codes window would have shown. This complication results firstly from the acceleration of innovation in the 1980s, when functions were added to models that had not been designed for them. There is another, more legitimate reason, intrinsic to the initial Word project. When adapting the Bravo software from Xerox PARC, Microsoft immediately made a word-processing programme to be used with a mouse, even on a character-mode screen, and with an undo function. The possibility of tracking back through the record of interactions with the user was a computing feat that ruled out a straightforward model like WordPerfect's, which moreover offered no undo. To undo an action, the previous version must be recoverable, and at the time memory could not hold several versions of a full document, so software with such a record had to use a history-based model, in which the visible state of the document results from the sequence of operations performed by the user. Undoing consists in cancelling the last operation, for example putting back text that has been deleted. In that kind of model, the software does not "know" what the document looks like at every instant; it re-performs the user's operations in order to display, print or save. Providing a real-time reveal codes window would have been prohibitively expensive in computation.
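A minimal sketch of such a history-based model, in Python and with invented names, shows why undo came cheap while display came dear:

```python
# The document is stored only as a log of operations, never as a finished
# state; "undo" drops the last operation instead of restoring a snapshot.
class HistoryDocument:
    def __init__(self):
        self.ops = []                      # the record of user interactions

    def insert(self, pos, text):
        self.ops.append(("insert", pos, text))

    def delete(self, pos, length):
        self.ops.append(("delete", pos, length))

    def undo(self):
        if self.ops:
            self.ops.pop()                 # no saved version is needed

    def render(self):
        """Replay every operation to rebuild the visible text on demand."""
        text = ""
        for op, pos, arg in self.ops:
            if op == "insert":
                text = text[:pos] + arg + text[pos:]
            else:                          # "delete"
                text = text[:pos] + text[pos + arg:]
        return text

doc = HistoryDocument()
doc.insert(0, "Hello word")
doc.insert(9, "l")     # correct the typo: "Hello world"
doc.undo()             # cancel the correction again
print(doc.render())    # -> "Hello word"
```

Every call to render() replays the whole log, which is why recomputing a reveal codes display at each keystroke would have been prohibitive on the hardware of the time.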
WordPerfect did not succeed in transitioning to Windows 95, unlike Word 95. Novell bought WordPerfect in the hope of winning a legal claim against Microsoft for hindering competition, arguing that the latter had given itself an advantage by not publishing all the specifications of its new system in good time. Yet no one has ever accused Steve Jobs of making all the Apple II software useless when the Macintosh system came out, and a system cannot be developed without at least one application to test its functions. An analysis of the sources shows that Microsoft's WYSIWYG strategy was already part of the code 15 years beforehand. Was it still possible for WordPerfect to be completely rewritten in so little time? Microsoft did not even have to poach the key engineers this time – the result of all the work put in was inevitable, the victory overwhelming and discouraging, but more or less fair.
In 2000 Sun released the sources of StarOffice, an office software package it had bought in order to adapt it for its UNIX workstations. The software was initially based on a one-window, easy-on-the-eye interface, which meant it could work on machines that remained in character mode. Sun and other companies like IBM lent good engineers to enable the release of free and open software, initially for Linux and then also for Windows. By 2002, OpenOffice.org 1.0 had lost all the originality of StarOffice and was just a slower, less pleasant Word. The open syntax of its XML files does not modify the text model, and there is no innovation involved. On the contrary, Microsoft is mirrored completely so that users lose no formatting when switching between the two applications. The free software thus validates the Word model and all that it owes to the hardware and software standardization of the IBM PC and Microsoft, despite the lack of regard its makers had for both.
Conclusion
The histories of computing do not always correctly measure the importance of peripheral devices, and yet the computer could not have become so toweringly important had it not distinguished itself through input devices like the keyboard, output devices like printers, and persistent mass storage (paper, then magnetic). Telegraphy was certainly the most active industry in dematerializing text and standardizing its encoding, and it passed its models on to printing (Monotype, Linotype) and even to computing. Turing's machine was not as abstract as we may have been led to believe: it looks like a telegraphic paper tape that can be erased, or a magnetic tape. After Baudot's machine of 1874, the field of electromechanics began to develop the various devices enabling humans to communicate with integrated circuits, which would not actually appear until around 1970.
Word-processing software does not require much in the way of calculation but is above all based on interaction with the user. It is less a calculator than a computer that moves and classifies the information in its memory. Though it is sometimes forgotten, the production of written texts requires a long phase of development, as is clear from authors' manuscripts with their crossed-out text, insertions and recopied segments. Part of this work has no added intellectual value and was previously done by the many secretaries in companies and administrative bodies. Character encoding and storage, followed by on-screen manipulation, insinuated themselves between the keyboard and the paper of the electric typewriter. Office software (along with games) was an important driver of the development of personal computing. Word-processing software now seems transfixed by the WYSIWYG paradigm in which, with laser printers, computers above all serve as perfectly honed typewriters.
Microsoft Word has thus well and truly established its document model, which combines a flow of characters, formatting controls, a tree of textual components, images positioned absolutely on a page, and even objects from other applications, such as pie charts from Excel or WordArt ornamentation. This incoherence costs a great deal in machine time, user disappointment and human proofreading. The main misunderstanding concerns page layout. The principle underlying word processing is to fit characters into the geometry of the pages. The result is acceptable for a novel but quickly becomes disappointing if a page needs to combine several flows and illustrations (footnotes, boxes, tables, images, etc.). For example, as text is written it pushes all the images down, producing unwanted blank spaces. Word-processing software should not be expected to act as a desktop publishing programme. The principle governing the latter is the reverse: it first establishes the structure of the pages, their rules of repetition and the links between zones, with the text added afterwards. If a single software programme were to manage both writing and page layout correctly, it would be better to have two separate interfaces with very different buttons – one for writing, with a continuous ribbon of text optimized for the screen, and the other for page layout and print previews. If users were used to working with two very distinct screens, a third could be added, the reveal codes window from WordPerfect, giving them complete control over the document. This trio would have cost less in machine resources and improved the structure of texts. The screen displaying the text in progress could stay in character mode, and the text could be printed as a draft without complex typographical effects. The printable version would not need to be displayed in real time and could be produced on request. This was possible, and it did not happen, because word processing was above all aimed at the most profitable market, namely companies. Software is not written for universities, which pay academic prices; it is designed for commercial communication. Other, more numerous but less profitable types of text have to make do with tools that were not designed for them, which requires increased self-awareness to redirect forms of usage rather than be directed by them. An academic writer is forced to understand computing better than the product's actual target audience.
Featured image: Detail of punches by Rostislav Lisovy under Creative Commons BY-NC-SA license.
This is a translation from French by Richard Dickinson, INIST-CNRS translator, revised by Helen Tomlinson.
OpenEdition suggests that you cite this post as follows:
Frédéric Glorieux (June 15, 2017). Why word processing is not suitable for text publishing. Anthology. Retrieved December 10, 2024 from https://doi.org/10.58079/b7uj