Ubiquitous computers: The Computer for the 21st Century

by Mark Weiser

Scientific American Ubicomp Paper after Sci Am editing

September, 1991

The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.

Consider writing, perhaps the first information technology: The ability to capture a symbolic representation of spoken language for long-term storage freed information from the limits of individual memory. Today this technology is ubiquitous in industrialized countries. Not only do books, magazines and newspapers convey written information, but so do street signs, billboards, shop signs and even graffiti. Candy wrappers are covered in writing. The constant background presence of these products of “literacy technology” does not require active attention, but the information to be conveyed is ready for use at a glance. It is difficult to imagine modern life otherwise.

Silicon-based information technology, in contrast, is far from having become part of the environment. More than 50 million personal computers have been sold, and nonetheless the computer remains largely in a world of its own. It is approachable only through complex jargon that has nothing to do with the tasks for which people actually use computers. The state of the art is perhaps analogous to the period when scribes had to know as much about making ink or baking clay as they did about writing.

The arcane aura that surrounds personal computers is not just a “user interface” problem. My colleagues and I at PARC think that the idea of a “personal” computer itself is misplaced, and that the vision of laptop machines, dynabooks and “knowledge navigators” is only a transitional step toward achieving the real potential of information technology. Such machines cannot truly make computing an integral, invisible part of the way people live their lives. Therefore we are trying to conceive a new way of thinking about computers in the world, one that takes into account the natural human environment and allows the computers themselves to vanish into the background.

Such a disappearance is a fundamental consequence not of technology, but of human psychology. Whenever people learn something sufficiently well, they cease to be aware of it. When you look at a street sign, for example, you absorb its information without consciously performing the act of reading. Computer scientist, economist, and Nobelist Herb Simon calls this phenomenon “compiling”; philosopher Michael Polanyi calls it the “tacit dimension”; psychologist J. J. Gibson calls it “visual invariants”; philosophers Georg Gadamer and Martin Heidegger call it “the horizon” and the “ready-to-hand”; John Seely Brown at PARC calls it the “periphery”. All say, in essence, that only when things disappear in this way are we freed to use them without thinking and so to focus beyond them on new goals.

The idea of integrating computers seamlessly into the world at large runs counter to a number of present-day trends. “Ubiquitous computing” in this context does not just mean computers that can be carried to the beach, jungle or airport. Even the most powerful notebook computer, with access to a worldwide information network, still focuses attention on a single box. By analogy to writing, carrying a super-laptop is like owning just one very important book. Customizing this book, even writing millions of other books, does not begin to capture the real power of literacy.

Furthermore, although ubiquitous computers may employ sound and video in addition to text and graphics, that does not make them “multimedia computers.” Today’s multimedia machine makes the computer screen into a demanding focus of attention rather than allowing it to fade into the background.

Perhaps most diametrically opposed to our vision is the notion of “virtual reality,” which attempts to make a world inside the computer. Users don special goggles that project an artificial scene on their eyes; they wear gloves or even body suits that sense their motions and gestures so that they can move about and manipulate virtual objects. Although it may have its purpose in allowing people to explore realms otherwise inaccessible — the insides of cells, the surfaces of distant planets, the information web of complex databases — virtual reality is only a map, not a territory. It excludes desks, offices, other people not wearing goggles and body suits, weather, grass, trees, walks, chance encounters and in general the infinite richness of the universe. Virtual reality focuses an enormous apparatus on simulating the world rather than on invisibly enhancing the world that already exists.

Indeed, the opposition between the notion of virtual reality and ubiquitous, invisible computing is so strong that some of us use the term “embodied virtuality” to refer to the process of drawing computers out of their electronic shells. The “virtuality” of computer-readable data — all the different ways in which it can be altered, processed and analyzed — is brought into the physical world.

How do technologies disappear into the background? The vanishing of electric motors may serve as an instructive example: At the turn of the century, a typical workshop or factory contained a single engine that drove dozens or hundreds of different machines through a system of shafts and pulleys. Cheap, small, efficient electric motors made it possible first to give each machine or tool its own source of motive force, then to put many motors into a single machine.

A glance through the shop manual of a typical automobile, for example, reveals twenty-two motors and twenty-five more solenoids. They start the engine, clean the windshield, lock and unlock the doors, and so on. By paying careful attention it might be possible to know whenever one activated a motor, but there would be no point to it.

Most of the computers that participate in embodied virtuality will be invisible in fact as well as in metaphor. Already computers in light switches, thermostats, stereos and ovens help to activate the world. These machines and more will be interconnected in a ubiquitous network. As computer scientists, however, my colleagues and I have focused on devices that transmit and display information more directly. We have found two issues of crucial importance: location and scale. Little is more basic to human perception than physical juxtaposition, and so ubiquitous computers must know where they are. (Today’s computers, in contrast, have no idea of their location and surroundings.) If a computer merely knows what room it is in, it can adapt its behavior in significant ways without requiring even a hint of artificial intelligence.

Ubiquitous computers will also come in different sizes, each suited to a particular task. My colleagues and I have built what we call tabs, pads and boards: inch-scale machines that approximate active Post-It notes, foot-scale ones that behave something like a sheet of paper (or a book or a magazine), and yard-scale displays that are the equivalent of a blackboard or bulletin board.

How many tabs, pads, and board-sized writing and display surfaces are there in a typical room? Look around you: at the inch scale include wall notes, titles on book spines, labels on controls, thermostats and clocks, as well as small pieces of paper. Depending upon the room you may see more than a hundred tabs, ten or twenty pads, and one or two boards. This leads to our goals for initially deploying the hardware of embodied virtuality: hundreds of computers per room.

Hundreds of computers in a room could seem intimidating at first, just as hundreds of volts coursing through wires in the walls did at one time. But like the wires in the walls, these hundreds of computers will come to be invisible to common awareness. People will simply use them unconsciously to accomplish everyday tasks.

Tabs are the smallest components of embodied virtuality. Because they are interconnected, tabs will expand on the usefulness of existing inch-scale computers such as the pocket calculator and the pocket organizer. Tabs will also take on functions that no computer performs today. For example, Olivetti Cambridge Research Labs pioneered active badges, and now computer scientists at PARC and other research laboratories around the world are working with these clip-on computers roughly the size of an employee ID card. These badges can identify themselves to receivers placed throughout a building, thus making it possible to keep track of the people or objects to which they are attached.

In our experimental embodied virtuality, doors open only to the right badge wearer, rooms greet people by name, telephone calls can be automatically forwarded to wherever the recipient may be, receptionists actually know where people are, computer terminals retrieve the preferences of whoever is sitting at them, and appointment diaries write themselves. No revolution in artificial intelligence is needed–just the proper imbedding of computers into the everyday world. The automatic diary shows how such a simple thing as knowing where people are can yield complex dividends: meetings, for example, consist of several people spending time in the same room, and the subject of a meeting is most likely the files called up on that room’s display screen while the people are there.
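
A small sketch can make the diary example concrete. Everything below is illustrative only; the data layout, names and thresholds are invented for this sketch rather than taken from PARC's actual badge system. Given a log of badge sightings and of the files shown on each room's display, it groups people who overlapped in the same room and records those files as the likely subject of the meeting.

```python
from collections import defaultdict

# Hypothetical badge log: (person, room, start_minute, end_minute).
sightings = [
    ("sal",  "conf-2",    540, 600),
    ("joe",  "conf-2",    545, 600),
    ("mary", "conf-2",    550, 595),
    ("sal",  "office-12", 610, 700),
]

# Hypothetical display log: files called up on a room's screen, by minute.
display_log = {("conf-2", 552): "draft-proposal.txt"}

def infer_meetings(sightings, min_people=2, min_overlap=30):
    """Group stays that overlap in one room into candidate meetings."""
    by_room = defaultdict(list)
    for person, room, start, end in sightings:
        by_room[room].append((person, start, end))
    meetings = []
    for room, stays in by_room.items():
        # Naive overlap test: the time window shared by everyone seen in the room.
        start = max(s for _, s, _ in stays)
        end = min(e for _, _, e in stays)
        people = {p for p, _, _ in stays}
        if len(people) >= min_people and end - start >= min_overlap:
            subject = [f for (r, t), f in display_log.items()
                       if r == room and start <= t <= end]
            meetings.append({"room": room, "people": sorted(people),
                             "minutes": (start, end), "subject": subject})
    return meetings

print(infer_meetings(sightings))
```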

My colleague Roy Want has designed a tab incorporating a small display that can serve simultaneously as an active badge, calendar and diary. It will also act as an extension to computer screens: instead of shrinking a program window down to a small icon on the screen, for example, a user will be able to shrink the window onto a tab display. This will leave the screen free for information and also let people arrange their computer-based projects in the area around their terminals, much as they now arrange paper-based projects in piles on desks and tables. Carrying a project to a different office for discussion is as simple as gathering up its tabs; the associated programs and files can be called up on any terminal.

The next step up in size is the pad, something of a cross between a sheet of paper and current laptop and palmtop computers. Bob Krivacic at PARC has built a prototype pad that uses two microprocessors, a workstation-sized display, a multi-button stylus, and a radio network that can potentially handle hundreds of devices per person per room.

Pads differ from conventional portable computers in one crucial way. Whereas portable computers go everywhere with their owners, the pad that must be carried from place to place is a failure. Pads are intended to be “scrap computers” (analogous to scrap paper) that can be grabbed and used anywhere; they have no individualized identity or importance.

One way to think of pads is as an antidote to windows. Windows were invented at PARC and popularized by Apple in the Macintosh as a way of fitting several different activities onto the small space of a computer screen at the same time. In twenty years computer screens have not grown much larger. Computer window systems are often said to be based on the desktop metaphor–but who would ever use a desk whose surface area is only 9″ by 11″?

Pads, in contrast, use a real desk. Spread many electronic pads around on the desk, just as you spread out papers. Have many tasks in front of you and use the pads as reminders. Go beyond the desk to drawers, shelves, coffee tables. Spread the many parts of the many tasks of the day out in front of you to fit both the task and the reach of your arms and eyes, rather than to fit the limitations of CRT glass-blowing. Someday pads may even be as small and light as actual paper, but meanwhile they can fulfill many more of paper’s functions than can computer screens.

Yard-size displays (boards) serve a number of purposes: in the home, video screens and bulletin boards; in the office, bulletin boards, whiteboards or flip charts. A board might also serve as an electronic bookcase from which one might download texts to a pad or tab. For the time being, however, the ability to pull out a book and place it comfortably on one’s lap remains one of the many attractions of paper. Similar objections apply to using a board as a desktop; people will have to get used to using pads and tabs on a desk as an adjunct to computer screens before taking embodied virtuality even further.

Boards built by Richard Bruce and Scott Elrod at PARC currently measure about 40 by 60 inches and display 1024×768 black-and-white pixels. To manipulate the display, users pick up a piece of wireless electronic “chalk” that can work either in contact with the surface or from a distance. Some researchers, using themselves and their colleagues as guinea pigs, can hold electronically mediated meetings or engage in other forms of collaboration around a liveboard. Others use the boards as testbeds for improved display hardware, new “chalk” and interactive software.

For both obvious and subtle reasons, the software that animates a large, shared display and its electronic chalk is not the same as that for a workstation. Switching back and forth between chalk and keyboard may involve walking several steps, and so the act is qualitatively different from using a keyboard and mouse. In addition, body size is an issue — not everyone can reach the top of the board, so a Macintosh-style menu bar may not be a good idea.

We have built enough liveboards to permit casual use: they have been placed in ordinary conference rooms and open areas, and no one need sign up or give advance notice before using them. By building and using these boards, researchers start to experience and so understand a world in which computer interaction casually enhances every room. Liveboards can usefully be shared across rooms as well as within them. In experiments instigated by Paul Dourish of EuroPARC and Sara Bly and Frank Halasz of PARC, groups at widely separated sites gathered around boards — each displaying the same image — and jointly composed pictures and drawings. They have even shared two boards across the Atlantic.

Liveboards can also be used as bulletin boards. There is already too much data for people to read and comprehend all of it, and so Marvin Theimer and David Nichols at PARC have built a prototype system that attunes its public information to the people reading it. Their “scoreboard” requires little or no interaction from the user other than to look and to wear an active badge.

Prototype tabs, pads and boards are just the beginning of ubiquitous computing. The real power of the concept comes not from any one of these devices; it emerges from the interaction of all of them. The hundreds of processors and displays are not a “user interface” like a mouse and windows; they are a pleasant and effective “place” to get things done.

What will be most pleasant and effective is that tabs can animate objects previously inert. They can beep to help locate mislaid papers, books or other items. File drawers can open and show the desired folder — no searching. Tabs in library catalogs can make active maps to any book and guide searchers to it, even if it is off the shelf, left on a table by its last reader.

In presentations, the size of text on overhead slides, the volume of the amplified voice, even the amount of ambient light, can be determined not by accident or guess but by the desires of the listeners in the room at that moment. Software tools for instant votes and consensus checking are already in specialized use in electronic meeting rooms of large corporations; tabs can make them widespread.

The technology required for ubiquitous computing comes in three parts: cheap, low-power computers that include equally convenient displays, a network that ties them all together, and software systems implementing ubiquitous applications. Current trends suggest that the first requirement will easily be met. Flat-panel displays containing 640×480 black-and-white pixels are now common. This is the standard size for PCs and is also about right for television. As long as laptop, palmtop and notebook computers continue to grow in popularity, display prices will fall, and resolution and quality will rise. By the end of the decade, a 1000×800-pixel high-contrast display will be a fraction of a centimeter thick and weigh perhaps 100 grams. A small battery will provide several days of continuous use.

Larger displays are a somewhat different issue. If an interactive computer screen is to match a whiteboard in usefulness, it must be viewable from arm’s length as well as from across a room. For close viewing the density of picture elements should be no worse than on a standard computer screen, about 80 per inch. Maintaining a density of 80 pixels per inch over an area several feet on a side implies displaying tens of millions of pixels. The biggest computer screen made today has only about one fourth this capacity. Such large displays will probably be expensive, but they should certainly be available.
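
The arithmetic behind "tens of millions of pixels" is easy to check. The board dimensions below are an assumed example; the article only says "several feet on a side":

```python
# 80 pixels per inch over an assumed 4 ft x 3 ft whiteboard-sized display.
dpi = 80
width_in, height_in = 4 * 12, 3 * 12            # 48 x 36 inches
pixels = (dpi * width_in) * (dpi * height_in)   # 3840 x 2880
print(f"{pixels / 1e6:.1f} million pixels")     # ~11 million; a larger board needs more
```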

Central-processing unit speeds, meanwhile, reached a million instructions per second in 1986 and continue to double each year. Some industry observers believe that this exponential growth in raw chip speed may begin to level off about 1994, but that other measures of performance, including power consumption and auxiliary functions, will still improve. The 100-gram flat-panel display, then, might be driven by a single microprocessor chip that executes a billion operations per second and contains 16 megabytes of onboard memory along with sound, video and network interfaces. Such a processor would draw, on average, a few percent of the power required by the display.
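
The doubling curve in this paragraph can be reproduced in a couple of lines; the end year is my own extrapolation, since the text only says "by the end of the decade":

```python
# One million instructions per second in 1986, doubling every year.
for year in range(1986, 1997):
    ips = 1_000_000 * 2 ** (year - 1986)
    print(year, f"{ips:,.0f} instructions/sec")
# Around 1996 the curve passes one billion operations per second.
```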

Auxiliary storage devices will augment the memory capacity. Conservative extrapolation of current technology suggests that match-book size removable hard disks (or the equivalent nonvolatile memory chips) will store about 60 megabytes each. Larger disks containing several gigabytes of information will be standard, and terabyte storage — roughly the capacity of the Library of Congress — will be common. Such enormous stores will not necessarily be filled to capacity with usable information. Abundant space will, however, allow radically different strategies of information management. A terabyte of space makes deleting old files virtually unnecessary, for example.

Although processors and displays should be capable of offering ubiquitous computing by the end of the decade, trends in software and network technology are more problematic. Software systems today barely take any advantage of the computer network. Trends in “distributed computing” are to make networks appear like disks, memory, or other non-networked devices, rather than to exploit the unique capabilities of physical dispersion. The challenges show up in the design of operating systems and window systems.

Today’s operating systems, like DOS and Unix, assume a relatively fixed configuration of hardware and software at their core. This makes sense for both mainframes and personal computers, because hardware or operating system software cannot reasonably be added without shutting down the machine. But in an embodied virtuality, local devices come and go, and depend upon the room and the people in it. New software for new devices may be needed at any time, and you’ll never be able to shut off everything in the room at once. Experimental “micro-kernel” operating systems, such as those developed by Rick Rashid at Carnegie-Mellon University and Andy Tanenbaum at Vrije University in Amsterdam, offer one solution. Future operating systems based around tiny kernels of functionality may automatically shrink and grow to fit the dynamically changing needs of ubiquitous computing.

Today’s window systems, like Windows 3.0 and the X Window System, assume a fixed base computer on which information will be displayed. Although they can handle multiple screens, they do not do well with applications that start out in one place (screen, computer, or room) and then move to another. For higher performance they assume a fixed screen and input mode and use the local computer to store information about the application–if any of these change, the window system stops working for that application. Even window systems like X that were designed for use over networks have this problem–X still assumes that an application once started stays put. The solutions to this problem are in their infancy. Systems for shared windows, such as those from Brown University and Hewlett-Packard Corporation, help with windows, but have problems of performance, and do not work for all applications. There are no systems that do well with the diversity of inputs to be found in an embodied virtuality. A more general solution will require changing the kinds of protocols by which application programs and windows interact.
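
One way to picture the kind of protocol change the paragraph calls for is an application that owns its own state and treats displays as attachable surfaces. The sketch below is a hypothetical illustration of that principle, not a real window-system API:

```python
# Hypothetical sketch: an application that can detach from one display and
# re-attach to another, instead of being bound to the screen it started on.

class Display:
    def __init__(self, name):
        self.name = name
    def draw(self, content):
        print(f"[{self.name}] {content}")

class MigratableApp:
    def __init__(self, document):
        self.document = document   # state lives with the application, not the display
        self.display = None
    def attach(self, display):
        self.display = display
        self.redraw()              # re-render on whatever surface is now bound
    def detach(self):
        self.display = None        # the application keeps running, undisplayed
    def redraw(self):
        if self.display is not None:
            self.display.draw(self.document)

app = MigratableApp("quarterly report, page 3")
app.attach(Display("office workstation"))
app.detach()                       # the user walks to a meeting room
app.attach(Display("liveboard in the conference room"))
```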

The network connecting these computers has its own challenges. On the one hand, data transmission rates for both wired and wireless networks are increasing rapidly. Access to gigabit-per-second wired nets is already possible, although expensive, and will become progressively cheaper. (Gigabit networks will seldom devote all of their bandwidth to a single data stream; instead, they will allow enormous numbers of lower-speed transmissions to proceed simultaneously.) Small wireless networks, based on digital cellular telephone principles, currently offer data rates between two and 10 megabits per second over a range of a few hundred meters. Low-power wireless networks transmitting 250,000 bits per second to each station will eventually be available commercially.

On the other hand, the transparent linking of wired and wireless networks is an unsolved problem. Although some stop-gap methods have been developed, engineers must develop new communication protocols that explicitly recognize the concept of machines that move in physical space. Furthermore the number of channels envisioned in most wireless network schemes is still very small, and the range large (50-100 meters), so that the total number of mobile devices is severely limited. The ability of such a system to support hundreds of machines in every room is out of the question. Single-room networks based on infrared or newer electromagnetic technologies have enough channel capacity for ubiquitous computers, but they can only work indoors.

Present technologies would require a mobile device to have three different network connections: tiny range wireless, long range wireless, and very high speed wired. A single kind of network connection that can somehow serve all three functions has yet to be invented.

Neither an explication of the principles of ubiquitous computing nor a list of the technologies involved really gives a sense of what it would be like to live in a world full of invisible widgets. To extrapolate from today’s rudimentary fragments of embodied virtuality resembles an attempt to predict the publication of Finnegans Wake after just having invented writing on clay tablets. Nevertheless the effort is probably worthwhile:

Sal awakens: she smells coffee. A few minutes ago her alarm clock, alerted by her restless rolling before waking, had quietly asked “coffee?”, and she had mumbled “yes.” “Yes” and “no” are the only words it knows.

Sal looks out her windows at her neighborhood. Sunlight and a fence are visible through one, but through others she sees electronic trails that have been kept for her of neighbors coming and going during the early morning. Privacy conventions and practical data rates prevent displaying video footage, but time markers and electronic tracks on the neighborhood map let Sal feel cozy in her street.

Glancing at the windows to her kids’ rooms she can see that they got up 15 and 20 minutes ago and are already in the kitchen. Noticing that she is up, they start making more noise.

At breakfast Sal reads the news. She still prefers the paper form, as do most people. She spots an interesting quote from a columnist in the business section. She wipes her pen over the newspaper’s name, date, section, and page number and then circles the quote. The pen sends a message to the paper, which transmits the quote to her office.

Electronic mail arrives from the company that made her garage door opener. She lost the instruction manual, and asked them for help. They have sent her a new manual, and also something unexpected — a way to find the old one. According to the note, she can press a code into the opener and the missing manual will find itself. In the garage, she tracks a beeping noise to where the oil-stained manual had fallen behind some boxes. Sure enough, there is the tiny tab the manufacturer had affixed in the cover to try to avoid E-mail requests like her own.

On the way to work Sal glances in the foreview mirror to check the traffic. She spots a slowdown ahead, and also notices on a side street the telltale green in the foreview of a food shop, and a new one at that. She decides to take the next exit and get a cup of coffee while avoiding the jam.

Once Sal arrives at work, the foreview helps her to quickly find a parking spot. As she walks into the building the machines in her office prepare to log her in, but don’t complete the sequence until she actually enters her office. On her way, she stops by the offices of four or five colleagues to exchange greetings and news.

Sal glances out her windows: a grey day in Silicon Valley, 75 percent humidity and 40 percent chance of afternoon showers; meanwhile, it has been a quiet morning at the East Coast office. Usually the activity indicator shows at least one spontaneous urgent meeting by now. She chooses not to shift the window on the home office back three hours — too much chance of being caught by surprise. But she knows others who do, usually people who never get a call from the East but just want to feel involved.

The telltale by the door that Sal programmed her first day on the job is blinking: fresh coffee. She heads for the coffee machine.

Coming back to her office, Sal picks up a tab and “waves” it to her friend Joe in the design group, with whom she is sharing a virtual office for a few weeks. They have a joint assignment on her latest project. Virtual office sharing can take many forms–in this case the two have given each other access to their location detectors and to each other’s screen contents and location. Sal chooses to keep miniature versions of all Joe’s tabs and pads in view and 3-dimensionally correct in a little suite of tabs in the back corner of her desk. She can’t see what anything says, but she feels more in touch with his work when noticing the displays change out of the corner of her eye, and she can easily enlarge anything if necessary.

A blank tab on Sal’s desk beeps, and displays the word “Joe” on it. She picks it up and gestures with it towards her liveboard. Joe wants to discuss a document with her, and now it shows up on the wall as she hears Joe’s voice:

“I’ve been wrestling with this third paragraph all morning and it still has the wrong tone. Would you mind reading it?”

“No problem.”

Sitting back and reading the paragraph, Sal wants to point to a word. She gestures again with the “Joe” tab onto a nearby pad, and then uses the stylus to circle the word she wants:

“I think it’s this term ‘ubiquitous’. It’s just not in common enough use, and makes the whole thing sound a little formal. Can we rephrase the sentence to get rid of it?”

“I’ll try that. Say, by the way Sal, did you ever hear from Mary Hausdorf?”

“No. Who’s that?”

“You remember, she was at the meeting last week. She told me she was going to get in touch with you.”

Sal doesn’t remember Mary, but she does vaguely remember the meeting. She quickly starts a search for meetings in the past two weeks with more than 6 people not previously in meetings with her, and finds the one. The attendees’ names pop up, and she sees Mary. As is common in meetings, Mary made some biographical information about herself available to the other attendees, and Sal sees some common background. She’ll just send Mary a note and see what’s up. Sal is glad Mary did not make the biography available only during the time of the meeting, as many people do…

In addition to showing some of the ways that computers can find their way invisibly into people’s lives, this speculation points up some of the social issues that embodied virtuality will engender. Perhaps key among them is privacy: hundreds of computers in every room, all capable of sensing people near them and linked by high-speed networks, have the potential to make totalitarianism up to now seem like sheerest anarchy. Just as a workstation on a local-area network can be programmed to intercept messages meant for others, a single rogue tab in a room could potentially record everything that happened there.

Even today, although active badges and self-writing appointment diaries offer all kinds of convenience, in the wrong hands their information could be stifling. Not only corporate superiors or underlings, but overzealous government officials and even marketing firms could make unpleasant use of the same information that makes invisible computers so convenient.

Fortunately, cryptographic techniques already exist to secure messages from one ubiquitous computer to another and to safeguard private information stored in networked systems. If designed into systems from the outset, these techniques can ensure that private data does not become public. A well-implemented version of ubiquitous computing could even afford better privacy protection than exists today. For example, schemes based on “digital pseudonyms” could eliminate the need to give out items of personal information that are routinely entrusted to the wires today, such as credit card number, social security number and address.

Jim Morris of Carnegie-Mellon University has proposed an appealing general method for approaching these issues: build computer systems to have the same privacy safeguards as the real world, but no more, so that ethical conventions will apply regardless of setting. In the physical world, for example, burglars can break through a locked door, but they leave evidence in doing so. Computers built according to Morris’s rule would not attempt to be utterly proof against crackers, but they would be impossible to enter without leaving the digital equivalent of fingerprints.

By pushing computers into the background, embodied virtuality will make individuals more aware of the people on the other ends of their computer links. This development carries the potential to reverse the unhealthy centripetal forces that conventional personal computers have introduced into life and the workplace. Even today, people holed up in windowless offices before glowing computer screens may not see their fellows for the better part of each day. And in virtual reality, the outside world and all its inhabitants effectively cease to exist. Ubiquitous computers, in contrast, reside in the human world and pose no barrier to personal interactions. If anything, the transparent connections that they offer between different locations and times may tend to bring communities closer together.

My colleagues and I at PARC believe that what we call ubiquitous computing will gradually emerge as the dominant mode of computer access over the next twenty years. Like the personal computer, ubiquitous computing will enable nothing fundamentally new, but by making everything faster and easier to do, with less strain and mental gymnastics, it will transform what is apparently possible. Desktop publishing, for example, is fundamentally not different from computer typesetting, which dates back to the mid-1960s at least. But ease of use makes an enormous difference.

When almost every object either contains a computer or can have a tab attached to it, obtaining information will be trivial: “Who made that dress? Are there any more in the store? What was the name of the designer of that suit I liked last week?” The computing environment knows the suit you looked at for a long time last week because it knows both of your locations, and it can retroactively find the designer’s name even if it did not interest you at the time.

Sociologically, ubiquitous computing may mean the decline of the computer addict. In the 1910s and 1920s many people “hacked” on crystal sets to take advantage of the new high tech world of radio. Now crystal-and-cat’s whisker receivers are rare, because radios are ubiquitous. In addition, embodied virtuality will bring computers to the presidents of industries and countries for nearly the first time. Computer access will penetrate all groups in society.

Most important, ubiquitous computers will help overcome the problem of information overload. There is more information available at our fingertips during a walk in the woods than in any computer system, yet people find a walk among trees relaxing and computers frustrating. Machines that fit the human environment, instead of forcing humans to enter theirs, will make using a computer as refreshing as taking a walk in the woods.


This is an archive of Mark Weiser’s ubiquitous computing website (ubiq.com) which disappeared from the internet in 2018 some time after Mark Weiser passed away. We wanted to preserve Mark Weiser’s knowledge about ubiquitous computing and are permanently hosting a selection of important pages from ubiq.com.

Windows 10 Spring Creators Update

The new Spring Creators Update for Windows 10 will be released this week. There will be many new features with this release, including the ability to resume past activities in Timeline and a file-sharing feature for nearby devices. Other features are coming as well, including a rebuilt Game Bar with a new Fluent design UI, a diagnostic data viewing tool in the Security and Privacy section, and a Cortana that is reportedly easier to use thanks to a new Organizer interface and My Skills tab.

Read more about the new Spring Creators Update for Windows 10 here:

https://hothardware.com/news/microsoft-confirms-windows-10-april-update-release-build-17134

Unpatchable Nintendo Switch Exploit

 

A newly published exploit for the Nintendo Switch console is unpatchable.  The exploit can’t be fixed via a downloadable patch because the exploit makes use of a vulnerability in the USB recovery mode, circumventing the lock-out operations that would usually protect the bootROM. The flawed bootROM can’t be modified once the chip leaves the factory. Access to the fuses needed to configure the device’s ipatches was blocked when the ODM_PRODUCTION fuse was burned, so no bootROM update is possible.

Nintendo may be able to detect hacked systems when they sign into Nintendo’s online servers. Nintendo could then ban those systems from accessing the servers and disable the hacked Switch’s online functions.

You can read more about the Unpatchable Nintendo Switch Exploit at:

https://arstechnica.com/gaming/2018/04/the-unpatchable-exploit-that-makes-every-current-nintendo-switch-hackable/

Cyrix 5×86 CPU

Cyrix 5×86 (“M1sc”)

Despite having the same name as AMD’s 5×86 processor, the Cyrix 5×86 is a totally different animal. While AMD designed its 5×86 by further increasing the clock on the 486DX4, Cyrix took the opposite approach by modifying its M1 processor core (used for the 6×86 processor) to make a “lite” version to work on 486 motherboards. As such, the Cyrix 5×86 in some ways resembles a Pentium OverDrive (which is a Pentium core modified to work in a 486 motherboard) internally more than it resembles the AMD 5×86. This chip is probably the hardest to classify as either fourth or fifth generation.

The 5×86 employs several architectural features that are normally found only in fifth-generation designs. The pipeline is extended to six stages, and the internal architecture is 64 bits wide. It has a larger (16 KB) primary cache than the 486DX4 chip. It uses branch prediction to improve performance.

The 5×86 was available in two speeds, 100 and 120 MHz. The 5×86-120 is the most powerful chip that will run in a 486 motherboard–it offers performance comparable to a Pentium 90 or 100. The 5×86 is still a clock-tripled design, so it runs in 33 and 40 MHz motherboards. (The 100 MHz version will actually run at 50×2 as well, but normally was run at 33 MHz.) It is a 3 volt design and is intended for a Socket 3 motherboard. It will run in an earlier 486 socket if a voltage regulator is used. I have heard that some motherboards will not run this chip properly so you may need to check with Cyrix if trying to use this chip in an older board. These chips have been discontinued by Cyrix but are still good performers, and for those with a compatible motherboard, as good as you can get. Unfortunately, they are extremely difficult to find now.


General Information

  Manufacturer:                    Cyrix
  Family Name:                     5×86
  Code Name:                       "M1sc"
  Processor Generation:            Fourth
  Motherboard Generation:          Fourth
  Versions:                        5×86-100, 5×86-120
  Introduced:                      1996?
  Variants and Licensed Equivalents:

Speed Specifications                          5×86-100       5×86-120

  Memory Bus Speed (MHz)                      33 / 50        40
  Processor Clock Multiplier                  3.0 / 2.0      3.0
  Processor Speed (MHz)                       100            120
  "P" Rating                                  P75            P90

Benchmarks                                    5×86-100       5×86-120

  iCOMP Rating                                ~610           ~735
  iCOMP 2.0 Rating                            ~67            ~81
  Norton SI                                   264            316
  Norton SI32                                 ~16            19
  CPUmark32                                   ~150           ~180

Physical Characteristics

  Process Technology:              CMOS
  Circuit Size (microns):          0.65
  Die Size (mm^2):                 144
  Transistors (millions):          2.0

Voltage, Power and Cooling

  External or I/O Voltage (V):     3.45
  Internal or Core Voltage (V):    3.45
  Power Management:                SMM
  Cooling Requirements:            Active heat sink

Packaging

  Packaging Style:                 168-Pin PGA
  Motherboard Interface:           Socket 3; or 168-Pin Socket, Socket 1,
                                   Socket 2 (with voltage regulator)

External Architecture                         5×86-100       5×86-120

  Data Bus Width (bits)                       32             32
  Maximum Data Bus Bandwidth (Mbytes/sec)     127.2          152.6
  Address Bus Width (bits)                    32             32
  Maximum Addressable Memory                  4 GB           4 GB
  Level 2 Cache Type                          Motherboard
  Level 2 Cache Size                          Usually 256 KB
  Level 2 Cache Bus Speed                     Same as Memory Bus
  Multiprocessing                             No

Internal Architecture

  Instruction Set:                 x86
  MMX Support:                     No
  Processor Modes:                 Real, Protected, Virtual Real
  x86 Execution Method:            Native

Internal Components

  Register Size (bits):            32
  Pipeline Depth (stages):         6
  Level 1 Cache Size:              16 KB Unified
  Level 1 Cache Mapping:           4-Way Set Associative
  Level 1 Cache Write Policy:      Write-Through, Write-Back
  Integer Units:                   1
  Floating Point Unit / Math Coprocessor:   Integrated
  Instruction Decoders:            1
  Branch Prediction Buffer Size / Accuracy:  !? entries / !? %
  Write Buffers:                   !?

Performance Enhancing Features


The PC Guide
Site Version: 2.2.0 – Version Date: April 17, 2001
© Copyright 1997-2004 Charles M. Kozierok. All Rights Reserved.

This is an archive of Charles M. Kozierok’s PCGuide (pcguide.com) which disappeared from the internet in 2018. We wanted to preserve Charles M. Kozierok’s knowledge about computers and are permanently hosting a selection of important pages from PCGuide.

History of NTFS

Overview and History of NTFS

In the early 1990s, Microsoft set out to create a high-quality, high-performance, reliable and secure operating system: Windows NT. The goal of this operating system was to allow Microsoft to get a foothold in the lucrative business and corporate market–at the time, Microsoft’s operating systems were MS-DOS and Windows 3.x, neither of which had the power or features needed for Microsoft to take on UNIX or other “serious” operating systems. One of the biggest weaknesses of MS-DOS and Windows 3.x was that they relied on the FAT file system. FAT provided few of the features needed for data storage and management in a high-end, networked, corporate environment. To avoid crippling Windows NT, Microsoft had to create for it a new file system that was not based on FAT. The result was the New Technology File System or NTFS.

It is often said (and sometimes by me, I must admit) that NTFS was “built from the ground up”. That’s not strictly an accurate statement, however. NTFS is definitely “new” from the standpoint that it is not based on the old FAT file system. Microsoft did design it based on an analysis of the needs of its new operating system, and not based on something else that they were attempting to maintain compatibility with, for example. However, NTFS is not entirely new, because some of its concepts were based on another file system that Microsoft was involved with creating: HPFS.

Before there was Windows NT, there was OS/2. OS/2 was a joint project of Microsoft and IBM in the late 1980s; the two companies were trying to create the next big success in the world of graphical operating systems. They succeeded, to some degree, depending on how you are measuring success. :^) OS/2 had some significant technical accomplishments, but suffered from marketing and support issues. Eventually, Microsoft and IBM began to quarrel, and Microsoft broke from the project and started to work on Windows NT. When they did this, they borrowed many key concepts from OS/2’s native file system, HPFS, in creating NTFS.

NTFS was designed to meet a number of specific goals. In no particular order, the most important of these are:

  • Reliability: One important characteristic of a “serious” file system is that it must be able to recover from problems without data loss resulting. NTFS implements specific features to allow important transactions to be completed as an integral whole, to avoid data loss, and to improve fault tolerance.
  • Security and Access Control: A major weakness of the FAT file system is that it includes no built-in facilities for controlling access to folders or files on a hard disk. Without this control, it is nearly impossible to implement applications and networks that require security and the ability to manage who can read or write various data.
  • Breaking Size Barriers: In the early 1990s, FAT was limited to the FAT16 version of the file system, which only allowed partitions up to 4 GiB in size. NTFS was designed to allow very large partition sizes, in anticipation of growing hard disk capacities, as well as the use of RAID arrays.
  • Storage Efficiency: Again, at the time that NTFS was developed, most PCs used FAT16, which wastes a significant amount of disk space due to slack. NTFS avoids this problem by using a very different method of allocating space to files than FAT does. (A short sketch of how slack adds up follows this list.)
  • Long File Names: NTFS allows file names to be up to 255 characters, instead of the 8+3 character limitation of conventional FAT.
  • Networking: While networking is commonplace today, it was still in its relatively early stages in the PC world when Windows NT was developed. At around that time, businesses were just beginning to understand the importance of networking, and Windows NT was given some facilities to enable networking on a larger scale. (Some of the NT features that allow networking are not strictly related to the file system, though some are.)
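
The slack problem mentioned under Storage Efficiency is easy to quantify. On a 2 GB FAT16 partition the cluster size is 32 KB, so each file wastes, on average, about half a cluster; the file sizes below are hypothetical examples:

```python
# Illustrative only: wasted "slack" space under 32 KB FAT16 clusters.
def slack_bytes(file_size, cluster_size=32 * 1024):
    """Bytes lost in the unused tail of a file's last cluster."""
    remainder = file_size % cluster_size
    return 0 if remainder == 0 else cluster_size - remainder

files = [1_200, 45_000, 300, 70_000, 32_768]   # hypothetical file sizes in bytes
wasted = sum(slack_bytes(size) for size in files)
print(f"{wasted / 1024:.0f} KB wasted across {len(files)} files")
```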

Of course, there are also other advantages associated with NTFS; these are just some of the main design goals of the file system. There are also some disadvantages associated with NTFS, compared to FAT and other file systems–life is full of tradeoffs. :^) In the other pages of this section we will fully explore the various attributes of the file system, to help you decide if NTFS is right for you.

For their part, Microsoft has not let NTFS lie stagnant. Over time, new features have been added to the file system. Most recently, NTFS 5.0 was introduced as part of Windows 2000. It is similar in most respects to the NTFS used in Windows NT, but adds several new features and capabilities. Microsoft has also corrected problems with NTFS over time, helping it to become more stable, and more respected as a “serious” file system. Today, NTFS is becoming the most popular file system for new high-end PC, workstation and server implementations. NTFS shares the stage with various UNIX file systems in the world of small to moderate-sized business systems, and is becoming more popular with individual “power” users as well.



Commodore Plus/4

Plus/4 – 121 colors in 1984!

Model:           Commodore Plus/4 

Manufactured:    1984 

Processor:       7501/8501 ~0.88MHz when the raster beam is on the
visible screen and ~1.77MHz the rest of the time. (The TED chip
generates the processor frequency). The resulting speed is equal to the
vic-20. A PAL vic-20 is faster than this NTSC machine, but a PAL Plus/4
is just a little faster than a PAL vic-20.

Memory:          64 KB (60671 bytes available in Basic)

Graphics:        TED 7360 (Text Editing Device 7360 HMOS)
          
Hi-Resolution:   320x200                 
                 Colors: 121 (All can be visible at the same time)     
                 Hardware reverse display of characters     
                 Hardware blinking
                 Hardware cursor
                 Smooth scrolling
                 Multicolor 160x200
                 (No sprites)

Sound:           TED (7360)
                 2 voices (two tones or one tone + noise)
"OS"             Basic 3.5
Built in         Tedmon, software:
                 "3-plus-1" = word processor, spreadsheet, database and
                 graphs.

History and thoughts

The Plus/4 was called 264 as a prototype (January 1984) and was supposed to have customer-selectable built in software. But they decided to ship them all with the same built in software and rename the computer Plus/4 (June 1984). (The reason for the long delay was that Commodore’s factories were busy producing C64s). There were other versions available of the same “TED” computer (more or less): The C16 – looks like a black Vic20 with white keys but is the same computer as the Plus/4, only with no built in software (except for Tedmon), only 16kb of ram, and no RS232. It looks like a vic-20 because Commodore intended it as a replacement for the vic-20 when that machine was cancelled in 1984. There was also a C116 with the same features as the C16, but it looked like a Plus/4 with rubber keys. About 400,000 Plus/4s were made (compared to 2.5 million vic-20s and something like 15 million C64s).

There was one big reason why the Plus/4 wasn’t more popular: the C64! Commodore kind of competed with themselves. Let’s list the benefits of the two computers:

 Plus/4:
   * 121 colors (compared to c64's 16)
   * Very powerful basic
   * Built in machine language monitor
   * A little faster
   * Built in software
   * Lower price

 C64:
   * Sprite graphics
   * Better sound
   * Lots of software available
   * All your friends have one
   * Your old vic-20 tape recorder will work without an adapter
   * Your old vic-20 joysticks will work without adapters

Well, which would you choose?

Well, Basic 3.5 is quite powerful. It has commands for graphics, sound, disk handling, error handling etc. I counted 111 commands/functions (compared to 70 for the C64). On the c64, POKE and PEEK are the only way to access graphics, sprites and sound. And with most of those registers being two bytes big and the chips a bit complex to set up, that is quite troublesome and time consuming in Basic. And drawing graphics with lines, circles etc using only Basic on the c64 is just impossible (or would take a year!) On the other hand – if Basic programming doesn’t interest you, but copying pirated games from your friends does, then the c64 is your computer… I mean back then! 😉

There were more reasons than just the c64 for the Plus/4’s lack of success. There are many theories about this on the internet, so instead of just repeating them, I would like to contribute another one: the strange names! Why on earth name the same line of computers so differently! The Plus/4, C16 and C116 are more compatible with each other than a vic-20 with and without memory expansion! And they even look different! I would have made two different computers: “TED-64” (the Plus/4) and “TED-16” (the C16, but in a Plus/4 case).

They would also have normal joystick and tape ports (or adapters included with the computer). The 3-plus-1 software could have been left out and been sold separately on a cartridge to bring down the price of the computer. It could have been sold together with the computer in a bundle at a reduced price if you wanted to. This way the original 264 idea about customer selectable included software could have been doable with all the selectable software on different cartridges.


My impressions

I have just got the Plus/4, but my impression of it so far is very positive. It’s little and neat. I like the Basic and the graphics. The computer has very much of a “Commodore” feeling. I would say it’s like a mix between the vic-20 (for the simplicity, the single graphics/sound chip and the default colors), the C64 (for the similar graphics) and the C128 (for the powerful Basic and the similarities with the 128’s VDC chip features like blinking etc.) The Plus/4 also has the Esc codes that the C128 has. The machine language monitor is also almost the same. But at the same time the Plus/4 is simple and easy to survey like the vic-20. I think it’s a well designed computer. The only thing I don’t like about the Plus/4 is the lack of a Restore key. But there are work-arounds (Runstop+reset for example). I have written some more tips about this in the manuals below.

The same people who designed the Plus/4 (all except one) later designed the C128.

If you plan to get a Plus/4, then you might want to know that the 1541 diskdrive works and that the video cable is the same as for the c64 (at least for the composite and sound connections that my cable uses). But for joysticks you need to make a little adapter, and also for the tape recorder (unless it is of the black type that has a built in adapter).

My Plus/4 is a NTSC machine with a 110V power supply. And living in Sweden I needed to buy a 220->110v converter. The Plus/4 does not need the mains frequency from the PSU (as the C64 does), so a simple converter that generates 110v 50Hz is fine. My Plus/4 has a square power plug. Others have a round one, in which case I could have used a European c64 power supply instead. There are of course PAL Plus/4s as well, but I got mine for free and I like the NTSC display too. No BIG border around the screen like on all PAL Commodores. The NTSC Plus/4 also has a slightly faster key repeat, so it feels a little faster even though the PAL version runs faster. BUT – there is MUCH more PAL software available, it seems…


This is an archive of pug510w’s Dator Museum which disappeared from the internet in 2017. We wanted to preserve the knowledge about the Commodore Plus/4 and are permanently hosting a copy of Dator Museum.

Commodore Plus/4 Service Manual

Convert FAT Disks to NTFS

This article describes how to convert FAT disks to NTFS. See the Terms sidebar for definitions of FAT, FAT32 and NTFS. Before you decide which file system to use, you should understand the benefits and limitations of each of them.

Changing a volume’s existing file system can be time-consuming, so choose the file system that best suits your long-term needs. If you decide to use a different file system, you must back up your data and then reformat the volume using the new file system. However, you can convert a FAT or FAT32 volume to an NTFS volume without formatting the volume, though it is still a good idea to back up your data before you convert.

Note  Some older programs may not run on an NTFS volume, so you should research the current requirements for your software before converting.

Choosing Between NTFS, FAT, and FAT32

You can choose between three file systems for disk partitions on a computer running Windows XP: NTFS, FAT, and FAT32. NTFS is the recommended file system because it is more powerful than FAT or FAT32, and includes features required for hosting Active Directory as well as other important security features. You can use features such as Active Directory and domain-based security only by choosing NTFS as your file system.

Converting to NTFS Using the Setup Program

The Setup program makes it easy to convert your partition to the new version of NTFS, even if it used FAT or FAT32 before. This kind of conversion keeps your files intact (unlike formatting a partition).

Setup begins by checking the existing file system. If it is NTFS, conversion is not necessary. If it is FAT or FAT32, Setup gives you the choice of converting to NTFS. If you don’t need to keep your files intact and you have a FAT or FAT32 partition, it is recommended that you format the partition with NTFS rather than converting from FAT or FAT32. (Formatting a partition erases all data on the partition and allows you to start fresh with a clean drive.) However, it is still advantageous to use NTFS, regardless of whether the partition was formatted with NTFS or converted.

Converting to NTFS Using Convert.exe

A partition can also be converted after Setup by using Convert.exe, whether it previously used FAT, FAT32, or an older version of NTFS. This kind of conversion keeps your files intact (unlike formatting a partition).

To find out more information about Convert.exe

1. After completing Setup, click Start, click Run, type cmd, and then press ENTER.
2. In the command window, type help convert and then press ENTER.

Information about converting FAT volumes to NTFS is made available as shown below.
Converting FAT volumes to NTFS

To convert a volume to NTFS from the command prompt

1. Open Command Prompt. Click Start, point to All Programs, point to Accessories, and then click Command Prompt.
2. In the command prompt window, type: convert drive_letter: /fs:ntfs

For example, typing convert D: /fs:ntfs would convert drive D: to the NTFS format. You can convert FAT or FAT32 volumes to NTFS with this command.

Important  Once you convert a drive or partition to NTFS, you cannot simply convert it back to FAT or FAT32. You will need to reformat the drive or partition, which will erase all data, including programs and personal files, on the partition.

Commodore Computer History Archive

As I train our new computer systems engineers I have found that few of them know anything about the Commodore home computer systems. In the early 1990s, when I first started getting into electronics and computers, Commodores were everywhere. By the mid 90s they were ancient relics. I always had five or six laying around the shop. Most were given to me by customers for spare parts. The majority of them had no issues; they were just outdated. For fun and to train new guys, we repaired many of them over the years. Over time, fewer and fewer of our computer systems engineers had any experience on Commodores. Today, virtually no one under 35 knows what a Commodore computer system is.

The MOS 6502 chip

The reason why a 15-year-old could work on a Commodore was that the systems were all based around simple CPUs. The MOS 6502 was very easy to diagnose issues with and repair. All I needed to work on the circuits was a simple analog volt meter and a reference voltage. Digital voltmeters were very expensive in the 1990s; I don’t think we had one until the late 90s.

For example, most prominent home computer systems and video game systems of the 1980s and 1990s had a MOS 6502 or a derivative within them. These derivative chips were called the 650x or the 6502 family of chips. The Commodore VIC-20, Commodore 64, Apple II, Atari 800, Atari 2600 and NES all had a 6502 or 650x chip in them. Almost everything made from the mid 1970s to the mid 1980s had a connection to the 6502 family. By the late 1980s newer and faster chips by Motorola and Intel had replaced the MOS 6502 family as the primary go-to processor.

Commodore History Disappearing

While I train new field engineers here at Karls Technology I have been looking online for reference materials about Commodores. Back in the 1990s reference material was available at the library, in hobby magazines and on BBSs. Today, I find very little good reference material about Commodores, MOS or the 6502 family of chips. Previously, you could find people who worked for MOS, Commodore or GMT around the internet. As those engineers of yesterday pass away, their knowledge of the history of computing leaves us.

Before the days of blogs, much of early computing history was recorded on engineers’ personal websites. Those websites have gone offline or were hosted by companies that no longer exist.

Computer History Archive

Because this knowledge is leaving us, and much of it exists only offline, we decided to start archiving Commodore, 6502-family, and other early computer history information. We will scan and post below any material we find in offline repositories, and we will also archive personal websites about early computer history here. Our goal is to document as much early computer history as possible.

Text Editing Device TED 7360 Datasheet

Commodore Plus/4 Specifications

Commodore Plus/4 Service Manual

Commodore Semiconductor Group’s Superfund Site from the EPA

Designing Calm Technology by Mark Weiser and John Seely Brown, Xerox PARC, 1995.

Designing Calm Technology

by Mark Weiser and John Seely Brown

Xerox PARC
December 21, 1995

Introduction

Bits flowing through the wires of a computer network are ordinarily invisible. But a radically new tool shows those bits through motion, sound, and even touch. It communicates both light and heavy network traffic. Its output is so beautifully integrated with human information processing that one does not even need to be looking at it or near it to take advantage of its peripheral clues. It takes no space on your existing computer screen, and in fact does not use or contain a computer at all. It uses no software, only a few dollars in hardware, and can be shared by many people at the same time. It is called the “Dangling String”.

Created by artist Natalie Jeremijenko, the “Dangling String” is an 8 foot piece of plastic spaghetti that hangs from a small electric motor mounted in the ceiling. The motor is electrically connected to a nearby Ethernet cable, so that each bit of information that goes past causes a tiny twitch of the motor. A very busy network causes a madly whirling string with a characteristic noise; a quiet network causes only a small twitch every few seconds. Placed in an unused corner of a hallway, the long string is visible and audible from many offices without being obtrusive. It is fun and useful. The Dangling String meets a key challenge in technology design for the next decade: how to create calm technology. 
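The mechanism can be approximated in software. The sketch below is a rough analogue, assuming Python 3 with the psutil package installed: it polls the machine’s network byte counters once a second and turns each burst of traffic into a small twitch, with pulse_motor() serving as a hypothetical stand-in for the motor driver.

    # Rough software analogue of the Dangling String.
    # Assumes Python 3 with the psutil package; pulse_motor() is a
    # hypothetical placeholder for a real actuator (motor, sound, lamp).
    import time
    import psutil

    def pulse_motor(strength):
        # Placeholder: a real installation would pulse a small motor here.
        print("twitch " * max(1, min(strength, 5)))

    counters = psutil.net_io_counters()
    last = counters.bytes_recv + counters.bytes_sent

    while True:
        time.sleep(1)
        counters = psutil.net_io_counters()
        total = counters.bytes_recv + counters.bytes_sent
        delta, last = total - last, total
        if delta:
            # Busy network: strong, frequent twitches. Quiet network: a flicker.
            pulse_motor(delta // 100_000)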

We have struggled for some time to understand the design of calm technology, and our thoughts are still incomplete and perhaps even a bit confused. Nonetheless, we believe that calm technology may be the most important design problem of the twenty-first century, and it is time to begin the dialogue.

The Periphery

Designs that encalm and inform meet two human needs not usually met together. Information technology is more often the enemy of calm. Pagers, cellphones, news services, the World Wide Web, email, TV, and radio bombard us frenetically. Can we really look to technology itself for a solution?

But some technology does lead to true calm and comfort. There is no less technology involved in a comfortable pair of shoes, in a fine writing pen, or in delivering the New York Times on a Sunday morning, than in a home PC. Why is one often enraging, the others frequently encalming? We believe the difference is in how they engage our attention. Calm technology engages both the center and the periphery of our attention, and in fact moves back and forth between the two.

We use “periphery” to name what we are attuned to without attending to explicitly. Ordinarily when driving, our attention is centered on the road, the radio, or our passenger, but not on the noise of the engine. But an unusual noise is noticed immediately, showing that we were attuned to the noise in the periphery and could come quickly to attend to it.

It should be clear that what we mean by the periphery is anything but on the fringe or unimportant. What is in the periphery at one moment may in the next moment come to be at the center of our attention and so be crucial. The same physical form may even have elements in both the center and periphery. The ink that communicates the central words of a text also, through choice of font and layout, peripherally clues us in to the genre of the text.

A calm technology will move easily from the periphery of our attention, to the center, and back. This is fundamentally encalming, for two reasons.

First, by placing things in the periphery we are able to attune to many more things than we could if everything had to be at the center. Things in the periphery are attuned to by the large portion of our brains devoted to peripheral (sensory) processing. Thus the periphery is informing without overburdening.

Second, by recentering something formerly in the periphery we take control of it. Peripherally we may become aware that something is not quite right, as when awkward sentences leave a reader tired and discomforted without knowing why. By moving sentence construction from periphery to center we are empowered to act, either by finding better literature or accepting the source of the unease and continuing. Without centering the periphery might be a source of frantic following of fashion; with centering the periphery is a fundamental enabler of calm through increased awareness and power.

Not all technology need be calm. A calm videogame would get little use; the point is to be excited. But too much design focuses on the object itself and its surface features without regard for context. We must learn to design for the periphery so that we can most fully command technology without being dominated by it. 

Our notion of technology in the periphery is related to the notion of affordances, due to Gibson but popularized by Norman. An affordance is a relationship between an object in the world and the intentions, perceptions, and capabilities of a person. The side of a door that only pushes out affords this action by offering a flat pushplate. The idea of affordance, powerful as it is, tends to describe the surface of a design. For us the term “affordance” does not reach far enough into the periphery, where a design must be attuned to but not attended to.

Three signs of calm technology

Technologies encalm as they empower our periphery. This happens in two ways. First, as already mentioned, a calming technology may be one that easily moves from center to periphery and back. Second, a technology may enhance our peripheral reach by bringing more details into the periphery. An example is a video conference that, by comparison to a telephone conference, enables us to attune to nuances of body posture and facial expression that would otherwise be inaccessible. This is encalming when the enhanced peripheral reach increases our knowledge and so our ability to act without increasing information overload.

The result of calm technology is to put us at home, in a familiar place. When our periphery is functioning well we are tuned into what is happening around us, and so also to what is going to happen and what has just happened. We are connected effortlessly to a myriad of familiar details. This connection to the world around us we call “locatedness”, and it is the fundamental gift that the periphery gives us.

Examples of calm technology

To deepen the dialogue we now examine a few designs in terms of their motion between center and periphery, peripheral reach, and locatedness. Below we consider inner office windows, Internet Multicast, and once again the Dangling String.

Inner office windows

We do not know who invented the concept of glass windows from offices out to hallways. But these inner windows are a beautifully simple design that enhances peripheral reach and locatedness. 

The hallway window extends our periphery by creating a two-way channel for clues about the environment. Whether it is the motion of other people down the hall (it’s time for lunch; the big meeting is starting) or noticing the same person peeking in for the third time while you are on the phone (they really want to see me; I forgot an appointment), the window connects the person inside to the nearby world.

Inner windows also connect with those who are outside the office. A light shining out into the hall means someone is working late; someone picking up their office means this might be a good time for a casual chat. These small clues become part of the periphery of a calm and comfortable workplace.

Office windows illustrate a fundamental property of motion between center and periphery. Contrast them with an open office plan in which desks are separated only by low or no partitions. Open offices force too much to the center. For example, a person hanging out near an open cubicle demands attention by social conventions of privacy and politeness. There is less opportunity for the subtle clue of peeking through a window without eavesdropping on a conversation. The individual, not the environment, must be in charge of moving things from center to periphery and back. 

The inner office window is a metaphor for what is most exciting about the Internet, namely the ability to locate and be located by people passing by on the information highway.

Internet Multicast

A technology called Internet Multicast may become the next World Wide Web (WWW) phenomenon. Sometimes called the MBone (for Multicast backBONE), multicasting was invented by Steve Deering while he was a graduate student at Stanford University.

Whereas the World Wide Web (WWW) connects only two computers at a time, and then only for the few moments that information is being downloaded, the MBone continuously connects many computers at the same time. To use the familiar highway metaphor, for any one person the WWW only lets one car on the road at a time, and it must travel straight to its destination with no stops or side trips. By contrast, the MBone opens up streams of traffic between multiple people and so enables the flow of activities that constitute a neighborhood. Where the WWW ventures timidly to one location at a time before scurrying back home again, the MBone sustains ongoing relationships between machines, places, and people.
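The mechanics can be sketched in a few lines of code. The example below is only an illustration, assuming Python 3 and a hypothetical group address and port: every host that runs it joins the same multicast group and receives the same datagrams at the same time, which is the many-to-many flow described above.

    # Minimal multicast listener sketch (hypothetical group address and port).
    # Every host on the network that joins the group hears the same traffic.
    import socket
    import struct

    GROUP, PORT = "239.1.2.3", 5007  # hypothetical multicast group and port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # Ask the network stack to join the group; the network, not the sender,
    # fans each datagram out to every listener that has joined.
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    while True:
        data, sender = sock.recvfrom(65535)
        print(len(data), "bytes from", sender[0])

Any host on the same multicast-enabled network that sends UDP datagrams to that group and port reaches every listener at once.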

Multicast is fundamentally about increasing peripheral reach, derived from its ability to cheaply support multiple multimedia (video, audio, etc.) connections all day long. Continuous video from another place is no longer television, and no longer video-conferencing, but more like a window of awareness. A continuous video stream brings new details into the periphery: the room is cleaned up, something important may be about to happen; everyone got in late today on the east coast, must be a big snowstorm or traffic tie-up. 

Multicast shares with videoconferencing and television an increased opportunity to attune to additional details. Compared to a telephone or fax, the broader channel of full multimedia better projects the person through the wire. The presence is enhanced by the responsiveness that full two-way (or multiway) interaction brings. 

Like the inner windows, Multicast enables control of the periphery to remain with the individual, not the environment. A properly designed real-time Multicast tool will offer, but not demand. The MBone provides the necessary partial separation for moving between center and periphery that a high bandwidth world alone does not. Less is more, when less bandwidth provides more calmness. 

Multicast at the moment is not an easy technology to use, and only a few applications have been developed by some very smart people. This could also be said of the digital computer in 1945, and of the Internet in 1975. Multicast in our periphery will utterly change our world in twenty years.

Dangling String

Let’s return to the dangling string. At first it creates a new center of attention just by being unique. But this center soon becomes peripheral as the gentle waving of the string moves easily to the background. That the string can be both seen and heard helps by increasing the clues for peripheral attunement.

The dangling string increases our peripheral reach to the formerly inaccessible network traffic. While screen displays of traffic are common, their symbols require interpretation and attention, and do not peripheralize well. The string, in part because it is actually in the physical world, has a better impedance match with our brain’s peripheral nerve centers.

In Conclusion

It seems contradictory to say, in the face of frequent complaints about information overload, that more information could be encalming. It seems almost nonsensical to say that the way to become attuned to more information is to attend to it less. It is these apparently bizarre features that may account for why so few designs properly take into account center and periphery to achieve an increased sense of locatedness. But such designs are crucial. Once we are located in a world, the door is opened to social interactions among shared things in that world. As we learn to design calm technology, we will enrich not only our space of artifacts, but also our opportunities for being with other people. Thus may design of calm technology come to play a central role in a more humanly empowered twenty-first century.

Bibliography

Gibson, J. The Ecological Approach to Visual Perception. New York: Houghton Mifflin, 1979.

Norman, D.A. The Psychology of Everyday Things. New York: Basic Books, 1988.

MBone. http://www.best.com/~prince/techinfo/mbone.html 

Brown, J.S. and Duguid, P. Keeping It Simple: Investigating Resources in the Periphery. To appear in Solving the Software Puzzle. Ed. T. Winograd, Stanford University. Spring 1996. 

Weiser, M. The Computer for the Twenty-First Century. Scientific American. September 1991.

Brown, J.S. http://www.startribune.com/digage/seelybro.htm 

Weiser, M. http://www.ubiq.com/weiser


This is an archive of Mark Weiser’s ubiquitous computing website (ubiq.com) which disappeared from the internet in 2018 some time after Mark Weiser passed away. We wanted to preserve Mark Weiser’s knowledge about ubiquitous computing and are permanently hosting a selection of important pages from ubiq.com.