COMPUTERS (notes from Grolier)
A computer is an apparatus built to perform routine calculations with speed,
reliability, and ease. In addition to this basic function, the advance of
technology has enabled computers to provide numerous services for an
ever-increasing number of people. Since their introduction in the 1940s,
computers have become an integral part of the modern world. Besides the readily
apparent systems found in government sites, industries, offices, and homes,
microcomputers are now also unobtrusively embedded in a multitude of everyday
locations such as automobiles, aircraft, telephones, videocassette machines, and
other common devices.
The three basic types are digital, analog, and hybrid computers. Digital computers
function internally and perform operations exclusively with digital, or
discrete, numbers (see digital technology). As the most familiar type and the
one on which most progress has centered, they are the focus of the following article.
Analog computers use continuously variable parts exclusively for internal
representation of magnitudes and to accomplish their built-in operations (see
analog devices). Hybrid computers use both continuously variable techniques and
discrete digital techniques in operation.
Digital, analog, and hybrid computers are conceptually similar in that they all depend on
outside instructions. In practice, they differ most noticeably in the means they
provide for receiving new programs to do new calculating jobs. Digital computers
receive new programs quite easily, either through manual instructions or by
automatic means. For analog or hybrid computers, however, reprogramming is
likely to involve partial disassembly and reconnection of components. Because
analog computers are assemblies of physical apparatuses arranged so as to enact
the specific type of mathematical relationship for which solutions are to be
computed, the choice of a new relationship may require a new assembly. To the
extent that analog machines can be considered programmable, their program is
rebuilt into their structure for each job.
The most important early computing instrument is the abacus, which has been
known and widely used for more than 2,000 years. Another computing instrument,
the astrolabe, was also in use about 2,000 years ago for navigation.
Blaise Pascal is widely credited with building the first "digital calculating
machine" in 1642. It performed only additions of numbers entered by means
of dials and was intended to help Pascal's father, who was a tax collector. In
1671, Gottfried Wilhelm von Leibniz invented a computer that was built in 1694;
it could add and, by successive adding and shifting, multiply. Leibniz invented
a special "stepped gear" mechanism for introducing the addend digits,
and this mechanism is still in use. The prototypes built by Leibniz and Pascal
were not widely used but remained curiosities until more than a century later,
when Tomas of Colmar (Charles Xavier Thomas) developed (1820) the first
commercially successful mechanical calculator that could add, subtract,
multiply, and divide. A succession of improved "desk-top" mechanical
calculators by various inventors followed, so that by about 1890 the available
built-in operations included accumulation of partial results, storage and
reintroduction of past results, and printing of results, each requiring manual
initiation. These improvements were made primarily to suit commercial users,
with little attention given to the needs of science.
While Tomas of Colmar was developing the desk-top calculator, a series of very
remarkable developments in computers was initiated in Cambridge, England, by
Charles Babbage. Babbage realized (1812) that many long computations, especially
those needed to prepare mathematical tables, consisted of routine operations
that were regularly repeated; from this he surmised that it ought to be possible
to do these operations automatically. He began to design an automatic mechanical
calculating machine, which he called a "difference engine," and by
1822 he had built a small working model for demonstration. With financial help
from the British government, Babbage started construction of a full-scale
difference engine in 1823. It was intended to be steam-powered; fully automatic,
even to the printing of the resulting tables; and commanded by a fixed
instruction program. The difference engine, although of limited flexibility and
applicability, was
conceptually a great advance. Babbage continued work on it for 10 years, but in
1833 he lost interest because he had a "better idea"--the construction
of what today would be described as a general-purpose, fully program-controlled,
automatic mechanical digital computer. Babbage called his machine an
"analytical engine"; the characteristics aimed at by this design show
true prescience, although this could not be fully appreciated until more than a
century later. The plans for the analytical engine specified a parallel decimal
computer operating on numbers (words) of 50 decimal digits and provided with a
storage capacity (memory) of 1,000 such numbers. Built-in operations were to
include everything that a modern general-purpose computer would need, even the
all-important "conditional control transfer" capability, which would
allow instructions to be executed in any order, not just in numerical sequence.
The analytical engine was to use punched cards (similar to those used on a
Jacquard loom), which were to be read into the machine from any of several
reading stations. It was designed to operate automatically, by steam power, with
only one attendant.
Babbage's computers were never completed. Various reasons are advanced for his failure,
most frequently the lack of precision machining techniques at the time. Another
conjecture is that Babbage was working on the solution of a problem that few
people in 1840 urgently needed to solve.
After Babbage there was a temporary loss of interest in automatic digital computers.
Between 1850 and 1900 great advances were made in mathematical physics, and it
came to be understood that most observable dynamic phenomena can be
characterized by differential equations, so that ready means for their solution
and for the solution of other problems of calculus would be helpful. Moreover,
from a practical standpoint, the availability of steam power caused
manufacturing, transportation, and commerce to thrive and led to a period of
great engineering achievement. The designing of railroads and the construction
of steamships, textile mills, and bridges required differential calculus to
determine such quantities as centers of gravity, centers of buoyancy, moments of
inertia, and stress distributions; even the evaluation of the power output of a
steam engine required practical mathematical integration. A strong need thus
developed for a machine that could rapidly perform many repetitive calculations.
Use of Punched Cards by Hollerith
A further step toward automated computation was the introduction of punched cards, which
were first successfully used in connection with computing in 1890 by Herman
Hollerith and James Powers, working for the U.S. Census Bureau. They developed
devices that could automatically read the information that had been punched into
cards, without human intermediation. Reading errors were consequently greatly
reduced, work flow was increased, and, more important, stacks of punched cards
could be used as an accessible memory store of almost unlimited capacity;
furthermore, different problems could be stored on different batches of cards
and worked on as needed.
These advantages were noted by commercial interests and soon led to the development of
improved punch-card business-machine systems by International Business Machines
(IBM), Remington-Rand, Burroughs, and other corporations. These systems used
electromechanical devices, in which electrical power provided mechanical
motion--such as for turning the wheels of an adding machine. Such systems soon
included features to feed in automatically a specified number of cards from a
"read-in" station; perform such operations as addition,
multiplication, and sorting; and feed out cards punched with results. The
machines were slow, typically processing from 50 to 250 cards per minute, with
each card holding up to 80 decimal numbers. At the time, however, punched cards
were an enormous step forward.
Automatic Digital Computers
By the late 1930s punched-card machine techniques had become well established and
reliable, and several research groups strove to build automatic digital
computers. One promising machine, constructed of standard electromechanical
parts, was built by an IBM team led by Howard Hathaway Aiken. Aiken's machine,
called the Harvard Mark I, handled 23-decimal-place numbers (words) and could
perform all four arithmetic operations. Moreover, it had special built-in
programs, or subroutines, to handle logarithms and trigonometric functions. The
Mark I was originally controlled from prepunched paper tape without provision
for reversal, so that automatic "transfer of control" instructions
could not be programmed. Output was by card punch and electric typewriter.
Although the Mark I used IBM rotating counter wheels as key components in
addition to electromagnetic relays, the machine was classified as a relay
computer. It was slow, requiring 3 to 5 seconds for a multiplication, but it was
fully automatic and could complete long computations. Mark I was the first of a
series of computers designed and built under Aiken's direction.
Electronic Digital Computers
The outbreak of World War II produced a desperate need for computing capability,
especially for the military. New weapons systems were produced for which
trajectory tables and other essential data were lacking. In 1942, J. Presper
Eckert, John W. Mauchly, and their associates at the Moore School of Electrical
Engineering of the University of Pennsylvania decided to build a high-speed
electronic computer to do the job. This machine became known as ENIAC, for
Electronic Numerical Integrator and Computer (or Calculator). The size of its
numerical word was 10 decimal digits, and it could multiply two such numbers at
the rate of 300 products per second, by finding the value of each product from a
multiplication table stored in its memory. Although difficult to operate, ENIAC
was still many times faster than the previous generation of relay computers.
ENIAC used 18,000 standard vacuum tubes, occupied 167.3 m² (1,800 ft²) of floor
space and consumed about 180,000 watts of electrical power. It had punched-card
input and output and arithmetically had 1 multiplier, 1 divider-square rooter,
and 20 adders employing decimal "ring counters," which served as
adders and also as quick-access (0.0002 seconds) read-write register storage.
The executable instructions composing a program were embodied in the separate
units of ENIAC, which were plugged together to form a route through the machine
for the flow of computations. These connections had to be redone for each
different problem, together with presetting function tables and switches. This
"wire-your-own" instruction technique was inconvenient, and only with
some license could ENIAC be considered programmable; it was, however, efficient
in handling the particular programs for which it had been designed. ENIAC is
generally acknowledged to be the first successful high-speed electronic digital
computer (EDC) and was productively used from 1946 to 1955. A controversy
developed in 1971, however, over the patentability of ENIAC's basic digital
concepts, the claim being made that another U.S. physicist, John V. Atanasoff,
had already used the same ideas in a simpler vacuum-tube device he built in the
1930s at Iowa State College. In 1973 the court found in favor of the company
using the Atanasoff claim.
The Modern "Stored Program" EDC
Intrigued by the success of ENIAC, the mathematician John von Neumann undertook (1945) a
theoretical study of computation that demonstrated that a computer could have a
very simple, fixed physical structure and yet be able to execute any kind of
computation effectively by means of proper programmed control without the need
for any changes in hardware. Von Neumann contributed a new understanding of how
practical fast computers should be organized and built; these ideas, often
referred to as the stored-program technique, became fundamental for future
generations of high-speed digital computers.
The stored-program technique involves many features of computer design and function
besides the one named; in combination, these features make very-high-speed
operation feasible. Details cannot be given here, but a glimpse may be provided
by considering what 1,000 arithmetic operations per second implies. If each
instruction in a job program were used only once in consecutive order, no human
programmer could generate enough instructions to keep the computer busy.
Arrangements must be made, therefore, for parts of the job program called
subroutines to be used repeatedly in a manner that depends on how the
computation progresses. Also, it would clearly be helpful if instructions could
be altered as needed during a computation to make them behave differently. Von
Neumann met these two needs by providing a special type of machine instruction
called conditional control transfer--which permitted the program sequence to be
interrupted and reinitiated at any point--and by storing all instruction
programs together with data in the same memory unit, so that, when desired,
instructions could be arithmetically modified in the same way as data.
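A toy sketch can make both ideas concrete: instructions and data occupy one shared memory, and a conditional-transfer instruction (here called JNZ, an instruction set invented for this illustration, not modeled on any historical machine) lets the program loop until a data word reaches zero.

```python
# A toy stored-program machine: instructions and data share one memory,
# and JNZ supplies the conditional control transfer.
memory = [
    ("LOAD", 5),   # 0: acc = memory[5]          (fetch the counter)
    ("ADD", 6),    # 1: acc = acc + memory[6]    (add -1)
    ("STORE", 5),  # 2: memory[5] = acc          (write the counter back)
    ("JNZ", 1),    # 3: if acc != 0, jump to 1   (conditional transfer)
    ("HALT",),     # 4: stop
    3,             # 5: data word: a counter
    -1,            # 6: data word: the decrement
]

pc, acc = 0, 0
while True:
    inst = memory[pc]
    op = inst[0]
    if op == "LOAD":
        acc = memory[inst[1]]
        pc += 1
    elif op == "ADD":
        acc = acc + memory[inst[1]]
        pc += 1
    elif op == "STORE":
        memory[inst[1]] = acc
        pc += 1
    elif op == "JNZ":
        pc = inst[1] if acc != 0 else pc + 1
    elif op == "HALT":
        break

print(memory[5])  # the counter has been counted down to 0
```

Because the counter lives in the same memory as the program, the STORE at address 2 modifies a word that the loop then re-reads, which is the essence of the stored-program idea.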
As a result, programming became faster, more flexible, and more efficient, with the
instructions in subroutines performing far more computational work. Frequently
used subroutines did not have to be reprogrammed for each new problem but could
be kept intact in "libraries" and read into memory when needed. Thus,
much of a given program could be assembled from the subroutine library. The
all-purpose computer memory became the assembly place in which parts of a long
computation were stored, worked on piecewise, and assembled to form the final
results. The computer control served as an errand runner for the overall
process. As soon as the advantages of these techniques became clear, the
techniques became standard practice.
The first generation of modern programmed electronic computers to take advantage of
these improvements appeared in 1947. This group included computers using random
access memory (RAM), which is a memory designed to give almost constant access
to any particular piece of information. These machines had punched-card or
punched-tape input and output devices and RAMs of 1,000-word capacity with an
access time of 0.5 microseconds (0.5 × 10⁻⁶ sec); some of them could perform
multiplications in 2 to 4 microseconds. Physically, they were much more compact
than ENIAC: some were about the size of a grand piano and required 2,500 small
electron tubes, far fewer than required by the earlier machines. The
first-generation stored-program computers required considerable maintenance,
attained perhaps 70% to 80% reliable operation, and were used for 8 to 12 years.
Typically, they were programmed directly in machine language, although by the
mid-1950s progress had been made in aspects of advanced programming. These
machines included EDVAC and UNIVAC (see UNIVAC), the first commercially available computers.
Advances in the 1950s
Early in the 1950s two important engineering discoveries changed the image of the
field, from one of fast but often unreliable hardware to an image of relatively
high reliability and even greater capability. These discoveries were the
magnetic-core memory and the transistor-circuit element (see computer memory).
These new technical discoveries rapidly found their way into new models of digital
computers; RAM capacities increased from 8,000 to 64,000 words in commercially
available machines by the early 1960s, with access times of 2 or 3 microseconds.
These machines were very expensive to purchase or to rent and were especially
expensive to operate because of the cost of expanding programming. Such
computers were typically found in large computer centers--operated by industry,
government, and private laboratories--staffed with many programmers and support
personnel. This situation led to modes of operation enabling the sharing of the
high capability available; one such mode is batch processing, in which problems
are prepared and then held ready for computation on a relatively inexpensive
storage medium, such as magnetic drums, magnetic-disk packs, or magnetic tapes.
When the computer finishes with a problem, it typically "dumps" the
whole problem--program and results--on one of these peripheral storage units and
takes in a new problem. Another mode of use for fast, powerful machines is
called time-sharing. In time-sharing the computer processes many waiting jobs in
such rapid succession that each job progresses as quickly as if the other jobs
did not exist, thus keeping each customer satisfied. Such operating modes
require elaborate "executive" programs to attend to the administration
of the various tasks.
Advances in the 1960s
In the 1960s efforts to design and develop the fastest possible computers with the
greatest capacity reached a turning point with the completion of the LARC
machine for Livermore Radiation Laboratories of the University of California by
the Sperry-Rand Corporation, and the Stretch computer by IBM. The LARC had a
core memory of 98,000 words and multiplied in 10 microseconds. Stretch was
provided with several ranks of memory having slower access for the ranks of
greater capacity, the fastest access time being less than 1 microsecond and the
total capacity about 100 million words.
During this period the major computer manufacturers began to offer a range of computer
capabilities and costs, as well as various peripheral equipment--such input
means as consoles and card feeders; such output means as page printers,
cathode-ray-tube displays, and graphing devices; and optional magnetic-tape and
magnetic-disk file storage. These found wide use in business for such
applications as accounting, payroll, inventory control, ordering supplies, and
billing. Central processing units (CPUs) for such purposes did not need to be
very fast arithmetically and were primarily used to access large amounts of
records on file, keeping these up to date. Most computer systems were delivered
for the more modest applications, such as in hospitals for keeping track of
patient records, medications, and treatments given. They are also used in
automated library systems, such as MEDLARS, the National Medical Library
retrieval system, and in the Chemical Abstracts system, where computer records
now on file cover nearly all known chemical compounds.
The trend during the 1970s was, to some extent, away from extremely powerful,
centralized computational centers and toward a broader range of applications for
less-costly computer systems. Most continuous-process manufacturing, such as
petroleum refining and electrical-power distribution systems, now use computers
of relatively modest capability for controlling and regulating their activities.
In the 1960s the programming of applications problems was an obstacle to the
self-sufficiency of moderate-size on-site computer installations, but great
advances in applications programming languages are removing these obstacles.
Applications languages are now available for controlling a great range of
manufacturing processes, for computer operation of machine tools, and for many
other purposes.
Meanwhile, a revolution in computer hardware came about that involved miniaturization of
computer-logic circuitry and of component manufacture by large-scale
integration, or LSI, techniques. In the 1950s it was realized that "scaling
down" the size of electronic digital computer circuits and parts would
increase speed and efficiency and thereby improve performance--if only
manufacturing methods were available to do this. About 1960, photoprinting of
conductive circuit boards to eliminate wiring became highly developed. It then
became possible to build resistors and capacitors into the circuitry by
photographic means (see printed circuit).
In the 1970s vacuum deposition of transistors became common, and entire assemblies,
such as adders, shifting registers, and counters, became available on tiny
"chips." During this decade many companies, some new to the computer
field, introduced programmable minicomputers supplied with software packages.
The size-reduction trend continued with the introduction of personal computers,
which are programmable machines small enough and inexpensive enough to be
purchased and used by individuals (see computer, personal). Many companies, such
as Apple Computer and Radio Shack, introduced very successful personal
computers. Augmented in part by a fad in computer, or video, games, development
of these small computers expanded rapidly.
In the 1980s, very large-scale integration (VLSI), in which hundreds of thousands
of transistors are placed on a single chip, became increasingly common. During
the decade the Japanese government announced a massive plan to design and build
a new generation--the so-called fifth generation--of supercomputers that would
employ new technologies in very large-scale integration. This project, however,
was abandoned by the early 1990s (see artificial intelligence). The enormous
success of the personal computer and resultant advances in microprocessor
technology initiated a process of attrition among giants of the computer
industry. That is, as a result of advances continually being made in the
manufacture of chips, rapidly increasing amounts of computing power could be
purchased for the same basic costs. Microprocessors equipped with ROM, or
read-only memory (which stores constantly used, unchanging programs), now were
also performing an increasing number of process-control, testing, monitoring,
and diagnostic functions, as in automobile ignition-system, engine, and
production-line inspection tasks.
By the early 1990s these changes were forcing the computer industry as a whole to
make striking adjustments. Long-established and more recent giants of the
field--most notably, such companies as IBM, Digital Equipment Corporation, and
Italy's Olivetti--were reducing their work staffs, shutting down factories, and
dropping subsidiaries. At the same time, producers of personal computers
continued to proliferate, as did specialty companies, each company devoting
itself to some special area of manufacture, distribution, or customer service.
Computers continue to dwindle to increasingly convenient sizes for use in offices,
schools, and homes. Programming productivity has not increased as rapidly, and
as a result software has become the major cost of many systems. New programming
techniques such as object-oriented programming, however, have been developed to
help alleviate this problem. The computer field as a whole continues to
experience tremendous growth. As computer and telecommunications technologies
continue to integrate, computer networking, computer mail, and electronic
publishing are just a few of the applications that have matured in recent years.
The most phenomenal growth has been in the development of the Internet, with all
its attendant ramifications.
HOW A DIGITAL COMPUTER WORKS
Digital Encoding and Processing
In order to process numbers and data electronically it is necessary to represent
information as electrical quantities. To represent the ten digits of the
familiar decimal number system, one might choose a set of ten electrical values
and assign one value to each of the ten digits. While this arrangement is
straightforward, it is not employed in computers, because the broad range of
values needed makes practical circuits impossible to build. In addition, when
characters other than numbers are included in the list of items to be processed,
the increased number of distinct values becomes unworkable.
To solve the problem, all data is coded as binary numbers. The most reliable
distinction that can be realized in electrical systems occurs when only two
possible values exist. A lightbulb that may be either on or off is an example;
the two possibilities are distinct and unmistakable. The two binary values zero
and one are used to represent the electrical ideas off and on. These individual
digits are usually referred to as bits. A way of extending the set of possible
representations beyond two is also required. If a string of zeros and ones is
allowed to represent a digit or character, then the number of possible
representations becomes the value of two to the power of the number of bits. For
example, if four bits are used, then there are 2⁴, or sixteen, possible
four-bit sequences that can be built. The set of sequences is as follows: 0000,
0001, 0010, 0011, 0100, 0101, 0110, 0111, 1000, 1001, 1010, 1011, 1100, 1101,
1110, 1111. This set can, of course, be used to represent sixteen characters and
digits rather than only ten. If six-bit sequences are used, then sixty-four
possible characters and digits can be represented. This binary coding of items
is the principal means for representing all data in electronic computers. Many
different lengths of bit sequences are in use. Some common lengths are four,
six, and eight. Typically, the ten digits of decimal arithmetic are represented
by the first ten sequences in the four-bit strings listed above. This is called
the binary-coded-decimal representation.
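The enumeration above can be reproduced in a few lines of Python; the bcd helper, which encodes each decimal digit as one of the first ten four-bit strings, is a name invented for this sketch.

```python
# Enumerate all 2**4 = 16 possible four-bit sequences.
sequences = [format(n, "04b") for n in range(16)]
print(sequences[:4])  # ['0000', '0001', '0010', '0011']

def bcd(number):
    """Binary-coded decimal: encode a non-negative integer
    digit by digit as four-bit strings."""
    return " ".join(format(int(d), "04b") for d in str(number))

print(bcd(409))  # 0100 0000 1001
```

Note that BCD uses only ten of the sixteen available four-bit patterns; the remaining six are simply unused.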
Within the computer, electronic on and off states are realized using basic logic units
called gates. Ordinary light switches are familiar mechanical analogues of such
gates. As the required operations of a computer became more complex, switches
were developed that have a variety of ways in which they can be turned on or
off. In order to systematically describe these ways, two elementary functions
are defined. These are the AND function and the OR function, which operate in
the following manners: the result of the AND function with any number of binary
values is truth if all the values are true; otherwise, it is false. Generally, a
one corresponds to a true value, and a zero corresponds to a false value. The
result of the OR function with any number of binary values is truth if any one
(or more) of the values is true. By applying these two functions along with the
inverse function, which takes any value and produces the opposite value, one can
describe any required activity in digital processing. The logical functions AND
and OR are readily seen to provide for decisions, such as: do A if B AND (NOT C)
are true. Arithmetic can also be described in these logic terms. For example,
addition of two binary digits, A and B, produces a sum term, S, and a carry
term, C, according to the following relationships: S = (A AND (NOT B)) OR
((NOT A) AND B), and C = A AND B. Mathematics of this kind is called Boolean algebra.
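The sum and carry relationships translate directly into code; this half-adder sketch uses Python's boolean operators to stand in for the gates.

```python
def half_adder(a, b):
    """Add two binary digits using only AND, OR, and NOT.

    S = (A AND (NOT B)) OR ((NOT A) AND B)   -- the sum bit
    C = A AND B                              -- the carry bit
    """
    s = (a and not b) or (not a and b)
    c = a and b
    return int(s), int(c)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", half_adder(a, b))
# 1 + 1 yields sum 0 with carry 1: half_adder(1, 1) == (0, 1)
```

Chaining such half adders (with an extra OR for the incoming carry) gives multi-bit addition, which is how the relationships above scale up to full arithmetic.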
In modern electronic computers the transistor is the device that acts as a switch.
When computers using transistors were first built, the size of each transistor
was about 1/8 square-inch. Today, hundreds of transistors can reside in a
comparable space when integrated in a semiconductor chip (see integrated
circuit). The technologies and processes that are used to make microscopic
integrated circuits--such as masking, etching, and epitaxy--are themselves made
possible by computers (see computer-aided design and computer-aided
manufacturing). These technologies take advantage of the unique properties of
silicon to create not only transistors, but also complex conducting pathways and
other elements within single, small chips.
Components of a Digital Computer
A digital computer contains four basic elements: an arithmetic and logic unit, a
memory unit, a control unit, and input-output units. Because computers are now
used for all sorts of purposes, ranging from calculating the route of a
spacecraft to controlling a washing machine, the contents of each of the basic
elements vary greatly. Each element must be present, however.
The arithmetic and logic unit is that part of the computer where data values are
manipulated and calculations performed. This section of the computer usually
contains numerous registers and paths between these registers. Registers are
collections of memory devices that can save particular values. For example, when
numbers are to be added, they must be present at a physical location in the
computer where the addition is to take place. The register can accommodate this.
A circuit then uses the contents of the register in determining the sum. The
idea is identical to the manner in which one would add two numbers on a piece of
paper.
In a typical application a computer performs millions of calculations every second.
It is impossible to keep all values needed in registers at every moment, so that
calculational speed becomes a factor in determining a computer's working
capacity. Mathematical functions other than addition may be built into an
arithmetic and logic unit. These include subtraction, multiplication, and
division.
Not all computers are built to perform calculations with values. Some are designed
to sort out lists of items or select items having a certain property. For
example, a library may have its entire card catalog stored in a computer. When a
borrower is seeking a particular title, the computer is given the task of
searching the library booklist and comparing the desired title with the list.
This is not an arithmetic problem but a logic problem. Logic problems involve
examining values for certain properties and making decisions based on those
properties. All logic problems can be described as a collection of AND, OR, and
inverse (NOT) functions.
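The library search reduces to AND/NOT tests applied to each record; the catalog list and its field names below are invented purely for illustration.

```python
# A miniature card catalog standing in for the library booklist.
catalog = [
    {"title": "A Brief History of Time", "on_loan": False},
    {"title": "The Art of Computer Programming", "on_loan": True},
    {"title": "A Brief History of Time", "on_loan": True},
]

wanted = "A Brief History of Time"

# "Do A if B AND (NOT C)": keep a record if the title matches
# AND the copy is NOT out on loan.
available = [book for book in catalog
             if book["title"] == wanted and not book["on_loan"]]
print(len(available))  # 1
```

No arithmetic occurs here at all; every step is a comparison combined with AND and NOT, exactly the kind of logic problem described above.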
Because all the operands needed for execution of arithmetic and logic functions cannot
be stored in registers, another means is provided: the memory unit (see computer
memory). The memory unit stores data that is not currently being processed. The
operation of a computer requires a list of procedural instructions, which are
also stored in the memory. There are two types of memory: primary memory, which
stores data and instructions that are used most often, and secondary memory,
which stores related information that is used less often. Secondary memory will
typically store a large batch of data that a program in primary memory will
process.
The process of summoning content from the memory is referred to as a fetch
operation. The process of saving a value in memory is called a store, or
sometimes write, operation. When contents stored in memory are needed often, as
in the case of procedural instructions, the computer must fetch the information
quickly and not necessarily in sequential order. Primary memory is employed for
this task. The contents of certain segments of primary memory can be replaced
during execution of a program if data that is needed is stored in secondary
memory.
Secondary memory is sometimes referred to as bulk memory. When information stored in the
secondary memory is to be used, the computer usually must fetch much more than
the specific data needed. For example, the names, heights, weights, ages,
grades, and other data for a class of students might be stored in secondary
memory. In order to determine the average weight of the members of the class,
the computer would fetch all the information into primary memory. It would then
follow instructions for extracting the weight of each member. Finally, an
average would be calculated using the arithmetic and logic unit. Secondary
memory is ordinarily stored on magnetic tapes or disks and segmented into files.
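The class-records example can be sketched as follows; reading the whole file into a list stands in for fetching a batch from secondary memory into primary memory, and the record layout is invented for the example.

```python
import csv
import io

# A small "file" of student records standing in for secondary memory.
records_file = io.StringIO(
    "name,height_cm,weight_kg,age\n"
    "Ada,160,55,20\n"
    "Grace,165,60,21\n"
    "Alan,175,70,22\n"
)

# Fetch: bring the whole batch of records into primary memory (a list).
records = list(csv.DictReader(records_file))

# Extract the weight field from each record and average the values.
weights = [float(r["weight_kg"]) for r in records]
average = sum(weights) / len(weights)
print(round(average, 2))
```

Note that the entire file is fetched even though only one column is needed, mirroring the point that secondary memory delivers much more than the specific data required.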
The process of getting information into and out of the computer is handled by an
input-output unit. Input-output devices bridge the gap between data in the form
used by the computer and data in the form used with a particular access device.
A computer keyboard is such a device; it is a palette of characters including
letters, numbers, punctuation marks, and special terms. The computer processor
interprets only strings of zeros and ones, so that the keyboard must make the
conversion from typed-in characters to binary sequences.
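The keyboard's character-to-binary conversion can be illustrated with Python's built-in ord, which yields each character's numeric code (here ASCII) before it is written out as a string of bits.

```python
def to_bits(text):
    """Convert each character to its 8-bit ASCII code."""
    return " ".join(format(ord(ch), "08b") for ch in text)

print(to_bits("Hi"))  # 01001000 01101001
```

The processor never sees "H" or "i", only the two eight-bit sequences; the reverse conversion happens on output, when bits are turned back into printable characters.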
It is also the job of an input-output unit to manage problems of timing. For
example, the rates at which typists press keys vary greatly. Also, it is
wasteful for the computer to wait for all the necessary characters to be entered
before beginning a processing sequence, and input-output devices can alleviate
this problem. Similar timing problems arise when the computer has completed
operations and then must display results. Most often this is done with printers
and video display terminals. Many times the results are transmitted to another
computer. At certain times during the operations of a computer there may be
several different input and output transactions going on simultaneously. Some of
the burden of managing this activity belongs to the input-output units
themselves, although overall direction is managed by the control unit.
Each of the computer parts described so far--arithmetic and logic, memory, and
input-output--is able to perform its own functions and communicate results. For
these parts to work together effectively, however, it is necessary to have a
control unit that coordinates the actions. To perform this job, a time frame of
reference is established by the control unit. Generally, time within a computer
is divided into moments whose length is determined by a basic rate at which the
components in the control unit can react. This rate is fixed by a precision
clock. Because all parts of the computer are not able to act at the same rate,
longer periods of time are also developed, based on fractions of the fixed rate.
By designing the computer so that every activity is related to the clock source,
events become predictable, accountable, and easy to coordinate.
The control unit initiates an operation by first fetching an instruction from a list
of instructions, called the program. The program is stored in the primary
memory. Each instruction is an exact description of how the hardware units are
to respond at a given moment in the time scheme of the computer. An illustration
of a program is given by a recipe for making a cake. In addition to a list of
ingredients, the recipe contains a step-by-step description of how to manipulate
the ingredients in order to produce the cake. Similarly, the control unit of a
computer is made to proceed through a list of directions describing how to
manipulate certain data. A simple computer is able to perform only one task at a
time. More complex computers have the ability to accomplish several tasks with
one instruction. Regardless of the level of complexity, the control unit acts as
an executive that provides each part of the computer with the information needed
to perform its function, monitors its progress, and determines what to do next.
The arithmetic and logic unit, control unit, and primary memory constitute what
is usually called the central processing unit.
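The fetch-decode-execute cycle that the control unit drives can be sketched as a toy simulation; the three-instruction set below is invented for illustration and does not correspond to any real machine:

```python
# A toy control unit: fetch an instruction from "primary memory" (the
# program list), decode its opcode, and dispatch to the unit that
# handles it.
program = [("LOAD", 5), ("ADD", 3), ("STORE", "result")]
memory = {}
accumulator = 0

pc = 0  # program counter: address of the next instruction to fetch
while pc < len(program):
    opcode, operand = program[pc]      # fetch
    if opcode == "LOAD":               # decode and execute
        accumulator = operand
    elif opcode == "ADD":
        accumulator += operand         # the arithmetic and logic unit's job
    elif opcode == "STORE":
        memory[operand] = accumulator  # a store (write) operation
    pc += 1                            # determine what to do next

print(memory["result"])  # 8
```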
Computer Operation and Programming
As has been said, all computers require a program, or list of instructions, to
guide their activity. Sometimes the program is designed, or resides, within the
hardware of the computer and cannot be changed without redesigning the hardware.
More often, the program is entered as software into memory and may be easily
removed or altered. In fact, computers that do not have a hardware program
usually have several programs available in the memory at the same time. Some of
these programs are employed in the general operation of the computer--that is,
they control the executive functions. These programs are part of what is
referred to as the operating system.
Most computers actually have two sections to the operating system. The first and most
primitive section usually is stored in a permanent memory in the hardware. This
portion of the system provides setup and connections among basic computer
elements, so that more general programs can be entered from a keyboard or loaded
from secondary memory. In today's personal computers this primitive operating
system is the BIOS (basic input/output system) that comes with the hardware. The
second portion of the system is located in secondary storage for execution when
required. These more complex systems, such as DOS and UNIX, perform a wide
variety of file-handling tasks using a very structured command set. Yet another
layer of operating system uses graphics rather than typed commands. Examples are
Microsoft Windows and Apple's Macintosh interface. All these levels have the common goal of making the
machine elements interact for the purpose of transferring data.
Other programs are written by individual users for specific purposes, such as
calculating the payroll for a company, playing video games, balancing
checkbooks, editing texts, and so on. These are called user programs (see
computer programming). Although an operating system is necessary for a computer
to function at all, user programs accomplish the specific goals for which the
computer is employed.
Computer elements interact in a way that is largely determined by the hardware design,
and they are controlled by operating systems, of which there may be many
suitable kinds. The operation of secondary memory and a variety of input-output
devices, however, introduces requirements that reduce the number of suitable
operating systems. An operating system must be chosen so that it can manage the
interactions of particular types of keyboards, modems, disk drives, printers,
video output systems, and so on.
Secondary memories that use magnetic tapes have information stored in sequence along the
tape. In order to gain access to specific data, the tape must be moved and
searched until the desired portion is found. This takes a great deal of time.
Magnetic disks, although more expensive than tape, store information in
concentric rings that can be searched more quickly by scanning in a radial
direction. These differences in access time and organization of contents require
that different types of operating systems be applied to tape and disk systems.
Similar considerations are made for the input-output units. The dominant factor
in the design of operating systems, however, is the interaction with memory.
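The access-time difference between tape and disk can be sketched by contrasting a sequential scan with a computed offset; the fixed-width record format here is an invented simplification:

```python
# Tape-like access: scan from the start until the wanted record appears.
# Disk-like access: jump straight to a computed offset, with no scanning.
records = [f"rec{i:04d}" for i in range(1000)]  # 7 characters each, fixed width
tape = "".join(records)

def tape_read(target):
    pos = 0
    while tape[pos:pos + 7] != target:  # sequential search along the "tape"
        pos += 7
    return tape[pos:pos + 7]

def disk_read(index):
    offset = index * 7                  # computed address: direct access
    return tape[offset:offset + 7]

print(tape_read("rec0500") == disk_read(500))  # True
```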
Computer programs are referred to as either software (the general term) or, sometimes,
firmware. Both are a set of instructions that reside in memory for execution.
Programs are called firmware when the instructions are located in a ROM
(read-only memory), which is unalterable during operation. The BIOS of a machine
is firmware. Instructions executed from a RAM (random access memory) are called
software; this constitutes most of today's programs. Programs are written for a
number of different levels within the computer and in a number of different
programming languages (see computer languages). A computer is capable of
performing a particular function once given a signal, and there are often
several ways in which it can perform the function. Therefore it must be further
instructed to use a certain method. An enumeration of the functions of a
computer, along with its methods for performing the functions, is called the
instruction set of the computer.
The instruction set can be viewed from the programmer's perspective or the machine's
perspective. For the machine, any instructions must be encoded in terms of ones
and zeros. From the programmer's perspective, pages filled with ones and zeros
are tedious and difficult to interpret. For this reason acronyms are assigned to
each instruction in the set, making them more readable. As an example, adding
two numbers that are stored in two registers might be given the label ADD A, B.
The language of ones and zeros is called machine language. The language that
uses labels to simplify matters is called assembly language. The process of
converting assembly language into machine language is called assembling. Every
program must eventually be converted into machine language in order to be executed.
Assembly language, although much easier to deal with than machine language, is
nevertheless difficult to use, especially in large and complex instruction sets.
Furthermore, assembly-language programs cannot be used on different computers.
Both of these problems are solved with the use of high-level languages.
Instructions in high-level languages are given in terms that are more readily
understandable than those of assembly-language programs, and such
high-level-language instructions are relatively independent of the particular
computer on which they run. A high-level instruction can be broken down into
several assembly-level instructions. When this process is accomplished by a
computer it is called compiling. A single high-level language will have a
different compiler for each kind of computer on which the language is used. Some
examples of common high-level languages are BASIC, C, and Pascal.
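The assembling step described earlier, in which a mnemonic such as ADD A, B is translated into ones and zeros, can be sketched as a table lookup; the opcodes and register codes here are invented, not those of any real processor:

```python
# A toy assembler: each mnemonic from an invented instruction set is
# looked up in a table and replaced by its binary opcode, and the
# register names A and B become binary operand fields.
OPCODES = {"ADD": "0001", "SUB": "0010", "MOV": "0011"}
REGISTERS = {"A": "00", "B": "01"}

def assemble(line):
    mnemonic, operands = line.split(maxsplit=1)
    regs = [REGISTERS[r.strip()] for r in operands.split(",")]
    return OPCODES[mnemonic] + "".join(regs)

print(assemble("ADD A, B"))  # 00010001
```

A real assembler does the same kind of substitution, along with resolving labels and addresses, for every instruction in a program.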
Computer programming requires that a task be broken down into methodical steps, or, in
other words, an algorithm, that can be understood not only by the computer, but
also by other programmers. After a program is written, it must be checked
thoroughly in order to remove errors. This process is often as time-consuming as
the program writing.
The difficulties that are most frequently encountered with programs are of two
types: logic errors and programming errors. The use of an incorrect series of
steps in the design of an algorithm is called a logic error. Incorrect use of
the programming language is called a programming error. The process of locating
these errors is called debugging. It is an arduous task, but it is crucial,
because the ultimate performance of a computer is entirely dependent on the
strict logic of its programs.
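The distinction between the two error types can be shown with a minimal Python sketch; the off-by-one averaging function is a fabricated example of a logic error:

```python
# A logic error: the program runs without complaint, but the algorithm
# itself is wrong.
def average_wrong(values):
    return sum(values) / (len(values) + 1)  # off-by-one in the divisor

# The corrected algorithm:
def average(values):
    return sum(values) / len(values)

print(average_wrong([2, 4, 6]))  # 3.0 -- plausible-looking, but wrong
print(average([2, 4, 6]))        # 4.0

# A programming error, by contrast, breaks the rules of the language
# itself (a misspelled keyword, an unbalanced parenthesis) and is caught
# by the translator before the program ever runs.
```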
The direct or indirect influence of computers is now nearly universal. Computers are
used in applications as diverse as running a farm, diagnosing a disease, and
designing, constructing, and launching a space vehicle. Science is a field in
which computers have been widely applied from the start. Because the development
of computers has been largely the work of scientists, it is natural that a large
body of computer applications serves the scientist. In order to solve scientific
problems it is inevitable that researchers must deal with the language of
science: mathematics. In attempting to understand more deeply complex natural
phenomena, the scientist must use mathematical relationships that become
increasingly difficult, as well as data that becomes more voluminous. It would
be impossible to manage many of the studies of complex scientific phenomena
without the aid of computers.
Such scientific computer programs inevitably serve the entire population. An area in
which this can be seen and which has experienced a steady growth in computer
technology is farming. When computers are used to analyze data concerning
the feed intake, size, and food content of farm animals, the benefits of such
analyses eventually trickle down to many people, mainly because of efficiencies
in production that result.
The improved accuracy of weather forecasting is another benefit of more powerful
computer programs. Not only do computers help forecast the weather, but by being
able to analyze larger and larger amounts of data, meteorologists are able to
add to the understanding of the science of weather. Computers also make possible
the now familiar satellite pictures of weather systems. The technology required
to place satellites in orbit is also afforded by large-scale computers. Of
course, there is still a long way to go before weather is predicted with great
accuracy, but it is expected that future generations of supercomputers, using
techniques such as parallel processing, will accomplish this goal.
Businesses now use computers extensively and on a worldwide basis. A well-known example is
the banking industry, which is almost entirely dependent on computers. Automated
bank tellers are now ubiquitous and are little more than input-output devices
for a bank's computer. They are also an example of the powerful system called
computer networking (see computer networks), in which computers, databases, and
input-output devices are connected by means of wire, fiber optic cable, or
satellite transmission, sometimes over great distances. The banking business is
typical of many businesses today. The problems of record keeping and
availability of information are similar in all types of businesses, and the
computer is the perfect tool for dealing with such concerns.
A relatively new area for computers is that of communications (see
telecommunications). Communications consist of the flow and control of
information. This is part of what a computer does as it manages the data moving
among the elements within itself. If the concept of a computer is expanded to
include networks of input-output devices rather than a single, complete device,
the result is a communications system. If the memory unit and arithmetic and
logic unit of a computer are both relatively small but control many input-output
devices, then the computer will act more like a message-handling system than a
computational device (see electronic mail). Gains in such areas as voice
recognition, speech synthesis, and computer-networking software point to an
important future for computers in communications, especially as network systems
offer expanding access to the so-called information superhighway (see Internet).
Small, powerful, and low-cost computers for the home have been made possible by
progress in microelectronics (see microcomputer), enabling desktop machines to
perform many of the functions of larger computers. Initially used in home
applications such as video games and record keeping, by the late 1970s personal
computers were proving useful in business and education as well. The development
of more powerful microprocessors and advances in computer networks in the
mid-to-late 1980s and early 1990s enhanced the power of personal computers so
much that they are now widely used even in large corporations. (See computer,
personal; computers in education.)
An important application of personal computers is desktop publishing (see
publishing, desktop). In this growing technology software programs and
inexpensive printers are used to produce text and graphics that are camera ready
for publication. Sitting at a single terminal, a user can write and edit text,
produce such graphics as charts or drawings, lay out text and graphic elements,
and store the results in memory. The results can then be printed out or sent
electronically to a typesetter. Desktop publishing allows individuals to produce
high-quality printed matter inexpensively.
Researchers seek new ways to build better computers. The goals of their efforts are usually
in one or more of the following areas: reducing costs, increasing processing
speeds, increasing capabilities, and making computers easier to use. This last
quality--ease of use--is commonly referred to as "friendliness."
Sometimes improvements involve new devices. At other times they are brought
about by new methods of integrating hardware or software elements.
Memory, both primary and secondary, is one part of the computer that has received
considerable attention over the years. Originally, the memory unit of any
computer was an array of small iron rings that could each be magnetized with
either of two polarities. The resulting process was slow, bulky, and expensive.
Since then, semiconductor chips have become the mainstays of primary memories,
with magnetic tapes and disks handling secondary-memory storage. Other phenomena
continue to be experimented with as possible forms of memory technologies. These
include magnetic bubble devices, electron tunneling devices, and the compact
disc. In the area of magnetic storage, considerable effort has been applied to
find ways of storing information more densely. As a result, at the present time
the cost per bit of disk storage approximately halves each year. Gigabytes (see
byte) of reliable tape storage are available in inexpensive formats for home use.
Progress in semiconductor technologies continues, producing increased processing speeds
and the fitting of more circuitry into less space. Very large-scale integration
(VLSI), the integration of hundreds of thousands of circuits on a single silicon chip,
was achieved by the late 1980s. The products of advances in semiconductors give
designers the freedom to build functions into hardware that previously had to be
provided by software, and computers gain both speed and versatility.
The use of optical means to store information is attractive primarily because the
inherent high frequency of light implies that it should provide high densities.
On the other hand, the human eye is more tolerant of informational errors than
is a computer. For this reason, as well as the difficulty of creating a material
that can be written on repeatedly using laser light, optical storage media were
slow to appear. Optical media that can be repeatedly rewritten were
becoming available in the 1990s, however, and are expected to become an
inexpensive commodity by late in the decade. Meanwhile the consumer use of
compact discs has driven the cost of read-only discs (CD-ROMs) so low that they
are rapidly becoming the preferred memory for software distribution.
Other laser-optical technologies are being developed, however, with optical fibers
already used to transmit information in many networks. The use of optical
methods in actual computer processing is still in the early stages of
development but offers the hope of very fast and efficient computers for the
future. One day computers may have, in addition to optical memories, optical
processors (see optical computing).
Another area of experimentation is the organization of computer parts. Most memory is
organized so that a location is given an address, and contents are found by
locating that address. Content-addressable memory, CAM, is a newer arrangement
style, in which information is located according to its value content. With this
method a computer can search for one item in a group in order to quickly find
the whole group. As an example, if the text of this article were stored in a
CAM, the task of finding a certain sentence would be made easier if the computer
could search for one particular phrase. Content-addressable memories are playing
a role in attempts to model the brain's capabilities. Data flow machines are a
new type of processing unit in which calculations are performed when all the
arguments of the calculation are present. In this way, even the time to fetch an
argument or wait for a previous calculation can be spent doing other useful work.
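Content-addressable lookup can be contrasted with ordinary address-based lookup in a short Python sketch; the stored sentences and addresses are invented:

```python
# Conventional memory: contents are found by address. A content-
# addressable search instead asks which record holds a given value.
memory = {
    0: "Computers are now nearly universal.",
    1: "Secondary memory is sometimes called bulk memory.",
    2: "The control unit coordinates the other parts.",
}

# Address-based fetch: go straight to location 1.
print(memory[1])

# Content-based search: locate any record containing a phrase.
matches = [addr for addr, text in memory.items() if "bulk memory" in text]
print(matches)  # [1]
```

A hardware CAM performs this comparison against all locations in parallel rather than one at a time as the loop above does.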
Another change in computer organization--one that is now used in a growing percentage of
computers--is the use of fewer instructions in order to maximize processing
speeds. This is the basis of so-called reduced instruction set computers,
referred to as RISC computers. As computers find their way into various uses,
architectures are developed that enhance application effectiveness. An
application such as speech compression for use in the cellular phone industry is
now done on a special-purpose computer that is a single integrated circuit
called a digital signal processor (DSP). Its program is entirely self-contained
as firmware in the processor.
A significant amount of interest exists in the field of artificial intelligence.
The technologies and benefits that will derive from this area of study will
undoubtedly filter down to all areas of computer science. Much work in
artificial-intelligence research involves programs built to perform in ways
similar to the way in which humans think. An example is the strategy employed in
games programs, in which the computer keeps track of all possible responses,
both winning and losing, and then builds a decision path that reinforces winning
and deters losing. To an extent this is an algorithm for learning. Despite
certain advances in artificial intelligence, computers can still do no more than
their programs instruct them to do. The greatest commercial successes in this
field have come with expert systems--huge database systems that act like expert
consultants in such fields as medicine and chemical analysis. Parallel
processing, in which computers break instructions down into smaller instructions
to be executed simultaneously, is playing a large role in
artificial-intelligence research due to the great computing speeds. Neural
networks, which consist of layers of weighted sums of various input signals, are
a practical form of computing architecture being used to solve more intuitive
problems such as pattern recognition. A weighted sum is the addition of numbers
that are each multiplied by some value.
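A single neural-network unit as described above, a weighted sum passed through a threshold, can be sketched as follows; the inputs, weights, and the 0.5 threshold are arbitrary illustrative values:

```python
# One unit of a neural network: multiply each input by its weight,
# add the products, and compare the sum against a threshold.
inputs = [0.5, 0.3, 0.9]
weights = [0.4, -0.2, 0.7]

weighted_sum = sum(x * w for x, w in zip(inputs, weights))
output = 1 if weighted_sum > 0.5 else 0

print(round(weighted_sum, 2))  # 0.77
print(output)  # 1
```

Training a network consists of adjusting the weights so that the outputs of many such units, arranged in layers, match the desired responses.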
Computer crime is generally defined as any crime accomplished through special knowledge
of computer technology. Increasing instances of white-collar crime involve
computers as more businesses automate and information becomes an important
asset. Computers are objects of crime when they or their contents are damaged,
as when terrorists attack computer centers with explosives or gasoline, or when
a computer virus--a program capable of altering or erasing computer memory--is
introduced into a computer system. As subjects of crime, computers represent the
electronic environment in which frauds are programmed and executed. An example
is the transfer of money balances in accounts to perpetrators' accounts for
withdrawal. Computers are instruments of crime when used to plan or control such
criminal acts as complex embezzlements or tax-evasion schemes over long periods
of time, or when a computer operator uses a computer to steal valuable
information from an employer.
Computers have been used for most kinds of crime, including fraud, theft, larceny,
embezzlement, burglary, sabotage, espionage, murder, and desktop forgery, since
the first cases were reported in 1958. One study of 1,500 computer crimes
established that most of them were committed by trusted computer users within
businesses--persons with the requisite skills, knowledge, access, and resources.
Much of known computer crime has consisted of entering false data into
computers, which is simpler and safer than the complex process of writing a
program to change data already in the computer. Organized professional
criminals, of course, have been attacking and using computer systems as they
find their old activities and environments being automated.
With the ability of personal computers to manipulate information and access computers
by telephone, increasing numbers of crimes--mostly simple but costly electronic
trespassing, copyrighted-information piracy (especially software piracy; see
computer software), and vandalism--have been perpetrated by computer hobbyists,
known as "hackers," who display a high level of technical expertise.
In addition, as computer networks expand, the occasional aggressively bad
behavior of some users toward others in the network--including obscenity and
threats--has become a matter of increasing concern.
There are no full statistics about the extent of computer crime. Victims often resist
reporting suspected cases, because they can lose more from the embarrassment,
lost reputation, litigation, and other consequential losses than from the acts
themselves. Evidence indicates, however, that the number of cases is rising each
year, because of the increasing number of computers in business applications
where crime has traditionally occurred. The largest recorded crimes involving
insurance, banking, product inventories, and securities have resulted in losses
of tens of millions to billions of dollars--all facilitated by computers.
Prevention and Law Enforcement
The number of business crimes of all types is probably decreasing as a direct result
of increasing automation. When a business activity is carried out with computer
and communication systems, data are better protected against modification,
destruction, disclosure, misappropriation, misrepresentation, denial of use,
misplacement, unauthorized use, and contamination. Computers impose a discipline
on information workers and facilitate use of almost-perfect automated controls
that were never possible when these had to be applied by the workers themselves
under management edict. For example, a control in a computer to detect all
transactions above a certain amount and flag them for later audit works
perfectly every time. Computer hardware and software manufacturers are also
designing computer systems and programs that are more resistant to tampering. In
addition, U.S. legislation, both federal and state, includes laws concerning
privacy (see computers and privacy), credit card fraud, and racketeering. Such
laws provide criminal-justice agencies with tools to fight business crime.
The beginnings of the computer industry, from the first conception of such devices
on through World War II, are described at length in the opening section of the
computer article. Since the end of that war, the computer industry has grown
into one of the biggest and most profitable industries in the United States. It
now comprises thousands of companies, making everything from multimillion-dollar
high-speed supercomputers to printout paper and floppy disks. It employs
millions of people and generates tens of billions of dollars in sales every year.
In 1946, John W. Mauchly, a physicist at the Moore School of Engineering of the
University of Pennsylvania, and John Presper Eckert, Jr., the engineer who had
supervised the design and construction of the first giant digital electronic
calculator, ENIAC (Electronic Numerical Integrator and Computer), left the Moore
School and established the Electronic Control Company, later rechristened the
Eckert-Mauchly Computer Corporation. America's first computer company, it
embarked upon a highly innovative project: the development of a general-purpose
computer system for science, business, and government. In 1950, with their
company short of cash, Eckert and Mauchly sold out to Remington Rand, a highly
diversified corporation. Drawing on Rand's substantial financial resources, they
finally completed their project. The first UNIVAC (Universal Automatic Computer)
was delivered to the Census Bureau in March 1951.
At the time, the International Business Machines Corporation (IBM) dominated the
tabulator business and for a while had shown no real interest in computers. Its
president, Thomas J. Watson, however, appreciated the value of good publicity.
In 1939, Howard Aiken, an engineering professor at Harvard University,
approached Watson with an idea for building a huge electromechanical calculator.
Watson funded the project, and IBM built the Mark I, donating it, with fanfare,
to Harvard in 1943. IBM then went on to build a much more sophisticated
electromechanical computer, the SSEC (Selective Sequence Electronic Calculator),
installing it in a Manhattan showroom in 1948. Then, having noted the success of
UNIVAC and finally realizing that there was a sizable market for computers after
all, IBM undertook a crash program to build a computer system of its own. The
first machine, called the IBM 701, rolled off the assembly line in 1953.
Although IBM was a latecomer to the computer business, its superior reputation,
ambition, financial resources, and marketing skills gradually lifted it above
its competitors. In 1964, IBM consolidated its position with the introduction of
the enormously successful System/360, the first compatible family of computers.
The first computers were made with vacuum tubes; by the late 1950s, computers were
being made out of transistors, which were much smaller, less expensive, more
reliable, and more efficient. In the late 1950s, Robert Noyce, a physicist, and Jack Kilby,
an electrical engineer, independently invented the integrated circuit, a tiny chip
of silicon that contained an entire electronic circuit. The area in and around
Mountain View, Calif. (between San Francisco and San Jose), became the center of
the integrated circuit industry and is now referred to as Silicon Valley.
The invention of the integrated circuit led to the development of small, rugged,
efficient, and relatively inexpensive minicomputers. First produced by the
Digital Equipment Corporation in 1963, they were also later produced by the Data
General Corporation and other firms. Unlike supercomputers and mainframes,
Digital's compact machines could be installed almost anywhere, whether in a
submarine, a laboratory, a bank branch, or a factory.
In 1971, Marcian E. Hoff, Jr., an engineer at the Intel Corporation, located in
Silicon Valley, invented the microprocessor, and another stage in the
development of the computer began. The microprocessor was a central processor on
a chip, and it enabled computer designers to replace dozens of integrated
circuits with a single chip--thereby further shrinking the size and cost of
computers. This paved the way for the development of personal computers (see
computer, personal). The first widely used personal computer was introduced in 1975 by
Microinstrumentation and Telemetry Systems (MITS), a small electronics firm.
Called the Altair 8800, it used an Intel microprocessor and was offered as a
$399 do-it-yourself kit. Other companies, such as Apple, Kaypro, and Morrow,
soon introduced their own versions. The personal computer industry has come a
long way in a very short time. The market for personal computers rose to more
than $70 billion worldwide by the early 1990s, and there was a growing demand
for personal computer hardware (modems, monitors, printers) and personal
computer software such as spreadsheets, word processing, desktop publishing (see
publishing, desktop), and instructional programs. This gave rise to dozens of
ancillary industries while forcing computer giants, most notably IBM, to scale
down their enterprises severely.
The importance of the personal computer cannot be overestimated. At first only
microprocessor hardware and limited software--such as the assemblers needed to
build user application programs--were available. Also, because the machines were
difficult for the average person to use, they remained in the hands of a
relatively small number of home and office users for some years. Throughout the
1980s, however, the microprocessor industry was creating new integrated circuits
many times faster and more capable than previous versions. (By the early 1990s,
the power of the IBM 360 could be bought in integrated-circuit form for about
$100.) It soon became clear that the key to this great market was software.
Software, which had been the province of the giant companies, now began to diversify
greatly. The revolution thus initiated in the computer industry is most notably
exemplified by Bill Gates and his Microsoft company, founded in 1975. In
addition, in the later 1970s, a group in Silicon Valley led by Steve Jobs and
Steve Wozniak, deciding that computer responses needed to be made more
"user-friendly," founded the already-mentioned Apple company and, with
it, the first graphic user interface for computers. This concept removed the
mystique from computers and made them household items, much like television.
Taking the Apple concept, Microsoft in the later 1980s created the Windows
operating system as a graphic platform for all sorts of user-application
programs, opening up the writing of software to companies of all sizes. By 1995
the Windows system was installed on the vast majority of personal computers,
propelling Microsoft to the forefront of the software industry. Conversely, the
manufacture of computer parts spread out among numerous companies, creating an
extremely competitive hardware environment.
The explosive growth of the Internet during the mid-1990s has fueled speculation
that the next big growth area for the computer industry would involve computer
network hardware and software. The technical challenges of providing ease of
access to the world's online services have been solved, but controlling access
to and garnering revenue from online data have proved elusive tasks for most
companies. Meanwhile, the media, telecommunications, computer, and cable
industries have sought to combine their assets and expertise in an effort to
corner the multimedia marketplace.
The central processing unit (CPU) of a computer executes elementary instructions,
and many such operations are required to complete useful tasks. The power of the
computer lies in the rapidity with which it executes these instructions. A
program listing all the elementary instructions needed to perform a complex task
would be long and difficult to write. Fortunately, elementary instructions can
be grouped into commonly used sequences for common tasks such as standard
arithmetic operations and mathematical functions using decimal numbers. In the
early days of electronic computers it was recognized that the computer itself
could be used to translate powerful instructions automatically into sequences of
elementary instructions. Thus the concept of computer languages was born.
Machine and Assembly Languages
The CPU operates by responding to electrical signals. Whether voltage is present
or absent on a particular wire can be represented by the binary digits zero and
one. Machine-language programs are composed of long strings of such binary
numbers. Assembly language is a step up from machine language. The programmer
writes down simplified names of instructions, such as ADD or SUB(tract), and an
assembler program translates a list of such instructions into the
machine-language program--a list of binary numbers. Machine and assembly
languages are called low-level languages; their forms are determined by the
design of the CPU. A low-level program written for one type of CPU is very
difficult to translate into one that can be executed on another type of CPU.
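The assembler's translation step can be sketched in Python. This is a minimal illustration rather than a real assembler: the mnemonics and the opcode numbers used here are invented for the example.

```python
# Minimal sketch of an assembler: translate mnemonic instructions into
# numeric machine words. Opcode values are invented for illustration.
OPCODES = {"LOAD": 0b0001, "ADD": 0b0010, "SUB": 0b0011, "STORE": 0b0100}

def assemble(lines):
    """Turn lines like 'ADD 7' into 8-bit machine words:
    the high 4 bits hold the opcode, the low 4 bits the operand."""
    words = []
    for line in lines:
        mnemonic, operand = line.split()
        word = (OPCODES[mnemonic] << 4) | int(operand)
        words.append(word)
    return words

program = ["LOAD 5", "ADD 7", "STORE 9"]
machine_code = assemble(program)
print([f"{w:08b}" for w in machine_code])  # one binary string per instruction
```

A real assembler also handles symbolic labels and many more instruction formats, but the principle is the same: each mnemonic line becomes one string of binary digits.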
Programmers need to express problems in a form that is independent of the design
of the CPU, using the vocabulary of human language. They also prefer not to
specify every detail of an operation each time it is needed. For such reasons,
high-level, or symbolic, computer languages have been developed. With a
high-level language a user can obtain the sum of two numbers by giving the
computer a command such as "PRINT 512 + 637." For this to happen,
high-level languages must be translated into low-level languages. This is
accomplished, within the computer, by either a compiler or an interpreter.
A compiler translates a source program into machine language. An interpreter
program reads individual words of a source program and immediately executes
corresponding machine-language segments. Interpretation occurs each time the
program is used. Thus, once it has been compiled, a program written in a
compiled language will run more quickly than a program in an interpreted
language. Producing a program to be interpreted is usually easier than producing
a compiled program.
However, fast computers with large amounts of computer memory make possible
compilers that are fast and easy to use. The computer language called BASIC was
once implemented only interpretively. It is now available in sophisticated
versions that allow interpretive execution of simple instructions during program
development yet produce fast compiled machine-language output for rapid
execution of large programs. Compilers that make development and testing of
programs almost as easy as with an interpreter are also now available for
languages usually compiled, such as C, its improved version C++, and Pascal.
These are probably the most widely used languages for developing computer
software for small and medium-size computers.
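The interpreter's read-and-execute cycle can be illustrated with a toy example in Python. This sketch understands only a single invented PRINT command of the form shown earlier; each source line is executed immediately as it is read.

```python
# Toy interpreter: read each source line and execute it immediately.
# It understands only one command, PRINT <number> + <number>.
def interpret(source_lines):
    results = []
    for line in source_lines:
        keyword, expression = line.split(None, 1)
        if keyword != "PRINT":
            raise ValueError(f"unknown command: {keyword}")
        left, _, right = expression.split()
        results.append(int(left) + int(right))  # execute right away
    return results

print(interpret(["PRINT 512 + 637"]))  # [1149]
```

A compiler, by contrast, would translate the whole list of lines into machine language once, so that later runs skip the translation step entirely.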
On large computers used in offices and industry, programs are generally used many
times after they have been written, and compiled languages are used almost
exclusively. In addition to the languages mentioned above, Ada and older
languages such as COBOL are used in these environments.
Elements of Programming Languages
Structured programming languages use control structures, named procedures, and
local variables. Instructions are ordinarily executed one after another, in the
order that they appear in the program. Control structures change this order of
execution, causing repetition or conditional execution of parts of a program.
Examples are FOR...NEXT, IF...THEN...ELSE, WHILE...REPEAT, and DO...UNTIL. Named
procedures are program parts written as separate subprograms and invoked by name
whenever they are required in a program. This makes a main program easier to
understand. In addition, libraries of such prewritten procedures can be shared.
Local variables prevent unexpected and unwanted interactions between parts of a
program. They are particularly important when different parts of a program are
written by different programmers.
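The three features can be seen together in a short Python sketch; the payroll names are invented for illustration. The for loop is a control structure, compute_pay is a named procedure, and its variables are local to it.

```python
# A control structure (the for loop), a named procedure (compute_pay),
# and local variables (rate, pay) in one short example.
def compute_pay(hours):
    rate = 12.50          # local: invisible outside compute_pay
    pay = hours * rate
    return pay

total = 0.0
for hours in [40, 35, 42]:       # control structure: repeat for each employee
    total += compute_pay(hours)  # named procedure invoked by name
print(total)
```

Because rate and pay are local, no other part of the program can accidentally change them, which is exactly the protection that global-only variables fail to provide.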
The importance of these features can be seen by comparing modern languages with
early versions of BASIC, which relied heavily for control upon instructions such
as GOTO. These make programs very difficult to understand, because
an instruction such as GOTO 1500 does not indicate to someone reading the
program what operations are performed at line 1500. Similarly, BASIC's method of
dealing with subprograms was with a CALL instruction. A line such as CALL 2000
is as hard to understand as a GOTO, and for the same reasons. Finally, in old
versions of BASIC, all variables were global, which meant that changing the
value of a variable named X in an obscure subroutine would also change the
value of any X anywhere in the program. These considerations do not seriously
handicap BASIC for small programs, but large programs are very difficult to
write without using more sophisticated techniques.
Although the symbols, words, and styles of different programming languages
appear very different, most languages share many functional features. An example
of this is the loop, used to control repetitive tasks. The following segment of
a program for determining a payroll, written in BASIC, illustrates a loop:
200 INPUT "How many employees"; N
210 FOR I = 1 TO N
220 PRINT "Employee number"; I
230 GOSUB 800: REM Subroutine to calculate pay
240 NEXT I
The user responds to the INPUT statement by entering a number, which becomes the
value of variable N. Then the FOR and NEXT instructions cause the computer
repeatedly to invoke the subroutine that calculates the pay until the loop has
been repeated N times.
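The same loop looks different in another language but works identically. A Python version might read as follows; calculate_pay is a hypothetical stand-in for the subroutine at line 800.

```python
# Python version of the BASIC payroll loop. calculate_pay is a
# hypothetical stand-in for the subroutine at line 800.
def calculate_pay(employee_number):
    return 100.0 * employee_number  # placeholder pay calculation

n = 3  # stands in for the user's response to the INPUT statement
for i in range(1, n + 1):            # FOR I = 1 TO N ... NEXT I
    print("Employee number", i)
    pay = calculate_pay(i)           # GOSUB 800
```

The symbols differ, yet both programs express the same functional feature: repeat a subprogram once per employee.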
The first electronic digital computers were programmed in machine language, but
assembly languages were then developed. In 1956 the first widely used high-level
language was introduced. Called FORTRAN (from FORmula TRANslator), it was
designed for dealing with the complicated calculations of scientists and
engineers. COBOL (from COmmon Business Oriented Language) was introduced soon
after and was designed for business data processing. ALGOL (1960, for
ALGOrithmic Language) was designed as a tool for computer scientists to use for
finding solutions to mathematical or programming problems. LISP (from LISt
Processor) was also developed in the late 1950s. In particular, LISP was
designed for working on problems of artificial intelligence, as in attempts to
make computers translate human languages.
The confusion of many languages led to an attempt in the 1960s to produce a language
that could serve both the business and the scientific-engineering communities.
This project produced the language PL/1 (from Programming Language One). Newer
languages also continued to emerge. BASIC (from Beginner's All-purpose Symbolic
Instruction Code) was developed in the 1960s to give computer novices a readily
understandable programming tool. This was a time when time-sharing operating
systems were making computers more available, particularly in secondary schools
and colleges. APL (from A Programming Language) was also developed for
time-sharing systems, but in this case the goal was to provide a very powerful
language for sophisticated mathematical and scientific calculations.
In the 1970s a number of new general-purpose computer languages emerged. Pascal,
designed as a teaching language and to encourage good programming habits in
students, has largely replaced ALGOL as the language used by computer scientists
for the formal description of algorithms. The economical C language, similar to
Pascal, allows the experienced programmer to write programs that make more
efficient use of computer hardware. Extensions such as C++ incorporate
features needed for object-oriented programming. The language Ada was
commissioned by the U.S. Department of Defense. Like PL/1, it is meant to be a
universal language and is complicated, but because its use is required in U.S.
defense projects its industrial employment is assured.
A number of languages have been developed for specialized applications. These
include FORTH, designed for use in scientific and industrial control and
measurement applications, and PostScript, designed for controlling printers and
other graphics output devices. LOGO and Scheme are both descended from LISP.
LOGO was developed to help very young children learn about computers, and Scheme
is used in university-level teaching of computer science. Japanese researchers
chose PROLOG (from PROgramming LOGic), a newer language that is convenient for
programming logical processes and for making deductions automatically; it is
used for programming expert systems. Occam is capable
of making effective use of parallel processing on large computers that have
multiple CPUs simultaneously working on the same problem.
A digital computer memory is the component of a computer system that provides
storage for the computer program instructions and for data. Internal, or
primary, memory is an essential feature of computers. Many computer operations
involve reading instructions from memory and executing them by reading data from
memory, performing operations, and then writing results back into memory.
Because there is room to store only a limited amount of information in the
primary memory of a computer, permanent storage of large amounts of information
must be accommodated by external, or secondary, memories such as magnetic tapes.
(The subject of secondary memories is discussed in greater detail in information
storage and retrieval.)
The primary memory consists of many storage locations, each of which is uniquely
identified by an "address" number. To read from memory, the address of
the desired information must be supplied to the memory along with a command to
"read." The content of the memory is then produced as output. To write
into memory, the address of the location and the data to be written must be
supplied to the memory along with a "write" command. Thus a memory
component has (as subcomponents) storage locations, data paths for moving
information into and out of storage, and the control logic necessary to carry
out read and write commands. Each read or write operation is called an access.
Sequential access memories, such as
magnetic disks or tape, are usually used for secondary storage. In order to move
from one address to another in such memories, it is necessary to proceed
sequentially through all locations between the two addresses--for example, by
winding a tape. In random access memories (RAMs), on the other hand, all
locations can be accessed directly, and equally rapidly, by electronic
switching. RAMs are used in primary memory because they are much faster.
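The read and write operations described above can be mimicked in Python, with a list standing in for the storage locations; the addresses and values used are arbitrary.

```python
# A toy random access memory: a list of storage locations,
# each reached directly by its address.
class Memory:
    def __init__(self, size):
        self.cells = [0] * size  # storage locations, all initially 0

    def read(self, address):
        return self.cells[address]   # "read" command: address in, data out

    def write(self, address, data):
        self.cells[address] = data   # "write" command: address and data in

ram = Memory(1024)
ram.write(512, 99)
print(ram.read(512))  # 99
```

Because the list index reaches any cell in one step, every location is equally quick to access, which is the defining property of a RAM.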
Memory units frequently read from but seldom or never written into, such as the control
programs in hand calculators or programs that load the operating system into a
computer when power is first applied, are normally constructed with read-only
memories (ROMs). ROMs have permanent information placed in them during
manufacture. Because a program written into a ROM cannot be changed, such
programs are referred to as firmware. ROMs are also random access devices, but
when reference is made to the "RAM" of a computer, normally it is not
the ROM but just the writable RAM that is being discussed.
Programmable ROMs (PROMs) may be manufactured as standard units and then have information
permanently implanted in them by specialized equipment. Erasable programmable
ROMs (EPROMs) may have their programs erased and new information implanted from
time to time. EPROMs that can be erased by ultraviolet light are used
extensively in industry. Newer electrically erasable PROMs (EEPROMs) are now
incorporated in some computers, to allow for firmware upgrading without
replacing built-in circuitry.
The size of a computer memory refers to the amount of information that may be
stored. The smallest unit of information in digital computer systems is the bit,
or binary digit. This unit may have one of two values, 0 or 1, and is the basis
of the binary number system. An addressable unit is the group of bits that
constitutes the information in an addressable storage location. Common sizes for
the addressable unit are 8, 16, 32, and 64 bits. The memory size is specified by
the addressable unit size and by the number of addresses. Modern small computers
(see computer, personal) can be equipped with up to 4 billion addressable
locations, although most are initially equipped with smaller memories--typically
4 to 8 million addressable locations. Larger computers generally have even more.
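The size arithmetic is straightforward, as a short Python fragment shows; the figures chosen match the ranges quoted above.

```python
# Memory size = addressable-unit size (in bits) x number of addresses.
def memory_size_bits(unit_bits, num_addresses):
    return unit_bits * num_addresses

# 8-bit addressable units, 4 million locations:
bits = memory_size_bits(8, 4_000_000)
print(bits // 8, "bytes")  # total size expressed in 8-bit bytes
```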
The part of a computer's central processing unit (CPU) that serves as temporary
storage for data, addresses, and instructions while they are being used is
called a register. The rate of information transfer between the CPU and memory is
determined by the memory cycle time and the path width, or number of bits moved
in each cycle. This is frequently larger than a single addressable unit. Memory
cycle time is the time required by the memory system to carry out a read or
write operation. Cycle times have declined steadily since the introduction of
random access memories (c.1950). By 1995, cycle times of 0.05 microseconds or
less were common.
Until the 1970s most large primary memories used ferrite cores--rings of magnetized
material about a millimeter in diameter, strung like beads on a wire grid. The
direction of magnetization of each core in the memory determined the binary
value it carried. Ferrite-core memories were "nonvolatile"; that is, they
retained their information even after power was removed. Primary memory
is still often called "core" memory, even though most primary memories
now are made up of small integrated circuit chips, each of which, in the present
state of technology, can hold up to four million bits of information. Such
memories are "volatile"--their information is lost when power is
removed--although standby battery power can be used to overcome this. They have
replaced ferrite cores because of their great advantages in terms of speed,
density of storage, power consumption, and cost.
most widely used type is the DRAM, or dynamic RAM, in which information is
stored electrically in arrays of tiny semiconductor capacitors. These can be
either charged or uncharged, corresponding to binary values of 0 and 1. The
electrical charges must be periodically refreshed during operation. Another
type, the static RAM, or SRAM, does not require constant "refreshing" but
uses more circuitry and space. Specialized RAMs made with very-low-power
complementary metal-oxide-silicon (CMOS) integrated circuits are frequently used
in conjunction with small rechargeable batteries to maintain critical
configuration information in computers when power is off. The next generation of
RAMs may incorporate superconductive tunnel junctions that operate at speeds 10
to 100 times faster than present memories (see superconductivity; tunneling).
Several design methods are available to improve primary memory performance. One
technique is called interleaved memory. If two memory modules are built, and
each can be accessed independently to provide instructions and data to the
central processing unit, then the potential exists for transferring twice as
much information in a given time. The full potential is never reached, but the
idea is used to good advantage when odd memory addresses are placed in one
module and even addresses in the other. Thus the name interleave refers to
address interleaving. A second performance improvement method is the insertion
of a higher-performance small memory between the primary memory and the central
processing unit. This memory, called a cache, can be used to hold frequently
referenced instructions or data. This reduces the number of slower primary
memory accesses.
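Address interleaving can be sketched in Python: even addresses go to one module and odd addresses to the other, so consecutive accesses alternate between the two modules.

```python
# Two memory modules; even addresses live in module 0, odd in module 1.
# Consecutive addresses therefore alternate modules, so the two modules
# can be accessed independently.
modules = [dict(), dict()]

def write(address, data):
    modules[address % 2][address // 2] = data

def read(address):
    return modules[address % 2][address // 2]

for addr in range(8):
    write(addr, addr * 10)
print(read(4), read(5))  # served by module 0 and module 1 respectively
```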
Virtual memory is the technique of temporarily storing some of the information in
primary memory in a "swap" area on a secondary memory device. This was
formerly used only in large computers, but fast hard-disk systems now allow it
to be used even in personal computers. It enables a user to have the illusion of
almost unlimited memory space, at the expense of occasional small delays as data
that have been "swapped out" are retrieved from the disk. Modern
operating systems need large amounts of memory to hold multiple programs in
memory simultaneously.
Computer modeling is the use of computers to model objects and to simulate processes.
Computer simulations are valuable because they allow a person to study the
response of a system to conditions that are not easily or safely applied in a
real situation. With a computer model a process can be speeded up or slowed
down. A model can also allow an observer to study how the functioning of an
entire system can be altered by changing individual components of the system.
A computer model is usually defined in mathematical terms with a computer program
(see computer programming). Mathematical equations are constructed that
represent functional relationships within a system. When the program is running,
the mathematical dynamics become analogous to dynamics of the real system.
Results of the program are given in the form of data.
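As a small illustration of such a model, consider a population that grows by a fixed fraction each time step; the growth rate and starting value below are arbitrary.

```python
# A minimal simulation model: one equation (growth by a fixed fraction)
# stepped through time. Rate and initial value are arbitrary.
def simulate(initial, rate, steps):
    values = [initial]
    for _ in range(steps):
        values.append(values[-1] * (1 + rate))  # the model's one equation
    return values

data = simulate(1000.0, 0.05, 3)  # three periods of 5% growth
print(data)
```

Running the model faster or slower than the real process, or changing the rate parameter, is exactly the kind of experiment that would be impractical on the real system.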
Another type of model involves a representation of an object (see computer graphics),
which can be manipulated on a video display terminal in much the same way that a
three-dimensional clay or wood model might be manipulated. This is the basis of
computer-aided design (see computer-aided design and computer-aided
manufacturing).
The success of computer models is highly dependent on the mathematical
representations of systems and on chosen input parameters. For many systems, the
mathematical representations must be extremely complex because there are so many
factors present. Factors are often represented as submodels and interact with
one another. Input parameters consist of conditions that are known at the
beginning of a modeling sequence. They often have to be estimated.
The variety and sophistication of computer models have increased as computers have
become more powerful. Models are now used to study economic growth, employment,
energy and food resources, population, and housing needs on a world scale as
well as on local levels. Many environmental systems have been modeled. The
effect on soil and water of chemical or nuclear contamination can be evaluated
with a computer. A model of a river can show how using water for dams,
irrigation, and power plants will affect water flow at various points. Models
can help in studying soil erosion and planning commercial use of forestland.
In the medical field, computer models are used to predict how molecules and drugs
interact, speeding the development of new pharmaceuticals. Three-dimensional
models of human organs are used to teach anatomy to biology and medical
students, with the advantage that the student can manipulate the computer model
like a real object as graphics and animation reveal otherwise hidden
information. A neural network--a computer approximation of the brain's neural
architecture--aids researchers in understanding brain functions. In engineering,
most designs are developed and tested with computer models. With interactive
computer-aided design, engineers redraw designs quickly and inexpensively, and
the computer aid also allows the user to study the response of the product to
factors such as physical stress, the effects of different materials on physical
properties, and the effects of different designs on cost.
Spreadsheet programs are simple computer models that are widely applicable to business
concerns. For example, they can be used to study how changes in levels of sales
and prices affect profits. Educational models allow students to study structures
and behaviors and to build reasoning and problem-solving skills. Thus students
can learn to weigh social, economic, and political issues as they take on such
roles as city mayor, nuclear plant operator, or even Mother Nature. Continued
innovation on many fronts--including processing power, data storage, and video
compression--will make computer models ever more useful for a wide range of
applications. Developers can be expected to construct increasingly accurate
models of weather phenomena, environmental trends, and other complex studies
that involve huge numbers of variables.
Computer music is any music in which computers are used to transmit musical instructions
to electronic instruments or live performers. The transmissions are in the form
of electrical impulses, which are, in turn, reproduced as sounds.
Max V. Mathews, an electrical engineer, established the pioneering computer music
project at Bell Laboratories, Murray Hill, N.J., in 1957. Intrigued by the
relationship between number and tone in Arnold Schoenberg's twelve-tone piano
music, Mathews began to work on computer music.
In the classic computer music studio, the following series of events (direct
synthesis) occurs. These procedures are still used, though many others have
become available to computer composers in the last 30 years. (1) The composer
programs instructions into a computer in a language that it understands. This is
usually done through a console (often an alphanumeric keyboard). Input media may
include cards, paper tapes, magnetic tapes, and magnetic disks. (2) The
instructions are converted to numbers. (3) The computer performs the functions
that the composer has requested. (4) A digital-to-analog converter converts the
resultant information to varying voltages. (5) These voltages drive one or more
loudspeakers, thus creating sound (see electronic music).
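Step (3), computing the numbers that the digital-to-analog converter of step (4) will turn into voltages, can be sketched in Python for a single sine tone; the frequency and sample rate chosen are arbitrary.

```python
import math

# Direct synthesis, step (3): compute one second of samples for a
# 440-Hz sine tone at an arbitrary rate of 8,000 samples per second.
def sine_samples(freq, sample_rate, seconds):
    n = int(sample_rate * seconds)
    return [math.sin(2 * math.pi * freq * i / sample_rate) for i in range(n)]

samples = sine_samples(440.0, 8000, 1.0)
# A digital-to-analog converter would turn these numbers into voltages.
print(len(samples))
```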
Another compositional technique, analysis-based synthesis, is exemplified by Charles
Dodge's In Celebration (1975). A spoken text is recorded digitally. The
digitized speech information is analyzed, and the resultant information is used
to recombine the speech sounds. Thus the computer both analyzes and synthesizes
sounds. In yet another technique, acoustic sounds--natural or human-made--are
digitally recorded, then modified by the computer in a manner similar to tape
manipulation in concrete music (musique concrète).
The growing sophistication of real-time ("live") computer performance
techniques has freed computer music from unconditional bondage to the studio.
For example, the popular 1983 invention called the MIDI (Musical Instrument
Digital Interface) can expand the capacity of a personal computer to equal or
surpass that of a professional studio system. Using the MIDI, information can be
communicated between computers, electronic keyboards, synthesizers, and any
other necessary modules for the generation of sound. Data to be communicated can
include pitches, dynamic levels, timbres, and other elements common to musical
performance.
Computer music centers, however, remain important for both composition and
research. Twenty years after the inception of the Bell Laboratories project, the
Institut de Recherche et de Coordination Acoustique/Musique (IRCAM) of Pierre
Boulez was inaugurated in Paris. Significant work in computerized music is also
being done at many U.S. universities, including Princeton, Stanford, the
University of Illinois (at Urbana), and the University of California (at San
Diego). Some composers of computer music are Milton Babbitt, Herbert Brun, John
Cage, Loran Carrier, John Chowning, Emmanuel Ghent, Gary Kendall, Dexter
Morrill, Julia Morrison, Dika Newlin, Laurie Spiegel, Morton Subotnick, James
Tenney, and Yannis Xenakis.
Computer networks are interconnections of many computers for the purpose of sharing
resources. They allow communication between users through electronic mail and
"bulletin boards," and they provide access to unique databases (see
database). They can be thought of as information highways over which data are
transported. They speed up processing and data access in busy systems, reduce
costs (as in eliminating paperwork), and offer many other conveniences. Networks
are changing the computing paradigm from "number-crunching" to
communicating. In turn they have spawned industries such as the online industry,
a collection of organizations providing information and communication services
to remote customers via a dial-up modem.
In a computer network the individual stations, called "nodes," may be
computers, terminals, or communication units of various kinds. Networks that are
contained within a building or small geographical area are called local-area
networks, or LANs. They usually operate at relatively high speeds. The Ethernet,
Token Ring, and FDDI (fiber distributed data interface; see fiber optics) are
examples of transmission technologies often used in LANs. Larger networks,
called wide-area networks, or WANs, use a variety of transmission media, such as
telephone lines, to span states, countries, or the entire globe (see Internet).
Networks are designed and constructed in layers. Each layer is defined by a standard,
called a protocol, that defines how information is organized and transmitted
from one node to another. The lowest layers define how bits of data are
packaged, while the highest layers define how the data are fed into an
application running on a desktop or mainframe computer. There are two basic
approaches to data-layer protocol design: packet-switched and circuit-switched.
The packet-switched approach is most popular in LANs because of its reliability
and simplicity. Perhaps the most widely known is the packet-switched
transmission control protocol/Internet protocol (TCP/IP) used by the Internet.
Circuit-switched approaches have traditionally been used by telephone companies to implement WAN
connections. This has been changing, however, as companies convert to digital
communications compatible with computers. For example, the asynchronous transfer
mode (ATM) protocol is a hybrid between circuit- and packet-switched methods.
The ATM divides data into small packets called cells and then routes these cells
over a virtual circuit, much like an ordinary telephone call. A virtual circuit
is a point-to-point connection that may send data over different physical links
during the communication session. The circuit is made "virtual"
because the user never knows that the cells are routed over different links
throughout the session. In either approach, one computer will not be able to
communicate with another unless they use the same protocol. This problem has
been addressed by vendors of devices--called bridges, routers, and hubs--that
convert from one protocol to another. A bridge connects networks, while a router
is a more "intelligent" bridge that can find nodes on the network as
well. A hub is often a sophisticated computer that does conversions, changes
transmission speeds, and routes data to a variety of nodes.
Computers have addresses, analogous to telephone numbers, that uniquely
identify the location (in network geography) of a computer on the Internet. For
example, the IP (Internet protocol) address of a computer might be
18.104.22.168. Because such numbers are
difficult to remember, a naming convention has been invented by users. A name
such as "joe@mist.ece..." is automatically converted into an IP number by a
name-server program. Here, "joe" is the name of the user, "mist" is the name of
the computer, and "ece" is the name of a department (electrical and computer
engineering).
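The conversion a name-server performs can be sketched as a table lookup in Python; the names and numbers in the table are invented for the example.

```python
# Toy name server: a table mapping human-readable names to IP numbers.
# All entries are invented for illustration.
NAME_TABLE = {
    "joe@mist.ece": "18.104.22.168",
    "ann@fog.math": "22.214.171.124",
}

def resolve(name):
    """Convert a name to its IP number, as a name-server program would."""
    return NAME_TABLE[name]

print(resolve("joe@mist.ece"))
```

A real name server distributes this table across many machines and queries them over the network, but the user-visible effect is the same lookup.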
Computer programming is the process or activity of developing programs, or lists of
instructions, that control the operation of a computer. Computer systems consist
of hardware and software. Hardware comprises the physical and electronic parts
of the computer, including computer memory. In contrast, computer software
comprises the programs that reside in the memory and control each hardware
component. Without a program, a computer is as useless as a bus without a
driver.
A number of essential types of programs exist. Operating-system programs control
the most fundamental operations, such as running an applications program and
processing the user's input and output requests. They do much more, however.
They control every aspect of the computer, from connecting to a network to
interpreting what is wanted when a blank diskette is inserted into the
computer's disk-drive device. Applications programs tailor the computer's powers
to performing specific tasks such as computing payrolls, creating mechanical
designs, and processing text. Word processors, spreadsheets, and graphics
programs are all applications programs. Programmers write operating-system and
application programs by encoding each instruction in a way that the computer
can understand.
Programs must be expressed in a precise notation called a programming language
(see computer languages). Programs for elementary processes are in low-level
languages that are cumbersome and difficult for ordinary users to follow.
High-level languages strike a compromise between the precise meanings required by
the machine and the written language of the human programmer. High-level
languages that are used to design applications programs include BASIC, often
used for personal computer programs, and C and C++, used for a wide range of
applications.
Programming may be broken down into stages: requirements definition, design specification,
coding, testing, and maintenance. The requirements stage involves quantifying
the necessary operations to be performed by the program. As an example, for a
payroll program, requirements specify how many paychecks must be processed, what
calculations need to be performed, and how the information managed by the
computer system should be stored. In the design-specification stage, directions
are provided for meeting the requirements by quantifying the processing steps
and the data-storage structures in greater detail. For this an algorithm, or
step-by-step procedure, is devised. Algorithms are mathematically precise
recipes for how to carry out a process.
Coding is the stage in which the algorithms, data structures, and program design are
turned into steps expressed in a chosen programming language. Particular care
must be taken in this step to make certain the computer is given precise and
correct (detailed) commands to carry out what the requirements demand. Because
computers are not intelligent, very subtle errors can creep into a program and
go undetected by users. This is such a major problem with programming that a
special step, testing, is needed. Testing is the stage in which the program is
verified as being correct with respect to the requirements and design
specification. A number of testing methods are applied to catch a variety of
errors. For example, a simple error would occur if the program attempted to
divide by zero or to input incorrect data. More sophisticated errors include
incorrect calculations, attempts to process numbers too large for the machine's
capacity, and steps performed in the wrong sequence.
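Testing of the kind described can be sketched in Python with assertions; average_rate is a hypothetical function under test, and the divide-by-zero case is one of the simple errors mentioned above.

```python
# A hypothetical function under test: average pay per hour.
def average_rate(total_pay, hours):
    if hours == 0:
        raise ValueError("hours must be nonzero")  # guard against divide by zero
    return total_pay / hours

# Simple tests: check a normal case and the divide-by-zero guard.
assert average_rate(500.0, 40) == 12.5
try:
    average_rate(500.0, 0)
    raise AssertionError("expected a ValueError")
except ValueError:
    pass
print("tests passed")
```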
Once the program is tested and found to be correct, it is released to its users. The
final stage is maintenance, in which enhancements and corrections are made. It
is the most lengthy and costly stage of programming. To illustrate all these
stages, consider the design of a program (at the end of this article) to find
the sum of a list of real numbers.
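The example program the article refers to is not reproduced here; a minimal sketch of the task in Python, under the stated requirements, might look like this.

```python
# Find the sum of a list of real numbers.
# Requirements: take the numbers, add them in sequence, report the total.
def sum_of_reals(numbers):
    total = 0.0
    for x in numbers:   # step through the list, accumulating the sum
        total += x
    return total

print(sum_of_reals([1.5, 2.25, 3.0]))  # 6.75
```

Even this small design passes through the same stages: the requirement (sum a list), the algorithm (accumulate in a loop), the coding above, and testing against known inputs.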
Computer programming is a costly and time-consuming activity. Approaches to improve
programmer productivity are always being pursued by computer scientists. These
efforts include inventing higher-level or more powerful languages that require
less effort to use; inventing utility programs that assist programmers by
automating the process of requirement analysis, design specification, coding,
testing, and maintenance; and inventing new methods of programming that reduce
the intellectual effort needed to complete the phases of programming in a given
amount of time.
The single most important lesson learned about programming over recent decades is
that early defect removal leads to higher programmer productivity. Early defect
removal means uncovering defects in the design specification prior to coding. It
costs much more to remove a defect after coding. As a consequence, a number of
technical and social processes have been invented to make early defect removal a
part of the programming process. For example, languages that enforce strong
type-checking on data possess a technical feature that leads to early defect
removal. An example of a social process that reduces errors in the design step
is the "walk-through." In a code walk-through, a group of programmers reads
each line of code and discusses it, prior to entering it into the computer.
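The value of type-checking can be hinted at even in Python, which checks types at run time rather than before the program runs; a statically typed language would report the same defect at compile time. The function here is invented for illustration:

```python
def add_tax(price: float, rate: float) -> float:
    """Return price plus tax; the annotations document the intended types."""
    return price * (1.0 + rate)

print(add_tax(100.0, 0.5))    # prints 150.0

try:
    add_tax("100", 0.5)       # defect: a string where a number belongs
except TypeError:
    print("type error caught before any wrong result is used")
```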
Because maintenance is the most expensive stage in programming, and change is
inevitable, recent work has emphasized design, coding, and testing techniques
that accommodate changes in the original program. In one technique, error,
prompt, menu, and help messages are separated from the code, making it easier to
change these messages. For example, the so-called resource file of applications
written for Macintosh and Windows personal computers is where messages are
separated from the logic of the application, thus reducing maintenance effort.
An older technique, called structured programming, also reduces the cost of
maintenance by dividing the program into understandable pieces. Object-oriented
programming takes modularity one step further by dividing a program into units
called objects. Object-oriented programming further reduces programming effort
by sidestepping common errors and making it easier for programmers to reuse
previously written objects. The less code a programmer has to write, the
smaller the chance of error and the greater the individual programmer's
productivity.
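Object-oriented reuse can be sketched briefly; the classes below are hypothetical examples, not from the article:

```python
class Account:
    """An object bundling data (a balance) with the code that manipulates it."""
    def __init__(self, balance=0.0):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

class SavingsAccount(Account):
    """Reuses Account unchanged and adds only the new behavior."""
    def add_interest(self, rate):
        self.deposit(self.balance * rate)

s = SavingsAccount(100.0)
s.add_interest(0.05)
print(s.balance)    # prints 105.0
```

The derived class reuses the previously written deposit logic rather than rewriting it, which is the reuse the text describes.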
As the science of programming progresses, more and more of the steps will be
automated. Thus the future programmer will spend more time on requirements
analysis and less time on design, coding, testing, and maintenance. Many tools
already exist for automatically generating a program from graphic descriptions
or from very high-level directions.
A computer register is that part of a digital computer memory serving as temporary
storage for data, addresses, and orders while they are actually being used.
Sometimes the term is used to refer to any small-capacity memory device located
in the central processing unit of a computer. The name is derived from
mechanical apparatuses such as the cash register, which has manual data entry
keys and levers. In modern electronic computers, if a number a is added to a
number b to form a result c, both a and b are put into registers Ra and Rb that
directly connect to the adder, and c, when formed, is deposited directly by the
adder into register Rc. A computer designed to operate on arithmetic quantities
with a length of 32 binary digits would have registers constructed typically as
a row of 32 binary memory cells equipped with suitable "gates" for
loading and unloading their contents. During a step of computation, some
registers also hold the coded instruction currently being enacted, as well as
the address in main memory where the results of the current step are to be
deposited and the address of the next instruction.
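The register-and-adder arrangement can be mimicked in software. The sketch below models 32-bit registers as masked integers; it is a toy illustration, not how any particular machine is built:

```python
WORD = 32
MASK = (1 << WORD) - 1          # a 32-bit register holds 0 .. 2**32 - 1

def load(value):
    """Load a value into a register, keeping only the low 32 bits."""
    return value & MASK

def add(ra, rb):
    """The adder combines registers Ra and Rb; the result goes to Rc."""
    return (ra + rb) & MASK

ra, rb = load(7), load(5)
rc = add(ra, rb)
print(rc)                        # prints 12
print(add(MASK, 1))              # prints 0: the carry out of bit 31 is lost
```

The final line shows the "numbers too large for the machine's capacity" problem mentioned earlier: a 32-bit register silently drops the overflowing bit.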
Computer systems are susceptible to various kinds of invasion and sabotage. Hardware and
software may be infected and damaged by computer viruses in programs downloaded
from computer bulletin boards, competitors may log onto a company's computer
network and read confidential marketing plans, students may alter grades in
school records, and so on.
Several steps can be taken to avoid unauthorized access to a computer system. A password
program assigns a secret password or security code to each person authorized to
use a system; the password or code must be typed in before access is granted.
Additional passwords or codes can be assigned for access to specific files.
Data-encryption programs scramble information in a file prior to storage or
transmission; a special password or decoder is needed to translate the scrambled
data back into its original form. Encryption has become a controversial issue:
businesses and civil libertarians want to ensure that all communications remain
private, but law-enforcement agencies want to have the ability--with court
authorization--to tap into communications among criminals and people viewed as
national-security threats. The solution, in the U.S. government's view, is a
national encryption standard to which it or its representatives would hold the
special code keys.
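The scrambling idea can be illustrated with a toy reversible cipher. This XOR scheme is for illustration only and offers no real security:

```python
def scramble(data: bytes, key: bytes) -> bytes:
    """XOR each byte with a repeating key; applying it again restores the data."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret = b"confidential marketing plan"
key = b"pw"                            # the "special password"
stored = scramble(secret, key)         # unreadable without the key
restored = scramble(stored, key)       # the same routine translates it back
print(restored == secret)              # prints True
```

Because XOR is its own inverse, one routine serves both to scramble and to decode, matching the article's description of a password-keyed transformation.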
Computer software consists of the programs, or lists of instructions, that control the
operation of a computer. The term can refer to all programs used with a certain
computer, or it can refer to a single program (see computer programming).
Software is intangible information stored as electrical impulses in a computer memory, in
contrast to the hardware components such as the central processing unit and
input-output devices. These impulses are decoded by the hardware and interpreted
as instructions, with each instruction guiding or controlling the hardware for a
brief time. As an example, an ADD instruction of a program controls the hardware
for the length of time needed to add two numbers and return the sum.
A wide variety of software is used in a computer system. An easy way to understand
this variety is to think of it in layers. The lowest layer is nearest to the
machine hardware. The highest layer is nearest to the human operator. Humans
rarely interact with a computer at the lowest level, but instead use a language
translator called a compiler to do the detailed work. A compiler is a software
program whose purpose is to convert programs written in a high-level, or user,
language into the low-level (binary) form that can be interpreted by the
hardware (see computer languages). Compiler programs eliminate the tedious job
of conversing with a computer in its own binary language. Interpreter programs
perform a similar task.
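What a compiler or interpreter does can be hinted at with a toy language of one statement. This hypothetical sketch interprets the text of a high-level addition so the programmer never writes binary:

```python
def interpret(statement: str) -> float:
    """Decode a high-level statement 'a + b' into the machine's add operation."""
    left, op, right = statement.split()
    if op != "+":
        raise ValueError("this toy language supports only addition")
    return float(left) + float(right)

print(interpret("2.5 + 4"))    # prints 6.5
```

A real compiler performs the same decoding but emits machine instructions to run later, instead of carrying out the operation immediately as an interpreter does.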
Moving up a layer, an operating system is a software program that controls the computer
system itself. It organizes hardware functions such as reading input from a
keyboard or disk-drive unit and writing output to a screen, printer, or
disk-drive unit; interpreting commands from the user; and performing
housekeeping chores such as allocating time and resources to each application.
Applications software adapts the computer to a special purpose such as processing a payroll
or modeling a machine part. Applications are written in any of a variety of
compiler languages, each appropriate for certain applications. These
languages, inventions of the human mind, cannot be directly understood by the
computer hardware. Therefore they must be translated by an appropriate compiler
that converts them into the ones and zeros that the machine can process.
Applications software is as diverse as the applications of computers. It is generally divided
into two groups: horizontal and vertical. A horizontal program can cut across
many application areas. Examples are spreadsheet and database management
programs. Spreadsheet programs can be used to develop business ledgers and
financial models and to perform calculations in areas such as statistics and
engineering. Vertical applications programs are tailored to specific tasks, such
as medical billing, payroll for a certain business, or the design of an
airplane. Other common software programs include word-processing programs, video
games, desktop publishing programs (see publishing, desktop), and numerical tools
for such activities as accounting and stock-market analysis.
Software is usually distributed on magnetic disks (see information storage and
retrieval). In fact, software and the disk that contains it are often thought of
as being the same thing. When a disk is read by the disk-drive unit of a
computer, the software is copied into the memory. Then the computer's operating
system passes control to the application in a process that activates the
program. When the program is terminated, the operating system regains control of
the machine and sets about doing some other application or handling requests
from the user.
The sophistication or power of software programs is related to the amount of
information that can be stored on a disk and to the size of a computer's memory.
With tremendous advances in microelectronics, which have led to powerful memory
chips, even microcomputers can now accommodate sophisticated software programs.
Modern software and hardware can handle text, graphics, sound, and movies.
While the costs of computer hardware have decreased dramatically over the years, the
cost of software has not. This is mainly because techniques for developing
software have not improved dramatically, but it is also related to the problem
of software copying, or piracy. Most software is protected by copyright law, but
such laws are difficult to enforce because it is a simple matter to copy
software from one diskette to another. This problem is particularly acute in the
business world, where a large company might purchase a single software program
and then copy it for use by hundreds of employees. Some software companies are
attacking this problem by site licensing--offering price discounts to customers
who make bulk purchases of a software program.
The expanding roles of personal computers in homes and businesses have caused the
importance of software to grow (see computer, personal). In the years ahead the
software industry will continue to deal with the issues of developmental costs,
piracy, consumer needs, and the power and versatility of programs. In
particular, new methods will be developed that will reduce the length of time
and the costs associated with writing new programs.
To enable computer hardware and software to be compatible (work together),
manufacturers and technical organizations have established standards on hundreds
of issues: how graphics are displayed on a monitor, how data are transmitted by
modems, how messages should be transferred within a network, and so on. Some
standards are accepted internationally. Others are unique to a country or
region, which can create problems for people who travel.
As technology advances, established standards need to be reconsidered. For example,
the standards for CD-ROM drives and discs (see compact disc) were being reviewed
in 1995 as the entertainment industry pushed for a CD-ROM that would have
sufficient capacity to hold a feature film. This would require not only new
storage-capacity standards but also new standards for access times and
data-transfer rates. Unless everyone agrees on a single format, competing
formats will create a compatibility problem--with some CD-ROMs playable on some
drives but not on others.
A computer terminal is a device that allows a user to interact with a central
computer or computer system. Terminals allow for input and display of data but
typically do not contain a central processing unit. Processing of data is
carried out on a remote "server" computer to which the terminal is
connected by direct wiring or telephone lines. Terminals can resemble a personal
computer or assume other forms. The typewriterlike devices used for making
airline and motel reservations are computer terminals, as are the devices
resembling cash registers used in automated checkout counters.
Devices that provide for the movement of information between the central processing unit
(CPU) of a computer and the external world are called input/output (I/O)
devices. They are very important because every computer functions by accepting
input and producing output. Input is the control information (programs and
commands) that directs computing activities. It also includes the data (digital
numbers, characters, or pictures) manipulated by the computing activities.
Output is information produced as a result.
Because of the wide variety of forms of information, many types of I/O devices are used.
They may be characterized according to the information medium, the hardware
technology, the speed of information transfer, and the amount or capacity of
information involved. Many devices support the movement of information between a
storage medium and processor. Others support communication between the computer
system and the world of noncomputer devices. Storage-oriented devices store
information in computer-readable form. Nonstorage devices are used when it is
not necessary to move the same information both into and out of the computer.
Information is transformed from a computer-readable form into a form readable
by a person or machine.
Common devices that are not oriented toward machine-readable storage are computer
monitors (cathode ray tubes, or CRTs), keyboards, printers, plotters, optical
scanners, and converters between analog and digital information. CRTs display
information sent from the CPU in text or graphical form. Keyboards support the
input of character information (see keyboard, computer). Nontext information can
be entered into the computer through a microphone (see voice recognition),
musical keyboards, and instruments equipped with the MIDI (Musical Instrument
Digital Interface; see electronic music). On graphics tablets, or
"notebook" computers, users enter handwritten or hand-drawn
information with a stylus directly on a computer's display screen. The joystick,
lightpen, mouse, and trackball are common devices for input of information or
for control of processes through a graphical user interface (a pictorial
representation of information on a CRT). In each case, movement of the device is
translated by a controlling program into a pictorial representation of that
movement on the CRT.
Plotters produce graphical output on paper or film. Printers produce paper output of
character information at high speed (see printer, computer). Optical scanners
are input devices that read intensities of reflected light (see scanning). They
can be used to "capture" graphic images for digital storage. Scanners
with optical character recognition software read text on paper and translate the
scanned information into text files. Bar code readers are optical scanners used
to read standard product code bars on retail merchandise for input to computers.
Facsimile, or fax, machines have both scanning capabilities and the ability to
transmit the information over telecommunications lines to other fax machines.
The analog-to-digital converter, digital-to-analog converter, and modem enable
communication between digital computers and analog devices.
Examples of storage-oriented devices include magnetic tapes and discs, and the optical
compact disc (see also information storage and retrieval). Magnetic devices--the
most popular type--employ the property of magnetic particles that allows them
to be polarized in one of two directions, so that they can carry binary
information. Compact-disc technology is replacing magnetic devices for many
uses, particularly in cases when a software application requires speedy access
to a large store of graphic information.
Interfacing I/O Devices to Computers
To provide some standardization of interfaces for the many types of I/O devices and
to increase efficiency of I/O operations, I/O channels were developed. A channel
exists between the computer and perhaps several devices, so that the
specializations of each device are isolated. Channels provide a direct path
between various devices and the computer memory. This feature is known as direct
memory access. Channels are programmable and operate independently of the
processor, once started, allowing I/O to take place simultaneously with
computation. I/O technology is a fast-changing field, and sophisticated forms of
I/O, such as voice, visual, and tactile communication, have been developed and
are being pursued. A trend in the consumer market has been to combine various
peripherals in one package--such as a fax machine with a photocopier or a
scanner with a printer.
A computer virus is a portion of computer code designed to insert itself into an
existing computer program, alter or destroy data, and copy itself into other
programs in the same computer or into programs in other computers. Its name was
coined from its ability to "replicate" and "infect" other
computers. A computer virus can spread through shared computer software, through
an online service (see database), or through a network (see computer network).
Programmers who design viruses often are "hackers" who do so as a
prank; a virus of this sort might cause a humorous message to appear on a
computer's video screen. Other programmers design viruses with the deliberate
intent to destroy data. In one well-publicized incident, a virus crippled or
slowed down 6,000 computers in a research network overnight.
Since the emergence of computer viruses in the early 1980s, the U.S. government and
many states have passed laws making it illegal to introduce viruses into the
computers of unwitting users. Computer-software companies have also designed
programs that safeguard against viruses. The programs are not foolproof,
however, because new computer viruses are constantly being created and spread.
A personal computer (PC) is a complete microcomputer that is based on a
microprocessor, a small semiconductor chip that performs the operations of a
central processing unit, or CPU. A PC also has other integrated circuits. It is
designed for use by a single user, and usually includes a keyboard and a
monitor, or video display terminal.
Two of the chief measures of computing power are computer memory size and processing
speed. The unit of memory is the byte, which can hold one character of text. A
kilobyte (Kbyte) is 1,024 bytes, a megabyte (Mbyte) is 1,024 Kbytes, and a
gigabyte (Gbyte) is 1,024 Mbytes. These measures have been used to distinguish
PCs from larger minicomputers and mainframe computers, but the increasing power
of the PC has blurred these distinctions. The memory capacity of early PCs was
often as small as 16 Kbytes, but by the mid-1990s typical PCs were equipped with
4 to 16 Mbytes of memory. This can often be expanded to as much as 128 Mbytes or
even to several Gbytes in a workstation, which is the most powerful form of PC.
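The units above are successive powers of 1,024, so the sizes can be checked with a short calculation:

```python
KBYTE = 1024                 # bytes in a kilobyte
MBYTE = 1024 * KBYTE         # bytes in a megabyte
GBYTE = 1024 * MBYTE         # bytes in a gigabyte

print(MBYTE)                 # prints 1048576
print(16 * MBYTE)            # prints 16777216: a typical mid-1990s PC, in bytes
print(GBYTE // KBYTE)        # prints 1048576: kilobytes per gigabyte
```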
The processing speed of PCs is commonly specified by the speed of the electronic
clock that controls internal operations. Early PCs had clock speeds of one or
two megahertz (MHz), but
speeds of 100 MHz or more are possible in modern designs.
A computer system consists of three parts: the CPU, input-output devices (I/O
devices), and memory. A CPU performs arithmetic and logic operations. PCs
generally use processors that can process 16-bit (2-byte) or 32-bit (4-byte)
chunks of information.
The most common input devices are keyboards and pointing devices, such as
"mice" or "trackballs." The most common output device is the
cathode-ray tube (CRT) display, or monitor. Displays provide both graphic and
text modes. Graphic displays and pointing devices make possible a "point and
click" form of control that is easier for the user than typing commands at
a keyboard. Other common I/O devices are scanners, microphone and speaker sound
interfaces, and modems and network interfaces for communicating with other
computers; the mouse, joystick, and light pen, for making tactile input; and
printers, for producing "hard," or paper, copy (permanent output).
Primary memory refers to memory that is directly accessible by the CPU. Modern
processors can address from 16 Mbytes to 4 Gbytes. PCs are usually sold with less
primary memory than the CPU can handle; upgrades can be made later on.
Secondary memory refers to external memory required to store data that will not fit into
primary memory or that must be kept permanently. (In most PCs, the contents of
primary memory are lost when power is removed.) Magnetic disks are the most
common form of secondary memory. Hard disks, often called fixed disks because
they cannot be removed from the computer, typically can store from 100 million
to 500 million characters of text information. Flexible (floppy) disks have much
lower capacity but can be removed and stored off-line. Floppy disks are the
usual way new software is introduced into a PC.
Other secondary memory devices commonly used are CD-ROM (compact-disc read-only
memory) and magnetic tape drives. A CD-ROM can hold about 600 million characters
and is ideal as a repository for a large amount of information (such as this
encyclopedia) that needs to be readable but does not need to be changed by the
computer user. Magnetic tapes have large capacity but are much slower than
disks. Tapes are primarily used as backup devices so that valuable information
can be restored if a fixed disk drive fails.
Established computer manufacturers were slow to see the potential market for PCs. The
industry was sparked instead by hobbyists, inventive individuals, and new
companies (see computer industry). The first small computers were sold as
do-it-yourself kits in 1976. Only experts could assemble and program these,
however. Incorporation of the easy-to-use BASIC programming language (see BASIC)
as a built-in feature of ready-to-use computers marked the beginning of the true
PC and made small computers accessible to nonexperts. By 1977, Apple and
Commodore were producing such machines.
Many other small companies were soon competing, and within a year large companies
such as Radio Shack--not previously involved with computers--had entered the
field. Traditional computer companies such as IBM took a relatively long time to
develop PCs. Their eventual entry into the field had a major effect, however,
because potential purchasers who lacked technical knowledge were then able to
buy from well-known and trusted manufacturers. Another important boost came with
the development of computer software that made PCs accessible to users who did
not know how to program. Many new companies entered this market, but very few of
the traditional computer companies made the transition to mass marketing that
came to be demanded.
PCs have become almost indispensable in business, industry, and education. In some
cases they have entered areas where computers were not previously used. In other
cases, large centralized systems have been supplemented or replaced by small
computers, often connected by networks. These developments have had major
economic effects on the companies involved. On an international scale, tiny but highly
valuable computer components or entire systems can be shipped very economically.
This has had an important impact on trade patterns.
The usefulness of the PC has grown steadily with the increased capability of the
machines and the powerful software that has been developed for them. Word
processing programs, spreadsheets, and database management programs are
available to individuals. Video games are just one aspect of what have come to
be known as multimedia applications, in which the computer produces a complex
sight-and-sound environment useful in art, business, and education as well as
for entertainment (see computers in education). Larger primary and secondary
memories, faster processing, and very-high-resolution displays also make
programs easier to use in combinations. Modern PCs can have several programs
active at once, with the status of each program displayed in "windows"
on the screen. This makes it easy to transfer data from one program to another.
The personal computer is causing revolutionary changes in many fields. For example,
graphics programs and high-resolution printers relieve architects, designers,
and engineers of many time-consuming tasks. Textbook authors use desktop
publishing to help ensure that technical errors do not creep into the creation
of their books. Many libraries are abandoning traditional card catalogs and
replacing them with computerized systems that can then be accessed over phone
lines and networks by users anywhere.
Data communication and portable computers make it possible for many people to do
much of their work outside the office. Networks are used to allow the sharing of
expensive resources as well as for routine communications. Electronic mail
(e-mail) and modems can be used to transmit complete programs, graphic images,
and digitized sound as well as written messages to the PC in the home as well as
to the office workstation.
COMPUTER-AIDED DESIGN AND COMPUTER-AIDED MANUFACTURING
Computer-aided design and computer-aided manufacturing (CAD/CAM) is the integration of two
technologies. It has been called the "new industrial revolution." In
CAD, engineers use specialized computer software to create models (see computer
modeling) that represent the geometry and other characteristics of objects. Such
models are analyzed by computer and redesigned as necessary. This allows for
flexibility in studying different and daring designs without the high costs of
building and testing physical prototypes. In CAM, engineers use computers for
planning manufacturing processes, controlling manufacturing operations, testing
finished parts, and managing entire plants (see process control). CAD is linked
with CAM through a database shared by design and manufacturing engineers.
Mechanical design and electronic design are the major applications of CAD/CAM.
Computer-aided mechanical design is most often done with automated drafting
programs that employ interactive computer graphics. Geometric information is
entered into the computer to create basic elements such as points, lines, and
circles. Additional constructions using these elements include drawing tangents
to curves, creating rounded corners, making copies of elements at new positions,
and so on. Elements can be moved, rotated, mirrored, and scaled, and the user
can zoom in on details. Computerized drafting is faster and more accurate than
manual drafting and makes retrieval and modification easier.
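The element operations named above (move, rotate, mirror, scale) are ordinary coordinate transformations. A minimal sketch for 2D points, with invented function names:

```python
import math

def move(p, dx, dy):
    return (p[0] + dx, p[1] + dy)

def scale(p, factor):
    return (p[0] * factor, p[1] * factor)

def mirror_x(p):
    """Mirror a point across the x-axis."""
    return (p[0], -p[1])

def rotate(p, angle_deg):
    """Rotate a point about the origin by the given angle in degrees."""
    a = math.radians(angle_deg)
    x, y = p
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

print(move((1.0, 0.0), 2, 3))       # prints (3.0, 3.0)
print(scale((1.0, 0.0), 2.0))       # prints (2.0, 0.0)
print(mirror_x((1.0, 2.0)))         # prints (1.0, -2.0)
```

A drafting program applies the same transformations to every point of a line, circle, or other element.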
Another representation technique is called solid modeling. A solid model represents an
object's solid nature and not simply its external appearance. One solid-modeling
technique builds up complex parts by combining basic shapes, called primitives,
such as boxes, cylinders, spheres, and cones. Realistic shaded images of the
model in various positions can be generated by the computer, and portions can be
removed to view the interior. Properties such as weight, volume, location of the
center of gravity, and surface area are calculated automatically. A computer
technique called finite element analysis can be used to evaluate the structural
performance of the part when forces are applied.
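Combining primitives can be sketched as set operations on point-membership tests. The part below (a cube with a cylindrical hole) is a hypothetical example of the technique:

```python
def in_box(p, size):
    """True if point p lies inside a cube of the given size, centered at the origin."""
    return all(abs(c) <= size / 2 for c in p)

def in_cylinder(p, radius):
    """True if p lies inside an infinite cylinder along the z-axis."""
    x, y, _ = p
    return x * x + y * y <= radius * radius

def in_part(p):
    """A cube primitive with a cylinder subtracted: a hole drilled through it."""
    return in_box(p, 2.0) and not in_cylinder(p, 0.5)

print(in_part((0.9, 0.9, 0.0)))    # prints True: in the cube, outside the hole
print(in_part((0.0, 0.0, 0.0)))    # prints False: inside the drilled hole
```

Real solid modelers store such combinations symbolically and evaluate them to compute weight, volume, and the other properties mentioned.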
Such models are used to link CAD with CAM. An example is numerical control (NC)
technology, which uses geometric information to create computer programs for
machining parts. Each time an NC program is run, a manufacturing machine repeats
operations exactly as programmed, producing parts rapidly and accurately. The
use of robots (see robot) to load and unload NC machines results in complete
automation of the machining process.
CAD techniques are used to design various electronic devices including VLSI
(very large-scale integrated) circuits. Without CAD, today's ultradense VLSI
chips would be impossible. In the "standard cell" technique a designer
builds complex VLSI circuits by interconnecting appropriate small circuits that
are retrieved from a computer library. The computer automatically positions the
cells and routes the interconnections. Simulation software is used to verify the
logic of the design, check voltages, currents, and timing, and produce tests to
check the finished chip.
A common method for manufacturing VLSI chips uses a photomasking and etching process to
selectively etch through a surface layer to expose another layer. The masks used
in this process are produced by CAM software from the layout.
Desktop manufacturing enables a designer to fabricate a model directly from data
stored in computer memory. One such system uses a laser to fuse plastic granules
together, layer by layer, until the model is achieved. Another system uses a
milling machine to mold the model from wax. Expert systems and other software
programs help designers to consider both function and manufacturing consequences
at early stages, when designs are easily modified.
More and more manufacturing businesses are integrating CAD/CAM with other aspects of
production, including inventory tracking, scheduling, and marketing. This
concept, known as computer-integrated manufacturing (CIM), contributes to
effective materials management, speeds processing of manufacturing orders, and
provides the basis for significant cost savings.
Researchers have identified several potential health hazards for people who spend many hours
of the day using computers. Eyestrain, perhaps accompanied by headaches, is the
most common. Proper lighting, frequent breaks, and regular cleaning of the
display screen are helpful. Backaches and stiff necks comprise the second most
common health problem, frequently decreasing people's ability to concentrate and
perform at peak levels. Good posture, use of a well-designed, adjustable chair
that supports the lower back, periodic breaks, and a program of stretching and
deep-breathing exercises help alleviate stress and tension. A type of repetitive
stress injury, called carpal tunnel syndrome, is the result of severe muscle
fatigue and nerve compression. Proper posture, proper positioning of the wrists
and hands, and frequent breaks are usually good preventive measures.
screens, or monitors (see cathode-ray tube), emit low-level electromagnetic
radiation, which has been linked, some say, to an increased incidence of cancer,
birth defects, and miscarriages. Newer monitors emit reduced levels of
radiation.
The rise of computer technology overturned millennia of physical and economic
limitations on the ability of authorities to collect, organize, process, and
distribute information. While this has revolutionized science, business, and
government operations, and greatly enhanced many aspects of individual and
social life, it has also created severe pressures on the individual's claim in
democratic societies to enjoy reasonable expectations of personal privacy.
Democratic societies recognize zones of privacy for many activities of individuals, groups,
and associations by limiting the power of public and private authorities to
compel disclosure of such private matters or to put them under investigative
surveillance. While privacy is not treated as an absolute, its importance to
democratic values of individualism, dissent, and political liberty has led
democracies to give privacy a high place in their social and legal arrangements.
The computer age has forced the public to reconsider and redefine what privacy
should and can mean in a high-technology age.
Background and Dimensions of the Problem
A major area of privacy involves the collection and use of personal information by
public and private organizations. By the 1950s, organizational record keeping
had already become a mainstay of organizational life in complex, urban
societies. Government and business organizations compiled and used extensive
records on individuals, utilizing manual files and electromechanical technology
(typewriters, telephones, cameras, mimeographs, and so on). While democratic
nations had installed some legal rules to assure privacy rights in these
operations, the preservation of privacy rested mainly on the high cost and
manpower limitations of organizational-records use.
This began to change as electronic data processing spread through the organizational
world in the late 1950s and the 1960s. Computer systems not only increased
significantly the ability of organizations to collect, store, process, and
disseminate information but also decreased significantly the cost and time
required to do so. Organizations could now afford to collect and share with
other organizations far more personal information and could use the information
in ways that record subjects would not know about or have any control over under
existing U.S. law. In the private sector this would involve credit, banking,
insurance, employment, and medical transactions; in the public sector,
activities such as welfare, law enforcement, taxation, and licensing programs.
These early alarms brought about the next step of society's response to the
computers-and-privacy question--detailed empirical studies by government and
private organizations during 1969-74 into just what computers could do, were
doing, and might be able to do in the near future. These studies found that
public and private organizations were not yet collecting more personal data
about their subjects, exchanging such data more widely outside traditional networks,
or creating new secret record systems or evaluation practices. However, it would
be only a short time before reduction of hardware costs for data input and
storage, and increases in software capabilities, would make greatly increased
data collection and collation of the data and existing records highly cost
effective. The studies concluded that existing U.S. law did not provide the
safeguards needed to assure basic citizen rights.
These findings prompted three quite different reactions: (1) that organizations
would not misuse the new computing power, and no legal change was necessary; (2)
that this technology was so powerful and so likely to be abused that many uses
should be banned outright and others placed under drastic controls; and (3) that
the technology should be allowed to proceed under rules extending traditional
privacy and due process rights to cover computerized information practices.
The third view was the one that prevailed. It won out over the
"nothing-is-needed" position because events such as the Watergate
scandal and exposures of federal agency surveillance practices in the early
1970s convinced the public that safeguards against government abuse of
information power were necessary. Also, as social change in the 1960s and 1970s
produced greater acceptance of diversity, the public wanted to be sure that
business or government records were not used to discriminate against
"different" people. At the same time, the many positive uses of
computer technology seemed far too valuable to outlaw computer use because of
Protecting Computerized Data
The result of the policy debate was the promulgation during the 1970s and early
1980s of what is considered the first generation of privacy protection rules for
the computer age. Fair information practices were enacted by law or adopted as
organizational rules covering all federal agencies, state agencies in many
states, and various specific areas of record keeping--credit, insurance,
banking, law enforcement, and so on. Such laws (1) required personal-data record
systems to be publicized and their uses described to record subjects; (2) set
rules of confidentiality and for sharing personal data beyond the collecting
organization; and (3) gave record subjects rights to inspect, correct, and
challenge information in their files.
If an individual believed that public officials or private managers were
violating the fair-information-practices laws, and administrative procedures did
not afford relief, the individual could bring a lawsuit and have judicial review
of his or her complaint. Unlike most other democracies that also passed
privacy-protection laws in this period, including Sweden, France, and West
Germany, the United States did not create federal or state data protection or
privacy protection commissions to license computer data banks, receive citizen
complaints, and carry out enforcement activities.
The first generation of data protection has made the collection and use of personal
information by government and business organizations far more visible to record
subjects and society than these activities were before. It has also installed
effective rights of inspection and correction, probably increased the accuracy
of record-based decisions about people, and allowed far more public
consideration of "relevancy" and "propriety" issues than was possible before.
In the late 1980s and the 1990s the computers-and-privacy debate was reopened.
Partly, this was because of major technological advances. The enormous
proliferation of minicomputers, desktop terminals, and personal computers meant
that data banks could now be created and used by millions of persons. The
expansion of telecommunications produced vast streams of personal data being
transmitted on the airwaves, and interception by hackers, business competitors,
police, and other interlopers became a growing concern.
The new privacy debate centered on the ways government and industry have been using
enhanced computer capabilities. Both state and federal governments are
increasingly using computer matching programs that make computer comparisons of
large welfare, employment, and other automated files in order to flag what
seem to be instances of fraudulent or improper beneficiary payments. As more and
more office and customer-service work came to be done with computer systems
using video display terminals (VDTs), many private and public employers began to
adopt detailed computer monitoring of their VDT operators' performance, leading
civil libertarians to charge that Big Brother had gone electronic.
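At its core, the computer matching described above is a comparison of
identifiers across two record systems. A minimal Python sketch, using invented
example IDs (none of these names or values come from the article):

```python
# Hypothetical computer-matching sketch: find IDs that appear in both
# a benefits file and a payroll file. All IDs here are invented.
welfare_ids = {"A101", "B202", "C303"}
payroll_ids = {"B202", "C303", "D404"}

# Records present in both files are flagged for human review --
# a match is a lead, not proof of an improper payment.
flagged = sorted(welfare_ids & payroll_ids)
print(flagged)  # -> ['B202', 'C303']
```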
On another front, American business moved strongly into direct marketing, in which
offering people goods and services by mail or telephone calls depended on
compiling detailed computer profiles of individual consumers' demographic
characteristics and consumer activity. At the same time, the credit reporting
industry, now concentrated in three national companies each maintaining computer
files on more than 175 million Americans, came under criticism for troublesome
levels of mistakes in their reports and insufficiently timely and responsive
procedures for correcting errors. And as more personal medical information
became computerized, and as a system of nationally regulated health-care reform
was debated in 1993-94 with little legal protection in place for medical
confidentiality, ensuring the privacy of personal medical records became a
high-visibility public issue.
In 1994 a Louis Harris survey showed that 84 percent of the American public
reported they were "concerned" about threats to their personal
privacy, and 51 percent said they were "very concerned." Privacy
advocates concluded that the rapid and deepening spread of computers calls for
the creation of a national privacy protection agency or commissioner to examine
impacts of new computer applications, publicize the need for new rules or laws,
and perhaps have the authority to issue privacy-protection regulations.
Supporters of the fair-information-practices approach prefer applying
sector-by-sector scrutiny in each field, such as credit reporting, health care,
and use of new
telephone services, reflecting the special balances between legitimate
information uses and valid privacy claims that each sector features. Congress
debated many such updated or new federal laws in 1992-94, and some were passed.
Since their introduction in schools in the early 1980s, computers and computer software
have been increasingly accessible to students and teachers--in classrooms,
computer labs, school libraries, and outside of school. By the mid-1990s there
were about 4.5 million computers in elementary and secondary schools throughout
the United States. Schools buy Macintosh and IBM-compatible computers almost
exclusively, although nearly half of their inventory is composed of computers
based on older designs such as the Apple IIe. Students spend on the average an
hour per week using school computers.
Computers can be used for learning and teaching in school in at least four ways. First,
learning involves acquiring information. Computers--especially linked to CD-ROMs
and video disks that electronically store thousands of articles, visual images,
and sounds--enable students to search the electronic equivalent of an
encyclopedia or a video library to answer their own questions or simply to
browse through a maze of fascinating and visually appealing information.
Second, learning involves the progressive development of skills like reading and
mathematics--skills that are basic academic enablers. Software called
"computer-assisted instruction," or CAI, poses questions to students
and compares each answer with a single correct answer. Typically, such programs
respond to wrong answers with an explanation and another, similar problem.
Sometimes CAI programs are embedded in an entertaining, gamelike context that
holds students' interest while keeping their attention on academic work.
Most CAI programs cover limited material, but some large-scale, multiyear
reading and mathematics curricula have been developed.
Third, learning involves the development of a wide variety of analytic competencies and
complex understandings. Computers help students attain these goals through
software such as word processors (to clarify ideas through writing), graphing
and "construction" tools (to clarify concepts and examine conjectures
in mathematics), electronic painting and computer-assisted drafting (CAD)
programs, music composition programs, simulations of social environments, and
programs that collect data from science laboratory equipment and aid in its analysis.
Fourth, a large element in learning is communicating with others--finding and engaging
an audience with one's ideas and questions. Several types of computer software
can be used in schools for communications: desktop publishing and image-editing
software for making professional-quality printed materials, computer programming
languages such as Hypercard for creating interactive computer exercises, and
telecommunications software for exchanging ideas at electronic speeds with
students in other classrooms all over the world.
In spite of the variety and power of education-related computer software, surveys
have shown that students are still using school computers primarily within a
limited range of the possible computer applications--mainly to practice basic
language and math skills and to learn about computers and computer software.
This is very similar to how students used the first school microcomputers back
in the early 1980s. The major change between the 1980s and today in computer use
has been a reduced emphasis on teaching students to program computers and an
increased emphasis on teaching word processing and similar computer
applications. Only a small percentage of secondary school classes in regular
subjects (math, English, science) provide students with substantial experience
in using computers. More elementary school students use computers than do high
school students, but their use is somewhat less extensive. Even high school
students experience computers mostly as another set of skills to master, rather
than using them productively to accomplish understanding and to demonstrate
competence in other subjects.
There are several reasons why most students' use of school computers is so limited in
time and variety. The number of school computers, although still growing, is
small compared with the number of students in schools (roughly one computer for
every ten students). Schools continue to locate a majority of their computers in specialized,
teacher-shared spaces like computer labs in order to enable as many students as
possible to have some experience in using computers, but this practice impedes
integrating computers into other learning activities. Most regular classrooms,
if they have any computers at all, have only one or two, which precludes
orchestrating computer access for entire classrooms of students.
Another problem is the limited capacity of most school computers. Apart from the many
older computers in schools, even many of the newer models have limited
processing power, inadequate computer memory, and a lack of storage capabilities
such as hard disk drives and CD-ROM players. Consequently much of the most
recently produced, most sophisticated software cannot be used on most school computers.
In addition, most teachers--with responsibility for teaching five classes of
students or for teaching many different subjects--do not have the time to learn
how to use a wide variety of types of software in their teaching. The more
complex the software, the more difficult it is for teachers to learn to manage
its use. Finally, the cost of both computer hardware and software is much
greater than the cost of traditional teaching and learning materials.
As a result of the difficulties that schools have had in exploiting the potential
of computer technology, some critics see computer education as merely the latest
in a series of unsuccessful attempts to revolutionize education through the use
of audio- and visually oriented nonprint media. For example, motion pictures,
broadcast television, filmstrips, audio recorders, and videotapes were all
originally heralded for their instructional potential, but each of these
ultimately became a minor classroom tool alongside conventional methods.
Others believe, however, that computers are a much more powerful learning medium than
the innovative instructional devices that preceded them. They cite the essential
interactive nature of using computers programmed to provoke decision making and
manipulations of visual environments. Learning tasks can become more
individualized, enabling each student to receive immediate feedback. Experts say
that having students work collaboratively on computers leads to greater
initiative and more autonomous learning. Proponents also argue that because
computers are so pervasive in society, "computer literacy" is itself a necessary educational goal.
Computer, machine that performs tasks, such as mathematical
calculations or electronic communication, under the control of a set of
instructions called a program. Programs usually reside within the computer and
are retrieved and processed by the computer's electronics, and the program
results are stored or routed to output devices, such as video display monitors
or printers. Computers are used to perform a wide variety of activities with
reliability, accuracy, and speed.
Uses of Computers
People use computers in a wide variety of ways. In
business, computers track inventories with bar codes and scanners, check the
credit status of customers, and transfer funds electronically. In homes, tiny
computers embedded in the electronic circuitry of most appliances control the
indoor temperature, operate home security systems, tell the time, and turn
videocassette recorders on and off. Computers in automobiles regulate the flow
of fuel, thereby increasing gas mileage. Computers also entertain, creating
digitized sound on stereo systems or computer-animated features from a digitally
encoded laser disc. Computer programs, or applications, exist to aid every level
of education, from programs that teach simple addition or sentence construction
to advanced calculus. Educators use computers to track grades and prepare notes;
with computer-controlled projection units, they can add graphics, sound, and
animation to their lectures (see Computer-Aided Instruction). Computers are used extensively in
scientific research to solve mathematical problems, display complicated data, or
model systems that are too costly or impractical to build, such as testing the
air flow around the next generation of space shuttles. The military employs
computers in sophisticated communications to encode and unscramble messages, and
to keep track of personnel and supplies.
How Computers Work
The physical computer and its components are known as
hardware. Computer hardware includes the memory that stores data and
instructions; the central processing unit (CPU) that carries out instructions;
the bus that connects the various computer components; the input devices, such
as a keyboard or mouse, that allow the user to communicate with the computer;
and the output devices, such as printers and video display monitors, that enable
the computer to present information to the user. The programs that run the
computer are called software. Software generally is designed to perform a
particular type of task—for example, to control the arm of a robot to weld a
car's body, to draw a graph, or to direct the general operation of the computer.
The Operating System
When a computer is turned on it searches for instructions
in its memory. Usually, the first set of these instructions is a special program
called the operating system, which is the software that makes the computer work.
It prompts the user (or other machines) for input and commands, reports the
results of these commands and other operations, stores and manages data, and
controls the sequence of the software and hardware actions. When the user
requests that a program run, the operating system loads the program in the
computer's memory and runs the program. Popular operating systems, such as
Windows 95 and the Macintosh operating system, have a graphical user interface
(GUI)—that is, a display that uses tiny pictures, or icons, to represent
various commands. To execute these commands, the user clicks the mouse on the
icon or presses a combination of keys on the keyboard.
To process information electronically, data are stored in
a computer in the form of binary digits, or bits, each having two possible
representations (0 or 1). If a second bit is added to a single bit of
information, the number of representations is doubled, resulting in four
possible combinations: 00, 01, 10, or 11. A third bit added to this two-bit
representation again doubles the number of combinations, resulting in eight
possibilities: 000, 001, 010, 011, 100, 101, 110, or 111. Each time a bit is
added, the number of possible patterns is doubled. Eight bits is called a byte;
a byte has 256 possible combinations of 0s and 1s.
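The doubling rule above can be checked with a few lines of Python: n bits yield
2**n distinct patterns.

```python
from itertools import product

# Enumerate every pattern of n bits and confirm there are 2**n of them.
for n in (1, 2, 3, 8):
    patterns = list(product("01", repeat=n))
    assert len(patterns) == 2 ** n

# Eight bits (one byte) give 256 combinations, numbered 0 through 255.
print(2 ** 8)  # -> 256
```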
A byte is a useful quantity in which to store information
because it provides enough possible patterns to represent the entire alphabet,
in lower and upper cases, as well as numeric digits, punctuation marks, and
several character-sized graphics symbols, and various non-English
characters. A byte also can be interpreted as a pattern that represents a number
between 0 and 255. A kilobyte—1024 bytes, commonly rounded to 1000—can store
about 1000 characters; a megabyte can store roughly 1 million characters; and a
gigabyte can store roughly 1 billion characters.
The physical memory of a computer is either random access
memory (RAM), which can be read or changed by the user or computer, or read-only
memory (ROM), which can be read by the computer but not altered. One way to
store memory is within the circuitry of the computer, usually in tiny computer
chips that hold millions of bytes of information. The memory within these
computer chips is RAM. Memory also can be stored outside the circuitry of the
computer on external storage devices, such as magnetic floppy disks, which store
about 2 megabytes of information; hard drives, which can store thousands of
megabytes of information; and CD-ROMs (compact discs), which can store up to 600
megabytes of information.
The bus is usually a flat cable with numerous parallel
wires. The bus enables the components in a computer, such as the CPU and memory,
to communicate. Typically, several bits at a time are sent along the bus. For
example, a 16-bit bus, with 16 parallel wires, allows the simultaneous
transmission of 16 bits (2 bytes) of information from one device to another.
Input devices, such as a keyboard or mouse, permit the
computer user to communicate with the computer. Other input devices include a
joystick, a rodlike device often used by game players; a scanner, which converts
images such as photographs into binary information that the computer can
manipulate; a light pen, which can draw on, or select objects from, a computer's
video display by pressing the pen against the display's surface; a touch panel,
which senses the placement of a user's finger; and a microphone, used to gather sound input.
The Central Processing Unit (CPU)
Information from an input device or memory is
communicated via the bus to the CPU, which is the part of the computer that
translates commands and runs programs. The CPU is a microprocessor chip—that
is, a single piece of silicon containing millions of electrical components.
Information is stored in a CPU memory location called a register. Registers can
be thought of as the CPU's tiny scratchpad, temporarily storing instructions or
data. When a program is run, one register called the program counter keeps track
of which program instruction comes next. The CPU's control unit coordinates and
times the CPU's functions, and it retrieves the next instruction from memory.
In a typical sequence, the CPU locates the next
instruction in the appropriate memory device. The instruction then travels along
the bus from the computer's memory to the CPU, where it is stored in a special
instruction register. Meanwhile, the program counter is incremented to prepare
for the next instruction. The current instruction is analyzed by a decoder,
which determines what the instruction will do. Any data the instruction needs
are retrieved via the bus and placed in the CPU's registers. The CPU executes
the instruction, and the results are stored in another register or copied to
specific memory locations.
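The fetch-decode-execute sequence just described can be sketched as a loop. The
toy machine below is in Python; its instruction set (LOAD, ADD, HALT) and
single accumulator register are invented for illustration, not those of a real
CPU.

```python
# Memory holds a tiny program: each cell is an (opcode, operand) pair.
memory = [("LOAD", 7), ("ADD", 5), ("HALT", None)]

def run(memory):
    program_counter = 0   # register tracking which instruction comes next
    accumulator = 0       # register holding intermediate results
    while True:
        opcode, operand = memory[program_counter]  # fetch the instruction
        program_counter += 1                       # prepare for the next fetch
        if opcode == "LOAD":                       # decode, then execute
            accumulator = operand
        elif opcode == "ADD":
            accumulator += operand
        elif opcode == "HALT":
            return accumulator

print(run(memory))  # -> 12
```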
Once the CPU has executed the program instruction, the
program may request that information be communicated to an output device, such
as a video display monitor or a flat liquid crystal display. Other output
devices are printers, overhead projectors, videocassette recorders (VCRs), and speakers.
Programming languages contain the series of commands that
create software. In general, a language that is close to the binary code a
computer's hardware understands can be processed more quickly, and a program
written in such a language runs faster. Languages that use words or other
commands that reflect how humans think are easier for programmers to use, but
they execute more slowly because they must first be translated into a form the
computer can understand.
Computer programs that can be run by a computer's
operating system are called executables. An executable program is a sequence of
extremely simple instructions known as machine code. These instructions are
specific to the individual computer's CPU and associated hardware; for example,
Intel Pentium and PowerPC microprocessor chips each have different machine
languages and require different sets of codes to perform the same task. Machine
code instructions are few in number (roughly 20 to 200, depending on the
computer and the CPU). Typical instructions are for copying data from a memory
location or for adding the contents of two memory locations (usually registers
in the CPU). Machine code instructions are binary—that is, sequences of bits
(0s and 1s). Because these numbers are not understood easily by humans, computer
instructions usually are not written in machine code.
Assembly language uses commands that are easier for
programmers to understand than are machine-language commands. Each machine
language instruction has an equivalent command in assembly language. For
example, in assembly language, the statement “MOV A, B” instructs the
computer to copy data from one location to another. The same instruction in
machine code is a string of 16 0s and 1s. Once an assembly-language program is
written, it is converted to a machine-language program by another program called
an assembler. Assembly language is fast and powerful because of its
correspondence with machine language. It is still difficult to use, however,
because assembly-language instructions are a series of abstract codes. In
addition, different CPUs use different machine languages and therefore require
different assembly languages. Assembly language is sometimes inserted into a
higher-level language program to carry out specific hardware tasks or to speed
up a higher-level program.
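The mnemonic-to-binary translation an assembler performs can be sketched in
Python. The opcode and register encodings below are invented for illustration
and do not correspond to any real CPU's machine language.

```python
# Toy assembler: maps each mnemonic and register to an invented bit pattern.
OPCODES = {"MOV": "0001", "ADD": "0010", "HLT": "1111"}
REGISTERS = {"A": "00", "B": "01"}

def assemble(line):
    """Translate one statement, e.g. 'MOV A, B', into a string of bits."""
    mnemonic, *operands = line.replace(",", " ").split()
    bits = OPCODES[mnemonic]
    for register in operands:
        bits += REGISTERS[register]
    return bits

print(assemble("MOV A, B"))  # -> 00010001
```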
Higher-level languages were developed because of the
difficulty of programming assembly languages. Higher-level languages are easier
to use than machine and assembly languages because their commands resemble
natural human language. In addition, these languages are not CPU-specific.
Instead, they contain general commands that work on different CPUs. For example,
a programmer writing in the higher-level Pascal programming language who wants
to display a greeting need include only the following command:
Write('Hello, Encarta User!');
This command directs the computer's CPU to display the
greeting, and it will work no matter what type of CPU the computer uses. Like
assembly language instructions, higher-level languages also must be translated,
but a compiler is used. A compiler turns a higher-level program into a
CPU-specific machine language. For example, a programmer may write a program in
a higher-level language such as C and then prepare it for different machines,
such as a Cray Y-MP supercomputer or a personal computer, using compilers
designed for those machines. This speeds the programmer's task and makes the
software more portable to different users and machines.
American naval officer and mathematician Grace Murray
Hopper helped develop the first commercially available higher-level software
language, FLOW-MATIC, in 1957. Hopper is often credited with popularizing the
term bug, which indicates a computer malfunction; in 1945 she discovered a
hardware failure in the Mark II computer caused by a moth trapped between its
mechanical relays.
From 1954 to 1958 American computer scientist John Backus
of International Business Machines, Inc. (IBM) developed FORTRAN, an acronym for
FORmula TRANslation. It became a standard programming language because it
can process mathematical formulas. FORTRAN and its variations are still in use today.
Beginner's All-purpose Symbolic Instruction Code, or
BASIC, was developed by American mathematician John Kemeny and
Hungarian-American mathematician Thomas Kurtz at Dartmouth College in 1964. The
language was easier to learn than its predecessors and became popular due to its
friendly, interactive nature and its inclusion on early personal computers
(PCs). Unlike other languages that require that all their instructions be
translated into machine code first, BASIC is interpreted—that is, it is turned
into machine language line by line as the program runs. BASIC commands typify
higher-level languages because of their simplicity and their closeness to
natural human language. For example, a program that divides a number in half can
be written as
10 INPUT "ENTER A NUMBER"; X
20 Y = X/2
30 PRINT "HALF OF THAT NUMBER IS"; Y
The numbers that precede each
line are chosen by the programmer to indicate the sequence of the commands. The
first line prints “ENTER A NUMBER” on the computer screen followed by a
question mark to prompt the user to type in the number labeled “X.” In the
next line, that number is divided by two, and in the third line, the result of
the operation is displayed on the computer screen.
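BASIC's line-by-line interpretation can be sketched in Python: the program is a
table of numbered statements executed in line-number order. The mini
instruction set here (INPUT, DIV2, PRINT) is invented for illustration.

```python
# Each numbered line is executed in order, as a BASIC interpreter would.
program = {
    10: ("INPUT", "X"),         # read a number into variable X
    20: ("DIV2", "Y", "X"),     # Y = X / 2
    30: ("PRINT", "Y"),         # display Y
}

def run(program, input_value):
    variables = {}
    for line_number in sorted(program):   # interpret one line at a time
        op, *args = program[line_number]
        if op == "INPUT":
            variables[args[0]] = input_value
        elif op == "DIV2":
            variables[args[0]] = variables[args[1]] / 2
        elif op == "PRINT":
            print("HALF OF THAT NUMBER IS", variables[args[0]])
    return variables

run(program, 12)  # prints: HALF OF THAT NUMBER IS 6.0
```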
Other higher-level languages in use today include C, Ada,
Pascal, LISP, Prolog, COBOL, HTML, and Java. New compilers are being developed,
and many features available in one language are being made available in others.
Object-Oriented Programming Languages
Object-oriented programming (OOP) languages like C++ are
based on traditional higher-level languages, but they enable a programmer to
think in terms of collections of cooperating objects instead of lists of
commands. Objects, such as a circle, have properties such as the radius of the
circle and the command that draws it on the computer screen. Classes of objects
can inherit features from other classes of objects. For example, a class
defining squares can inherit features such as right angles from a class defining
rectangles. This set of programming classes simplifies the programmer's task,
resulting in more reliable and efficient programs.
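The rectangle/square example can be sketched in Python rather than C++; the
principle of inheritance is the same in both languages.

```python
# A square inherits the properties and behavior of a rectangle
# and adds its own constraint: all sides are equal.
class Rectangle:
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def area(self):
        return self.width * self.height

class Square(Rectangle):
    def __init__(self, side):
        # Reuse (inherit) the rectangle's constructor and area method.
        super().__init__(side, side)

print(Square(4).area())  # -> 16
```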
Types of Computers
Digital and Analog
Computers can be either digital or analog. Digital refers
to the processes in computers that manipulate binary numbers (0s or 1s), which
represent switches that are turned on or off by electrical current. Analog
refers to numerical values that have a continuous range. Both 0 and 1 are analog
numbers, but so is 1.5 or a number like π (approximately 3.14). As an example,
consider a desk lamp. If it has a simple on/off switch, then it is digital,
because the lamp either produces light at a given moment or it does not. If a
dimmer replaces the on/off switch, then the lamp is analog, because the amount
of light can vary continuously from on to off and all intensities in between.
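The lamp analogy can be expressed in code: an analog quantity varies
continuously, while a digital representation must quantize it into discrete
states. The 0.5 threshold below is an arbitrary illustrative choice.

```python
# An analog dimmer can take any brightness in [0.0, 1.0];
# a digital on/off switch must pick one of two states.
def digital_switch(brightness, threshold=0.5):
    """Quantize a continuous brightness to a single bit (0 or 1)."""
    return 1 if brightness >= threshold else 0

for level in (0.0, 0.3, 0.7, 1.0):
    print(level, "->", digital_switch(level))
```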
Analog computer systems were the first type to be
produced. A popular analog computer used in the 20th century was the slide rule.
It performed calculations by sliding a narrow, gauged wooden strip inside a
rulerlike holder. Because the sliding is continuous and there is no mechanism to
stop at one exact value, the slide rule is analog. New interest has been shown
recently in analog computers, particularly in areas such as neural networks that
respond to continuous electrical signals. Most modern computers, however, are
digital machines whose components have a finite number of states—for example,
the 0 or 1, or on or off of bits. These bits can be combined to denote
information such as numbers, letters, graphics, and program instructions.
Range of Computer Ability
Computers exist in a wide range of sizes and power. The
smallest are embedded within the circuitry of appliances, such as televisions
and wrist watches. These computers are typically preprogrammed for a specific
task, such as tuning to a particular television frequency or keeping accurate time.
Programmable computers vary enormously in their
computational power, speed, memory, and physical size. The smallest of these
computers can be held in one hand and are called personal digital assistants (PDAs).
They are used as notepads, scheduling systems, and address books; if equipped
with a cellular phone, they can connect to worldwide computer networks to
exchange information regardless of location.
Laptop computers and PCs are typically used in businesses
and at home to communicate on computer networks, for word processing, to track
finances, and to play games. They have large amounts of internal memory to store
hundreds of programs and documents. They are equipped with a keyboard; a mouse,
trackball, or other pointing device; and a video display monitor or liquid
crystal display (LCD) to display information. Laptop computers usually have
hardware and software similar to those of PCs, but they are more compact and
have flat, lightweight LCDs instead of video display monitors.
Workstations are similar to personal computers but have
greater memory and more extensive mathematical abilities, and they are connected
to other workstations or personal computers to exchange data. They are typically
found in scientific, industrial, and business environments that require high
levels of computational abilities.
Mainframe computers have more memory, speed, and
capabilities than workstations and are usually shared by multiple users through
a series of interconnected computers. They control businesses and industrial
facilities and are used for scientific research. The most powerful mainframe
computers, called supercomputers, process complex and time-consuming
calculations, such as those used to create weather predictions. They are used by
the largest businesses, scientific institutions, and the military. Some
supercomputers have many sets of CPUs. These computers break a task into small
pieces, and each CPU processes a portion of the task to increase overall speed
and efficiency. Such computers are called parallel processors.
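The divide-and-process strategy of a parallel processor can be sketched with
Python's thread pool standing in for multiple CPUs (a simplification: a real
parallel machine runs each piece on a separate physical processor).

```python
from concurrent.futures import ThreadPoolExecutor

def split_sum(numbers, workers=4):
    """Break a summing task into pieces and process each piece separately."""
    chunk = len(numbers) // workers
    pieces = [numbers[i * chunk:(i + 1) * chunk] for i in range(workers - 1)]
    pieces.append(numbers[(workers - 1) * chunk:])  # remainder goes last
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partial_sums = list(pool.map(sum, pieces))  # each worker sums a piece
    return sum(partial_sums)                        # combine the results

print(split_sum(list(range(1000))))  # -> 499500
```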
Computers can communicate with other computers through a
series of connections and associated hardware called a network. The advantage of
a network is that data can be exchanged rapidly, and software and hardware
resources, such as hard-disk space or printers, can be shared.
One type of network, a local area network (LAN), consists
of several PCs or workstations connected to a special computer called the
server. The server stores and manages programs and data. A server often contains
all of a networked group's data and enables LAN workstations to be set up
without storage capabilities to reduce cost.
Mainframe computers and supercomputers commonly are
networked. They may be connected to PCs, workstations, or terminals that have no
computational abilities of their own. These “dumb” terminals are used only
to enter data into, or receive output from, the central computer.
Wide area networks (WANs) are networks that span large
geographical areas. Computers can connect to these networks to use facilities in
another city or country. For example, a person in Los Angeles can browse through
the computerized archives of the Library of Congress in Washington, D.C. The
largest WAN is the Internet, a global consortium of networks linked by common
communication programs. The Internet is a mammoth resource of data, programs,
and utilities. Its core technology was developed largely by American computer
scientists Vinton Cerf and Robert Kahn, beginning in 1973, under the United
States Department of Defense Advanced Research
Projects Agency (DARPA). In 1984 the development of Internet technology was
turned over to private, government, and scientific agencies. The World Wide Web
is a system of information resources accessed primarily through the Internet.
Users can obtain a variety of information in the form of text, graphics, sounds,
or animations. These data are extensively cross-indexed, enabling users to
browse (transfer from one information site to another) via buttons, highlighted
text, or sophisticated searching software known as search engines.
The history of computing began with an analog machine. In
1623 German scientist Wilhelm Schickard invented a machine that used 11 complete
and 6 incomplete sprocketed wheels that could add and, with the aid of logarithm
tables, multiply and divide.
French philosopher, mathematician, and physicist Blaise
Pascal invented a machine in 1642 that added and subtracted, automatically
carrying and borrowing digits from column to column. Pascal built 50 copies of
his machine, but most served as curiosities in parlors of the wealthy.
Seventeenth-century German mathematician Gottfried Leibniz designed a special
gearing system to enable multiplication on Pascal's machine.
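The column-by-column carrying that Pascal's gears performed mechanically can be
modeled in a few lines of code (an illustrative model of the arithmetic, not a
description of the actual mechanism):

```python
from itertools import zip_longest

def add_decimal(a_digits, b_digits):
    """Add two numbers given as lists of decimal digits, least significant
    digit first, carrying from column to column as Pascal's wheels did."""
    result, carry = [], 0
    for a, b in zip_longest(a_digits, b_digits, fillvalue=0):
        column = a + b + carry
        result.append(column % 10)  # digit left showing in this column
        carry = column // 10        # carry passed along to the next wheel
    if carry:
        result.append(carry)
    return result
```

For example, adding 99 and 1 ripples a carry through every column to produce
100, just as a full turn of one wheel advances the next.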
In the early 19th century French inventor Joseph-Marie
Jacquard devised a specialized type of computer: a loom. Jacquard's loom used
punched cards to program patterns that were output as woven fabrics by the loom.
Though Jacquard was rewarded and admired by French emperor Napoleon I for his
work, he fled for his life from the city of Lyon pursued by weavers who feared
their jobs were in jeopardy due to Jacquard's invention. The loom prevailed,
however: When Jacquard passed away, more than 30,000 of his looms existed in
Lyon. The looms are still used today, especially in the manufacture of fine fabrics.
Another early mechanical computer was the Difference
Engine, designed in the early 1820s by British mathematician and scientist
Charles Babbage. Although never completed by Babbage, the Difference Engine was
intended to be a machine with a 20-decimal capacity that could solve
mathematical problems. Babbage also made plans for another machine, the
Analytical Engine, considered to be the mechanical precursor of the modern
computer. The Analytical Engine was designed to perform all arithmetic
operations efficiently; however, Babbage's lack of political skills kept him
from obtaining the approval and funds to build it. Augusta Ada Byron (Countess
of Lovelace, 1815-52) was a personal friend and student of Babbage. She was the
daughter of the famous poet Lord Byron and one of only a few women
mathematicians of her time. She prepared extensive notes concerning Babbage's
ideas and the Analytical Engine. Ada's conceptual programs for the Engine led to
the naming of a programming language (Ada) in her honor. Although the Analytical
Engine was never built, its key concepts, such as the capacity to store
instructions, the use of punched cards as a primitive memory, and the ability to
print, can be found in many modern computers.
Herman Hollerith, an American inventor, used an idea
similar to Jacquard's loom when he combined the use of punched cards with
devices that created and electronically read the cards. Hollerith's tabulator
was used for the 1890 U.S. census, and it made the computational time three to
four times shorter than the time previously needed for hand counts. Hollerith's
Tabulating Machine Company eventually merged with other companies in 1924 to
form International Business Machines Corporation (IBM).
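Hollerith's tabulation amounted to tallying punched fields across many cards. A
minimal sketch of that idea, with invented field names and values:

```python
from collections import Counter

# Each "card" stands for one census record with two punched fields.
# The fields and values here are purely illustrative.
cards = [
    ("NY", "clerk"), ("NY", "farmer"), ("PA", "farmer"),
    ("NY", "clerk"), ("PA", "miner"),
]

# Tallying a field is one pass over the deck, as in Hollerith's tabulator.
by_state = Counter(state for state, _ in cards)
by_occupation = Counter(occupation for _, occupation in cards)
```

Reading each card once and advancing the appropriate counters is exactly the
work the tabulator's electrical contacts performed mechanically.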
In 1936 British mathematician Alan Turing proposed the
idea of a machine that could process equations without human direction. The
machine (now known as a Turing machine) resembled an automatic typewriter that
used symbols for math and logic instead of letters. Turing intended the device
to be used as a “universal machine” that could be programmed to duplicate
the function of any other existing machine. Turing's machine was the theoretical
precursor to the modern digital computer.
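A minimal simulator conveys the idea. The rule format and the binary-increment
program below are illustrative, not Turing's own notation:

```python
def run_tm(tape, rules, state, head, blank="_"):
    """Run a one-tape Turing machine until it reaches the "halt" state.
    rules maps (state, symbol) -> (symbol_to_write, move, next_state)."""
    tape = list(tape)
    while state != "halt":
        if head < 0:                 # extend the tape with blanks as needed
            tape.insert(0, blank)
            head = 0
        elif head >= len(tape):
            tape.append(blank)
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += {"L": -1, "R": 1, "N": 0}[move]
    return "".join(tape).lstrip(blank)

# Example program: binary increment. Starting at the rightmost bit, turn
# trailing 1s into 0s, then turn the first 0 (or blank) into 1 and halt.
rules = {
    ("inc", "1"): ("0", "L", "inc"),
    ("inc", "0"): ("1", "N", "halt"),
    ("inc", "_"): ("1", "N", "halt"),
}
```

Changing only the rule table makes the same simulator carry out a different
computation, which is the sense in which the machine is "universal."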
Beginning in the late 1930s, American mathematician Howard Aiken
developed the Mark I calculating machine, which was built by IBM and completed
in 1944. This electromechanical machine used relays and electromagnetic components to
replace mechanical components. In later machines, Aiken used vacuum tubes and
solid state transistors (tiny electrical switches) to manipulate the binary
numbers. Aiken also introduced computers to universities by establishing the
first computer science program at Harvard University. Aiken never trusted the
concept of storing a program within the computer. Instead his computer had to
read instructions from punched cards.
At the Institute for Advanced Study in Princeton,
Hungarian-American mathematician John von Neumann developed one of the first
computers used to solve problems in mathematics, meteorology, economics, and
hydrodynamics. Von Neumann's 1945 design for the Electronic Discrete Variable
Automatic Computer (EDVAC) described the first electronic computer to use a
program stored entirely within its own memory.
John Mauchly, an American physicist, proposed an
electronic digital computer, called the Electronic Numerical Integrator And
Computer (ENIAC), which was built at the Moore School of Engineering at the
University of Pennsylvania in Philadelphia by Mauchly and J. Presper Eckert, an
American engineer. ENIAC was completed in 1945 and is regarded as the first
successful general-purpose digital computer. It weighed more than 27,000 kg (60,000
lb), and contained more than 18,000 vacuum tubes. Roughly 2000 of the computer's
vacuum tubes were replaced each month by a team of six technicians. Many of
ENIAC's first tasks were for military purposes, such as calculating ballistic
firing tables and designing atomic weapons. Since ENIAC was initially not a
stored program machine, it had to be reprogrammed for each task.
Eckert and Mauchly eventually formed their own company,
which was later acquired by Remington Rand. They produced the Universal
Automatic Computer (UNIVAC), which was used for a broader variety of commercial
applications. By 1957, 46 UNIVACs were in use.
In 1948, at Bell Telephone Laboratories, American
physicists Walter Houser Brattain, John Bardeen, and William Bradford Shockley
developed the transistor, a device that can act as an electric switch. The
transistor had a tremendous impact on computer design, replacing costly,
energy-inefficient, and unreliable vacuum tubes.
In the late 1960s integrated circuits, tiny transistors
and other electrical components arranged on a single chip of silicon, replaced
individual transistors in computers. Integrated circuits became miniaturized,
enabling more components to be designed into a single computer circuit. In the
1970s refinements in integrated circuit technology led to the development of the
modern microprocessor, integrated circuits that contained thousands of
transistors. Modern microprocessors contain as many as 10 million transistors.
Manufacturers used integrated circuit technology to build
smaller and cheaper computers. The first of these so-called personal computers
(PCs) was sold by Micro Instrumentation and Telemetry Systems (MITS). The Altair 8800 appeared in
1975. It used an 8-bit Intel 8080 microprocessor, had 256 bytes of RAM, received
input through switches on the front panel, and displayed output on rows of
light-emitting diodes (LEDs). Refinements in the PC continued with the inclusion
of video displays, better storage devices, and CPUs with more computational
abilities. Graphical user interfaces were first designed by the Xerox
Corporation and later used successfully by Apple Computer with
its Macintosh computer. Today the development of sophisticated operating systems
such as Windows 95 and Unix enables computer users to run programs and
manipulate data in ways that were unimaginable 50 years ago.
Possibly the largest single calculation was accomplished
by physicists at IBM in 1995 solving one million trillion mathematical problems
by continuously running 448 computers for two years to demonstrate the existence
of a previously hypothetical subatomic particle called a glueball. Japan, Italy,
and the United States are collaborating to develop new supercomputers that will
run these calculations one hundred times faster.
In 1996 IBM challenged Garry Kasparov, the reigning world
chess champion, to a chess match with a supercomputer called Deep Blue. The
computer had the ability to compute more than 100 million chess positions per
second. Kasparov won the match with three wins, two draws, and one loss. Deep
Blue was the first computer to win a game against a reigning world chess
champion with regulation time controls. Many experts predict these types of
parallel processing machines will soon surpass human chess playing ability, and
some speculate that massive calculating power will one day replace intelligence.
Deep Blue serves as a prototype for future computers that will be required to
solve complex problems.
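Chess machines of this kind are generally built on game-tree search. A minimal
minimax sketch over an abstract game tree shows the core idea; the tree and its
scores are invented for illustration and greatly simplify what Deep Blue
actually did:

```python
def minimax(node, maximizing):
    """Score a game tree: a leaf is a number (a static evaluation of the
    position); an internal node is the list of positions reachable in one
    move. The two players alternately maximize and minimize the score."""
    if isinstance(node, (int, float)):
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A hypothetical two-ply tree: three possible moves for us, each answered
# by two possible replies from the opponent.
tree = [[3, 5], [2, 9], [0, 7]]
best = minimax(tree, True)  # the opponent picks the minimum of each branch
```

Evaluating millions of such positions per second, across many processors in
parallel, is what turned this simple recursion into a match for a world
champion.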
In 1965 semiconductor pioneer Gordon Moore predicted that
the number of transistors contained on a computer chip would double every year.
This is now known as Moore's Law, and it has proven to be somewhat accurate. The
number of transistors and the computational speed of microprocessors currently
doubles approximately every 18 months. Components continue to shrink in size and
are becoming faster, cheaper, and more versatile.
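The compounding implied by Moore's Law is easy to express directly; the figures
below are illustrative, using the doubling period quoted above:

```python
def projected_transistors(initial, years, doubling_period=1.5):
    """Project a transistor count forward assuming one doubling roughly
    every 18 months (1.5 years), per the current form of Moore's Law."""
    return initial * 2 ** (years / doubling_period)
```

For example, a 10-million-transistor chip would be expected to grow to about 40
million transistors in three years, since three years holds two doublings.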
With their increasing power and versatility, computers
simplify day-to-day life. Unfortunately, as computer use becomes more
widespread, so do the opportunities for misuse. Computer hackers—people who
illegally gain access to computer systems—often violate privacy and can tamper
with or destroy records. Programs called viruses or worms can replicate and
spread from computer to computer, erasing information or causing computer
malfunctions. Other individuals have used computers to electronically embezzle
funds and alter credit histories (see
Computer Security). New ethical issues also have arisen, such as how to regulate
material on the Internet and the World Wide Web. Individuals, companies, and
governments are working to solve these problems by developing better computer
security and enacting regulatory legislation.
Computers will become more advanced and they will also
become easier to use. Reliable speech recognition will make the operation of a
computer easier. Virtual reality, the technology of interacting with a computer
using all of the human senses, will also contribute to better human and computer
interfaces. Standards for virtual-reality programming languages, called Virtual
Reality Modeling Language (VRML), currently are being developed for the World
Wide Web.
Communications between computer users and networks will
benefit from new technologies such as broadband communication systems that can
carry significantly more data and carry it faster, to and from the vast
interconnected databases that continue to grow in number and type.