01 April 2010

SeConDaRy StoRaGe

Secondary storage, or external memory, differs from primary storage in that it is not directly accessible by the CPU. The computer usually uses its input/output channels to access secondary storage, transferring the desired data through an intermediate area in primary storage. Secondary storage does not lose its data when the device is powered down; it is non-volatile. Per unit, it is typically also an order of magnitude less expensive than primary storage. Consequently, modern computer systems typically have an order of magnitude more secondary storage than primary storage, and data is kept there for a longer time.



• FLOPPY DISK

A floppy disk is a data storage medium composed of a disk of thin, flexible ("floppy") magnetic storage material encased in a square or rectangular plastic shell.
Floppy disks are read and written by a floppy disk drive or FDD, the initials of which should not be confused with "fixed disk drive", which is another term for a (nonremovable) type of hard disk drive.



• FLOPPY DRIVES

A floppy drive grabs a disk at its centre and spins it inside its plastic jacket. The floppy drive reads stored data and instructions from a floppy disk and writes data onto the disk. The drive is made up of a box with a slot, protected by a drive gate, into which the user inserts a disk. A motor inside the drive rotates the disk, and electronic read/write heads read data from the disk and write data to it while it rotates. A microcomputer usually has internal floppy drives inside the computer cabinet, but it sometimes has an external floppy drive, a separate component outside the cabinet.




• HARD DISK

A hard disk drive (often shortened as hard disk, hard drive, or HDD) is a non-volatile storage device that stores digitally encoded data on rapidly rotating rigid (i.e. hard) platters with magnetic surfaces. Strictly speaking, "drive" refers to the motorized mechanical aspect that is distinct from its medium, such as a tape drive and its tape, or a floppy disk drive and its floppy disk. Early HDDs had removable media; however, an HDD today is typically a sealed unit (except for a filtered vent hole to equalize air pressure) with fixed media.



• OPTICAL DISKS

An optical disk is any of a variety of information storage disks that are played or read using a laser. Optical disks include compact discs (CDs and CD-ROMs), laser discs and digital versatile discs (or digital video discs; DVDs and DVD-ROMs). WORM (Write Once/Read Many) disks can be used to record data, but once data is recorded it cannot be altered except by obliterating the old version and storing the new version on a previously unused portion of the disk. Magneto-optical disks, such as the rewritable optical disk and the recordable disk used with the MiniDisc player, have a special layer, such as barium ferrite, that can be magnetically polarized by a recording head when heated with a laser. Data or sound may be recorded to and erased from any portion of a magneto-optical disk multiple times.



• MAGNETIC TAPE

Magnetic tape is a sequential storage medium used for data collection, backup and archiving. Like videotape, computer tape is made of flexible plastic with one side coated with a ferromagnetic material. Tapes were originally open reels but were superseded by cartridges and cassettes of many sizes and shapes.
Tape has been more economical than disk for archival data, but that is changing as disk capacities have increased enormously. If tapes are stored for long periods, they must be periodically recopied, or the tightly coiled magnetic surfaces may contaminate each other.



• CACHE (CACHE MEMORY)

A cache stores data locally in order to speed up subsequent retrievals. Pronounced "cash," caches are reserved areas of memory in every computer that are used to speed up instruction execution, data retrieval and data updating. They serve as staging areas, and their contents are constantly changing. There are two kinds: memory caches and disk caches.
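
To make the staging-area idea concrete, here is a minimal sketch in C of a direct-mapped cache sitting in front of a slow data source. The table size, the slow_lookup stand-in and the keys are all invented for illustration; real memory and disk caches are managed by hardware or the operating system, not application code.

#include <stdio.h>

#define CACHE_SLOTS 8            /* hypothetical, tiny cache */

struct cache_entry { int key; int value; int valid; };
static struct cache_entry cache[CACHE_SLOTS];

/* Stand-in for an expensive retrieval (a disk read, say). */
static int slow_lookup(int key) { return key * key; }

int cached_lookup(int key) {
    int slot = key % CACHE_SLOTS;            /* direct-mapped: key picks its slot */
    if (cache[slot].valid && cache[slot].key == key)
        return cache[slot].value;            /* hit: served from the fast copy */
    int value = slow_lookup(key);            /* miss: go to the slow source */
    cache[slot].key = key;
    cache[slot].value = value;
    cache[slot].valid = 1;
    return value;
}

int main(void) {
    printf("%d\n", cached_lookup(5));        /* miss, fills a slot */
    printf("%d\n", cached_lookup(5));        /* hit, no slow lookup */
    return 0;
}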

PrimaRy StoRaGe

Primary storage (also main memory or internal memory), often referred to simply as memory, is the only storage directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there in a uniform manner.
Modern primary storage is random-access memory (RAM). It is small and light, but quite expensive at the same time. The particular types of RAM used for primary storage are also volatile, i.e. they lose the information when not powered.
Main memory is directly or indirectly connected to the CPU via a memory bus, which is actually two buses: an address bus and a data bus. The CPU first sends a number called the memory address through the address bus to indicate the desired location of data; it then reads or writes the data itself using the data bus. Additionally, a memory management unit (MMU), a small device between the CPU and RAM, recalculates the actual memory address, for example to provide an abstraction of virtual memory or for other tasks.
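
As a rough sketch (not any particular machine), the address-then-data pattern can be modelled in C by treating primary storage as a flat byte array: the array index plays the role of the number on the address bus, and the byte moved in or out plays the role of the data-bus transfer. The 64 KB size is arbitrary.

#include <stdint.h>
#include <stdio.h>

static uint8_t memory[65536];                  /* hypothetical 64 KB of RAM */

uint8_t read_byte(uint16_t address) {          /* address bus in... */
    return memory[address];                    /* ...data bus out */
}

void write_byte(uint16_t address, uint8_t data) {
    memory[address] = data;                    /* address first, then data */
}

int main(void) {
    write_byte(0x1234, 42);
    printf("%u\n", read_byte(0x1234));         /* prints 42 */
    return 0;
}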
As the RAM types used for primary storage are volatile (cleared at start-up), a computer containing only such storage would have no source from which to read instructions in order to start. Hence, non-volatile primary storage containing a small start-up program (the BIOS) is used to bootstrap the computer, that is, to read a larger program from non-volatile secondary storage into RAM and start executing it. A non-volatile technology used for this purpose is called ROM, for read-only memory; the terminology may be somewhat confusing, as most ROM types are also capable of random access.
Many types of "ROM" are not literally read-only, as updates are possible; however, updating is slow, and memory must be erased in large portions before it can be re-written. Some embedded systems run programs directly from ROM or similar storage, because such programs are rarely changed. Standard computers do not store non-rudimentary programs in ROM, but rather use large capacities of secondary storage, which is non-volatile as well and not as costly.



Random-access memory, usually known by its acronym RAM, is a form of computer data storage. Today it takes the form of integrated circuits that allow stored data to be accessed in any order. The word random thus refers to the fact that any piece of data can be returned in a constant time, regardless of its physical location and whether or not it is related to the previous piece of data.
By contrast, storage devices such as magnetic discs and optical discs rely on the physical movement of the recording medium or a reading head. In these devices, the movement takes longer than data transfer, and the retrieval time varies based on the physical location of the next item.
The word RAM is often associated with volatile types of memory such as DRAM memory modules, where the information is lost after the power is switched off. Many other types of memory are RAM, too, including most types of ROM and a type of flash memory called NOR-Flash.

Read-only memory, usually known by its acronym ROM, is a class of storage media used in computers and other electronic devices. Because data stored in ROM cannot be modified (at least not very quickly or easily), it is mainly used to distribute firmware (software that is very closely tied to specific hardware and unlikely to require frequent updates). In its strictest sense, ROM refers only to mask ROM (the oldest type of solid-state ROM), which is fabricated with the desired data permanently stored in it and thus can never be modified. However, more modern types such as EPROM and flash EEPROM can be erased and re-programmed multiple times; they are still described as "read-only memory" (ROM) because the reprogramming process is generally infrequent, comparatively slow, and often does not permit random-access writes to individual memory locations.



An EPROM, or erasable programmable read only memory, is a type of memory chip that retains its data when its power supply is switched off. In other words, it is non-volatile. It is an array of floating-gate transistors individually programmed by an electronic device that supplies higher voltages than those normally used in digital circuits. Once programmed, an EPROM can be erased by exposing it to strong ultraviolet light from a mercury-vapor light source. EPROMs are easily recognizable by the transparent fused quartz window in the top of the package, through which the silicon chip is visible, and which permits exposure to UV light during erasing.

BuSeS

A bus is a common digital pathway between resources and devices. In a PC, there are two major types: the system bus and peripheral bus. The system bus, also known as the "front side bus" or "local bus," is the internal path from the CPU to memory and is split into address bus and data bus subsets. Addresses are sent over the address lines to signal a memory location, and data are transferred over the data lines to that location.

System buses transfer data in parallel. In a 32-bit bus, data are sent over 32 wires simultaneously. A 64-bit bus uses 64 wires.


• Three Main Bus Architectures

ISA (Industry Standard Architecture): Pronounced "i-suh," ISA stems from the original IBM PC. It was an 8-bit bus originally known as the PC bus and then the XT bus; it was later extended to 16 bits and became the AT bus and eventually the ISA bus.

MCA (Micro Channel Architecture): A 32-bit bus used in the IBM PS/2 series and other IBM models. The architecture supports multiprocessing, allowing several processors to work simultaneously. Micro Channel Architecture is not compatible with PC bus architecture.

EISA (Extended Industry Standard Architecture): Pronounced "eesa," this bus was a 32-bit extension of ISA created by major vendors to counter IBM's Micro Channel. EISA slots accepted both EISA and ISA cards, but the clock speed was still the slow ISA rate. EISA was used in servers but later abandoned for PCI.



• Local Buses

A local bus is a computer bus that connects directly, or almost directly, from the CPU to one or more slots on the expansion bus. The significance of the direct connection to the CPU is that it avoids the bottleneck created by the expansion bus, thus providing fast throughput. Several local buses are built into various types of computers to increase the speed of data transfer; local buses for expanded memory and video boards are the most common. Two local-bus systems are in use today:


VL-Bus (VESA Local Bus): The VL-Bus specification was introduced by VESA (the Video Electronics Standards Association), and the VESA Local Bus was very commonplace on 486 motherboards. Probably a majority of 486-based systems had a VESA Local Bus video card, although early 486 systems did not have VLB slots, as VLB debuted years after the introduction of the 486 processor. By 1996, the Pentium (driven by Intel's Triton chipset and PCI architecture) had eliminated the 80486 market and the VESA Local Bus with it. Many of the last 80486 motherboards made have PCI slots in addition to (or completely replacing) the VLB slots.



PCI (Peripheral Component Interconnect): The PCI bus, available in 32- and 64-bit versions, is the most popular bus architecture. It is used in PCs as well as many other platforms. In 2002, PCI Express was introduced, providing greatly enhanced speeds. Typical PCI cards used in PCs include: network cards, sound cards, modems, extra ports such as USB or serial, TV tuner cards and disk controllers. Historically video cards were typically PCI devices, but growing bandwidth requirements soon outgrew the capabilities of PCI. PCI video cards remain available for supporting extra monitors and upgrading PCs that do not have any AGP or PCI Express slots.
Many devices traditionally provided on expansion cards are now commonly integrated onto the motherboard itself, meaning that modern PCs often have no cards fitted. However, PCI is still used for certain specialized cards; although many tasks traditionally performed by expansion cards may now be performed equally well by USB devices.

PorTs

A port is an external connecting socket on the outside of the computer. It is a pathway into and out of the computer. A port lets users plug in outside peripherals, such as monitors, scanners and printers.

• Serial Ports

The serial port was created as an interface between data terminal equipment and data-communications equipment. It processes data sequentially, as a series of bits, and is used to connect equipment such as a modem or mouse to the computer.



• Parallel Ports

The parallel port processes several data bits in parallel and is used to connect peripherals such as printers and optical scanners to the computer. The parallel port is faster, but the serial port is cheaper and requires less power. The sketch below contrasts the two transfer styles.
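
This C sketch is only a model of the two transfer styles, not real port programming: the serial function pushes one bit per step over a single notional wire, while the parallel function hands over all eight bits in one step.

#include <stdint.h>
#include <stdio.h>

void send_serial(uint8_t byte) {
    for (int bit = 0; bit < 8; bit++)          /* eight steps, one wire */
        printf("%d", (byte >> bit) & 1);       /* least significant bit first */
    printf("\n");
}

void send_parallel(uint8_t byte) {
    printf("%02X\n", byte);                    /* one step, eight wires */
}

int main(void) {
    send_serial(0x5A);                         /* sent one bit at a time */
    send_parallel(0x5A);                       /* sent all at once */
    return 0;
}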

ExPanSioN SLoTS/BoarDs

• Open/Closed Architectures

Open Architecture is a system whose specifications are made public to encourage third-party vendors to develop add-on products for it. Most microcomputers adopt open architecture. They allow users to expand their systems using optional expansion boards.

Closed Architecture is a system whose technical specifications are not made public. With a machine that has closed architecture, users cannot easily add new peripherals.

• Expansion Slots



Expansion slots are receptacles inside a system unit into which printed circuit boards (expansion boards) are plugged. Computer buyers need to look at the number of expansion slots when they buy a computer, because the number of expansion slots determines future expandability. In microcomputers, the expansion slots are directly connected to the bus.

• Expansion Boards



A printed circuit board that plugs into an expansion slot on the motherboard and extends the computer's capability to control a peripheral device. Also called a "card," "interface card," "adapter" or "controller," all the printed circuit boards that plug into a computer's bus are technically expansion boards, because they expand the computer's capability. Typical examples are the display adapter, network adapter (NIC) and sound card; however, all of these circuits may be contained in chips on the motherboard.

SysTem CloCK



The clock is a device that generates periodic, accurately spaced signals used for several purposes, such as regulating the operations of a processor or generating interrupts. The clock circuit uses the fixed vibrations generated by a quartz crystal to deliver a steady stream of pulses to the processor. The system clock controls the speed of all the operations within a computer.

The clock speed is the internal speed of a computer, expressed in megahertz (MHz); 33 MHz means 33 million cycles per second. A processor with a higher clock speed is faster internally. For example, a 100-MHz processor is four times as fast internally as the same processor running at 25 MHz.
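
The arithmetic is simple enough to check in a few lines of C: cycle time is the reciprocal of the clock frequency, so quadrupling the frequency divides the cycle time by four.

#include <stdio.h>

int main(void) {
    double mhz[] = { 25.0, 33.0, 100.0 };
    for (int i = 0; i < 3; i++)                /* 1 MHz = 1,000,000 cycles/s */
        printf("%6.1f MHz -> %5.1f ns per cycle\n", mhz[i], 1000.0 / mhz[i]);
    printf("100 MHz vs 25 MHz: %.0fx faster\n", 100.0 / 25.0);
    return 0;
}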

MemoRy ChiPS


• RAM Chip
(Random Access Memory) A type of memory that provides direct access to any byte on the chip. This “byte addressing” means that the contents of any byte can be read or written without regard to the bytes before or after it. In addition, read and write speeds are symmetrical: it takes no longer to write a byte than it does to read one. In contrast, writing to non-RAM memories such as flash takes considerably longer than reading (see flash memory). The most common RAM chip is the dynamic RAM (DRAM) used as a computer’s main memory. Any chip that has RAM in its name implies byte addressing and symmetric read and write speeds.



• ROM Chip
(Read Only Memory) A memory chip that permanently stores instructions and data. Also known as “mask ROM,” its content is created in the last masking stage of the chip manufacturing process, and it cannot be changed. Stand-alone ROM chips and ROM banks in microcontroller chips are used to hold control routines for a myriad of applications. ROMs were also widely used to hold the BIOS in early PCs as well as plug-in cartridges for video games.

Although EPROMs, EEPROMs, and particularly flash memory, are the kinds of non-volatile storage one hears about more often, ROM technology is mature, inexpensive and easy to integrate into any CMOS chip. The variations on the ROM chip are the following:

PROM (Programmable Read-Only Memory): A permanent memory chip in which the content is created (programmed) by the customer rather than by the chip manufacturer. It differs from a ROM chip, which is created at the time of manufacture. PROMs are used for storage when their content is not expected to change, but in many applications, they have given way to EPROMs and EEPROMs, which can be reprogrammed.


EPROM (Erasable Programmable Read-Only Memory): A rewritable memory chip that holds its content without power. EPROM chips are written on an external programming device before being placed on the circuit board. The chip requires an expensive ceramic chip package with a small quartz window that is covered with opaque, sticky tape. For reprogramming, the chip is extracted from the circuit board, the tape is removed, and it is placed under an intense ultraviolet (UV) light for approximately 20 minutes.

Although still used, EPROMs evolved into EEPROMs and flash memory, both of which can be erased in place on the circuit board.

EEPROM (Electrically Erasable Programmable Read-Only Memory): A rewritable memory chip that holds its content without power. EEPROMs are bit or byte addressable at the write level, which means either the bit or byte must be erased before it can be re-written. In flash memory, which evolved from EEPROMs and is almost identical in architecture, an entire block of bytes must be erased before writing. In addition, EEPROMs are typically used on circuit boards to store small amounts of instructions and data, whereas flash memory modules hold gigabytes of data for digital camera storage and hard disk replacements.
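
The difference in write rules can be modelled in C. Everything here is illustrative: the block size and function names are invented, and real parts are driven through device-specific commands. The model captures the key constraint: erasing sets all bits to 1, a write can only clear bits to 0, and flash must erase a whole block where an EEPROM can erase a single byte.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 4096                        /* hypothetical flash block */
static uint8_t flash_block[BLOCK_SIZE];

void flash_erase_block(void) {
    memset(flash_block, 0xFF, BLOCK_SIZE);     /* whole block erased at once */
}

void flash_write(int offset, uint8_t data) {
    flash_block[offset] &= data;               /* writes can only clear bits */
}

void eeprom_write(uint8_t *cell, uint8_t data) {
    *cell = 0xFF;                              /* erase just this byte... */
    *cell = data;                              /* ...then rewrite it */
}

int main(void) {
    flash_erase_block();                       /* required before writing */
    flash_write(0, 0x42);
    uint8_t cell = 0x13;
    eeprom_write(&cell, 0x42);                 /* no block erase needed */
    printf("%02X %02X\n", flash_block[0], cell);
    return 0;
}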

Primary Storage

The computer’s internal memory, which is typically made up of dynamic RAM chips. Until non-volatile RAM, such as magnetic RAM (MRAM), becomes commonplace, the computer’s primary storage is temporary. When the power is turned off, the data in primary storage are lost. Data or instructions are stored in primary storage locations called addresses.



MicroProCeSSor

A microprocessor is a processor whose elements are miniaturized into one or a few integrated circuits contained in a single silicon microchip. In order to function as a processor, it requires a system clock, primary storage, and a power supply.
A microprocessor incorporates most or all of the functions of a computer’s central processing unit (CPU) on a single integrated circuit (IC, or microchip). The first microprocessors emerged in the early 1970s and were used for electronic calculators, using binary-coded decimal (BCD) arithmetic on 4-bit words. Other embedded uses of 4-bit and 8-bit microprocessors, such as terminals, printers, and various kinds of automation followed rather quickly. Affordable 8-bit microprocessors with 16-bit addressing also led to the first general purpose microcomputers in the mid-1970s.
Computer processors were for a long period constructed out of small- and medium-scale ICs containing the equivalent of a few to a few hundred transistors. The integration of the whole CPU onto a single chip therefore greatly reduced the cost of processing capacity. From their humble beginnings, continued increases in microprocessor capacity have rendered other forms of computers almost completely obsolete (see history of computing hardware), with one or more microprocessors as the processing element in everything from the smallest embedded systems and handheld devices to the largest mainframes and supercomputers.
Since the early 1970s, the increase in capacity of microprocessors has been known to generally follow Moore’s Law, which suggests that the complexity of an integrated circuit, with respect to minimum component cost, doubles every two years.
In the late 1990s, and in the high-performance microprocessor segment, heat generation (TDP), due to switching losses, static current leakage, and other factors, emerged as a leading developmental constraint.

Microprocessor Capacity
The capacity of a microprocessor chip is represented by its word size. A word size is the number of bits (e.g. 8, 16, or 32) that the CPU can process at a time. If the word has more bits, the CPU is more powerful and faster. For example, a 16-bit-word computer can access 2 bytes (1 byte = 8 bits) at a time, while a 32-bit-word computer can access 4 bytes at a time; therefore, the 32-bit computer is faster than the 16-bit computer.
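
In C the standard fixed-width types make those byte counts visible directly; this is just the 2-bytes-versus-4-bytes observation above, printed by the compiler.

#include <stdint.h>
#include <stdio.h>

int main(void) {
    printf("16-bit word: %zu bytes\n", sizeof(uint16_t));   /* 2 */
    printf("32-bit word: %zu bytes\n", sizeof(uint32_t));   /* 4 */
    printf("64-bit word: %zu bytes\n", sizeof(uint64_t));   /* 8 */
    return 0;
}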


Types of Microprocessors

Intel 4004



The Intel 4004 is generally considered the first microprocessor, and it cost in the thousands of dollars. The first known advertisement for the 4004 is dated to November 1971; it appeared in Electronic News. The project that produced the 4004 originated in 1969, when Busicom, a Japanese calculator manufacturer, asked Intel to build a chipset for high-performance desktop calculators. Busicom's original design called for a programmable chip set consisting of seven different chips; three of them were used to make a special-purpose CPU with its program stored in ROM and its data stored in shift-register read-write memory. Ted Hoff, the Intel engineer assigned to evaluate the project, believed the Busicom design could be simplified by using dynamic RAM storage for data, rather than shift-register memory, and a more traditional general-purpose CPU architecture. Hoff came up with a four-chip architectural proposal: a ROM chip for storing the programs, a dynamic RAM chip for storing data, a simple I/O device and a 4-bit central processing unit (CPU), which he felt could be integrated into a single chip, although he was not a chip designer. This chip would later be called the 4004 microprocessor.
The architecture and specifications of the 4004 were the results of the interaction of Intel’s Hoff with Stanley Mazor, a software engineer reporting to Hoff, and with Busicom engineer Masatoshi Shima. In April 1970 Intel hired Federico Faggin to lead the design of the four-chip set. Faggin, who originally developed the silicon gate technology (SGT) in 1968 at Fairchild Semiconductor (and also designed the world’s first commercial integrated circuit using SGT, the Fairchild 3708), had the correct background to lead the project, since it was the SGT that made possible the design of a CPU on a single chip with the proper speed, power dissipation and cost. Faggin also developed the new methodology for random logic design, based on silicon gate, that made the 4004 possible. Production units of the 4004 were first delivered to Busicom in March 1971, and shipped to other customers in late 1971.
Although the Intel 4004 is considered the first microprocessor, there were already processors embedded in industrial controllers such as automated gas pumps, traffic controllers, and flow meters.

TMS 1000




The Smithsonian Institution says TI engineers Gary Boone and Michael Cochran succeeded in creating the first microcontroller (also called a microcomputer) in 1971. The result of their work was the TMS 1000, which went commercial in 1974.
TI developed the 4-bit TMS 1000, and stressed pre-programmed embedded applications, introducing a version called the TMS1802NC on September 17, 1971, which implemented a calculator on a chip. The Intel chip was the 4-bit 4004, released on November 15, 1971, developed by Federico Faggin who led the design of the 4004 in 1970–1971, and Ted Hoff who led the architecture in 1969. The head of the MOS Department was Leslie L. Vadász.
TI filed for the patent on the microprocessor. Gary Boone was awarded U.S. Patent 3,757,306 for the single-chip microprocessor architecture on September 4, 1973. It may never be known which company actually had the first working microprocessor running on the lab bench. In both 1971 and 1976, Intel and TI entered into broad patent cross-licensing agreements, with Intel paying royalties to TI for the microprocessor patent. A nice history of these events is contained in court documentation from a legal dispute between Cyrix and Intel, with TI as intervenor and owner of the microprocessor patent.
A computer-on-a-chip is a variation of a microprocessor that combines the microprocessor core (CPU), some memory, and I/O (input/output) lines, all on one chip; it is also called a microcontroller. The computer-on-a-chip patent, called the “microcomputer patent” at the time, U.S. Patent 4,074,351, was awarded to Gary Boone and Michael J. Cochran of TI. Aside from this patent, the standard meaning of microcomputer is a computer using one or more microprocessors as its CPU(s), while the concept defined in the patent is perhaps more akin to a microcontroller.

Pico/General Instrument



In early 1971 Pico Electronics and General Instrument introduced their first collaboration in ICs, a complete single-chip calculator IC for the Monroe Royal Digital III calculator. This IC could also arguably lay claim to be one of the first microprocessors or microcontrollers, having ROM, RAM and a RISC instruction set on-chip. Pico was a spinout by five GI design engineers whose vision was to create single-chip calculator ICs. They had significant previous design experience on multiple calculator chipsets with both GI and Marconi-Elliott. Pico and GI went on to have significant success in the burgeoning handheld calculator market.

CADC



The design engineer, Ray Holt, a 1968 graduate of California Polytechnic University, Pomona, CA, began his computer design career with the F-14 CADC. The central air data computer was shrouded in secrecy for over 30 years from its creation in 1968: it was not publicly known until 1998, when, at the request of Mr. Ray Holt, the US Navy allowed the documents into the public domain. Since then several have debated whether this was, in fact, the first microprocessor. To date, no one has taken on the task of comparing this microprocessor with those that came later. The scientific papers and literature published around 1971 reveal that the MP944 digital processor used for the F-14 Tomcat aircraft of the US Navy qualifies as the first microprocessor. Although interesting, it was not a single-chip processor and was not general purpose; it was more like a set of parallel building blocks that could be used to make a special-purpose DSP. It indicates that today's industry theme of converging DSP and microcontroller architectures started in 1971. This convergence of DSP and microcontroller architectures is known as a digital signal controller.
In 1968, Garrett AiResearch, with designers Ray Holt and Steve Geller, was invited to produce a digital computer to compete with electromechanical systems then under development for the main flight control computer in the US Navy's new F-14 Tomcat fighter. The design was complete by 1970 and used a MOS-based chipset as the core CPU. The design was significantly (approximately 20 times) smaller and much more reliable than the mechanical systems it competed against, and was used in all of the early Tomcat models. This system contained a "20-bit, pipelined, parallel multi-microprocessor". However, the system was considered so advanced that the Navy refused to allow publication of the design until 1997. For this reason the CADC, and the MP944 chipset it used, are fairly unknown even today.


Intel 8008




The Intel 4004 was followed in 1972 by the Intel 8008, the world’s first 8-bit microprocessor. According to A History of Modern Computing (MIT Press), pp. 220–21, Intel entered into a contract with Computer Terminals Corporation, later called Datapoint, of San Antonio, TX, for a chip for a terminal they were designing. Datapoint later decided not to use the chip, and Intel marketed it as the 8008 in April 1972. It was the basis for the famous “Mark-8” computer kit advertised in the magazine Radio-Electronics in 1974.
The 8008 was the precursor to the very successful Intel 8080 (1974), Zilog Z80 (1976), and derivative Intel 8-bit processors. The competing Motorola 6800 was released August 1974 and the similar MOS Technology 6502 in 1975 (designed largely by the same people). The 6502 rivalled the Z80 in popularity during the 1980s.
A low overall cost, small packaging, simple computer bus requirements, and sometimes circuitry otherwise provided by external hardware (the Z80 had a built in memory refresh) allowed the home computer “revolution” to accelerate sharply in the early 1980s, eventually delivering such inexpensive machines as the Sinclair ZX-81, which sold for US$99.
The Western Design Center, Inc. (WDC) introduced the CMOS 65C02 in 1982 and licensed the design to several firms. It was used as the CPU in the Apple IIe and IIc personal computers as well as in medical implantable-grade pacemakers and defibrillators, automotive, industrial and consumer devices. WDC pioneered the licensing of microprocessor designs, later followed by ARM and other microprocessor Intellectual Property (IP) providers in the 1990s.
Motorola introduced the MC6809 in 1978, an ambitious and well thought-out 8-bit design, source compatible with the 6800 and implemented using purely hard-wired logic. (Subsequent 16-bit microprocessors typically used microcode to some extent, as design requirements were getting too complex for purely hard-wired logic.)
Another early 8-bit microprocessor was the Signetics 2650, which enjoyed a brief surge of interest due to its innovative and powerful instruction set architecture.
A seminal microprocessor in the world of spaceflight was RCA's 1802 (aka CDP1802 or RCA COSMAC, introduced in 1976), which was used in NASA's Voyager and Viking space probes of the 1970s, and on board the Galileo probe to Jupiter (launched 1989, arrived 1995). The RCA COSMAC was the first to implement CMOS technology. The CDP1802 was used because it could be run at very low power, and because its production process (silicon on sapphire) ensured much better protection against cosmic radiation and electrostatic discharges than that of any other processor of the era. Thus, the 1802 is said to be the first radiation-hardened microprocessor.
The RCA 1802 had what is called a static design, meaning that the clock frequency could be made arbitrarily low, even to 0 Hz, a total stop condition. This let the Voyager/Viking/Galileo spacecraft use minimum electric power for long uneventful stretches of a voyage. Timers or sensors would awaken the processor in time for important tasks, such as navigation updates, attitude control, data acquisition, and radio communication.

Intersil 6100



The Intersil 6100 family consisted of a 12-bit microprocessor (the 6100) and a range of peripheral support and memory ICs. The microprocessor recognized the DEC PDP-8 minicomputer instruction set; as such it was sometimes referred to as the CMOS-PDP8. Since it was also produced by Harris Corporation, it was also known as the Harris HM-6100. By virtue of its CMOS technology and associated benefits, the 6100 was being incorporated into some military designs until the early 1980s.

IMP-16



The first multi-chip 16-bit microprocessor was the National Semiconductor IMP-16, introduced in early 1973. An 8-bit version of the chipset was introduced in 1974 as the IMP-8. During the same year, National introduced the first 16-bit single-chip microprocessor, the National Semiconductor PACE, which was later followed by an NMOS version, the INS8900.
Other early multi-chip 16-bit microprocessors include one used by Digital Equipment Corporation (DEC) in the LSI-11 OEM board set and the packaged PDP 11/03 minicomputer, and the Fairchild Semiconductor Micro Flame 9440, both of which were introduced in the 1975 to 1976 timeframe.
The first single-chip 16-bit microprocessor was TI’s TMS 9900, which was also compatible with their TI-990 line of minicomputers. The 9900 was used in the TI 990/4 minicomputer, the TI-99/4A home computer, and the TM990 line of OEM microcomputer boards. The chip was packaged in a large ceramic 64-pin DIP package, while most 8-bit microprocessors such as the Intel 8080 used the more common, smaller, and less expensive plastic 40-pin DIP. A follow-on chip, the TMS 9980, was designed to compete with the Intel 8080, had the full TI 990 16-bit instruction set, used a plastic 40-pin package, moved data 8 bits at a time, but could only address 16 KB. A third chip, the TMS 9995, was a new design. The family later expanded to include the 99105 and 99110.
The Western Design Center, Inc. (WDC) introduced the CMOS 65816 16-bit upgrade of the WDC CMOS 65C02 in 1984. The 65816 16-bit microprocessor was the core of the Apple IIgs and later the Super Nintendo Entertainment System, making it one of the most popular 16-bit designs of all time.
Intel followed a different path, having no minicomputers to emulate, and instead “upsized” their 8080 design into the 16-bit Intel 8086, the first member of the x86 family, which powers most modern PC type computers. Intel introduced the 8086 as a cost effective way of porting software from the 8080 lines, and succeeded in winning much business on that premise. The 8088, a version of the 8086 that used an external 8-bit data bus, was the microprocessor in the first IBM PC, the model 5150. Following up their 8086 and 8088, Intel released the 80186, 80286 and, in 1985, the 32-bit 80386, cementing their PC market dominance with the processor family’s backwards compatibility.
The integrated microprocessor memory management unit (MMU) was developed by Childs et al. of Intel, and awarded U.S. patent number 4,442,484.

MC68000



The most significant of the 32-bit designs is the MC68000, introduced in 1979. The 68K, as it was widely known, had 32-bit registers but used 16-bit internal data paths and a 16-bit external data bus to reduce pin count, and supported only 24-bit addresses. Motorola generally described it as a 16-bit processor, though it clearly has a 32-bit architecture. The combination of high performance, large (16 megabytes, or 2^24 bytes) memory space and fairly low cost made it the most popular CPU design of its class. The Apple Lisa and Macintosh designs made use of the 68000, as did a host of other designs in the mid-1980s, including the Atari ST and Commodore Amiga.
The world’s first single-chip fully-32-bit microprocessor, with 32-bit data paths, 32-bit buses, and 32-bit addresses, was the AT&T Bell Labs BELLMAC-32A, with first samples in 1980 and general production in 1982. After the divestiture of AT&T in 1984, it was renamed the WE 32000 (WE for Western Electric), and had two follow-on generations, the WE 32100 and WE 32200. These microprocessors were used in the AT&T 3B5 and 3B15 minicomputers; in the 3B2, the world’s first desktop super microcomputer; in the “Companion”, the world’s first 32-bit laptop computer; and in “Alexander”, the world’s first book-sized super microcomputer, featuring ROM-pack memory cartridges similar to today’s gaming consoles. All these systems ran the UNIX System V operating system.
Intel’s first 32-bit microprocessor was the iAPX 432, which was introduced in 1981 but was not a commercial success. It had an advanced capability-based object-oriented architecture, but poor performance compared to contemporary architectures such as Intel’s own 80286 (introduced 1982), which was almost four times as fast on typical benchmark tests. However, the result for the iAPX432 was partly due to a rushed and therefore suboptimal Ada compiler.
The ARM first appeared in 1985. This is a RISC processor design, which has since come to dominate the 32-bit embedded systems processor space due in large part to its power efficiency, its licensing model, and its wide selection of system development tools. Semiconductor manufacturers generally license cores such as the ARM11 and integrate them into their own system on a chip products; only a few such vendors are licensed to modify the ARM cores. Most cell phones include an ARM processor, as do a wide variety of other products. There are microcontroller-oriented ARM cores without virtual memory support, as well as SMP applications processors with virtual memory.
Motorola’s success with the 68000 led to the MC68010, which added virtual memory support. The MC68020, introduced in 1985, added full 32-bit data and address buses. The 68020 became hugely popular in the Unix super microcomputer market, and many small companies (e.g., Altos, Charles River Data Systems) produced desktop-size systems. The MC68030 was introduced next, improving upon the previous design by integrating the MMU into the chip. The continued success led to the MC68040, which included an FPU for better math performance. A 68050 failed to achieve its performance goals and was not released, and the follow-up MC68060 was released into a market saturated by much faster RISC designs. The 68K family faded from the desktop in the early 1990s.
Other large companies designed the 68020 and its follow-ons into embedded equipment. At one point, there were more 68020s in embedded equipment than there were Intel Pentiums in PCs. The ColdFire processor cores are derivatives of the venerable 68020.
During this time (early to mid-1980s), National Semiconductor introduced a very similar 16-bit pin out, 32-bit internal microprocessor called the NS 16032 (later renamed 32016), the full 32-bit version named the NS 32032, and a line of 32-bit industrial OEM microcomputers. By the mid-1980s, Sequent introduced the first symmetric multiprocessor (SMP) server-class computer using the NS 32032. This was one of the design’s few wins, and it disappeared in the late 1980s.
The MIPS R2000 (1984) and R3000 (1989) were highly successful 32-bit RISC microprocessors. They were used in high-end workstations and servers by SGI, among others.
Other designs included the interesting Zilog Z8000, which arrived too late to market to stand a chance and disappeared quickly.
In the late 1980s, “microprocessor wars” started killing off some of the microprocessors. Apparently, with only one major design win, Sequent, the NS 32032 just faded out of existence, and Sequent switched to Intel microprocessors.
From 1985 to 2003, the 32-bit x86 architectures became increasingly dominant in desktop, laptop, and server markets and these microprocessors became faster and more capable. Intel had licensed early versions of the architecture to other companies, but declined to license the Pentium, so AMD and Cyrix built later versions of the architecture based on their own designs. During this span, these processors increased in complexity (transistor count) and capability (instructions/second) by at least three orders of magnitude. Intel’s Pentium line is probably the most famous and recognizable 32-bit processor model, at least with the public at large.

AMD64


While 64-bit microprocessor designs have been in use in several markets since the early 1990s, the early 2000s saw the introduction of 64-bit microprocessors targeted at the PC market.
With AMD’s introduction of a 64-bit architecture backwards-compatible with x86, x86-64 (now called AMD64), in September 2003, followed by Intel’s near fully compatible 64-bit extensions (first called IA-32e or EM64T, later renamed Intel 64), the 64-bit desktop era began. Both versions can run 32-bit legacy applications without any performance penalty as well as new 64-bit software. With operating systems Windows XP x64, Windows Vista x64, Linux, BSD and Mac OS X that run 64-bit native, the software is also geared to fully utilize the capabilities of such processors. The move to 64 bits is more than just an increase in register size from the IA-32 as it also doubles the number of general-purpose registers.
The move to 64 bits by PowerPC processors had been intended since the processors’ design in the early 1990s and was not a major cause of incompatibility. Existing integer registers are extended, as are all related data pathways, but, as was the case with IA-32, both floating-point and vector units had been operating at or above 64 bits for several years. Unlike what happened when IA-32 was extended to x86-64, no new general-purpose registers were added in 64-bit PowerPC, so any performance gained when using the 64-bit mode for applications making no use of the larger address space is minimal.

Multicore designs



A different approach to improving a computer’s performance is to add extra processors, as in symmetric multiprocessing designs, which have been popular in servers and workstations since the early 1990s. Keeping up with Moore’s Law is becoming increasingly challenging as chip-making technologies approach the physical limits of the technology.
In response, the microprocessor manufacturers look for other ways to improve performance, in order to hold on to the momentum of constant upgrades in the market.
A multi-core processor is simply a single chip containing more than one microprocessor core, effectively multiplying the potential performance by the number of cores (as long as the operating system and software are designed to take advantage of more than one processor). Some components, such as the bus interface and second-level cache, may be shared between cores. Because the cores are physically very close, they interface at much faster clock rates than discrete multiprocessor systems, improving overall system performance.
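
A minimal sketch in C (using POSIX threads) of what "designed to take advantage of more than one processor" means in practice: the work is split into independent chunks that the operating system can schedule on different cores. The array size and thread count are arbitrary.

#include <pthread.h>
#include <stdio.h>

#define N 1000000
#define THREADS 4

static int data[N];
static long long partial[THREADS];

static void *sum_chunk(void *arg) {
    long t = (long)arg;                        /* which chunk is ours */
    long long s = 0;
    for (long i = t * (N / THREADS); i < (t + 1) * (N / THREADS); i++)
        s += data[i];
    partial[t] = s;                            /* no sharing, no locking needed */
    return NULL;
}

int main(void) {
    for (long i = 0; i < N; i++) data[i] = 1;

    pthread_t tid[THREADS];
    for (long t = 0; t < THREADS; t++)         /* ideally one worker per core */
        pthread_create(&tid[t], NULL, sum_chunk, (void *)t);

    long long total = 0;
    for (long t = 0; t < THREADS; t++) {
        pthread_join(tid[t], NULL);
        total += partial[t];                   /* combine the partial results */
    }
    printf("%lld\n", total);                   /* prints 1000000 */
    return 0;
}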
In 2005, the first personal computer dual-core processors were announced, and as of 2009 dual-core and quad-core processors are widely used in servers, workstations and PCs, while six- and eight-core processors will be available for high-end applications in both the home and professional environments.
Sun Microsystems has released the Niagara and Niagara 2 chips, both of which feature an eight-core design. The Niagara 2 supports more threads and operates at 1.6 GHz.
High-end Intel Xeon processors on the LGA771 socket are DP (dual processor) capable, as is the Intel Core 2 Extreme QX9775, also used in the Mac Pro by Apple and the Intel Skulltrail motherboard. With the transition to the LGA1366 socket and the Intel i7 chip, quad core is now considered mainstream, and the upcoming i9 chip will introduce six-core and possibly dual-die hex-core (12-core) processors.

RISC



In the mid-1980s to early-1990s, a crop of new high-performance Reduced Instruction Set Computer (RISC) microprocessors appeared influenced by discrete RISC-like CPU designs such as the IBM 801 and others. RISC microprocessors were initially used in special-purpose machines and Unix workstations, but then gained wide acceptance in other roles.
In 1986, HP released its first system with a PA-RISC CPU. The first commercial RISC microprocessor design was released either by MIPS Computer Systems, the 32-bit R2000 (the R1000 was not released), or by Acorn Computers, the 32-bit ARM2 in 1987. The R3000 made the design truly practical, and the R4000 introduced the world’s first commercially available 64-bit RISC microprocessor. Competing projects would result in the IBM POWER and Sun SPARC architectures. Soon every major vendor was releasing a RISC design, including the AT&T CRISP, AMD 29000, Intel i860 and Intel i960, Motorola 88000 and DEC Alpha.
As of 2007, two 64-bit RISC architectures are still produced in volume for non-embedded applications: SPARC and Power ISA.

GPU



Though the term “microprocessor” has traditionally referred to a single- or multi-chip CPU or system-on-a-chip (SoC), several types of specialized processing devices have followed from the technology. The most common examples are microcontrollers, digital signal processors (DSP) and graphics processing units (GPU). Many examples of these are either not programmable, or have limited programming facilities. For example, in general GPUs through the 1990s were mostly non-programmable and have only recently gained limited facilities like programmable vertex shaders. There is no universal consensus on what defines a “microprocessor”, but it is usually safe to assume that the term refers to a general-purpose CPU of some sort and not a special-purpose processor unless specifically noted.

Central Processing Unit (CPU)
The central processing unit (CPU) is the computing part of the computer that interprets and executes program instructions. It is also known as the processor. In a microcomputer, the CPU is contained on a single microprocessor chip within the system unit. The CPU has two parts: the control unit and the arithmetic-logic unit.
Control Unit is the circuitry that locates, retrieves, interprets, and executes each instruction in the CPU. The control unit directs electronic signals between primary storage and the ALU, and between the CPU and input/output devices.
The arithmetic-logic unit (ALU) is a high-speed circuit in the CPU. The ALU performs arithmetic (math) operations, logic (comparison) operations and related operations. The ALU retrieves alphanumeric data from memory, does the actual calculating and comparing, and sends the results of the operation back to memory.
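
A toy sketch in C of how the control unit and ALU cooperate. The four-instruction machine here is entirely made up; it exists only to show the fetch-decode-execute cycle that the control unit drives and the arithmetic step that the ALU performs.

#include <stdint.h>
#include <stdio.h>

enum { OP_LOAD, OP_ADD, OP_PRINT, OP_HALT };   /* invented instruction set */

int main(void) {
    /* opcode/operand pairs; this tiny program computes 2 + 3 */
    uint8_t program[] = { OP_LOAD, 2, OP_ADD, 3, OP_PRINT, 0, OP_HALT, 0 };
    int pc = 0;                                /* program counter */
    int acc = 0;                               /* accumulator register */

    for (;;) {
        uint8_t op  = program[pc++];           /* fetch */
        uint8_t arg = program[pc++];
        switch (op) {                          /* decode, then execute */
        case OP_LOAD:  acc = arg;           break;
        case OP_ADD:   acc += arg;          break;   /* the ALU's job */
        case OP_PRINT: printf("%d\n", acc); break;   /* prints 5 */
        case OP_HALT:  return 0;
        }
    }
}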

MoTHeRBoarD



• MOTHERBOARD

The motherboard, also referred to as the system board or main board, is the primary circuit board within a personal computer. Many other components connect directly or indirectly to the motherboard. Motherboards usually contain one or more microprocessors, supporting circuitry (usually integrated circuits) that provides the interface between the CPU, memory and input/output peripheral circuits, main memory, and facilities for initial setup of the computer immediately after power-on. In many portable and embedded personal computers, the motherboard houses nearly all of the PC’s core components. Often a motherboard will also contain one or more buses and physical connectors for expansion cards.

VoicE-iNpUT DeviCeS

A voice input device is a device in which speech is used to input data or system commands directly into a system. Such equipment involves the use of speech recognition processes, and can replace or supplement other input devices.
Some voice input devices can recognize spoken words from a predefined vocabulary; some have to be trained for a particular speaker. When the operator utters a vocabulary item, the matching data input is displayed as characters on a screen and can then be verified by the operator. The speech recognition process depends on the comparison of each utterance with words appearing in a stored vocabulary table. The table is created or modified by using the voice input equipment together with a keyboard. A data item or system command is typed and the related spoken word is uttered, several times. The spoken word is then analyzed and converted into a particular bit pattern that is stored in the vocabulary table.
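
The comparison step can be sketched in C. The 32-bit "utterance patterns", the vocabulary words and the acceptance threshold are all invented; a real recognizer works with far richer acoustic features. The point is the table lookup: the closest stored pattern wins, and poor matches are rejected.

#include <stdint.h>
#include <stdio.h>

struct vocab_entry { const char *word; uint32_t pattern; };

static struct vocab_entry vocab[] = {          /* hypothetical trained table */
    { "START", 0x0F0F0F0F },
    { "STOP",  0xF0F0F0F0 },
    { "SAVE",  0x00FF00FF },
};

static int bit_distance(uint32_t a, uint32_t b) {
    int count = 0;
    for (uint32_t diff = a ^ b; diff; diff >>= 1)
        count += diff & 1;                     /* count differing bits */
    return count;
}

const char *recognize(uint32_t utterance) {
    int best = 0, best_dist = 33;              /* worse than any real match */
    for (int i = 0; i < 3; i++) {
        int d = bit_distance(utterance, vocab[i].pattern);
        if (d < best_dist) { best_dist = d; best = i; }
    }
    return best_dist <= 8 ? vocab[best].word : NULL;   /* reject weak matches */
}

int main(void) {
    const char *w = recognize(0x0F0F0F0B);     /* close to "START" */
    printf("%s\n", w ? w : "not recognized");
    return 0;
}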

PLoTTer



A plotter is a computer printing device for printing vector graphics. In the past, plotters were widely used in applications such as computer-aided design, though they have generally been replaced with wide-format conventional printers, and it is now commonplace to refer to such wide-format printers as "plotters," even though they technically aren't.
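
In miniature, "vector graphics" means the plotter is driven by pen movements rather than rows of dots. The C sketch below emits HP-GL-style commands (PU = pen up, PD = pen down, PA = plot absolute); the coordinates are arbitrary, and a real plotter would receive such commands over its interface rather than on standard output.

#include <stdio.h>

void draw_rectangle(int x, int y, int w, int h) {
    printf("PU;PA%d,%d;\n", x, y);             /* travel with the pen raised */
    printf("PD;PA%d,%d;PA%d,%d;PA%d,%d;PA%d,%d;\n",
           x + w, y, x + w, y + h, x, y + h, x, y);   /* trace the outline */
    printf("PU;\n");                           /* lift the pen again */
}

int main(void) {
    printf("IN;SP1;\n");                       /* initialize, select pen 1 */
    draw_rectangle(100, 100, 400, 250);
    return 0;
}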

Vinyl Sign Cutter

A vinyl sign cutter (sometimes known as a cutting plotter) is used by professional poster and billboard sign-making businesses to produce weather-resistant signs, posters, and billboards using self-coloured adhesive-backed vinyl film that has a removable paper backing material. The vinyl can also be applied to car bodies and windows for large, bright company advertising and to sailboat transoms. A similar process is used to cut tinted vinyl for automotive windows.

Static Cutting Table

A sign cutter typically functions like a traditional roll-fed or sheet-fed plotter, in that the media to be cut is kept rigid by a backing sheet as pieces of vinyl are cut out. As the letters are cut, the backing keeps the material properly aligned in the moving rollers. This does not work when cutting a non-rigid material with no backing, such as fabric textiles or leather. Cutting a hole or slit will cause the unsupported material to droop and fall out of alignment.
The static cutting table uses a large flat vacuum cutting table instead of a roll feed. The surface of the table has a series of small pinholes drilled in it. Material is placed on the table, and a sheet of plastic overlaid onto the fabric. A vacuum pump is turned on, and air pressure pushes down on the plastic cover sheet to hold the fabric in place. The table then operates like a normal vector plotter, using various cutting tools to cut holes or slits into the fabric. The plastic overlay is also cut, which leads to a slight loss of vacuum, but this loss is usually not significant.
This configuration allows static cutting tables to cut flexible and non-rigid materials that are difficult or impossible to cut with roll-fed plotters. Static cutters are also capable of cutting much thicker and heavier materials than a typical roll-fed or sheet-fed plotter.

• DRUM PLOTTER

A drum plotter is a type of pen plotter that wraps the paper around a drum with a pin-feed attachment. The drum turns to produce one direction of the plot, and the pens move to provide the other. The plotter was the first output device to print graphics and large engineering drawings. Using different coloured pens, it could draw in colour long before colour inkjet printers became viable. Contrast with flatbed plotter.
A drum plotter is also known as a roller plotter. It consists of a drum or roller on which paper is placed, and the drum rotates back and forth to produce the graph on the paper. It also has a mechanical device known as a robotic drawing arm that holds a set of coloured ink pens or pencils. The robotic drawing arm moves side to side as the paper is rolled back and forth through the roller. In this way, a perfect graph or map is created on the paper. This work is done under the control of a computer. Drum plotters are used to produce continuous output, such as plotting earthquake activity.



• FLATBED PLOTTER

A flatbed plotter is a graphics plotter that contains a flat surface that the paper is placed on. The size of this surface (bed) determines the maximum size of the drawing. Contrast with drum plotter.
A flatbed plotter is also known as a table plotter. It plots on paper that is spread and fixed over a rectangular flatbed table. The flatbed plotter uses two robotic drawing arms, each of which holds a set of coloured ink pens or pencils. The drawing arms move over the stationary paper and draw the graph on the paper. Typically, the plot size is equal to the area of the bed; the plot size may be 20 by 50 feet. It is used in the design of cars, ships, aircraft, buildings, highways, etc. The flatbed plotter is very slow in drawing or printing graphs; a large and complicated drawing can take several hours to print. The main reason for the slow printing is the movement of its mechanical devices.
Today, mechanical plotters have been replaced by thermal, electrostatic and inkjet plotters. These systems are faster and cheaper, and they also produce large-size drawings.

PriNTeR

A printer is an output device that produces a hard copy of data. The resolution of printer output is expressed in DPI (dots per inch). Printers can be classified into different types in several ways.

Serial Printers
A serial printer, also called a character printer, prints a single character at a time. Serial printers are usually inexpensive and slow.
Impact Printers
In an impact printer, a hammer strikes a ribbon, the paper or the print head; dot matrix printers and daisy wheel printers are examples. These printers are noisy.
Nonimpact Printers
A nonimpact printer has no hammer and does not strike the paper; inkjet and laser printers are examples. Another classification can be made by the way printers form characters.
Bit-Mapped Printers
Images are formed from groups of dots and can be placed anywhere on the page. These printers have many printing options and good printing quality. They use PostScript as a standard page description language.
Character-Based Printers
Character-based printers print characters in the lines and columns of a page. These printers use a predefined set of characters and are restricted in where characters can be positioned. Microcomputers use five kinds of printers: daisy wheel printers, chain printers, dot matrix printers, inkjet printers and laser printers.

DAISY WHEEL PRINTER

Daisy wheel printers use an impact printing technology invented in 1969 by David S. Lee at Diablo Data Systems. It uses interchangeable pre-formed type elements, each with typically 96 glyphs, to generate high-quality output comparable to premium typewriters such as the IBM "Golfball" Selectric, but three times faster. Daisy-wheel printing was used in electronic typewriters, word processors and computer systems from 1972.
The heart of the system is an interchangeable metal or plastic "daisy wheel" holding an entire character set as raised characters moulded on each "petal". In use a servo motor rotates the daisy wheel to position the required character between the hammer and the ribbon. The solenoid-operated hammer then fires, driving the character type on to the ribbon and paper to print the character on the paper. The daisy wheel and hammer are mounted on a sliding carriage similar to that used by dot matrix printers.
Different typefaces and sizes can be used by replacing the daisy wheel. It is possible to use multiple fonts within a document: font changing is facilitated by printer driver software, which can position the carriage to the centre of the platen and prompt the user to change the wheel before continuing printing. However, printing a document with frequent font changes, and thus frequent wheel changes, was still an arduous task.
Many daisy wheel machines offer a bold type facility, accomplished by double- or triple-striking the specified character(s); servo-based printers advance the carriage fractionally for a wider (and therefore blacker) character, while cheaper machines perform a carriage return without a line feed to return to the beginning of the line, space through all non-bold text, and restrike each bolded character. The inherent imprecision in attempting to restrike on exactly the same spot after a carriage return provides the same effect as the more expensive servo-based printers, with the unique side effect that as the printer ages and wears, bold text becomes bolder.



CHAIN PRINTER

Chain printers (also known as train printers) placed the type on moving bars (a horizontally moving chain). As with the drum printer, as the correct character passed by each column, a hammer was fired from behind the paper. Compared to drum printers, chain printers had the advantage that the type chain could usually be changed by the operator. By selecting chains that had a smaller character set (for example, just numbers and a few punctuation marks), the printer could print much faster than if the chain contained the entire upper- and lower-case alphabet, numbers, and all special symbols. This was because, with many more instances of the numbers appearing in the chain, the time spent waiting for the correct character to "pass by" was greatly reduced. Common letters and symbols would appear more often on the chain, according to the frequency analysis of the likely input. It was also possible to play primitive tunes on these printers by timing nonsense printout to the sequence on the chain, making the printer a rather primitive piano. IBM was probably the best-known chain printer manufacturer, and the IBM 1403 is probably the most famous example of a chain printer.



DOT-MATRIX PRINTER

A dot matrix printer or impact matrix printer is a type of computer printer with a print head that runs back and forth, or in an up and down motion, on the page and prints by impact, striking an ink-soaked cloth ribbon against the paper, much like the print mechanism on a typewriter. However, unlike a typewriter or daisy wheel printer, letters are drawn out of a dot matrix, and thus, varied fonts and arbitrary graphics can be produced. Because the printing involves mechanical pressure, these printers can create carbon copies and carbonless copies.
Each dot is produced by a tiny metal rod, also called a "wire" or "pin", which is driven forward by the power of a tiny electromagnet or solenoid, either directly or through small levers (pawls). Facing the ribbon and the paper is a small guide plate (often made of an artificial jewel such as sapphire or ruby) pierced with holes to serve as guides for the pins. The moving portion of the printer is called the print head; when the printer is run as a generic text device, it generally prints one line of text at a time. Most dot matrix printers have a single vertical line of dot-making equipment on their print heads; others have a few interleaved rows in order to improve dot density.
These machines can be highly durable. When they do wear out, it is generally due to ink invading the guide plate of the print head, causing grit to adhere to it; this grit slowly causes the channels in the guide plate to wear from circles into ovals or slots, providing less and less accurate guidance to the printing wires. Eventually, even with tungsten blocks and titanium pawls, the printing becomes too unclear to read.
Although nearly all inkjet, thermal, and laser printers print closely-spaced dots rather than continuous lines or characters, it is not customary to call them dot matrix printers.
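As a toy illustration of how a letter is "drawn out of a dot matrix", the Python sketch below renders a hand-made 5x7 bitmap (one byte per column, least significant bit as the top row, a common wire ordering) by "firing" a pin wherever a bit is set. The glyph data is an approximation, not a real printer font.

    GLYPH_A = [0x7E, 0x09, 0x09, 0x09, 0x7E]  # columns of a rough 5x7 'A'

    def print_glyph(columns, rows=7):
        for row in range(rows):
            print("".join("#" if col & (1 << row) else " " for col in columns))

    print_glyph(GLYPH_A)

Each "#" stands for one pin impact; a real print head produces a whole column of dots at each step of the carriage.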



INK-JET PRINTER

The basic problem with inkjet inks is the conflicting requirement for a colouring agent that will stay on the surface and a carrier fluid that disperses rapidly.
Desktop inkjet printers, as used in offices or at home, tend to use aqueous inks based on a mixture of water, glycol and dyes or pigments. These inks are inexpensive to manufacture, but are difficult to control on the surface of media, often requiring specially coated media. HP inks contain sulfonated polyazo black dye (commonly used for dyeing leather), nitrates and other compounds. Aqueous inks are mainly used in printers with thermal inkjet heads, as these heads require water in order to perform.
While aqueous inks often provide the broadest colour gamut and most vivid colour, most are not waterproof without specialized coating or lamination after printing. Dye-based inks, while usually the least expensive, are subject to rapid fading when exposed to light. Pigment-based aqueous inks are typically more costly but provide much better long-term durability and ultraviolet resistance. Inks marketed as "Archival Quality" are usually pigment-based.



LASER PRINTER

A laser printer is a common type of computer printer that rapidly produces high quality text and graphics on plain paper. As with digital photocopiers and multifunction printers (MFPs), laser printers employ a xerographic printing process but differ from analog photocopiers in that the image is produced by the direct scanning of a laser beam across the printer's photoreceptor.

CoMPuTeR DisPLaY

An output device is any piece of computer hardware equipment used to communicate the results of data processing carried out by an information processing system (such as a computer) to the outside world.
In computing, input/output, or I/O, refers to the communication between an information processing system (such as a computer), and the outside world. Inputs are the signals or data sent to the system, and outputs are the signals or data sent by the system to the outside.



COMPUTER DISPLAYS
A computer display is also called a display screen or video display terminal (VDT). A monitor is a screen used to display the output of a computer. Images are represented on the monitor by individual dots called pixels. A pixel is the smallest unit on the screen that can be turned on and off or made into different shades. The density of the dots determines the clarity of the images and the resolution.
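The relationship between dot density and clarity can be put in rough numbers (the screen sizes below are illustrative): pixels per inch (PPI) follow from the resolution and the diagonal size.

    import math

    def ppi(width_px, height_px, diagonal_in):
        """Pixels per inch along the diagonal."""
        return math.hypot(width_px, height_px) / diagonal_in

    print(round(ppi(1024, 768, 17.0), 1))   # ~75 PPI on a 17-inch screen
    print(round(ppi(1920, 1080, 17.0), 1))  # ~130 PPI: same size, sharper image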


INTERLACED AND NON-INTERLACED
An interlaced technique refreshes the lines of the screen by exposing all odd-numbered lines first and all even-numbered lines next. A non-interlaced (progressive) technique, developed later, refreshes all the lines on the screen from top to bottom. The non-interlaced method gives a more stable video display than the interlaced method. There are two main forms of display: the cathode-ray tube and the flat panel.
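The two scan orders are easy to see side by side. This minimal Python sketch, for a tiny eight-line frame, lists the order in which lines are refreshed under each method.

    LINES = 8  # scan lines numbered 1..8

    interlaced = list(range(1, LINES + 1, 2)) + list(range(2, LINES + 1, 2))
    progressive = list(range(1, LINES + 1))

    print("interlaced: ", interlaced)    # [1, 3, 5, 7, 2, 4, 6, 8]
    print("progressive:", progressive)   # [1, 2, 3, 4, 5, 6, 7, 8]

Interlacing halves the bandwidth needed per pass, but the alternation between half-frames is what makes an interlaced picture appear less stable.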



Cathode Ray Tubes (CRT)
Cathode rays are so named because they are emitted by the negative electrode, or cathode, in a tube. To release electrons into the tube, they first must be detached from the atoms of the cathode. In early vacuum tubes (Crookes tubes) this was done solely by the high electrical potential between the anode and the cathode. In modern tubes this is assisted by making the cathode a thin wire filament and passing an electric current through it. The current heats the filament red hot. The increased random heat motion of the filament atoms knocks electrons out of the atoms at the surface of the filament, into the evacuated space of the tube. This process is called thermionic emission.
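Thermionic emission can be put in approximate numbers with the Richardson-Dushman law, J = A * T^2 * exp(-W / (k*T)). The Python sketch below uses the theoretical Richardson constant and an approximate work function for tungsten; the filament temperatures are illustrative, not measured CRT values.

    import math

    A = 1.2e6         # theoretical Richardson constant, A/(m^2 K^2)
    K_B = 8.617e-5    # Boltzmann constant, eV/K
    W_TUNGSTEN = 4.5  # approximate work function of tungsten, eV

    def emission_current_density(T):
        """Emitted current density in A/m^2 at filament temperature T (K)."""
        return A * T**2 * math.exp(-W_TUNGSTEN / (K_B * T))

    for T in (2000, 2500, 3000):
        print("%d K -> %.3g A/m^2" % (T, emission_current_density(T)))

The steep exponential dependence on temperature is why the filament must be run red hot before useful numbers of electrons escape.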



• Monochrome monitor
A monochrome monitor is a type of computer display which was very common in the early days of computing, from the 1960s through the 1980s, before colour monitors became popular. They are still used today in some computerized cash register systems, amongst other select applications.
Unlike colour monitors, which display text and graphics in multiple colours through the use of alternating-intensity red, green, and blue phosphors, monochrome monitors have only one colour of phosphor (mono = one, chrome = colour). All text and graphics are displayed in that colour. Some monitors have the ability to vary the brightness of individual pixels, thereby creating the illusion of depth and colour, exactly like a black-and-white television.
Monochrome monitors are available in three colours: if the P1 phosphor is used, the screen is green monochrome. If the P3 phosphor is used, the screen is amber monochrome. If the P4 phosphor is used, the screen is white monochrome (known as "page white"); this is the same phosphor as used in early television sets. An amber screen was claimed to give improved ergonomics, specifically by reducing eye strain; this claim appears to have little scientific basis.[1]
Monochrome monitors, pixel-for-pixel, produce sharper text and images than colour CRT monitors. This is because on a monochrome monitor each pixel is made up of one phosphor dot, located in the dead centre of the pixel, whereas on a colour monitor each pixel is made up of three phosphor dots (one red, one blue, one green), none of which is in the centre of the pixel. Monochrome monitors were used in almost all dumb terminals and are still widely used in text-based applications such as computerized cash registers and point-of-sale systems because of their superior sharpness and enhanced readability. Monochrome monitors are particularly susceptible to screen burn (hence the advent, and the name, of the screen saver), because the phosphors used are very high-intensity. Another effect of the high-intensity phosphors is "ghosting", wherein a dim afterglow of the screen's contents is briefly visible after the screen has been blanked. This has a certain place in pop culture, as evidenced in movies such as The Matrix, amongst other places.



• Colour monitor

A colour monitor is a display peripheral that displays more than two colours. Colour monitors have been developed through several paths.




• FLAT PANEL DISPLAYS

Flat panel displays encompass a growing number of technologies enabling video displays that are lighter and much thinner than traditional television and video displays that use cathode ray tubes, and are usually less than 100 mm (4 inches) thick. They can be divided into two general categories: volatile and static. In many applications, specifically modern portable devices such as laptops, cellular phones, and digital cameras, whatever disadvantages exist are overcome by the portability requirements. Field emission displays (FEDs) combine the advantages of CRTs, namely their high contrast levels and very fast response times, with the packaging advantages of LCDs and other flat panel technologies. They also offer the possibility of requiring less power, about half that of an LCD system. To date, however, manufacturing problems have prevented any FED system from entering commercial production. FEDs are closely related to another developing display technology, the surface-conduction electron-emitter display, or SED.



• Electroluminescent (EL) Displays


Electroluminescence (EL) is an optical and electrical phenomenon in which a material emits light in response to an electric current passed through it, or to a strong electric field. Electroluminescent displays (ELDs) are a type of display created by sandwiching a layer of electroluminescent material such as GaAs between two layers of conductors. When current flows, the layer of material emits radiation in the form of visible light. EL works by passing an electric current through the material, exciting its atoms so that they emit photons; by varying the material being excited, the colour of the light emitted can be changed. The actual ELD is constructed using flat, opaque electrode strips running parallel to each other, covered by a layer of electroluminescent material, followed by another layer of electrodes running perpendicular to the bottom layer. This top layer must be transparent in order to let light escape. At each intersection the material lights up, creating a pixel.
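The row-and-column addressing described above amounts to a passive matrix: energise one bottom-layer strip and one top-layer strip, and only their intersection emits. A minimal Python sketch of the idea, with the grid size and interface invented for illustration:

    ROWS, COLS = 4, 6
    frame = [[0] * COLS for _ in range(ROWS)]

    def drive(row, col):
        """Energise one row electrode and one column electrode; the EL
        layer emits only where the two cross."""
        frame[row][col] = 1

    drive(1, 2)
    drive(3, 5)
    for line in frame:
        print("".join("#" if px else "." for px in line))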



• Gas plasma displays
A plasma display panel (PDP) is a type of flat panel display common to large TV displays (80 cm or larger). Many tiny cells between just two panels of glass hold a mixture of noble gases. The gas in the cells is electrically turned into a plasma, which emits ultraviolet light that in turn excites phosphors to emit visible light. Plasma displays should not be confused with LCDs, another lightweight flat-screen display that uses a different technology.