01 April 2010

Microprocessor

A microprocessor is a processor whose elements are miniaturized into one or a few integrated circuits contained in a single silicon microchip. To function as a processor, it requires a system clock, primary storage, and a power supply.
A microprocessor incorporates most or all of the functions of a computer’s central processing unit (CPU) on a single integrated circuit (IC, or microchip). The first microprocessors emerged in the early 1970s and were used for electronic calculators, performing binary-coded decimal (BCD) arithmetic on 4-bit words. Other embedded uses of 4-bit and 8-bit microprocessors, such as terminals, printers, and various kinds of automation, followed rather quickly. Affordable 8-bit microprocessors with 16-bit addressing also led to the first general-purpose microcomputers in the mid-1970s.
Computer processors were for a long period constructed out of small- and medium-scale ICs containing the equivalent of a few to a few hundred transistors. Integrating the whole CPU onto a single chip therefore greatly reduced the cost of processing capacity. From these humble beginnings, continued increases in microprocessor capacity have rendered other forms of computers almost completely obsolete (see history of computing hardware), with one or more microprocessors serving as the processing element in everything from the smallest embedded systems and handheld devices to the largest mainframes and supercomputers.
Since the early 1970s, the increase in capacity of microprocessors has generally followed Moore’s Law, which suggests that the complexity of an integrated circuit, with respect to minimum component cost, doubles every two years.
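As a back-of-the-envelope illustration of that doubling rule, the small C program below projects transistor counts starting from the roughly 2,300 transistors of the 1971 Intel 4004. This is a deliberate simplification for illustration; actual growth varied by product line.

    #include <stdio.h>

    /* Illustrative only: project transistor counts under an idealized
     * Moore's Law doubling every two years, starting from the roughly
     * 2,300 transistors of the Intel 4004 (1971). */
    int main(void) {
        long long transistors = 2300;      /* Intel 4004, 1971 */
        for (int year = 1971; year <= 1991; year += 2) {
            printf("%d: ~%lld transistors\n", year, transistors);
            transistors *= 2;              /* double every two years */
        }
        return 0;
    }

Twenty years of doubling every two years multiplies the count by about 1,000, which is in the right ballpark for early-1990s processors.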
In the late 1990s, heat generation (measured as thermal design power, or TDP), caused by switching losses, static current leakage, and other factors, emerged as a leading design constraint in the high-performance microprocessor segment.

Microprocessor Capacity
The capacity of a microprocessor chip is represented by its word size. The word size is the number of bits (e.g., 8, 16, or 32) that the CPU can process at a time. The more bits in a word, the more powerful and faster the processor. For example, a 16-bit-word computer can access 2 bytes (1 byte = 8 bits) at a time, while a 32-bit-word computer can access 4 bytes at a time. Other things being equal, the 32-bit computer is therefore faster than the 16-bit computer.
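To make the word-size arithmetic concrete, here is a minimal C sketch (illustrative only, not tied to any particular processor) showing that a 32-bit value occupies four bytes and that a 16-bit machine would need two accesses to move the same data:

    #include <stdio.h>
    #include <stdint.h>

    /* A "word" is the unit a CPU moves in one step. A 32-bit CPU can
     * fetch all four bytes of a 32-bit value at once, while a 16-bit
     * CPU needs two accesses for the same data. */
    int main(void) {
        uint16_t word16 = 0xBEEF;        /* 16-bit word: 2 bytes */
        uint32_t word32 = 0xDEADBEEF;    /* 32-bit word: 4 bytes */

        printf("16-bit word holds %zu bytes\n", sizeof word16);
        printf("32-bit word holds %zu bytes\n", sizeof word32);

        /* A 16-bit CPU would read word32 in two halves: */
        uint16_t low  = (uint16_t)(word32 & 0xFFFF);
        uint16_t high = (uint16_t)(word32 >> 16);
        printf("low half: 0x%04X, high half: 0x%04X\n", low, high);
        return 0;
    }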


Types of Microprocessors

Intel 4004



The Intel 4004 is generally considered the first microprocessor, and it cost in the thousands of dollars. The first known advertisement for the 4004 is dated November 1971; it appeared in Electronic News. The project that produced the 4004 originated in 1969, when Busicom, a Japanese calculator manufacturer, asked Intel to build a chipset for high-performance desktop calculators. Busicom’s original design called for a programmable chip set consisting of seven different chips, three of which were used to make a special-purpose CPU with its program stored in ROM and its data stored in shift-register read-write memory. Ted Hoff, the Intel engineer assigned to evaluate the project, believed the Busicom design could be simplified by using dynamic RAM storage for data, rather than shift-register memory, and a more traditional general-purpose CPU architecture. Hoff came up with a four-chip architectural proposal: a ROM chip for storing the programs, a dynamic RAM chip for storing data, a simple I/O device, and a 4-bit central processing unit (CPU), which he felt could be integrated into a single chip, although he was not a chip designer. This chip would later be called the 4004 microprocessor.
The architecture and specifications of the 4004 were the result of the interaction of Intel’s Hoff with Stanley Mazor, a software engineer reporting to Hoff, and with Busicom engineer Masatoshi Shima. In April 1970 Intel hired Federico Faggin to lead the design of the four-chip set. Faggin, who originally developed the silicon gate technology (SGT) in 1968 at Fairchild Semiconductor (and also designed the world’s first commercial integrated circuit using SGT, the Fairchild 3708), had the right background to lead the project, since it was the SGT that made it possible to design a CPU on a single chip with the proper speed, power dissipation, and cost. Faggin also developed the new methodology for random logic design, based on silicon gate, that made the 4004 possible. Production units of the 4004 were first delivered to Busicom in March 1971 and shipped to other customers in late 1971.
Although the Intel 4004 is considered the first microprocessor, early microprocessors were also embedded in industrial controllers, such as automated gas pumps, traffic controllers, and flow meters.

TMS 1000




The Smithsonian Institution says TI engineers Gary Boone and Michael Cochran succeeded in creating the first microcontroller (also called a microcomputer) in 1971. The result of their work was the TMS 1000, which went commercial in 1974.
TI developed the 4-bit TMS 1000 and stressed pre-programmed embedded applications, introducing a version called the TMS1802NC on September 17, 1971, which implemented a calculator on a chip. The Intel chip was the 4-bit 4004, released on November 15, 1971; Federico Faggin led its design in 1970–1971, and Ted Hoff led the architecture in 1969. The head of the MOS Department was Leslie L. Vadász.
TI filed for the patent on the microprocessor. Gary Boone was awarded U.S. Patent 3,757,306 for the single-chip microprocessor architecture on September 4, 1973. It may never be known which company actually had the first working microprocessor running on the lab bench. In both 1971 and 1976, Intel and TI entered into broad patent cross-licensing agreements, with Intel paying royalties to TI for the microprocessor patent. A nice history of these events is contained in court documentation from a legal dispute between Cyrix and Intel, with TI as intervenor and owner of the microprocessor patent.
A computer-on-a-chip is a variation of a microprocessor that combines the microprocessor core (CPU), some memory, and I/O (input/output) lines, all on one chip; it is also called a microcontroller. The computer-on-a-chip patent, called the “microcomputer patent” at the time, U.S. Patent 4,074,351, was awarded to Gary Boone and Michael J. Cochran of TI. Aside from this patent, the standard meaning of microcomputer is a computer using one or more microprocessors as its CPU(s), while the concept defined in the patent is perhaps more akin to a microcontroller.

Pico/General Instrument



In early 1971 Pico Electronics and General Instrument introduced their first collaboration in ICs, a complete single-chip calculator IC for the Monroe Royal Digital III calculator. This IC could also arguably lay claim to being one of the first microprocessors or microcontrollers, having ROM, RAM, and a RISC instruction set on-chip. Pico was a spinout by five GI design engineers whose vision was to create single-chip calculator ICs. They had significant previous design experience on multiple calculator chipsets with both GI and Marconi-Elliott. Pico and GI went on to have significant success in the burgeoning handheld calculator market.

CADC



Ray Holt, a 1968 graduate of California State Polytechnic University, Pomona, CA, began his computer design career with the F-14 CADC. The Central Air Data Computer was shrouded in secrecy for over 30 years from its creation in 1968: it was not publicly known until 1998, when, at the request of Ray Holt, the US Navy allowed the documents into the public domain. Since then several people have debated whether this was, in fact, the first microprocessor. To date, no one has taken on the task of comparing this microprocessor with those that came later. The scientific papers and literature published around 1971 reveal that the MP944 digital processor used in the F-14 Tomcat aircraft of the US Navy qualifies as the first microprocessor. Although interesting, it was not a single-chip processor and was not general purpose; it was more like a set of parallel building blocks that could be used to make a special-purpose DSP. It indicates that today’s industry theme of converging DSP and microcontroller architectures started in 1971. This convergence of DSP and microcontroller architectures is known as a digital signal controller.
In 1968, Garrett AiResearch, with designers Ray Holt and Steve Geller, was invited to produce a digital computer to compete with electromechanical systems then under development for the main flight control computer in the US Navy’s new F-14 Tomcat fighter. The design was complete by 1970 and used a MOS-based chipset as the core CPU. The design was significantly (approximately 20 times) smaller and much more reliable than the mechanical systems it competed against, and was used in all of the early Tomcat models. This system contained a “20-bit, pipelined, parallel multi-microprocessor”. However, the system was considered so advanced that the Navy refused to allow publication of the design until 1997. For this reason the CADC, and the MP944 chipset it used, are fairly unknown even today.


Intel 8008




The Intel 4004 was followed in 1972 by the Intel 8008, the world’s first 8-bit microprocessor. According to A History of Modern Computing (MIT Press), pp. 220–21, Intel entered into a contract with Computer Terminals Corporation, later called Datapoint, of San Antonio, TX, for a chip for a terminal they were designing. Datapoint later decided not to use the chip, and Intel marketed it as the 8008 in April 1972. It was the basis for the famous “Mark-8” computer kit advertised in the magazine Radio-Electronics in 1974.
The 8008 was the precursor to the very successful Intel 8080 (1974), Zilog Z80 (1976), and derivative Intel 8-bit processors. The competing Motorola 6800 was released August 1974 and the similar MOS Technology 6502 in 1975 (designed largely by the same people). The 6502 rivalled the Z80 in popularity during the 1980s.
Low overall cost, small packaging, simple computer bus requirements, and sometimes the integration of circuitry otherwise provided by external hardware (the Z80 had a built-in memory refresh) allowed the home computer “revolution” to accelerate sharply in the early 1980s, eventually delivering such inexpensive machines as the Sinclair ZX-81, which sold for US$99.
The Western Design Center, Inc. (WDC) introduced the CMOS 65C02 in 1982 and licensed the design to several firms. It was used as the CPU in the Apple IIe and IIc personal computers as well as in implantable medical-grade pacemakers and defibrillators, and in automotive, industrial, and consumer devices. WDC pioneered the licensing of microprocessor designs, later followed by ARM and other microprocessor intellectual property (IP) providers in the 1990s.
Motorola introduced the MC6809 in 1978, an ambitious and well thought-out 8-bit design that was source compatible with the 6800 and implemented using purely hard-wired logic. (Subsequent 16-bit microprocessors typically used microcode to some extent, as design requirements were becoming too complex for purely hard-wired logic.)
Another early 8-bit microprocessor was the Signetics 2650, which enjoyed a brief surge of interest due to its innovative and powerful instruction set architecture.
A seminal microprocessor in the world of spaceflight was RCA’s RCA 1802 (aka CDP1802 or RCA COSMAC), introduced in 1976, which was used in NASA’s Voyager and Viking space probes of the 1970s and on board the Galileo probe to Jupiter (launched 1989, arrived 1995). The RCA COSMAC was the first microprocessor to implement CMOS technology. The CDP1802 was used because it could be run at very low power, and because its production process (silicon on sapphire) ensured much better protection against cosmic radiation and electrostatic discharge than that of any other processor of the era. Thus, the 1802 is said to be the first radiation-hardened microprocessor.
The RCA 1802 had what is called a static design, meaning that the clock frequency could be made arbitrarily low, even 0 Hz, a total stop condition. This let the Voyager, Viking, and Galileo spacecraft use minimum electric power for long uneventful stretches of a voyage. Timers or sensors would awaken or speed up the processor in time for important tasks, such as navigation updates, attitude control, data acquisition, and radio communication.

Intersil 6100



The Intersil 6100 family consisted of a 12-bit microprocessor (the 6100) and a range of peripheral support and memory ICs. The microprocessor recognized the DEC PDP-8 minicomputer instruction set, and as such it was sometimes referred to as the CMOS-PDP8. Since it was also produced by Harris Corporation, it was also known as the Harris HM-6100. By virtue of its CMOS technology and associated benefits, the 6100 was incorporated into some military designs until the early 1980s.

IMP-16



The first multi-chip 16-bit microprocessor was the National Semiconductor IMP-16, introduced in early 1973. An 8-bit version of the chipset was introduced in 1974 as the IMP-8. During the same year, National introduced the first 16-bit single-chip microprocessor, the National Semiconductor PACE, which was later followed by an NMOS version, the INS8900.
Other early multi-chip 16-bit microprocessors include one used by Digital Equipment Corporation (DEC) in the LSI-11 OEM board set and the packaged PDP-11/03 minicomputer, and the Fairchild Semiconductor MicroFlame 9440, both of which were introduced in the 1975 to 1976 timeframe.
The first single-chip 16-bit microprocessor was TI’s TMS 9900, which was also compatible with their TI-990 line of minicomputers. The 9900 was used in the TI 990/4 minicomputer, the TI-99/4A home computer, and the TM990 line of OEM microcomputer boards. The chip was packaged in a large ceramic 64-pin DIP package, while most 8-bit microprocessors such as the Intel 8080 used the more common, smaller, and less expensive plastic 40-pin DIP. A follow-on chip, the TMS 9980, was designed to compete with the Intel 8080, had the full TI 990 16-bit instruction set, used a plastic 40-pin package, moved data 8 bits at a time, but could only address 16 KB. A third chip, the TMS 9995, was a new design. The family later expanded to include the 99105 and 99110.
The Western Design Center, Inc. (WDC) introduced the CMOS 65816, a 16-bit upgrade of the WDC CMOS 65C02, in 1984. The 65816 16-bit microprocessor was the core of the Apple IIgs and later the Super Nintendo Entertainment System, making it one of the most popular 16-bit designs of all time.
Intel followed a different path; having no minicomputers to emulate, it instead “upsized” the 8080 design into the 16-bit Intel 8086, the first member of the x86 family, which powers most modern PC-type computers. Intel introduced the 8086 as a cost-effective way of porting software from the 8080 line, and succeeded in winning much business on that premise. The 8088, a version of the 8086 that used an external 8-bit data bus, was the microprocessor in the first IBM PC, the model 5150. Following up on the 8086 and 8088, Intel released the 80186, the 80286, and, in 1985, the 32-bit 80386, cementing its PC market dominance with the processor family’s backwards compatibility.
The integrated microprocessor memory management unit (MMU) was developed by Childs et al. of Intel, and awarded U.S. patent number 4,442,484.

MC68000



The most significant of the 32-bit designs is the MC68000, introduced in 1979. The 68K, as it was widely known, had 32-bit registers but used 16-bit internal data paths and a 16-bit external data bus to reduce pin count, and supported only 24-bit addresses. Motorola generally described it as a 16-bit processor, though it clearly has a 32-bit architecture. The combination of high performance, a large memory space (16 megabytes, or 2^24 bytes), and fairly low cost made it the most popular CPU design of its class. The Apple Lisa and Macintosh designs made use of the 68000, as did a host of other designs in the mid-1980s, including the Atari ST and Commodore Amiga.
The world’s first single-chip fully-32-bit microprocessor, with 32-bit data paths, 32-bit buses, and 32-bit addresses, was the AT&T Bell Labs BELLMAC-32A, with first samples in 1980, and general production in 1982[23][24]. After the divestiture of AT&T in 1984, it was renamed the WE 32000 (WE for Western Electric), and had two follow-on generations, the WE 32100 and WE 32200. These microprocessors were used in the AT&T 3B5 and 3B15 minicomputers; in the 3B2, the world’s first desktop super microcomputer; in the “Companion”, the world’s first 32-bit laptop computer; and in “Alexander”, the world’s first book-sized super microcomputer, featuring ROM-pack memory cartridges similar to today’s gaming consoles. All these systems ran the UNIX System V operating system.
Intel’s first 32-bit microprocessor was the iAPX 432, which was introduced in 1981 but was not a commercial success. It had an advanced capability-based object-oriented architecture, but poor performance compared to contemporary architectures such as Intel’s own 80286 (introduced 1982), which was almost four times as fast on typical benchmark tests. However, the result for the iAPX 432 was partly due to a rushed and therefore suboptimal Ada compiler.
The ARM first appeared in 1985. This is a RISC processor design, which has since come to dominate the 32-bit embedded systems processor space due in large part to its power efficiency, its licensing model, and its wide selection of system development tools. Semiconductor manufacturers generally license cores such as the ARM11 and integrate them into their own system on a chip products; only a few such vendors are licensed to modify the ARM cores. Most cell phones include an ARM processor, as do a wide variety of other products. There are microcontroller-oriented ARM cores without virtual memory support, as well as SMP applications processors with virtual memory.
Motorola’s success with the 68000 led to the MC68010, which added virtual memory support. The MC68020, introduced in 1985, added full 32-bit data and address buses. The 68020 became hugely popular in the Unix super microcomputer market, and many small companies (e.g., Altos, Charles River Data Systems) produced desktop-size systems. The MC68030 was introduced next, improving upon the previous design by integrating the MMU into the chip. The continued success led to the MC68040, which included an FPU for better math performance. A 68050 failed to achieve its performance goals and was not released, and the follow-up MC68060 was released into a market saturated by much faster RISC designs. The 68K family faded from the desktop in the early 1990s.
Other large companies designed the 68020 and its follow-ons into embedded equipment. At one point, there were more 68020s in embedded equipment than there were Intel Pentiums in PCs. The ColdFire processor cores are derivatives of the venerable 68020.
During this time (early to mid-1980s), National Semiconductor introduced a very similar 16-bit-pinout, 32-bit-internal microprocessor called the NS 16032 (later renamed 32016), a full 32-bit version named the NS 32032, and a line of 32-bit industrial OEM microcomputers. By the mid-1980s, Sequent introduced the first symmetric multiprocessor (SMP) server-class computer using the NS 32032. This was one of the design’s few wins, and it disappeared in the late 1980s.
The MIPS R2000 (1984) and R3000 (1989) were highly successful 32-bit RISC microprocessors. They were used in high-end workstations and servers by SGI, among others.
Other designs included the interesting Zilog Z8000, which arrived too late to market to stand a chance and disappeared quickly.
In the late 1980s, “microprocessor wars” started killing off some of the microprocessors. Apparently, with only one major design win, Sequent, the NS 32032 just faded out of existence, and Sequent switched to Intel microprocessors.
From 1985 to 2003, the 32-bit x86 architectures became increasingly dominant in desktop, laptop, and server markets and these microprocessors became faster and more capable. Intel had licensed early versions of the architecture to other companies, but declined to license the Pentium, so AMD and Cyrix built later versions of the architecture based on their own designs. During this span, these processors increased in complexity (transistor count) and capability (instructions/second) by at least three orders of magnitude. Intel’s Pentium line is probably the most famous and recognizable 32-bit processor model, at least with the public at large.

AMD64


While 64-bit microprocessor designs have been in use in several markets since the early 1990s, the early 2000s saw the introduction of 64-bit microprocessors targeted at the PC market.
With AMD’s introduction of a 64-bit architecture backwards-compatible with x86, x86-64 (now called AMD64), in September 2003, followed by Intel’s near fully compatible 64-bit extensions (first called IA-32e or EM64T, later renamed Intel 64), the 64-bit desktop era began. Both versions can run 32-bit legacy applications without any performance penalty as well as new 64-bit software. With operating systems that run 64-bit natively, such as Windows XP x64, Windows Vista x64, Linux, BSD, and Mac OS X, the software is also geared to fully utilize the capabilities of such processors. The move to 64 bits is more than just an increase in register size from IA-32, as it also doubles the number of general-purpose registers.
The move to 64 bits by PowerPC processors had been intended since the processors’ design in the early 1990s and was not a major cause of incompatibility. Existing integer registers were extended, as were all related data pathways, but, as was the case with IA-32, both floating point and vector units had been operating at or above 64 bits for several years. Unlike what happened when IA-32 was extended to x86-64, no new general-purpose registers were added in 64-bit PowerPC, so any performance gained when using the 64-bit mode for applications making no use of the larger address space is minimal.

Multicore designs



A different approach to improving a computer’s performance is to add extra processors, as in symmetric multiprocessing designs, which have been popular in servers and workstations since the early 1990s. Keeping up with Moore’s Law is becoming increasingly challenging as chip-making technologies approach the physical limits of the technology.
In response, microprocessor manufacturers have looked for other ways to improve performance so they can maintain the momentum of constant upgrades in the market.
A multi-core processor is simply a single chip containing more than one microprocessor core, effectively multiplying the potential performance by the number of cores (as long as the operating system and software are designed to take advantage of more than one processor). Some components, such as the bus interface and second-level cache, may be shared between cores. Because the cores are physically very close, they can interface at much faster clock rates than discrete multiprocessor systems, improving overall system performance.
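As a minimal sketch of how software must be written to exploit multiple cores, the C program below (using POSIX threads; the thread count and array size are arbitrary choices for this example) splits an array sum across two threads that the operating system can schedule on separate cores. Compile with the -pthread flag.

    #include <pthread.h>
    #include <stdio.h>

    #define N        1000000
    #define NTHREADS 2

    static int data[N];

    struct slice { int start, end; long long sum; };

    /* Each thread sums its own slice of the array. */
    static void *partial_sum(void *arg) {
        struct slice *s = arg;
        s->sum = 0;
        for (int i = s->start; i < s->end; i++)
            s->sum += data[i];
        return NULL;
    }

    int main(void) {
        pthread_t tid[NTHREADS];
        struct slice slices[NTHREADS];

        for (int i = 0; i < N; i++)
            data[i] = 1;

        /* Fork: one slice per thread, ideally one thread per core. */
        for (int t = 0; t < NTHREADS; t++) {
            slices[t].start = t * (N / NTHREADS);
            slices[t].end   = (t + 1) * (N / NTHREADS);
            pthread_create(&tid[t], NULL, partial_sum, &slices[t]);
        }

        /* Join: combine the partial results. */
        long long total = 0;
        for (int t = 0; t < NTHREADS; t++) {
            pthread_join(tid[t], NULL);
            total += slices[t].sum;
        }
        printf("total = %lld\n", total);   /* expect 1000000 */
        return 0;
    }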
In 2005, the first personal computer dual-core processors were announced, and as of 2009 dual-core and quad-core processors are widely used in servers, workstations, and PCs, while six- and eight-core processors will be available for high-end applications in both home and professional environments.
Sun Microsystems has released the Niagara and Niagara 2 chips, both of which feature an eight-core design. The Niagara 2 supports more threads and operates at 1.6 GHz.
High-end Intel Xeon processors that are on the LGA 771 socket are DP (dual processor) capable, as is the Intel Core 2 Extreme QX9775, also used in Apple’s Mac Pro and the Intel Skulltrail motherboard. With the transition to the LGA 1366 socket and the Intel Core i7 chip, quad-core is now considered mainstream, and the upcoming Core i9 chip will introduce six-core, and possibly dual-die hex-core (12-core), processors.

RISC



From the mid-1980s to the early 1990s, a crop of new high-performance Reduced Instruction Set Computer (RISC) microprocessors appeared, influenced by discrete RISC-like CPU designs such as the IBM 801 and others. RISC microprocessors were initially used in special-purpose machines and Unix workstations, but then gained wide acceptance in other roles.
In 1986, HP released its first system with a PA-RISC CPU. The first commercial RISC microprocessor design was released either by MIPS Computer Systems, the 32-bit R2000 (the R1000 was not released), or by Acorn Computers, the 32-bit ARM2, in 1987. The R3000 made the design truly practical, and the R4000 introduced the world’s first commercially available 64-bit RISC microprocessor. Competing projects would result in the IBM POWER and Sun SPARC architectures. Soon every major vendor was releasing a RISC design, including the AT&T CRISP, AMD 29000, Intel i860 and i960, Motorola 88000, and DEC Alpha.
As of 2007, two 64-bit RISC architectures are still produced in volume for non-embedded applications: SPARC and Power ISA.

GPU



Though the term “microprocessor” has traditionally referred to a single- or multi-chip CPU or system-on-a-chip (SoC), several types of specialized processing devices have followed from the technology. The most common examples are microcontrollers, digital signal processors (DSP) and graphics processing units (GPU). Many examples of these are either not programmable, or have limited programming facilities. For example, in general GPUs through the 1990s were mostly non-programmable and have only recently gained limited facilities like programmable vertex shaders. There is no universal consensus on what defines a “microprocessor”, but it is usually safe to assume that the term refers to a general-purpose CPU of some sort and not a special-purpose processor unless specifically noted.

Central Processing Unit (CPU)
The central processing unit (CPU) is the computing part of the computer that interprets and executes program instructions. It is also known as the processor. In a microcomputer, the CPU is contained on a single microprocessor chip within the system unit. The CPU has two parts: the control unit and the arithmetic-logic unit.
The control unit is the circuitry that locates, retrieves, interprets, and executes each instruction in the CPU. The control unit directs electronic signals between primary storage and the ALU, and between the CPU and input/output devices.
The arithmetic-logic unit (ALU) is a high-speed circuit within the CPU. The ALU performs arithmetic (math) operations, logic (comparison) operations, and related operations. It retrieves alphanumeric data from memory, does the actual calculating and comparing, and sends the results of the operation back to memory.
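To make the division of labor concrete, here is a toy C sketch of the control unit’s fetch-decode-execute cycle; the opcodes and the four-instruction program are invented for illustration and do not model any real processor:

    #include <stdio.h>
    #include <stdint.h>

    /* Invented toy instruction set for illustration only. */
    enum { OP_LOAD, OP_ADD, OP_CMP, OP_HALT };

    int main(void) {
        uint8_t program[] = {          /* "primary storage"  */
            OP_LOAD, 7,                /* acc = 7            */
            OP_ADD,  5,                /* acc = acc + 5      */
            OP_CMP, 10,                /* flag = (acc > 10)  */
            OP_HALT
        };
        int pc = 0, acc = 0, flag = 0; /* registers */

        for (;;) {
            uint8_t opcode = program[pc++];        /* fetch */
            switch (opcode) {                      /* decode + execute */
            case OP_LOAD: acc = program[pc++];        break;
            case OP_ADD:  acc += program[pc++];       break; /* ALU: arithmetic */
            case OP_CMP:  flag = acc > program[pc++]; break; /* ALU: comparison */
            case OP_HALT: printf("acc=%d flag=%d\n", acc, flag); /* acc=12 flag=1 */
                          return 0;
            }
        }
    }

The loop and the switch statement play the role of the control unit, locating and interpreting each instruction, while the addition and comparison lines stand in for the ALU.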
