Computer Components: Types of Processor
Multipurpose Machines
Early computers were able to calculate an output using fixed instructions.
- They could only perform one set of instructions.
In the 1940s, John Von Neumann and Alan Turing both proposed the stored program concept.
Stored Program Concept
A program must be loaded into main memory to be executed by the processor.
The instructions are fetched one at a time, decoded and executed sequentially by the processor.
The sequence of instructions can only be changed by a conditional or unconditional jump instruction.
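As a rough illustration of the stored program concept, here is a minimal sketch in Python of the fetch-decode-execute cycle for an invented toy machine (the opcodes, operands and memory layout are assumptions made up for this example, not a real instruction set). The program sits in the same memory as its data, and only the jump changes the sequence of execution:

# Toy fetch-decode-execute loop; everything here is illustrative.
memory = [
    ("LOAD", 10),    # 0: copy the value at address 10 into the accumulator
    ("ADD", 11),     # 1: add the value at address 11 to the accumulator
    ("JUMP", 4),     # 2: unconditional jump - changes the sequence of execution
    ("ADD", 11),     # 3: skipped because of the jump above
    ("HALT", None),  # 4: stop
    None, None, None, None, None,
    7,               # 10: data
    5,               # 11: data
]

pc = 0            # program counter
accumulator = 0

while True:
    opcode, operand = memory[pc]       # fetch the next instruction
    pc += 1
    if opcode == "LOAD":               # decode and execute
        accumulator = memory[operand]
    elif opcode == "ADD":
        accumulator += memory[operand]
    elif opcode == "JUMP":
        pc = operand                   # only a jump alters the sequence
    elif opcode == "HALT":
        break

print(accumulator)                     # 7 + 5 = 12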
John Von Neumann
The most common implementation of this concept is the Von Neumann architecture.
Instructions and data are stored in a common main memory and transferred using a single shared bus.
Advantages of Von Neumann Architecture
Owing primarily to cost and programming complexity, almost all general purpose computers are based on Von Neumann's principles.
It simplifies the design of the Control Unit.
Data from memory and from devices are accessed in the same way.
Harvard Architecture
An alternative model separates the data and instructions into separate memories using different buses.
Program instructions and data are no longer competing for the same bus.
Use of Harvard Architecture
Different sized memory and word lengths can be used for data and instructions.
Harvard principles are used with specialist embedded systems and Digital Signal Processing (DSP), where speed takes priority over the complexities of design.
Von Neumann vs Harvard
Von Neumann Architecture
- Used in PCs, laptops, servers and high performance computers.
- Data and instructions share the same memory. Both use the same word length.
- One bus for data and instructions is a bottleneck.
- One bus is simpler for control unit design.
Harvard Architecture
- Used in digital signal processing, microcontrollers and in embedded systems such as microwave ovens and watches.
- Instructions and data are held in separate memories which may have different word lengths. Free data memory can't be used for instructions, and vice versa.
- Separate buses allow parallel access to data and instructions.
- Control unit for two buses is more complicated and expensive.
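As a very small sketch of the difference (toy Python structures with made-up contents, continuing the example above), the two models can be pictured as one shared memory versus two separate ones:

# Von Neumann: a single memory, reached over a single shared bus,
# holds both the program and its data.
memory = [("LOAD", 3), ("HALT", None), None, 7]

# Harvard: separate instruction and data memories, each with its own bus;
# the two may even use different word lengths.
instruction_memory = [("LOAD", 0), ("HALT", None)]
data_memory = [7]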
Contemporary Processor Architectures
Modern CPU chips often incorporate aspects of both Von Neumann and Harvard architecture.
In desktop computers, there is one main memory for holding both data and instructions, but cache memory is divided into an instruction cache and a data cache, so instructions and data are fetched from cache in the Harvard style.
- Some digital signal processors have multiple parallel data buses (two write, three read) and one instruction bus.
CISC and RISC
CISC
In a Complex Instruction Set Computer (CISC), a large instruction set is used to accomplish tasks in as few lines of assembly language as possible.
- A CISC instruction combines a "load/store" instruction with the instruction that carries out the actual calculation.
A single assembly language instruction such as MULT A, B could be used to multiply A by B and store the result back in A.
Advantages of CISC
- Quicker to code programs.
- The compiler has very little work to do to translate a high-level language statement into machine code.
- Because the code is relatively short, very little RAM is required to store the instructions.
RISC
Reduced Instruction Set Computers (RISC) take the opposite approach.
A minimum number of very simple instructions, each taking one clock cycle, are used to accomplish all the required operations, working on multiple general purpose registers.
LDA (LOAD)
STO (STORE)
MULT (MULTIPLY)
Advantages of RISC
- The hardware is simpler to build, with fewer circuits needed for carrying out complex instructions.
- Because each instruction takes the same amount of time, i.e. one clock cycle, pipelining is possible (see the worked example after this list).
- RAM is now cheap, and RISC's use of RAM and software allows better-performing processors at lower cost.
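As a rough worked example (assuming a classic five-stage pipeline, which is an assumption rather than anything stated above): once the pipeline is full, n single-cycle instructions complete in about 5 + (n - 1) cycles rather than 5n, so 100 instructions take roughly 104 cycles instead of 500.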
Coding in CISC and RISC
The CISC instruction:
MULT A, B
The RISC instruction:
LDA R1, A
LDA R2, B
MULT R1, R2
STO R1, A
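Both sequences carry out the same high-level operation; assuming A and B are ordinary variables, the statement being compiled would be something like:
A = A * B
The RISC version simply makes explicit the load, multiply and store steps that the single CISC MULT performs internally.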
Multi-core and parallel systems
Multi-core processors are able to distribute workload across multiple processor cores, thus achieving significantly higher performance by performing several tasks in parallel.
They are therefore known as parallel systems.
Many personal computers and mobile devices are dual-core or quad-core, meaning they have two or four processor cores.
Supercomputers have thousands of cores.
Using parallel processing
The software has to be written to take advantage of multiple cores.
For example, browsers such as Google Chrome and Mozilla Firefox can run several concurrent processes.
- Using tabbed browsing, different cores can work simultaneously processing requests, showing videos or running software in different windows.
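As a minimal sketch of software written to take advantage of multiple cores, here is an example using Python's standard multiprocessing module (the work function and the data are made up purely for illustration):

from multiprocessing import Pool

def count_vowels(page):
    # An independent piece of work - each core can process a different page.
    return sum(page.count(v) for v in "aeiou")

if __name__ == "__main__":
    pages = ["some text", "more text", "even more text", "a final page"]
    with Pool() as pool:                          # one worker process per core by default
        counts = pool.map(count_vowels, pages)    # pages are processed in parallel
    print(sum(counts))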
Co-processor systems
A co-processor is an extra processor used to supplement the functions of the primary processor (the CPU).
- It may be used to perform floating point arithmetic, graphics processing, digital signal processing and other functions.
- It generally carries out only a limited range of functions.
GPU
A Graphics Processing Unit is a specialised electronic circuit which is very efficient at manipulating computer graphics and image processing.
- It consists of thousands of small efficient cores designed for parallel processing.
- It can process large blocks of visual data simultaneously.
In a PC, a GPU may be present on a graphics card; its thousands of cores allow it to process parallel tasks efficiently.
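As a loose illustration of the data-parallel style of work a GPU is built for (this sketch uses numpy, which runs on the CPU; the image and the operation are made-up examples): one operation is applied to a large block of pixel values at once, which is the kind of task a GPU's thousands of cores would share out in parallel.

import numpy as np

# A made-up 1080p greyscale image: one large block of pixel values.
image = np.random.randint(0, 256, size=(1080, 1920), dtype=np.uint8)

# Brighten every pixel and clip back to the valid 0-255 range.
# On a GPU, each of its many cores would handle different pixels in parallel.
brighter = np.clip(image.astype(np.int16) + 40, 0, 255).astype(np.uint8)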
Function of a GPU
A GPU can act together with a CPU to accelerate scientific, engineering and other applications.
- They are used in numerous devices ranging from mobile phones and tablets to cars, drones and robots.