CACHE MEMORY
memory hierarchy
The term memory hierarchy is used in computer
architecture when discussing performance issues
in computer architectural design.
A memory hierarchy in computer storage
distinguishes each level in the hierarchy
by response time. Since response time,
complexity, and capacity are related, the levels
may also be distinguished by the controlling
technology.
As one goes down the hierarchy, the following
occur:
• decreasing cost per bit
• increasing capacity
• increasing access time
• decreasing frequency of access by the processor
Introduction
• Physical devices used to store programs or data
on a temporary or permanent basis for use in a
computer.
As a consequence, the typical computer
system is equipped with a hierarchy of
memory subsystems; some internal to the
system and some external.
Reading data from memory is faster than reading
it from other types of storage (such as a hard
disk), but it costs much more and is therefore best
suited for small amounts of data.
Cache memory
The larger the cache, the larger the number of gates involved in addressing the cache.
Cache memory is random access memory
(RAM) that a computer microprocessor can
access more quickly than it can access regular
RAM
As the microprocessor processes data, it looks
first in the cache memory; if it finds the
data there (from a previous read), it
does not have to do the more time-consuming
read from the larger main memory.
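The hit/miss decision described above can be sketched as follows. This is a minimal illustration, not real hardware behaviour: a dictionary stands in for the cache and another for slower main memory, and the sizes and values are hypothetical.

```python
# Hypothetical main memory: address -> value (value = addr * 10 for demo).
main_memory = {addr: addr * 10 for addr in range(100)}
cache = {}  # starts empty

def read(addr):
    if addr in cache:          # fast path: data found in cache (a hit)
        return cache[addr]
    value = main_memory[addr]  # slow path: fetch from main memory (a miss)
    cache[addr] = value        # keep a copy so the next access hits
    return value

read(7)         # first access: miss, fetched from main memory
print(read(7))  # second access: hit, served from the cache, prints 70
```

The second `read(7)` never touches `main_memory`, which is the time saving the note above refers to.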
Mapping function
Because there are fewer cache lines than main
memory blocks, an algorithm is needed for
mapping main memory blocks into cache
lines.
Mapping technique
Direct mapping
The content of a location in main memory can
be stored at one and only one specific
location in the cache (it is mapped to exactly
one cache line).
If the CPU requests the contents of a specific
memory location, there is only one line in
the cache that could possibly contain that
information.
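A minimal sketch of the direct-mapping rule, assuming a hypothetical cache of 8 lines; the line is the block number modulo the number of lines, and the remaining bits become the tag.

```python
NUM_LINES = 8  # hypothetical cache size (in lines)

def direct_map(block_number):
    """Each main memory block maps to exactly one cache line."""
    line = block_number % NUM_LINES   # which cache line the block must use
    tag = block_number // NUM_LINES   # tag stored to identify which block is resident
    return line, tag

# Blocks 3 and 11 both map to line 3, so they would evict each
# other on alternating accesses (the swapping problem noted below).
print(direct_map(3))   # (3, 0)
print(direct_map(11))  # (3, 1)
```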
Disadvantages
Thus if a program happens to reference words
repeatedly from two different blocks that map into the
same line, the blocks will be continuously swapped in
the cache and the hit ratio will be low.
Associative mapping
Overcomes the disadvantage of direct
mapping by permitting each main memory
block to be loaded into any line of the cache
• In a fully associative mapped cache, each
memory location can be in any cache block (a
block is allowed to go anywhere in the cache).
Associative search of the tags is expensive, so this
type of mapping is feasible only for very small
caches.
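A minimal sketch of fully associative lookup, assuming a hypothetical 4-line cache. Hardware compares every line's tag in parallel; here that comparison is simulated as a scan, which is exactly the cost the note above says makes large associative caches expensive.

```python
# Each entry holds the tag (block number) of the resident block, or None.
cache = [None, None, None, None]  # hypothetical 4-line cache

def lookup(block_number):
    """Compare the requested block against every tag (parallel in hardware)."""
    for line, tag in enumerate(cache):
        if tag == block_number:
            return line  # hit: found in this line
    return None          # miss: the block may be placed in any free line

cache[2] = 37          # block 37 happens to reside in line 2
print(lookup(37))      # hit: prints 2
print(lookup(5))       # miss: prints None
```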
Set Associative mapping
Set-associative mapping allows two or more
main memory words with the same index address to be
present in the cache at the same time: the index
selects a set, and the block may be stored in any
line of that set.
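A minimal sketch of 2-way set-associative placement, assuming hypothetical sizes of 4 sets with 2 lines (ways) each. A block's index selects its set, as in direct mapping, but within the set it may occupy either way, as in associative mapping.

```python
NUM_SETS = 4  # hypothetical number of sets
WAYS = 2      # 2-way set associative
cache = [[None] * WAYS for _ in range(NUM_SETS)]  # cache[set][way] holds a tag

def place(block_number):
    """Map the block to one set, then use any free way within it."""
    s = block_number % NUM_SETS     # set index (like direct mapping)
    tag = block_number // NUM_SETS
    for way in range(WAYS):
        if cache[s][way] is None:   # any free way in the set will do
            cache[s][way] = tag
            return s, way
    cache[s][0] = tag               # set full: naive eviction of way 0
    return s, 0

# Blocks 1 and 5 share set 1 but coexist in different ways,
# avoiding the forced eviction that direct mapping would cause.
print(place(1))  # (1, 0)
print(place(5))  # (1, 1)
```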