Computer Architecture: Pipelined and Parallel Processor Design

Computer Architecture: Pipelined and Parallel Processor Design is intended for a graduate-level course on computer architecture and organization. The book's content, especially its second half, represents the most advanced material a typical graduate student studies before directly encountering the design process. Rather than cataloguing the current features of various processors and technologies, the text stresses the concepts that underlie those designs. It abstracts the essential elements of processor design and emphasizes a design methodology comprising design concepts, design target data, and evaluation tools, especially tools based on basic probability theory and simple queueing theory.
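As a taste of the kind of simple queueing analysis the book applies to memory systems, the sketch below (not taken from the book; it assumes a classic M/M/1 model with a hypothetical memory module as the server) computes utilization, mean occupancy, and mean residence time from arrival and service rates.

```python
def mm1_stats(lam: float, mu: float):
    """Classic M/M/1 results for arrival rate lam and service rate mu:
    utilization rho = lam / mu,
    mean number in system N = rho / (1 - rho),
    mean time in system T = 1 / (mu - lam)  (consistent with Little's law N = lam * T).
    Illustrative only; the book's own models and notation may differ."""
    if lam >= mu:
        raise ValueError("queue is unstable when arrival rate >= service rate")
    rho = lam / mu
    n = rho / (1.0 - rho)
    t = 1.0 / (mu - lam)
    return rho, n, t

# Hypothetical example: a memory module that can serve 100M requests/s,
# offered 80M requests/s.
rho, n, t = mm1_stats(lam=80e6, mu=100e6)
print(f"utilization={rho:.2f}, mean occupancy={n:.1f}, mean latency={t * 1e9:.0f} ns")
```

Note how steeply delay grows with utilization: at rho = 0.8 the mean residence time is already five service times, which is why such back-of-the-envelope models are useful when sizing memory bandwidth.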
Contents
Architecture and Machines | 1
Time, Area, and Instruction Sets | 2
Concurrent Processors | 7
Pipelined Processor Design | 181
Cache Memory | 265
How Programs Behave | 425
Shared Memory Multiprocessors | 511
Processor Studies | 663
Appendix A: DTMR Cache Miss Rates | 719
Multiprogrammed Warm Cache Environment | 728
Appendix B: SPECmark vs. DTMR Cache Performance | 741