Computer Architecture: A Quantitative Approach (Google eBook)
The era of seemingly unlimited growth in processor performance is over: single-chip architectures can no longer overcome the performance limitations imposed by the power they consume and the heat they generate. Today, Intel and other semiconductor firms are abandoning the single fast processor model in favor of multi-core microprocessors--chips that combine two or more processors in a single package. In the fourth edition of Computer Architecture, the authors focus on this historic shift, increasing their coverage of multiprocessors and exploring the most effective ways of achieving parallelism as the key to unlocking the power of multiple processor architectures. Additionally, the new edition has expanded and updated coverage of design topics beyond processor performance, including power, reliability, availability, and dependability.
CD System Requirements
The CD material includes PDF documents that you can read with a PDF viewer such as Adobe Acrobat or Adobe Reader. Recent versions of Adobe Reader for some platforms are included on the CD.
The content is designed to be viewed in a browser window that is at least 720 pixels wide. You may find the content does not display well if your display is not set to at least 1024x768 pixel resolution.
This CD can be used under any operating system that includes an HTML browser and a PDF viewer. This includes Windows, Mac OS, and most Linux and Unix systems.
Increased coverage of achieving parallelism with multiprocessors.
Case studies of the latest industry technology, including the Sun Niagara multiprocessor, AMD Opteron, and Pentium 4.
Three review appendices, included in the printed volume, review the basic and intermediate principles the main text relies upon.
Eight reference appendices, collected on the CD, cover a range of topics including specific architectures, embedded systems, and application-specific processors--some guest-authored by subject experts.
Chapter 2 Instruction-Level Parallelism and Its Exploitation
Chapter 3 Limits on Instruction-Level Parallelism
Chapter 4 Multiprocessors and Thread-Level Parallelism
Chapter 5 Memory Hierarchy Design
Chapter 6 Storage Systems
Basic and Intermediate Concepts
Page 38 - A widely held rule of thumb is that a program spends 90% of its execution time in only 10% of the code.
Page 39 - Amdahl's Law states that the performance improvement to be gained from using some faster mode of execution is limited by the fraction of the time the faster mode can be used.
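Amdahl's Law quoted above can be expressed numerically: overall speedup = 1 / ((1 - f) + f / s), where f is the fraction of execution time that can use the faster mode and s is the speedup of that mode. The sketch below (function name and figures are illustrative, not from the book's examples) shows how the serial fraction caps the overall gain:

```python
def amdahl_speedup(fraction_enhanced: float, speedup_enhanced: float) -> float:
    """Overall speedup when `fraction_enhanced` of execution time can run
    in a faster mode that is `speedup_enhanced` times faster (Amdahl's Law)."""
    return 1.0 / ((1.0 - fraction_enhanced) + fraction_enhanced / speedup_enhanced)

# Even if 90% of a program is sped up 10x (e.g., parallelized across
# 10 processors), the remaining 10% limits the overall speedup:
print(round(amdahl_speedup(0.9, 10.0), 2))  # 5.26, not 10
```

Note how the unenhanced 10% dominates: as the enhanced speedup grows without bound, overall speedup approaches only 1 / 0.1 = 10x.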