
Lecture 20: Storage and Interfacing

Lectures on Computer Architecture


By Dr. Swarnalatha Radhakrishnan

20.1 Introduction

This lecture completes our exploration of computer architecture by examining storage devices and input/output (I/O) systems that enable computers to interact with external devices and provide persistent data storage beyond volatile main memory. We explore storage technologies from mechanical magnetic disks to solid-state flash memory, understanding their performance characteristics, reliability metrics, and cost tradeoffs. The lecture covers I/O communication methods including polling, interrupts, and direct memory access (DMA), analyzes RAID configurations that improve both performance and dependability, and examines how storage systems connect to processors through memory-mapped I/O or dedicated I/O instructions. Understanding these peripheral systems reveals how complete computer systems integrate computation, memory, and external interaction into cohesive platforms.

20.2 I/O Device Characteristics

I/O devices can be characterized by three fundamental factors:

20.2.1 Behavior

Input Devices: supply data to the computer (e.g., keyboard, mouse).
Output Devices: receive data from the computer (e.g., display, printer).
Storage Devices: act as both input and output (e.g., disks), and additionally retain data.

20.2.2 Partner

Human Devices: interface with people (e.g., keyboard, mouse, display).
Machine Devices: interface with other machines (e.g., disks, network interfaces, sensors).

20.2.3 Data Rate

Peak data rates vary over many orders of magnitude: from a few bytes per second for a keyboard to gigabytes per second for network and graphics devices.

20.3 I/O Bus Connections

20.3.1 Simplified System Architecture

Components

Processor, cache, main memory, I/O controllers, and the devices themselves.

Bus Structure

A shared interconnect (processor-memory bus and I/O bus) carries addresses, data, and control signals between components.

Connections

Devices attach to the bus through their I/O controllers rather than directly.

Multiple controllers allow parallel device operation while sharing common interconnect.

20.4 Dependability

Critical for I/O systems, especially storage devices.

20.4.1 Why Dependability Matters

Storage holds the only persistent copy of data: a computation can be rerun after a crash, but lost or corrupted stored data may be unrecoverable.

20.4.2 Dependability is Particularly Important For

Storage systems, and the servers and networks that many users depend on simultaneously.

20.5 Service States

20.5.1 Two Primary States

1. Service Accomplishment State

The delivered service matches its specification.

2. Service Interruption State

The delivered service deviates from its specification.

20.5.2 State Transitions

Failures move the system from service accomplishment to service interruption; restorations (repairs) move it back.

20.6 Fault Terminology

20.6.1 Fault Definition

A fault is the failure of a component of the system.

Characteristics:

A fault may or may not lead to a failure of the system as a whole; a fault-tolerant design keeps delivering service despite component faults.

20.6.2 Distinction

A fault is the cause (a component fails); a failure is the effect (the system's delivered service deviates from its specification).

20.7 Dependability Measures

20.7.1 Key Metrics

1. MTTF (Mean Time To Failure)

Definition: the average time a system or component operates before failing; a measure of reliability.

2. MTTR (Mean Time To Repair)

Definition: the average time needed to restore service after a failure; a measure of service interruption.

3. MTBF (Mean Time Between Failures)

Formula:
MTBF = MTTF + MTTR
Definition: the average time between successive failures, combining operating time and repair time.

4. Availability

Formula:
Availability = MTTF / (MTTF + MTTR)
Definition: the fraction of time the system is delivering its specified service.

20.8 Improving Availability

20.8.1 Two Approaches

20.9 Increase MTTF (Mean Time To Failure)

a) Fault Avoidance

Methods: prevent faults from occurring by construction, using conservative design and high-quality components.

b) Fault Tolerance

Methods: use redundancy so the system keeps delivering service despite faults (e.g., error-correcting codes, redundant disks).

c) Fault Forecasting

Methods: predict the presence and creation of faults, so failing components can be replaced before they cause a service failure.

20.10 Reduce MTTR (Mean Time To Repair)

20.10.1 Methods

Better diagnostic tools, modular designs with easily replaceable (hot-swappable) components, and automated fault detection and repair processes.

20.10.2 Example Problems

20.11 Magnetic Disk Storage

Traditional secondary storage technology using magnetic recording.

20.11.1 Physical Structure

Disk Shape

One or more rotating platters coated with magnetic material, with a read/write head per surface.

Tracks

Concentric circles on each platter surface on which data is recorded.

Sectors

Each track is divided into sectors, the smallest unit that can be read or written (traditionally 512 bytes).

20.11.2 Sector Contents

A sector typically holds a sector ID, the data (e.g., 512 bytes), error-correcting code (ECC), and gaps separating it from neighboring sectors.

20.12 Disk Access Process

20.12.1 Access Components and Timing

1. Queuing Delay

Time spent waiting if the device is busy with earlier requests.

2. Seek Time

Time to move the head assembly to the correct track.

3. Rotational Latency

Time waiting for the desired sector to rotate under the head; half a rotation on average.

4. Transfer Time

Time to read or write the sector's bits as they pass under the head.

5. Controller Overhead

Time the disk controller spends managing the access.

20.12.2 Access Coordination

20.13 Disk Access Example Calculation

20.13.1 Given Parameters

Average seek time 4 ms, average rotational latency 2 ms, sector transfer time 0.005 ms, controller overhead 0.2 ms. (A disk spinning at 15,000 RPM takes 4 ms per revolution, so the average rotational latency of half a revolution is 2 ms.)

20.13.2 Average Read Time Calculation

1. Seek Time

4 ms (given average).

2. Rotational Latency

Half a rotation on average: 2 ms.

3. Transfer Time

0.005 ms for one sector.

4. Controller Delay

0.2 ms.

20.13.3 Total Average Read Time

Total = 4 + 2 + 0.005 + 0.2 ≈ 6.2 milliseconds

20.13.4 Real Case Variation

Measured average seek times are often much shorter than the quoted specification, because consecutive accesses tend to exhibit locality and require only short head movements.

20.13.5 Additional Examples

20.14 Flash Storage

Modern non-volatile semiconductor storage technology.

20.14.1 Characteristics

Advantages

Non-volatile, no moving parts (shock resistant), lower power, and far lower access latency than magnetic disk.

Disadvantages

Higher cost per gigabyte than magnetic disk, and limited write endurance: cells wear out after a finite number of write/erase cycles, so controllers use wear leveling to spread writes evenly.

20.15 Types of Flash Storage

20.15.1 NOR Flash

Structure

Cells are connected in parallel, resembling a NOR gate.

Characteristics

Supports fast random (byte-level) reads; writes and erases are slower; lower density than NAND.

Applications

Code storage in embedded systems, where firmware can execute directly from flash.

20.15.2 NAND Flash

Structure

Cells are connected in series, resembling a NAND gate.

Characteristics

Higher density and lower cost per bit than NOR; accessed in pages and blocks rather than individual bytes.

Applications

Bulk data storage: USB drives, memory cards, and solid-state drives (SSDs).

Note: Values in lecture slides may be outdated as flash storage technology rapidly evolves.

20.16 Memory-Mapped I/O

Method of accessing I/O devices using memory addresses.

20.16.1 Concept

Device registers are assigned addresses within the same address space as memory; ordinary load and store instructions then read and write the devices.

20.16.2 Example with 8 Address Lines

With 8 address lines there are 256 addresses; a portion of this range is decoded to device registers and the remainder to memory, so a single address map serves both.

20.16.3 Access Mechanism

The address decoder routes each load or store either to memory or to a device register, depending on the address.

20.16.4 Advantages

No special I/O instructions are needed, and the full instruction set with all its addressing modes can operate on device registers.

20.16.5 Disadvantages

Device registers consume part of the memory address space, and accesses to them must bypass the cache.

20.17 I/O Instructions

Alternative to memory-mapped I/O: separate I/O instructions.

20.17.1 Characteristics

The processor provides dedicated instructions (such as IN and OUT on x86) that access a separate I/O address space of device ports.

20.17.2 Access Control

I/O instructions are typically privileged, so only the operating system can access devices directly.

20.17.3 Example Architecture

The x86 family uses separate I/O instructions alongside its memory instructions.

20.17.4 Advantages

Device ports do not consume memory address space, and protection is simple to enforce.

20.17.5 Disadvantages

Extra instructions must be added to the architecture, and they usually support fewer addressing modes than memory instructions.

20.18 Polling

Method for processor to communicate with I/O devices.

20.18.1 How Polling Works

1. Periodically Check I/O Status Register

The CPU reads the device's status register in a loop to see whether the device needs attention.

2. If Device Ready

The CPU transfers the data to or from the device's data register.

3. If Error Detected

The CPU takes corrective action or reports the error.

20.18.2 Characteristics

When Used

Simple, low-cost, or hard-real-time embedded systems with few devices and strict timing requirements.

Advantages

Predictable Timing: software controls exactly when each device is checked.
Low Hardware Cost: no interrupt controller or vectoring hardware is required.

Disadvantages

Wastes CPU Time: the processor spins checking status instead of doing useful work.
Not Suitable for Complex Systems: with many devices or high data rates, polling cannot keep up efficiently.

20.18.3 Programming Model

20.19 Interrupts

Alternative to polling: device-initiated communication.

20.19.1 How Interrupts Work

1. Device Initialization

The CPU enables interrupts for the device and continues with other work; it does not wait on the device.

2. Controller Interrupts CPU

When the device needs attention, its controller raises an interrupt request to the processor.

3. Handler Execution

The CPU saves its state, runs the interrupt handler to service the device, then restores state and resumes the interrupted program.

20.19.2 Characteristics

Asynchronous

Interrupts are not tied to any particular instruction; the processor checks for pending interrupts between instructions.

Fast Identification

Vectored interrupts or a cause register let the handler quickly identify the interrupting device.

Priority System

Interrupts carry priorities, so urgent devices are serviced first and higher-priority requests can preempt lower-priority handlers.

20.19.3 Advantages

Efficient CPU Use: the processor does useful work until a device actually needs service.
Good for Multiple Devices: each device signals independently, with priorities to arbitrate.
Responsive: devices are serviced promptly when they raise a request.

20.19.4 Disadvantages

More Complex Hardware: an interrupt controller and vectoring logic are required.
Context Switching Overhead: each interrupt costs a state save and restore.

20.19.5 Execution Model

20.20 I/O Data Transfer Methods

Three approaches for transferring data between memory and I/O:

20.21 Polling-Driven I/O

20.21.1 Process

The CPU repeatedly polls the device's status register and moves each item of data itself when the device is ready.

20.21.2 Issues

The CPU busy-waits for the entire transfer, so every polling cycle is lost to useful work.

20.22 Interrupt-Driven I/O

20.22.1 Process

The device interrupts the CPU whenever it has data ready (or space available); the handler moves the data and returns.

20.22.2 Issues

The CPU still moves every word of data, and interrupt overhead (state save and restore) is paid on each transfer unit, which becomes expensive at high data rates.

20.23 Direct Memory Access (DMA)

20.23.1 Process

Setup: the operating system programs the DMA controller with the transfer parameters, then the CPU continues with other work.

20.23.2 DMA Operation

CPU Provides: the memory address, the device involved, and the number of bytes to transfer.
DMA Controller: performs the transfer between the device and memory independently of the CPU.
Controller Interrupts CPU On: completion of the transfer, or on an error.

20.23.3 Advantages

The CPU is free during the transfer, and bulk data moves at the full bandwidth of the bus and device.

20.23.4 When Used

High-bandwidth devices such as disks and network interfaces.

20.23.5 Comparison

Polling ties up the CPU completely; interrupts free the CPU between transfers but still use it to move the data; DMA frees the CPU from the data movement itself.

20.24 RAID (Redundant Array of Independent Disks)

Technology to improve storage performance and dependability.

20.24.1 Purpose

Combine multiple physical disks into one logical volume, using parallelism and redundancy to improve performance and dependability.

20.24.2 Benefits

Performance Improvement

Striping data across disks lets several disks service a request (or several requests) in parallel, increasing bandwidth.

Dependability Improvement

Redundancy, through mirroring or parity, allows data to be reconstructed when a disk fails.

Key Takeaways

  1. I/O systems connect computers to external devices and storage
  2. Dependability is critical for storage systems
  3. MTTF, MTTR, and availability are key metrics
  4. Magnetic disks use mechanical components with millisecond access times
  5. Flash storage is faster but more expensive than magnetic storage
  6. Memory-mapped I/O and separate I/O instructions are two access methods
  7. Polling is simple but inefficient
  8. Interrupts improve CPU efficiency
  9. DMA is essential for high-speed bulk data transfers
  10. RAID improves both performance and reliability

Summary

This lecture concludes the series, covering the complete spectrum from CPU design through the memory hierarchy to I/O systems. We have explored how computers are designed from the ground up: from basic arithmetic operations through pipelined execution, memory hierarchies, and multiprocessor systems, to the storage and I/O mechanisms that enable computers to interact with the external world.
