Operating System
OS Part-1
OS Part-2
- File Concepts and Access methods
- Free Space Management and Allocation methods
- Directory Systems and Protection
- File Organization, Sharing and Implementation issues
- Disk and Drum Scheduling
- I/O Devices Organisation & I/O Buffering
- I/O Hardware, Kernel I/O subsystem and Transforming I/O Requests to Hardware Operations
- Device Drivers and Path Management
- Device Driver Sub Modules and Procedure
- Device Scheduler and Handler
- Interrupt Service Routine (ISR)
- File System in Linux and Windows
OS Part-3
- Process and Process Control Block (PCB)
- Process Scheduling (Preemptive and Non-Preemptive)
- Scheduling Algorithms
- Algorithm Evaluation
- Multiple Processor Scheduling
- Real Time Scheduling
- Operations on Processes
- Threads
- Inter-Process Communication
- Precedence Graphs
- Critical Section Problem
- Semaphores
- Classical Problems of Synchronization
- Deadlock
- Deadlock Prevention and Avoidance
- Deadlock Detection and Recovery
- Process Management in Linux
OS Part-4
- Memory Hierarchy in OS
- Concepts of Memory Management
- MFT and MVT
- Logical and Physical Address Space
- Swapping
- Contiguous and Non-Contiguous Memory Allocation
- Paging
- Segmentation
- Paging Combined with Segmentation
- Structure and Implementation of Page Table
- Virtual Memory in OS
- Cache Memory Organization
- Demand Paging
- Page Replacement Algorithms
- Allocation of Frames and Thrashing
- Demand Segmentation
OS Part-5
- Distributed Operating System: Introduction and Types
- Distributed OS: Design Issues
- Distributed OS: File System
- Distributed OS: Remote File Access
- Remote Procedure Call (RPC)
- Remote Method Invocation (RMI)
- Distributed Shared Memory
- Parallel Processing and Concurrent Programming
- Security and Threats Protection in Distributed OS
- Security Design Principles and Authentication in Distributed OS
- Sensor Network and Parallel OS
Cache Memory Organization in Operating Systems
Introduction
Cache memory is a small, high-speed memory that bridges the gap between the CPU and main memory (RAM). Its main purpose is to reduce data access time and improve system performance.
Why Cache Memory?
The CPU processes data much faster than RAM can supply it.
If the CPU fetched every item directly from RAM, the whole system would slow down.
Cache memory provides data access at speeds close to that of the CPU.
Memory Hierarchy in a System
CPU Registers → Cache Memory → Main Memory (RAM) → Secondary Storage (HDD/SSD)
Cache memory sits closest to the CPU and is the fastest memory after the CPU registers.
Its size is limited, so it stores only frequently used data.
How Cache Memory Works
Cache memory serves CPU requests by exploiting the principle of locality:
Principle of Locality
Temporal Locality: if the CPU accesses a piece of data, chances are it will access the same data again soon.
Spatial Locality: if the CPU accesses a memory location, nearby locations are likely to be accessed as well (see the sketch below).
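To make both forms of locality concrete, here is a minimal Python sketch; the array and loop are illustrative, not part of the original notes:

```python
# Summing an array: a simple illustration of both forms of locality.
data = list(range(1024))

total = 0
for i in range(len(data)):
    # Spatial locality: data[i] and data[i + 1] sit next to each other in
    # memory, so one fetched cache line serves several upcoming accesses.
    total += data[i]
    # Temporal locality: `total` and `i` are touched on every iteration,
    # so they stay in the fastest levels (registers / L1 cache).
```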
Cache Hit & Cache Miss
| Condition | Explanation |
|---|---|
| Cache Hit | The required data is present in cache memory |
| Cache Miss | The required data is not in cache memory, so it is fetched from RAM |
Diagram: Cache Working Process
CPU Request → Check in Cache
→ If Found (Cache Hit) → Use Data
→ If Not Found (Cache Miss) → Fetch from RAM → Store in Cache → Use Data
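The flow above can be modeled as a small lookup function. This is a minimal sketch, assuming one dict stands in for RAM and another for the cache; all names are illustrative:

```python
# Toy model of the lookup flow above (all names are illustrative).
ram = {addr: f"data@{addr}" for addr in range(100)}  # stand-in for main memory
cache = {}                                           # stand-in for cache memory

def read(addr):
    if addr in cache:            # Cache Hit: use the data directly
        return cache[addr], "hit"
    value = ram[addr]            # Cache Miss: fetch from RAM...
    cache[addr] = value          # ...store in cache for next time...
    return value, "miss"         # ...then use the data

print(read(42))  # ('data@42', 'miss') -- first access misses
print(read(42))  # ('data@42', 'hit')  -- repeat access hits (temporal locality)
```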
Cache Mapping Techniques
Mapping techniques determine where a block of main memory is stored in the cache:
| Mapping Technique | Working | Pros & Cons |
|---|---|---|
| Direct Mapping | Each main memory block maps to one fixed cache block | Fast but less flexible |
| Associative Mapping | Any memory block can be stored in any cache block | Flexible but complex |
| Set-Associative Mapping | Blocks are grouped into sets; a block maps to a fixed set but can occupy any slot within it | Balance between speed & flexibility |
Direct Mapping
Each memory block has exactly one fixed cache block it can occupy.
Simple and fast, but collisions are frequent when several blocks map to the same slot.
Example (assuming a 4-block cache, so cache block = block number mod 4):
Main Memory Block 5 → Cache Block 1
Main Memory Block 10 → Cache Block 2
Main Memory Block 15 → Cache Block 3
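A minimal sketch of the index calculation behind this example; the 4-block cache size is an assumption, and the `mod` rule is the standard direct-mapping formula:

```python
# Direct mapping: each memory block has exactly one possible cache slot,
# chosen by block_number mod number_of_cache_blocks.
NUM_CACHE_BLOCKS = 4  # assumed cache size for this sketch

def cache_index(block_number):
    return block_number % NUM_CACHE_BLOCKS

for block in (5, 10, 15):
    print(f"Main Memory Block {block} -> Cache Block {cache_index(block)}")
# Main Memory Block 5 -> Cache Block 1
# Main Memory Block 10 -> Cache Block 2
# Main Memory Block 15 -> Cache Block 3
```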
Fully Associative Mapping
Any memory block can be stored in any cache block.
Collisions are avoided, but searching the cache is slower, as the sketch below shows.
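A minimal sketch of why the search is slower, assuming a small list of (tag, data) slots; the sizes and values are illustrative:

```python
# Fully associative lookup: a block may live in ANY slot, so every slot's
# tag must be compared -- there is no fixed index to jump to.
cache_slots = [(5, "data@5"), (12, "data@12"), (7, "data@7"), None]

def lookup(tag):
    for slot in cache_slots:          # linear scan over all slots
        if slot is not None and slot[0] == tag:
            return slot[1]            # hit
    return None                       # miss: would fetch from RAM and place
                                      # the block in any free or victim slot

print(lookup(12))  # 'data@12' (hit)
print(lookup(99))  # None      (miss)
```

In real hardware the tag comparison is done in parallel across all slots, which is why fully associative caches are described as complex and expensive.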
Set-Associative Mapping
Memory blocks are divided into groups called sets.
Each set can hold multiple blocks, which reduces collisions (see the sketch below).
It is a fast and flexible combination of Direct and Associative Mapping.
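A minimal sketch of the set selection, assuming a 2-way cache with 4 sets; the parameters are illustrative:

```python
# Set-associative mapping: a block maps to one fixed SET, but may occupy
# any of the ways (slots) inside that set.
NUM_SETS = 4  # assumed number of sets
WAYS = 2      # assumed: 2-way set-associative, i.e. 2 slots per set

def set_index(block_number):
    return block_number % NUM_SETS

# Blocks 1 and 5 both map to set 1, but with 2 ways they can coexist --
# a direct-mapped cache would have forced one to evict the other.
print(set_index(1), set_index(5))  # 1 1
```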
Cache Replacement Policies
When the cache is full and new data must be stored, a replacement policy decides which block to evict:
| Policy | Working |
|---|---|
| FIFO (First In First Out) | The block that entered the cache earliest is replaced |
| LRU (Least Recently Used) | The block that has gone unused for the longest time is replaced |
| LFU (Least Frequently Used) | The block that has been accessed the fewest times is replaced |
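As one concrete example, here is a minimal LRU sketch built on Python's OrderedDict; the class name and capacity are assumptions of this sketch, not a standard API:

```python
from collections import OrderedDict

# Minimal LRU cache sketch: a hit moves the block to the "most recently
# used" end; when full, the block at the other end is evicted.
class LRUCache:
    def __init__(self, capacity=3):
        self.capacity = capacity
        self.blocks = OrderedDict()  # key -> data, ordered by recency

    def access(self, key, data=None):
        if key in self.blocks:
            self.blocks.move_to_end(key)      # refresh recency on a hit
            return self.blocks[key]
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)   # evict least recently used
        self.blocks[key] = data               # bring the new block in
        return data

cache = LRUCache(capacity=2)
cache.access("A", 1); cache.access("B", 2)
cache.access("A")          # "A" becomes most recently used
cache.access("C", 3)       # cache full -> "B" (least recently used) evicted
print(list(cache.blocks))  # ['A', 'C']
```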
Multi-Level Cache
Modern processors use multiple levels of cache:
| Cache Level | Characteristics |
|---|---|
| L1 Cache (Level 1) | Fastest but smallest (a few KBs) |
| L2 Cache (Level 2) | Slower than L1 but larger (a few MBs) |
| L3 Cache (Level 3) | Slowest but largest among the caches (several MBs) |
Diagram: Multi-Level Cache
CPU → L1 Cache → L2 Cache → L3 Cache → RAM → HDD/SSD
L1 is the fastest, followed by L2 and then L3.
This hierarchy optimizes CPU performance; a toy lookup sketch follows below.
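A toy sketch of this cascading lookup, assuming each level is a simple dict and a miss fills every level on the way back; real inclusion and promotion policies vary by processor:

```python
# Toy multi-level lookup: search the fastest level first, fall through on
# a miss, and fill the caches on the way back (names are illustrative).
l1, l2, l3 = {}, {}, {}
ram = {addr: f"data@{addr}" for addr in range(16)}

def read(addr):
    for level in (l1, l2, l3):
        if addr in level:
            return level[addr]              # hit at this level
    value = ram[addr]                       # every cache level missed
    l1[addr] = l2[addr] = l3[addr] = value  # fill L1/L2/L3 for next time
    return value

read(7)         # misses everywhere: fetched from RAM, cached at all levels
print(read(7))  # now served from L1 -- 'data@7'
```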
Advantages of Cache Memory
Fast data access improves CPU performance
Less dependency on RAM speeds up execution
Reduces system latency
Efficient power consumption compared to repeated RAM accesses
Disadvantages of Cache Memory
Limited size, so it cannot hold much data
Expensive compared to RAM
Frequent cache misses can degrade performance
Conclusion
Cache memory is a high-speed storage layer that reduces the speed gap between the CPU and RAM.
Direct, Associative, and Set-Associative mapping techniques decide where data is placed in the cache.
Multiple levels of cache (L1, L2, L3) optimize performance.
Cache replacement policies such as FIFO, LRU, and LFU keep the cache contents useful and improve efficiency.