Multiple Processor Scheduling in Operating System
Introduction
Multiple processor scheduling is the mechanism by which an operating system distributes work among more than one processor. When a system has multiple CPUs, they must be used efficiently so that overall system performance is maximized.
Why Is Multiple Processor Scheduling Needed?
Improved Performance – more than one process can execute in parallel.
Higher CPU Utilization – all processors do useful work.
Increased Throughput – more processes complete in the same amount of time.
Reduced Waiting Time – each process gets a CPU sooner.
Types of Multiprocessor Systems
To understand multiple processor scheduling, we first need to look at the types of multiprocessor systems:
1. Asymmetric Multiprocessing (AMP)
- A single master processor handles scheduling and task assignment.
- The remaining slave processors only execute the tasks assigned to them.
- Example: early UNIX systems.
Diagram:
      Master Processor (Handles Scheduling & I/O)
                     |
       ---------------------------
       |            |            |
     CPU1         CPU2         CPU3   (Execute Processes)
Advantages: Simple design, low scheduling overhead.
Disadvantages: The master processor can become overloaded.
2. Symmetric Multiprocessing (SMP)
- All processors perform scheduling and execution independently.
- There is a common ready queue from which the processors pick tasks.
- Example: Windows, Linux, macOS.
Diagram:
            Shared Ready Queue
       ---------------------------
       |            |            |
     CPU1         CPU2         CPU3   (All CPUs Perform Scheduling)
Advantages: Better performance, no single point of failure.
Disadvantages: Higher design complexity.
Multiple Processor Scheduling Approaches
1. Load Sharing
- All CPUs execute processes taken from one common ready queue.
- Whichever CPU becomes free picks up the next process.
- Problem: with multiple CPUs accessing the same queue, synchronization overhead can grow (a minimal simulation of this scheme follows below).
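The shared-queue idea can be sketched in a few lines of Python. This is only a toy simulation, not an OS implementation: worker threads stand in for CPUs, queue.Queue provides the locking whose overhead is mentioned above, and the process names and burst times are illustrative.

```python
import queue
import threading
import time

# Shared ready queue: (process_name, burst_time_in_seconds) pairs.
ready_queue = queue.Queue()
for name, burst in [("P1", 0.10), ("P2", 0.05), ("P3", 0.08), ("P4", 0.12)]:
    ready_queue.put((name, burst))

def cpu(cpu_id):
    """Each 'CPU' repeatedly picks the next process from the common queue."""
    while True:
        try:
            name, burst = ready_queue.get_nowait()
        except queue.Empty:
            return  # queue is empty: nothing left to run
        print(f"CPU{cpu_id} runs {name} for {burst * 1000:.0f} ms")
        time.sleep(burst)  # stand-in for actually executing the process

# Two "CPUs" pull work from the single shared queue (load sharing).
workers = [threading.Thread(target=cpu, args=(i,)) for i in (1, 2)]
for w in workers:
    w.start()
for w in workers:
    w.join()
```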
2. Load Balancing
- The workload is distributed evenly so that all processors stay busy.
- There are two kinds of load balancing:
- Push Migration: a scheduler pushes processes from overloaded CPUs to less busy ones.
- Pull Migration: a CPU that becomes free pulls a waiting process from another queue itself (a toy sketch of this idea follows below).
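Pull migration can be illustrated with the same kind of toy simulation: in the sketch below each CPU owns its own run queue (a deque), and a CPU that finds its queue empty pulls a process from the most loaded one. The queue contents are made-up illustrative data.

```python
from collections import deque

# Per-CPU run queues: CPU0 is overloaded, CPU1 is idle (made-up data).
run_queues = {0: deque(["P1", "P2", "P3", "P4"]), 1: deque()}

def pull_migration(idle_cpu):
    """An idle CPU pulls one process from the busiest run queue."""
    busiest = max(run_queues, key=lambda c: len(run_queues[c]))
    if busiest != idle_cpu and run_queues[busiest]:
        process = run_queues[busiest].pop()       # take from the tail
        run_queues[idle_cpu].appendleft(process)
        print(f"CPU{idle_cpu} pulled {process} from CPU{busiest}")

# CPU1 notices its own queue is empty and balances the load itself.
if not run_queues[1]:
    pull_migration(1)

print(run_queues)  # CPU0 keeps P1-P3, CPU1 now has P4
```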
3. Processor Affinity
- Once a process has executed on a particular processor, it prefers to run on that same processor again (its data may still be warm in that processor's cache).
- There are two types of affinity:
- Soft Affinity: the OS tries to keep the process on the same CPU, but does not guarantee it.
- Hard Affinity: the process may execute only on a specific CPU (see the example after this list).
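Hard affinity can actually be requested from the operating system. The small example below assumes Linux, where Python exposes the kernel's affinity calls as os.sched_getaffinity / os.sched_setaffinity (these functions are not available on Windows or macOS):

```python
import os

pid = 0  # 0 means "the calling process"

# Which CPUs is this process currently allowed to run on?
print("Allowed CPUs before:", os.sched_getaffinity(pid))

# Hard affinity: restrict the process to CPU 0 only.
os.sched_setaffinity(pid, {0})
print("Allowed CPUs after :", os.sched_getaffinity(pid))

# Soft affinity is only a scheduler preference; there is no separate
# call for it -- the OS applies it on its own when picking a CPU.
```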
4. Multi-core Processor Scheduling
- A multi-core processor has multiple cores on a single chip.
- It offers a high degree of thread-level parallelism.
- Example: modern multi-core processors from Intel & AMD (a small per-core worker sketch follows below).
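A common way to use all the cores is to start one worker per core. The sketch below uses Python's standard multiprocessing module; square() is just a placeholder for real CPU-bound work:

```python
import multiprocessing as mp

def square(n):
    """Placeholder for real CPU-bound work."""
    return n * n

if __name__ == "__main__":
    cores = mp.cpu_count()                  # number of cores the OS reports
    print(f"Running a pool of {cores} workers, one per core")
    with mp.Pool(processes=cores) as pool:  # one worker process per core
        results = pool.map(square, range(10))
    print(results)
```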
Example of Multiple Processor Scheduling
| Process | Arrival Time (ms) | Burst Time (ms) |
|---------|-------------------|-----------------|
| P1      | 0                 | 10              |
| P2      | 1                 | 5               |
| P3      | 2                 | 8               |
| P4      | 3                 | 12              |
Assume: the system has 2 CPUs.
Gantt Chart (Load Sharing Approach)
CPU1: |  P1  |  P3  |
      0      10     18
CPU2: |  P2  |  P4  |
      1      6      18
-> Turnaround Time = Completion Time − Arrival Time: P1 = 10 − 0 = 10, P2 = 6 − 1 = 5, P3 = 18 − 2 = 16, P4 = 18 − 3 = 15
-> Average Turnaround Time = (10 + 5 + 16 + 15) / 4 = 11.5 ms
-> CPU Utilization Improved! Both CPUs stay busy for almost the entire 18 ms.
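To double-check the arithmetic, here is a tiny Python snippet (illustrative only) that recomputes the turnaround times from the arrival and completion times shown in the Gantt chart:

```python
# (arrival, completion) times in ms, read off the Gantt chart above.
schedule = {"P1": (0, 10), "P2": (1, 6), "P3": (2, 18), "P4": (3, 18)}

turnaround = {p: done - arrived for p, (arrived, done) in schedule.items()}
print(turnaround)                                  # {'P1': 10, 'P2': 5, 'P3': 16, 'P4': 15}
print(sum(turnaround.values()) / len(turnaround))  # 11.5
```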
Conclusion
- Multiple processor scheduling improves performance and reduces the system's response time.
- SMP (Symmetric Multiprocessing) is the approach most widely used in modern operating systems.
- Load balancing, processor affinity and thread scheduling are the key factors for efficient scheduling.