Scheduling Algorithms in the Windows Operating System
Course name: Operating System
Instructor: Sohail Abbas
Name and ID:
1) Omran Ali U00033937
2) Mishari U00036725
A number of programs are often in memory at the same time, which permits the overlap of CPU and I/O work. CPU scheduling deals with the problem of selecting a process from the ready queue to be executed by the CPU. It is difficult and time-consuming to develop a CPU scheduling algorithm and to understand its impact, because doing so requires modifying and testing the OS and measuring the performance of real applications. As the processor is the most vital resource, CPU scheduling is central to accomplishing the OS design goals. The goal of CPU scheduling is to minimize the average turnaround time and average waiting time, so as to allow as many processes as possible to run at all times and to make the best use of the CPU.
Scheduling is a mechanism that permits one process to use the CPU while the execution of another process is on hold (in the waiting state) due to the unavailability of some resource such as I/O, thereby making full use of the CPU. The aim of CPU scheduling is to make the system efficient, fair and fast.
Whenever the CPU becomes idle, the OS must choose one of the processes in the ready queue to be executed. The selection is carried out by the short-term scheduler (or CPU scheduler). The scheduler selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them.
Another component involved in the CPU scheduling function is the dispatcher. The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. This function involves:
Switching to user mode
Jumping to the proper location in the user program to resume that program from where it left off last time.
The dispatcher must be as fast as possible, given that it is invoked during every process switch. The time taken by the dispatcher to stop one process and start another is known as the dispatch latency, which the figure below illustrates:
Types of CPU Scheduling
CPU scheduling decisions may take place under the following four circumstances:
When a process switches from the running state to the waiting state (for example, an I/O request, or an invocation of wait for the termination of one of its child processes).
When a process switches from the running state to the ready state (for example, when an interrupt occurs).
When a process switches from the waiting state to the ready state (for example, on completion of I/O).
When a process terminates.
In circumstances one and four, there is no choice in terms of scheduling: a new process (if one exists in the ready queue) must be selected for execution. There is a choice, however, in circumstances two and three.
When scheduling takes place only under circumstances one and four, we say the scheduling scheme is non-preemptive; otherwise the scheduling scheme is preemptive.
Under non-preemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases it either by terminating or by switching to the waiting state.
This scheduling method was used by Microsoft Windows 3.1 and also by the Apple Macintosh OS.
It is the only method that can be used on certain hardware platforms, because it does not require the special hardware (for example, a timer) needed for preemptive scheduling.
Under preemptive scheduling, tasks are usually assigned priorities. Occasionally it is necessary to run a certain task that has a higher priority before another task, even though that other task is still running. The running task is therefore interrupted for some time and resumed later, once the higher-priority task has finished its execution.
CPU Scheduling: Scheduling Criteria
1) CPU Utilization
To make the best use of the CPU and not waste any CPU cycles, the CPU should be working most of the time (ideally 100 percent of the time). In a real system, CPU usage should range from 40 percent (lightly loaded) to 90 percent (heavily loaded).
2) Throughput
It is the total number of processes completed per unit time or, put another way, the total amount of work done in a unit of time. This may range from ten processes per second to one process per hour, depending on the specific processes.
3) Turnaround Time
It is the amount of time taken to execute a particular process, i.e. the interval from the time of submission of the process to the time of its completion (wall-clock time).
4) Waiting Time
It is the sum of the periods a process spends waiting in the ready queue to acquire control of the CPU.
5) Load Average
It is the average number of processes in the ready queue waiting for their turn to get the CPU.
6) Response Time
It is the amount of time from when a request was submitted until the first response is produced. Remember, it is the time until the first response, not the completion of process execution (the final response).
In general, CPU utilization and throughput are maximized while the other factors are minimized for proper optimization.
To decide which process to execute first and which to execute last, so as to achieve maximum CPU utilization, computer scientists have defined some algorithms:
First Come First Serve (FCFS) Scheduling
Shortest-Job-First (SJF) Scheduling
Priority Scheduling
Round Robin (RR) Scheduling
Multilevel Queue Scheduling
Multilevel Feedback Queue Scheduling
First Come First Serve Scheduling
In the First Come First Serve scheduling algorithm, as the name suggests, the process that arrives first gets executed first; in other words, the process that requests the CPU first gets the CPU allocated first.
First Come First Serve works just like a FIFO (First In First Out) queue data structure, where the data element added to the queue first is the one that leaves the queue first.
This is used in Batch Systems.
It is easy to understand and implement programmatically using a queue data structure: a new process enters through the tail of the queue, and the scheduler selects a process from the head of the queue.
A perfect real-life example of FCFS scheduling is buying tickets at a ticket counter.
Calculating Average Waiting Time
For every scheduling algorithm, the average waiting time is a crucial parameter for evaluating its performance.
AWT, or average waiting time, is the average of the waiting times of the processes in the queue as they wait for the scheduler to pick them for execution.
The lower the average waiting time, the better the scheduling algorithm.
Consider the processes P1, P2, P3 and P4, arriving for execution in that order, all with arrival time zero and with burst times of 21 ms, 3 ms, 6 ms and 2 ms respectively. Let us find the average waiting time using the FCFS scheduling algorithm.
For the above processes, P1 is supplied with the CPU first, hence the waiting time for P1 is zero.
P1 needs 21 ms to complete, so the waiting time for P2 is 21 ms.
Similarly, the waiting time for P3 is the execution time of P1 plus that of P2, i.e. (21 + 3) ms = 24 ms.
For P4 it is the sum of the execution times of P1, P2 and P3, i.e. (21 + 3 + 6) ms = 30 ms.
The average waiting time is therefore (0 + 21 + 24 + 30) / 4 = 18.75 ms.
The Gantt chart above represents the waiting time for each process.
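The FCFS calculation above can be sketched in a few lines of Python; the burst times 21, 3, 6 and 2 ms are the values from the worked example:

```python
def fcfs_waiting_times(burst_times):
    """Waiting time of each process when all arrive at time 0
    and are served strictly in arrival order."""
    waits = []
    elapsed = 0
    for burst in burst_times:
        waits.append(elapsed)   # each process waits for all earlier bursts
        elapsed += burst
    return waits

# P1=21, P2=3, P3=6, P4=2, all arriving at t=0
waits = fcfs_waiting_times([21, 3, 6, 2])
avg = sum(waits) / len(waits)
print(waits, avg)  # [0, 21, 24, 30] 18.75
```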
Problems with FCFS Scheduling
Below are a few problems with the FCFS scheduling algorithm:
It is a non-preemptive algorithm, which means that process priority does not matter.
If a process with a very low priority is being executed, such as a daily routine backup process that takes a long time, and suddenly a high-priority process arrives, such as an interrupt to avoid a system crash, the high-priority process has to wait; in such a case the system may crash simply because of improper process scheduling.
The average waiting time is not optimal.
Parallel utilization of resources may not be possible, which leads to the convoy effect and hence poor utilization of resources (CPU, I/O etc.).
What is Convoy Effect?
The convoy effect is a situation where several processes that need a resource for only a short time are blocked by one process holding that resource for a very long time.
This leads to poor utilization of resources and therefore poor performance.
Shortest Job First(SJF) Scheduling
Shortest Job First scheduling runs the process with the shortest burst time or duration first.
This is the best approach to minimize waiting time.
This is used in Batch Systems.
It is of two types:
Non Pre-emptive SJF
Pre-emptive SJF
To implement it successfully, the burst time/duration of the processes must be known to the processor in advance, which is practically not possible all the time.
This scheduling algorithm is ideal if all the jobs/processes are available at the same time (either the arrival time is zero for all, or the arrival time is the same for all).
Non Pre-emptive Shortest Job First
Consider the same processes in the ready queue for execution, all with arrival time 0 and with the burst times given earlier (21 ms, 3 ms, 6 ms and 2 ms).
As you can see in the Gantt chart above, process P4 is picked up first because it has the shortest burst time, then P2, followed by P3 and lastly P1.
We scheduled the same set of processes using the FCFS algorithm above and got an average waiting time of 18.75 ms, whereas with SJF the average waiting time is 4.5 ms.
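A minimal sketch of non-preemptive SJF with all arrivals at time zero, run on the same burst times as the FCFS example, reproduces the 4.5 ms average:

```python
def sjf_waiting_times(burst_times):
    """Non-preemptive SJF when every process arrives at t=0:
    simply serve the processes in ascending burst-time order."""
    order = sorted(range(len(burst_times)), key=lambda i: burst_times[i])
    waits = [0] * len(burst_times)
    elapsed = 0
    for i in order:
        waits[i] = elapsed
        elapsed += burst_times[i]
    return waits

# Same processes as the FCFS example: P1=21, P2=3, P3=6, P4=2
waits = sjf_waiting_times([21, 3, 6, 2])
print(waits, sum(waits) / len(waits))  # [11, 2, 5, 0] 4.5
```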
Problem with Non Pre-emptive SJF
If the arrival times of the processes differ, meaning that not all processes are available in the ready queue at time zero and some jobs arrive later, then a process with a short burst time may have to wait for the current process's execution to complete. In non-preemptive SJF, the arrival of a process with a short duration does not halt the currently executing job in order to run the short job first.
This results in the problem of starvation, where a shorter process has to wait for a long time until the current longer process finishes. This happens when shorter jobs keep arriving, but it can be solved using the concept of aging.
Pre-emptive Shortest Job First
In preemptive Shortest Job First scheduling, jobs are put into the ready queue as they arrive, but when a process with a shorter burst time arrives, the existing process is preempted (removed from execution) and the shorter job is executed first.
As you can see in the Gantt chart above, P1 arrives first, so its execution starts immediately; but after only 1 ms, process P2 arrives with a burst time of 3 ms, which is smaller than P1's remaining time, so process P1 (1 ms done, 20 ms left) is preempted and process P2 is executed.
As P2 is executing, after 1 ms P3 arrives, but it has a burst time greater than P2's remaining time, so the execution of P2 continues. After another millisecond, P4 arrives with a burst time of 2 ms; by then P2 has only 1 ms left, which is still the shortest remaining time, so P2 continues and finishes (a 2 ms job cannot preempt a job with only 1 ms remaining).
After the completion of P2, process P4 is picked up and finishes, then P3 is executed and finally P1.
Preemptive SJF is also known as Shortest Remaining Time First, because at any given point in time, the job with the shortest remaining time is executed first.
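A simple unit-time simulation of Shortest Remaining Time First illustrates this; the arrival times (0, 1, 2, 3) and burst times (21, 3, 6, 2 ms) are taken from the worked example above:

```python
def srtf_schedule(arrival, burst):
    """Shortest Remaining Time First: each millisecond, run the arrived,
    unfinished process with the least remaining burst time."""
    n = len(burst)
    remaining = list(burst)
    finish = [0] * n
    time = 0
    done = 0
    while done < n:
        ready = [i for i in range(n) if arrival[i] <= time and remaining[i] > 0]
        if not ready:
            time += 1
            continue
        i = min(ready, key=lambda j: remaining[j])  # shortest remaining time
        remaining[i] -= 1
        time += 1
        if remaining[i] == 0:
            finish[i] = time
            done += 1
    return finish

arrivals, bursts = [0, 1, 2, 3], [21, 3, 6, 2]
finish = srtf_schedule(arrivals, bursts)
waits = [f - a - b for f, a, b in zip(finish, arrivals, bursts)]
print(finish, waits)  # [32, 4, 12, 6] [11, 0, 4, 1]
```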
Priority Scheduling
A priority is assigned to each process.
The process with the highest priority is executed first, and so on.
Processes with the same priority are executed in FCFS manner.
Priority can be decided based on memory requirements, time requirements or any other resource requirement.
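A minimal sketch of non-preemptive priority scheduling with all processes arriving at time zero; the burst times and priorities here are made-up illustrative values (lower number = higher priority, ties broken FCFS):

```python
def priority_schedule(processes):
    """Non-preemptive priority scheduling, all arriving at t=0.
    Lower priority number = higher priority."""
    # sorted() is stable, so equal priorities keep their FCFS order
    order = sorted(processes, key=lambda p: p["priority"])
    elapsed = 0
    for p in order:
        p["waiting"] = elapsed
        elapsed += p["burst"]
    return order

procs = [
    {"name": "P1", "burst": 10, "priority": 3},
    {"name": "P2", "burst": 1,  "priority": 1},
    {"name": "P3", "burst": 2,  "priority": 3},
    {"name": "P4", "burst": 5,  "priority": 2},
]
for p in priority_schedule(procs):
    print(p["name"], p["waiting"])  # P2 0, P4 1, P1 6, P3 16
```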
Round Robin Scheduling
A fixed time, called the time quantum, is allotted to each process for execution.
Once a process has executed for the given time period, it is preempted and another process executes for its time period.
Context switching is used to save the states of preempted processes.
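The Round Robin mechanism can be sketched with a simple queue; the burst times and the 4 ms quantum here are illustrative values, not from the text:

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Round Robin with all processes arriving at t=0: each process runs
    for at most one quantum, then is preempted and re-queued."""
    queue = deque((i, b) for i, b in enumerate(burst_times))
    time = 0
    completion = [0] * len(burst_times)
    while queue:
        i, rem = queue.popleft()
        run = min(quantum, rem)
        time += run
        if rem > run:
            queue.append((i, rem - run))  # preempted: back of the queue
        else:
            completion[i] = time          # finished within this slice
    return completion

# Three illustrative processes with a 4 ms time quantum
print(round_robin([10, 4, 6], 4))  # [20, 8, 18]
```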
Multilevel Queue Scheduling
Another class of scheduling algorithms has been created for situations in which processes are easily classified into different groups.
For example, a common division is made between foreground (or interactive) processes and background (or batch) processes. These two types of processes have different response-time requirements and therefore different scheduling needs. In addition, foreground processes may have priority over background processes.
A multilevel queue scheduling algorithm partitions the ready queue into several separate queues. The processes are permanently assigned to one queue, usually based on some property of the process, such as memory size, process priority, or process type. Each queue has its own scheduling algorithm.
For example, separate queues might be used for foreground and background processes. The foreground queue might be scheduled by the Round Robin algorithm, while the background queue is scheduled by an FCFS algorithm.
In addition, there must be scheduling among the queues, which is commonly implemented as fixed-priority preemptive scheduling. For example, the foreground queue may have absolute priority over the background queue.
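A toy sketch of the two-queue example: the foreground queue has absolute priority, so a background process is only returned when no foreground process is ready (the process names are invented for illustration):

```python
from collections import deque

class MultilevelQueue:
    """Two-level sketch: the foreground queue has absolute priority
    over the background queue."""
    def __init__(self):
        self.foreground = deque()   # interactive processes (Round Robin)
        self.background = deque()   # batch processes (FCFS)

    def add(self, name, interactive):
        (self.foreground if interactive else self.background).append(name)

    def next_to_run(self):
        # Fixed-priority: background runs only when foreground is empty
        if self.foreground:
            return self.foreground.popleft()
        if self.background:
            return self.background.popleft()
        return None

mlq = MultilevelQueue()
mlq.add("editor", interactive=True)
mlq.add("backup", interactive=False)
mlq.add("shell", interactive=True)
print(mlq.next_to_run())  # editor
print(mlq.next_to_run())  # shell
print(mlq.next_to_run())  # backup
```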
Let us consider an example of a multilevel queue scheduling algorithm with five queues:
System Processes
Interactive Processes
Interactive Editing Processes
Batch Processes
Student Processes
Each queue has absolute priority over lower-priority queues. No process in the batch queue, for example, can run unless the queues for system processes, interactive processes, and interactive editing processes are all empty. If an interactive editing process enters the ready queue while a batch process is running, the batch process is preempted.
Multilevel Feedback Queue Scheduling
In a multilevel queue scheduling algorithm, processes are permanently assigned to a queue on entry to the system and do not move between queues. This setup has the advantage of low scheduling overhead, but the disadvantage of being inflexible.
Multilevel feedback queue scheduling, however, permits a process to move between queues. The idea is to separate processes with different CPU-burst characteristics. If a process uses too much CPU time, it will be moved to a lower-priority queue. Similarly, a process that waits too long in a lower-priority queue can be moved to a higher-priority queue. This form of aging prevents starvation.
An example of a multilevel feedback queue can be seen in the below figure.
In general, a multilevel feedback queue scheduler is defined by the following parameters:
The number of queues.
The scheduling algorithm for each queue.
The method used to determine when to upgrade a process to a higher-priority queue.
The method used to determine when to demote a process to a lower-priority queue.
The method used to determine which queue a process will enter when that process needs service.
The definition of a multilevel feedback queue scheduler makes it the most general CPU scheduling algorithm: it can be configured to match a particular system under design. Unfortunately, it requires some means of selecting values for all the parameters to define the best scheduler, so although the multilevel feedback queue is the most general scheme, it is also the most complex.
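The parameters above can be sketched as a toy class: a fixed number of queues, a pick that scans from the highest-priority queue, and demotion/promotion moves (quantum handling and the actual aging policy are left out; all names are illustrative):

```python
from collections import deque

class MLFQ:
    """Sketch of a multilevel feedback queue: a process that uses its
    full time slice is demoted; aging promotes long-waiting processes."""
    def __init__(self, levels=3):
        self.queues = [deque() for _ in range(levels)]

    def add(self, proc, level=0):
        self.queues[level].append(proc)   # new processes enter at the top

    def pick(self):
        # scan from highest priority (level 0) downwards
        for level, q in enumerate(self.queues):
            if q:
                return level, q.popleft()
        return None

    def demote(self, proc, level):
        # used its whole quantum -> likely CPU-bound, move down one level
        self.queues[min(level + 1, len(self.queues) - 1)].append(proc)

    def promote(self, proc, level):
        # waited too long (aging) -> move up one level, prevents starvation
        self.queues[max(level - 1, 0)].append(proc)

mlfq = MLFQ()
mlfq.add("A")
level, proc = mlfq.pick()
mlfq.demote(proc, level)   # A used its full slice, drops to level 1
print(mlfq.pick())  # (1, 'A')
```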
Windows schedules threads using a priority-based, preemptive scheduling algorithm.
The scheduler ensures that the highest priority thread will always run.
The portion of the Windows kernel that handles scheduling is called the dispatcher.
The dispatcher uses a 32-level priority scheme to determine the order of thread execution.
Priorities are divided into two classes.
The variable class contains threads having priorities 1 to 15.
The real-time class contains threads with priorities ranging from 16 to 31.
There is also a thread running at priority 0 that is used for memory management.
The dispatcher uses a queue for each scheduling priority and traverses the set of queues from highest to lowest until it finds a thread that is ready to run.
If no ready thread is found, the dispatcher executes a special thread called the idle thread.
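The dispatcher's queue traversal can be sketched as follows; the 32 priority levels and the idle-thread fallback are from the text, while the thread names are invented for illustration:

```python
from collections import deque

IDLE_THREAD = "idle"

def pick_thread(queues):
    """Sketch of the Windows dispatcher: one ready queue per priority
    (31 = highest), traversed from highest to lowest; the idle thread
    runs when every queue is empty."""
    for priority in range(31, -1, -1):
        if queues[priority]:
            return queues[priority].popleft()
    return IDLE_THREAD

queues = {p: deque() for p in range(32)}
queues[8].append("word_processor")   # variable class (1-15)
queues[24].append("audio_mixer")     # real-time class (16-31)
print(pick_thread(queues))  # audio_mixer
print(pick_thread(queues))  # word_processor
print(pick_thread(queues))  # idle
```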
Priorities in all classes except the real-time priority class are variable.
This means that the priority of a thread in one of these classes can change.
The initial priority of a thread is typically the base priority of the process that the thread belongs to.
When a user is running an interactive program, the system needs to provide especially good performance for that process.
The priority of each thread is based on the priority class it belongs to and its relative priority within that class.
Within each of the priority classes is a relative priority as shown below.
Processor Scheduling in Windows 10
Depending on how you use your Windows 10 computer, you can configure processor scheduling to give you the best performance either while using programs or for background processes. You can make this adjustment easily via the Control Panel.
Task Scheduler in windows 10
The Task Scheduler in Windows 10 gives you a lot of power: it can automate any app, including maintenance tasks, alarm clocks, and more. In Windows 10, Battery Saver mode modifies the Task Scheduler to use less energy.
The Task Scheduler in Windows 10 executes scripts or programs at specific times or when certain events occur (we refer to these as triggers or conditions). It is helpful as a maintenance or automation tool.
When Battery Saver mode is on, a scheduled task is started only if:
•The task is set to trigger when the computer is idle.
•The task is set to run during automatic maintenance.
•The task is not set to run only when the user is logged on.
How to open Task Scheduler in Windows 10
Open Task Scheduler via Computer Management:
• Step 1: Open Computer Management.
• Step 2: Click Task Scheduler.