Process Concept in OSs

Abstract

Dhamdhere (2006) explained that in the early days of computing, a system could execute only one program at a time, and that program held most of the system's resources and control during its execution. With advances in technology and modern computer architecture, multiple programs can be loaded into a computer system's memory and executed concurrently. This technique created the notion of a process, where a process in any modern computing system represents a unit of work that can be executed within a time-sharing system.

Although the operating system mainly executes user programs, it also executes processes of its own, some of which run outside the kernel; as such, any computing system can concurrently execute both user processes and operating-system processes. Just as a user might need to run multiple programs concurrently, the operating system needs to handle its own internal activities (Silberschatz and Galvin, 2009).

Process Concept

Milenkovic (1987) explained that a program becomes a process when an executable file is loaded into memory. A process is an active entity that carries certain information: a program counter that specifies the next instruction to be executed, allocated resources, a process stack that contains temporary data, a heap that may be allocated during the process's run time, and a data section that may contain global variables.
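As a rough, illustrative sketch (not drawn from any of the cited texts), the following C program shows where each kind of data typically lives in a process image: a global variable in the data section, memory allocated at run time on the heap, and a local variable on the stack.

```c
#include <stdio.h>
#include <stdlib.h>

int counter = 0;                 /* global variable: lives in the data section */

int main(void)
{
    int local = 42;              /* local variable: lives on the process stack */
    int *buffer = malloc(10 * sizeof(int));   /* allocated at run time: heap   */

    if (buffer == NULL)
        return 1;

    buffer[0] = local + counter;
    printf("data=%p stack=%p heap=%p\n",
           (void *)&counter, (void *)&local, (void *)buffer);

    free(buffer);
    return 0;                    /* the program counter tracks which instruction runs next */
}
```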

Silberschatz and Galvin (2009) explained that as a process executes, it changes state. A process may be in one of the following states (a minimal enumeration sketch follows the list):

  • New state – where the process is being created.
  • Running state – where the process instruction is being executed.
  • Waiting state – where the process is waiting for some event to occur.
  • Ready state – where the process is waiting to be assigned to a processor.
  • Terminated state – where the process has finished its execution.
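One minimal way to model these five states in code is a simple enumeration. The names below are illustrative only and do not correspond to any particular operating system's state set.

```c
/* Illustrative process states; real kernels use more detailed state sets. */
enum process_state {
    STATE_NEW,         /* the process is being created                   */
    STATE_RUNNING,     /* instructions are being executed                */
    STATE_WAITING,     /* waiting for some event (e.g. I/O completion)   */
    STATE_READY,       /* waiting to be assigned to a processor          */
    STATE_TERMINATED   /* the process has finished execution             */
};
```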

The operating system represents each process by a process control block (PCB), which contains information associated with that process, such as the process state, program counter, CPU registers, and CPU-scheduling information. In an operating system that supports multiprogramming, more than one process is kept in memory at the same time to maximize CPU utilization, with CPU time shared between the processes under the control of the process scheduler. On a single-processor system, only one process runs at a time while the remaining processes wait for CPU time (Stallings, 1991).
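A much-simplified, hypothetical PCB could be sketched as the C structure below, reusing the process-state enumeration from the previous sketch; the field names are assumptions chosen for illustration and do not match any real kernel's PCB layout.

```c
/* A simplified, hypothetical process control block (PCB). */
struct pcb {
    int                pid;             /* unique process identifier           */
    enum process_state state;           /* new, ready, running, waiting, ...   */
    unsigned long      program_counter; /* address of the next instruction     */
    unsigned long      registers[16];   /* saved CPU register contents         */
    int                priority;        /* CPU-scheduling information          */
    struct pcb        *next;            /* link used by the scheduler's queues */
};
```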

Process Creation and Termination

Milenkovic (1987) explained that since processes can execute concurrently within any modern computing system, a mechanism is required to handle their creation and termination. During its execution, a process may create several new processes via a create-process system call. The creating process is called the parent and the new processes are called its children; each new process may in turn create other processes. Most operating systems identify processes via a unique process identifier (PID). When a parent process creates a sub-process, the sub-process may share the parent's resources or may obtain its resources directly from the operating system. Input data may also be passed from the parent to the sub-process; for example, displaying the contents of a file on the screen might require the parent process to pass the file name to the sub-process.
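On UNIX-like systems this create-process operation is the fork() system call, and the child often follows it with an exec call to run a new program. The sketch below is a minimal example of the file-display scenario mentioned above: the parent passes a file name to its child, which runs the standard cat utility. Error handling is kept to a minimum, and the default file name is made up for illustration.

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(int argc, char *argv[])
{
    const char *filename = (argc > 1) ? argv[1] : "example.txt"; /* data passed to the child */
    pid_t pid = fork();                      /* create a sub-process */

    if (pid < 0) {
        perror("fork");
        return 1;
    } else if (pid == 0) {
        /* Child: replace its image with the 'cat' program to display the file. */
        execlp("cat", "cat", filename, (char *)NULL);
        perror("execlp");                    /* only reached if exec fails */
        _exit(1);
    }

    /* Parent: wait for the child, identified by its PID, to finish. */
    waitpid(pid, NULL, 0);
    printf("child %d finished\n", (int)pid);
    return 0;
}
```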

Silberschatz and Galvin (2009) stated that when a process creates a sub-process, two scenarios are possible in terms of its execution (a minimal sketch follows the list):

  • The parent process continues to execute concurrently with its sub-processes.
  • The parent process waits until some or all of its sub-processes have terminated.
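Assuming the same UNIX-style fork()/wait() interface as in the previous sketch, the two scenarios might look as follows; the child's "work" is only a sleep used for illustration.

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0)
        return 1;                       /* fork failed */

    if (pid == 0) {                     /* child: pretend to do some work */
        sleep(2);
        _exit(0);
    }

    /* Scenario 1: the parent continues to execute concurrently with its child. */
    printf("parent keeps working while child %d runs\n", (int)pid);

    /* Scenario 2: the parent waits until the child has terminated. */
    int status;
    waitpid(pid, &status, 0);
    printf("child terminated, parent resumes\n");
    return 0;
}
```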

In a modern computing system, multiprogramming is the standard approach to time-sharing among processes and to maximizing CPU utilization. It allows the user to interact with a program while it is running, since CPU time is shared among the running processes. To meet this objective, the process scheduler selects among the available processes. As processes enter the system, they are put into a job queue, and the processes that are ready and waiting to execute are kept on a list called the ready queue (Dhamdhere, 2006).

Silberschatz and Galvin (2009) stated that new processes are placed in the ready queue. Each process waits there until it is selected for execution, and once the process is allocated CPU time, one of the following events might occur (a minimal queue sketch follows the list):

  • The process could issue an I/O request, and as such it will be placed in an I/O queue.
  • The process might be moved from the CPU back to the ready queue as a result of an interrupt.
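A toy model of this queueing behaviour is sketched below: a process is dispatched from the ready queue, moves to an I/O queue when it issues an I/O request, and returns to the ready queue after an interrupt. The tiny PCB and FIFO queue types are simplified assumptions, not a real dispatcher.

```c
#include <stdio.h>

/* Minimal PCB and FIFO queue, used only to illustrate ready/I-O queueing. */
struct pcb   { int pid; struct pcb *next; };
struct queue { struct pcb *head, *tail; };

static void enqueue(struct queue *q, struct pcb *p)
{
    p->next = NULL;
    if (q->tail) q->tail->next = p; else q->head = p;
    q->tail = p;
}

static struct pcb *dequeue(struct queue *q)
{
    struct pcb *p = q->head;
    if (p) { q->head = p->next; if (!q->head) q->tail = NULL; }
    return p;
}

int main(void)
{
    struct queue ready = {NULL, NULL}, io = {NULL, NULL};
    struct pcb a = {1, NULL}, b = {2, NULL};

    enqueue(&ready, &a);                   /* new processes join the ready queue  */
    enqueue(&ready, &b);

    struct pcb *running = dequeue(&ready); /* scheduler dispatches a process      */
    enqueue(&io, running);                 /* it issues an I/O request            */

    running = dequeue(&ready);             /* the next process gets the CPU       */
    enqueue(&ready, running);              /* an interrupt returns it to ready    */

    printf("head of ready queue: pid %d\n", ready.head->pid);
    return 0;
}
```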

Silberschatz and Galvin (2009) also stated that a process continues this cycle until it terminates, at which point it is removed from all queues and its PCB and resources are deallocated. A parent process can also terminate one of its sub-processes for any of the following reasons (a minimal sketch follows the list):

  • The sub-process has exceeded its allotted resource usage.
  • The task assigned to the sub-process is no longer required.
  • The parent process itself is terminating, and the operating system does not allow a sub-process to continue once its parent has terminated.
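On UNIX-like systems a parent can terminate a sub-process explicitly by sending it a signal, as in the minimal sketch below (most error handling is omitted for brevity).

```c
#include <signal.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0)
        return 1;                      /* fork failed */

    if (pid == 0) {                    /* child: wait for signals indefinitely */
        for (;;)
            pause();
    }

    sleep(1);                          /* parent decides the child's task is no longer needed */
    kill(pid, SIGTERM);                /* terminate the sub-process */
    waitpid(pid, NULL, 0);             /* reap it so no zombie entry remains */
    printf("sub-process %d terminated by its parent\n", (int)pid);
    return 0;
}
```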

Application Process vs. Device Process

It is imperative for the scheduler to make a careful selection between processes. An application process that does not generate frequent I/O requests (called a CPU-bound process) spends most of its time doing computation. A process that issues frequent I/O requests, such as to a disk (called an I/O-bound process), spends more of its time doing I/O than computation. For a computing system to achieve its best performance, a mix of CPU-bound and I/O-bound processes should share the CPU time. A related operation is process swapping, where a process is swapped out and later swapped back in by the process scheduler to make the best use of CPU time (Silberschatz and Galvin, 2009).
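As a rough illustration only, the two loops below contrast the behaviours: the first spends its time computing (CPU bound), while the second spends most of its time blocked waiting on read requests (I/O bound). The input file name is hypothetical.

```c
#include <stdio.h>

int main(void)
{
    /* CPU-bound: a long computation with no I/O requests. */
    double sum = 0.0;
    for (long i = 1; i <= 50000000L; i++)
        sum += 1.0 / (double)i;

    /* I/O-bound: repeatedly issues read requests and waits for the device. */
    FILE *f = fopen("input.dat", "rb");      /* hypothetical input file */
    if (f) {
        char buf[4096];
        while (fread(buf, 1, sizeof buf, f) > 0)
            ;                                /* little computation per read */
        fclose(f);
    }

    printf("sum = %f\n", sum);
    return 0;
}
```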

Conclusion

A process is the unit of work that shares system resources during its execution. Each process is represented by its own process control block (PCB). When a process is not executing, it waits in the ready queue; if the process issues an I/O request, it is placed in an I/O queue. The job scheduler allows processes to share CPU time and is influenced by resource-allocation considerations (Silberschatz and Galvin, 2009).

Modern operating systems provide a mechanism for parent processes to create sub-processes; the parent process may wait for its children to terminate or may execute concurrently with them. Allowing processes to execute concurrently provides greater computation speed, information sharing, and user interaction with the system (Galli, 1999).

References

Dhamdhere, D. (2006) Operating Systems: A Concept-Based Approach. 2nd ed. London: McGraw Hill Higher Education.

Galli, D. (1999) Distributed Operating Systems: Concepts and Practice. 1st ed. NY: Prentice Hall.

Milenkovic, M. (1987) Operating Systems: Concepts and Design. 1st ed. London: McGraw Hill Higher Education.

Silberschatz, A. & Galvin, P. (2009) Operating System Concepts. 8th ed. NJ: John Wiley & Sons, Inc.

Stallings, W. (1991) Operating Systems: Concepts and Examples. 2nd ed. USA: Macmillan.
