Threads

Posted by Anisha Jaiswar at May 14, 2021

Multithreading Models

There are two types of threads:

  1. User threads
  2. Kernel threads

Kernel threads are supported and managed directly by the operating system.

User threads are supported above the kernel and are managed without kernel support. There are three common ways of establishing a relationship between user threads and kernel threads:

  1. Many-to-one model
  2. One-to-one model
  3. Many-to-many model.

System calls

System calls provide an interface to the services made available by an operating system.
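
For instance, a program typically reaches a system call such as write() through a thin C library wrapper; a minimal sketch on a POSIX system:

    #include <unistd.h>

    int main(void) {
        /* write() is a library wrapper that traps into the kernel's
           write system call; file descriptor 1 is standard output. */
        const char msg[] = "hello via a system call\n";
        write(1, msg, sizeof(msg) - 1);
        return 0;
    }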

Computer system operation

For a computer to start running, for instance when it is powered up or rebooted, it needs an initial program to run. This initial program, or bootstrap program, tends to be simple. Typically it is stored in read-only memory (ROM), known by the general term firmware, within the computer hardware. It initializes all aspects of the system, from CPU registers to device controllers to memory contents. The bootstrap program must know how to load the operating system and how to start executing that system. To accomplish this goal, the bootstrap program must locate the operating system kernel and load it into memory. The operating system then starts executing the first process, such as init, and waits for some event to occur.

User level threads versus kernel level threads

User level threads are threads that are visible to the programmer and unknown to the kernel. The operating system kernel supports and manages kernel level threads.

  1. User level threads are faster to create and manage than kernel level threads.
  2. There are three models that relate user threads and kernel threads:
     1. One-to-one model: maps each user thread to a corresponding kernel thread.
     2. Many-to-many model: multiplexes many user threads onto a smaller or equal number of kernel threads.
     3. Many-to-one model: maps many user threads to a single kernel thread.

A thread library provides the programmer with an API for creating and managing threads.
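
For example, Pthreads (the POSIX thread API) is such a library. A minimal sketch, assuming a POSIX system and compilation with -pthread:

    #include <pthread.h>
    #include <stdio.h>

    /* Function executed by the new thread; it receives one void* argument. */
    static void *worker(void *arg) {
        int id = *(int *)arg;
        printf("hello from thread %d\n", id);
        return NULL;
    }

    int main(void) {
        pthread_t tid;
        int id = 1;

        /* Create a thread running worker(), then wait for it to finish. */
        pthread_create(&tid, NULL, worker, &id);
        pthread_join(tid, NULL);
        return 0;
    }

Here pthread_join() makes the creating thread wait until the new thread terminates before main() returns.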

Synchronization

A race condition arises when several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the accesses take place. For example, suppose two operations share a variable Result and use local variables A and B as follows:

Operation 1:    A = Result
                A = A + 1
                Result = A

Operation 2:    B = Result
                B = B - 1
                Result = B

Now suppose Result = 4 initially. If operation 1 runs to completion and then operation 2 runs, the sequence is

A = Result = 4
A = A + 1 = 5
Result = A = 5
B = Result = 5
B = B - 1 = 4
Result = B = 4

and the final value of Result is 4. However, if the two operations run concurrently and their statements interleave, for example

A = Result = 4
B = Result = 4
A = A + 1 = 5
B = B - 1 = 3
Result = B = 3
Result = A = 5

then the final value of Result is 5; if the last two assignments occur in the opposite order, it is 3. The outcome depends on the particular order in which the accesses take place, which is exactly the race condition described above.

To guard against race conditions such as this, we need to ensure that only one process at a time can be manipulating the variable Result. To make such a guarantee, we require that the processes be synchronized in some way.
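
As a rough illustration (not part of the original example), the following C sketch runs the two operations above in separate threads with no synchronization; because the update of the shared variable is unprotected, the output can vary from run to run:

    #include <pthread.h>
    #include <stdio.h>

    /* Shared data: the "Result" variable from the example above. */
    static int result = 4;

    /* Operation 1: read Result, add 1, write it back. */
    static void *operation1(void *arg) {
        int a = result;      /* A = Result */
        a = a + 1;           /* A = A + 1  */
        result = a;          /* Result = A */
        return NULL;
    }

    /* Operation 2: read Result, subtract 1, write it back. */
    static void *operation2(void *arg) {
        int b = result;      /* B = Result */
        b = b - 1;           /* B = B - 1  */
        result = b;          /* Result = B */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;

        /* Run both operations concurrently, deliberately unsynchronized. */
        pthread_create(&t1, NULL, operation1, NULL);
        pthread_create(&t2, NULL, operation2, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        /* Depending on the interleaving, this may print 3, 4, or 5. */
        printf("Result = %d\n", result);
        return 0;
    }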

Critical section

The critical section is the segment of code in which a process modifies common variables, updates a table, writes a file, and so on. To avoid race conditions, it is necessary that when one process is executing in its critical section, no other process is allowed to execute in its critical section at the same time.

The critical section problem is to design a protocol that the processes can use to cooperate. A solution to the critical section problem must satisfy the following three requirements.

Mutual exclusion 

If process Pi is executing in its critical section, then no other processes can be executing in their critical sections.

Progress

If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in the decision on which will enter its critical section next.

Bounded waiting

There exists a bound, or limit, on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
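
One classical protocol that meets all three requirements for two processes is Peterson's solution. A minimal sketch, using C11 atomics so that the required ordering actually holds on modern hardware (the thread bodies and iteration counts are illustrative assumptions):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Peterson's solution for two threads with ids 0 and 1. */
    static atomic_bool flag[2];   /* flag[i]: thread i wants to enter      */
    static atomic_int  turn;      /* whose turn it is to defer             */
    static int shared = 0;        /* data protected by the protocol        */

    static void enter_section(int i) {
        int other = 1 - i;
        atomic_store(&flag[i], true);   /* announce intent                  */
        atomic_store(&turn, other);     /* give the other thread priority   */
        while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
            ;                           /* busy-wait; waiting is bounded    */
    }

    static void exit_section(int i) {
        atomic_store(&flag[i], false);  /* leave the critical section       */
    }

    static void *run(void *arg) {
        int i = *(int *)arg;
        for (int k = 0; k < 100000; k++) {
            enter_section(i);
            shared++;                   /* critical section                 */
            exit_section(i);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t0, t1;
        int id0 = 0, id1 = 1;
        pthread_create(&t0, NULL, run, &id0);
        pthread_create(&t1, NULL, run, &id1);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);
        printf("shared = %d\n", shared);   /* expected: 200000 */
        return 0;
    }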

Semaphores

A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic operations, wait() and signal().

The wait() operation was originally termed P and the signal() operation was originally called V. The modification made to S by the wait() operation is S--, and by the signal() operation it is S++.
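
POSIX semaphores expose exactly this interface: sem_wait() corresponds to wait()/P and sem_post() to signal()/V. A minimal sketch, assuming an unnamed semaphore initialized to 1 and used to protect a shared counter:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t s;            /* semaphore S                              */
    static int counter = 0;    /* shared data protected by S               */

    static void *worker(void *arg) {
        for (int i = 0; i < 100000; i++) {
            sem_wait(&s);      /* wait(S): block until S > 0, then S--     */
            counter++;         /* critical section                         */
            sem_post(&s);      /* signal(S): S++                           */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        sem_init(&s, 0, 1);    /* initial value 1: a binary semaphore      */
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %d\n", counter);  /* expected: 200000 */
        sem_destroy(&s);
        return 0;
    }

Because the semaphore is initialized to 1, it behaves exactly like the binary semaphore (mutex) described in the next section.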

Mutex

The value of a binary semaphore can range only between 0 and 1. A binary semaphore is also known as a mutex; thus a mutex is an integer variable that has only two possible values, 0 or 1.
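
Thread libraries usually provide a mutex type directly. As a sketch of how a mutex removes the race shown earlier, the following uses a Pthreads mutex around each update of Result (the variable names follow the example above):

    #include <pthread.h>
    #include <stdio.h>

    static int result = 4;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *operation1(void *arg) {
        pthread_mutex_lock(&lock);     /* acquire: only one thread may hold it */
        result = result + 1;           /* critical section                     */
        pthread_mutex_unlock(&lock);   /* release                              */
        return NULL;
    }

    static void *operation2(void *arg) {
        pthread_mutex_lock(&lock);
        result = result - 1;
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, operation1, NULL);
        pthread_create(&t2, NULL, operation2, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("Result = %d\n", result);   /* now always 4, regardless of order */
        return 0;
    }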

Problems of Synchronization

Bounded buffer problem

The bounded buffer problem is the same situation described under race conditions: a producer wants to place an item into the buffer at the same time as a consumer wants to remove an item from it. The number of elements in the buffer then depends on the sequence of operations.
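
A common solution uses two counting semaphores, empty and full, plus a mutex for the buffer itself. A minimal sketch, with an assumed buffer capacity of 8 and a fixed number of items:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N 8                     /* buffer capacity (assumed size)       */

    static int buffer[N];
    static int in = 0, out = 0;     /* next slot to fill / next to empty    */
    static sem_t empty, full;       /* counts of empty and full slots       */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *producer(void *arg) {
        for (int item = 0; item < 32; item++) {
            sem_wait(&empty);               /* wait for a free slot         */
            pthread_mutex_lock(&lock);
            buffer[in] = item;
            in = (in + 1) % N;
            pthread_mutex_unlock(&lock);
            sem_post(&full);                /* one more item available      */
        }
        return NULL;
    }

    static void *consumer(void *arg) {
        for (int i = 0; i < 32; i++) {
            sem_wait(&full);                /* wait for an item             */
            pthread_mutex_lock(&lock);
            int item = buffer[out];
            out = (out + 1) % N;
            pthread_mutex_unlock(&lock);
            sem_post(&empty);               /* one more free slot           */
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        sem_init(&empty, 0, N);             /* all slots empty at start     */
        sem_init(&full, 0, 0);              /* no items yet                 */
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        sem_destroy(&empty);
        sem_destroy(&full);
        return 0;
    }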

Readers-writers problems

A process that wants to read the database is known as a reader, and a process that wants to update the contents of the database is called a writer. If two or more readers access the database simultaneously, they can execute without any adverse effect. However, if a writer and some other process access the database simultaneously, chaos may ensue. This synchronization problem is known as the readers-writers problem.
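
Pthreads offers a reader-writer lock that captures this policy directly: any number of readers may hold it at once, while a writer holds it exclusively. A minimal sketch (the "database" is just an integer here):

    #include <pthread.h>
    #include <stdio.h>

    static int database = 0;   /* the shared "database"                     */
    static pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;

    static void *reader(void *arg) {
        pthread_rwlock_rdlock(&rw);     /* many readers may enter together  */
        printf("read %d\n", database);
        pthread_rwlock_unlock(&rw);
        return NULL;
    }

    static void *writer(void *arg) {
        pthread_rwlock_wrlock(&rw);     /* writer gets exclusive access     */
        database++;
        pthread_rwlock_unlock(&rw);
        return NULL;
    }

    int main(void) {
        pthread_t r1, r2, w;
        pthread_create(&w, NULL, writer, NULL);
        pthread_create(&r1, NULL, reader, NULL);
        pthread_create(&r2, NULL, reader, NULL);
        pthread_join(w, NULL);
        pthread_join(r1, NULL);
        pthread_join(r2, NULL);
        return 0;
    }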

