Implicit Threading and Language-Based Threads


Implicit Threading

One way to address the difficulties of designing multithreaded applications is to transfer the creation and management of threads from application developers to compilers and run-time libraries. This approach, termed implicit threading, is a popular trend today.

Implicit threading relies on libraries or other language support to hide the management of threads from the developer. The most common implicit threading library in the context of C is OpenMP.

OpenMP is a set of compiler directives as well as an API for programs written in C, C++, or Fortran that provides support for parallel programming in shared-memory environments. OpenMP identifies parallel regions as blocks of code that may run in parallel. Application developers insert compiler directives into their code at parallel regions, and these directives instruct the OpenMP run-time library to execute the region in parallel. The following C program illustrates a compiler directive above the parallel region containing the printf() statement:

Example


#include <omp.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
   /* sequential code */
   #pragma omp parallel
   {
      printf("I am a parallel region.\n");
   }
   /* sequential code */
   return 0;
}

Output (the line is printed once by each thread; on a dual-core system, for example):

I am a parallel region.
I am a parallel region.

When OpenMP encounters the directive

#pragma omp parallel

it creates as many threads as there are processing cores in the system. Thus, for a dual-core system, two threads are created; for a quad-core system, four; and so forth. All the threads then execute the parallel region concurrently, and each thread terminates as it exits the region. OpenMP provides several additional directives for running code regions in parallel, including parallelizing loops.

In addition to providing directives for parallelization, OpenMP allows developers to choose among several levels of parallelism. For example, they can set the number of threads manually. OpenMP also allows developers to identify whether data are shared between threads or private to a thread. OpenMP is available on several open-source and commercial compilers for Linux, Windows, and Mac OS X systems.

Grand Central Dispatch (GCD)

Grand Central Dispatch (GCD)—a technology for Apple’s Mac OS X and iOS operating systems—is a combination of extensions to the C language, an API, and a run-time library that allows application developers to identify sections of code to run in parallel. Like OpenMP, GCD manages most of the details of threading. GCD defines extensions to the C and C++ languages known as blocks. A block is simply a self-contained unit of work, specified by a caret ^ inserted in front of a pair of braces { }. A simple example of a block is shown below −

^{
   printf("This is a block.\n");
}

GCD schedules blocks for run-time execution by placing them on a dispatch queue. When it removes a block from a queue, it assigns the block to an available thread from the thread pool it manages. GCD identifies two types of dispatch queues: serial and concurrent.

Blocks placed on a serial queue are removed in FIFO order, and once a block has been removed from the queue, it must complete execution before another block is removed. Each process has its own serial queue (known as the main queue), and developers can create additional serial queues that are local to particular processes. Serial queues are useful for ensuring the sequential execution of several tasks.

Blocks placed on a concurrent queue are also removed in FIFO order, but several blocks may be removed at a time, allowing multiple blocks to execute in parallel. There are three system-wide concurrent dispatch queues, distinguished by priority: low, default, and high. Priorities represent an estimate of the relative importance of blocks; quite simply, blocks with a higher priority should be placed on the high-priority dispatch queue. The following code segment illustrates obtaining the default-priority concurrent queue and submitting a block to it using the dispatch_async() function:

dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(queue, ^{ printf("This is a block.\n"); });

Internally, GCD’s thread pool is composed of POSIX threads. GCD actively manages the pool, allowing the number of threads to grow and shrink according to application demand and system capacity.

Threads as Objects

Some traditional object-oriented languages provide explicit multithreading support with threads as objects. In these languages, classes are written to either extend a thread class or implement a corresponding interface. This style resembles the Pthreads approach, since the code is written with explicit thread management. However, the encapsulation of data within the classes and the additional synchronization features simplify the task.

Java Threads

Java provides a Thread class and a Runnable interface that can be used. Both require implementing a public void run() method that defines the entry point of the thread. Once an instance of the object is allocated, the thread is started by invoking the start() method on it. As with Pthreads, starting the thread is asynchronous, so the timing of its execution is non-deterministic.
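A minimal sketch of the Runnable approach follows; the class name HelloThread and the shared message field are illustrative choices. The child thread writes a message, and main reads it only after join() guarantees the thread has terminated:

```java
public class HelloThread implements Runnable {
    /* Written by the child thread, read by main after join(). */
    static volatile String message = null;

    /* run() defines the entry point of the thread. */
    public void run() {
        message = "hello from " + Thread.currentThread().getName();
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(new HelloThread());
        t.start();   /* asynchronous: run() executes in a new thread */
        t.join();    /* block until the thread terminates */
        System.out.println(message);
    }
}
```

Extending Thread directly and overriding run() would work the same way; implementing Runnable is generally preferred because it leaves the class free to extend something else.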

Python Threads

Python also provides two mechanisms for multithreading. One approach is comparable to the Pthreads style, where a function name is passed to a library method, thread.start_new_thread() (the thread module was renamed _thread in Python 3). This approach is very limited, as it lacks the ability to join or terminate a thread once it starts. A more flexible technique is to use the threading module to define a class that extends threading.Thread. Similar to the Java approach, the class must have a run() method that provides the thread's entry point. Once an object is instantiated from this class, it can be explicitly started and joined later.
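A minimal sketch of the threading.Thread approach (the Worker class and its summing workload are illustrative choices):

```python
import threading

class Worker(threading.Thread):
    """A thread defined by extending threading.Thread."""

    def __init__(self, n):
        super().__init__()
        self.n = n
        self.result = None

    def run(self):
        # Entry point of the thread: sum the integers 1..n.
        self.result = sum(range(1, self.n + 1))

t = Worker(100)
t.start()        # begin asynchronous execution of run()
t.join()         # wait for the thread to finish
print(t.result)  # 5050
```

Reading t.result is safe here only because join() has already guaranteed that run() finished.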

Concurrency as Language Design

Newer programming languages have avoided race conditions by building assumptions of concurrent execution directly into the language design itself. For example, Go combines a simple implicit threading mechanism (goroutines) with channels, a well-defined form of message-passing communication. Rust adopts a distinct threading approach similar to Pthreads; however, Rust has very strong memory protections that require no extra work by the programmer.

Goroutines

The Go language includes a simple mechanism for implicit threading: place the keyword go before a function call, and the call runs in a new goroutine. In a guessing-game example, the new goroutine is passed a reference to a message-passing channel. The main thread then calls success := <-messages, which performs a blocking read on the channel. When the user has entered the correct guess of 7, the keyboard-listener goroutine writes to the channel, allowing the main thread to progress.

Channels and goroutines are core components of the Go language, which was designed under the assumption that most programs would be multithreaded. This design choice streamlines the development model, allowing the language itself to take on the responsibility for managing threads and scheduling.
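The go-keyword-plus-channel pattern can be sketched as follows (the fetchMessage function and its message text are illustrative choices, simplified from the guessing game described above). The caller blocks on the channel read until the spawned goroutine sends:

```go
package main

import "fmt"

// fetchMessage starts a goroutine that sends one message on a
// channel, then performs a blocking read to receive it.
func fetchMessage() string {
	messages := make(chan string)

	// The go keyword runs the function call in a new goroutine.
	go func() {
		messages <- "hello from a goroutine"
	}()

	// Blocking read: waits until the goroutine sends.
	return <-messages
}

func main() {
	fmt.Println(fetchMessage())
}
```

Because the channel is unbuffered, the send and the receive also synchronize the two goroutines: neither proceeds until both sides are ready.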

Rust Concurrency

Rust is another recent language with concurrency as a central design feature. The following example illustrates the use of thread::spawn() to create a new thread, which can later be joined by invoking join() on it. The argument to thread::spawn(), beginning at the ||, is known as a closure, which can be thought of as an anonymous function. The child thread here will print the value of a.

Example

use std::thread;

fn main() {
   /* Initialize a mutable variable a to 7 */
   let mut a = 7;
   /* Spawn a new thread, moving a copy of a into its closure */
   let child_thread = thread::spawn(move || {
      /* Decrement the child's copy of a, then print it */
      a -= 1;
      println!("a = {}", a)
   });
   /* Change a in the main thread and print it */
   a += 1;
   println!("a = {}", a);
   /* Wait for the child thread to finish */
   child_thread.join().unwrap();
}

However, there is a subtle point in this code that is central to Rust's design. Within the new thread (executing the code in the closure), the a variable is distinct from the a in other parts of the code. Rust enforces a very strict memory model (known as "ownership") that prevents multiple threads from accessing the same memory. In this example, the move keyword indicates that the spawned thread receives a separate copy of a for its own use. Regardless of how the two threads are scheduled, the main and child threads cannot interfere with each other's modifications of a, because they operate on distinct copies; it is not possible for the two threads to share access to the same memory.

raja
Published on 17-Oct-2019 12:13:44