Guide to DECthreads



After a thread terminates, its thread object continues to exist. This means that the thread object data structure remains allocated and contains meaningful information---for instance, the thread identifier is still unique and meaningful. This allows another thread to join with the terminated thread (see Section 2.3.5).

When a terminated thread is no longer needed, your program should detach that thread (see Section 2.3.4).


Note

For DIGITAL UNIX systems:

When the initial thread in a multithreaded process returns from the main routine, the entire process terminates, just as it does when a thread calls exit().

For OpenVMS systems:

When the initial thread in a multithreaded image returns from the main routine, the entire image terminates, just as it does when a thread calls SYS$EXIT.


2.3.3.1 Cleanup Handlers

A cleanup handler is a routine you provide that is associated with a particular lexical scope within your program and that can be invoked under certain circumstances when a thread exits that scope. The cleanup handler's purpose is to restore that portion of the program's state that has been changed within the handler's associated lexical scope. In particular, cleanup handlers allow a thread to react to thread-exit and cancelation requests.

Your program declares a cleanup handler for a thread by calling the pthread_cleanup_push() routine. Your program removes (and optionally invokes) a cleanup handler by calling the pthread_cleanup_pop() routine.

A cleanup handler is invoked when the calling thread exits the handler's associated lexical scope, due to any of the following:

  -  The thread calls pthread_cleanup_pop() with a nonzero execute argument (the normal exit from the scope).
  -  The thread calls pthread_exit() within the scope.
  -  A cancelation request is delivered while the thread is within the scope.

For each call to pthread_cleanup_push(), your program must contain a corresponding call to pthread_cleanup_pop(). The two calls form a lexical scope within your program. One pair of calls to pthread_cleanup_push() and pthread_cleanup_pop() cannot overlap the scope of another pair. Pairs of calls can be nested.
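
For example, the following sketch (using a hypothetical free_buffer() handler and worker() start routine) shows one properly paired push/pop scope; the nonzero argument to pthread_cleanup_pop() causes the handler to run on normal exit from the scope as well as on cancelation or thread exit:

#include <pthread.h>
#include <stdlib.h>

/* Hypothetical cleanup handler: releases a buffer allocated within
 * the handler's associated scope. */
static void free_buffer(void *arg)
{
    free(arg);
}

void *worker(void *arg)
{
    char *buffer = malloc(1024);

    /* Begin the lexical scope by pushing the handler. */
    pthread_cleanup_push(free_buffer, buffer);

    /* ... work that may call a cancelation point or pthread_exit() ... */

    /* End the scope; the nonzero argument also invokes the handler now. */
    pthread_cleanup_pop(1);

    return NULL;
}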

Because cleanup handlers are specified by the POSIX.1c standard, they are a portable mechanism. An alternative to using cleanup handlers is to define and/or catch DECthreads exceptions with the DECthreads exception package. Chapter 5 describes how to use the DECthreads exception package. DECthreads considers cleanup handler routines, exception handling clauses (that is, CATCH, CATCH_ALL, FINALLY), and C++ object destructors to be functionally equivalent mechanisms.

2.3.4 Detaching and Destroying a Thread

Detaching a thread means to mark a thread for destruction as soon as it terminates. Destroying a thread means to free, or make available for reuse, the resources occupied by the thread object (and by DECthreads internal resources) associated with that thread.

If a thread has terminated, then detaching that thread causes DECthreads to destroy it immediately. If a thread is detached before it terminates, then DECthreads frees the thread's resources immediately after it terminates.

A thread can be detached explicitly or implicitly:

  -  Explicitly, by calling the pthread_detach() routine and specifying the thread's identifier.
  -  Implicitly, by joining with the thread; DECthreads detaches the target thread of a join operation after that thread terminates (see Section 2.3.5).

In addition, your program can create a thread that is detached from the start. See Section 2.3.1 for more information about creating a thread.

It is illegal for your program to attempt to join or detach a detached thread. In general, you cannot perform any operation (for example, cancelation) on a detached thread. This is because the thread ID might have become invalid or might have been assigned to a new thread immediately upon termination of the thread. Unless your program is absolutely certain that the detached thread has not terminated (or is not terminating), using the thread ID can have severe consequences.
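
For illustration, here is a minimal sketch (with a hypothetical worker() start routine and start_detached_worker() helper) that creates a thread and then explicitly detaches it; once pthread_detach() has been called, the program must not use the thread identifier again:

#include <pthread.h>

void *worker(void *arg);    /* hypothetical start routine */

int start_detached_worker(void)
{
    pthread_t thread;
    int status;

    status = pthread_create(&thread, NULL, worker, NULL);
    if (status != 0)
        return status;

    /* Mark the thread for destruction as soon as it terminates.
     * After this call, the thread identifier must not be used again. */
    return pthread_detach(thread);
}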

2.3.5 Joining With a Thread

Joining with a thread means to suspend the calling thread's execution until another thread (the target thread) terminates. In addition, DECthreads detaches the target thread after it terminates.

Joining with a functionally related thread is one way for two threads to synchronize their execution.

A thread joins with another thread by calling the pthread_join() routine and specifying the thread identifier of the target thread. If the target thread has already terminated, the call returns immediately and the joining thread does not wait.

The target thread of a join operation must be created with the detachstate attribute of its thread attributes object set to PTHREAD_CREATE_JOINABLE.
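
For example, the following sketch (with a hypothetical worker() start routine and run_and_join() helper) creates a joinable thread and then joins with it; after pthread_join() returns successfully, DECthreads has detached the target thread and its identifier must not be reused:

#include <pthread.h>

void *worker(void *arg);    /* hypothetical start routine */

int run_and_join(void **value)
{
    pthread_t thread;
    int status;

    status = pthread_create(&thread, NULL, worker, NULL);
    if (status != 0)
        return status;

    /* Wait here until worker() terminates; DECthreads then detaches
     * the thread, so its identifier must not be used afterward. */
    return pthread_join(thread, value);
}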

Keep in mind these restrictions about joining with a thread:

  -  It is illegal to join with a thread that has been detached (see Section 2.3.4).
  -  Only one thread should join with any given target thread; the result of two or more threads simultaneously joining with the same thread is undefined.
  -  A thread that attempts to join with itself, or two threads that attempt to join with each other, can deadlock.

2.3.6 Scheduling a Thread

Scheduling means to evaluate and change the states of the process's threads. As your multithreaded program runs, DECthreads detects whether each thread is ready to execute, is waiting for completion of a system call, has terminated, and so on.

Also, for each thread DECthreads regularly checks whether that thread's scheduling priority and scheduling policy, when compared with those of the process's other threads, entail forcing a change in that thread's state. Remember that scheduling priority specifies the "precedence" of a thread in the application. Scheduling policy provides a mechanism to control how DECthreads interprets that priority as your program runs.

To understand this section, you must be familiar with the thread attributes object and with its scheduling priority, scheduling policy, and inherit scheduling attributes, which are described elsewhere in this chapter.

2.3.6.1 Calculating the Scheduling Priority

A thread's scheduling priority falls within a range of values, depending on its scheduling policy. To specify the minimum or maximum scheduling priority for a thread, use the sched_get_priority_min() or sched_get_priority_max() routines---or use the appropriate nonportable symbol such as PRI_OTHER_MIN or PRI_OTHER_MAX. Priority values are integers, so you can specify a value between the minimum and maximum priority using an appropriate arithmetic expression.

For example, to specify a scheduling priority value that is midway between the minimum and maximum for the SCHED_OTHER scheduling policy, use the following expression (coded appropriately for your programming language):

   .
   .
   .
pri_other_mid = ( sched_get_priority_min(SCHED_OTHER) + 
                  sched_get_priority_max(SCHED_OTHER)   ) / 2 

where pri_other_mid represents the priority value you want to set.

Avoid using literal numerical values to specify a scheduling priority setting, because the range of priorities can change from implementation to implementation. Values outside the specified range for each scheduling policy might be invalid.
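
As an illustration, the following C sketch (the worker() start routine and create_mid_priority_thread() helper are hypothetical) applies the midpoint priority computed above to a new thread by way of a thread attributes object:

#include <pthread.h>
#include <sched.h>

void *worker(void *arg);    /* hypothetical start routine */

int create_mid_priority_thread(pthread_t *thread)
{
    pthread_attr_t attr;
    struct sched_param param;
    int status;

    status = pthread_attr_init(&attr);
    if (status != 0)
        return status;

    /* Midway between the minimum and maximum for SCHED_OTHER. */
    param.sched_priority = (sched_get_priority_min(SCHED_OTHER) +
                            sched_get_priority_max(SCHED_OTHER)) / 2;

    /* Take scheduling settings from the attributes object rather than
     * inheriting them from the creating thread. */
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_OTHER);
    status = pthread_attr_setschedparam(&attr, &param);
    if (status == 0)
        status = pthread_create(thread, &attr, worker, NULL);

    pthread_attr_destroy(&attr);
    return status;
}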

2.3.6.2 Effects of Scheduling Policy

To demonstrate the results of the different scheduling policies, consider the following example: A program has four threads, A, B, C, and D. For each scheduling policy, three scheduling priorities have been defined: minimum, middle, and maximum. The threads have the following priorities:
   Thread    Priority
   A         minimum
   B         middle
   C         middle
   D         maximum

On a uniprocessor system, only one thread can run at any given time. The ordering of execution depends upon the relative scheduling policies and priorities of the threads. Given a set of threads with fixed priorities such as the previous list, their execution behavior is typically predictable. However, in a symmetric multiprocessor (or SMP) system the execution behavior is completely indeterminate. Although the four threads have differing priorities, a multiprocessor system might execute two or more of these threads simultaneously.

When you design a multithreaded application that uses scheduling priorities, it is critical to remember that scheduling is not the same as synchronization. That is, you cannot assume that a higher-priority thread can access shared data without interference from lower-priority threads. For example, if one thread has a FIFO scheduling policy and the highest scheduling priority setting, while another has a background scheduling policy and the lowest scheduling priority setting, DECthreads might allow the two threads to run at the same time. As a corollary, on a four-processor system you also cannot assume that the four highest-priority threads are executing simultaneously at any particular moment.

The following figures demonstrate how DECthreads schedules a set of threads on a uniprocessor based on whether each thread has the FIFO, RR (round-robin), or throughput (the default) setting for its scheduling policy attribute. Assume that all waiting threads are ready to execute when the current thread waits or terminates and that no higher-priority thread is awakened while a thread is executing (that is, executing during the flow shown in each figure).

Figure 2-1 shows a flow with FIFO scheduling.

Figure 2-1 Flow with FIFO Scheduling



Thread D executes until it waits or terminates. Next, although thread B and thread C have the same priority, thread B starts because it has been waiting longer than thread C. Thread B executes until it waits or terminates, then thread C executes until it waits or terminates. Finally, thread A executes.

Figure 2-2 shows a flow with RR scheduling.

Figure 2-2 Flow with RR Scheduling



Thread D executes until it waits or terminates. Next, thread B and thread C are timesliced, because they both have the same priority. Finally, thread A executes.

Figure 2-3 shows a flow with Default scheduling.

Figure 2-3 Flow with Default Scheduling



Threads D, B, C, and A are timesliced, even though thread A has a lower priority than the others. Thread A receives less execution time than threads D, B, and C if any of them is ready to execute as often as thread A is. However, the default scheduling policy protects thread A against being blocked from executing indefinitely.

Because low-priority threads eventually run, the default scheduling policy protects against occurrences of thread starvation and priority inversion, which are discussed in Section 3.5.2.

2.3.7 Canceling a Thread

Canceling a thread means to request the termination of a target thread as soon as possible. A thread can request the cancelation of another thread or itself.

Thread cancelation is a three-stage operation:

  1. A cancelation request is posted for the target thread. This occurs when some routine in some thread calls pthread_cancel().
  2. The posted cancelation request is delivered to the target thread. This occurs when the target thread invokes a routine that is a cancelation point. (See Section 2.3.7.4 for a discussion of routines that are cancelation points.)
    If the target thread's cancelability state is disabled, the target thread does not receive the cancelation request until the next cancelation point after the cancelability state is set to enabled. See Section 2.3.7.3 for how to control a thread's cancelability.
  3. After the target thread receives the cancelation request, it responds to that request by invoking, in turn, its cleanup handler routines. Previously in its life the target thread might have pushed pointers to cleanup handler routines (using the pthread_cleanup_push() routine) on its handler stack. When the target thread receives the cancelation request, DECthreads automatically calls, in turn, each cleanup handler routine on the handler stack. (A cleanup handler can be removed from the handler stack by calling pthread_cleanup_pop().)

2.3.7.1 Thread Cancelation Implemented Using Exceptions

The DECthreads pthread and tis interfaces implement thread cancelation using DECthreads exceptions. Using the DECthreads exception package, it is possible for a thread (to which a cancelation request has been delivered) explicitly to catch the DECthreads-defined thread cancelation exception (pthread_cancel_e) and to perform cleanup actions accordingly. After catching this exception, the exception handler code should always reraise the exception, to avoid breaking the "contract" that cancelation leads to thread termination.

Chapter 5 describes the DECthreads exception package.
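
As a sketch only---the pthread_exception.h header and the TRY, CATCH, RERAISE, and ENDTRY macros shown here are assumptions about the exception package whose exact syntax Chapter 5 describes---a thread's start routine might catch the cancelation exception, perform cleanup, and then reraise it:

#include <pthread.h>
#include <pthread_exception.h>    /* assumed header for the exception package */

void *worker(void *arg)
{
    TRY {
        /* ... work that includes cancelation points ... */
        pthread_testcancel();
    }
    CATCH (pthread_cancel_e) {
        /* Perform cleanup actions here, then honor the "contract"
         * by reraising so the thread still terminates. */
        RERAISE;
    }
    ENDTRY;

    return NULL;
}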

2.3.7.2 Thread Return Value After Cancelation

When DECthreads terminates a thread due to cancelation, it writes the return value PTHREAD_CANCELED into the thread's thread object. This is because cancelation prevents the thread from calling pthread_exit() or returning from its start routine.
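
For example, a joining thread can test the value it obtains from pthread_join() to learn whether the target thread was canceled (the report_exit() helper below is hypothetical):

#include <pthread.h>
#include <stdio.h>

/* Hypothetical helper: report how a previously created thread ended. */
void report_exit(pthread_t thread)
{
    void *value;

    if (pthread_join(thread, &value) == 0) {
        if (value == PTHREAD_CANCELED)
            printf("thread was canceled\n");
        else
            printf("thread exited with value %p\n", value);
    }
}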

2.3.7.3 Controlling Thread Cancelation

Each thread controls whether it can be canceled (that is, whether it receives requests to terminate) and how quickly it terminates after receiving the cancelation request, as follows:

A thread's cancelability state determines whether it receives a cancelation request. When created, a thread's cancelability state is enabled. If the cancelability state is disabled, the thread does not receive cancelation requests.

If the thread's cancelability state is enabled, use the pthread_testcancel() routine to request the delivery of any pending cancelation request. This routine enables the program to permit cancelation to occur at places where it might not otherwise be permitted, and it is especially useful within very long loops to ensure that cancelation requests are noticed within a reasonable time.
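
For instance, a compute-bound loop that contains no cancelation points of its own can call pthread_testcancel() periodically (the crunch() routine below is a hypothetical sketch):

#include <pthread.h>

/* Hypothetical compute-bound loop with no natural cancelation points.
 * Calling pthread_testcancel() periodically lets a pending cancelation
 * request be delivered within a reasonable time. */
void crunch(long iterations)
{
    long i;

    for (i = 0; i < iterations; i++) {
        /* ... computation that calls no cancelation points ... */

        if ((i % 1000) == 0)
            pthread_testcancel();
    }
}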

If its cancelability state is disabled, the thread cannot be terminated by any cancelation request. This means that a thread could wait indefinitely if it does not come to a normal conclusion; therefore, exercise care.

After a thread has been created, use the pthread_setcancelstate() routine to change its cancelability state.

After a thread has been created, use the pthread_setcanceltype() routine to change its cancelability type, which determines whether it responds to a cancelation request at cancelation points (synchronous cancelation), or at any point in its execution (asynchronous cancelation).

Initially, a thread's cancelability type is deferred, which means that the thread receives a cancelation request only at cancelation points---for example, when a call to the pthread_cond_wait() routine is made. If you set a thread's cancelability type to asynchronous, the thread can receive a cancelation request at any time.


Note

If the cancelability state is disabled, the thread cannot be canceled regardless of the cancelability type. Setting cancelability type to deferred or asynchronous is relevant only when the thread's cancelability state is enabled.

2.3.7.4 Cancelation Points

A cancelation point is a routine that delivers a posted cancelation request to that request's target thread. The POSIX.1c standard specifies routines that are cancelation points.

The following routines in the DECthreads pthread interface are cancelation points:

The following routines in the DECthreads tis interface are cancelation points:

Other routines that are also cancelation points are mentioned in the operating system-specific appendixes of this guide. Refer to those appendixes for the topics on thread cancelability of system services.

2.3.7.5 Cleanup from Synchronous Cancelation

When a cancelation request is delivered to a thread, the thread could be holding some resources, such as locked mutexes or allocated memory. Your program must release these resources before the thread terminates.

DECthreads provides two equivalent mechanisms that can do the cleanup during cancelation, as follows:

  -  Use the pthread_cleanup_push() and pthread_cleanup_pop() routines to establish cleanup handlers for the code that holds the resources (see Section 2.3.3.1).
  -  Use the exception handling clauses (CATCH, CATCH_ALL, FINALLY) of the DECthreads exception package to catch the cancelation exception and release the resources (see Chapter 5).
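
As a sketch of the first mechanism, the following code (the waiter() routine and ready flag are hypothetical) pushes a handler that unlocks a mutex. If the thread is canceled while blocked in pthread_cond_wait()---a cancelation point at which the mutex has already been reacquired---the handler releases the mutex so other threads are not blocked indefinitely:

#include <pthread.h>

static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond  = PTHREAD_COND_INITIALIZER;
static int ready = 0;              /* hypothetical shared flag */

static void unlock_mutex(void *arg)
{
    pthread_mutex_unlock((pthread_mutex_t *) arg);
}

void *waiter(void *arg)
{
    pthread_mutex_lock(&mutex);
    pthread_cleanup_push(unlock_mutex, &mutex);

    /* pthread_cond_wait() is a cancelation point.  If the thread is
     * canceled here, the mutex has been reacquired, and the handler
     * unlocks it so other threads can continue. */
    while (!ready)
        pthread_cond_wait(&cond, &mutex);

    /* Not canceled: pop the handler and (nonzero argument) run it,
     * unlocking the mutex on this path as well. */
    pthread_cleanup_pop(1);

    return NULL;
}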

2.3.7.6 Cleanup from Asynchronous Cancelation

Because it is impossible to predict exactly when an asynchronous cancelation request will be delivered, it is extremely difficult for a program to recover properly. For this reason, an asynchronous cancelability type should be set only within regions of code that do not need to clean up in any way, such as straight-line code or tight looping code that is compute-bound and that makes no calls and holds no resources.

While a thread's cancelability type is asynchronous, do not call any routine unless it is explicitly documented as "safe for asynchronous cancelation." In particular, you can never use asynchronous cancelability type in code that allocates or frees memory, or that locks or unlocks mutexes---because the cleanup code cannot reliably determine the state of the resource.


Note

None of the general run-time library routines are safe for asynchronous cancelation, nor are any DECthreads routines except pthread_setcanceltype().

For additional information about accomplishing asynchronous cancelation for your platform, see Section A.4, Section B.7, and Section C.5.

2.3.7.7 Example of Thread Cancelation Code

Example 2-1 shows a thread control and cancelation example.

Example 2-1 pthread Cancel


/* 
 * Pthread Cancel Example 
 */ 
 
/* 
 * Outermost cancelation state 
 */ 
{ 
 . 
 . 
 . 
int     s, outer_c_s, inner_c_s; 
 . 
 . 
 . 
/* Disable cancelation, saving the previous setting.    */ 
 
s = pthread_setcancelstate (PTHREAD_CANCEL_DISABLE, &outer_c_s); 
if(s == EINVAL) 
   printf("Invalid Argument!\n"); 
else if(s == 0) 
         . 
         . 
         . 
        /* Now cancelation is disabled.    */ 
 . 
 . 
 . 
/* Enable cancelation.  */ 
 
       { 
        . 
        . 
        . 
        s = pthread_setcancelstate (PTHREAD_CANCEL_ENABLE, &inner_c_s); 
        if(s == 0) 
           . 
           . 
           . 
           /* Now cancelation is enabled.  */ 
           . 
           . 
           . 
           /* Enable asynchronous cancelation this time.  */ 
 
               { 
                . 
                . 
                . 
                /* Enable asynchronous cancelation.  */ 
 
                int   outerasync_c_s, innerasync_c_s; 
                . 
                . 
                . 
                s = pthread_setcanceltype (PTHREAD_CANCEL_ASYNCHRONOUS, 
                                           &outerasync_c_s); 
                if(s == 0) 
                   . 
                   . 
                   . 
                   /* Now asynchronous cancelation is enabled.  */ 
                   . 
                   . 
                   . 
                   /* Now restore the previous cancelability type (by 
                    * reinstating the original, deferred cancelation type). 
                    */ 
                   s = pthread_setcanceltype (outerasync_c_s, 
                                              &innerasync_c_s); 
                   if(s == 0) 
                      . 
                      . 
                      . 
                      /* Now asynchronous cancelation is disabled, 
                       * but synchronous cancelation is still enabled. 
                       */ 
                } 
        . 
        . 
        . 
       } 
 . 
 . 
 . 
/* Restore to original cancelation state.    */ 
 
s = pthread_setcancelstate (outer_c_s, &inner_c_s); 
if(s == 0) 
   . 
   . 
   . 
   /* The original (outermost) cancelation state is now reinstated. */ 
} 

2.4 Synchronization Objects

In a multithreaded program, you must use synchronization objects whenever there is a possibility of conflict in accessing shared data. The following sections discuss two kinds of DECthreads synchronization objects: mutexes and condition variables.

2.4.1 Mutexes

A mutex (or mutual exclusion) object is used by multiple threads to ensure the integrity of a shared resource that they access, most commonly shared data, by allowing only one thread to access it at a time.

A mutex has two states, locked and unlocked. A locked mutex has an owner---the thread that locked the mutex. It is illegal to unlock a mutex not owned by the calling thread.

For each piece of shared data, all threads accessing that data must use the same mutex: each thread locks the mutex before it accesses the shared data and unlocks the mutex when it is finished accessing that data. If the mutex is locked by another thread, the thread requesting the lock either waits for the mutex to be unlocked or returns, depending on the lock routine called (see Figure 2-4).

Figure 2-4 Only One Thread Can Lock a Mutex



Each mutex must be initialized before use. DECthreads supports static initialization at compile time, using one of the macros provided in the pthread.h header file, as well as dynamic initialization at run time by calling pthread_mutex_init(). This routine allows you to specify an attributes object, which allows you to specify the mutex type. The types of mutexes are described in the following sections.
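
For example, the following sketch (the counter routines are hypothetical) statically initializes a mutex and uses it to protect a shared counter; pthread_mutex_lock() waits for the mutex, whereas pthread_mutex_trylock() returns immediately if the mutex is already locked:

#include <pthread.h>

/* Shared counter protected by a statically initialized mutex. */
static pthread_mutex_t counter_mutex = PTHREAD_MUTEX_INITIALIZER;
static long counter = 0;

void increment_counter(void)
{
    pthread_mutex_lock(&counter_mutex);    /* wait until the mutex is free */
    counter++;                             /* access the shared data       */
    pthread_mutex_unlock(&counter_mutex);
}

/* pthread_mutex_trylock() returns without waiting if another thread
 * holds the mutex; a nonzero status means the lock was not acquired. */
int try_increment_counter(void)
{
    int status = pthread_mutex_trylock(&counter_mutex);

    if (status == 0) {
        counter++;
        pthread_mutex_unlock(&counter_mutex);
    }
    return status;
}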

2.4.1.1 Normal Mutex

A normal mutex can be locked exactly once by a thread. If a thread tries to lock the mutex again without first unlocking it, the thread waits for itself to release the lock and deadlocks.

This is the most efficient form of mutex. When using interface and function inlining (optional), you can often lock and unlock a normal mutex without a call to DECthreads.

A normal mutex usually does not check thread ownership---that is, a deadlock will result if the owner attempts to "relock" the mutex. The system usually will not report an erroneous attempt to unlock a mutex not owned by the calling thread.

2.4.1.2 Default Mutex

This is the name reserved by the Single UNIX Specification, Version 2, for a vendor's POSIX.1c threads implementation's default mutex type. For DECthreads, "default" is the same as "normal." This might not be true for other implementations of the Single UNIX Specification, Version 2, which could choose errorcheck, recursive, or even some nonportable mutex types as the default.

2.4.1.3 Recursive Mutex

A recursive mutex can be locked more than once by a given thread without causing a deadlock. The thread must call the pthread_mutex_unlock() routine the same number of times that it called the pthread_mutex_lock() routine before another thread can lock the mutex.
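
The following sketch initializes a recursive mutex dynamically. It assumes the pthread_mutexattr_settype() routine and PTHREAD_MUTEX_RECURSIVE constant named by the Single UNIX Specification, Version 2; check your platform's pthread.h for the exact names it provides:

#include <pthread.h>

/* Initialize a recursive mutex dynamically.  Each lock acquired by a
 * thread must be balanced by a matching unlock before another thread
 * can lock the mutex. */
int init_recursive_mutex(pthread_mutex_t *mutex)
{
    pthread_mutexattr_t attr;
    int status;

    status = pthread_mutexattr_init(&attr);
    if (status != 0)
        return status;

    status = pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    if (status == 0)
        status = pthread_mutex_init(mutex, &attr);

    pthread_mutexattr_destroy(&attr);
    return status;
}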

