Friday, May 31, 2013

Welcome to ccplusplus.com
We are a fast-growing company that aims to provide intelligent, cleaner technology solutions that anyone can feel and understand. We at ccplusplus are always involved in designing, developing and providing better software solutions that can be used in any area or field, whether or not it is one we operate in ourselves.

Aiming to spread information technology, we are not limited to software development alone; we wish to bring our technology to as many hands as we can. We call this "Human Being Technology": technology that can be used and understood by anyone, even someone who has never used software or grasped its capabilities.
To summarize, ccplusplus is primarily involved in the following areas of expertise:
  • Software Solutions
  • Consultancy Services
  • Global eLearning Solutions
  • Education Support
We are a team of dedicated engineers with a passion for development and innovation. We are not just software engineers; we are engineers with a passion to change the future of software!

ccplusplus welcomes all suggestions and improvements; we value each of them and seek to learn from them.

Please write to us and let us serve you better.

condition variable pthread

Mutexes cater to the most general form of resource sharing, but sometimes threads have more specific sharing requirements. Condition variables allow us to express and control a directed dependency between threads. In general this means that one group of threads should only execute when a given condition becomes true, and that condition will be satisfied by another group of threads.
When thread operation is coordinated by a condition variable, one group of threads waits until a condition is triggered, at which point one or more of the waiting threads is woken.
An example: there are two groups of threads: one producing something, and the other consuming it. The Producer-Consumer pattern is also useful outside multi-threaded programming, where it allows you to decouple data/event/request/whatever production from consumption. For our contrived example, we produce and consume characters from the alphabet.
#include <cstdio>  // for the EOF sentinel
#include <queue>
#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/thread/condition.hpp>
 
const unsigned int numCharacters = 100;
 
std::queue<int> producedChars; // int, so the EOF sentinel fits alongside chars
 
boost::condition characterAvailable;
boost::mutex characterMutex;
 
void conditionDataProducer()
{
    char c = 'A';
    for(unsigned int i = 0; i < numCharacters;)
    {
        if(c >= 'A' && c <= 'Z')
        {
            boost::mutex::scoped_lock lock(characterMutex);
            producedChars.push(c++);
            characterAvailable.notify_one();
            ++i;
        }
        else
        {
            c = 'A'; // wrap back to the start of the alphabet
        }
    }
 
    // push the EOF sentinel so the consumer knows production is finished
    boost::mutex::scoped_lock lock(characterMutex);
    producedChars.push(EOF);
    characterAvailable.notify_one();
}
 
void conditionDataConsumer()
{
    boost::mutex::scoped_lock lock(characterMutex);
 
    while(true)
    {
        // on entry, wait() releases the mutex and suspends this thread;
        // on return, the mutex has been reacquired
        while(producedChars.empty())
        {
            characterAvailable.wait(lock);
        }
 
        if(producedChars.front() == EOF) break;
        producedChars.pop();
    }
}
Take a look at the consumer first: it acquires a mutex and then uses the condition variable to wait. When wait is called the mutex is unlocked and the calling thread is suspended. The consumer thread will now only be resumed when the condition represented by characterAvailable is signalled.
The producer simply pushes characters onto the shared container and then calls notify_one. This will allow one of the threads waiting on this condition (a consumer) to resume and process the new data. This will be much more efficient than having consumers endlessly polling an empty queue.
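For completeness, here is a minimal driver (a sketch, not part of the original post) that launches the producer and consumer as Boost threads and waits for both to finish:
#include <boost/thread/thread.hpp>
 
int main()
{
    boost::thread consumer(conditionDataConsumer);
    boost::thread producer(conditionDataProducer);
 
    producer.join();
    consumer.join(); // returns once the consumer has seen the EOF sentinel
    return 0;
}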
Condition variables are also useful in implementing the Monitor Object concurrency pattern, which we talk about next.

livelock example

Livelock

Livelock is when multiple threads continue to run (i.e. they do not block indefinitely as in deadlock), but the system as a whole fails to make progress due to repeating patterns of non-productive resource contention.
Livelock may arise from attempts to avoid threads blocking (which can hurt performance) via a try-lock. A try-lock attempts to lock a mutex but does not block if the mutex is already locked. The following example should make usage of the Boost try-lock clear.
Listing 1. Contrived livelock
#include <iostream>
#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>
 
boost::try_mutex resourceX;
boost::try_mutex resourceY;
 
void threadAFunc()
{
    unsigned int counter = 0;
    while(true)
    {
        boost::try_mutex::scoped_lock lockX(resourceX);
        boost::thread::yield(); // encourage the livelock
 
        boost::try_mutex::scoped_try_lock lockY(resourceY);
        if(lockY.locked() == false) continue; // back off: releases lockX, then retry
 
        std::cout << "threadA working: " << ++counter << "\n";
    }
}
 
void threadBFunc()
{
    unsigned int counter = 0;
    while(true)
    {
        // note: first lock taken in the opposite order to threadA
        boost::try_mutex::scoped_lock lockY(resourceY);
        boost::thread::yield(); // encourage the livelock
 
        boost::try_mutex::scoped_try_lock lockX(resourceX);
        if(lockX.locked() == false) continue; // back off: releases lockY, then retry
 
        std::cout << "threadB working: " << ++counter << "\n";
    }
}

This code exhibits an almost total livelock, though with each yield statement removed the livelock becomes a little less severe. When I run this example, at best the threads manage a few pieces of work per second. How does the livelock occur? The probable sequence: thread A locks resourceX while thread B locks resourceY; each thread then yields, and each try-lock on the other thread's resource fails because the other thread holds it; both threads hit continue, releasing the locks they hold, and the loop begins again with the same pattern repeating.
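A common remedy, shown here only as a sketch (the original post does not include one), is to break the threads' lock-step timing with a short randomised back-off before each retry:
#include <cstdlib>
#include <boost/thread/thread.hpp>
 
// sleep for a short random interval (1-10ms) before retrying;
// boost::this_thread::sleep is available in more recent Boost releases
void randomBackoff()
{
    boost::this_thread::sleep(
        boost::posix_time::milliseconds(std::rand() % 10 + 1));
}
Calling randomBackoff() just before each continue makes it unlikely that both threads keep failing at exactly the same moments.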
Another use of the term livelock involves starvation, where one part of a system monopolises system resources and starves another part. For example, a system composed of request-queueing and request-servicing components might exhibit starvation if an overwhelming number of requests causes the request-queueing component to use all system resources.

Self Deadlock

Another problem is self-deadlock. Self-deadlock occurs when a single thread attempts to lock a mutex twice: the second attempt will block indefinitely. This can easily happen when the same resource is used at multiple levels within an algorithm.
In particular, consider a class that attempts to provide a thread-safe interface by synchronising all member function calls with a single internal mutex. The mutex is locked at the beginning of every method and unlocked on method return. If a method of that class now calls another member function of the same object, there will be a self-deadlock, as the sketch below shows.
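A minimal sketch of the problem (the Counter class and its methods are hypothetical, invented only for illustration):
#include <boost/thread/mutex.hpp>
 
class Counter
{
public:
    void increment()
    {
        boost::mutex::scoped_lock lock(mutex_); // second lock: blocks forever
        ++value_;
    }
 
    void incrementTwice()
    {
        boost::mutex::scoped_lock lock(mutex_); // first lock succeeds
        increment(); // self-deadlock: tries to lock mutex_ again
        increment();
    }
 
private:
    boost::mutex mutex_;
    int value_;
};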

To counter this problem there is the concept of a recursive mutex. A recursive mutex allows multiple locks from within a single thread to succeed, though that thread must unlock the mutex as many times as it has locked it. The disadvantage of a recursive mutex is a slight performance decrease. A sketch of the same class using a recursive mutex follows.
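The same hypothetical class, sketched with boost::recursive_mutex, no longer self-deadlocks:
#include <boost/thread/recursive_mutex.hpp>
 
class Counter
{
public:
    void increment()
    {
        boost::recursive_mutex::scoped_lock lock(mutex_);
        ++value_;
    }
 
    void incrementTwice()
    {
        boost::recursive_mutex::scoped_lock lock(mutex_);
        increment(); // fine: the owning thread may re-lock a recursive mutex
        increment(); // each lock is matched by an unlock on scope exit
    }
 
private:
    boost::recursive_mutex mutex_;
    int value_;
};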

deadlock condition

Deadlock is where one or more threads wait for resources that can never become available.
The classic case is where two threads each require the same two shared resources, and they use blocking mutexes to lock them in opposite order. Thread A locks resource X while thread B locks resource Y. Next, thread A attempts to lock resource Y and thread B attempts to lock resource X: since both resources are already locked (by the other thread), both threads wait indefinitely.

It is easy to write code where deadlock is inevitable; here is the classic case:
Listing 1. Classic deadlock
#include <iostream>
#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>
 
boost::mutex resourceX;
boost::mutex resourceY;
 
void deadlockThreadAFunc()
{
    unsigned int counter = 0;
    while(true)
    {
        boost::mutex::scoped_lock lockX(resourceX);
        boost::thread::yield(); // to encourage deadlock
        boost::mutex::scoped_lock lockY(resourceY);
 
        std::cout << "threadA working: " << ++counter << "\n";
    }
}
 
void deadlockThreadBFunc()
{
    unsigned int counter = 0;
    while(true)
    {
        // note: locks taken in the opposite order to threadA
        boost::mutex::scoped_lock lockY(resourceY);
        boost::thread::yield(); // to encourage deadlock
        boost::mutex::scoped_lock lockX(resourceX);
 
        std::cout << "threadB working: " << ++counter << "\n";
    }
}
The yield statements in the above example force the current thread to stop executing and allow another thread to run. They are for demonstration purposes only, to encourage the deadlock to occur quickly. Without them, a single-core machine might run for some time before a context switch happens between the two resource-locking statements (which is what triggers the deadlock).
For this toy example the fix is simple but non-intuitive. All we need to do is ensure we lock resources in a consistent order, so changing deadlockThreadBFunc() to lock resourceX before resourceY ensures there will be no deadlock; unlock order is not significant. The corrected function is shown below.
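Here is deadlockThreadBFunc() with the locks reordered to match deadlockThreadAFunc():
void deadlockThreadBFunc()
{
    unsigned int counter = 0;
    while(true)
    {
        // now locks resourceX first, the same order as threadA
        boost::mutex::scoped_lock lockX(resourceX);
        boost::mutex::scoped_lock lockY(resourceY);
 
        std::cout << "threadB working: " << ++counter << "\n";
    }
}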

One valuable technique for ensuring strict lock-order discipline is to always lock a group of resources in the order of their memory addresses, as sketched below. However, it should be clear that deadlock becomes much more of a problem in non-trivial code with complex data-sharing requirements, where resources are locked at multiple levels and in many different contexts. This is one of the main reasons multi-threaded programming is so difficult: it sometimes requires coordination between multiple levels in your code, and that is the enemy of encapsulation.
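A sketch of that technique (the helper function below is hypothetical, not part of the original post): two mutexes are always locked lowest-address first, so every caller acquires them in the same global order.
#include <boost/thread/mutex.hpp>
 
void lockedWork(boost::mutex& m1, boost::mutex& m2)
{
    // order the pair by address so every caller locks them the same way
    boost::mutex* first  = (&m1 < &m2) ? &m1 : &m2;
    boost::mutex* second = (&m1 < &m2) ? &m2 : &m1;
 
    boost::mutex::scoped_lock lockFirst(*first);
    boost::mutex::scoped_lock lockSecond(*second);
 
    // ... use both shared resources here ...
} // locks released in reverse order as the scoped_locks are destroyed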

See also:
Mutex


mutex tutorial

Mutexes

A mutex is an OS-level synchronization primitive that can be used to ensure a section of code can only be executed by one thread at a time.
It has two states: locked and unlocked. While a mutex is locked, any further attempt to lock it will block (the calling thread is suspended). When the mutex becomes unlocked, if there are threads waiting, one of them is resumed and locks the mutex. Furthermore, a mutex may only be unlocked by the thread that locked it.
If we have a resource we need to share between threads, we associate a mutex with it and use the mutex to synchronize access to the resource. All we need to do is ensure our code locks the mutex before using the resource and unlocks it when finished. This prevents race conditions arising from multiple threads simultaneously accessing that resource. A minimal sketch of the pattern follows.
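As an illustration (the names sharedResource and useResource are hypothetical, and this assumes a recent Boost in which boost::mutex exposes lock() and unlock() directly):
#include <boost/thread/mutex.hpp>
 
int sharedResource = 0;
boost::mutex resourceMutex;
 
void useResource()
{
    resourceMutex.lock();   // blocks if another thread holds the mutex
    ++sharedResource;       // critical section: one thread at a time runs this
    resourceMutex.unlock(); // wakes one waiting thread, if any
}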

Diagram 1. Two threads contending for a mutex


Note: We will soon be updating the mutex section above with more documents and sample code.

Mutexes in Practice - Boost.Threads solution

Boost.Threads is a part of the excellent Boost libraries. It has been intelligently designed to enhance safety by making error-prone code more difficult to write.
We'll be using Boost.Threads throughout the tutorial, since we may as well get used to a well-designed C++ library from the outset. Furthermore, the threading support in the C++11 standard was modelled on Boost.Threads, so learning this library will help future-proof your C++ skills.
Listing 2. Boost.Threads synchronisation
#include <boost/thread/mutex.hpp>
 
void doSomeWork(); // assumed to be defined elsewhere
 
int sharedCounter = 50;
boost::mutex counterMutex;
 
void solutionWorkerThread()
{
    while(sharedCounter > 0)
    {
        bool doWork = false;
        {
            // scoped_lock locks the mutex in its constructor
            boost::mutex::scoped_lock lock(counterMutex);
            if(sharedCounter > 0)
            {
                --sharedCounter;
                doWork = true;
            }
            // scoped_lock unlocks the mutex automatically at end of scope
        }
 
        if(doWork) doSomeWork();
    }
}
In the above solution, the shared counter is checked and updated as an atomic operation (with respect to multiple threads) so the race condition is solved.
Note the way the scoped_lock works: the constructor locks the associated mutex and the destructor unlocks it. This is the RAII (Resource Acquisition Is Initialization) idiom, and it helps with exception safety. If an exception were thrown while we had the mutex locked, the scoped_lock would be destroyed during the normal stack-unwinding process and the mutex would be released automatically.
Exception safety is not an issue with this simple example, since no statement can throw while we have the mutex locked. However, real-world code will almost always benefit from the scoped_lock design.
Unfortunately, concurrent code can have many problems: race conditions are only the most fundamental. The next problem we'll cover is called Deadlock, and it commonly arises from the interaction of blocking mutexes.

race condition in threads

Race Conditions

A race condition is where the behavior of code depends on the interleaving of multiple threads. This is perhaps the most fundamental problem with multi-threaded programming.
When analyzing or writing single-threaded code we only have to think about the sequence of statements right in front of us; we can assume that data will not magically change between statements. However, with improperly written multi-threaded code non-local data can change unexpectedly due to the actions of another thread.
Race conditions can result in a high-level logical fault in your program, or (more excitingly) they may even pierce C++'s statement-level abstraction. That is, we cannot assume that single C++ statements execute atomically, because they may compile to multiple assembly instructions. In short, this means we cannot guarantee the outcome of a statement such as
foo += 1;
if foo is non-local and may be accessed from multiple threads.
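To see why, consider what the compiler may generate for that statement (a sketch; the exact instructions depend on the compiler and the target architecture):
// foo += 1; typically compiles to three separate machine-level steps:
//
//   load   register <- foo   ; read the shared value into a register
//   add    register,  1      ; increment the thread's private copy
//   store  foo <- register   ; write the result back to memory
//
// another thread may be scheduled between any two of these steps, so two
// concurrent increments can read the same old value and one update is lost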
A contrived example follows.
Listing 1. A logical race condition
#include <cstddef> // for NULL
 
void doSomeWork(); // assumed to be defined elsewhere
 
int sharedCounter = 50;
 
void* workerThread(void*)
{
    while(sharedCounter > 0)
    {
        doSomeWork();
        --sharedCounter;
    }
    return NULL; // a pthread-style thread function must return a value
}
Now imagine that we start a number of threads, all executing workerThread(). If we have just one thread, doSomeWork() is going to be executed the correct number of times (whatever sharedCounter starts out at).
However, with more than one thread, doSomeWork() will most likely be executed too many times. Exactly how many times depends on the number of threads spawned, the computer architecture, operating system scheduling and... chance. The problem arises because we do not test and update sharedCounter as an atomic operation, so there is a period during which the value of sharedCounter is out of date. During this time other threads can pass the test when they really shouldn't.
The value of sharedCounter on exit tells us how many extra times doSomeWork() was called. With a single thread, the final value of sharedCounter is of course 0. With N threads running it can end up anywhere between 0 and -(N-1): in the worst case, all N threads pass the test while the counter is 1, and each then decrements it.
Moving the update adjacent to the test will not make these two operations atomic. The window during which sharedCounter is out of date will be smaller, but the race condition remains. An illustration of this non-solution follows:
Listing 2. Still a race condition
void* workerThread(void*)
{
    while(sharedCounter > 0)
    {
        --sharedCounter;
        doSomeWork();
    }
    return NULL;
}


The solution is to use a mutex to synchronize the threads with respect to the test and the update. Another way of saying this is that we need to define a critical section in which we both test and update sharedCounter. The next section introduces mutexes and solves this race condition.
