Effective Concurrency: Know When to Use an Active Object Instead of a Mutex

This month’s Effective Concurrency column, “Know When to Use an Active Object Instead of a Mutex,” is now live on DDJ’s website.

From the article:

Let’s say that your program has a shared log file object. The log file is likely to be a popular object; lots of different threads must be able to write to the file; and to avoid corruption, we need to ensure that only one thread may be writing to the file at any given time.

Quick: How would you serialize access to the log file?

Before reading on, please think about the question and pencil in some pseudocode to vet your design. More importantly, especially if you think this is an easy question with an easy answer, try to think of at least two completely different ways to satisfy the problem requirements, and jot down a bullet list of the advantages and disadvantages they trade off.

Ready? Then let’s begin.
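To make the comparison concrete, here is a hedged sketch of two designs that satisfy the requirement: a mutex-guarded log, and an active-object log whose private worker thread does all the writing. The class names and details are invented for illustration, not taken from the article.

```cpp
#include <cassert>
#include <condition_variable>
#include <deque>
#include <functional>
#include <future>
#include <mutex>
#include <string>
#include <thread>

// Design 1: a mutex. Every caller blocks while its own write happens.
class LockedLog {
public:
    void Write(const std::string& line) {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_contents += line + '\n';  // stands in for the real file I/O
    }
    std::string Contents() {
        std::lock_guard<std::mutex> lock(m_mutex);
        return m_contents;
    }
private:
    std::mutex m_mutex;
    std::string m_contents;
};

// Design 2: an active object. Callers enqueue a message and return at once;
// a single private worker thread performs every write, so the log data
// itself needs no locking.
class ActiveLog {
public:
    ActiveLog() : m_worker([this] { Run(); }) {}
    ~ActiveLog() {
        Send([this] { m_done = true; });  // "poison" message; queue drains first
        m_worker.join();
    }
    void Write(const std::string& line) {
        Send([this, line] { m_contents += line + '\n'; });
    }
    std::string Contents() {  // synchronous read, for demonstration only
        std::promise<std::string> p;
        auto result = p.get_future();
        Send([this, &p] { p.set_value(m_contents); });
        return result.get();
    }
private:
    void Send(std::function<void()> msg) {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_queue.push_back(std::move(msg));
        m_cond.notify_one();
    }
    void Run() {  // executes on m_worker only
        while (!m_done) {
            std::function<void()> msg;
            {
                std::unique_lock<std::mutex> lock(m_mutex);
                m_cond.wait(lock, [this] { return !m_queue.empty(); });
                msg = std::move(m_queue.front());
                m_queue.pop_front();
            }
            msg();  // only the worker thread ever touches m_contents
        }
    }
    std::mutex m_mutex;
    std::condition_variable m_cond;
    std::deque<std::function<void()>> m_queue;
    bool m_done = false;
    std::string m_contents;
    std::thread m_worker;  // declared last: its thread starts Run() immediately
};
```

The trade-off the article explores: the mutex version makes every caller wait for the I/O to finish, while the active-object version lets callers return immediately and serializes writes naturally through its message queue.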

I hope you enjoy it. Finally, here are links to previous Effective Concurrency columns:

1 The Pillars of Concurrency (Aug 2007)

2 How Much Scalability Do You Have or Need? (Sep 2007)

3 Use Critical Sections (Preferably Locks) to Eliminate Races (Oct 2007)

4 Apply Critical Sections Consistently (Nov 2007)

5 Avoid Calling Unknown Code While Inside a Critical Section (Dec 2007)

6 Use Lock Hierarchies to Avoid Deadlock (Jan 2008)

7 Break Amdahl’s Law! (Feb 2008)

8 Going Superlinear (Mar 2008)

9 Super Linearity and the Bigger Machine (Apr 2008)

10 Interrupt Politely (May 2008)

11 Maximize Locality, Minimize Contention (Jun 2008)

12 Choose Concurrency-Friendly Data Structures (Jul 2008)

13 The Many Faces of Deadlock (Aug 2008)

14 Lock-Free Code: A False Sense of Security (Sep 2008)

15 Writing Lock-Free Code: A Corrected Queue (Oct 2008)

16 Writing a Generalized Concurrent Queue (Nov 2008)

17 Understanding Parallel Performance (Dec 2008)

18 Measuring Parallel Performance: Optimizing a Concurrent Queue (Jan 2009)

19 volatile vs. volatile (Feb 2009)

20 Sharing Is the Root of All Contention (Mar 2009)

21 Use Threads Correctly = Isolation + Asynchronous Messages (Apr 2009)

22 Use Thread Pools Correctly: Keep Tasks Short and Nonblocking (Apr 2009)

23 Eliminate False Sharing (May 2009)

24 Break Up and Interleave Work to Keep Threads Responsive (Jun 2009)

25 The Power of “In Progress” (Jul 2009)

26 Design for Manycore Systems (Aug 2009)

27 Avoid Exposing Concurrency – Hide It Inside Synchronous Methods (Oct 2009)

28 Prefer structured lifetimes – local, nested, bounded, deterministic (Nov 2009)

29 Prefer Futures to Baked-In “Async APIs” (Jan 2010)

30 Associate Mutexes with Data to Prevent Races (May 2010)

31 Prefer Using Active Objects Instead of Naked Threads (Jun 2010)

32 Prefer Using Futures or Callbacks to Communicate Asynchronous Results (Aug 2010)

33 Know When to Use an Active Object Instead of a Mutex (Sep 2010)

10 thoughts on “Effective Concurrency: Know When to Use an Active Object Instead of a Mutex”

  1. Thanks, Herb. I have profited from these articles. I hope you will treat exceptions and error handling at some point soon.

  2. Thanks for the nice article :)

    I don’t know if this is the right place to ask and get answered, but here are some questions:

    Isn’t there a problem with the “queue” approach if the “write” operation produces tasks faster than the log thread can consume them?
    Can you give some good practices for avoiding that?

    I usually handle fast producers by adding a semaphore that blocks them once some limit is exceeded. In this case, the call to “write” would be asynchronous while the limit is not hit, and blocking otherwise (the check is based only on the semaphore logic). The implementation is roughly this: the semaphore starts at N, where N is the desired limit; producers decrement it and consumers increment it. For a queue, the limit is the maximum number of pending tasks at any time. If N can’t be fine-tuned at initialization time, the consumer can tune it as it works (though that may not always be a good idea?).
    I would be grateful if you could share your opinion on this approach, both for logging and in general, since I am not sure of the drawbacks of limiting access with a semaphore.
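The semaphore-style limit described in this comment can be sketched as a bounded queue. This is an invented illustration in standard C++ (not code from the article): producers block inside Send once `limit` items are pending, and each Receive wakes one blocked producer.

```cpp
#include <condition_variable>
#include <cstddef>
#include <deque>
#include <mutex>

// Bounded queue: backpressure for fast producers. The capacity check plays
// the role of the counting semaphore described above.
template <typename T>
class BoundedQueue {
public:
    explicit BoundedQueue(std::size_t limit) : m_limit(limit) {}

    void Send(T item) {  // blocks while the queue is full
        std::unique_lock<std::mutex> lock(m_mutex);
        m_notFull.wait(lock, [this] { return m_queue.size() < m_limit; });
        m_queue.push_back(std::move(item));
        m_notEmpty.notify_one();
    }

    T Receive() {  // blocks while the queue is empty
        std::unique_lock<std::mutex> lock(m_mutex);
        m_notEmpty.wait(lock, [this] { return !m_queue.empty(); });
        T item = std::move(m_queue.front());
        m_queue.pop_front();
        m_notFull.notify_one();  // wake one producer blocked in Send
        return item;
    }

private:
    const std::size_t m_limit;
    std::mutex m_mutex;
    std::condition_variable m_notFull, m_notEmpty;
    std::deque<T> m_queue;
};
```

One drawback worth noting: once the limit is hit, “asynchronous” callers silently become blocking callers, which can propagate stalls back into the producing threads.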

  3. Nice article, but not very revelatory. So-called “active objects” simply use a mutex inside. But of course, some obvious things must be said explicitly sometimes.

  4. I believe waiting on a message queue is a lot better than waiting on the file I/O to complete. Plus, Herb has another article on doing lock-free queues.

  5. Thanks for such a nice article!

    Is the following a stupid question?

    How do you make message_queue work for a non-GUI thread (without using PostMessage)? – “If the queue is empty, pop blocks until something is available.”

    Is a mutex plus a condition variable OK for this?
    Or is there a better idea?

  6. J.J. Lee,

    If your OS doesn’t provide a multithreaded message queue, yes, you might create one with a condition variable. Something like this (untested code, using Boost.Thread since I don’t have C++0x threads):

    class MessageQueue
    {
    public:
        typedef std::function<void()> Message;

        void Send( const Message& msg )
        {
            boost::unique_lock<boost::mutex> lock( m_mutex );
            m_queue.push_back( msg );
            m_cond.notify_all();
        }

        Message Receive()
        {
            boost::unique_lock<boost::mutex> lock( m_mutex );
            while( m_queue.empty() )
            {
                m_cond.wait( lock );
            }
            const Message msg = m_queue.front();
            m_queue.pop_front();
            return msg;
        }

    private:
        std::deque<Message> m_queue;
        boost::mutex m_mutex;
        boost::condition_variable m_cond;
    };

Comments are closed.