BANGALORE, INDIA: On today's multi-core, shared-memory computing platforms, applications are typically parallelized with one threading model or another - extensions to sequential languages, such as Pthreads or OpenMP.
While this situation is less than ideal, it is current industry practice. The shared-state paradigm of typical threading models, in which all threads share the same memory with equal access to its data, requires careful protocols to prevent unintended consequences. The key difficulty is the race condition: threads access data in an unanticipated order, with unintended results.
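To make the hazard concrete, here is a minimal Pthreads sketch (the names are illustrative, not from this series): two threads increment a shared counter with no synchronization, and because the increment is a non-atomic read-modify-write, updates are usually lost.

/* Minimal race-condition sketch, assuming Pthreads (compile with -pthread). */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;       /* shared data, deliberately unprotected */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;             /* read-modify-write: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2000000; interleaved increments typically print less. */
    printf("counter = %ld\n", counter);
    return 0;
}

Worse, the result varies from run to run, which is exactly what makes such bugs hard to reproduce.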
Race conditions are the worst kind of bug: insidious, and very difficult to detect, reproduce, or diagnose. Tools are essential to mitigating these difficulties. Another category of irritants is deadlocks and livelocks, though because execution visibly halts, these are normally easier to diagnose and correct. These correctness challenges are widely discussed in threading books and articles, and will not be addressed here. We focus, instead, on a necessary component of threaded programming - locks - and their potential impact on performance.
Even when parallel programs have been scrubbed to the point of reasonable confidence in their correctness, there remains the potential unintended consequence of performance degradation. Applying a lock does, after all, "re-serialize" that portion of execution; the goal, then, is to use only as much locking as is necessary, and no more.
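One common way to honor that goal - a sketch under the same Pthreads assumption, not an example from this series - is to do the bulk of the work on thread-private data and hold the lock only for the brief shared update, so the serialized region stays as short as possible.

/* Keep the critical section minimal: compute privately, lock briefly. */
#include <pthread.h>

static pthread_mutex_t sum_lock = PTHREAD_MUTEX_INITIALIZER;
static double shared_sum = 0.0;

void accumulate(const double *data, int n)
{
    double local = 0.0;

    /* The expensive loop runs on private data: fully parallel. */
    for (int i = 0; i < n; i++)
        local += data[i];

    /* Only the shared update is serialized. */
    pthread_mutex_lock(&sum_lock);
    shared_sum += local;
    pthread_mutex_unlock(&sum_lock);
}

Had the lock been taken around the whole loop, every thread would queue behind it and the computation would effectively run sequentially.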
This scalability series assumes the code is correct, and looks at factors which may inhibit the scalability of the correctly running application. In this episode, we look at the efficient use of locks, and in particular at two guiding principles: lock the data, not the code, and use the right lock for the job.
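As a preview of the first principle, here is a hypothetical sketch (the account structure and functions are illustrative): associating a lock with each piece of data it protects, rather than wrapping one global lock around every call site, lets threads working on different data proceed in parallel.

/* "Lock the data, not the code": each datum carries its own mutex. */
#include <pthread.h>

struct account {
    pthread_mutex_t lock;   /* protects this account's balance only */
    double balance;
};

void account_init(struct account *a)
{
    pthread_mutex_init(&a->lock, NULL);
    a->balance = 0.0;
}

void deposit(struct account *a, double amount)
{
    pthread_mutex_lock(&a->lock);    /* lock just this account... */
    a->balance += amount;
    pthread_mutex_unlock(&a->lock);  /* ...not every deposit everywhere */
}

The second principle is about matching the lock type to the access pattern; for example, Pthreads offers pthread_rwlock_t, which allows concurrent readers of read-mostly data where a plain mutex would serialize them.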