Are there deprecated practices for multithreaded and multiprocessor programming that I should no longer use?

In the early days of FORTRAN and BASIC, essentially all programs were written with GOTO statements. The result was spaghetti code and the solution was structured programming.

Similarly, pointers can introduce hard-to-control behavior into our programs. C++ started with plenty of pointers, but the use of references is recommended instead. Libraries like the STL can reduce some of our dependency on them. There are also idioms for creating smart pointers that have better characteristics, and some versions of C++ permit references and managed code.

Programming practices like inheritance and polymorphism use a lot of pointers behind the scenes (just as the for, while, and do constructs of structured programming generate code filled with branch instructions). Languages like Java eliminate explicit pointers and use garbage collection to manage dynamically allocated data, instead of depending on programmers to match all their new and delete statements.

In my reading, I have seen examples of multi-process and multi-thread programming that don’t seem to use semaphores. Do they use the same thing under different names, or do they have new ways of structuring the protection of resources from concurrent use?

For example, one specific system for multithreaded programming on multicore processors is OpenMP. It represents a critical region as follows, without the use of semaphores, which do not seem to be included in the environment.

th_id = omp_get_thread_num();
#pragma omp critical
{
  cout << "Hello World from thread " << th_id << 'n';
}

This example is an excerpt from: http://en.wikipedia.org/wiki/OpenMP

Alternatively, similar protection of threads from each other using semaphores with functions wait() and signal() might look like this:

wait(sem);
th_id = get_thread_num();
cout << "Hello World from thread " << th_id << 'n';
signal(sem);

In this example, things are pretty simple, and a quick review is enough to show that the wait() and signal() calls are matched and that, even with a lot of concurrency, thread safety is provided. But other algorithms are more complicated, using multiple semaphores (both binary and counting) spread across multiple functions, with complex conditions that can be called by many threads. The consequences of creating deadlock, or of failing to make things thread safe, can be hard to manage.
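To make the comparison concrete, the same matched-pair discipline written against a real library semaphore, for instance Java's java.util.concurrent.Semaphore, where acquire() and release() play the roles of wait() and signal(), might look like the following sketch (the HelloSemaphore class and the thread count are just illustration):

import java.util.concurrent.Semaphore;

public class HelloSemaphore {
    // One permit makes this a binary semaphore used as a mutual-exclusion lock.
    private static final Semaphore sem = new Semaphore(1);

    public static void main(String[] args) {
        for (int i = 0; i < 4; i++) {
            final int thId = i;
            new Thread(() -> {
                sem.acquireUninterruptibly();   // wait(sem)
                try {
                    System.out.println("Hello World from thread " + thId);
                } finally {
                    sem.release();              // signal(sem)
                }
            }).start();
        }
    }
}

Even here, nothing but programmer discipline guarantees that acquire() and release() stay matched on every path.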

Do these systems like OpenMP eliminate the problems with semaphores?
Do they move the problem somewhere else?
How do I transform my favorite semaphore-based algorithm so that it no longer uses semaphores?

3

Are there concurrent programming techniques and practices that one should no longer use? I’d say yes.

One early concurrent programming technique that seems rare nowadays is interrupt-driven programming. This is how UNIX worked in the 1970s. See the Lions Commentary on UNIX or Bach’s Design of the UNIX Operating System. Briefly, the technique is to suspend interrupts temporarily while manipulating a data structure, and then to restore interrupts afterward. The BSD spl(9) man page has an example of this style of coding. Note that the interrupts are hardware-oriented, and the code embodies an implicit relationship between the kind of hardware interrupt and the data structures associated with that hardware. For example, code that manipulates disk I/O buffers needs to suspend interrupts from disk controller hardware while working with those buffers.

This style of programming was employed by operating systems on uniprocessor hardware. It was much rarer for applications to deal with interrupts. Some OSes had software interrupts, and I think people tried to build threading or coroutine systems on top of them, but this wasn’t very widespread. (Certainly not in the UNIX world.) I suspect that interrupt-style programming is confined today to small embedded systems or real-time systems.

Semaphores are an advance over interrupts because they are software constructs (not related to hardware), they provide abstractions over hardware facilities, and they enable multithreading and multiprocessing. The main problem is that they are unstructured. The programmer is responsible for maintaining the relationship between each semaphore and the data structures it protects, globally across the entire program. For this reason I think bare semaphores are rarely used today.

Another small step forward is a monitor, which encapsulates concurrency control mechanisms (locks and conditions) with the data being protected. This was carried over into the Mesa system and from there into Java. (If you read the Mesa paper, you can see that Java’s monitor locks and conditions are copied almost verbatim from Mesa.) Monitors are helpful in that a sufficiently careful and diligent programmer can write concurrent programs safely using only local reasoning about the code and data within the monitor.
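As a rough illustration (my own sketch, not code from the Mesa paper), a monitor-style class in Java keeps the lock, the conditions, and the protected data together in one object, so the reasoning can stay local:

import java.util.ArrayDeque;
import java.util.Deque;

// A tiny monitor: the intrinsic lock (synchronized), the condition
// (wait/notifyAll), and the protected data (queue) are encapsulated together.
public class BoundedBuffer<T> {
    private final Deque<T> queue = new ArrayDeque<>();
    private final int capacity;

    public BoundedBuffer(int capacity) { this.capacity = capacity; }

    public synchronized void put(T item) throws InterruptedException {
        while (queue.size() == capacity) {
            wait();                    // condition wait: buffer full
        }
        queue.addLast(item);
        notifyAll();                   // wake consumers waiting for data
    }

    public synchronized T take() throws InterruptedException {
        while (queue.isEmpty()) {
            wait();                    // condition wait: buffer empty
        }
        T item = queue.removeFirst();
        notifyAll();                   // wake producers waiting for space
        return item;
    }
}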

There are additional library constructs, such as those in Java’s java.util.concurrent package, which includes a variety of highly concurrent data structures and thread pooling constructs. These can be combined with additional techniques such as thread confinement and effective immutability. See Java Concurrency in Practice by Goetz et al. for further discussion. Unfortunately, many programmers are still rolling their own data structures with locks and conditions, when they really ought to just be using something like ConcurrentHashMap, where the heavy lifting has already been done by the library authors.
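For example, here is a small sketch (mine, not from the book) of counting words from several threads with ConcurrentHashMap doing the heavy lifting; there are no explicit locks or conditions in sight:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class WordCount {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentMap<String, Integer> counts = new ConcurrentHashMap<>();
        String[] words = {"lock", "free", "lock", "map"};

        Runnable task = () -> {
            for (String w : words) {
                // Atomic read-modify-write supplied by the library; no explicit lock.
                counts.merge(w, 1, Integer::sum);
            }
        };

        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counts);    // e.g. {free=2, lock=4, map=2}
    }
}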

Everything above shares some significant characteristics: they have multiple threads of control that interact over globally shared, mutable state. The problem is that programming in this style is still highly error-prone. It’s quite easy for a small mistake to go unnoticed, resulting in misbehavior that is hard to reproduce and diagnose. It may be that no programmer is “sufficiently careful and diligent” to develop large systems in this fashion. At least, very few are. So, I’d say that multi-threaded programming with shared, mutable state should be avoided if at all possible.

Unfortunately it’s not entirely clear whether it can be avoided in all cases. A lot of programming is still done in this fashion. It would be nice to see this supplanted by something else. Answers from Jarrod Roberson and davidk01 point to techniques such as immutable data, functional programming, STM, and message-passing. There is much to recommend them, and all are being actively developed. But I don’t think they’ve fully replaced good old-fashioned shared mutable state just yet.

EDIT: here’s my response to the specific questions at the end.

I don’t know much about OpenMP. My impression is that it can be very effective for highly parallel problems such as numeric simulations. But it doesn’t seem general-purpose. The semaphore constructs seem pretty low-level and require the programmer to maintain the relationship between semaphores and shared data structures, with all the problems I described above.

If you have a parallel algorithm that uses semaphores, I don’t know of any general techniques to transform it. You might be able to refactor it into objects and then build some abstractions around it. But if you want to use something like message-passing, I think you really need to reconceptualize the entire problem.

5

The latest rage in academic circles seems to be Software Transactional Memory (STM), which promises to take all the hairy details of multi-threaded programming out of the hands of programmers by using sufficiently smart compiler technology. Behind the scenes it is still locks and semaphores, but you as the programmer don’t have to worry about it. The benefits of that approach are still not clear, and there are no obvious contenders.

Erlang uses message passing and agents for concurrency, and that is a simpler model to work with than STM. With message passing you have absolutely no locks and semaphores to worry about, because each agent operates in its own mini-universe, so there are no data-related race conditions. You still have some weird edge cases, but they are nowhere near as complicated as livelocks and deadlocks. JVM languages can make use of Akka and get all the benefits of message passing and actors, but unlike Erlang the JVM does not have built-in support for actors, so at the end of the day Akka still makes use of threads and locks; you as the programmer just don’t have to worry about it.
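To sketch the message-passing idea without pulling in Akka itself, here is a rough actor-like example in plain Java: the worker owns its state outright, and other threads interact with it only by posting messages onto its mailbox queue (the CounterActor name and the string messages are just illustration):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class CounterActor implements Runnable {
    // The mailbox: other threads only ever touch this queue, never the state below.
    private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();
    private long count = 0;    // private state, touched only by the actor's own thread

    public void send(String message) { mailbox.add(message); }

    @Override public void run() {
        try {
            while (true) {
                String msg = mailbox.take();       // block until a message arrives
                if (msg.equals("stop")) break;
                if (msg.equals("increment")) count++;
            }
            System.out.println("final count: " + count);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        CounterActor actor = new CounterActor();
        new Thread(actor).start();
        actor.send("increment");
        actor.send("increment");
        actor.send("stop");
    }
}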

The other model I’m aware of that doesn’t use locks and threads is futures, which are really just another form of async programming.
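In Java, for instance, a future-based version of some work might be sketched like this with CompletableFuture; the caller composes results and never touches a lock directly (the class and method names are just illustration):

import java.util.concurrent.CompletableFuture;

public class FutureExample {
    public static void main(String[] args) {
        // Run the expensive work asynchronously and describe what happens to
        // its result, without explicit threads, locks, or semaphores.
        CompletableFuture<Integer> answer =
                CompletableFuture.supplyAsync(FutureExample::expensiveComputation)
                                 .thenApply(n -> n * 2);

        System.out.println("doing other work...");
        System.out.println("result: " + answer.join());   // waits only at the end
    }

    private static int expensiveComputation() {
        return 21;   // stand-in for real work
    }
}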

I’m not sure how much of this technology is available in C++, but chances are that if you see something that is not explicitly using threads and locks, it will be one of the above techniques for managing concurrency.

10

Answer to the Question

The general consensus is that shared mutable state is Bad™ and immutable state is Good™, which has been proven accurate and true again and again by functional languages and imperative languages alike.

The problem is that mainstream imperative languages are just not designed to handle this way of working, and things aren’t going to change for those languages overnight. This is where the comparison to GOTO is flawed. Immutable state and message passing are a great solution, but they aren’t a panacea either.

Flawed Premise

This question is based on a comparison to a flawed premise: that GOTO was the actual problem and was somehow universally deprecated by the Intergalactic Universal Board of Language Designers and Software Engineering Unions©! Without a GOTO mechanism, ASM wouldn’t work at all. The same goes for the premise that raw pointers are the problem with C or C++ and that smart pointers are somehow a panacea; they aren’t.

GOTO wasn’t the problem; programmers were the problem. The same goes for shared mutable state. It in and of itself isn’t the problem; it is the programmers using it who are the problem. If there were a way to generate code that used shared mutable state in a way that never had any race conditions or bugs, then it would not be an issue. Much the same way, if you never write spaghetti code with GOTO or equivalent constructs, it isn’t an issue either.

Education is the Panacea

Idiot programmers are what were deprecated. Every popular language still has the GOTO construct, either directly or indirectly, and it is a best practice when properly used in every language that has this type of construct.

EXAMPLE: Java has labels and try/catch/finally, both of which work directly as GOTO statements.
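A quick illustrative sketch of both constructs (the names are made up):

public class GotoLikeJava {
    public static void main(String[] args) {
        int[][] grid = { {1, 2}, {3, -1}, {5, 6} };

        // A labeled break jumps straight out of both loops, GOTO-style.
        search:
        for (int[] row : grid) {
            for (int cell : row) {
                if (cell < 0) {
                    System.out.println("found a negative value");
                    break search;
                }
            }
        }

        // try/finally forces control to pass through the finally block
        // no matter how the try block is exited.
        try {
            System.out.println("doing work");
            return;
        } finally {
            System.out.println("cleanup always runs");
        }
    }
}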

Most Java programmers I talk to don’t even know what immutable actually means beyond repeating that the String class is immutable, with a zombie-like look in their eyes. They definitely don’t know how to use the final keyword properly to create an immutable class. So I am pretty sure they have no idea why message passing using immutable messages is so great and why shared mutable state is so not great.
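For the record, a properly immutable message class looks something like this sketch (the TransferRequest name and fields are hypothetical): every field is final, there are no setters, and the class itself is final so it can’t be subclassed into something mutable.

// An immutable message: safe to share between threads without any locking.
public final class TransferRequest {
    private final String fromAccount;
    private final String toAccount;
    private final long amountCents;

    public TransferRequest(String fromAccount, String toAccount, long amountCents) {
        this.fromAccount = fromAccount;
        this.toAccount = toAccount;
        this.amountCents = amountCents;
    }

    public String fromAccount() { return fromAccount; }
    public String toAccount()   { return toAccount; }
    public long amountCents()   { return amountCents; }

    // "Modification" produces a new object instead of mutating this one.
    public TransferRequest withAmountCents(long newAmount) {
        return new TransferRequest(fromAccount, toAccount, newAmount);
    }
}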

21

I think this is mostly about levels of abstraction. Quite often in programming, it is useful to abstract away some details in a way that is safer or more readable or something like that.

This applies to control structures: ifs, fors, and even try-catch blocks are just abstractions over gotos. These abstractions are almost always useful, because they make your code more readable. But there are cases when you will still need to use goto (e.g. if you’re writing assembly by hand).

This also applies to memory management: C++ smart pointers and GC are abstractions over raw pointers and manual memory allocation and deallocation. And sometimes these abstractions are not appropriate, e.g. when you really need maximum performance.

And the same applies to multi-threading: things like futures and actors are just abstractions over threads, semaphores, mutexes, and CAS instructions. Such abstractions can help you make your code much more readable, and they also help you avoid errors. But sometimes, they are simply not appropriate.
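For instance, the CAS level of that stack is what Java’s AtomicInteger exposes; the higher-level tools are built out of retry loops like this sketch (the class and method are just illustration):

import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    // A classic CAS retry loop: read, compute, and attempt to swap;
    // if another thread got there first, read again and retry.
    public int incrementAndDouble() {
        while (true) {
            int current = value.get();
            int next = (current + 1) * 2;
            if (value.compareAndSet(current, next)) {
                return next;
            }
        }
    }

    public static void main(String[] args) {
        CasCounter c = new CasCounter();
        System.out.println(c.incrementAndDouble());   // 2
        System.out.println(c.incrementAndDouble());   // 6
    }
}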

You should know what tools you have available and what their advantages and disadvantages are. Then you can choose the correct abstraction for your task (if any). Higher levels of abstraction don’t deprecate lower levels; there will always be some cases where the abstraction is not appropriate and the best choice is to use the “old way”.

1

Yes, but you’re not likely to run into some of them.

In the old days, it was common to use blocking methods (barrier synchronization) because writing good mutexes was hard to do right. You can still see traces of this in relatively recent code. Using modern concurrency libraries gives you a much richer, and thoroughly tested, set of tools for parallelization and inter-process coordination.

Likewise, an older practice was to write tortuous code structured so that you could figure out how to parallelize it manually. This form of optimization (potentially harmful, if you get it wrong) has also largely gone out the window with the advent of compilers that do this for you, unwinding loops if necessary, predictively following branches, and so on. This is not new technology, however; it has been on the market for at least 15 years. Taking advantage of things like thread pools also circumvents some really tricksy code of yesteryear.
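As a sketch of the modern-library alternative, handing work to a thread pool and letting the executor deal with the threads might look like this (the task and class names are just illustration):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PoolExample {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);   // reusable worker threads
        try {
            List<Callable<Long>> tasks = new ArrayList<>();
            for (int i = 1; i <= 8; i++) {
                final long n = i;
                tasks.add(() -> n * n);    // each task is independent; no shared state
            }
            long total = 0;
            for (Future<Long> f : pool.invokeAll(tasks)) {
                total += f.get();          // all tasks have completed when invokeAll returns
            }
            System.out.println("sum of squares: " + total);   // 204
        } finally {
            pool.shutdown();
        }
    }
}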

So perhaps the deprecated practice is writing concurrency code yourself, instead of using modern, well-tested libraries.

1

Apple’s Grand Central Dispatch is an elegant abstraction that changed my thinking about concurrency. Its focus on queues makes implementing asynchronous logic an order of magnitude simpler, in my humble experience.

When I program in environments where it’s available, it has replaced most of my uses of threads, locks, and inter-thread communication.

One of the major changes to parallel programming is that CPUs are tremendously faster than before, but to achieve that performance they require a nicely filled cache. If you try to run several threads at the same time, swapping between them continually, you’re nearly always going to be invalidating the cache for each thread (i.e. each thread requires different data to operate on), and you end up killing performance much more than you used to with slower CPUs.

This is one reason why async or task-based frameworks (e.g. Grand Central Dispatch or Intel’s TBB) are more popular: they run code one task at a time, getting it finished before moving on to the next one. However, you must code each task to take little time, unless you want to screw up the design (i.e. your parallel tasks are really queued). CPU-intensive tasks are passed to an alternative CPU core rather than processed on the single thread handling all the tasks. It’s also easier to manage if there is no truly multi-threaded processing going on.

