Imagine a hypothetical programming environment that is largely like Java or .NET, i.e. object-oriented, garbage-collected, etc., but with one small change:
Every time you call a method on an object, a lock is obtained so that no two threads can execute methods on the same object at the same time. The lock is relinquished when the method returns.
The lock is also temporarily relinquished when you call Thread.Join (this is just to prevent certain kinds of deadlock situations).
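For concreteness, these semantics are essentially Java's `synchronized` applied to every method automatically. A rough sketch of the idea in Python — the `monitor` decorator and `Counter` class are purely illustrative, not how any real runtime implements this:

```python
import functools
import threading

def monitor(cls):
    """Class decorator sketching the hypothetical semantics: every method
    runs under a per-instance re-entrant lock. Simplified -- it ignores
    static methods, properties, and dunder methods."""
    def wrap(fn):
        @functools.wraps(fn)
        def locked(self, *args, **kwargs):
            with self._monitor_lock:     # acquired on entry, released on return
                return fn(self, *args, **kwargs)
        return locked

    for name, attr in list(vars(cls).items()):
        if callable(attr) and not name.startswith("__"):
            setattr(cls, name, wrap(attr))

    original_init = cls.__init__
    @functools.wraps(original_init)
    def init(self, *args, **kwargs):
        self._monitor_lock = threading.RLock()  # re-entrant: a method may call another
        original_init(self, *args, **kwargs)
    cls.__init__ = init
    return cls

@monitor
class Counter:
    def __init__(self):
        self.n = 0
    def increment(self):
        self.n += 1

c = Counter()
c.increment()
print(c.n)  # 1
```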
How severe would the performance impact of that really be? How badly would this interfere with concurrency? Are there any algorithms/programs that it would be impossible to parallelise in this kind of environment?
In my previously posted question, a link was given to this page about the GIL in CPython, which claims that this approach causes severe performance and concurrency problems. However, that page also states that CPython uses reference counting, and that every increment/decrement of a reference count is an operation that takes such a lock. Let's not assume that here; .NET's GC is mark-and-sweep, not reference-counted.
You would almost certainly end up single-threading everything, because any attempt to multithread would end in deadlock!
The GIL in Python avoids deadlocks because it is a single lock on a single object; a deadlock is simply impossible when only one lock exists.
As soon as you have locks on two or more objects, a deadlock becomes possible. A lock on every object would make deadlock a near certainty.
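To see why, here is the classic two-lock inversion, sketched in Python with explicit locks standing in for two objects' per-object method locks (names are illustrative). Thread 1 holds A and will want B next; thread 2 holds B and wants A; with blocking acquires, neither can ever proceed:

```python
import threading

lock_a = threading.Lock()  # stands in for object A's per-object lock
lock_b = threading.Lock()  # stands in for object B's per-object lock
outcome = {}

def thread2():
    # Thread 2 locks B first, then needs A -- but thread 1 already holds A.
    with lock_b:
        outcome["got_a"] = lock_a.acquire(blocking=False)
        if outcome["got_a"]:
            lock_a.release()

lock_a.acquire()  # thread 1 holds A and would block waiting for B next
t = threading.Thread(target=thread2)
t.start()
t.join()
lock_a.release()

# With blocking acquires both threads would wait forever; the non-blocking
# probe just makes the conflict observable without actually hanging.
print(outcome["got_a"])  # False
```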
tl;dr: The impact will depend on how the locks are implemented and how much concurrent access you actually have in your program. This solves SOME concurrency problems, but not all of them, so it doesn’t absolve you from understanding and thinking about what’s going on in your program.
If we assume a fast-when-uncontested lock implementation, then the impact mostly depends on how much cross-thread interaction you actually have. I don’t know about you, but most of my concurrent programming ends up being ‘feed values to the other thread’ and ‘consume values from the other thread’ sorts of things – the points of interaction tend to be very narrow.
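That narrow-interaction style is cheap to make safe with one shared structure rather than locks on every object; a minimal Python sketch using a thread-safe queue (names illustrative):

```python
import queue
import threading

q = queue.Queue()          # the one narrow point of cross-thread interaction
SENTINEL = None            # signals 'no more values'

def producer():
    for i in range(5):
        q.put(i)           # 'feed values to the other thread'
    q.put(SENTINEL)

def consumer(out):
    while True:
        item = q.get()     # 'consume values from the other thread'
        if item is SENTINEL:
            break
        out.append(item)

received = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(received,))
t1.start(); t2.start()
t1.join(); t2.join()
print(received)  # [0, 1, 2, 3, 4]
```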
Off-hand I tend to think it won’t be worth the overhead in most cases. You may be thinking that it will save all sorts of cross-thread havoc, but it only solves a portion of the cross-thread interaction difficulties.
If you go with this model, then you have to make sure that ANYTHING you want to do atomically to an object can be expressed as a single method call. If it can't be, then the object can still end up in an inconsistent state between calls, even with the locks.
Consider, for a moment, an object that holds a person’s contact details. Assume that we wish to update both the phone number and e-mail address. If you have one method per thing to update (simple setters), then even though these setters have locks, the object can still be in an inconsistent state in between the calls to update the phone number and e-mail.
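A Python sketch of that hazard, with an explicit lock standing in for the hypothetical automatic per-method lock (the `Contact` class is illustrative):

```python
import threading

class Contact:
    def __init__(self, phone, email):
        self._lock = threading.Lock()  # stands in for the automatic per-object lock
        self.phone = phone
        self.email = email

    def set_phone(self, phone):
        with self._lock:               # each setter is individually atomic...
            self.phone = phone

    def set_email(self, email):
        with self._lock:
            self.email = email

    def snapshot(self):
        with self._lock:
            return (self.phone, self.email)

c = Contact("555-0100", "old@example.com")
c.set_phone("555-0199")
# ...but between the two setter calls, another thread can observe the new
# phone number paired with the old e-mail address:
mid_update = c.snapshot()
c.set_email("new@example.com")
print(mid_update)  # ('555-0199', 'old@example.com')
```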
The only way to safely update such an object is either to have some way to lock the object as a whole for the duration of the compound update (Java allows this with an external synchronized (obj) { ... } block), and release it when done, OR to have 'combinatorial' methods that cover any set of simultaneous updates you'll ever want to make.
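The 'combinatorial method' option looks like this in a Python sketch, with an explicit lock playing the role of the per-object lock (names are illustrative):

```python
import threading

class Contact:
    def __init__(self, phone, email):
        self._lock = threading.Lock()  # stands in for the per-object lock
        self.phone = phone
        self.email = email

    def update_contact(self, phone, email):
        # One 'combinatorial' method: both fields change under a single
        # lock acquisition, so no thread can observe a half-updated pair.
        with self._lock:
            self.phone = phone
            self.email = email

    def snapshot(self):
        with self._lock:
            return (self.phone, self.email)

c = Contact("555-0100", "old@example.com")
c.update_contact("555-0199", "new@example.com")
print(c.snapshot())  # ('555-0199', 'new@example.com')
```

The drawback, of course, is that you need one such method for every combination of fields you ever want to change together.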
I’m not saying the technique is never useful, of course. But for average programming problems, I suspect it hurts more than it helps.