I’m not sure how locks work with multiple processors/cores. I’m going to use shared memory (or a FIFO) in my application, and what worries me is the locking: I have a server with 2 CPUs, so how will a lock be able to protect a variable on one CPU while another operation on the same variable runs on the 2nd CPU?
I’m in a Linux environment using C. I’ve searched the web for useful info, and I appreciate your help.
Intel's articles offer a good point of view on this.
There is also a short article about the top 3 solutions in multithreaded programming.
You didn’t specify this, but since you are on Linux and programming in C, you are probably talking about pthreads, and there is a beginner tutorial that covers the main aspects of this topic.
Assuming you’re using pthreads, you have to be aware that locks are implemented as functions and macros.
That means a lock doesn’t make a variable available or unavailable to a CPU, nor does it send operations to the other CPU(s). What you do is spawn threads by calling pthread_create()
with a function pointer as a parameter. Typically several threads are spawned with the same function and different parameters, so each runs the same code on slightly different data.
Ideally, most of the code should work on different data, with each thread allocating private blocks so it can run without checking for collisions.
Of course, there’s always some shared part, or some communication between threads, or some common store structure. For this you use locks.
In short, every shared resource must have a lock. Before accessing the resource you acquire the lock with pthread_mutex_lock()
(or a similar function), and you release it with pthread_mutex_unlock()
just after using the resource.
When no other thread is trying to use the same resource, these functions return quickly and your code proceeds as intended. But if a thread tries to take a lock while another thread already holds it, the second thread blocks and cannot continue until the first one releases it.
In short, surrounding a piece of code with lock/unlock calls guarantees that only one thread at a time will execute the code protected by that specific lock.
To maximize the concurrency of your code, hold locks for as few operations as possible, for the shortest time, and with the most specificity (different locks for different data structures, even if the code that uses them is shared).
Beyond this, you have to be aware of deadlocks, livelocks, priority inversion, etc. But just making sure that each shared structure has an associated lock (and is used consistently!) is a good start.
Encapsulating the lock with the shared structure, and accessing the structure only through functions that enforce the lock, makes it much easier to get right.
It is the operating system’s responsibility to deal correctly with the fact that a particular system has multiple CPUs and/or cores. The OS guarantees that “everything will work.” (“It’s magic … yeah … no, it’s voodoo …”)
The impact on you, as the designer of a concurrent application, is that your locking and mutex strategy must be solid, because it is entirely possible for two different threads to be executing “physically simultaneously.”