I’ve got an embedded Linux project running on a Cortex-A8 (32-bit ARM). I have a handful of individual hardware resources that I need to lock for atomic operation, both between threads in the same process and between multiple processes. The hardware operation itself is fairly fast (single microseconds), so lock/unlock overhead could add up. Contention for any given resource will be rare, but when it does happen I still can’t afford to get stuck behind a priority inversion.
I’m looking to add this locking at the library level; just bury it in the device read and write calls so it’s correct everywhere. So it looks like my choices are:
- a named semaphore per resource, opened with sem_open when the resource handle is created
- a shared-memory region holding a process-shared pthread_mutex for each resource, initialized before any of the real-work processes start (rough sketches of both are below)
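Roughly what I have in mind for the semaphore version, just as a sketch; the naming scheme, the handle struct, and the helper names here are made up for illustration:

```c
/* Sketch of option 1: one named semaphore per resource, opened when the
 * resource handle is created. SEM_NAME_FMT, struct resource_handle, and the
 * function names are illustrative only.
 */
#include <fcntl.h>
#include <semaphore.h>
#include <stdio.h>
#include <sys/stat.h>

#define SEM_NAME_FMT "/hw_res_%d"   /* hypothetical per-resource name */

struct resource_handle {
    int    id;
    sem_t *lock;
};

/* Called when the resource handle is created. */
int open_resource(struct resource_handle *h, int id)
{
    char name[32];
    snprintf(name, sizeof(name), SEM_NAME_FMT, id);
    h->id = id;
    /* O_CREAT with an initial value of 1 makes it a binary lock; every
     * process that opens the same name shares the same semaphore. */
    h->lock = sem_open(name, O_CREAT, 0660, 1);
    return h->lock == SEM_FAILED ? -1 : 0;
}

/* Buried inside the library's device read/write call: */
int locked_hw_access(struct resource_handle *h)
{
    sem_wait(h->lock);
    /* ... single-microsecond hardware operation ... */
    sem_post(h->lock);
    return 0;
}
```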
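And the shared-memory mutex version, again only a sketch with made-up names (the /hw_locks segment, NUM_RESOURCES, and the init/attach helpers are placeholders); it assumes one setup process runs the init before any worker attaches, and that the toolchain and kernel support PTHREAD_PRIO_INHERIT, which matters to me because of the priority-inversion worry:

```c
/* Sketch of option 2: a process-shared pthread_mutex_t per resource,
 * living in a POSIX shared-memory segment. All names are illustrative.
 */
#include <fcntl.h>
#include <pthread.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

#define NUM_RESOURCES 8                  /* placeholder count */
#define SHM_NAME      "/hw_locks"        /* hypothetical segment name */

struct lock_region {
    pthread_mutex_t locks[NUM_RESOURCES];
};

static struct lock_region *region;

/* Run once, before any of the real-work processes start. */
int hw_locks_init(void)
{
    int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0660);
    if (fd < 0)
        return -1;
    if (ftruncate(fd, sizeof(struct lock_region)) < 0) {
        close(fd);
        return -1;
    }
    region = mmap(NULL, sizeof(struct lock_region),
                  PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);
    if (region == MAP_FAILED)
        return -1;

    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    /* Priority inheritance is the part aimed at the inversion concern. */
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    for (int i = 0; i < NUM_RESOURCES; i++)
        pthread_mutex_init(&region->locks[i], &attr);
    pthread_mutexattr_destroy(&attr);
    return 0;
}

/* Worker processes map the already-initialized region. */
int hw_locks_attach(void)
{
    int fd = shm_open(SHM_NAME, O_RDWR, 0);
    if (fd < 0)
        return -1;
    region = mmap(NULL, sizeof(struct lock_region),
                  PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);
    return region == MAP_FAILED ? -1 : 0;
}

/* Inside the library's device read/write path: */
static inline void hw_lock(int res)   { pthread_mutex_lock(&region->locks[res]); }
static inline void hw_unlock(int res) { pthread_mutex_unlock(&region->locks[res]); }
```

Either sketch builds with -pthread; on older glibc, shm_open and sem_open also need -lrt.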
These feel like roughly the same amount of work to implement, and I don’t care about the time it takes to initialize or connect to the structure.
I suppose a third answer is a kernel-mode driver and kernel mutexes, but that leaves me doing a whole mess of work. I’d hate to do all that without some reason to think it would be a substantial performance improvement.
So with all that said, anyone have a sense of which approach will have better runtime performance? Or if there’s just not much difference?