I wrote a Linux kernel module which receives data from a remote core through shared memory and passes it on to a user-space application. My driver is notified about new data via a mailbox interrupt triggered from the remote core.
When the application tries to read data and there is none available, the driver puts the process to sleep with wait_event_interruptible(). The IRQ handler then wakes it up with wake_up_interruptible().
I have noticed that my IRQ handler takes about 1 microsecond when no process is reading and about 20 microseconds when one is. The only difference is a single if-statement in my IRQ code, along the lines of if (someone is waiting) then wake_up_interruptible(), which either runs or not. Is it normal for the wakeup to take that long?
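For reference, here is a minimal sketch of the pattern described above; the identifiers (`rx_wq`, `data_ready`, `shm_read`, `mbox_irq_handler`) are placeholders, not my actual driver code:

```c
#include <linux/fs.h>
#include <linux/interrupt.h>
#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(rx_wq);
static bool data_ready;

static ssize_t shm_read(struct file *file, char __user *buf,
                        size_t len, loff_t *off)
{
    int ret;

    /* Sleep until the IRQ handler signals new data. */
    ret = wait_event_interruptible(rx_wq, data_ready);
    if (ret)
        return ret; /* interrupted by a signal */

    data_ready = false;
    /* ... copy_to_user() from the shared-memory buffer ... */
    return len;
}

static irqreturn_t mbox_irq_handler(int irq, void *dev_id)
{
    data_ready = true;

    /*
     * The if-statement mentioned above. Note that a bare
     * waitqueue_active() check needs a memory barrier (or
     * wq_has_sleeper()) to be race-free.
     */
    if (waitqueue_active(&rx_wq))
        wake_up_interruptible(&rx_wq);

    return IRQ_HANDLED;
}
```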
The weird effect appears when I start to overload the CPU (I did this to test the real-time capability of the reader application). When the load-test program, which occupies the CPU from time to time, runs at a high enough priority, it affects not only the reader process but also the IRQ! The IRQ, which should fire every millisecond, gets deferred by several milliseconds and "queued up"; the pending instances then all run in quick succession.
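The load-test program is essentially a user-space busy loop running under a real-time scheduling policy; a stripped-down sketch of that kind of load generator (the priority value and the burst/pause durations here are arbitrary placeholders):

```c
/* CPU load generator at real-time priority; run as root.
 * Build: gcc -O2 load.c -o load */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    struct sched_param sp = { .sched_priority = 80 }; /* arbitrary RT prio */

    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
        perror("sched_setscheduler");
        return 1;
    }

    for (;;) {
        /* Burn CPU for a while... */
        for (volatile unsigned long i = 0; i < 50000000UL; i++)
            ;
        /* ...then pause so the rest of the system gets some time. */
        usleep(10000);
    }
}
```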
I would expect the following:
- wake_up_interruptible() shouldn’t take that long
- the IRQ should not fall out of its 1ms pattern, even if the reader process gets preempted
How I analysed this:
- My IRQ handler sets a GPIO high when it starts and low when it returns. I monitor this GPIO with a logic analyser.
- I used in_interrupt() and an occasional printk() (once every thousand interrupts) to verify that I am truly in IRQ context (see the sketch after this list).
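A sketch of this instrumentation inside the handler; MEASURE_GPIO is a placeholder pin number, and the legacy gpio_* API is used here only for brevity:

```c
#include <linux/gpio.h>
#include <linux/interrupt.h>
#include <linux/preempt.h>
#include <linux/printk.h>

#define MEASURE_GPIO 42 /* placeholder; gpio_request()ed and set as output at init */

static unsigned long irq_count;

static irqreturn_t mbox_irq_handler(int irq, void *dev_id)
{
    gpio_set_value(MEASURE_GPIO, 1); /* rising edge = handler entry */

    /* Sanity check: log the context flag once per thousand interrupts. */
    if (++irq_count % 1000 == 0)
        printk(KERN_INFO "mbox irq: in_interrupt()=%d\n", !!in_interrupt());

    /* ... mailbox handling and the conditional wake_up_interruptible() ... */

    gpio_set_value(MEASURE_GPIO, 0); /* falling edge = handler exit */
    return IRQ_HANDLED;
}
```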
What could I be doing wrong?