I have a java program which listens on a port for input. Based on Input, it calls a webservice and then returns a success/failure back to the client program.
I fork a thread for each client connection. The response back to the client which connects to the program has to be quick.
These are the choices I am considering:
- use regular threads
- use ExecutorService with newFixedThreadPool
- use ExecutorService with newCachedThreadPool
The reason I am considering pools is that my threads are short-lived: they just call a webservice, return the result to the client, and close the connection.
I don’t think newFixedThreadPool would be the right choice, because connections would then wait in a queue to get a thread.
newCachedThreadPool would have been perfect except for one thing: threads die after a minute of idleness. In my case, I get bursts of connections, i.e. multiple connections, then a lull of a few minutes, then bursts again. I think the threads in the CachedThreadPool would die during the lull and would have to be recreated, so sometimes it may behave like option #1 (regular threads).
Ideally I would have loved newCachedThreadPool with a minimum, i.e. a setting which says the number of threads never goes below, say, 20. Idle threads are killed, but the count is never allowed to drop below that threshold.
Is there anything like this available? Or are there any better alternatives?
The methods in the Executors class are just convenience methods for common use cases. There are a lot more options available for creating thread pools.
To create the same thing that Executors.newCachedThreadPool() does, but with a minimum of 20 threads, use the following (copied from the method in Executors, with the core pool size changed from 0 to 20):
return new ThreadPoolExecutor(20, Integer.MAX_VALUE,
60L, TimeUnit.SECONDS,
new SynchronousQueue<Runnable>());
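Wrapped in a helper method, that constructor call can be sketched like this (the method name and the core size of 20 are illustrative):

```java
import java.util.concurrent.*;

public class MinCachedPool {
    // Like Executors.newCachedThreadPool(), but keeps at least `coreThreads`
    // threads alive. Threads above the core size are reclaimed after 60s idle.
    public static ExecutorService newMinCachedThreadPool(int coreThreads) {
        return new ThreadPoolExecutor(coreThreads, Integer.MAX_VALUE,
                                      60L, TimeUnit.SECONDS,
                                      new SynchronousQueue<Runnable>());
    }

    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool =
                (ThreadPoolExecutor) newMinCachedThreadPool(20);
        System.out.println(pool.getCorePoolSize()); // 20
        pool.submit(() -> System.out.println("handled connection"));
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Note that by default core threads never time out, which matches the "never go below 20" requirement; only the extra threads above the core size die after the keep-alive period.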
Interesting question.
I would recommend against newCachedThreadPool. It will spawn as many threads as necessary, without any upper limit. That is bad!
The approach suggested by Michael seems good. Use ThreadPoolExecutor with a core pool size you are comfortable with, and set the maximum number of threads to something you are willing to tolerate (or your server can handle).
Please note that there is pretty much nothing you can do once you exhaust your resources (the queue is full and the maximum number of threads are all busy). In that case, you can do one of two things: drop new connections, or apply back-pressure (don’t accept new work by blocking). ThreadPoolExecutor by default throws RejectedExecutionException when it’s full, but blocking behaviour is easy to implement. In fact, here’s an open source implementation of BlockingThreadPoolExecutor.
Regarding the RejectedExecutionException:
Instead of a BlockingThreadPoolExecutor, we can simply set the RejectedExecutionHandler on the ThreadPoolExecutor to the CallerRunsPolicy handler, so rejected tasks run on the submitting thread (which naturally throttles submission).
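A minimal sketch of that approach (the tiny pool and queue sizes here are chosen only to force a rejection quickly):

```java
import java.util.concurrent.*;

public class CallerRunsDemo {
    public static void main(String[] args) throws Exception {
        // One worker thread and a one-slot queue: the third task submitted
        // below is rejected by the pool, and CallerRunsPolicy makes it run
        // on the submitting (main) thread instead of throwing.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(1),
                new ThreadPoolExecutor.CallerRunsPolicy());

        for (int i = 0; i < 3; i++) {
            pool.execute(() -> {
                try { Thread.sleep(200); } catch (InterruptedException e) { }
                System.out.println("ran on " + Thread.currentThread().getName());
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Because the caller is busy running the overflow task, it cannot submit more work in the meantime, which is exactly the back-pressure effect described above.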
I think using a wrapper over a ThreadPoolExecutor is a good way to go: a wrapper lets you expose some initialization (like a pool name, number of threads, etc.) and a method to add a task (Runnable) to a pool created earlier with one of your init methods.
The wrapper can change which pool it uses without affecting other code, and can expose other features, like the number of threads and audits (before/after each task), in a consistent fashion.
You can also implement your own ThreadFactory for your pool, which can set a custom name and priority, and at the very least hook an UncaughtExceptionHandler so you can log any errors.
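A sketch of such a factory (the name prefix and the handler's logging behaviour are illustrative):

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class NamedThreadFactory implements ThreadFactory {
    private final String prefix;
    private final AtomicInteger counter = new AtomicInteger();

    public NamedThreadFactory(String prefix) { this.prefix = prefix; }

    @Override
    public Thread newThread(Runnable r) {
        // Give each worker a recognizable name, e.g. "ws-worker-1".
        Thread t = new Thread(r, prefix + "-" + counter.incrementAndGet());
        // Log anything a task throws that would otherwise vanish silently.
        t.setUncaughtExceptionHandler((thread, e) ->
                System.err.println("Uncaught in " + thread.getName() + ": " + e));
        return t;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool =
                Executors.newFixedThreadPool(2, new NamedThreadFactory("ws-worker"));
        pool.execute(() -> System.out.println(Thread.currentThread().getName()));
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Note the handler only fires for tasks submitted via execute(); tasks submitted via submit() wrap exceptions in the returned Future instead.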
In my pool wrapper we have methods like:
public static boolean queueAdd(String poolName, int coreSize, int maxSize, boolean usePriorityQ, String threadNamePrefix, int timeOutSeconds) {...}
public static void execute(String poolName, Runnable command, Integer priority){...}
public static long tasksCount(String poolName)
public static long tasksCompletedCount(String poolName)
public static int clearPoolRequest(String poolName, boolean intrpt)
public static int shutdownPoolRequest(String poolName)
public static void executeAll(String poolName, List<Runnable> lst)
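Since the method bodies above are elided, here is a minimal self-contained sketch of what such a registry-style wrapper could look like. The names, reduced signatures, and choice of an unbounded queue are my assumptions for illustration, not the poster's actual implementation:

```java
import java.util.Map;
import java.util.concurrent.*;

// Sketch: a static registry of named thread pools behind simple methods.
public class PoolWrapper {
    private static final Map<String, ThreadPoolExecutor> pools =
            new ConcurrentHashMap<>();

    // Create and register a named pool; returns false if the name is taken.
    public static boolean queueAdd(String poolName, int coreSize, int maxSize,
                                   String threadNamePrefix, int timeOutSeconds) {
        return pools.putIfAbsent(poolName, new ThreadPoolExecutor(
                coreSize, maxSize, timeOutSeconds, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>(),
                r -> new Thread(r, threadNamePrefix))) == null;
    }

    public static void execute(String poolName, Runnable command) {
        pools.get(poolName).execute(command);
    }

    public static long tasksCompletedCount(String poolName) {
        return pools.get(poolName).getCompletedTaskCount();
    }

    public static void main(String[] args) throws Exception {
        queueAdd("ws", 2, 4, "ws-worker", 60);
        execute("ws", () -> System.out.println("task ran"));
        pools.get("ws").shutdown();
        pools.get("ws").awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(tasksCompletedCount("ws"));
    }
}
```

Calling code only ever sees pool names and Runnables, so the pool type, queue, or sizing can change in one place without touching callers, which is the main benefit described above.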