I’m rather new to distributed computing and would like some assistance with the overall architecture of my application.
My application has Jobs that can be added to a JobQueue. One or more JobRunner instances can then be set up to run the jobs on the queue and generate JobResults. Each JobResult is then sent to some destination such as a report, log file, or email notification.
However, I also want to be able to group a related set of Jobs into a JobSet, which in turn is processed into a JobSetResult that contains all the corresponding JobResults. Each Job will still be processed independently by a JobRunner. Once all the JobResults are collected, the final JobSetResult will be sent to some destination like a log or email notification.
For example, a user may create a set of jobs to process a list of files. They would create a JobSet containing a number of FileProcessingJobs and submit it to be run. I obviously don’t want the user to get an email notification for every file, but only the final JobSetResult when the entire JobSet is complete.
I’m having trouble figuring out the best way to keep track of all this in a distributed environment. Is there some existing architectural design pattern which matches what I’m trying to do?
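To make the question concrete, here is a minimal sketch of the entities described above (all class and field names are illustrative, not from any particular framework):

```python
from dataclasses import dataclass, field

@dataclass
class JobResult:
    job_id: str
    output: str

@dataclass
class Job:
    job_id: str
    payload: str  # e.g. a file path for a FileProcessingJob

@dataclass
class JobSet:
    set_id: str
    jobs: list = field(default_factory=list)

@dataclass
class JobSetResult:
    set_id: str
    results: list = field(default_factory=list)

    def is_complete(self, job_set):
        # The JobSet is done once every Job has a corresponding JobResult;
        # only then should the final notification be sent.
        return len(self.results) == len(job_set.jobs)
```

The hard part in a distributed environment is deciding who owns the JobSetResult and how the "all results collected" check is made safely when JobRunners finish concurrently.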
I think the solution to your problem is to have a main thread that handles coordination and JobRunner threads that do the business-logic processing.
If you are going to go parallel, don’t dip your feet in the pool; jump right in and make every piece of logic in your application a job. Everything should be asynchronous, including creating JobSets and emailing processed results.
If you are familiar with batch processing, this is a similar paradigm applied massively in parallel. Each runner should have three distinct aspects:
- Reader – Read in a chunk from a source or data store that a particular type of Runner should process.
- Processor – Process that chunk of data with logic
- Writer – Persist your finished result to be picked up later by another Runner’s Reader.
If you design your runners correctly, they can run constantly, looking for more chunks to read in, processing them, and writing results for another type of Runner to pick up with its Reader.
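The Reader/Processor/Writer shape can be sketched as a small loop (a minimal illustration only, using in-process queues to stand in for whatever data store your Runners actually share):

```python
import queue

class Runner:
    """Chunk-oriented runner sketch: read a chunk from an input queue,
    process it, and write the result to an output queue for the next
    type of Runner to pick up with its Reader."""

    def __init__(self, inbox, outbox, process):
        self.inbox = inbox      # the Reader's source
        self.outbox = outbox    # the Writer's destination
        self.process = process  # the Processor logic

    def run_once(self):
        try:
            chunk = self.inbox.get_nowait()   # Reader: grab a chunk
        except queue.Empty:
            return False                      # nothing to do right now
        result = self.process(chunk)          # Processor: apply logic
        self.outbox.put(result)               # Writer: persist for the next Runner
        return True
```

In a real system the queues would be durable (a database table, a message broker, or files), and each Runner type would loop over `run_once` forever, which is what lets them scale horizontally.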
There are frameworks that assist with this kind of application development, such as Apache Hadoop. They build this infrastructure for you so you can focus on design and business logic instead of boilerplate code.
Your choice of framework will decide the exact API, but what you are looking to do is a fairly common task.
You need a concept of a JobSet, and some frameworks will provide that concept for you. I would concentrate on choosing your framework first and researching how that framework will do what you want. What you want is a very common desire, so it should be supported by any reasonable framework.
But if, for some reason, you choose a framework that doesn’t do it for you…
If your chosen framework does not do this for you, you will have to maintain a text file with all of the jobs in your JobSet (populate it as you submit the jobs). Whenever a job completes, it opens the file (under a lock, since multiple jobs may finish at once) and removes itself from the list of pending jobs. If it is the last job in the JobSet, it then runs the “gather” script before actually finishing.
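This do-it-yourself tracking could look roughly like the following (a sketch only; the function names are made up, and a real distributed setup would need proper file locking or an atomic store, which is omitted here for brevity):

```python
import os
import tempfile

def submit_job_set(pending_path, job_ids):
    # Populate the tracking file as the jobs are submitted.
    with open(pending_path, "w") as f:
        f.write("\n".join(job_ids))

def complete_job(pending_path, job_id, gather):
    # NOTE: this read-modify-write must be protected by a lock in real use,
    # since several jobs in the set may finish at the same time.
    with open(pending_path) as f:
        pending = [j for j in f.read().split() if j]
    pending.remove(job_id)           # this job removes itself from the list
    with open(pending_path, "w") as f:
        f.write("\n".join(pending))
    if not pending:
        gather()                     # last job out triggers the "gather" step
```

The fragility of that lock requirement is exactly why a framework-provided JobSet concept (or an atomic counter in a database) is usually the better choice.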