At my company we have some vendors that we transfer data to and from. Sometimes the data is pulled into our local SQL database for business reporting. Other times we pull data from one vendor, transform it, and then transfer it to another vendor’s FTP server.
The guy I replaced has a couple of generic SFTP push/pull console applications that transfer data to/from these vendors. Then he has other applications that either do the data imports to the SQL database or transform the data and leave it out in a directory to be pushed to the vendor.
Every once in a while we have problems with these processes not finding the file they need, and I have to go back and run them manually to load the data. It seems to me that it would be more reliable if these processes just did their own FTP push/pull so we don’t run into scheduling problems. Are there any standard practices I can implement that would be more reliable, or do I need to just tweak what I have now? I’m in a Windows/.NET environment, by the way.
First of all, having the FTP push/pull mechanism separated from the core processing sounds to me like a good design, since it allows you to test the core processing separately and to plug the parts together in a different way if needed. This is a good example of separation of concerns.
Every once in a while we will have problems with these processes not finding the file needed
Before reaching for a solution that may well cause more problems than it solves, make sure you know what the root cause of the problem is. Is it because job A (pulling the data) puts the file in a folder where job B (pushing the data) does not expect it? Then you need a reliable way to pass the file path from job A to job B.
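If a path mismatch turns out to be the root cause, one low-tech option is a small “handoff” file: job A records the exact path of the file it produced, and job B reads that instead of guessing at a folder convention. A minimal C# sketch, where all the names (Handoff, Publish, handoffPath) are made up for illustration:

```csharp
using System.IO;

static class Handoff
{
    // Job A calls this after it has finished writing the data file.
    public static void Publish(string handoffPath, string dataFilePath)
    {
        // Write to a temp file first, then swap it into place, so job B
        // never sees a half-written handoff file.
        string tmp = handoffPath + ".tmp";
        File.WriteAllText(tmp, dataFilePath);
        if (File.Exists(handoffPath)) File.Delete(handoffPath);
        File.Move(tmp, handoffPath);
    }

    // Job B calls this to find out which file it should process;
    // returns null when job A has not published anything yet.
    public static string Read(string handoffPath)
    {
        return File.Exists(handoffPath)
            ? File.ReadAllText(handoffPath).Trim()
            : null;
    }
}
```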
Or is it because job B sometimes starts too early, before the output of job A has arrived completely? Then you need a better mechanism for triggering job B. Can you put A and B into a single command script that only starts B once A has completed? Maybe you need a polling mechanism in job B, so that it does not start processing until the output of job A is actually available, or a retry loop around job A, so that it attempts the download again when the first attempt has failed.

It may also be a good idea to let the FTP process download all data into a temporary file first and rename it as a final step once the download is complete. Renaming is an atomic operation on most file systems, so the file only becomes visible to the following processes when it is ready for further processing. Another possible technique is to work with “lock files”: shared access to a file “X” is prohibited as long as “X.lock” exists.
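To make the temporary-file-and-rename idea and the retry/polling ideas concrete, here is a rough C# sketch. The actual SFTP transfer is abstracted as a delegate so it can wrap whatever library the existing console apps use; the class and method names (SafeDownload, PullWithRetries, WaitForFile) are hypothetical.

```csharp
using System;
using System.IO;
using System.Threading;

static class SafeDownload
{
    // downloadTo: performs the real SFTP pull into the given local path.
    public static void PullWithRetries(Action<string> downloadTo,
                                       string finalPath,
                                       int maxAttempts = 3,
                                       int delaySeconds = 60)
    {
        string tempPath = finalPath + ".partial";

        for (int attempt = 1; attempt <= maxAttempts; attempt++)
        {
            try
            {
                downloadTo(tempPath);

                // The rename is effectively atomic on NTFS, so downstream
                // jobs only ever see a complete file at finalPath.
                if (File.Exists(finalPath)) File.Delete(finalPath);
                File.Move(tempPath, finalPath);
                return;
            }
            catch (Exception ex) when (attempt < maxAttempts)
            {
                Console.Error.WriteLine("Attempt {0} failed: {1}; retrying in {2}s",
                                        attempt, ex.Message, delaySeconds);
                if (File.Exists(tempPath)) File.Delete(tempPath);
                Thread.Sleep(TimeSpan.FromSeconds(delaySeconds));
            }
        }
    }

    // Job B can poll for the file instead of assuming it is already there.
    public static bool WaitForFile(string path, TimeSpan timeout, TimeSpan pollInterval)
    {
        DateTime deadline = DateTime.UtcNow + timeout;
        while (DateTime.UtcNow < deadline)
        {
            if (File.Exists(path)) return true;
            Thread.Sleep(pollInterval);
        }
        return false;
    }
}
```

The last attempt is deliberately not caught, so a final failure surfaces to the caller (and to whatever logging or notification wraps the job) instead of being swallowed.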
So, IMHO the architecture you described is not brittle per se, but you have to provide a reasonable amount of synchronization and failure tolerance around your processes.
Are there any standard practices that I can implement that would be more reliable or do I need to just tweak what I have now?
Tweak what you have now. For this kind of thing you probably have quite a few custom requirements – retries, availability windows, notifications, backups, zipping. If it’s working 90% of the time, see if you can get it up to 99% of the time. Add a whole bunch of logging and exception handling and take it from there. Maybe run it manually yourself every day for two weeks to see if you can get it to fail.
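As a rough illustration of the “add logging and exception handling” part, one of the existing console apps could be wrapped like this; the log path and RunTransferJob are placeholders for whatever the job already does. The non-zero exit code lets Task Scheduler (or whatever runs the jobs) record the failure so a notification can be hung off it.

```csharp
using System;
using System.IO;

static class Program
{
    // Placeholder path; point this wherever the team keeps job logs.
    static readonly string LogPath = @"C:\Jobs\Logs\vendor-transfer.log";

    static int Main(string[] args)
    {
        try
        {
            Log("Job starting");
            RunTransferJob(args);   // the existing push/pull or import logic
            Log("Job finished OK");
            return 0;
        }
        catch (Exception ex)
        {
            Log("Job FAILED: " + ex);
            return 1;               // non-zero so the scheduler records a failure
        }
    }

    static void Log(string message)
    {
        File.AppendAllText(LogPath,
            string.Format("{0:u} {1}{2}", DateTime.Now, message, Environment.NewLine));
    }

    static void RunTransferJob(string[] args)
    {
        // Placeholder for the real work.
    }
}
```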
I have yet to find a great application that takes care of SFTP’ing stuff to and from other servers with options for availability windows, retries, notifications, etc. I believe SQL Server can do it. AFAIK, there’s no “standard”. Hmm, maybe I should code something up and sell it 🙂