I have a fairly in-depth question which probably doesn’t have an exact answer.
As a software engineer, I am usually tasked with working on a program or project with minimal understanding of how the other components or programs in the project interact with each other. When one program fails in a sea of multiple components and processes, what infrastructure elements are necessary to ensure that the problem can be accurately traced to the offending application?
More specifically, which infrastructure elements are necessary for such a large project, and which are optional but very helpful? One example I can think of is some form of common logging infrastructure that lets a developer or tester easily browse a single log spanning numerous components for messages that might point to the culprit program, along with a “trail” of what happened before the issue occurred. I’m thinking of something similar to Android’s alogcat tool.
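To make that concrete, here is a minimal sketch of the kind of thing I mean, in Python (the language, the component names, and the corr_id field are my own illustrative assumptions, not an existing tool): every component logs through a shared format that tags each message with its component name and a correlation ID, so one aggregated log can be filtered down to a per-transaction trail.

```python
import logging
import uuid

# Shared format: every component logs in the same machine-filterable shape,
# so one aggregated log can be sliced by component or by correlation ID.
FORMAT = "%(asctime)s %(levelname)s %(name)s [corr=%(corr_id)s] %(message)s"
logging.basicConfig(level=logging.INFO, format=FORMAT)

def get_logger(component: str, corr_id: str) -> logging.LoggerAdapter:
    """Return a logger tagged with the component name and a correlation ID
    that follows one request/transaction across component boundaries."""
    return logging.LoggerAdapter(logging.getLogger(component), {"corr_id": corr_id})

# One correlation ID is generated at the entry point and passed along with the work.
corr = uuid.uuid4().hex[:8]
get_logger("ingest", corr).info("received message, forwarding to parser")
get_logger("parser", corr).error("malformed payload, rejecting")
# grep the aggregated log for 'corr=<id>' to reconstruct the trail.
```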
These necessary infrastructure elements should be language-agnostic.
While all engineers on the team in question should understand these elements, which ones should the technical systems engineers understand in great detail, and what should the individual software engineers be responsible for adding to their tools so that such infrastructure can take hold?
Please feel free to ask for clarification if something does not make sense, as I understand this question is very broad and needs some refinement. I will refine it as necessary based on the answers and comments I receive.
Thanks for any help!
Update:
I am joining a team that has unit tests for maybe 5% of the code and is just beginning to instrument and monitor. The software programmers (I say programmers and not engineers because not everyone on the team is an engineer) do not understand the basics of failing fast and sanity checking. Much of our software baseline is legacy code and is in the process of being ported over. Unfortunately, we don’t have the manpower to refactor a lot of the older components. This is what led me to try to understand whether there are necessary infrastructure tools that can be used to detect and find bugs at the source much more quickly. While I am not expecting a tool to magically do this, I was thinking there might be tools or configurations that make it easier to find bugs in a sea of components.
> When one program fails in a sea of multiple components and processes, what infrastructure elements are necessary to ensure that the problem can be accurately traced to the offending application?
I would argue that you’re asking the wrong question. It’s good to prepare for the eventuality that things go wrong, but depending on humans to do this in a large distributed system defeats many of the benefits of the large distributed system.
Things to focus on instead:
- Unit Tests – 80%+ of your code isn’t going to touch the IPC. Make sure it works (with various bad inputs and behaviors); that eliminates many of the issues that can impact a particular application (see the first sketch after this list).
- Monitoring – Trying to ‘trace back’ issues is going to end in frustration, even if your logging infrastructure is working properly. Having a good monitoring system set up to identify issues early, per process, lets you see immediately which component had issues (see the heartbeat sketch after this list).
- Design for unambiguity – Either fail or work. Don’t pass along bad data; don’t ‘kinda work’. If an app identifies that its IPC pair has disconnected or done something odd, fail right there; it’s then immediately apparent who is responsible (see the fail-fast sketch after this list).
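A minimal sketch of the unit-test point, in Python with a hypothetical parse_payload function (the name and the payload format are invented for illustration): the component’s pure logic gets exercised against malformed inputs, with no IPC in the picture at all.

```python
import unittest

def parse_payload(raw: str) -> dict:
    """Hypothetical component logic: parse 'key=value;key=value' payloads.
    Rejects malformed input instead of guessing."""
    if not raw:
        raise ValueError("empty payload")
    result = {}
    for pair in raw.split(";"):
        key, sep, value = pair.partition("=")
        if not sep or not key:
            raise ValueError(f"malformed pair: {pair!r}")
        result[key] = value
    return result

class ParsePayloadTests(unittest.TestCase):
    def test_valid_input(self):
        self.assertEqual(parse_payload("a=1;b=2"), {"a": "1", "b": "2"})

    def test_bad_inputs_fail_loudly(self):
        # Bad inputs must raise, not silently return partial data.
        for bad in ("", "noequals", "=value", "a=1;;b=2"):
            with self.assertRaises(ValueError):
                parse_payload(bad)

if __name__ == "__main__":
    unittest.main()
```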
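For monitoring, a heartbeat sketch under the same illustrative assumptions (the in-memory registry is a stand-in; a real setup would report into a monitoring service such as Nagios or Prometheus): each process announces that it is alive, and a failing process identifies itself by going silent, rather than being traced back through logs.

```python
import time

# Stand-in registry; in practice each process reports into a monitoring service.
last_heartbeat: dict[str, float] = {}

def beat(component: str) -> None:
    """Each process calls this periodically from its main loop."""
    last_heartbeat[component] = time.monotonic()

def check(timeout: float = 10.0) -> list[str]:
    """Return the components that have gone silent: the failure points
    announce themselves instead of having to be hunted down."""
    now = time.monotonic()
    return [name for name, t in last_heartbeat.items() if now - t > timeout]

# Usage: every process calls beat("its-name") in its main loop; a supervisor
# polls check() and alerts on whatever names it returns.
```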
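And a fail-fast sketch for the last point (the socket framing and the FRME magic bytes are invented for illustration): anything odd from the IPC peer stops processing right there, so the faulty link is unambiguous and bad data never travels downstream.

```python
import socket

def recv_exact(conn: socket.socket, n: int) -> bytes:
    """Read exactly n bytes or raise; partial data is treated as a failure."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError(f"peer closed after {len(buf)} of {n} bytes")
        buf += chunk
    return buf

def receive_frame(conn: socket.socket, expected_magic: bytes = b"FRME") -> bytes:
    """Read one framed message from an IPC peer; fail immediately on anything odd."""
    header = recv_exact(conn, 8)
    magic, length = header[:4], int.from_bytes(header[4:8], "big")
    if magic != expected_magic:
        # Corrupt or out-of-sync stream: failing here makes this link the
        # unambiguous culprit instead of 'kinda working' on bad data.
        raise ValueError(f"bad frame magic {magic!r}; refusing to continue")
    return recv_exact(conn, length)
```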
Take a look at Microsoft’s Enterprise Library, a set of libraries, called Application Blocks, designed to be exactly that – commonly shared tools that are used across a large application. They include components for Caching, Exception Handling, Logging, Data Access and other pieces that are orthogonal to your actual business logic; they should stay pretty much the same for all modules.
Naturally, the components of the Enterprise Library aren’t an exhaustive list, but they’re a good place to start to see what is common to many applications and therefore should be extracted from the main code into shared utility libraries.
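To show the shape of such a shared block, here is a Python sketch by analogy (this is not the Enterprise Library’s actual .NET API; handle_exceptions and the component names are hypothetical): one exception-handling policy that every module imports instead of each rolling its own try/except convention.

```python
import functools
import logging

log = logging.getLogger("shared.exceptions")

def handle_exceptions(component: str):
    """Hypothetical shared 'exception handling block': every module wraps its
    entry points with this instead of inventing its own error policy."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except Exception:
                # One uniform policy: log with the component name, then re-raise
                # so failures stay loud and attributable to their source.
                log.exception("unhandled error in %s.%s", component, func.__name__)
                raise
        return wrapper
    return decorator

@handle_exceptions("billing")
def post_invoice(invoice_id: int) -> None:
    ...  # business logic only; the cross-cutting concern lives in the shared block
```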