I am currently working on a project in which I have to analyze the requirements that two given IT systems, both of which use cloud computing, place on a Cloud API. In other words, I have to analyze what requirements these systems have for a Cloud API so that they would be able to switch APIs while still accomplishing their current goals.
Let me give you an example of some informal requirements from Project A (a sketch of what such an API might look like follows the list):
- When starting virtual machines in the cloud through the API, it must be possible to specify the memory size, CPU type, operating system, and an SSH key for the root user.
- It must be possible to monitor the inbound and outbound network traffic per hour per virtual machine.
- The API must support the assignment of public IPs to a virtual machine and the retrieval of the public IPs.
- …
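To make these concrete, here is a minimal sketch, in Python, of what a client-side interface satisfying the three requirements above might look like. All class, method, and parameter names are hypothetical illustrations, not taken from any real provider or standard:

```python
from dataclasses import dataclass

@dataclass
class VmSpec:
    """Parameters the API must accept when starting a VM (hypothetical names)."""
    memory_mb: int          # memory size
    cpu_type: str           # e.g. "x86_64-2ghz"
    operating_system: str   # e.g. "ubuntu-server"
    root_ssh_key: str       # public SSH key for the root user

class CloudApi:
    """Hypothetical Cloud API surface derived from the informal requirements."""

    def start_vm(self, spec: VmSpec) -> str:
        """Start a virtual machine and return its ID."""
        raise NotImplementedError

    def get_traffic(self, vm_id: str, hour: str) -> tuple[int, int]:
        """Return (inbound_bytes, outbound_bytes) for the given hour."""
        raise NotImplementedError

    def assign_public_ip(self, vm_id: str) -> str:
        """Assign a public IP to the VM and return it."""
        raise NotImplementedError

    def list_public_ips(self, vm_id: str) -> list[str]:
        """Return all public IPs currently assigned to the VM."""
        raise NotImplementedError
```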
In a later stage of the project I will analyze some cloud computing standards that standardize cloud APIs, to find out where the current standards fall short. A finding could, and probably will, be that a certain standard does not support monitoring resource usage and is therefore not currently usable.
I am currently trying to find a way to systematically write down and classify my requirements. I feel that the way I have them written down now (like the three points above) is too informal.
I have read a couple of requirements engineering and software architecture books, but they all focus too much on details and implementation. I really only care about the functionality provided through the API/interface, and I don’t think UML diagrams etc. are the right choice for me. I think the requirements I have collected so far can be described as user stories, but is that already enough for a sophisticated requirements analysis? Probably I should go “one level deeper” …
Read Documenting Software Architectures: Views and Beyond, Second Edition, Chapter 7: Documenting Software Interfaces.
Or at least check some well-known API documentation, like Google’s (Maps, or GData – outdated but complex), Amazon’s (S3), or inspect the documentation for Microsoft applications and services gathered together on MSDN (for Live services, but even for Windows).
Usually, API documentation has three parts:
- An overview of what the thing is for and what someone could build out of it, perhaps with an architectural overview
- A developer’s guide explaining some common tasks with the API, usually with code samples and downloadable sample applications
- An API reference describing how it all should work
In theory – if we believe Brooks’s The Mythical Man-Month – you design the documentation and then make sure there’s a matching implementation.
Now, back to practice.
Designing requirements for an API goes like any other software design effort:
- You enumerate the different actors who will be using the API (using a context diagram, for example)
- You detail each actor’s typical needs of the system with use cases
- For each use case, you develop a set of scenarios for how the imagined system would be used (the book Writing Effective Use Cases might help you with that)
- You create robustness diagrams, sequence diagrams, or activity diagrams; either way, you design behaviour based on the scenarios to work out what messages need to be passed
- From the messages, you deduce the underlying data architecture by looking at what parameters each message needs in order to be communicated successfully
Many people would start with the underlying data structure, but I think that’s silly: computers (and objects, for that matter) are about interactions. You need to understand what needs to be communicated from either side in order to run a successful interaction. Data is only the parameter of those interactions.
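For example, taking a hypothetical “monitor hourly traffic” scenario from Project A: the message is designed first, and the data type falls out of its parameters and result (a Python sketch; all names are made up):

```python
from dataclasses import dataclass

# Message first: the scenario needs exactly one interaction, in which the
# client asks the cloud how much traffic a VM saw in a given hour.
# (Hypothetical name and signature, derived from the scenario.)
def get_hourly_traffic(vm_id: str, hour: str) -> "TrafficReport":
    raise NotImplementedError

# Data second: the type below is nothing more than the set of parameters
# the message needs in order to be communicated successfully.
@dataclass
class TrafficReport:
    vm_id: str
    hour: str            # e.g. "2013-05-01T13:00"
    inbound_bytes: int
    outbound_bytes: int
```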
I usually do activity diagrams or simple flowcharts which show the passed arguments as objects or classes. That is, there’s a control flow going on, but we can see what information one party passed to the other.
After you’ve finished all of these, you grab your scenarios again and start to craft acceptance tests. That’s because APIs are meant to be used by computer clients; therefore, your first code should be a computer client that exercises the API automatically as a test.
Acceptance tests are written either in “provided input” – “expected output” form, or as code. You can find lots of books on BDD and TDD that will explain how to write good tests.
Also, around this point you bring out the books on REST interfaces and the like, in case you’re building a web API, as your tests have to be executable from day one.
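For instance, a first executable acceptance test for the “assign a public IP” requirement might look like the sketch below. The base URL, path, status code, and JSON field are all assumptions for illustration, not a real endpoint:

```python
import unittest

import requests  # third-party HTTP client

BASE_URL = "https://cloud.example.com/api/v1"  # hypothetical endpoint

class AssignPublicIpTest(unittest.TestCase):
    """Acceptance test in provided-input / expected-output form, as code."""

    def test_assign_public_ip_returns_ip(self):
        # Provided input: a request to assign a public IP to an existing VM.
        response = requests.post(f"{BASE_URL}/vms/vm-123/public-ips")
        # Expected output: creation succeeds and the body carries a
        # well-formed IPv4 address.
        self.assertEqual(response.status_code, 201)
        ip = response.json()["public_ip"]
        self.assertRegex(ip, r"^\d{1,3}(\.\d{1,3}){3}$")

if __name__ == "__main__":
    unittest.main()
```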
From the scenarios, you also build the sample code and the developer’s guide.
From the sequence diagrams and data architecture diagrams, you develop the API reference.
Add a sprinkle of HTML, make sure all the tests pass and that your application is fast, secure, and robust enough, and out it goes!
(I know, this was waterfall-ish: Agile is the same, except you always do only a tiny part of this, e.g., a few scenarios per sprint.)
You really don’t need to get any more “formal” than what you have. You’re writing it for humans to read, and probably mostly for yourself, so keep that in mind. My one suggestion is to put it in a hierarchy and number it in outline format. That way, in reviews, checklists, and such, you can refer to a number like 3.0.1 as a shorthand and easily disambiguate what you’re talking about.
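For example, the requirements from the question might be numbered like this (the grouping and numbers are hypothetical):

```
3. Monitoring
   3.1 Network traffic
       3.1.1 The API must report inbound traffic per hour per virtual machine.
       3.1.2 The API must report outbound traffic per hour per virtual machine.
4. Networking
   4.1 The API must support assigning public IPs to a virtual machine.
   4.2 The API must support retrieving a virtual machine's public IPs.
```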