I recently listened to an episode of Software Engineering Radio in which Leslie Lamport was interviewed. One thing he discussed was his specification language, TLA+.
Essentially, he seemed to be arguing that, for programs where correctness is very important, we need to think carefully and specify carefully before writing code, and TLA+ is meant to be a tool to do that. He said a team at Amazon has recently had success using it.
Personally, I write executable tests for my code. I see the tests as a specification, which has the huge benefit of proving whether the code conforms to it.
I assume that Mr. Lamport, being a brilliant and accomplished computer scientist, has long known about this, and still sees a need for his language. But why?
Are formal specification languages and automated tests complementary approaches, or at odds? Do they lend themselves to different kinds of code?
Your tests can show that the system reacts correctly to those inputs that the tests actually exercise. That is very valuable, but it is nigh-impossible to exercise all inputs that a program might receive.
A judiciously chosen formal proof, or a specification written in a formal specification language, can show mathematically that your program will respond correctly to every input it could possibly receive, even if that is an infinite set. That is certainly better, although whether the extra effort is worth the improvement in reliability varies from case to case.
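To make that contrast concrete, here is a toy Python sketch (my own illustration; the saturating_add function and its bounds are invented for the example). The example-based tests only ever exercise the inputs they list, while the exhaustive loop covers every input, which is feasible here only because the domain is tiny; for realistic input spaces you cannot enumerate them, and that is the gap a proof is meant to close.

```python
# Toy illustration: example-based tests vs. covering every input.
def saturating_add(a: int, b: int, limit: int = 255) -> int:
    """Add two small non-negative integers, clamping the result to `limit`."""
    return min(a + b, limit)

def test_a_few_examples() -> None:
    # Example-based tests: only these inputs are ever exercised.
    assert saturating_add(0, 0) == 0
    assert saturating_add(1, 2) == 3
    assert saturating_add(250, 10) == 255

def check_all_byte_inputs() -> None:
    # Exhaustive check: feasible only because the domain is tiny (256 * 256 cases).
    for a in range(256):
        for b in range(256):
            result = saturating_add(a, b)
            assert result == min(a + b, 255)
            assert 0 <= result <= 255

if __name__ == "__main__":
    test_a_few_examples()
    check_all_byte_inputs()
    print("all checks passed")
```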
Personally, I write executable tests for my code. I see the tests as a specification, which has the huge benefit of proving whether the code conforms to it.
This is not a proof. If you think of the semantics of your program (or embedded computer system), it has an infinite (or at least an enormous) number of states. Be aware of the halting problem, related undecidable problems, intractability, the P vs. NP problem, and the Curry-Howard correspondence. Remember Knuth's remark that he had only proved his code correct, not tried it. Be aware also of declarative programming paradigms and even, in the long term (decades at least), serious AGI research (like this), related to autonomous robots or vehicles. See also this.
Consider for example a cheap Arduino Uno system (very often used in hobby embedded devices). It has 2 KB of RAM, that is 2048 * 8 = 16384 bits, so it has 2^16384 states (a huge number). Add to that the bits modeling your abstracted view of the (physical) environment your Arduino deals with; assume you need only 10 of them for your device. Then your entire system has 2^(16384+10) states and could be viewed as a finite state automaton over them. How many of those states did your executable tests exercise? Only a tiny fraction of them!
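As a back-of-the-envelope check of that arithmetic (a rough sketch; the "tests per second" figure and the 10 environment bits are assumptions, not measurements):

```python
# Back-of-the-envelope arithmetic for the state count above (a rough sketch,
# not a precise model of the Arduino).
ram_bits = 2 * 1024 * 8            # 2 KB of RAM = 16384 bits
env_bits = 10                      # assumed bits modeling the environment
total_states = 2 ** (ram_bits + env_bits)
print(f"state bits: {ram_bits + env_bits}")
print(f"the number of states has {len(str(total_states))} decimal digits")

# Even at a billion tests per second for roughly the age of the universe
# (~4.35e17 seconds), the untested remainder is still astronomically large.
tests_run = 10 ** 9 * 435 * 10 ** 15
print(f"untested states per tested state: a number with "
      f"{len(str(total_states // tests_run))} decimal digits")
```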
A proof is “mathematical” (for some view of mathematics, and some definition of proofs).
See also the difference between DO-178C and its predecessor DO-178B.
Read also the first chapter of my Bismon draft report. It discusses your question, and gives more details than I could give here.
Executable tests and formal proofs are complementary, not competitors. You need both for safety-critical embedded digital systems.
Formal proofs tend to move the opportunity for bugs even higher up: fewer bugs in the code, but perhaps more bugs in the formalized specifications. In safety-critical systems, both can kill.
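As a toy illustration of a bug living in the specification rather than the code (my own example, not taken from any real system): the post-condition below forgets to require that the output is a permutation of the input, so a trivially wrong implementation satisfies it perfectly.

```python
# Toy example of a bug moving into the specification itself: this
# post-condition for "sorting" forgets to require that the output is a
# permutation of the input, so a trivially wrong implementation satisfies it.
from typing import List

def satisfies_spec(xs: List[int], ys: List[int]) -> bool:
    # Intended: ys is xs sorted. Buggy: only asks for non-decreasing order.
    return all(ys[i] <= ys[i + 1] for i in range(len(ys) - 1))

def broken_sort(xs: List[int]) -> List[int]:
    return []   # vacuously "correct" against the buggy spec above

assert satisfies_spec([3, 1, 2], broken_sort([3, 1, 2]))   # passes, yet wrong
```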
My opinion is that, in addition, safety-critical software should be free software (at least so that it can be reviewed by other citizens with software development expertise). But this is a dream. I expect thousands of deaths, because most such vital software is, today, still proprietary, and developed under social, regulatory, and business constraints that are not questioned enough.
Mr. Lamport, … still sees a need for his language. But why?
With a formal specification language, you can run a verifier that checks the design/algorithm mathematically. This is not the same as a set of unit/integration/BDD tests.
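To give a rough idea of what such a verifier does, here is a minimal Python sketch of explicit-state checking (my own toy, not TLA+ or TLC): it enumerates every reachable state of a small transition system and checks an invariant in each one, rather than sampling a handful of executions the way unit tests do. The transition system and invariant are invented for the example.

```python
# Minimal sketch of explicit-state verification: enumerate every reachable
# state of a (tiny, finite) transition system with breadth-first search and
# check an invariant in each one.
from collections import deque
from typing import Iterable, Tuple

State = Tuple[int, int]            # (clock mod 4, counter mod 8)

def initial_states() -> Iterable[State]:
    yield (0, 0)

def next_states(s: State) -> Iterable[State]:
    clock, counter = s
    yield ((clock + 1) % 4, counter)         # the clock ticks
    if clock == 0:
        yield (clock, (counter + 1) % 8)     # increment only when clock == 0

def invariant(s: State) -> bool:
    clock, counter = s
    return 0 <= clock < 4 and 0 <= counter < 8

def check() -> None:
    seen = set()
    frontier = deque(initial_states())
    while frontier:
        s = frontier.popleft()
        if s in seen:
            continue
        seen.add(s)
        assert invariant(s), f"invariant violated in state {s}"
        frontier.extend(next_states(s))
    print(f"invariant holds in all {len(seen)} reachable states")

if __name__ == "__main__":
    check()
```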
Are formal specification languages and automated tests complementary approaches, or at odds?
These are complementary, certainly. Nothing stops you from using both, if needed.