I think error checking is extremely important in programming, not a secondary topic. What if your program crashes without an error, just silently aborting? What if you have to discover a SyntaxError at line 1293 by manually checking every line? Would any program longer than 5-10 lines even be possible if error checking had never been invented?
By no error checking I mean absolutely zero feedback if something goes wrong: not a line number, not a hint, nothing.
I’m not sure why anyone has been downvoting; I think this is a valid question, even if it seems absurd or the answer is obvious to everyone else.
If a language’s compiler has no error/syntax reporting at all, the programmer can implement their own error reporting with basic tests: the program has to meet a set of expectations, and when the test program is run and all of the expectations are met, the program has no known errors. If part of the test doesn’t receive the expected value from the program, the programmer knows where something went wrong. There might not be error reporting per se, but this would definitely allow large applications to be written with a stupid compiler or interpreter. This, of course, assumes we are dealing with languages that have constructs allowing interaction with specific components (functions/methods/classes/modules/libraries/etc.). Even without that, one could write a test routine that goes to a specific line of code and tests whatever value gets returned.
Today, what I described is Behavior Driven Development (aka BDD).
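As a rough illustration, here’s a minimal sketch in Python of what such hand-rolled expectation checks could look like. The names parse_price and check are invented for this example and don’t come from any framework.

```python
# A minimal sketch of hand-rolled expectation checks, assuming the
# language gives no feedback of its own. parse_price and check are
# invented names for this example.

def parse_price(text):
    """Convert a string like '$12.50' to a float."""
    return float(text.strip().lstrip("$"))

def check(label, actual, expected):
    """Report whether one expectation was met, without relying on
    the language's own error reporting."""
    if actual == expected:
        print(f"OK   {label}")
    else:
        print(f"FAIL {label}: expected {expected!r}, got {actual!r}")

# Each check tells the programmer which component misbehaved and how.
check("plain number", parse_price("12.50"), 12.5)
check("leading dollar sign", parse_price("$3.00"), 3.0)
check("surrounding whitespace", parse_price("  $7.25 "), 7.25)
```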
Even without such tests, people have written applications with huge code bases without much useful feedback. Depending on what form of assembly language one writes in, you may get no feedback at all beyond unexpected behavior. Many C compilers don’t give much useful feedback either; nothing like that of modern interpreted languages. Chris Sawyer wrote the original RollerCoaster Tycoon mostly in assembly with some C, which was definitely longer than 10 lines, and I’m willing to bet that much of his development was based on seeing whether or not he was getting the expected behavior from his code.
EDIT: Better yet, it just so happens that I had an experience about 5 days ago where I wrote 126 lines of Python code and it worked the first time I ran it. That doesn’t happen too often, but it shows that even a mediocre programmer such as myself can write more than 10 lines of useful code without error checking. It just takes a lot of thought and planning.
Let’s also not forget that before interactive terminals and modern file systems, people wrote programs and even simple games by punching holes in cards; the programs were stored on big stacks of punched cards that were fed into a giant computer. The program either worked or it didn’t, and it was simply not practical to rely on feedback from the computer. Using a computer was also a privilege: most people didn’t own one or have regular access to one. The equivalent of 10+ lines of working code was written this way.
Of course.
The only way you can have syntax errors is if you have syntax. To have syntax, you need a language. To use a language, you need a compiler. At the very least, then, the first compiler was a program written without any nice, pleasant error messages for when things went south.
Programming is often done without any error checking, even now. It’s just considered sloppy, and should only be done for quick prototypes (or code golfs) that you’re likely to throw away or rewrite later. Scripting languages tend to be good for this in part because they handle errors reasonably well whether or not you write any error handling code yourself.
If you mean “could we have gotten where we are without any piece of software ever doing error checks?”, then the answer is no, unless someone invented a computer with infinite memory and no hardware faults, along with a network with 100% uptime for everyone forever, and so on.
If you’re thinking of a specific convention, such as checking the error codes returned by a function call, throwing and catching exceptions, or passing in success and failure callbacks, then yes, we could have used different techniques.
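To make the contrast concrete, here’s a small Python sketch of one failing operation handled under each of those three conventions; the divide_with_* functions are invented purely for illustration.

```python
# The same failing operation handled under three different conventions.
# The divide_with_* functions are invented purely for illustration.

# 1. Error codes: the caller inspects a returned status.
def divide_with_code(a, b):
    if b == 0:
        return None, "division by zero"
    return a / b, None

result, err = divide_with_code(10, 0)
if err:
    print("error-code style:", err)

# 2. Exceptions: the failure propagates until something catches it.
def divide_with_exception(a, b):
    if b == 0:
        raise ZeroDivisionError("division by zero")
    return a / b

try:
    divide_with_exception(10, 0)
except ZeroDivisionError as exc:
    print("exception style:", exc)

# 3. Callbacks: the caller supplies success and failure handlers.
def divide_with_callbacks(a, b, on_success, on_failure):
    if b == 0:
        on_failure("division by zero")
    else:
        on_success(a / b)

divide_with_callbacks(
    10, 0,
    on_success=lambda value: print("result:", value),
    on_failure=lambda message: print("callback style:", message),
)
```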
There’s also the philosophical problem (as pointed out by delnan) that you can’t always draw a clear line between “error checking code” and “real code”. If I’m writing a sqrt() function, checking that the input is non-negative is definitely an error check. But if I’m writing an HTML form with a bit of JavaScript that automatically hyphenates phone numbers regardless of how the user entered them, is that “error handling” or “presentation logic”?
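For the sqrt() case, the split is easy to see in a short sketch (safe_sqrt is a made-up wrapper, not a standard library function):

```python
# A sketch of the sqrt() case: the input check is unambiguously
# "error checking code", separate from the computation itself.
# safe_sqrt is an invented name, not a standard library function.
import math

def safe_sqrt(x):
    # Error check: reject negative input before doing any real work.
    if x < 0:
        raise ValueError("sqrt() requires a non-negative number")
    # "Real code": the actual computation.
    return math.sqrt(x)

print(safe_sqrt(9))     # 3.0
print(safe_sqrt(2.25))  # 1.5
```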