I understand the differences between interpreted and compiled languages, but if someone could provide some examples of situations when one is likely to use interpreted languages over compiled languages, as well as situations when one is likely to use compiled languages over interpreted languages, it’d be really helpful.
There's (to my knowledge) no such thing as an interpreted "language" or a compiled "language".
Languages specify the syntax and meaning of keywords, flow constructs and various other things, but I'm aware of no language whose spec says whether it must be compiled or interpreted.
Now if your question is when to use a language compiler vs. a language interpreter, it really comes down to the pros/cons of the compiler vs. the interpreter and the purpose of the project.
For instance, you may use the JRuby compiler for easier integration with Java libraries instead of the MRI Ruby interpreter. There are likely also reasons to use the MRI Ruby interpreter over JRuby, but I'm not familiar enough with the language to speak to them.
Touted benefits of interpreters:
- No compilation means the time from editing code to testing the app can be diminished
- No need to generate binaries for multiple architectures, because the interpreter handles the architecture abstraction (though you may still need to worry about your scripts handling things like integer sizes correctly, just not about distributing binaries)
Touted benefits of compilers:
- Compiled native code does not have the overhead of an interpreter and is therefore usually more efficient on time and space
- Interoperability is usually better: the only way to interoperate in-process with scripts is to embed an interpreter, whereas compiled code can expose a standard FFI
- Ability to support architectures the interpreter hasn’t been compiled for (such as embedded systems)
However, I would bet in 90% of cases it goes something more like this: I want to write this software in blub because I know it well and it should do a good job. I’ll use the blub interpreter (or compiler) because it is the generally accepted canonical method for writing software in blub.
So the TL;DR is basically: compare the available interpreters and compilers for your particular use case, on a case-by-case basis.
Also, FFI: Foreign Function Interface, in other words an interface for interoperating with other languages. More reading on Wikipedia.
An important point here is that many language implementations actually do some sort of hybrid of both. Many commonly used languages today work by compiling a program into an intermediate format such as bytecode, and then executing that in an interpreter. This is how Java, C#, Python, Ruby, and Lua are typically implemented. In fact, this is arguably how most languages in use today are implemented. So the fact is, languages today both interpret and compile their code. Some of these implementations also have a JIT compiler to convert the bytecode to native code for execution.
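To make that hybrid concrete, here is a minimal sketch using CPython (one of the implementations mentioned above): the source is first compiled to bytecode, and that bytecode is then executed by the interpreter's virtual machine. The toy source string is purely illustrative.

```python
import dis

source = "total = sum(n * n for n in range(5))\nprint(total)"

code = compile(source, "<example>", "exec")  # compilation step: source -> bytecode
dis.dis(code)                                # inspect the intermediate bytecode
exec(code)                                   # execution step: the VM interprets the bytecode
```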
In my opinion, we should stop talking about interpreted and compiled languages because they are no longer useful categories for distinguishing the complexities of today’s language implementations.
When you ask about the merits of interpreted and compiled languages, you probably mean something else. You may be asking about the merits of static vs. dynamic typing, the merits of distributing native executables, or the relative advantages of JIT and AOT compilation. These are all issues which get conflated with interpretation/compilation but are different issues.
First of all, a programming language can be both interpreted and compiled. Interpretation and compilation are just methods of executing code from the source. With an interpreter, the source code is read and interpreted by the interpreter, which executes the code as it interprets it. A compiler, on the other hand, reads the source code and generates an executable binary from it, so that the program can later be run as a separate, independent process.
Now before anyone wonders… Yes, C/C++/C#/Java can be interpreted, and yes, JavaScript and Bash scripts can be compiled. Whether there are working interpreters or compilers for these languages is another question though.
Now to actually answer the question of when we'd use an "interpreted language" over a "compiled language". The question itself is somewhat confusing, but I assume it means when to prefer interpretation over compilation. One of the downsides of compilation is the overhead of the compilation process itself: the source code has to be compiled to executable machine code, so compilation is not suitable for tasks that need minimal delay between invoking the source code and the program actually running. On the other hand, compiled code is almost always faster than equivalent interpreted code, because interpretation adds run-time overhead. Interpreters can invoke and run the source code with very little start-up overhead, but at the expense of run-time performance.
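As a rough illustration of that trade-off, here is a small sketch that uses CPython's own compile step as a stand-in for the compilation cost described above: paying the translation cost on every invocation versus compiling once and reusing the result. The numbers will vary by machine; the example only shows the shape of the trade-off.

```python
import timeit

source = "sum(n * n for n in range(1000))"

# pay the translation cost on every invocation
recompile_each_run = timeit.timeit(
    lambda: exec(compile(source, "<s>", "exec")), number=1000)

# pay the translation cost once, then reuse the compiled code object
code = compile(source, "<s>", "exec")
reuse_compiled = timeit.timeit(lambda: exec(code), number=1000)

print(f"compile each run: {recompile_each_run:.4f}s")
print(f"compile once:     {reuse_compiled:.4f}s")
```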
In the end it's nearly impossible to give definite use cases for preferring one over the other, but one example (to my understanding a fairly unrealistic one) would be when the program's source code changes dynamically between invocations and the overhead of compiling is too high to be a viable choice. In that case, interpreting the source code instead of compiling it would probably be desirable.
However, there's something which can be regarded as a real-world example: hiding source code upon deployment. With natively compiled code, the developer deploys the executable machine code of the program plus its data. With interpreted code, the source code itself has to be deployed, and it can then be inspected and reverse-engineered with much less effort than it takes to reverse-engineer native machine code. One exception to this is languages like C# and Java, which compile to an intermediate language/bytecode (MSIL for C#, Java bytecode for Java) that gets deployed and then compiled "just in time" at runtime, somewhat like an interpreter does. However, there are so-called decompilers for MSIL and Java bytecode which can reconstruct the original source code with relatively good accuracy, so reverse-engineering such products is far easier than reverse-engineering products deployed as native machine code.
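For a concrete illustration of that deployment point, Python's standard py_compile module can produce a bytecode file from a source file; the bytecode hides the literal source text, but, much like MSIL or Java bytecode, it can still be decompiled far more easily than native machine code. The file names here are invented for the example.

```python
import pathlib
import py_compile

# write a tiny throwaway source file, then byte-compile it
src = pathlib.Path("hello.py")
src.write_text('print("hello from bytecode")\n')

pyc_path = py_compile.compile(str(src), cfile="hello.pyc")
print("bytecode written to:", pyc_path)  # this .pyc is what you'd ship instead of hello.py
```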
I can think of the following scenarios when you’d use an interpreted language:
- Where no compiler exists, like Linux/Unix shell scripts.
- Quick and dirty scripts that solve a little problem (see the small sketch after this list)
- Languages that make it easy to write dynamic HTML pages and are generally interpreted, like JSP (Tomcat compiles it into a servlet prior to running), PHP, ASP etc.
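For the "quick and dirty" bullet, here is a sketch of the kind of throwaway script that's usually easier to just run through an interpreter than to set up a build for; the task itself is made up for illustration.

```python
# count how often each file extension appears in the current directory,
# then throw the script away when you're done
from collections import Counter
from pathlib import Path

counts = Counter(p.suffix or "<none>" for p in Path(".").iterdir() if p.is_file())
for ext, n in counts.most_common():
    print(f"{ext}: {n}")
```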
I can think of the following scenarios when you want to compile your code:
- You need to distribute binaries because your app is closed-source and you don’t want to give out your source code.
- Speed, e.g. for embedded systems and the like.
- You need a level of type safety that only a compiler with a strictly typed language can give you. Compilers expose typos in every nook and cranny of your source code, whereas in interpreted programs typos can go undiscovered into production code (see the sketch after this list).
- Large, complex systems: I can't imagine an OS or an office suite as anything but compiled binaries.
- You want to get every tiny bit of overhead out of the way and need good communication with assembler snippets, which is hard with any kind of runtime and especially with an interpreter (this point brought about by a comment from @delnam).
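To illustrate the type-safety bullet with a small made-up example: in a dynamically checked, interpreted language a simple typo only blows up when the affected branch actually runs, whereas a compiler for a strictly typed language would reject it before the program ever ships.

```python
def report(status):
    if status == "ok":
        print("all good")
    else:
        # 'mesage' is a deliberate typo; nothing complains until this branch runs
        print(mesage)

report("ok")  # runs fine, the typo goes completely unnoticed

try:
    report("error")  # only now does the typo surface
except NameError as exc:
    print("runtime failure:", exc)
```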
In the end, the big trade-off is between productivity (how many lines of code do you have to write) and performance (how fast will your program execute).
Because interpreted languages keep more information about the program around at run time, they can rely on reflection and dynamic typing, which greatly increase productivity. Another advantage of interpreted languages is that they are platform independent, as long as there is an interpreter for the platform.
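As a small sketch of that reflection point (the class and method names are invented for illustration): because the program's structure is still visible at run time, code can look up behaviour by name instead of binding everything at compile time.

```python
class Report:
    def as_json(self):
        return '{"kind": "report"}'

    def as_text(self):
        return "kind: report"

def render(obj, fmt):
    # pick the method by name at run time -- no compile-time binding needed
    return getattr(obj, f"as_{fmt}")()

print(render(Report(), "json"))
print(render(Report(), "text"))
```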
Because the machine doesn't have to translate the code into machine code and run it at the same time, as it does in the interpreted case, compiled languages yield faster programs. Also, a system built in a compiled language tends to be more robust, since many issues are detected at compile time, which basically means you see an error as you type it (with modern IDEs) instead of only when you actually run the program (of course, this does not catch logical errors).
Knowing this, interpreted languages are suitable for:
- Productive development: fast web development (PHP, JavaScript) and prototyping.
- Cross-platform code; for example, JavaScript is supported in every browser (including mobile browsers).
And compiled languages are suitable when:
- Performance is critical (operating systems) or resources are scarce (microcontrollers).
- The systems being built are complex; when building large systems (e.g. enterprise systems), compiled languages help catch at compile time many of the bugs that would only appear at run time in interpreted languages; complex programs also require a lot of resources, which again tips the balance towards compiled languages.
Besides the reasons the others have mentioned, there is one particularly important use case for choosing ad hoc interpretation over any form of compilation or hybrid approach.
If a programming language is used as a communication protocol, and response latency is important, it makes more sense to avoid wasting time on compilation and any other preprocessing.
This applies to agent languages, for example, or to the way, say, Tcl/Tk is normally used.
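Here is a rough sketch of that "language as a protocol" idea, with a made-up command set: incoming messages are tiny source snippets that are interpreted immediately, so there is no compile or preprocessing step sitting in the request path.

```python
import ast
import operator

OPS = {"add": operator.add, "mul": operator.mul}

def handle(message: str):
    # a message looks like "add 2 3" or "mul 4 5"
    op_name, *args = message.split()
    return OPS[op_name](*(ast.literal_eval(a) for a in args))

for incoming in ["add 2 3", "mul 4 5"]:
    print(incoming, "->", handle(incoming))
```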
Another possible reason for sticking with interpretation is when a language interpreter is used to bootstrap itself or a more elaborate, higher-level language, and its simplicity matters more than the performance of the bootstrap process.
For nearly any other possible use case, compilation (or a hybrid approach) is a better fit.