Possibly a newbie question here.
I heard that if a real AI were to exist, it would have to evolve; by that I think it means it should be able to write new code and then run itself with that new code.
Could this be possible in a compiled language like C/C++?
Just wondering, thanks.
You can write an interpreter in a compiled language that executes a self-modifying, domain-specific language whose instructions are held in a mutable data structure of some sort.
So yes, it’s possible.
If your question is "can I do it in the original target language using the original compiler," that, too, is possible: write a program that writes a program in the original language, then runs a script to compile and execute it. But that would be cumbersome and probably impractical.
In the old days, languages (often forms of Assembly) used to be designed with instructions specifically set aside for self-mutating code. This meant you could easily tell a program to change itself while it was running, which was used for performance improvements. The key word here is "easily": the practice went out of style decades ago, once computers got fast enough not to need something this cryptic and hard to follow. Even so, hackers today still use Assembly to play tricks on the hardware and overwrite memory they shouldn't have access to, potentially including the instructions of their currently executing program that have been loaded into memory. And since you can embed a good bit of Assembly into C++, that's one way you could say a C++ program can self-mutate, although a good operating system these days should have reasonable security measures (such as marking code pages non-writable) to block things like this in either case.
Yes, but…
It has long been known that machine code for Von Neumann-architecture machines is too "brittle" to reasonably mimic life-based data processing.
As a simple example, if you alter a single bit in a machine instruction, you won't necessarily get something useful; you may very well crash the processor or the program.
Contrariwise, biological systems are more resilient to alteration. If you change a DNA base from A to C, you still get a protein out, one that might even be more useful than the unaltered version (it is more likely to have no effect or a deleterious one, but the whole machine doesn't stop). Similarly, if you have less of enzyme Q for some reason, things won't crash; they just won't work as well as they otherwise might.
Very productive work has been done with less fragile "machine code" running on virtual machines, but the last detailed reading I did on the subject was an ancient Proceedings from the Santa Fe Institute for the Study of Complexity, so I'm sure the approach has been refined considerably since then.
First of all, if by "real AI" you are referring to Strong AI, whether that is even achievable is an as-yet-undecided question. It has scientific, ontological, and religious aspects, and opinions differ widely.
What software programmers commonly refer to as AI is really a collection of branches of machine learning, or of computer science / electrical engineering / signal processing / linguistics in general.
In one kind of machine learning algorithm, the inputs and outputs are matrices of numbers. The "system" that maps the inputs to the outputs is also a matrix of numbers. Thus, those algorithms reduce a practical problem to mathematical calculation.
There are many other practical AI algorithms and implementations, employing many other mechanisms and branches of domain knowledge.