How do software developers manage to know whether their software will run on the target hardware they are developing for? Many developers today work on machines much faster than the intended platform. What kind of tools do they use to understand when they are going overboard?
I’m asking from the perspective where you, for example, develop algorithms on a modern x86 in a GNU/Linux environment, but the intended platform is a much slower Arm, still running GNU/Linux.
Ideally, in my world, there would be a tool to benchmark the platforms and a tool to run the program and collect some figures, so you could compare them and understand which platform you’ll be using later on.
There is no such tool.
There is not even a methodology to follow.
The best you can do is extrapolate from the measured performance of the application running on a different platform. Even that is liable to be unreliable.
Basically, the performance of an application is going to depend on a large number of complicated platform parameters, and (in some cases) on complicated interactions between the platform and the application's fine-grained behaviour. It is too complicated to model.
(Depending on the nature of your application, the parameters could include CPU clock speed, CPU chipset, number of cores, memory size, memory architecture, memory speed, I/O system bandwidth, network interface, graphics card, hard drive controller and device, etcetera, etcetera. Hundreds of different hardware and software characteristics could be relevant …)
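To give a rough sketch of the kind of extrapolation mentioned above (this is my illustration, not part of the answer): time a representative workload on both machines once, derive a scaling factor, and use it to project other measurements onto the target. The Arm timing and the 42-second full run below are made-up placeholders.

```python
import time

def time_workload(workload, repeats=5):
    """Return the best-of-N wall-clock time for a callable workload."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        workload()
        best = min(best, time.perf_counter() - start)
    return best

def representative_workload():
    # Stand-in for a kernel resembling the real application's hot path.
    total = 0
    for i in range(1_000_000):
        total += i * i
    return total

# Measured once on each machine. The Arm figure is a made-up placeholder.
x86_seconds = time_workload(representative_workload)
arm_seconds = 0.95
scaling_factor = arm_seconds / x86_seconds

# Project a longer x86 measurement onto the target -- a crude estimate that
# ignores the cache, memory and I/O differences listed above.
full_run_x86 = 42.0  # seconds for the full application on the x86 box (placeholder)
print(f"scaling factor ~{scaling_factor:.1f}x, "
      f"projected full run on Arm ~{scaling_factor * full_run_x86:.0f}s")
```

As the answer warns, such a single scaling factor is only as good as how well the workload resembles the real application; memory- or I/O-bound code can scale very differently from CPU-bound code.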
The best practical advice would be to just try it. If you can’t afford to buy an example of each candidate platform, then see if the salesman will let you try out the application on their kit before you sign the contract to buy. If you can’t do that, then toss a coin …
If you mean how minimum/recommended system requirements are found, the application is simply tried on different machines.
In most cases, there is no hard limit: if the application works with 512 MB of memory, it will probably work with 511 MB of RAM as well (unless it explicitly checks for the memory). This means that you may have a limited number of machines to use for benchmarking and deduce the limits from there. For instance, if a machine with 1 GB of RAM can barely run the app, while a machine with 4 GB of RAM runs it well enough and keeps on average 1 to 2 GB free, the minimum system requirements may include 2 GB of memory.
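A complementary (not equivalent) way to get a first estimate, which the paragraph above does not mention, is to measure the application's own peak memory usage directly; on Linux this can be read from ru_maxrss. The 200 MB allocation is purely illustrative.

```python
import resource

def report_peak_memory():
    """Print this process's peak resident set size (ru_maxrss is in kB on Linux)."""
    peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print(f"peak RSS: {peak_kb / 1024:.0f} MB")

# Allocate something sizeable so the figure is non-trivial.
data = bytearray(200 * 1024 * 1024)  # roughly 200 MB
report_peak_memory()
```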
Precision
Note that benchmarking and profiling are precise. A non-functional performance requirement, for instance, will specify in detail the test hardware and the load, the number of milliseconds representing the threshold and the threshold percentage. You can then make an automated test which either passes or fails, on every commit, indicating when the app became slower than expected. Talking about feelings (“this part of the app feels slow to me”) is unacceptable, because your customer's lawyer may assert that the app still doesn’t feel fast enough, while you’ve spent the last two months optimizing it and find it extremely fast.
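A minimal sketch of such a pass/fail check (the 300 ms threshold and the process_batch function are invented for illustration, not taken from any real requirement):

```python
import time

THRESHOLD_SECONDS = 0.300  # from the (hypothetical) non-functional requirement

def process_batch():
    # Placeholder for the operation the requirement actually covers.
    return sum(i * i for i in range(200_000))

def test_process_batch_meets_latency_requirement():
    """Fails the build when measured latency exceeds the agreed threshold."""
    timings = []
    for _ in range(5):  # best-of-5 to dampen noise on the test machine
        start = time.perf_counter()
        process_batch()
        timings.append(time.perf_counter() - start)
    assert min(timings) <= THRESHOLD_SECONDS, (
        f"process_batch took {min(timings):.3f}s, threshold is {THRESHOLD_SECONDS}s"
    )

if __name__ == "__main__":
    test_process_batch_meets_latency_requirement()
    print("latency requirement met")
```

Run on every commit (e.g. in CI on the agreed test hardware), this turns "feels slow" into a concrete, contract-friendly pass/fail result.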
When it comes to minimum/recommended system requirements, such precision is rarely required. The person writing down the system requirements may indeed simply test the app on multiple machines and use their feeling of fast/slow as the only criterion. If, on the other hand, the contract stipulates that the app should run on a machine with 2 GB of memory, then it should be in the software requirements specification, written in non-ambiguous terms (see above).
Test environment
Also note that:
- You should test the software on different hardware anyway (unless, of course, the software is distributed in a controlled environment, like a single data center), so there are chances that you already have the infrastructure you need.
- Virtual machines make such testing less expensive than the purchase of dozens of actual, real machines.
However, testing on virtual machines may not be as straightforward as throwing a VM in the pool: while many hypervisors (or operating systems themselves) do a great job of allowing you to throttle some aspects (such as network bandwidth), it still requires additional configuration.
Complexity
I used RAM as an illustration, but the same logic applies to any other aspect: CPU speed, free space on the hard disk, the speed of those hard disks, network bandwidth, etc. Not to mention that the same hardware may not operate exactly the same way every time.
For instance, one of my software products had a bug I spent a lot of time debugging. It turned out that Windows puts hard disks on standby when they are not used for a few minutes, and when they have slept for a long time, waking them up takes a while, which sometimes triggered a timeout in my app.
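The usual fix for that class of problem is to retry once, or to allow a much longer deadline on the first access so a drive has time to spin up. A rough sketch of that idea (the do_io callable, the one-retry policy and the 60-second allowance are all hypothetical, not the actual fix used in that product):

```python
NORMAL_TIMEOUT = 5    # seconds -- fine once the disk is already spinning
SPIN_UP_TIMEOUT = 60  # seconds -- generous enough to cover waking from standby

def call_with_spin_up_retry(do_io):
    """Call do_io(timeout=...) once normally, and once more with a longer
    deadline if the first attempt times out, e.g. because the disk slept."""
    try:
        return do_io(timeout=NORMAL_TIMEOUT)
    except TimeoutError:
        # Likely a sleeping disk: give it one more chance with a longer deadline.
        return do_io(timeout=SPIN_UP_TIMEOUT)
```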
This makes such testing a difficult task, even with virtual machines. This is one of two major complexities of desktop software, the other one being the fact that the software product has to survive in the wild, i.e. get along with thousands of other software products (including malware) which may be installed, deal with different configurations, accessibility options, broken things, etc.
Seeing as you can configure the hardware and memory that the VM is allowed to use, you can test a wide range of setups on your own PC. This also allows you to test across multiple operating systems.
For instance, the CPU of a PC could be 6 GHz. When you install a virtual PC, you can set it to use a certain fraction of that, so you could set the virtual PC to run at 2.5 GHz and test the game to see if it runs. When you find the point where the game doesn't run properly, say 2 GHz, you would step back up to maybe 2.2 GHz, and if it runs well there, that would be your minimum specification for that piece of hardware.
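That stepping procedure is essentially a search for the lowest clock that still runs acceptably. A sketch of the idea, where runs_acceptably() is a hypothetical stand-in for a manual play-test at the given VM setting:

```python
def runs_acceptably(clock_ghz):
    # Hypothetical stand-in: in practice this is a manual play-test of the game
    # in a VM configured to the given clock speed.
    return clock_ghz >= 2.1

def find_minimum_clock(low=1.0, high=6.0, step=0.2):
    """Step up from a known-too-slow clock until the game first runs well."""
    clock = low
    while clock <= high + 1e-9:
        if runs_acceptably(clock):
            return round(clock, 1)  # first working setting becomes the minimum spec
        clock += step
    return None

print(f"minimum clock for the spec sheet: {find_minimum_clock()} GHz")
```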
I have sourced the information below from Justin Cave on Stack Exchange.
The link is to a similar question: How are minimum system requirements determined?
All the information below came directly from Justin Cave (2011) and has not been edited.
Frequently, the minimum requirements are set by looking at the types of systems that target market customers would actually use for the product in question and picking some reasonable cutoff that doesn’t alienate the target customer and is something the QA department can test with a minimal additional hassle.
If you expect that most of your customers are going to install your product on relatively recent desktop computers, for example, you would probably look around and see that just about any low end desktop computer for the home is going to ship with 2 GB of RAM. So a recent computer is very likely to have at least 1 GB of RAM even if it’s a couple years old. If very few of your customers are going to want to use a machine that only has 512 MB of RAM, the revenue of these sales is likely to be more than offset by the support requests (older machines are likely to have lots of other problems and incompatibilities that will cause problems and generate more help desk calls than other customers). So it may well be more profitable to avoid making sales to those customers.