This article shows that for Android apps, software developers typically use between 30 and 50 different devices for compatibility testing.
I tried to find information about testing applications on “classic” computers, for example PC games or graphics-oriented business applications: how many different hardware configurations are typically tested in this area?
2
Enough to gain sufficient confidence that your software will work on the desired platforms.
You don’t test every device, of course; you decide how much coverage of your target audience you want, and then structure your tests around the devices you think the vast majority of your audience has.
Unless your software requires exotic hardware (CUDA, perhaps, or multi-channel sound cards), testing software on PC platforms is easier. Your concerns have more to do with “system requirements” (things like processor speed, RAM and hard disk space) than with differences between individual machines, and software frameworks such as .NET are designed to run on almost any PC-compatible device.
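To make the “system requirements” point concrete, here is a minimal sketch (in Python, with entirely hypothetical thresholds and architecture names) of the kind of preflight check a PC application might run, as opposed to per-device testing:

```python
import platform
import shutil

# Hypothetical minimum requirements, for illustration only.
MIN_DISK_BYTES = 2 * 1024**3            # 2 GiB of free disk space
SUPPORTED_ARCHES = {"AMD64", "x86_64", "arm64"}

def meets_requirements(install_path="."):
    """Return a list of human-readable problems; an empty list means OK."""
    problems = []
    arch = platform.machine()
    if arch not in SUPPORTED_ARCHES:
        problems.append(f"unsupported architecture: {arch}")
    free = shutil.disk_usage(install_path).free
    if free < MIN_DISK_BYTES:
        problems.append(f"need {MIN_DISK_BYTES} bytes free, have {free}")
    return problems
```

The point is that such a check depends only on coarse capabilities (architecture, disk, RAM), not on the exact make and model of the machine.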
3
We make a .NET obfuscation product. Because we are quite conservative and strive for compatibility, we have to test on many different platforms.
Some platforms:
- Windows XP 32-bit
- Windows Vista 32- and 64-bit
- Windows 7 32- and 64-bit
- Windows 8 32- and 64-bit
And then we also have to test against different versions of .NET:
- .NET 3.5
- .NET 3.5 SP1
- .NET 4.0
- .NET 4.5
And then we have to verify that the programs it outputs work on an even broader set of versions.
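The OS and runtime combinations listed above multiply quickly. A small sketch (Python; the platform and runtime lists simply mirror the answer above) of enumerating such a test matrix:

```python
from itertools import product

# Platform and runtime lists mirroring the answer above.
windows_versions = [
    ("Windows XP", ["32-bit"]),
    ("Windows Vista", ["32-bit", "64-bit"]),
    ("Windows 7", ["32-bit", "64-bit"]),
    ("Windows 8", ["32-bit", "64-bit"]),
]
dotnet_versions = [".NET 3.5", ".NET 3.5 SP1", ".NET 4.0", ".NET 4.5"]

def test_matrix():
    """Yield every (OS, bitness, runtime) combination to cover."""
    for os_name, bitnesses in windows_versions:
        for bits, runtime in product(bitnesses, dotnet_versions):
            yield (os_name, bits, runtime)

print(len(list(test_matrix())))  # 7 OS/bitness combos x 4 runtimes = 28
```

Even this modest set of supported platforms yields 28 configurations, which is why the VM-per-configuration approach below pays off.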
However, I’d say you should do general QA testing (e.g. after each feature is implemented) on a few “popular” system configurations, and then, before a release, do much broader (but not as deep) testing across all the platforms you plan to support. Virtual machines, a server to run them on, and an MSDN subscription are your friends 🙂
2
We had various Java- and Flash-based apps that we tested (so simpler than a game or a fully installed app), and we had ~15 testing machines with (IIRC) ~20-30 VMs on each for testing.
At that point you’re really testing for regressions – it becomes infeasible to run every manual test against every hardware/software configuration.
It depends enormously on the guarantees you want to provide to your customers.
If you don’t advertise anything, then it should be sufficient to test only on a single platform.
If you state that it works on platforms X, Y and Z, then you had better have the test reports to prove it.
And this is where it gets interesting. With PC applications, people tend to be far more accepting of bugs or incompatibilities than they are with other electronic devices, such as mobile phones. This is reflected in the testing effort: it is precisely because phone users are so demanding that major app developers test against such a large number of devices, just to account for every possible incompatibility.
As an anecdote, I have worked for a company making car-kits (so you can use your mobile phone in a car). Those car-kits get tested against just about every mobile phone that is on the market, because people just don’t accept it if their phone does not interoperate flawlessly with their car-kit.
This is where automated testing is the only manageable approach. It has a lot of overhead, but in the end it is worth it for the coverage you are looking for. In short: develop automated tests for maximum code coverage, then set up a testing schedule that alternates between the different architectures you want to cover. You can choose the schedule depending on how many architecture options you need to cover. I normally just do amd64, ARM, and x86, and I also include locales (en-us, ja-jp, etc.) for international audiences. But I am working on a desktop app. Hope that helps!
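The alternating schedule described above can be sketched as follows (Python; the architecture and locale lists come from the answer, the rotation scheme itself is just one illustrative way to spread combinations across runs):

```python
from itertools import cycle, islice

# Configuration axes named in the answer above.
architectures = ["amd64", "arm", "x86"]
locales = ["en-us", "ja-jp"]

def nightly_schedule(num_runs):
    """Rotate through arch/locale pairs so every combination is
    exercised over successive runs instead of all in one run."""
    combos = [(arch, loc) for arch in architectures for loc in locales]
    return list(islice(cycle(combos), num_runs))

# Over six runs, each of the 6 combinations is tested exactly once,
# then the cycle starts over.
print(nightly_schedule(6))
```

The design choice here is to trade latency for throughput: any single run only covers one configuration, but over a full cycle every architecture/locale pair gets tested.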