It seems to be accepted that computers that have been powered on for a long time and are running any sort of complex software (i.e. an OS) tend to develop random errors and problems. Turning the device off and back on clears all volatile memory, generally fixing the problem.
First, am I just imagining that, or is it generally accepted? Is there a better description, or a word or phrase, for it?
Second, how do servers deal with this? They are generally 24/7/365 machines. Multiple machines serving the same page could presumably be rebooted one at a time, but is that actually done in practice?
It might be the accepted norm if you are accustomed to running hardware and software that isn’t that stable. But I haven’t observed a particular trend of servers running poorly after extended uptime in my career. I’ve run many a Solaris, Linux or BSD server well past 1000 days of uptime, and more than a handful have made it to the 1400-1500 day mark. I would update Apache or apply other patches without patching the kernel and just keep trucking. (NOTE: I don’t advocate this as a sysadmin practice, but there are systems that customers don’t want rebooted unless there is a problem.)
As for how it is done with web servers that just serve pages, you are correct: a page can be, and often is, served by redundant servers and even a content delivery network. Taking down a node shouldn’t impact your site if you have redundancy and caches. High availability is all about redundancy; keeping a single node healthy for extended runtimes isn’t that important for a static web site. You really shouldn’t need to depend on a single web server today, when a Linux VM can be had for $5 a month at Digital Ocean and a 2-node load-balanced Linux setup can be put together cheaply.
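To make the redundancy point concrete, here is a minimal sketch, assuming two interchangeable web nodes behind some front end, of round-robin selection with a health check. The addresses and the /healthz path are hypothetical, and in practice you would let nginx, HAProxy or your hosting provider’s load balancer do this for you.

```python
# Minimal sketch only: round-robin selection across two redundant web nodes,
# skipping any node that fails a health check. The addresses, port and the
# /healthz path are hypothetical placeholders.
import itertools
import urllib.request

BACKENDS = ["http://10.0.0.11:8080", "http://10.0.0.12:8080"]
_rotation = itertools.cycle(BACKENDS)

def is_healthy(base_url: str, timeout: float = 1.0) -> bool:
    """Return True if the node answers its health endpoint with HTTP 200."""
    try:
        with urllib.request.urlopen(base_url + "/healthz", timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def pick_backend() -> str:
    """Return the next healthy backend, trying each node at most once."""
    for _ in range(len(BACKENDS)):
        candidate = next(_rotation)
        if is_healthy(candidate):
            return candidate
    raise RuntimeError("no healthy backends available")

if __name__ == "__main__":
    print("Routing request to", pick_backend())
```

With two such nodes, either one can be rebooted or patched while the other keeps serving the page.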
The shift over the past 10 years has been toward many cheap servers. Back in the 1998-2000 time frame at IBM we were already running massively distributed web farms with 50-100 nodes serving a single site (the Olympics, Wimbledon, the US Open, the Masters), and now it is commonplace, since companies like Google and Facebook have published a lot of literature on the technique.
Any system that has to be reset or restarted after a while to keep working is faulty. The faults range from simple things like memory leaks to more complex problems like design failures.
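To illustrate the “simple” end of that range, here is a hypothetical sketch of a memory leak in a long-running process: an unbounded cache that is never evicted, so a restart only hides the bug by throwing the leaked state away. All names are made up for the example.

```python
# Hypothetical example of a slow fault: an unbounded cache in a long-running
# process. Nothing is ever evicted, so memory use grows for as long as the
# process stays up; rebooting "fixes" it only by discarding the leaked state.
_response_cache = {}

def expensive_lookup(key: str) -> str:
    """Stand-in for real work (database query, template render, ...)."""
    return key.upper()

def handle_request(key: str) -> str:
    """Serve a request, caching every distinct key forever."""
    if key not in _response_cache:
        _response_cache[key] = expensive_lookup(key)  # entry is never released
    return _response_cache[key]

if __name__ == "__main__":
    # Each unique key grows the cache a little more; over weeks of uptime
    # this is the kind of creeping degradation the question describes.
    for i in range(1_000_000):
        handle_request(f"request-{i}")
    print("cache entries:", len(_response_cache))
```

The real fix is an eviction policy (or not caching at all), not a scheduled reboot.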
Many years of poor software have had the effect of “training” users to accept faulty systems and to work around the problem by restarting. Note: “working around” a problem is not the same as fixing it.
For servers (i.e. 24/7/365 machines) this isn’t possible, and you need software that isn’t faulty in the first place.
Well, firstly, I would expect years of uninterrupted uptime from any modern server OS (even Windows 🙂).
But there are usually external reasons for bringing down a server (often every few months!).
Software upgrades, hardware upgrades, changing dust filters, re-organizing your data center, applying security patches etc. etc.
If the system is critical and expected to run 24/7/365, there are a few ways to deal with this.
- Run a load-balanced cluster. You simply recycle the servers one at a time (a rolling-restart sketch follows at the end of this answer).
- Have a hot standby. Switch load to the standby machine when applying maintenance.
- No physical servers: run only virtual machines. You can shift the image to another physical host quickly, with minimal downtime.
In practice, a robust setup will probably use a hybrid of all of the above methods: a cluster of physical servers running a set of load-balanced virtual machine images, with some hot standby servers at a remote site.
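To make the first bullet a bit more concrete, here is a minimal sketch of a rolling restart, assuming SSH access to each node. The node names, the drain/re-enable commands and the nginx service check are hypothetical placeholders; a real environment would normally use the load balancer’s own drain API or tooling such as Ansible instead.

```python
# Minimal sketch of a rolling restart: take nodes out of the load balancer
# one at a time, reboot them, and only move on once each is healthy again.
# Node names, the drain/enable commands and the "nginx" service name are
# hypothetical placeholders for real orchestration tooling.
import subprocess
import time

NODES = ["web01", "web02", "web03"]

def run(node: str, command: str, check: bool = True) -> int:
    """Run a command on a node over SSH (stand-in for real tooling)."""
    return subprocess.run(["ssh", node, command], check=check).returncode

def wait_until_healthy(node: str, attempts: int = 30, delay: float = 10.0) -> None:
    """Poll until the node answers SSH and its web service is active again."""
    for _ in range(attempts):
        if run(node, "systemctl is-active --quiet nginx", check=False) == 0:
            return
        time.sleep(delay)
    raise RuntimeError(f"{node} did not come back healthy")

def rolling_restart() -> None:
    for node in NODES:
        run(node, "drain-from-load-balancer")  # hypothetical drain step
        run(node, "sudo reboot", check=False)  # SSH drops as the node goes down
        time.sleep(30)                         # crude: give the reboot time to start
        wait_until_healthy(node)
        run(node, "add-to-load-balancer")      # hypothetical re-enable step
        # Only one node is ever out of rotation, so the service stays up.

if __name__ == "__main__":
    rolling_restart()
```

The key property is that at most one node is out of rotation at any time, so the reboot that “fixes” a single machine never shows up as downtime for the service.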