I write a lot of JavaScript applications, and in many different circumstances the browser becomes unresponsive or gives a “slow script” error. Even when following best practices — initializing large data sets, running complex animations, or handling many event handlers firing at once — I have to wrap script blocks in extra setTimeouts or requestAnimationFrames to load-balance them. It seems like there should be some standard way of managing the browser load for large JavaScript applications.
Any ideas? It seems like there must be JS application designers thinking about this, but I can find nothing on the web or on Stack Overflow.
Edit:
I appreciate all the responses. My question is not “How can I write efficient JavaScript?”, which seems to be the question the current responses are answering. My question is: how can I balance the load of multiple sequential streams of JavaScript running in the same browser?
I am well aware of all the information posted in the responses so far and appreciate the posters, but I am looking for a framework or a design pattern that balances script execution in a browser. This question is common in many other languages and in devops; I am simply asking the same question for JavaScript.
You only have a very short span of time to work with.
Before talking about threads:
The DOM is slow
Really.
If your website is DOM-heavy, then find ways to offload these manipulations. Use CSS3 transitions, use canvas for animations, avoid unnecessary paints, etc.
There are also techniques to make it faster, like DocumentFragments, using off-DOM elements, etc.
JavaScript runs in a single thread in the browser
This means your scripts have to be very quick if you want the user to not feel any form of lag.
You can, for example, break intensive calculations into small fragments. Here is a (deliberately simple) example that yields back to the event loop between steps; note that because the work is now asynchronous, the result has to be delivered through a callback rather than a return value:
function findMax(arr, callback) {
  var currentMax = -Infinity;
  var i = 0;
  setTimeout(function doWork() {
    // Do one small piece of work per tick...
    if (arr[i] > currentMax) {
      currentMax = arr[i];
    }
    i++;
    if (i < arr.length) {
      // ...then yield to the event loop before the next piece.
      setTimeout(doWork, 0);
    } else {
      callback(currentMax);
    }
  }, 0);
}
Wait, don’t go yet! I lied! It’s not really single threaded
JavaScript doesn’t have to run single-threaded in the browser any more! If you target modern browsers, you can use Web Workers. These let you run intensive scripts in background threads without interrupting the main program flow.
If you support modern browsers, this is probably the correct way to handle CPU intensive calculations.
Web Workers let you use the threads you’re used to from other languages, but in a safe (actor-system-ish) way. You can split work between workers and balance it just like you would in any other programming language.
Note: Web Workers require IE10, Safari 4+, Opera 10.6+, Chrome 3+, or Firefox 3.5+.
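As a sketch of how the earlier findMax example could be moved off the main thread (the file name worker.js and the message shape are my own assumptions, not part of any particular API):

```javascript
// worker.js (hypothetical file name) -- runs in a background thread.
// Inside the worker you would wire it up like:
//   self.onmessage = function (e) { self.postMessage(findMax(e.data)); };

// The pure computation the worker would run:
function findMax(arr) {
  var max = -Infinity;
  for (var i = 0; i < arr.length; i++) {
    if (arr[i] > max) max = arr[i];
  }
  return max;
}

// main.js -- spawn the worker and hand it the data (browser-only):
//   var worker = new Worker('worker.js');
//   worker.onmessage = function (e) { console.log('max is', e.data); };
//   worker.postMessage(largeArray);
```

Because the loop runs in a separate thread, it can block for as long as it likes without freezing the page; the main thread only pays for the message passing.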
One last thing
JavaScript is designed to perform asynchronous I/O. While this might not be what you’re doing, more often than not JavaScript performance issues are the result of synchronous code where asynchronous code is appropriate. Synchronous event handling, blocking code, and more generally anything that isn’t CPU intensive should take very little time anyway and not require ‘the big guns’ like Web Workers.
You shouldn’t be writing synchronous JavaScript. JavaScript is single-threaded in the browser. What you are doing is not load balancing – see Load Balancing.
What best practices are you using? JavaScript is pretty quick in modern browsers, and if you’re using even the most simple of best practices, unless you’re doing something incredibly complex you should be fine.
If a script is running slow (assuming client side code here), it generally isn’t down to JS being slow. Your bottlenecks are far more likely to be found somewhere else. The notoriously slow DOM API, for example.
As soon as you talk about DOM manipulations (such as animation), you’re almost certainly going to notice a drop in speed.
A lot of people have critiqued (and continue to critique) the DOM API for being badly designed, overly complex at times, and downright slow.
The main issue with the DOM API is that it’s not controlled by ECMA, as JavaScript is, but by W3. Simply put: client side JS is often 60-75% made up of DOM manipulations, but for those, JS relies on an API maintained and developed by a third party. I think it’s pretty obvious that’s just a recipe for slow-food.
Still, if you need to perform a lot of long and complex computations, a Worker allows you to (sort of) spawn a background thread, which fires an event upon completion.
Another bottleneck that can be noticeable is the over-use of libs/toolkits such as jQuery. These often aren’t very modular in terms of allowing you to include only the bits you really need, and they bring additional overhead with them. Compare
document.getElementById('foo'); // calls the DOM API directly
to
$('#foo'); // calls the jQuery init function, which runs a few if/else
           // branches to work out what to do, then passes the string to
           // the selector engine (ending up in the DOM API anyway),
           // constructs a new jQuery object (which also constructs an array),
           // and returns that new jQuery object
At least, that’s what sort of happened behind the scenes in jQuery.
While I’m on the subject of jQ: a lot of jQ code actually binds too many event listeners, bringing us to a third (and, for now, last) possible bottleneck.
Event listeners are checked in an infinite loop, as you probably well know. If you have 1 listener, you won’t notice any loss in performance. If you bind 1,000 individual handlers directly to references to individual nodes, you will. Each reference is kept in memory, and all the handlers (function objects) are, too.
Event delegation isn’t that hard, and it can make the event loop a lot faster, as I have found out first-hand.
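A minimal sketch of delegation (the delegate helper and the selector names are my own illustration, not a standard API): one listener on a common ancestor decides which descendant a click belongs to, instead of binding a handler to every node.

```javascript
// Attach ONE click listener to a parent and route events to matching
// descendants, instead of binding a handler per node.
function delegate(parent, selector, handler) {
  parent.addEventListener('click', function (e) {
    // Element.closest walks up from the event target to find a match.
    var el = e.target.closest(selector);
    if (el && parent.contains(el)) {
      handler(el);
    }
  });
}

// Usage (browser-only):
//   delegate(document.getElementById('list'), 'li', function (li) {
//     console.log('clicked item', li.textContent);
//   });
```

With this, a list of a thousand items costs one listener and zero per-item function objects.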
Aside from any specific techniques, modern browser dev tools all have JavaScript profilers. Switch them on and run through some tasks and see which functions are taking the most time. Attack each bottleneck one at a time.
As many others have said already, you can’t really “load balance” JavaScript. Unless you’re using workers, JavaScript does one thing at a time. The best you can hope to do is get tasks out of the queue faster than they’re coming in. Here are some things to look at:
Be careful with the DOM
A big part of that is, as the other answers state, avoiding DOM redraws, as they can be extremely expensive. Try to do all your DOM reads in one step, and all your writes in another, as reading from the DOM often causes it to draw all the queued changes to make sure it has up to date information. If you’re using the information to make more DOM changes, then it’ll redraw again, and the user will never even see that first redraw.
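A sketch of that read-then-write split (doubleHeights is my own illustrative name; in a real page the elements would come from the DOM):

```javascript
// Read everything first, then write everything, so the browser is never
// forced to do a synchronous layout between interleaved reads and writes.
function doubleHeights(elements) {
  // Read phase: measure all the elements up front...
  var heights = elements.map(function (el) {
    return el.offsetHeight;
  });
  // ...write phase: now mutate styles with no reads in between.
  elements.forEach(function (el, i) {
    el.style.height = (heights[i] * 2) + 'px';
  });
}
```

The naive version that reads and writes inside one loop would force a fresh layout on every iteration; this version pays for at most one.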
Avoid setInterval
If the callback function takes longer than the interval to run, you’ll get multiple copies of the callback function in the queue, and your code will just fall farther and farther behind as it runs. This is why many people suggest using a setTimeout that re-arms itself when it’s done. That way, there’s only ever one instance of the callback waiting in the queue.
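A sketch of that pattern (startPolling and the returned stop function are my own names for illustration):

```javascript
// Re-arm a single setTimeout when the work finishes, instead of using
// setInterval: at most one pending callback ever sits in the queue,
// no matter how long the work takes.
function startPolling(work, delayMs) {
  var stopped = false;
  setTimeout(function tick() {
    if (stopped) return;
    work();                    // however long this takes...
    setTimeout(tick, delayMs); // ...the next run is scheduled only afterwards
  }, delayMs);
  return function stop() { stopped = true; };
}
```
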
Buffer “Burst” Events
Some events fire several times in quick succession, but really you should only react to the last occurrence. One example of this I’ve seen often is a scroll event listener. When the user is scrolling the page, this event can fire many times a second, and if the listener does any real work, then your queue builds up way faster than it clears. That’s why you often see patterns like this:
var scrollTimer;
function scrollHandler(e) {
// Actually handle the scrolling, DOM manipulations, real work...
}
window.addEventListener('scroll', function (e) {
clearTimeout(scrollTimer);
scrollTimer = setTimeout(scrollHandler, 100, e);
});
This way, no matter how much the user scrolls, the heavy code doesn’t actually run until they stop scrolling (in this case for a tenth of a second).
Cache Continuous Events
I’ve found that in some situations certain events come in almost continuously and have multiple listeners. If you’re reacting to the accelerometer, GPS, or game controllers (or keyboards being used as controllers), then the events will probably be coming in as fast as they possibly can, triggering their listeners almost constantly. Everything’s trying to happen as fast as possible, and that makes it slow. Unlike the “burst” events, however, you can’t just wait for these events to stop before reacting. The whole point is to react as they change.
If you think about it though, your code only needs to react once per frame. Have a single keydown/keyup listener that sets flags for the keys you want. Have a single accelerometer listener update an array with orientation data, or a GPS listener store the latitude and longitude. These listeners should be fast enough to keep the queue emptying as quickly as it fills. With that in place, you can loop through all your objects once a frame and just have them check the flags and arrays for the information they need.
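A sketch of that idea (keys, updatePlayer, and the key names are illustrative, not any particular game framework):

```javascript
// Listeners only record state; nothing heavy runs per event.
var keys = {};
// Browser-only wiring:
//   window.addEventListener('keydown', function (e) { keys[e.code] = true; });
//   window.addEventListener('keyup',   function (e) { keys[e.code] = false; });

// Called once per frame (e.g. from requestAnimationFrame): reads the
// cached flags instead of reacting to every individual event.
function updatePlayer(player, keys) {
  if (keys['ArrowLeft'])  { player.x -= 1; }
  if (keys['ArrowRight']) { player.x += 1; }
  return player;
}
```

However fast the events arrive, the expensive reaction happens at most once per frame.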
So based on your edits I read this as asking “how do I effectively divide up a long-running calculation into multiple web workers?” I’ve been writing a library for dealing with web workers, and here’s what I found: there are a couple of different ways you can do this, and the choice often depends on the data. Assume you have an array that you want to process using n workers.
- You can just arbitrarily split the array into n parts and send a part to each worker, either by slicing the array (if it’s small) or by iterating through the list and sending each chunk of data to a different worker (via Math.random() or %). If your data items take a fairly uniform amount of time to process, this is going to be as fast as a queue without the queue’s overhead.
- You can set up a queue so that after processing a piece of data, the worker sends the result back and gets the next piece. This balances the load pretty well and means that if one piece of data takes 500 ms and the rest take 1 ms, you won’t have data stuck waiting behind the 500 ms piece. But you now have workers waiting to get their data, and other added complexity that can lead to slower times than the first approach in some cases, but also faster in others, especially if you use transferable objects to speed up transfer times.
- You can just do it in one worker. This isn’t actually a solution to the question you asked, but it could be a solution to the problem you have. I find it’s really a good idea to double-check that the overhead of dividing your data up into multiple parts doesn’t outweigh the benefits you get, especially considering that since Chrome doesn’t support workers spawning more workers, you have to do the dividing up in the DOM thread.
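The first option — splitting the array into n parts — can be sketched like this (chunk is my own helper name, not part of any library):

```javascript
// Split an array into n roughly equal chunks, one per worker.
function chunk(arr, n) {
  var size = Math.ceil(arr.length / n);
  var chunks = [];
  for (var i = 0; i < arr.length; i += size) {
    chunks.push(arr.slice(i, i + size));
  }
  return chunks;
}

// Each chunk would then be posted to a worker (browser-only):
//   chunk(data, workers.length).forEach(function (part, i) {
//     workers[i].postMessage(part);
//   });
```
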
The term load balancing is confusing here and I think that has led to some answers that you don’t feel are helpful.
First off, all of the advice given in the other answers is sound. Please listen to the other experts here.
Now on to your load balancing. Load balancing means to create multiple workers and distribute work to them transparently. Traditionally, it means having multiple servers serve client requests. But I see how it applies to your JavaScript performance issue here.
It sounds like you are trying to see if you can break your code up into multiple processes or threads so that they can run in parallel. Two problems with that:
- Parallelizing something doesn’t speed it up as dramatically as you might think and adds complexity / risk and sometimes even performance penalties as you add synchronization points around your critical sections.
- JavaScript is single-threaded. Even if you want to take the risk of creating multiple threads, you cannot.
That said, there is a possible solution. There is a new technology called web workers designed for message passing and handling. That can probably give you what you need. However, I don’t believe (could be mistaken here) that it is supported by all browsers.
http://www.html5rocks.com/en/tutorials/workers/basics/
Mozilla’s developer site has good information about thread safety issues. Read that over before you pursue this course.
https://developer.mozilla.org/en-US/docs/Web/Guide/Performance/Using_web_workers
So there are some long-winded answers here that are likely all correct. However, for your particular site, I’d consider it pretty likely that there are 1 to 5 functions stealing 99% of the CPU time, and that they could be 80%-optimized with some work.
Most browsers have code-profiling tools that will allow you to watch the code as you do something slow, and then tell you what the browser spent the most time doing. Whatever’s at the top of the list afterwards is what you should focus on first.
The stuff about web workers, for instance, is really neat. But in a basic webpage that only uses JavaScript for a better user experience (as opposed to doing some sort of actual work), I’d be surprised if that’s needed.
To address the “load balancing” aspect of your question, I’d say that it is impossible for browser-based JavaScript.
“Load balancing” implies that you have more than one machine to work against. I.e. if deploying a RESTful service, you’d have many machines running the same service and spread out the traffic amongst them. This idea really doesn’t apply to JavaScript running in browsers (node.js is a different story) because you’ve only got one client machine. There isn’t a way to spread the load out to other machines.
“Threading” is probably a better analogy (as others have mentioned). In general, threading isn’t really available in JavaScript, but you can use HTML5 Web Workers to achieve a similar result… but then you’re restricted to supported browsers.