Is there such a concept as a synchronous promise? Would there be any benefit to writing synchronous code using the syntax of promises?
try {
  foo();
  bar(a, b);
  bam();
} catch (e) {
  handleError(e);
}
…could be written something like this (but using a synchronous version of then):
foo()
.then(bar.bind(a, b))
.then(bam)
.fail(handleError)
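A runnable sketch of the idea, where foo, bar, bam, and handleError are placeholder functions: starting the chain with Promise.resolve() lets ordinary synchronous functions participate in a then-chain, and native promises use .catch where older libraries used .fail:

```javascript
// Placeholder synchronous functions standing in for foo, bar, bam.
const log = [];
function foo() { log.push('foo'); }
function bar(a, b) { log.push('bar:' + a + b); }
function bam() { log.push('bam'); }
function handleError(e) { log.push('error:' + e.message); }

// Promise.resolve() starts a chain even though nothing here is async.
Promise.resolve()
  .then(foo)
  .then(() => bar(1, 2)) // a wrapper function, not bar.bind(a, b)
  .then(bam)
  .catch(handleError);   // native promises use .catch, not .fail
```

If any step throws, the remaining .then callbacks are skipped and control jumps to .catch, mirroring the try/catch version.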
Absolutely! For several reasons:
- Promises provide an excellent error-handling system. You can return a promise and let the caller deal with it however they wish (or not at all).
- Many would argue that promises make your code cleaner and easier to read, which is no small thing. I would argue that promises are worth using if for no other reason than their elegance.
- Last, but certainly not least: your implementation is synchronous now, but should that ever change, you won't have to rewrite a big section of code to make the synchronous parts work asynchronously; only the actual implementation changes.
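The last point can be sketched as follows; getConfig is a hypothetical promise-returning function whose callers never need to know whether the work behind it is synchronous or not:

```javascript
// Version 1: synchronous under the hood, but already promise-returning.
function getConfig() {
  return Promise.resolve({ retries: 3 });
}

// Version 2 (later): genuinely asynchronous; callers are unchanged.
// function getConfig() {
//   return require('fs').promises.readFile('config.json', 'utf8')
//     .then(JSON.parse);
// }

// Caller code works identically against either version:
getConfig().then(cfg => console.log(cfg.retries));
```

Only the body of getConfig changes between the two versions; every call site keeps the same .then-based shape.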
Now, please don’t get me wrong, I don’t think you should use promises for every thing you do in the program, though for something that is even mildly cpu-intensive or complicated, I would get into the habit of using promises to implement these operations.
The main drawback is that promises are not fully supported in older browsers, so if you decide to go this route, consider using some polyfill library like bluebird or q to make it work both in newer browsers as well as older browsers.
Hope that helps!
3
Yes and no. Your code sample will fail on ..
.then(bar.bind(a, b))
You’d have to use
.then(function() { bar(a, b); })
.. unless foo()’s resolve() passes the arguments a, b along (say, as an array), in which case you’d use something like ..
.then(function(args) { bar.apply(null, args); })
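To make the objection concrete: Function.prototype.bind fixes the this value and any leading arguments, so bar.bind(a, b) produces a function whose this is a and whose first argument is b, which is not a call to bar(a, b). A small illustration with hypothetical values:

```javascript
function bar(a, b) { return a + b; }

// Pitfall: this binds `1` as the `this` value and `2` as the first
// argument, so calling it gives bar(2, undefined), not bar(1, 2).
const wrong = bar.bind(1, 2);

// Fix 1: bind with a null `this` and both arguments pre-filled.
const right1 = bar.bind(null, 1, 2);

// Fix 2: a plain wrapper function, as suggested above.
const right2 = function () { return bar(1, 2); };

console.log(right1()); // 3
console.log(right2()); // 3
```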
In any case, the real advantage of writing the code that way is that it really isn’t synchronous code. It’s sequential, but not synchronous; it’s asynchronous. The foo() invocation could kick off some filesystem activity that takes a while, if running on node.js/io.js, or it could run a background worker in a separate thread in a modern browser. In either case the promise wouldn’t resolve(), and the next then() wouldn’t execute, until that task completes, but meanwhile JavaScript’s main thread keeps running and processing other events.
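The sequential-but-not-synchronous behaviour is easy to demonstrate; here delay is a stand-in for any long-running task such as file I/O:

```javascript
const events = [];

// A promise that resolves after `ms` milliseconds; stands in for real I/O.
function delay(ms, value) {
  return new Promise(resolve => setTimeout(() => resolve(value), ms));
}

delay(10, 'task done').then(v => events.push(v));

// This line runs immediately: the main thread is not blocked
// while the "task" above is still pending.
events.push('main thread still running');
```

The main-thread line lands in `events` first even though it appears last, because the .then callback only runs once the task completes.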
The down side is that you must have a promise strategy in place and work with it. Neil said you should use a polyfill, which is true in that ECMAScript 6 isn’t generally adopted yet, but even if it were, you would still have to work with its promise strategy. So your larger concern is that foo(), bar(), and bam() must all return promises. Those promises must be objects that intercept a resolve() or reject() and pass them on to .then() or .done() and to .fail(), and they must chain for .fail(). (There’s more to promises than this; you’ll need to do more research if you bake your own.) The existing promise frameworks, including even jQuery’s $.Deferred, are pretty elegant, but the point is that you’re adding more work to your code.
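Under those constraints, the original sequence would look something like this with native promises (.then/.catch standing in for .done/.fail; the function bodies are placeholders, and the rejection in bam shows errors chaining down to the final handler):

```javascript
// Each step returns a promise, as the promise strategy requires.
function foo()     { return Promise.resolve('from foo'); }
function bar(prev) { return Promise.resolve(prev + ' -> bar'); }
function bam(prev) { return Promise.reject(new Error(prev + ' -> bam failed')); }

foo()
  .then(bar)
  .then(bam)
  .catch(err => console.error(err.message)); // rejections chain down to here
```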
So, really, as Neil said, you should only write your code in the promise syntax when you’re sequentially invoking methods that are expected to be long-running tasks. However, YAGNI: don’t do it because you think they might be long-running tasks in the future; only do it if you know they’re long-running tasks now. Don’t optimize early; wait until you’ve identified a bottleneck. But you must also consider the cost of promises: don’t write asynchronous code if you can’t support the promise strategy it requires.