We’re changing how our AS3 application talks to our back end, implementing a REST system to replace our old one.
Sadly the developer who started the work is now on long-term sick leave and it’s been handed over to me. I’ve been working with it for the past week or so and I understand the system, but there’s one thing that’s been worrying me: there seems to be a lot of passing of functions into functions. For example, the class that makes the call to our servers takes in a function, which it then calls with a result object once the process is complete and errors have been handled.
It’s giving me that “bad feeling” where I suspect it’s horrible practice, and I can think of some reasons why, but I want some confirmation before I propose a rework of the system. Does anyone have experience with this possible problem?
19
It isn’t a problem.
It is a well-known technique. These are higher-order functions (functions that take other functions as parameters).
This kind of function is also a basic building block in functional programming and is used extensively in functional languages such as Haskell.
Such functions are not bad or good – if you have never encountered the notion and technique they can be difficult to grasp at first, but they can be very powerful and are a good tool to have in your toolbelt.
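A minimal sketch of the idea, in TypeScript rather than AS3 (the names `applyTwice` and `double` are invented for illustration): one function receives another as a parameter and decides when and how to call it.

```typescript
// A higher-order function: it takes a function f as a parameter
// and applies it twice to the given value.
function applyTwice(f: (x: number) => number, x: number): number {
  return f(f(x));
}

const double = (n: number) => n * 2;
const result = applyTwice(double, 3); // double(double(3)) = 12
```

The caller decides what behaviour to plug in; `applyTwice` itself knows nothing about doubling.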
6
They’re not just used for functional programming. They’re also known as callbacks:
A callback is a piece of executable code that is passed as an argument to other code, which is expected to call back (execute) the argument at some convenient time. The invocation may be immediate, as in a synchronous callback, or it may happen at a later time, as in an asynchronous callback.
Think about asynchronous code for a second. You pass in a function that, for example, sends data to the user. Only when the code has completed do you invoke this function with the result of the response, which the function then uses to send the data back to the user. It’s a mindset change.
I wrote a library that retrieves Torrent data from your seedbox. You are using a non-blocking event loop to execute this library and get data, then return it to the user (say, in a websocket context). Imagine you have 5 people connected in this event loop, and one of the requests to get someone’s torrent data stalls. That will block the whole loop. So you need to think asynchronously and use callbacks – the loop keeps running and the “giving the data back to the user” only runs when the function has finished execution, so there’s no waiting for it. Fire and forget.
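The shape of such an asynchronous call can be sketched like this (TypeScript rather than AS3; `fetchTorrentStats` and its behaviour are invented for illustration, with the “network” response simulated synchronously so the flow is easy to follow):

```typescript
// Error-first callback type: the caller supplies the function,
// the library "calls back" when the work is done.
type Callback<T> = (error: Error | null, result?: T) => void;

function fetchTorrentStats(host: string, done: Callback<number>): void {
  // Real code would issue a non-blocking network request here;
  // we simulate an immediate response for the sketch.
  if (host === "") {
    done(new Error("no host given")); // errors also flow through the callback
    return;
  }
  done(null, 42); // call back with the result once the work is finished
}

let seeders = 0;
fetchTorrentStats("seedbox.example", (err, result) => {
  if (err === null && result !== undefined) {
    seeders = result; // runs only once the data is available
  }
});
```

The event loop never blocks waiting for the result; the callback fires whenever the data arrives.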
3
This is not a bad thing. In fact, it is a very good thing.
Passing functions into functions is so important to programming that we invented lambda functions as a shorthand. For example, one may use lambdas with C++ algorithms to write very compact yet expressive code that gives a generic algorithm the ability to use local variables and other state to do things like searching and sorting.
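The same idea sketched in TypeScript rather than C++ (the array contents are arbitrary): a lambda passed to a generic sort, capturing a local variable from the enclosing scope.

```typescript
// A generic sort algorithm parameterised by a lambda.
// The lambda captures the local variable `descending` from its scope.
const scores = [40, 10, 30, 20];
const descending = true;
scores.sort((a, b) => (descending ? b - a : a - b));
```

The sort routine stays fully generic; the comparison logic, including access to local state, travels in with the lambda.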
Object-oriented libraries may also have callbacks which are essentially interfaces specifying a small number of functions (ideally one, but not always). One can then create a simple class that implements that interface and pass an object of that class through to a function. That is a cornerstone of event-driven programming, where framework-level code (perhaps even in another thread) needs to call into an object to change state in response to a user action. Java’s ActionListener interface is a good example of this.
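A sketch of that interface flavour of the pattern, in TypeScript for illustration (`ClickListener`, `Logger`, and `Button` are invented stand-ins for the kind of roles Java’s ActionListener plays):

```typescript
// An interface specifying a single callback method.
interface ClickListener {
  onClick(label: string): void;
}

// A simple class implementing the interface.
class Logger implements ClickListener {
  clicks: string[] = [];
  onClick(label: string): void {
    this.clicks.push(label); // the framework "calls back" into our object
  }
}

// Framework-level code that knows only the interface, not the implementation.
class Button {
  constructor(private label: string, private listener: ClickListener) {}
  press(): void {
    this.listener.onClick(this.label); // event-driven callback invocation
  }
}

const log = new Logger();
new Button("OK", log).press();
```

The button code never needs to know what happens on a click; it only knows the interface.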
Technically, a C++ functor is also a type of callback object which leverages syntactic sugar, operator(), to do the same thing.
Finally, there are C-style function pointers which should only be used in C. I will not go into detail, I just mention them for completeness. The other abstractions mentioned above are far superior and should be used in languages that have them.
Others have mentioned functional programming and how passing functions is very natural in those languages. Lambdas and callbacks are the way procedural and OOP languages mimic that, and they are very powerful and useful.
1
As already said, it isn’t bad practice. It is merely a way of decoupling and separating responsibility. For example, in OOP you would do something like this:
public void doSomethingGeneric(ISpecifier specifier) {
    // do generic stuff
    specifier.doSomethingSpecific();
    // do some other generic stuff
}
The generic method is delegating a specific task – which it knows nothing about – to another object that implements an interface. The generic method only knows this interface. In your case, this interface would be a function to be called.
2
In general, there’s nothing wrong with passing functions to other functions. If you’re making asynchronous calls and want to do something with the result, then you’d need some sort of callback mechanism.
There are a few potential drawbacks of simple callbacks though:
- Making a series of calls can require deep nesting of callbacks.
- Error handling can need repetition for each call in a sequence of calls.
- Coordinating multiple calls is awkward, such as making multiple calls at the same time and then doing something once they are all finished.
- There’s no general way of cancelling a set of calls.
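The first two drawbacks can be seen in a small sketch (TypeScript; `step` is an invented placeholder for an asynchronous call that, for simplicity, invokes its callback synchronously):

```typescript
// Error-first callback type for the sketch.
type Cb = (err: Error | null, value?: number) => void;

// Stand-in for an async call: each step needs the previous result.
const step = (input: number, done: Cb) => done(null, input + 1);

let final = -1;
step(0, (e1, v1) => {
  if (e1 || v1 === undefined) return;     // error handling, level 1
  step(v1, (e2, v2) => {
    if (e2 || v2 === undefined) return;   // repeated at level 2
    step(v2, (e3, v3) => {
      if (e3 || v3 === undefined) return; // and again at level 3
      final = v3;
    });
  });
});
```

Three sequential calls already produce three levels of nesting and three copies of the error check; a longer sequence only gets worse.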
With simple web services the way you’re doing it works fine, but it becomes awkward if you need more complex sequencing of calls. There are some alternatives though. With JavaScript for example, there’s been a shift towards the use of promises (What’s so great about JavaScript promises).
They still involve passing functions to other functions, but asynchronous calls return a value (a promise) that takes a callback, rather than taking a callback directly themselves. This gives more flexibility for composing these calls together. Something like this can be implemented fairly easily in ActionScript.
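A rough sketch of the difference (TypeScript; `addOne` is an invented stand-in for an asynchronous server call): each call returns a promise, so a sequence becomes a flat chain with a single error handler.

```typescript
// Stand-in for an asynchronous call: returns a promise of the result
// instead of taking a callback directly.
const addOne = (n: number): Promise<number> => Promise.resolve(n + 1);

// Three sequential calls compose as a flat chain, not nested callbacks,
// and one .catch covers errors from any step in the sequence.
const result: Promise<number> = addOne(0)
  .then(addOne)
  .then(addOne)
  .catch((err) => {
    throw err; // single error-handling point for the whole chain
  });
```

Compare this with the nesting a plain-callback version of the same sequence would need.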
1
Having arrays of pointers to functions is actually “best practice” for libraries, and is an absolutely critical and fundamental technique for modular operating system design. Places where the technique is used:
- Python (CPython)
- Apache Portable Runtime Util (fundamental basis of apache2)
- inside the code generated by C++ compilers for virtual method tables
- the Windows NT operating system
- the Linux kernel
- Samba’s VFS dynamically-loadable modular API
And many, many more. Think about this: if you want to design a library that is fully portable and clean, can you use strcpy? Can you use malloc? Can you use free? What about printf? These are all functions that are part of libc6… so what happens when your code is used on a system where it’s unsafe to assume that those exist, or they simply don’t exist?
The answer is that you:
- declare a table of function pointers as a stable API
- make the first entry in that table say which version of the API it is
- then ONLY ever extend that table at the end (you never CHANGE an existing entry)
Then, for example to run on any POSIX system, you pre-populate that table with pointers to functions such as strncpy, memcpy, malloc, and free, and pass it into your application or library.
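The table-of-functions idea can be sketched in TypeScript (the API shape and names here are invented; in C the slots would be real function pointers to strncpy, malloc, and so on):

```typescript
// A stable API table: a version field first, then only function slots.
// New slots may be appended in later versions; existing ones never change.
interface HostApiV1 {
  version: 1;
  copy(src: string): string;
  length(s: string): number;
}

// Pre-populate the table for a particular host environment.
const hostApi: HostApiV1 = {
  version: 1,
  copy: (src) => src.slice(), // stands in for strncpy/memcpy
  length: (s) => s.length,    // stands in for strlen
};

// A "library" that talks to the host only through the table,
// never calling host functions directly.
function shout(api: HostApiV1, msg: string): string {
  const copied = api.copy(msg);
  return api.length(copied) > 0 ? copied.toUpperCase() + "!" : copied;
}

const out = shout(hostApi, "hi");
```

The library compiles and runs against any host that can fill in the table, which is exactly what makes the approach portable.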
This is absolutely standard computer science and standard good practice, and it is how Windows NT has managed to keep a stable core base API for over… 30 years?
Boost is the absolute antithesis of this practice and, as a result, is one of the worst offenders in the libre/open world for stability and maintenance. It’s common to have five separate versions of libboost installed and yet, when building software that uses it, be unable to successfully link.
1