Webhooks can be very powerful when you are trying to automate or integrate software; however, handling their deployment in a controlled environment is painful in terms of both security and operations.
I need a way to allow anybody in the company (anybody who can be trusted with API access, that is) to create, deploy, use, and perhaps even share webhooks without requiring access to a server.
One way I thought of doing this is to create a small application that can store, route and run scripts. The best implementation I can think of is to take code stored in a database, write it to a temporary file, and then run a system command for the given language.
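A minimal sketch of that idea, to make the design concrete (the function name, interpreter table, and file suffixes are all illustrative, not an existing API; note that nothing here sandboxes the code, which is exactly the problem described below):

```python
import os
import subprocess
import tempfile

# Illustrative mapping from stored language tag to interpreter command.
INTERPRETERS = {
    "python": ["python3"],
    "php": ["php"],
    "shell": ["sh"],
}

SUFFIXES = {"python": ".py", "php": ".php", "shell": ".sh"}

def run_webhook_script(language, code, timeout=30):
    """Write `code` to a temporary file and run it with the matching interpreter."""
    with tempfile.NamedTemporaryFile("w", suffix=SUFFIXES[language],
                                     delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            INTERPRETERS[language] + [path],
            capture_output=True, text=True, timeout=timeout,
        )
        return result.returncode, result.stdout, result.stderr
    finally:
        os.unlink(path)  # always clean up the temp file
```

The timeout at least bounds runaway scripts, but the child process still runs with the full privileges of the host application.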
However, nothing would prevent that code from, say, shutting down the machine, downloading and executing dangerous external code, etc.
Then I thought about Linux containers, but I would prefer a portable solution. I looked for an equivalent on Windows, and apparently the technology does not exist yet:
http://www.theregister.co.uk/2014/10/16/windows_containers_deep_dive/
http://www.theregister.co.uk/2014/11/18/windows_docker_client/
Is there a simpler approach that can still be regarded as secure? I would want to at least be able to execute PHP, Python, SSJS, and shell scripts.
P.S.: Free downvotes for whoever suggests PHP eval
What you are describing sounds like what package managers do: they take files from a repository, copy them to the target machine, and run scripts within them. This is usually done to install software, but it doesn't have to be, so long as you are OK with their idempotency. They typically use a remote filesystem full of compressed archives instead of a database as the repository, but that's really just an implementation detail.
What it sounds like you want beyond this is safety and portability. As you mention, safety is best achieved with containers or virtual machines. You are correct to note that there is nothing preventing, say, a Debian package's postinst script from running rm -rf / as root. You are also correct that Docker is a great example of containers, but it requires all applications it manages to be 'dockerized'. Portability is a bit trickier: some software is inherently portable (like basic web services), while other software is not (like performance monitors), and some platforms support containers better than others.
It sounds like you want something that creates a generic container and then installs and runs arbitrary code within it. To achieve this, you could create the container yourself by setting up chroots, cgroups, etc., but it might be easier to let Docker do most of the work for you. In your database (or whatever repository you choose), store the script, a command to invoke it, a base Docker image, and whatever interpreter dependencies you need; for convenience, all of this can live under a simple identifier. You then write a script that pulls down the needed files and data, builds a Docker container on the fly, and runs it. Docker provides primitives for copying files from host to guest, and for specifying the command to invoke when the container starts up.
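As a rough sketch of what that driver script could look like (the schema fields, function names, and image tags are hypothetical, and `build_and_run` assumes the docker CLI is installed on the host):

```python
import subprocess
import tempfile
from pathlib import Path

def make_dockerfile(base_image, script_name, command):
    """Render a minimal Dockerfile: copy the stored script in, set the command."""
    return "\n".join([
        f"FROM {base_image}",
        f"COPY {script_name} /app/{script_name}",
        "WORKDIR /app",
        f"CMD {command}",
    ])

def build_and_run(webhook_id, base_image, script_name, script_body, command):
    """Build a throwaway image for one webhook and run it in a fresh container."""
    with tempfile.TemporaryDirectory() as ctx:
        # Materialize the script and Dockerfile pulled from the database.
        Path(ctx, script_name).write_text(script_body)
        Path(ctx, "Dockerfile").write_text(
            make_dockerfile(base_image, script_name, command))
        tag = f"webhook-{webhook_id}"
        subprocess.run(["docker", "build", "-t", tag, ctx], check=True)
        subprocess.run(["docker", "run", "--rm", tag], check=True)
```

For example, a Python webhook row might store `base_image="python:3-slim"`, `script_name="hook.py"`, and `command="python3 hook.py"`; the host process never executes the untrusted code directly, only the docker CLI does, inside the container.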
Admittedly, since Windows doesn’t yet support containers compatible with Docker, you might need to do something different there. I’ve been away from Windows for some time now, but I recall it has a lot of support for access control lists and other security features with which you (or someone) could create something like a container. Sorry I can’t give you more here. When Windows eventually supports proper containers, your repository might need to have custom base Docker images for Windows and Linux, for example, which could be selected automatically at runtime by your master script.