I want to write a server-side application that allows several users to exchange files (not above 3 MB) in the following way: user A connects to server S. User B connects to S. User C connects to S. User A sends a file. Users B and C “see” that a file was/is being uploaded and start downloading it.
My main concern is responsiveness. I want the file to arrive at B and C not long after A started uploading.
I thought of doing it using HTTP: user A sends raw bytes to the server and the server saves the file locally. At this point users B and C see that a file was uploaded and start downloading it.
My question: is HTTP the best way to go here? Or should I write an app that uses TCP sockets and define my own protocol? If I go with sockets, how can I estimate the RAM requirements of the VPS so that I can allow N simultaneous file-transfer connections?
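For the RAM question, a back-of-envelope estimate usually suffices: each open connection costs roughly the kernel's socket buffers plus whatever the application buffers per connection. The figures below are assumptions chosen to illustrate the arithmetic, not measurements; you'd profile your actual server to get real numbers:

```python
# Rough per-connection RAM estimate. All sizes here are assumptions,
# not measurements -- tune them after profiling the real server.
SO_BUFFERS = 2 * 64 * 1024   # assumed kernel send + receive buffers, 64 KB each
APP_BUFFER = 64 * 1024       # assumed application-level read/write buffer
PER_CONN = SO_BUFFERS + APP_BUFFER

def max_connections(ram_bytes, overhead_bytes=256 * 1024 * 1024):
    """Connections that fit after reserving RAM for the OS and runtime."""
    return (ram_bytes - overhead_bytes) // PER_CONN

print(max_connections(1 * 1024**3))  # e.g. a 1 GB VPS -> 4096
```

Note that the 3 MB files themselves dominate this math only if you buffer whole files in memory per connection; streaming them in fixed-size chunks keeps the per-connection cost flat.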
If this is what you want:
(Client A) ---> (Server) ---> (Client B)
then the simplest solution is to split the problem in two parts, have the “Client A” send the file to the server, then have “Client B” grab the file once it’s completely uploaded by “Client A”. You can easily do that with HTTP, and if the clients use the typical asymmetric cheap broadband connections available at home, it would probably be just fine: 90% of the time is used by Client A to send the file to the server, then 10% is used by Client B to grab the file.
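The two-stage version needs very little code. Here is a minimal sketch (the port, the PUT/GET verbs, and the in-memory store are illustrative assumptions; a real server would write uploads to disk):

```python
# Minimal two-stage relay sketch: Client A PUTs a file, Clients B and C
# GET it after the upload completes. In-memory storage is for illustration.
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

FILES = {}  # path -> bytes; a stand-in for saving files to disk

class RelayHandler(BaseHTTPRequestHandler):
    def do_PUT(self):
        # Read the whole upload body and store it under the request path.
        length = int(self.headers["Content-Length"])
        FILES[self.path] = self.rfile.read(length)
        self.send_response(201)
        self.send_header("Content-Length", "0")
        self.end_headers()

    def do_GET(self):
        body = FILES.get(self.path)
        if body is None:
            self.send_response(404)
            self.send_header("Content-Length", "0")
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To run:  ThreadingHTTPServer(("", 8000), RelayHandler).serve_forever()
```

Clients B and C still need to learn that a file exists; polling a listing endpoint every second or two is the simplest approach and is plenty responsive for 3 MB files.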
If the clients both have symmetric connections, the same solution would probably still be just fine, since symmetric connections are mostly seen only with the better providers. Sending 3 MB would be so fast that it would not even matter that it’s a two-stage job.
If you’re ambitious and want “Client B” to start downloading as soon as the first byte hits the server, then you should still use HTTP, but use something on the server that allows you to “block” (i.e. not a PHP script or similar). You’d still use HTTP because HTTP is almost universally available: no one blocks it, and it works through proxies. If you try implementing your own service on top of raw TCP, you’ll need to solve the exact same problems as with HTTP, but on top of that you’ll need to re-implement the “application” part of it (how does Client B tell the server which file to grab, etc.). And on top of that you’ll be dealing with all the proxies and blocked TCP ports yourself!
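One way to get that “blocking” behavior is to keep the partially uploaded file in a structure that lets a download handler wait for more bytes to arrive. A rough sketch of the idea, independent of any particular web framework (all names here are invented for illustration):

```python
# Sketch of "stream while uploading": the upload handler appends chunks to a
# shared buffer and wakes any download handlers blocked waiting for more data,
# so Client B can start reading before Client A has finished sending.
import threading

class StreamingFile:
    def __init__(self):
        self.chunks = bytearray()
        self.done = False
        self.cond = threading.Condition()

    def write(self, data):
        # Called by the upload handler (Client A) for each received chunk.
        with self.cond:
            self.chunks.extend(data)
            self.cond.notify_all()

    def close(self):
        # Called when the upload has finished.
        with self.cond:
            self.done = True
            self.cond.notify_all()

    def read_from(self, offset):
        """Block until bytes past `offset` exist; returns b'' only when the
        upload is done and everything has been consumed."""
        with self.cond:
            while len(self.chunks) <= offset and not self.done:
                self.cond.wait()
            return bytes(self.chunks[offset:])
```

A download handler would then loop on `read_from`, sending each chunk to its client (HTTP chunked transfer encoding fits this well, since the final length isn’t known up front).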
Another option for the ambitious scenario is to have the server implement a VPN service (or another kind of tunneled network), have both ends connect to the server, then have Client A send the file straight to Client B.
What’s wrong with plain old garden variety FTP (File Transfer Protocol), or even TFTP (Trivial File Transfer Protocol)?
Granted, they’re older than your father’s memory of his first kiss, but they still work.
Now, both FTP and TFTP want the file to be on the server before they serve it to download clients. If what you want is to stream the file from A directly, or proxied through your server, FTP and TFTP may not be quite the way to go.
I would probably do a proof of concept using HTTP and check the performance. No point in reinventing the wheel if it isn’t necessary.