I’m fairly sure someone has already thought about and researched this extensively, but I’m having trouble finding any materials, or even keywords to search for, for an idea I’ve been thinking about recently.
The concept is this: in a distributed system, the actual computation and processing would happen on the network itself, on packets in transit from source to destination, rather than on the nodes (server or client). Network routers, in addition to routing packets to the next node until they are delivered, would also execute instructions embedded in those packets.
So, for example, a network packet would be split into two areas: data and instructions (plus some headers). When a router routes a packet to the next router, it would send a transformed packet according to the embedded instructions:
+----------------------+
|Network Packet Headers|
+----------------------+
|     DATA SLOT 1      |
|     DATA SLOT 2      |
|      ..........      |
|     DATA SLOT N      |
+----------------------+
|   Increment slot1    |
| SLOT3 = SLOT1 + SLOT2|
+----------------------+
|   instruction ptr    |
+----------------------+
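To make the layout concrete, here is a toy in-memory model of such a packet. All the names here are illustrative, not from any real protocol; the instruction encoding is just one possible choice:

```python
from dataclasses import dataclass

@dataclass
class ActivePacket:
    """Hypothetical 'active' packet mirroring the sketch above:
    ordinary headers, N data slots, an embedded program, and an
    instruction pointer recording how far the program has run."""
    headers: dict        # ordinary routing headers (e.g. {"dst": ...})
    slots: list          # DATA SLOT 1..N
    instructions: list   # e.g. ("inc", 0) or ("add", 0, 1, 2)
    ip: int = 0          # instruction pointer, advanced per hop

# The example program from the diagram: increment slot 1, then
# slot3 = slot1 + slot2 (using 0-based slot indices here).
pkt = ActivePacket(
    headers={"dst": "10.0.0.1"},
    slots=[1, 2, 0],
    instructions=[("inc", 0), ("add", 0, 1, 2)],
)
```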
So by the time those packets reached their destination, a portion of the work (depending on how many hops they had to go through) would already be computed (say, audio decoded, etc.), offloading both endpoints.
Routers are technically capable of doing this. They already process data based on a packet’s values, like choosing where to send the packet next based on the IP header. There is nothing stopping them from being utilized as “the real cloud”, right? Each router would execute one or a few instructions from the packet (and update the instruction pointer), depending on its hardware resources and capabilities. So a network packet would be a stack frame rather than just a network packet.
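The per-hop behavior described above can be simulated in a few lines. This is purely a sketch of the idea, not any real router feature: each simulated router executes exactly one instruction from the packet’s program and advances the instruction pointer, so work accumulates hop by hop:

```python
def step(slots, program, ip):
    """One router hop: execute at most one instruction, advance ip."""
    if ip >= len(program):          # program already finished: forward as-is
        return slots, ip
    op, *args = program[ip]
    if op == "inc":                 # increment one slot
        slots[args[0]] += 1
    elif op == "add":               # slots[dst] = slots[a] + slots[b]
        a, b, dst = args
        slots[dst] = slots[a] + slots[b]
    return slots, ip + 1

def route(slots, program, hops):
    """Send the packet across `hops` routers, one instruction per hop."""
    ip = 0
    for _ in range(hops):
        slots, ip = step(slots, program, ip)
    return slots

# The example program from the packet sketch (0-based slot indices):
program = [("inc", 0), ("add", 0, 1, 2)]
print(route([1, 2, 0], program, hops=2))   # -> [2, 2, 4]
```

After two hops the destination receives the fully computed result; with fewer hops it would receive a partially executed packet and could finish the remaining instructions itself.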
Of course, there is a whole class of problems associated with this model, like properly partitioning the data (or even whether that is possible at all), but it all builds on old concepts from heterogeneous computing and functional programming.
Could someone please point me to any research, materials or projects that deal with similar concepts?
Thanks!
Routers are very specialized and optimized to do exactly what they do very well. Why would you want to burden them with extraneous tasks, and why would the providers of network infrastructure be interested in taking on that burden?
I can’t think of any general computational task that this kind of methodology would be useful for, but perhaps some kind of network-specific task could be considered: for example, some kind of “onion” routing to help conceal the origin of network traffic, or conversely, adding some tracking information about conditions along the route.