I have a setup where app1 and app2 are running in a Kubernetes cluster in the same namespace, each with 2 pods behind a service. Both app1 and app2 have the Linkerd proxy injected, so it runs as a sidecar container in every pod.
Now one of the app1 pods calls app2 on app2.namespace.svc.cluster.local (the service endpoint).
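For context, the deployments look roughly like this (a simplified sketch: names, image, and ports are placeholders, and app1 is set up the same way; the relevant parts are the linkerd.io/inject annotation that adds the sidecar and the Service behind app2.namespace.svc.cluster.local):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app2
  namespace: namespace
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app2
  template:
    metadata:
      labels:
        app: app2
      annotations:
        linkerd.io/inject: enabled   # injects the linkerd-proxy sidecar into each pod
    spec:
      containers:
        - name: app2
          image: app2:latest          # placeholder image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: app2
  namespace: namespace
spec:
  selector:
    app: app2
  ports:
    - port: 80
      targetPort: 8080
```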
App1 opens 20 HTTP/1.1 connections to the service endpoint. I observed a drop in throughput, and also observed the following:
a) Though app1 opens 20 connections, only a single connection arrives at app2's Linkerd proxy and main container.
b) It looks like app1's Linkerd proxy is upgrading the traffic to HTTP/2 and multiplexing it over 1 connection. https://linkerd.io/what-is-a-service-mesh/#what-does-a-service-mesh-actually-do (point 3)
c) The app2 main container receives all requests through one connection, and the protocol it sees is HTTP/1.1. So it looks like app2's Linkerd proxy is downgrading the traffic back to HTTP/1.1.
My explanation for this is as follows:
There is a single connection between app2's Linkerd proxy and the main container, and it is downgraded to HTTP/1.1. So even if app2's Linkerd proxy receives 100 multiplexed requests over HTTP/2, it can only pass 1 request at a time over that single HTTP/1.1 connection to the app2 main container, because HTTP/1.1 doesn't support multiplexing.
So basically, per app2 pod, the main container serves only one request at a time, which should be what drags throughput down.
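To illustrate the effect I mean, here is a standalone sketch (not my actual apps; the 50 ms handler delay and the connection limits are made-up numbers). An HTTP/1.1 client capped at one connection has to serialize its requests, while 20 connections let the same 20 requests run in parallel:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"sync"
	"time"
)

// run fires n concurrent GETs through a client that is limited to
// maxConns TCP connections to the (single) target host.
func run(url string, n, maxConns int) time.Duration {
	client := &http.Client{
		Transport: &http.Transport{
			MaxConnsPerHost:     maxConns, // hard cap on connections to the host
			MaxIdleConnsPerHost: maxConns, // let all of them be reused
		},
	}
	start := time.Now()
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			resp, err := client.Get(url)
			if err != nil {
				panic(err)
			}
			io.Copy(io.Discard, resp.Body)
			resp.Body.Close()
		}()
	}
	wg.Wait()
	return time.Since(start)
}

func main() {
	// Stand-in for app2: every request takes ~50 ms of "work".
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		time.Sleep(50 * time.Millisecond)
		fmt.Fprintln(w, "ok")
	}))
	defer srv.Close()

	// One connection: 20 requests are served one after another (~20 * 50 ms).
	fmt.Println("1 connection: ", run(srv.URL, 20, 1))
	// Twenty connections: the same 20 requests run in parallel (~50 ms).
	fmt.Println("20 connections:", run(srv.URL, 20, 20))
}
```

With one connection the 20 requests take roughly 20 × 50 ms; with 20 connections they finish in roughly 50 ms, which is the kind of gap I think I'm seeing between the meshed and unmeshed cases.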
Can someone please confirm whether the above explanation is right, or correct it if not?
Then I disabled Linkerd on app1, and throughput is good again. The reason could be: I now see 20 connections received by the app2 main container (10 on each pod), so it is a simple 1:1 mapping with no upgrade/downgrade. App1 can therefore have 20 requests in flight over 20 connections at a time, and hence throughput is good.
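(Disabling the proxy for a single workload just means flipping the standard injection annotation on app1's pod template; a sketch of the relevant part, with a placeholder name:)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1
  namespace: namespace
spec:
  template:
    metadata:
      annotations:
        linkerd.io/inject: disabled   # skip sidecar injection for app1's pods
```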
Has anyone faced the same problem?