I have an ASP.NET Core 8 Web API where I'm uploading files as a stream via a POST request. This means I'm reading the Request.Body (a stream) myself.
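For context, the endpoint is essentially just a raw-body POST. A minimal sketch of what it looks like (UploadController and IStreamDistributor are placeholder names I made up, not the real code):

```csharp
using Microsoft.AspNetCore.Mvc;

// Hypothetical fan-out abstraction standing in for the real forwarding logic.
public interface IStreamDistributor
{
    Task DistributeAsync(Stream input, CancellationToken cancellationToken);
}

[ApiController]
[Route("api/upload")]
public class UploadController : ControllerBase
{
    private readonly IStreamDistributor _distributor;

    public UploadController(IStreamDistributor distributor) => _distributor = distributor;

    [HttpPost]
    public async Task<IActionResult> Upload(CancellationToken cancellationToken)
    {
        // The request body is consumed as a raw stream; no model binding / IFormFile.
        await _distributor.DistributeAsync(Request.Body, cancellationToken);
        return Ok();
    }
}
```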
As I'm sending it over to 3 other services, the approach is the following (a condensed sketch follows the list):

- One thread reads the input stream; it always reads chunks of data (e.g. 80k bytes) and places each chunk into 3 queues.
- There are 3 instances of a self-implemented stream that serve their Read(...) calls from these queues.
- The requests from the first service to the other services are made normally via HttpClient, passing our self-written stream (which then pulls the already-read chunks from its queue) as the request content.
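A condensed sketch of that fan-out, assuming an injected HttpClient and the 3 target URIs; ChunkQueueStream, Enqueue and Complete are made-up names for the self-implemented stream (it is sketched after the next paragraph):

```csharp
using System.Net.Http.Headers;

public class StreamDistributor : IStreamDistributor
{
    private readonly HttpClient _httpClient;
    private readonly IReadOnlyList<Uri> _downstreamUris; // the 3 target services (assumed)

    public StreamDistributor(HttpClient httpClient, IReadOnlyList<Uri> downstreamUris)
    {
        _httpClient = httpClient;
        _downstreamUris = downstreamUris;
    }

    public async Task DistributeAsync(Stream input, CancellationToken ct)
    {
        // One queue-backed stream per downstream service.
        var streams = _downstreamUris.Select(_ => new ChunkQueueStream()).ToArray();

        // Start the 3 downstream requests; each gets one of the custom streams as its
        // request body, so HttpClient pulls the data through that stream's Read(...).
        var sendTasks = new List<Task<HttpResponseMessage>>();
        for (int i = 0; i < streams.Length; i++)
        {
            var content = new StreamContent(streams[i]);
            content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
            sendTasks.Add(_httpClient.PostAsync(_downstreamUris[i], content, ct));
        }

        // The single reader: pull ~80k chunks from Request.Body and hand each chunk
        // to all 3 queues (each stream gets its own copy of the chunk).
        var buffer = new byte[80 * 1024];
        int read;
        while ((read = await input.ReadAsync(buffer.AsMemory(), ct)) > 0)
        {
            foreach (var s in streams)
                s.Enqueue(buffer.AsSpan(0, read).ToArray());
        }
        foreach (var s in streams)
            s.Complete(); // end of input: lets a blocked Read(...) return 0

        await Task.WhenAll(sendTasks);
    }
}
```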
If our self-written stream is inside Read(...) and no queued chunk is available, it waits with Monitor.Wait(...) until the reader thread delivers the next chunk from the input stream. So it can take a while to answer the Read(...) call, but that shouldn't be a problem, should it?
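For reference, the queue-backed stream's blocking Read(...) with Monitor.Wait(...)/Monitor.Pulse(...) roughly behaves like this (again just a sketch of the described behaviour, not the actual class):

```csharp
public class ChunkQueueStream : Stream
{
    private readonly Queue<byte[]> _chunks = new();
    private readonly object _sync = new();
    private byte[]? _current;      // chunk currently being consumed
    private int _currentOffset;
    private bool _completed;       // set once the reader thread has exhausted the input

    // Called by the reader thread for every ~80k chunk.
    public void Enqueue(byte[] chunk)
    {
        lock (_sync)
        {
            _chunks.Enqueue(chunk);
            Monitor.Pulse(_sync);  // wake a Read(...) that is waiting for data
        }
    }

    // Called by the reader thread when the input stream is exhausted.
    public void Complete()
    {
        lock (_sync)
        {
            _completed = true;
            Monitor.PulseAll(_sync);
        }
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        lock (_sync)
        {
            // Block until a chunk is available or the producer signals completion.
            while (_current == null && _chunks.Count == 0 && !_completed)
                Monitor.Wait(_sync);

            if (_current == null)
            {
                if (_chunks.Count == 0)
                    return 0;      // end of stream
                _current = _chunks.Dequeue();
                _currentOffset = 0;
            }

            int toCopy = Math.Min(count, _current.Length - _currentOffset);
            Array.Copy(_current, _currentOffset, buffer, offset, toCopy);
            _currentOffset += toCopy;
            if (_currentOffset == _current.Length)
                _current = null;   // chunk fully consumed
            return toCopy;
        }
    }

    public override bool CanRead => true;
    public override bool CanSeek => false;
    public override bool CanWrite => false;
    public override long Length => throw new NotSupportedException();
    public override long Position
    {
        get => throw new NotSupportedException();
        set => throw new NotSupportedException();
    }
    public override void Flush() { }
    public override long Seek(long offset, SeekOrigin origin) => throw new NotSupportedException();
    public override void SetLength(long value) => throw new NotSupportedException();
    public override void Write(byte[] buffer, int offset, int count) => throw new NotSupportedException();
}
```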
When I do this with 3 or 5 requests in parallel, everything is fine. But under high load, e.g. 50 requests processed in parallel at the same time, I get problems like the stream no longer being readable, and similar errors:
    Microsoft.AspNetCore.Server.Kestrel.Core.BadHttpRequestException: Unexpected end of request content.
       at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.Http1ContentLengthMessageBody.ReadAsyncInternal(CancellationToken cancellationToken)
       at System.Runtime.CompilerServices.PoolingAsyncValueTaskMethodBuilder`1.StateMachineBox`1.System.Threading.Tasks.Sources.IValueTaskSource.GetResult(Int16 token)
       at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpRequestStream.ReadAsyncInternal(Memory`1 destination, CancellationToken cancellationToken)
       at System.Runtime.CompilerServices.PoolingAsyncValueTaskMethodBuilder`1.StateMachineBox`1.System.Threading.Tasks.Sources.IValueTaskSource<TResult>.GetResult(Int16 token)
       at System.Threading.Tasks.ValueTask`1.ValueTaskSourceAsTask.<>c.<.cctor>b__4_0(Object state)
    --- End of stack trace from previous location ---
       at System.IO.Stream.CopyTo(Stream destination, Int32 bufferSize)
       at
Edit: this message shows up on the service that the data is sent over to.
The services run as Docker containers in OpenShift. The CPU load is not too high: with 50 requests in parallel it uses 2-3 CPUs, and the limit is 16.
I'm wondering what the issue could be here. Each request is handled on its own thread, and internally the requests shouldn't influence each other. Sure, with 50 streams being read and 150 streams being sent in parallel the load is high, but I expected that to just make things slower, not fail entirely.
My expectation is that each request has these 4 threads in use: 1 reading the input stream and 3 sending it over to the further services.
Thanks a lot for any help!