There is a C++ MSDN TCP server and client from here:
https://learn.microsoft.com/en-us/windows/win32/winsock/finished-server-and-client-code
They do a ping-pong: the server sends a small TCP packet (10 bytes) to the client, the client waits for a given wait time and replies with a TCP packet (10 bytes) to the server, the server immediately replies to the client, and so on.
Server and client run on localhost on a Windows Server 2019 machine with an Intel Xeon 2288G (fast hardware, tuned for high performance, C-states off).
When the wait time grows from 0 up to a few seconds, the send() time on the server side also grows, from 5 microseconds to 13 microseconds.
What should be done to keep the send() time at a minimum for any wait time?
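For reference, one socket-level knob often mentioned for small ping-pong packets is TCP_NODELAY (disabling Nagle's algorithm), which the basic sample leaves at its default. A minimal sketch of enabling it on the server's accepted socket, assuming the ClientSocket handle from the sample, would be:

    #include <winsock2.h>
    #include <ws2tcpip.h>
    #include <stdio.h>
    #pragma comment(lib, "Ws2_32.lib")

    // Sketch: disable Nagle's algorithm so small packets are sent immediately
    // instead of being coalesced. ClientSocket is the SOCKET returned by
    // accept() in the MSDN sample.
    BOOL noDelay = TRUE;
    int rc = setsockopt(ClientSocket, IPPROTO_TCP, TCP_NODELAY,
                        (const char*)&noDelay, sizeof(noDelay));
    if (rc == SOCKET_ERROR)
        printf("setsockopt(TCP_NODELAY) failed: %d\n", WSAGetLastError());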
Relevant code:
Server:
    do {
        // wait for the client's reply (10 bytes)
        iResult = recv(ClientSocket, recvbuf, recvbuflen, 0);

        QueryPerformanceCounter(&counter);      // timestamp just before send()
        t0 = counter.QuadPart / (double)freq.QuadPart * 1000000;

        iSendResult = send(ClientSocket, recvbuf, iResult, 0);

        QueryPerformanceCounter(&counter);      // timestamp just after send()
        T[i] = counter.QuadPart / (double)freq.QuadPart * 1000000 - t0;   // send() duration in microseconds
        i++;
    } while (i < N);
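The freq and counter variables are the usual QueryPerformanceCounter scaffolding; a minimal sketch of how they are assumed to be initialized (this part is not shown in the excerpt above):

    #include <windows.h>

    // Assumed timing setup: freq is read once, and each counter reading is
    // converted from ticks to microseconds before taking differences.
    LARGE_INTEGER freq, counter;
    QueryPerformanceFrequency(&freq);      // counter ticks per second
    QueryPerformanceCounter(&counter);
    double micros = counter.QuadPart / (double)freq.QuadPart * 1000000.0;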
Client:
    do {
        iResult = send(ConnectSocket, sendbuf, (int)strlen(sendbuf), 0);
        iResult = recv(ConnectSocket, recvbuf, recvbuflen, 0);
        i++;

        // Implementation of the delay ("wait time"); it is varied by changing
        // Cycles from 0 to 3*10^9.
        int j = 0;
        double a = 1.000000001;
        do
        {
            a0 = a0 * a;
            j++;
        } while (j < Cycles);
    } while (i < N);
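The delay is thus set indirectly via the iteration count rather than in time units. A hypothetical, time-based variant of the busy wait (not what the test above uses) would express the wait time directly in microseconds, using the same QueryPerformanceCounter clock as the server-side measurement:

    #include <windows.h>

    // Hypothetical alternative delay: busy-wait for a given number of
    // microseconds instead of a fixed number of multiply iterations.
    void BusyWaitMicroseconds(double waitUs)
    {
        LARGE_INTEGER freq, start, now;
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&start);
        double waitTicks = waitUs * freq.QuadPart / 1000000.0;   // microseconds -> ticks
        do {
            QueryPerformanceCounter(&now);
        } while ((double)(now.QuadPart - start.QuadPart) < waitTicks);
    }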
[Plot: send() time on the server vs. client wait time]