m***@gmail.com
2014-10-15 14:08:15 UTC
Hi.
I have a native Visual C++ application which acts as a socket server: it accepts incoming TCP connections, and it reads/writes some data from/to those sockets.
Once a new TCP connection is accepted by the server, a new thread is spawned, and all reads (i.e., calls to recv()) on that connected socket happen in that separate thread. All communication follows a well-defined standard application-level protocol, implemented by clients from several different developers/vendors. Hence, my server application can communicate with clients from different developers/vendors, as long as they comply with this application-level protocol.
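For reference, here is a minimal sketch of the structure I just described (Winsock, one dedicated receiver thread per connection). This is NOT my actual code; names like HandleClient and the port number are placeholders:

#include <winsock2.h>
#include <ws2tcpip.h>
#include <process.h>
#include <cstdint>
#pragma comment(lib, "ws2_32.lib")

// Placeholder per-connection receiver thread: loops on recv() until the
// peer closes the connection or an error occurs.
static unsigned __stdcall HandleClient(void* arg)
{
    SOCKET client = (SOCKET)(uintptr_t)arg;
    char buf[4096];
    for (;;) {
        // This is the call that hangs for ~5 seconds with the problematic clients.
        int n = recv(client, buf, (int)sizeof(buf), 0);
        if (n <= 0)
            break;  // connection closed (0) or error (SOCKET_ERROR)
        // ... parse the application-level protocol, send() replies ...
    }
    closesocket(client);
    return 0;
}

int main()
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
        return 1;

    SOCKET listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);  // placeholder port
    bind(listener, (sockaddr*)&addr, sizeof(addr));
    listen(listener, SOMAXCONN);

    for (;;) {
        SOCKET client = accept(listener, NULL, NULL);
        if (client == INVALID_SOCKET)
            break;
        // Spawn one receiver thread per accepted connection.
        HANDLE h = (HANDLE)_beginthreadex(NULL, 0, HandleClient,
                                          (void*)(uintptr_t)client, 0, NULL);
        if (h)
            CloseHandle(h);  // thread keeps running; the handle is not needed
    }
    closesocket(listener);
    WSACleanup();
    return 0;
}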
Now, my server application works fine with 99.9% of the client applications. Nevertheless, I am experiencing strange delays with just a few clients.
In particular, with these "problematic" clients, once the client connection has been accepted by my server application, the recv() calls in my dedicated "receiver" thread hang for exactly 5 seconds before returning, even though data should already be available, since the client has already completed several send() calls.
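To illustrate how such a delay can be observed, here is a sketch of a timing wrapper around recv() (again, this is for illustration only; TimedRecv is a hypothetical name, not something from my production code):

#include <winsock2.h>
#include <chrono>
#include <cstdio>
#pragma comment(lib, "ws2_32.lib")

// Hypothetical wrapper, for illustration only: times a single recv() call.
int TimedRecv(SOCKET s, char* buf, int len)
{
    auto t0 = std::chrono::steady_clock::now();
    int n = recv(s, buf, len, 0);
    auto t1 = std::chrono::steady_clock::now();
    long long ms = std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count();
    // With the problematic clients this consistently reports ~5000 ms,
    // even when the client has already completed its send() calls.
    printf("recv() returned %d after %lld ms\n", n, ms);
    return n;
}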
The clients connecting to my server application run on hosts on the same LAN as the host running my server application.
Please note the following facts:
1 - With these specific "problematic" clients, **ALL** recv() calls in my server application hang for 5 seconds before returning.
2 - My server application works fine (i.e., recv() shows no noticeable delays) with other clients (actually, the vast majority of them).
3 - On the other hand, those "problematic" clients seem to work fine (i.e., no particular delays) with other third-party server applications implementing the same application-level protocol as my own server application. Some of those third-party servers are open source: I have inspected their source code, but I could not find any relevant difference in their implementation that could explain the 5-second delay I am experiencing in my server application.
I realize that I have described the issue in quite generic terms, but maybe this exact 5-second delay will mean something to a TCP/IP expert...
I would really appreciate any hint or suggestion to troubleshoot this problem.
Thanks in advance and best regards,
Marco