Trouble@Mill
2005-11-30 20:05:47 UTC
I'm writing a client application to listen for UDP packets coming from
a server, and am seeing a big difference in the way these are handled
between Windows and Linux. The server, which I can't modify, sends
out bursts of 16 UDP packets, each with 1400 bytes of data, with an
insignificant (if any) delay between them.
At the moment, all my application does is print the data, or part of
it, to fully analyze the pattern. In Linux, I can print the complete
packet; the time taken for all 16 packets is around 2.5 seconds.
Every time I run this, I get all 16 packets.
However, running the SAME program in Windows, but only printing the
1st 32 bytes of data (to spend less time on each packet), the
application stops printing after receiving only the 1st 6 or 7
packets, in less than 1 second. I have verified, using a sniffer,
that all 16 packets were sent to the Windows box.
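In case it helps, the receive side is essentially just a bind-and-recvfrom
loop, roughly like the Winsock-flavoured sketch below (simplified, error
handling stripped, and the port number is made up here, the real one comes
from the server):

/* Rough outline of the client: bind a UDP socket and print what arrives.
 * Link against ws2_32.lib.  PORT is a placeholder. */
#include <winsock2.h>
#include <stdio.h>
#include <string.h>

#define PORT     5000   /* placeholder port */
#define PKT_SIZE 1400   /* each packet carries 1400 bytes of data */

int main(void)
{
    WSADATA wsa;
    char buf[PKT_SIZE];
    struct sockaddr_in addr, from;
    SOCKET s;

    WSAStartup(MAKEWORD(2, 2), &wsa);
    s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(PORT);
    bind(s, (struct sockaddr *)&addr, sizeof(addr));

    for (;;) {
        int fromlen = sizeof(from);
        int n = recvfrom(s, buf, sizeof(buf), 0,
                         (struct sockaddr *)&from, &fromlen);
        if (n <= 0)
            break;
        /* print only the first 32 bytes to keep per-packet work small */
        printf("%.*s\n", n < 32 ? n : 32, buf);
    }

    closesocket(s);
    WSACleanup();
    return 0;
}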
So, to me, it looks like in the Linux implementation there is some kind
of "buffering", or "stacking", of the UDP packets going on, which is
able to cope with all of the data sent. This does not appear to be the
case in Windows.
Are there any "tweaks" I can make to Windows to force it to hold more
of the UDP packets while the previous ones are still being processed?
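For example, would asking for a bigger per-socket receive buffer with
setsockopt(SO_RCVBUF) be the sort of tweak that helps? Something along
the lines of this untested sketch (the 256 KB figure is just an
arbitrary guess on my part, not a recommended value):

/* Untested sketch: bump the per-socket receive buffer so the stack can
 * queue more datagrams while the application is busy printing. */
#include <winsock2.h>
#include <stdio.h>

static void enlarge_rcvbuf(SOCKET s, int bytes)
{
    int actual = 0;
    int len = sizeof(actual);

    if (setsockopt(s, SOL_SOCKET, SO_RCVBUF,
                   (const char *)&bytes, sizeof(bytes)) == SOCKET_ERROR)
        fprintf(stderr, "setsockopt(SO_RCVBUF): %d\n", WSAGetLastError());

    /* Read the value back to see what the stack actually granted. */
    if (getsockopt(s, SOL_SOCKET, SO_RCVBUF, (char *)&actual, &len) == 0)
        printf("SO_RCVBUF is now %d bytes\n", actual);
}

/* Would be called right after socket() and before bind(), e.g.
 *     enlarge_rcvbuf(s, 256 * 1024);
 */

Or is there some system-wide setting I should be looking at instead?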
Cheers,
Eddie