Discussion:
How to Improve Windows UDP Reception
Trouble@Mill
2005-11-30 20:05:47 UTC
I'm writing a client application to listen for UDP packets coming from
a server, and am seeing a big difference in the way these are handled
between Windows and Linux. The server, which I can't modify, sends
out bursts of 16 UDP packets, each with 1400 bytes of data, with an
insignificant (if any) delay between them.

At the moment, all my application does is print the data, or part of
it, to fully analyze the pattern. In Linux, I can print the complete
packet and the time taken for this is around 2.5 seconds, for all 16
packets. Every time I run this, I always get all 16 packets.

However, running the SAME program in Windows, but only printing the
1st 32 bytes of data (to spend less time on each packet), the
application stops printing after receiving only the 1st 6 or 7
packets, in less than 1 second. I have verified, using a sniffer,
that all 16 packets were sent to the Windows box.

So, to me, it looks like in the Linux implementation there is some
kind of "buffering", or "stacking", of the UDP packets going on, which
is able to cope with all of the data sent. This does not appear to be
the case in Windows.

Are there any "tweaks" I can make to Windows to force it to hold more
of the UDP packets while the previous ones are still being processed?

Cheers,
Eddie
Walter Roberson
2005-11-30 20:16:08 UTC
Post by ***@Mill
Are there any "tweaks" I can make to Windows to force it to hold more
of the UDP packets while the previous ones are still being processed?
I would suggest you try the "tweak test" of dslreports, and consider
adjusting your settings via DrTCP.

http://www.dslreports.com/tweaks
http://www.dslreports.com/faq/tweaks/1.%20DRTCP
--
"No one has the right to destroy another person's belief by
demanding empirical evidence." -- Ann Landers
Trouble@Mill
2005-11-30 21:25:24 UTC
Post by Walter Roberson
Post by ***@Mill
Are there any "tweaks" I can make to Windows to force it to hold more
of the UDP packets while the previous ones are still being processed?
I would suggest you try the "tweak test" of dslreports, and consider
adjusting your settings via DrTCP.
http://www.dslreports.com/tweaks
http://www.dslreports.com/faq/tweaks/1.%20DRTCP
I've never been sure how much use "Tweak Test" is for me, or exactly
what it tests, because my Windows machine sits behind a Linux box,
which connects to the "Big Bad World (tm)". So I'd guess it's actually
testing the Windows settings against data transfer that comes through
the Linux box.

Now, if there were an equivalent that tested my Linux box, to ensure it
was set up correctly for external connections, and another version that
checked my Windows box against its connection to the Linux box, then I
guess I'd know whether everything was correct.

Aside from that, I thought the settings that got "tweaked" here were
all for TCP connections, not UDP.

Also, how many of those settings have any effect if the traffic doesn't
actually leave the box? I get the same results if I run the server and
my "receiver" application on the same Windows box.

Cheers,
Eddie
r***@yahoo.com
2005-11-30 22:49:22 UTC
Post by ***@Mill
I'm writing a client application to listen for UDP packets coming from
a server, and am seeing a big difference in the way these are handled
between Windows and Linux. The server, which I can't modify, sends
out bursts of 16 UDP packets, each with 1400 bytes of data, with an
insignificant (if any) delay between them.
At the moment, all my application does is print the data, or part of
it, to fully analyze the pattern. In Linux, I can print the complete
packet and the time taken for this is around 2.5 seconds, for all 16
packets. Every time I run this, I always get all 16 packets.
However, running the SAME program in Windows, but only printing the
1st 32 bytes of data (to spend less time on each packet), the
application stops printing after receiving only the 1st 6 or 7
packets, in less than 1 second. I have verified, using a sniffer,
that all 16 packets were sent to the Windows box.
So, to me, it looks like in the Linux implementation there is some
kind of "buffering", or "stacking", of the UDP packets going on, which
is able to cope with all of the data sent. This does not appear to be
the case in Windows.
Are there any "tweaks" I can make to Windows to force it to hold more
of the UDP packets while the previous ones are still being processed?
IIRC, the default buffer allocated by Windows to a UDP socket is 8K,
which would correspond to five 1400-byte packets plus a bit, and that
closely matches the behavior you're seeing. Try increasing it with
setsockopt()/SO_RCVBUF.
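
Something along these lines should do it on Windows (a minimal sketch,
not your actual program - the port number and the 64K figure are just
illustrative, and it needs to be linked against ws2_32.lib):

  #include <winsock2.h>
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      WSADATA wsa;
      SOCKET s;
      struct sockaddr_in addr;
      int rcvbuf = 64 * 1024;   /* enough for ~46 datagrams of 1400 bytes */
      char buf[1500];
      int n;

      if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
          return 1;
      s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
      if (s == INVALID_SOCKET)
          return 1;

      /* Enlarge the receive buffer before the burst can arrive. */
      if (setsockopt(s, SOL_SOCKET, SO_RCVBUF,
                     (const char *)&rcvbuf, sizeof(rcvbuf)) == SOCKET_ERROR)
          fprintf(stderr, "setsockopt(SO_RCVBUF) failed: %d\n",
                  WSAGetLastError());

      memset(&addr, 0, sizeof(addr));
      addr.sin_family = AF_INET;
      addr.sin_addr.s_addr = htonl(INADDR_ANY);
      addr.sin_port = htons(5000);          /* placeholder port */
      if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) == SOCKET_ERROR)
          return 1;

      /* Read datagrams and report how much arrived in each one. */
      for (;;) {
          n = recvfrom(s, buf, sizeof(buf), 0, NULL, NULL);
          if (n == SOCKET_ERROR)
              break;
          printf("got %d bytes\n", n);
      }
      closesocket(s);
      WSACleanup();
      return 0;
  }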
Trouble@Mill
2005-12-01 00:50:12 UTC
Post by r***@yahoo.com
Post by ***@Mill
I'm writing a client application to listen for UDP packets coming from
a server, and am seeing a big difference in the way these are handled
between Windows and Linux. The server, which I can't modify, sends
out bursts of 16 UDP packets, each with 1400 bytes of data, with an
insignificant (if any) delay between them.
At the moment, all my application does is print the data, or part of
it, to fully analyze the pattern. In Linux, I can print the complete
packet and the time taken for this is around 2.5 seconds, for all 16
packets. Every time I run this, I always get all 16 packets.
However, running the SAME program in Windows, but only printing the
1st 32 bytes of data (to spend less time on each packet), the
application stops printing after receiving only the 1st 6 or 7
packets, in less than 1 second. I have verified, using a sniffer,
that all 16 packets were sent to the Windows box.
So, to me, it looks like in the Linux implementation there is some
kind of "buffering", or "stacking", of the UDP packets going on, which
is able to cope with all of the data sent. This does not appear to be
the case in Windows.
Are there any "tweaks" I can make to Windows to force it to hold more
of the UDP packets while the previous ones are still being processed?
IIRC, the default buffer allocated by Windows to a UDP socket is 8K,
which would correspond to five 1400-byte packets plus a bit, and that
closely matches the behavior you're seeing. Try increasing it with
setsockopt()/SO_RCVBUF.
There ya go. That was exactly what I was looking for. Give that man
a coconut. <G>

Linux defaults to 110592, Windows 8192. Increasing it solved all my
problems.
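
For anyone who wants to check their own default, a quick sketch like
this prints it (plain BSD-socket version, as I used on the Linux box;
a Windows build would also need winsock2.h and a WSAStartup() call).
One caveat, as I understand it: Linux doubles whatever value you later
set with setsockopt() to cover its own bookkeeping overhead, so the
number you read back may not be the one you asked for.

  #include <stdio.h>
  #include <sys/socket.h>
  #include <netinet/in.h>

  int main(void)
  {
      int s = socket(AF_INET, SOCK_DGRAM, 0);
      int rcvbuf = 0;
      socklen_t len = sizeof(rcvbuf);

      if (s < 0)
          return 1;
      /* Ask the stack how big the default per-socket receive buffer is. */
      if (getsockopt(s, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len) == 0)
          printf("default SO_RCVBUF: %d bytes\n", rcvbuf);
      return 0;
  }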

Thanks again.

Eddie
Frank Swarbrick
2005-12-01 19:45:39 UTC
Post by ***@Mill
Post by r***@yahoo.com
Post by ***@Mill
I'm writing a client application to listen for UDP packets coming from
a server, and am seeing a big difference in the way these are handled
between Windows and Linux. The server, which I can't modify, sends
out bursts of 16 UDP packets, each with 1400 bytes of data, with an
insignificant (if any) delay between them.
At the moment, all my application does is print the data, or part of
it, to fully analyze the pattern. In Linux, I can print the complete
packet and the time taken for this is around 2.5 seconds, for all 16
packets. Every time I run this, I always get all 16 packets.
However, running the SAME program in Windows, but only printing the
1st 32 bytes of data (to spend less time on each packet), the
application stops printing after receiving only the 1st 6 or 7
packets, in less than 1 second. I have verified, using a sniffer,
that all 16 packets were sent to the Windows box.
So, to me, it looks like in the Linux implementation there is some
kind of "buffering", or "stacking", of the UDP packets going on, which
is able to cope with all of the data sent. This does not appear to be
the case in Windows.
Are there any "tweaks" I can make to Windows to force it to hold more
of the UDP packets while the previous ones are still being processed?
IIRC, the default buffer allocated by Windows to a UDP socket is 8K,
which would correspond to five 1400-byte packets plus a bit, and that
closely matches the behavior you're seeing. Try increasing it with
setsockopt()/SO_RCVBUF.
There ya go. That was exactly what I was looking for. Give that man
a coconut. <G>
Linux defaults to 110592, Windows 8192. Increasing it solved all my
problems.
Thanks again.
I've only been half paying attention to this thread, but I just ran into
something with my first use of UDP sockets that I think this may address.
I just want to get some clarification...

Are you stating that the TCP/IP stack will discard UDP datagrams if its
receive buffer is full (the 8192 on Windows)? So with a larger buffer it
will be less likely to discard datagrams when a short burst of many
datagrams is received by the stack faster than the application can
process them? I am definitely experiencing this issue (not on Windows,
but on VSE/ESA), and if increasing the receive buffer would "fix" this
it would make me very happy! (Of course I will probably have tested
this out by the time I read an answer, but I'm posting this just so I
can get as much information as possible.)

Thanks,
Frank

---
Frank Swarbrick
Senior Developer/Analyst - Mainframe Applications
FirstBank Data Corporation - Lakewood, CO USA
r***@yahoo.com
2005-12-01 20:02:56 UTC
Post by Frank Swarbrick
I've only been half paying attention to this thread, but I just ran into
something with my first use of UDP sockets that I think this may address.
I just want to get some clarification...
Are you stating that the TCP/IP stack will discard UDP datagrams if its
receive buffer is full (the 8192 on Windows)? So with a larger buffer it
will be less likely to discard datagrams when a short burst of many
datagrams is received by the stack faster than the application can
process them? I am definitely experiencing this issue (not on Windows,
but on VSE/ESA), and if increasing the receive buffer would "fix" this
it would make me very happy! (Of course I will probably have tested
this out by the time I read an answer, but I'm posting this just so I
can get as much information as possible.)
That's correct, although the exact mechanism is implementation
dependent - there would, for example, be no reason an implementation
could not have one global buffer for packets and no per-socket buffer
(although there are good reasons to avoid that scheme).
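
If you want to convince yourself of the discard behaviour before
touching the VSE side, a loopback experiment along these lines (POSIX
sockets, so Linux or similar, not VSE; the port and buffer sizes are
arbitrary) shows the per-socket buffer doing the limiting: shrink the
receiver's SO_RCVBUF, send a burst without reading, then drain the
queue and count what survived. The exact count is implementation
dependent.

  #include <stdio.h>
  #include <string.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <sys/socket.h>

  int main(void)
  {
      int rx = socket(AF_INET, SOCK_DGRAM, 0);
      int tx = socket(AF_INET, SOCK_DGRAM, 0);
      int small = 4096;                  /* deliberately tiny receive buffer */
      struct sockaddr_in addr;
      char pkt[1400] = {0};
      int i, got = 0;

      setsockopt(rx, SOL_SOCKET, SO_RCVBUF, &small, sizeof(small));

      memset(&addr, 0, sizeof(addr));
      addr.sin_family = AF_INET;
      addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
      addr.sin_port = htons(5000);       /* arbitrary test port */
      bind(rx, (struct sockaddr *)&addr, sizeof(addr));

      /* The whole burst arrives before anyone reads the socket. */
      for (i = 0; i < 16; i++)
          sendto(tx, pkt, sizeof(pkt), 0,
                 (struct sockaddr *)&addr, sizeof(addr));
      sleep(1);                          /* let the stack queue (or drop) them */

      /* Drain whatever was kept, without blocking once the queue empties. */
      fcntl(rx, F_SETFL, fcntl(rx, F_GETFL) | O_NONBLOCK);
      while (recv(rx, pkt, sizeof(pkt), 0) > 0)
          got++;
      printf("received %d of 16 datagrams\n", got);
      return 0;
  }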

Unfortunately, I don't think TCP/IP for VSE supports the SO_RCVBUF
option on any of the APIs. It appears that TCP/IP for VSE (assuming
you're using CSI's product) controls the TCP version of this globally
with the SET WINDOW command; you might try that and see whether it
affects the UDP side as well.
