How to increase the TCP MSS from the default 512?

  • Hi, first of all thank you for your ACA500+, ACA1234 and X-Surf products. I really appreciate them. :)

    Now:

    Description

    I'm using the AmiTCP stack installed from the "aca500p-installnetwork" package. First of all, I'm not sure which version of AmiTCP this is, because neither the readmes nor the help files say, but never mind that.


    Recently I wanted to try out some of the BBSes that are still around. In particular, I was trying to connect to the Particles BBS at particlesbbs.dyndns.org:6400. I can connect to this BBS just fine with telnet on a PC, but DCTelnet on the Amiga gets stuck at the "Connecting" stage and then times out :(. The network otherwise works fine on the Amiga (AWeb, ping, FTP). Thinking this might be a known issue with this BBS, I contacted its owner on Twitter here: https://twitter.com/robikz1/status/1553086718554824705 (please see the replies below this tweet).


    I decided to investigate this problem. Through some routing machinations I was able to sniff the packets coming out of the Amiga, and the only packet it sends when trying to contact Particles is the TCP SYN packet. The TCP SYN-ACK reply from the BBS never arrives.


    With some further investigation I was able to replicate this behavior on a Linux PC by forcing the TCP MSS to 512 with the following command:


    Code: bash
    $ ip route add 207.126.94.31/32 via [gateway's ip] advmss 512

    The packet sniffer confirms that this triggers exactly the same problem as I'm observing with AmiTCP.
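
    In case anyone wants to reproduce the check, a capture filter along these lines is enough to see the MSS option in the outgoing SYN (the interface name eth0 is just an example for whichever interface the traffic passes through):

    Code: bash
    $ tcpdump -n -i eth0 'tcp[tcpflags] & tcp-syn != 0 and host 207.126.94.31'

    tcpdump prints the TCP options of SYN packets by default, so the advertised mss value shows up right in the output.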


    Now, the root cause of the issue may be that the Particles BBS simply doesn't handle this packet as it should, but the fact remains that it could be worked around by increasing the TCP MSS on the Amiga somehow.


    I tried to figure out on my own how to increase the default MSS value. I even dug into AmiTCP 2.2's source code to find out how the MSS value is determined. The code suggests that it's derived from the route's MTU, but I failed to adjust that. So, please tell me: how can I increase the TCP MSS from the default 512?
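
    For reference, the relationship in the BSD-derived code appears to be MSS = MTU minus the 20-byte IP header and the 20-byte TCP header, which is easy to sanity-check with plain shell arithmetic (nothing Amiga-specific here):

    Code: bash
    $ echo $((576 - 20 - 20))    # classic minimum-MTU case
    536
    $ echo $((1500 - 20 - 20))   # standard Ethernet MTU
    1460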

    Steps to reproduce

    1. Install the AmiTCP stack.
    2. Install DCTelnet 1.5 from Aminet.
    3. Try to connect to particlesbbs.dyndns.org port 6400.

    Current result

    The connection times out even though the BBS is operating normally.


    Workarounds

    1. If you have a Linux box on your network, you can set up a socat proxy that acts as a man-in-the-middle and relays the TCP traffic (see the sketch after this list). This is cumbersome and shouldn't be necessary in a network that otherwise operates normally.
    2. Install the Miami or Roadshow TCP stack. I tried them and they do fix the problem, but I don't want to use them for reasons. I prefer the AmiTCP stack.
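
    For completeness, the socat workaround I mean is roughly this, run on the Linux box (the listen port is arbitrary; the Amiga then connects to the Linux box instead of the BBS, and the Linux side talks to the BBS with its own, normal MSS):

    Code: bash
    $ socat TCP-LISTEN:6400,reuseaddr,fork TCP:particlesbbs.dyndns.org:6400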

    So, again, please advise: how can I increase the default TCP MSS?

  • I'm using the AmiTCP stack installed from the "aca500p-installnetwork" package. First of all, I'm not sure which version of AmiTCP this is, because neither the readmes nor the help files say, but never mind that.

    It's the most recent one.


    I'll have to contact the author to get an answer - this might take some time.

  • In an attempt to understand what's really going on, I started reading a bit about MSS selection. Since we have never come across a problem like this, it is likely that one or more routers on the path from you to the BBS are using MSS clamping. Since (strictly speaking) this is a violation of the OSI model, it's likely to be a corner case, though one that might be hard to pinpoint, as there are too many different routers out there - not just in the big routing centers, but also DSL routers, which use MSS clamping more often than the big professional ones.
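
    For illustration only: on a typical Linux-based home router, MSS clamping usually takes the form of a firewall rule along these lines, which rewrites the MSS option in forwarded SYN packets:

    Code: bash
    # rewrite the MSS in forwarded SYNs to match the path MTU
    $ iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu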


    Note that I'm assuming that both you and the Particles BBS operator are using a home router, and that one of them could be the root cause. This is already a rare case, as most machines that provide a service on the internet sit somewhere in a server farm, not behind home routers.


    Long story short, we'll attempt to re-compile AmiTCP with a higher MSS - there is no other way, as the value is not configurable in this version. Introducing a new configuration option does not sound attractive at this point.


    From what I found so far, the minimum recommended value appears to be 536 bytes, but since the de-facto MTU on the internet is 1500, it may be an idea to set the MSS to an even higher value (1500 minus the TCP and IP headers, 20 bytes each, resulting in 1460). This in turn bears the risk of packets requiring fragmentation somewhere along the way, including the possibility of packet loss.



    Will keep you updated - if you have the chance, please see if your Linux router can still replicate the problem with MSS=536, or if that value already fixes it.
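
    Reusing your earlier route trick, something like this should set it (assuming the same Linux setup as in your first post):

    Code: bash
    $ ip route replace 207.126.94.31/32 via [gateway's ip] advmss 536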

  • Thank you for the reply.


    I tried other values, bisecting from the start, to pinpoint the exact value at which the trouble begins. Unfortunately I don't remember the exact numbers, because this was during my original investigation over a week ago; I would have to redo the tests to give them to you. IIRC the MSS at which the connection timeout started to occur was around 535.


    More importantly, however, values below 600 already begin to produce communication issues: the connection goes through and it's possible to talk to the BBS, but the replies are partial or broken. I don't recall whether the connection is eventually dropped.


    I'm guessing here, but different content may affect the communication differently, so the boundary value at which things start to go awry might not be that easy to find.
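
    If I redo the tests, I'll probably script it roughly like this - a hypothetical sketch, assuming nc is available and reusing the route trick from my first post:

    Code: bash
    #!/bin/sh
    # walk through candidate MSS values and grab the start of the BBS banner
    for mss in 512 536 576 600 1460; do
        ip route replace 207.126.94.31/32 via [gateway's ip] advmss $mss
        echo "=== advmss $mss ==="
        nc -w 5 particlesbbs.dyndns.org 6400 < /dev/null | head -c 200
        echo
    done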

  • Thanks - I didn't mean for you to trial-and-error this, but to forward-engineer the answer. Calculating the MSS from the MTU appears to be the way, and you're confirming this with "problems start below 600". This means that 1460 has a higher probability of being the solution than 536. That's all I wanted to know.

  • Some common problems, as noted, include the DSL packet encapsulation issue, which reduced the usable packet size below 1500. It was seen frequently back when DSL was a common ISP service offering and Windows 3.x/95/98/2000/XP, with their then-less-dynamic TCP/IP stacks, were popular.
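
    To put a number on it: PPPoE encapsulation adds 8 bytes of overhead, so the usable MTU on such a DSL link drops from 1500 to 1492, and the matching MSS from 1460 to 1452:

    Code: bash
    $ echo $((1500 - 8))         # Ethernet MTU minus PPPoE overhead
    1492
    $ echo $((1492 - 20 - 20))   # minus IP and TCP headers
    1452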

    Another problem I have encountered (in other circles) is ISPs that convert to IPv6 and/or encapsulate traffic over an internal VPN from the router at your place to their regional border routers. This can further reduce the packet size, and it also hides what is really happening to the packet. If you can tracert (the PC TCP/IP command-line tool) to your target Internet host(s) and it seems abnormally close - with only 1-2 hops before you 'leave' your ISP-named router gear - you are likely being encapsulated within your ISP's local/regional network, or they gate directly to a datacenter with additional internal hops that are masked by a proxy (which will not show up in 'tracert').


    This is a PRIME example of the latter:


    Tracing route to www.icomp.de [81.95.0.34]
    over a maximum of 30 hops:

      1    <1 ms    <1 ms    <1 ms  Fios_Quantum_Gateway.fios-router.home [192.168.1.1]
      2     7 ms     6 ms     7 ms  icomp.de [81.95.0.34]

    Trace complete.


    And I'm in the USA...

    Former GVP Tech Support 1989-93, GuruROM Maker/Supporter (as personal time allows)

  • And I'm in the USA...

    We're in Germany, but the server is located in a server farm that has triple backup, using the same infrastructure as a neighbouring banking server park. It has a *direct* connection to three DE-CIX centers in Germany.


    The only reason why the shop is so slow (practically unusable) is that it's extremely bad code :-)
