Help with kernel tuning for network latency

Discussion in 'Linux, BSD and Other OS's' started by scars230, Nov 2, 2009.

  1. scars230

    scars230 Geek Trainee

    Anti-Trend, I would be really thankful if you could help me with a few of my problems:
    1) This is a post I made on the srcds forums; could you please advise me on it?
    Kernel Settings Help. - srcds.com forums

    2) Are there any kernel configuration options (networking options, maybe?) which might help reduce internet latency?

    3) I have an AMD Athlon 64 X2 6000+. Is the X2 the right choice, or would something else be better?

    4) With respect to param.h, what is the difference between HZ and USER_HZ, and which of these corresponds to the timer frequency we see in make menuconfig? I wanted to set the timer frequency to 500Hz manually, since 500 is not a choice in make menuconfig.

    I don't have much experience, and I'm largely counting on you for answers to my doubts.
    Thank you

    I will check the new kernel and comment on it in a while.
     
  2. Anti-Trend

    Anti-Trend Nonconformist Geek

    I moved this to a new thread, since it doesn't have much to do with the HWF kernel thread.

    Ahh, building a CS server. I've hosted game servers over the years, but none recently and not CS in any case. So, I can't really help with the specifics of that particular game. However, the suggestions below should apply to any latency-sensitive server.

    A good network card, first and foremost. I recommend disabling the onboard NIC and replacing it with a PCIe Intel PRO/1000 or similar. 3Com also makes some good stuff, but for the money you're better off with Intel.

    Then you'll want to extend the buffer space of the kernel for small network operations. Add the following to your /etc/sysctl.conf:
    Code:
    net.core.rmem_default = 524288 
    net.core.rmem_max = 524288 
    net.core.wmem_default = 524288 
    net.core.wmem_max = 524288 
    
    ...then force it to take effect with:
    Code:
    sysctl -p
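    If you want a baseline before changing anything, you can also query the current values first (just a sanity check, not strictly necessary):
    Code:
    sysctl net.core.rmem_default net.core.rmem_max net.core.wmem_default net.core.wmem_max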
    One other thing that will help a lot is a fast and reliable clock source. The AMD-recommended source is hpet, and I have had good experience with it myself. You can apply it immediately like so:

    Code:
    echo hpet > /sys/devices/system/clocksource/clocksource0/current_clocksource
    ...and make it permanent by adding the following option to your boot arguments in your bootloader:

    Code:
    clocksource=hpet
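    Before you switch, it's worth confirming that hpet is actually listed as an available clock source on your hardware:
    Code:
    cat /sys/devices/system/clocksource/clocksource0/available_clocksource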
    Other things that can help are a 1000Hz kernel timer, low-latency voluntary preemption, and perhaps RT/MM if you want to roll your own. However, I have had problems with both patch sets in the past. Some versions work great, others not so well. Personally I think the HWF kernel should be a good balance of performance, stability, and responsiveness, but RT or MM may give you an edge that the stock kernel tuning parameters wouldn't otherwise. It might be worth experimenting with a few kernel variations to find your sweet spot.

    An X2 6000+ is a good CPU, but it doesn't have an overabundance of L2/L3 cache or memory bandwidth. It does have an integrated memory controller though, which is good for latency. All in all it should be fine, though something with more onboard cache would be better for running multiple simultaneous servers.

    For instance, you could get a Phenom II and run it with 4 sticks of RAM in unganged mode. In that way, each CPU core has access to a 64-bit stick of RAM independently of the others, which is pretty great for tasks like multi-game servers. Plus, the Phenom II is a lot cheaper than similar CPUs from Intel.

    Essentially, the kernel timer sets the frequency at which the kernel checks for new work to be done. The lowest pre-defined value is 100Hz, meaning a resolution of 10ms, which is the same as in Windows. The highest value is 1000Hz, giving a resolution of 1ms. A higher value makes the system much more responsive under heavy load or with many competing processes, but it leaves slightly less overall CPU power available, since some clock cycles are spent servicing the more frequent timer interrupts.

    The kernel is tuned for 4 timer settings, in Hz: 100, 250, 300, and 1000. For example, 100Hz is good for NUMA systems with lots of CPUs where raw CPU throughput is king, 250 is good for things like web servers, 300 is good for video engineering, and 1000 is good for very low-latency tasks like gaming.

    If you define your own timer value, 500 is an option, but then you'd be running at a timer setting that hasn't been tested by many other Linux users, and you could run into some sticky situations. Really, 1000Hz is probably preferable anyway.
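    If you're not sure what your running kernel was built with, you can usually check the shipped config (assuming your distro installs it under /boot, which most do):
    Code:
    grep CONFIG_HZ /boot/config-$(uname -r)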
     
  3. scars230

    scars230 Geek Trainee

    Dude!
    Seriously, if you were a girl I would have married you
    . . . naaa, just kidding :)
    Heheheh
    Anyway, thanks for all that you have taught me; I will surely put all of this to good use.
    God bless you.

    Edit: I'm sorry, but these settings actually raised my internet latency.
    Could you please give me a link to a website that teaches tweaking of latency-related settings?

    Thanks
     
  4. Anti-Trend

    Anti-Trend Nonconformist Geek

    How are you measuring latency? If you're measuring based on ICMP echo requests or similar, it won't really apply if your game is using UDP for data. See here for a brief look at setting UDP buffer sizes:

    http://www.29west.com/docs/THPM/udp-buffer-sizing.html
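    If you want to measure round-trip times over UDP instead of ICMP, one option is hping3, if you have it installed (it needs root). Note that 27015 is just the default srcds port, an assumption about your setup; substitute whatever your server actually uses:
    Code:
    # send 10 UDP probes to the server port and report round-trip times
    hping3 --udp -p 27015 -c 10 your.server.address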
     
  5. scars230

    scars230 Geek Trainee

    By connecting to the game server.
    Also, do those settings hamper the computer's download speeds? I saw a fall in those as well.

    Yes, games use UDP.
     
  6. Anti-Trend

    Anti-Trend Nonconformist Geek

    The sysctl settings I gave you increase the network buffers by several times. They should help in the case of multiple simultaneous connections, but might make it worse for a single connection. You can always try different values there to find your sweet spot. After making a change, you can apply your new settings with
    Code:
    sysctl -p
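    If you'd rather experiment without editing /etc/sysctl.conf every time, you can also set a value temporarily; it won't survive a reboot. The number below is only an example to illustrate the syntax, not a recommendation:
    Code:
    sysctl -w net.core.rmem_max=262144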
     
  7. scars230

    scars230 Geek Trainee

    OK, thank you.
     
  8. Anti-Trend

    Anti-Trend Nonconformist Geek

    Try these:

    Code:
    # Add lots more TCP buffer space for high-latency connections
    net.core.wmem_max = 12582912
    net.core.rmem_max = 12582912
    net.ipv4.tcp_rmem = 10240 87380 12582912
    net.ipv4.tcp_wmem = 10240 87380 12582912
    
    # An option to enlarge the transfer window
    net.ipv4.tcp_window_scaling = 1
    
    # Enable timestamps as defined in RFC1323
    net.ipv4.tcp_timestamps = 1
    
    # Enable selective acknowledgments (SACK)
    net.ipv4.tcp_sack = 1
    
    # If set, TCP will not cache metrics on closing connections
    net.ipv4.tcp_no_metrics_save = 1
    
    # Set the maximum number of packets queued on the input side when the interface receives packets faster than the kernel can process them
    net.core.netdev_max_backlog = 5000
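    Once you've run sysctl -p, you can confirm the new values actually took effect with something like:
    Code:
    sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem net.core.netdev_max_backlog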
     
  9. scars230

    scars230 Geek Trainee

    Anti-Trend, I respect you and the hard work you did for me.
    God bless you.
     
