RE: [load balancing] BigIP Round Robin isn't.

From: Rick Masters
Date: Wed Jan 02 2002 - 21:23:20 EST



    I'm sorry if you haven't received an answer from an F5
    representative sooner. Our support lines are available
    as always if you need immediate assistance. I am filling
    in for the regular person who monitors this list since
    he is on vacation.

    As far as your problem goes, BIG-IP supports per-HTTP-request
    load balancing. By default, requests are load
    balanced per TCP connection because not all traffic is
    HTTP and not all customers will want their HTTP traffic
    split into separate requests automatically.

    If you want BIG-IP to load balance on the HTTP level,
    currently you've got to give it a reason to do so.
    For example, if you configured "cookie persistence" or
    you wrote a rule that uses an HTTP header, then BIG-IP
    will "know" it needs to be HTTP aware and will load balance
    and aggregate HTTP on a per-request basis.

    If you don't actually have any need to load balance on any
    HTTP criteria (you aren't inserting HTTP headers or using any
    other feature that *requires* HTTP awareness), you may have
    to fool BIG-IP into being HTTP aware by writing a dummy rule
    such as:

    pool poolX { ... }

    rule dummy_poolX {
        if (http_uri starts_with "x") {
            use ( poolX )
        }
        else {
            use ( poolX )
        }
    }
    The simple rule above turns on HTTP awareness without
    requiring you to change your configuration much.

    It has become clear that we need to provide a method to
    allow a customer to indicate that the traffic is HTTP
    without requiring that some specific HTTP feature be
    configured. A syntax such as the following may make sense:

    pool poolX {
        http_requests enable
    }
    We are evaluating this for a future release. In the meantime
    the dummy rule should work fine as a workaround.
    Please let me know if you have any questions.

    Rick Masters
    Product Development, BIG-IP
    F5 Networks

    -----Original Message-----
    From: Reid Conti []
    Sent: Thursday, December 27, 2001 4:11 PM
    Subject: Re: [load balancing] BigIP Round Robin isn't.

    My, how I love replying to my own messages!

    The discussion generated by list members regarding persistence led me to
    examine the httpd.conf files on the servers more closely (we were not
    using persistence at the bigip level). As it turns out, the web server in
    each pool had a SLIGHTLY different httpd.conf due to some legacy stuff.

    The difference was the directive "MaxKeepAliveRequests 100" (i.e.
    keep-alive left enabled) instead of "KeepAlive off".
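
    In httpd.conf terms, the fix on the affected servers amounts to
    something like the fragment below (a sketch of the directives
    involved, not the exact file from this setup):

```apache
# Disable persistent connections: each component of a page then
# arrives on its own TCP connection, which the BIG-IP can round-robin.
KeepAlive Off

# Note: MaxKeepAliveRequests only matters while KeepAlive is on;
# with it on, one connection can carry up to 100 requests, all of
# which land on whichever server the connection was balanced to.
# MaxKeepAliveRequests 100
```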

    So, in other words, when requests happened to hit the servers in the pools
    with keepalive on, they would transfer lots of data back and forth to the
    same server, instead of splitting up each request for individual page
    components amongst multiple servers.
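
    The effect is easy to reproduce in a toy simulation: connection
    counts stay identical while request counts diverge. The server
    names and the 10-requests-per-keep-alive-connection figure below
    are made up for illustration, not taken from this setup:

```python
from itertools import cycle

# Assumed number of page components fetched over one keep-alive
# connection (hypothetical; real pages vary).
REQS_PER_KEEPALIVE_CONN = 10

def simulate(keepalive, n_conns):
    """Round-robin n_conns TCP connections across the servers.

    keepalive maps server name -> whether that server has
    KeepAlive on. A keep-alive connection carries many HTTP
    requests; a non-keep-alive connection carries exactly one.
    Returns per-server connection and request counts.
    """
    conns = {s: 0 for s in keepalive}
    reqs = {s: 0 for s in keepalive}
    rr = cycle(keepalive)  # iterate server names round-robin
    for _ in range(n_conns):
        s = next(rr)
        conns[s] += 1
        reqs[s] += REQS_PER_KEEPALIVE_CONN if keepalive[s] else 1
    return conns, reqs

# One server in the pool has KeepAlive on, the rest have it off.
pool = {"svr1": True, "svr2": False, "svr3": False, "svr4": False}
conns, reqs = simulate(pool, 400)
print(conns)  # every server gets exactly 100 connections
print(reqs)   # but svr1 serves 1000 requests vs 100 for the others
```

    With KeepAlive off everywhere, each page component becomes its
    own connection and the request (and byte) counts even out too.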

    This is why the connection count on the bigip was the same, but the data
    in and out was much higher. It also explains how the connection count can
    be the same, but the "hits" to the webserver (includes pages, images, etc)
    were higher for the more heavily loaded servers.

    I set KeepAlive to off for the servers that had it on, and the loads
    dropped from 8 or so to a more normal 0.5 or 1.

    The reason we never noticed this before is because we used to have legacy
    stuff with IP-based hosting, where only one server in each pool would have
    a lot of IPs on it. As we phased that out, NIMDA came around, so we
    blamed higher load on those servers on NIMDA requesting default.ida and
    other such stuff from 500 IPs on each of those servers. Granted, Apache
    was 404ing those requests, but it was not hard to imagine a high volume
    of 404s driving up the system load SLIGHTLY.

    Today is a busy day for our systems, due to post-Christmas shopping, so
    the differences in load between a keepalive and a nokeepalive system
    became much greater than they had been before.

    Thanks to all who helped out.. I hope this helps someone else out now or
    in the future.

    - reid

    On Thu, 27 Dec 2001, Reid Conti wrote:

    > Howdy.. searched the archives for info on this, but couldn't find anything
    > relevant. Somebody posted a similar problem, but I think it was brushed
    > aside as a configuration issue.
    > So here's what I have: Several (4 or 5) server pools of 3-7 Linux/Apache
    > webservers behind a bigip. They're all load-balanced using the round
    > robin method.
    > One server in every pool has a MUCH higher load than the others. They are
    > all similar configurations, and I have verified that the servers with
    > higher loads are not necessarily less well-equipped than their
    > counterparts.
    > Here is a brief table culled from bigtop:
    > Server      Long Term                 Prior 4 seconds
    >             In       Out      Conn    In       Out      Conn
    > pool1svr1   353.2M   2.2G     15.3K   518.4K   4.6M     25
    > pool1svr2   103.5M   797.4M   15.8K   148.8K   1.1M     21
    > pool1svr3   104.9M   758.9M   16.3K   216.2K   1.6M     33
    > pool1svr4   108.5M   790.6M   16.6K   104.2K   795.7K   17
    > pool2svr1   581.9M   2.3G     39.3K   600.5K   1.5M     38
    > pool2svr2   226.9M   901.9M   39.1K   275.7K   1.0M     36
    > pool2svr3   231.8M   905.0M   39.9K   224.1K   601.7K   40
    > pool2svr4   232.8M   924.0M   40.2K   155.4K   473.8K   31
    > pool3svr1   473.4M   2.6G     19.3K   480.2K   2.8M     13
    > pool3svr2   122.6M   766.2M   19.3K   119.9K   0.9M     9
    > pool3svr3   122.1M   740.8M   19.3K   47.2K    405.0K   9
    > As you can see, each pool has a server that clearly is getting more action
    > than any of the others, somehow. Don't worry about the prior 4 seconds
    > connections, those fluctuate too rapidly to really capture. Watching the
    > recent connections, the numbers seem about the same for every server,
    > which is borne out by the long term connection numbers -- they're almost
    > identical, as one would expect from bigip round robin.
    > Unfortunately, however, the longer and prior 4 seconds in and out numbers
    > show one server in each pool being vastly more active than the others. In
    > the case of server groups 2 and 3, the in and out numbers across the pool
    > fluctuate a bit more, but as you can see from the long term numbers and
    > the number of connections and such, one server consistently passes more
    > data in and out.. and it causes the load on these servers to be higher.
    > We're considering setting up a better system for spreading load across the
    > servers, but I'd really like to know why the most active server in a pool
    > is consistently handling more than twice the data of any of the other
    > servers, while having essentially identical connection counts.
    > For what it's worth, looking in the bigip logs shows none of these servers
    > going down or being marked as inactive for the monitoring period.
    > I appreciate any input you might have on this matter.
    > Reid Conti
    > Systems Administrator
    > The Cobalt Group, Inc.
    > 206-391-0050
    > ____________________
    > The Load Balancing Mailing List
    > Unsubscribe:
    > Archive:
    > LBDigest:
    > MRTG with SLB:


    This archive was generated by hypermail 2b30 : Wed Jan 02 2002 - 21:33:15 EST