Re: [load balancing] Cascading switches off of a Foundry switch

From: Nimesh Vakharia (nvakhariIZZATgenx.net)
Date: Fri Feb 16 2001 - 23:28:14 EST


    Man, you must really not like foundry/DSR. At the risk of wasting
    bandwidth, I think I'd like to clear up some of your
    misconceptions. Please see comments inline.

     On Fri, 16 Feb 2001, Alex Samonte wrote:

    > > Sure, its a matter of preference. A hack? Let's call it a cool hack
    > > :). One thing I like about it is that it eliminates the SI as the
    > > bottleneck for all network traffic to machines behind the SI, especially
    > > if u'r doing some heavy transfers to those boxes. U'r also eliminating
    > > rewrites on every packet on the way out. (Although it's all done in
    > > ASICs, u'r still winning a bit on efficiency.) The spanning tree diameter
    > > tends to decrease, etc.
    >
    > Yes, you avoid doing rewrites on the way out. And with TCP, traffic in and out
    > is the same number of packets, hence my statement that it doubles performance.

            TCP traffic in and out is the same number of packets?? The HTTP
    client sends a GET (a relatively small packet), which passes through the
    SI. The response in a DSR scenario, i.e. a big index.html, images, etc.,
    does not go through the SI. Fewer packets and fewer bytes through the SI ==
    more/bigger/faster performance (u can talk throughput or PPS). It's a
    universal truth ;). If u look at the TCP ACK model, not every packet has
    a corresponding ACK, and then put fragmentation into the picture
    ... (traffic in != traffic out).
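
    To put rough numbers on that asymmetry, here is a minimal back-of-the-envelope
    sketch in Python; the request/response sizes and the delayed-ACK ratio are
    illustrative assumptions, not measurements from any particular setup:

        # Rough sketch of the in/out asymmetry for one HTTP GET (all sizes assumed).
        MSS = 1460                 # typical TCP payload per segment, in bytes
        request_bytes = 500        # the GET plus headers: one small inbound packet
        response_bytes = 60_000    # index.html plus images, outbound via DSR

        outbound_packets = -(-response_bytes // MSS)   # ceil division: data segments
        # With delayed ACKs the client ACKs roughly every other data segment,
        # so inbound traffic is ~1 GET plus outbound_packets/2 small ACKs.
        inbound_packets = 1 + outbound_packets // 2
        inbound_bytes = request_bytes + (outbound_packets // 2) * 40   # ~40-byte ACKs

        print(f"inbound  (through the SI):  {inbound_packets} packets, ~{inbound_bytes} bytes")
        print(f"outbound (bypasses the SI): {outbound_packets} packets, ~{response_bytes} bytes")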

    > If you are doing heavy transfers TO those boxes, DSR won't help you.
    > If you are doing heavy transfers FROM those boxes, DSR will help you
    > if you need more network throughput. DSR was put in place so people could
    > exceed 100 Mb/s before most of the load balancers truly supported gigabit
    > speeds.

       
    DSR
    Backup Soln -- Switch ------------- Server Iron
                    ||||
    Machines ------ ||||
                    ||||
                 Server Farm

    In this setup the bottleneck for "network" traffic is the switch backplane
    (probably a few gigabits).

    Non-DSR
    Backup Soln -- Switch -------- Server Iron ------ Server Farm
                     |
                  Machines

    In a non-DSR setup u'd be flooding the Server Iron's buffers with traffic that
    has nothing to do with LB. Put in various heavy IP flows and there you
    have it: the SI becomes a bottleneck. Even Gig speeds can be a bottleneck
    from a design perspective. You are not restricted to a server farm of
    < 10 servers (10 * 100 Mbps = 1 Gbps), and then there's the multiple-VIP,
    multiple-farm-behind-the-same-SI scenario (rough arithmetic sketched below).
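
    As a rough sketch of that arithmetic in Python (the link speeds, farm size
    and in:out ratio below are illustrative assumptions, not a real deployment):

        # Where does the ServerIron uplink run out of headroom? (numbers assumed)
        si_uplink_gbps  = 1.0     # Gig uplink on the SI
        servers         = 20      # farm size, each server on 100 Mbps
        per_server_mbps = 100
        out_to_in_ratio = 10      # HTTP-ish: responses ~10x the size of requests

        farm_out_gbps = servers * per_server_mbps / 1000.0   # response traffic
        farm_in_gbps  = farm_out_gbps / out_to_in_ratio      # request traffic

        # Non-DSR: both directions cross the SI.  DSR: only the requests do.
        print(f"non-DSR load on the SI: {farm_in_gbps + farm_out_gbps:.1f} Gbps "
              f"(uplink is {si_uplink_gbps} Gbps)")
        print(f"DSR load on the SI:     {farm_in_gbps:.1f} Gbps")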

    > > I am not sure how the binding affects DSR; every app by default
    > > binds to IP0 (i.e. all interfaces) on the box unless configured
    > > otherwise. This is probably how many people use it anyway. It's really
    > > not that bad to maintain loopbacks on the box. It's a one-time,
    > > one-statement deal, or in the case of Windows, a few clicks!
    >
    > Right, but you have to make the server respond with the VIP alias specifically,
    > not just any of them (Solaris had early problems with this). Not every app
    > binds to IP0 by default. Some are actually smart and let you
    > choose. If you don't have that choice, you have to do fun stuff with the
    > OS to make it say 'always respond with this IP', which may negate the
    > performance gains you might have gotten.
    >

    If the application gives u an option then it's a good thing. Fun stuff to
    the OS is called patching it, which is always a good thing. If u'r using
    an antique SunOS 2.51 with no patches, I think u have bigger fish to
    fry... Realistically speaking, this is a non-issue.
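
    As a minimal sketch of the binding question (the VIP address below is
    hypothetical, and it assumes the VIP has already been added as a loopback
    alias on the real server, which is the usual DSR prerequisite):

        import socket

        VIP = "192.0.2.10"   # hypothetical VIP, assumed aliased onto the loopback

        def listen_on(address, port=8080):
            """Bind an HTTP-ish listener to the given local address."""
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            sock.bind((address, port))
            sock.listen(5)
            return sock

        # Binding to the wildcard ("IP0") answers on every local address,
        # including the loopback-aliased VIP, with no application changes.
        wildcard = listen_on("")

        # An app that lets you choose can bind to the VIP explicitly instead,
        # guaranteeing replies are sourced from the VIP, not the real address.
        # vip_only = listen_on(VIP)   # only works where the VIP alias exists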

            Overall I've nothing against any setup, but DSR seems to be a
    good option and not something u should be afraid of. Think I've convinced
    you to try DSR in your next setup? :P

    Nimesh.


