Re: [load balancing] Cascading switches off of a Foundry switch

From: Alex Samonte (asamonteIZZATsitesmith.com)
Date: Tue Feb 20 2001 - 14:15:12 EST

  • Next message: Robert Altomare: "RE: [load balancing] Disabling session timeout on the Altheons"

    On Fri, Feb 16, 2001 at 11:28:14PM -0500, Nimesh Vakharia wrote:
    >
    >
    > Man, you must really not like foundry/DSR. At the risk of wasting
    > bandwidth, I think I'd like to clear up some of your
    > misconceptions. Please see comments inline.
    >
    > On Fri, 16 Feb 2001, Alex Samonte wrote:
    >
    > > > Sure, it's a matter of preference. A hack? Let's call it a cool hack
    > > > :). One thing I like about it is it eliminates the SI as the
    > > > bottleneck for all network traffic to machines behind the SI, especially
    > > > if you're doing some heavy transfers to those boxes. You're also eliminating
    > > > rewrites on every packet on the way out. (It's all in
    > > > ASIC, but you're still winning a bit on efficiency.) Spanning tree diameter
    > > > tends to decrease, etc...
    > >
    > > Yes, you avoid doing rewrites on the way out. And with TCP traffic in and out
    > > is the same number of packets, hence my statement about it doubles performance
    >
    > TCP traffic in and out the same number of packets?? An HTTP
    > client sends a GET (a relatively small packet) which passes through the
    > SI. The response in a DSR scenario, i.e. the big index.html/images etc., would
    > not go through the SI. Hence fewer packets and fewer bytes ==
    > more/bigger/faster performance. (You can talk throughput or PPS.) It's a
    > universal truth ;). If you look at the TCP ACK model, not every packet has
    > a corresponding ACK, and put fragmentation into the picture
    > ... (traffic in != traffic out).

    In bandwidth, no; in pps it's close to the same (barring retransmits).
    I did not take fragmentation into account, but that actually hurts
    your model, because fragmentation means more packets coming in.

    For just the L4 stuff, the major work the LB is doing is
    memory lookups and packet rewrites.
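    To make that concrete, here is a minimal sketch (hypothetical names
    throughout, not Foundry or Alteon internals) of the per-packet work an
    L4 load balancer does in NAT mode: a connection-table lookup keyed on
    the TCP/IP 4-tuple, plus a destination rewrite on every packet.

    ```python
    # Toy model of L4 NAT-mode load balancing. All names are illustrative
    # assumptions, not the SI's actual code path.

    class L4LoadBalancer:
        def __init__(self, vip, servers):
            self.vip = vip
            self.servers = servers
            # (src_ip, src_port, dst_ip, dst_port) -> chosen real server
            self.conn_table = {}
            self.rr = 0  # round-robin cursor

        def forward_inbound(self, packet):
            """Per-packet work: one memory lookup plus one header rewrite."""
            key = (packet["src_ip"], packet["src_port"],
                   packet["dst_ip"], packet["dst_port"])
            server = self.conn_table.get(key)
            if server is None:  # new connection: pick a real server
                server = self.servers[self.rr % len(self.servers)]
                self.rr += 1
                self.conn_table[key] = server
            packet["dst_ip"] = server  # the NAT rewrite on the way in
            return packet

    lb = L4LoadBalancer("10.0.0.1", ["192.168.0.10", "192.168.0.11"])
    pkt = {"src_ip": "1.2.3.4", "src_port": 1025,
           "dst_ip": "10.0.0.1", "dst_port": 80}
    print(lb.forward_inbound(pkt)["dst_ip"])
    ```

    In DSR mode the rewrite on the return path disappears entirely, since
    response packets never pass through the LB; only the inbound
    lookup-and-forward work remains.
    
    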

    With DSR the LB only sees the incoming PPS, not the outgoing PPS. The
    SIZE of the request doesn't matter: at L4, the LB is not looking
    at the payload.
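    A rough back-of-the-envelope model of this (my numbers, purely
    illustrative) for one HTTP GET of a 64 KB object: the bytes are wildly
    asymmetric, but the client's ACK stream keeps the inbound packet count
    in the same ballpark as the outbound one, and even in DSR all of those
    ACKs still cross the LB.

    ```python
    # Toy packet counts for one HTTP GET of a 64 KB response.
    # All figures below are assumptions, not measurements.

    MSS = 1460                              # typical Ethernet TCP payload size
    response_bytes = 64 * 1024

    data_segments = -(-response_bytes // MSS)  # ceil division -> 45 segments out

    # 1 ACK per data segment if the client ACKs everything;
    # roughly 0.5 with delayed ACKs.
    acks_per_segment = 1

    pkts_out = 2 + data_segments            # SYN-ACK and FIN, plus data segments
    pkts_in = 3 + int(data_segments * acks_per_segment)
                                            # SYN, handshake ACK, GET, plus data ACKs

    lb_pps_nat = pkts_in + pkts_out         # NAT mode: LB rewrites every packet
    lb_pps_dsr = pkts_in                    # DSR: LB only touches the inbound half

    print(pkts_in, pkts_out, lb_pps_nat, lb_pps_dsr)
    ```

    Under these assumptions inbound and outbound pps are nearly equal (48
    vs. 47) even though inbound bytes are a tiny fraction of outbound
    bytes; with delayed ACKs the inbound count drops toward half, but it is
    nowhere near the "response skips the SI, so pps collapses" picture.
    
    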

    > In a non-DSR setup you'd be flooding the ServerIron's buffers with traffic
    > that has nothing to do with LB. Put various heavy IP flows into the mix and
    > there you have it: the SI becomes a bottleneck. Even Gig speeds can be a
    > bottleneck from a design perspective. You are not restricted to a server farm
    > of fewer than 10 servers (10*100Mbps=1G); think multiple VIPs/multiple farms
    > behind the same SI.

    In general you don't have any servers except streaming servers pushing
    out 100Mb each. If you do, it's poor design. Which is why I wanted
    to move away from a bandwidth view to a pps view. It's much more relevant
    to DSR.

    > Overall I've nothing against any setup, but DSR seems to be a
    > good option and not something you should be afraid of. Think I've convinced
    > you to try DSR in your next setup? :P

    DSR was a patch when there was a performance bottleneck in the LB. It allowed
    me to scale further when there were no other options. Now there are options.

    Let's also give an example (not to slam on Alteon) of this in the past.
    There was a time when Alteons were limited to 32K conns. It took them
    a little over a year to raise that to 64K conns just by changing a signed
    counter to unsigned.
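    The arithmetic behind that kind of fix, assuming the connection count
    lived in a 16-bit field (my assumption; the actual field width isn't
    stated in the thread):

    ```python
    # A signed 16-bit counter tops out at 32K; reinterpreting the same
    # 16 bits as unsigned doubles the ceiling to 64K.
    # (The 16-bit width is an assumption for illustration.)
    signed_max = 2**15 - 1      # 32767  ~ the "32K conns" ceiling
    unsigned_max = 2**16 - 1    # 65535  ~ the "64K conns" ceiling
    print(signed_max, unsigned_max)
    ```
    
    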

    The drawbacks of DSR, plus the fact that I have other options, don't lead me
    to want to use it. It's an option I'm well aware of, and in certain
    situations it does make more sense. But those are few and far between.

    -Alex



    This archive was generated by hypermail 2b30 : Tue Feb 20 2001 - 14:20:38 EST