Re: [load balancing] Loadbalancing Unix Web Servers

From: Alex Samonte
Date: Mon Nov 25 2002 - 13:58:11 EST


    On Mon, Nov 25, 2002 at 07:29:05AM -0600, Jay Kline wrote:
    > On Friday 22 November 2002 3:20 pm, tony bourke wrote:
    > > Actually this isn't always the case. Local caching on the web servers
    > > (for various operating systems, such as Linux, FreeBSD, Solaris I've seen
    > > firsthand exhibit this), depending on the size of the files and the number
    > > of files, it's typical to see an 8 to 1 ratio between packets out of the
    > > NFS server and packets out to the Internet.
    > I don't know if the caching will work well. There is about 2 GB of content
    > that is served (clipart, images, etc.) and most of those files are around 150 KB.

    Well, how much savings you will get from caching depends on which
    content is being accessed.

    How about we say that the NFS traffic shouldn't be much more than your
    outbound Internet traffic.

    > > One site in particular was pushing around 80-100 Mbps outbound, while only
    > > pulling 10-12 Mbps per second out of the NetApp, where all the traffic was
    > > pulled from the NFS mount (very little dynamically generated content).
    > We are currently pushing around 60 Mbps. Obviously most of the content is
    > static, but one of the options people have is to create a zip file to
    > download later. This obviously writes to the disk- which creates part of our
    > syncing problem.

    NFS will make your life easier in this case: if a file is created on
    one server, it's created on all of them. It can be a little slower, and
    you may also run into new issues (which I alluded to previously) when
    code assumes files are local.

    For example, what if another server is also generating a zip file? You
    have to worry about file name collisions, etc. These problems are
    easily solvable, but may not have needed to be thought about in a
    single-server setup.
    > SGI has some GPL'd HA stuff for Linux, such as NFS failover (it's called
    > FailSafe, I believe). We are also now considering a SAN or something, but are
    > somewhat concerned with the costs. Sistina has a Global File System product
    > we have also been looking at. Has anyone used these before?

    SANs are expensive, and just as complicated to make highly available.
    You also need to add a SAN HBA card to each server you want to be on
    the SAN.

    GFS looked interesting, but I never got a chance to poke around with it.

    There are also some other interesting larger products like Zambeel or
    Storigen, but I suspect these may be too large and expensive for your
    needs.

    To build an HA NAS box you basically need either a SAN on the back end
    (so the heads can connect to the same storage), or some kind of syncing
    mechanism between the two NAS boxes. And obviously the software portion
    to fail them over.
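    The syncing-mechanism option can be as plain as a periodic rsync from
    the primary head to the standby. A rough sketch, using two local
    directories (nas_primary/ and nas_standby/ are made-up stand-ins for
    the two boxes' export trees; in practice the target would be something
    like standby:/export):

    ```shell
    # Stand-ins for the two NAS heads' export directories.
    mkdir -p nas_primary nas_standby
    echo "site content" > nas_primary/index.html

    # Mirror the primary onto the standby; --delete removes files that
    # no longer exist on the primary so the standby stays consistent.
    rsync -a --delete nas_primary/ nas_standby/
    ```

    Run from cron this gives you a standby that lags by the sync interval,
    which is the trade-off versus shared SAN storage behind both heads.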

    In any case, all of these solutions might be too large/expensive for
    your needs. You probably can get away with a non-HA NAS box. They work
    pretty well, and can be just as reliable as any other Unix box you have
    running.


    The Load Balancing Mailing List

    This archive was generated by hypermail 2.1.4 : Mon Nov 25 2002 - 14:09:23 EST