RE: HydraGPS load balancing was RE: [load balancing] Verisign and Load Balancers

From: G. Jules Huang
Date: Tue Feb 27 2001 - 10:18:20 EST


    Alex was correct. SiteSmith did the alpha testing on GPS
    and he's more than qualified to answer the questions.

    Beyond what he described, HydraGPS does not rewrite the
    client requests. The packets are encapsulated, forwarded,
    and de-capsulated. At that point the packets are identical
    to the originals. (Neither source nor destination
    addresses are changed or modified.)
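    To make the encapsulate/forward/de-capsulate claim concrete, here is a
    minimal sketch (purely illustrative, not HydraWeb's actual code or wire
    format; the addresses and payload are made up): the client packet is
    wrapped in an outer header carrying the tunnel endpoints, and after
    de-capsulation it comes out byte-for-byte unchanged.

```python
def encapsulate(inner_packet: bytes, outer_src: str, outer_dst: str) -> bytes:
    """Wrap the original packet in an outer header naming the tunnel endpoints."""
    outer_header = f"{outer_src}>{outer_dst}|".encode()
    return outer_header + inner_packet

def decapsulate(tunneled: bytes) -> bytes:
    """Strip the outer header; the inner packet is untouched."""
    return tunneled.split(b"|", 1)[1]

# Client packet with its real (unaltered) source and destination addresses.
client_packet = b"SRC=198.51.100.7 DST=203.0.113.80 GET /index.html"

# Constellation (192.0.2.1) tunnels it to the Probe (192.0.2.2).
tunneled = encapsulate(client_packet, "192.0.2.1", "192.0.2.2")

# After the tunnel, the packet is identical to the original.
assert decapsulate(tunneled) == client_packet
```

    The real mechanism would be IP-in-IP style encapsulation rather than a
    text header, but the invariant shown is the same: the inner addresses
    never change.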

    We do not optimize the return path; the goal is to guarantee
    arrival. HydraGPS can work with any content delivery network,
    which makes it more suitable for streaming media and other
    content-intensive sites.

    Both Constellation (located near the client) and Probe (located near
    the data center) use real addresses - so encapsulated packets passing
    between them have "legitimate" source and destination addresses.
    Client packets also have legitimate (unaltered) source and destination
    addresses, both before and after they pass through the tunnel.

    -----Original Message-----
    From: [] On Behalf Of Alex Samonte
    Sent: Monday, February 26, 2001 2:27 PM
    Subject: Re: HydraGPS load balancing was RE: [load balancing] Verisign
    and Load Balancers

    On Sat, Feb 24, 2001 at 02:32:16AM -0500, Nimesh Vakharia wrote:

    We did alpha testing of hydraGPS, so I think we can say some stuff about it.

    > This is an interesting implementation, but if I understand
    > correctly you're not optimizing delivery from the client to the
    > website.
    > Correct me if I'm wrong, but the HydraGPSes are independently
    > located, and the AS that's announcing the VIP (VIP1) is discontiguous.
    > The GPS calculates the metric between itself and the probe/site

    Correct. There is an assumption here that the GPS location is fairly close
    to the client. In order for HydraGPS to be truly effective, there has to
    be a large (not 2-3 but 20-30) set of Hydra constellations (the places
    where the BGP announcement is made and the tunnel starts). This gives
    enough coverage to make that assumption mostly correct. Without that many
    boxes, BGP announcements from a smaller number of locations are often
    unpredictable because of the GOB (good ol' boys) nature of network peering.

    > <quote> "{{measured latency (delay), proximity (hop count and MTU),
    > congestion (packets per second, lost packets, out-of-sequence packets),
    > anomalies (duplicate and damaged packets), ICMP errors}}" </quote>
    > It rewrites the client requests and directs them toward the optimal
    > destination (the destination IP is the management network VIP, i.e. VIP2).
    > The local SLB solution balances and responds back with the actual VIP1
    > (VIP1 is not routable to this farm from the net).

    Hydraweb's solution combines two solutions available today.

    One is BGP GSLB. This is more an architecture than a technology. It
    consists of announcing the same network block out of different locations
    and letting normal network routing protocols direct clients to one site
    or the other. The main advantage of this is that network failover is just
    about guaranteed, so if a datacenter blows up or falls into the ocean, no
    more traffic will go to it (unlike DNS-based solutions). Unfortunately,
    this does not take any other factors into account (like server load).
    Foundry supports this.
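    A toy sketch of the BGP GSLB idea (the site names, AS numbers, and the
    reduction of BGP best-path selection to "shortest AS path" are all
    assumptions for illustration): the same prefix is announced from several
    sites, routing picks one, and withdrawing a dead site's announcement
    shifts traffic automatically.

```python
# Same prefix announced from two datacenters with different AS paths.
announcements = {
    "datacenter-east": {"prefix": "203.0.113.0/24", "as_path": [65001, 65100]},
    "datacenter-west": {"prefix": "203.0.113.0/24", "as_path": [65002, 65003, 65100]},
}

def best_site(anns: dict) -> str:
    """Pick the announcement with the shortest AS path (grossly simplified BGP)."""
    return min(anns, key=lambda site: len(anns[site]["as_path"]))

print(best_site(announcements))        # datacenter-east: shorter AS path wins

# Datacenter "falls into the ocean": its announcement is withdrawn,
# and routing converges on the surviving site with no DNS change.
del announcements["datacenter-east"]
print(best_site(announcements))        # datacenter-west
```

    Note what the sketch also shows by omission: nothing here looks at
    server load, which is exactly the shortcoming described above.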

    The other is packet encapsulation, or triangulation. This is tunneling the
    request from the far server to a closer one, so the response traffic takes
    a shorter path while the request traffic takes a longer one. You can think
    of it like GSLB DSR (not quite, but sorta). ArrowPoint and Radware have
    this capability. This gives you much finer-grained global load balancing
    control, but it doesn't offer any protection against network problems.
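    The path asymmetry triangulation creates can be sketched with assumed hop
    counts (the numbers and node names are invented for illustration): the
    request takes the long tunneled leg, while the bulky response returns
    over the short direct path, which is why the comparison to DSR roughly
    fits.

```python
# Assumed hop counts between nodes (illustrative only).
hops = {
    ("client", "entry_site"): 3,       # client to the nearby tunnel entry
    ("entry_site", "far_server"): 12,  # tunneled leg of the request
    ("far_server", "client"): 5,       # response goes straight back
}

# Request: client -> entry site -> (tunnel) -> far server.
request_path = hops[("client", "entry_site")] + hops[("entry_site", "far_server")]

# Response: far server -> client directly, untunneled.
response_path = hops[("far_server", "client")]

print(request_path, response_path)  # the response path is the shorter one
```

    Since the response is the bulk of the traffic for most content sites,
    shortening only that direction still captures most of the benefit.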

    They have combined these two solutions to address the shortcomings of both.
    Right now, DNS-based is the clear leader: it's one solution, it's easy to
    use, and it has a good mix of benefits and drawbacks. I think combining
    BGP and triangulation (and throwing DNS on top of it) is a better solution,
    but it's obviously more complicated. That's its biggest weakness.

    > Here are the problems as I see them:
    > - The return path of the packets (the bulk of the traffic), i.e. from the
    > server farm to the client, has nothing to do with the metrics being
    > measured between the GPS and the probe. For all you know, the return path
    > goes through a bunch of completely different service providers. You could
    > be achieving minimal to zero gains, or even performance degradation, at
    > this point.

    Not necessarily. There is usually one local probe near each constellation,
    so the measurement from client to server should be close to the measurement
    from the constellation to the server. This relies on the assumption I
    stated earlier, and it's by no means guaranteed. What you have pointed out
    is the case in which the constellation is not the closest thing to the
    client. In that case, DNS wouldn't do any better either.
    > - ISPs tend to filter ingress traffic to prevent spoofing. These filters
    > are set up by querying the RADB (a similar concept to filtering your BGP
    > routes). This solution means that you are spoofing packets from
    > the server farms. Getting around it, i.e. falsely modifying the RADB,
    > would break a lot of things on the net...

    Most ISPs do NOT implement this filtering. They say they do, but they
    do not. But you bring up a very valid point: you ARE spoofing traffic, and
    you may need to work with your ISP/colo to allow that. Definitely something
    to watch out for.
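    For readers unfamiliar with the filtering in question, here is a toy
    sketch of a strict reverse-path check (the routing table, prefixes, and
    the crude /16 lookup are all invented for illustration): a packet is
    dropped when the route back to its source address does not point out the
    interface it arrived on, which is exactly what happens to de-capsulated
    traffic carrying a remote VIP as its source.

```python
# Toy routing table: prefix -> egress interface.
routing_table = {
    "10.1.0.0/16": "if-customer",   # the colo customer's own block
    "0.0.0.0/0": "if-upstream",     # everything else via the upstream
}

def lookup(src_ip: str) -> str:
    """Crude lookup: match the source's /16 if present, else the default route."""
    prefix = ".".join(src_ip.split(".")[:2]) + ".0.0/16"
    return routing_table.get(prefix, routing_table["0.0.0.0/0"])

def urpf_accepts(src_ip: str, arrived_on: str) -> bool:
    """Strict reverse-path check: route back to the source must match the ingress interface."""
    return lookup(src_ip) == arrived_on

print(urpf_accepts("10.1.2.3", "if-customer"))      # True: source is in the customer block
print(urpf_accepts("203.0.113.80", "if-customer"))  # False: remote-VIP source looks spoofed
```

    The fix the message describes is operational, not technical: the
    ISP/colo has to add an exception for the VIP block.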


    This archive was generated by hypermail 2b30 : Tue Feb 27 2001 - 10:14:22 EST