
How bad of an idea is it to nginx proxy to a server that is 30ms away from the proxy?


G+_Ben Yanke


Looking to get some load balancing on the cheap. I need to keep the primary server on Digital Ocean for uptime, but I also have a high-power server (8 cores, 24 GB RAM) that's free to me. The catch: it's 30 ms away from DO and sits in someone's house, so I don't want to rely on it 100% (it's on a 1 Gbps connection, so bandwidth isn't an issue).

 

My first thought was to set up an NGINX proxy so that all requests go to the DO datacenter first and then, assuming the powerful remote server is online, get forwarded there instead of to my small VM.
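Concretely, something like this on the DO box (the IP and port here are placeholders, not my real setup):

```nginx
# All traffic hits this NGINX on DO first.
upstream app {
    # Big home server: preferred while it's responding.
    server 203.0.113.10 max_fails=3 fail_timeout=30s;
    # Small DO VM: only used while the home server is marked down.
    server 127.0.0.1:8080 backup;
}

server {
    listen 80;
    location / {
        proxy_pass http://app;
    }
}
```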

 

Any way to give my 40k hits/day site a little better performance...

 

Thoughts?


I do something like that for a somewhat slow e-commerce site with a bunch of stores. I actually run NGiNX as a caching SSL reverse proxy with some rate limiting in front of the customer-facing sites. The proxy runs on the smallest DO instance.

Even with the latency to the backend server, the whole experience on the site is faster through the proxy (SPDY/HTTP2 helps).
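Roughly this kind of setup, if it helps; paths, zone names, hostnames and rates below are just examples, not my real config:

```nginx
# Cache storage and a per-IP request-rate zone.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=shopcache:10m
                 max_size=1g inactive=60m;
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    listen 443 ssl http2;
    server_name shop.example.com;               # placeholder
    ssl_certificate     /etc/nginx/ssl/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;

    location / {
        # Smooth out bursts, cache successful responses briefly.
        limit_req zone=perip burst=20 nodelay;
        proxy_cache shopcache;
        proxy_cache_valid 200 301 10m;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://backend.example.com;  # placeholder backend
    }
}
```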

Where were you planning to run the proxy? Are you already using / able to use NGiNX as the primary webserver?


This is how I'd like to set it up.

 

The remote webserver is free, powerful, and on a fast connection, but it's in a friend's basement so I can't guarantee uptime. On the other hand, DO can guarantee uptime, but it's more expensive. This seems like the best compromise: make use of the power of my remote server without completely relying on it for uptime. My only worry is the 30 ms round-trip ping from the proxy on DO to the remote server.

 

Is it better to proxy, or just to set up both with DNS load balancing? Both options are outlined in the graphic below...

 

Thoughts?

 

http://imgur.com/a/udI4A


DNS load balancing will distribute the load between the two pretty evenly - the A/AAAA records are returned in random order for each successive client request.

That means you will be splitting the traffic, and roughly half the requests will go to your second server.
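For reference, DNS round robin is just publishing both addresses in the zone; something like this (placeholder IPs, short TTL so changes propagate faster):

```
; both records are returned for every lookup, order varies per response
www  300  IN  A  203.0.113.10   ; home server (placeholder)
www  300  IN  A  198.51.100.20  ; DO droplet (placeholder)
```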

 

Your DNS script idea is interesting, but DNS responses get cached along the way, so changing the zone may be 'slow': requests will keep going to the old IPs for a while.

 

Your second illustration is, I think, a safe bet. Run a reverse proxy and use the fail-timeout feature like this:

upstream benserver {
    server 1.1.1.3 weight=10 max_fails=3 fail_timeout=20s;
    server 10.128.1.2 weight=1 backup;
}

Here your fast remote server is the primary and handles all traffic; if it becomes unresponsive, NGiNX cuts over to the one on DO. The DO server can be connected through private networking (and optionally not be internet-facing at all), or it could be the same server as your reverse proxy, with your application server on another port: 127.0.0.1:8080

If the DO webserver is up and syncing with the remote one anyway, running the proxy on it adds no extra cost.
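To complete the picture, the proxy-side server block would look roughly like this (server name and headers are just the usual examples, adjust to taste):

```nginx
server {
    listen 80;
    server_name example.com;  # placeholder

    location / {
        proxy_pass http://benserver;
        # Also retry the backup on errors/timeouts,
        # not only on refused connections.
        proxy_next_upstream error timeout http_502 http_504;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```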

