G+_Ben Yanke Posted August 16, 2016

How bad of an idea is it to have nginx proxy to a server that is 30 ms away from the proxy? I'm looking to get some load balancing on the cheap. I need to keep the primary server on Digital Ocean for uptime, but I have a high-powered server (8 cores, 24 GB RAM) that is free to me. It's 30 ms away from DO and sits in someone's house, so I don't want to rely on it 100% (it's on a 1 Gbps connection, so bandwidth isn't an issue).

My first thought was to set up an nginx proxy so that all requests go to the DO datacenter and then, assuming the powerful remote server is online, get sent there instead of to my small VM. Any way to give my 40k hits/day site a little better performance...

Thoughts?
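Roughly what I'm picturing on the DO droplet (just a sketch, with a placeholder domain and IP):

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            # 203.0.113.10 stands in for the 8-core box's public IP
            proxy_pass http://203.0.113.10;
        }
    }

That's just the dumb forwarding part; falling back to the small VM when the big box disappears is the piece I'm not sure how to do well.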
G+_Ben Yanke Posted August 16, 2016

Or would it be better to simply set up a second A record in DNS pointing to this other server, and then have a script remove that A record if the server ever goes down? (Digital Ocean handles my DNS and has easy API access to the records.)
G+_Stede Bonnett Posted August 16, 2016

I do something like that for an e-commerce site with a bunch of stores that runs a bit slow. I actually run NGiNX as a caching SSL reverse proxy with some rate limiting in front of the customer-facing sites. The proxy runs in the smallest DO instance. Even with the latency to the backend server, the whole experience is faster through the proxy (spdy/http2 helps).

Where were you planning to run the proxy? Are you already using, or able to use, NGiNX as the primary webserver?
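The caching and rate-limiting part of my proxy looks roughly like this. It's only a sketch: the zone names, cache path, hostnames, and cert paths are made up, and the first two directives live in the http{} block:

    # shared memory zone for per-client rate limiting (http{} context)
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    # on-disk cache for backend responses (http{} context)
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=storecache:50m
                     max_size=1g inactive=60m use_temp_path=off;

    server {
        listen 443 ssl http2;
        server_name shop.example.com;

        ssl_certificate     /etc/nginx/ssl/shop.example.com.crt;
        ssl_certificate_key /etc/nginx/ssl/shop.example.com.key;

        location / {
            # throttle overly chatty clients a little
            limit_req zone=perip burst=20 nodelay;

            # serve cached copies of the slow backend's responses
            proxy_cache storecache;
            proxy_cache_valid 200 301 10m;

            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass https://backend.example.com;
        }
    }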
G+_Ben Yanke Posted August 16, 2016

This is how I'd like to set it up. The remote webserver is free, powerful, and on a fast connection, but it's in a friend's basement, so I can't guarantee uptime. On the other hand, DO can guarantee uptime, but it's more expensive. This seems like the best compromise: make use of the power of my remote server without completely relying on it for uptime. My only worry is the 30 ms round-trip ping between the proxy on DO and the remote server.

Is it better to proxy, or just to set up both with DNS load balancing? Both options are outlined in the graphic below... Thoughts?

http://imgur.com/a/udI4A
G+_Stede Bonnett Posted August 16, 2016

DNS load balancing will distribute the load between the two pretty evenly: the A/AAAA records are returned in random order for each successive client request, so you will be splitting the traffic and roughly half the requests will go to your second server. Your DNS-script idea is interesting, but DNS gets cached along the way, so changing the zone may be 'slow' and requests will keep going to the old IPs for a while.

Your second illustration, I think, is the safe bet. Run a reverse proxy and use the fail-timeout feature, something like:

    upstream benserver {
        server 1.1.1.3    weight=10 max_fails=3 fail_timeout=20s;
        server 10.128.1.2 weight=1  backup;
    }

Here your fast remote server is the primary and handles all traffic unless it's unresponsive, at which point nginx cuts over to the one on DO. The DO server can be connected through private networking (and optionally not internet-facing at all), or it could be the same server as your reverse proxy, with your application server on another port: 127.0.0.1:8080. If the DO webserver is up and syncing with the remote one anyway, running the proxy on it adds no cost.
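For completeness, here's a sketch of how the vhost on the droplet would point at that upstream (the server_name and listen port are whatever you use; this is just to show the wiring):

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

            # retry the other upstream entry on errors or timeouts
            proxy_next_upstream error timeout http_502 http_503;

            # after 3 failures within 20s the primary is marked down
            # and traffic flows to the backup entry instead
            proxy_pass http://benserver;
        }
    }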
G+_Ben Yanke Posted August 17, 2016

Stede Bonnett, thanks so much! So you don't think the latency between the DO datacenter and my colo will be any issue once it scales up?
G+_Steve Martin Posted August 17, 2016

I'd say the same thing I say to my engineers: test it out. You can step back if latency is giving the system fits.
G+_Ben Yanke Posted August 17, 2016

Thanks, I do appreciate having a group to bounce ideas off of! It seemed a real shame to waste that powerful server on my non-critical and extremely low-traffic sites, so I'm glad I found a way to possibly use it for more!