PLANNED DOWNTIME (2012-11-10 2AM - 4AM): Transition to New Hosting Platform
vvvvvvv said:

No. It will probably be at least a week before the downtime, and the downtime itself should be minimal.

Lurker66 is right. (Just kidding.)

And to please n8thegr8, here are the basics from the geek world. (If you got lost on the first post, skip this one.)

In front is a load balancer. Normally I run Varnish (with Nginx for SSL termination) as dual caching front-end load balancers with a floating IP, so that if one dies the other picks up and answers on that IP within a few seconds. For this site, though, it looks more cost-effective to run Rackspace's Cloud Load Balancer option. I did some performance testing first and found them satisfactory. I've got a few of them running for other "special" sites and have found that they usually have a replacement provisioned within 30 seconds in the event of a device failure.

Next will be a pair of webservers running Nginx, PHP-FPM, APC, and Memcache. (I am considering putting a small Varnish cache on them to help with current-event spikes, where we get a sudden boost of anonymous users hitting only a small part of the site - like when Open Carry passed.) The webservers also run HAProxy for connecting to the MySQL cluster. When Zenoss detects high load, an additional server is provisioned and the load balancer is told about it automatically; when the load drops back below a certain threshold, that server gets nuked. Most of the time I'm getting webservers provisioned within minutes, thanks to deploying from images and enforcing configuration with Puppet.

Finally, the meat is in the cluster. I'm a Galera nut, and the MySQL servers will be running Percona XtraDB Cluster (which is MySQL patched with Galera and other enhancements). There will be a minimum of three MySQL servers at all times, and since Galera replication is synchronous we don't have to worry about the issues created by having a single master take writes while slaves answer reads - every server is a master that handles both reads and writes, without the headaches of MySQL's normal multi-master setup. Just like the webservers, the MySQL servers will be scaled in and out horizontally in response to load. There will also always be one server in the cluster that is the designated donor for new nodes and the one backups are taken from, so backups can be made and new nodes synced without affecting everyone else's use of the site.

Oh yeah, I'm a Debian man.
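As a rough illustration of the webserver layer described above, here is a minimal Nginx sketch showing how requests get handed to PHP-FPM. The hostname, document root, socket path, and upstream name are placeholders, not the site's actual configuration:

    # Hypothetical vhost; names and paths are illustrative only.
    upstream php_backend {
        server unix:/var/run/php5-fpm.sock;   # PHP-FPM listening on a local socket
    }

    server {
        listen 80;
        server_name forum.example.com;        # placeholder hostname
        root /var/www/forum;

        location / {
            try_files $uri $uri/ /index.php?$uri&$args;
        }

        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass php_backend;         # hand PHP requests to PHP-FPM
        }
    }

APC and Memcache sit at the PHP level rather than in this file, so they don't appear here; the point is simply that Nginx serves static files itself and proxies PHP to FPM over a local socket.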
[QUOTE="vvvvvvv, post: 1978143, member: 5151"] No. It will probably be at least a week before the minimal amount of downtime. Lurker66 is right. (just kidding) And to please n8thegr8, here's the basics from the geek world. (If you got lost on the first post, skip this one.) In front is a load balancer. Normally, I run Varnish (and Nginx for SSL termination) as dual caching front end load balancers with a floating IP so that if one dies the other picks up and answers that IP within a few seconds. But for this site, it looks more cost-effective to run Rackspace's Cloud Load Balancer option. I did do some performance testing first and found them satisfactory. I've got a few of them running for other "special" sites and have found that they usually have a replacement provisioned within 30 seconds in the event of a device failure. Next will be a pair of webservers running Nginx, PHP-FPM, APC, and Memcache. (I am considering putting a small Varnish cache on them to help with current-event spikes where we get a sudden boost of anonymous users for only a small part of the site - like when Open Carry passed.) The webservers also run HAProxy for connecting to the MySQL cluster. In the event of high load detected by Zenoss, an additional server is provisioned and the load balancer is told about it automatically. When the load dies down below a certain threshold, the server gets nuked. Most of the time, I'm getting webservers provisioned within minutes thanks to deploying from images and ensuring configuration with Puppet. Finally, the meat is in the cluster. I'm a Galera nut, and the MySQL servers will be running Percona XtraDB Cluster (which is MySQL patched with Galera and other enhancements). There will be a minimum of three MySQL servers at all times, and since Galera is synchronous replication we don't have to worry about the issues created by having a single master take writes and slaves answering reads - every server is a master that handles reads and writes without the headaches of MySQL's normal multi-master implementation. Just like the webservers, the MySQL servers will also be scaled in and out horizontally in response to load. Also, there will always be one server in the cluster that is the designated donor for new nodes and is also where backups will be made from. This way, backups can be made and new nodes synced without having an effect on everyone else's use of the site. Oh yeah, I'm a Debian man. [/QUOTE]
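And for the cluster itself, a Percona XtraDB Cluster node is configured through wsrep_* settings in my.cnf. The fragment below is a minimal sketch of one node: the cluster name, addresses, SST credentials, donor name, and library path are all assumptions, not the real deployment:

    # Hypothetical [mysqld] fragment for one Galera node; values are placeholders.
    [mysqld]
    binlog_format            = ROW                  # required by Galera
    default_storage_engine   = InnoDB
    innodb_autoinc_lock_mode = 2                    # needed for multi-master writes

    wsrep_provider           = /usr/lib/libgalera_smm.so   # path varies by distro/package
    wsrep_cluster_name       = forum_cluster
    wsrep_cluster_address    = gcomm://10.0.0.11,10.0.0.12,10.0.0.13
    wsrep_node_address       = 10.0.0.12
    wsrep_sst_method         = xtrabackup
    wsrep_sst_auth           = sstuser:sstpass
    # Prefer the designated donor node for state transfers so full syncs and
    # backups don't land on the nodes serving live traffic; the trailing comma
    # lets Galera fall back to another donor if that node is unavailable.
    wsrep_sst_donor          = db-donor,

This is how the "designated donor" idea in the post maps to configuration: new nodes pull their state snapshot from that one node, and backups are taken there as well, leaving the other members free to serve the site.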