Ruby Buzz Forum
Scaling Rails with Apache 2.2, mod_proxy_balancer and Mongrel

Jonathan Weiss

Posts: 146
Nickname: jweiss
Registered: Jan, 2006

Jonathan Weiss is a Ruby and BSD enthusiast
Scaling Rails with Apache 2.2, mod_proxy_balancer and Mongrel Posted: Apr 21, 2006 3:11 PM

This post originated from an RSS feed registered with Ruby Buzz by Jonathan Weiss.
Original Post: Scaling Rails with Apache 2.2, mod_proxy_balancer and Mongrel
Feed Title: BlogFish
Feed URL: http://blog.innerewut.de/feed/atom.xml
Feed Description: Weblog by Jonathan Weiss about Unix, BSD, security, Programming in Ruby, Ruby on Rails and Agile Development.

Until this week we used Lighttpd and FastCGI for MeinProf.de. The setup was nearly the same as described in the must-read Scaling Rails series (1, 2, 3, 4) from poocs.net.

We used this setup from day one but always had some small issues with Lighttpd: it crashed every couple of days. Nothing dramatic, as we had a script that monitored Lighttpd and restarted it if necessary. Over the last few weeks, however, Lighttpd started to crash once a day, and lately even once an hour. This was unacceptable, and as we knew we were going to get some serious press coverage in Germany, we looked for alternatives.

43people and Basecamp use Apache 1.3 and FastCGI, so this seemed like a good alternative: just switch the web server and we would be done. Unfortunately, Apache 1.3 cannot load-balance FastCGI requests, and there is very little documentation on Apache 1.3 with remote FastCGI processes. Apache 2.0 is no better and has problems with mod_fastcgi. We needed remote FastCGI listeners because our hardware is quite old: we have many slow machines rather than a few fast ones that could handle the load with local FastCGI.

Enter Mongrel.

Mongrel is a fast HTTP library and server for Ruby that is intended for hosting Ruby web applications of any kind using plain HTTP rather than FastCGI or SCGI. It is framework-agnostic and already supports the Ruby on Rails, Og+Nitro, and Camping frameworks.

With Mongrel, your application server becomes a web server that speaks HTTP, so you "only" need to load-balance and proxy normal HTTP requests to it. Mongrel was stable during our tests, so we looked for an HTTP proxy solution. Apache has always had mod_proxy and can therefore proxy HTTP requests, but we needed to load-balance them. There are extra packages for this kind of job, like Balance, but we wanted something more integrated and didn't want to introduce more components.
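To see why plain mod_proxy is not enough: a location can only be forwarded to a single backend. A minimal sketch, assuming one Mongrel on an example address:

ProxyPass / http://192.168.0.1:3000/
ProxyPassReverse / http://192.168.0.1:3000/

This works, but every request goes to that one Mongrel process, which is exactly the limitation we needed to get around.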

Enter Apache 2.2 and mod_proxy_balancer.

Apache 2.2 introduced a new proxy module, mod_proxy_balancer, which does exactly this: it balances proxied requests. You can define a cluster of backends and use this cluster in your mod_proxy statements instead of just one proxy server.

With this setup, Apache 2.2 handles all incoming requests and uses mod_proxy to forward them to the mod_proxy_balancer cluster. The cluster consists of several Mongrel processes on each application server (which now doubles as an internal web server) and distributes the requests among them.

mod_proxy_balancer is more configurable than Lighttpd's mod_fastcgi. For example, you can specify load factors or routes for each cluster member. See the documentation for details.
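Apache 2.2 also ships a balancer-manager handler that lets you inspect cluster members and adjust load factors at runtime. A minimal sketch; the /balancer-manager path and the access restriction are our choice:

<Location /balancer-manager>
  SetHandler balancer-manager
  Order deny,allow
  Deny from all
  Allow from 127.0.0.1
</Location>

You really want to restrict access to this handler, as it allows changes to the cluster configuration.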

Our httpd.conf looks like this:
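The proxy modules have to be loaded first. Assuming a shared-module build (the module paths depend on your platform):

LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so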

First you define the cluster (we call it meinprofcluster) and tell it which members it is composed of.

<Proxy balancer://meinprofcluster>
  # cluster member 1
  BalancerMember http://192.168.0.1:3000 
  BalancerMember http://192.168.0.1:3001

  # cluster member 2, the fastest machine so double the load
  BalancerMember http://192.168.0.11:3000 loadfactor=2
  BalancerMember http://192.168.0.11:3001 loadfactor=2

  # cluster member 3
  BalancerMember http://192.168.0.12:3000
  BalancerMember http://192.168.0.12:3001

  # cluster member 4
  BalancerMember http://192.168.0.13:3000
  BalancerMember http://192.168.0.13:3001
</Proxy>

Then you proxy the location or virtual host to the cluster:

<VirtualHost *:80>
  ServerAdmin info@meinprof.de
  ServerName www.meinprof.de
  ServerAlias meinprof.de
  ProxyPass / balancer://meinprofcluster/
  ProxyPassReverse / balancer://meinprofcluster/
  ErrorLog /var/log/www/www.meinprof.de/apache_error_log
  CustomLog /var/log/www/www.meinprof.de/apache_access_log combined
</VirtualHost>

The trailing slash at the end of the balancer URL in the ProxyPass directive is very important: without it, the request path is glued directly onto the cluster name and the balancer is no longer found.
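A refinement we have not deployed, sketched here as an assumption: Apache could serve Rails' static files itself by excluding them from the proxy. The exclusions must come before the catch-all ProxyPass, and the DocumentRoot path is just an example:

DocumentRoot /var/www/meinprof/current/public

# serve static Rails assets directly from Apache
ProxyPass /images !
ProxyPass /stylesheets !
ProxyPass /javascripts !
ProxyPass / balancer://meinprofcluster/
ProxyPassReverse / balancer://meinprofcluster/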

Mongrel itself is started on the cluster nodes like this (-d daemonizes, -e selects the Rails environment, -p sets the port):

# mongrel_rails start -d -e production -p 3000
# mongrel_rails start -d -e production -p 3001

So far this solution has proven much more stable (at least on FreeBSD) and was able to handle our peak traffic of 350,000 page requests per day. In practice we run up to 8 Mongrel processes on each cluster node, and it now seems that Apache is the bottleneck rather than our application servers, as it was before. The next step for us is to introduce another web server that handles incoming HTTP requests and has its own Mongrel cluster.

