December 08, 2009
Say your Rails application is running in production and it’s getting good traffic. Performance isn’t as good as you would like. You’ve already determined that your database is not the bottleneck. What’s your next move?
There is a good chance that the answer is tuning Passenger’s PassengerMaxPoolSize setting.
PassengerMaxPoolSize specifies how many instances of your application Passenger will spin up to service incoming requests. If you were running Mongrels back in the day,
PassengerMaxPoolSize is equivalent to the number of mongrels you configured for your app. A higher PassengerMaxPoolSize value yields better throughput but uses more memory.
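The directive itself is a single line in your Apache configuration (the location shown in the comment is illustrative; 6 is Passenger’s default value):

```apache
# e.g. in httpd.conf or an included vhost/conf.d file
PassengerMaxPoolSize 6
```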
On the other hand, a lower value conserves memory but limits how many requests your application can serve concurrently.
You need to find Passenger’s “happy place,” where memory is neither being wasted nor over-consumed.
RAM doesn’t do you any good just sitting there. A simple way to check your memory situation is with the free command:
~ $ free -m
             total       used       free     shared    buffers     cached
Mem:          1024        781        242          0         81        389
For illustration purposes, here’s the two-week history of memory usage as captured by Scout:
There is definitely some headroom here—memory usage is around half with very few spikes, leaving us around 500MB free.
Before we change settings, check that the number of application processes is actually a limiting factor. Passenger makes this easy to check with the passenger-status command:
GLOBAL QUEUE: Waiting on global queue: 3
There’s our smoking gun—requests are backed up while we have memory to spare! Time to bump PassengerMaxPoolSize.
Sidebar: If you don’t have Passenger’s global queueing turned on, there is (unfortunately) no way to see if requests are backed up.
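For Passenger 2.x, global queueing is enabled with a one-line directive (it defaults to off in that version, and to on in later releases):

```apache
# enable Passenger's global queue so passenger-status can report backlog
PassengerUseGlobalQueue on
```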
So we already know that we have around 500MB sitting around. Two more key pieces of information we need: where PassengerMaxPoolSize is set, and how much memory each application instance consumes.
PassengerMaxPoolSize is set in the Apache configuration, typically in httpd.conf or a file included from it.
It’s easy to find out how much memory each instance uses with the passenger-memory-stats command:
--------- Passenger processes ----------
PID    Threads  VMSize    Private  Name
----------------------------------------
6120   1        152.4 MB  26.2 MB  Rails: /var/www/apps/wifi/current
637    1        165.1 MB  32.1 MB  Rails: /var/www/apps/wifi/current
That “Private” column is what we’re interested in. In this case, application instances take up around 30MB each. Given that we have around 500MB free, we can safely add 10 more instances to the pool. So, we’ll bump PassengerMaxPoolSize up by 10.
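The back-of-the-envelope math can be sketched in the shell; the 500MB and 30MB figures come from the free and passenger-memory-stats output above, and stopping short of the full headroom is a safety margin, not a Passenger requirement:

```shell
# how many more instances fit in RAM?
free_mb=500          # roughly free, per the two-week history
per_instance_mb=30   # "Private" memory per Rails instance
headroom=$(( free_mb / per_instance_mb ))
echo "room for about $headroom more instances"
# we only add 10 of those ~16 to leave a cushion for spikes
```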
Make sure you reload Apache so it picks up the new settings. The command depends on your configuration; for example:
sudo /etc/init.d/apache2 reload
After putting the new configuration in place, monitor your server to ensure everything keeps running smoothly. Keep an eye on free memory, swap usage, the number of Passenger processes, and the length of Passenger’s global queue.
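A quick ad-hoc snapshot of those numbers might look like the following; this is a sketch that assumes passenger-status is on the PATH (it typically needs sudo):

```shell
# snapshot the numbers worth watching after a pool-size change
free -m | awk 'NR==2 { printf "mem used: %s MB, free: %s MB\n", $3, $4 }'
ps aux | grep -c '[R]ails:'              # rough count of running Rails instances
sudo passenger-status | grep -i queue    # global queue depth
```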
It’s easy to determine if you’re out of memory and using swap. A rule of thumb:
Any swap usage is bad.
If your server is using swap, your application will perform badly, end of story. So how do you know if you’re swapping? Check vmstat:
~ $ vmstat 2
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
 0  0 139952 174176  81292 398068    1    0     6     7    0     0  1  0 97  0
 0  0 139952 174120  81292 398092    0    0     0     0 1133   382  0  0 99  0
 0  0 139952 174120  81292 398092    0    0     0    28 1087    32  0  0 99  0
 0  0 139952 174120  81292 398092    0    0     0     0 1155    25  0  0 99  0
You can ignore the first line of the vmstat output; it shows averages since boot. On the lines that follow, watch the si (swap in) and so (swap out) columns: consistently nonzero values mean the server is actively swapping.
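A tiny awk filter captures the same check; here it runs over saved vmstat output like the sample above (the sample data is for illustration):

```shell
# print SWAPPING if any si/so value past the since-boot line is nonzero
awk 'NR > 3 && ($7 > 0 || $8 > 0) { found=1 } END { print (found ? "SWAPPING" : "ok") }' <<'EOF'
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
 0  0 139952 174176  81292 398068    1    0     6     7    0     0  1  0 97  0
 0  0 139952 174120  81292 398092    0    0     0     0 1133   382  0  0 99  0
 0  0 139952 174120  81292 398092    0    0     0    28 1087    32  0  0 99  0
EOF
```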
If you’re running into swap, you need to free up memory. It is unlikely that simply reducing PassengerMaxPoolSize will fix things on its own: fewer instances means less throughput, which is the very problem we’re trying to solve.
Some options for freeing up memory for better performance include moving services like the database or full-text search to another machine, trimming unneeded background processes, or adding RAM.
If you’re just looking at the current memory usage, you’re ignoring past memory spikes. You may allocate too much memory to Passenger, exhaust your swap space, and take the server down. Scout makes it easier to tune Passenger because it tracks your memory usage over time.
The chart above shows the total RAM used and the RAM used by Passenger processes over the past week. We can size the pool against the peaks in that history rather than against a single snapshot.
The number of processes Passenger will spin up has a significant bearing on your application’s performance. The setting lives in your Apache configuration file: tune PassengerMaxPoolSize so that there is little free RAM left and no swap being used.
As you’re adjusting configurations, allow for other processes you may have on the machine: a database? Full-text search servers? Mail servers? Cron jobs? Periodic reporting? Keep in mind that some of these (especially cron jobs and periodic reports) only run intermittently, so their memory footprint won’t show up in a one-off snapshot.
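To see what else is competing for RAM, a quick listing of the heaviest processes helps (this assumes GNU ps, which supports --sort):

```shell
# the six heaviest lines (header + top five processes) by resident memory
ps aux --sort=-rss | head -6
```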