
Performance Tuning

Performance tuning can mean the difference between having to upgrade your hardware and not. It can mean the difference between having happy users and not. It should be taken every bit as seriously as backups. And just like backups, you should test your environment regularly to make sure that what you did before is still working, or maybe some more tuning is necessary.

The following items are provided in no particular order. Depending upon your particular needs one may produce more results than the other. There are, unfortunately, no general rules as to which of these is best suited to your environment, budget, time constraints, or needs. The best advice is to consider each one carefully and apply as needed.

MySQL Tuning

There are literally dozens of settings that you can tweak in MySQL. To learn how each one affects performance, consult the MySQL manual. However, there are a few we can shed some light on right here.

Edit your my.cnf file.

vi /data/wre/etc/my.cnf


Adjust each of the following settings relative to each other, at a level appropriate for the amount of RAM available on this machine. If you're running MySQL on the same machine as the rest of WebGUI, you have to take that into account. But if you're running MySQL on its own machine, and it has 4GB of RAM free, then you could easily quadruple these numbers:
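The specific values from the original aren't preserved here. As a rough sketch, the relevant my.cnf settings look like the following; all numbers are illustrative assumptions for a machine with roughly 1GB of RAM dedicated to MySQL, not recommendations:

```ini
# Illustrative values only -- scale these to your available RAM.
key_buffer_size   = 128M   # index cache for MyISAM tables
table_cache       = 512    # number of open tables kept cached
sort_buffer_size  = 2M     # per-connection sort buffer
read_buffer_size  = 2M     # per-connection sequential scan buffer
query_cache_size  = 32M    # cache for repeated identical queries
```

Remember that the per-connection buffers are allocated once per client thread, so multiply them by your expected connection count when budgeting RAM.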








The thread_concurrency setting should be set to double the number of processors you have available.



The thread_cache_size setting should be set high enough that when you run “show status” from the MySQL command line, the Threads_created value increases very little over time while your server is under load.
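To watch this in practice, a quick check from the MySQL client looks like the following (the value 16 is an illustrative assumption, not a recommendation):

```sql
-- Run while the server is under load, then again a few minutes later;
-- if Threads_created keeps climbing, raise thread_cache_size.
SHOW STATUS LIKE 'Threads_created';
SET GLOBAL thread_cache_size = 16;  -- illustrative value
```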



As far as WebGUI is concerned, the ideal values for the following settings are as follows:




After you've made these changes you'll need to restart MySQL for them to take effect.

MySQL Slaves and Replication

By using MySQL replication you can increase performance in two ways. The first is that you can add the MySQL slave to your WebGUI config file and it will be used for low priority read requests, which will reduce the load on the master server somewhat. In addition, you can use mysqldump to back up against the slave rather than the master, which means that your backups won't have any adverse performance effects on your site.

There is an additional benefit to using a slave: it gives you an always-on backup of your database. Therefore, if your database server ever goes down, you can quickly switch over and use the slave as the master.

The good folks at MySQL have done an excellent job of describing how to use replication. You can set up MySQL replication using the replication how-to in the MySQL manual.

The basic steps are as follows:

vi /data/wre/etc/my.cnf


And add the following line to the config:
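The exact line from the original isn't preserved. As a sketch, a typical master configuration enables binary logging and assigns a unique server ID (the ID value is an arbitrary example):

```ini
# On the master: enable the binary log and give the server a unique ID.
log-bin   = mysql-bin
server-id = 1
```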




Restart MySQL.

/data/wre/sbin/ --restart mysql


Log in to the server and perform the following commands:

mysql -uroot -p123qwe

grant replication slave on *.* to 'repl'@'' identified by 'somepassword';

flush privileges;

flush tables with read lock;

show master status;


+---------------+----------+--------------+------------------+
| File          | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+---------------+----------+--------------+------------------+
| mysql-bin.001 |       92 |              |                  |
+---------------+----------+--------------+------------------+



Copy down the resulting information and keep it safe.

Using another terminal (do not quit your mysql client session) log in to the server again, and make a tarball of all the databases you want to replicate.

cd /data/wre/var/mysqldata

tar cvfz /tmp/mysqlsnapshot.tar.gz www_example_com mysql


In your mysql client, you can safely unlock the tables and exit:

unlock tables;



Copy your snapshot over to your slave server. Stop the slave. Extract your snapshot and edit MySQL config on the slave server:

rm -Rf /data/wre/var/mysqldata/*
cd /data/wre/var/mysqldata

tar xvfz /tmp/mysqlsnapshot.tar.gz

vi /data/wre/etc/my.cnf


And add the following line to the slave:
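Again, the original line isn't preserved. A typical slave configuration just needs a server ID different from the master's (the value 2 is an arbitrary example):

```ini
# On the slave: any ID different from the master's.
server-id = 2
```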



Start the slave. Log in and run the following commands:

mysql -uroot -p123qwe

change master to master_host='', master_user='repl', master_password='somepassword', master_log_file='mysql-bin.001', master_log_pos=92;

start slave;


You now have basic MySQL Master/Slave replication set up.
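Once replication is running, it's worth verifying that the slave threads are healthy. A quick check from the slave's MySQL client:

```sql
-- Both Slave_IO_Running and Slave_SQL_Running should report "Yes",
-- and Seconds_Behind_Master should be low or zero.
SHOW SLAVE STATUS\G
```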

mod_perl Tuning

There are many things you can do to tune mod_perl to work better and faster. In the WRE, all of these things have been preset to be as good as they can be without knowing about your environment and traffic patterns.

On platforms that support Apache2::SizeLimit (which is everything except Mac OS X, as far as we know) you should edit the SizeLimit settings file (in /data/wre/etc/ or /etc/) to match your needs.

MAX_PROCESS_SIZE is the largest size that mod_perl will allow its processes to grow before killing them. On systems where you have pruned out a bunch of WebGUI plugins, this can go lower. On systems where you have a bunch of your own custom plugins, or if you have a lot of extra RAM, this can go higher.

MAX_UNSHARED_SIZE is similar to MAX_PROCESS_SIZE except that it kills the process if the amount of RAM not shared with the master Apache process is greater than this value.

CHECK_EVERY_N_REQUESTS should be set low enough to catch processes that are growing very large, but high enough that the check doesn't stress the server. A setting of 5 is usually about right.

$Apache2::SizeLimit::MAX_PROCESS_SIZE = 100000;

$Apache2::SizeLimit::MAX_UNSHARED_SIZE = 75000;

$Apache2::SizeLimit::CHECK_EVERY_N_REQUESTS = 5;


If you are not able to use Apache2::SizeLimit, then you can set MaxRequestsPerChild in your /data/wre/etc/modperl.conf, or your httpd.conf if you're running a source install. This should be set to 1000 or less, depending on how quickly your processes are growing. You also need to take into account how busy your server is: the lower this number, the more it will stress the server, because it will be creating new processes more frequently.

MaxRequestsPerChild 1000


Also in your /data/wre/etc/modperl.conf or httpd.conf you should turn off keep alives. Keep alives are good for serving static files, but bad for memory hungry mod_perl processes.

KeepAlive Off


And finally, in your /data/wre/etc/modperl.conf or httpd.conf you should set the process directives according to your traffic patterns and available memory.

StartServers and MinSpareServers should both be set to the average number of mod_perl processes that will be serving requests at any given time.

MaxSpareServers should be set to the maximum number of idle servers you want left around in case of a load spike.

MaxClients should be set to the maximum number of requests your server can handle simultaneously. For example, if you have a gigabyte of RAM available for mod_perl, and your processes max out at 100MB, then MaxClients should be set to 10.

StartServers 5

MinSpareServers 5

MaxSpareServers 10

MaxClients 20
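The MaxClients rule of thumb above is just available RAM divided by maximum process size. As a sketch, using the hypothetical numbers from the example (1GB of RAM for mod_perl, processes capped at 100MB):

```shell
#!/bin/sh
# Hypothetical numbers: 1024 MB of RAM reserved for mod_perl,
# processes capped at 100 MB each by Apache2::SizeLimit.
ram_mb=1024
proc_mb=100
echo "MaxClients $(( ram_mb / proc_mb ))"
```

This prints "MaxClients 10"; rerun the arithmetic whenever you change MAX_PROCESS_SIZE or add RAM.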


Don't forget to restart mod_perl after making these changes.

Reverse Proxy Web Server

A reverse proxy is very good for performance in that it can handle serving static files quickly, and then hand off requests to the mod_perl server only when needed. Since the process size of the reverse proxy is 5MB or less per process, you can have a lot of these processes available for serving static requests.

In addition, the reverse proxy helps take some additional load off the mod_perl server in that slow clients (like people on dial-up connections) can be handled by the reverse proxy. mod_perl quickly does its work, hands off the content to mod_proxy, and that mod_perl process is freed up to do more work while mod_proxy slowly spoon-feeds the content back to the slow client.

And finally, mod_proxy can offload processor intensive, but low memory functions like SSL and compression.

If you're using the WRE then you already have a reverse proxy server in the form of mod_proxy.

If not, here are some recommendations for reverse proxy servers available:

  • Apache mod_proxy

  • perlbal

  • Litespeed

Load Balancing

Load balancing is the act of distributing requests between two or more servers. This is done via a load balancing device (sometimes called an IP Sprayer) or a piece of software on another server. The performance advantage of load balancing is that you can distribute the requests coming in to your web site to multiple machines. There is an added benefit in that if one machine goes down, the others can continue serving requests, and your users won't notice any down time.

There are many different kinds of load balancing solutions out there. The following instructions show you how to set up one of the smallest and easiest on a standard Linux server.

Get balance and install it on the server you wish to act as your load balancer.

Using it is really simple, just type these commands to get it started:

balance http

balance https


That will then load balance all traffic coming in for http and https between the two web servers whose IP addresses you supply on the command line.

Note that you’ll likely want to configure some sort of startup script for balance so that when your machine is rebooted, balance will start automatically.
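The IP addresses from the original example aren't preserved. With balance's `balance <service> <host> <host>` syntax, the commands look like this (the two addresses below are placeholders; substitute your own web servers):

```shell
# Placeholders: replace 192.168.1.11 and 192.168.1.12 with your web servers.
balance http  192.168.1.11 192.168.1.12
balance https 192.168.1.11 192.168.1.12
```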


Networking

The way your network is configured can have an enormous impact on the performance of your web site, especially if you have multiple machines working together to serve up your site.

No matter what your web server looks like, it should be on a switched network segment. A lot of novice administrators and old networks use ordinary network hubs to connect the machines on the network together. If you have ten machines on a 100 megabit network connected via a hub, then all ten machines have to share that 100 megabits. But a switch has a wide-bandwidth backplane, and routes packets only to the network segments they need to be delivered over. This means that in the same network configuration you might be able to get 500 megabits or more shared between your machines.

If you have multiple machines working together to serve up your site, then they should each have one network interface card (NIC) per direction they are communicating. For example, if you have a web server and a database server, the web server should have one NIC for incoming requests from the Internet, and another NIC to communicate with the database. But since the database has only to communicate with the web server, then it only needs one NIC. However, if you add a slave to that, then both the master and slave will need two NICs. One for communicating with each other, and one for communicating with the web server. In this way they aren't sharing bandwidth on each NIC for communication with various sources. And of course, all of these NICs should be connected together on a switched network segment.

Believe it or not, the physical length of the network cable can add latency to your network communication. In addition, each device (hub, router, switch, firewall, bridge) that the packets pass through when traveling over the network can add small amounts of latency. For general purposes (file serving) this amount of latency doesn't add up to much. However, when you're talking about database to web server communication, this little bit can make a big difference. For this reason we recommend that your database and web servers are on the same network segment in the same physical rack. This keeps the cable short, with only one device that the traffic must pass through.

This last bit of advice should go without saying, but it is often missed or overlooked, so we'll say it. Look at your networking equipment to see if there is any packet loss, routing errors, or network collisions. Sometimes one bad piece of hardware or a poor configuration can destroy a good network. Incidentally, a lot of network errors are caused in environments that mix 10 megabit and 100 megabit equipment. Auto-sensing ports aren't as reliable as they should be, so force them to a particular speed. This doesn't happen as much anymore since 10 megabit networks are mostly phased out, but as people migrate to gigabit networks it is still something to think about.

Hard Disks

Hard disks are very complicated machines these days. In addition, they're also the slowest part of any server, and the most prone to failure, so they need a lot of special attention.

If you notice that the load on your server is getting high, but you aren't serving that many requests, it may be that your hard drives are the bottleneck. It could be that some process (backups, auxiliary functions, serving lots of large files or even more small files) is using all the speed your hard drives have to give. This is sometimes called running up against spindle speed. If this is the case, disabling some processes may fix it, moving to a multi-disk RAID system may fix it, or you might need an additional machine and a load balancer.

RAID (redundant array of inexpensive disks) can be both a lifesaver and a bottleneck. RAID allows you to mirror your data across multiple disks. While this is good for data redundancy, it can also be bad for performance, because every write has to be done at least twice. Thankfully, most people use RAID 0+1 or RAID 5, both of which have a striping component. This means that your data is spread out (or striped) across multiple disks, which in turn speeds up reads significantly. So what you lose in write speed you gain back, and then some, in read speed.

Whether you're using RAID or not, the processing power of your disk controller coupled with the amount of Cache or RAM it has available can significantly impact performance. More expensive disk controllers, and especially RAID controllers come with diagnostic tools that can tell you if you're maxing out the abilities of your controller.

Your disk controller isn't the only thing with cache. The disks themselves have cache built in, as well as an on-disk controller card. These things can both affect performance. When you're buying your disks make sure you're getting the highest quality disks you can afford.

Spindle speed is another thing that can affect performance. Sometimes manufacturers (and sysadmins) will try to cheap out and use desktop drives in a server. Desktop drives spin at between 4800 and 7200 RPM. However, server drives often spin at speeds of up to 15,000 RPM, more than double the speed of their desktop counterparts.

And finally, there are two main families of hard drives in the world: the IDE family, which includes the SATA line of drives, and the SCSI family. Though there is much debate about whether the super fast SATA II drives can outperform SCSI, it has been Plain Black's experience that SCSI still wins in server environments.


Memory

Often adding some memory to a server can save you from buying a whole new server. This is especially true with memory intensive applications such as WebGUI and MySQL.

A big thing to check on is whether your server is using a lot of virtual memory. This is called “swapping”, as in the machine is swapping real memory for virtual memory on the hard drive. Virtual memory is several orders of magnitude slower than physical memory. You cannot afford to have your server using your hard disk as memory. Therefore if you notice it using virtual memory, get more RAM.

Auxiliary Server Functions

Running extra services on your server can have an adverse impact on your web site's performance. For example, if you're running an FTP server on the box, even though that is typically low in memory and processor usage, it is usually quite high in disk I/O and network I/O. Thus it's eating up valuable resources that your web site needs. The same is true for other common services. DNS and IRC are heavy on network I/O. Backup servers, log monitors, and web stats processing are heavy on disk I/O. Mail servers, especially with spam filtering, are heavy on everything.

In addition, servers often ship with processes running that you'll never use. No matter how small the amount of resources it's using, it's still using resources. For example, Red Hat Enterprise Linux often ships with gpm and atd enabled. gpm is used to give you a command line mouse pointer, and atd is an old alternative to cron, that basically no one uses anymore. Do you use either of these things? No? Then turn them off.

In general you should migrate any auxiliary processes you need off to some smaller hardware somewhere, and then disable anything you don't need.

WebGUI Modules

WebGUI is a huge system. The fact of the matter is that most people don't use even half of what WebGUI offers. However, by default all that extra stuff is getting loaded into memory anyway. By eliminating the components you don't need, you might save as much as 10MB per mod_perl process. If you have 20 processes on the machine, that's a 200MB savings, which is no small potatoes.

To eliminate WebGUI modules you need to do two things. The first is to eliminate them from all of your sites' WebGUI config files. The second is to tell mod_perl not to load them into memory.

In your WebGUI config files there are a number of directives to look at. They are:

  • authMethods

  • paymentPlugins

  • shippingPlugins

  • templateParsers

  • assets

  • utilityAssets

  • assetContainers

  • macros

  • workflowActivities

  • graphingPlugins

Each of these directives is a list of plugins that are available for use on your site. By eliminating the ones you don't use, you can save a lot of memory. For example, in authMethods, unless you're a business running an intranet you probably don't use LDAP, so you might as well remove it. If you're not running an online store you can empty out paymentPlugins and shippingPlugins. The following is an example of the default assets list and after that is a probable use assets list for most people:


"assets" : [


"assets" : [

You can see how the list is about half the size.
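The original lists weren't preserved in this copy. For illustration only, a trimmed assets directive might look like the following; the class names below are common WebGUI assets, but check them against your own WebGUI version before using this list:

```
"assets" : [
    "WebGUI::Asset::Wobject::Article",
    "WebGUI::Asset::Wobject::Layout",
    "WebGUI::Asset::Wobject::Folder",
    "WebGUI::Asset::File",
    "WebGUI::Asset::File::Image",
    "WebGUI::Asset::Snippet",
    "WebGUI::Asset::Template"
],
```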

In addition to the config file changes, you need to set up a preload.exclude file. To do that run the following commands:

cd /data/WebGUI/sbin

cp preload.exclude.example preload.exclude


Now edit preload.exclude to list all of the modules that you won't be using. An example list is provided for you. And in many situations that is good enough. At the very least it will get you a good start on reducing your memory footprint.
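The file is simply one module name per line. For example, if you had removed these two assets from your config files (both entries here are illustrative; list whatever you actually pruned):

```
WebGUI::Asset::Wobject::Matrix
WebGUI::Asset::Wobject::SQLReport
```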

After making these changes you must restart mod_perl for the changes to take effect.

WebGUI Cache

WebGUI has a built in caching system that helps speed up slower operations. The WebGUI cache can write out to the file system, or to a database. Which one performs faster depends on how big your site is. If you have a relatively small brochure-ware site, then the file system cache will outperform the database cache. However, if your site has forums, or lots of content, then the database cache will be the winner. WebGUI defaults to using the file system based cache. To switch to the database cache engine, edit your WebGUI config file and change the following directive:

"cacheType" : "WebGUI::Cache::Database",


You'll need to restart mod_perl after making this change.

If you are running multiple sites, each can have its own cache type, better suited to that particular site's needs.

If you are using multiple load balanced web servers, you must use the database cache, as the file system cache will cause errors in a load balanced environment.

WebGUI Logging

Most people don’t believe us when we tell them this, but logging can have a significant impact on your performance.

For example, did you know that each line of information written to your WebGUI log file is 16 times more expensive than each line not written? So, for example, let's say that with WARN turned on you'd log 15 lines of data over some number of requests, and bumping it up to INFO level will cause you to log an additional 15 lines. It's the same number of requests either way, but we nearly double our logging cost. Say it takes 1 millisecond (ms) to skip a line, and 16 ms to write one out to the filesystem. Then with WARN enabled our logging takes 15 × 16 + 15 × 1 = 255 ms, and at the INFO level it takes 30 × 16 = 480 ms.

Granted, we’re not talking about a lot of time here, but every last drop counts when you’re on the web. And more importantly, if you were logging at the DEBUG level, that line count might jump from 30 lines at INFO to 500 lines. That means we’re at 8000 ms, for the same number of requests.
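The arithmetic above can be checked directly. The costs (1 ms to skip a line, 16 ms to write one) are the illustrative figures from the example; note that 15 lines written plus 15 skipped comes to 255 ms:

```shell
#!/bin/sh
# Illustrative costs: skipping a line ~1 ms, writing a line ~16 ms.
echo "WARN:  $(( 15 * 16 + 15 * 1 )) ms"   # 15 lines written, 15 skipped
echo "INFO:  $(( 30 * 16 )) ms"            # all 30 lines written
echo "DEBUG: $(( 500 * 16 )) ms"           # 500 lines written
```

The write cost dominates so thoroughly that the skip cost barely registers, which is exactly why dropping the log level pays off.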

Now let’s take that example further, and write to a remote logging server or remote database server. Not only do we have the security risk that the log stream might be intercepted by hackers and used against us, but we may have just doubled or quadrupled the number of milliseconds each log write costs us.

In a production environment, it’s often best to log only what you absolutely need to log, and skip the rest. And log directly to your local hard disk, and then grab the logs later if you need to process or warehouse them.


Keywords: mod_perl MySQL performance perl reverse proxy
