General Troubleshooting

Linux OS Tuning

Too Many Open Files

You can address a "Too many open files" error by increasing the operating system's file descriptor limit.

Too many open files errors happen when a process needs to open more files than the operating system allows it. This limit is the maximum number of file descriptors the process may have open.

File descriptors are used by your OS to store information about each open file and thus facilitate communication. The OS limits how big file descriptor tables can get, which in turn constrains how many concurrent requests the server can handle.

In UNIX-like operating systems, there are system-level and process-level file descriptor limits. The system-level limit restricts how many files the system as a whole can have open, but, because LiteSpeed Web Server only uses a small number of server processes to serve all clients, LSWS can also require a higher process-level limit. Therefore, to use your server to its full potential, it is important to set both of these limits high enough.
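Before changing anything, it can help to check both limits. On Linux, a quick way to read them looks like this:
```
# Process-level limits for the current shell
ulimit -Sn                  # soft limit
ulimit -Hn                  # hard limit

# System-wide limit and current usage
cat /proc/sys/fs/file-nr    # allocated, free, maximum
cat /proc/sys/fs/file-max   # maximum
```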

If the server is started by the root user when this limit is too low, it will automatically try to adjust the limit based on the server configuration. If the server was not started by the root user, this limit has to be adjusted manually with root privileges. You may want to put the following commands into your startup scripts so the limit is set automatically after rebooting the machine.

Process-level file descriptor limits

Use the command ulimit -n to check the current process-level file descriptor limit. The output may be something like 32768.

You can raise the limit by adding a number after the command, for example:

ulimit -n 3276800

Note: In Linux, non-root users can also use ulimit -n xxxx to change the process-level limit (at least with kernel 2.4.x and later), but you need to add lines like the following to /etc/security/limits.conf to give those users permission (the first field is the user name, or * for all users):

* soft nofile 2048
* hard nofile 8192
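As mentioned above, one way to make the higher process-level limit stick across reboots is to raise it in the script that starts the server. The exact script and paths vary by installation; the snippet below is only a sketch that assumes the standard LSWS install location and an init script you control:
```
# Sketch only: raise the limit just before the server is started.
# The value 3276800 and the paths are examples; adjust for your system.
ulimit -n 3276800
/usr/local/lsws/bin/lswsctrl start
```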

System-level file descriptor limits

Setting system-level file descriptor limits is different for each system.

Linux kernels

Check the system-level limit for open files:

more /proc/sys/fs/file-max

If it looks low, increase the limit with:
```
echo 40000 > /proc/sys/fs/file-max
```

You may also need to increase `fs.nr_open`, which caps how high the process-level limit can be raised:
```
echo 3276800 > /proc/sys/fs/nr_open
```

For kernel 2.2.x, you may also need to adjust the inode limit (the maximum number of files that can be stored in the file system):
```
echo 65535 > /proc/sys/fs/inode-max
```
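The echo commands above only last until the next reboot. To make the Linux system-level values persistent, one common approach is to add them to /etc/sysctl.conf and reload (the values shown are the ones used in the examples above):
```
# Add to /etc/sysctl.conf
fs.file-max = 40000
fs.nr_open = 3276800

# Then reload the settings
sysctl -p
```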

Solaris 2.4+

Add the following lines in /etc/system:

To reset the hard file descriptor limit:
```
set rlim_fd_max = XXXX
```

To reset the soft file descriptor limit:
```
set rlim_fd_cur = XXXX
```

FreeBSD

Add the following line in /boot/loader.conf:
```
kern.maxfiles="XXXX"
```

Fix high I/O wait

In the WebAdmin Console, navigate to Server > General, and change Priority to -19. Process priority can be set between -19 and 20, with -19 being the highest priority, and 20 being the lowest.
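To confirm the change took effect, you can check the nice value of the running server processes. The process name below (litespeed) is an assumption; substitute whatever name your server processes show in ps:
```
# NI column shows the nice value; "litespeed" is an assumed process name
ps -eo pid,ni,comm | grep -i litespeed
```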

Switch to 'deadline' I/O scheduler

From the command line, use the following command (replace sda with the appropriate device):

echo "deadline" > /sys/block/sda/queue/scheduler

To make the change permanent, edit /boot/grub/menu.lst and add the kernel parameter:

elevator=deadline
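You can verify which scheduler is currently active for a device by reading the same file back; the scheduler shown in square brackets is the one in use (sda is just an example device):
```
# The scheduler in square brackets is the one currently in use
cat /sys/block/sda/queue/scheduler
# example output: noop [deadline] cfq
```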

Change VM parameters

There are two variables that control the behavior of VM flushing and allocation and affect network and disk performance:

  • vm.dirty_background_ratio
  • vm.dirty_ratio

To set these values from the command line:

echo 20 > /proc/sys/vm/dirty_background_ratio
echo 60 > /proc/sys/vm/dirty_ratio

To make them permanent, edit /etc/sysctl.conf and add the following:

vm.dirty_background_ratio = 20
vm.dirty_ratio = 60
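After editing /etc/sysctl.conf, you can load the new values and confirm them like this:
```
# Load settings from /etc/sysctl.conf
sysctl -p

# Confirm the current values
sysctl vm.dirty_background_ratio vm.dirty_ratio
```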

Increase readahead

To get the current readahead value run the following command:

$ blockdev --getra /dev/sda

To increase it to a higher value, such as 16384 (16K) sectors:

$ blockdev --setra 16384 /dev/sda
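The blockdev setting is not persistent across reboots. One simple approach, assuming your system still runs /etc/rc.local at boot, is to add the same command there (sda and 16384 are the example values from above):
```
# Add to /etc/rc.local (runs at boot on systems that support it)
blockdev --setra 16384 /dev/sda
```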

Disable access time stamp update

Edit /etc/fstab, remove the atime attribute if there is one, and add the noatime attribute. The noatime change can significantly improve your server's file I/O performance.

#sample /etc/fstab line before change
LABEL=/                 /                       ext3    defaults        1 1
#sample /etc/fstab line after noatime change
LABEL=/                 /                       ext3    defaults,noatime        1 1
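You can apply the noatime option without rebooting by remounting the filesystem, and then confirm it with mount:
```
# Remount the root filesystem with the new options
mount -o remount,noatime /

# Verify that noatime is now listed among the mount options
mount | grep ' / '
```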

Kernel Network Tuning

Add the following to /etc/sysctl.conf:

#increase local ports
net.ipv4.ip_local_port_range = 1024 65535

#reduce the number of time_wait connections
#these 3 lines can dramatically reduce your time_wait count.
#however, you should not use the following lines in a NATed configuration.
#note: tcp_tw_recycle was removed in Linux kernel 4.12 and later.
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 30

Then call sysctl to make them active:

sysctl -p
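To see whether the TIME_WAIT changes are having an effect, you can count TIME_WAIT sockets before and after applying them; both commands below are available on most modern distributions:
```
# Summary of socket states
ss -s

# Count sockets currently in TIME_WAIT (the count includes the header line)
ss -tan state time-wait | wc -l
```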

Mitigating SYN Floods

Note

For an explanation of how SYN floods work and why they are not related to your HTTP server, please see this blog article. From this point on, we will assume you understand SYN floods and the TCP handshake.

Defending against SYN floods and other TCP-level attacks is a matter of hardening your kernel. It is not something LiteSpeed Web Server or any other HTTP server can deal with. That being said, here are some simple steps for hardening your Linux kernel:

  1. Turn on syncookies
  2. Set your backlog limit
  3. Lower the number of SYN-ACK retries
  4. Apply all changes

To accomplish the first three steps, edit /etc/sysctl.conf and add the following:

net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_synack_retries = 3

Reboot the machine to apply all changes.

To apply these values immediately without rebooting, run the following commands:

echo 1 > /proc/sys/net/ipv4/tcp_syncookies
echo 2048 > /proc/sys/net/ipv4/tcp_max_syn_backlog
echo 3 > /proc/sys/net/ipv4/tcp_synack_retries

Tip

Running only the above echo commands, without also editing /etc/sysctl.conf, means the changes will be lost the next time you reboot.

Understanding all of the above

tcp_syncookies allows your system to serve more TCP connection requests. Instead of queuing each connection request and waiting for a response, the system encodes the connection information into a cookie in its SYN-ACK response and discards the original SYN entry. The ACK the client sends back contains this cookie, allowing the server to reconstruct the original entry. 1 enables this feature, 0 disables it. On some systems this setting is off by default.

tcp_max_syn_backlog tells the system when to start using syncookies. When there are more than 2,048 (or whatever number you set) pending TCP connection requests in the queue, the system starts using syncookies. Keep this number fairly high so that syncookies are not used for normal traffic. (Syncookies can be taxing on the CPU.)

tcp_synack_retries tells your system how many times to retry sending the SYN-ACK reply before giving up. The default is 5. Retries use exponential backoff, so lowering the value to 3 reduces how long a half-open connection is held from roughly a minute to roughly 15 seconds.
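If you suspect an attack is in progress, two quick checks are to count half-open (SYN-RECV) connections and to look at the kernel's SYN cookie counters; the exact counter names in the statistics output vary by kernel version:
```
# Number of half-open connections (a large number may indicate a SYN flood)
ss -n state syn-recv | wc -l

# SYN cookie activity, if any (output format varies by kernel)
netstat -s | grep -i "syn"
```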

Conntrack table full

A website that is fine during times of low traffic may become slow when traffic is high. A typical example is a download server, which can feel slow when there are many concurrent download connections. One potential cause of this is a full Linux conntrack table.

To verify this is the case, run the following command:

dmesg | tail

If the conntrack table is full, you will see output like this:

nf_conntrack: table full, dropping packet.
nf_conntrack: table full, dropping packet.
nf_conntrack: table full, dropping packet.

You can get more information with these two commands:

sysctl -a | grep conntrack
...
net.netfilter.nf_conntrack_max = 65536
net.netfilter.nf_conntrack_count = 68999
net.netfilter.nf_conntrack_buckets = 16384
...
cat /sys/module/nf_conntrack/parameters/hashsize
16384

If nf_conntrack_count is close to or exceeds nf_conntrack_max, you will have a problem.

To temporarily address the problem, run the following:

sysctl -w net.netfilter.nf_conntrack_max=655360
echo 163840 > /sys/module/nf_conntrack/parameters/hashsize
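After raising the limits, you can watch the count against the new maximum to confirm there is enough headroom:
```
# Current usage vs. the configured maximum
sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max

# Confirm the new hash table size
cat /sys/module/nf_conntrack/parameters/hashsize
```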

To make this change permanent you'll need to edit two files, and then restart the server:

  1. Edit /etc/sysctl.conf, and add the following line:
    net.netfilter.nf_conntrack_max=655360
    
  2. Edit /etc/rc.local, and add following line:
    echo 163840 > /sys/module/nf_conntrack/parameters/hashsize
    
  3. Reboot the server to apply the changes.