Everyone knows about ulimit to show and set the current shell's maximum number of open files (among other things), but what if you want to know the total number of open files on the whole system? I thought I could find the global number of open files by running lsof | wc -l, but that number also includes open library files, so you end up seeing a lot more open "files" than there really are. These open library files do not use the kernel's allocated file handles, so you probably don't care about them if you are troubleshooting a "too many open files" error.
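
If you want a rough count of real descriptors from lsof itself, one option is to keep only the rows whose FD column is numeric, which drops the mem/cwd/txt mapping entries. This is just a sketch: the FD column is normally the fourth field, but the exact layout can vary between lsof versions and options:

# lsof 2> /dev/null | awk '$4 ~ /^[0-9]/' | wc -l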

To see how many of the kernel's file handles are actually in use, you can look at /proc/sys/fs/file-nr, which looks something like this:

# cat /proc/sys/fs/file-nr
16096   0   1620140

The first number shows how many of the kernel's file handles are in use, the second number isn't used any more*, and the third number is the maximum number of file handles that can be allocated. This third number is (or should be) the same as the configurable value in /proc/sys/fs/file-max.
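
If you just want a quick usage summary, a one-liner like this (a sketch, using only the three fields described above) prints the allocated count against the maximum; with the numbers from the example above it reports roughly 1% in use:

# awk '{printf "allocated: %s  free: %s  max: %s  (%.1f%% of max in use)\n", $1, $2, $3, $1 * 100 / $3}' /proc/sys/fs/file-nr
allocated: 16096  free: 0  max: 1620140  (1.0% of max in use)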

Depending on what you are running, the difference between lsof's count and the real number of file handles in use can be very large. At the moment on my server, the number of open files reported by lsof is over 10 times what is really open:

# lsof 2> /dev/null | wc -l && cat /proc/sys/fs/file-nr
217918
15744   0   1620140
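
When you are actually chasing a "too many open files" error, it usually helps to see which processes hold the most descriptors. One way to get that (a rough sketch; run it as root so every /proc/<pid>/fd directory is readable) is to count the entries under each process's fd directory and sort the results:

# for p in /proc/[0-9]*/fd; do printf '%s %s\n' "$(ls "$p" 2> /dev/null | wc -l)" "${p%/fd}"; done | sort -rn | head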

* The second value in /proc/sys/fs/file-nr shows the number of allocated file handles that are currently unused. It is left over from pre-2.6 kernels, where file handles were allocated dynamically but never freed; unused handles were kept on a free list, and this field reported the size of that list. Since kernel 2.6 it always reads 0.
