
The kernel says it can support up to 32768 process IDs (/proc/sys/kernel/pid_max), but how many processes can my server actually handle simultaneously without running out of resources or hanging?

I know it depends on each process's behavior and resource needs, but is there some kind of equation with parameters like RAM, cache, CPU cores, etc.?

Edit:

My server is hosted on Linode with the following specifications:

RAM: 12 GB
CPU: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
cpu MHz     : 2499.970
cache size  : 4096 KB
Cores: 6 cores

My server runs some old software versions for my application:

Apache 2.2 
mysql 5.5
php 5.3
php5-fpm
MohammedSimba
  • How are we supposed to answer this?! It highly depends on your hardware, on the software you use, and on tweaks. Heck, it might even depend on the outside temperature... if your fans are running at full speed, that costs processing power. – Rinzwind Feb 12 '17 at 17:32
  • I agree with the other answers and comments. If you want to know for your particular use case, just try it. You might get a message similar to `fork: retry: Resource temporarily unavailable` or `Cannot fork` or ... once your system has exhausted all available resources. My system seems to do this around 12479 tasks (for sudo or normal). – Doug Smythies Feb 12 '17 at 19:52
  • @Rinzwind I just thought it would be something simple, like some kind of equation between RAM, cache, cores, etc. that would lead me to the max number. I just edited my question with my server specs. Thanks in advance. – MohammedSimba Feb 13 '17 at 12:35
  • Oh. A hosted server might well have a much lower imposed limit. Have a look at `ulimit -a`. – Doug Smythies Feb 13 '17 at 15:03
  • I ran it as root, and among the results was `max user processes (-u) 7982`. Does that mean that's the max for the root user only? – MohammedSimba Feb 14 '17 at 17:17

3 Answers


There is a formula for calculating the maximum number of active PIDs or threads. Excerpt from kernel/fork.c:

/*
 * set_max_threads
 */
static void set_max_threads(unsigned int max_threads_suggested)
{
    u64 threads;

    /*
     * The number of threads shall be limited such that the thread
     * structures may only consume a small part of the available memory.
     */
    if (fls64(totalram_pages) + fls64(PAGE_SIZE) > 64)
            threads = MAX_THREADS;
    else
            threads = div64_u64((u64) totalram_pages * (u64) PAGE_SIZE,
                                (u64) THREAD_SIZE * 8UL);

    if (threads > max_threads_suggested)
            threads = max_threads_suggested;

    max_threads = clamp_t(u64, threads, MIN_THREADS, MAX_THREADS);
}
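
To make that formula concrete, here is a small stand-alone sketch that applies it to the 12 GB server from the question. The PAGE_SIZE and THREAD_SIZE values are assumptions for a typical x86-64 kernel (4 KiB pages, 16 KiB kernel stacks), not values read from the actual machine:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Assumed values for a typical x86-64 kernel: */
    const uint64_t page_size   = 4096;        /* PAGE_SIZE: 4 KiB pages */
    const uint64_t thread_size = 16 * 1024;   /* THREAD_SIZE: 16 KiB kernel stacks */
    const uint64_t total_ram   = 12ULL << 30; /* the 12 GB server from the question */
    const uint64_t total_pages = total_ram / page_size;

    /* threads = totalram_pages * PAGE_SIZE / (THREAD_SIZE * 8) */
    uint64_t threads = total_pages * page_size / (thread_size * 8);

    printf("max_threads from RAM alone: %llu\n",
           (unsigned long long)threads);      /* prints 98304 */
    return 0;
}

On those assumptions the cap works out to 98304, i.e. the thread structures may consume at most 1/8 of RAM; note that the default pid_max of 32768 would still be reached before this RAM-based cap on that machine.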

However, other limits will normally be hit first. If RAM and the other resources hold out, then a cgroup limit will likely be hit first, where basically the limit is:

$ cat /sys/fs/cgroup/pids/user.slice/user-1000.slice/pids.max
12288

The number, 12288, is the same on both my older 3 GB server and my newer 16 GB server.
I can test this by trying to spawn more than the maximum number of processes, which results in a message in /var/log/kern.log:

Feb 12 15:49:11 s15 kernel: [  135.742278] cgroup: fork rejected by pids controller in /user.slice/user-1000.slice

And checking the number I had at the time:

$ cat /sys/fs/cgroup/pids/user.slice/user-1000.slice/pids.current
12287

top reported about 12479 tasks.

But after those processes ended, I got:

$ cat /sys/fs/cgroup/pids/user.slice/pids.current
15

top reported about 205 tasks, and note: 12479 - 205 + 15 = 12289
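
For reference, here is a minimal fork-until-failure sketch in the spirit of that test (a hypothetical test program, not the exact one used above). It forks children that idle for a while and counts how many succeed before fork() starts failing:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <errno.h>
#include <sys/wait.h>

int main(void)
{
    long count = 0;

    for (;;) {
        pid_t pid = fork();
        if (pid == 0) {        /* child: idle for a while, then exit */
            sleep(30);
            _exit(0);
        }
        if (pid < 0) {         /* fork failed: some limit was hit */
            printf("fork failed after %ld children: %s\n",
                   count, strerror(errno));
            break;
        }
        count++;
    }

    while (wait(NULL) > 0)     /* reap all the children */
        ;
    return 0;
}

Run it as an unprivileged user and expect the session to be sluggish while the children are alive; the failure count should land near your pids.max or `ulimit -u` value, whichever is lower.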

Doug Smythies

The answer could be in the thousands, in the hundreds, or in the tens. It depends on the resources of your computer and what the processes are actually doing.

The best thing you can do is run your server, monitor its resource usage, and add resources based on what you observe.

Capacity largely depends on the speed and RAM of the computer. RAM allows more processes to be held in memory, while speed allows each task to be processed quickly before moving on to the next process.

If your system's load is 1.00 (which can be checked by running top from the command line), then it's basically running at full capacity. Anything over that is a backlog the computer is working to catch up on. If the load gets too high, it could take so much time to catch up that the machine becomes effectively locked up.

By the way, the 1.00 is per processor core. So if you have a 4-core processor, the load would be full at 4.00.

Take a look at this article for more details on the load:
http://blog.scoutapp.com/articles/2009/07/31/understanding-load-averages

So you would have to actually study the load with a tool such as top to gauge the resources you will need for the type of server and traffic you have in mind.
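
As a starting point, here is a small sketch that reads the 1-minute load average with glibc's getloadavg() and compares it to the number of online cores, applying the per-core rule of thumb above:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    double load[3];

    if (getloadavg(load, 3) < 0) {   /* 1-, 5-, and 15-minute averages */
        perror("getloadavg");
        return 1;
    }

    long cores = sysconf(_SC_NPROCESSORS_ONLN);
    printf("1-min load %.2f on %ld cores: %s\n",
           load[0], cores,
           load[0] > (double)cores ? "saturated (backlog)" : "headroom left");
    return 0;
}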

L. D. James

There is no answer other than: it depends.

What does the process do? Sleep, waiting for something while consuming no RAM? Then you can run 32768 processes. Does it crunch huge database tables? Then far fewer.

Furthermore, it depends on your hardware. A quad-socket machine with 10-core Xeons will handle a higher load than a Raspberry Pi...

My laptop has 266 processes, and a load of 0.66, indicating most are sleeping.

vidarlo
  • I understand it highly depends :( I thought there would be an equation for tuning the server's performance against the number of processes. Thanks for the hint; I will keep it in mind. – MohammedSimba Feb 13 '17 at 12:56
  • No, because it entirely depends on the workload and hardware. If you have 1 GB of RAM and each process consumes 1 MB of memory, you'll hit the roof at 1000 processes or so, performance-wise. If it's a simple program calculating digits of pi, requiring 10 kB of RAM, the same amount of RAM will allow more processes, etc. – vidarlo Feb 13 '17 at 15:07