From time to time I'm experiencing what I think is PHP running out of memory, but it doesn't appear to be using that much (or I'm having trouble reading the log). I'll post the dmesg output in the next post, as I seem to be hitting the 10,000 character limit per post. Here is the error I'm seeing in dmesg:

Code:
[Thu Feb 16 04:13:29 2017] Out of memory: Kill process 26552 (php) score 1 or sacrifice child
[Thu Feb 16 04:13:29 2017] Killed process 26552 (php) total-vm:513048kB, anon-rss:26252kB, file-rss:15544kB

I have a 10GB RAM Xen VM running Ubuntu Server 16.04.1. The only things running when this occurs are PocketMine and occasionally a few simple management shell scripts I wrote (so maybe gzip), so it should have far more RAM than necessary.

I built PHP using the latest version of this script (as of a couple of weeks ago):
https://raw.githubusercontent.com/pmmp/php-build-scripts/master/compile.sh

I'm building PMMP from source using DevTool, but this has happened on several builds over the last week. I'm using the default php.ini as built by the compile.sh script:

Code:
date.timezone=PST
short_open_tag=0
asp_tags=0
phar.readonly=0
phar.require_hash=1
zend.assertions=-1
;zend_extension=xdebug.so
zend_extension=opcache.so
opcache.enable=1
opcache.enable_cli=1
opcache.save_comments=1
opcache.fast_shutdown=0
opcache.max_accelerated_files=4096
opcache.interned_strings_buffer=8
opcache.memory_consumption=128
opcache.optimization_level=0xffffffff

Is something configured wrongly here? Is it opcache.memory_consumption? I'm probably missing something obvious, so I was hoping someone else looking at it might have a suggestion. Thanks.
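In case it's relevant: opcache_get_status() can report how much of that 128MB opcache.memory_consumption buffer is actually in use. A rough way to peek at it from the shell (just a sketch; note the caveat in the comments):

Code:
# Dump opcache's memory usage; the "false" argument skips per-script details.
# Caveat: with opcache.enable_cli=1 the CLI spins up its own opcache
# instance, so this reflects the invoking process, not the running server.
php -r 'print_r(opcache_get_status(false)["memory_usage"]);'

Even fully exhausted, though, that buffer is a fixed 128MB shared allocation, which shouldn't come anywhere near exhausting 10GB.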
Below is example output I'm seeing from dmesg -T:

Spoiler:
Code:
[Thu Feb 16 04:13:29 2017] php invoked oom-killer: gfp_mask=0x26000c0, order=2, oom_score_adj=0
[Thu Feb 16 04:13:29 2017] php cpuset=/ mems_allowed=0
[Thu Feb 16 04:13:29 2017] CPU: 1 PID: 26552 Comm: php Not tainted 4.4.0-62-generic #83-Ubuntu
[Thu Feb 16 04:13:29 2017] Hardware name: Xen HVM domU, BIOS 4.6.1-xs133690 12/15/2016
[Thu Feb 16 04:13:29 2017] 0000000000000286 000000002e6f7bda ffff8800e5a47af0 ffffffff813f7c63
[Thu Feb 16 04:13:29 2017] ffff8800e5a47cc8 ffff8804f1308000 ffff8800e5a47b60 ffffffff8120ad4e
[Thu Feb 16 04:13:29 2017] ffffffff81cd2d7f 0000000000000000 ffffffff81e67760 0000000000000206
[Thu Feb 16 04:13:29 2017] Call Trace:
[Thu Feb 16 04:13:29 2017] [<ffffffff813f7c63>] dump_stack+0x63/0x90
[Thu Feb 16 04:13:29 2017] [<ffffffff8120ad4e>] dump_header+0x5a/0x1c5
[Thu Feb 16 04:13:29 2017] [<ffffffff811926c2>] oom_kill_process+0x202/0x3c0
[Thu Feb 16 04:13:29 2017] [<ffffffff81192ae9>] out_of_memory+0x219/0x460
[Thu Feb 16 04:13:29 2017] [<ffffffff81198a5d>] __alloc_pages_slowpath.constprop.88+0x8fd/0xa70
[Thu Feb 16 04:13:29 2017] [<ffffffff81198e56>] __alloc_pages_nodemask+0x286/0x2a0
[Thu Feb 16 04:13:29 2017] [<ffffffff81198f0b>] alloc_kmem_pages_node+0x4b/0xc0
[Thu Feb 16 04:13:29 2017] [<ffffffff8107ea5e>] copy_process+0x1be/0x1b70
[Thu Feb 16 04:13:29 2017] [<ffffffff811c163d>] ? handle_mm_fault+0xcbd/0x1820
[Thu Feb 16 04:13:29 2017] [<ffffffff811c739f>] ? __split_vma.isra.31+0x1cf/0x1f0
[Thu Feb 16 04:13:29 2017] [<ffffffff810805a0>] _do_fork+0x80/0x360
[Thu Feb 16 04:13:29 2017] [<ffffffff81080929>] SyS_clone+0x19/0x20
[Thu Feb 16 04:13:29 2017] [<ffffffff818385f2>] entry_SYSCALL_64_fastpath+0x16/0x71
[Thu Feb 16 04:13:29 2017] Mem-Info:
[Thu Feb 16 04:13:29 2017] active_anon:7766 inactive_anon:9229 isolated_anon:0 active_file:440517 inactive_file:3023386 isolated_file:0 unevictable:914 dirty:70840 writeback:0 unstable:0 slab_reclaimable:1594746 slab_unreclaimable:5641 mapped:9569 shmem:6422 pagetables:642 bounce:0 free:34315 free_pcp:0 free_cma:0
[Thu Feb 16 04:13:29 2017] Node 0 DMA free:15872kB min:12kB low:12kB high:16kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15988kB managed:15904kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:32kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
[Thu Feb 16 04:13:29 2017] lowmem_reserve[]: 0 3745 20054 20054 20054
[Thu Feb 16 04:13:29 2017] Node 0 DMA32 free:70600kB min:3372kB low:4212kB high:5056kB active_anon:7436kB inactive_anon:12856kB active_file:317116kB inactive_file:1758716kB unevictable:16kB isolated(anon):0kB isolated(file):0kB present:3915776kB managed:3835152kB mlocked:16kB dirty:35360kB writeback:0kB mapped:15160kB shmem:7948kB slab_reclaimable:1656408kB slab_unreclaimable:2476kB kernel_stack:688kB pagetables:452kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:4 all_unreclaimable? no
[Thu Feb 16 04:13:29 2017] lowmem_reserve[]: 0 0 16309 16309 16309
[Thu Feb 16 04:13:29 2017] Node 0 Normal free:50788kB min:14696kB low:18368kB high:22044kB active_anon:23628kB inactive_anon:24060kB active_file:1444952kB inactive_file:10334828kB unevictable:3640kB isolated(anon):0kB isolated(file):0kB present:17162240kB managed:16701780kB mlocked:3640kB dirty:248000kB writeback:0kB mapped:23116kB shmem:17740kB slab_reclaimable:4722576kB slab_unreclaimable:20056kB kernel_stack:2720kB pagetables:2116kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:128 all_unreclaimable? no
[Thu Feb 16 04:13:29 2017] lowmem_reserve[]: 0 0 0 0 0
[Thu Feb 16 04:13:29 2017] Node 0 DMA: 0*4kB 0*8kB 0*16kB 0*32kB 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15872kB
[Thu Feb 16 04:13:29 2017] Node 0 DMA32: 1774*4kB (UME) 7961*8kB (UME) 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 70784kB
[Thu Feb 16 04:13:29 2017] Node 0 Normal: 12692*4kB (UME) 11*8kB (ME) 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 50856kB
[Thu Feb 16 04:13:29 2017] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[Thu Feb 16 04:13:29 2017] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[Thu Feb 16 04:13:29 2017] 3471474 total pagecache pages
[Thu Feb 16 04:13:29 2017] 494 pages in swap cache
[Thu Feb 16 04:13:29 2017] Swap cache stats: add 35879, delete 35385, find 197145/206331
[Thu Feb 16 04:13:29 2017] Free swap = 10581520kB
[Thu Feb 16 04:13:29 2017] Total swap = 10608636kB
[Thu Feb 16 04:13:29 2017] 5273501 pages RAM
[Thu Feb 16 04:13:29 2017] 0 pages HighMem/MovableOnly
[Thu Feb 16 04:13:29 2017] 135292 pages reserved
[Thu Feb 16 04:13:29 2017] 0 pages cma reserved
[Thu Feb 16 04:13:29 2017] 0 pages hwpoisoned
[Thu Feb 16 04:13:29 2017] [ pid ] uid tgid total_vm rss nr_ptes nr_pmds swapents oom_score_adj name
[Thu Feb 16 04:13:29 2017] [ 422] 0 422 12915 4671 29 3 36 0 systemd-journal
[Thu Feb 16 04:13:29 2017] [ 482] 0 482 25742 345 18 3 38 0 lvmetad
[Thu Feb 16 04:13:29 2017] [ 489] 0 489 11174 559 21 3 258 -1000 systemd-udevd
[Thu Feb 16 04:13:29 2017] [ 907] 100 907 25081 425 19 3 26 0 systemd-timesyn
[Thu Feb 16 04:13:29 2017] [ 1038] 0 1038 151028 922 29 4 19 0 lxcfs
[Thu Feb 16 04:13:29 2017] [ 1042] 0 1042 69027 1199 37 3 94 0 accounts-daemon
[Thu Feb 16 04:13:29 2017] [ 1046] 0 1046 70380 725 30 6 1063 0 snapd
[Thu Feb 16 04:13:29 2017] [ 1052] 0 1052 1100 313 7 3 2 0 acpid
[Thu Feb 16 04:13:29 2017] [ 1054] 107 1054 10760 544 26 3 86 -900 dbus-daemon
[Thu Feb 16 04:13:29 2017] [ 1124] 0 1124 6511 354 18 4 43 0 atd
[Thu Feb 16 04:13:29 2017] [ 1129] 0 1129 7252 453 20 3 49 0 cron
[Thu Feb 16 04:13:29 2017] [ 1132] 0 1132 7348 627 18 3 240 0 systemd-logind
[Thu Feb 16 04:13:29 2017] [ 1136] 104 1136 64099 562 27 3 190 0 rsyslogd
[Thu Feb 16 04:13:29 2017] [ 1158] 0 1158 69272 1010 38 3 126 0 polkitd
[Thu Feb 16 04:13:29 2017] [ 1193] 0 1193 3344 140 11 3 21 0 mdadm
[Thu Feb 16 04:13:29 2017] [ 1200] 0 1200 16380 749 35 3 133 -1000 sshd
[Thu Feb 16 04:13:29 2017] [ 1215] 0 1215 1306 30 7 3 0 0 iscsid
[Thu Feb 16 04:13:29 2017] [ 1216] 0 1216 1431 877 7 3 0 -17 iscsid
[Thu Feb 16 04:13:29 2017] [ 1266] 0 1266 4868 417 14 3 10 0 irqbalance
[Thu Feb 16 04:13:29 2017] [ 1267] 0 1267 3939 366 13 3 24 0 agetty
[Thu Feb 16 04:13:29 2017] [ 1272] 0 1272 3985 344 13 3 0 0 agetty
[Thu Feb 16 04:13:29 2017] [ 1287] 1000 1287 11345 645 27 3 250 0 systemd
[Thu Feb 16 04:13:29 2017] [ 1291] 1000 1291 15296 69 32 3 409 0 (sd-pam)
[Thu Feb 16 04:13:29 2017] [25018] 1000 25018 6872 796 16 3 0 0 screen
[Thu Feb 16 04:13:29 2017] [25020] 1000 25020 3136 747 11 3 0 0 bash
[Thu Feb 16 04:13:29 2017] [26523] 1000 26523 3133 753 12 3 0 0 bash
[Thu Feb 16 04:13:29 2017] [26526] 1000 26526 7323 748 20 3 0 0 tar
[Thu Feb 16 04:13:29 2017] [26534] 1000 26534 1159 392 7 3 0 0 gzip
[Thu Feb 16 04:13:29 2017] [26548] 1000 26548 6797 641 16 3 0 0 screen
[Thu Feb 16 04:13:29 2017] [26549] 1000 26549 1823 171 10 3 0 0 sleep
[Thu Feb 16 04:13:29 2017] [26550] 1000 26550 3136 752 11 3 0 0 bash
[Thu Feb 16 04:13:29 2017] [26552] 1000 26552 128262 10449 54 4 0 0 php
[Thu Feb 16 04:13:29 2017] Out of memory: Kill process 26552 (php) score 1 or sacrifice child
[Thu Feb 16 04:13:29 2017] Killed process 26552 (php) total-vm:513048kB, anon-rss:26252kB, file-rss:15544kB
No, definitely not. This does not happen in my test environment, only on the production server with many plugins and a 20GB world, and only after several days of running with PMMP restarting every hour. I have between 30 and 60 players online. I'm not posting this as a bug report; I'm just asking for advice from other server admins who might recognize a problem in my PHP config. Specifically: why is this happening when I seemingly have plenty of RAM, and, if I'm reading dmesg right, PHP isn't even using that much?

Code:
Out of memory: Kill process 26552 (php) score 1 or sacrifice child
I see. I've had PHP out-of-memory errors before when there shouldn't have been any, but it was always either corrupt chunks or faulty plugins. Is there anything unusual in the logs before it happens, or any pattern?
No pattern I can detect in syslog or the PMMP log. When you look at the dmesg log I posted above (behind the spoiler), does it look like PHP is actually out of memory, though? I'm not sure I'm calculating correctly, but it doesn't look like it's using much RAM at all when it gets killed.
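For reference, here's the arithmetic I'm doing (a sketch, assuming 4 kB pages on x86-64; the total_vm and rss columns in the task dump are in pages):

Code:
# pid 26552 row from the task dump above, converting pages to kB
echo $(( 128262 * 4 ))  # total_vm -> 513048 kB (~501 MB virtual)
echo $(( 10449 * 4 ))   # rss      -> 41796 kB  (~41 MB resident)

That also matches the kill line: anon-rss 26252 kB + file-rss 15544 kB = 41796 kB, i.e. only about 41 MB actually resident when it was killed.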
I have a custom bash loop script that runs the server itself. I then issue a "stop" command over RCON at the appropriate time from another bash script run by cron. During a PM restart, PHP and PocketMine are completely absent from ps, and there's a pause of several seconds. I do not restart the VM itself very often.
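In outline it's something like this (a simplified sketch of my setup; the path, port, and password are placeholders, and mcrcon is just one example of an RCON client):

Code:
#!/bin/bash
# Restart loop: when PocketMine exits (e.g. after the RCON "stop"),
# pause a few seconds and launch it again.
cd /home/user/pocketmine || exit 1   # placeholder path
while true; do
    ./start.sh   # stock PocketMine launcher
    sleep 5      # the pause seen between restarts
done

# Separately, an hourly cron entry sends the stop command, e.g.:
# 0 * * * * mcrcon -H 127.0.0.1 -P 25575 -p password "stop"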
Well, that's probably not the problem. I think it may be a plugin which managed to get into some kind of loop.
I agree with robske_110: your php.ini looks fine, and it's the sort of thing that happens when, for example, someone creates a PurePerms rank with an inheritance loop that only goes haywire when a player with that rank tries to join.
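Something like this hypothetical groups.yml excerpt (group names invented for illustration; if I recall the config layout correctly, each group lists the groups it inherits from):

Code:
# Hypothetical example: Mod inherits from VIP and VIP inherits from Mod,
# so resolving a member's permissions can loop forever.
Mod:
  inheritance:
    - VIP
VIP:
  inheritance:
    - Mod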