neo4j keeps using more and more memory, what the hell is wrong with it?
Posted 1 month ago by kingofneo4j, 100 views, in Q&A

Recently I have been incrementally loading large amounts of data into neo4j, but my memory is limited, so I set this parameter in neo4j.conf: dbms.memory.pagecache.size=7g. That is, the largest page cache I allow is 7g. But when I ran the following command, something strange happened:

[zhengguanlong@pro3 bin]$ ./neo4j-admin memrec --database=graph.db

Memory settings recommendation from neo4j-admin memrec:

Assuming the system is dedicated to running Neo4j and has 32000m of memory,
we recommend a heap size of around 11900m, and a page cache of around 12100m,
and that about 8000m is left for the operating system, and the native memory
needed by Lucene and Netty.

Tip: If the indexing storage use is high, e.g. there are many indexes or most
data indexed, then it might advantageous to leave more memory for the
operating system.

Tip: The more concurrent transactions your workload has and the more updates
they do, the more heap memory you will need. However, don't allocate more
than 31g of heap, since this will disable pointer compression, also known as
"compressed oops", in the JVM and make less effective use of the heap.

Tip: Setting the initial and the max heap size to the same value means the
JVM will never need to change the heap size. Changing the heap size otherwise
involves a full GC, which is desirable to avoid.

Based on the above, the following memory settings are recommended:

dbms.memory.heap.initial_size=11900m
dbms.memory.heap.max_size=11900m
dbms.memory.pagecache.size=12100m

The numbers below have been derived based on your current data volume in database and index configuration of database 'graph.db'.
They can be used as an input into more detailed memory analysis.

Lucene indexes: 430.0
Data volume and native indexes: 19600m

From the last line it looks like the page cache is already taking up more than 19g, and that number keeps growing. At this rate, isn't my memory going to blow up? So, what the hell is wrong with this? Or is my understanding wrong? I configured it the way the official recommendation says. Any expert out there, help!!!
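(For reference, a minimal neo4j.conf sketch for a memory-constrained box is below. The config keys are the same ones memrec prints above; the 7g page cache is the cap already used in the question, while the 8g heap figures are only illustrative assumptions for a machine that also has to leave RAM for the OS, Lucene and Netty, not a memrec recommendation.)

    # neo4j.conf (sketch only): pin the heap and cap the page cache explicitly,
    # so neither the JVM heap nor the page cache can grow past a known budget.
    # Heap values below are illustrative; the page cache cap is the poster's own.
    dbms.memory.heap.initial_size=8g
    dbms.memory.heap.max_size=8g
    dbms.memory.pagecache.size=7g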

2 replies

"Data volume and native indexes" does not mean the page cache. It means your data and indexes take up 19g of disk space!!!
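In other words, the 19600m in the memrec output is the on-disk size of the store files and native indexes, while the page cache itself stays bounded by dbms.memory.pagecache.size (7g here); it simply cannot hold the whole store at once. A quick way to confirm that the figure is disk usage is a check like the sketch below, which assumes the default data directory layout; adjust the path if dbms.directories.data is customised.

    # Size of the store on disk; this should be in the same ballpark as the
    # "Data volume and native indexes" figure printed by memrec.
    du -sh $NEO4J_HOME/data/databases/graph.db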

Oh, I think I get it now, thanks.
