Inserting a large amount of data with CQL statements reports a "Connection reset by peer" error
Posted 4 years ago · Author: liruonan · 3318 views · From: Q&A

I wrote a program that imports roughly 2,000 rows of data into a Neo4j database. The data is bank transaction records, so those 2,000 rows produce only a few hundred nodes but about 2,000 relationships, some containing Chinese characters. When I then query the data in real time, the Neo4j log frequently reports two errors: 1) a connection reset, "Connection reset by peer Connection reset by peer"; 2) "Discarded stale query from the query cache after 536 seconds". The data was imported with CQL (Cypher) statements. What causes these errors? (Is it that Cypher cannot handle this many inserts? Or that the data cannot be queried in real time right after importing? If Cypher cannot support that many inserts followed by real-time queries, should I use LOAD CSV instead?) Thanks in advance for any help!

Error log from debug.log:

2018-07-13 08:27:49.433+0000 ERROR [o.n.b.t.SocketTransportHandler] Fatal error occurred when handling a client connection: Connection reset by peer Connection reset by peer
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:288)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1108)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:345)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:126)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
    at java.lang.Thread.run(Thread.java:748)
2018-07-13 08:30:31.181+0000 INFO [o.n.c.i.ExecutionEngine] Discarded stale query from the query cache after 536 seconds: MATCH p=(n:Persons)-[r:Detail]->(m:Persons) where r.file_id={fileId} and r.anasyle_id={anasyleId} RETURN p
2018-07-13 08:30:31.182+0000 INFO [o.n.c.i.CommunityCompatibilityFactory] Discarded stale query from the query cache after 536 seconds: MATCH p=(n:Persons)-[r:Detail]->(m:Persons) where r.file_id={fileId} and r.anasyle_id={anasyleId} RETURN p
2018-07-13 08:30:32.969+0000 INFO [o.n.c.i.ExecutionEngine] Discarded stale query from the query cache after 538 seconds: MATCH p=(n:Persons)-[r:Detail]->(m:Persons) WHERE n.file_id={fileId} and n.anasyle_id = '1' RETURN p
2018-07-13 08:30:32.969+0000 INFO [o.n.c.i.CommunityCompatibilityFactory] Discarded stale query from the query cache after 538 seconds: MATCH p=(n:Persons)-[r:Detail]->(m:Persons) WHERE n.file_id={fileId} and n.anasyle_id = '1' RETURN p
2018-07-13 08:39:16.221+0000 INFO [o.n.k.i.t.l.c.CheckPointerImpl] Check Pointing triggered by scheduler for time threshold [114358]: Starting check pointing…
2018-07-13 08:39:16.221+0000 INFO [o.n.k.i.t.l.c.CheckPointerImpl] Check Pointing triggered by scheduler for time threshold [114358]: Starting store flush…
2018-07-13 08:39:16.228+0000 INFO [o.n.k.i.s.c.CountsTracker] About to rotate counts store at transaction 114358 to [/opt/moudles/neo4j/neo4j-community-3.2.10/data/databases/opt/moudles/neo4j/data/graph.db/neostore.counts.db.b], from [/opt/moudles/neo4j/neo4j-community-3.2.10/data/databases/opt/moudles/neo4j/data/graph.db/neostore.counts.db.a].

5 replies

Are you connecting to the database with the Java driver?

The data is inserted over a Python connection, uri = "bolt://192.168.50.138/7687"

Only one driver instance should be created globally. Are you creating a new driver instance every time you execute a Cypher statement?

Only one driver is created, and it is called directly each time it is used:

uri = "bolt://192.168.50.138/7687"
driver = GraphDatabase.driver(uri, auth=("neo4j", "123456"))
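As an aside, a standard Bolt URI separates host and port with a colon (bolt://192.168.50.138:7687); the slash before the port in the snippet above is likely a typo. The "create the driver once globally" advice can be sketched as a lazily initialized module-level singleton. This is only an illustration: the URI, credentials, and the `factory` hook are assumptions, not the poster's verified setup.

```python
import atexit

_driver = None  # module-level singleton, shared by every query


def get_driver(factory=None):
    """Return the one shared driver, creating it on first call only.

    `factory` defaults to connecting with the neo4j package using the
    URI/credentials quoted in this thread (an assumption); passing a
    custom factory makes the function testable without a live server.
    """
    global _driver
    if _driver is None:
        if factory is None:
            # Deferred import so the module loads even without the package.
            from neo4j import GraphDatabase
            factory = lambda: GraphDatabase.driver(
                "bolt://192.168.50.138:7687", auth=("neo4j", "123456"))
        _driver = factory()
        atexit.register(_driver.close)  # close once, at process exit
    return _driver
```

Each query then calls get_driver().session() rather than constructing a new GraphDatabase.driver(...), so the process keeps one Bolt connection pool instead of paying a TCP handshake per statement.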

a_object = st_df[st_df['roleName'].str.contains('A')]  # select the rows whose roleName is A
sf = merge(a_object)
str3 = sf[0]
dict_a = sf[1]
df_b = sf[2]
df3_a = df3[df_b]
for i in range(0, len(df3_a)):
    with driver.session() as session:
        dd = {}
        for k, v in dict_a.items():
            dd[k] = eval(v)
        dd['file_id'] = file_id
        dd['anasyle_id'] = anasyle_id
        session.write_transaction(lambda tx: tx.run(str3, dd))

Create one session outside the loop, and inside the loop create a transaction to run the Cypher query, calling transaction.success() each time to commit. A further optimization is to collect the updates in a list, unwind that list inside the Cypher statement, and update the database in batches.
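The batching advice above can be sketched as follows, assuming the neo4j Python driver. The Cypher statement, labels (Persons, Detail), and row fields (src, dst, file_id, anasyle_id) are illustrative guesses pieced together from the queries in the log, not the poster's actual import statement.

```python
# One Cypher statement consumes a whole batch: UNWIND expands the $rows
# parameter into individual records, so ~2000 relationships need only a
# handful of transactions instead of one transaction per row.
BATCH_CYPHER = """
UNWIND $rows AS row
MERGE (a:Persons {name: row.src})
MERGE (b:Persons {name: row.dst})
CREATE (a)-[:Detail {file_id: row.file_id, anasyle_id: row.anasyle_id}]->(b)
"""


def chunked(rows, size=500):
    """Yield successive batches of at most `size` row dicts."""
    for i in range(0, len(rows), size):
        yield rows[i:i + size]


def import_rows(driver, rows, batch_size=500):
    # One session for the whole import; one write transaction per batch.
    # write_transaction commits automatically when the function returns,
    # so no explicit success()/commit call is needed with this driver.
    with driver.session() as session:
        for batch in chunked(rows, batch_size):
            session.write_transaction(
                lambda tx, b=batch: tx.run(BATCH_CYPHER, rows=b))
```

With the numbers from this thread, a batch size of 500 turns roughly 2,000 single-row transactions into four, cutting both commit overhead and Bolt round trips.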
