Must I hack the protobuf jar?

Date: 2016-10-10 07:05:40

Tags: hadoop hdfs hadoop2

1. My NameNode log keeps printing this error: java.io.IOException: Requested data length 113675682 is longer than maximum configured RPC length 67108864. RPC came from 172.16.xxx.xxx

The DataNode logs: Unsuccessfully sent block report 0x706cd6d00df0effe, containing 1 storage report(s), of which we sent 0. The reports had 9016550 total blocks and used 0 RPC(s). This took 1734 msec to generate and 252 msecs for RPC and NN processing. Got back no commands
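For context, the 67108864 in the first error is the shipped default of ipc.maximum.data.length (64 MB), so a ~108 MB block report from a DataNode holding about 9 million blocks exceeds it. Below is a minimal sketch, assuming hadoop-common and the cluster's core-site.xml are on the classpath, of how one could confirm the value the NameNode actually sees (the class name is my own):

```java
import org.apache.hadoop.conf.Configuration;

public class IpcLimitCheck {
    public static void main(String[] args) {
        // new Configuration() loads core-default.xml plus core-site.xml from the classpath.
        Configuration conf = new Configuration();
        // 67108864 bytes (64 MB) is the default; the rejected block report above
        // is 113675682 bytes (~108 MB), so the NameNode RPC server refuses it.
        int maxLen = conf.getInt("ipc.maximum.data.length", 67108864);
        System.out.println("effective ipc.maximum.data.length = " + maxLen + " bytes");
    }
}
```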

2. I set ipc.maximum.data.length to 134217728, which resolved that error. Unfortunately, after raising the limit, my HDFS clients frequently cannot write data for a few minutes at a time, and whenever the clients cannot write I find that the NameNode throws a new exception: DatanodeProtocol.blockReport from 172.16.xxx.xxx:43410 Call#30074227 Retry#0 java.lang.IllegalStateException: com.google.protobuf.InvalidProtocolBufferException: Protocol message was too large. May be malicious. Use CodedInputStream.setSizeLimit() to increase the size limit.
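For what it's worth, the limit named in this second exception is protobuf's own per-message cap, separate from Hadoop's ipc.maximum.data.length. Here is a minimal sketch of the call the exception message refers to, using a stand-in stream rather than real HDFS code (in the actual failure the CodedInputStream is presumably created inside Hadoop's RPC/protobuf decoding path, not by user code):

```java
import com.google.protobuf.CodedInputStream;
import java.io.ByteArrayInputStream;

public class ProtobufSizeLimitSketch {
    public static void main(String[] args) {
        // Stand-in bytes; in the real failure this would be a serialized block report.
        CodedInputStream in =
                CodedInputStream.newInstance(new ByteArrayInputStream(new byte[0]));
        // protobuf-java 2.5.0 (the version bundled with Hadoop 2.x) refuses to parse
        // a single message larger than 64 MB and throws InvalidProtocolBufferException
        // ("Protocol message was too large. May be malicious.").
        int oldLimit = in.setSizeLimit(134217728); // 128 MB, matching ipc.maximum.data.length above
        System.out.println("raised protobuf size limit, previous limit was " + oldLimit + " bytes");
    }
}
```

Since that stream is not built by application code, knowing the API alone does not settle whether the jar (or the Hadoop code that constructs the stream) has to be patched, which is what I am asking below.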

This looks similar to HDFS-5153, which notes that the NameSystem write lock is held during this period.
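To make the write-lock point concrete, here is a conceptual sketch (not NameNode code, names are mine) of why processing one huge block report while holding the write lock would stall client writes for minutes, which matches the symptom above:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Conceptual sketch only: a single lock guarding the namesystem, as HDFS-5153 describes.
public class NamesystemLockSketch {
    private static final ReentrantReadWriteLock fsLock = new ReentrantReadWriteLock();

    // Processing a ~9-million-block report holds the write lock for its full duration.
    static void processBlockReport(long[] reportedBlocks) {
        fsLock.writeLock().lock();
        try {
            for (long block : reportedBlocks) {
                // reconcile each reported replica with the namespace (time grows with report size)
            }
        } finally {
            fsLock.writeLock().unlock();
        }
    }

    // A client write also needs the lock, so it blocks until the report above finishes.
    static void clientWrite() {
        fsLock.writeLock().lock();
        try {
            // allocate a block, update the namespace, etc.
        } finally {
            fsLock.writeLock().unlock();
        }
    }
}
```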

Do I have to patch the protobuf jar to raise that limit?

Edit:

I found the same question asked elsewhere, but with no solution.

0 Answers:

No answers yet.