When I run an Apache Spark job, after only a few lines of input data the executor JVM crashes while freeing an idle java.nio.DirectByteBuffer during GC:
Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
j sun.misc.Unsafe.freeMemory(J)V+0
j java.nio.DirectByteBuffer$Deallocator.run()V+17
J 1537 C1 java.lang.ref.Reference.tryHandlePending(Z)Z (115 bytes) @ 0x00007f082d519f94 [0x00007f082d5199c0+0x5d4]
j java.lang.ref.Reference$ReferenceHandler.run()V+1
v ~StubRoutines::call_stub
There is no memory pressure:
Heap:
par new generation total 153344K, used 17415K [0x0000000738000000, 0x0000000742660000, 0x0000000742660000)
eden space 136320K, 1% used [0x0000000738000000, 0x00000007381955c8, 0x0000000740520000)
from space 17024K, 92% used [0x0000000740520000, 0x000000074148c778, 0x00000007415c0000)
to space 17024K, 0% used [0x00000007415c0000, 0x00000007415c0000, 0x0000000742660000)
concurrent mark-sweep generation total 2057856K, used 76674K [0x0000000742660000, 0x00000007c0000000, 0x00000007c0000000)
Metaspace used 49890K, capacity 50454K, committed 50540K, reserved 1093632K
class space used 6821K, capacity 6995K, committed 7056K, reserved 1048576K
Full hs_err file: http://www.evernote.com/l/AAQu5abObUND5KFJbFNO9RpVfLQlBiwX6gg/
Answer 0 (score: 0)
Following @the8472's suggestion, I tried BTrace and found that the DirectByteBuffer allocations were coming from the default Kryo serializer.
So I added my own Kryo serializer to handle the Avro data, and everything works fine now.
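For reference, this is roughly how a custom Kryo serializer is wired into Spark. The answer does not show the actual code, so the registrator class name below is a placeholder, not the author's class:

```properties
# spark-defaults.conf — sketch only; com.example.AvroKryoRegistrator is hypothetical
spark.serializer        org.apache.spark.serializer.KryoSerializer
spark.kryo.registrator  com.example.AvroKryoRegistrator
```

The registrator class would implement `org.apache.spark.serializer.KryoRegistrator` and register a serializer for the Avro record classes, replacing Kryo's default field-by-field handling of those objects.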
Answer 1 (score: 0)
Your attempt to manage memory through the sun.misc.Unsafe API caused a segmentation fault, which crashed the JVM:
Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
j  sun.misc.Unsafe.freeMemory(J)V+0
j  java.nio.DirectByteBuffer$Deallocator.run()V+17
J 1537 C1 java.lang.ref.Reference.tryHandlePending(Z)Z (115 bytes) @ 0x00007f082d519f94 [0x00007f082d5199c0+0x5d4]
j  java.lang.ref.Reference$ReferenceHandler.run()V+1
v  ~StubRoutines::call_stub
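The trace is consistent with the buffer's native memory being freed twice: once by whoever called Unsafe.freeMemory directly, and again by the buffer's Deallocator when the reference handler runs after GC. A minimal sketch of the safe pattern, assuming no manual freeing is actually required:

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static void main(String[] args) {
        // allocateDirect registers a Cleaner that frees the native
        // memory exactly once, after the buffer becomes unreachable.
        ByteBuffer buf = ByteBuffer.allocateDirect(1024);
        buf.putInt(42);
        buf.flip();
        System.out.println(buf.getInt()); // prints 42

        // Do NOT call sun.misc.Unsafe.freeMemory on this buffer's address:
        // the Deallocator in the trace above would then free it a second
        // time during reference processing and crash the JVM.
    }
}
```

If off-heap memory genuinely must be released manually, it should be memory you allocated yourself with Unsafe.allocateMemory, never the address backing a DirectByteBuffer the JVM still owns.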