I am running a third-party tool, implemented in Spark, in cluster mode.
When it runs on a single machine it produces understandable output during execution, but when it runs in cluster mode, after a few minutes all I can observe is output like this:
...
INFO scheduler.TaskSetManager: Starting task 95.0 in stage 1.0 (TID 199, 10.0.0.13, executor 5, partition 95, ANY, 5585 bytes)
INFO scheduler.TaskSetManager: Finished task 87.0 in stage 1.0 (TID 191) in 442674 ms on 10.0.0.13 (executor 5) (80/104)
INFO scheduler.TaskSetManager: Starting task 96.0 in stage 1.0 (TID 200, 10.0.0.13, executor 4, partition 96, ANY, 5585 bytes)
INFO scheduler.TaskSetManager: Finished task 88.0 in stage 1.0 (TID 192) in 427022 ms on 10.0.0.13 (executor 4) (81/104)
INFO scheduler.TaskSetManager: Starting task 97.0 in stage 1.0 (TID 201, 10.0.0.13, executor 6, partition 97, ANY, 5586 bytes)
INFO scheduler.TaskSetManager: Finished task 89.0 in stage 1.0 (TID 193) in 434826 ms on 10.0.0.13 (executor 6) (82/104)
INFO scheduler.TaskSetManager: Starting task 98.0 in stage 1.0 (TID 202, 10.0.0.13, executor 5, partition 98, ANY, 5586 bytes)
INFO scheduler.TaskSetManager: Finished task 90.0 in stage 1.0 (TID 194) in 428479 ms on 10.0.0.13 (executor 5) (83/104)
INFO scheduler.TaskSetManager: Starting task 99.0 in stage 1.0 (TID 203, 10.0.0.13, executor 4, partition 99, ANY, 5586 bytes)
INFO scheduler.TaskSetManager: Finished task 92.0 in stage 1.0 (TID 196) in 421363 ms on 10.0.0.13 (executor 4) (84/104)
INFO scheduler.TaskSetManager: Starting task 100.0 in stage 1.0 (TID 204, 10.0.0.13, executor 6, partition 100, ANY, 5585 bytes)
INFO scheduler.TaskSetManager: Finished task 91.0 in stage 1.0 (TID 195) in 436868 ms on 10.0.0.13 (executor 6) (85/104)
INFO scheduler.TaskSetManager: Starting task 101.0 in stage 1.0 (TID 205, 10.0.0.13, executor 7, partition 101, ANY, 5585 bytes)
INFO scheduler.TaskSetManager: Finished task 93.0 in stage 1.0 (TID 197) in 423796 ms on 10.0.0.13 (executor 7) (86/104)
INFO scheduler.TaskSetManager: Starting task 102.0 in stage 1.0 (TID 206, 10.0.0.13, executor 5, partition 102, ANY, 5585 bytes)
INFO scheduler.TaskSetManager: Finished task 95.0 in stage 1.0 (TID 199) in 431473 ms on 10.0.0.13 (executor 5) (87/104)
INFO scheduler.TaskSetManager: Starting task 103.0 in stage 1.0 (TID 207, 10.0.0.13, executor 7, partition 103, ANY, 5335 bytes)
INFO scheduler.TaskSetManager: Finished task 94.0 in stage 1.0 (TID 198) in 448226 ms on 10.0.0.13 (executor 7) (88/104)
INFO scheduler.TaskSetManager: Finished task 96.0 in stage 1.0 (TID 200) in 435101 ms on 10.0.0.13 (executor 4) (89/104)
INFO scheduler.TaskSetManager: Finished task 97.0 in stage 1.0 (TID 201) in 423836 ms on 10.0.0.13 (executor 6) (90/104)
INFO scheduler.TaskSetManager: Finished task 98.0 in stage 1.0 (TID 202) in 415700 ms on 10.0.0.13 (executor 5) (91/104)
INFO scheduler.TaskSetManager: Finished task 99.0 in stage 1.0 (TID 203) in 410550 ms on 10.0.0.13 (executor 4) (92/104)
INFO scheduler.TaskSetManager: Finished task 100.0 in stage 1.0 (TID 204) in 420337 ms on 10.0.0.13 (executor 6) (93/104)
INFO scheduler.TaskSetManager: Finished task 103.0 in stage 1.0 (TID 207) in 318385 ms on 10.0.0.13 (executor 7) (94/104)
INFO scheduler.TaskSetManager: Finished task 101.0 in stage 1.0 (TID 205) in 421965 ms on 10.0.0.13 (executor 7) (95/104)
INFO scheduler.TaskSetManager: Finished task 102.0 in stage 1.0 (TID 206) in 425816 ms on 10.0.0.13 (executor 5) (96/104)
...
This does not provide much information. Is there any way to see the output I can observe when running locally?
Also, after a few tens of minutes I noticed that the CPU load on both machines dropped to nearly 0%, whereas a few minutes earlier they were almost 100% busy. Could this be caused by too few resources being allocated at spark-submit
time? I cannot tell, since this output gives no clues. What can I do to investigate, or to obtain more useful information?
I tried connecting to http://localhost:4040 as suggested here, but I got no response.
Answer 0 (score: 0):
I found the normal stack trace I usually observe in local mode; given how Spark is designed, it is quite natural that it ends up in the Spark Worker's stack trace.
To access it, during job execution I used a browser (lynx, since I connect over SSH to the VM) to open http://localhost:8081 on the node hosting the Spark Worker: clicking "stderr" shows the desired stack trace. Alternatively, on the Worker node's filesystem, a path like /spark/work/app-20180216182621-0001/2/stderr holds the stack-trace output.
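For example, a minimal sketch of reading it directly over SSH (the app ID and executor directory number are from my run and will differ on yours):

# On the Worker node: follow the executor's stderr while the job runs
tail -f /spark/work/app-20180216182621-0001/2/stderr

# Or browse the Worker UI from a text-only terminal
lynx http://localhost:8081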
I could not access http://localhost:8081 and http://localhost:4040 before because I had set Spark up with Docker and had not exposed those ports in the docker-compose file. But that is unrelated to this question.
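For reference, a minimal docker-compose sketch of what was missing (service names are illustrative, and I assume the driver runs in the master container; adapt to your own file):

services:
  spark-master:
    ports:
      - "4040:4040"   # driver application UI
  spark-worker:
    ports:
      - "8081:8081"   # worker UI, where the "stderr" links live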