When I run my Kafka project integrated with Spark, I get the output shown below.
I can't understand what this output means. I am running the job from Eclipse.
I cannot see the producer's data anywhere.
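For reference, the producer is a separate program that writes messages to the iot topic. It is essentially the standard KafkaProducer send loop shown below; this is a simplified sketch rather than my exact producer, and the broker address, class name, and payload are placeholders.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SimpleIotProducer {   // placeholder class name
    public static void main(String[] args) {
        Properties props = new Properties();
        // assumed broker address; replace with the real bootstrap server
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // placeholder payload; the real producer sends its own messages
            for (int i = 0; i < 10; i++) {
                producer.send(new ProducerRecord<>("iot", Integer.toString(i), "message-" + i));
            }
            producer.flush();
        }
    }
}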
This is my Spark code:
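The full CustomerKafkaConsumerThread.java is longer, but in essence it creates a direct Kafka stream for the iot topic, maps each record to its value (the map at line 75 in the log) and counts each batch (the count at line 83). Below is a simplified sketch of that logic; the broker address, app name, and master setting are placeholders, while the topic name and the 2-second batch interval match the log.

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

import kafka.serializer.StringDecoder;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.VoidFunction;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;
import scala.Tuple2;

public class CustomerKafkaConsumerThread {
    public static void main(String[] args) throws InterruptedException {
        // local mode with a 2-second batch interval (the batches in the log are 2 s apart)
        SparkConf conf = new SparkConf().setAppName("KafkaSparkConsumer").setMaster("local[2]");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(2));

        // assumed broker address; the topic name "iot" comes from the log
        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "localhost:9092");
        Set<String> topics = Collections.singleton("iot");

        JavaPairInputDStream<String, String> messages = KafkaUtils.createDirectStream(
                jssc, String.class, String.class,
                StringDecoder.class, StringDecoder.class,
                kafkaParams, topics);

        // map each (key, value) record to its value -- the "map at line 75" in the log
        JavaDStream<String> values = messages.map(
                new Function<Tuple2<String, String>, String>() {
                    @Override
                    public String call(Tuple2<String, String> record) {
                        return record._2();
                    }
                });

        // count each batch -- the "count at line 83" in the log
        values.foreachRDD(new VoidFunction<JavaRDD<String>>() {
            @Override
            public void call(JavaRDD<String> rdd) {
                System.out.println("Records in this batch: " + rdd.count());
            }
        });

        jssc.start();
        jssc.awaitTermination();
    }
}

This is the console output I get in Eclipse: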
17/03/07 17:06:44 INFO JobScheduler: Starting job streaming job 1488886604000 ms.0 from job set of time 1488886604000 ms
17/03/07 17:06:44 INFO SparkContext: Starting job: count at CustomerKafkaConsumerThread.java:83
17/03/07 17:06:44 INFO DAGScheduler: Got job 1 (count at CustomerKafkaConsumerThread.java:83) with 1 output partitions
17/03/07 17:06:44 INFO DAGScheduler: Final stage: ResultStage 1 (count at CustomerKafkaConsumerThread.java:83)
17/03/07 17:06:44 INFO DAGScheduler: Parents of final stage: List()
17/03/07 17:06:44 INFO DAGScheduler: Missing parents: List()
17/03/07 17:06:44 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[3] at map at CustomerKafkaConsumerThread.java:75), which has no missing parents
17/03/07 17:06:44 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 3.2 KB, free 961.9 MB)
17/03/07 17:06:44 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 1940.0 B, free 961.9 MB)
17/03/07 17:06:44 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on localhost:51356 (size: 1940.0 B, free: 961.9 MB)
17/03/07 17:06:44 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1006
17/03/07 17:06:44 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (MapPartitionsRDD[3] at map at CustomerKafkaConsumerThread.java:75)
17/03/07 17:06:44 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks
17/03/07 17:06:44 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, localhost, partition 0,ANY, 1995 bytes)
17/03/07 17:06:44 INFO Executor: Running task 0.0 in stage 1.0 (TID 1)
17/03/07 17:06:44 INFO KafkaRDD: Beginning offset 0 is the same as ending offset skipping iot 0
17/03/07 17:06:44 INFO Executor: Finished task 0.0 in stage 1.0 (TID 1). 953 bytes result sent to driver
17/03/07 17:06:44 INFO DAGScheduler: ResultStage 1 (count at CustomerKafkaConsumerThread.java:83) finished in 0.018 s
17/03/07 17:06:44 INFO DAGScheduler: Job 1 finished: count at CustomerKafkaConsumerThread.java:83, took 0.040656 s
17/03/07 17:06:44 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 1) in 16 ms on localhost (1/1)
17/03/07 17:06:44 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
17/03/07 17:06:44 INFO JobScheduler: Finished job streaming job 1488886604000 ms.0 from job set of time 1488886604000 ms
17/03/07 17:06:44 INFO JobScheduler: Total delay: 0.848 s for time 1488886604000 ms (execution: 0.097 s)
17/03/07 17:06:44 INFO ReceivedBlockTracker: Deleting batches ArrayBuffer()
17/03/07 17:06:44 INFO InputInfoTracker: remove old batch metadata:
17/03/07 17:06:44 INFO MapPartitionsRDD: Removing RDD 1 from persistence list
17/03/07 17:06:44 INFO KafkaRDD: Removing RDD 0 from persistence list
17/03/07 17:06:44 INFO ReceivedBlockTracker: Deleting batches ArrayBuffer()
17/03/07 17:06:44 INFO InputInfoTracker: remove old batch metadata:
17/03/07 17:06:44 INFO BlockManager: Removing RDD 1
17/03/07 17:06:44 INFO BlockManager: Removing RDD 0
17/03/07 17:06:46 INFO JobScheduler: Added jobs for time 1488886606000 ms