Does Sqoop spill temporary data to disk?

Time: 2017-04-26 10:52:55

Tags: hadoop hdfs sqoop

As I understand Sqoop, it launches several mappers on different data nodes, each of which opens a JDBC connection to the RDBMS. Once a connection is established, the data is transferred to HDFS.

What I am trying to understand is: does a Sqoop mapper temporarily spill data to disk on the data node? I know spilling happens in MapReduce, but I am not sure about Sqoop jobs.

1 answer:

Answer 0 (score: 0)

It appears that sqoop-import runs as a map-only job and does not spill, while sqoop-merge runs as a full map-reduce job and does spill. You can check this on the Job Tracker while a Sqoop import is running.
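For reference, the two kinds of jobs discussed above are launched roughly like this. This is a sketch only: the connect string, paths, table name, and merge key are illustrative placeholders, and the commands need a live Hadoop cluster plus an RDBMS, so they are shown as command fragments rather than something runnable here.

```shell
# Map-only import: 4 mappers fetch rows over JDBC and write them
# straight to HDFS, so no map-side sort buffer is involved.
sqoop import \
  --connect jdbc:mysql://dbhost/sales \
  --username etl -P \
  --table orders \
  --target-dir /data/orders_new \
  -m 4

# Merge: a full MapReduce job that sorts on the merge key, so its map
# output goes through MapOutputBuffer and can spill to local disk.
sqoop merge \
  --new-data /data/orders_new \
  --onto /data/orders_old \
  --target-dir /data/orders_merged \
  --jar-file orders.jar \
  --class-name orders \
  --merge-key id
```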

Look at this part of a sqoop-import log: it does not spill, it just fetches rows and writes them to HDFS:

    INFO [main] ... mapreduce.db.DataDrivenDBRecordReader: Using query:  SELECT...
    INFO [main] mapreduce.db.DBRecordReader: Executing query:  SELECT...
    INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: File Output Committer Algorithm version is 1
    INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
    INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new compressor [.snappy]
    INFO [Thread-16] ...mapreduce.AutoProgressMapper: Auto-progress thread is finished. keepGoing=false
    INFO [main] org.apache.hadoop.mapred.Task: Task:attempt_1489705733959_2462784_m_000000_0 is done. And is in the process of committing
    INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Saved output of task 'attempt_1489705733959_2462784_m_000000_0' to hdfs://

Now look at this sqoop-merge log (some lines skipped): it does spill to disk (note the "Spilling map output" lines):

    INFO [main] org.apache.hadoop.mapred.MapTask: Processing split: hdfs://bla-bla/part-m-00000:0+48322717
    ...
    INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
    ...
    INFO [main] org.apache.hadoop.mapred.MapTask: mapreduce.task.io.sort.mb: 1024
    INFO [main] org.apache.hadoop.mapred.MapTask: soft limit at 751619264
    INFO [main] org.apache.hadoop.mapred.MapTask: bufstart = 0; bufvoid = 1073741824
    INFO [main] org.apache.hadoop.mapred.MapTask: kvstart = 268435452; length = 67108864
    INFO [main] org.apache.hadoop.mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
    INFO [main] com.pepperdata.supervisor.agent.resource.r: Datanode bla-bla is LOCAL.
    INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.snappy]
    ...
    INFO [main] org.apache.hadoop.mapred.MapTask: Starting flush of map output
    INFO [main] org.apache.hadoop.mapred.MapTask: Spilling map output
    INFO [main] org.apache.hadoop.mapred.MapTask: bufstart = 0; bufend = 184775274; bufvoid = 1073741824
    INFO [main] org.apache.hadoop.mapred.MapTask: kvstart = 268435452(1073741808); kvend = 267347800(1069391200); length = 1087653/67108864
    INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new compressor [.snappy]
    INFO [main] org.apache.hadoop.mapred.MapTask: Finished spill 0
    ...Task:attempt_1489705733959_2479291_m_000000_0 is done. And is in the process of committing
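A quick way to apply this check to your own jobs is to grep the saved task log for the MapOutputBuffer messages. A minimal sketch, assuming you have the log text on hand (the excerpt below is invented for illustration):

```shell
# Hypothetical excerpt of a saved Sqoop task log (IDs invented).
log='INFO [main] org.apache.hadoop.mapred.MapTask: Spilling map output
INFO [main] org.apache.hadoop.mapred.MapTask: Finished spill 0'

# "Spilling map output" is logged by MapOutputBuffer, the map-side sort
# buffer, so its presence means the task wrote spill files to local disk.
if printf '%s\n' "$log" | grep -q 'Spilling map output'; then
  spilled=yes
else
  spilled=no
fi
echo "spilled=$spilled"
```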