I am running this command to load a CSV file using a map-reduce program. It runs successfully, but scanning the HBase table returns 0 rows. Below is the console log from the run:
[hadoop@01HW394491 ~]$ HADOOP_CLASSPATH='hbase classpath' hadoop jar Desktop/bulk.jar /user/hadoop/3.csv /user/hadoop/load bulk
13/06/07 15:59:00 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.3.3-cdh3u1--1, built on 07/18/2011 15:17 GMT
13/06/07 15:59:00 INFO zookeeper.ZooKeeper: Client environment:host.name=01HW394491
13/06/07 15:59:00 INFO zookeeper.ZooKeeper: Client environment:java.version=1.6.0_0
13/06/07 15:59:00 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
13/06/07 15:59:00 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0/jre
hbase.mapreduce.inputtable
13/06/07 15:59:00 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/06/07 15:59:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=172.29.179.59:2181 sessionTimeout=180000 watcher=hconnection
13/06/07 15:59:02 INFO zookeeper.ClientCnxn: Opening socket connection to server /172.29.179.59:2181
13/06/07 15:59:02 INFO zookeeper.ClientCnxn: Socket connection established to 01HW394491/172.29.179.59:2181, initiating session
13/06/07 15:59:02 INFO zookeeper.ClientCnxn: Session establishment complete on server 01HW394491/172.29.179.59:2181, sessionid = 0x13f1e28c4b4000a, negotiated timeout = 180000
13/06/07 15:59:03 INFO mapred.JobClient: Running job: job_201306071546_0001
13/06/07 15:59:04 INFO mapred.JobClient: map 0% reduce 0%
13/06/07 15:59:11 INFO mapred.JobClient: map 100% reduce 0%
13/06/07 15:59:18 INFO mapred.JobClient: map 100% reduce 33%
13/06/07 15:59:19 INFO mapred.JobClient: map 100% reduce 100%
13/06/07 15:59:19 INFO mapred.JobClient: Job complete: job_201306071546_0001
13/06/07 15:59:19 INFO mapred.JobClient: Counters: 21
13/06/07 15:59:19 INFO mapred.JobClient: Job Counters
13/06/07 15:59:19 INFO mapred.JobClient: Launched reduce tasks=1
13/06/07 15:59:19 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=5499
13/06/07 15:59:19 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
13/06/07 15:59:19 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
13/06/07 15:59:19 INFO mapred.JobClient: Rack-local map tasks=1
13/06/07 15:59:19 INFO mapred.JobClient: Launched map tasks=1
13/06/07 15:59:19 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=7561
13/06/07 15:59:19 INFO mapred.JobClient: FileSystemCounters
13/06/07 15:59:19 INFO mapred.JobClient: FILE_BYTES_READ=159
13/06/07 15:59:19 INFO mapred.JobClient: HDFS_BYTES_READ=63
13/06/07 15:59:19 INFO mapred.JobClient: FILE_BYTES_WRITTEN=127600
13/06/07 15:59:19 INFO mapred.JobClient: Map-Reduce Framework
13/06/07 15:59:19 INFO mapred.JobClient: Reduce input groups=0
13/06/07 15:59:19 INFO mapred.JobClient: Combine output records=0
13/06/07 15:59:19 INFO mapred.JobClient: Map input records=0
13/06/07 15:59:19 INFO mapred.JobClient: Reduce shuffle bytes=6
13/06/07 15:59:19 INFO mapred.JobClient: Reduce output records=0
13/06/07 15:59:19 INFO mapred.JobClient: Spilled Records=0
13/06/07 15:59:19 INFO mapred.JobClient: Map output bytes=0
13/06/07 15:59:19 INFO mapred.JobClient: Combine input records=0
13/06/07 15:59:19 INFO mapred.JobClient: Map output records=0
13/06/07 15:59:19 INFO mapred.JobClient: SPLIT_RAW_BYTES=63
13/06/07 15:59:19 INFO mapred.JobClient: Reduce input records=0
13/06/07 15:59:19 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=172.29.179.59:2181 sessionTimeout=180000 watcher=hconnection
13/06/07 15:59:19 INFO zookeeper.ClientCnxn: Opening socket connection to server /172.29.179.59:2181
13/06/07 15:59:19 INFO zookeeper.ClientCnxn: Socket connection established to 01HW394491/172.29.179.59:2181, initiating session
13/06/07 15:59:19 INFO zookeeper.ClientCnxn: Session establishment complete on server 01HW394491/172.29.179.59:2181, sessionid = 0x13f1e28c4b4000c, negotiated timeout = 180000
13/06/07 15:59:19 WARN mapreduce.LoadIncrementalHFiles: Skipping non-directory hdfs://01HW394491:9000/user/hadoop/load/_SUCCESS
Here is the Driver class with all of the configuration. I am running this program in distributed mode, on the Cloudera CDH3u1 release.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

/**
 * HBase bulk import example<br>
 * Data preparation MapReduce job driver
 * <ol>
 * <li>args[0]: HDFS input path
 * <li>args[1]: HDFS output path
 * <li>args[2]: HBase table name
 * </ol>
 */
public class Driver {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        //conf.set("hbase.table.name", "bulk");
        conf.set("hbase.mapreduce.inputtable", args[2]);
        conf.set("hbase.zookeeper.quorum", "172.29.179.59");
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        //conf.set("hbase.master", "172.29.179.59:60000");
        //conf.set("hbase.zookeeper.quorum","ibm-r1-node2.apache-nextgen.com");
        HBaseConfiguration.addHbaseResources(conf);

        Job job = new Job(conf, "HBase Bulk Import Example");
        job.setJarByClass(HBaseKVMapper.class);
        job.setMapperClass(HBaseKVMapper.class);
        job.setMapOutputKeyClass(ImmutableBytesWritable.class);
        job.setMapOutputValueClass(KeyValue.class);
        job.setInputFormatClass(TableInputFormat.class);

        HTable hTable = new HTable(args[2]);
        // HTable hTable = new HTable("bulkdata");

        // Auto configure partitioner and reducer
        HFileOutputFormat.configureIncrementalLoad(job, hTable);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        /*
         * FileInputFormat.addInputPath(job, new
         * Path("hdfs://localhost:9000/user/685536/input1.csv"));
         *
         * FileOutputFormat.setOutputPath(job, new
         * Path("hdfs://localhost:9000/user/685536/outputs12348"));
         */
        System.out.println(TableInputFormat.INPUT_TABLE);

        job.waitForCompletion(true);

        // Load generated HFiles into table
        LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
        loader.doBulkLoad(new Path(args[1]), hTable);
        // loader.doBulkLoad(new
        // Path("hdfs://localhost:9000/user/685536/outputs12348"), hTable);
    }
}
This is the class that writes the values into the HBase table. The table was created with the simple command create 'bulp','fields'. Scanning the table shows no data.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

/**
 * HBase bulk import example
 * <p>
 * Parses Facebook and Twitter messages from CSV files and outputs
 * <ImmutableBytesWritable, KeyValue>.
 * <p>
 * The ImmutableBytesWritable key is used by the TotalOrderPartitioner to map it
 * into the correct HBase table region.
 * <p>
 * The KeyValue value holds the HBase mutation information (column family,
 * column, and value)
 */
public class HBaseKVMapper extends
        Mapper<LongWritable, Text, ImmutableBytesWritable, KeyValue> {

    final static byte[] SRV_COL_FAM = "fields".getBytes();

    String tableName = "";
    ImmutableBytesWritable hKey = new ImmutableBytesWritable();
    KeyValue kv;

    /** {@inheritDoc} */
    @Override
    protected void setup(Context context) throws IOException,
            InterruptedException {
        Configuration c = context.getConfiguration();
        tableName = c.get("hbase.mapreduce.inputtable");
    }

    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String fields[] = { "", "", "", "", "", "" };
        String field = value.toString();
        fields = field.split(",");
        String fieldValue = fields[1];

        FileSplit fileSplit = (FileSplit) context.getInputSplit();
        String filename = fileSplit.getPath().getName();
        hKey.set((filename).getBytes());

        for (int i = 2; i < fields.length; i++) {
            fieldValue = fieldValue.concat("," + fields[i]);
        }
        //fieldValue = fieldValue.substring(0,(fieldValue.length())-2);
        System.out.println(fieldValue);

        kv = new KeyValue(hKey.get(), SRV_COL_FAM, Bytes.toBytes(fields[0]),
                Bytes.toBytes(fieldValue));
        context.write(hKey, kv);
    }
}
The same program runs successfully in pseudo-distributed mode.
Answer 0 (score: 0):
Try HADOOP_CLASSPATH='hbase classpath' hadoop jar Desktop/bulk.jar /user/hadoop /user/hadoop/load bulk
I am not sure whether MR accepts the exact input path down to the FILE NAME. It may need the path (the folder path) instead.
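For reference, a minimal sketch of that suggestion, under the assumption that the job should read plain files from an HDFS folder (e.g. /user/hadoop rather than /user/hadoop/3.csv). Note this sketch uses the default TextInputFormat for plain-file input, which differs from the TableInputFormat set in the question's Driver; the class name InputPathSketch and the paths are purely illustrative, not the asker's actual code.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

/**
 * Illustrative sketch only (not the asker's Driver): a job whose input is an
 * HDFS directory of plain text/CSV files. args[0] is the input folder
 * (e.g. /user/hadoop), args[1] is the output folder.
 */
public class InputPathSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "Input path sketch");
        job.setJarByClass(InputPathSketch.class);

        // Plain-file input: with no mapper or reducer set, the identity mapper
        // copies the lines through, which is enough to check whether
        // "Map input records" is still 0 for the chosen path.
        job.setInputFormatClass(TextInputFormat.class);

        // As the answer suggests, pass the folder that holds the CSV;
        // FileInputFormat also enumerates the non-hidden files inside a directory.
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Whether this alone explains the empty scan depends on the rest of the job setup; it only illustrates the folder-versus-file point made above.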