I'm developing a program that pulls GPS data from an MQTT broker and loads it into a Hadoop cluster. When I try to write the data to HDFS, I get an IOException. Here is the full stack trace:
java.io.IOException: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Message missing required fields: callId, status; Host Details : local host is: "quickstart.cloudera/192.168.25.170"; destination host is: "quickstart.cloudera":8020;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:765)
at org.apache.hadoop.ipc.Client.call(Client.java:1165)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:184)
at com.sun.proxy.$Proxy7.create(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:165)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:84)
at com.sun.proxy.$Proxy7.create(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:187)
at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1250)
at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1269)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1063)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1021)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:232)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:75)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:806)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:787)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:686)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:675)
at com.mqttHadoopLoader.hadoop.MqttLoader.HdfsWriter.writeToHdfs(HdfsWriter.java:19)
at com.mqttHadoopLoader.hadoop.MqttLoader.MqttDataLoader.messageArrived(MqttDataLoader.java:43)
at org.eclipse.paho.client.mqttv3.internal.CommsCallback.handleMessage(CommsCallback.java:354)
at org.eclipse.paho.client.mqttv3.internal.CommsCallback.run(CommsCallback.java:162)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.google.protobuf.InvalidProtocolBufferException: Message missing required fields: callId, status
at com.google.protobuf.UninitializedMessageException.asInvalidProtocolBufferException(UninitializedMessageException.java:81)
at org.apache.hadoop.ipc.protobuf.RpcPayloadHeaderProtos$RpcResponseHeaderProto$Builder.buildParsed(RpcPayloadHeaderProtos.java:1094)
at org.apache.hadoop.ipc.protobuf.RpcPayloadHeaderProtos$RpcResponseHeaderProto$Builder.access$1300(RpcPayloadHeaderProtos.java:1028)
at org.apache.hadoop.ipc.protobuf.RpcPayloadHeaderProtos$RpcResponseHeaderProto.parseDelimitedFrom(RpcPayloadHeaderProtos.java:986)
at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:850)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:781)
The error seems to occur when I try to create the OutputStream, but it's hard to say for sure because my Eclipse debugger isn't working properly (it says it can't connect to the VM, and I've tried the many fixes posted on Stack Overflow). Here is the code for my HdfsWriter:
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.OutputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

String destFile = "hdfs://127.0.0.1:8020/gpsData/output/gps_data.txt";
// Note: this is just a placeholder IP address for the purpose of this post.
// I do have the fully correct IP address in the actual program.

public void writeToHdfs(String gpsInfo) {
    try {
        Configuration conf = new Configuration();
        System.out.println("Connecting to -- " + conf.get("fs.defaultFS"));
        FileSystem fs = FileSystem.get(URI.create(destFile), conf);
        System.out.println(fs.getUri());
        // Error seems to occur here
        OutputStream outStream = fs.create(new Path(destFile));
        byte[] messageByt = gpsInfo.getBytes();
        outStream.write(messageByt);
        outStream.close();
        System.out.println(destFile + " copied to HDFS");
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
And here is the MQTT callback that calls the HdfsWriter:
public void messageArrived(String topic, MqttMessage message)
        throws Exception {
    System.out.println(message);
    HdfsWriter hdfsWriter = new HdfsWriter();
    hdfsWriter.writeToHdfs(message.toString());
}
I'm still pretty new to Hadoop, so any and all help would be great.
UPDATE
My debugger is working now, and I can tell you definitively that the error occurs whenever I try to call any FileSystem method. For example, it is also triggered by fs.exists(pt) and fs.setReplication().
Answer 0 (score: 0)
I believe HDFS uses the Google protobuf library, and your client code seems to be pulling in a wrong (incompatible) protobuf version. Try digging in that direction; one quick way to check is sketched below.
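A minimal sketch of that check: print where the JVM actually loaded the com.google.protobuf classes from, which helps spot a stray or mismatched protobuf jar on the classpath (ProtobufJarCheck is just an illustrative name):

public class ProtobufJarCheck {
    public static void main(String[] args) {
        // Print the jar that supplied the protobuf classes. getCodeSource()
        // can return null for bootstrap-loaded classes, so guard for that.
        java.security.CodeSource src = com.google.protobuf.Message.class
                .getProtectionDomain().getCodeSource();
        System.out.println(src != null ? src.getLocation() : "bootstrap classpath");
    }
}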
Answer 1 (score: 0)
The protocol between the HDFS client and the NameNode uses Google Protocol Buffers to serialize messages. The error indicates that a message sent by the client did not contain all of the fields the server expected, so the two are incompatible.
This likely means you are running a version of the HDFS client that is older than the version of the NameNode. For example, the callId field was introduced by the feature tracked in Apache JIRA issue HADOOP-9762 and first shipped in Apache Hadoop 2.1.0-beta. A client older than that release does not include callId in its messages, making it incompatible with a NameNode running 2.1.0-beta or later. A quick way to confirm which client version is on your classpath is sketched below.
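As a minimal sketch of that check (assuming the Hadoop client jars are on the classpath), org.apache.hadoop.util.VersionInfo can report the client library version at runtime for comparison against the NameNode's version; HadoopClientVersionCheck is just an illustrative name:

import org.apache.hadoop.util.VersionInfo;

public class HadoopClientVersionCheck {
    public static void main(String[] args) {
        // Version of the Hadoop client library actually on the classpath;
        // anything older than 2.1.0-beta does not send callId.
        System.out.println("Hadoop client version: " + VersionInfo.getVersion());
    }
}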
I recommend reviewing your client application to make sure it uses a Hadoop client library version that matches your Hadoop cluster's version. From the stack trace, it looks like you are using the Cloudera distribution. If so, you will likely have the most success by depending on the matching client library versions that Cloudera publishes in its Maven repository. For details, see Using the CDH 5 Maven Repository.
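For illustration, a minimal pom.xml fragment along those lines. The repository URL is Cloudera's documented public repository; the version string is only an example and must be replaced with the one matching your cluster:

<repositories>
  <repository>
    <id>cloudera</id>
    <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
  </repository>
</repositories>

<dependencies>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <!-- example only: use the CDH version that matches your cluster -->
    <version>2.6.0-cdh5.4.0</version>
  </dependency>
</dependencies>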