I am using the command line to put a CSV file from my local system into HDFS with the following command:
C:\Hadoop\hadoop-2.7.3\bin>hdfs dfs -put c:\hdfs\stock.csv /user/XYZ
The error output I get is:
Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray(II[BI[BIILjava/lang/String;JZ)V
        at org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray(Native Method)
        at org.apache.hadoop.util.NativeCrc32.calculateChunkedSumsByteArray(NativeCrc32.java:86)
        at org.apache.hadoop.util.DataChecksum.calculateChunkedSums(DataChecksum.java:430)
        at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:202)
        at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:163)
        at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:144)
        at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:2250)
        at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2232)
        at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
        at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:61)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
        at org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:466)
        at org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:391)
        at org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:328)
        at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)
        at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:248)
        at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
        at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
        at org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:243)
        at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
        at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
        at org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:220)
        at org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:267)
        at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
        at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
        at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
        at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
Can someone help me figure out how to resolve this error, or is this not the correct way to load a file into HDFS from the command line?
Answer 0 (score: 1)
Please use forward slashes (/) in the local path when running this from the Windows command prompt. See the example below:
C:\Hadoop\hadoop-2.7.3\bin>hdfs dfs -put c:/hdfs/stock.csv /user/XYZ
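If the put completes without error, you can check that the file arrived by listing the target directory (this assumes the /user/XYZ directory already exists in HDFS):

C:\Hadoop\hadoop-2.7.3\bin>hdfs dfs -ls /user/XYZ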