I have a single-node Cloudera cluster (CDH 5.16) on a remote RHEL 7 server. I installed CDH using packages. When I run a Sqoop import job, I get the following error:
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
19/06/04 15:49:31 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.16.1
19/06/04 15:49:31 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
19/06/04 15:49:32 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
19/06/04 15:49:32 INFO tool.CodeGenTool: Beginning code generation
Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.
19/06/04 15:49:34 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `categories` AS t LIMIT 1
19/06/04 15:49:35 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `categories` AS t LIMIT 1
19/06/04 15:49:35 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-mapreduce
Note: /tmp/sqoop-ak_bng/compile/d07f2f60a7ecbf9411c79687daa024c9/categories.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
19/06/04 15:49:37 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-ak_bng/compile/d07f2f60a7ecbf9411c79687daa024c9/categories.jar
19/06/04 15:49:38 ERROR tool.ImportTool: Import failed: org.apache.hadoop.security.AccessControlException: Permission denied by sticky bit: user=ak_bng, path="/user/hive/warehouse/sales.db/categories":hive:hive:drwxr-xr-t, parent="/user/hive/warehouse/sales.db":hive:hive:drwxr-xr-t
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkStickyBit(DefaultAuthorizationProvider.java:387)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:159)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:152)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:3885)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6861)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:4290)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:4245)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:4229)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:856)
at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.delete(AuthorizationProviderProxyClientProtocol.java:313)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:626)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2278)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2274)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2272)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:2106)
at org.apache.hadoop.hdfs.DistributedFileSystem$13.doCall(DistributedFileSystem.java:688)
at org.apache.hadoop.hdfs.DistributedFileSystem$13.doCall(DistributedFileSystem.java:684)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:684)
at org.apache.sqoop.tool.ImportTool.deleteTargetDir(ImportTool.java:546)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:509)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:621)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied by sticky bit: user=ak_bng, path="/user/hive/warehouse/sales.db/categories":hive:hive:drwxr-xr-t, parent="/user/hive/warehouse/sales.db":hive:hive:drwxr-xr-t
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkStickyBit(DefaultAuthorizationProvider.java:387)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:159)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:152)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:3885)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6861)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:4290)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:4245)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:4229)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:856)
at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.delete(AuthorizationProviderProxyClientProtocol.java:313)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:626)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2278)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2274)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2272)
at org.apache.hadoop.ipc.Client.call(Client.java:1504)
at org.apache.hadoop.ipc.Client.call(Client.java:1441)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:231)
at com.sun.proxy.$Proxy10.delete(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:552)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:258)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
at com.sun.proxy.$Proxy11.delete(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:2104)
... 13 more
The Sqoop command is:
sqoop import --connect jdbc:mysql://10.188.177.228:3306/sales --username vaishak --password root_123 --table categories --m 1 --delete-target-dir --target-dir /user/hive/warehouse/sales.db/categories
Based on the documentation below, I tried changing fs.defaultFS in core-site.xml: https://community.cloudera.com/t5/CDH-Manual-Installation/Permission-denied-user-root-access-WRITE-inode-quot-user/td-p/4943 It did not work.
I also tried the following link from Stack Overflow: Permission exception for Sqoop.
That did not work for me either. I created a new folder for ak_bng and added it to the hive group as follows:
sudo -u hdfs hadoop fs -mkdir /user/ak_bng
sudo -u hdfs hadoop fs -chown ak_bng:hive /user/ak_bng
I still got the same error.
In several links I saw suggestions to add the user (ak_bng in my case) to supergroup, but I am not sure how to do that.
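As far as I can tell, on a package install that usually means creating the OS group that dfs.permissions.superusergroup points to (supergroup by default) on the NameNode host, adding the user to it, and refreshing the mapping; a rough sketch, which I have not verified:
sudo groupadd supergroup                                   # create the group HDFS treats as its superuser group
sudo usermod -aG supergroup ak_bng                         # add ak_bng to that group
sudo -u hdfs hdfs dfsadmin -refreshUserToGroupsMappings    # make the NameNode pick up the new membership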
A few people suggested running the sqoop script as a different user; I do not know how to do that either.
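If that just means switching the effective user, I assume it looks something like one of these (hdfs is only an example user here; the HADOOP_USER_NAME approach should only work because Kerberos is not enabled on this cluster):
sudo -u hdfs sqoop import ...        # same arguments as above, but run as the hdfs OS user
export HADOOP_USER_NAME=hdfs         # or: tell the HDFS client which identity to use on an unsecured cluster
sqoop import ...                     # then run the job as before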
I am new to Unix and CDH and do not know how to achieve this.
When I tried to run the sqoop script from the HUE editor, I ran into a similar permission issue. Below is the error I got then:
Failed to create deployment directory: AccessControlException: Permission denied: user=hive, access=WRITE, inode="/user/hue/oozie/deployments":hue:hue:drwxr-xr-x (error 500)
Before CDH, I had set up Hadoop 3.1 and Sqoop separately (both outside of CDH), and I was able to import data into HDFS successfully. These errors only appear with CDH.
Can someone explain what the problem is here and how to fix it?
Output of hadoop fs -ls /user:
drwx------ - hdfs supergroup 0 2019-06-04 12:47 /user/hdfs
drwxr-xr-x - mapred hadoop 0 2019-05-27 20:06 /user/history
drwxr-xr-t - hive hive 0 2019-06-03 18:01 /user/hive
drwxr-xr-x - hue hue 0 2019-06-03 18:01 /user/hue
drwxr-xr-x - impala impala 0 2019-05-27 20:08 /user/impala
drwxr-xr-x - oozie oozie 0 2019-05-27 20:12 /user/oozie
drwxr-xr-x - spark spark 0 2019-05-27 20:06 /user/spark
Group details:
root:x:0:
bin:x:1:
daemon:x:2:
sys:x:3:
adm:x:4:
tty:x:5:
disk:x:6:
lp:x:7:
mem:x:8:
kmem:x:9:
wheel:x:10:ak_bng
cdrom:x:11:
mail:x:12:postfix
man:x:15:
dialout:x:18:
floppy:x:19:
games:x:20:
tape:x:33:
video:x:39:
ftp:x:50:
lock:x:54:
audio:x:63:
nobody:x:99:
users:x:100:
utmp:x:22:
utempter:x:35:
input:x:999:
systemd-journal:x:190:
systemd-network:x:192:
dbus:x:81:
polkitd:x:998:
ssh_keys:x:997:
sshd:x:74:
postdrop:x:90:
postfix:x:89:
printadmin:x:996:
dip:x:40:
cgred:x:995:
rpc:x:32:
libstoragemgmt:x:994:
unbound:x:993:
kvm:x:36:qemu
qemu:x:107:
chrony:x:992:
gluster:x:991:
rtkit:x:172:
radvd:x:75:
tss:x:59:
usbmuxd:x:113:
colord:x:990:
abrt:x:173:
geoclue:x:989:
saslauth:x:76:
libvirt:x:988:
pulse-access:x:987:
pulse-rt:x:986:
pulse:x:171:
gdm:x:42:
setroubleshoot:x:985:
gnome-initial-setup:x:984:
stapusr:x:156:
stapsys:x:157:
stapdev:x:158:
tcpdump:x:72:
avahi:x:70:
slocate:x:21:
ntp:x:38:
ak_bng:x:1000:
localadmin:x:1001:
am_bng:x:1002:
localuser:x:1003:
apache:x:48:
cassandra:x:983:
mysql:x:27:
cloudera-scm:x:982:
hadoop:x:1011:yarn,hdfs,mapred
postgres:x:26:
zookeeper:x:981:
yarn:x:980:
hdfs:x:979:
mapred:x:978:
kms:x:977:kms
httpfs:x:976:httpfs
hbase:x:975:
hive:x:974:impala
sentry:x:973:
solr:x:972:
sqoop:x:971:
flume:x:970:
spark:x:969:
oozie:x:968:
hue:x:967:
impala:x:966:
llama:x:965:
kudu:x:964:
I need to run the sqoop script from the command line as user ak_bng.
Answer 0 (score: 1)
As the ak_bng user, sqoop wants to write into /user/hive/warehouse/sales.db. However, ak_bng does not have permission to write to that directory: because the sticky bit is set on sales.db, only the owner (hive) or an HDFS superuser may delete entries under it, and --delete-target-dir makes sqoop try to delete the existing categories directory first. Hence sqoop throws:
Permission denied by sticky bit: user=ak_bng, path="/user/hive/warehouse/sales.db/categories":hive:hive:drwxr-xr-t, parent="/user/hive/warehouse/sales.db":hive:hive:drwxr-xr-t
Try specifying a target dir owned by ak_bng and rerun.
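For example, keeping everything else the same but pointing the import at a directory the user owns (the path under /user/ak_bng below is only an illustration; any HDFS directory ak_bng can write to would do):
sqoop import --connect jdbc:mysql://10.188.177.228:3306/sales --username vaishak --password root_123 --table categories --m 1 --delete-target-dir --target-dir /user/ak_bng/categories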