Here is my Kafka connector JSON:
curl -s -k -X POST http://cpnode.local.lan:8083/connectors -H "Content-Type: application/json" --data '{
  "name": "jdbc-Hdfs2-Sink-Connector",
  "config": {
    "tasks.max": "1",
    "batch.size": "1000",
    "batch.max.rows": "1000",
    "hdfs.poll.interval.ms": "500",
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "hdfs.url": "hdfs://hadoopnode.local.lan:9000",
    "topics": "BookList2",
    "flush.size": "1",
    "confluent.topic.bootstrap.servers": "cpnode.local.lan:9092",
    "confluent.topic.replication.factor": "1",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schemas.enable": "true",
    "value.converter.schema.registry.url": "http://cpnode.local.lan:8081",
    "key.converter": "io.confluent.connect.avro.AvroConverter",
    "key.converter.schemas.enable": "true",
    "key.converter.schema.registry.url": "http://cpnode.local.lan:8081"
  }
}' | jq '.'
When I try to use this connector, I get the following error:
{
  "name": "jdbc-Hdfs2-Sink-Connector",
  "connector": {
    "state": "RUNNING",
    "worker_id": "192.168.1.153:8083"
  },
  "tasks": [
    {
      "id": 0,
      "state": "FAILED",
      "worker_id": "192.168.1.153:8083",
      "trace": "org.apache.kafka.connect.errors.ConnectException: org.apache.hadoop.security.AccessControlException: Permission denied: user=cp-user, access=WRITE, inode=\"/\":hadoop:supergroup:drwxr-xr-x
I tried export HADOOP_USER_NAME=hdfs, and also disabling permission checking in Hadoop's hdfs-site.xml:
<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>
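(For reference, on Hadoop 2.x and later the same switch is spelled dfs.permissions.enabled; either spelling disables permission checking cluster-wide:)

<property>
  <name>dfs.permissions.enabled</name>
  <value>false</value>
</property>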
But I want a solution that does not compromise security.
cp-user is the name of my Confluent Platform user... Confluent and HDFS are on different VMs.
Thanks...
Answer 0 (score: 0)
Your user, user=cp-user, is attempting access=WRITE to the location inode="/", which has user/group ownership of hadoop:supergroup:drwxr-xr-x.

Possible solutions (non-overlapping):

1. Change cp-user to hadoop. (I assume you are using Docker containers? If so, refer to Docker's user directive. Otherwise, export HADOOP_USER_NAME=hadoop; see the sketch after this list.)
2. Create a cp-user Unix account and add it to the NameNode and all DataNodes of the Hadoop cluster.
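A minimal sketch of both options, assuming simple (non-Kerberos) HDFS authentication, that hadoop is the HDFS superuser, and the HDFS sink's default directories topics.dir=/topics and logs.dir=/logs (adjust names to your environment):

# Option 1: have the Connect worker identify itself to HDFS as "hadoop".
# Export this in the shell or service unit that launches the worker:
export HADOOP_USER_NAME=hadoop

# Option 2: keep cp-user, but give it directories it is allowed to write,
# instead of opening up "/". On the Hadoop VM:
sudo useradd cp-user                               # on the NameNode and each DataNode
sudo -u hadoop hdfs dfs -mkdir -p /topics /logs    # create the sink's target dirs
sudo -u hadoop hdfs dfs -chown -R cp-user /topics /logs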