My script has a problem. I want to copy every csv file from HDFS to the local filesystem and delete everything else.
If I were doing this locally, it would be
find /Navigation -not -name '*.csv' -not -path /Navigation -exec rm -vr {} \;
cp -r $PATH_HDFS_CAMPAGNE_MARKETING /data/logs/bi_extract_nav/
and that works fine for me. On HDFS I tried
hdfs dfs -find /datalake/data/projects/marketing/campagnes -not -name '*.csv' -not -path /datalake/data/projects/marketing/campagnes -exec hdfs dfs -rm -vr {} \;
hdfs dfs -copyToLocal $PATH_HDFS_CAMPAGNE_MARKETING /data/logs/bi_extract_nav/
but it doesn't work. Here is the relevant part of my script:
run_spark_yarn
hdfs dfs -find /datalake/data/projects/marketing/campagnes -not -name '*.csv' -not -path /datalake/data/projects/marketing/campagnes -exec hdfs dfs -rm -vr {} \;
hdfs dfs -copyToLocal $PATH_HDFS_CAMPAGNE_MARKETING /data/logs/bi_extract_nav/
else
PATH_HDFS_CAMPAGNE_MARKETING=/Navigation/
run_spark_local
find /Navigation -not -name '*.csv' -not -path /Navigation -exec rm -vr {} \;
cp -r $PATH_HDFS_CAMPAGNE_MARKETING /data/logs/bi_extract_nav/
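From the Hadoop filesystem shell documentation, hdfs dfs -find only seems to support the -name, -iname and -print expressions, so the -not, -path and -exec options are probably why the yarn branch fails (hdfs dfs -rm also has no -v flag). A workaround I am considering, assuming none of the HDFS paths contain spaces, is to list everything recursively, filter out the .csv entries, and delete the rest before the copyToLocal:

# Recursively list the campaign directory; the path is the 8th column of
# 'hdfs dfs -ls -R' output (assumes no spaces in HDFS paths).
hdfs dfs -ls -R /datalake/data/projects/marketing/campagnes \
  | awk 'NF >= 8 {print $8}' \
  | grep -v '\.csv$' \
  | while read -r p; do
      # Directories are listed before their children, so a child may already
      # have been removed with its parent; those "No such file" errors are harmless.
      hdfs dfs -rm -r "$p"
    done
hdfs dfs -copyToLocal /datalake/data/projects/marketing/campagnes /data/logs/bi_extract_nav/

The filter could also be inverted (grep '\.csv$' plus one hdfs dfs -copyToLocal per file) to copy only the csv files without deleting anything on HDFS. Is there a cleaner way to express find -not ... -exec directly with hdfs dfs?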