I've got the following command, which gives me the size in bytes of a bunch of folders in my Hadoop cluster:
$ hdfs dfs -du -s /foo/bar/*tobedeleted | sort -r -k 1 -g | awk '{print $1, $3}'
31641789771845 /foo/bar/card_dim_h_tobedeleted
22541622495592 /foo/bar/transaction_item_fct_tobedeleted
3174354180367 /foo/bar/card_dim_h_new_tobedeleted
2336463389768 /foo/bar/hshd_loyalty_seg_tobedeleted
1238268384713 /foo/bar/prod_dim_h_tobedeleted
652639933614 /foo/bar/promo_item_fct_tobedeleted
490394392674 /foo/bar/card_dim_c_tobedeleted
365312782231 /foo/bar/ch_contact_offer_alc_fct_tobedeleted
218694228546 /foo/bar/prod_dim_h_new_tobedeleted
197884747070 /foo/bar/card_dim_h_test_tobedeleted
178553987067 /foo/bar/offer_dim_h_tobedeleted
124005189706 /foo/bar/promo_dim_h_tobedeleted
94380212623 /foo/bar/offer_tier_dtl_h_tobedeleted
91109144322 /foo/bar/ch_contact_offer_dlv_fct_tobedeleted
54487330914 /foo/bar/ch_contact_event_dlv_fct_tobedeleted
What I want is those numbers formatted with GB/TB suffixes. I know I can format them with du -h, but as soon as I do that, the sort no longer works.
I know I can do this:
$ hdfs dfs -du -s /foo/bar/*tobedeleted | sort -r -k 1 -g | awk '{print $1, $3}' | awk '{total = $1 / 1024 /1024 / 1024 / 1024; print total "TB", $2}'
28.778TB /foo/bar/card_dim_h_tobedeleted
20.5015TB /foo/bar/transaction_item_fct_tobedeleted
2.88706TB /foo/bar/card_dim_h_new_tobedeleted
2.125TB /foo/bar/hshd_loyalty_seg_tobedeleted
1.1262TB /foo/bar/prod_dim_h_tobedeleted
0.593573TB /foo/bar/promo_item_fct_tobedeleted
0.446011TB /foo/bar/card_dim_c_tobedeleted
0.33225TB /foo/bar/ch_contact_offer_alc_fct_tobedeleted
0.198901TB /foo/bar/prod_dim_h_new_tobedeleted
0.179975TB /foo/bar/card_dim_h_test_tobedeleted
0.162394TB /foo/bar/offer_dim_h_tobedeleted
0.112782TB /foo/bar/promo_dim_h_tobedeleted
0.0858383TB /foo/bar/offer_tier_dtl_h_tobedeleted
0.0828633TB /foo/bar/ch_contact_offer_dlv_fct_tobedeleted
0.0495559TB /foo/bar/ch_contact_event_dlv_fct_tobedeleted
But that prints everything in TB, which isn't what I want. I could probably put some clever if/then/else logic into that last awk command to do what I want, but I'm hoping there's a simple formatting option I don't know about that will do it for me.
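Something like this, maybe (just a sketch, untested; the unit cutoffs are my own guess at what the logic would need to be):

$ hdfs dfs -du -s /foo/bar/*tobedeleted | sort -r -k 1 -g | \
    awk '{ if ($1 >= 1024^4)      printf "%.2fTB %s\n", $1/1024^4, $3;
           else if ($1 >= 1024^3) printf "%.2fGB %s\n", $1/1024^3, $3;
           else                   printf "%.2fMB %s\n", $1/1024^2, $3 }'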
Answer 0 (score: 4)
Maybe this is what you're looking for:
hdfs dfs -du -s /foo/bar/*tobedeleted | \
sort -r -k 1 -g | \
awk '{ suffix=" KMGT"; for(i=1; $1>1024 && i < length(suffix); i++) $1/=1024; print int($1) substr(suffix, i, 1), $3; }'
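Each pass through the loop divides by 1024 and steps one character along the suffix string; the leading space in " KMGT" means position 1 is plain bytes. The same logic unrolled as a standalone script, for readability (a sketch; human.awk is just a name picked for illustration):

# human.awk -- divide by 1024 until the value fits, tracking the unit
{
    suffix = " KMGT"               # index 1 is blank (plain bytes), 2 is K, ...
    for (i = 1; $1 > 1024 && i < length(suffix); i++)
        $1 /= 1024                 # each division steps up one unit
    print int($1) substr(suffix, i, 1), $3
}

Run it with: hdfs dfs -du -s /foo/bar/*tobedeleted | sort -r -k 1 -g | awk -f human.awk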
Answer 1 (score: 1)
You can use du with the -h option to display the data in human-readable form: hdfs dfs -du -s -h /user/vgunnu
Here's more info: https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/FileSystemShell.html#du
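One more option, assuming your sort is GNU coreutils: it has a -h / --human-numeric-sort flag that can order sizes like 1.5G directly, so -du -h and sorting aren't necessarily incompatible. Some Hadoop versions print the unit as a separate field ("1.5 G"), which sort -h won't parse, so you may need to glue the fields together first. A sketch, untested against a real cluster:

hdfs dfs -du -s -h /foo/bar/*tobedeleted | sed 's/^\([0-9.]*\) \([KMGT]\)/\1\2/' | sort -h -r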
Answer 2 (score: 1)
@innocent-bystander figured it out (just slightly modifying the solution he/she suggested):
$ hdfs dfs -du -s /foo/bar/* | sort -r -k 1 -g | head -5 | awk '{ suffix="KMGT"; for(i=0; $1>1024 && i < length(suffix); i++) $1/=1024; print int($1) substr(suffix, i, 1), $3; }'
28T /foo/bar/card_dim_h_tobedeleted
20T /foo/bar/transaction_item_fct_tobedeleted
2T /foo/bar/card_dim_h_new_tobedeleted
2T /foo/bar/hshd_loyalty_seg_tobedeleted
1T /foo/bar/prod_dim_h_tobedeleted
(also added head -5, just to save some space here)
Thank you so much. Not only for solving this problem, but for teaching me something about awk I didn't know. Very powerful, isn't it?