I have 100 .txt files, each with roughly 10,000 lines.
Is there a way to open all the files, remove the duplicate lines, and save each file back accordingly (PHP / Unix, etc.)?
For example:
file1.txt contents:
Something here1
Something here2
file2.txt contents:
Something here2
Something here3
After removing duplicates:
file1.txt contents:
Something here1
Something here2
file2.txt contents:
Something here3
Answer 0 (score: 1)
Using Unix sort & grep:
If the order of the lines doesn't matter:
sort -u file1.txt > _temp && mv _temp file1.txt
If the order of the lines matters:
awk 'FNR==NR{a[$0];next} ($0 in a) {delete a[$0]; print}' file1.txt file1.txt > _temp && mv _temp file1.txt
grep -F -x -v -f file1.txt file2.txt > _temp && mv _temp file2.txt
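The same approach can be extended to all 100 files. A rough sketch, assuming the files are named file1.txt through file100.txt and that line order does not matter; the seen.txt scratch file is illustrative and not part of the answer above. The -F and -x flags make grep treat each line of seen.txt as a literal, whole-line pattern rather than a regular expression:
# Deduplicate within and across file1.txt .. file100.txt (line order is not preserved).
# seen.txt accumulates every line already kept by an earlier file.
sort -u file1.txt > _temp && mv _temp file1.txt
cp file1.txt seen.txt
for i in $(seq 2 100)
do
    # drop duplicates within the file, then drop lines already seen in earlier files
    sort -u "file$i.txt" | grep -F -x -v -f seen.txt > _temp
    mv _temp "file$i.txt"
    # add this file's surviving lines to the running list
    sort -u -o seen.txt seen.txt "file$i.txt"
done
rm -f seen.txt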
Answer 1 (score: 0)
I tested this, and it works. Line order is not maintained within each file, but you said in the comments that you are already applying sort, so that shouldn't matter. It's a bit roundabout, but it does work:
#!/bin/bash

# The number of files you have, named like file1.txt, file2.txt, etc.
# If named otherwise, change the definition of variable "file" in the loop below.
NUM_FILES=3

# These files will be created and removed during the script, so make sure they're
# not files you already have around.
tempfile1="_all.txt"
tempfile2="_tmp.txt"

sort -u file1.txt > file1out.txt
cat file1out.txt > $tempfile1

for i in $(seq 2 $NUM_FILES)
do
    prev=$((i-1))
    pofile="file${prev}out.txt"
    file="file$i.txt"
    ofile="file${i}out.txt"

    echo "Input files: $file $pofile"
    echo "Output file: $ofile"

    # Merge the previous file's output into the running list of lines seen so far...
    cat $tempfile1 $pofile > $tempfile2
    sort -u $tempfile2 > $tempfile1
    # ...then keep only the lines of the current file that are not in that list.
    sort -u $file | comm -23 - $tempfile1 > $ofile
done

rm -f $tempfile1 $tempfile2
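For the 100 files in the question, set NUM_FILES=100. The script leaves its results in file1out.txt, file2out.txt, and so on; if the goal is to replace the originals (an assumption, not something the script above does), a small follow-up loop could move them back:
# Overwrite the original files with the deduplicated outputs produced above.
NUM_FILES=100
for i in $(seq 1 $NUM_FILES)
do
    mv "file${i}out.txt" "file$i.txt"
done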
Answer 2 (score: 0)
$file1 = explode("\n", file_get_contents('file1.txt'));
$file2 = explode("\n", file_get_contents('file2.txt'));
$f1 = array_unique($file1);
$f2 = array_unique($file2);
$new_f2 = array_diff($f2, $f1);
Now $f1 and $new_f2 hold only the unique values.
Now just write them back to the files.
Note: for multiple files, do this recursively over all of them.
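A minimal PHP sketch of that idea, assuming the files are named file1.txt through file100.txt (the file names and the $seen array are illustrative, not from the answer): each file keeps only the lines not already seen in an earlier file, and the result is written back in place.
<?php
// Keep only the first occurrence of each line across file1.txt .. file100.txt.
$seen = array();                        // lines already written to an earlier file
for ($i = 1; $i <= 100; $i++) {
    $name = "file$i.txt";
    $lines = explode("\n", file_get_contents($name));
    $keep = array();
    foreach ($lines as $line) {
        if ($line !== '' && !isset($seen[$line])) {
            $seen[$line] = true;        // remember it for the remaining files
            $keep[] = $line;
        }
    }
    file_put_contents($name, implode("\n", $keep) . "\n");
}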