I want to use wget to download the files one by one, and use curl to fetch their contents.
I have a file sn.txt containing item numbers like this:
1234
2345
3456
4567
The URLs I want to request are:
http://myweb.com/items/item-check?item=1234
http://myweb.com/items/item-check?item=2345
http://myweb.com/items/item-check?item=3456
http://myweb.com/items/item-check?item=4567
If I run:
#!/bin/bash
TempFile="tmp/item.$(date +%s).txt"

while read sn; do
    echo Checking Item Number: $sn
    until grep -q "$sn" itemlist.txt; do
        wget http://myweb.com/items/item-check?item=$sn -O $TempFile
        GetaDiv()
        {
            grep -A 98 '<div class="main-info">' ${TempFile} | grep -A 2 '<div class="field">'
        }
        aDiv=$(GetaDiv)
        (echo "${aDiv}") >> itemlist.txt
    done
    echo "Done! Item ADDED SUCCESSFULLY"
done < sn.txt
The code above doesn't return any results. If I run the same code with just one hard-coded URL, it works fine.
This works:
#!/bin/bash
TempFile="tmp/item.$(date +%s).txt"

while read sn; do
    echo Checking Item Number: $sn
    until grep -q "$sn" itemlist.txt; do
        wget http://myweb.com/items/item-check?item=1234 -O $TempFile
        GetaDiv()
        {
            grep -A 98 '<div class="main-info">' ${TempFile} | grep -A 2 '<div class="field">'
        }
        aDiv=$(GetaDiv)
        (echo "${aDiv}") >> itemlist.txt
    done
    echo "Done! Item ADDED SUCCESSFULLY"
done < sn.txt
Why can't I run all the links in this bash loop? It works for a single request. It seems like bash isn't waiting for wget to finish.
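One thing worth ruling out while debugging (an assumption on my part, not something stated in the post): if sn.txt was saved with Windows CRLF line endings, every value read by the loop carries a trailing carriage return, so wget requests a URL ending in an invisible \r and gets nothing, even though the same script with a hard-coded URL works. This offline sketch strips any trailing \r and prints the URL each iteration would fetch, instead of calling wget:

```shell
#!/bin/bash
# Hypothetical sketch: sn.txt and the myweb.com URL are the poster's
# placeholders; the CRLF cause is an assumption to be verified.
printf '1234\r\n2345\r\n' > sn.txt         # sample input with CRLF endings

while IFS= read -r sn; do
    sn=${sn%$'\r'}                          # drop a trailing carriage return, if any
    [ -n "$sn" ] || continue                # skip blank lines
    url="http://myweb.com/items/item-check?item=${sn}"
    echo "Would fetch: $url"                # replace with: wget -q "$url" -O "$TempFile"
done < sn.txt
```

You can check whether this is actually the problem with `cat -A sn.txt`: lines ending in `^M$` have CRLF endings, and `dos2unix sn.txt` (or the `${sn%$'\r'}` strip above) would fix them.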