Looping over a text file of domains with a bash script

Date: 2012-07-25 07:07:48

Tags: linux bash loops

Hey guys, I have written a script that reads the href tags of a web page, extracts the links on that page, and writes them to a text file. I now have a text file containing links like these:

http://news.bbc.co.uk/2/hi/health/default.stm
http://news.bbc.co.uk/weather/
http://news.bbc.co.uk/weather/forecast/8?area=London
http://newsvote.bbc.co.uk/1/shared/fds/hi/business/market_data/overview/default.stm
http://purl.org/dc/terms/
http://static.bbci.co.uk/bbcdotcom/0.3.131/style/3pt_ads.css
http://static.bbci.co.uk/frameworks/barlesque/2.8.7/desktop/3.5/style/main.css
http://static.bbci.co.uk/frameworks/pulsesurvey/0.7.0/style/pulse.css
http://static.bbci.co.uk/wwhomepage-3.5/1.0.48/css/bundles/ie6.css
http://static.bbci.co.uk/wwhomepage-3.5/1.0.48/css/bundles/ie7.css
http://static.bbci.co.uk/wwhomepage-3.5/1.0.48/css/bundles/ie8.css
http://static.bbci.co.uk/wwhomepage-3.5/1.0.48/css/bundles/main.css
http://static.bbci.co.uk/wwhomepage-3.5/1.0.48/img/iphone.png
http://www.bbcamerica.com/
http://www.bbc.com/future
http://www.bbc.com/future/
http://www.bbc.com/future/story/20120719-how-to-land-on-mars
http://www.bbc.com/future/story/20120719-road-opens-for-connected-cars
http://www.bbc.com/future/story/20120724-in-search-of-aliens
http://www.bbc.com/news/

I would like to filter them so that I get back something like:

http://www.bbc.com : 6
http://static.bbci.co.uk : 15

The value next to each domain is the number of times that domain appears in the file. How can I achieve this in bash, given that I would need a loop over the file? I am new to bash shell scripting.

2 answers:

Answer 0: (score: 7)

$ cut -d/ -f-3 urls.txt | sort | uniq -c                  
3 http://news.bbc.co.uk
1 http://newsvote.bbc.co.uk
1 http://purl.org
8 http://static.bbci.co.uk
1 http://www.bbcamerica.com
6 http://www.bbc.com
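The question's desired output puts the domain first, then the count. A small awk step after `uniq -c` can swap the columns into that format. A sketch, assuming the links live in a file called urls.txt (the three sample lines below are just for illustration):

```shell
# Sample input; in practice this is the file of scraped links
cat > urls.txt <<'EOF'
http://news.bbc.co.uk/2/hi/health/default.stm
http://news.bbc.co.uk/weather/
http://www.bbc.com/news/
EOF

# cut keeps scheme://host, sort+uniq -c count duplicates,
# awk swaps "count domain" into "domain : count"
cut -d/ -f-3 urls.txt | sort | uniq -c | awk '{print $2 " : " $1}'
# http://news.bbc.co.uk : 2
# http://www.bbc.com : 1
```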

Answer 1: (score: 6)

Something like this:

egrep -o '^http://[^/]+' domain.txt | sort | uniq -c

Output on the sample data:

3 http://news.bbc.co.uk
1 http://newsvote.bbc.co.uk
1 http://purl.org
8 http://static.bbci.co.uk
6 http://www.bbc.com
1 http://www.bbcamerica.com

This solution works even when a line consists of a bare URL with no trailing slash, so

http://www.bbc.com/news
http://www.bbc.com/
http://www.bbc.com

would all fall into the same group.

If you also want to allow https, you can write:

egrep -o '^https?://[^/]+' domain.txt | sort | uniq -c

If other protocols are possible, such as ftp, mailto, and so on, you can even be very permissive and write:

egrep -o '^[^:]+://[^/]+' domain.txt | sort | uniq -c
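Since the question explicitly asks for a loop, here is a pure-bash sketch that does the same counting with an associative array instead of the cut/grep pipelines. It assumes bash 4+ (for `declare -A`); the file name urls.txt and the three sample URLs are just for illustration:

```shell
#!/usr/bin/env bash
# Sample input; in practice this is the file of scraped links
cat > urls.txt <<'EOF'
http://news.bbc.co.uk/weather/
http://news.bbc.co.uk/2/hi/health/default.stm
http://www.bbc.com/news/
EOF

# Count each scheme://host pair with an explicit loop (bash 4+)
declare -A counts
while IFS= read -r url; do
    [[ -z $url ]] && continue            # skip blank lines
    proto=${url%%://*}                   # e.g. http
    host=${url#*://}; host=${host%%/*}   # e.g. www.bbc.com
    key="$proto://$host"
    counts[$key]=$(( ${counts[$key]:-0} + 1 ))
done < urls.txt

# Print in the "domain : count" format from the question
for domain in "${!counts[@]}"; do
    printf '%s : %d\n' "$domain" "${counts[$domain]}"
done
```

Note that the order of an associative array's keys is unspecified; pipe the output through `sort` if you need it ordered.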