Python - BeautifulSoup Webscrape

Posted: 2015-03-10 19:05:25

Tags: python html web-scraping beautifulsoup html-parsing

I'm trying to scrape a list of URLs from this site (http://thedataweb.rm.census.gov/ftp/cps_ftp.html), but I've had no luck following the tutorials. Here's an example of the code I've tried:

from bs4 import BeautifulSoup
import urllib2

url         = "http://thedataweb.rm.census.gov/ftp/cps_ftp.html"
page        = urllib2.urlopen(url)
soup        = BeautifulSoup(page.read())
cpsLinks    = soup.findAll(text = 
              "http://thedataweb.rm.census.gov/pub/cps/basic/")

print(cpsLinks)

I'm trying to extract links like this one:

http://thedataweb.rm.census.gov/pub/cps/basic/201501-/jan15pub.dat.gz

There should be around 200 of these links on the page. How can I get all of them?

1 Answer:

Answer 0 (score: 2)

As I understand it, you want to extract links that follow a specific pattern. BeautifulSoup lets you pass a regular expression pattern as an attribute value.

Let's use the pattern pub/cps/basic/\d+\-/\w+\.dat\.gz$. It matches pub/cps/basic/ followed by one or more digits (\d+), then a hyphen (\-), then a slash, then one or more word characters (\w+), and then .dat.gz at the end of the string. Note that . has special meaning in regular expressions and needs to be escaped with a backslash; the hyphen is escaped here too, although that is only strictly required inside a character class.
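
As a quick standalone check, not part of the original answer, you can run the pattern against the sample link from the question to confirm it matches:

import re

pattern = re.compile(r'pub/cps/basic/\d+\-/\w+\.dat\.gz$')

# sample href taken from the question; search() looks for the pattern anywhere in the string
href = "http://thedataweb.rm.census.gov/pub/cps/basic/201501-/jan15pub.dat.gz"
print(pattern.search(href) is not None)  # prints True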

Code:

import re
import urllib2

from bs4 import BeautifulSoup


url = "http://thedataweb.rm.census.gov/ftp/cps_ftp.html"
soup = BeautifulSoup(urllib2.urlopen(url))

# match every tag whose href attribute ends with the .dat.gz pattern
links = soup.find_all(href=re.compile(r'pub/cps/basic/\d+\-/\w+\.dat\.gz$'))

# each match is an <a> tag; print its text (the file size shown on the page) and its URL
for link in links:
    print link.text, link['href']

Prints:

13,232,040 http://thedataweb.rm.census.gov/pub/cps/basic/201501-/jan15pub.dat.gz
13,204,510 http://thedataweb.rm.census.gov/pub/cps/basic/201401-/dec14pub.dat.gz
13,394,607 http://thedataweb.rm.census.gov/pub/cps/basic/201401-/nov14pub.dat.gz
13,409,743 http://thedataweb.rm.census.gov/pub/cps/basic/201401-/oct14pub.dat.gz
13,208,428 http://thedataweb.rm.census.gov/pub/cps/basic/201401-/sep14pub.dat.gz
...
10,866,849 http://thedataweb.rm.census.gov/pub/cps/basic/199801-/jan99pub.dat.gz
3,172,305 http://thedataweb.rm.census.gov/pub/cps/basic/200701-/disability.dat.gz
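
If you're on Python 3, where urllib2 no longer exists, a minimal equivalent sketch (assuming the page is still served at the same URL) replaces it with urllib.request:

import re
from urllib.request import urlopen

from bs4 import BeautifulSoup

url = "http://thedataweb.rm.census.gov/ftp/cps_ftp.html"
# name the parser explicitly to avoid BeautifulSoup's "no parser specified" warning
soup = BeautifulSoup(urlopen(url), "html.parser")

links = soup.find_all(href=re.compile(r'pub/cps/basic/\d+\-/\w+\.dat\.gz$'))

for link in links:
    print(link.text, link['href'])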