I am using requests to fetch web pages, for example as follows.
import requests
from bs4 import BeautifulSoup
url = "http://www.ofsted.gov.uk/inspection-reports/find-inspection-report/provider/CARE/EY298883"
r = requests.get(url)
soup = BeautifulSoup(r.text)
For each of these pages I want to get the first PDF pointed to in the "Latest reports" section. How can you do that with BeautifulSoup?
The relevant part of the HTML is
<tbody>
<tr>
<th scope="col">Latest reports</th>
<th scope="col" class="date">Inspection <br/>date</th>
<th scope="col" class="date">First<br/>publication<br/>date</th>
</tr>
<tr>
<td><a href="/provider/files/1266031/urn/106428.pdf"><span class="icon pdf">pdf</span> Early years inspection report </a></td>
<td class="date">12 Mar 2009</td>
<td class="date">4 Apr 2009</td>
</tr> </tbody>
The following code looks like it should work, but it doesn't.
ofstedbase = "http://www.ofsted.gov.uk"
for col_header in soup.findAll('th'):
    if not col_header.contents[0] == "Latest reports": continue
    for link in col_header.parent.parent.findAll('a'):
        if 'href' in link.attrs and link['href'].endswith('pdf'): break
    else:
        print '"Latest reports" PDF not found'
        break
    print '"Latest reports" PDF points at', link['href']
    p = requests.get(ofstedbase+link['href'])
    print p.content
    break
The problem is that p contains another web page rather than the PDF it should. Is there a way to get the actual PDF?
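One way to see what is actually coming back is to print the response's Content-Type header; a minimal sketch of that check, reusing the example PDF path from the HTML snippet above:

import requests

ofstedbase = "http://www.ofsted.gov.uk"
# The href below is the example link from the "Latest reports" row above.
pdf_href = "/provider/files/1266031/urn/106428.pdf"

p = requests.get(ofstedbase + pdf_href)
# text/html here (rather than application/pdf) means the server returned
# an intermediate web page instead of the file itself.
print p.headers.get('content-type')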
Update
Got it working with one more iteration of BeautifulSoup:
import re  # needed for re.compile below
souppage = BeautifulSoup(p.text)
line = souppage.findAll('a',text=re.compile("requested"))[0]
pdf = requests.get(ofstedbase+line['href'])
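If the goal is a file on disk rather than just the bytes in memory, the response content can be written out directly; a minimal sketch, with the output filename chosen arbitrarily for illustration:

# pdf.content holds the raw bytes of the report fetched above;
# "latest_report.pdf" is just an illustrative local filename.
with open("latest_report.pdf", "wb") as f:
    f.write(pdf.content)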
Any better/nicer solutions gratefully received.
Answer 0 (score: 2)
It's not the cleanest solution, but you can iterate over the column headers until you find "Latest reports", then search that table for the first link pointing to a PDF file.
for col_header in soup.findAll('th'):
    if not col_header.contents[0] == "Latest reports": continue
    for link in col_header.parent.parent.findAll('a'):
        if 'href' in link.attrs and link['href'].endswith('pdf'): break
    else:
        print '"Latest reports" PDF not found'
        break
    print '"Latest reports" PDF points at', link['href']
    break
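The snippet above only prints the link; to actually retrieve the file you would still request it and, as the question's update notes, follow the intermediate page when the server does not return the PDF directly. A sketch of that step, assuming the same ofstedbase and the "requested" link text described in the update:

import re
import requests
from bs4 import BeautifulSoup

ofstedbase = "http://www.ofsted.gov.uk"
p = requests.get(ofstedbase + link['href'])

if 'pdf' in p.headers.get('content-type', ''):
    pdf_bytes = p.content
else:
    # The server returned an intermediate page; follow its "requested" link.
    souppage = BeautifulSoup(p.text)
    line = souppage.findAll('a', text=re.compile("requested"))[0]
    pdf_bytes = requests.get(ofstedbase + line['href']).content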
You could try using Selenium WebDriver (python -m "easy_install" selenium) to automatically instruct Firefox to download the file. This requires Firefox:
from selenium import webdriver
from bs4 import BeautifulSoup
profile = webdriver.FirefoxProfile()
# Configure Firefox to save PDFs straight to disk instead of prompting
# or opening the built-in viewer.
profile.set_preference('browser.helperApps.neverAsk.saveToDisk', ('application/pdf'))
profile.set_preference("pdfjs.previousHandler.alwaysAskBeforeHandling", False)
profile.set_preference("browser.helperApps.alwaysAsk.force", False)
profile.set_preference("browser.download.manager.showWhenStarting", False)
driver = webdriver.Firefox(firefox_profile = profile)
base_url = "http://www.ofsted.gov.uk"
driver.get(base_url + "/inspection-reports/find-inspection-report/provider/CARE/EY298883")
soup = BeautifulSoup(driver.page_source)
for col_header in soup.findAll('th'):
    if not col_header.contents[0] == "Latest reports": continue
    for link in col_header.parent.parent.findAll('a'):
        if 'href' in link.attrs and link['href'].endswith('pdf'): break
    else:
        print '"Latest reports" PDF not found'
        break
    print '"Latest reports" PDF points at', link['href']
    driver.get(base_url + link['href'])
This solution is very powerful because it can do everything a human user can, but it has drawbacks. For example, I tried to address the issue of Firefox prompting for the download, but it didn't work for me. Results may vary depending on your installed add-ons and Firefox version.
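If the prompt cannot be suppressed reliably, one option (assuming these preferences behave the same on your Firefox version) is to at least direct downloads to a fixed folder so the PDF lands in a predictable place; a sketch, with an example path:

# folderList = 2 tells Firefox to use the custom directory below;
# the path is only an example and should be adjusted.
profile.set_preference("browser.download.folderList", 2)
profile.set_preference("browser.download.dir", "/tmp/ofsted_pdfs")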
Answer 1 (score: 0)
Got it working with one more iteration of BeautifulSoup:
import re  # needed for re.compile below
souppage = BeautifulSoup(p.text)
line = souppage.findAll('a',text=re.compile("requested"))[0]
pdf = requests.get(ofstedbase+line['href'])