Scraping URLs from a web page with BeautifulSoup

Asked: 2016-04-22 18:27:56

Tags: python web-scraping beautifulsoup

I can scrape the page for the headlines, no problem. The URLs are another story. They are fragments appended to the end of the base URL, as I understand it... I need to extract the relevant story URLs in the format base_url.scraped_fragment

from urllib2 import urlopen
import requests
from bs4 import BeautifulSoup
import csv
import MySQLdb
import re


html = urlopen("http://advances.sciencemag.org/")
soup = BeautifulSoup(html.read().decode('utf-8'), "lxml")
#links = soup.findAll("a","href")
headlines = soup.findAll("div", "highwire-cite-title media__headline__title")
for headline in headlines:
    text = headline.get_text()
    print text

1 Answer:

Answer 0 (score: 0)

First of all, note that there has to be a space between the two class names:

highwire-cite-title media__headline__title
               HERE^
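
As a side note: when you pass the classes as one space-separated string, BeautifulSoup compares the exact class attribute value, so the order of the classes matters. To match tags that carry both classes in any order, a CSS selector is more reliable. A minimal, self-contained sketch using the question's class names:

from bs4 import BeautifulSoup

html = '<div class="media__headline__title highwire-cite-title">A headline</div>'
soup = BeautifulSoup(html, "lxml")

# The string form compares the exact attribute value, so a different
# class order does not match:
print(soup.find_all("div", "highwire-cite-title media__headline__title"))  # []

# A CSS selector matches elements carrying both classes, in any order:
print(soup.select("div.highwire-cite-title.media__headline__title"))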

Anyway, since you need the links, you should locate the a elements instead and make the URLs absolute with urljoin():

from urlparse import urljoin

import requests
from bs4 import BeautifulSoup


base_url = "http://advances.sciencemag.org"
response = requests.get(base_url)
soup = BeautifulSoup(response.content, "lxml")

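# The linked titles are a elements whose href holds the relative story fragment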
headlines = soup.find_all(class_="highwire-cite-linked-title")
for headline in headlines:
    print(urljoin(base_url, headline["href"]))

Prints:

http://advances.sciencemag.org/content/2/4/e1600069
http://advances.sciencemag.org/content/2/4/e1501914
http://advances.sciencemag.org/content/2/4/e1501737
...
http://advances.sciencemag.org/content/2/2
http://advances.sciencemag.org/content/2/1
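
If you also need the headline text together with each absolute URL, the same elements provide both. A minimal sketch extending the answer's code, assuming (as the output above suggests) that each highwire-cite-linked-title element is an a tag with an href attribute:

from urlparse import urljoin  # on Python 3: from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

base_url = "http://advances.sciencemag.org"
soup = BeautifulSoup(requests.get(base_url).content, "lxml")

for link in soup.find_all(class_="highwire-cite-linked-title"):
    # The title text and the relative href come from the same a element
    print("%s -> %s" % (link.get_text(strip=True), urljoin(base_url, link["href"])))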