How do I scrape a page using BeautifulSoup and Python?

Asked: 2015-04-02 20:25:24

Tags: python python-2.7 web-scraping

I'm trying to extract information from the BBC Good Food website, but I'm having some trouble narrowing down the data I collect.

Here is what I have so far:

from bs4 import BeautifulSoup
import requests

webpage = requests.get('http://www.bbcgoodfood.com/search/recipes?query=tomato')
soup = BeautifulSoup(webpage.content, "html.parser")
links = soup.find_all("a")

for anchor in links:
    print(anchor.get('href'), anchor.text)

This returns every link on the page along with its text description, but I want to extract only the links that sit inside the `article` elements on the page. These are the links to the individual recipes.

Through some experimentation I have managed to return the text from the articles, but I can't seem to extract the links.

2 answers:

Answer 0 (score: 4)

The only two things I see attached to the `article` tags are an `href` and an `img.src`:

from bs4 import BeautifulSoup
import requests

webpage = requests.get('http://www.bbcgoodfood.com/search/recipes?query=tomato')
soup = BeautifulSoup(webpage.content, "html.parser")
links = soup.find_all("article")

for ele in links:
    print(ele.a["href"])
    print(ele.img["src"])

The links live inside the `"class=node-title"` headings:

from bs4 import BeautifulSoup
import requests

webpage = requests.get('http://www.bbcgoodfood.com/search/recipes?query=tomato')
soup = BeautifulSoup(webpage.content, "html.parser")


links = soup.find("div",{"class":"main row grid-padding"}).find_all("h2",{"class":"node-title"})

for l in links:
    print(l.a["href"])

/recipes/681646/tomato-tart
/recipes/4468/stuffed-tomatoes
/recipes/1641/charred-tomatoes
/recipes/tomato-confit
/recipes/1575635/roast-tomatoes
/recipes/2536638/tomato-passata
/recipes/2518/cherry-tomatoes
/recipes/681653/stuffed-tomatoes
/recipes/2852676/tomato-sauce
/recipes/2075/tomato-soup
/recipes/339605/tomato-sauce
/recipes/2130/essence-of-tomatoes-
/recipes/2942/tomato-tarts
/recipes/741638/fried-green-tomatoes-with-ripe-tomato-salsa
/recipes/3509/honey-and-thyme-tomatoes

To actually visit them, you need to prepend http://www.bbcgoodfood.com:

for l in links:
    print(requests.get("http://www.bbcgoodfood.com{}".format(l.a["href"])).status_code)
200
200
200
200
200
200
200
200
200
200
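Instead of hard-coding the prefix with string formatting, the standard library's `urljoin` builds the absolute URL from the base and the relative path. A minimal sketch using one of the paths above (on Python 2.7, as tagged, the import is `from urlparse import urljoin`):

```python
from urllib.parse import urljoin  # Python 2.7: from urlparse import urljoin

base = "http://www.bbcgoodfood.com"
relative = "/recipes/681646/tomato-tart"

# urljoin handles the slash between base and path correctly
print(urljoin(base, relative))  # http://www.bbcgoodfood.com/recipes/681646/tomato-tart
```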

Answer 1 (score: 1)

The structure of the BBC Good Food pages has since changed.

I've managed to adapt the code as follows; it isn't perfect, but it's something to build on:

import numpy as np
import requests
from bs4 import BeautifulSoup

# Collect recipe URLs across several search-result pages
listofurls = []
pages = np.arange(1, 10, 1)
ingredientlist = ['milk', 'eggs', 'flour']
for ingredient in ingredientlist:
    for page in pages:
        # Use a separate name for the response so it doesn't shadow the page counter
        response = requests.get('https://www.bbcgoodfood.com/search/recipes/page/' + str(page) + '/?q=' + ingredient + '&sort=-relevance')
        soup = BeautifulSoup(response.content, "html.parser")
        for link in soup.find_all(class_="standard-card-new__article-title"):
            listofurls.append("https://www.bbcgoodfood.com" + link.get('href'))
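As a side note, the same class-based lookup can be written with BeautifulSoup's CSS-selector interface, `select()`. A small self-contained sketch on a hypothetical snippet of markup (the class name `standard-card-new__article-title` is the one used above and may have changed again since):

```python
from bs4 import BeautifulSoup

# Hypothetical fragment mimicking one search-result card
html = '<a class="standard-card-new__article-title" href="/recipes/123/test">Test recipe</a>'
soup = BeautifulSoup(html, "html.parser")

# CSS selector: <a> tags carrying the card-title class
for link in soup.select("a.standard-card-new__article-title"):
    print(link.get("href"))  # /recipes/123/test
```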