Python scraper advice

Date: 2017-02-27 17:34:42

Tags: python web-scraping beautifulsoup urllib bs4

I have been working on a scraper for a while now, and am very close to getting it to run as intended. My code is as follows:

import urllib.request
from bs4 import BeautifulSoup


# Crawls main site to get a list of city URLs
def getCityLinks():
    city_sauce = urllib.request.urlopen('https://www.prodigy-living.co.uk/') # Enter url here 
    city_soup = BeautifulSoup(city_sauce, 'html.parser')
    the_city_links = []

    for city in city_soup.findAll('div', class_="city-location-menu"):
        for a in city.findAll('a', href=True, text=True):
            the_city_links.append('https://www.prodigy-living.co.uk/' + a['href'])
    return the_city_links

# Crawls each of the city web pages to get a list of unit URLs
def getUnitLinks():
    for the_city_link in getCityLinks():
        unit_sauce = urllib.request.urlopen(the_city_link)
        unit_soup = BeautifulSoup(unit_sauce, 'html.parser')
        for unit_href in unit_soup.findAll('a', class_="btn white-green icon-right-open-big", href=True):
            yield('https://www.prodigy-living.co.uk/' + unit_href['href'])

the_unit_links = []
for link in getUnitLinks():
    the_unit_links.append(link)

# Soups returns all of the html for the items in the_unit_links


def soups():
    for the_links in the_unit_links:
        try:
            sauce = urllib.request.urlopen(the_links)
            for things in sauce:
                soup_maker = BeautifulSoup(things, 'html.parser')
                yield(soup_maker)
        except:
            print('Invalid url')

# Below scrapes property name, room type and room price

def getPropNames(soup):
    try:
        for propName in soup.findAll('div', class_="property-cta"):
            for h1 in propName.findAll('h1'):
                print(h1.text)
    except:
        print('Name not found')

def getPrice(soup):
    try:
        for price in soup.findAll('p', class_="room-price"):
            print(price.text)
    except:
        print('Price not found')


def getRoom(soup):
    try:
        for theRoom in soup.findAll('div', class_="featured-item-inner"):
            for h5 in theRoom.findAll('h5'):
                print(h5.text)
    except:
        print('Room not found')

for soup in soups():
    getPropNames(soup)
    getPrice(soup)
    getRoom(soup)

When I run it, it returns all of the prices for all of the URLs it picks up. However, it does not return the names or the rooms, and I'm not sure why. I would really appreciate any pointers on this, or ways to improve my code - I've been learning Python for a few months now!

1 Answer:

Answer 0 (score: 1)

I think the links you are scraping eventually redirect you to another website, in which case your scraping functions are useless! For instance, the link for a room in Birmingham redirects you to a different site.
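One way to catch those off-site links before scraping them is to compare domains. This is a minimal offline sketch, not part of the answer's code; the `is_same_site` helper name is my own:

```python
# Filter out hrefs that leave the site you intend to scrape.
from urllib.parse import urljoin, urlparse

BASE = "https://www.prodigy-living.co.uk/"

def is_same_site(href, base=BASE):
    """True if href, resolved against base, stays on the same domain."""
    full = urljoin(base, href)  # handles both relative and absolute hrefs
    return urlparse(full).netloc == urlparse(base).netloc

print(is_same_site("birmingham/some-room"))                     # True
print(is_same_site("http://www.iqstudentaccommodation.com/x"))  # False
```

Links that fail this check could be skipped, or scraped with a separate set of selectors suited to the other site.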

Also, be careful with the `find` and `find_all` methods in BS. The first returns only one tag (useful when you want a single property name, for instance), while `find_all()` returns a list, allowing you to get, for example, multiple room prices and types.
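A tiny self-contained demo of that difference (the HTML snippet here is made up, but the class name matches the one your scraper targets):

```python
from bs4 import BeautifulSoup

html = """
<div>
  <p class="room-price">£120</p>
  <p class="room-price">£135</p>
</div>
"""
soup = BeautifulSoup(html, "html.parser")

one = soup.find("p", class_="room-price")       # first matching Tag only
many = soup.find_all("p", class_="room-price")  # list of every match

print(one.get_text())                   # £120
print([p.get_text() for p in many])     # ['£120', '£135']
```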

Anyway, I simplified your code, and this is how I approached your problem. Maybe you'll want to take some inspiration from it:

import requests
from bs4 import BeautifulSoup

main_url = "https://www.prodigy-living.co.uk/"

# Getting individual cities url
resp = requests.get(main_url)
soup = BeautifulSoup(resp.text, "html.parser")
city_tags = soup.find("div", class_ = "footer-city-nav") # Bottom of page, not loaded dynamically
cities_links = [main_url + tag["href"] for tag in city_tags.find_all("a")] # Links to cities


# Getting the individual links to the apts
indiv_apts = []

for link in cities_links[0:4]:
    print("At link:", link)
    resp = requests.get(link)
    soup = BeautifulSoup(resp.text, "html.parser")
    links_tags = soup.find_all("a", class_ = "btn white-green icon-right-open-big")

    for url in links_tags:
        indiv_apts.append(main_url + url.get("href"))

# Now defining your functions
def GetName(tag):
    print(tag.find("h1").get_text())

def GetType_Price(tags_list):
    for tag in tags_list:
        print(tag.find("h5").get_text())
        print(tag.find("p", class_ = "room-price").get_text())

# Now scraping each of the apts - name, price, room.
for link in indiv_apts[0:2]:
    print("At link:", link)
    resp = requests.get(link)
    soup = BeautifulSoup(resp.text, "html.parser")
    property_tag = soup.find("div", class_ = "property-cta")
    rooms_tags = soup.find_all("div", class_ = "featured-item")
    GetName(property_tag)
    GetType_Price(rooms_tags)

You will see that at the second element of the list you get an AttributeError, because you are no longer on your site's pages. Indeed:

>>> print(indiv_apts[1])
https://www.prodigy-living.co.uk/http://www.iqstudentaccommodation.com/student-accommodation/birmingham/penworks-house?utm_source=prodigylivingwebsite&utm_campaign=birminghampagepenworksbutton&utm_medium=referral # You will not scrape the expected link right at the beginning
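For what it's worth, the standard library can avoid that doubled URL: `urllib.parse.urljoin` resolves relative hrefs against the base and leaves absolute ones untouched. A sketch (not part of the answer's code):

```python
from urllib.parse import urljoin

main_url = "https://www.prodigy-living.co.uk/"

# A relative href is resolved against the base URL...
print(urljoin(main_url, "birmingham/penworks-house"))
# https://www.prodigy-living.co.uk/birmingham/penworks-house

# ...while an absolute href replaces the base entirely, so nothing is doubled.
print(urljoin(main_url, "http://www.iqstudentaccommodation.com/foo"))
# http://www.iqstudentaccommodation.com/foo
```

Using `urljoin(main_url, url.get("href"))` instead of plain string concatenation when building `indiv_apts` would keep external links intact (you would still need to decide whether to scrape them at all).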

Next time, come with a precise problem to solve; otherwise, for this kind of question, take a look at the Code Review section instead.

On `find` and `find_all`: https://www.crummy.com/software/BeautifulSoup/bs4/doc/#calling-a-tag-is-like-calling-find-all

Finally, I think this also answers your question: https://stackoverflow.com/questions/42506033/urllib-error-urlerror-urlopen-error-errno-11001-getaddrinfo-failed

Cheers :)