I've written a script using scrapy to fetch the name, phone number and email from a website. The content I'm after is spread across two different links: the name and phone are on one link, and the email is on another. Using yellowpages.com as an example, I'm trying to implement the logic in such a way that the email, which sits on each listing's landing page, can also be parsed. It is my requirement that I can't use meta. However, I did get a working version by combining requests and BeautifulSoup with scrapy, but it is really slow.

Working one (using requests and BeautifulSoup):
import scrapy
import requests
from bs4 import BeautifulSoup
from scrapy.crawler import CrawlerProcess

def get_email(target_link):
    res = requests.get(target_link)
    soup = BeautifulSoup(res.text, "lxml")
    email = soup.select_one("a.email-business[href^='mailto:']")
    if email:
        return email.get("href")
    else:
        return None

class YellowpagesSpider(scrapy.Spider):
    name = "yellowpages"
    start_urls = ["https://www.yellowpages.com/search?search_terms=Coffee+Shops&geo_location_terms=San+Francisco%2C+CA"]

    def parse(self, response):
        for items in response.css("div.v-card .info"):
            name = items.css("a.business-name > span::text").get()
            phone = items.css("div.phones::text").get()
            email = get_email(response.urljoin(items.css("a.business-name::attr(href)").get()))
            yield {"Name": name, "Phone": phone, "Email": email}

if __name__ == "__main__":
    c = CrawlerProcess({
        'USER_AGENT': 'Mozilla/5.0',
    })
    c.crawl(YellowpagesSpider)
    c.start()
I'm trying to mimic the concept above without requests and BeautifulSoup, but I can't get it to work:
import scrapy
from scrapy.crawler import CrawlerProcess

class YellowpagesSpider(scrapy.Spider):
    name = "yellowpages"
    start_urls = ["https://www.yellowpages.com/search?search_terms=Coffee+Shops&geo_location_terms=San+Francisco%2C+CA"]

    def parse(self, response):
        for items in response.css("div.v-card .info"):
            name = items.css("a.business-name > span::text").get()
            phone = items.css("div.phones::text").get()
            email_link = response.urljoin(items.css("a.business-name::attr(href)").get())
            # CAN'T APPLY THE LOGIC IN THE FOLLOWING LINE
            email = self.get_email(email_link)
            yield {"Name": name, "Phone": phone, "Email": email}

    def get_email(self, link):
        # `response` is not defined in this scope -- this is where it breaks
        email = response.css("a.email-business[href^='mailto:']::attr(href)").get()
        return email

if __name__ == "__main__":
    c = CrawlerProcess({
        'USER_AGENT': 'Mozilla/5.0',
    })
    c.crawl(YellowpagesSpider)
    c.start()
How can I make my second script mimic the first one?
Answer (score: 1):
I would use response.meta, but since you need to avoid it, okay, let's try another way: check out the lib https://pypi.org/project/scrapy-inline-requests/
import scrapy
from inline_requests import inline_requests

class YellowpagesSpider(scrapy.Spider):
    name = "yellowpages"
    start_urls = ["https://www.yellowpages.com/search?search_terms=Coffee+Shops&geo_location_terms=San+Francisco%2C+CA"]

    @inline_requests
    def parse(self, response):
        for items in response.css("div.v-card .info"):
            name = items.css("a.business-name > span::text").get()
            phone = items.css("div.phones::text").get()
            email_url = items.css("a.business-name::attr(href)").get()
            # yield the Request inline and receive its Response right here
            email_resp = yield scrapy.Request(response.urljoin(email_url), meta={'handle_httpstatus_all': True})
            email = email_resp.css("a.email-business[href^='mailto:']::attr(href)").get() if email_resp.status == 200 else None
            yield {"Name": name, "Phone": phone, "Email": email}