Scraping data with BeautifulSoup

Date: 2019-11-27 03:25:45

Tags: python web-scraping

I'm trying to scrape data from Yelp, specifically the name, address, price, and rating of Nashville restaurants, using BeautifulSoup. I have two loops to collect the data. The second loop works, but the first one only works for a few items. I think it has to do with the CSS classes. I've tried every combination of classes I could think of, but still can't get it to work.

This is the page I'm scraping: https://www.yelp.com/search?find_desc=Restaurants&find_loc=Nashville%2C+TN

This is the code on GitHub.

When I print each list, these are the results (it appends "None" when nothing is found): the first row is the business name, the second row the rating, the third row the price, and the fourth row the address.
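For context, a typical BeautifulSoup pass over a page like this matches elements by class name. The class names below are illustrative only (Yelp's auto-generated class names change frequently, which is a likely reason only a few items matched in the first loop):

```python
from bs4 import BeautifulSoup

# A tiny stand-in for one search-result card; real Yelp markup differs.
html = """
<div class="container__09f24__21w3G">
  <a class="css-1422juy">Hattie B's</a>
  <span class="priceRange__09f24__2GspP">$$</span>
</div>
"""

soup = BeautifulSoup(html, "html.parser")

# Matching on auto-generated class names is brittle: a partial or outdated
# class matches nothing, and find() then returns None.
names = [a.get_text() for a in soup.find_all("a", class_="css-1422juy")]
prices = [s.get_text() for s in soup.find_all("span", class_="priceRange__09f24__2GspP")]
print(names, prices)
```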

1 Answer:

Answer 0 (score: 0)

I noticed that the data is being fetched by JavaScript, so I used this call, which returns JSON, to get the data.

It's faster and cleaner.

import requests, os, csv
from urllib.parse import urljoin

def SaveAsCsv(list_of_rows):
  try:
    with open('data.csv', mode='a',  newline='', encoding='utf-8') as outfile:
      csv.writer(outfile).writerow(list_of_rows)
  except PermissionError:
    print("Please make sure data.csv is closed\n")


def Search():
  payload = {
        'find_desc': 'Restaurants',
        'find_loc': 'Nashville, TN',
        'start': 30,  # for the second page, set start to 60, and so on
        'parent_request_id': 'f3d6966567be99d1',
        'request_origin': 'user'}
  res = requests.get(url, params=payload)
  if res.status_code == 200:
    return res.json()
  # implicitly returns None on a non-200 response;
  # Extract() catches the resulting TypeError

def Extract():
  try:
    JsonObj          = Search()
    Data             = JsonObj['searchPageProps']['searchResultsProps']['searchResults']
    if Data is not None:
      for index, item in enumerate(Data, 1):
        print('getting item {} out of {}'.format(index, len(Data)))
        if item.get('searchResultBusiness'):
          name   = item['searchResultBusiness']['name']
          rating = item['searchResultBusiness']['rating']
          price  = item['searchResultBusiness']['priceRange']
          rank   = item['searchResultBusiness']['ranking']
          review = item['searchResultBusiness']['reviewCount']
          phone  = item['searchResultBusiness']['phone']
          busUrl = urljoin(url, item['searchResultBusiness']['businessUrl'])
          SaveAsCsv([name,rating,price,rank,review,phone,busUrl])
  except Exception as e:
    print(e)


url = 'https://www.yelp.com/search/snippet'
if os.path.isfile('data.csv') and os.access('data.csv', os.R_OK):
  print("File data.csv Already exists \n")
else:
  SaveAsCsv([ 'name','rating','priceRange','ranking','reviewCount','phone','businessUrl'])
Extract()
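Once the script has run, the rows can be loaded back with the standard csv module. This is a minimal sketch: it first writes a sample file in the same format SaveAsCsv produces (the restaurant values are illustrative, not real data), then reads it back as dicts keyed by the header row:

```python
import csv

# Write a sample file in the format the script above produces:
# header row, then one data row (values here are made up).
with open('data.csv', mode='w', newline='', encoding='utf-8') as f:
    w = csv.writer(f)
    w.writerow(['name', 'rating', 'priceRange', 'ranking',
                'reviewCount', 'phone', 'businessUrl'])
    w.writerow(["Hattie B's", 4.5, '$$', 1, 3200,
                '(615) 555-0100', 'https://www.yelp.com/biz/example'])

# Read it back; DictReader keys each row by the header names.
with open('data.csv', newline='', encoding='utf-8') as f:
    rows = list(csv.DictReader(f))

for row in rows:
    print(row['name'], row['rating'], row['priceRange'])
```

Note that csv reads everything back as strings, so the rating comes back as '4.5', not a float.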