Repeating the same scraper code across multiple URLs

Time: 2019-05-05 21:33:23

Tags: python loops web-scraping beautifulsoup

Now I need to repeat the same code across multiple subdomains. Here is my current code:


I've edited the code to better reflect my question:

for base in urls:
    urls = [
        "https://www.pedidosya.com.ar/restaurantes/buenos-aires/recoleta/empanadas-delivery",
        "https://www.pedidosya.com.ar/restaurantes/buenos-aires/almagro/empanadas-delivery",
        "https://www.pedidosya.com.ar/restaurantes/buenos-aires/palermo/empanadas-delivery",
        "https://www.pedidosya.com.ar/restaurantes/buenos-aires/villa-crespo/empanadas-delivery",
        "https://www.pedidosya.com.ar/restaurantes/buenos-aires/balvanera/empanadas-delivery",
    ]
    page = 1
    restaurants = []

while True:
    soup = bs(requests.get(base + str(page)).text, "html.parser")
    page += 1
    sections = soup.find_all("section", attrs={"class": "restaurantData"})

    if not sections: break

    for section in sections:
        for elem in section.find_all("a", href=True, attrs={"class": "arrivalName"}):
            restaurants.append({"name": elem.text, "url": elem["href"],})

I need a .CSV with the following columns:

[(url, name of all restaurants in each url, url for each restaurant)]
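A minimal sketch of that CSV layout using `csv.DictWriter` (the column names `subdomain`, `name`, `url` and the sample rows are assumptions for illustration, not data from the site):

```python
import csv

# Hypothetical sample rows: one row per restaurant,
# keyed by the listing URL it was scraped from.
rows = [
    {"subdomain": "https://example.com/recoleta", "name": "Resto A", "url": "https://example.com/resto-a"},
    {"subdomain": "https://example.com/recoleta", "name": "Resto B", "url": "https://example.com/resto-b"},
]

with open("restaurants.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["subdomain", "name", "url"])
    writer.writeheader()          # writes the header row
    writer.writerows(rows)        # one CSV row per dict
```

`DictWriter` keeps the column order in one place (`fieldnames`), so rows can't silently end up in the wrong column.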

1 Answer:

Answer 0 (score: 0)

Sorry it took so long...

I think this is what you want:

from bs4 import BeautifulSoup as bs
import requests
import csv

urls = [
    "https://www.pedidosya.com.ar/restaurantes/buenos-aires/recoleta/empanadas-delivery",
    "https://www.pedidosya.com.ar/restaurantes/buenos-aires/almagro/empanadas-delivery",
    "https://www.pedidosya.com.ar/restaurantes/buenos-aires/palermo/empanadas-delivery",
    "https://www.pedidosya.com.ar/restaurantes/buenos-aires/villa-crespo/empanadas-delivery",
    "https://www.pedidosya.com.ar/restaurantes/buenos-aires/balvanera/empanadas-delivery",
]

# writing

with open("output.csv", "w", newline="") as csvfile:
    writer = csv.writer(csvfile, delimiter=",")
    writer.writerow(["subdomain", "name", "url"])  # delete this line if you don't want the header

    for url in urls:
        base = url + "?bt=RESTAURANT&page="
        page = 1

        while True:
            # fetch the current results page and parse it
            soup = bs(requests.get(base + str(page)).text, "html.parser")
            sections = soup.find_all("section", attrs={"class": "restaurantData"})

            # no restaurant sections means we've run out of pages
            if not sections:
                break

            for section in sections:
                for elem in section.find_all("a", href=True, attrs={"class": "arrivalName"}):
                    writer.writerow([base + str(page), elem.text, elem["href"]])
            page += 1

# reading

with open("output.csv", "r", newline="") as file:
    reader = csv.reader(file)
    for row in reader:
        # each row is a list of strings, which you can do what you want with
        print(row)

Here is the output:

subdomain,name,url
https://www.pedidosya.com.ar/restaurantes/buenos-aires/recoleta/empanadas-delivery?bt=RESTAURANT&page=1,Cümen-Cümen Empanadas Palermo,https://www.pedidosya.com.ar/restaurantes/buenos-aires/cumen-cumen-empanadas-palermo-menu
https://www.pedidosya.com.ar/restaurantes/buenos-aires/recoleta/empanadas-delivery?bt=RESTAURANT&page=1,El Maitén Empanadas - Al horno o fritas,https://www.pedidosya.com.ar/restaurantes/buenos-aires/el-maiten-empanadas-al-horno-o-fritas-menu
https://www.pedidosya.com.ar/restaurantes/buenos-aires/recoleta/empanadas-delivery?bt=RESTAURANT&page=1,Cümen-Cümen Empanadas - Barrio Norte,https://www.pedidosya.com.ar/restaurantes/buenos-aires/cumen-cumen-empanadas-barrio-norte-menu
https://www.pedidosya.com.ar/restaurantes/buenos-aires/recoleta/empanadas-delivery?bt=RESTAURANT&page=1,La Carbonera,https://www.pedidosya.com.ar/restaurantes/buenos-aires/la-carbonera-menu
https://www.pedidosya.com.ar/restaurantes/buenos-aires/recoleta/empanadas-delivery?bt=RESTAURANT&page=1,Tatú Empanadas Salteñas Palermo,https://www.pedidosya.com.ar/restaurantes/buenos-aires/tatu-empanadas-saltenas-palermo-menu
https://www.pedidosya.com.ar/restaurantes/buenos-aires/recoleta/empanadas-delivery?bt=RESTAURANT&page=1,Morita Palermo,https://www.pedidosya.com.ar/restaurantes/buenos-aires/morita-palermo-menu
https://www.pedidosya.com.ar/restaurantes/buenos-aires/recoleta/empanadas-delivery?bt=RESTAURANT&page=1,Doña Eulogia,https://www.pedidosya.com.ar/restaurantes/buenos-aires/dona-eulogia-menu
...
...
...

Output when reading the csv with Python:

['subdomain', 'name', 'url']
['https://www.pedidosya.com.ar/restaurantes/buenos-aires/recoleta/empanadas-delivery?bt=RESTAURANT&page=1', 'Cümen-Cümen Empanadas Palermo', 'https://www.pedidosya.com.ar/restaurantes/buenos-aires/cumen-cumen-empanadas-palermo-menu']
['https://www.pedidosya.com.ar/restaurantes/buenos-aires/recoleta/empanadas-delivery?bt=RESTAURANT&page=1', 'El Maitén Empanadas - Al horno o fritas', 'https://www.pedidosya.com.ar/restaurantes/buenos-aires/el-maiten-empanadas-al-horno-o-fritas-menu']
...
...
...

So when you read the csv, you get this (above): a bunch of lists that you can iterate over.
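If you would rather get dicts than lists when reading back, `csv.DictReader` uses the header row as keys. A self-contained sketch (it writes a tiny stand-in file with the same header, since the filename and contents here are just for illustration):

```python
import csv

# write a tiny stand-in file with the same header as output.csv
with open("sample.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["subdomain", "name", "url"])
    w.writerow(["https://example.com/page=1", "Resto A", "https://example.com/resto-a"])

# DictReader maps each row to a dict keyed by the header
with open("sample.csv", newline="") as f:
    for row in csv.DictReader(f):
        print(row["name"], "->", row["url"])
```

With dicts you can refer to columns by name (`row["url"]`) instead of by position (`row[2]`), which is less fragile if the column order ever changes.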

Good luck!