I am trying to scrape data, but it only gets 10 pages of data when there are 26 pages

Asked: 2018-04-20 10:03:38

Tags: python python-3.x web-scraping beautifulsoup python-requests

import requests
from bs4 import BeautifulSoup

r = requests.get("https://www.flipkart.com/search?as=on&as-pos=1_1_ic_lapto&as-show=on&otracker=start&page=1&q=laptop&sid=6bo%2Fb5g&viewType=list")

c = r.content

soup = BeautifulSoup(c,"html.parser")

all = soup.find_all("div",{"class":"col _2-gKeQ"})

page_nr=soup.find_all("a",{"class":"_33m_Yg"})[-1].text
print(page_nr,"number of pages were found")



#all[0].find("div",{"class":"_1vC4OE _2rQ-NK"}).text



l=[]
base_url="https://www.flipkart.com/search?as=on&as-pos=1_1_ic_lapto&as-show=on&otracker=start&page=1&q=laptop&sid=6bo%2Fb5g&viewType=list"
for page in range(0,int(page_nr)*10,10):
    print( )
    r=requests.get(base_url+str(page)+".html")
    c=r.content
    #c=r.json()["list"]
    soup=BeautifulSoup(c,"html.parser")    

    for item in all:
        d ={}
        #price
        d["Price"] = item.find("div",{"class":"_1vC4OE _2rQ-NK"}).text
        #Name
        d["Name"] =  item.find("div",{"class":"_3wU53n"}).text

        for li in item.find_all("li",{"class":"_1ZRRx1"}):
            if " EMI" in li.text:
                d["EMI"] = li.text
            else:
                d["EMI"] = None

        for li1 in item.find_all("li",{"class":"_1ZRRx1"}):
            if "Special " in li1.text:
                d["Special Price"] = li1.text
            else:
                d["Special Price"] = None    

        for val in item.find_all("li",{"class":"tVe95H"}):
            if "Display" in val.text:
                d["Display"] = val.text

            elif "Warranty" in val.text:
                d["Warrenty"] = val.text

            elif "RAM" in val.text:
                d["Ram"] = val.text



        l.append(d) 




import pandas
df = pandas.DataFrame(l)

3 Answers:

Answer 0 (score: 1)

This could work with standard pagination:

i = 1
items_parsed = set()
loop = True
base_url = "https://www.flipkart.com/search?as=on&as-pos=1_1_ic_lapto&as-show=on&otracker=start&page={}&q=laptop&sid=6bo%2Fb5g&viewType=list"
while True:
    page = requests.get(base_url.format(i))
    soup = BeautifulSoup(page.content, "html.parser")
    # Select the result elements on this page (selector taken from the question)
    items = soup.find_all("div", {"class": "col _2-gKeQ"})
    if not items:
        break
    for item in items:
        # Scrape your item; once the scrape succeeds, return the URL of the
        # parsed item into url_parsed (details below the code), for example:
        url_parsed = your_stuff(item)
        if url_parsed in items_parsed:
            loop = False
        items_parsed.add(url_parsed)
    if not loop:
        break
    i += 1

I formatted your URL as ?page=X with base_url.format(i) so that it can iterate until you find no items on a page; alternatively, some sites send you back to page 1 once you request max_page + 1.

If, above the max page, you get items you already parsed on the first page, you can declare a set(), put in the URL of every item you parse, and then check whether you have already parsed them.

Note that this is just an idea.
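
As a complement, here is a minimal sketch of what the your_stuff() placeholder might look like; it is a hypothetical helper (not defined in the answer) that pulls the product link out of one result card so the link can serve as the deduplication key:

# Hypothetical helper, for illustration only: extract the first product link
# from a result card ("div.col _2-gKeQ" in the question's code) to use as a
# unique key in items_parsed.
def your_stuff(item):
    link = item.find("a", href=True)       # first anchor that has an href
    return link["href"] if link else None  # relative product URL, or None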

Answer 1 (score: 0)

You only get the first 10 pages from your initial URL. You can loop from "&page=1" to "&page=26", as in the sketch below.
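
A minimal sketch of that loop, assuming the total of 26 pages is known up front and reusing the item selector from the question:

import requests
from bs4 import BeautifulSoup

laptops = []
base_url = ("https://www.flipkart.com/search?as=on&as-pos=1_1_ic_lapto"
            "&as-show=on&otracker=start&q=laptop&sid=6bo%2Fb5g"
            "&viewType=list&page={}")
for page in range(1, 27):  # pages 1 through 26
    r = requests.get(base_url.format(page))
    soup = BeautifulSoup(r.content, "html.parser")
    # Collect the result cards from every page, not just page 1
    laptops.extend(soup.find_all("div", {"class": "col _2-gKeQ"}))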

Answer 2 (score: 0)

Since the page number sits roughly in the middle of the URL, I applied a similar change to your code:

base_url="https://www.flipkart.com/search?as=on&as-pos=1_1_ic_lapto&as-show=on&otracker=start&page="
end_url ="&q=laptop&sid=6bo%2Fb5g&viewType=list"

for page in range(1, page_nr + 1):
    r=requests.get(base_url+str(page)+end_url+".html")
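
For this to return different data on each pass, the loop body also has to re-parse every response instead of iterating over the all list that was built from page 1. A minimal sketch of the corrected loop, reusing the question's own selectors:

l = []
for page in range(1, int(page_nr) + 1):
    r = requests.get(base_url + str(page) + end_url)
    soup = BeautifulSoup(r.content, "html.parser")
    # Re-select the result cards from *this* page's soup; reusing the `all`
    # list from the first request repeats page 1's items on every pass.
    for item in soup.find_all("div", {"class": "col _2-gKeQ"}):
        d = {}
        d["Price"] = item.find("div", {"class": "_1vC4OE _2rQ-NK"}).text
        d["Name"] = item.find("div", {"class": "_3wU53n"}).text
        l.append(d)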