Unable to scrape all photos even after auto-scrolling with Selenium

Asked: 2019-04-03 16:35:41

Tags: web-scraping beautifulsoup selenium-chromedriver dynamic-html

I am new to web scraping. I am working on a project where I want to scrape all the photos of an Instagram user. The user has 521 posts in total, so I used Selenium to scroll down to the bottom of the profile. However, I can still only scrape the first 37 photos. On further inspection I found that, as the browser scrolls up and down, only a few rows of img tags are present in the page source at any given time. As I scroll, the previously visible img tags disappear and only the next rows appear, so only a limited number of rows ever exists in the HTML at once. I suspect this is why I can only scrape the first 37 photos.
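For reference, a quick check along the following lines (just a rough diagnostic sketch; driver stands for the already-initialised Selenium webdriver, and the single scroll step plus the 3-second sleep are arbitrary choices) can be used to compare how many img elements the live DOM actually holds before and after a scroll:

    # Rough diagnostic: count the <img> elements that exist in the DOM
    # before and after scrolling once to the bottom of the page.
    from time import sleep

    def count_img_tags(driver):
        # Only the thumbnails Instagram currently keeps mounted are counted,
        # not every photo in the profile.
        return len(driver.find_elements_by_tag_name('img'))

    before = count_img_tags(driver)
    driver.execute_script('window.scrollTo(0, document.body.scrollHeight);')
    sleep(3)  # give the next batch of thumbnails time to load
    after = count_img_tags(driver)
    print('img tags before scroll:', before, '| after scroll:', after)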

I would like to know how I can scrape all of the profile's photos. Below is my current code, which uses Beautiful Soup and Selenium. Here, the 'scroll_down' function uses Selenium to scroll to the bottom of the profile, and I try to scrape all the 'img' tags in the 'downloading_images' function, but as mentioned above I only get the first 37 photos.

    def downloading_images(self):
        # Parse whatever is currently in the DOM and collect every <img> tag.
        soup = BeautifulSoup(self.driver.page_source, 'html.parser')
        self.all_images = soup.findAll('img')
        print(len(self.all_images))

        for index, image in enumerate(self.all_images):
            filename = "image_" + str(index) + ".jpg"
            image_path = os.path.join(self.path, filename)
            link = image['src']
            print("Downloading image ", index)

            # Stream the thumbnail and copy the raw bytes straight to disk.
            response = requests.get(link, stream=True)
            try:
                with open(image_path, 'wb') as file:
                    shutil.copyfileobj(response.raw, file)
            except Exception as e:
                print(e)
                print('Could not download image no.', index)
                print('Image link', link)

    def scroll_down(self):
        sleep(3)
        try:
            # Read the post count from the profile header, e.g. "521 posts".
            num_posts = self.driver.find_element_by_xpath('//span[text()[contains(.," posts")]]/span[@class="g47SY "]')
            str_num_posts = str(num_posts.text).replace(',', '')
            self.int_num_posts = int(str_num_posts)

            # The grid loads roughly 12 thumbnails at a time, so estimate
            # how many scrolls are needed to reach the bottom.
            if self.int_num_posts > 12:
                num_scrolls = int(self.int_num_posts / 12) + 3
            else:
                num_scrolls = 1

            print(num_scrolls)
            sleep(3)
            try:
                for win in range(num_scrolls):
                    print(win)
                    self.driver.execute_script('window.scrollTo(0,document.body.scrollHeight);')
                    sleep(3)
            except Exception as e:
                self.error = True
                print(e)
        except Exception as e:
            self.error = True
            print(e)

I have searched through the related questions here, but none of them helped me understand how to grab those images from the page source when they keep disappearing as I scroll. I hope my question is clear. Thanks in advance.

EDIT: OK, I tried scraping on every scroll and it seems to work. Here is my new code.

    def downloading_images(self):
        # self.all_images is populated by scroll_down(), which collects the
        # img tags batch by batch while the profile is being scrolled.
        print(len(self.all_images))

        for index, image in enumerate(self.all_images):
            filename = "image_" + str(index) + ".jpg"
            image_path = os.path.join(self.path, filename)
            link = image['src']
            print("Downloading image ", index)

            # Stream the thumbnail and copy the raw bytes straight to disk.
            response = requests.get(link, stream=True)
            try:
                with open(image_path, 'wb') as file:
                    shutil.copyfileobj(response.raw, file)
            except Exception as e:
                print(e)
                print('Could not download image no.', index)
                print('Image link', link)

    def scroll_down(self):
        sleep(3)
        try:
            # Read the post count from the profile header, e.g. "521 posts".
            num_posts = self.driver.find_element_by_xpath('//span[text()[contains(.," posts")]]/span[@class="g47SY "]')
            str_num_posts = str(num_posts.text).replace(',', '')
            self.int_num_posts = int(str_num_posts)

            # The grid loads roughly 12 thumbnails at a time, so estimate
            # how many scrolls are needed to reach the bottom.
            if self.int_num_posts > 12:
                num_scrolls = int(self.int_num_posts / 12) + 1
            else:
                num_scrolls = self.int_num_posts

            print(num_scrolls)
            sleep(3)
            try:
                # Grab the img tags already in the DOM before scrolling starts.
                soup = BeautifulSoup(self.driver.page_source, 'html.parser')
                images = soup.findAll('img')
                self.all_images = images

                last_height = self.driver.execute_script("return document.body.scrollHeight")
                for win in range(num_scrolls):
                    print(win)
                    self.driver.execute_script('window.scrollTo(0,document.body.scrollHeight);')
                    sleep(3)
                    # Stop early if the page height no longer grows.
                    new_height = self.driver.execute_script("return document.body.scrollHeight")
                    if new_height == last_height:
                        break
                    # Re-parse the DOM and keep the 12 freshly loaded img tags.
                    soup = BeautifulSoup(self.driver.page_source, 'html.parser')
                    images = soup.findAll('img')
                    self.all_images.extend(images[-12:])
                    last_height = new_height

                print(len(self.all_images))
            except Exception as e:
                self.error = True
                print(e)
        except Exception as e:
            self.error = True
            print(e)



    def search_target(self):
        try:
            # Type the username into the search bar, then open the profile
            # directly via its URL.
            search_bar = self.driver.find_element_by_xpath('//input[@class="XTCLo x3qfX "]')
            search_bar.send_keys(self.target_username)
            target_profile_url = self.main_url + '/' + self.target_username + '/'
            self.driver.get(target_profile_url)
        except Exception as e:
            self.error = True
            print(e)


    def close_notify_box(self):
        try:
            sleep(3)
            # Dismiss the "Turn on Notifications" pop-up if it appears;
            # otherwise just carry on.
            not_now_button = self.driver.find_element_by_xpath('//button[@class="aOOlW   HoLwm "]')
            not_now_button.click()
        except Exception:
            pass


    def log_in(self):
        try:
            # Open the login form, fill in the credentials and submit.
            log_in_button = self.driver.find_element_by_xpath('//p[@class="izU2O"]/a')
            log_in_button.click()
            sleep(3)
            user_name_input = self.driver.find_element_by_xpath('//input[@aria-label="Phone number, username, or email"]')
            user_name_input.send_keys(self.username)
            password_input = self.driver.find_element_by_xpath('//input[@aria-label="Password"]')
            password_input.send_keys(self.password)
            password_input.submit()
        except Exception as e:
            self.error = True
            print(e)

I would like to know whether there are any alternative solutions, and whether this is an efficient way to do it. Thanks.
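One possible variation (only a sketch, not tested against Instagram's current markup; the helper names collect_current_srcs and scroll_and_collect, the attribute image_links, and the set-based de-duplication are my own additions) is to harvest the src of every img tag after each scroll and de-duplicate with a set, instead of relying on the images[-12:] slice, which can miss or repeat thumbnails when a scroll loads more or fewer than 12 new posts:

    def collect_current_srcs(self):
        # Return the src of every <img> currently mounted in the DOM.
        soup = BeautifulSoup(self.driver.page_source, 'html.parser')
        return [img['src'] for img in soup.findAll('img') if img.get('src')]

    def scroll_and_collect(self):
        # Scroll to the bottom repeatedly, harvesting image URLs on every
        # pass; the set filters out thumbnails that stay visible across scrolls.
        seen = set()
        ordered_links = []
        last_height = self.driver.execute_script("return document.body.scrollHeight")
        while True:
            for link in self.collect_current_srcs():
                if link not in seen:
                    seen.add(link)
                    ordered_links.append(link)
            self.driver.execute_script('window.scrollTo(0, document.body.scrollHeight);')
            sleep(3)
            new_height = self.driver.execute_script("return document.body.scrollHeight")
            if new_height == last_height:
                break
            last_height = new_height
        self.image_links = ordered_links

With something like this, downloading_images could iterate over self.image_links directly instead of over the stored tag objects.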

0 Answers:

There are no answers yet.