Python BeautifulSoup duplicate entries

Date: 2016-06-13 01:49:01

Tags: python image beautifulsoup duplicates screen-scraping

This scrapes images from 4chan's photography board. The problem is that it scrapes the same image twice. I can't figure out why I'm getting duplicate pictures; if anyone could help, that would be great.

from bs4 import BeautifulSoup
import requests
import re
import urllib2
import os


def get_soup(url,header):
  return BeautifulSoup(urllib2.urlopen(urllib2.Request(url, headers=header)), 'lxml')

image_type = "image_name"
url = "http://boards.4chan.org/p/"
url = url.strip('\'"')
print url
header = {'User-Agent': 'Mozilla/5.0'} 
r = requests.get(url)
html_content = r.text
soup = BeautifulSoup(html_content, 'lxml')
anchors = soup.findAll('a')
links = [a['href'] for a in anchors if a.has_attr('href')]
images = []
def get_anchors(links):
    for a in anchors:
        links.append(a['href'])
    return links

raw_links = get_anchors(links)

for element in raw_links:
    if ".jpg" in str(element) or '.png' in str(element) or '.gif' in str(element):
        print element
        raw_img = urllib2.urlopen("http:" + element).read()
        DIR = "C:\\Users\\deez\\Desktop\\test\\"
        cntr = len([i for i in os.listdir(DIR) if image_type in i]) + 1
        print cntr
        f = open(DIR + image_type + "_" + str(cntr) + ".jpg", 'wb')
        f.write(raw_img)
        f.close()

1 Answer:

Answer 0 (score: 0):

Don't pull every anchor on the page; use a class name to get only the links you want:

import requests
from bs4 import BeautifulSoup

soup = BeautifulSoup(requests.get("http://boards.4chan.org/p/").content, "lxml")

imgs = [a["href"] for a in soup.select("div.fileText a")]

print(imgs)
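
Putting that selector to work, here is a minimal Python 3 sketch of a full download loop. `SAVE_DIR` and `IMAGE_PREFIX` are placeholder names, and prepending `"http:"` assumes the board serves protocol-relative hrefs (`//i.4cdn.org/...`), as in the question's code:

import os
import requests
from bs4 import BeautifulSoup

BOARD_URL = "http://boards.4chan.org/p/"
SAVE_DIR = "downloads"        # placeholder directory name
IMAGE_PREFIX = "image_name"   # filename prefix, as in the question

soup = BeautifulSoup(requests.get(BOARD_URL).content, "lxml")

# Select only the file-info anchors, so each image URL appears once.
links = [a["href"] for a in soup.select("div.fileText a")]

os.makedirs(SAVE_DIR, exist_ok=True)
for cntr, link in enumerate(links, start=1):
    # Hrefs are protocol-relative, so prepend a scheme before fetching.
    raw_img = requests.get("http:" + link).content
    ext = os.path.splitext(link)[1] or ".jpg"  # keep the original extension
    with open(os.path.join(SAVE_DIR, "%s_%d%s" % (IMAGE_PREFIX, cntr, ext)), "wb") as f:
        f.write(raw_img)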

The reason you get dupes is that, for each image, there are at least two divs containing the same link, so grabbing every anchor on the page picks each file up more than once.

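Alternatively, if you want to keep scanning every anchor, you can deduplicate the collected hrefs while preserving their order; a minimal sketch, with made-up example links:

# Alternative fix: keep the broad anchor scan but drop duplicate hrefs.
# dict.fromkeys preserves insertion order (guaranteed in Python 3.7+).
raw_links = ["//i.4cdn.org/p/123.jpg", "//i.4cdn.org/p/123.jpg", "//i.4cdn.org/p/456.png"]
unique_links = list(dict.fromkeys(raw_links))
print(unique_links)  # ['//i.4cdn.org/p/123.jpg', '//i.4cdn.org/p/456.png']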