I am trying to create a link crawler in Python. I know about ready-made harvesters/scrapers, but that is not what I want. Here is what I have so far:
import httplib, sys

target = sys.argv[1]
subsite = sys.argv[2]
link = "http://" + target + subsite

def spider():
    while 1:
        conn = httplib.HTTPConnection(target)
        conn.request("GET", subsite)
        r2 = conn.getresponse()
        data = r2.read().split('\n')
        for x in data[:]:
            if link in x:
                print x

spider()
But I can't figure out how to filter x so that I only retrieve the links.
Answer 0 (score: 2)
If you're going down this route, you might as well install requests and bs4 to make life easier, and base your own spider on a template along these lines:
import requests
from bs4 import BeautifulSoup
page = requests.get('http://www.google.com')
soup = BeautifulSoup(page.text)
# Find all anchor tags that have an href attribute
print [a['href'] for a in soup.find_all('a', {'href': True})]
# ['http://www.google.co.uk/imghp?hl=en&tab=wi', 'http://maps.google.co.uk/maps?hl=en&tab=wl', 'https://play.google.com/?hl=en&tab=w8', 'http://www.youtube.com/?gl=GB&tab=w1', 'http://news.google.co.uk/nwshp?hl=en&tab=wn', 'https://mail.google.com/mail/?tab=wm', 'https://drive.google.com/?tab=wo', 'http://www.google.co.uk/intl/en/options/', 'http://www.google.co.uk/history/optout?hl=en', '/preferences?hl=en', 'https://accounts.google.com/ServiceLogin?hl=en&continue=http://www.google.co.uk/', '/advanced_search?hl=en-GB&authuser=0', '/language_tools?hl=en-GB&authuser=0', 'https://www.google.com/intl/en_uk/chrome/browser/promo/cubeslam/', '/intl/en/ads/', '/services/', 'https://plus.google.com/103583604759580854844', '/intl/en/about.html', 'http://www.google.co.uk/setprefdomain?prefdom=US&sig=0_cYDPGyR7QbF1UxGCXNpHcrj09h4%3D', '/intl/en/policies/']
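If installing third-party packages isn't an option, the standard library's html.parser can pull out the same href values. A minimal sketch (the sample HTML string below is invented for illustration; in practice you would feed it the response body you fetched):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href attribute of every <a> tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag
        if tag == 'a':
            for name, value in attrs:
                if name == 'href' and value is not None:
                    self.links.append(value)

# Hypothetical sample page standing in for a fetched response body
html = '<p><a href="/docs">Docs</a> <a href="http://example.com">Home</a></p>'
parser = LinkExtractor()
parser.feed(html)
print(parser.links)  # ['/docs', 'http://example.com']
```

Unlike the line-by-line substring check in the question, a real HTML parser handles tags split across lines and attributes in any order.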
Answer 1 (score: 1)
I think this will work:
import re
re.findall("href=([^ >]+)",x)
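Note that this pattern captures the surrounding quotes along with the URL. A quick demonstration, with a tightened pattern that anchors on the quotes (the sample line x is invented):

```python
import re

x = '<a href="http://example.com/page">a link</a>'

# The pattern from the answer keeps the quote characters in the match
print(re.findall('href=([^ >]+)', x))   # ['"http://example.com/page"']

# Anchoring on the double quotes captures only the URL itself
print(re.findall('href="([^"]+)"', x))  # ['http://example.com/page']
```

A regex like this is fine for quick scraping of simple pages, but it will miss single-quoted or unquoted attributes; a real HTML parser is more robust.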