Link Scraping Program Redundancy?

Asked: 2015-06-16 23:49:38

Tags: python loops for-loop python-requests infinite-loop

I am trying to create a small script that simply takes a given website along with a keyword, follows all the links a certain number of times (only links on the site's own domain), and finally searches every link it found for the keyword, returning any successful matches. Ultimately the goal is this: if you remember a website where you once saw something, and you know a keyword that the page contained, this program might help find the link to the lost page. Now for my bug: while looping over all of these pages, extracting their URLs, and building a list of them, it seems to somehow redundantly repeat and remove the same links from the list. I did add a safeguard against this, but it doesn't seem to be working as intended. I feel like some URLs are being duplicated into the list incorrectly and end up being checked countless times.

Here is my full code (sorry about the length); the problem area seems to be in the for loops at the very end:

import bs4, requests, sys

def getDomain(url):
    if "www" in url:
        domain = url[url.find('.')+1:url.rfind('.')]
    elif "http" in url:
        domain = url[url.find("//")+2:url.rfind('.')]
    else:
        domain = url[:url.rfind(".")]
    return domain

def findHref(html):
    '''Will find the link in a given BeautifulSoup match object.'''
    link_start = html.find('href="')+6
    link_end = html.find('"', link_start)
    return html[link_start:link_end]

def pageExists(url):
    '''Returns True if url returns a 200 response and doesn't redirect to a dns search.
    url is the URL string to request.'''
    response = requests.get(url)
    try:
        response.raise_for_status()
        if response.text.find("dnsrsearch") >= 0:
            print response.text.find("dnsrsearch")
            print "Website does not exist"
            return False
    except Exception as e:
        print "Bad response:",e
        return False
    return True

def extractURLs(url):
    '''Returns list of urls in url that belong to same domain.'''
    response = requests.get(url)
    soup = bs4.BeautifulSoup(response.text)
    matches = soup.find_all('a')
    urls = []
    for index, link in enumerate(matches):
        match_url = findHref(str(link).lower())
        if "." in match_url:
            if not domain in match_url:
                print "Removing",match_url
            else:
                urls.append(match_url)
        else:
            urls.append(url + match_url)
    return urls

def searchURL(url):
    '''Search url for keyword.'''
    pass

print "Enter homepage:(no http://)"
homepage = "http://" + raw_input("> ")
homepage_response = requests.get(homepage)
if not pageExists(homepage):
    sys.exit()
domain = getDomain(homepage)

print "Enter keyword:"
#keyword = raw_input("> ")
print "Enter maximum branches:"
max_branches = int(raw_input("> "))

links = [homepage]
for n in range(max_branches):
    for link in links:
        results = extractURLs(link)
        for result in results:
            if result not in links:
                links.append(result)

Partial output (roughly .000000000001% of it):

Removing /store/apps/details?id=com.handmark.sportcaster
Removing /store/apps/details?id=com.handmark.sportcaster
Removing /store/apps/details?id=com.mobisystems.office
Removing /store/apps/details?id=com.mobisystems.office
Removing /store/apps/details?id=com.mobisystems.office
Removing /store/apps/details?id=com.mobisystems.office
Removing /store/apps/details?id=com.mobisystems.office
Removing /store/apps/details?id=com.mobisystems.office
Removing /store/apps/details?id=com.joelapenna.foursquared
Removing /store/apps/details?id=com.joelapenna.foursquared
Removing /store/apps/details?id=com.joelapenna.foursquared
Removing /store/apps/details?id=com.joelapenna.foursquared
Removing /store/apps/details?id=com.joelapenna.foursquared
Removing /store/apps/details?id=com.joelapenna.foursquared
Removing /store/apps/details?id=com.dashlabs.dash.android
Removing /store/apps/details?id=com.dashlabs.dash.android
Removing /store/apps/details?id=com.dashlabs.dash.android
Removing /store/apps/details?id=com.dashlabs.dash.android
Removing /store/apps/details?id=com.dashlabs.dash.android
Removing /store/apps/details?id=com.dashlabs.dash.android
Removing /store/apps/details?id=com.eweware.heard
Removing /store/apps/details?id=com.eweware.heard
Removing /store/apps/details?id=com.eweware.heard

2 Answers:

Answer 0 (score: 1):

With your outer loop you are looping over the same links multiple times:

for n in range(max_branches):
    for link in links:
        results = extractURLs(link)

I would also be careful about appending to the list you are iterating over, or you may well end up with an infinite loop.
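
For example, here is a minimal sketch (separate from the scraper itself) of how appending to the list you are iterating over keeps the loop alive; the cut-off is only there so the demo terminates:

# 'items' grows by one element on every pass, so on its own the
# for loop never catches up with the end of the list.
items = [0]
for item in items:
    items.append(item + 1)  # mutating the list mid-iteration
    if item > 5:  # artificial cut-off; without it this runs forever
        break
print items  # [0, 1, 2, 3, 4, 5, 6, 7]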

Answer 1 (score: 0):

Okay, I found a solution. All I did was change the links variable to a dictionary, with the value 0 representing an unsearched link and 1 representing a searched link. Then I iterated over a copy of the keys, to preserve the branching rather than letting it madly follow every link that was added during the loop. And finally, if a link is found that is not already in links, it is added and set to 0, to be searched later.

links = {homepage: 0}
for n in range(max_branches):
    for link in links.keys()[:]:
        if not links[link]:
            links[link] = 1  # mark this link as searched
            results = extractURLs(link)
            for result in results:
                if result not in links:
                    links[result] = 0
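
For comparison, here is a rough sketch of the same idea written as a breadth-first crawl with a queue and a visited set; it assumes the extractURLs() function from the question and is just one common way to structure this, not the only fix:

import collections

def crawl(homepage, max_branches):
    '''Breadth-first crawl starting at homepage, following links at most
    max_branches levels deep. Returns the set of URLs seen.'''
    visited = set([homepage])  # every URL ever queued, so nothing repeats
    queue = collections.deque([(homepage, 0)])  # (url, depth) pairs
    while queue:
        url, depth = queue.popleft()
        if depth >= max_branches:
            continue
        for result in extractURLs(url):  # assumes the question's extractURLs
            if result not in visited:
                visited.add(result)
                queue.append((result, depth + 1))
    return visited

Here the visited set plays the same role as the 0/1 values in the dictionary above, and the queue guarantees each URL is fetched exactly once.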