Scraping a large number of links from a website?

Posted: 2020-07-28 00:57:05

Tags: python web-scraping beautifulsoup

我对Web抓取非常陌生。我已经开始在Python中使用BeautifulSoup。我编写了一个代码,该代码将遍历URL列表并为我获取所需的数据。该代码可用于10-12个链接,但我不确定如果列表中包含100个以上的链接,则相同的代码是否有效。是否有其他替代方法或任何其他库通过输入大量url列表来获取数据,而不以任何方式损害网站。到目前为止,这是我的代码。

from requests import get
from bs4 import BeautifulSoup

url_list = [url1, url2, url3, url4, url5]  # placeholder URL variables
mylist = []
for url in url_list:
    res = get(url)                                 # fetch the page
    soup = BeautifulSoup(res.text, 'html.parser')  # parse the HTML
    data = soup.find('pre').text                   # text of the first <pre> tag
    mylist.append(data)
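
The same loop usually keeps working for 100+ links; the practical concerns are re-using one connection, pausing between requests so the site is not hammered, and not letting a single bad URL stop the whole run. Below is a minimal sketch of that idea, assuming the requests and time modules; the fetch_all name, the one-second delay, and the timeout value are illustrative choices, not fixed requirements.

import time
import requests
from bs4 import BeautifulSoup

def fetch_all(urls, delay=1.0):
    """Fetch each URL politely: one shared session and a pause between requests."""
    results = []
    with requests.Session() as session:               # reuse one connection pool
        for url in urls:
            try:
                res = session.get(url, timeout=10)
                res.raise_for_status()                # skip pages that return 4xx/5xx
            except requests.RequestException as exc:
                print(f"skipping {url}: {exc}")
                continue
            pre = BeautifulSoup(res.text, 'html.parser').find('pre')
            if pre is not None:
                results.append(pre.text)
            time.sleep(delay)                         # be gentle with the server
    return results

With a pattern like this, the site only ever sees one request at a time, and retries or logging can be added later without touching the parsing code.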

1 Answer:

Answer 0: (score: 0)

Here is an example that may work for you.

from simplified_scrapy import Spider, SimplifiedDoc, SimplifiedMain, utils

class MySpider(Spider):
    name = 'my_spider'
    start_urls = ['url1']
    # refresh_urls = True # Uncomment this line if you want to re-download links that were already downloaded
    def __init__(self):
        # If your link is stored elsewhere, read it out here.
        self.start_urls = utils.getFileLines('your url file name.txt')
        Spider.__init__(self,self.name) # Necessary

    def extract(self, url, html, models, modelNames):
        doc = SimplifiedDoc(html)
        data = doc.select('pre>text()') # Extract the data you want.
        return {'Urls': None, 'Data':{'data':data} } # Return the data to the framework, which will save it for you.

SimplifiedMain.startThread(MySpider())  # Start download

You can see more examples here, along with the source code of the simplified_scrapy library: https://github.com/yiyedata/simplified-scrapy-demo