Scraping a list of links found in search results

Date: 2017-08-16 09:21:30

Tags: python hyperlink web-scraping bs4

I am trying to scrape search results from a library page. But since I want more than just the book titles, I would like the script to open each search result and scrape the detail page for further information.
What I have so far is:

    import bs4 as bs
    import urllib.request, urllib.error, urllib.parse
    from http.cookiejar import CookieJar
    from bs4 import Comment


    cj = CookieJar()
    basisurl = 'http://mz-villigst.cidoli.de/index.asp?stichwort=hans'
    # just took any example page similar to the one I have in mind

    opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))
    p = opener.open(basisurl)
    soup = bs.BeautifulSoup(p, 'html.parser')  # parse the page so the comments can be searched

    for mednrs in soup.find_all(string=lambda text: isinstance(text, Comment)):
        # and now when I do [0:] it gives me the media numbers and I can create the links like this:
        links = 'http://mz-villigst.cidoli.de/index.asp?MEDIENNR=' + mednrs[10:17]

My main question now is: how can I get this to give me a list (like this: ["1", "2"] ...) that I can then loop through?

1 answer:

Answer 0: (score: 0)

Create a list and append to it inside the loop:

    links = []
    for mednrs in soup.find_all(string=lambda text: isinstance(text, Comment)):
        link = 'http://mz-villigst.cidoli.de/index.asp?MEDIENNR=' + mednrs[10:17]
        links.append(link)

Or use a list comprehension:

    links = ['http://mz-villigst.cidoli.de/index.asp?MEDIENNR=' + mednrs[10:17]
             for mednrs in soup.find_all(string=lambda text: isinstance(text, Comment))]
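Once the list exists, the second half of the question (opening each search result and scraping its detail page) is just a loop over those links. A minimal sketch, reusing the `MEDIENNR` URL pattern and the `[10:17]` slice from the question; the helper names `build_links` and `scrape_details` are made up for illustration, and what you extract from each detail page depends on its actual markup:

```python
import bs4 as bs
import urllib.request
from http.cookiejar import CookieJar

BASE = 'http://mz-villigst.cidoli.de/index.asp?MEDIENNR='

def build_links(comments):
    # Same slicing as in the question: characters 10-16 of each
    # comment string are assumed to hold the media number.
    return [BASE + c[10:17] for c in comments]

def scrape_details(links, opener):
    # Open every detail page and hand back a parsed soup for each,
    # ready for further find()/find_all() calls on the detail markup.
    for link in links:
        page = opener.open(link)
        yield bs.BeautifulSoup(page, 'html.parser')

# Usage (needs network access, so not run here):
# cj = CookieJar()
# opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))
# comments = soup.find_all(string=lambda text: isinstance(text, bs4.Comment))
# for detail_soup in scrape_details(build_links(comments), opener):
#     print(detail_soup.title)
```

Keeping the link building separate from the fetching also makes the slicing easy to test without touching the network.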