Web scraping process

Time: 2021-07-23 08:14:11

Tags: python web-scraping

I have been trying to replace some text in my output, but without success.

I want the output to look like this:

su-n-s-e-t
 https://64.media.tumblr.com/35fb46ace19cf31bf16c3655eff26fa6/bc3cfd4a41299b1e-9a/s500x750/98bbd40e71066761a4bd5983896932dd56c94427.jpg
houndsofvalinor-art
 https://64.media.tumblr.com/e89e2a223d965a0351e310c829389583/ce52b6a3e76c58be-6e/s500x750/43548527d68eac8def536a88901a2ff78355ef51.jpg 
amazinglybeautifulphotography
 https://64.media.tumblr.com/a7d31eb63666d39d10868debbab9e27c/5be73c7f5dadb3dd-aa/s500x750/5b10d9cc0400e7b0dbabb9ea14c37e6b91e85e91.jpg 
kylebonallo
 https://64.media.tumblr.com/b406c3ceb50e4e09e550710b35de1310/dddef868163205f7-71/s500x750/9fec23368ed8ca5d6effae89fbdcda54554d0a68.jpg 
expressions-of-nature
 https://64.media.tumblr.com/e1eb3612511e21177dfa66ac02f07b98/c5b5c1fc2cbbc58d-e1/s500x750/550dbdf7568167891c5ea1af18af9cbc91cd620f.jpg 
ex0skeletal-undead
 https://64.media.tumblr.com/14d837eb6159b8376443393d8b1ef551/fb5d595667e75d0f-79/s500x750/cee48e58e0b1191376e20fd11904c09adbea50b3.jpg 
geopsych
 https://64.media.tumblr.com/a596b92db62c8ae4f68b490d172f8227/c856f013961ced0e-10/s500x750/73fb838c8065174e5ede5d93698ea386e6df1efe.jpg 
jacobvanloon
 https://64.media.tumblr.com/ca5f1e13bb4642de55422e74611f1df6/6f85f80cb48e73f7-e4/s500x750/12b59223056baf7733d99f210f1cd8bc397d52cd.png 
amazinglybeautifulphotography
 https://64.media.tumblr.com/06a1ff4abc50e80df59ddbd6e9c8c42c/3fd49bbbfb9dffd8-df/s500x750/43d3adf64f6fec58ebd37633be4988f36746e819.jpg 

The url_list variable returns:

geopsych
[' https://64.media.tumblr.com/a596b92db62c8ae4f68b490d172f8227/c856f013961ced0e-10/s500x750/73fb838c8065174e5ede5d93698ea386e6df1efe.jpg 500w']
burningmine
[' https://64.media.tumblr.com/e32b99ad1de8f8cd494205982c0137a1/54985812c55123d3-99/s500x750/cbe83b505eb14ff36e2be05e171a30bfd073a41b.jpg 500w']
amazinglybeautifulphotography
[' https://64.media.tumblr.com/06a1ff4abc50e80df59ddbd6e9c8c42c/3fd49bbbfb9dffd8-df/s500x750/43d3adf64f6fec58ebd37633be4988f36746e819.jpg 500w']

Here is what I have tried:

for results in urls:
    results.replace('500w','')

But I still end up with 500w in the output.

Also, since I want each link on its own line without the [''] wrapper, I tried splitting with .split('\n') instead of .split(','), but I seem to get an error when I use that as well.
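For reference, a srcset attribute holds comma-separated candidates that each end with a width descriptor. A hypothetical, shortened value for illustration:

srcset = "https://64.media.tumblr.com/.../s250x400/a.jpg 250w, https://64.media.tumblr.com/.../s500x750/a.jpg 500w"
print([i for i in srcset.split(',') if '500w' in i])
# [' https://64.media.tumblr.com/.../s500x750/a.jpg 500w']

so every entry kept by the filter still carries a leading space and the trailing ' 500w'.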

The rest of the code is below:

import requests
from bs4 import BeautifulSoup

search_term = 'landscape'
posts_scrape = requests.get(f'https://www.tumblr.com/search/{search_term}')
soup = BeautifulSoup(posts_scrape.text, 'html.parser')

articles = soup.find_all('article', class_='_2DpMA')

for article in articles:
    try:
        source = article.find('div', class_='_3QBiZ').text
        urls = []
        for imgvar in article.find_all('img', alt='Image'):
            url_list = [i for i in imgvar['srcset'].split(',') if (i.find('500w') != -1)]
            urls.append(url_list)
        for results in urls:
            results.replace('500w','')
        print (source)
        print (results)
    except AttributeError:
        continue

2 Answers:

Answer 0 (score: 1)

I recommend using a dictionary to store the image URLs, where the key is the source of the image and the value is a list of image URLs. For example:

import requests
from bs4 import BeautifulSoup

search_term = "landscape"
posts_scrape = requests.get(f"https://www.tumblr.com/search/{search_term}")
soup = BeautifulSoup(posts_scrape.text, "html.parser")

articles = soup.find_all("article", class_="_2DpMA")

data = {}
for article in articles:
    try:
        source = article.find("div", class_="_3QBiZ").text
        for imgvar in article.find_all("img", alt="Image"):
            data.setdefault(source, []).extend(
                [
                    i.replace("500w", "").strip()
                    for i in imgvar["srcset"].split(",")
                    if "500w" in i
                ]
            )
    except AttributeError:
        continue

for source, image_urls in data.items():
    for url in image_urls:
        print(source)
        print(url)

This prints:

leahberman
https://64.media.tumblr.com/e29c3dd39ab0e413ff6eefa0cfc973de/d6817667d3007f74-09/s500x750/2971fc9af6619f1f783bb169b104dea023f339de.gifv
leahberman
https://64.media.tumblr.com/8c61e084290ccea6fef3eab1d96204fd/d6817667d3007f74-b8/s500x750/45873681924618d179bfc97e04a02d3d6ebaac39.gifv
leahberman
https://64.media.tumblr.com/c4db8bc21289aec008219f5a4b307714/d6817667d3007f74-85/s500x750/c49d38c369ccb507d950b116e637886ac4467685.gifv
poetry-siir
https://64.media.tumblr.com/5495c24e4608688a6a0052d81da01882/d97a76eeb3edd5e9-d7/s500x750/9862a63fe430e83850ceb73f384bf2af6322db5e.jpg
poetry-siir
https://64.media.tumblr.com/9944d7a5d2d26a57118c8b391b699efb/d97a76eeb3edd5e9-ad/s500x750/7cfc69a18143d5b2a678fe0c85c431e5387a2107.jpg

...and so on.
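The dictionary groups the URLs by post source: dict.setdefault(source, []) returns the list already stored under that source, creating an empty one the first time the source is seen, so URLs from repeated sources accumulate in one place. A minimal illustration of that call (with placeholder values):

data = {}
data.setdefault("geopsych", []).append("url-1")
data.setdefault("geopsych", []).append("url-2")
print(data)  # {'geopsych': ['url-1', 'url-2']}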

Answer 1 (score: 1)

I think the problem lies mainly in this line of code:

results.replace('500w','')

First: results is a list, so it has no .replace() method. You would have seen this if you were not silencing the AttributeError. At the very least you should print the error, like this:

except AttributeError as exc:
    print(exc)

You would then see a very descriptive error: 'list' object has no attribute 'replace'
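A minimal reproduction (with a made-up value) shows exactly that:

urls = [' https://example.com/a.jpg 500w']  # a list, not a string
urls.replace('500w', '')  # AttributeError: 'list' object has no attribute 'replace'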

Second: I assume you want to modify the actual URLs in results to strip the trailing "500w". Strings in Python are immutable, which means any transformation of a str creates a new str instead of modifying the original. So here .replace() creates a new string, but you never store it anywhere.
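In other words, the return value of .replace() has to be assigned back, for example (a sketch with a made-up value):

url = ' https://example.com/a.jpg 500w'
url.replace(' 500w', '')                 # new string is created, then discarded
url = url.strip().replace(' 500w', '')   # re-assign to keep the cleaned value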

Third: your urls list is re-created on every iteration of the for loop. You probably want to declare it outside the loop so that it collects all of the matching URLs.

Here is code that does what I understand you are trying to do:

import requests
from bs4 import BeautifulSoup

search_term = 'landscape'
posts_scrape = requests.get(f'https://www.tumblr.com/search/{search_term}')
soup = BeautifulSoup(posts_scrape.text, 'html.parser')

articles = soup.find_all('article', class_='_2DpMA')

urls = []

for article in articles:
    try:
        source = article.find('div', class_='_3QBiZ').text
        for imgvar in article.find_all('img', alt='Image'):
            url = next((i for i in imgvar['srcset'].split(',') if i.endswith('500w')), None)
            if url:
                urls.append(url.strip().replace(' 500w', ''))
    except AttributeError as exc:
        print(exc)
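The urls list then holds one cleaned URL per match; printing them one per line (a small usage sketch, not part of the original code) is just:

for url in urls:
    print(url)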