Beautiful Soup finding elements in html

Time: 2020-08-05 14:26:33

Tags: python twitter beautifulsoup

I am using a library called twitterscraper that scrapes tweets from any given URL. I gave it the URL of a tweet's replies, and it successfully scraped the tweets shown on that page (except for the tweet in the URL itself, but I already have that one). The problem is that while debugging I cannot find any of the elements it scraped in the response html itself. I also cannot find the tweet contents when I search for them. The tweets simply aren't there.

This is where the response is obtained:

response = requests.get(url, headers=HEADER, proxies={"http": proxy}, timeout=timeout)
### some code
html = response.text

from_html is then called: tweets = list(Tweet.from_html(html))

bs4's find_all is called and the tweets are parsed:

def from_html(cls, html):
    soup = BeautifulSoup(html, "lxml")  # no li element with js-stream-item class found when I looked through the html
    tweets = soup.find_all('li', 'js-stream-item')  # but it still finds the li elements with tweets in them?
    if tweets:
        for tweet in tweets:
            try:
                yield cls.from_soup(tweet)
            except AttributeError:
                pass
            except TypeError:
                pass

What is going on here?

I copied the value of the html variable in vscode while debugging and searched through it. Link to bs4's find_all method: https://beautiful-soup-4.readthedocs.io/en/latest/#find-all. Link to the reply URL: https://twitter.com/renderwonk/status/1290793272353239040
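For reference, one way to double-check what the scraper actually received, instead of copying the variable out of the vscode debugger, is to dump the html string to a file and search it for the class name directly. This is only a small sketch; the file name is arbitrary and the html variable is the one assigned from response.text above:

# Sketch: persist the fetched HTML and look for the class that find_all matches on.
with open("response_dump.html", "w", encoding="utf-8") as f:  # arbitrary dump file
    f.write(html)

print("occurrences of 'js-stream-item':", html.count("js-stream-item"))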

The function used for scraping the url (changed in the first line: the original first line is commented out, because I pass the URL itself rather than a query):

def query_single_page(query, lang, pos, retry=50, from_user=False, timeout=60, use_proxy=True):
    """
    Returns tweets from the given URL.

    :param query: The query parameter of the query url
    :param lang: The language parameter of the query url
    :param pos: The query url parameter that determines where to start looking
    :param retry: Number of retries if something goes wrong.
    :return: The list of tweets, the pos argument for getting the next page.
    """
    #url = get_query_url(query, lang, pos, from_user)
    url = query
    logger.info('Scraping tweets from {}'.format(url))

    try:
        if use_proxy:
            proxy = next(proxy_pool)
            logger.info('Using proxy {}'.format(proxy))
            response = requests.get(url, headers=HEADER, proxies={"http": proxy}, timeout=timeout)
        else:
            print('not using proxy')
            response = requests.get(url, headers=HEADER, timeout=timeout)
        if pos is None:  # html response
            html = response.text or ''
            json_resp = None
        else:
            html = ''
            try:
                json_resp = response.json()
                html = json_resp['items_html'] or ''
            except (ValueError, KeyError) as e:
                logger.exception('Failed to parse JSON while requesting "{}"'.format(url))

        tweets = list(Tweet.from_html(html))

        if not tweets:
            try:
                if json_resp:
                    pos = json_resp['min_position']
                    has_more_items = json_resp['has_more_items']
                    if not has_more_items:
                        logger.info("Twitter returned : 'has_more_items' ")
                        return [], None
                else:
                    pos = None
            except:
                pass
            if retry > 0:
                logger.info('Retrying... (Attempts left: {})'.format(retry))
                return query_single_page(query, lang, pos, retry - 1, from_user, use_proxy=use_proxy)
            else:
                return [], pos

        if json_resp:
            return tweets, urllib.parse.quote(json_resp['min_position'])
        if from_user:
            return tweets, tweets[-1].tweet_id
        return tweets, "TWEET-{}-{}".format(tweets[-1].tweet_id, tweets[0].tweet_id)

    except requests.exceptions.HTTPError as e:
        logger.exception('HTTPError {} while requesting "{}"'.format(
            e, url))
    except requests.exceptions.ConnectionError as e:
        logger.exception('ConnectionError {} while requesting "{}"'.format(
            e, url))
    except requests.exceptions.Timeout as e:
        logger.exception('TimeOut {} while requesting "{}"'.format(
            e, url))
    except json.decoder.JSONDecodeError as e:
        logger.exception('Failed to parse JSON "{}" while requesting "{}".'.format(
            e, url))

    if retry > 0:
        logger.info('Retrying... (Attempts left: {})'.format(retry))
        return query_single_page(query, lang, pos, retry - 1, use_proxy=use_proxy)

    logger.error('Giving up.')
    return [], None
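With the first line commented out as described, the function is invoked with the reply URL itself as the query argument. The following is a hedged usage example based only on the signature above; the proxy pool is bypassed here:

# Hypothetical call of the modified function above: the reply URL is passed
# directly as `query`, and pos=None so the plain-HTML branch is taken.
tweets, next_pos = query_single_page(
    "https://twitter.com/renderwonk/status/1290793272353239040",
    lang="en",
    pos=None,
    use_proxy=False,
)
print(len(tweets), next_pos)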

The result of the find_all call in the from_html method:

[screenshot]

The html from which bs4 found the above elements, copied while debugging: https://codeshare.io/ad8qNe (paste it into an editor and turn on word wrap)

Does this have something to do with javascript?

1 answer:

Answer 0: (score: 2)

Take a close look at any of the HTTP GET requests:

response = requests.get(url, headers=HEADER, proxies={"http": proxy}, timeout=timeout)

Whether or not we use a proxy doesn't matter. What matters most in this case are the headers. It says headers=HEADER, so what is HEADER? Scroll to the top of query.py:

HEADER = {'User-Agent': random.choice(HEADERS_LIST), 'X-Requested-With': 'XMLHttpRequest'}

In this case, 'X-Requested-With': 'XMLHttpRequest' is crucial. The response still appears to be HTML, but the content you are looking for is embedded inside it.
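A quick way to see the effect of that header is to request the same URL with and without it and check whether the tweet markup shows up. This is a minimal sketch and relies on the legacy Twitter front end that was still being served in 2020; responses today may differ:

# Compare the same GET request with and without the XHR header.
import requests

url = "https://twitter.com/renderwonk/status/1290793272353239040"
user_agent = {
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.105 Safari/537.36"
}

plain = requests.get(url, headers=user_agent, timeout=30)
xhr = requests.get(url, headers={**user_agent, "X-Requested-With": "XMLHttpRequest"}, timeout=30)

print("'js-stream-item' without XHR header:", "js-stream-item" in plain.text)
print("'js-stream-item' with XHR header:   ", "js-stream-item" in xhr.text)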

EDIT - here is what you can do yourself without twitterscraper (it does essentially the same thing):

def main():

    import requests
    from bs4 import BeautifulSoup

    url = "https://twitter.com/renderwonk/status/1290793272353239040"

    headers = {
        "user-agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.105 Safari/537.36",
        "X-Requested-With": "XMLHttpRequest"
    }

    response = requests.get(url, headers=headers)
    response.raise_for_status()

    soup = BeautifulSoup(response.text, "html.parser")
    for li in soup.find_all("li", {"class": "js-stream-item"}):
        tweet_div = li.find("div", {"class": "tweet"})
        text = tweet_div.find("p", {"class": "tweet-text"}).get_text()
        print(f"{''.center(32, '-')}\n{text}\n{''.center(32, '-')}\n")

    return 0


if __name__ == "__main__":
    import sys
    sys.exit(main())