Parsing a web page with Python

Time: 2020-05-16 13:09:36

Tags: python html parsing web-scraping beautifulsoup

I'm new to Python and could really use some help.

I'm trying to parse a web page and pull the email addresses out of it. I've tried a number of things I've read about online, but they all failed.

I've noticed that when I run BeautifulSoup(browser.page_source) it does bring back the page source, but for some reason it doesn't bring back the email addresses or the business profiles.

My code is below (don't judge :-))

import os, random, sys, time

from urllib.parse import urlparse
from selenium import webdriver
from bs4 import BeautifulSoup
from webdriver_manager.chrome import ChromeDriverManager
import lxml

browser = webdriver.Chrome('./chromedriver.exe')

url = 'https://www.yellowpages.co.za/search?what=accountant&where=cape+town&pg=1'
browser.get(url)

soup = BeautifulSoup(browser.page_source, 'lxml')  # keep the soup so it can be queried
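The code above builds the soup but never uses it. One common way to pull addresses out of raw page source (my suggestion, not something from the original post) is a regular expression scan; the sample HTML below is a made-up stand-in for `browser.page_source`:

```python
import re

# Email-shaped pattern: local part, '@', domain with at least one dot.
# This is a rough sketch, not a full RFC 5322 validator.
EMAIL_RE = re.compile(r'[\w.+-]+@[\w-]+\.[\w.-]+')

html = '<a href="mailto:jane@example.com">Email us</a> or call jane@example.com'
print(EMAIL_RE.findall(html))  # → ['jane@example.com', 'jane@example.com']
```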

Side note: my goal is to step through the result pages for a given search and parse the email addresses from each page. I've already worked out how to navigate between pages and send keys; it's just the parsing I'm stuck on. Your help is much appreciated.

1 Answer:

Answer 0: (score: 1)

I suggest you use the requests module to get the page source:

from requests import get

url = 'https://www.yellowpages.co.za/search?what=accountant&where=cape+town&pg=1'
src = get(url).text  # Gets the Page Source

After that, I scan the source for anything shaped like an email address and add it to a list:

src = src.split('<body')[1]  # Splits it and keeps the <body> part; '<body' (no '>')
                             # also matches a body tag that carries attributes
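A toy illustration of that split (the sample HTML is invented for the example; real pages often have attributes on the body tag, which is why splitting on '<body' rather than '<body>' is the safer choice):

```python
# Everything after the opening <body ...> tag survives the split.
src = '<html><head><title>t</title></head><body class="x"><p>hi@x.com</p></body></html>'
body = src.split('<body')[1]

print('hi@x.com' in body)      # → True
print('<title>' in body)       # → False
```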

emails = []

for ind, char in enumerate(src):
    if char == '@':
        add = 1       # Count the characters after and before
        email = char  # The full email (not yet)

        # Walk forward; whitespace is in the stop set so surrounding
        # text isn't swallowed, and the bounds check avoids an IndexError
        while ind + add < len(src) and src[ind + add] not in '<>": \n\t':
            email += src[ind + add]  # Add to email
            add += 1                 # Readjust

        if '.' not in email or email.endswith('.'):  # This means that the email is
            continue                                 # not fully in the page

        add = 1  # Readjust and walk backward
        while ind - add >= 0 and src[ind - add] not in '<>": \n\t':
            email = src[ind - add] + email  # Add to email
            add += 1                        # Readjust

        emails.append(email)
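To try the scan in isolation, here is the same character-walk wrapped in a function (my own packaging, not part of the original answer), run on a small made-up snippet. Whitespace is included in the stop sets so text before an address isn't swallowed into it:

```python
def extract_emails(src):
    """Collect email-shaped strings by walking outward from each '@'."""
    stop = '<>": \n\t'  # characters that end an address on either side
    emails = []
    for ind, char in enumerate(src):
        if char != '@':
            continue
        # Walk forward from the '@'
        add = 1
        email = char
        while ind + add < len(src) and src[ind + add] not in stop:
            email += src[ind + add]
            add += 1
        if '.' not in email or email.endswith('.'):
            continue  # the '@' was not part of a complete address
        # Walk backward from the '@'
        add = 1
        while ind - add >= 0 and src[ind - add] not in stop:
            email = src[ind - add] + email
            add += 1
        emails.append(email)
    return emails

html = '<p>Contact: <a href="mailto:info@example.co.za">info@example.co.za</a></p>'
print(extract_emails(html))  # → ['info@example.co.za', 'info@example.co.za']
```

Note that the same address is found twice, once in the mailto: link and once in the visible text, which is why the answer deduplicates with a set at the end.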

Finally, you can use a set to remove duplicates and print the emails:

emails = set(emails)  # Remove Duplicates

print(*emails, sep='\n')