I'm trying to extract some information from a website, but I don't know how to scrape the email address.
This code works for me:
from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup

url = "https://www.eurocham-cambodia.org/member/476/2-LEau-Protection"
uClient = uReq(url)
page_html = uClient.read()
uClient.close()
soup = BeautifulSoup(page_html, "lxml")

members = soup.findAll("b")
for member in members:
    print(member.text)
I want to extract the phone number and the link with soup.findAll(), but I couldn't find the right way to get the text, so I used the SelectorGadget tool and tried this:
numbers = soup.select("#content li:nth-child(1)")
for number in numbers:
    print(number.text)

links = soup.select(".icon-globe + a")
for link in links:
    print(link.text)
It prints correctly:
2 L'Eau Protection
(+33) 02 98 19 43 86
http://www.2leau-protection.com/
Now I'm stuck on extracting the email address. I'm new to this, and any advice would be appreciated. Thanks!
Attempt 1
emails = soup.select("#content li:nth-child(2)")
for email in emails:
    print(email.text)

I don't even understand what it printed:
//<![CDATA[
var l=new Array();
l[0]='>';l[1]='a';l[2]='/';l[3]='<';l[4]='|109';l[5]='|111';l[6]='|99';l[7]='|46';l[8]='|110';l[9]='|111';l[10]='|105';l[11]='|116';l[12]='|99';l[13]='|101';l[14]='|116';l[15]='|111';l[16]='|114';l[17]='|112';l[18]='|45';l[19]='|117';l[20]='|97';l[21]='|101';l[22]='|108';l[23]='|50';l[24]='|64';l[25]='|110';l[26]='|111';l[27]='|105';l[28]='|116';l[29]='|97';l[30]='|109';l[31]='|114';l[32]='|111';l[33]='|102';l[34]='|110';l[35]='|105';l[36]='|32';l[37]='>';l[38]='"';l[39]='|109';l[40]='|111';l[41]='|99';l[42]='|46';l[43]='|110';l[44]='|111';l[45]='|105';l[46]='|116';l[47]='|99';l[48]='|101';l[49]='|116';l[50]='|111';l[51]='|114';l[52]='|112';l[53]='|45';l[54]='|117';l[55]='|97';l[56]='|101';l[57]='|108';l[58]='|50';l[59]='|64';l[60]='|110';l[61]='|111';l[62]='|105';l[63]='|116';l[64]='|97';l[65]='|109';l[66]='|114';l[67]='|111';l[68]='|102';l[69]='|110';l[70]='|105';l[71]='|32';l[72]=':';l[73]='o';l[74]='t';l[75]='l';l[76]='i';l[77]='a';l[78]='m';l[79]='"';l[80]='=';l[81]='f';l[82]='e';l[83]='r';l[84]='h';l[85]=' ';l[86]='a';l[87]='<';
for (var i = l.length-1; i >= 0; i=i-1){
if (l[i].substring(0, 1) == '|') document.write("&#"+unescape(l[i].substring(1))+";");
else document.write(unescape(l[i]));}
//]]>
Attempt 2

emails = soup.select(".icon-mail~ a")  # following the same logic
for email in emails:
    email = emails[0].text
print(email)

Error

NameError: name 'email' is not defined
Attempt 3
emails = soup.select(".icon-mail~ a")
print(emails)
It prints an empty list:
[]
Attempts 4, 5, 6

email = soup.find("a", {"href": "mailto:"})     # prints "None"
email = soup.findAll("a", {"href": "mailto:"})  # prints an empty "[]"
email = soup.select("a", {"href": "mailto:"})   # prints a lot of information, but not what I need
Answer 0 (score: 2)
The Selenium library is meant for automating web tasks and for test automation, but it also works as a scraper. Since it drives a real browser engine (such as Mozilla's Gecko or the Google Chrome driver), it tends to be more robust in cases like this. Here is an example of how to accomplish the task:
from selenium import webdriver
url = "https://www.eurocham-cambodia.org/member/476/2-LEau-Protection"
option = webdriver.ChromeOptions()
option.add_argument("--headless")
browser = webdriver.Chrome(executable_path="./chromedriver", options=option)
browser.get(url)
print(browser.find_element_by_css_selector(".icon-mail~ a").text)
The output is:
information@2leau-protection.com
Edit: you can get Selenium with pip install selenium, and you can download the Chrome driver from here.
Answer 1 (score: 2)
The reason you can't scrape that part of the site is that it is generated by JavaScript and is not present in the initial HTML. You can check this with the following snippet:
import requests
from lxml import html

page = requests.get("https://www.eurocham-cambodia.org/member/476/2-LEau-Protection").text
tree = html.fromstring(page)
print(html.tostring(tree, pretty_print=True).decode())
This gives you the full HTML document, but let's focus only on the div containing the user profile:
<div class="col-sm-12 col-md-6">
<ul class="iconlist">
<li>
<i class="icon-phone"> </i>(+33) 02 98 19 43 86</li>
<li>
<i class="icon-mail"> </i><script type="text/javascript">
//<![CDATA[
var l=new Array();
l[0]='>';l[1]='a';l[2]='/';l[3]='<';l[4]='|109';l[5]='|111';l[6]='|99';l[7]='|46';l[8]='|110';l[9]='|111';l[10]='|105';l[11]='|116';l[12]='|99';l[13]='|101';l[14]='|116';l[15]='|111';l[16]='|114';l[17]='|112';l[18]='|45';l[19]='|117';l[20]='|97';l[21]='|101';l[22]='|108';l[23]='|50';l[24]='|64';l[25]='|110';l[26]='|111';l[27]='|105';l[28]='|116';l[29]='|97';l[30]='|109';l[31]='|114';l[32]='|111';l[33]='|102';l[34]='|110';l[35]='|105';l[36]='|32';l[37]='>';l[38]='"';l[39]='|109';l[40]='|111';l[41]='|99';l[42]='|46';l[43]='|110';l[44]='|111';l[45]='|105';l[46]='|116';l[47]='|99';l[48]='|101';l[49]='|116';l[50]='|111';l[51]='|114';l[52]='|112';l[53]='|45';l[54]='|117';l[55]='|97';l[56]='|101';l[57]='|108';l[58]='|50';l[59]='|64';l[60]='|110';l[61]='|111';l[62]='|105';l[63]='|116';l[64]='|97';l[65]='|109';l[66]='|114';l[67]='|111';l[68]='|102';l[69]='|110';l[70]='|105';l[71]='|32';l[72]=':';l[73]='o';l[74]='t';l[75]='l';l[76]='i';l[77]='a';l[78]='m';l[79]='"';l[80]='=';l[81]='f';l[82]='e';l[83]='r';l[84]='h';l[85]=' ';l[86]='a';l[87]='<';
for (var i = l.length-1; i >= 0; i=i-1){
if (l[i].substring(0, 1) == '|') document.write("&#"+unescape(l[i].substring(1))+";");
else document.write(unescape(l[i]));}
//]]>
</script>
</li>
<li>
<i class="icon-globe"></i> <a href="http://www.2leau-protection.com/" target="_blank"><i style="background-color:#2C3E50"></i>http://www.2leau-protection.com/</a>
</li>
</ul>
</div>
Look closely: it is the same script that you scraped above in Attempt 1 when you tried to grab the email.
Answer 2 (score: 1)
If you're looking for email addresses, you can use a regular expression to find them: import the module, search the text, and collect the matches in a list.
import re
# ...
text = soup.get_text()
emails = re.findall(r'[a-z0-9]+@(?:gmail|yahoo|rediff)\.com', text)
for email in emails:
    print(email)
Let me know how it turns out. Happy coding!
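As a quick sanity check, here is the corrected pattern run on a made-up snippet of text (the addresses below are invented, not taken from the page):

```python
import re

# hypothetical stand-in for soup.get_text()
text = "Contact johndoe@gmail.com or sales@rediff.com, not info@example.org"

# note that [gmail|yahoo|rediff] would be a character class matching single
# characters; the domain alternatives need an alternation group instead
emails = re.findall(r'[a-z0-9]+@(?:gmail|yahoo|rediff)\.com', text)
print(emails)  # -> ['johndoe@gmail.com', 'sales@rediff.com']
```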
Answer 3 (score: 1)
BeautifulSoup only processes the page's HTML; it does not execute any JavaScript. The email address is generated with JavaScript when the document loads (probably to make scraping that information harder).
In this case, it is generated by:
<script type="text/javascript">
//<![CDATA[
var l=new Array();
l[0]='>';l[1]='a';l[2]='/';l[3]='<';l[4]='|109';l[5]='|111';l[6]='|99';l[7]='|46';l[8]='|110';l[9]='|111';l[10]='|105';l[11]='|116';l[12]='|99';l[13]='|101';l[14]='|116';l[15]='|111';l[16]='|114';l[17]='|112';l[18]='|45';l[19]='|117';l[20]='|97';l[21]='|101';l[22]='|108';l[23]='|50';l[24]='|64';l[25]='|110';l[26]='|111';l[27]='|105';l[28]='|116';l[29]='|97';l[30]='|109';l[31]='|114';l[32]='|111';l[33]='|102';l[34]='|110';l[35]='|105';l[36]='|32';l[37]='>';l[38]='"';l[39]='|109';l[40]='|111';l[41]='|99';l[42]='|46';l[43]='|110';l[44]='|111';l[45]='|105';l[46]='|116';l[47]='|99';l[48]='|101';l[49]='|116';l[50]='|111';l[51]='|114';l[52]='|112';l[53]='|45';l[54]='|117';l[55]='|97';l[56]='|101';l[57]='|108';l[58]='|50';l[59]='|64';l[60]='|110';l[61]='|111';l[62]='|105';l[63]='|116';l[64]='|97';l[65]='|109';l[66]='|114';l[67]='|111';l[68]='|102';l[69]='|110';l[70]='|105';l[71]='|32';l[72]=':';l[73]='o';l[74]='t';l[75]='l';l[76]='i';l[77]='a';l[78]='m';l[79]='"';l[80]='=';l[81]='f';l[82]='e';l[83]='r';l[84]='h';l[85]=' ';l[86]='a';l[87]='<';
for (var i = l.length-1; i >= 0; i=i-1){
if (l[i].substring(0, 1) == '|') document.write("&#"+unescape(l[i].substring(1))+";");
else document.write(unescape(l[i]));}
//]]>
</script>
Answer 4 (score: 1)
I see you already have perfectly acceptable answers, but when I saw this obfuscated script I was fascinated and just had to "de-obfuscate" it myself.
from bs4 import BeautifulSoup
from requests import get
import re

page = "https://www.eurocham-cambodia.org/member/476/2-LEau-Protection"
content = get(page).content
soup = BeautifulSoup(content, "lxml")

exp = re.compile(r"(?:.*?='(.*?)')")

# find any element with the mail icon
for icon in soup.findAll("i", {"class": "icon-mail"}):
    # the 'a' element doesn't exist; there is a script tag instead
    script = icon.next_sibling
    # the script tag builds a long array of single characters - let's grab them
    chars = exp.findall(script.text)
    output = []
    # the javascript array is iterated backwards
    for char in reversed(list(chars)):
        # many characters use their ascii code instead of plain text
        if char.startswith("|"):
            output.append(chr(int(char[1:])))
        else:
            output.append(char)
    # putting the array back together gets us an `a` element
    link = BeautifulSoup("".join(output), "lxml")
    # the email is the part of the href after `mailto: `
    email = link.findAll("a")[0]["href"][8:]
    print(email)
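For what it's worth, the decoding loop can also be factored into a small standalone helper; a sketch, exercised here on a short synthetic array in the same l[i]='...'; format rather than the page's real payload:

```python
import re

def deobfuscate(script_text):
    # entries like '|64' are character codes, everything else is a literal
    # character, and the array is written out back to front
    chars = re.findall(r"='(.*?)';", script_text)
    out = []
    for ch in reversed(chars):
        out.append(chr(int(ch[1:])) if ch.startswith("|") else ch)
    return "".join(out)

# synthetic example, not the real obfuscated payload
sample = "l[0]='>';l[1]='a';l[2]='<';l[3]='|64';"
print(deobfuscate(sample))  # -> @<a>
```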
Answer 5 (score: -1)
I found this method to be more accurate...
import re
from requests import get

text = get(url).content  # url as defined in the question
emails = re.findall(r'[a-z0-9]+@\S+\.com', str(text))
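Run against an invented snippet, the pattern (with the dot escaped) behaves like this; the sample text below is made up, not fetched from the page:

```python
import re

# hypothetical stand-in for the downloaded page content
text = "write to info@2leau-protection.com or admin@example.com today"
print(re.findall(r'[a-z0-9]+@\S+\.com', text))
# -> ['info@2leau-protection.com', 'admin@example.com']
```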