I am trying to download and parse text from some RSS feeds, for example http://rss.sciencedirect.com/publication/science/03043878. Here is a simple example:
import urllib.request
import urllib.parse
import requests
from bs4 import BeautifulSoup

def main():
    soup = BeautifulSoup(urllib.request.urlopen('http://rss.sciencedirect.com/publication/science/03043878'), "html.parser").encode("ascii")
    print(soup)

if __name__ == '__main__':
    main()
In the raw HTML (if you view the site directly), each link is preceded by <link> and followed by </link>. However, the printed BeautifulSoup output replaces <link> with <link/> and removes </link> entirely. Any idea what I might be doing wrong, or is this a bug?
PS: I tried changing the encoding to utf-8, but it still happens.
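A minimal way to reproduce this, independent of the feed (the snippet string below is made up purely for illustration), is to compare how the two parsers handle a <link> element:

from bs4 import BeautifulSoup

# A made-up fragment shaped like an RSS item.
snippet = "<item><link>http://example.com/article</link></item>"

# html.parser applies HTML rules, where <link> is a void (self-closing) tag:
# the URL text ends up outside the tag and </link> is dropped.
print(BeautifulSoup(snippet, "html.parser"))
# -> <item><link/>http://example.com/article</item>

# The xml parser (needs the lxml package installed) keeps the element and its text.
print(BeautifulSoup(snippet, "xml"))
# -> <?xml version="1.0" encoding="utf-8"?><item><link>http://example.com/article</link></item>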
Answer 0 (score: 0)
The parser is not handling the links correctly. For this problem you should use xml as the parser instead of html.parser.
soup = BeautifulSoup(urllib.request.urlopen('http://rss.sciencedirect.com/publication/science/03043878'),"xml")
print(len(soup.find_all("link")))
The output is 52 links.
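To get the URL strings rather than just the count, a small follow-up sketch along the same lines (same feed URL as in the question) could read the text of each tag:

import urllib.request
from bs4 import BeautifulSoup

soup = BeautifulSoup(urllib.request.urlopen('http://rss.sciencedirect.com/publication/science/03043878'), "xml")

for link in soup.find_all("link"):
    # get_text(strip=True) returns the element's text content, i.e. the URL
    # between <link> and </link>.
    print(link.get_text(strip=True))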
Answer 1 (score: 0)
You are parsing RSS. RSS is XML. So pass features="xml" to the BeautifulSoup constructor.
import urllib.request
from bs4 import BeautifulSoup

def main():
    doc = BeautifulSoup(urllib.request.urlopen('http://rss.sciencedirect.com/publication/science/03043878'), "xml")

    # If you want to print it as ascii (as per your original post).
    print(doc.prettify('ascii'))

    # To write it to a file as ascii (as per your original post).
    with open("ascii.txt", "wb") as file:
        file.write(doc.prettify('ascii'))

    # To write it to a file as utf-8 (as the original RSS).
    with open("utf-8.txt", "wb") as file:
        file.write(doc.prettify('utf-8'))

    # If you want to print the links.
    for item in doc.findAll('link'):
        print(item)

if __name__ == '__main__':
    main()
Output to the file and the terminal:
... <link>
http://rss.sciencedirect.com/action/redirectFile?&zone=main&currentActivity=feed&usageType=outward&url=http%3A%2F%2Fwww.sciencedirect.com%2Fscience%3F_ob%3DGatewayURL%26_origin%3DIRSSSEARCH%26_method%3DcitationSearch%26_piikey%3DS0304387817300512%26_version%3D1%26md5%3D16ed8e2672e8048590d3c41993306b0f
</link> ...
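If the goal is the article titles together with their URLs rather than the raw tags, a possible extension (not part of the answer above, and assuming the usual RSS layout where each <item> carries a <title> and a <link>) would be:

import urllib.request
from bs4 import BeautifulSoup

feed_url = 'http://rss.sciencedirect.com/publication/science/03043878'
doc = BeautifulSoup(urllib.request.urlopen(feed_url), "xml")

# Walk the per-article <item> elements instead of every <link> in the document.
for item in doc.find_all('item'):
    title = item.find('title')
    link = item.find('link')
    if title and link:
        print(title.get_text(strip=True), '->', link.get_text(strip=True))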