I'm working on a basic practice project. I fetch a simple Wikipedia page, write all of its content to a text file using Beautiful Soup, and then count how many times a word appears in that newly written text file.
For some reason, the count I get the first time I run the code is different from the count I get the second time.
I believe the "anime.txt" produced by my first run is different from the one produced by my second run.
The problem must be in the way I collect the text data with Beautiful Soup.
Please help.
from urllib.request import urlopen
from bs4 import BeautifulSoup
f = open("anime.txt", "w", encoding="utf-8")
f.write("")
f.close()
my_url = "https://en.wikipedia.org/wiki/Anime"
uClient = urlopen(my_url)
page_html = uClient.read()
uClient.close()
page_soup = BeautifulSoup(page_html, "html.parser")
p = page_soup.findAll("p")
f = open("anime.txt", "a", encoding="utf-8")
for i in p:
    f.write(i.text)
    f.write("\n\n")
data = open("anime.txt", encoding="utf-8").read()
anime_count = data.count("anime")
Anime_count = data.count("Anime")
print(anime_count, "\n")
print(Anime_count, "\n")
count = anime_count + Anime_count
print("The total number of times the word Anime appears within <p> in the wikipedia page is :", count)
First output:
anime_count = 14
Anime_count = 97
count = 111
Second output:
anime_count = 23
Anime_count = 139
count = 162
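(The differing outputs are consistent with write buffering: the file object opened for appending is never closed before the file is re-opened and counted, so the last buffered paragraphs may not be on disk yet. A minimal sketch of the effect; the demo.txt filename is just for illustration:)

```python
# Writes go into an in-memory buffer first; reading the file before
# close() (or flush()) can miss data still sitting in that buffer.
f = open("demo.txt", "w", encoding="utf-8")
f.write("x" * 10)  # small write: likely still buffered, not yet on disk
before = open("demo.txt", encoding="utf-8").read()
f.close()          # close() flushes the buffer to disk
after = open("demo.txt", encoding="utf-8").read()
print(len(before), len(after))  # typically prints: 0 10
```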
Edit:
I edited the code based on the first 2 comments and, sure enough, it works now :P. Does this look better in terms of how (and how many times) files should be opened and closed?
from urllib.request import urlopen
from bs4 import BeautifulSoup
my_url = "https://en.wikipedia.org/wiki/Anime"
uClient = urlopen(my_url)
page_html = uClient.read()
uClient.close()
page_soup = BeautifulSoup(page_html, "html.parser")
p = page_soup.findAll("p")
f = open("anime.txt", "w", encoding="utf-8")
for i in p:
    f.write(i.text)
    f.write("\n\n")
f.close()
data = open("anime.txt", encoding="utf-8").read()
anime_count = data.count("anime")
Anime_count = data.count("Anime")
print(anime_count, "\n")
print(Anime_count, "\n")
count = anime_count + Anime_count
print("The total number of times the word Anime appears within <p> in the wikipedia page is :", count)
Answer (score: 0)
Don't get tangled up in opening and closing files by hand. Wrap all of the writing and reading in with statements, which close the file (and flush its buffer) automatically.
from urllib.request import urlopen
from bs4 import BeautifulSoup
with open("anime.txt", "w", encoding="utf-8") as outfile:
    my_url = "https://en.wikipedia.org/wiki/Anime"
    uClient = urlopen(my_url)
    page_html = uClient.read()
    uClient.close()
    page_soup = BeautifulSoup(page_html, "html.parser")
    p = page_soup.findAll("p")
    for i in p:
        outfile.write(i.text)
        outfile.write("\n\n")

with open("anime.txt", "r", encoding="utf-8") as infile:
    data = infile.read()

anime_count = data.count("anime")
Anime_count = data.count("Anime")
print(anime_count, "\n")
print(Anime_count, "\n")
count = anime_count + Anime_count
print("The total number of times the word Anime appears within <p> in the wikipedia page is :", count)
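One more refinement worth noting: str.count matches substrings, so "anime" is also found inside longer words like "animes". If a whole-word, case-insensitive count is wanted, a regex with word boundaries is one option (the sample string below is made up for illustration):

```python
import re

data = "Anime and anime-inspired shows; many animes exist."

substring_count = data.lower().count("anime")  # also matches inside "animes"
word_count = len(re.findall(r"\banime\b", data, flags=re.IGNORECASE))

print(substring_count, word_count)  # prints: 3 2
```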