Extracting data with BeautifulSoup

Date: 2017-05-22 19:24:51

Tags: python html web-scraping beautifulsoup

from urllib.request import urlopen
from bs4 import BeautifulSoup

#specify the url
wiki = "http://www.bbc.com/urdu"

#Query the website and return the html to the variable 'page'
page = urlopen(wiki)


#Parse the html in the 'page' variable, and store it in Beautiful Soup format
soup = BeautifulSoup(page, "html.parser")
all_links = soup.find_all("a")
for link in all_links:
    #print (link.get("href"))
    #text=soup.body.get_text()
    #print(text)
    for script in soup(["script", "style"]):
        script.extract()    # rip it out

# get text
text = soup.body.get_text()

# break into lines and remove leading and trailing space on each
lines = (line.strip() for line in text.splitlines())
# break multi-headlines into a line each
chunks = (phrase.strip() for line in lines for phrase in line.split("  "))
# drop blank lines
text = '\n'.join(chunk for chunk in chunks if chunk)

print(text)
text1 = str(text) 
text_file = open("C:\\Output.txt", 'w') 
text_file.write(text) 
text_file.close()

I want to use Beautiful Soup to extract data from a news website. I have written the code above, but it does not give me the output I need. First I have to go through all the links on the page, then extract the data from each of them and save it to a file; then move on to the next page, extract its data, save it, and so on. For now I am only trying to process the links on the first page, but it does not give me the full text, and the output still contains some tags.
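
To make the intent of that first step clearer, this is roughly what I am aiming for with the front page alone (just a sketch; I am assuming the script/style tags only need to be stripped once rather than once per link, and that the file has to be written with an explicit UTF-8 encoding so the Urdu text survives):

from urllib.request import urlopen
from bs4 import BeautifulSoup

# fetch and parse the front page
page = urlopen("http://www.bbc.com/urdu")
soup = BeautifulSoup(page, "html.parser")

# strip <script> and <style> once for the whole document
for tag in soup(["script", "style"]):
    tag.extract()

# keep only the visible text, one fragment per line
text = soup.body.get_text(separator="\n", strip=True)

# write with an explicit encoding so the Urdu characters are preserved
with open("C:\\Output.txt", "w", encoding="utf-8") as text_file:
    text_file.write(text)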

1 Answer:

Answer 0: (score: 0)

To extract all the links from the website, you can try something like this:

data = []
soup = BeautifulSoup(page, "html.parser")
for link in soup.find_all('a', href=True):
    data.append(link['href'])

text = '\n'.join(data)
print(text)

Then carry on and save that text to a file. After that, you need to iterate over data and request each of those URLs in turn.
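
A rough sketch of that second stage, assuming the relative links should be joined back onto the BBC Urdu base URL with urljoin and that the text of every page can simply be appended to a single output file (all_pages.txt is only an example name):

from urllib.parse import urljoin
from urllib.request import urlopen
from bs4 import BeautifulSoup

base = "http://www.bbc.com/urdu"

with open("all_pages.txt", "w", encoding="utf-8") as out:
    for href in data:
        url = urljoin(base, href)   # turn relative links like "/urdu/..." into full URLs
        try:
            sub_soup = BeautifulSoup(urlopen(url), "html.parser")
        except Exception as err:    # some hrefs (mailto:, javascript:) will not open
            print("skipped", url, err)
            continue
        # drop script/style before taking the text
        for tag in sub_soup(["script", "style"]):
            tag.extract()
        out.write(url + "\n")
        out.write(sub_soup.get_text(separator="\n", strip=True) + "\n\n")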