How to loop over a web page's hyperlinks and extract their text

Date: 2019-07-05 14:10:32

Tags: html python-3.x web-scraping beautifulsoup

I am still fairly new to Python and am trying to use it for web scraping.

Specifically, I want to get all the quotes on this page, which are linked with the text "XXX full quotes by YYY" or, when there is only one quote, "Full quotes by YYY". After grabbing the text on each page, I would like to save each one as a separate text file.

I have been following this tutorial, but I am somewhat stuck on how to filter the HTML. To be honest, I have almost no experience with HTML, so it is hard for me to understand what it means, but I think the part of interest looks like this:

 <a href="javascript:pop('../2020/
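That `href` seems to wrap a relative path in a JavaScript call, so the path can be pulled out with plain string handling. A minimal sketch (the filename here is made up, since the real ones are truncated above):

```python
# hypothetical href in the shape shown above (the filename is invented)
href = "javascript:pop('../2020/Some_Candidate_Free_Trade.htm')"

# the relative path sits between the single quotes
path = href.split("'")[1]                           # ../2020/Some_Candidate_Free_Trade.htm

# drop the leading ".." and prepend the site root to get an absolute URL
url = "http://archive.ontheissues.org" + path[2:]
print(url)
```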

Here is my code so far for opening the web page:

import re
from urllib.request import Request, urlopen as uReq
from bs4 import BeautifulSoup as soup

# define the url of interest
my_url = 'http://archive.ontheissues.org/Free_Trade.htm'

# set a known browser user agent on the request so the server does not reject it
req = Request(my_url, headers={'User-Agent': 'Mozilla/5.0'})

# open the connection and grab the page
uClient = uReq(req)
page_html = uClient.read()
uClient.close()

# parse the raw HTML (note: this reuses the name of the imported BeautifulSoup alias)
soup = soup(page_html, "html.parser")

Any help is much appreciated.

EDIT

My idea is to first compile the relevant URLs and store them, and then have bs4 grab the text at each URL. I have managed to isolate all the links of interest:

tags = soup.find_all("a", href=re.compile("javascript:pop"))
print(tags)

for links in tags:
    link = links.get('href')
    if "java" in link:
        # slice off the javascript:pop('..  prefix and the trailing ')
        print("http://archive.ontheissues.org" + link[18:len(link)-3])

Now, how do I extract the text from each individual link?
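My rough plan, combining the link extraction above with per-page downloads, would look something like this. This is only a sketch and is untested against the live site; the helper name `pop_href_to_url` and the `quote_<i>.txt` naming scheme are my own inventions:

```python
import re
import requests
from bs4 import BeautifulSoup

BASE = "http://archive.ontheissues.org"
HEADERS = {"User-Agent": "Mozilla/5.0"}

def pop_href_to_url(href):
    # turn javascript:pop('../2020/Page.htm') into an absolute URL
    path = href.split("'")[1]   # ../2020/Page.htm
    return BASE + path[2:]      # drop the leading ".."

def save_quote_pages(index_url):
    soup = BeautifulSoup(requests.get(index_url, headers=HEADERS).content, "html.parser")
    for i, a in enumerate(soup.find_all("a", href=re.compile("javascript:pop"))):
        url = pop_href_to_url(a["href"])
        sub = BeautifulSoup(requests.get(url, headers=HEADERS).content, "html.parser")
        # write each linked page's visible text to its own file
        with open("quote_%d.txt" % i, "w", encoding="utf-8") as f:
            f.write(sub.get_text(separator="\n", strip=True))

# save_quote_pages("http://archive.ontheissues.org/Free_Trade.htm")
```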

2 answers:

Answer 0: (score: 1)

Use requests and a regular expression to search for the specific link text and save the text values to a text file:

import requests
from bs4 import BeautifulSoup
import re

URL = 'http://archive.ontheissues.org/Free_Trade.htm'
headers = {"User-Agent": 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36'}
page = requests.get(URL, headers=headers)
soup = BeautifulSoup(page.content, 'html.parser')
file1 = open("Quotefile.txt", "w")
for a in soup.find_all('a', text=re.compile("the full quote by|full quotes by")):
    file1.writelines(a.text.strip() + "\n")
    # print(a.text.strip())
file1.close()
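Note that the `text=` filter matches against the link's visible text, not its `href`. A self-contained example with made-up HTML shows the effect:

```python
import re
from bs4 import BeautifulSoup

# invented snippet mimicking the structure of the target page
html = """
<a href="javascript:pop('../2020/x.htm')">Click here for 3 full quotes by Jane Doe</a>
<a href="other.htm">Unrelated link</a>
"""
soup = BeautifulSoup(html, "html.parser")

# only anchors whose visible text matches the regex are returned
matches = [a.text for a in soup.find_all("a", text=re.compile("full quotes by"))]
print(matches)
```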

EDITED

import requests
from bs4 import BeautifulSoup
import re

URL = 'http://archive.ontheissues.org/Free_Trade.htm'
headers = {"User-Agent": 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36'}
page = requests.get(URL, headers=headers)
soup = BeautifulSoup(page.content, 'html.parser')
file1 = open("Quotefile.txt", "w")
for a in soup.find_all('a', href=re.compile("javascript:pop")):
    # the relative path sits between the quotes of the javascript:pop('...') call
    shref = a['href'].split("'")[1]
    if 'Background_Free_Trade.htm' not in shref:
        # drop the leading ".." and prepend the site root
        link = "http://archive.ontheissues.org" + shref[2:]
        print(link)
        file1.writelines(a.text.strip() + "\n")
file1.close()

EDITED2

import requests
from bs4 import BeautifulSoup
import re

URL = 'http://archive.ontheissues.org/Free_Trade.htm'
headers = {"User-Agent": 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36'}
page = requests.get(URL, headers=headers)
soup = BeautifulSoup(page.content, 'html.parser')
file1 = open("Quotefile.txt", "w")
for a in soup.find_all('a', href=re.compile("javascript:pop")):
    shref = a['href'].split("'")[1]
    if 'Background_Free_Trade.htm' not in shref:
        link = "http://archive.ontheissues.org" + shref[2:]
        print(link)
        pagex = requests.get(link, headers=headers)
        # use a separate variable so the outer soup is not overwritten
        soupx = BeautifulSoup(pagex.content, 'html.parser')
        h1 = soupx.find('h1')
        if h1:  # guard against pages without an <h1>
            print(h1.text)
            file1.writelines(h1.text + "\n")
file1.close()

Answer 1: (score: 0)

Is this what you want?

soup = soup(page_html, "html.parser")
if __name__ == '__main__':
    for tag in soup.find_all('a'):  # type: Tag
        if 'href' in tag.attrs and tag.attrs.get('href').startswith("javascript:pop('../2020/"):
            print(tag)