Web scraping: how do I extract only the information I need?

Date: 2019-07-08 12:03:23

Tags: python web-scraping

I have to scrape some information from the congress.gov website (https://www.congress.gov/search?q=%7B%22source%22%3A%22legislation%22%2C%22congress%22%3A%22115%22%2C%22type%22%3A%22bills%22%7D&page=113). I am unable to extract the sponsor information.

import os
import requests
import csv
from bs4 import BeautifulSoup
import re
x=0
y=0
index=0
mydirectory= '/Users/Antonio/Desktop/statapython assignment'
congress115 =os.path.join(mydirectory, '115congress.csv')
headers = {'User-Agent': 'Make_America_Great_Again',
                    'From': 'Donald'}
with open('115congress.csv', 'w') as f:
    fwriter=csv.writer(f, delimiter=';')
    fwriter.writerow(['Spons'])
    for j in range(1, 114):
        hrurl='https://www.congress.gov/search?q=%7B%22source%22%3A%22legislation%22%2C%22congress%22%3A%22115%22%2C%22type%22%3A%22bills%22%7D&page='+str(j)
        hrpage=requests.get(hrurl, headers=headers)
        data=hrpage.text
        soup=BeautifulSoup(data, 'lxml')
        #index=0;
        for q in soup.findAll('span', {'class':'result-item'}):
            for a in q.findAll('a', href=True, text=True, target='_blank'):
                if a==y:
                    continue
                y=a
                Spons=a['href']
                print(Spons)

What I get looks like this (for brevity, I will report only one of the 7401 results):

/member/michael-enzi/E000285

while what I need is

Sen. Enzi, Michael B. [R-WY] 

I apologize if I have phrased anything badly; this is my first question. Any help would be greatly appreciated.

1 Answer:

Answer 0 (score: 1)

Just extract the text from the <a> tag instead of its href attribute:

...
Spons = a.text
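To illustrate the difference, here is a minimal sketch using a hypothetical HTML snippet shaped like the congress.gov sponsor markup (the snippet itself is an assumption, not the site's actual markup): `a['href']` returns the link target, while `a.text` returns the visible link text you are after.

```python
from bs4 import BeautifulSoup

# Hypothetical snippet modeled on the result-item markup in the question.
html = '''<span class="result-item">
  <a href="/member/michael-enzi/E000285" target="_blank">Sen. Enzi, Michael B. [R-WY]</a>
</span>'''

soup = BeautifulSoup(html, 'html.parser')
a = soup.find('a', href=True)

print(a['href'])  # the link target: /member/michael-enzi/E000285
print(a.text)     # the visible text: Sen. Enzi, Michael B. [R-WY]
```

So in the loop from the question, assigning `Spons = a.text` (rather than `Spons = a['href']`) writes the sponsor's name to the CSV instead of the member URL.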