Job searching with BeautifulSoup

Date: 2014-01-15 00:14:02

Tags: python beautifulsoup

I want to use BeautifulSoup to search for jobs in my field in batch mode. I will have a list of URLs, each pointing to an employer's careers page. If the search finds the keyword GIS in a job title, I want it to return the link to that job posting.

Here are a couple of example scenarios:

The first company site requires a keyword search. The results page is:

https://jobs-challp.icims.com/jobs/search?ss=1&searchKeyword=gis&searchCategory=&searchLocation=&latitude=&longitude=&searchZip=&searchRadius=20

I would like it to return the following:

https://jobs-challp.icims.com/jobs/2432/gis-specialist/job

https://jobs-challp.icims.com/jobs/2369/gis-specialist/job

The second site does not require a keyword search:

https://www.smartrecruiters.com/SpectraForce1/

I would like it to return the following:

https://www.smartrecruiters.com/SpectraForce1/74966857-gis-specialist

https://www.smartrecruiters.com/SpectraForce1/74944180-gis-technician

This is as far as I have gotten:

from bs4 import BeautifulSoup
import urllib2

url = 'https://www.smartrecruiters.com/SpectraForce1/'
content = urllib2.urlopen(url).read()
soup = BeautifulSoup(content)

text = soup.get_text()

if 'GIS ' in text:
    print 'Job Found!'

There are two problems:

1. This does confirm that a job was found, but it does not return the link to the job posting itself.
2. The two relevant positions on the first company site are not found with this method. I checked this by scanning the output of soup.get_text() and found that the job titles were not in the returned text.

Any help or other suggestions would be greatly appreciated.

Thanks!

3 Answers:

Answer 0 (score: 1):

Here goes!

This code finds all links whose text contains the string "GIS". I needed to append &in_iframe=1 to make the first URL work.

import urllib2
from bs4 import BeautifulSoup

urls = ['https://jobs-challp.icims.com/jobs/search?ss=1&searchKeyword=gis&searchCategory=&searchLocation=&latitude=&longitude=&searchZip=&searchRadius=20&in_iframe=1',
        'https://www.smartrecruiters.com/SpectraForce1/']

for url in urls:
    soup = BeautifulSoup(urllib2.urlopen(url))
    print 'Scraping {}'.format(url)
    for link in soup.find_all('a'):
        if 'GIS' in link.text:
            print '--> TEXT: ' + link.text.strip()
            print '--> URL:  ' + link['href']
            print ''

Output:

Scraping https://jobs-challp.icims.com/jobs/search?ss=1&searchKeyword=gis&searchCategory=&searchLocation=&latitude=&longitude=&searchZip=&searchRadius=20&in_iframe=1
--> TEXT: GIS Specialist
--> URL:  https://jobs-challp.icims.com/jobs/2432/gis-specialist/job?in_iframe=1

--> TEXT: GIS Specialist
--> URL:  https://jobs-challp.icims.com/jobs/2369/gis-specialist/job?in_iframe=1

Scraping https://www.smartrecruiters.com/SpectraForce1/
--> TEXT: Technical Specialist/ Research Analyst/ GIS/ Engineering Technician
--> URL:  https://www.smartrecruiters.com/SpectraForce1/74985505-technical-specialist

--> TEXT: GIS Specialist
--> URL:  https://www.smartrecruiters.com/SpectraForce1/74966857-gis-specialist

--> TEXT: GIS Technician
--> URL:  https://www.smartrecruiters.com/SpectraForce1/74944180-gis-technician
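One caveat worth noting (my addition, not part of the original answer): on some sites the scraped href attributes are relative paths rather than absolute URLs, so resolving them against the page URL before printing is safer. A minimal sketch with the standard library's urljoin (shown for Python 3; in Python 2 the same function lives in the urlparse module):

```python
from urllib.parse import urljoin  # Python 2: from urlparse import urljoin

page = 'https://jobs-challp.icims.com/jobs/search?ss=1'

# urljoin resolves a relative href against the page URL;
# an already-absolute href passes through unchanged.
resolved = urljoin(page, '/jobs/2432/gis-specialist/job')
print(resolved)  # https://jobs-challp.icims.com/jobs/2432/gis-specialist/job
```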

Answer 1 (score: 1):

Here is one way:

from bs4 import BeautifulSoup
import urllib2
import re

url = 'https://www.smartrecruiters.com/SpectraForce1/'
html = urllib2.urlopen(url).read()
soup = BeautifulSoup(html)

# Grab the link text and collapse internal whitespace
titles = [i.get_text() for i in soup.findAll('a', {'target': '_blank'})]
jobs = [re.sub(r'\s+', ' ', title) for title in titles]

# Grab the matching hrefs (same selector, so the lists stay aligned)
links = [i.get('href') for i in soup.findAll('a', {'target': '_blank'})]

for i, j in enumerate(jobs):
    if 'GIS' in j:
        print links[i]

If you run it now, it prints:

https://www.smartrecruiters.com/SpectraForce1/74985505-technical-specialist
https://www.smartrecruiters.com/SpectraForce1/74966857-gis-specialist
https://www.smartrecruiters.com/SpectraForce1/74944180-gis-technician
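A small variation (my own sketch, not from the answer): building the titles and the links in two separate findAll() passes works, but iterating over the anchors once keeps each title paired with its href by construction. Assuming the same target="_blank" selector, and using a stand-in HTML snippet so the example runs offline:

```python
import re
from bs4 import BeautifulSoup

# Stand-in for the fetched careers page (illustrative, not the live site).
html = '''
<a target="_blank" href="/SpectraForce1/74966857-gis-specialist">GIS
  Specialist</a>
<a target="_blank" href="/SpectraForce1/74900000-accountant">Accountant</a>
'''
soup = BeautifulSoup(html, 'html.parser')

# One pass over the anchors: normalize whitespace in the link text,
# keep the href whenever the title mentions GIS.
matches = [a.get('href')
           for a in soup.find_all('a', {'target': '_blank'})
           if 'GIS' in re.sub(r'\s+', ' ', a.get_text())]
print(matches)  # ['/SpectraForce1/74966857-gis-specialist']
```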

Answer 2 (score: 1):

Here is my attempt, though it is almost identical to the ones above:

from bs4 import BeautifulSoup
from urllib2 import urlopen

def work(url):
    soup = BeautifulSoup(urlopen(url).read())

    for i in soup.findAll("a", text=True):
        if "GIS" in i.text:
            print "Found link "+i["href"].replace("?in_iframe=1", "")

urls = ["https://jobs-challp.icims.com/jobs/search?pr=0&searchKeyword=gis&searchRadius=20&in_iframe=1", "https://www.smartrecruiters.com/SpectraForce1/"]

for i in urls:
    work(i)

It defines a function work() that does the actual work: it fetches the page from the remote server with urlopen(), since it looks like you want to use urllib2 (though I would recommend Python-Requests); then it finds all the a elements (links) with findAll(), and for each link it checks whether "GIS" is in the link's text and, if so, prints the link's href attribute.

It then defines the list of URLs (only 2 in this case) and runs the work() function for each URL in the list, passing the URL to the function as an argument.
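Since the answer recommends Python-Requests, here is a rough sketch of how the same loop might look with it. The filtering is factored into a function so it can be exercised without a network connection; the function name and the sample HTML are mine, not from the post:

```python
import re
from bs4 import BeautifulSoup

def gis_links(html):
    # Return the hrefs of all links whose text mentions GIS.
    soup = BeautifulSoup(html, 'html.parser')
    return [a['href'] for a in soup.find_all('a', href=True)
            if 'GIS' in re.sub(r'\s+', ' ', a.get_text())]

# With requests installed, the batch loop would look roughly like:
#   import requests
#   for url in urls:
#       for href in gis_links(requests.get(url).text):
#           print(href)

sample = ('<a href="/jobs/2432/gis-specialist/job">GIS Specialist</a>'
          '<a href="/jobs/9/clerk/job">Clerk</a>')
print(gis_links(sample))  # ['/jobs/2432/gis-specialist/job']
```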