No errors while scraping LinkedIn jobs, but no output is shown

Asked: 2019-07-14 18:51:14

Tags: python web-scraping beautifulsoup data-science spyder

I am trying to scrape LinkedIn job postings with the help of Beautiful Soup. I do not get any errors, but no output is shown either. I also want to save the data as a CSV, but again nothing is written and no error appears.

from urllib.request import urlopen
from bs4 import BeautifulSoup


titles = []
companies = []
locations = []
links = []

file = "lindinjobs3.csv"
f = open(file, "w")

Headers = "title,company,location,link\n"
f.write(Headers)

for page in range(0, 1):
    # the two literals must form one parenthesised expression; in the original
    # code the second literal was a separate, no-op statement, so the query
    # string was silently truncated
    url = ("http://www.linkedin.com/jobs/search?keywords=%22Data+Scientist%22&locationId="
           "fr:0&start=nPostings&count=25&trk=jobs_jserp_pagination_1")
    html = urlopen(url)
    soup = BeautifulSoup(html, "html.parser")

    # strip <script> and <style> blocks first
    for script in soup(["script", "style"]):
        script.extract()

    # search once, after the cleanup loop; note that Tag objects are not
    # orderable, so results.sort() would raise TypeError on a non-empty list
    results = soup.find_all('li', {'class': 'job-listing'})
    print(results)

    for res in results:
        # fall back to 'None' whenever an element is missing
        titles.append(res.h2.a.span.get_text() if res.h2.a.span else 'None')
        company = res.find('span', {'class': 'company-name-text'})
        companies.append(company.get_text() if company else 'None')
        location = res.find('span', {'class': 'job-location'})
        locations.append(location.get_text() if location else 'None')
        link = res.find('a', {'class': 'job-title-link'})
        links.append(link.get('href') if link else 'None')
        print(companies)
        # write the current row, not the whole accumulated lists
        f.write(f'{titles[-1]},{companies[-1]},{locations[-1]},{links[-1]}\n')
f.close()
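A likely explanation for the silent behaviour (an assumption, since the fetched HTML is not shown in the question): LinkedIn's job-search page loads its listings with JavaScript and/or requires a logged-in session, so the static HTML returned to `urlopen` contains no `li.job-listing` elements. `find_all` then returns an empty list, the inner loop never runs, and nothing is printed or written, with no exception raised. The minimal offline sketch below reproduces that failure mode with a hard-coded HTML snippet:

```python
from bs4 import BeautifulSoup

# Simulates the page an unauthenticated client may receive: the job data is
# injected by JavaScript, so the static markup has no <li class="job-listing">.
html = """
<html><body>
  <div id="app">Loading jobs...</div>
  <script>window.__DATA__ = {};</script>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")
results = soup.find_all("li", {"class": "job-listing"})

# An empty result set: iterating over it is a no-op, which matches
# "no errors, but no output".
print(results)       # []
print(len(results))  # 0
```

If `len(results)` is 0 against the live page, the problem is the selector or the page delivery, not the CSV code; printing `soup.prettify()[:2000]` shows what the server actually returned.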

0 Answers:

There are no answers yet.