I'm working on some scraping code and it keeps returning errors that I imagine others here might be able to help with.
First, I run this snippet:
import pandas as pd
import requests
from bs4 import BeautifulSoup as BShtml  # this import was missing from the snippet
from urllib.parse import urljoin
base = "http://www.reed.co.uk/jobs"
url = "http://www.reed.co.uk/jobs?datecreatedoffset=Today&pagesize=100"
r = requests.get(url).content
soup = BShtml(r, "html.parser")
# collect the pagination links at the bottom of the results page
df = pd.DataFrame(columns=["links"], data=[urljoin(base, a["href"]) for a in soup.select("div.pages a.page")])
df
I run this snippet on the first page of today's job listings. I then extract the URLs at the bottom of the page so I can work out how many pages of results exist at that point in time. The regular expressions below do that job for me:
# pull the 'pageno=...' chunk (six letters, '=', and the digits that follow) out of each pagination URL
df['partone'] = df['links'].str.extract('([a-z][a-z][a-z][a-z][a-z][a-z]=[0-9][0-9].)', expand=True)
# then keep just a three-digit number from that chunk
df['maxlink'] = df['partone'].str.extract('([0-9][0-9][0-9])', expand=True)
pagenum = df['maxlink'][4]
pagenum = pd.to_numeric(pagenum, errors='ignore')
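As an aside, a less brittle way to get the page count (just a sketch, not the code I'm actually running; it assumes the pagination links carry a pageno query parameter, like the loop URLs further down) would be to parse the query string instead of regexing the URL text:
from urllib.parse import urlparse, parse_qs
def page_number(link):
    # read the "pageno" query parameter from a pagination link, if present
    qs = parse_qs(urlparse(link).query)
    return int(qs["pageno"][0]) if "pageno" in qs else None
# the largest page number among the pagination links is the total page count
pagenum = max(n for n in (page_number(link) for link in df["links"]) if n is not None)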
As in the third line of the regex snippet above, the number of pages is always contained in the fifth URL in that list (hence the [4] index). I'm sure there's a more elegant way of doing this, but it does what I need. I then feed the number taken from that URL into a loop:
result_set = []
loopbasepref = 'http://www.reed.co.uk/jobs?cached=True&pageno='
loopbasesuf = '&datecreatedoffset=Today&pagesize=100'
for pnum in range(1, pagenum):
    url = loopbasepref + str(pnum) + loopbasesuf
    r = requests.get(url).content
    soup = BShtml(r, "html.parser")
    # the next line is the one that raises the TypeError described below
    df2 = pd.DataFrame(columns=["links"], data=[urljoin(base, a["href"]) for a in soup.select("div", class_="results col-xs-12 col-md-10")])
    result_set.append(df2)
    print(df2)
This is where I hit the error. What I'm trying to do is loop over every page that lists jobs, from page 1 up to page N where N = pagenum, extract the URLs that link to each individual job page, and store them in a dataframe. I've tried various combinations of soup.select("div", class_=""), but every time it runs I get the error: TypeError: select() got an unexpected keyword argument 'class_'.
If anyone has any thoughts on this and can see a good way forward, I'd appreciate the help!
Cheers
Chris
Answer 0 (score: 1)
You can keep looping until there is no next page:
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin
base = "http://www.reed.co.uk"
url = "http://www.reed.co.uk/jobs?datecreatedoffset=Today&pagesize=100"
def all_urls():
    r = requests.get(url).content
    soup = BeautifulSoup(r, "html.parser")
    # get the urls from the first page
    yield [urljoin(base, a["href"]) for a in soup.select('div.details h3.title a[href^="/jobs"]')]
    nxt = soup.find("a", title="Go to next page")
    # title="Go to next page" is missing when there are no more pages
    while nxt:
        # wash/repeat until no more pages
        r = requests.get(urljoin(base, nxt["href"])).content
        soup = BeautifulSoup(r, "html.parser")
        yield [urljoin(base, a["href"]) for a in soup.select('div.details h3.title a[href^="/jobs"]')]
        nxt = soup.find("a", title="Go to next page")
Simply loop over the generator function to get the URLs from each page:
for u in all_urls():
    print(u)
I also use a[href^="/jobs"] in the selector because there are other anchors that would match otherwise, so this makes sure we only pull the job paths.
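In case the attribute selector is unfamiliar, here is a minimal, self-contained illustration (a made-up snippet, not real reed.co.uk markup) of how the ^= prefix match keeps only the job links:
from bs4 import BeautifulSoup
html = """
<div class="details">
  <h3 class="title"><a href="/jobs/python-developer/123">Python developer</a></h3>
  <h3 class="title"><a href="/login">Sign in</a></h3>
</div>
"""
soup = BeautifulSoup(html, "html.parser")
# only the anchor whose href starts with /jobs survives the filter
print([a["href"] for a in soup.select('div.details h3.title a[href^="/jobs"]')])
# -> ['/jobs/python-developer/123']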
In your own code, the correct way to write that selector would be:
soup.select("div.results.col-xs-12.col-md-10")
Your syntax would work with find or find_all, where class_=... is how you pass the css class:
soup.find_all("div", class_="results col-xs-12 col-md-10")
But that isn't the div you want in any case.
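To make the select vs. find_all distinction concrete, here is a small sketch (again on a made-up snippet) showing that the two calls target the same element:
from bs4 import BeautifulSoup
html = '<div class="results col-xs-12 col-md-10"><p>job list goes here</p></div>'
soup = BeautifulSoup(html, "html.parser")
# CSS selector syntax: chain the classes with dots
by_select = soup.select("div.results.col-xs-12.col-md-10")
# find_all syntax: pass the class attribute string via the class_ keyword
by_find_all = soup.find_all("div", class_="results col-xs-12 col-md-10")
print(by_select == by_find_all)  # True -> both return the same single div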
Not sure why you want to create multiple dfs, but if that's what you're after:
import pandas as pd  # needed for the DataFrames below
def all_urls():
    r = requests.get(url).content
    soup = BeautifulSoup(r, "html.parser")
    yield pd.DataFrame([urljoin(base, a["href"]) for a in soup.select('div.details h3.title a[href^="/jobs"]')],
                       columns=["Links"])
    nxt = soup.find("a", title="Go to next page")
    while nxt:
        r = requests.get(urljoin(base, nxt["href"])).content
        soup = BeautifulSoup(r, "html.parser")
        yield pd.DataFrame([urljoin(base, a["href"]) for a in soup.select('div.details h3.title a[href^="/jobs"]')],
                           columns=["Links"])
        nxt = soup.find("a", title="Go to next page")
dfs = list(all_urls())
That will give you a list of dfs:
In [4]: dfs = list(all_urls())
In [5]: dfs[0].head(10)
Out[5]:
Links
0 http://www.reed.co.uk/jobs/tufting-manager/308...
1 http://www.reed.co.uk/jobs/financial-services-...
2 http://www.reed.co.uk/jobs/head-of-finance-mul...
3 http://www.reed.co.uk/jobs/class-1-drivers-req...
4 http://www.reed.co.uk/jobs/freelance-middlewei...
5 http://www.reed.co.uk/jobs/sage-200-consultant...
6 http://www.reed.co.uk/jobs/bereavement-support...
7 http://www.reed.co.uk/jobs/property-letting-ma...
8 http://www.reed.co.uk/jobs/graduate-recruitmen...
9 http://www.reed.co.uk/jobs/solutions-delivery-...
But if you just want a single df, use itertools.chain with your original code:
from itertools import chain
df = pd.DataFrame(columns=["links"], data=list(chain.from_iterable(all_urls())))
That gives you all the links in one df:
In [7]: from itertools import chain
...: df = pd.DataFrame(columns=["links"], data=list(chain.from_iterable(all_
...: urls())))
...:
In [8]: df.size
Out[8]: 675
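If the chain.from_iterable step looks opaque: it simply flattens the per-page lists that all_urls() yields into one flat sequence of links before they go into the DataFrame. A minimal illustration with made-up stand-in lists rather than real job URLs:
from itertools import chain
# each inner list stands in for the URLs scraped from one results page
pages = [["/jobs/a", "/jobs/b"], ["/jobs/c"], ["/jobs/d", "/jobs/e"]]
print(list(chain.from_iterable(pages)))
# -> ['/jobs/a', '/jobs/b', '/jobs/c', '/jobs/d', '/jobs/e']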