After a lot of hard work, I managed to extract some of the information I need from the table on this site:
http://gbgfotboll.se/serier/?scr=table&ftid=57108
From the table "Kommande Matcher" (the second table) I managed to extract the date and the team names.
But now I am completely stuck trying to extract, from the first table:
the first column, "Lag"
the second column, "S"
the 6th column, "GM-IM"
the last column, "P"
Any ideas? Thanks.
Answer 0 (score: 2)
Here is what I just did:
from io import BytesIO
import urllib2 as net
from lxml import etree
import lxml.html

request = net.Request("http://gbgfotboll.se/serier/?scr=table&ftid=57108")
response = net.urlopen(request)
data = response.read()

collected = []  # list of tuples: [(col1, col2, ...), (col1, col2, ...)]
dom = lxml.html.parse(BytesIO(data))

# all rows of the first table
xpatheval = etree.XPathDocumentEvaluator(dom)
rows = xpatheval('//div[@id="content-primary"]/table[1]/tbody/tr')

for row in rows:
    columns = row.findall("td")
    collected.append((
        columns[0].find("a").text.encode("utf8"),  # Lag
        columns[1].text,  # S
        columns[5].text,  # GM-IM
        columns[7].text,  # P - last column
    ))

for i in collected:
    print i
You can pass the URL directly to lxml.html.parse() instead of going through urllib2. Also, you can select the target table by its class attribute, like this:
# new version
from lxml import etree
import lxml.html

collected = []  # list of tuples: [(col1, col2, ...), (col1, col2, ...)]
dom = lxml.html.parse("http://gbgfotboll.se/serier/?scr=table&ftid=57108")

# all rows of the standings table, selected by class
xpatheval = etree.XPathDocumentEvaluator(dom)
rows = xpatheval("""//div[@id="content-primary"]/table[
    contains(concat(" ", @class, " "), " clTblStandings ")]/tbody/tr""")

for row in rows:
    columns = row.findall("td")
    collected.append((
        columns[0].find("a").text.encode("utf8"),  # Lag
        columns[1].text,  # S
        columns[5].text,  # GM-IM
        columns[7].text,  # P - last column
    ))

for i in collected:
    print i
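The snippets above are Python 2 (urllib2, print statement). As a rough Python 3 sketch of the same row/column extraction, here is the same logic run against a small hypothetical sample of the table's markup using only the standard library (the team names, scores, and the `class` value are made-up placeholders, and lxml's richer XPath is replaced by ElementTree's limited path syntax):

```python
# Python 3 sketch: same extraction pattern on a minimal sample table.
# All markup and values below are hypothetical, for illustration only.
import xml.etree.ElementTree as ET

SAMPLE = """
<div id="content-primary">
  <table class="clTblStandings">
    <tbody>
      <tr>
        <td><a href="#">Team A</a></td><td>5</td><td>4</td><td>1</td>
        <td>0</td><td>12-3</td><td>9</td><td>13</td>
      </tr>
      <tr>
        <td><a href="#">Team B</a></td><td>5</td><td>3</td><td>1</td>
        <td>1</td><td>10-6</td><td>4</td><td>10</td>
      </tr>
    </tbody>
  </table>
</div>
"""

dom = ET.fromstring(SAMPLE)
collected = []

# Iterate over the table rows and pick the same column indices as above.
for row in dom.iterfind(".//table/tbody/tr"):
    cols = row.findall("td")
    collected.append((
        cols[0].find("a").text,  # Lag (team name, inside the <a> tag)
        cols[1].text,            # S
        cols[5].text,            # GM-IM
        cols[7].text,            # P - last column
    ))

for item in collected:
    print(item)
```

In Python 3 there is also no need for `.encode("utf8")`: strings are Unicode by default, so the team name can be kept as-is.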