I'm extracting data from a web page. One problem I ran into is that it pulls in a lot of whitespace, so I went with .strip() as others suggested. I've hit a problem with it:
if a.strip():
    print a
if b.strip():
    print b
returns:
a1
b1
.
.
.
But this:
if a.strip():
    aList.append(a)
if b.strip():
    bList.append(b)
print aList, bList
returns:
a1
b1
I tried to mimic with the .strip() here the whitespace I'm removing, but you get the idea. For whatever reason it adds the whitespace to the list even though I tell it not to. I can even print the list inside the if statement and it displays correctly, but for whatever reason, when I decide to print outside the if statement, it doesn't work the way I intended.
Here is my entire code:
# coding: utf-8
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.contrib.exporter import CsvItemExporter
import re
import csv
import urlparse
from stockscrape.items import EPSItem
from itertools import izip

class epsScrape(BaseSpider):
    name = "eps"
    allowed_domains = ["investors.com"]
    ifile = open('test.txt', "r")
    reader = csv.reader(ifile)
    start_urls = []
    for row in ifile:
        url = row.replace("\n","")
        if url == "symbol":
            continue
        else:
            start_urls.append("http://research.investors.com/quotes/nyse-" + url + ".htm")
    ifile.close()

    def parse(self, response):
        f = open("eps.txt", "a+")
        sel = HtmlXPathSelector(response)
        sites = sel.select("//div")
        # items = []
        for site in sites:
            symbolList = []
            epsList = []
            item = EPSItem()
            item['symbol'] = site.select("h2/span[contains(@id, 'qteSymb')]/text()").extract()
            item['eps'] = site.select("table/tbody/tr/td[contains(@class, 'rating')]/span/text()").extract()
            strSymb = str(item['symbol'])
            newSymb = strSymb.replace("[]","").replace("[u'","").replace("']","")
            strEps = str(item['eps'])
            newEps = strEps.replace("[]","").replace(" ","").replace("[u'\\r\\n","").replace("']","")
            if newSymb.strip():
                symbolList.append(newSymb)
                # print symbolList
            if newEps.strip():
                epsList.append(newEps)
                # print epsList
        print symbolList, epsList
        for symb, eps in izip(symbolList, epsList):
            f.write("%s\t%s\n", (symb, eps))
        f.close()
Answer 0 (score: 8)
strip does not modify the string in place. It returns a new string with the whitespace removed.
>>> a = ' foo '
>>> b = a.strip()
>>> a
' foo '
>>> b
'foo'
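To actually keep the stripped value, assign the result back to a name; strip never changes the original string. A minimal sketch, reusing the a from above:

a = ' foo '
a = a.strip()   # rebind a to the stripped copy
print a         # prints: foo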
Answer 1 (score: 0)
I figured out what was causing the confusion. It was where I declared the variables/lists. I declared them inside the for loop, so every iteration rewrote them as blank lists, and they only ever held the results of that one pass through the if statements.
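A minimal, self-contained sketch of the difference (plain Python 2, no Scrapy; the sample values and the names value/resultList are just illustrative stand-ins for the extracted strings):

# List created inside the loop: each iteration starts over with [],
# so only whatever the final iteration appended survives the loop.
for value in ['a1', '   ', 'b1']:
    resultList = []
    if value.strip():
        resultList.append(value)
print resultList   # ['b1'] -- the earlier append was thrown away

# List created once, before the loop: appends accumulate across iterations.
resultList = []
for value in ['a1', '   ', 'b1']:
    if value.strip():
        resultList.append(value)
print resultList   # ['a1', 'b1']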