I'm confused about why I'm getting an AttributeError. It only occurs when I pass a list equal to stock_list; if I print the list first and then copy and paste it in, I don't get the error.
I tried reading the technology tickers from the CSV at the top of the code, but doing that raises an AttributeError. Why doesn't this happen when I first print the list and then copy and paste it?
file = 'techtickerlist.csv'
with open(file) as f:
    reader = csv.reader(f)
    technologyTickers = []
    for row in reader:
        technologyTickers.append(row[0])

def scrape(stock_list, interested, technicals):
    SuggestedStocks = []
    for each_stock in stock_list:
        try:
            technicals = scrape_yahoo(each_stock)
            condition_1 = float(technicals.get('Return on Equity', 0).replace('%', '').replace('N/A', '-100').replace(',', '')) > 25
            condition_2 = float(technicals.get('Trailing P/E', 0).replace('N/A', '0').replace(',', '')) < 25
            condition_3 = float(technicals.get('Price/Book', 0).replace('N/A', '100')) < 8
            condition_4 = float(technicals.get('Beta (3Y Monthly)', 0).replace('N/A', '100')) < 1.1
            if condition_1 and condition_2 and condition_3 and condition_4:
                print(each_stock)
                SuggestedStocks.append(each_stock)
                for ind in interested:
                    print(ind + ": " + technicals[ind])
                print("------")
            # Use delay to avoid getting flagged as bot
            time.sleep(1)
        except ValueError:
            print('Value Error')
            return
        #return technicals
    print(SuggestedStocks)
def main():
    stock_list = technologyTickers
    interested = ['Return on Equity', 'Revenue', 'Quarterly Revenue Growth', 'Trailing P/E', 'Beta (3Y Monthly)', 'Price/Book']
    technicals = {}
    tech = scrape(stock_list, interested, technicals)
    print(tech)
AttributeError: 'int' object has no attribute 'replace'
Answer 0 (score: 1)
Check your use of

technicals.get('Return on Equity', 0)

If the key does not exist, the dict method .get returns the default value 0, which is of type int. In your implementation, all the default values are of type int, because they are written as numbers rather than as strings (wrapped in quotes), and an int has no .replace method. If zero is the right default, you can keep your implementation and avoid the error just by changing the type:

technicals.get('Return on Equity', '0')
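A minimal sketch of the difference, using a hypothetical `technicals` dict where the key is missing:

```python
technicals = {}  # 'Return on Equity' key is missing

# Default 0 is an int, so calling .replace() on it fails
try:
    technicals.get('Return on Equity', 0).replace('%', '')
except AttributeError as e:
    print(e)  # 'int' object has no attribute 'replace'

# Default '0' is a str, so .replace() works and float() succeeds
value = float(technicals.get('Return on Equity', '0').replace('%', ''))
print(value)  # 0.0
```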
Answer 1 (score: 0)
I think this will do what you want.
import csv
import requests
from bs4 import BeautifulSoup
from requests.exceptions import HTTPError

url_base = "https://finviz.com/quote.ashx?t="
tckr = ['SBUX', 'MSFT', 'AAPL']
url_list = [url_base + s for s in tckr]

with open('C:\\Users\\Excel\\Downloads\\SO.csv', 'a', newline='') as f:
    writer = csv.writer(f)
    for url in url_list:
        try:
            fpage = requests.get(url)
            fpage.raise_for_status()  # raise HTTPError on 4xx/5xx responses
            fsoup = BeautifulSoup(fpage.content, 'html.parser')
            # write header row
            writer.writerow(map(lambda e: e.text, fsoup.find_all('td', {'class': 'snapshot-td2-cp'})))
            # write body row
            writer.writerow(map(lambda e: e.text, fsoup.find_all('td', {'class': 'snapshot-td2'})))
        except HTTPError:
            print("{} - not found".format(url))