I have this code, but I keep running into a version of the error in the title. Can anyone help me get past it? The traceback points at the newfilingDate line (fourth line from the bottom), but I suspect that isn't where the actual problem lies.
import csv
import urllib.request
from bs4 import BeautifulSoup

def getIndexLink(tickerCode, FormType):
    # IndexLinksFile (path to the output CSV) is defined elsewhere in the script
    csvOutput = open(IndexLinksFile, "a+b")  # "a+b" indicates that we are adding lines rather than replacing lines
    csvWriter = csv.writer(csvOutput, quoting=csv.QUOTE_NONNUMERIC)

    urlLink = "https://www.sec.gov/cgi-bin/browse-edgar?action=getcompany&CIK="+tickerCode+"&type="+FormType+"&dateb=&owner=exclude&count=100"
    pageRequest = urllib.request.Request(urlLink)
    with urllib.request.urlopen(pageRequest) as url:
        pageRead = url.read()

    soup = BeautifulSoup(pageRead, "html.parser")

    # Check if there is a table to extract / code exists in edgar database
    try:
        table = soup.find("table", {"class": "tableFile2"})
    except:
        print("No tables found or no matching ticker symbol for "+tickerCode)
        return -1

    docIndex = 1
    for row in table.findAll("tr"):
        cells = row.findAll("td")
        if len(cells) == 5:
            if cells[0].text.strip() == FormType:
                link = cells[1].find("a", {"id": "documentsbutton"})
                docLink = "https://www.sec.gov"+link['href']
                description = cells[2].text.encode('utf8').strip()  # strip takes care of the space at the beginning and the end
                filingDate = cells[3].text.encode('utf8').strip()
                newfilingDate = filingDate.replace("-","_")  ### <=== Change date format from 2012-1-1 to 2012_1_1 so it can be used as part of 10-K file names
                csvWriter.writerow([tickerCode, docIndex, docLink, description, filingDate, newfilingDate])
                docIndex = docIndex + 1

    csvOutput.close()
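For what it's worth, here is a minimal snippet that seems to hit the same error on my end, isolated from the EDGAR scraping (the exact message may differ by Python version):

filingDate = "2012-01-01".encode('utf8').strip()   # mirrors what the function builds
newfilingDate = filingDate.replace("-", "_")       # TypeError: a bytes-like object is required, not 'str'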
Answer (score: 0):
A bytes-like object does support .replace, but only if the arguments to replace are also bytes-like (special thanks to juanpa.arrivillaga for pointing this out). In your code, cells[3].text.encode('utf8') returns bytes, so filingDate is a bytes object and filingDate.replace("-", "_") fails because the arguments are str. For example:
foo = b'hi-mom'
foo = foo.replace(b"-", b"_")
print(foo)
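Applied to the line from your question, that becomes (a sketch, assuming filingDate really is bytes at that point, which the traceback suggests):

newfilingDate = filingDate.replace(b"-", b"_")   # bytes pattern and bytes replacement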
Alternatively, you can convert it to a string and then back to a bytes-like object, but that is clumsy and inefficient.
foo = b'hi-mom'
foo = foo.decode('utf-8').replace("-", "_").encode('utf-8')  # decode, not str(), to avoid picking up the b'...' wrapper
print(foo)
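In this particular script you could also sidestep the issue entirely by not calling .encode('utf8') at all and keeping everything as str: in Python 3 the csv module expects text, so the output file would be opened in text mode rather than "a+b". A rough sketch of that variant (just a suggestion, not part of the question's original code; IndexLinksFile and cells are as in the question):

csvOutput = open(IndexLinksFile, "a", newline="", encoding="utf-8")  # text mode; newline="" is the csv module's recommended setting
csvWriter = csv.writer(csvOutput, quoting=csv.QUOTE_NONNUMERIC)

description = cells[2].text.strip()           # keep as str
filingDate = cells[3].text.strip()            # keep as str
newfilingDate = filingDate.replace("-", "_")  # str.replace with str args works as expected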