I've written a Python 2.7 scraper, but I'm getting an error when I try to save my data. The scraper is written in ScraperWiki, but I think that's largely irrelevant to the error - saving in ScraperWiki appears to be handled by SQLAlchemy, and that is what raises the error.
I get this error message:
Traceback (most recent call last):
File "./code/scraper", line 192, in <module>
saving(spreadsheet_pass)
File "./code/scraper", line 165, in saving
scraperwiki.sql.save(["URN"], school, "magic")
File "/usr/local/lib/python2.7/dist-packages/scraperwiki/sql.py", line 195, in save
connection.execute(insert.values(row))
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 729, in execute
return meth(self, multiparams, params)
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/elements.py", line 321, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 826, in _execute_clauseelement
compiled_sql, distilled_params
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 893, in _execute_context
None, None)
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1160, in _handle_dbapi_exception
exc_info
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 199, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb)
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 889, in _execute_context
context = constructor(dialect, self, conn, *args)
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 573, in _init_compiled
param.append(processors[key](compiled_params[key]))
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/processors.py", line 56, in boolean_to_int
return int(value)
sqlalchemy.exc.StatementError: invalid literal for int() with base 10: 'n/a' (original cause: ValueError: invalid literal for int() with base 10: 'n/a') u'INSERT OR REPLACE INTO magic (published_recent, inspection_rating2, schooltype, "LA", "URL", "URN", schoolname, open_closed, opendate_full, inspection_rating, opendate_short, phase, publication_date, include, notes, inspection_date) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)' []
when trying to save this row of data:
{u'published_recent': 'n/a', u'inspection_rating2': 'n/a', u'schooltype': u'Free school', u'LA': u'Tower Hamlets', u'URL': u'http://www.ofsted.gov.uk/inspection-reports/find-inspection-report/provider/ELS/138262', u'URN': u'138262', u'schoolname': u'City Gateway 14-19 Provision', u'open_closed': u'Open', u'opendate_full': u'2012-09-03', u'inspection_rating': 'No section 5 inspection yet', u'opendate_short': u'September 2012', u'phase': u'Alternative provision', u'publication_date': 'n/a', u'include': False, u'notes': 'test message', u'inspection_date': 'n/a'}
using this line of code:
scraperwiki.sql.save(["URN"], school, "magic")
(In ScraperWiki, this saves the data from the 'school' dictionary to a table called "magic", using the key 'URN' as the unique key.)
The strange thing is that sometimes the scraper runs fine and I don't get the error, but other times, running exactly the same code, I do get it.
Things I've tried:
sqlalchemy.exc.IntegrityError: (IntegrityError) constraint failed
Either way, that isn't a solution, because I need to be able to save strings.
Thanks in advance for any answers - this is driving me slightly mad.
Answer 0 (score: 0)
The error says that the value it is trying to save as an integer is 'n/a'. When you're scraping data, you don't always get what you expect. It seems 'n/a' is what the site you're scraping puts in that field when there is no number for it. You'll have to do some validation on the data before saving it.
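A minimal sketch of that kind of validation, assuming you are happy to store missing values as NULL rather than the literal string 'n/a' (the NA_VALUES tuple and the clean_row helper are made up for illustration, they are not part of ScraperWiki):

# -*- coding: utf-8 -*-
# Assumption: 'n/a'-style placeholders should become None (NULL) so that
# SQLAlchemy's integer/boolean column processors never see a non-numeric string.
import scraperwiki

NA_VALUES = ('n/a', 'N/A', '')

def clean_row(row):
    # Return a copy of the row with placeholder strings replaced by None.
    cleaned = {}
    for key, value in row.items():
        if isinstance(value, basestring) and value.strip() in NA_VALUES:
            cleaned[key] = None
        else:
            cleaned[key] = value
    return cleaned

# e.g. inside the saving() function from the traceback:
# scraperwiki.sql.save(["URN"], clean_row(school), "magic")

If you genuinely need to keep the string 'n/a' in those fields, the columns would have to be text rather than integer/boolean, which depends on the type of the data in the rows that were saved first.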