When I run this script it doesn't work and I don't know why. Can you help me?
import pandas as pd
data1 = pd.read_csv(url)  # url is assigned earlier in the script (its value is omitted from this post)
print(data1)
Error:
Traceback (most recent call last):
File "C:\Users\abc\Desktop\script.py", line 4, in <module>
data1 = pd.read_csv(url)
File "C:\Users\abc\AppData\Local\Programs\Python\Python36-32\lib\site-packages\pandas\io\parsers.py", line 646, in parser_f
return _read(filepath_or_buffer, kwds)
File "C:\Users\abc\AppData\Local\Programs\Python\Python36-32\lib\site-packages\pandas\io\parsers.py", line 401, in _read
data = parser.read()
File "C:\Users\abc\AppData\Local\Programs\Python\Python36-32\lib\site-packages\pandas\io\parsers.py", line 939, in read
ret = self._engine.read(nrows)
File "C:\Users\abc\AppData\Local\Programs\Python\Python36-32\lib\site-packages\pandas\io\parsers.py", line 1508, in read
data = self._reader.read(nrows)
File "pandas\parser.pyx", line 848, in pandas.parser.TextReader.read (pandas\parser.c:10415)
File "pandas\parser.pyx", line 870, in pandas.parser.TextReader._read_low_memory (pandas\parser.c:10691)
File "pandas\parser.pyx", line 924, in pandas.parser.TextReader._read_rows (pandas\parser.c:11437)
File "pandas\parser.pyx", line 911, in pandas.parser.TextReader._tokenize_rows (pandas\parser.c:11308)
File "pandas\parser.pyx", line 2024, in pandas.parser.raise_parser_error (pandas\parser.c:27037)
pandas.io.common.CParserError: Error tokenizing data. C error: Expected 45 fields in line 49, saw 46
Thanks!
Answer 0 (score: 1)
Example: pd.read_csv(<location of the file>) — pass the file's location (a path or URL) as a string, for instance pd.read_csv('myfile.csv').
That's all!
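A minimal sketch of that suggestion, assuming a hypothetical local file 'myfile.csv' (the real path/URL from the question is not shown). The second call is only relevant if the field-count error persists: error_bad_lines=False is the skip-bad-rows option in the old pandas shown in the traceback (newer pandas uses on_bad_lines='skip' instead).

import pandas as pd

# Pass the file location (path or URL) as a string.
data1 = pd.read_csv('myfile.csv')

# If the "Expected 45 fields ... saw 46" error still appears, the malformed
# rows can be skipped (pandas drops them and prints a warning):
data1 = pd.read_csv('myfile.csv', error_bad_lines=False)

print(data1)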
Answer 1 (score: 0)
It's hard to tell from just that. I'd suggest downloading PyCharm and running the code in debug mode, stepping through it line by line to see where the problem might be.
See the JetBrains documentation.
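In the same spirit of inspecting the data step by step, a quicker check for this specific error is to count the fields on each line yourself. A small sketch, assuming a comma-separated file named 'myfile.csv' (hypothetical name):

import csv

# Compare each row's field count against the header row.
with open('myfile.csv', newline='') as f:
    reader = csv.reader(f)
    header = next(reader)
    expected = len(header)          # the traceback suggests 45 here
    for lineno, row in enumerate(reader, start=2):
        if len(row) != expected:
            print(f'line {lineno}: expected {expected} fields, saw {len(row)}')

Any line this reports (line 49 with 46 fields, according to the traceback) is the one pandas chokes on; fixing or removing that row, or passing the correct delimiter to read_csv, usually resolves the CParserError.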