I am working on a Dataproc Spark cluster with Jupyter Notebook installed via an initialization action. I cannot read a csv file stored in a Google Cloud Storage bucket, but I can read the same file when working in the Spark shell.
Below is the error I get:
import pandas as pd
import numpy as np
data = pd.read_csv("gs://dataproc-78r5fe64b-a56d-4f5f4-bcf9-e1b7t6fb9d8f-au-southeast1/notebooks/datafile.csv")
FileNotFoundError Traceback (most recent call last)
<ipython-input-20-2457012764fa> in <module>
----> 1 data = pd.read_csv("gs://dataproc-78r5fe64b-a56d-4f5f4-bcf9-e1b7t6fb9d8f-au-southeast1/notebooks/datafile.csv")
/opt/conda/lib/python3.6/site-packages/pandas/io/parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, skipfooter, doublequote, delim_whitespace, low_memory, memory_map, float_precision)
676 skip_blank_lines=skip_blank_lines)
677
--> 678 return _read(filepath_or_buffer, kwds)
679
680 parser_f.__name__ = name
/opt/conda/lib/python3.6/site-packages/pandas/io/parsers.py in _read(filepath_or_buffer, kwds)
438
439 # Create the parser.
--> 440 parser = TextFileReader(filepath_or_buffer, **kwds)
441
442 if chunksize or iterator:
/opt/conda/lib/python3.6/site-packages/pandas/io/parsers.py in __init__(self, f, engine, **kwds)
785 self.options['has_index_names'] = kwds['has_index_names']
786
--> 787 self._make_engine(self.engine)
788
789 def close(self):
/opt/conda/lib/python3.6/site-packages/pandas/io/parsers.py in _make_engine(self, engine)
1012 def _make_engine(self, engine='c'):
1013 if engine == 'c':
-> 1014 self._engine = CParserWrapper(self.f, **self.options)
1015 else:
1016 if engine == 'python':
/opt/conda/lib/python3.6/site-packages/pandas/io/parsers.py in __init__(self, src, **kwds)
1706 kwds['usecols'] = self.usecols
1707
-> 1708 self._reader = parsers.TextReader(src, **kwds)
1709
1710 passed_names = self.names is None
pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader.__cinit__()
pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._setup_parser_source()
FileNotFoundError: File b'gs://dataproc-78r5fe64b-a56d-4f5f4-bcf9-e1b7t6fb9d8f-au-southeast1/notebooks/datafile.csv' does not exist
Location of the CSV file:
gs://dataproc-78r5fe64b-a56d-4f5f4-bcf9-e1b7t6fb9d8f-au-southeast1/notebooks/datafile.csv
I have also made sure the csv file is stored in the same bucket as the Dataproc cluster, and that the file is in csv format with UTF-8 encoding.
Can anyone help me with how to read a file stored in a Google Storage bucket from a Jupyter Notebook running on a Dataproc cluster on Google Cloud?
Please let me know if more information is needed.
Thanks in advance!
Answer 0 (score: 1)
The reason Spark can read from GCS is that we configure it to use the GCS connector for paths that start with gs://. You probably want to use spark.read.csv("gs://path/to/files/") to read the CSV file into a Spark DataFrame.
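For example, a minimal sketch of that approach, using the bucket path from the question (on Dataproc's PySpark kernel a `spark` session usually already exists; building one here just keeps the example self-contained):

import pandas as pd
from pyspark.sql import SparkSession

# Reuse or create a Spark session; on Dataproc the GCS connector is already configured.
spark = SparkSession.builder.appName("read-gcs-csv").getOrCreate()

# Read the CSV straight from GCS into a Spark DataFrame.
df = spark.read.csv(
    "gs://dataproc-78r5fe64b-a56d-4f5f4-bcf9-e1b7t6fb9d8f-au-southeast1/notebooks/datafile.csv",
    header=True,       # assumes the file has a header row
    inferSchema=True,  # let Spark guess the column types
)
df.show(5)

# If you ultimately want pandas, convert the result
# (only sensible when the data fits in driver memory).
pdf = df.toPandas()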
You can use pandas to read from and write to GCS, but it is a bit more involved. This stackoverflow post lists some options.
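As a sketch of two such options (assuming either a recent pandas with the gcsfs package installed, or the google-cloud-storage client library available in the notebook's Python environment; neither is guaranteed to be present on every image):

import pandas as pd

# Option 1: with gcsfs installed, recent pandas versions can open gs:// paths directly.
data = pd.read_csv(
    "gs://dataproc-78r5fe64b-a56d-4f5f4-bcf9-e1b7t6fb9d8f-au-southeast1/notebooks/datafile.csv"
)

# Option 2: download the object with the google-cloud-storage client, then read it locally.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("dataproc-78r5fe64b-a56d-4f5f4-bcf9-e1b7t6fb9d8f-au-southeast1")
blob = bucket.blob("notebooks/datafile.csv")
blob.download_to_filename("/tmp/datafile.csv")
data = pd.read_csv("/tmp/datafile.csv")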
Side note: if you are using pandas, you should use a single node cluster, since pandas code is not distributed across the cluster.