How to convert a CSV file to Parquet

Asked: 2014-09-30 15:18:50

Tags: java bigdata parquet

I am new to BigData. I need to convert csv/txt files to Parquet format. I searched a lot but couldn't find a direct way to do it. Is there any way to achieve this?

9 Answers:

Answer 0 (score: 11)

You can use Apache Drill, as described in Convert a CSV File to Apache Parquet With Drill.

In short:

Start Apache Drill:

$ cd /opt/drill/bin
$ sqlline -u jdbc:drill:zk=local

Create the Parquet file:

-- Set default table format to parquet
ALTER SESSION SET `store.format`='parquet';

-- Create a parquet table containing all data from the CSV table
CREATE TABLE dfs.tmp.`/stats/airport_data/` AS
SELECT
CAST(SUBSTR(columns[0],1,4) AS INT)  `YEAR`,
CAST(SUBSTR(columns[0],5,2) AS INT) `MONTH`,
columns[1] as `AIRLINE`,
columns[2] as `IATA_CODE`,
columns[3] as `AIRLINE_2`,
columns[4] as `IATA_CODE_2`,
columns[5] as `GEO_SUMMARY`,
columns[6] as `GEO_REGION`,
columns[7] as `ACTIVITY_CODE`,
columns[8] as `PRICE_CODE`,
columns[9] as `TERMINAL`,
columns[10] as `BOARDING_AREA`,
CAST(columns[11] AS DOUBLE) as `PASSENGER_COUNT`
FROM dfs.`/opendata/Passenger/SFO_Passenger_Data/*.csv`;

Try selecting data from the new Parquet file:

-- Select data from parquet table
SELECT *
FROM dfs.tmp.`/stats/airport_data/*`

You can change the dfs.tmp location by going to http://localhost:8047/storage/dfs (source: CSV and Parquet).

Answer 1 (score: 10)

Here is a code sample that does it both ways.
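
As an illustration of both directions, here is a minimal sketch using PyArrow (not necessarily the approach taken in the linked sample; the file names are hypothetical):

import pyarrow.csv as pv
import pyarrow.parquet as pq

# CSV -> Parquet
table = pv.read_csv('data.csv')
pq.write_table(table, 'data.parquet')

# Parquet -> CSV
table = pq.read_table('data.parquet')
pv.write_csv(table, 'data_roundtrip.csv')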

Answer 2 (score: 8)

[For Python]

Pandas now has direct support for this.

Simply read the csv file into a dataframe with pandas using read_csv, then write that dataframe to a parquet file using to_parquet.
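
A minimal sketch (the file names are hypothetical; to_parquet requires pyarrow or fastparquet to be installed):

import pandas as pd

df = pd.read_csv('my.csv')       # read the CSV into a DataFrame
df.to_parquet('my.parquet')      # write it out in Parquet format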

Answer 3 (score: 7)

I already posted an answer on how to do this using Apache Drill. However, if you are familiar with Python, you can now do this using Pandas and PyArrow!

Install dependencies

Using pip:

pip install pandas pyarrow

Or using conda:

conda install pandas pyarrow -c conda-forge

Convert CSV to Parquet in chunks

# csv_to_parquet.py

import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

csv_file = '/path/to/my.tsv'
parquet_file = '/path/to/my.parquet'
chunksize = 100_000

csv_stream = pd.read_csv(csv_file, sep='\t', chunksize=chunksize, low_memory=False)

for i, chunk in enumerate(csv_stream):
    print("Chunk", i)
    if i == 0:
        # Guess the schema of the CSV file from the first chunk
        parquet_schema = pa.Table.from_pandas(df=chunk).schema
        # Open a Parquet file for writing
        parquet_writer = pq.ParquetWriter(parquet_file, parquet_schema, compression='snappy')
    # Write CSV chunk to the parquet file
    table = pa.Table.from_pandas(chunk, schema=parquet_schema)
    parquet_writer.write_table(table)

parquet_writer.close()

I haven't benchmarked this code against the Apache Drill version, but in my experience it is quite fast, converting tens of thousands of rows per second (this depends on the CSV file, of course!).

Answer 4 (score: 5)

The following is an example using Spark 2.0. Reading with an explicit schema is much faster than using the inferSchema option, and Spark 2.0 converts to Parquet files much more efficiently than Spark 1.6.

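A minimal sketch of this approach, assuming the Spark 2.x SparkSession API; the paths and column names below are hypothetical:

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

spark = SparkSession.builder.appName("csv2parquet").getOrCreate()

# Supplying the schema up front avoids the extra scan that inferSchema makes over the data
schema = StructType([
    StructField("name", StringType(), True),
    StructField("age", IntegerType(), True)])

df = spark.read \
    .option("header", "true") \
    .schema(schema) \
    .csv("/path/to/input/*.csv")

df.write.parquet("/path/to/output.parquet")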

Answer 5 (score: 2)

1) You can create an external Hive table over the CSV data:

create  external table emp(name string,job_title string,department string,salary_per_year int)
row format delimited
fields terminated by ','
location '.. hdfs location of csv file '

2) And another Hive table that stores the Parquet file:

create  external table emp_par(name string,job_title string,department string,salary_per_year int)
row format delimited
stored as PARQUET
location 'hdfs location where you want to save the parquet file'

Insert data from table one into table two:

insert overwrite table emp_par select * from emp 

Answer 6 (score: 1)

Read the csv file as a DataFrame in Apache Spark using the spark-csv package. After loading the data into the DataFrame, save it to a parquet file.

val df = sqlContext.read
      .format("com.databricks.spark.csv")
      .option("header", "true")
      .option("inferSchema", "true")
      .option("mode", "DROPMALFORMED")
      .load("/home/myuser/data/log/*.csv")
df.saveAsParquetFile("/home/myuser/data.parquet")

Answer 7 (score: 0)

from pyspark import SparkContext
from pyspark.sql import SQLContext
from pyspark.sql.types import *
import sys

sc = SparkContext(appName="CSV2Parquet")
sqlContext = SQLContext(sc)

# Define the schema explicitly, since the raw text lines carry no type information
schema = StructType([
    StructField("col1", StringType(), True),
    StructField("col2", StringType(), True),
    StructField("col3", StringType(), True),
    StructField("col4", StringType(), True),
    StructField("col5", StringType(), True)])
# Split each CSV line on commas and apply the schema
rdd = sc.textFile('/input.csv').map(lambda line: line.split(","))
df = sqlContext.createDataFrame(rdd, schema)
# Write the DataFrame out in Parquet format
df.write.parquet('/output.parquet')

Answer 8 (score: 0)

You can use the csv2parquet tool from the https://github.com/fraugster/parquet-go project. It is much simpler to use than Apache Drill.
