How to parse data and put it into a Spark SQL table

Date: 2017-09-14 14:40:14

Tags: scala apache-spark apache-spark-sql

I have a log file that I want to analyze with Spark SQL. Each line of the log file looks like this:

71.19.157.174 - - [24/Sep/2014:22:26:12 +0000] "GET /error HTTP/1.1" 404 505 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.94 Safari/537.36"

I have a regular expression pattern that can be used to parse the data:

Pattern.compile("""^(\S+) (\S+) (\S+) \[([\w:/]+\s[+\-]\d{4})\] \"(\S+) (\S+) (\S+)\" (\d{3}) (\d+)""")

I have also created a case class:

case class LogSchema(ip: String, client: String, userid: String, date: String, method: String, endpoint: String, protocol: String, response: String, contentsize: String)
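
As a quick sanity check, the pattern's nine capture groups line up, in order, with the nine fields of this case class. The following is a minimal sketch (assuming java.util.regex.Pattern and using the sample line above, truncated to the portion the pattern actually captures), not part of the original post:

import java.util.regex.Pattern

val pattern = Pattern.compile("""^(\S+) (\S+) (\S+) \[([\w:/]+\s[+\-]\d{4})\] \"(\S+) (\S+) (\S+)\" (\d{3}) (\d+)""")
// Sample log line from above, truncated to the fields the pattern captures.
val line = """71.19.157.174 - - [24/Sep/2014:22:26:12 +0000] "GET /error HTTP/1.1" 404 505"""

val m = pattern.matcher(line)
if (m.find()) {
  // Groups 1-9 map to ip, client, userid, date, method, endpoint, protocol, response, contentsize.
  println(LogSchema(m.group(1), m.group(2), m.group(3), m.group(4), m.group(5),
    m.group(6), m.group(7), m.group(8), m.group(9)))
}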

However, I cannot turn this into a table that I can run Spark SQL queries against.

How can I parse the data with the regex pattern and load it into a table?

1 answer:

Answer 0 (score: 4)

Assuming your log file is located at /home/user/logs/log.txt, you can use the following logic to get a table/DataFrame out of it:

import java.util.regex.Pattern
import spark.implicits._  // needed for toDF(); assumes a Spark 2.x SparkSession named `spark`

val rdd = sc.textFile("/home/user/logs/log.txt")
val pattern = Pattern.compile("""^(\S+) (\S+) (\S+) \[([\w:/]+\s[+\-]\d{4})\] \"(\S+) (\S+) (\S+)\" (\d{3}) (\d+)""")

// Run the regex against each line, skip lines that do not match,
// and map the nine capture groups onto the case class.
val df = rdd.flatMap { line =>
  val m = pattern.matcher(line)
  if (m.find()) Some(LogSchema(m.group(1), m.group(2), m.group(3), m.group(4), m.group(5),
    m.group(6), m.group(7), m.group(8), m.group(9)))
  else None
}.toDF()
df.show(false)

You should get the following DataFrame:

+-------------+------+------+--------------------------+------+--------+--------+--------+-----------+
|ip           |client|userid|date                      |method|endpoint|protocol|response|contentsize|
+-------------+------+------+--------------------------+------+--------+--------+--------+-----------+
|71.19.157.174|-     |-     |24/Sep/2014:22:26:12 +0000|GET   |/error  |HTTP/1.1|404     |505        |
+-------------+------+------+--------------------------+------+--------+--------+--------+-----------+

I used the case class you provided:

case class LogSchema(ip: String, client: String, userid: String, date: String, method: String, endpoint: String, protocol: String, response: String, contentsize: String)
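
Since the question also mentions running Spark SQL queries against the result, one way to do that is to register the DataFrame as a temporary view. This is a minimal sketch assuming a Spark 2.x SparkSession named spark (on 1.x, registerTempTable plays the same role); the view name "logs" is arbitrary:

// Register the DataFrame as a temporary view so it can be queried with SQL.
df.createOrReplaceTempView("logs")

// Any Spark SQL query can now run against the "logs" view.
spark.sql("SELECT endpoint, response, count(*) AS hits FROM logs GROUP BY endpoint, response").show(false)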