Converting HQL to Spark SQL

Time: 2019-04-22 12:50:15

Tags: apache-spark hive hiveql

I am trying to convert an HQL query to Spark.

I have the following query (it works in Hive through the Hive editor):

select reflect('java.util.UUID', 'randomUUID') as id,
    tt.employee,
    cast( from_unixtime(unix_timestamp(date_format(current_date(),'dd/MM/yyyy HH:mm:ss'), 'dd/MM/yyyy HH:mm:ss')) as timestamp) as insert_date,
    collect_set(tt.employee_detail) as employee_details,
    collect_set(tt.emp_indication) as employees_indications,
    named_struct('employee_info', collect_set(tt.emp_info),
        'employee_mod_info', collect_set(tt.emp_mod_info),
        'employee_comments', collect_set(tt.emp_comment)) as emp_mod_details
from (
    select views_ctr.employee,
        if ( views_ctr.employee_details.so is not null, views_ctr.employee_details, null ) employee_detail,
        if ( views_ctr.employee_info.so is not null, views_ctr.employee_info, null ) emp_info,
        if ( views_ctr.employee_comments.so is not null, views_ctr.employee_comments, null ) emp_comment,
        if ( views_ctr.employee_mod_info.so is not null, views_ctr.employee_mod_info, null ) emp_mod_info,
        if ( views_ctr.emp_indications.so is not null, views_ctr.emp_indications, null ) emp_indication
    from
        ( select * from views_sta where emp_partition=0 and employee is not null ) views_ctr
) tt
group by employee
distribute by employee

First, I tried writing it with spark.sql as follows:

sparkSession.sql(
  """select reflect('java.util.UUID', 'randomUUID') as id,
    |    tt.employee,
    |    cast( from_unixtime(unix_timestamp(date_format(current_date(),'dd/MM/yyyy HH:mm:ss'), 'dd/MM/yyyy HH:mm:ss')) as timestamp) as insert_date,
    |    collect_set(tt.employee_detail) as employee_details,
    |    collect_set(tt.emp_indication) as employees_indications,
    |    named_struct('employee_info', collect_set(tt.emp_info),
    |        'employee_mod_info', collect_set(tt.emp_mod_info),
    |        'employee_comments', collect_set(tt.emp_comment)) as emp_mod_details
    |from (
    |    select views_ctr.employee,
    |        if ( views_ctr.employee_details.so is not null, views_ctr.employee_details, null ) employee_detail,
    |        if ( views_ctr.employee_info.so is not null, views_ctr.employee_info, null ) emp_info,
    |        if ( views_ctr.employee_comments.so is not null, views_ctr.employee_comments, null ) emp_comment,
    |        if ( views_ctr.employee_mod_info.so is not null, views_ctr.employee_mod_info, null ) emp_mod_info,
    |        if ( views_ctr.emp_indications.so is not null, views_ctr.emp_indications, null ) emp_indication
    |    from
    |        ( select * from views_sta where emp_partition=0 and employee is not null ) views_ctr
    |) tt
    |group by employee
    |distribute by employee""".stripMargin)
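For context, this is roughly how the session is built (a minimal sketch; the appName here is a placeholder, not my real configuration). enableHiveSupport() is what lets spark.sql see the Hive table views_sta:

import org.apache.spark.sql.SparkSession

// Sketch only: appName is a placeholder.
// enableHiveSupport() exposes the Hive metastore (and the views_sta table) to Spark SQL.
val sparkSession = SparkSession.builder()
  .appName("hql-to-spark")
  .enableHiveSupport()
  .getOrCreate()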

But I get the following exception:

Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task not serializable: java.io.NotSerializableException: org.apache.spark.unsafe.types.UTF8String$IntWrapper - object not serializable (class: org.apache.spark.unsafe.types.UTF8String$IntWrapper, value: org.apache.spark.unsafe.types.UTF8String$IntWrapper@30cfd641)

If I run the query without the collect_set calls it works, so is the query failing because of the struct column types in my table?
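To narrow this down, a small probe along the following lines (the table and column names here are made up for illustration, reusing the same sparkSession) should show whether collect_set over a struct column is enough to reproduce the error:

// Build a tiny DataFrame with a struct column and run collect_set over it.
import sparkSession.implicits._

val probe = Seq((1, "a"), (1, "b"), (2, "c"))
  .toDF("employee", "so")
  .selectExpr("employee", "named_struct('so', so) as employee_detail")

probe.createOrReplaceTempView("probe")

// If this fails with the same NotSerializableException, the struct column is the trigger.
sparkSession.sql(
  "select employee, collect_set(employee_detail) as details from probe group by employee"
).show(false)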

How can I write this HQL query in Spark / fix the exception?

0 Answers:

No answers yet.