RserveException: eval failed when running R on Databricks

Asked: 2016-06-26 10:33:46

Tags: r apache-spark

I have no experience with R; I'm trying to use it with Spark in a Databricks notebook to analyze some data.

I have been following the tutorial at http://people.apache.org/~pwendell/spark-releases/latest/sparkr.html

So far I have this code:

sparkR.stop()
sc <- sparkR.init()
sqlContext <- sparkRSQL.init(sc)

df <- createDataFrame(sqlContext, '/FileStore/tables/boanf7gu1466936449434/german.data')

On the last line I get this error:

RserveException: eval failed, request status: error code: 127
org.rosuda.REngine.Rserve.RserveException: eval failed, request status: error code: 127
    at org.rosuda.REngine.Rserve.RConnection.eval(RConnection.java:234)
    at com.databricks.backend.daemon.driver.RShell.setJobGroup(RShell.scala:202)
    at com.databricks.backend.daemon.driver.RDriverLocal.setJobGroup(RDriverLocal.scala:150)
    at com.databricks.backend.daemon.driver.DriverLocal.execute(DriverLocal.scala:125)
    at com.databricks.backend.daemon.driver.DriverWrapper$$anonfun$3.apply(DriverWrapper.scala:483)
    at com.databricks.backend.daemon.driver.DriverWrapper$$anonfun$3.apply(DriverWrapper.scala:483)
    at scala.util.Try$.apply(Try.scala:161)
    at com.databricks.backend.daemon.driver.DriverWrapper.executeCommand(DriverWrapper.scala:480)
    at com.databricks.backend.daemon.driver.DriverWrapper.runInner(DriverWrapper.scala:381)
    at com.databricks.backend.daemon.driver.DriverWrapper.run(DriverWrapper.scala:212)
    at java.lang.Thread.run(Thread.java:745)

What is causing this?

1 Answer:

Answer 0 (score: 0)

In Databricks, a Spark instance is already running, so you don't want to stop it. Calling sparkR.stop() tears down the backend the notebook driver is still attached to, which is why the very next command fails with the Rserve eval error.
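As a minimal sketch (assuming the Databricks R notebook environment of that era, where sc and sqlContext are pre-created for you), the existing context can be used directly:

# sc and sqlContext already exist in a Databricks R notebook, so no
# init/stop calls are needed. Note that createDataFrame expects a local
# R data.frame, not a file path:
localDf <- data.frame(id = 1:3, label = c("a", "b", "c"))
df <- createDataFrame(sqlContext, localDf)
head(df)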

Since an instance already exists, drop these lines from your code (annotated below) and create the df directly:

sparkR.stop()                     # remove: this stops the Spark instance Databricks already started
sc <- sparkR.init()               # remove: a SparkContext (sc) already exists in the notebook
sqlContext <- sparkRSQL.init(sc)  # remove: a sqlContext already exists as well
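One hedged side note: in SparkR 1.x, createDataFrame converts a local R data.frame, while files on DBFS are typically loaded with read.df. A rough sketch, assuming the spark-csv package is available on the cluster and the file is space-delimited; the source name and delimiter here are assumptions, not confirmed from the question:

# read.df loads a DataFrame from a path; the source/options below are
# assumptions for a space-delimited text file such as german.data.
df <- read.df(sqlContext, "/FileStore/tables/boanf7gu1466936449434/german.data",
              source = "com.databricks.spark.csv", delimiter = " ")
head(df)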