I am running Spark 1.6 on 3 VMs (i.e. 1 master and 2 slaves), all with 4 cores and 16 GB of RAM.
I can see the workers registered on the spark-master web UI.
I want to retrieve data from my Vertica database and process it. Since I did not manage to run complex queries, I tried a dummy query to understand what was happening; I thought it would be an easy task.
My code is:
df = sqlContext.read.format('jdbc').options(url='xxxx', dbtable='xxx', user='xxxx', password='xxxx').load()
four = df.take(4)
The output is (note: I replaced the slave VM IP:port with @IPSLAVE):
16/03/08 13:50:41 INFO SparkContext: Starting job: take at <stdin>:1
16/03/08 13:50:41 INFO DAGScheduler: Got job 0 (take at <stdin>:1) with 1 output partitions
16/03/08 13:50:41 INFO DAGScheduler: Final stage: ResultStage 0 (take at <stdin>:1)
16/03/08 13:50:41 INFO DAGScheduler: Parents of final stage: List()
16/03/08 13:50:41 INFO DAGScheduler: Missing parents: List()
16/03/08 13:50:41 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at take at <stdin>:1), which has no missing parents
16/03/08 13:50:41 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 5.4 KB, free 5.4 KB)
16/03/08 13:50:41 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 2.6 KB, free 7.9 KB)
16/03/08 13:50:41 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on @IPSLAVE (size: 2.6 KB, free: 511.5 MB)
16/03/08 13:50:41 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1006
16/03/08 13:50:41 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at take at <stdin>:1)
16/03/08 13:50:41 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
16/03/08 13:50:41 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, @IPSLAVE, partition 0,PROCESS_LOCAL, 1922 bytes)
16/03/08 13:50:41 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on @IPSLAVE (size: 2.6 KB, free: 511.5 MB)
16/03/08 15:02:20 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 4299240 ms on @IPSLAVE (1/1)
16/03/08 15:02:20 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
16/03/08 15:02:20 INFO DAGScheduler: ResultStage 0 (take at <stdin>:1) finished in 4299.248 s
16/03/08 15:02:20 INFO DAGScheduler: Job 0 finished: take at <stdin>:1, took 4299.460581 s
As you can see, this takes a very long time. My table is actually quite large (it stores about 220 million rows, 11 fields each), but such a query would execute instantly with "normal" SQL (e.g. via pyodbc).
I guess I am misunderstanding or misusing Spark. Would you have any ideas or suggestions to make it work better?
Answer 0 (score: 8)
While Spark supports limited predicate pushdown over JDBC, all other operations, such as limits, groups, and aggregations, are performed internally. Unfortunately, this means that take(4) will fetch the data first and then apply the limit.
In other words, your database will execute (assuming no projections or filters) something equivalent to:

SELECT * FROM table

and the rest will be handled by Spark. There are some optimizations involved (in particular, Spark evaluates partitions iteratively to obtain the number of records requested by the limit), but it is still a rather inefficient process compared to database-side optimizations.
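You can check this from the physical plan. A minimal sketch, assuming a DataFrame df created as in the question:

# Spark 1.6: the limit is applied by Spark itself, so the plan shows a
# Spark-side limit operator above the JDBC relation scan; no LIMIT clause
# is pushed into the query sent to the database.
df.limit(4).explain()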
If you want to push the LIMIT down to the database, you have to do it statically, using a subquery as the dbtable parameter:
(sqlContext.read.format('jdbc')
.options(url='xxxx', dbtable='(SELECT * FROM xxx LIMIT 4) tmp', ....))
Please note that the alias in the subquery is mandatory.
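For completeness, a full version of this read might look as follows; this is only a sketch, and the JDBC URL, table name, and credentials are placeholders, not values from the question:

# Assumes Spark 1.6 with the sqlContext from the question; the URL,
# table name, and 'xxxx' credentials are placeholders.
df = (sqlContext.read.format('jdbc')
      .options(url='jdbc:vertica://host:5433/db',          # placeholder URL
               dbtable='(SELECT * FROM xxx LIMIT 4) tmp',  # LIMIT runs in Vertica; alias tmp is required
               user='xxxx', password='xxxx')
      .load())
four = df.take(4)  # at most 4 rows are transferred over JDBC

This way the database, rather than Spark, enforces the limit, so the full 220-million-row scan and transfer is avoided.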
Note:
This behavior may be improved in the future, once the Data Source API v2 is ready: