I have a database representing metadata for a security camera NVR. There's a 26-byte recording row for every 1-minute segment of video. (Design doc in progress here, if you're curious.) My design constraints are 8 cameras, 1 year (about 4 million rows, half a million per camera). I've faked up some data to do performance testing. This query is slower than I expected:
select
  recording.start_time_90k,
  recording.duration_90k,
  recording.video_samples,
  recording.sample_file_bytes,
  recording.video_sample_entry_id
from
  recording
where
  camera_id = ?
order by
  recording.start_time_90k;
This just scans all the data for one camera, using an index to filter out other cameras and to order the rows. The index looks like this:
create index recording_camera_start on recording (camera_id, start_time_90k);
explain query plan looks as expected:
0|0|0|SEARCH TABLE recording USING INDEX recording_camera_start (camera_id=?)
The rows are quite small.
$ sqlite3_analyzer duplicated.db
...
*** Table RECORDING w/o any indices *******************************************
Percentage of total database...................... 66.3%
Number of entries................................. 4225560
Bytes of storage consumed......................... 143418368
Bytes of payload.................................. 109333605 76.2%
B-tree depth...................................... 4
Average payload per entry......................... 25.87
Average unused bytes per entry.................... 0.99
Average fanout.................................... 94.00
Non-sequential pages.............................. 1 0.0%
Maximum payload per entry......................... 26
Entries that use overflow......................... 0 0.0%
Index pages used.................................. 1488
Primary pages used................................ 138569
Overflow pages used............................... 0
Total pages used.................................. 140057
Unused bytes on index pages....................... 188317 12.4%
Unused bytes on primary pages..................... 3987216 2.8%
Unused bytes on overflow pages.................... 0
Unused bytes on all pages......................... 4175533 2.9%
*** Index RECORDING_CAMERA_START of table RECORDING ***************************
Percentage of total database...................... 33.7%
Number of entries................................. 4155718
Bytes of storage consumed......................... 73003008
Bytes of payload.................................. 58596767 80.3%
B-tree depth...................................... 4
Average payload per entry......................... 14.10
Average unused bytes per entry.................... 0.21
Average fanout.................................... 49.00
Non-sequential pages.............................. 1 0.001%
Maximum payload per entry......................... 14
Entries that use overflow......................... 0 0.0%
Index pages used.................................. 1449
Primary pages used................................ 69843
Overflow pages used............................... 0
Total pages used.................................. 71292
Unused bytes on index pages....................... 8463 0.57%
Unused bytes on primary pages..................... 865598 1.2%
Unused bytes on overflow pages.................... 0
Unused bytes on all pages......................... 874061 1.2%
...
I'd like something like this (maybe just a month at a time, rather than a full year) to run every time a particular webpage is hit, so I want it to be quite fast. But on my laptop it takes most of a second, and on the Raspberry Pi 2 I'd like to support, it's way too slow. Times below (in seconds); it's CPU-bound (user + system time roughly equals real time):
laptop$ time ./bench-profiled
trial 0: time 0.633 sec
trial 1: time 0.636 sec
trial 2: time 0.639 sec
trial 3: time 0.679 sec
trial 4: time 0.649 sec
trial 5: time 0.642 sec
trial 6: time 0.609 sec
trial 7: time 0.640 sec
trial 8: time 0.666 sec
trial 9: time 0.715 sec
...
PROFILE: interrupts/evictions/bytes = 1974/489/72648
real 0m20.546s
user 0m16.564s
sys 0m3.976s
(This is Ubuntu 15.10, SQLITE_VERSION says "3.8.11.1")
raspberrypi2$ time ./bench-profiled
trial 0: time 6.334 sec
trial 1: time 6.216 sec
trial 2: time 6.364 sec
trial 3: time 6.412 sec
trial 4: time 6.398 sec
trial 5: time 6.389 sec
trial 6: time 6.395 sec
trial 7: time 6.424 sec
trial 8: time 6.391 sec
trial 9: time 6.396 sec
...
PROFILE: interrupts/evictions/bytes = 19066/2585/43124
real 3m20.083s
user 2m47.120s
sys 0m30.620s
(This is Raspbian Jessie; SQLITE_VERSION says "3.8.7.1")
I may end up storing some denormalized data, but first I'd like to know if I can get this simple query to perform as well as possible. My benchmark is pretty simple; it prepares the statement in advance and then loops over it:
void Trial(sqlite3_stmt *stmt) {
  int ret;
  while ((ret = sqlite3_step(stmt)) == SQLITE_ROW) ;
  if (ret != SQLITE_DONE) {
    errx(1, "sqlite3_step: %d (%s)", ret, sqlite3_errstr(ret));
  }
  ret = sqlite3_reset(stmt);
  if (ret != SQLITE_OK) {
    errx(1, "sqlite3_reset: %d (%s)", ret, sqlite3_errstr(ret));
  }
}
I made a CPU profile with gperftools:
$ google-pprof bench-profiled timing.pprof
Using local file bench-profiled.
Using local file timing.pprof.
Welcome to pprof! For help, type 'help'.
(pprof) top 10
Total: 593 samples
154 26.0% 26.0% 377 63.6% sqlite3_randomness
134 22.6% 48.6% 557 93.9% sqlite3_reset
83 14.0% 62.6% 83 14.0% __read_nocancel
61 10.3% 72.8% 61 10.3% sqlite3_strnicmp
41 6.9% 79.8% 46 7.8% sqlite3_free_table
26 4.4% 84.1% 26 4.4% sqlite3_uri_parameter
25 4.2% 88.4% 25 4.2% llseek
13 2.2% 90.6% 121 20.4% sqlite3_db_config
12 2.0% 92.6% 12 2.0% __pthread_mutex_unlock_usercnt (inline)
10 1.7% 94.3% 10 1.7% __GI___pthread_mutex_lock
This looks weird, weird enough to give me hope it can be improved. Maybe I'm doing something dumb. I'm particularly skeptical of the sqlite3_randomness and sqlite3_strnicmp operations:

sqlite3_randomness is used for inserting rowids in some circumstances, but I'm just doing a select query. Why would it be using it now? From skimming the SQLite source code, I see it's used in select in sqlite3ColumnsFromExprList, but that appears to be something that happens when preparing the statement. I'm doing that once, not in the part being benchmarked.

strnicmp is for case-insensitive string comparisons. But every field in this table is an integer. Why would it be using this function? What is it comparing?

And I wouldn't expect sqlite3_reset to be expensive, or for it to be called from sqlite3_step. Schema:
-- Each row represents a single recorded segment of video.
-- Segments are typically ~60 seconds; never more than 5 minutes.
-- Each row should have a matching recording_detail row.
create table recording (
  id integer primary key,
  camera_id integer references camera (id) not null,
  sample_file_bytes integer not null check (sample_file_bytes > 0),
  -- The starting time of the recording, in 90 kHz units since
  -- 1970-01-01 00:00:00 UTC.
  start_time_90k integer not null check (start_time_90k >= 0),
  -- The duration of the recording, in 90 kHz units.
  duration_90k integer not null
      check (duration_90k >= 0 and duration_90k < 5*60*90000),
  video_samples integer not null check (video_samples > 0),
  video_sync_samples integer not null check (video_sync_samples > 0),
  video_sample_entry_id integer references video_sample_entry (id)
);
I've dumped my test data and test program; you can download it here.
Edit 1:
Ah, looking at the SQLite code, I see a clue:
int sqlite3_step(sqlite3_stmt *pStmt){
  int rc = SQLITE_OK;      /* Result from sqlite3Step() */
  int rc2 = SQLITE_OK;     /* Result from sqlite3Reprepare() */
  Vdbe *v = (Vdbe*)pStmt;  /* the prepared statement */
  int cnt = 0;             /* Counter to prevent infinite loop of reprepares */
  sqlite3 *db;             /* The database connection */

  if( vdbeSafetyNotNull(v) ){
    return SQLITE_MISUSE_BKPT;
  }
  db = v->db;
  sqlite3_mutex_enter(db->mutex);
  v->doingRerun = 0;
  while( (rc = sqlite3Step(v))==SQLITE_SCHEMA
         && cnt++ < SQLITE_MAX_SCHEMA_RETRY ){
    int savedPc = v->pc;
    rc2 = rc = sqlite3Reprepare(v);
    if( rc!=SQLITE_OK) break;
    sqlite3_reset(pStmt);
    if( savedPc>=0 ) v->doingRerun = 1;
    assert( v->expired==0 );
  }
It looks like sqlite3_step calls sqlite3_reset on a schema change. (FAQ entry.) I don't know why there would be a schema change since my statement was prepared...
编辑2:
I downloaded the SQLite 3.10.1 "amalgamation" and compiled it with debugging symbols. I get a pretty different profile now that doesn't look as weird, but it isn't any faster. Maybe the weird results I was seeing before were due to Identical Code Folding or something.
编辑3:
Trying Ben's clustered-index solution below, it's about 3.6X faster. I think that's the best I'm going to do with this query: SQLite's CPU throughput is about 700 MB/s on my laptop, and short of rewriting it to use a JIT compiler for its virtual machine or some such, I won't do any better. In particular, I think the bizarre calls I saw in my first profile weren't actually happening; gcc must have written misleading debug info due to optimizations or something.

Even if the CPU performance were improved, that throughput is more than my storage can do on a cold read now, and I think the same is true on the Pi (which has a limited USB 2.0 bus for the SD card).
$ time ./bench
sqlite3 version: 3.10.1
trial 0: realtime 0.172 sec cputime 0.172 sec
trial 1: realtime 0.172 sec cputime 0.172 sec
trial 2: realtime 0.175 sec cputime 0.175 sec
trial 3: realtime 0.173 sec cputime 0.173 sec
trial 4: realtime 0.182 sec cputime 0.182 sec
trial 5: realtime 0.187 sec cputime 0.187 sec
trial 6: realtime 0.173 sec cputime 0.173 sec
trial 7: realtime 0.185 sec cputime 0.185 sec
trial 8: realtime 0.190 sec cputime 0.190 sec
trial 9: realtime 0.192 sec cputime 0.192 sec
trial 10: realtime 0.191 sec cputime 0.191 sec
trial 11: realtime 0.188 sec cputime 0.188 sec
trial 12: realtime 0.186 sec cputime 0.186 sec
trial 13: realtime 0.179 sec cputime 0.179 sec
trial 14: realtime 0.179 sec cputime 0.179 sec
trial 15: realtime 0.188 sec cputime 0.188 sec
trial 16: realtime 0.178 sec cputime 0.178 sec
trial 17: realtime 0.175 sec cputime 0.175 sec
trial 18: realtime 0.182 sec cputime 0.182 sec
trial 19: realtime 0.178 sec cputime 0.178 sec
trial 20: realtime 0.189 sec cputime 0.189 sec
trial 21: realtime 0.191 sec cputime 0.191 sec
trial 22: realtime 0.179 sec cputime 0.179 sec
trial 23: realtime 0.185 sec cputime 0.185 sec
trial 24: realtime 0.190 sec cputime 0.190 sec
trial 25: realtime 0.189 sec cputime 0.189 sec
trial 26: realtime 0.182 sec cputime 0.182 sec
trial 27: realtime 0.176 sec cputime 0.176 sec
trial 28: realtime 0.173 sec cputime 0.173 sec
trial 29: realtime 0.181 sec cputime 0.181 sec
PROFILE: interrupts/evictions/bytes = 547/178/24592
real 0m5.651s
user 0m5.292s
sys 0m0.356s
I may end up keeping some denormalized data around. Fortunately, I think I can keep it in my application's RAM, given that it won't be too large, startup doesn't have to be amazingly fast, and only the one process ever writes to the database.
Answer 0 (score: 2):
You need a clustered index, or, if you are using a version of SQLite that doesn't support one, a covering index.

SQLite 3.8.2 and above

In SQLite 3.8.2 and above, use this:
create table recording (
  camera_id integer references camera (id) not null,
  sample_file_bytes integer not null check (sample_file_bytes > 0),
  -- The starting time of the recording, in 90 kHz units since
  -- 1970-01-01 00:00:00 UTC.
  start_time_90k integer not null check (start_time_90k >= 0),
  -- The duration of the recording, in 90 kHz units.
  duration_90k integer not null
      check (duration_90k >= 0 and duration_90k < 5*60*90000),
  video_samples integer not null check (video_samples > 0),
  video_sync_samples integer not null check (video_sync_samples > 0),
  video_sample_entry_id integer references video_sample_entry (id),
  --- here is the magic
  primary key (camera_id, start_time_90k)
) WITHOUT ROWID;
Earlier versions

In earlier versions of SQLite, you can use this approach to create a covering index instead. This should allow SQLite to pull the data values from the index, avoiding fetching a separate page for each row:
create index recording_camera_start on recording (
  camera_id, start_time_90k,
  sample_file_bytes, duration_90k, video_samples, video_sync_samples,
  video_sample_entry_id
);
Discussion
The cost is likely to be IO (regardless that you said it wasn't), because recall that IO requires CPU, as data must be copied to and from the bus.

Without a clustered index, rows are inserted with a rowid and may not be in any sensible order. That means that for each 26-byte row you request, the system may have to fetch a 4KB page from the SD card, which is a lot of overhead.

With a limit of 8 cameras, a simple clustered index on id, to ensure rows appear on disk in insertion order, would probably give you about a 10x speed increase, by ensuring each fetched page contains the next 10-20 rows that are going to be required.

A clustered index on both camera and time should ensure that each page fetched contains 100 rows or more.