Why is the Perforce populate command so slow?

Date: 2018-07-19 14:35:29

Tags: perforce

Situation:

  • Network bandwidth to the Perforce server is limited (I am working remotely).

  • The depot I want to branch contains many binary files and is several GB in size.

  • The number of files is large (roughly 500,000 files).

When I issue a p4 populate command to populate a new dev branch, the operation takes 1-2 hours to complete. Why?

As far as I know, populate does not transfer the branched files to my client, nor does it pull the whole depot over the network, so I don't think this is a network problem. Creating a million entries in a database shouldn't take that long either. Is it copying the binary files into the new branch instead of just referencing them?
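
One way I can think of to check whether a branched revision shares the source archive rather than duplicating it (just a sketch; the path below is a placeholder, and as far as I understand p4 fstat -Oc needs admin access):

# Inspect where the server archive for a branched file lives; if the revision
# was lazy-copied, the archive fields should point back at the original
# //depot/main location rather than a fresh copy.
p4 fstat -Oc //depot/dev/some/large_file.bin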

EDIT: Here is the log from running the branch again with -Ztrack:

57055 files branched (change 2779417).
--- lapse 6408s
--- usage 3573+2619us 0+0io 0+0net 65672k 0pf 
--- rpc msgs/size in+out 2+57059/0mb+13mb himarks 785732/280100 snd/rcv 3.38s/.047s
--- db.counters
---   pages in+out+cached 11+16+2
---   locks read/write 0/4 rows get+pos+scan put+del 3+0+0 4+0
--- db.logger
---   pages in+out+cached 6+4+4
---   locks read/write 0/1 rows get+pos+scan put+del 0+0+0 2+0
--- db.user
---   pages in+out+cached 4+0+3
---   locks read/write 1/0 rows get+pos+scan put+del 1+0+0 0+0
--- db.group
---   pages in+out+cached 5+0+4
---   locks read/write 1/0 rows get+pos+scan put+del 0+3+5 0+0
--- db.domain
---   pages in+out+cached 7+0+5
---   locks read/write 2/0 rows get+pos+scan put+del 2+0+0 0+0
--- db.view
---   pages in+out+cached 7+0+7
---   locks read/write 4/0 rows get+pos+scan put+del 0+4+57 0+0
--- db.integed
---   pages in+out+cached 42486+51286+192
---   pages reordered internal+leaf 180+5972
---   pages split internal+leaf 182+4377
---   locks read/write 0/1 rows get+pos+scan put+del 0+0+0 114110+0
---   total lock wait+held read/write 0ms+0ms/0ms+5175ms
--- db.archmap
---   pages in+out+cached 28+47+26
---   pages split internal+leaf 1+12
---   locks read/write 0/1 rows get+pos+scan put+del 0+0+0 502+0
---   total lock wait+held read/write 0ms+0ms/0ms+1174ms
--- db.revdx
---   pages in+out+cached 3+0+1
---   locks read/write 0/1 rows get+pos+scan put+del 0+0+0 0+0
---   total lock wait+held read/write 0ms+0ms/0ms+1174ms
--- db.revhx
---   pages in+out+cached 2888+8526+96
---   pages split internal+leaf 85+2783
---   locks read/write 0/1 rows get+pos+scan put+del 0+0+0 57055+0
---   total lock wait+held read/write 0ms+0ms/0ms+1174ms
--- db.revpx
---   pages in+out+cached 233571+357872+96
---   pages split internal+leaf 79+2395
---   locks read/write 0/114110 rows get+pos+scan put+del 0+0+0 57055+57055
---   total lock wait+held read/write 0ms+0ms/5846ms+2128ms
---   max lock wait+held read/write 0ms+0ms/99ms+9ms
--- db.revcx
---   pages in+out+cached 1414+4160+96
---   pages split internal+leaf 43+1356
---   locks read/write 0/1 rows get+pos+scan put+del 0+0+0 57055+0
---   total lock wait+held read/write 0ms+0ms/0ms+1174ms
--- db.rev
---   pages in+out+cached 10977+7868+96
---   pages split internal+leaf 76+2570
---   locks read/write 2/1 rows get+pos+scan put+del 57055+7+184634 57055+0
---   total lock wait+held read/write 0ms+571ms/0ms+1174ms
---   max lock wait+held read/write 0ms+571ms/0ms+1174ms
---   peek count 1 wait+held total/max 0ms+0ms/0ms+0ms
--- db.trigger
---   pages in+out+cached 4+0+2
---   locks read/write 2/0 rows get+pos+scan put+del 0+2+35 0+0
--- db.change
---   pages in+out+cached 13+12+4
---   locks read/write 0/3 rows get+pos+scan put+del 3+0+0 2+1
---   total lock wait+held read/write 0ms+0ms/0ms+1175ms
---   max lock wait+held read/write 0ms+0ms/0ms+1175ms
--- db.changex
---   pages in+out+cached 11+8+3
---   locks read/write 0/3 rows get+pos+scan put+del 0+0+0 1+1
---   total lock wait+held read/write 0ms+0ms/0ms+1175ms
---   max lock wait+held read/write 0ms+0ms/0ms+1175ms
--- db.changeidx
---   pages in+out+cached 5+0+1
---   locks read/write 0/2 rows get+pos+scan put+del 0+0+0 0+0
---   total lock wait+held read/write 0ms+0ms/1ms+1174ms
---   max lock wait+held read/write 0ms+0ms/1ms+1174ms
--- db.desc
---   pages in+out+cached 13+12+4
---   locks read/write 0/3 rows get+pos+scan put+del 0+0+0 2+1
---   total lock wait+held read/write 0ms+0ms/0ms+1174ms
---   max lock wait+held read/write 0ms+0ms/0ms+1174ms
--- db.protect
---   pages in+out+cached 9+0+8
---   locks read/write 1/0 rows get+pos+scan put+del 0+1+410 0+0
--- db.monitor
---   pages in+out+cached 2+6+256
---   locks read/write 4/2 rows get+pos+scan put+del 4+0+0 1+1

1 answer:

Answer 0 (score: 0):

If you run the command with the global -Ztrack flag, e.g.:

p4 -Ztrack populate (args...)

you will get a big pile of performance debugging information, including a breakdown of where the server spent its time. (If you don't want to run the command again, this information may already be stored in the server log; you just need to dig it out of that log.)
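
For example, something along these lines (a sketch only; the branch paths and the server log location are placeholders):

# Re-run the branch with server-side performance tracking enabled:
p4 -Ztrack populate -d "Populate dev branch" //depot/main/... //depot/dev/...

# If tracking is enabled on the server, the same output should already be in
# the server log from the earlier run; dig it out with something like:
grep -A 40 "populate" /p4/logs/p4d.log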

In the general case, p4 populate is a metadata-only command (it copies rows in the revision tables and adds integration records to connect them), and all of the actual file content can simply be "lazy-copied" as additional references to the same underlying archive files.
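
A preview run shows what populate would branch without actually committing anything (sketch with placeholder paths):

p4 populate -n //depot/main/... //depot/dev/...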

The exception is revisions that cannot be lazy-copied. The two cases that come to mind are remote depots and +S files (possibly some of those large binaries you mention).
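
A quick way to see whether any of the files being branched carry the +S modifier (sketch; the path is a placeholder):

# p4 files prints each file's type in parentheses, e.g. "(binary+S)":
p4 files //depot/main/... | grep "+S"

# or limit p4 fstat to just the path and type fields:
p4 fstat -T "depotFile,headType" //depot/main/... | grep -B1 "+S"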

In any case, if you can get the performance tracking information, you can see whether the server is spending all of its time creating database entries (in which case I would look at the disk the database lives on and see if there is any way to speed it up: is it mounted over the network, is it shared with other processes, etc.), or whether it is copying archive files (in which case you need to figure out why it has to do that and whether there is any way to avoid it; for example, I always recommend against using the +S filetype modifier on files you intend to branch).
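
If +S does turn out to be involved, one possible follow-up (assuming you are allowed to change the filetype; the file path below is a placeholder) is to drop the modifier on the affected files so future branches can be pure lazy copies, and/or adjust the server typemap so new binaries don't get +S by default:

# Re-type an existing +S file (open it for edit with the new type, then submit):
p4 edit -t binary //depot/main/big_asset.bin
p4 submit -d "Drop +S so branch operations can lazy-copy this file"

# Review/edit the server typemap so these paths stop getting +S automatically:
p4 typemap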