I am trying to write a glusterfs plugin for collectd in C, using the glusterfs libraries to make the actual RPC calls. I managed to get a standalone C program working, essentially by stitching together parts of the gluster CLI.
However, when I ported that code into the collectd plugin, I get some strange behavior when calling functions that use symbols from the gluster libraries. Outside the function call, memory location X refers to a struct with non-NULL members. Inside the function call, the same memory location X refers to a struct with NULL members. This causes the function to fail and return NULL instead of a pointer to the struct, which of course makes the RPC fail.
Below is some GDB output from stepping through the program. You can see that the members of the struct at 0x7fffdc022740 change upon entering the function. Note that I have removed some of the members from the prints to keep this readable:
(gdb) next
718 frame = create_frame (THIS, state.ctx->pool);
(gdb) print state.ctx.pool
$22 = (struct call_pool *) 0x7fffdc022740
(gdb) print *state.ctx.pool
$23 = {..., frame_mem_pool = 0x7fffdc0227e0, stack_mem_pool = 0x7fffdc0586d0}
(gdb) print *{struct call_pool *}0x7fffdc022740
$24 = {..., frame_mem_pool = 0x7fffdc0227e0, stack_mem_pool = 0x7fffdc0586d0}
(gdb) step
create_frame (xl=0x7ffff4fc96e0 <global_xlator>, pool=pool@entry=0x7fffdc022740) at stack.c:17
17 {
(gdb) print pool
$26 = (call_pool_t *) 0x7fffdc022740
(gdb) print *pool
$27 = {..., frame_mem_pool = 0x0, stack_mem_pool = 0x0}
(gdb) print *{struct call_pool*}0x7fffdc022740
$28 = {..., frame_mem_pool = 0x0, stack_mem_pool = 0x0}
(gdb) finish
Run till exit from #0 create_frame (xl=0x7ffff4fc96e0 <global_xlator>, pool=pool@entry=0x7fffdc022740) at stack.c:17
0x00007ffff5400492 in profiler_callback (d=<optimized out>) at src/glusterfs.c:718
718 frame = create_frame (THIS, state.ctx->pool);
Value returned is $29 = (call_frame_t *) 0x0
(gdb) print state.ctx.pool
$30 = (struct call_pool *) 0x7fffdc022740
(gdb) print *{struct call_pool*}0x7fffdc022740
$31 = {..., frame_mem_pool = 0x7fffdc0227e0, stack_mem_pool = 0x7fffdc0586d0}
As you can see, inside the function the same memory location appears to refer to a struct with different values, since its members are NULL inside the function but not outside of it.
It is perhaps worth noting that create_frame comes from a gluster library, which is separate from my collectd plugin. The behavior I describe also happens with another function from the gluster library.
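For reference, since GDB resolves types from the debug info of the current frame's compilation unit, one way to check whether the plugin and the gluster library agree on the layout of call_pool_t is to compare the computed member offsets in both frames. These commands are a sketch using the address and member names from the session above, not output from my actual session:

(gdb) # run once in the caller's frame (glusterfs.c), then again inside create_frame (stack.c)
(gdb) ptype struct call_pool
(gdb) print &((struct call_pool *) 0x7fffdc022740)->frame_mem_pool
(gdb) print &((struct call_pool *) 0x7fffdc022740)->stack_mem_pool

If the member addresses printed in the two frames differ, the two objects were compiled against different struct definitions.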
编辑:
可以看到完整的来源here
这里有一段简化的片段:
// globals
struct cli_state state = {0, };
struct rpc_clnt *global_rpc;
rpc_clnt_prog_t *cli_rpc_prog;

int
glusterfs_ctx_defaults_init (glusterfs_ctx_t *ctx)
{
        call_pool_t *pool = NULL;

        pool = GF_CALLOC (1, sizeof (call_pool_t),
                          mt_call_pool_t);
        if (!pool)
                return 1;

        /* stack_mem_pool size 256 * 128 */
        pool->stack_mem_pool = mem_pool_new (call_stack_t, 16);
        if (!pool->stack_mem_pool)
                return 1;

        ctx->pool = pool;
        return 0;
}
void *
profiler_callback (void *d)
{
        int ret = 0;
        call_frame_t *frame = NULL;

        // function from the gluster library; it returns
        // pthread_getspecific (this_xlator_key),
        // so essentially a thread-specific variable
        xlator_t *xlator = (*__glusterfs_this_location ());

        // here, state.ctx->pool->stack_mem_pool is not NULL
        frame = create_frame (xlator, xlator->ctx->pool);
        // it is also not NULL here; however, frame is NULL,
        // since state.ctx->pool->stack_mem_pool *is* NULL
        // inside create_frame
        if (!frame)
                goto out;

out:
        INFO ("Exiting with: %d", ret);
        kill_self ();
        return NULL;
}
int
read_stats (void)
{
        int ret = -1;
        glusterfs_ctx_t *ctx = NULL;
        xlator_t *xlator = NULL;

        ctx = glusterfs_ctx_new ();
        if (!ctx)
                return ENOMEM;

        ret = glusterfs_globals_init (ctx);
        if (ret)
                return ret;

        // function from the gluster library; it returns
        // pthread_getspecific (this_xlator_key),
        // so essentially a thread-specific variable
        xlator = (*__glusterfs_this_location ());

        // sets ctx->pool->stack_mem_pool
        ret = glusterfs_ctx_defaults_init (ctx);
        if (ret)
                return ret;

        state.ctx = ctx;

        pthread_mutex_init (&cond_mutex, NULL);
        pthread_cond_init (&cond, NULL);
        pthread_mutex_init (&conn_mutex, NULL);
        pthread_cond_init (&conn, NULL);

        ret = pthread_create (&(state.input), NULL, profiler_callback, &state);
        if (ret)
                ERROR ("problem! %d", ret);

        ret = event_dispatch (state.ctx->event_pool);

        return ret;
}
void
module_register (void)
{
        plugin_register_read ("glusterfs", read_stats);
}
module_register registers the read_stats function with collectd, which then calls it on every interval.
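To illustrate why the thread-specific lookup matters here: profiler_callback runs on a thread created by pthread_create, so whatever __glusterfs_this_location() returns there comes from that thread's own thread-specific data, not from the thread that ran read_stats. Below is a minimal, self-contained sketch of that pthreads behavior; it uses plain pthread keys and is not gluster code (gluster may additionally initialize the value lazily):

#include <pthread.h>
#include <stdio.h>

static pthread_key_t key;

static void *
worker (void *unused)
{
        (void) unused;
        /* this thread never called pthread_setspecific, so it sees the
         * key's initial value for a new thread: NULL */
        printf ("worker sees: %p\n", pthread_getspecific (key));
        return NULL;
}

int
main (void)
{
        pthread_t t;
        int value = 42;

        pthread_key_create (&key, NULL);
        pthread_setspecific (key, &value);
        printf ("main sees:   %p\n", pthread_getspecific (key));

        pthread_create (&t, NULL, worker, NULL);
        pthread_join (t, NULL);
        return 0;
}

Compiled with gcc -pthread, the worker prints (nil) on glibc while main prints the address of value, even though both threads query the same key.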