There is an article at http://lwn.net/Articles/378262/ that describes the Linux kernel's circular buffer implementation. I have a few questions about it:
This is the "producer":
spin_lock(&producer_lock);

unsigned long head = buffer->head;
unsigned long tail = ACCESS_ONCE(buffer->tail);

if (CIRC_SPACE(head, tail, buffer->size) >= 1) {
        /* insert one item into the buffer */
        struct item *item = &buffer[head];

        produce_item(item);

        smp_wmb(); /* commit the item before incrementing the head */

        buffer->head = (head + 1) & (buffer->size - 1);

        /* wake_up() will make sure that the head is committed before
         * waking anyone up */
        wake_up(consumer);
}

spin_unlock(&producer_lock);
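(Aside: buffer here is a struct circ_buf-style object, and the CIRC_SPACE()/CIRC_CNT() helpers used above and in the consumer below come from include/linux/circ_buf.h. Roughly, they are just mask arithmetic over a power-of-two size:)

/* Roughly as defined in include/linux/circ_buf.h. */
struct circ_buf {
        char *buf;
        int head;
        int tail;
};

/* Number of items currently in the buffer (0 .. size-1). */
#define CIRC_CNT(head, tail, size)   (((head) - (tail)) & ((size) - 1))

/* Free space left (0 .. size-1); one slot is always kept empty so that
 * head == tail unambiguously means "empty". */
#define CIRC_SPACE(head, tail, size) CIRC_CNT((tail), ((head) + 1), (size))

Because size is a power of two, (head + 1) & (size - 1) in the snippets is simply increment-and-wrap; for example, with size = 8 and head = 7, (7 + 1) & 7 == 0.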
Questions:

This is the "consumer":
spin_lock(&consumer_lock);

unsigned long head = ACCESS_ONCE(buffer->head);
unsigned long tail = buffer->tail;

if (CIRC_CNT(head, tail, buffer->size) >= 1) {
        /* read index before reading contents at that index */
        smp_read_barrier_depends();

        /* extract one item from the buffer */
        struct item *item = &buffer[tail];

        consume_item(item);

        smp_mb(); /* finish reading descriptor before incrementing tail */

        buffer->tail = (tail + 1) & (buffer->size - 1);
}

spin_unlock(&consumer_lock);
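(Another aside: the wake_up(consumer) call in the producer implies the consumer sleeps when the buffer is empty. The article does not show that side; a hypothetical sketch of such a pairing with a kernel wait queue, with the names ring, notify_consumer and wait_for_item invented for the example, might look like this:)

/* Hypothetical sketch, not from the article. */
#include <linux/wait.h>
#include <linux/circ_buf.h>
#include <linux/compiler.h>

struct item;

struct ring {
        struct item  *items;
        unsigned long head;
        unsigned long tail;
        unsigned long size;      /* power of two */
};

static struct ring *ring;
static DECLARE_WAIT_QUEUE_HEAD(consumer_wq);

/* Producer calls this where the article calls wake_up(consumer). */
static void notify_consumer(void)
{
        wake_up(&consumer_wq);
}

/* Consumer calls this before taking consumer_lock; returns 0 once there is
 * at least one item, or -ERESTARTSYS if interrupted by a signal. */
static int wait_for_item(void)
{
        return wait_event_interruptible(consumer_wq,
                        CIRC_CNT(READ_ONCE(ring->head), ring->tail,
                                 ring->size) >= 1);
}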
Questions specific to the "consumer":
Answer (score: 7)
For the producer:

spin_lock() is there to prevent two producers from trying to modify the queue at the same time.

ACCESS_ONCE does prevent reordering by the compiler, and it also stops the compiler from re-loading the value later (an article about ACCESS_ONCE on LWN expands on this further).

smp_wmb() is a write barrier; as the comment in the code says, it makes sure the item is committed before the producer publishes the new head value.
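(For reference, ACCESS_ONCE() in older kernels was essentially a cast through a volatile lvalue, defined in include/linux/compiler.h; newer kernels spell this READ_ONCE()/WRITE_ONCE(). A minimal userspace sketch of the idea and of the re-load hazard it prevents, with shared_tail and sample_tail invented for the example:)

/* Essentially the old kernel definition; "typeof" is a GCC/Clang extension.
 * The volatile access forces the compiler to perform exactly one load and
 * forbids it from re-reading or caching the value. */
#define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))

unsigned long shared_tail;   /* imagine this is updated concurrently by another CPU */

/* The hazard ACCESS_ONCE() prevents: without it the compiler may re-load
 * shared_tail at every use of "tail", so the value checked for free space
 * and the value later used as an index could differ. */
unsigned long sample_tail(void)
{
        unsigned long tail = ACCESS_ONCE(shared_tail);   /* exactly one load */
        return tail;
}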
For the consumer:

smp_read_barrier_depends() is a data dependency barrier, which is a weaker form of read barrier (see [2]). Its effect in this case is to make sure that buffer->tail is read before it is used as the array index in buffer[tail].

smp_mb() is a full memory barrier; as the comment in the code says, it makes sure the item has been completely read before the tail is incremented and the slot is handed back to the producer.
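(To make the barrier pairing concrete, here is a userspace sketch of the same single-producer/single-consumer ring using C11 atomics. This is not the kernel's code: the names ring_push, ring_pop, RING_SIZE and slot_t are invented, it uses free-running counters masked on access instead of the kernel's pre-masked indices, it drops the spinlocks, and it maps smp_wmb()/smp_mb() conservatively onto release stores and the dependent reads onto acquire loads.)

#include <stdatomic.h>
#include <stdbool.h>

#define RING_SIZE 256                     /* must be a power of two */

typedef struct { int payload; } slot_t;   /* stand-in for "struct item" */

static slot_t ring[RING_SIZE];
static _Atomic unsigned long ring_head;   /* written only by the producer */
static _Atomic unsigned long ring_tail;   /* written only by the consumer */

/* Producer: returns false if the ring is full. */
bool ring_push(slot_t item)
{
        unsigned long head = atomic_load_explicit(&ring_head, memory_order_relaxed);
        /* Acquire pairs with the consumer's release of ring_tail, so the
         * consumer has finished reading a slot before we overwrite it. */
        unsigned long tail = atomic_load_explicit(&ring_tail, memory_order_acquire);

        if (head - tail == RING_SIZE)
                return false;             /* full */

        ring[head & (RING_SIZE - 1)] = item;

        /* Release plays the role of smp_wmb() plus the head update: the
         * item's contents are visible before the new head value is. */
        atomic_store_explicit(&ring_head, head + 1, memory_order_release);
        return true;
}

/* Consumer: returns false if the ring is empty. */
bool ring_pop(slot_t *out)
{
        /* Acquire pairs with the producer's release of ring_head, covering
         * what ACCESS_ONCE + smp_read_barrier_depends() do in the kernel code. */
        unsigned long head = atomic_load_explicit(&ring_head, memory_order_acquire);
        unsigned long tail = atomic_load_explicit(&ring_tail, memory_order_relaxed);

        if (head == tail)
                return false;             /* empty */

        *out = ring[tail & (RING_SIZE - 1)];

        /* Release plays the role of smp_mb() before the tail update: the
         * slot is fully read before it is handed back to the producer. */
        atomic_store_explicit(&ring_tail, tail + 1, memory_order_release);
        return true;
}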
Other references:
(Note: I'm not entirely sure about my answer to 5 for the producer and about my consumer answers, but I believe they are a close approximation of the truth. I strongly recommend reading the documentation page on memory barriers, since it is more comprehensive than anything I could write here.)