I tried to write a function that removes duplicates from an unsorted GList *list and returns the list without duplicates:
GList *remove_dup (GList *list)
{
    GList *a, *b, *dup;

    a = list;
    /* Pick elements one by one */
    while (a != NULL && a->next != NULL) {
        b = a;
        /* Compare the picked element with rest of the elements */
        while (b->next != NULL) {
            /* If duplicate then delete it */
            if (a->data == b->next->data) {
                /* sequence of steps is important here */
                dup = b->next;
                b->next = b->next->next;
                g_list_free_1 (dup);
            } else /* This is tricky */ {
                b = b->next;
            }
        }
        a = a->next;
    }
    /* return list without duplicates */
    return list;
}
Example name.list with duplicates:
A
A
B
C
B
A
After calling the remove_dup function:
name.list = remove_dup (name.list);
name.list without duplicates:
A
B
C
But it seems remove_dup still returns name.list with the duplicates in it.
What is my mistake in this code?
Answer 0 (score: 1)
How about this (not tested, but it should work):
GList *remove_dup (GList *list)
{
    GList *a, *b, *dup;

    /* Pick elements one by one */
    for (a = list; a; a = a->next) {
        /* Compare the picked element with the rest of the elements */
        for (b = a->next; b;) {
            dup = b;
            b = b->next;          /* advance before a possible deletion */
            if (a->data == dup->data)
                list = g_list_delete_link (list, dup);
        }
    }
    /* return list without duplicates */
    return list;
}
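A minimal usage sketch (also untested here), assuming the list stores small integers packed into the data pointer with GINT_TO_POINTER, so that the pointer comparison a->data == dup->data really does identify duplicates:

#include <glib.h>

int main (void)
{
    GList *list = NULL, *l;
    int values[] = { 1, 1, 2, 3, 2, 1 };

    /* Build the list 1, 1, 2, 3, 2, 1 (values stored directly in the pointer) */
    for (gsize i = 0; i < G_N_ELEMENTS (values); i++)
        list = g_list_append (list, GINT_TO_POINTER (values[i]));

    list = remove_dup (list);

    /* Prints: 1 2 3 */
    for (l = list; l; l = l->next)
        g_print ("%d ", GPOINTER_TO_INT (l->data));
    g_print ("\n");

    g_list_free (list);
    return 0;
}

Note that if the data pointers are separately allocated strings (as "A", "B" in name.list may well be), a pointer comparison will not match equal contents; something like g_strcmp0 (a->data, dup->data) == 0 would be needed instead.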
While this O(n²) function is fine for short lists, it is inefficient for longer ones. Perhaps you want to use a different algorithm, such as:
GList *remove_dup (GList *list)
{
    GList *a, *dup;
    GHashTable *set;

    /* Track the data pointers already seen (keyed by pointer) */
    set = g_hash_table_new (NULL, NULL);

    /* Pick elements one by one */
    for (a = list; a;) {
        dup = a;
        a = a->next;              /* advance before a possible deletion */
        if (g_hash_table_contains (set, dup->data)) {
            list = g_list_delete_link (list, dup);
        } else {
            g_hash_table_add (set, dup->data);
        }
    }

    g_hash_table_unref (set);
    /* return list without duplicates */
    return list;
}
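The pointer-comparison caveat applies here as well, since g_hash_table_new (NULL, NULL) hashes and compares the data pointers directly. For a list of separately allocated strings, a variant along these lines could key the set by content with g_str_hash and g_str_equal (a sketch only; remove_dup_str is an illustrative name, and it assumes the list owns its strings):

GList *remove_dup_str (GList *list)
{
    GList *a, *dup;
    GHashTable *set;

    /* Compare keys by string content instead of by pointer */
    set = g_hash_table_new (g_str_hash, g_str_equal);

    for (a = list; a;) {
        dup = a;
        a = a->next;
        if (g_hash_table_contains (set, dup->data)) {
            g_free (dup->data);   /* assuming the list owns its strings */
            list = g_list_delete_link (list, dup);
        } else {
            g_hash_table_add (set, dup->data);
        }
    }

    g_hash_table_unref (set);
    return list;
}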