RecordLinkage: How to keep only the best match and export a merged table?

Date: 2016-11-02 13:13:48

Tags: r duplicates linkage fuzzy-comparison data-linking

I am trying to use the R package RecordLinkage to match items from a list of purchase orders against entries in a master catalogue. Below is the R code and a reproducible example using two dummy datasets, DOrders and DCatalogue:

DOrders <- structure(list(Product = structure(c(1L, 2L, 7L, 3L, 4L, 5L, 
6L), .Label = c("31471 - SOFTSILK 2.0 SCREW 7mm x 20mm", "Copier paper white A4 80gsm", 
"High resilience memory foam standard  mattress", "Liston forceps bone cutting 152mm", 
"Micro reciprocating blade 25.4mm x 8.0mm x 0.38mm", "Micro reciprocating blade 39.5 x 7.0 x 0.38", 
"microaire dual tooth 18 x 90 x 0.89"), class = "factor"), Supplier = structure(c(5L, 
6L, 2L, 1L, 4L, 3L, 3L), .Label = c("KAROMED LTD", "Morgan Steer Ortho Limited", 
"ORTHOPAEDIC SOLUTIONS", "SURGICAL HOLDINGS", "T J SMITH NEPHEW LTD", 
"XEROX SOLUTIONS"), class = "factor"), UOI = structure(c(1L, 
1L, 1L, 1L, 1L, 1L, 2L), .Label = c("Each", "Pack"), class = "factor"), 
    Price = c(5.99, 6.99, 40, 230, 35, 80, 79)), .Names = c("Product", 
"Supplier", "UOI", "Price"), class = "data.frame", row.names = c(NA, 
-7L))

DCatalogue <- structure(list(Product = structure(c(7L, 3L, 4L, 5L, 6L, 2L, 
8L, 1L), .Label = c("7.0mm cann canc scr 32x80mm non sterile single use", 
"A4 80gsm white copier paper", "High resilience memory foam standard hospital mattress with stitched seams has a fully enclosing cover", 
"Liston bone cutting forceps with fluted handle straight 152mm", 
"Micro reciprocating blade 25.4mm x 8.0mm x 0.38mm", "Micro reciprocating blade 39.5mm x 7.0mm x 0.38mm", 
"microaire large osc dual tooth 18mm x 90mm x 0.89mm", "Softsilk 2.0 pkg 7x20 ster"
), class = "factor"), Supplier = structure(c(3L, 2L, 6L, 4L, 
4L, 7L, 5L, 1L), .Label = c("BIOMET MERCK LTD", "KAROMED LIMITED", 
"MORGAN STEER ORTHOPAEDICS LTD", "ORTHO SOLUTIONS", "SMITH & NEPHEW ADVANCED SURGICAL DEVICES", 
"SURGICAL HOLDINGS", "XEROX"), class = "factor"), UOI = structure(c(1L, 
1L, 1L, 2L, 2L, 1L, 1L, 1L), .Label = c("Each", "Pack"), class = "factor"), 
    RefPrice = c(38.7, 274.18, 34.96, 79.48, 81.29, 6.99, 5.99, 
    5)), .Names = c("Product", "Supplier", "UOI", "RefPrice"), class = "data.frame", row.names = c(NA, 
-8L))

For experimentation, DOrders has 7 entries, each of which matches one of the 8 rows in the reference set DCatalogue. In the real data, not all orders have a match.

head(DOrders)
                                            Product                   Supplier  UOI  Price
1             31471 - SOFTSILK 2.0 SCREW 7mm x 20mm       T J SMITH NEPHEW LTD Each   5.99
2                       Copier paper white A4 80gsm            XEROX SOLUTIONS Each   6.99
3               microaire dual tooth 18 x 90 x 0.89 Morgan Steer Ortho Limited Each  40.00
4    High resilience memory foam standard  mattress                KAROMED LTD Each 230.00
5                 Liston forceps bone cutting 152mm          SURGICAL HOLDINGS Each  35.00
6 Micro reciprocating blade 25.4mm x 8.0mm x 0.38mm      ORTHOPAEDIC SOLUTIONS Each  80.00

> head(DCatalogue)
                                                                                                 Product                      Supplier  UOI RefPrice
1                                                    microaire large osc dual tooth 18mm x 90mm x 0.89mm MORGAN STEER ORTHOPAEDICS LTD Each    38.70
2 High resilience memory foam standard hospital mattress with stitched seams has a fully enclosing cover               KAROMED LIMITED Each   274.18
3                                          Liston bone cutting forceps with fluted handle straight 152mm             SURGICAL HOLDINGS Each    34.96
4                                                      Micro reciprocating blade 25.4mm x 8.0mm x 0.38mm               ORTHO SOLUTIONS Pack    79.48
5                                                      Micro reciprocating blade 39.5mm x 7.0mm x 0.38mm               ORTHO SOLUTIONS Pack    81.29
6                                                                            A4 80gsm white copier paper                         XEROX Each     6.99

The first step of the linkage is to make sure items are matched by unit of issue (UOI). This is because an item sold individually is clearly different from a pack of units, even if the items themselves are identical. E.g.:

Micro reciprocating blade 25.4mm x 8.0mm x 0.38mm      ORTHOPAEDIC SOLUTIONS Each  80.00

is the same item as, but should not be matched with:

Micro reciprocating blade 25.4mm x 8.0mm x 0.38mm               ORTHO SOLUTIONS Pack    79.48

Therefore, I use the blocking parameter blockfld = 3 to attempt matching only those entries that have the same value in column 3. In addition, exclude = 4 excludes the price from the matching: it will differ between orders and catalogue, and that difference is itself the main interest of the matching. The matching is done with the jarowinkler string comparator on product and supplier names (as described here):

library(RecordLinkage)

rpairs <- compare.linkage(DOrders, DCatalogue, 
                          blockfld = 3,
                          exclude = 4,
                          strcmp = 1:2,
                          strcmpfun = jarowinkler)

Next, I compute a weight for each pair using the method of Contiero et al. (2005):

rpairs <- epiWeights(rpairs)
> summary(rpairs)
Weight distribution:

[0.3,0.4] (0.4,0.5] (0.5,0.6] (0.6,0.7] (0.7,0.8] (0.8,0.9]   (0.9,1] 
        1         1        19        10         3         0         4

Based on this distribution, I want to classify as matches only those pairs with a weight > 0.7:

result <- epiClassify(rpairs, 0.7)
> summary(result)
7 links detected 
0 possible links detected 
31 non-links detected 

This works as far as I can tell, but there are a few problems.

First, getPairs(result) shows that a single entry from DOrders can have high-weight matches with several entries in DCatalogue. E.g.:

This pair is a correct match, with a weight of 0.948:

Micro reciprocating blade 39.5 x 7.0 x 0.38 ORTHOPAEDIC SOLUTIONS   Pack    79  
Micro reciprocating blade 39.5mm x 7.0mm x 0.38mm   ORTHO SOLUTIONS Pack    81.29   0.9480503

But this incorrect match has a weight of 0.928:

Micro reciprocating blade 39.5 x 7.0 x 0.38 ORTHOPAEDIC SOLUTIONS   Pack    79  
Micro reciprocating blade 25.4mm x 8.0mm x 0.38mm   ORTHO SOLUTIONS Pack    79.48   0.9283522

Clearly, I need to restrict the pairs to only the single best match with the highest weight, but how can this be done?

Finally, the end result I am looking for is a merged dataset containing the matched entries from both the orders and the catalogue, with all columns from the two original sets side by side for comparison. getPairs produces output in a clumsy format:

> getPairs(result)
    id  Product Supplier    UOI Price   Weight
1   7   Micro reciprocating blade 39.5 x 7.0 x 0.38 ORTHOPAEDIC SOLUTIONS   Pack    79  
2   5   Micro reciprocating blade 39.5mm x 7.0mm x 0.38mm   ORTHO SOLUTIONS Pack    81.29   0.9480503
3                       
4   5   Liston forceps bone cutting 152mm   SURGICAL HOLDINGS   Each    35  
5   3   Liston bone cutting forceps with fluted handle straight 152mm   SURGICAL HOLDINGS   Each    34.96   0.9329244
...

1 Answer:

Answer (score: 3):

First of all, thanks for providing a reproducible example; it makes answering your question much easier. I will start with your second question:

Finally, the end result I am looking for is a merged dataset containing the matched entries from both the orders and the catalogue, with all columns from the two original sets side by side for comparison.

With single.rows=TRUE, getPairs lists both entries of a pair in one row. In addition, show="links" restricts the output to pairs that were classified as belonging together (see ?getPairs for details):

> matchedPairs <- getPairs(result, single.rows=TRUE, show="links")

However, this does not put the matching columns next to each other; instead, all columns of the first record are followed by all columns of the second record, with the matching weight as the last column. I only show the column names here, because the full table is very wide:

> names(matchedPairs)
 [1] "id1"        "Product.1"  "Supplier.1" "UOI.1"      "Price.1"    "id2"        "Product.2"  "Supplier.2" "UOI.2"      "RefPrice.2" "Weight"    

So if you want a direct column-by-column comparison in this format, you will have to rearrange the columns to suit your needs, for example as sketched below.
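A minimal sketch of such a rearrangement, assuming the column names printed above (the chosen interleaving is just one possibility):

# Put corresponding order/catalogue columns next to each other for comparison
sideBySide <- matchedPairs[, c("id1", "id2",
                               "Product.1",  "Product.2",
                               "Supplier.1", "Supplier.2",
                               "UOI.1",      "UOI.2",
                               "Price.1",    "RefPrice.2",
                               "Weight")]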

Clearly, I need to restrict the pairs to only the single best match with the highest weight, but how can this be done?

The package does not provide this functionality, and I believe that selecting a one-to-one assignment from record linkage results is a procedure that deserves some conceptual attention in its own right. I have never dug deeply into this, so the following may only be a starting point. You can use the data.table library to select, from each group of pairs with the same left-hand id, the pair with the maximum weight (compare How to select the row with the maximum value in each group):

> library(data.table)
> matchedPairs <- data.table(matchedPairs)
> matchedPairs[matchedPairs[,.I[which.max(Weight)],by=id1]$V1, list(id1,id2)]
   id1 id2
1:   7   5
2:   5   3
3:   4   2
4:   2   6
5:   6   1
6:   3   1

Here, list(id1,id2) restricts the output to the record IDs.

To eliminate duplicate mappings of right-hand IDs (in this case, 1 appears twice as id2), you would have to repeat the procedure for id2; a rough sketch follows below. Note, however, that in some cases selecting the pair with the highest weight in step 1 (reducing to unique values of id1) can remove the pair that carries the maximum weight for a given value of id2. Therefore, to select the best overall mapping (e.g. maximizing the sum of weights over all selected mappings), a non-greedy optimization strategy would be needed.
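As an illustration only, a greedy two-step sketch under the assumptions above: it first keeps the best match per id1 and then, among those, the best match per id2. As discussed, this can discard pairs that an optimal assignment would keep.

# Greedy deduplication: best match per order (id1), then best per catalogue entry (id2).
# This is not an optimal (non-greedy) assignment.
bestPerOrder <- matchedPairs[matchedPairs[, .I[which.max(Weight)], by = id1]$V1]
bestOverall  <- bestPerOrder[bestPerOrder[, .I[which.max(Weight)], by = id2]$V1]
bestOverall[, list(id1, id2, Weight)]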

Update: classes and methods for large data sets

For large data sets, the so-called big data classes and methods can be used (see https://cran.r-project.org/web/packages/RecordLinkage/vignettes/BigData.pdf). They use file-backed data structures, so the size limit is the available disk space. The syntax is mostly, but not entirely, identical. For this example, the calls needed to achieve the same result as above would be:

rpairs <- RLBigDataLinkage(DOrders, DCatalogue, 
                      blockfld = 3,
                      exclude = 4,
                      strcmp = 1:2,
                      strcmpfun = "jarowinkler")

rpairs <- epiWeights(rpairs)
result <- epiClassify(rpairs, 0.7)
matchedPairs <- getPairs(result, single.rows=TRUE, filter.link="link")
matchedPairs <- data.table(matchedPairs)
matchedPairs[matchedPairs[,.I[which.max(Weight)],by=id.1]$V1, list(id.1,id.2)]

However, with regard to the size estimate of about 2 TB, this will still not be feasible. I think you will have to reduce the number of pairs further through additional blocking.

The problem in this case is that the package only supports "hard" blocking criteria (i.e. two records must agree exactly on the blocking fields). When linking personal data (which was our use case when developing the package), the day, month and year components of the date of birth can usually be combined for blocking, which reduces the number of pairs considerably without missing match candidates. As far as I can tell from your example, no further "hard" blocking is possible for your data, because matching pairs have only similar, but not equal, attribute values (apart from the unit of issue, which you have already used for blocking). A criterion like "consider only pairs whose product names have a string similarity greater than [some threshold]" seems most appropriate to me. To achieve this, you would have to extend compare.linkage() or RLBigDataLinkage().
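To illustrate the kind of criterion meant here (a sketch on the small example only, not the pair-generation-level filter that very large data would actually require, and the 0.8 threshold is an arbitrary assumption), one could compute Jaro-Winkler similarities of the product names with jarowinkler() from the package and keep only pairs above the threshold:

# Post-hoc filter on candidate pairs by product-name similarity.
# For truly large data this filter would have to be applied while the
# pairs are generated, not afterwards.
pairs <- getPairs(result, single.rows = TRUE, show = "all")
simProd <- jarowinkler(as.character(pairs$Product.1),
                       as.character(pairs$Product.2))
candidates <- pairs[simProd > 0.8, ]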