I have a very large RDS file of articles (13GB). The data frame in R's global environment is about 6GB.
Each article has an ID, a date, a POS-tagged body text, a pattern of only two or three words (also POS-tagged), and some other metadata. Here is a dput() of a small sample:
structure(list(an = c("1", "2", "3", "4", "5"), pub_date = structure(c(11166, 8906, 12243, 4263, 13077), class = "Date"),
source_code = c("1", "2", "2", "3", "2"), word_count = c(99L,
97L, 30L, 68L, 44L), POStagged = c("the_DT investment_NN firm_NN lehman_NN brothers_NNS holdings_NNS said_VBD yesterday_NN that_IN it_PRP would_MD begin_VB processing_VBG its_PRP$ own_JJ stock_NN trades_NNS by_IN early_RB next_JJ year_NN and_CC end_VB its_PRP$ existing_VBG tradeclearing_NN contract_NN with_IN the_DT bear_NN stearns_VBZ companies_NNS lehman_NN which_WDT is_VBZ the_DT last_JJ big_JJ securities_NNS firm_NN to_TO farm_VB out_RP its_PRP$ stock_NN trade_NN processing_NN said_VBD it_PRP would_MD save_VB million_CD to_TO million_CD annually_RB by_IN clearing_VBG its_PRP$ own_JJ trades_NNS a_DT bear_NN stearns_VBZ spokesman_NN said_VBD lehmans_NNS business_NN contributed_VBD less_JJR than_IN percent_NN to_TO bear_VB stearnss_NN clearing_NN operations_NNS",
"six_CD days_NNS after_IN she_PRP was_VBD introduced_VBN as_IN womens_NNS basketball_NN coach_NN at_IN wisconsin_NN with_IN a_DT fouryear_JJ contract_NN nell_NN fortner_NN resigned_VBD saying_VBG she_PRP wants_VBZ to_TO return_VB to_TO louisiana_JJR tech_NN as_IN an_DT assistant_NN im_NN shocked_VBN said_VBD associate_JJ athletic_JJ director_NN cheryl_NN marra_NN east_JJ carolina_NN came_VBD from_IN behind_IN with_IN two_CD runs_NNS in_IN the_DT seventh_JJ inning_NN and_CC defeated_VBD george_NN mason_NN in_IN the_DT colonial_JJ athletic_JJ association_NN baseball_NN tournament_NN in_IN norfolk_NN johnny_NN beck_NN went_VBD the_DT distance_NN for_IN the_DT pirates_NNS boosting_VBG his_PRP$ record_NN to_TO the_DT patriots_NNS season_NN closed_VBD at_IN",
"tomorrow_NN clouds_NNS and_CC sun_NN high_JJ low_JJ", "the_DT diversity_NN of_IN the_DT chicago_NN financial_JJ future_NN markets_NNS the_DT chicagoans_NNS say_VBP also_RB enhances_VBG their_PRP$ strength_NN traders_NNS and_CC arbitragers_NNS can_MD exploit_VB price_NN anomalies_NNS for_IN example_NN between_IN cd_NN and_CC treasurybill_NN futures_NNS still_RB nyfe_JJ supporters_NNS say_VBP their_PRP$ head_NN start_VB in_IN cd_NN futures_NNS and_CC technical_JJ advantages_NNS in_IN the_DT contract_NN traded_VBN on_IN the_DT nyfe_NN mean_VBP that_IN the_DT chicago_NN exchanges_NNS will_MD continue_VB to_TO play_VB catchup_NN",
"williams_NNS industries_NNS inc_IN the_DT manufacturing_NN and_CC construction_NN company_NN provides_VBZ steel_NN products_NNS to_TO build_VB major_JJ infrastructure_NN it_PRP has_VBZ been_VBN involved_VBN with_IN area_NN landmark_NN projects_NNS including_VBG rfk_JJ stadium_NN left_VBD the_DT woodrow_JJ wilson_NN bridge_NN and_CC the_DT mixing_NN bowl_NN"
), phrases = c("begin processing", "wants to return", "high",
"head start in", "major"), repeatPhraseCount = c(1L, 1L,
1L, 1L, 1L), pattern = c("begin_V", "turn_V", "high_JJ",
"start_V", "major_JJ"), code = c(NA_character_, NA_character_,
NA_character_, NA_character_, NA_character_), match = c(TRUE,
TRUE, TRUE, TRUE, TRUE)), .Names = c("an", "pub_date", "source_code", "word_count", "POStagged", "phrases", "repeatPhraseCount", "pattern",
"code", "match"), row.names = c("4864065", "827626", "6281115",
"281713", "3857705"), class = "data.frame")
My goal is (for each row) to detect the presence of the pattern in POStagged.
The pattern column is a fixed list I built myself. The list contains 465 words/phrases along with their POS.
I want the match to distinguish between a word used as a verb versus the same word used as a noun; essentially, to identify the context.
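To illustrate the kind of disambiguation I mean (a minimal sketch; the word "record" is my own example here, not from my list), matching on the POS suffix separates the two uses:

```r
library(stringr)

noun_use <- "the_DT record_NN shows_VBZ"
verb_use <- "they_PRP record_VBP everything_NN"

# the POS suffix carries the context, so "record_V" hits only the verb use
str_detect(noun_use, "record_V")  # FALSE
str_detect(verb_use, "record_V")  # TRUE
```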
However, in some cases I have phrases rather than single words, where the tail end of the phrase can be a varying pattern. For example, take the phrase "might not be able to make the deal", where "be able to make the deal" could be any verb phrase (e.g., "be able to reach the deal"). My attempts have varied, and I am not sure I am going about this the right way:
--might_MD not_RB _VP (this works and picks up ***might not***, but is clearly wrong since the verb phrase after it is not picked up)
If I use fixed(), then str_detect works and executes very fast. However, fixed() misses some cases (as described above), so I cannot compare the results. Here is an example:
str_detect("might_MD not_RB be able to make the deal", "might_MD not_RB [A-Za-z]+(?:\\s+[A-Za-z]+){0,6}")
## [1] TRUE
str_detect("might_MD not_RB be able to make the deal", fixed("might_MD not_RB [A-Za-z]+(?:\\s+[A-Za-z]+){0,6}"))
## [1] FALSE
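The FALSE above is expected behavior: fixed() makes stringr compare the pattern byte-for-byte, so regex metacharacters are treated as literal text. A tiny illustration:

```r
library(stringr)

x <- "might_MD not_RB be_VB"

str_detect(x, "might_MD not_RB [a-z]+_VB")         # TRUE: the regex engine interprets [a-z]+
str_detect(x, fixed("might_MD not_RB [a-z]+_VB"))  # FALSE: looks for the literal characters "[a-z]+"
str_detect(x, fixed("might_MD not_RB be_VB"))      # TRUE: fixed() is fine for exact strings
```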
https://stackoverflow.com/a/51406046/3290154
The output I want is another column in the data frame with a TRUE/FALSE result telling me whether the pattern was seen in POStagged.
## Attempt 1 - R fatally crashes
## this works on a smaller sample but bombs R on the large data frame
df$match <- str_detect(df$POStagged, df$pattern)
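One variant I considered (a sketch, not yet verified at full scale): run the same vectorized call over row chunks, so each iteration only touches a slice of the data frame:

```r
library(stringr)

chunk_size <- 100000L
chunks <- split(seq_len(nrow(df)), ceiling(seq_len(nrow(df)) / chunk_size))

df$match <- NA
for (i in chunks) {
  df$match[i] <- str_detect(df$POStagged[i], df$pattern[i])
}
```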
## Attempt 2
## This bombs too (using multidplyr; some lines of code omitted)
partition(source_code, cluster=cl) %>%
mutate(match=str_detect(POStagged, pattern)) %>%
filter(!(match==FALSE)) %>%
filter(!is.na(match)) %>%
collect()
## I get this error: Error in serialize(data, node$con) : error writing to connection
From my understanding, this is due to limitations in how multidplyr handles memory and loads data (https://github.com/hadley/multidplyr/blob/master/vignettes/multidplyr.md). Since multidplyr uses the parallel package, if I extrapolate here: with my data split into 5 copies, that would be 6 * 5 = 30GB, plus whatever the packages need, and so on.
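A fork-based alternative I have not fully tested (Unix-like systems only; a sketch assuming the workers can read the parent's data frame via copy-on-write instead of serializing 6GB per worker):

```r
library(parallel)
library(stringr)

n_cores <- 4L
chunks <- split(seq_len(nrow(df)), cut(seq_len(nrow(df)), n_cores))

# forked workers share the parent's memory until they write to it
res <- mclapply(chunks, function(i) {
  str_detect(df$POStagged[i], df$pattern[i])
}, mc.cores = n_cores)

df$match <- unlist(res, use.names = FALSE)
```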
## Attempt 3 - I tried to save the RDS to a csv/txt file and use the chunked package; however, the resulting csv/txt was over 100GB.
## Attempt 4 - I tried to run a for loop, but I estimate it would take ~12 days to run
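A variant of the loop idea that might be tractable (a sketch): since my pattern list has only 465 entries, looping over the unique patterns instead of the rows cuts the iteration count from millions to a few hundred, with each str_detect call vectorized over the matching rows:

```r
library(stringr)

df$match <- NA
for (p in unique(df$pattern)) {
  rows <- which(df$pattern == p)          # all rows sharing this pattern
  df$match[rows] <- str_detect(df$POStagged[rows], p)  # one compiled regex per pattern
}
```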
I read a bit about regex greediness, so I tried modifying my pattern column by appending ?+ (making my regexes lazy). However, going that route means I cannot use fixed(), because all my matches then come back FALSE. Any help pointing me in the right direction would be greatly appreciated!
https://stringr.tidyverse.org/articles/regular-expressions.html
What do 'lazy' and 'greedy' mean in the context of regular expressions?
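For reference, greediness only changes how much a quantifier consumes, not whether a match exists at all, so for a plain TRUE/FALSE detection it should not change the result (a small illustration):

```r
library(stringr)

x <- "a_NN b_NN c_NN"

str_extract(x, "a_NN.*_NN")   # greedy: "a_NN b_NN c_NN"
str_extract(x, "a_NN.*?_NN")  # lazy:   "a_NN b_NN"
str_detect(x, "a_NN.*_NN") == str_detect(x, "a_NN.*?_NN")  # TRUE either way
```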
Answer 0 (score: 0)
Perhaps you could make faster progress and get better results by using Open Sourcing BERT: State-of-the-Art Pre-training for Natural Language Processing instead? Of course, that is a completely different approach, sorry. I mention it just in case you were not aware of it for some reason.