This is a very large web document: https://gallica.bnf.fr/ark:/12148/bpt6k5619759j.texteBrut. I know how to extract the text:
library(rvest)
library(magrittr)
page_url <- "https://gallica.bnf.fr/ark:/12148/bpt6k5619759j.texteBrut"
page_html <- read_html(page_url)
document <- page_html %>%
  html_nodes("p") %>%
  html_text()
document
[1] "Rappel de votre demande:"
[2] "Format de téléchargement: : Texte"
[3] "Vues 1 à 544 sur 544"
[4] "Nombre de pages: 544"
[5] "Notice complète:"
[6] "Titre : Oeuvres complètes de Molière : accompagnées de notes tirées de tous les commentateurs avec des remarques nouvelles. Monsieur de Pourceaugnac / par M. Félix Lemaistre"
[7] "Auteur : Molière (1622-1673). Auteur du texte"
[8] "Auteur : Voltaire (1694-1778). Auteur du texte"
[9] "Auteur : La Harpe, Jean François de (1739-1803). Auteur du texte"
[10] "Auteur : Auger, Louis-Simon (1772-1829). Auteur du texte"
However, what matters to me is keeping track of the page each line of text was extracted from. The start and end of a page are actually marked by a horizontal rule, as you can see at https://gallica.bnf.fr/ark:/12148/bpt6k5619759j.texteBrut. So instead of retrieving a single vector in which each element is a line of the document, I would like to get a list in which each element is a page, and each page is a vector whose elements are the lines of that page:
[[1]]
[1] "avurrbbihevyupsexvgymphjhdiqtfxzlwrbzpuqqpcxtlyrmyfxewydqnwqpinafaajvhylgaerlqilsvlwnscbiwoyinwjoudu"
[2] "gcgyuizpzznacdnrucvcjajjkbfahvlqqcoudbhpvuuvgrefpglnweznrimuzuydbzjzvhqezmjqtndzdhvvvbnhyipujusjmbhf"
[3] "caugvpyabksaqgktlrcoghkgjaqglpicgcngovvecesasevcdsmimysvrojvpwhbewxfwhdysvdcwmgxlziajwhilclecnkobmnc"
[4] "vuskqpyfqvqexilxqbhviqbdhhldprgdhifwzvhhvcclmljdgqmzsjrvlosftjshpuhxyjfsmfkqsxhaafysgesxwtoechrtekhy"
[[2]]
[1] "muvahkvftgglaphbzfehpnzvemhzixawlvadoxncmtmtzhqjlciozhgspnrusbkycgoqovxslusonmgqehbajbwpcldjquxchsvx"
[2] "pnhpzpbhjvqhehmlchncmgnhapaoqncvezaphilrpqguetutczpydrqthgdhwjtmlfhgvqvofdcylefrmergbkkwnsxlojgyaagw"
[3] "okjhxdpliykzbmdaghtgnsqftxhgpmkpsmiknuugejnrqmzaxqdljnbroxensegyxpikhzwkfzrqairvdhcvglcelnexvcypjkrx"
[4] "ftrbacjpwgmiuwbprvdkfpplycthukvycsyrjwsrokrrvcylzaxxdsgwlctglqaylegeflnlodttkiincavtncxttegstkgvvqgo"
[[3]]
[1] "ndnsdtqxpatoigobldauekhqdbcgvyqmcwyvmcvaredlrfjafiidwvcczqmufvufwjtdhordkaauukjezkyaodffohbzrnhwvioi"
[2] "ywryphperpsnbuspbfengmlllevavpbebfquiguvahshxdleyutvknsfiqcvrsirajqkzppbutsfbspjoirnqacoipcfxisugrto"
[3] "ivuzuxpflzqyphbnsdwvrqwcblxfagdflhqpgldnxkpuhzlhapueowofcgnakgwajgnaaqcvqxzwmorcmjybljsioulscnnntbmx"
[4] "cpbjxincbyrdasbrgrfdzxdzlmogfjmezgdkswpmcjrrlonsvgsaccrjvpbholodgsdcwslpsylslhoxliarkbighsmffoxprffb"
Answer 0 (score: 2)
library(stringi)
library(rvest)
library(tidyverse)
Cache the page, since it is big and loads very slowly:
if (!file.exists("~/Data/forso.html")) {
  read_html(
    "https://gallica.bnf.fr/ark:/12148/bpt6k5619759j.texteBrut"
  ) -> pg
  write_lines(as.character(pg), "~/Data/forso.html")
}
Read it in as lines. This is generally a really bad idea for working with HTML, but it works better for this process, because the XPath needed to handle text between sequences of tags is gnarly and slow (even just finding the <hr> elements with html_nodes() felt slow):
doc <- read_lines("~/Data/forso.html")
Now, find all the <hr> elements, ignoring the first two, since they come after the intro/metadata sections:
pos <- which(doc == "<hr>")[-(1:2)]
Create start/end index markers to locate the text:
starts <- head(pos, -1)
ends <- tail(pos, -1)
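A tiny worked example of how head()/tail() pair up consecutive <hr> positions (the positions here are made up for illustration):

```r
# Hypothetical <hr> line positions in the line vector
pos <- c(10, 25, 40, 60)

starts <- head(pos, -1)  # 10 25 40
ends   <- tail(pos, -1)  # 25 40 60

# page i then spans doc[starts[i]:ends[i]], i.e. hr-to-hr
```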
Iterate along the start/end positions, extract the text, split it into lines, and build a data frame:
map_df(seq_along(starts), ~{
  start <- starts[.x]
  end <- ends[.x]
  data_frame(
    pg = .x,
    txt = read_html(paste0(doc[start:end], collapse="\n")) %>%
      html_children() %>%
      html_text() %>%
      stri_split_lines() %>%
      flatten_chr() %>%
      list()
  )
}) -> xdf
Take a look:
xdf
## # A tibble: 542 x 2
## pg txt
## <int> <list>
## 1 1 <chr [4]>
## 2 2 <chr [2]>
## 3 3 <chr [13]>
## 4 4 <chr [1]>
## 5 5 <chr [35]>
## 6 6 <chr [19]>
## 7 7 <chr [22]>
## 8 8 <chr [18]>
## 9 9 <chr [16]>
## 10 10 <chr [36]>
## # ... with 532 more rows
Another view:
glimpse(xdf)
## Observations: 542
## Variables: 2
## $ pg <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, ...
## $ txt <list> [<"OEUVRES COMPLETES ", "DE MOLIERE ", "TOMI: III ", "">, <"PARIS. — I1IP. SIMON RAÇON ET COUP., RUE D...
And one more:
str(head(xdf))
## Classes 'tbl_df', 'tbl' and 'data.frame': 6 obs. of 2 variables:
## $ pg : int 1 2 3 4 5 6
## $ txt:List of 6
## ..$ : chr "OEUVRES COMPLETES " "DE MOLIERE " "TOMI: III " ""
## ..$ : chr "PARIS. — I1IP. SIMON RAÇON ET COUP., RUE D'ERFURTH, 1. " ""
## ..$ : chr "OEUVRES COMPLETES " "DE MOLIERE " "NOUVELLE ÉDITION " "ACe-OJIPAfi NEES DE NOTES TIRÉES DE TOUS L, E S COMMENTATEURS AVEC DES REMARQUES NOUVELLES " ...
## ..$ : chr ""
## ..$ : chr "OEUVRES " "COMPLÈTES " "DE MOLIÈRE " "MONSIEUR DE POURCEAUGNAC' " ...
## ..$ : chr "MONSIEUR DE POURCEAUGNAC. " "MATASSINS dansants. DEUX AVOCATS chantants. DEUX PROCUREURS dansants. DEUX SERGENTS dansants. TROUPE DE MASQUES"| __truncated__ "La scène est à Paris. " "ACTE PREMIER " ...
This captures the blank lines as well, but I don't know what you need beyond what you described.
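If the blank lines are unwanted, one option is to drop empty strings from each page's vector with nzchar(). This is a sketch on a toy stand-in for the xdf built above (the toy data is made up for illustration):

```r
library(dplyr)
library(purrr)

# Toy stand-in for `xdf`: page number + list-column of lines
xdf <- tibble(
  pg  = 1:2,
  txt = list(c("OEUVRES COMPLETES ", "", "DE MOLIERE "),
             c("", "PARIS. "))
)

# keep only non-empty strings in each page's character vector
xdf <- mutate(xdf, txt = map(txt, ~ .x[nzchar(.x)]))

lengths(xdf$txt)
# 2 1
```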
Answer 1 (score: 2)
Another approach
As @hrbrmstr already mentioned in his answer, XPath is not very friendly when you want to extract the nodes between other nodes... things get very inefficient, very fast...
So keep in mind that the following code will take minutes to complete (or longer, depending on your machine)... (maybe other users can use this answer as a base to speed things up).
Having said that:
library( xml2 )
library( magrittr )   # for %>%
library( data.table )

# get the contents of the webpage
doc <- read_html( "https://gallica.bnf.fr/ark:/12148/bpt6k5619759j.texteBrut" )

# determine how many hr-tags/nodes there are in the document
hr <- length( xml_nodes( doc, "hr") )

# create an empty list
l <- list()

# fill the list with a loop. This seems to take forever, but it works!
# just be patient (and get a cup of coffee, or two...).
for( i in seq(1, hr, by = 1) ) {
  # set up the xpath:
  # get all p-nodes after the i-th hr-node that have exactly i preceding hr-nodes
  xpath_ <- paste0( ".//hr[", i, "]/following-sibling::p[count(preceding-sibling::hr)=", i, "]" )
  l[[i]] <- xml_find_all( doc, xpath = xpath_ ) %>% xml_text() %>% data.table()
}
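To see what the sibling-counting XPath does, here is a minimal document (made up for illustration) with the i = 1 case: only the <p> nodes with exactly one preceding <hr> are selected, i.e. the content between the first and second rules:

```r
library(xml2)

# toy document: intro, then two <hr>-delimited "pages"
doc <- read_html("<body><p>intro</p><hr><p>a</p><p>b</p><hr><p>c</p></body>")

xml_text(xml_find_all(
  doc,
  ".//hr[1]/following-sibling::p[count(preceding-sibling::hr)=1]"
))
# "a" "b"
```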
Some results:
l[1:5]
# [[1]]
# Empty data.table (0 rows) of 1 col: .
#
# [[2]]
# Empty data.table (0 rows) of 1 col: .
#
# [[3]]
# .
# 1: OEUVRES COMPLETES
# 2: DE MOLIERE
# 3: TOMI: III
#
# [[4]]
# .
# 1: PARIS. — I1IP. SIMON RAÇON ET COUP., RUE D'ERFURTH, 1.
#
# [[5]]
# .
# 1: OEUVRES COMPLETES
# 2: DE MOLIERE
# 3: NOUVELLE ÉDITION
# 4: ACe-OJIPAfi NEES DE NOTES TIRÉES DE TOUS L, E S COMMENTATEURS AVEC DES REMARQUES NOUVELLES
# 5: PAR FÉLIX L E M A I T R E
# 6: P R É C É D É E
# 7: DE LA VIE DE MOLIÈRE PAR VOLTAIRE
# 8: TOME TROISIEME
# 9: PARIS
# 10: GARNIER FRÈRES, LIBRAIRES-ÉDITEURS
# 11: G, RUE DES SAINTS-PÈRES, ET P A L A I S-R 0 V A I., 213
# 12: 8 6 7
Or bind everything into one data.table:
dt <- rbindlist(l, use.names = TRUE, idcol = "page")
# page .
# 1: 3 OEUVRES COMPLETES
# 2: 3 DE MOLIERE
# 3: 3 TOMI: III
# 4: 4 PARIS. — I1IP. SIMON RAÇON ET COUP., RUE D'ERFURTH, 1.
# 5: 5 OEUVRES COMPLETES
# 6: 5 DE MOLIERE
# 7: 5 NOUVELLE ÉDITION
# 8: 5 ACe-OJIPAfi NEES DE NOTES TIRÉES DE TOUS L, E S COMMENTATEURS AVEC DES REMARQUES NOUVELLES
# 9: 5 PAR FÉLIX L E M A I T R E
# 10: 5 P R É C É D É E
# 11: 5 DE LA VIE DE MOLIÈRE PAR VOLTAIRE
# 12: 5 TOME TROISIEME
# 13: 5 PARIS
# 14: 5 GARNIER FRÈRES, LIBRAIRES-ÉDITEURS
# 15: 5 G, RUE DES SAINTS-PÈRES, ET P A L A I S-R 0 V A I., 213
# 16: 5 8 6 7
# 17: 7 OEUVRES
# 18: 7 COMPLÈTES
# 19: 7 DE MOLIÈRE
# 20: 7 MONSIEUR DE POURCEAUGNAC'
Answer 2 (score: 1)
Finding the indices of all the hr nodes is a straightforward way to solve the problem. The mutate section is the most notable part: it uses %in% and a cumulative sum.
# set up and read
library(rvest)
library(xml2)
library(dplyr)
page_url <- "https://gallica.bnf.fr/ark:/12148/bpt6k5619759j.texteBrut"
page_html <- read_html(page_url)
# filter to body only, so no need to deal with child nodes
allbodynodes <- page_html %>%
  xml_node('body')

# get all nodes and all hr nodes to compare later
# the first could be put into the pipeline, but it's clearer to me here
allnodes <- allbodynodes %>%
  xml_nodes('*')

allhr <- allbodynodes %>%
  xml_nodes('hr')

alltext <- allnodes %>%
  html_text(trim = T) %>%                  # convert to text only
  as.data.frame(stringsAsFactors = F) %>%  # put into a data frame
  select(maintext = '.') %>%               # give the text a variable name
  mutate(
    ishr = allnodes %in% allhr,            # check which nodes were <hr> (now blank)
    page = cumsum(ishr) + 1                # add page numbers by counting the hr tags
  ) %>%
  filter(!ishr) %>%                        # get rid of the blank hr lines
  select(-ishr)                            # drop the ishr helper column
# split into a list of sorts if desired
alltextlist <- split(alltext$maintext,alltext$page)
I hope there is a more succinct way to create the index (preferably inside the dplyr pipeline), but I haven't found one yet.
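For what it's worth, one way to fold the index into a single pipeline is to tag each node with its name and count the "hr" tags with cumsum(). This is only a sketch on a toy document (the HTML string and the use of html_name() are assumptions, not part of the answers above):

```r
library(xml2)
library(rvest)
library(dplyr)

# toy document standing in for the Gallica page
html <- read_html("<body><p>meta</p><hr><p>line 1</p><p>line 2</p><hr><p>line 3</p></body>")

alltext <- html %>%
  html_node("body") %>%
  html_children() %>%
  { tibble(
      tag      = html_name(.),              # node names, e.g. "p" or "hr"
      maintext = html_text(., trim = TRUE)
    ) } %>%
  mutate(page = cumsum(tag == "hr") + 1) %>%  # page index grows at each <hr>
  filter(tag != "hr") %>%
  select(-tag)

# list of pages, each a character vector of lines
split(alltext$maintext, alltext$page)
```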