Extracting the jpg name from a URL with R

Date: 2018-11-01 11:48:45

Tags: r url jpeg

Can someone help me with this problem: I have a column of URLs in a tbl and I need to extract the jpg name from each one. Here is an example URL: https://content_xxx.xxx.com/vp/969ffffff61/5C55ABEB/t51.2ff5-15/e35/13643048_612108275661958_805860992_n.jpg?ff_cache_key=fffffQ%3ff%3D.2 and this is the part I want to extract: 13643048_612108275661958_805860992_n. Thanks for your help.

2 answers:

Answer 0 (score: 2)

This requires two things:

  1. Parse the URL itself
  2. Get the file name from the URL's path

You could do both manually, but it is much better to use existing tools. The first part is solved by the parseURI function from the XML package:

uri = 'https://content_xxx.xxx.com/vp/969ffffff61/5C55ABEB/t51.2ff5-15/e35/13643048_612108275661958_805860992_n.jpg?ff_cache_key=fffffQ%3ff%3D.2'
parts = XML::parseURI(uri)

The second part is easily solved by the basename function:

filename = basename(parts$path)
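
Since the part you want does not include the .jpg extension, you can additionally strip it with tools::file_path_sans_ext() (a small sketch building on the parts object above):

# basename() keeps the ".jpg"; file_path_sans_ext() drops it,
# leaving "13643048_612108275661958_805860992_n"
filename = tools::file_path_sans_ext(basename(parts$path))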

Answer 1 (score: 1)

A search for "R parse URL" could have saved you about 400 keystrokes of typing (so I hope the URL was pasted).

Regardless, you will want to handle vectors of these, so there is a better approach. In fact, there are several ways to do this URL path extraction in R. Here are three:

library(stringi)
library(urltools)
library(httr)
library(XML)
library(dplyr)

We'll generate 100 unique URLs that fit the same Instagram pattern (NOTE: scraping Instagram violates its ToS and is controlled by robots.txt. If your URLs did not come from the Instagram API, please let me know so I can delete this answer, since I don't help content thieves).

set.seed(0)

paste(
  "https://content_xxx.xxx.com/vp/969ffffff61/5C55ABEB/t51.2ff5-15/e35/13643048_612108275661958_805860992_n.jpg?ff_cache_key=fffffQ%3ff%3D.2",
  stri_rand_strings(100, 8, "[0-9]"), "_",
  stri_rand_strings(100, 15, "[0-9]"), "_",
  stri_rand_strings(100, 9, "[0-9]"), "_",
  stri_rand_strings(100, 1, "[a-z]"),
  ".jpg?ff_cache_key=MTMwOTE4NjEyMzc1OTAzOTc2NQ%3D%3D.2",
  sep=""
) -> img_urls

head(img_urls)
## [1] "https://content_xxx.xxx.com/vp/969ffffff61/5C55ABEB/t51.2ff5-15/e35/13643048_612108275661958_805860992_n.jpg?ff_cache_key=fffffQ%3ff%3D.2"
## [2] "https://https://content_xxx.xxx.com/vp/969b7087cc97408ccee167d473388761/5C55ABEB/t51.2885-15/e35/66021637_359927357880233_471353444_q.jpg?ff_cache_key=MTMwOTE4NjEyMzc1OTAzOTc2NQ%3D%3D.2"
## [3] "https://https://content_xxx.xxx.com/vp/969b7087cc97408ccee167d473388761/5C55ABEB/t51.2885-15/e35/47937926_769874508959124_426288550_z.jpg?ff_cache_key=MTMwOTE4NjEyMzc1OTAzOTc2NQ%3D%3D.2"
## [4] "https://https://content_xxx.xxx.com/vp/vp/969b7087cc97408ccee167d473388761/5C55ABEB/t51.2885-15/e35/12303834_440673970920272_460810703_n.jpg?ff_cache_key=MTMwOTE4NjEyMzc1OTAzOTc2NQ%3D%3D.2"
## [5] "https://https://content_xxx.xxx.com/vp/969b7087cc97408ccee167d473388761/5C55ABEB/t51.2885-15/e35/54186717_202600346704982_713363439_y.jpg?ff_cache_key=MTMwOTE4NjEyMzc1OTAzOTc2NQ%3D%3D.2"
## [6] "https://https://content_xxx.xxx.com/vp/969b7087cc97408ccee167d473388761/5C55ABEB/t51.2885-15/e35/48675570_402479399847865_689787883_e.jpg?ff_cache_key=MTMwOTE4NjEyMzc1OTAzOTc2NQ%3D%3D.2"

Now, let's try parsing those URLs:

invisible(urltools::url_parse(img_urls))

invisible(httr::parse_url(img_urls))
## Error in httr::parse_url(img_urls): length(url) == 1 is not TRUE

DOH! httr can't do it.

invisible(XML::parseURI(img_urls))
## Error in if (is.na(uri)) return(structure(as.character(uri), class = "URI")): the condition has length > 1

DOH! XML can't do it either.

That means we need to use the sapply() crutch for httr and XML to get the path component (as Konrad showed, you can run basename() on any resulting vector):

data_frame(
  urltools = urltools::url_parse(img_urls)$path,
  httr = sapply(img_urls, function(URL) httr::parse_url(URL)$path, USE.NAMES = FALSE),
  XML = sapply(img_urls, function(URL) XML::parseURI(URL)$path, USE.NAMES = FALSE)
) -> paths

glimpse(paths)
## Observations: 100
## Variables: 3
## $ urltools <chr> "vp/969b7087cc97408ccee167d473388761/5C55ABEB/t51.2885-15/e35/82359289_380972639303339_908467218_h...
## $ httr     <chr> "vp/969b7087cc97408ccee167d473388761/5C55ABEB/t51.2885-15/e35/82359289_380972639303339_908467218_h...
## $ XML      <chr> "/vp/969b7087cc97408ccee167d473388761/5C55ABEB/t51.2885-15/e35/82359289_380972639303339_908467218_...

Note that XML includes the leading / in the path, which is not really standard. That won't matter for this example, but be aware of the difference in general.
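
A quick sanity check (a sketch, assuming the paths data frame built above) shows that basename() discards everything up to the last /, so the leading slash in the XML column does not affect the extracted file names:

# should be TRUE if the three columns differ only by the leading "/"
identical(basename(paths$XML), basename(paths$urltools))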

We'll time them on a single URL first, since XML and httr have that sad limitation:

microbenchmark::microbenchmark(
  urltools = urltools::url_parse(img_urls[1])$path,
  httr = httr::parse_url(img_urls[1])$path,
  XML = XML::parseURI(img_urls[1])$path
)
## Unit: microseconds
##      expr     min       lq      mean   median       uq      max neval
##  urltools 351.268 397.6040 557.09641 499.2220 618.5945 1309.454   100
##      httr 550.298 619.5080 843.26520 717.0705 888.3915 4213.070   100
##       XML  11.858  16.9115  27.97848  26.1450  33.9065  109.882   100

XML looks faster, but in practice it isn't:

microbenchmark::microbenchmark(
  urltools = urltools::url_parse(img_urls)$path,
  httr = sapply(img_urls, function(URL) httr::parse_url(URL)$path, USE.NAMES = FALSE),
  XML = sapply(img_urls, function(URL) XML::parseURI(URL)$path, USE.NAMES = FALSE)
)
## Unit: microseconds
##      expr       min        lq      mean     median        uq        max neval
##  urltools   718.887   853.374  1093.404   918.3045  1146.540   2872.076   100
##      httr 58513.970 64738.477 80697.548 68908.7635 81549.154 224157.857   100
##       XML  1155.370  1245.415  2012.660  1359.8215  1880.372  26184.943   100

If you really want to use regular expressions, you can read the RFC with the URL BNF and write a naive regex to hack out a small piece of one, or Google the dozens of answers with representative examples that handle improperly formatted URIs, but parsing is generally the better strategy for diverse URL content. For your case, splitting and a regex may work just fine, but it isn't necessarily much faster than parsing:

microbenchmark::microbenchmark(
  urltools = tools::file_path_sans_ext(basename(urltools::url_parse(img_urls)$path)),
  httr = tools::file_path_sans_ext(basename(sapply(img_urls, function(URL) httr::parse_url(URL)$path, USE.NAMES = FALSE))),
  XML = tools::file_path_sans_ext(basename(sapply(img_urls, function(URL) XML::parseURI(URL)$path, USE.NAMES = FALSE))),
  regex = stri_match_first_regex(img_urls, "/([[:digit:]]{8}_[[:digit:]]{15}_[[:digit:]]{9}_[[:alpha:]]{1})\\.jpg\\?")[,2]
)
## Unit: milliseconds
##      expr       min        lq      mean    median        uq        max neval
##  urltools  1.140421  1.228988  1.502525  1.286650  1.444522   6.970044   100
##      httr 56.563403 65.696242 77.492290 69.809393 80.075763 157.657508   100
##       XML  1.513174  1.604012  2.039502  1.702018  1.931468  11.306436   100
##     regex  1.137204  1.223683  1.337675  1.260339  1.397273   2.241121   100

As noted in that last example, you'll need to run tools::file_path_sans_ext() on the result to remove the .jpg (or sub() it out).
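
For completeness, a minimal sketch of that sub() variant (assuming the img_urls vector from above; the pattern simply trims a trailing .jpg):

# take the file name from the parsed path, then trim the ".jpg" suffix
fnames <- basename(urltools::url_parse(img_urls)$path)
sub("\\.jpg$", "", fnames)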