Non-greedy search at the start of a string

Date: 2017-06-05 22:15:59

Tags: python regex non-greedy

I want to extract links from the following text:

[{"file":"https:\/\/www.rapidvideo.com\/loadthumb.php?v=FFIMB47EWD","kind":"thumbnails"}], 
    "sources": [
        {"file":"https:\/\/www588.playercdn.net\/85\/1\/e_q8OBtv52BRyClYa_w0kw\/1496784287\/170512\/359E33j28Jo0ovY.mp4",
         "label":"Standard (288p)","res":"288"},
        {"file":"https:\/\/www726.playercdn.net\/86\/1\/q64Rsb8lG_CnxQAX6EZ2Sw\/1496784287\/170512\/371lbWrqzST1OOf.mp4"

Specifically, I want to extract the links that end in .mp4.

My regular expression is as follows:

"file":"(https\:.*?\.mp4)"

However, my match is wrong, because the first link, which ends in .php, is included in the match. I am practicing on Pythex.org. How can I avoid matching that first link? The HTML page I am trying to parse is https://www.rapidvideo.com/e/FFIMB47EWD
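For reference, the stray match happens because a lazy `.*?` can still expand across the closing quote of the `.php` URL on its way to the first `.mp4`; restricting the repeated class to non-quote characters stops that. A minimal sketch with shortened, hypothetical URLs:

```python
import re

# Shortened stand-in for the page text; URLs here are hypothetical.
text = ('[{"file":"https:\\/\\/example.com\\/loadthumb.php?v=X","kind":"thumbnails"}], '
        '"sources": [{"file":"https:\\/\\/cdn.example.com\\/clip.mp4","label":"Standard"}]')

# Lazy .*? crosses the quote after the .php link and captures junk:
print(re.findall(r'"file":"(https\:.*?\.mp4)"', text))

# [^"] cannot cross a quote, so only values that truly end in .mp4 match:
print(re.findall(r'"file":"(https\:[^"]*\.mp4)"', text))
```

The first pattern returns one long capture that starts at the `.php` URL; the second returns only the `.mp4` link.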

1 Answer:

Answer 0 (score: 2)

Why even use a regex? This looks like a JSON object / Python dict; you can iterate over it and use str.endswith:

>>> sources = {
...     "sources": [
...         {"file": "https:\/\/www588.playercdn.net\/85\/1\/e_q8OBtv52BRyClYa_w0kw\/1496784287\/170512\/359E33j28Jo0ovY.mp4",
...          "label": "Standard (288p)","res":"288"},
...         {"file": "https:\/\/www726.playercdn.net\/86\/1\/q64Rsb8lG_CnxQAX6EZ2Sw\/1496784287\/170512\/371lbWrqzST1OOf.mp4",
...          "label": "Standard (288p)","res":"288"}
...     ]
... }
>>> for item in sources['sources']:
...     if item['file'].endswith('.mp4'):
...         print(item['file'])
... 
https:\/\/www588.playercdn.net\/85\/1\/e_q8OBtv52BRyClYa_w0kw\/1496784287\/170512\/359E33j28Jo0ovY.mp4
https:\/\/www726.playercdn.net\/86\/1\/q64Rsb8lG_CnxQAX6EZ2Sw\/1496784287\/170512\/371lbWrqzST1OOf.mp4
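If the data arrives as raw JSON text rather than an already-built dict, `json.loads` handles it too, and it decodes the `\/` escapes into plain slashes along the way. A sketch with a shortened, hypothetical payload:

```python
import json

# Hypothetical, shortened payload; the real one comes from the page source.
raw = '{"sources": [{"file": "https:\\/\\/cdn.example.com\\/clip.mp4", "res": "288"}]}'

data = json.loads(raw)  # \/ escapes become plain / here
mp4_links = [s['file'] for s in data['sources'] if s['file'].endswith('.mp4')]
print(mp4_links)
```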

Edit:

The link becomes available in the video tag only after the page's JavaScript has run. You could use a headless browser, but I simply used Selenium to load the page completely and then save the HTML.

Once you have the full page HTML, you can parse it with BeautifulSoup instead of regular expressions.

Using regular expressions to parse HTML: why not?

from bs4 import BeautifulSoup
from selenium import webdriver


def extract_mp4_link(page_html):
    # Parse the rendered HTML and pull the src attribute off the <video> tag.
    soup = BeautifulSoup(page_html, 'lxml')
    return soup.find('video')['src']


def get_page_html(url):
    # Let a real browser execute the page's JavaScript, then grab the DOM.
    driver = webdriver.Chrome()
    driver.get(url)
    page_source = driver.page_source
    driver.quit()  # quit() ends the whole session; close() only closes the window
    return page_source


if __name__ == '__main__':
    page_url = 'https://www.rapidvideo.com/e/FFIMB47EWD'
    page_html = get_page_html(page_url)
    print(extract_mp4_link(page_html))
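To see the parsing step in isolation, here is a self-contained run on a hypothetical snippet of already-rendered HTML (using the stdlib `html.parser` backend so no lxml install is needed):

```python
from bs4 import BeautifulSoup

# Hypothetical rendered markup; on the real page the <video> src is filled in by JS.
rendered = '<html><body><video src="https://cdn.example.com/clip.mp4"></video></body></html>'

soup = BeautifulSoup(rendered, 'html.parser')
print(soup.find('video')['src'])
```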