I have a list of URLs that I want to parse:
['https://www.richmondfed.org/-/media/richmondfedorg/press_room/speeches/president_jeff_lacker/2017/pdf/lacker_speech_20170303.pdf','http://www.federalreserve.gov/newsevents/speech/powell20160929a.htm','http://www.federalreserve.gov/newsevents/speech/fischer20161005a.htm']
I want to use a regex to create a new list containing the digits at the end of each string, plus any letter that appears before the final punctuation (some strings contain digits in two places, as shown by the first string in the list above). So the new list would look like:
['20170303', '20160929a', '20161005a']
Here is what I have tried, with no luck:
code = re.search(r'?[0-9a-z]*', urls)
Update

Running:
[re.search(r'(\d+)\D+$', url).group(1) for url in urls]
I get the following error:
AttributeError: 'NoneType' object has no attribute 'group'
Also, this doesn't seem to pick up the letter after the digits when there is one!
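For context, `re.search` returns `None` when the pattern finds no match, and calling `.group(1)` on `None` raises exactly this `AttributeError`. A guarded sketch of the comprehension (using an assumed variant of the pattern, `(\d+[a-z]*)\.\w+$`, which also captures a trailing letter) might look like:

```python
import re

urls = [
    'https://www.richmondfed.org/-/media/richmondfedorg/press_room/speeches/president_jeff_lacker/2017/pdf/lacker_speech_20170303.pdf',
    'http://www.federalreserve.gov/newsevents/speech/powell20160929a.htm',
    'http://www.federalreserve.gov/newsevents/speech/fischer20161005a.htm',
]

# \d+[a-z]* also captures a trailing letter; the `if m` guard skips
# URLs where the pattern does not match instead of raising.
matches = (re.search(r'(\d+[a-z]*)\.\w+$', url) for url in urls)
codes = [m.group(1) for m in matches if m]
print(codes)  # → ['20170303', '20160929a', '20161005a']
```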
Answer 0 (score: 0)
Assuming:
>>> lios=['https://www.richmondfed.org/-/media/richmondfedorg/press_room/speeches/president_jeff_lacker/2017/pdf/lacker_speech_20170303.pdf','http://www.federalreserve.gov/newsevents/speech/powell20160929a.htm','http://www.federalreserve.gov/newsevents/speech/fischer20161005a.htm']
you can do this:
import re

for s in lios:
    m = re.search(r'(\d+\w*)\D+$', s)
    if m:
        print(m.group(1))
which prints:
20170303
20160929a
20161005a
This is based on this regex:
(\d+\w*)\D+$
 ^ digits
    ^ any word characters (e.g. a trailing letter)
        ^ non-digits
            ^ end of string
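The same regex can also be applied with a list comprehension that skips non-matching strings (a minimal sketch using the list from above):

```python
import re

lios = [
    'https://www.richmondfed.org/-/media/richmondfedorg/press_room/speeches/president_jeff_lacker/2017/pdf/lacker_speech_20170303.pdf',
    'http://www.federalreserve.gov/newsevents/speech/powell20160929a.htm',
    'http://www.federalreserve.gov/newsevents/speech/fischer20161005a.htm',
]

# Keep only the captured group from the strings where the pattern matches.
result = [m.group(1) for m in (re.search(r'(\d+\w*)\D+$', s) for s in lios) if m]
print(result)  # → ['20170303', '20160929a', '20161005a']
```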
Answer 2 (score: 0)
# python3
import re
from urllib.parse import urlparse
from os.path import basename

def extract_id(url):
    path = urlparse(url).path      # strip scheme, host, and query
    resource = basename(path)      # final path segment, e.g. 'powell20160929a.htm'
    _id = re.search(r'\d[^.]*', resource)
    if _id:
        return _id.group(0)

urls = ['https://www.richmondfed.org/-/media/richmondfedorg/press_room/speeches/president_jeff_lacker/2017/pdf/lacker_speech_20170303.pdf','http://www.federalreserve.gov/newsevents/speech/powell20160929a.htm','http://www.federalreserve.gov/newsevents/speech/fischer20161005a.htm']

# /!\ note: the ids list will contain None for any URL where the pattern doesn't match ;)
ids = [extract_id(url) for url in urls]
print(ids)
Output:
['20170303', '20160929a', '20161005a']
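If you want to drop the `None` entries that the comment above warns about, one sketch (with a hypothetical non-matching URL added to show the filtering) is:

```python
import re
from urllib.parse import urlparse
from os.path import basename

def extract_id(url):
    path = urlparse(url).path      # strip scheme, host, and query
    resource = basename(path)      # final path segment
    match = re.search(r'\d[^.]*', resource)
    return match.group(0) if match else None

urls = ['http://www.federalreserve.gov/newsevents/speech/powell20160929a.htm',
        'https://example.com/no-digits-here.htm']  # hypothetical non-matching URL

# Filter out the None produced by the non-matching URL.
ids = [i for i in map(extract_id, urls) if i is not None]
print(ids)  # → ['20160929a']
```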
Answer 3 (score: -1)
import re

patterns = {
    'url_refs': re.compile(r"(\d+[a-z]*)\."),  # YCF_L
}

def scan(iterable, pattern=None):
    """Scan for matches in an iterable."""
    for item in iterable:
        # if you want only one, add a comma:
        #     reference, = pattern.findall(item)
        # but it's less reusable.
        matches = pattern.findall(item)
        yield matches
Then you can do this:
hits = scan(urls, pattern=patterns['url_refs'])
references = (item[0] for item in hits)
and feed references to your other functions. You can scan for more things this way, and I'd guess it's faster too.
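Put end to end, a runnable sketch of this approach (with a shortened URL list) looks like:

```python
import re

patterns = {
    'url_refs': re.compile(r"(\d+[a-z]*)\."),  # digits plus optional letters, before a dot
}

def scan(iterable, pattern=None):
    """Yield the list of matches for each item in the iterable."""
    for item in iterable:
        yield pattern.findall(item)

urls = [
    'https://www.richmondfed.org/-/media/richmondfedorg/press_room/speeches/president_jeff_lacker/2017/pdf/lacker_speech_20170303.pdf',
    'http://www.federalreserve.gov/newsevents/speech/powell20160929a.htm',
]

hits = scan(urls, pattern=patterns['url_refs'])
references = [item[0] for item in hits]  # assumes every URL has at least one match
print(references)  # → ['20170303', '20160929a']
```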