Removing special characters/punctuation from the end of URLs in a Python list

Date: 2018-08-31 15:11:57

Tags: python regex python-3.x

I am writing Python code to extract all URLs from an input file whose rows contain Twitter content (tweets). While doing so, I noticed that several of the URLs extracted into the Python list carry trailing special characters or punctuation, which prevents me from parsing them further to get the base URL. My question is: how can I identify and remove the special characters at the end of each URL in the list?

Current output:

["https://twitter.com/GVNyqWEu5u", "https://twitter.com/GVNyqWEu5u'", 'https://twitter.com/GVNyqWEu5u@#', 'https://twitter.com/GVNyqWEu5u"']

Desired output:

['https://twitter.com/GVNyqWEu5u', 'https://twitter.com/GVNyqWEu5u', 'https://twitter.com/GVNyqWEu5u', 'https://twitter.com/GVNyqWEu5u']

Note that not all elements of the "Current output" list have trailing special characters/punctuation. The task is to identify and strip the characters/punctuation only from the list elements that have them.

I am using the following regex to extract Twitter URLs from the tweet text: lst = re.findall(r'(http.?://[^\s]+)', text). Is it possible to remove the trailing special characters/punctuation at this step itself?
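One way to do this right at the extraction step (a sketch, not from the original post, using hypothetical sample text) is to strip trailing punctuation from each match with str.rstrip and string.punctuation; the https?:// pattern below is a slightly tightened variant of the http.?:// pattern above:

```python
import re
import string

# Hypothetical sample tweet text for illustration.
text = 'see https://twitter.com/GVNyqWEu5u" and https://twitter.com/GVNyqWEu5u@#'

# Extract the URLs, then strip any run of trailing ASCII punctuation.
lst = [u.rstrip(string.punctuation) for u in re.findall(r'https?://\S+', text)]
print(lst)  # ['https://twitter.com/GVNyqWEu5u', 'https://twitter.com/GVNyqWEu5u']
```

One caveat: str.rstrip(string.punctuation) also removes a legitimate trailing slash or closing parenthesis, which may or may not be desirable for your data.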

Full code:

import urllib.request, urllib.parse, urllib.error
from bs4 import BeautifulSoup
from socket import timeout
import ssl
import re
import csv

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

count = 0
file = "Test.CSV"
with open(file,'r', encoding='utf-8') as f, open('output_themes_1.csv', 'w', newline='', encoding='utf-8') as ofile:
    next(f)
    reader = csv.reader(f)
    writer = csv.writer(ofile)
    fir = 'S.No.', 'Article_Id', 'Validity', 'Content', 'Geography', 'URL'
    writer.writerow(fir)
    for line in reader:
        count = count+1
        text = line[5]
        lst = re.findall(r'(http.?://[^\s]+)', text)
        if not lst:
            x = count, line[0], 'Empty List', text, line[8], line[6]
            print (x)
            writer.writerow(x)
        else:
            try:
                for url in lst:
                    try:
                        html = urllib.request.urlopen(url, context=ctx, timeout=60).read()
                        #html = urllib.request.urlopen(urllib.parse.quote(url, errors='ignore'), context=ctx).read()
                        soup = BeautifulSoup(html, 'html.parser')
                        title = soup.title.string
                        str_title = str (title)
                        if 'Twitter' in str_title:
                            if len(lst) > 1: break
                            else: continue
                        else:
                            y = count, line[0], 'Parsed', str_title, line[8], url
                            print (y)
                            writer.writerow(y)
                    except UnicodeEncodeError as e:
                        b_url = url.encode('ascii', errors='ignore')
                        n_url = b_url.decode("utf-8")
                        try:
                            html = urllib.request.urlopen(n_url, context=ctx, timeout=90).read()
                            soup = BeautifulSoup(html, 'html.parser')
                            title = soup.title.string
                            str_title = str (title)
                            if 'Twitter' in str_title:
                                if len(lst) > 1: break
                                else: continue
                            else:
                                z = count, line[0], 'Parsed_2', str_title, line[8], url
                                print (z)
                                writer.writerow(z)
                        except Exception as e:
                            a = count, line[0], str(e), text, line[8], url
                            print (a)
                            writer.writerow(a)
            except Exception as e:
                b = count, line[0], str(e), text, line[8], url
                print (b)
                writer.writerow(b)
print ('Total Rows Analyzed:', count)

3 Answers:

Answer 0 (score: 1):

Assuming the special characters occur only at the end of the string, you can use:

mydata = ['https://twitter.com/GVNyqWEu5u', "https://twitter.com/GVNyqWEu5u'", 'https://twitter.com/GVNyqWEu5u@#', 'https://twitter.com/GVNyqWEu5u"']
mydata = [re.sub(r'[^a-zA-Z0-9]+$', '', item) for item in mydata]
print(mydata)

This prints:

['https://twitter.com/GVNyqWEu5u', 'https://twitter.com/GVNyqWEu5u', 'https://twitter.com/GVNyqWEu5u', 'https://twitter.com/GVNyqWEu5u']
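Because the character class is anchored with +$, a whole run of trailing non-alphanumerics is removed in one pass, so mixed endings such as @# or '" collapse together. A quick check with two of the sample URLs from the question:

```python
import re

urls = ['https://twitter.com/GVNyqWEu5u@#', "https://twitter.com/GVNyqWEu5u'\""]
# Strip any run of non-alphanumeric characters from the end of each URL.
cleaned = [re.sub(r'[^a-zA-Z0-9]+$', '', u) for u in urls]
print(cleaned)  # ['https://twitter.com/GVNyqWEu5u', 'https://twitter.com/GVNyqWEu5u']
```

Note that a trailing slash would also be stripped, since / is not alphanumeric.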

Answer 1 (score: 0):

Assuming your list is called urls:

def remove_special_chars(url, char_list=None):
    if char_list is None:
        # Build your own default list here
        char_list = ['#', '%']
    for character in char_list:
        if url.endswith(character):
            return remove_special_chars(url[:-1], char_list)
    return url

urls = [remove_special_chars(url) for url in urls]

If you want to strip a different set of special characters, just change the default list or pass an appropriate list as an argument.
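For example, passing string.punctuation covers all ASCII punctuation at once (a usage sketch; the function is reproduced from the answer above so the snippet runs on its own):

```python
import string

def remove_special_chars(url, char_list=None):
    if char_list is None:
        # Build your own default list here
        char_list = ['#', '%']
    for character in char_list:
        if url.endswith(character):
            # Drop the last character and re-check the new ending.
            return remove_special_chars(url[:-1], char_list)
    return url

urls = ['https://twitter.com/GVNyqWEu5u@#', 'https://twitter.com/GVNyqWEu5u"']
cleaned = [remove_special_chars(u, list(string.punctuation)) for u in urls]
print(cleaned)  # ['https://twitter.com/GVNyqWEu5u', 'https://twitter.com/GVNyqWEu5u']
```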

Answer 2 (score: 0):

You can try:

lst = [re.sub(r'[=" ]$', '', i) for i in re.findall(r'(http.?://[^\s]+)', text)]

You can add more characters to the character class in the sub as needed.
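For instance, extending the class to cover the endings in the question's sample and adding + so that multi-character runs like @# are removed too (a sketch; the exact characters depend on your data):

```python
import re

# Hypothetical sample text for illustration.
text = 'https://twitter.com/GVNyqWEu5u" plus https://twitter.com/GVNyqWEu5u@#'
lst = [re.sub(r'["\'=@# ]+$', '', i) for i in re.findall(r'(http.?://[^\s]+)', text)]
print(lst)  # ['https://twitter.com/GVNyqWEu5u', 'https://twitter.com/GVNyqWEu5u']
```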