Strings containing special characters are split into new columns when parsing them

Time: 2019-01-03 08:04:19

Tags: python csv parsing

I'm having trouble parsing HTTP requests. I have data like this in a .txt file at the link below:

https://drive.google.com/open?id=1RSyCYgxBCJnxAXDInyIs1cOp_3EoUyqG

I'm trying to convert this data to csv format, but special characters such as ';' split the data into new columns.

Example: the data in the "Accept" column should look like text/xml;q=0.6, application/rtf;q=0.7, image/*

But when I run the code, this column only contains text/xml, and q=0.6 is moved to a new column.

One solution I found was to convert the single-quoted string to a double-quoted one and then store that string, but that did not work.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
import urllib.parse
from sklearn import tree
from sklearn.ensemble import RandomForestClassifier
import io
from sklearn.svm import LinearSVC
from sklearn.metrics import confusion_matrix
import os
import json
import csv
from itertools import islice
import numpy as np
import pandas as pd


fields = ['Start - Id', 'class', 'Method', 'Url', 'Protocol', 'Content-Length','Content-Language','Content-Encoding','Content-Location','Content-MD5','Content-Type','Expires','Last-Modified', 'Host', 'Connection', 'Accept', 'Accept-Charset', 'Accept-Encoding', 'Accept-Language', 'Cache-Control','Client-ip', 'Cookie', 'Cookie2', 'Date', 'ETag', 'Expect', 'From', 'If-Modified-Since', 'If-Unmodified-Since', 'If-Match', 'If-None-Match', 'If-Range','Max-Forwards', 'MIME-Version', 'Pragma', 'Proxy-Authorization', 'Authorization', 'Range', 'Referer', 'TE', 'Trailer', 'User-Agent', 'UA-CPU', 'UA-Disp', 'UA-OS', 'UA-Color', 'UA-Pixels', 'Via', 'Transfer-Encoding', 'Upgrade', 'Warning', 'X-Forwarded-For', 'X-Serial-Number', '~~~~~','----']

listofKeys = dict.fromkeys(fields)

def init(file_out):
    with open(file_out, 'w', newline='', encoding="utf-8") as csvfile:
        csvwriter = csv.writer(csvfile, delimiter="\t")
        csvwriter.writerow(fields)

def write(file_out, lines):
    with open(file_out, 'a', newline='', encoding="utf-8") as csvfile:
        csvwriter = csv.writer(csvfile, delimiter="\t")
        row = []
        N = len(lines)
        foundP = False
        for i in range(N-1):
            line = lines[i].strip()
            if len(line)>0:
                if i==2:
                    listofKeys['Method'] = line.split(" ")[0]
                    listofKeys['Url'] = line.split(" ")[1]
                    listofKeys['Protocol'] = line.split(" ")[2]
                    if(line.startswith("PUT") or line.startswith("POST")):
                        foundP = True
                elif i==N-3 :
                    if foundP == True:
                        listofKeys['Url'] += (line)
                else:
                    index = line.find(':')
                    key = line[0:index].strip()
                    value = line[index+1:].strip()
                    listofKeys[key] = str(value)
        for keys in fields:
            row.append(listofKeys[keys])
        print(row)
        csvwriter.writerow(row)


def convertText2Csv(file_in, file_out):
    init(file_out)
    with open(file_in, 'r') as infile:
        lines = []
        count = 0
        for line in infile:
            if line.startswith("Start"):
                count+=1
                print("-------------------------------------------------------------------Request #",count," -------------------------------------------------------------------------")
                lines.append(line)
            elif line.startswith("End"):
                lines.append(line)
                write(file_out, lines)
                lines = []
            else:
                lines.append(line)

csvFile = 'test.csv'
textFile = 'test.txt'
convertText2Csv(textFile, csvFile)
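A quick way to sanity-check the output format is a small round trip with the same delimiter (a standalone sketch with inline data, not the original files):

```python
import csv
import io

# Write one row with a tab delimiter; the field containing ';' and ','
# is a single cell as far as the csv module is concerned.
buf = io.StringIO()
csv.writer(buf, delimiter="\t").writerow(
    ["1", "GET", "text/xml;q=0.6, application/rtf;q=0.7, image/*"]
)

# Read it back with the same delimiter: still exactly three cells.
row = next(csv.reader(io.StringIO(buf.getvalue()), delimiter="\t"))
print(row)
```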

The result I get is given at the link https://drive.google.com/open?id=1rLPdbuZkS6pcDQqHZZP6ck9H8XbnMPWM

I just want to convert the data to a csv file, with each column containing its specific value (if present), special characters included.

1 Answer:

Answer 0 (score: 1)

Your csv file is perfectly correct.

Here is the content of the Accept column when the file is loaded into LibreOffice Calc with "\t" specified as the only delimiter:

Accept
*/*
*/*
*/*
text/xml;q=0.6, application/rtf;q=0.7, image/*

Your real problem is that the program you use to open the csv file is too clever (actually, rather dumb!): it assumes the user is too dumb to know what the delimiters are and tries to guess them. It assumes that ';' is also a delimiter, which is a wrong guess here.
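One way to load the file without any delimiter guessing is to pass the separator explicitly, for example with pandas (a minimal sketch with inline data, assuming the tab-separated layout produced by the script above):

```python
import io
import pandas as pd

# Two tab-separated columns; the Accept value contains ';' and ','.
data = "Method\tAccept\nGET\ttext/xml;q=0.6, application/rtf;q=0.7, image/*\n"

# sep="\t" tells the parser exactly which delimiter to use,
# so the ';' inside the Accept field is never treated as one.
df = pd.read_csv(io.StringIO(data), sep="\t")
print(df["Accept"].iloc[0])
```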

Long story short: you are just trying to display a correct csv file with a dumb spreadsheet program (maybe Excel?). Excel is a very nice tool, except when it comes to csv files.

As suggested in the comments, the quoting=csv.QUOTE_ALL option, which should be useless here, may be enough to tell it to ignore the delimiters inside the fields...
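For reference, this is what quoting=csv.QUOTE_ALL produces: every field is wrapped in double quotes, which gives delimiter-guessing tools a strong hint about field boundaries (a minimal sketch, not the full script):

```python
import csv
import io

buf = io.StringIO()
writer = csv.writer(buf, delimiter="\t", quoting=csv.QUOTE_ALL)
# With QUOTE_ALL every field is quoted, even ones with no special characters.
writer.writerow(["Accept", "text/xml;q=0.6, application/rtf;q=0.7, image/*"])
print(buf.getvalue())
```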