I have about 10,000 XML files with a similar structure that I want to convert into a single CSV file. Each XML file looks like this:
<?xml version='1.0' encoding='UTF-8'?>
<S:Envelope xmlns:S="http://schemas.xmlsoap.org/soap/envelope/">
<S:Body>
<ns7:GetStopMonitoringServiceResponse xmlns:ns3="http://www.siri.org.uk/siri" xmlns:ns4="http://www.ifopt.org.uk/acsb" xmlns:ns5="http://www.ifopt.org.uk/ifopt" xmlns:ns6="http://datex2.eu/schema/1_0/1_0" xmlns:ns7="http://new.webservice.namespace">
<Answer>
<ns3:ResponseTimestamp>2019-03-31T09:00:52.912+03:00</ns3:ResponseTimestamp>
<ns3:ProducerRef>ISR Siri Server (141.10)</ns3:ProducerRef>
<ns3:ResponseMessageIdentifier>276480603</ns3:ResponseMessageIdentifier>
<ns3:RequestMessageRef>0100700:1351669188:4684</ns3:RequestMessageRef>
<ns3:Status>true</ns3:Status>
<ns3:StopMonitoringDelivery version="IL2.71">
<ns3:ResponseTimestamp>2019-03-31T09:00:52.912+03:00</ns3:ResponseTimestamp>
<ns3:Status>true</ns3:Status>
<ns3:MonitoredStopVisit>
<ns3:RecordedAtTime>2019-03-31T09:00:52.000+03:00</ns3:RecordedAtTime>
<ns3:ItemIdentifier>-881202701</ns3:ItemIdentifier>
<ns3:MonitoringRef>20902</ns3:MonitoringRef>
<ns3:MonitoredVehicleJourney>
<ns3:LineRef>23925</ns3:LineRef>
<ns3:DirectionRef>2</ns3:DirectionRef>
<ns3:FramedVehicleJourneyRef>
<ns3:DataFrameRef>2019-03-31</ns3:DataFrameRef>
<ns3:DatedVehicleJourneyRef>36962685</ns3:DatedVehicleJourneyRef>
</ns3:FramedVehicleJourneyRef>
<ns3:PublishedLineName>15</ns3:PublishedLineName>
<ns3:OperatorRef>15</ns3:OperatorRef>
<ns3:DestinationRef>26020</ns3:DestinationRef>
<ns3:OriginAimedDepartureTime>2019-03-31T08:35:00.000+03:00</ns3:OriginAimedDepartureTime>
<ns3:VehicleLocation>
<ns3:Longitude>34.78000259399414</ns3:Longitude>
<ns3:Latitude>32.042293548583984</ns3:Latitude>
</ns3:VehicleLocation>
<ns3:VehicleRef>37629301</ns3:VehicleRef>
<ns3:MonitoredCall>
<ns3:StopPointRef>20902</ns3:StopPointRef>
<ns3:ExpectedArrivalTime>2019-03-31T09:03:00.000+03:00</ns3:ExpectedArrivalTime>
</ns3:MonitoredCall>
</ns3:MonitoredVehicleJourney>
</ns3:MonitoredStopVisit>
</ns3:StopMonitoringDelivery>
</Answer>
</ns7:GetStopMonitoringServiceResponse>
</S:Body>
</S:Envelope>
The example above shows a single MonitoredStopVisit nested element, but each XML file contains roughly 4,000 of them. A full example XML can be found here.
I want to convert all 10K files into one CSV, where each record corresponds to one MonitoredStopVisit element, so the CSV should look like this:
Currently, this is my architecture:
Here is the XML-to-DataFrame code:
import pandas as pd

def xml_to_df(xml_file):
    from lxml import objectify
    xml_content = xml_file.read()
    obj = objectify.fromstring(xml_content)
    df_cols = [
        'RecordedAtTime',
        'MonitoringRef',
        'LineRef',
        'DirectionRef',
        'PublishedLineName',
        'OperatorRef',
        'DestinationRef',
        'OriginAimedDepartureTime',
        'Longitude',
        'Latitude',
        'VehicleRef',
        'StopPointRef',
        'ExpectedArrivalTime',
        'AimedArrivalTime'
    ]
    tempdf = pd.DataFrame(columns=df_cols)
    arr_of_vals = [""] * 14
    for i in obj.getiterator():
        # a new MonitoredStopVisit (or a Status of "false") closes the current record
        if "MonitoredStopVisit" in i.tag or "Status" in i.tag and "false" in str(i):
            if arr_of_vals[0] != "" and (arr_of_vals[8] and arr_of_vals[9]):
                s = pd.Series(arr_of_vals, index=df_cols)
                # only append the row if it is not already in the DataFrame
                if tempdf[(tempdf == s).all(axis=1)].empty:
                    tempdf = tempdf.append(s, ignore_index=True)
            arr_of_vals = [""] * 14
        elif "RecordedAtTime" in i.tag:
            arr_of_vals[0] = str(i)
        elif "MonitoringRef" in i.tag:
            arr_of_vals[1] = str(i)
        elif "LineRef" in i.tag:
            arr_of_vals[2] = str(i)
        elif "DestinationRef" in i.tag:
            arr_of_vals[6] = str(i)
        elif "OriginAimedDepartureTime" in i.tag:
            arr_of_vals[7] = str(i)
        elif "Longitude" in i.tag:
            if str(i) == "345353":
                print("Lon: " + str(i))
            arr_of_vals[8] = str(i)
        elif "Latitude" in i.tag:
            arr_of_vals[9] = str(i)
        elif "VehicleRef" in i.tag:
            arr_of_vals[10] = str(i)
        elif "ExpectedArrivalTime" in i.tag:
            arr_of_vals[12] = str(i)

    # flush the last record after the loop ends
    if arr_of_vals[0] != "" and (arr_of_vals[8] and arr_of_vals[9]):
        s = pd.Series(arr_of_vals, index=df_cols)
        if tempdf[(tempdf == s).all(axis=1)].empty:
            tempdf = tempdf.append(s, ignore_index=True)
    return tempdf
The problem is that with 8 worker processes this takes about 10 hours for the 10K files, and when I check CPU/memory usage I can see they are not fully utilized.
Any ideas how this can be improved? My next step is threading, but maybe there are other applicable approaches. Note that the order of the records does not matter - I can sort them later.
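For reference, a minimal sketch of the per-file, process-pool setup described above; the pool size of 8, the input glob and the output path are assumptions, and it reuses xml_to_df from the snippet above:

import glob
import multiprocessing as mp
import pandas as pd

def process_file(path):
    # each worker parses one XML file and returns a DataFrame of its MonitoredStopVisit records
    with open(path, "rb") as f:
        return xml_to_df(f)

if __name__ == "__main__":
    xml_files = glob.glob("xml_files/*.xml")   # hypothetical input folder
    with mp.Pool(processes=8) as pool:
        frames = pool.map(process_file, xml_files)
    # concatenate once at the end instead of appending row by row
    pd.concat(frames, ignore_index=True).to_csv("output.csv", index=False)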
Answer 0 (score: 1)
Here is my solution using pandas:
Computation time is about 0.4 s per 5 MB file.
import xml.etree.ElementTree as ET
import re
import pandas as pd
import os

def collect_data(xml_file):
    # create xml object
    root = ET.parse(xml_file).getroot()

    # collect raw data
    out_data = []
    for element in root.iter():
        # get tag name
        tag = re.sub('{.*?}', '', element.tag)

        # add break segment element
        if tag == 'RecordedAtTime':
            out_data.append('break')
        if tag in tag_list:
            out_data.append((tag, element.text))

    # get break indexes (append a sentinel index so the last record is not dropped)
    break_index = [i for i, x in enumerate(out_data) if x == 'break']
    break_index.append(len(out_data))

    # break list into parts
    list_data = []
    for i in range(len(break_index) - 1):
        list_data.append(out_data[break_index[i]:break_index[i + 1]])

    # check for each value in data
    final_data = []
    for item in list_data:
        # delete break element and convert list into dictionary
        del item[item.index('break')]
        data_dictionary = dict(item)

        if 'RecordedAtTime' in data_dictionary.keys():
            recorded_at_time = data_dictionary.get('RecordedAtTime')
        else:
            recorded_at_time = ''
        if 'MonitoringRef' in data_dictionary.keys():
            monitoring_ref = data_dictionary.get('MonitoringRef')
        else:
            monitoring_ref = ''
        if 'LineRef' in data_dictionary.keys():
            line_ref = data_dictionary.get('LineRef')
        else:
            line_ref = ''
        if 'DirectionRef' in data_dictionary.keys():
            direction_ref = data_dictionary.get('DirectionRef')
        else:
            direction_ref = ''
        if 'PublishedLineName' in data_dictionary.keys():
            published_line_name = data_dictionary.get('PublishedLineName')
        else:
            published_line_name = ''
        if 'OperatorRef' in data_dictionary.keys():
            operator_ref = data_dictionary.get('OperatorRef')
        else:
            operator_ref = ''
        if 'DestinationRef' in data_dictionary.keys():
            destination_ref = data_dictionary.get('DestinationRef')
        else:
            destination_ref = ''
        if 'OriginAimedDepartureTime' in data_dictionary.keys():
            origin_aimed_departure_time = data_dictionary.get('OriginAimedDepartureTime')
        else:
            origin_aimed_departure_time = ''
        if 'Longitude' in data_dictionary.keys():
            longitude = data_dictionary.get('Longitude')
        else:
            longitude = ''
        if 'Latitude' in data_dictionary.keys():
            latitude = data_dictionary.get('Latitude')
        else:
            latitude = ''
        if 'VehicleRef' in data_dictionary.keys():
            vehicle_ref = data_dictionary.get('VehicleRef')
        else:
            vehicle_ref = ''
        if 'StopPointRef' in data_dictionary.keys():
            stop_point_ref = data_dictionary.get('StopPointRef')
        else:
            stop_point_ref = ''
        if 'ExpectedArrivalTime' in data_dictionary.keys():
            expected_arrival_time = data_dictionary.get('ExpectedArrivalTime')
        else:
            expected_arrival_time = ''
        if 'AimedArrivalTime' in data_dictionary.keys():
            aimed_arrival_time = data_dictionary.get('AimedArrivalTime')
        else:
            aimed_arrival_time = ''

        final_data.append((recorded_at_time, monitoring_ref, line_ref, direction_ref, published_line_name,
                           operator_ref, destination_ref, origin_aimed_departure_time, longitude, latitude,
                           vehicle_ref, stop_point_ref, expected_arrival_time, aimed_arrival_time))

    return final_data

# setup tags list for checking
tag_list = ['RecordedAtTime', 'MonitoringRef', 'LineRef', 'DirectionRef', 'PublishedLineName', 'OperatorRef',
            'DestinationRef', 'OriginAimedDepartureTime', 'Longitude', 'Latitude', 'VehicleRef', 'StopPointRef',
            'ExpectedArrivalTime', 'AimedArrivalTime']

# collect data from each file
save_data = []
for file_name in os.listdir(os.getcwd()):
    if file_name.endswith('.xml'):
        save_data.append(collect_data(file_name))

# merge list of lists
flat_list = []
for sublist in save_data:
    for item in sublist:
        flat_list.append(item)

# load data into data frame
data = pd.DataFrame(flat_list, columns=tag_list)

# save data to file
data.to_csv('data.csv', index=False)
Answer 1 (score: 0)
So it seems the problem was the use of pandas DataFrames and Series. With the code above, processing one XML file with ~4,000 records took 4-120 seconds, and the time kept growing as the program continued to run.
Using Python lists or numpy matrices (which are also more convenient for writing out CSV) reduces the runtime dramatically - processing each XML file now takes 0.1-0.5 seconds.
Each time, I append the newly processed records with:
records = np.append(records, new_records, axis=0)
which is equivalent to:
tempdf = tempdf.append(s, ignore_index=True)
but significantly faster.
Hope this helps anyone who runs into a similar problem!
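As an illustration of the plain-list variant of this idea, here is a minimal sketch that accumulates rows in a Python list and writes the CSV once at the end; the parse_visits helper and the file paths are placeholders, not the actual code used above:

import csv
import glob

rows = []  # plain Python list; appending one record at a time is cheap
for path in glob.glob("xml_files/*.xml"):   # hypothetical input folder
    rows.extend(parse_visits(path))         # placeholder: returns a list of 14-field tuples per file

with open("output.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["RecordedAtTime", "MonitoringRef", "LineRef", "DirectionRef",
                     "PublishedLineName", "OperatorRef", "DestinationRef",
                     "OriginAimedDepartureTime", "Longitude", "Latitude", "VehicleRef",
                     "StopPointRef", "ExpectedArrivalTime", "AimedArrivalTime"])
    writer.writerows(rows)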
Answer 2 (score: 0)
Actually, consider XSLT, a special-purpose language designed to transform XML files into other XML or even text files such as CSV. The only third-party library needed is Python's lxml, which can run XSLT 1.0 scripts without heavier, general-purpose analytics tools such as Pandas or Numpy.
In fact, because XSLT is an independent, industry-standard language, it is portable: it can be run from any language with an XSLT library (e.g. Java, PHP, Perl, C#, VB) or from standalone 1.0, 2.0, or 3.0 processors (e.g. Xalan, Saxon), all of which Python can invoke as a command-line subprocess.
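As an aside, a minimal sketch of calling such a standalone processor from Python via subprocess, assuming xsltproc is installed on the system (the file names here are placeholders):

import subprocess

# run the stylesheet against one XML file with the standalone xsltproc processor;
# the transformed text (here, CSV rows) is captured from stdout
result = subprocess.run(
    ["xsltproc", "XSLTScript.xsl", "some_file.xml"],  # placeholder file names
    capture_output=True, text=True, check=True,
)
print(result.stdout)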
XSLT (save the following as an .xsl file, which is just a special kind of .xml file)
<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:S="http://schemas.xmlsoap.org/soap/envelope/"
                xmlns:ns3="http://www.siri.org.uk/siri"
                xmlns:ns4="http://www.ifopt.org.uk/acsb"
                xmlns:ns5="http://www.ifopt.org.uk/ifopt"
                xmlns:ns6="http://datex2.eu/schema/1_0/1_0"
                xmlns:ns7="http://new.webservice.namespace">
  <xsl:output method="text" indent="yes" omit-xml-declaration="yes"/>
  <xsl:strip-space elements="*"/>

  <xsl:template match="/S:Envelope/S:Body/ns7:GetStopMonitoringServiceResponse/Answer">
    <xsl:apply-templates select="ns3:StopMonitoringDelivery"/>
  </xsl:template>

  <xsl:template match="ns3:StopMonitoringDelivery">
    <!-- HEADERS -->
    <!-- <xsl:text>RecordedAtTime,MonitoringRef,LineRef,DirectionRef,PublishedLineName,OperatorRef,DestinationRef,OriginAimedDepartureTime,Longitude,Latitude,VehicleRef,StopPointRef,ExpectedArrivalTime,AimedArrivalTime
</xsl:text> -->
    <xsl:apply-templates select="ns3:MonitoredStopVisit"/>
    <xsl:text>
</xsl:text>
  </xsl:template>

  <xsl:template match="ns3:MonitoredStopVisit">
    <xsl:variable name="delim">,</xsl:variable>
    <xsl:variable name="quote">"</xsl:variable>
    <!-- DATA ROWS -->
    <xsl:value-of select="concat($quote, ns3:RecordedAtTime, $quote, $delim,
                                 $quote, ns3:MonitoringRef, $quote, $delim,
                                 $quote, ns3:MonitoredVehicleJourney/ns3:LineRef, $quote, $delim,
                                 $quote, ns3:MonitoredVehicleJourney/ns3:DirectionRef, $quote, $delim,
                                 $quote, ns3:MonitoredVehicleJourney/ns3:PublishedLineName, $quote, $delim,
                                 $quote, ns3:MonitoredVehicleJourney/ns3:OperatorRef, $quote, $delim,
                                 $quote, ns3:MonitoredVehicleJourney/ns3:DestinationRef, $quote, $delim,
                                 $quote, ns3:MonitoredVehicleJourney/ns3:OriginAimedDepartureTime, $quote, $delim,
                                 $quote, ns3:MonitoredVehicleJourney/ns3:VehicleLocation/ns3:Longitude, $quote, $delim,
                                 $quote, ns3:MonitoredVehicleJourney/ns3:VehicleLocation/ns3:Latitude, $quote, $delim,
                                 $quote, ns3:MonitoredVehicleJourney/ns3:VehicleRef, $quote, $delim,
                                 $quote, ns3:MonitoredVehicleJourney/ns3:MonitoredCall/ns3:StopPointRef, $quote, $delim,
                                 $quote, ns3:MonitoredVehicleJourney/ns3:MonitoredCall/ns3:ExpectedArrivalTime, $quote, $delim,
                                 $quote, ns3:MonitoredVehicleJourney/ns3:MonitoredCall/ns3:AimedArrivalTime, $quote, $delim
                                 )"/>
  </xsl:template>
</xsl:stylesheet>
Python (no appending of lists, arrays, or data frames)
import glob              # TO RETRIEVE ALL XML FILES
import lxml.etree as et  # TO PARSE XML AND RUN XSLT

xml_path = "/path/to/xml/files"

# PARSE XSLT
xsl = et.parse('XSLTScript.xsl')

# BUILD CSV
with open("MonitoredStopVisits.csv", 'w') as csv_file:
    # HEADER
    csv_file.write('RecordedAtTime,MonitoringRef,LineRef,DirectionRef,PublishedLineName,'
                   'OperatorRef,DestinationRef,OriginAimedDepartureTime,Longitude,Latitude,'
                   'VehicleRef,StopPointRef,ExpectedArrivalTime,AimedArrivalTime\n')

    # DATA ROWS (loop variable renamed so it does not shadow the open CSV handle)
    for xml_file in glob.glob(xml_path + "/**/*.xml", recursive=True):
        # LOAD XML AND XSL SCRIPT
        xml = et.parse(xml_file)

        # TRANSFORM XML TO STRING RESULT TREE
        transform = et.XSLT(xsl)
        result = str(transform(xml))

        # WRITE TO CSV
        csv_file.write(result)