Can anyone suggest how I can reduce a given JSON to a simpler JSON based on the duplicate values present in it? For example, if the following is my JSON data,
{
"result": [{
"LOCATION": [
"US",
"United States",
"United States",
"Honolulu",
"Hawaii",
"Chicago",
"13th District",
"United States"
],
"ORGANIZATION": [
"Columbia University",
"Harvard Law School",
"Harvard Law School",
"University of Chicago Law School",
"Illinois Senate",
"House of Representatives"
]
}]
}
then I need the output after the JSON reduction to be:
{
"result": [{
"LOCATION": [
"US",
"United States (3)",
"Honolulu",
"Hawaii",
"Chicago",
"13th District"
],
"ORGANIZATION": [
"Columbia University",
"Harvard Law School (2)",
"University of Chicago Law School",
"Illinois Senate",
"House of Representatives"
]
}]
}
That is, each set of duplicate values must be replaced by a single value together with the number of times that value occurs in the JSON. Is there a library that can do this?
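For reference, the transformation I have in mind boils down to counting each value and appending the count when it is greater than one, something like this rough sketch in plain Java (the ReduceExample class and reduceValues method are just names I made up, not from any library):

import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

class ReduceExample {
    // Rough idea of the reduction for one list of values (no particular library assumed).
    static List<String> reduceValues(List<String> values) {
        // LinkedHashMap keeps the values in order of first appearance
        Map<String, Long> counts = values.stream()
                .collect(Collectors.groupingBy(v -> v, LinkedHashMap::new, Collectors.counting()));
        return counts.entrySet().stream()
                .map(e -> e.getValue() > 1 ? e.getKey() + " (" + e.getValue() + ")" : e.getKey())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(reduceValues(Arrays.asList(
                "US", "United States", "United States", "Honolulu",
                "Hawaii", "Chicago", "13th District", "United States")));
        // prints: [US, United States (3), Honolulu, Hawaii, Chicago, 13th District]
    }
}

I would just like to avoid writing this by hand for every nesting level if something already exists.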
Thanks in advance.
Answer 0 (score: 1)
Please post your code and state your requirement more clearly. Is there some POJO available for this JSON? There is no library that will directly go into the arrays and change them according to your requirement, so you will have to write your own custom parser.
I created a sample code snippet by assuming three things about your input, namely that its structure stays the same as shown below (the assumptions are marked as ASSUMPTION [1], [2] and [3] in the example):
Example
{
"result": [//ASSUMPTION [1]
{ //ASSUMPTION [2]
"LOCATION": [ //ASSUMPTION [3]
"US",
"United States",
"United States",
"Honolulu",
"Hawaii",
"Chicago",
"13th District",
"United States"
],
"ORGANIZATION": [ //ASSUMPTION [3]
"Columbia University",
"Harvard Law School",
"Harvard Law School",
"University of Chicago Law School",
"Illinois Senate",
"House of Representatives"
]
},
{ //ASSUMPTION [2]
"LOCATION": [ //ASSUMPTION [3]
"United States",
"United States"
],
"ORGANIZATION": [ //ASSUMPTION [3]
"Harvard Law School",
"Harvard Law School"
]
}
]
}
After passing it through the code, I got this result:
[{
"LOCATION": ["US", "Honolulu", "Hawaii", "Chicago", "13th District", "United States (3)"],
"ORGANIZATION": ["Columbia University", "University of Chicago Law School", "Illinois Senate", "House of Representatives", "Harvard Law School (2)"]
}, {
"LOCATION": ["United States (2)"],
"ORGANIZATION": ["Harvard Law School (2)"]
}]
It is very crude code without any optimization, but you will get the idea.
CODE SNIPPET:
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;

import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonReducer {

    public static void main(String[] args) throws IOException {
        ObjectMapper mapper = new ObjectMapper();
        // Convert the JSON file to a Map (ASSUMPTION 1: the top level is an object with a "result" array)
        Map<String, Object> map = mapper.readValue(new File("c://drive//test.json"),
                new TypeReference<Map<String, Object>>() {
                });
        System.out.println("map " + map);
        // Value of the "result" element, i.e. the array
        List<Map<String, Object>> result = (List<Map<String, Object>>) map.get("result");
        System.out.println("result " + result);
        // Traverse the array; each element inside is one more object (ASSUMPTION 2)
        for (Map<String, Object> resultMap : result) {
            // Traverse each object; every value is a list of strings (ASSUMPTION 3)
            for (Entry<String, Object> entry : resultMap.entrySet()) {
                List<String> insideMostArray = (List<String>) entry.getValue();
                // Values already seen, kept per list so that duplicates are counted per key
                List<String> traversed = new ArrayList<>();
                // Duplicate values and the total number of times each occurs
                Map<String, Integer> removedEntry = new HashMap<>();
                for (String value : insideMostArray) {
                    if (traversed.contains(value)) {
                        // The second occurrence starts the count at 2, later occurrences increment it
                        removedEntry.put(value, removedEntry.get(value) == null ? 2 : removedEntry.get(value) + 1);
                    } else {
                        traversed.add(value);
                    }
                }
                // Remove all duplicate entries and replace them with a single "value (count)" entry
                for (Entry<String, Integer> duplicate : removedEntry.entrySet()) {
                    insideMostArray.removeAll(Collections.singleton(duplicate.getKey()));
                    insideMostArray.add(duplicate.getKey() + " (" + duplicate.getValue() + ")");
                }
            }
        }
        String optimizedJson = mapper.writeValueAsString(result);
        System.out.println(optimizedJson);
    }
}
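One thing to note: because removeAll followed by add appends the merged entry, every "value (n)" ends up at the end of its list, which is why the order of the result above differs from the output requested in the question. If the original order matters, the body of the entry loop could instead rebuild each list in first-seen order, roughly like this (only a sketch, not part of the snippet above; it also needs java.util.LinkedHashMap in the imports):

// Possible replacement for the body of the entry loop that keeps first-seen order.
List<String> values = (List<String>) entry.getValue();
Map<String, Integer> counts = new LinkedHashMap<>();   // LinkedHashMap preserves insertion order
for (String v : values) {
    counts.merge(v, 1, Integer::sum);                  // count every occurrence of each value
}
List<String> reduced = new ArrayList<>();
for (Entry<String, Integer> e : counts.entrySet()) {
    reduced.add(e.getValue() > 1 ? e.getKey() + " (" + e.getValue() + ")" : e.getKey());
}
entry.setValue(reduced);                               // swap the original list for the reduced one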