Map<String, String> phoneBook = people.stream()
        .collect(toMap(Person::getName, Person::getAddress));
When a duplicate key occurs, I get a duplicate key exception.
Is it possible to simply ignore adding that value to the map when a duplicate occurs?
When there is a duplicate, I just want to skip that duplicate key and continue.
Answer 0 (score: 342)
It is possible using the mergeFunction parameter of Collectors.toMap(keyMapper, valueMapper, mergeFunction):
Map<String, String> phoneBook =
        people.stream()
              .collect(Collectors.toMap(
                      Person::getName,
                      Person::getAddress,
                      (address1, address2) -> {
                          System.out.println("duplicate key found!");
                          return address1;
                      }
              ));
mergeFunction is a function that operates on two values associated with the same key. address1 corresponds to the first address that was encountered while collecting the elements, and address2 corresponds to the second one: this lambda just says to keep the first address and discard the second.
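A small variation (not part of the original answer): if you would rather keep the most recently encountered address, the merge function can simply return its second argument instead:

Map<String, String> phoneBook =
        people.stream()
              .collect(Collectors.toMap(
                      Person::getName,
                      Person::getAddress,
                      // keep the address seen last instead of the first one
                      (address1, address2) -> address2
              ));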
Answer 1 (score: 73)
As the JavaDocs say:

If the mapped keys contain duplicates (according to Object.equals(Object)), an IllegalStateException is thrown when the collection operation is performed. If the mapped keys may have duplicates, use toMap(Function keyMapper, Function valueMapper, BinaryOperator mergeFunction) instead.

So you should use toMap(Function keyMapper, Function valueMapper, BinaryOperator mergeFunction) instead. Just provide a merge function that determines which one of the duplicates is put in the map. For example, if you don't care which one, just call:

Map<String, String> phoneBook = people.stream()
        .collect(Collectors.toMap(Person::getName, Person::getAddress, (a1, a2) -> a1));
Answer 2 (score: 2)
The answer from @alaster helped me a lot, but I would like to add something meaningful in case someone is trying to group the values.
For example, if you have two Orders with the same code but a different quantity of product for each, and what you want is to sum the quantities, you can do the following:
List<Order> listOrders = new ArrayList<>();
listOrders.add(new Order("COD_1", 1L));
listOrders.add(new Order("COD_1", 5L));
listOrders.add(new Order("COD_1", 3L));
listOrders.add(new Order("COD_2", 3L));
listOrders.add(new Order("COD_3", 4L));

Map<String, Long> totalByCode = listOrders.stream()
        .collect(Collectors.toMap(Order::getCode, Order::getQuantity, (q1, q2) -> q1 + q2));
System.out.println(totalByCode);
Result:
{COD_3=4, COD_2=3, COD_1=9}
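For reference, a minimal Order class that the snippet above could be working with (a hypothetical sketch, containing only the constructor and getters actually used):

// Hypothetical model matching new Order("COD_1", 1L), getCode() and getQuantity()
class Order {
    private final String code;
    private final Long quantity;

    Order(String code, Long quantity) {
        this.code = code;
        this.quantity = quantity;
    }

    public String getCode() {
        return code;
    }

    public Long getQuantity() {
        return quantity;
    }
}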
Answer 3 (score: 0)
Assuming that people is a List of Person objects:

Map<String, String> phoneBook = people.stream()
        .collect(toMap(Person::getName, Person::getAddress));

Now you need two steps:

1)
people = removeDuplicate(people);

2)
Map<String, String> phoneBook = people.stream()
        .collect(toMap(Person::getName, Person::getAddress));

Here is the method that removes the duplicates:
public static List<Person> removeDuplicate(Collection<Person> list) {
    if (list == null || list.isEmpty()) {
        return null;
    }
    // Note: distinct() relies on Person.equals()/hashCode(), or on the same
    // object references being reused, as in the full example below.
    return list.stream()
            .distinct()
            .collect(Collectors.toList());
}
The full example is added here:
package com.example.khan.vaquar;

import java.util.Arrays;
import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class RemovedDuplicate {

    public static void main(String[] args) {
        Person vaquar = new Person(1, "Vaquar", "Khan");
        Person zidan = new Person(2, "Zidan", "Khan");
        Person zerina = new Person(3, "Zerina", "Khan");

        // Add some random persons
        Collection<Person> duplicateList = Arrays.asList(vaquar, zidan, zerina, vaquar, zidan, vaquar);

        System.out.println("Before removed duplicate list" + duplicateList);

        Collection<Person> nonDuplicateList = removeDuplicate(duplicateList);

        System.out.println("");
        System.out.println("After removed duplicate list" + nonDuplicateList);

        // 1) Solution: working code using the removeDuplicate method
        Map<Object, Object> k = nonDuplicateList.stream().distinct()
                .collect(Collectors.toMap(s1 -> s1.getId(), s1 -> s1));
        System.out.println("");
        System.out.println("Result 1 using method_______________________________________________");
        System.out.println("k" + k);
        System.out.println("_____________________________________________________________________");

        // 2) Solution using inline distinct()
        Map<Object, Object> k1 = duplicateList.stream().distinct()
                .collect(Collectors.toMap(s1 -> s1.getId(), s1 -> s1));
        System.out.println("");
        System.out.println("Result 2 using inline_______________________________________________");
        System.out.println("k1" + k1);
        System.out.println("_____________________________________________________________________");

        // Breaking code: collecting the list that still contains duplicates
        System.out.println("");
        System.out.println("Throwing exception _______________________________________________");
        Map<Object, Object> k2 = duplicateList.stream()
                .collect(Collectors.toMap(s1 -> s1.getId(), s1 -> s1));
        System.out.println("");
        System.out.println("k2" + k2);
        System.out.println("_____________________________________________________________________");
    }

    public static List<Person> removeDuplicate(Collection<Person> list) {
        if (list == null || list.isEmpty()) {
            return null;
        }
        return list.stream().distinct().collect(Collectors.toList());
    }
}

// Model class
class Person {

    public Person(Integer id, String fname, String lname) {
        super();
        this.id = id;
        this.fname = fname;
        this.lname = lname;
    }

    private Integer id;
    private String fname;
    private String lname;

    // Getters and setters
    public Integer getId() {
        return id;
    }

    public void setId(Integer id) {
        this.id = id;
    }

    public String getFname() {
        return fname;
    }

    public void setFname(String fname) {
        this.fname = fname;
    }

    public String getLname() {
        return lname;
    }

    public void setLname(String lname) {
        this.lname = lname;
    }

    @Override
    public String toString() {
        return "Person [id=" + id + ", fname=" + fname + ", lname=" + lname + "]";
    }
}
Result:
Before removed duplicate list[Person [id=1, fname=Vaquar, lname=Khan], Person [id=2, fname=Zidan, lname=Khan], Person [id=3, fname=Zerina, lname=Khan], Person [id=1, fname=Vaquar, lname=Khan], Person [id=2, fname=Zidan, lname=Khan], Person [id=1, fname=Vaquar, lname=Khan]]
After removed duplicate list[Person [id=1, fname=Vaquar, lname=Khan], Person [id=2, fname=Zidan, lname=Khan], Person [id=3, fname=Zerina, lname=Khan]]
Result 1 using method_______________________________________________
k{1=Person [id=1, fname=Vaquar, lname=Khan], 2=Person [id=2, fname=Zidan, lname=Khan], 3=Person [id=3, fname=Zerina, lname=Khan]}
_____________________________________________________________________
Result 2 using inline_______________________________________________
k1{1=Person [id=1, fname=Vaquar, lname=Khan], 2=Person [id=2, fname=Zidan, lname=Khan], 3=Person [id=3, fname=Zerina, lname=Khan]}
_____________________________________________________________________
Throwing exception _______________________________________________
Exception in thread "main" java.lang.IllegalStateException: Duplicate key Person [id=1, fname=Vaquar, lname=Khan]
at java.util.stream.Collectors.lambda$throwingMerger$0(Collectors.java:133)
at java.util.HashMap.merge(HashMap.java:1253)
at java.util.stream.Collectors.lambda$toMap$58(Collectors.java:1320)
at java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169)
at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at com.example.khan.vaquar.RemovedDuplicate.main(RemovedDuplicate.java:48)
Answer 4 (score: 0)
I have run into this kind of problem when grouping objects, and I have always resolved it in a simple way: perform a custom filter using a java.util.Set that removes the duplicate objects based on whatever attribute you choose, as below:
Set<String> uniqueNames = new HashSet<>();
Map<String, String> phoneBook = people
        .stream()
        // Set.add() returns false for a name that has already been seen, so this
        // keeps only the first person per name. Note that the stateful uniqueNames
        // set makes this approach unsafe for parallel streams.
        .filter(person -> person != null && uniqueNames.add(person.getName()))
        .collect(toMap(Person::getName, Person::getAddress));
Hope this helps anyone who runs into the same problem!
Answer 5 (score: 0)
For grouping by objects:
Map<Integer, Data> dataMap = dataList.stream()
        .collect(Collectors.toMap(Data::getId, data -> data, (data1, data2) -> {
            LOG.info("Duplicate Group For :" + data2.getId());
            return data1;
        }));
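The snippet assumes a Data class exposing getId() and a logger named LOG; a hypothetical sketch of that supporting code might look like this:

// Hypothetical supporting type for the snippet above
class Data {
    private final Integer id;

    Data(Integer id) {
        this.id = id;
    }

    public Integer getId() {
        return id;
    }
}

// LOG can be any logger, for example:
// private static final java.util.logging.Logger LOG =
//         java.util.logging.Logger.getLogger("duplicates");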
Answer 6 (score: 0)
For anyone else getting this issue, but without duplicate keys in the map being streamed: make sure your keyMapper function is not returning null values.
This is annoying to track down, because the error says "Duplicate key 1" when 1 is actually the value of an entry, not the key.
In my case my keyMapper function tried to look values up in a different map, but due to a typo in the strings it was returning null values.
final Map<String, String> doop = new HashMap<>();
doop.put("a", "1");
doop.put("b", "2");

final Map<String, String> lookup = new HashMap<>();
// Typo: these puts were meant to go into "lookup", so lookup stays empty...
doop.put("c", "e");
doop.put("d", "f");

// ...and lookup.get(...) returns null for every key, which gets reported as a
// "Duplicate key" error naming one of the values rather than a key.
doop.entrySet().stream().collect(Collectors.toMap(e -> lookup.get(e.getKey()), e -> e.getValue()));
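If you would rather fail fast with a clear message instead of getting the misleading "Duplicate key" error, one option is to guard the key mapper against nulls before collecting. A possible sketch, not part of the original answer:

// Throws a descriptive NullPointerException as soon as lookup has no mapping
doop.entrySet().stream()
    .collect(Collectors.toMap(
            e -> java.util.Objects.requireNonNull(
                    lookup.get(e.getKey()),
                    () -> "No lookup entry for key: " + e.getKey()),
            Map.Entry::getValue));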
Answer 7 (score: 0)
It feels like toMap, which works often but not always, is a dark underbelly of Java Streams. It's as if they should have called it toUniqueMap or something...
The simplest way is to use Collectors.groupingBy instead of Collectors.toMap.
By default it gives you a List-valued map, but the collision problem is gone, and having a List may be what you want anyway when there are multiples.
Map<String, List<Person>> phoneBook = people.stream()
        .collect(groupingBy((x) -> x.name));
If a Set-style collection of the addresses associated with a particular name is what you want, groupingBy can do that as well:
Map<String, Set<String>> phoneBook = people.stream()
        .collect(groupingBy((x) -> x.name, mapping((x) -> x.address, toSet())));
The other way is to "start out" with either a hash or a set... and then carefully track things to make sure the keys never get duplicated in the output map. Ugh. Here's an example that happens to survive... sometimes...
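A possible sketch of that "track the keys yourself" idea (an illustration of the description above, not the example originally linked from the answer; it assumes name and address are accessible as in the snippets above):

// Manually starting from a hash; keeps the first address seen for each name.
Map<String, String> phoneBook = new HashMap<>();
for (Person person : people) {
    // putIfAbsent only stores the entry if the key is not present yet
    phoneBook.putIfAbsent(person.name, person.address);
}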
Answer 8 (score: 0)
For completeness, here is how to "reduce" the duplicates down to just one.
If you are OK with the last one:
Map<String, Person> phoneBook = people.stream()
        .collect(groupingBy(x -> x.name, reducing(null, identity(), (first, last) -> last)));
If you only want the first one:
Map<String, Person> phoneBook = people.stream()
        .collect(groupingBy(x -> x.name, reducing(null, identity(), (first, last) -> first != null ? first : last)));
And if you want the last one but with the "address as String" (not using identity() as a parameter):
Map<String, String> phoneBook = people.stream()
        .collect(groupingBy(x -> x.name, reducing(null, x -> x.address, (first, last) -> last)));
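For reference, the snippets in this answer assume static imports along these lines (a sketch):

import static java.util.function.Function.identity;
import static java.util.stream.Collectors.groupingBy;
import static java.util.stream.Collectors.reducing;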
So in essence, groupingBy paired with a reducing collector starts to behave very much like the toMap collector, with something resembling its mergeFunction... and an identical end result...