JPA N+1 iteration and the JSON serializer: queries are still executed

Date: 2018-01-23 15:29:22

Tags: json jpa

Question

My entity Product has many relationships: producer, category, etc.

When I ask for a list of products, first of all JPA executes the normal query:

select t1.a, t1.b,..., t1.idcategory, ... from product t1 limit 50 offset 0

Then, for each product, it asks for the category and the other relationships:

select * from category c where c.id = ?

This is very heavy: 50 rows trigger 50 × (number of relationships) queries (in my case about 8 relationships, so 50 + 50 * 8 = 450 queries). I need to get rid of these queries.

First attempt

I use a criteria query, so I added the joins to the query:

Root<Product> root = query.from(Product.class);
Map<Class, Field> foreignRelationships = EntityUtils.listEntity(Product.class);
for (Field f : foreignRelationships.values()) {
    root.join(f.getName());
}

This code reads my entity and generates the correct query:

select t1.* from product t1 join category c on t1.idcategory = c.id ...

The query is built correctly, but when I invoke the JSON serializer (I need to return the objects through a REST service), the N × rows queries are executed anyway :(
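Note that a Criteria API join() only adds the join to the generated SQL; it does not mark the association as fetched, so the serializer still triggers one lazy load per row. fetch() both joins and populates the association in the same SELECT. A minimal sketch, assuming the Product entity and attribute names from the question (the class and attribute names are illustrative):

```java
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.JoinType;
import javax.persistence.criteria.Root;

public class ProductQueries {
    // Sketch: the same query as above, but with fetch() instead of join()
    public static List<Product> findProducts(EntityManager em) {
        CriteriaBuilder cb = em.getCriteriaBuilder();
        CriteriaQuery<Product> query = cb.createQuery(Product.class);
        Root<Product> root = query.from(Product.class);
        // fetch() loads the association in the same SELECT;
        // LEFT so products with a null relation are not filtered out
        root.fetch("category", JoinType.LEFT);
        root.fetch("producer", JoinType.LEFT);
        query.select(root);
        return em.createQuery(query).getResultList();
    }
}
```

In the dynamic loop from the question, root.join(f.getName()) would become root.fetch(f.getName(), JoinType.LEFT).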

Second attempt

I tried a NamedEntityGraph:

@NamedEntityGraphs({
@NamedEntityGraph(name = "graph.Product", 
    attributeNodes = {
        @NamedAttributeNode(value = "producer",
                subgraph = "graph.Product.producer"),
        @NamedAttributeNode(value = "catmer"),
        @NamedAttributeNode(value = "shop"),
        ...
    },
    subgraphs = {
        @NamedSubgraph(
            name = "graph.Product.producer",
            attributeNodes = {
                @NamedAttributeNode(value = "id"),
                @NamedAttributeNode(value = "code")
            }
        ),...
    },
    includeAllAttributes = true
)
})

And in my criteria query:

getEntityManager().createQuery(query)
                //.setHint("javax.persistence.fetchgraph",graph);
                .setHint("javax.persistence.loadgraph",graph);

The query has no joins, and all the per-row queries are executed anyway :(
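For what it's worth, the JPA spec requires the value of the javax.persistence.loadgraph hint to be an actual EntityGraph instance obtained from the EntityManager; passing the graph's name as a plain string is a common pitfall. A sketch of the expected usage (the graph name matches the NamedEntityGraph above):

```java
// The hint value must be the EntityGraph object, not the string "graph.Product"
javax.persistence.EntityGraph<?> graph = em.getEntityGraph("graph.Product");
List<Product> result = em.createQuery(query)
        .setHint("javax.persistence.loadgraph", graph)
        .getResultList();
```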

How can I solve this? Is something wrong with my approach?

PS: I'm using EclipseLink

2 answers:

Answer 0 (score: 0)

I suggest trying a fetch join, for example in JPQL:

select p from Person p join fetch p.adresses

This avoids the N+1 problem.

Read more: What is the solution for the N+1 issue in hibernate?
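Since the question uses EclipseLink, the same fetch-join effect can also be requested per query through EclipseLink's own query hints ("eclipselink.join-fetch" and "eclipselink.batch", exposed as constants on org.eclipse.persistence.config.QueryHints). A sketch, with illustrative attribute paths on a JPQL alias p:

```java
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.TypedQuery;

public class EclipseLinkHintExample {
    public static List<Product> load(EntityManager em) {
        TypedQuery<Product> q = em.createQuery(
                "select p from Product p", Product.class);
        // join-fetch: load the relation in the same SELECT via a join
        q.setHint("eclipselink.join-fetch", "p.category");
        // batch: load the relation with one extra query for the whole result set
        q.setHint("eclipselink.batch", "p.shop");
        return q.getResultList();
    }
}
```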

Answer 1 (score: 0)

I partially solved the problem with some annotations.

In Product:

public class Product extends EntityBase implements Serializable {

@JoinColumn(name = "idsupp",referencedColumnName = "ID")
@ManyToOne(targetEntity = Supplier.class, fetch = FetchType.EAGER)
@BatchFetch(BatchFetchType.JOIN)
@JoinFetch(JoinFetchType.OUTER)
private Supplier supplier;

@ManyToOne(cascade=CascadeType.ALL,fetch = FetchType.EAGER)
@JoinColumn(name="infagg_id", referencedColumnName = "ID")
@BatchFetch(BatchFetchType.IN)
private Infos info = new Infos();

The BatchFetch and JoinFetch on supplier generate a LEFT OUTER JOIN clause in the query, reducing the number of data queries.

For Infos, instead, I prefer BatchFetchType.IN: this way all the records are fetched with a single query (WHERE id IN ...) instead of N queries.
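The two strategies translate to differently shaped SQL. A rough sketch, with table and column names following the mappings above (the exact SQL EclipseLink emits may differ):

```sql
-- @JoinFetch(OUTER) / BatchFetchType.JOIN on supplier:
-- one query, the relation joined into the same SELECT
SELECT p.*, s.*
FROM product p
LEFT OUTER JOIN supplier s ON p.idsupp = s.id;

-- @BatchFetch(BatchFetchType.IN) on info:
-- one extra query covering the whole result set
SELECT i.* FROM infos i
WHERE i.id IN (?, ?, /* one per distinct infagg_id */ ?);
```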