DataNucleus JPA named query returns a deleted entity

Asked: 2016-03-03 16:14:26

Tags: jpa datanucleus

I am using DataNucleus for CRUD operations. I remove an entity and then execute a named query. Why does the removed entity still appear in the result list?

First, the entity is removed:

MyEntity e = manager.find(MyEntity.class, id);
manager.remove(e);

Then the named query is executed:

@NamedQueries({
        @NamedQuery(name = MyEntity.FIND_ALL, query = "SELECT a FROM MyEntity a ORDER BY a.updated DESC")
})
public class MyEntity {
    public static final String FIND_ALL = "MyEntity.findAll";
    // ...
}

TypedQuery<MyEntity> query = manager.createNamedQuery(MyEntity.FIND_ALL, MyEntity.class);
return query.getResultList();

The datanucleus.Optimistic property is configured in persistence.xml:

<property name="datanucleus.Optimistic" value="true" />

The named query then unexpectedly returns a result list that still contains the deleted entity.

However, with datanucleus.Optimistic=false the result is correct. Why does it not work with datanucleus.Optimistic=true?

More details about this case:

Here are the logs for the relevant CRUD operations:

1. Log of the SAVE operation:

DEBUG: DataNucleus.Transaction - Transaction begun for ExecutionContext org.datanucleus.ExecutionContextThreadedImpl@6bc3bf (optimistic=true)
INFO : org.springframework.test.context.transaction.TransactionalTestExecutionListener - Began transaction (1): transaction manager [org.springframework.orm.jpa.JpaTransactionManager@7dfefcef]; rollback [true]
DEBUG: DataNucleus.Persistence - Making object persistent : "com.demo.MyEntity@30a7803e"
DEBUG: DataNucleus.Cache - Object with id "com.demo.MyEntity:07cad778-d1c3-4834-ace7-ac2e4ecacc24" not found in Level 1 cache [cache size = 0]
DEBUG: DataNucleus.Cache - Object with id "com.demo.MyEntity:07cad778-d1c3-4834-ace7-ac2e4ecacc24" not found in Level 2 cache
DEBUG: DataNucleus.Persistence - Managing Persistence of Class : com.demo.MyEntity [Table : (none), InheritanceStrategy : superclass-table]
DEBUG: DataNucleus.Cache - Object "com.demo.MyEntity@96da65f" (id="com.demo.MyEntity:07cad778-d1c3-4834-ace7-ac2e4ecacc24") added to Level 1 cache (loadedFlags="[YNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN]")
DEBUG: DataNucleus.Lifecycle - Object "com.demo.MyEntity@96da65f" (id="com.demo.MyEntity:07cad778-d1c3-4834-ace7-ac2e4ecacc24") has a lifecycle change : "HOLLOW"->"P_NONTRANS"
DEBUG: DataNucleus.Persistence - Fetching object "com.demo.MyEntity@96da65f" (id=07cad778-d1c3-4834-ace7-ac2e4ecacc24) fields [entityId,extensions,objectType,openSocial,published,updated,url,actor,appId,bcc,bto,cc,content,context,dc,endTime,generator,geojson,groupId,icon,inReplyTo,ld,links,location,mood,object,odata,opengraph,priority,provider,rating,result,schema_org,source,startTime,tags,target,title,to,userId,verb]
DEBUG: DataNucleus.Datastore.Retrieve - Object "com.demo.MyEntity@96da65f" (id="07cad778-d1c3-4834-ace7-ac2e4ecacc24") being retrieved from HBase
DEBUG: org.apache.hadoop.hbase.zookeeper.ZKUtil - hconnection opening connection to ZooKeeper with ensemble (master.hbase.com:2181)

....
DEBUG: org.apache.hadoop.hbase.client.MetaScanner - Scanning .META. starting at row=MyEntity,,00000000000000 for max=10 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@25c7f5b0
...
DEBUG: DataNucleus.Cache - Object with id="com.demo.MyEntity:07cad778-d1c3-4834-ace7-ac2e4ecacc24" being removed from Level 1 cache [current cache size = 1]
DEBUG: DataNucleus.ValueGeneration - Creating ValueGenerator instance of "org.datanucleus.store.valuegenerator.UUIDGenerator" for "uuid"
DEBUG: DataNucleus.ValueGeneration - Reserved a block of 1 values
DEBUG: DataNucleus.ValueGeneration - Generated value for field "com.demo.BaseEntity.entityId" using strategy="custom" (Generator="org.datanucleus.store.valuegenerator.UUIDGenerator") : value=4aa3c4a8-b450-473e-aeba-943dc6ef30ce
DEBUG: DataNucleus.Cache - Object "com.demo.MyEntity@30a7803e" (id="com.demo.MyEntity:4aa3c4a8-b450-473e-aeba-943dc6ef30ce") added to Level 1 cache (loadedFlags="[YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY]")
DEBUG: DataNucleus.Transaction - Object "com.demo.MyEntity@30a7803e" (id="4aa3c4a8-b450-473e-aeba-943dc6ef30ce") enlisted in transactional cache
DEBUG: DataNucleus.Persistence - Object "com.demo.MyEntity@30a7803e" has been marked for persistence but its actual persistence to the datastore will be delayed due to use of optimistic transactions or "datanucleus.flush.mode" setting

2. Log of the DELETE operation:

DEBUG: DataNucleus.Cache - Object "com.demo.MyEntity@30a7803e" (id="com.demo.MyEntity:4aa3c4a8-b450-473e-aeba-943dc6ef30ce") taken from Level 1 cache (loadedFlags="[YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY]") [cache size = 1]
DEBUG: DataNucleus.Persistence - Deleting object from persistence : "com.demo.MyEntity@30a7803e"
DEBUG: DataNucleus.Lifecycle - Object "com.demo.MyEntity@30a7803e" (id="com.demo.MyEntity:4aa3c4a8-b450-473e-aeba-943dc6ef30ce") has a lifecycle change : "P_NEW"->"P_NEW_DELETED"

3. Log of the named QUERY operation:

DEBUG: DataNucleus.Cache - Query Cache of type "org.datanucleus.query.cache.SoftQueryCompilationCache" initialised
DEBUG: DataNucleus.Cache - Query Cache of type "org.datanucleus.store.query.cache.SoftQueryDatastoreCompilationCache" initialised
DEBUG: DataNucleus.Cache - Query Cache of type "org.datanucleus.store.query.cache.SoftQueryResultsCache" initialised
DEBUG: DataNucleus.Query - JPQL Single-String with "SELECT a FROM MyEntity a ORDER BY a.updated DESC"
DEBUG: DataNucleus.Persistence - ExecutionContext.internalFlush() process started using optimised flush - 0 to delete, 1 to insert and 0 to update
DEBUG: org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to namenode.hbase.com/192.168.1.99:60020 from user1 sending #7
DEBUG: org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to namenode.hbase.com/192.168.1.99:60020 from user1 got value #7
DEBUG: org.apache.hadoop.ipc.RPCEngine - Call: exists 0
DEBUG: DataNucleus.Datastore.Persist - Object "com.demo.MyEntity@30a7803e" being inserted into HBase with all reachable objects
DEBUG: DataNucleus.Datastore.Native - Object "com.demo.MyEntity@30a7803e" PUT into HBase table "MyEntity" as {"totalColumns":3,"families":{"MyEntity":[{"timestamp":9223372036854775807,"qualifier":"DTYPE","vlen":8},{"timestamp":9223372036854775807,"qualifier":"userId","vlen":5},{"timestamp":9223372036854775807,"qualifier":"entityId","vlen":36}]},"row":"4aa3c4a8-b450-473e-aeba-943dc6ef30ce"}
DEBUG: org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to namenode.hbase.com/192.168.1.99:60020 from user1 sending #8
DEBUG: org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to namenode.hbase.com/192.168.1.99:60020 from user1 got value #8
DEBUG: org.apache.hadoop.ipc.RPCEngine - Call: multi 2
DEBUG: DataNucleus.Datastore.Persist - Execution Time = 123 ms
DEBUG: DataNucleus.Persistence - ExecutionContext.internalFlush() process finished
DEBUG: DataNucleus.Query - JPQL Query : Compiling "SELECT a FROM MyEntity a ORDER BY a.updated DESC"
DEBUG: DataNucleus.Query - JPQL Query : Compile Time = 13 ms
DEBUG: DataNucleus.Query - QueryCompilation:
  [from:ClassExpression(alias=a)]
  [ordering:OrderExpression{PrimaryExpression{a.updated} descending}]
  [symbols: a type=com.demo.MyEntity]
DEBUG: DataNucleus.Query - JPQL Query : Compiling "SELECT a FROM MyEntity a ORDER BY a.updated DESC" for datastore
DEBUG: DataNucleus.Query - JPQL Query : Compile Time for datastore = 2 ms
DEBUG: DataNucleus.Query - JPQL Query : Executing "SELECT a FROM MyEntity a ORDER BY a.updated DESC" ...
DEBUG: DataNucleus.Datastore.Native - Retrieving objects for candidate=com.demo.MyEntity and subclasses
DEBUG: org.apache.hadoop.hbase.client.ClientScanner - Creating scanner over MyEntity starting at key ''
DEBUG: org.apache.hadoop.hbase.client.ClientScanner - Advancing internal scanner to startKey at ''
DEBUG: org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to namenode.hbase.com/192.168.1.99:60020 from user1 sending #9
DEBUG: org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to namenode.hbase.com/192.168.1.99:60020 from user1 got value #9
DEBUG: org.apache.hadoop.ipc.RPCEngine - Call: openScanner 1
DEBUG: org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to namenode.hbase.com/192.168.1.99:60020 from user1 sending #10
DEBUG: org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to namenode.hbase.com/192.168.1.99:60020 from user1 got value #10
DEBUG: org.apache.hadoop.ipc.RPCEngine - Call: next 0
DEBUG: DataNucleus.Cache - Object "com.demo.MyEntity@30a7803e" (id="com.demo.MyEntity:4aa3c4a8-b450-473e-aeba-943dc6ef30ce") taken from Level 1 cache (loadedFlags="[YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY]") [cache size = 1]
DEBUG: org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to namenode.hbase.com/192.168.1.99:60020 from user1 sending #11
DEBUG: org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to namenode.hbase.com/192.168.1.99:60020 from user1 got value #11
DEBUG: org.apache.hadoop.ipc.RPCEngine - Call: next 0
DEBUG: org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to namenode.hbase.com/192.168.1.99:60020 from user1 sending #12
DEBUG: org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to namenode.hbase.com/192.168.1.99:60020 from user1 got value #12
DEBUG: org.apache.hadoop.ipc.RPCEngine - Call: close 1
DEBUG: org.apache.hadoop.hbase.client.ClientScanner - Finished with scanning at {NAME => 'MyEntity,,1457106265917.c6437b9afd33cd225c33e0ed52ff50d4.', STARTKEY => '', ENDKEY => '', ENCODED => c6437b9afd33cd225c33e0ed52ff50d4,}
DEBUG: DataNucleus.Query - JPQL Query : Processing the "ordering" clause using in-memory evaluation (clause = "[OrderExpression{PrimaryExpression{a.updated} descending}]")
DEBUG: DataNucleus.Query - JPQL Query : Processing the "resultClass" clause using in-memory evaluation (clause = "com.demo.MyEntity")
DEBUG: DataNucleus.Query - JPQL Query : Execution Time = 14 ms

Why do the following log lines appear during the QUERY operation (an entity whose lifecycle state is "P_NEW_DELETED" is PUT into the datastore)? And how can this behavior be avoided?

DEBUG: DataNucleus.Datastore.Persist - Object "com.demo.MyEntity@30a7803e" being inserted into HBase with all reachable objects
DEBUG: DataNucleus.Datastore.Native - Object "com.demo.MyEntity@30a7803e" PUT into HBase table "MyEntity" as {"totalColumns":3,"families":{"MyEntity":[{"timestamp":9223372036854775807,"qualifier":"DTYPE","vlen":8},{"timestamp":9223372036854775807,"qualifier":"userId","vlen":5},{"timestamp":9223372036854775807,"qualifier":"entityId","vlen":36}]},"row":"4aa3c4a8-b450-473e-aeba-943dc6ef30ce"}

1 Answer:

Answer 0 (score: 1)

You have optimistic transactions enabled, so all datastore write operations happen only at commit. You executed the query before commit (and did not set a flush mode on the query), so at the time the query ran, your delete had not yet been applied to the datastore.
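Note that the log above also mentions a "datanucleus.flush.mode" setting, which controls this flushing behavior globally rather than per query. As a sketch only (the property name is taken from the log output; the exact accepted values should be checked against the DataNucleus documentation for your version), persistence.xml could be extended like this:

```xml
<!-- Existing setting from the question -->
<property name="datanucleus.Optimistic" value="true" />
<!-- Hypothetical addition: flush pending changes automatically instead of
     delaying all writes until commit. Property name as seen in the
     DataNucleus log line "datanucleus.flush.mode"; value assumed. -->
<property name="datanucleus.flush.mode" value="auto" />
```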

Call

em.flush()

before executing the query, or set

query.setFlushMode(FlushModeType.AUTO);
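Putting the answer together with the code from the question, a minimal sketch might look like the following. It reuses the `manager` EntityManager and the `MyEntity.FIND_ALL` named query from the question; the surrounding DAO class and method names are invented for illustration, and this is not runnable without a configured JPA provider and datastore.

```java
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.FlushModeType;
import javax.persistence.TypedQuery;

// Hypothetical DAO wrapper around the question's EntityManager usage.
public class MyEntityDao {

    private final EntityManager manager;

    public MyEntityDao(EntityManager manager) {
        this.manager = manager;
    }

    public List<MyEntity> removeAndFindAll(String id) {
        MyEntity e = manager.find(MyEntity.class, id);
        manager.remove(e);

        // Option 1: explicitly push pending changes (including the remove)
        // to the datastore before the query runs.
        manager.flush();

        TypedQuery<MyEntity> query =
                manager.createNamedQuery(MyEntity.FIND_ALL, MyEntity.class);

        // Option 2: alternatively, ask the provider to flush pending
        // changes automatically before this query executes.
        query.setFlushMode(FlushModeType.AUTO);

        return query.getResultList();
    }
}
```

With either option in place, the delete is written to HBase before the query scans the table, so the removed entity no longer appears in the result list.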