I'm having a problem indexing data in a batch process.
I want to index a list of Article entities, with @IndexedEmbedded on the members I need to pull information from. Article gets additional information from two other beans: Page and Articlefulltext.
The batch process updates the database correctly and, through the Hibernate Search annotations, adds new Document entries to my Lucene index. But the documents that get added have incomplete fields; it looks as if Hibernate Search does not see all of the annotations.
So when I look at the Lucene index with Luke, I have fields coming from the Article and Page objects, but no fields from Articlefulltext. Yet my database contains the correct data, which means the persist() operations are done correctly...
I really need some help, because I do not see any difference between my Page and Articlefulltext mappings...
The strange thing is that if I use a MassIndexer, it correctly adds the Article + Page + Articlefulltext data to the Lucene index. But I do not want to rebuild an index of millions of documents every time I make an update...
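For reference, a MassIndexer run along these lines is what produces the complete documents (a minimal sketch; the batch size and thread count here are just example values, not my exact settings):

FullTextEntityManager ftem = Search.getFullTextEntityManager(emf.createEntityManager());
// Rebuild the index for Article (this also re-reads the embedded Page / Articlefulltext data).
// startAndWait() throws InterruptedException, handled elsewhere.
ftem.createIndexer(Article.class)
    .batchSizeToLoadObjects(25)
    .threadsToLoadObjects(4)
    .startAndWait();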
I set the log4j log level to DEBUG for Hibernate Search and Lucene, but that did not give me much information.
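For what it is worth, these are roughly the categories I turned up (log4j.properties syntax; I am assuming the standard package names here):

log4j.logger.org.hibernate.search=DEBUG
log4j.logger.org.apache.lucene=DEBUG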
Here are my bean classes and the batch process code.
Thanks in advance for your help.
Article.java:
@Entity
@Table(name = "article", catalog = "test")
@Indexed(index="articleText")
@Analyzer(impl = FrenchAnalyzer.class)
public class Article implements java.io.Serializable {
@Id
@GeneratedValue(strategy = IDENTITY)
@Column(name = "id", unique = true, nullable = false)
@DocumentId
private Integer id;
@ManyToOne(fetch = FetchType.LAZY)
@JoinColumn(name = "firstpageid", nullable = false)
@IndexedEmbedded
private Page page;
@Column(name = "heading", length = 300)
@Field(name= "title", index = Index.YES, store = Store.YES)
@Boost(2.5f)
private String heading;
@Column(name = "subheading", length = 300)
private String subheading;
@OneToOne(fetch = FetchType.LAZY, mappedBy = "article")
@IndexedEmbedded
private Articlefulltext articlefulltext;
[... bean methods etc ...]
Page.java
@Entity
@Table(name = "page", catalog = "test")
public class Page implements java.io.Serializable {
private Integer id;
@IndexedEmbedded
private Issue issue;
@ContainedIn
private Set<Article> articles = new HashSet<Article>(0);
[... bean method ...]
Articlefulltext.java
@Entity
@Table(name = "articlefulltext", catalog = "test")
@Analyzer(impl = FrenchAnalyzer.class)
public class Articlefulltext implements java.io.Serializable {
@GenericGenerator(name = "generator", strategy = "foreign", parameters = @Parameter(name = "property", value = "article"))
@Id
@GeneratedValue(generator = "generator")
@Column(name = "aid", unique = true, nullable = false)
private int aid;
@OneToOne(fetch = FetchType.LAZY)
@PrimaryKeyJoinColumn
@ContainedIn
private Article article;
@Column(name = "fulltextcontents", nullable = false)
@Field(store=Store.YES, index=Index.YES, analyzer = @Analyzer(impl = FrenchAnalyzer.class), bridge= @FieldBridge(impl = FulltextSplitBridge.class))
// This field is never added to the resulting Document! I put a log statement in FulltextSplitBridge, and it is never called during the batch process. But if I use a MassIndexer, I see that FulltextSplitBridge is called for each Articlefulltext ...
private String fulltextcontents;
[... bean method ...]
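FulltextSplitBridge itself is not shown above. For context, it is a custom org.hibernate.search.bridge.FieldBridge, and its general shape is roughly the following (an illustrative sketch, not my actual implementation):

import org.apache.lucene.document.Document;
import org.hibernate.search.bridge.FieldBridge;
import org.hibernate.search.bridge.LuceneOptions;

public class FulltextSplitBridge implements FieldBridge {
    @Override
    public void set(String name, Object value, Document document, LuceneOptions luceneOptions) {
        if (value == null) {
            return;
        }
        String fulltext = (String) value;
        // The real bridge splits the text before indexing; this sketch simply adds it under the field name.
        luceneOptions.addFieldToDocument(name, fulltext, document);
    }
}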
Here is the code used to update the database and the Lucene index.
Batch process source code:
FullTextEntityManager em = null;
@Override
protected void executeInternal(JobExecutionContext arg0) throws JobExecutionException {
ApplicationContext ap = null;
EntityManagerFactory emf = null;
EntityTransaction tx = null;
try {
ap = (ApplicationContext) arg0.getScheduler().getContext().get("applicationContext");
emf = (EntityManagerFactory) ap.getBean("entityManagerFactory", EntityManagerFactory.class);
em = Search.getFullTextEntityManager(emf.createEntityManager());
tx = em.getTransaction();
tx.begin();
// [... em.persist() some things which aren't Lucene related, so I skip them ....]
for(File xmlFile : xmlList){
Reel reel = new Reel(title, reelpath);
em.persist(reel);
Article article = new Article();
// [... set Article fields, so I skip them ....]
Articlefulltext ft = new Articlefulltext();
// [... set Articlefulltext fields, so I skip them ....]
ft.setArticle(article);
ft.setFulltextcontents(bufferBlock.toString());
em.persist(ft); // I persist ft before article because of FK issues
em.persist(article); // here, the annotations update the Lucene index, but fulltextcontents is not updated (see my first post)
if ( nbFileDone % 50 == 0 ) {
//flush a batch of inserts and release memory:
em.flush();
em.clear();
}
}
tx.commit();
}
catch(Exception e){
tx.rollback();
}
em.close();
}
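As a side note on the flush block above: it only flushes JPA state. A variant that also pushes queued index work through the FullTextEntityManager would look roughly like this (a sketch of that API, not something the code above currently does):

if (nbFileDone % 50 == 0) {
    em.flush();           // flush the batch of JPA inserts
    em.flushToIndexes();  // apply any pending Hibernate Search index work
    em.clear();           // release memory
}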