I’ve come across some curious but consistent behaviour when processing large batches of results using either Spring JPA Slice or Hibernate’s ScrollableResults, where an update is made to a column that is also part of the WHERE clause. The issue is that only half of the valid results get processed, and each time the job is rerun the remainder halves again.
As Ehcache is enabled, I’m using a Hibernate StatelessSession to get around the memory overhead. The behaviour with ScrollableResults is the same regardless of the session type.
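For illustration, the plain-Session equivalent looks roughly like this (a minimal sketch: processRowsWithSession() is just an illustrative name, the return value is simply the number of blocks processed, and the per-block error handling from processRows() further down is omitted for brevity):

public int processRowsWithSession(String sql, int fetchSize) {
    int pageNumber = 0;
    SessionFactory sessionFactory = em.getEntityManagerFactory().unwrap(SessionFactory.class);
    try (Session session = sessionFactory.openSession()) {
        while (true) {
            // one transaction per block, committed before the next fetch
            Transaction tx = session.beginTransaction();
            ScrollableResults results = session.createQuery(sql)
                    .setFetchSize(fetchSize)
                    .setCacheMode(CacheMode.IGNORE)
                    .setFirstResult(pageNumber * fetchSize)
                    .setMaxResults(fetchSize)
                    .scroll(ScrollMode.FORWARD_ONLY);
            if (!results.next()) {
                tx.commit();
                break; // no more rows to process
            }
            do {
                Entity e = (Entity) results.get(0);
                e.setCol2(e.getCol1()); // managed entity, so dirty checking writes the update
            } while (results.next());
            session.flush(); // push the block's updates to the database
            session.clear(); // detach the block so the persistence context stays small
            tx.commit();
            pageNumber++;
        }
    }
    return pageNumber;
}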
I’m processing the ScrollableResults in block sizes determined by the fetch size, with a new transaction per block that is committed before the next fetch. As I have millions of records to update, committing each block separately means a failure in one block doesn’t roll back everything that has already been processed.
What I’m finding is that if, say, I have 10,000 records and the fetch size is set to 1000, processing finishes once 5,000 are complete, even without any errors. If I rerun against the remaining rows, processing finishes after 2,500, and so on; each time the unprocessed count is halved. I’m sure there is a very logical reason for this, but I haven’t found any documentation touching on it.
If I remove the column being updated from the WHERE clause, all records are processed. However, this introduces a new issue: if a couple of rows fail to update the first time around and I want to retry, without this clause I have no way of determining what has already been updated, so all rows, including those already processed, will be retrieved and the same blocks will fail.
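For reference, these are the two query variants in play (the full methods that use them are shown further down); the only difference is the extra condition on col2, the column being updated:

First pass (processes everything, but not safely re-runnable):
    select e from Entity as e where col1 is not null
Retry pass (skips rows already copied, but triggers the halving behaviour):
    select e from Entity as e where col1 is not null and col2 is null

The per-block processing itself is done by the following method: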
public int processRows(String sql, int fetchSize) {
    int pageNumber = 0;
    int fails = 0;
    SessionFactory sessionFactory = em.getEntityManagerFactory().unwrap(SessionFactory.class);
    try (StatelessSession statelessSession = sessionFactory.openStatelessSession()) {
        statelessSession.setJdbcBatchSize(fetchSize);
        while (true) {
            // Open a new transaction for each page, committed once the page's results are updated
            Transaction statelessSessionTransaction = statelessSession.beginTransaction();
            try {
                Query query = statelessSession.createQuery(sql)
                        .setFetchSize(fetchSize)
                        .setLockMode(LockModeType.PESSIMISTIC_WRITE)
                        .setCacheMode(CacheMode.IGNORE)
                        .setCacheable(false)
                        .setFirstResult(pageNumber * fetchSize)
                        .setMaxResults(fetchSize);
                ScrollableResults scrollableResults = query.scroll(ScrollMode.FORWARD_ONLY);
                if (!scrollableResults.next()) {
                    statelessSessionTransaction.commit();
                    break; // exit the loop when there are no more results
                }
                do {
                    Entity e = (Entity) scrollableResults.get(0);
                    e.setCol2(e.getCol1());
                    statelessSession.update(e);
                } while (scrollableResults.next());
                // commit the updated page
                statelessSessionTransaction.commit();
                pageNumber++;
            } catch (Exception exception) {
                // roll back the failed page, skip past it and carry on with the next one
                statelessSessionTransaction.rollback();
                pageNumber++;
                fails++;
            }
        }
    }
    return fails;
}
At the moment I can work around this with two methods. processAllRows() is called first:
public void processAllRows() {
    String sql = "select e from Entity as e where col1 is not null";
    int fetchSize = 1000;
    processRows(sql, fetchSize);
}
The second, retryFails(), is only called if the first pass left failures behind:
public void retryFails() {
    String sql = "select e from Entity as e where col1 is not null and col2 is null";
    Integer elementsRemaining = getEntityCount(sql);
    int fetchSize = 1000; // same initial block size as processAllRows()
    int fails;
    int attempts = 0;
    do {
        // halve the fetch size on each retry to break the offending block into smaller blocks
        if (attempts > 0 && elementsRemaining >= 2) {
            fetchSize = elementsRemaining / 2;
        }
        fails = processRows(sql, fetchSize);
        elementsRemaining = getEntityCount(sql);
        attempts++;
    } while (elementsRemaining > 0 && fails != elementsRemaining);
    log.debug("Completed, remaining entities: {}, fails: {}", elementsRemaining, fails);
}
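For context, this is roughly how the two methods are wired together (runMigration() here is just a hypothetical driver for illustration, reusing the same remaining-rows count as retryFails()):

public void runMigration() {
    // first pass over every eligible row
    processAllRows();
    // anything still matching the retry query means some blocks failed
    String remainingSql = "select e from Entity as e where col1 is not null and col2 is null";
    if (getEntityCount(remainingSql) > 0) {
        retryFails();
    }
}

On each retry, retryFails() sets the fetch size to half of whatever is still unprocessed, so a persistently failing block eventually shrinks down to a single row.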
This works, but because of the behaviour described above, where only half of the eligible rows are processed when the additional clause is included, it effectively doubles the number of calls required to process all valid rows.
Any insights into this behaviour would be really appreciated, as would ideas on how to reduce the number of calls required to process the remaining rows.
Thanks