I implemented a custom cache store for Ignite backed by DynamoDB:
import javax.cache.Cache;
import javax.cache.integration.CacheLoaderException;
import javax.cache.integration.CacheWriterException;

import org.apache.ignite.cache.store.CacheStoreAdapter;

import software.amazon.awssdk.enhanced.dynamodb.DynamoDbTable;

public class DynamoDBCacheStore extends CacheStoreAdapter<Integer, String> {
    private final DynamoDbTable<Sensor> table;

    public DynamoDBCacheStore() {
        // initialize the DynamoDB client and the Sensor table mapping
    }

    @Override
    public String load(Integer key) throws CacheLoaderException {
        // impl here
    }

    @Override
    public void write(Cache.Entry<? extends Integer, ? extends String> entry) throws CacheWriterException {
        // impl here
    }

    @Override
    public void delete(Object key) throws CacheWriterException {
        // impl here
    }
}
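Roughly, the implementations look like this (a simplified sketch using the AWS SDK v2 enhanced client; the Sensor accessors and constructor shown here are illustrative, not the real schema):

import software.amazon.awssdk.enhanced.dynamodb.Key;

@Override
public String load(Integer key) throws CacheLoaderException {
    // Read-through: fetch the item by its partition key.
    Sensor s = table.getItem(Key.builder().partitionValue(key).build());
    return s == null ? null : s.getValue();
}

@Override
public void write(Cache.Entry<? extends Integer, ? extends String> entry) throws CacheWriterException {
    // Write-through: persist the entry back to DynamoDB.
    table.putItem(new Sensor(entry.getKey(), entry.getValue()));
}

@Override
public void delete(Object key) throws CacheWriterException {
    // Write-through: remove the item from DynamoDB.
    table.deleteItem(Key.builder().partitionValue((Integer) key).build());
}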
I would like to enable SQL support for this cache so I can query it through the JDBC driver:
CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<Integer, String>()
    .setCacheStoreFactory(FactoryBuilder.factoryOf(DynamoDBCacheStore.class))
    .setCacheMode(CacheMode.PARTITIONED)
    .setAtomicityMode(CacheAtomicityMode.ATOMIC)
    .setName(cacheName)
    .setReadThrough(true)
    .setWriteThrough(true)
    .setIndexedTypes(Integer.class, String.class);

IgniteConfiguration cfg = new IgniteConfiguration()
    .setIgniteInstanceName("embedded-ignite")
    .setClientMode(false)
    .setCacheConfiguration(cacheCfg);
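The end goal is to query the cache through the Ignite thin JDBC driver, something like this (connection details are illustrative; the thin driver listens on port 10800 by default):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

Class.forName("org.apache.ignite.IgniteJdbcThinDriver"); // optional on JDBC 4+
try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1");
     PreparedStatement stmt = conn.prepareStatement("SELECT * FROM LookUpTable WHERE id = ?")) {
    stmt.setInt(1, 25);
    try (ResultSet rs = stmt.executeQuery()) {
        while (rs.next())
            System.out.println(rs.getObject(1));
    }
}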
However, when the cache is empty (a cold start, with no prior call to cache.loadCache()), the SQL API doesn't trigger read-through behavior:
// Running a SQL query
String sql = "SELECT * FROM LookUpTable WHERE id = ?";
try (QueryCursor<List<?>> cursor = cache.query(new SqlFieldsQuery(sql).setArgs(25))) {
    for (List<?> row : cursor) {
        System.out.println(row);
    }
}
Using the key/value API, by contrast, read-through behavior works as expected:
IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cacheName);
cache.get(24);
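I can work around the cold start by pre-loading everything up front, but that means overriding loadCache() in the store to scan the whole DynamoDB table (sketch only; Sensor's getId()/getValue() are illustrative):

import org.apache.ignite.lang.IgniteBiInClosure;

// In DynamoDBCacheStore: stream every item from DynamoDB into Ignite.
@Override
public void loadCache(IgniteBiInClosure<Integer, String> clo, Object... args) {
    table.scan().items().forEach(s -> clo.apply(s.getId(), s.getValue()));
}

// On the application side, before running any SQL:
cache.loadCache(null);

Pre-loading obviously stops being an option once the table no longer fits in memory, which brings me to my questions: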
Does the SQL API only operate on the dataset that is already resident in memory? And what is the recommended approach when the underlying storage is bigger than the capacity of the Ignite cluster?