Hot questions for Spring Data Redis


Question:

I am saving new entries with a Spring Data Repository. I have a TTL of 10 seconds for each entry.

When I save an entry with indexes, here is what I get in Redis:

127.0.0.1:6379> keys *
1) "job:campaignId:aa"
2) "job:a6d6e491-5d75-4fd0-bd8e-71692f6d18be"
3) "job:recipient:dd"
4) "job:a6d6e491-5d75-4fd0-bd8e-71692f6d18be:phantom"
5) "job:listId:cc"
6) "job:accountId:bb"
7) "job"
8) "job:a6d6e491-5d75-4fd0-bd8e-71692f6d18be:idx"

After the expiration, I still have data:

127.0.0.1:6379> keys *
1) "job:campaignId:aa"
2) "job:recipient:dd"
3) "job:listId:cc"
4) "job:accountId:bb"
5) "job"
6) "job:a6d6e491-5d75-4fd0-bd8e-71692f6d18be:idx"

Without any TTL.

Why aren't these keys deleted? How can I make that happen?


Answer:

Spring Data Redis Repositories use multiple Redis features to persist domain objects in Redis.

Domain objects are stored primarily in a hash (job:a6d6e491-5d75-4fd0-bd8e-71692f6d18be). Any expiry is applied directly to that hash so Redis can expire the key. Spring Data Redis also maintains secondary indexes (job:campaignId:aa, job:recipient:dd) to provide lookups by particular field values. Individual elements inside a Redis set cannot be expired; only the whole data structure can. Expiring the whole index is not what you want, because all non-expired elements would disappear with it.

So Spring Data Redis persists a copy of the original hash as a phantom hash (job:a6d6e491-5d75-4fd0-bd8e-71692f6d18be:phantom) with a slightly longer TTL.

Spring Data Redis subscribes to keyspace events (enabled via @EnableRedisRepositories(enableKeyspaceEvents = EnableKeyspaceEvents.ON_STARTUP)) to listen for expiry events. As soon as the original hash expires, Spring Data Redis loads the phantom hash to perform the cleanup (removing references from the secondary indexes).

There are multiple possible reasons why the cleanup of your data was not performed:

  1. If you run a console application that just inserts data and terminates, then the expiry removes the hashes but the index cleanup is not performed, since your application is no longer running. Events published by Redis are transient, and if your application is not listening, these events are lost.
  2. If you have enabled repository support with just @EnableRedisRepositories (without enabling keyspace events), then the keyspace-event listener is not active, and Spring Data Redis is not subscribed to any expiry events.
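For the second case, a minimal configuration sketch (the class name is illustrative) that keeps the keyspace-event listener active so index cleanup can run:

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.core.RedisKeyValueAdapter.EnableKeyspaceEvents;
import org.springframework.data.redis.repository.configuration.EnableRedisRepositories;

// Subscribes the application to Redis expiry events on startup so that
// phantom copies and secondary-index references are cleaned up when a hash expires.
@Configuration
@EnableRedisRepositories(enableKeyspaceEvents = EnableKeyspaceEvents.ON_STARTUP)
public class RedisRepositoryConfig {
}
```

Note that the cleanup still requires a running application: keyspace events are fire-and-forget, so anything that expires while no listener is subscribed leaves its index entries behind.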

Question:

I want to set a ttl for my keys that are stored in Redis, and I have done that in the following way:

@Component
public class RedisBetgeniusMarketService implements BetgeniusMarketService {

    private static final int DEFAULT_EVENTS_LIFE_TIME = 240;

    @Value("${redis.events.lifetime}")
    private long eventsLifeTime = DEFAULT_EVENTS_LIFE_TIME;

    @Autowired
    private RedisTemplate<String, Market> marketTemplate;

    @Override
    public Market findOne(Integer fixtureId, Long marketId) {
        String key = buildKey(fixtureId, marketId);
        return marketTemplate.boundValueOps(key).get();
    }

    @Override
    public void save(Integer fixtureId, Market market) {
        String key = buildKey(fixtureId, market.getId());
        BoundValueOperations<String, Market> boundValueOperations = marketTemplate.boundValueOps(key);
        boundValueOperations.expire(eventsLifeTime, TimeUnit.MINUTES);
        boundValueOperations.set(market);
    }

    private String buildKey(Integer fixtureId, Long marketId) {
        return "market:" + fixtureId + ":" + marketId;
    }
}

But, when I am printing the ttl of the created key it's equal to -1.

Please, tell me what I am doing wrong.

The template bean is configured in the following way:

    @Bean
    public RedisTemplate<String, com.egalacoral.spark.betsync.entity.Market> marketTemplate(RedisConnectionFactory connectionFactory) {
        final RedisTemplate<String, com.egalacoral.spark.betsync.entity.Market> redisTemplate = new RedisTemplate<>();
        redisTemplate.setKeySerializer(new StringRedisSerializer());
        redisTemplate.setValueSerializer(new Jackson2JsonRedisSerializer(com.egalacoral.spark.betsync.entity.Market.class));
        redisTemplate.setConnectionFactory(connectionFactory);
        return redisTemplate;
    } 

Answer:

You need to call expire(…) and set(…) in a different order. The SET command removes any timeout that was previously applied:

From the documentation at http://redis.io/commands/set:

Set key to hold the string value. If key already holds a value, it is overwritten, regardless of its type. Any previous time to live associated with the key is discarded on successful SET operation.

In your case you just need to switch the order of expire(…) and set(…) to set(…) and expire(…).

    @Override
    public void save(Integer fixtureId, Market market) {
        String key = buildKey(fixtureId, market.getId());
        BoundValueOperations<String, Market> boundValueOperations = marketTemplate.boundValueOps(key);

        boundValueOperations.set(market);
        boundValueOperations.expire(eventsLifeTime, TimeUnit.MINUTES);
    }

Besides that, you could improve the code by setting the value and the expiry in one call. ValueOperations (RedisOperations.opsForValue()) provides a set method that writes the value and applies the timeout together, with the signature

void set(K key, V value, long timeout, TimeUnit unit);
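Applied to the service from the question, the save method then collapses to a single call (a sketch reusing the field and method names above):

```java
@Override
public void save(Integer fixtureId, Market market) {
    String key = buildKey(fixtureId, market.getId());
    // Value and timeout are applied together, so no later SET can discard the TTL.
    marketTemplate.opsForValue().set(key, market, eventsLifeTime, TimeUnit.MINUTES);
}
```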

Question:

I have noticed that some of my serialized objects stored in Redis have problems deserializing.

This typically occurs when I make changes to the object class being stored in Redis.

I want to understand the problem so that I can have a clear design for a solution.

My question is: what causes deserialization problems? Would the removal of a public/private property cause a problem? Adding new properties, perhaps? Would adding a new function to the class create problems? How about more constructors?

In my serialized object, I have a Map property. What if I change myObject (update some properties, add functions, etc.)? Would that cause a deserialization problem?


Answer:

what causes deserialization problems?

I would like to give you a bit of background before answering your question.

The serialization runtime associates with each serializable class a version number, called a serialVersionUID, which is used during deserialization to verify that the sender and receiver of a serialized object have loaded classes for that object that are compatible with respect to serialization. If the receiver has loaded a class for the object that has a different serialVersionUID than that of the corresponding sender's class, then deserialization will result in an InvalidClassException.

If a serializable class does not explicitly declare a serialVersionUID, then the serialization runtime will calculate a default serialVersionUID value for that class based on various aspects of the class. It uses the following information about the class to compute the serialVersionUID:

  1. The class name.
  2. The class modifiers written as a 32-bit integer.
  3. The name of each interface sorted by name.
  4. For each field of the class sorted by field name (except private static and private transient fields):

     The name of the field.

     The modifiers of the field written as a 32-bit integer.

     The descriptor of the field.

  5. If a class initializer exists, write out the following:

     The name of the method, <clinit>.

     The modifier of the method, java.lang.reflect.Modifier.STATIC, written as a 32-bit integer.

     The descriptor of the method, ()V.

  6. For each non-private constructor sorted by method name and signature:

     The name of the method, <init>.

     The modifiers of the method written as a 32-bit integer.

     The descriptor of the method.

  7. For each non-private method sorted by method name and signature:

     The name of the method.

     The modifiers of the method written as a 32-bit integer.

     The descriptor of the method.

So, to answer your question,

Would the removal of a public/private property cause a problem? Adding new properties, perhaps? Would adding a new function to the class create problems? How about more constructors?

Yes, by default all of these additions and removals will cause the problem.

But one way to overcome this is to explicitly define the serialVersionUID. This tells the serialization system that you know the class will evolve (or has evolved) over time, so it should not throw an error. The deserialization side then reads only those fields that are present on both sides and assigns their values. Newly added fields on the deserialization side get default values. If some fields were deleted on the deserialization side, the algorithm just reads and skips them.

This is how one can declare the serialVersionUID:

private static final long serialVersionUID = 3487495895819393L;
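To make the mechanics concrete, here is a minimal, self-contained round trip with a hypothetical User class that pins its serialVersionUID; compatible changes to User (new fields, new methods) would then no longer trigger an InvalidClassException:

```java
import java.io.*;

public class SerialDemo {

    // Hypothetical class used only for this demo; the pinned serialVersionUID
    // tells the runtime to treat evolved versions of the class as compatible.
    static class User implements Serializable {
        private static final long serialVersionUID = 1L;
        final String name;
        User(String name) { this.name = name; }
    }

    // Serialize any object to a byte array (what a JDK-based Redis value serializer does internally).
    static byte[] serialize(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        return bos.toByteArray();
    }

    // Deserialize the byte array back into an object.
    static Object deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        User restored = (User) deserialize(serialize(new User("alice")));
        System.out.println(restored.name); // prints "alice"
    }
}
```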

Question:

If I have 5 members with scores as follows

a - 1
b - 2
c - 3
d - 3
e - 5

ZRANK of c returns 2, and ZRANK of d returns 3. Is there a way to get the same rank for equal scores? For example: ZRANK c = 2, d = 2, e = 3. If yes, how can I implement that in spring-data-redis?


Answer:

You can achieve the goal with two Sorted Sets: one for the member-to-score mapping, and one for the score-to-rank mapping.

Add

  1. Add items to the member-to-score mapping: ZADD mem_2_score 1 a 2 b 3 c 3 d 5 e
  2. Add the scores to the score-to-rank mapping: ZADD score_2_rank 1 1 2 2 3 3 5 5

Search

  1. Get the score first: ZSCORE mem_2_score c; this returns the score, i.e. 3.
  2. Get the rank for the score: ZRANK score_2_rank 3; this returns the dense rank, i.e. 2.

To run the operations atomically, wrap the Add and Search steps into two Lua scripts.
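The ranking logic itself is easy to sanity-check outside Redis. Below is a plain-Java analogue of the two-sorted-set scheme (illustrative class, not the spring-data-redis API): a map plays the role of mem_2_score and an ordered set of distinct scores plays the role of score_2_rank.

```java
import java.util.*;

public class DenseRank {
    // member -> score, analogous to the mem_2_score sorted set
    private final Map<String, Integer> memberToScore = new HashMap<>();
    // distinct scores in order, analogous to the score_2_rank sorted set
    private final TreeSet<Integer> scores = new TreeSet<>();

    public void add(String member, int score) {
        memberToScore.put(member, score); // ZADD mem_2_score <score> <member>
        scores.add(score);                // ZADD score_2_rank; set semantics dedupe equal scores
    }

    // ZSCORE then ZRANK: the dense rank is the number of distinct smaller scores.
    public int denseRank(String member) {
        int score = memberToScore.get(member);
        return scores.headSet(score).size();
    }

    public static void main(String[] args) {
        DenseRank r = new DenseRank();
        r.add("a", 1); r.add("b", 2); r.add("c", 3); r.add("d", 3); r.add("e", 5);
        System.out.println(r.denseRank("c")); // 2
        System.out.println(r.denseRank("d")); // 2
        System.out.println(r.denseRank("e")); // 3
    }
}
```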

Question:

I am running a spring boot service using spring data redis and here is the following configuration.

The service seems to work, but I am seeing a stream of Lost Sentinel messages in the logs. The sentinel nodes are reachable from the VM where I am running the service; I was able to telnet to them directly from that VM. Any idea why this is happening?

spring:
  profiles:
    active: core-perf,swagger
    default: core-perf,swagger
  redis:
    Pool:  #Pool properties
      # Max number of "idle" connections in the pool. Use a negative value to indicate
      # an unlimited number of idle connections.
      maxIdle: 8
      # Target for the minimum number of idle connections to maintain in the pool.
      # This setting only has an effect if it is positive.
      minIdle: 0
      # Max number of connections that can be allocated by the pool at a given time. Use a negative value for no limit.
      maxActive: 8
      # Maximum amount of time (in milliseconds) a connection allocation should block
      # before throwing an exception when the pool is exhausted. Use a negative value
      # to block indefinitely.
      maxWait: -1
    sentinel: #Redis sentinel properties.
      master: mymaster
      nodes: 10.202.56.209:26379, 10.202.56.213:26379, 10.202.58.80:26379

2015-06-15 17:30:54.896 ERROR 6677 --- [Thread-9] redis.clients.jedis.JedisSentinelPool    : Lost connection to Sentinel at  10.202.58.80:26379. Sleeping 5000ms and retrying.
2015-06-15 17:30:59.894 ERROR 6677 --- [Thread-8] redis.clients.jedis.JedisSentinelPool    : Lost connection to Sentinel at  10.202.56.213:26379. Sleeping 5000ms and retrying.
2015-06-15 17:30:59.897 ERROR 6677 --- [Thread-9] redis.clients.jedis.JedisSentinelPool    : Lost connection to Sentinel at  10.202.58.80:26379. Sleeping 5000ms and retrying.
2015-06-15 17:31:04.975 ERROR 6677 --- [Thread-9] redis.clients.jedis.JedisSentinelPool    : Lost connection to Sentinel at  10.202.58.80:26379. Sleeping 5000ms and retrying.
2015-06-15 17:31:04.976 ERROR 6677 --- [Thread-8] redis.clients.jedis.JedisSentinelPool    : Lost connection to Sentinel at  10.202.56.213:26379. Sleeping 5000ms and retrying.
2015-06-15 17:31:09.976 ERROR 6677 --- [Thread-9] redis.clients.jedis.JedisSentinelPool    : Lost connection to Sentinel at  10.202.58.80:26379. Sleeping 5000ms and retrying.
2015-06-15 17:31:09.976 ERROR 6677 --- [Thread-8] redis.clients.jedis.JedisSentinelPool    : Lost connection to Sentinel at  10.202.56.213:26379. Sleeping 5000ms and retrying.
2015-06-15 17:31:15.054 ERROR 6677 --- [Thread-8] redis.clients.jedis.JedisSentinelPool    : Lost connection to Sentinel at  10.202.56.213:26379. Sleeping 5000ms and retrying.
2015-06-15 17:31:15.055 ERROR 6677 --- [Thread-9] redis.clients.jedis.JedisSentinelPool    : Lost connection to Sentinel at  10.202.58.80:26379. Sleeping 5000ms and retrying.
2015-06-15 17:31:20.055 ERROR 6677 --- [Thread-8] redis.clients.jedis.JedisSentinelPool    : Lost connection to Sentinel at  10.202.56.213:26379. Sleeping 5000ms and retrying.

Answer:

We discovered the issue: there was a blank between the node pairs in the application.yml, and once we removed this " " the Lost Sentinel log messages disappeared.

so from

nodes: 10.202.56.209:26379, 10.202.56.213:26379, 10.202.58.80:26379

to

nodes: 10.202.56.209:26379,10.202.56.213:26379,10.202.58.80:26379

It would probably be a good thing if the committers looked at this, as it seems somewhat mysterious for users.