Autoincrement in Redis

I'm starting to use Redis, and I've run into the following problem.

I have a bunch of objects, let's say Messages in my system. Each time a new User connects, I do the following:

  1. INCR some global variable, let's say g_message_id, and save INCR's return value (the current value of g_message_id).

  2. LPUSH the new message (including the id and the actual message) into a list.

Other clients use the value of g_message_id to check if there are any new messages to get.

Problem is, one client could INCR the g_message_id but not yet have LPUSHed the message when another client, seeing the incremented counter, tries to read a message that isn't there yet.

In other words, I'm looking for a way to do the equivalent of adding rows in SQL, and having an auto-incremented index to work with.
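
The two-step publish can be sketched in plain Python, with a dict and a list standing in for the Redis counter and list (a simulation only, no real Redis calls); it shows exactly where a reader can observe the counter ahead of the list:

```python
# In-memory stand-ins for INCR and LPUSH (simulation, not redis-py).
store = {"g_message_id": 0}
g_msg_queue = []

def incr(key):
    store[key] += 1
    return store[key]

def lpush(queue, value):
    queue.insert(0, value)

# Writer, step 1: reserve an id.
new_id = incr("g_message_id")

# A reader polling at this point sees g_message_id == 1,
# but the message with id 1 has not been pushed yet:
race_window = (store["g_message_id"], list(g_msg_queue))  # (1, [])

# Writer, step 2: publish the message.
lpush(g_msg_queue, {"id": new_id, "msg": "hey"})
```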


I can't use the list indexes, since I often have to delete parts of the list, making those indexes invalid.

My real situation is a bit more complex; this is a simplified version.

Current solution:

The best solution I've come up with, and what I plan to do, is to use WATCH and transactions (MULTI/EXEC) to perform an "autoincrement" myself.

But this is such a common use-case in Redis that I'm surprised there's no existing answer for it, which makes me worry I'm doing something wrong.

If I'm reading correctly, you are using g_message_id both as an id sequence and as a flag to indicate new message(s) are available. One option is to split this into two variables: one to assign message identifiers and the other as a flag to signal to clients that a new message is available.

Clients can then compare the current / prior value of g_new_message_flag to know when new messages are available:

> INCR g_message_id
(integer) 123

# construct the message with id=123 in code

# push first, then raise the flag, so readers never see the flag before the message
> LPUSH g_msg_queue "{\"id\": 123, \"msg\": \"hey\"}"
> INCR g_new_message_flag
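
In Python, the writer and reader halves of this pattern look roughly like the sketch below (in-memory stand-ins for the commands, not redis-py; the key names follow the example above):

```python
import json

# In-memory stand-ins for INCR and LPUSH (simulation only).
store = {"g_message_id": 0, "g_new_message_flag": 0}
g_msg_queue = []

def incr(key):
    store[key] += 1
    return store[key]

def lpush(queue, value):
    queue.insert(0, value)

# Writer: reserve an id, push the message, then raise the flag,
# so a reader never sees the flag move before the message exists.
msg_id = incr("g_message_id")
lpush(g_msg_queue, json.dumps({"id": msg_id, "msg": "hey"}))
incr("g_new_message_flag")

# Reader: compare the flag against the last value it saw.
last_seen_flag = 0
latest = None
if store["g_new_message_flag"] > last_seen_flag:
    latest = json.loads(g_msg_queue[0])
    last_seen_flag = store["g_new_message_flag"]
```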

Possible alternative, if your clients can support it: you might want to look into the Redis publish/subscribe commands, e.g. clients could publish notifications of new messages and subscribe to one or more message channels to receive them. You could keep g_msg_queue as a backlog of the last N messages for newly connecting clients, if necessary.

Update based on comment: If you want each client to detect that there are available messages, pop all that are available, and zero out the list, one option is to use a transaction to read and clear the list:

 # assuming the message queue contains "123", "456", "789"
 # a client detects there are new messages, then runs this:

> WATCH g_msg_queue
> MULTI
> LRANGE g_msg_queue 0 -1
> DEL g_msg_queue
> EXEC
1) 1) "789"
   2) "456"
   3) "123"
2) (integer) 1

If another client modifies g_msg_queue between the WATCH and the EXEC, the EXEC returns nil and the client should retry.
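
The abort-and-retry behaviour of WATCH can be sketched in pure Python; here a per-key version number plays the role of WATCH, and EXEC refuses to run if the watched key changed. This is a toy model of the semantics, not redis-py:

```python
class FakeRedis:
    """Toy model: EXEC aborts (returns None) if the watched key changed."""
    def __init__(self):
        self.data = {}
        self.versions = {}

    def _touch(self, key):
        self.versions[key] = self.versions.get(key, 0) + 1

    def lpush(self, key, value):
        self.data.setdefault(key, []).insert(0, value)
        self._touch(key)

    def watch(self, key):
        return self.versions.get(key, 0)   # version seen at WATCH time

    def exec_multi(self, key, watched, commands):
        if self.versions.get(key, 0) != watched:
            return None                    # concurrent write: abort
        return [cmd() for cmd in commands]

def drain(r, key):
    """LRANGE 0 -1 plus DEL inside a transaction, retried on conflict."""
    while True:
        v = r.watch(key)
        snapshot = list(r.data.get(key, []))
        def delete():
            r.data.pop(key, None)
            r._touch(key)
            return 1
        if r.exec_multi(key, v, [delete]) is not None:
            return snapshot

r = FakeRedis()
for m in ("123", "456", "789"):
    r.lpush("g_msg_queue", m)
drained = drain(r, "g_msg_queue")  # ["789", "456", "123"]
```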

Update 2: Given the new information, here's what I would do:

  1. Have your writer clients use RPUSH to append new messages to the list. This lets the reader clients start at 0 and iterate forward over the list to get new messages.
  2. Readers only need to remember the index of the last message they fetched from the list.
  3. Readers watch g_new_message_flag to know when to fetch from the list.
  4. Each reader client then uses "LRANGE key start stop" to fetch the new messages. Suppose a reader client has seen a total of 5 messages; it would run "LRANGE g_msg_queue 5 14" to get the next 10 messages (LRANGE's stop index is inclusive). If 3 are returned, it remembers index 8. You can make the batch as large as you want and walk through the list in small batches.
  5. The reaper client should WATCH the list and delete it inside a MULTI/EXEC transaction; the EXEC aborts if a writer modifies the list concurrently, in which case the reaper retries.
  6. When a reader client tries LRANGE and gets 0 messages it can assume the list has been truncated and reset its index to 0.
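
A sketch of the reader side of this scheme, with a plain list standing in for the Redis list and an lrange helper mimicking LRANGE's inclusive stop index (fetch is assumed to run only after the reader has seen g_new_message_flag change, so an empty result really does mean the reaper truncated the list):

```python
def lrange(lst, start, stop):
    """Mimic Redis LRANGE: the stop index is inclusive."""
    return lst[start:] if stop == -1 else lst[start:stop + 1]

class Reader:
    def __init__(self, batch=10):
        self.index = 0          # how many messages this reader has seen
        self.batch = batch

    def fetch(self, queue):
        new = lrange(queue, self.index, self.index + self.batch - 1)
        if not new and self.index > 0:
            # Called only after the flag changed, so an empty result
            # means the reaper deleted the list: start over from 0.
            self.index = 0
            new = lrange(queue, 0, self.batch - 1)
        self.index += len(new)
        return new

queue = [f"msg-{i}" for i in range(8)]   # writers RPUSH, so oldest first
r = Reader(batch=5)
first = r.fetch(queue)                   # msg-0 .. msg-4
second = r.fetch(queue)                  # msg-5 .. msg-7
queue.clear()                            # the reaper runs
queue.append("msg-8")                    # a new message arrives
third = r.fetch(queue)                   # detects truncation, resets
```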

Do you really need unique sequential IDs? You can use UUIDs for uniqueness and timestamps to check for new messages. If you keep the clocks on all your servers properly synchronized, timestamps with one-second resolution should work just fine.
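
A minimal sketch of this with the standard library (uuid and time), assuming a message is just a dict:

```python
import time
import uuid

def make_message(text):
    return {
        "id": uuid.uuid4().hex,   # globally unique, no central counter
        "ts": int(time.time()),   # one-second resolution timestamp
        "msg": text,
    }

a = make_message("hey")
b = make_message("there")
# A reader keeps the timestamp of the last message it processed and
# fetches anything newer; ties within the same second need extra care.
```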

If you really do need unique sequential IDs, then you'll probably have to set up a Flickr-style ticket server to properly manage the central list of IDs. That would, essentially, move your g_message_id into a database with proper transaction handling.
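
For reference, Flickr's trick is a one-row table whose auto-increment counter hands out the ids: REPLACE keeps the table at a single row while the counter keeps climbing. A sketch of the idea using SQLite in place of Flickr's MySQL (table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tickets ("
    " id INTEGER PRIMARY KEY AUTOINCREMENT,"
    " stub TEXT UNIQUE)"
)

def next_id(conn):
    # REPLACE deletes the old row and inserts a fresh one; AUTOINCREMENT
    # guarantees the new rowid is larger than any id ever issued.
    cur = conn.execute("REPLACE INTO tickets (stub) VALUES ('a')")
    conn.commit()
    return cur.lastrowid

ids = [next_id(conn) for _ in range(3)]  # → [1, 2, 3]
```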

You can simulate auto-incrementing a unique key for new rows: use DBSIZE to get the current number of keys, add 1 in your code, and use that number as the key for the new row. It's simple, but note it is not atomic: two clients can read the same DBSIZE concurrently, and deletions will make the count collide with existing keys.

  • Interesting. This leads to a related problem: when a client polls the server with its copy of "g_new_message_flag", how does the server retrieve the messages from the list? It can just take out a number of messages based on the difference between "g_new_message_flag" and the user's flag, but if new messages are added between that subtraction and the LPOP, the wrong number of messages will be popped.
  • I see what you mean -- if a client is supposed to pop all available messages and zero out the list, that is trickier. I'll update my answer with some thoughts.
  • Actually, I don't want the client to delete the messages, just get the latest messages. I have several different clients with (possibly) different values of g_msg_queue, and each one needs to read the latest messages that have arrived. The list is only deleted periodically by a background process (say, once a day).
  • Ahh I saw you mention LPOP and assumed they were removing the messages from the g_msg_queue (LPOP removes the leftmost entry of the list and returns it)
  • Solved my problem: UUIDs are perfectly suitable for my case.