
I am currently working on an application that schedules tasks as timers. A timer can run on any day of the week, as configured by the user. Currently this is implemented with Bull queue and Redis for storage. When a timer fires, it emits an event which then drives the business logic. There can be thousands of queued messages in Redis.

I am looking to replace Redis with Kafka, as I have read that it is easy to scale and guarantees no message loss.

The question is: is it a good idea to go with Kafka? If yes, how can we schedule jobs in Kafka in combination with Bull queue? I am new to Kafka and am still trying to understand how jobs can be scheduled in Kafka, and whether this is a good architecture to adopt.

My current application is built with NestJS / Node.js.

OneCricketeer
JN_newbie

1 Answer


Kafka doesn't have a feature like this built in, so you'd need to combine it with some other timer/queue system that schedules a KafkaProducer action.

Similarly, Kafka consumers are typically always running, although you can start/pause them periodically as well.

  • Thank you for the answer. As you mentioned a timer/queue, I think Bull queue is a good option for scheduling, but it is still a queueing system on top of Redis, and I want to avoid Redis. If I go with plain timers, the question becomes how to achieve fault tolerance; I think I need some storage for these schedules. – JN_newbie Dec 20 '22 at 09:46
  • Kafka is a storage system on its own. You could create a compacted topic and generate a UUID per event as the Kafka record key; then, to find events that are ready to send, constantly scan the topic from beginning to end and check timestamps, and once an event has been sent, produce a null value for that UUID to delete it from the topic. – OneCricketeer Dec 20 '22 at 14:42
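The compacted-topic pattern in the comment above can be modeled in a few lines. This is an in-memory simulation of what Kafka's log compaction and tombstones do, so the scan logic is testable without a broker; the record shape (`fireAt`, `event`) and function names are illustrative assumptions, and the real version would use a consumer/producer (e.g. kafkajs) against an actual compacted topic.

```javascript
// In-memory model of the compacted-topic scheduling pattern:
// each timer is a record keyed by UUID; a null value (tombstone)
// deletes the key, mirroring Kafka log compaction semantics.

// Reduce a stream of records to a compacted view: last value per key
// wins, and a null value removes the key entirely.
function compact(records) {
  const view = new Map();
  for (const { key, value } of records) {
    if (value === null) view.delete(key); // tombstone
    else view.set(key, value);
  }
  return view;
}

// Scan the compacted view at time `now` and return [due, tombstones]:
// due timers should be fired, and a tombstone produced back to the
// topic for each so it disappears after the next compaction pass.
function scanDue(view, now) {
  const due = [];
  const tombstones = [];
  for (const [key, value] of view) {
    if (value.fireAt <= now) {
      due.push({ key, ...value });
      tombstones.push({ key, value: null });
    }
  }
  return [due, tombstones];
}

// Example: two timers, only one already due at t=1000.
const log = [
  { key: 'a', value: { fireAt: 500, event: 'report' } },
  { key: 'b', value: { fireAt: 2000, event: 'cleanup' } },
];
const [due, tombstones] = scanDue(compact(log), 1000);
console.log(due.map(d => d.event)); // ['report']
console.log(tombstones);            // [{ key: 'a', value: null }]
```

Note the trade-off the comment implies: this approach rescans the topic continuously, so it suits a moderate number of pending timers better than very large or very fine-grained schedules.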