DiceDB (dicedb.io)
235 points by rainhacker 53 days ago | 32 comments
kiitos 53 days ago
There are _so many_ bugs in this code.

One example among many:

https://github.com/DiceDB/dice/blob/0e241a9ca253f17b4d364cdf... defines func ExpandID, which reads from cycleMap without locking the package-global mutex; and func NextID, which writes to cycleMap under a lock of the package-global mutex. So writes are synchronized, but only between each other, and not with reads, so concurrent calls to ExpandID and NextID would race.
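
The shape of the bug, as a minimal sketch (reconstructed for illustration: the names ExpandID, NextID, and cycleMap come from the linked file; the types and bodies here are assumptions):

  package ids

  import "sync"

  var (
      mu       sync.Mutex
      cycleMap = map[uint8]string{} // key/value types assumed for illustration
  )

  // NextID writes to cycleMap while holding mu: writes are synchronized
  // with each other.
  func NextID(cycle uint8, node string) {
      mu.Lock()
      defer mu.Unlock()
      cycleMap[cycle] = node
  }

  // ExpandID reads cycleMap without taking mu: this read races with any
  // concurrent NextID.
  func ExpandID(cycle uint8) string {
      return cycleMap[cycle] // data race
  }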

This is all fine as a hobby project or whatever, but very far from any kind of production-capable system.

kiitos 41 days ago
https://github.com/DiceDB/dice/pull/1588

This PR attempted to fix the memory model violation I mentioned in the parent comment, but also added an extra change that swapped the sync.Mutex for a sync.RWMutex. The PR description claimed 2 benefits: "Eliminates the data race, ensuring thread safety" -- correct! at least to some level; but also "Improves performance by allowing concurrent ExpandID calls, which is likely a common operation" -- which is totally unsubstantiated, and very likely false, as RWMutex is only faster than a regular Mutex under very narrowly-defined load patterns.
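
For what it's worth, a benchmark to substantiate or refute the RWMutex claim is a few lines of testing boilerplate. A hedged sketch (drop into a _test.go file; names are mine):

  package locks

  import (
      "sync"
      "testing"
  )

  var (
      mu sync.Mutex
      rw sync.RWMutex
      m  = map[int]int{1: 1}
  )

  // Run with: go test -bench=. -cpu=1,4,16
  func BenchmarkMutexRead(b *testing.B) {
      b.RunParallel(func(pb *testing.PB) {
          for pb.Next() {
              mu.Lock()
              _ = m[1]
              mu.Unlock()
          }
      })
  }

  func BenchmarkRWMutexRead(b *testing.B) {
      b.RunParallel(func(pb *testing.PB) {
          for pb.Next() {
              rw.RLock()
              _ = m[1]
              rw.RUnlock()
          }
      })
  }

RWMutex tends to win only with many parallel readers and almost no writers; under mixed load its extra bookkeeping often makes it slower than a plain Mutex.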

In any case, the PR had no kind of test or benchmark to validate either of these claims, so not a great start by the author. But then a maintainer chimed in with a comment that expressed concerns about edge-condition performance details, without any kind of data or evidence, and apparently didn't care about (or know about?) the much more important fixes that the PR made re: data races.

https://github.com/DiceDB/dice/pull/1588#issuecomment-274521...

> I tried changing this, but I did not see any benefit in benchmark numbers.

No apparent understanding of the bugs in this code, nor how changes may or may not fix those bugs, nor really how performance is defined or can be meaningfully evaluated.

Again, hobby project or whatever, all good. But the authors and maintainers of this project are clearly, demonstrably, in over their heads on this one.

senderista 53 days ago
Haven't looked at the code, but enforcing mutual exclusion between writers but not readers can make sense for a single-writer lock-free algorithm.
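
To sketch what I mean (hedged, and not the OP's code): in Go, the single writer publishes a new snapshot via sync/atomic, and readers never lock:

  package ids

  import "sync/atomic"

  // The single writer replaces the whole map on update (copy-on-write);
  // readers Load() the current snapshot without any lock.
  var table atomic.Value // holds map[uint8]string

  func init() { table.Store(map[uint8]string{}) }

  // update must only ever be called from one goroutine.
  func update(k uint8, v string) {
      old := table.Load().(map[uint8]string)
      next := make(map[uint8]string, len(old)+1)
      for key, val := range old {
          next[key] = val
      }
      next[k] = v
      table.Store(next) // atomic publish: happens-before later Loads
  }

  // lookup is safe from any goroutine.
  func lookup(k uint8) string {
      return table.Load().(map[uint8]string)[k]
  }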
ignoramous 53 days ago
> single-writer lock-free algorithm

I understand the need for correct lock-free impls, but given OP's description, simply avoiding read mutexes can't be the way to go about it?

nebulous1 52 days ago
I don't use Go.

https://go.dev/ref/mem

If I'm reading this correctly, they are recommending a lock in this situation. However, they are saying the implementation has two options: either raise an error reporting the race (if the implementation is told to do so), or, because the value being read is not larger than a machine word, reply to the read with a correct value from a previous write. If true, then it cannot reply with corrupted data.

kiitos 52 days ago
> However, they are saying the implementation has two options: either raise an error reporting the race (if the implementation is told to do so), or, because the value being read is not larger than a machine word, reply to the read with a correct value from a previous write.

The spec says

> A read r of a memory location x holding a value that is not larger than a machine word must observe some write w such that r does not happen before w and there is no write w' such that w happens before w' and w' happens before r. That is, each read must observe a value written by a preceding or concurrent write.

These rules apply only if the value isn't larger than a machine word. Otherwise,

> Reads of memory locations larger than a single machine word ... can lead to inconsistent values not corresponding to a single write.

The size of a machine word differs depending on how a program is compiled, so whether or not a value is larger than a machine word isn't knowable by the program itself.

And even if you can assert that your program will only be built where a machine word is always at least the size of, e.g., a uint64, the spec only guarantees that unsynchronized reads of a uint64 will return some previous valid write; it doesn't guarantee anything about which value is returned. So `x=1; x=3; x=2;` concurrently with `print(x); print(x); print(x)` can print `1 1 1` or `3 3 3` or `2 1 1` or `3 2 1` and so on. It won't return a corrupted uint64, but it can return any prior uint64, which is still a data race, and almost certainly useless to the application.
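
A quick demo of that (a sketch; go run -race flags it, and without the race detector each print may legally show a previously written value):

  package main

  import (
      "fmt"
      "time"
  )

  var x uint64 // word-sized: reads won't tear, but they still race

  func main() {
      go func() {
          x = 1
          x = 3
          x = 2
      }()
      // Each unsynchronized read must observe *some* prior write
      // (possibly the zero value); which sequences are legal is
      // exactly what gets hashed out downthread.
      for i := 0; i < 3; i++ {
          fmt.Println(x)
          time.Sleep(time.Millisecond)
      }
  }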

nebulous1 52 days ago
Thanks. So the structure in the OP is an array of uint32s.

> that unsynchronized reads of a uint64 will return some previous valid write, it doesn't guarantee anything about which value is returned

You're the second person saying this, so is my interpretation that this is disallowed by the part that you quoted incorrect?

> must observe some write w such that r does not happen before w and there is no write w' such that w happens before w' and w' happens before r

edit: somebody is answering this below by the way

fashion-at-cost 52 days ago
The goalposts have been moved. The claim is that this pattern isn’t suitable for production code. The ground truth is that a compliant Go implementation may elect to: crash; read the first value ever set to the variable for the entire lifetime of the program; or behave completely as you’d expect from a single core interleaved execution order. The first is an opt-in, the latter two are up to the whims of the runtime and an implementation may alternate between them at any point.

Is that the kind of uncertainty you want in your production systems? Or is your only requirement that they don’t serve “corrupt” data?

Don’t be “clever”. Use locks.

nebulous1 52 days ago
I don't disagree, but that's not the claim I was replying to. The question I was asking about was

> I understand the need for correct lock-free impls: Given OP's description, simply avoiding read mutexes can't be the way to go about it?

I did note that the documentation recommends a lock.

> read the first value ever set to the variable for the entire lifetime of the program

That is not my reading of the current memory model? It seems to specifically prohibit this behaviour with this requirement:

> 2. w does not happen before any other write w' (to x) that happens before r.

fashion-at-cost 52 days ago
In this context, ”happens before” is not a wall-clock colloquialism but in fact a term of art that is specifically described as:

> The happens before relation is defined as the transitive closure of the union of the sequenced before and synchronized before relations.

Without synchronization, the degenerate sequencing is perfectly valid.
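
The canonical illustration (a sketch of the well-known busy-wait pitfall, not code from the project under discussion):

  package main

  var done bool

  func main() {
      go func() { done = true }() // plain write, no synchronization
      // No happens-before edge orders the write against these reads, so
      // a compliant implementation may hoist the load and spin forever.
      for !done {
      }
  }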

That’s the problem with being “clever” - you miss a definition and your entire mental model is busted.

nebulous1 52 days ago
So, in the situation in the OP's comment, with synchronized writes and unsynchronized reads, what is this "happens before" stipulation prohibiting?
fashion-at-cost 52 days ago
A single reader thread cannot read a value written by a write, then later read a value written by a write that happens before the first write.
nebulous1 52 days ago
Thanks!
kiitos 52 days ago
Yep. And even if you were to lock down the implementation of the compiler, the version of Go you're using, the specific set of hardware and OS that you build on and deploy to, and so on -- that still doesn't indemnify you against arbitrary or unexpected behavior, if your code violates the memory model!
senderista 52 days ago
Oh, so cycleMap is a non-threadsafe structure? I don't know golang so I didn't realize this.
kiitos 52 days ago
Nothing in Go is thread-safe, unless explicitly documented otherwise. Some examples of explicitly-documented-otherwise stuff are in package sync and package sync/atomic.

cycleMap is definitely not thread-safe. The authors knew this, to some extent, because they synchronized writes via an adjacent mutex. But they didn't synchronize reads thru the same mutex, which is the issue.
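
The minimal fix is to route reads through the same mutex. A sketch (an RWMutex here, though a plain Mutex is fine too; types assumed as before):

  package ids

  import "sync"

  var (
      mu       sync.RWMutex
      cycleMap = map[uint8]string{}
  )

  func NextID(cycle uint8, node string) {
      mu.Lock()
      defer mu.Unlock()
      cycleMap[cycle] = node
  }

  func ExpandID(cycle uint8) string {
      mu.RLock() // reads now synchronize with writes via the same mutex
      defer mu.RUnlock()
      return cycleMap[cycle]
  }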

senderista 52 days ago
OK, this doesn't inspire confidence then.
deazy 53 days ago
Looking at the DiceDB code base, I have a few questions regarding its design. I'm asking this to understand the project's goals and design rationale; anyone, feel free to help me understand this.

I could be wrong, but the primary in-memory storage appears to be a standard Go map with locking. Is this a temporary choice for iterative development, and is there a longer-term plan to adopt a more optimized or custom data structure?

I find DiceDB's reactivity mechanism very intriguing, particularly the "re-execution" of the entire watch command (i.e. re-running GET.WATCH mykey on key modification); it's a notable design choice.

From what I understand, the Eval func executes client-side commands; this seems to lay the foundation for more complex watch commands that can be evaluated before sending notifications to clients.

But I have the following questions.

What is the primary motivation behind re-executing the entire command, as opposed to simply notifying clients of a key change (as in Redis Pub/Sub or streams)? Is the intent to simplify client-side logic by handling complex key dependencies on the server?

Given that re-execution seems computationally expensive, especially with multiple watchers or more complex (hypothetical) watch commands, how are potential performance bottlenecks addressed?

How does this "re-execution" approach compare in terms of scalability and consistency to more established methods like server-side logic (e.g., Lua scripts in Redis) or change data capture (CDC)?

Are there plans to support more complex watch commands beyond GET.WATCH (e.g. JSON.GET.WATCH), and how would re-execution scale in those cases?

I'm curious about the trade-offs considered in choosing this design and how it aligns with the project's overall goals. Any insights into these design decisions would help me understand its use-cases.

Thanks

deazy 48 days ago
I was hoping for a response, but no one bothered. I had noted the following when I made that comment and will just wrap up from my end so it can be used by others for reference later.

I'm skeptical that the re-execution approach can scale for complex queries; the latency and throughput improvements would be offset by the computational cost and bottlenecks introduced to achieve its reactivity mechanism (query subscription). This might not work at scale and may only serve niche use cases.

There are various ways throughput and latency for KV stores can be improved, so the bar is really high here.

The messaging around Dice seems unclear and confusing in describing its purpose/use-cases over alternatives, or how it achieves them, which could just be how it's marketed. But it seems to be a collection of ideas and a WIP project.

I think reducing data-fetching complexity and complex key dependencies for end clients could be appealing, and it would be great to have it at the KV-store level, but there is no reason this type of reactivity can't be implemented on top of various clients for existing KV stores (like Redis). And basic WATCH with transactions is even offered out of the box in them.

Deno KV seems nice, but it's vendor-locked. There are also many others like Dragonfly, Valkey, etc.; Redis could still work, and even something over SQLite can work. Deno has a self-hosted KV on top of SQLite - https://github.com/denoland/denokv

Also, DiceDB's creator gave this talk:

https://hasgeek.com/rootconf/2024/sub/how-we-made-dicedb-a-t...

From that and the thread so far, it seems they want to make a super-cache by building a realtime multi-threaded KV store, improving latency and reducing its read load via its reactivity mechanism, solving the problem of cache invalidation.

Not sure how this will be achieved, but there is no harm in trying. From what is said and shared, the rationale behind this design and its tradeoffs is not clear; the code could be fixed/improved, but providing clarity on this is essential for adoption.

bdcravens 53 days ago
Is there a single sentence anywhere that describes what it actually is?
DrammBA 53 days ago
I've seen this more and more with software landing pages: they are somehow so deep into developing/marketing that they totally forget to say what the thing actually is or does. That's why you show it to family and friends first, to get some fresh eyes before publishing the site.
lucianbr 53 days ago
In a similar vein, lots of software is Mac-only, but omits to say this anywhere. You just get to the downloads page and see that there are only Mac packages.

As if nobody ever uses anything else.

threatripper 53 days ago
Why should they care about non-users? Offering or even mentioning choice only creates uncertainty and confusion in potential customers.
SOLAR_FIELDS 52 days ago
How hard is it to add two sentences that say only macOS is supported now and in the near future? I'd rather do that than annoy future potential customers who might have a Mac or plan to get one at some point.
johnisgood 53 days ago
Looks like a Redis clone. The benchmarks compare it to Redis.

Description from GitHub:

> DiceDB is an open-source, fast, reactive, in-memory database optimized for modern hardware. Commonly used as a cache, it offers a familiar interface while enabling real-time data updates through query subscriptions. It delivers higher throughput and lower median latencies, making it ideal for modern workloads.

pcthrowaway 53 days ago
Not 100% a Redis clone, but the API appears to be very similar to Redis of 10 years ago, with some additions that Redis doesn't have. See the list of commands: https://dicedb.io/get-started/installation/
johnisgood 53 days ago
"clone" was not the right term, maybe Redis-look-alike, or something along those lines, something that can be compared to Redis, at least.
jlengrand 52 days ago
I picked that up purely because of the logo / website palette / name choice combinations. Interestingly, not sure it's a good thing.
arpitbbhayani 53 days ago
Arpit here.

DiceDB is an in-memory database that is also reactive. So, instead of polling the database for changes, the database pushes the resultset if you subscribe to it.

We have a similar set of commands as Redis, but are not Redis-compliant.

nebulous1 53 days ago
Would "key-value" not have a place in the description?

This application may be very capable, but I agree with the person saying that its use-case isn't clear on the home page; you have to go deeper into the docs. "Smarter than a database" also seems kind of debatable.

remram 53 days ago
This is a lot clearer than any information I found anywhere else. There wasn't any room on your website, README, or docs for this summary?
bdcravens 53 days ago
This is a common enough pattern that it should have a name, where the submitted link isn't clear, but a single comment on HN is.
edoceo 52 days ago
The Cravens Conjecture
arpitbbhayani 53 days ago
It is right there on the landing page. But, let me highlight it a bit.
wesselbindt 53 days ago
When I ctrl+F the landing page for key and value, I find nothing. Reading it in full, I also come up empty handed. Which part of the landing page implies it's a key value store?
jstummbillig 53 days ago
They did not say anything about key/value in their message.
wesselbindt 53 days ago
You are absolutely right, my bad.
dagss 53 days ago
IMO, replace "More than a Cache. Smarter than a Database." with an actual description.

The saying is cute but does not really convey the information the reader is after. And that spot is where you want people to immediately understand what it is.

arpitbbhayani 53 days ago
I changed that :) now the value proposition is right at the top.
dagss 53 days ago
Still not clear to me what it is. Only the features it has, without knowing what it is.

Like, imagine a page that only said "SuperTransport -- 0 to 100 in 5 seconds", but it is not clear to the reader if it is a car or a horse or a plane or a parcel service...

... and the reader has to go and guess "hmm, guess due to the acceleration it is probably a car or a motorbike -- wonder if it is for sale or for rent?".

Just put "fast on-premise key/value database" in the big font that was there -- if that is what it is. That is purely a guess from me, no idea if that is what it is.

aloknnikhil 53 days ago
In the list of things that DiceDB is at the top, you should add "an in-memory database". Pretty critical thing to leave out right at the top.
pcthrowaway 53 days ago
in-memory key-value store seems much more accurate
ofrzeta 53 days ago
So like RethinkDB? https://rethinkdb.com/
dkh 53 days ago
Not a month goes by where I don’t remember it at least once and realize that I still miss it.

This seems more like Redis though

Aeolun 53 days ago
It's kinda surprising it was never really continued, but the performance was just too bad, even if the interface was fantastic.
ofrzeta 53 days ago
Why don't you run the open source version?
NetOpWibby 53 days ago
I did for about a year, and the issue is that the ORMs have issues and maintainers don't feel the need to make changes.
Apofis 52 days ago
Question, how does DiceDB differ from Redis pub/sub? https://redis.io/docs/latest/develop/interact/pubsub/
lucianbr 53 days ago
No. I had the exact same problem.

Feels arrogant. "Of course you already know what this is, how could you not?"

goodpoint 53 days ago
The video is also an advertisement rather than a real thing.
rvnx 53 days ago
A Redis-inspired server in Go
adhamsalama 53 days ago
Can't wait to feel the impact of garbage collection in my fast cache!
arpitbbhayani 53 days ago
We had a similar thought, but it is not as bad as we thought.

We have the benchmarks, and we will be sharing the numbers in subsequent releases.

But there is still a chance that it may come back to bite us and limit us to a smaller scale, and we are ready for it.

raggi 53 days ago
Vertically scaling this language also gets into painful territory quite often. I've had to work around this problem before, but never with a thing that felt like this: https://github.com/tailscale/tailscale/blob/main/syncs/shard...
ganarajpr 53 days ago
Why are you guys building Yet Another DB? Not trying to dissuade you, but what are you trying to solve that the plethora of DBs currently on the market in the same space have not solved? This should be highlighted on your landing page, and since your primary audience is other devs (the toughest crowd to sell to), be very specific about what value your product brings over the other choices.
_bin_ 53 days ago
it might help to add 99th-percentile numbers to the landing page; it would do a better job of showing GC impact.
arpitbbhayani 53 days ago
Nope. It started as a Redis clone. We are on a different trajectory now. Chasing different goals.
bob1029 53 days ago
> Chasing different goals.

What are those goals? I was struggling to interpret a meaningful roadmap from the issue & commit history.

remram 53 days ago
Secret goals are no selling point.
bdcravens 53 days ago
Even clicking through to the Github, after reading the "What is DiceDB?", I'm still not very clear. It feels more like marketing than information.

"What is DiceDB? DiceDB is an open-source, fast, reactive, in-memory database optimized for modern hardware. Commonly used as a cache, it offers a familiar interface while enabling real-time data updates through query subscriptions. It delivers higher throughput and lower median latencies, making it ideal for modern workloads."

remram 53 days ago
The docs do, the site is useless.

> DiceDB is an open-source, fast, reactive, in-memory database optimized for modern hardware.

A Redis-like database with a Redis-like interface. No info about drop-in compatibility, I assume no.

ekianjo 53 days ago
Seems like a key-value store, with an ability to watch/subscribe to monitor changes to values in real time.
arpitbbhayani 53 days ago
Yes. With DiceDB, clients can "WATCH" the output of commands, and upon a change in data, the resultset is streamed to the subscribers.
mrbluecoat 53 days ago
"A key store, with an ability to watch/subscribe to monitor for the change of values in real time."

Should be the first sentence on their website and repo.

siddharthgoel88 53 days ago
Drop-in replacement for Redis.
arpitbbhayani 53 days ago
Nope. We are not Redis-compliant.
schmookeeg 53 days ago
Using an instrument of chance to name a data store technology is pretty amusing to me.
bufferoverflow 53 days ago
No chance if we live in a deterministic universe.
dkh 53 days ago
This is essentially what all in-memory data stores have always been

Kinda refreshing to see someone own it and run with it

cozzyd 53 days ago
DiceDB sounds like the name of a joke database that returns random results.
BoorishBears 53 days ago
No it doesn't.
graynk 52 days ago
Yes it does.

Seems we're in a stalemate; where do we go from here?

BoorishBears 52 days ago
OP continues ignoring static from the people who jump to shoddy conclusions.
kreddor 53 days ago
It was my first thought as well, before reading the landing page.
BoorishBears 53 days ago
Yeah, and I'm sure someone clicked it thinking it was a DB for EA's Dice Studios.

If you expose something to enough people you'll get some unreasonable takes and interpretations of it. It's important to ignore them.

graynk 52 days ago
> If you expose something to enough people you'll get some unreasonable takes and interpretations of it. It's important to ignore them.

Quite literally the main function of dice is to give you random numbers. Looking over the website and readme, I could not surmise why they would call it DiceDB except for "it sounds nice", but it's absolutely not unreasonable to look at the name and have the thought "it's probably a joke project about random results".

BoorishBears 52 days ago
There are literal mountains of software named for no particular reason (let alone sounding nice), or named by origins no person would ever infer without digging in deeper.

Reasonable people realize this and won't discard a project as a joke because of such a tenuous connection, and the fact they've gotten traction is a testament to that.

graynk 51 days ago
> Reasonable people realize this and won't discard a project as a joke

I agree. Nobody said anything about discarding anything, though. Only that it's a reasonable first thought to have upon hearing the name. And it is.

weekendcode 53 days ago
From the benchmarks on 4 vCPU and num_clients=4, the numbers don't look much different.

Reactive looks promising, but doesn't look very useful in the real world for a cache. For example, a client subscribes to something and the machine goes down; what happens to reactivity?

alexey-salmin 53 days ago

  | Metric               | DiceDB   | Redis    |
  | -------------------- | -------- | -------- |
  | Throughput (ops/sec) | 15655    | 12267    |
  | GET p50 (ms)         | 0.227327 | 0.270335 |
  | GET p90 (ms)         | 0.337919 | 0.329727 |
  | SET p50 (ms)         | 0.230399 | 0.272383 |
  | SET p90 (ms)         | 0.339967 | 0.331775 |
UPD Nevermind, I didn't have my eyes open. Sorry for the confusion.

Something I still fail to understand is where you can actually spend 20ms while answering a GET request in a RAM keyvalue storage (unless you implement it in Java).

I never gained much experience with existing open-source implementations, but when I was building proprietary solutions at my previous workplace, the in-memory response time was measured in tens to hundreds of microseconds. The lower bound of latency is mostly defined by syscalls, so using io_uring should in theory result in even better timings, even though I never got to try it in production.

If you read from NVMe AND also do the erasure recovery across 6 nodes (lrc-12-2-2), then yes, you get into tens of milliseconds. But seeing these numbers for a single-node RAM DB just doesn't make sense, and I'm surprised everyone treats them as normal.

Does anyone have experience with low-latency, high-throughput open-source key-value stores? Any specific implementation to recommend?

davekeck 53 days ago
> Something I still fail to understand is where you can actually spend 20ms

Aren’t these numbers .2 ms, ie 200 microseconds?

ajnin 53 days ago
I had the same reaction as you. And that's for 4 simultaneous clients, too, for a single client you get 3159 ops/s (from https://dicedb.io/benchmarks/). I'm not too familiar with in-memory databases in general but I would have expected figures in the millions on modern hardware. Makes me feel there's some hidden bottleneck somewhere and the benchmarks are not purely measuring the performance of the software.
esafak 53 days ago
They also sounded fishy to me. I'd expect closer to 10x as much throughput with Redis: https://redis.io/docs/latest/operate/oss_and_stack/managemen...
Kerbonut 53 days ago
Looks like your units are in ms, so 0.20 ms.
alexey-salmin 53 days ago
oh thank you, it's just me being blind
OutOfHere 53 days ago
In-memory caches (lacking persistence) shouldn't be called a database. It's not totally incorrect, but it's an abuse of terminology. Why is a Python dictionary not an in-memory key-value database?
ac130kz 53 days ago
Any reason to use this over Valkey, which is now faster than Redis and community driven? Genuinely interested.
hp77 53 days ago
DragonflyDB is also in that race, isn't it?
ac130kz 53 days ago
From what I looked at in the past, they seemed better on paper by comparing themselves to a very old version of Redis in a rigged scenario (no clustering or multithreading applied, despite Dragonfly having multithreading enabled), and they are a lot worse in terms of code updates. Maybe that's different today, but I'm more keen on using Valkey.
hp77 53 days ago
Does Redis support multithreading? Doesn't it use a single-threaded event loop, while DragonflyDB's basic version has multithreading enabled and a shared-nothing architecture? Also, I found this latest comparison between Valkey and DragonflyDB: https://www.dragonflydb.io/blog/dragonfly-vs-valkey-benchmar...
romange 53 days ago
Valkey/Redis support offloading of I/O processing to special I/O threads.

Their goal is to unload the "main" thread from performing I/O-related tasks like socket reading and parsing, so it can spend its precious time only on datastore operations. This creates an asymmetrical architecture, with I/O threads scaling to any number of CPUs, but the main thread being the only one that touches the hashtable and its entries. It helps a lot in cases where datastore operations are relatively lightweight, like SET/GET with short string values, but its impact will be insignificant for CPU-heavy operations like Lua EVALs, sorted sets, lists, MGET/MSET, etc.
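
(Concretely, that offloading is the io-threads setting in redis.conf / valkey.conf; offloading reads as well is opt-in:)

  io-threads 4
  io-threads-do-reads yes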

ac130kz 53 days ago
IO multithreading is still not fully there; there were significant improvements within the first couple of iterations, and hopefully it will improve further. I see that Dragonfly uses io_uring, which is not recommended by Google due to security vulnerabilities.
romange 53 days ago
Dragonfly supports both epoll and io_uring, and the polling engine choice is quite orthogonal to its shared-nothing architecture. I do not think that Valkey or Redis will become fully multi-threaded any time soon, as such a change would require building something like Dragonfly (or using locks, which historically were a big NO for Redis).

(Author of Dragonfly here)

hp77 53 days ago
I read Google is limiting the use of io_uring, but I have seen io_uring being used in other databases; TigerBeetle is another DB that uses io_uring.
jorangreef 51 days ago
Joran from TigerBeetle here.

Yes, per [1] Google did restrict their use of io_uring on “production servers“, and in Android, ChromeOS etc.

However, within that same post, and what is often missed when that post is quoted, is that Google wrote that they did in fact “consider [io_uring] safe” for use by trusted components:

> For these reasons, we currently consider it safe only for use by trusted components.

A database like TigerBeetle is typically deployed in a trusted environment, and is such a trusted component.

[1] https://security.googleblog.com/2023/06/learnings-from-kctf-...

losvedir 53 days ago
I didn't see it in the docs, but I'd want to know the delivery semantics of the pubsub before using this in production. I assume best effort / at most once? Any retries? In what scenarios will the messages be delivered or fail to be delivered?
remram 53 days ago
This seems orders of magnitude slower than Nubmq which was posted yesterday: https://news.ycombinator.com/item?id=43371097
arpitbbhayani 53 days ago
Different tool. The metrics I am optimizing for are different, hence I wrote a separate utility. It may not be the most optimized one, but I am using it to measure all things DiceDB and will be using it to optimize DiceDB further.

ref: https://github.com/DiceDB/membench

huntaub 53 days ago
What are some example use cases where having the ability for the database to push updates to an application would be helpful (vs. the traditional polling approach)?
zupa-hu 53 days ago
One example is when you want to display live data on a website. Could be a dashboard, a chat, or really the whole site. Polling is both slower and more resource hungry.

If it is built into your language/framework, you can completely ignore the problem of updating the client, as it happens automatically.

Hope that makes sense.

huntaub 53 days ago
Interesting -- is that normally done with database updates + polling vs. something purpose-built?
zupa-hu 53 days ago
Not sure how many such solutions there are out there so no idea about the norm. I doubt polling is a real option.

You may want to search for realtime databases.

alexpadula 52 days ago
15655 ops a second on a Hetzner CCX23 machine with 4 vCPU and 16GB RAM is rather slow for an in-memory database, I hate to say it. You can't blame that on network latency; for example, supermassivedb.com is written in Go and achieves magnitudes more, actually 20x, and it's persisted. I must investigate the bottlenecks with Dice.
rebolek 53 days ago
- proudly open source. cool!
- join discord. YAY :(
throwaway2037 53 days ago
FYI: Here is the creator and maintainer's profile: https://github.com/arpitbbhayani

Is there a plan to commercialise this product? (Offer commercial support, features, etc.) I could not find anything obvious from the home page.

sidcool 53 days ago
Is Arpit the system design course guy?
arpitbbhayani 53 days ago
Yes. I do run a sys design course on weekends.
Aeolun 53 days ago
I feel like this needs a ‘Why DiceDB instead of Redis or Valtio’ section prominently on the homepage.
dkh 53 days ago
Did you mean Valkey, or has the js community now managed to shoehorn an entire high-availability database server into a javascript object proxy?
Aeolun 52 days ago
It’s only a matter of time xD but yes, I meant Valkey.

I was typing that out and felt like something was wrong but couldn’t put my finger on what.

DrammBA 53 days ago
I love the "Follow on twitter" link with the old logo and everything. They probably used a template that hasn't been updated recently, but I'm choosing to believe it's actually a subtle sign of protest or resistance.
spiderfarmer 53 days ago
Just use Bluesky. It’s the better middle finger.
arpitbbhayani 53 days ago
I prefer that over the X icon.
datadeft 53 days ago
Is this suffering from the same problems as Redis when trying to horizontally scale?
weekendcode 53 days ago
I guess yes.
re-lre-l 53 days ago
> For Modern Hardware fully utilizes underlying core to get higgher throughput and better hardware utilization.

Would be great to disclose details of this one. I'm interested in how DiceDB achieves higher throughput.

robertlagrant 52 days ago
> fully utilizes underlying core to get higgher throughput and better hardware utilization

FYI this is a misspelling of "higher"

nylonstrung 53 days ago
Who is this for? Can you help me understand why and when I'd want to use this in place of Redis/Dragonfly?
deadbabe 53 days ago
I think Postgres can do everything this does and better if you use LISTEN/NOTIFY.
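
A hedged sketch of that pattern in Go with lib/pq (connection string, channel name, and payload are placeholders):

  package main

  import (
      "fmt"
      "time"

      "github.com/lib/pq"
  )

  func main() {
      conninfo := "postgres://user:pass@localhost/db?sslmode=disable" // placeholder
      listener := pq.NewListener(conninfo, time.Second, time.Minute, nil)
      if err := listener.Listen("key_changed"); err != nil {
          panic(err)
      }
      // The writing side (app code or a trigger) runs:
      //   NOTIFY key_changed, 'mykey';
      for n := range listener.Notify {
          if n == nil {
              continue // lib/pq sends nil after a reconnect
          }
          fmt.Printf("channel=%s payload=%s\n", n.Channel, n.Extra)
      }
  }

NOTIFY is fire-and-forget for disconnected listeners, though, so the delivery-semantics questions raised elsewhere in the thread apply here too.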
999900000999 53 days ago
I like it!

Any way to persist data in case of reboots?

That's the only thing missing here.

Is Go the only SDK?

lucifercr7 53 days ago
Snapshot functionality is WIP, which can be utilised to persist and replay data between reboots. For now, the Golang SDK is the only one; more SDKs are to be added soon.
retropragma 53 days ago
Why would I use this over keyspace notifications in Redis?
dkh 53 days ago
Based on this thread, I'm not sure you would want to use this over keyspace notifications, but I will also say that there comes a point in the maturity of a system when keyspace notifications become a complicated, unreliable, resource-heavy nightmare. They work fine if your needs and scale are limited, but it's definitely not what you want if you're handling lots of frequent changes across craploads of keys, with complicated logic for who needs them and how they get routed to them, and where it matters whether the notification is successfully received.

But certainly you could build something to handle these and most other needs in this realm with mostly just redis, using streams for what needs to be more robust, in tandem with pub/sub, keyspace notifs, etc. in the areas they are suited to.
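
For comparison, the keyspace-notification setup I'm describing looks roughly like this with go-redis (flags, DB index, and key pattern are illustrative):

  package main

  import (
      "context"
      "fmt"

      "github.com/redis/go-redis/v9"
  )

  func main() {
      ctx := context.Background()
      rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

      // "KEA" = keyspace + keyevent notifications for all event classes.
      // These ride on fire-and-forget pub/sub: missed messages are lost.
      rdb.ConfigSet(ctx, "notify-keyspace-events", "KEA")

      sub := rdb.PSubscribe(ctx, "__keyspace@0__:*")
      defer sub.Close()

      for msg := range sub.Channel() {
          // msg.Channel carries the key, msg.Payload the event (e.g. "set").
          fmt.Println(msg.Channel, msg.Payload)
      }
  }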

rednafi 52 days ago
Database as a transport?
spiderfarmer 53 days ago
DiceDB is an in-memory, multi-threaded key-value DBMS that supports the Redis protocol.

It’s written in Go.

arpitbbhayani 53 days ago
Nope. We do not support the Redis protocol :)
spiderfarmer 53 days ago
Did you remove support? Cause Google found mentions of it on your website.
ahazred8ta 53 days ago
Heh. Redis protocol support is still listed on their Linkedin. https://www.google.com/search?q=%22DiceDB%22%20%22supports%2...
curtisszmania 53 days ago
[dead]
cytocync 53 days ago
[dead]
Clemolomo 53 days ago
[dead]
theverg 53 days ago
[flagged]
dagss 53 days ago
I am not sure if this is satire or not...
bitlad 53 days ago
I think the performance benchmark you have done for DiceDB is fake.

These are the real numbers - https://dzone.com/articles/performance-and-scalability-analy...

They do not match your benchmarks.

arpitbbhayani 53 days ago
The benchmark tool is different. I mentioned this on my benchmark page.

We had to write a small benchmark utility (membench) ourselves because the long-term metrics that we are optimizing need to be evaluated in a different way.

Also, the scripts, utilities, and infra configurations are mentioned. Feel free to run it.