Here's a post to the git mailing list from Martin Uecker describing the same idea back in 2005: https://lore.kernel.org/git/20050416173702.GA12605@macavity/ . From the tone of the email it sounds like he didn't consider the idea new even then:
> The chunk boundaries should be determined deterministically from local properties of the data. Use a rolling checksum over some small window and split the file if it hits a special value (0). This is what the rsyncable patch to zlib does.
He calls it a merkle hash tree.
Edit: here's one that's one day earlier from C. Scott Ananian: https://lore.kernel.org/git/Pine.LNX.4.61.0504151232160.2763...
> We already have the rsync algorithm which can scan through a file and efficiently tell which existing chunks match (portions of) it, using a rolling checksum. (Here's a refresher: http://samba.anu.edu.au/rsync/tech_report/node2.html ). Why not treat the 'chunk' as the fundamental unit, and compose files from chunks?
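To make the boundary rule concrete, here is a minimal Python sketch of content-defined chunking. The window size, the mask, and SHA-256 over a sliding window are illustrative stand-ins for a real O(1) rolling checksum (Adler-32 as in rsync, buzhash, Rabin fingerprints); the cut rule ("split when the checksum hits a special value") is the same.

    import hashlib, os

    WINDOW = 64            # bytes of context the "rolling" hash looks at
    MASK = (1 << 13) - 1   # cut when the low 13 bits are zero -> ~8 KiB average chunks

    def chunk(data: bytes) -> list[bytes]:
        # For clarity this rehashes the last WINDOW bytes at every position
        # instead of updating a true rolling checksum incrementally.
        chunks, start = [], 0
        for i in range(WINDOW, len(data)):
            h = int.from_bytes(hashlib.sha256(data[i - WINDOW:i]).digest()[:4], "big")
            if h & MASK == 0:              # the "special value" -> boundary here
                chunks.append(data[start:i])
                start = i
        chunks.append(data[start:])        # remainder becomes the last chunk
        return chunks

    blob = os.urandom(1 << 19)             # 512 KiB of random sample data
    pieces = chunk(blob)
    print(len(pieces), "chunks, avg", (1 << 19) // len(pieces), "bytes")

Because each boundary depends only on the bytes inside the window, inserting data near the start of a file shifts at most a chunk or two; the rest of the chunks (and their hashes) stay identical, which is what makes the dedup and diff tricks work.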
I am working on related things[r], using Merkle-fied LSM trees. Ink&Switch are doing things that very closely resemble a Merkle-fied LSM[h], although they are not exactly LSM. I would not be surprised if someone else is doing something similar in parallel. The tricks are very similar to Prollies, but with LSM trees instead of B-trees.
That reminds me of my younger years, when I "invented" the Causal Tree[c] data structure. It was later reinvented as RGA (Replicated Growable Array[a]), the Timestamped Insertion Tree and, I believe, YATA. All seem to be variations of a very, very old revision-control data structure named "weave"[w].
Recently I improved CT to the degree that warranted a new algorithm name (DISCONT[d]). It is fundamentally the same, but much cheaper. Probably we should see all these "inventions" as improvements. All the Computer Science basics seem to have been invented in the 70s, or the 80s at the latest.
[w]: https://docs.rs/weave/latest/weave/
[r]: https://github.com/gritzko/librdx
[d]: https://github.com/gritzko/go-rdx/blob/main/DISCOUNT.md
[h]: https://www.inkandswitch.com/keyhive/notebook/05/
[c]: https://dl.acm.org/doi/10.1145/1832772.1832777
[a]: https://pages.lip6.fr/Marc.Shapiro/papers/RR-7687.pdf (links to the authors of RGA)
- FIRST (Float, Int, Reference, String, Term): Last-Write-Wins based on the timestamp;
- PLEX:
  - Tuples: per-entry LWW or recursive,
  - Linear: DISCONT (CT/RGA type),
  - Eulerian: per-key LWW or recursive,
  - Multiplexed (version vectors, counters): per-author LWW or recursive.
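For readers who don't live in CRDT land: "LWW" above is last-write-wins. Here is a minimal, generic sketch of LWW merging keyed by timestamp, with the author id as a deterministic tiebreaker; this is illustrative only and not lifted from the RDX/librdx code:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Versioned:
        value: object
        timestamp: int   # logical clock, e.g. Lamport time
        author: str      # breaks timestamp ties so every replica agrees

    def lww_merge(a: Versioned, b: Versioned) -> Versioned:
        return max(a, b, key=lambda v: (v.timestamp, v.author))

    def merge_maps(m1: dict, m2: dict) -> dict:
        # per-key LWW merge of two replicas of a map
        out = dict(m1)
        for k, v in m2.items():
            out[k] = lww_merge(out[k], v) if k in out else v
        return out

    r1 = {"title": Versioned("draft", 3, "alice")}
    r2 = {"title": Versioned("final", 5, "bob"), "tags": Versioned(("crdt",), 4, "bob")}
    print(merge_maps(r1, r2))   # "final" wins on timestamp; "tags" exists only in r2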
https://github.com/attic-labs/noms/blob/master/doc/intro.md#...
The reason I thought a new name was warranted is that a prolly tree stores structured data (a sorted set of k/v pairs, like a b-tree), not blob data. And it has the same interface and utility as a b-tree.
Is it a huge difference? No. A pretty minor adaptation of an existing idea. But still different enough to warrant a different name IMO.
I have a related cryptosystem that I came up with, but is so obvious I'm sure someone else has invented it first. The idea is to back up a file like so: first, do a rolling-hash based chunking, then encrypt each chunk where the key is the hash of that chunk. Then, upload the chunks to the server, along with a file (encrypted by your personal key) that contains the information needed to decrypt each chunk and reassemble them. If multiple users used this strategy, any files they have in common would result in the same chunks being uploaded. This would let the server provider deduplicate those files (saving space), without giving the server provider the ability to read the files. (Unless they already know exactly which file they're looking for, and just want to test whether you're storing it.)
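What you're describing is usually called convergent encryption; Tahoe-LAFS uses a variant of it. Here is a minimal sketch of the per-chunk step, assuming SHA-256 for the content-derived key and AES-GCM with a fixed nonce (acceptable only because each key encrypts exactly one plaintext); the manifest you encrypt under your personal key would then list the (chunk_id, key) pairs in order:

    import hashlib
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

    NONCE = b"\x00" * 12   # fixed: each content-derived key encrypts exactly one plaintext

    def convergent_encrypt(chunk: bytes):
        key = hashlib.sha256(chunk).digest()        # 32-byte key derived from the content
        chunk_id = hashlib.sha256(key).digest()     # name the chunk gets on the server
        ciphertext = AESGCM(key).encrypt(NONCE, chunk, None)
        return chunk_id, key, ciphertext

    def convergent_decrypt(key: bytes, ciphertext: bytes) -> bytes:
        return AESGCM(key).decrypt(NONCE, ciphertext, None)

    chunk = b"the same bytes on two different machines"
    id1, key1, ct1 = convergent_encrypt(chunk)
    id2, key2, ct2 = convergent_encrypt(chunk)
    assert ct1 == ct2                               # identical ciphertexts -> server can dedup
    assert convergent_decrypt(key1, ct2) == chunk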
Tangent: why is downloading a large file such a bad experience on the internet? If you lose internet halfway through, the connection is closed and you're just screwed. I don't think it should be a requirement, but it would be nice if there were some protocol understood by browsers and web servers that could break up and reassemble a download request into a prolly tree, so I could pick up downloading where I left off, or only download what changed since the last time I downloaded something.
This comment could only come from someone who never downloaded large files from the internet in the 1990s.
Downloading today feels like heaven to me.
Watching video from YouTube, Facebook, etc., if accessed via those websites running their JavaScript, usually uses the Range header. Some people refer to the "break up and re-assembly" as "progressive download".
DASH is used sometimes, but not on the majority of videos I encounter. Of course this can change over time. The point is that downloading large files today, e.g., from YouTube, Facebook, etc., has been relatively fast and easy compared to the 90s, when speeds were slower and interruptions were more common, even though these websites might be changing how they serve these files behind the scenes and software developers gravitate toward complexity.
Commercial "streaming", e.g., ESPN, etc., might be intentionally difficult to download and might involve "reversing" and "glueing" but that is not what I'm describing.
The first phase was severely asynchronous, with a popup mentioning "the next few minutes", which turned out to be hours. Manually refreshing the page showed a cringeworthy deletion rate of about 500 messages per minute.
But at least it worked; the second phase was more special, with plenty of arbitrary stopping and outright lies. After repeated purging attempts I finally got an empty bin achievement page on my phone but I found over 50K messages in the trash on my computer the next day, where every attempt to empty the trash showed a very slow progress dialog that reported completion but actually deleted only about 4K messages.
I don't expect many JavaScript card castles of the complexity of GMail message handling to be tested on large jobs; at least old FTP and web servers were designed with high load and large files in mind.
HTTP Range Requests solve this without any clever logic, if mutually supported.
Understated comment in the thread.
The very first search hit on Google is none other than Mozilla's page on range requests.
https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/Ran...
Here's the leading summary from that page.
> An HTTP Range request asks the server to send parts of a resource back to a client. Range requests are useful for various clients, including media players that support random access, data tools that require only part of a large file, and download managers that let users pause and resume a download.
Here's the relevant RFC: https://www.rfc-editor.org/rfc/rfc7233 (HTTP/1.1 Range Requests).
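Resuming with a Range request is only a few lines in practice. A rough sketch in Python using the requests library, assuming the server honors Range (the URL and filename are placeholders):

    import os
    import requests  # pip install requests

    def resume_download(url: str, path: str) -> None:
        have = os.path.getsize(path) if os.path.exists(path) else 0
        headers = {"Range": f"bytes={have}-"} if have else {}
        with requests.get(url, headers=headers, stream=True, timeout=30) as r:
            r.raise_for_status()
            if have and r.status_code != 206:   # server ignored Range: start over
                have = 0
            with open(path, "ab" if have else "wb") as f:
                for block in r.iter_content(chunk_size=1 << 16):
                    f.write(block)

    # resume_download("https://example.com/big.iso", "big.iso")

The "only download what changed" half is where something prolly-tree-shaped (or at least per-chunk hashes) would actually be needed; Range alone only gives you byte offsets.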
What's your threat model? This has "interesting"[3] properties. For example, given a file, the provider can figure out who has the file. Or, given a file, an arbitrary user can figure out if some other user already has the file. Users may even be able to "teleport" files to each other, like the infamous Dropbox Dropship[2].
I suspect the reasons no one has tried this are many-fold: (1) Most providers want to store plaintext. Those few providers who don't want to store plaintext, whether for secrecy or deniability reasons, also don't want to store anything else correlatable. (2) Space is cheap. (3) Providers like being able to charge for space. Since providers sell space at a markup, they almost want you to use more space, not less.
[1]: https://en.wikipedia.org/wiki/AES-GCM-SIV [2]: https://en.wikipedia.org/wiki/Dropship_(software) [3]: "Interesting" is not a word you want associated with your cryptography usage, to say the least.
This gives the service provider the ability to see who is storing the same files, however, which can be sensitive information. Moreover, once they know/decrypt a file for one user, they know that file for all users.
The Tahoe-LAFS folks have already thought about attacks on the scheme you described: https://tahoe-lafs.org/hacktahoelafs/drew_perttula.html
What's the name of the paper you're alluding to? I'm not familiar with it and it sounds interesting
Is there a way to search for a structure by properties? E.g. O(1) lookups, O(log(n)) inserts or better, navigates like a tree (just making this up), etc?
I never went to high school or college or anything.
I can't tell you how many times I come up with something, only to discover years later that someone else came up with the same idea later (or sometimes earlier), branded it, and marketed it.
It's almost a pity that computers are as fast as they are and that fancy data structures are so rarely needed, because plain "arrays" and "maps/dicts/associative arrays/whatever" solve so many problems faster than we need anyway. I don't get to pull out the bag of tricks very often. But then again, when I do, it's because it's a life saver and the difference between success and failure, so maybe it all balances out.
Developers have less control over the particulars (and may miss optimization opportunities if they can make guarantees about the shape of the problem) but it benefits the common case.
Googling "Prolly Trees", there's not much and this article is one of the top results.
If it were not probabilistic, then balance would be guaranteed in all cases. That typically means the structure stores balancing information somewhere so it can detect when something is unbalanced and repair it. In this data structure we're just hashing the content without really caring about the current balance, and it turns out that for most inputs it will be fine.
In principle, if your data were specially crafted to exploit the specific hash function (and salt), you could get an aberrant case like a million entries in a single b-tree node, or a million b-tree nodes with just one entry each. But unless you intentionally exploit the hash function, the chance of this is vanishingly small.
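To put a number on "vanishingly small": if each entry independently ends a node with probability p, node sizes are geometrically distributed, so the chance of an extreme node shrinks exponentially. A back-of-the-envelope check (p = 1/64 is just an illustrative target node size, not any particular implementation's choice):

    # P(a node grows to at least n entries) = (1 - p) ** n
    p = 1 / 64
    for n in (64, 1_000, 10_000, 1_000_000):
        print(f"P(node >= {n:>9} entries) ~ {(1 - p) ** n:.2e}")
    # 64        -> ~3.7e-01  (plenty of nodes overshoot the average a bit)
    # 1,000     -> ~1.4e-07
    # 10,000    -> ~4.0e-69
    # 1,000,000 -> underflows to zero in double precision

That's also why the "(and salt)" matters: with a keyed or salted hash, crafting input that hits those pathological cases requires knowing the salt.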