I recently found out about this paper by Tarjan et al. describing a data structure that bears a lot of similarity to a data structure many of us have been familiar with throughout the years: the treap without rotations.

I was wondering what you think about it. Is it just a coincidence? Is there something more subtle that 'Zip trees' have that treaps without rotations don't? And, moreover, are there really no notable mentions of this treap implementation in the literature?

It looks like the only difference from treaps is that, instead of choosing $$$priority$$$ uniformly at random from $$$1 \ldots n^{O(1)}$$$, we only store $$$\log(priority)$$$, which is geometrically distributed over $$$1 \ldots O(\log n)$$$. This means you only need $$$\log \log n$$$ bits instead of $$$\log n$$$ bits. Overall, it seems like a very slim optimization.
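A quick sketch of that distinction, assuming fair coin flips (function names here are mine, not from the paper):

```python
import random

def uniform_priority(n):
    # Classic treap: a priority drawn uniformly from 1..n^2
    # needs Theta(log n) bits to store.
    return random.randrange(1, n * n + 1)

def geometric_rank():
    # Zip-tree style: flip fair coins and count up to the first tails.
    # The result is geometrically distributed (mean 2), so it is
    # O(log n) with high probability and fits in O(log log n) bits.
    rank = 1
    while random.getrandbits(1):
        rank += 1
    return rank
```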

The paper mentions these comparisons in the last paragraph of page 4 and in the footnote.

‘and different insertion and deletion algorithms’

Also, this seems to me to be the most praised 'advantage' of this data structure in the paper.

Yeah, the footnote also discusses prior mentions of the split/merge implementation:

I think split/merge treaps alone have not been considered novel enough to publish as a separate paper, but if you add some other twist (like compressed priorities), then it becomes different enough to publish?

Coming up, Zip trees with $$$O(0)$$$ bits of extra memory for ranks.

Yeah lol, it's kinda already here: they mention that, both for zip trees and treaps, you can compute the rank as a pseudorandom function of the key (which breaks if keys aren't distinct).
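A hypothetical sketch of that idea (the hash choice and function name are mine): hash the key and count trailing zero bits, so no rank needs to be stored at all.

```python
import hashlib

def rank_from_key(key):
    # Derive the rank as a pseudorandom function of the key. A random
    # 64-bit word has k trailing zero bits with probability 2^-(k+1),
    # which is exactly the geometric distribution a zip tree wants.
    h = int.from_bytes(
        hashlib.blake2b(str(key).encode(), digest_size=8).digest(), "big"
    )
    rank = 1
    while h and h % 2 == 0:
        rank += 1
        h //= 2
    return rank
```

Note that equal keys always get equal ranks, which is exactly why this trick breaks when keys aren't distinct.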

In my opinion, decreasing the extra space from $$$O(\log n)$$$ to $$$O(\log \log n)$$$ bits is quite interesting, and random bits are expensive (they have a remark on page 11 about that). In "practice" we are using pseudorandom bits, so it makes sense to analyse how much randomness is really required... at least in theory.

Yeah, in theory it's a nontrivial improvement, but in a typical word-RAM model, $$$O(\log n)$$$ bits is a single word anyway, so it's a little moot. (We're already using $$$O(\log n)$$$ extra space per node to store pointers to children.)

On second thought, you're right that decreasing the required randomness from $$$O(n \log n)$$$ bits to $$$O(n)$$$ bits in expectation is a pretty big deal.

They are not, but on the other hand they are in this strange position where they are "folklore": known, but not documented very well. Maybe the situation has become better now that more people are writing CP tutorials, but in 2016, when I first learned about treaps, almost all sources just showed how to get split and merge from insert/delete, not the other way around. Eventually someone mentioned that you can do it the other way, and I just had to work it out on paper; I couldn't find any source that explained how it's done.
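For the record, the "other way around" is short once you see it. A minimal sketch of a rotation-free treap where insert and erase are built from split and merge (Python, integer keys assumed for erase; all names are mine):

```python
import random

class Node:
    __slots__ = ("key", "prio", "left", "right")
    def __init__(self, key):
        self.key = key
        self.prio = random.random()  # heap priority
        self.left = self.right = None

def split(t, key):
    # Split t into (all keys < key, all keys >= key).
    if t is None:
        return None, None
    if t.key < key:
        t.right, right = split(t.right, key)
        return t, right
    left, t.left = split(t.left, key)
    return left, t

def merge(a, b):
    # Precondition: every key in a is less than every key in b.
    if a is None:
        return b
    if b is None:
        return a
    if a.prio > b.prio:
        a.right = merge(a.right, b)
        return a
    b.left = merge(a, b.left)
    return b

def insert(t, key):
    left, right = split(t, key)
    return merge(merge(left, Node(key)), right)

def erase(t, key):
    # Assumes integer keys: cut out the range [key, key + 1).
    left, rest = split(t, key)
    _, right = split(rest, key + 1)
    return merge(left, right)

def inorder(t):
    return [] if t is None else inorder(t.left) + [t.key] + inorder(t.right)
```

Erasing a key that isn't present is simply a no-op, since the middle split comes back empty.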

Yeah, totally agree, it's a shame that there isn't always a place where "folklore" is collected. People won't bother publishing them in academic journals because they're not novel results, but academic papers are often the main reference source people look for. Also, there's a lot of folklore that's just not very important to algorithms research, but is very useful for implementations/CP. It's good that things like emaxx/cp-algorithms or other books/sites are gathering this content, but it's definitely a work in progress.

Isn't there some conference/journal affiliated with IOI? Maybe that's the right place to publish such results.

There is an IOI journal: https://ioinformatics.org/page/ioi-journal-index/44
