
Zhtluo's blog

By Zhtluo, 23 months ago, In English

Are you scared when you hear about all these pesky data structures like Fenwick Tree, RMQ, Segment Tree in competitive programming?

Are you afraid of writing code to solve problems like finding the minimum, maximum, or sum of some range query, especially when there are updates to the data?

Well, fear no more! In this tutorial I will introduce the Swiss Army knife of all sequence manipulation data structures, one piece of code that can (theoretically) solve every problem of this kind, one tree to rule them all — the Splay Tree!

So this is a tutorial I wrote on the Splay Tree:

https://zhtluo.com/cp/splay-tree-one-tree-to-rule-them-all.html

Feedback and additional practice problems are welcome. Enjoy :)

  Vote: I like it +251 Vote: I do not like it


»
23 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Wikipedia says that they have a worst case of O(n). Does that mean using them in contests will get my solution hacked?

»
23 months ago, # |
  Vote: I like it +36 Vote: I do not like it

Swiss Army knife of all sequence manipulation data structures

Unfortunately that is not the case, even though the tree is very natural at its core.
The problem is amortized complexity.
It's not that it's bad, but when you need persistency or implicitness it just won't work.

So I would recommend learning in this order:

  • Treap, with all its expected-height magic
  • Something deterministic, like an AVL tree, RB tree or 2-3 tree
  • Learn the Splay Tree only in case you need to understand Link-Cut Trees in $$$O(n \log n)$$$ or love amortized analysis
  • »
    »
    23 months ago, # ^ |
      Vote: I like it +3 Vote: I do not like it

    I'm not sure what you mean by implicitness, but I agree with you about persistency. Splay trees are pretty fast to write once you understand them, so I kinda get why people prefer them over something deterministic (other than the LCT reason), but I recommend AVL trees for deterministic BBSTs (over the other two you mentioned). A ton of the stuff that you can do with AVL trees can be found here, and it is arguably the simplest deterministic BBST once you know treaps.

  • »
    »
    23 months ago, # ^ |
      Vote: I like it +33 Vote: I do not like it

    I agree in principle. There is (generally) no universal algorithm that works on every problem. But Splay is good enough in most use cases, and when you have worked for a week on something you want to make a clickbait title :)

    However, I do not agree that Splay cannot do persistency or implicitness, as you suggested.

    Persistency means creating a new node (instead of editing the existing node) whenever a value on a node changes. Splay touches amortized $$$O(\log n)$$$ nodes, like a segment tree, so I am not sure why it cannot be done.

    (Also see https://stackoverflow.com/questions/8348359/why-are-persistent-splay-trees-particularly-useful-in-functional-programming)
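
    For what it's worth, here is a minimal sketch (my own illustration, not code from the article) of what path copying looks like on a plain BST: every update clones only the nodes on the search path, so old roots keep describing old versions.

    // Persistent insert by path copying: the version rooted at `t` is never modified;
    // a new root describing the new version is returned instead.
    struct Node {
        int key;
        const Node *left, *right;
    };

    const Node* insert(const Node* t, int key) {
        if (!t) return new Node{key, nullptr, nullptr};
        if (key < t->key)
            return new Node{t->key, insert(t->left, key), t->right};  // copy node, recurse left
        return new Node{t->key, t->left, insert(t->right, key)};      // copy node, recurse right
    }

    int main() {
        const Node* v0 = nullptr;
        const Node* v1 = insert(v0, 2);
        const Node* v2 = insert(v1, 1);  // v1 is still a valid, unchanged old version
        (void)v2;
    }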

    Implicitness (if I understand correctly) means dynamically allocating nodes to accommodate large ranges like $$$[0,10^{18}]$$$. I think in Splay we can also use one node $$$x_i$$$ to represent a range $$$[l_i,r_i]$$$ and expand it when we need to.

    (If you mean more trivial things, like using the index of a node as its key, then you should read my wonderful article! :) )
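
    To make the range-per-node idea above concrete, here is a tiny hedged sketch of such a node (the names and the split helper are mine, not from the article): one node stands in for a whole untouched run $$$[l_i,r_i]$$$ and is split lazily when an operation needs only part of it.

    struct RangeNode {
        long long lo, hi;           // this node currently represents positions lo..hi
        long long value;            // assumed to be the same over the whole run
        RangeNode *left, *right;
    };

    // Split the run so that a later update can touch only one of the halves;
    // a real implementation would pick `mid` from the query boundaries.
    void split_node(RangeNode* t, long long mid) {
        t->left  = new RangeNode{t->lo, mid, t->value, nullptr, nullptr};
        t->right = new RangeNode{mid + 1, t->hi, t->value, nullptr, nullptr};
    }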

    I do not agree with the learning order though. I never really used Treap, AVL or RB trees in CP. I think Treap might have a use case in some niche problems, but I cannot recall one. I would recommend learning sequence manipulation data structures in this order (from easy to hard):

    • Fenwick Tree
    • Segment Tree
    • Splay

    as they are way more common in CP. Splay covers almost everything Fenwick Tree and Segment Tree can do, so I would argue it is a Swiss Army knife for sequence manipulation.

    But again, the main argument against Splay is not what it cannot do, but the fact that it is slower than simple data structures like the Fenwick Tree by a constant factor, just like how a Swiss Army knife does not excel at simpler tasks :)

    • »
      »
      »
      23 months ago, # ^ |
        Vote: I like it +13 Vote: I do not like it

      The problem with persistency is pretty simple. With a splay tree you can reach a state with up to $$$O(n)$$$ height. The next operation might want to access an element that is deep down. Persistency forces you to store every state, so even read-only operations would blow up the running time on such a state.

      As for implicitness, it can work, since we're usually creating implicit nodes in a perfect-BST manner. I'm not sure, but I think the math checks out. It would still blow up in running time in other cases, though.

      Segment trees and Fenwick trees are a must-know, but they're not BSTs, which means you can't insert in the middle or do other things a BST can. So they shouldn't be a point in this article.

      Treap is WAY easier to implement. That's why it is used so much. Everybody knows its downsides, but the simplicity of implementation outweighs them all. Also, you can implement it without being able to prove its correctness, which has its benefits.

      • »
        »
        »
        »
        23 months ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        I am not writing from a BST perspective: since we have set, map, unordered_map and other containers in C++, the need for a raw BST is very low.

        I am writing from a sequence manipulation perspective: you have a sequence $$$(x_1,x_2,\ldots,x_n)$$$ and you want to do some magic on it: find the sum of a segment, add something to a segment, set a segment of values to something, etc.

        That is also why I think Splay is useful: it allows dynamic changes to the sequence structure in addition to everything that a segment tree can do.

        I think you are right on persistency, but I also see a lot of posts on it (another one: https://stackoverflow.com/questions/16181382/persistent-data-structures-a-persistent-index) so I am not sure what is happening. I will have to think about it.

      • »
        »
        »
        »
        23 months ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        What are the downsides of treap?

        • »
          »
          »
          »
          »
          23 months ago, # ^ |
            Vote: I like it +22 Vote: I do not like it

          The most important downside is that it is randomized, probably.

          Even the splay tree is deterministic, so you can be sure that your code always produces the same result on the same input, which noticeably simplifies debugging.

          Other minor problems are:

          • You need a lot of random numbers. In case you don't need persistency, you can reduce this number to $$$O(1)$$$, but then you have to pay with a huge additional constant factor (something around 6 or so), or with the fact that you have no proof of the logarithmic upper bound.

          • You need to store additional info in a node ($$$y$$$). Again, you can throw this info away and have all the problems above in case you don't need persistency. Or you can make this info more or less useful by replacing it with the size, but in that case you need even more random numbers. And in the case of a persistent treap you have no choice at all.

          • $$$\Theta(\log)$$$ of expected additional memory per split/merge query in all cases where you don't store parent nodes. In comparison, splay trees can do all their operations with $$$O(1)$$$ additional memory worst-case.

          Also, the constant factor is not very pleasant in comparison to static trees like segment tree or Fenwick tree, but I don't think that it is a fair comparison. Sometimes the difference is even logarithmic because treaps have no inner structure on nodes, so you can't do a parallel binary search on two treaps simultaneously, but I doubt any dynamic tree has this property.

          • »
            »
            »
            »
            »
            »
            23 months ago, # ^ |
              Vote: I like it +39 Vote: I do not like it

            When debugging a treap, I always fix the seed for all rngs if I am compiling locally. This way it is easier to stress-test without wondering which seed your rng was seeded with.
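
            In code, one common way to wire that up looks like the following (the LOCAL macro is just whatever you define in your own local build, e.g. with -DLOCAL; that name is my assumption, not something from the comment above):

            #include <chrono>
            #include <random>

            #ifdef LOCAL
            std::mt19937_64 rng(12345);  // fixed seed: runs are reproducible while debugging
            #else
            std::mt19937_64 rng(std::chrono::steady_clock::now().time_since_epoch().count());
            #endif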

            How do you reduce the number of random numbers to $$$O(1)$$$ as you mentioned in the first point? I just wrote my own implementation of standard (but fast and high-quality) rngs, so it takes care of that to some extent; nevertheless, it is still pretty interesting to see how the number of random numbers can be reduced so much.

            • »
              »
              »
              »
              »
              »
              »
              23 months ago, # ^ |
                Vote: I like it +13 Vote: I do not like it

              Well, I do that too, but in case you have multitests it does not really help unless you reseed the rng on each test. The easiest general way to do it is to construct it once per test and then pass it everywhere, but I feel like that is quite messy. The same problem applies to stress testing.

              Again, it is not a huge problem. Usually, when you see that your solution changes its behaviour when the seed changes, you just go and check your treap, but it can still be unpleasant in some cases.

              Well, in order to deal with the amount of randomness, you just replace $$$y$$$ with some good hash of, let's say, a pointer to the node. Then you indeed need only $$$O(1)$$$ random numbers to choose the hash from the family, but you actually need high independence. The number I know how to prove is 24-independence; I saw 5- or 6-independence in the literature with some restrictions (it gives only an amortized expected bound, not an expected bound per query, iirc), but that is still too much. Obviously, you can say screw it and choose a faster hash, but then you will be unable to prove the upper bound and sometimes it won't even work. For instance, I know a counter-test for the hash "xor with a random number": you just need to add pointers in increasing order.

              Maybe there exists some fast hash that is also relatively easy to prove, but I don't know one. I also think that the background on this hash-based approach deserves a separate post, because of the amount of math I am trying to encapsulate in words like "will/won't work".
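
              Just to show the shape of that idea, here is a sketch where the priority is a fixed mix of the node's address (a splitmix64-style finalizer I picked arbitrarily); as said above, a fast hash like this carries no provable logarithmic bound.

              #include <cstdint>

              // Derive the treap priority from the node's address plus O(1) random words (the seed),
              // instead of storing a random y in every node. Illustration only.
              std::uint64_t priority_of(const void* node, std::uint64_t seed) {
                  std::uint64_t x = reinterpret_cast<std::uintptr_t>(node) + seed;
                  x ^= x >> 30; x *= 0xbf58476d1ce4e5b9ULL;
                  x ^= x >> 27; x *= 0x94d049bb133111ebULL;
                  return x ^ (x >> 31);
              }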

          • »
            »
            »
            »
            »
            »
            23 months ago, # ^ |
            Rev. 2   Vote: I like it +8 Vote: I do not like it

            ngl, I don't have a problem with those, at least for codeforces

            You can use the same seed when debugging. I don't see why you would need to debug multitests, since once you find the wrong case you just debug that case alone.

            Generating n random numbers is quite fast, and the 1 extra variable per node + extra memory per query is insignificant compared to the rest of the treap itself.

            The only downside for me is the higher constant compared to splay trees. Of course if a static tree is enough then by all means use it instead.

            • »
              »
              »
              »
              »
              »
              »
              23 months ago, # ^ |
                Vote: I like it +11 Vote: I do not like it

              About multitests, I meant the annoying problem where you use your random number generator as a global variable and the WA occurs only when the test is third, or even twenty-third, in the order. Again, it is not a huge problem, but it can be annoying when you need a lot of different treap features in your solution and you forgot to push somewhere.

              Generating random numbers is quite costly. Try replacing your PRNG with random_device if you use Linux and compare the performance. And if you use Windows, you don't have true randomness at all, sorry, use Linux. However, treaps with proper PRNGs (e.g. mt19937) always work well, but the reason why is basically the hashing argument.
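
              A quick-and-dirty way to see the difference yourself (numbers vary a lot by OS and machine; this is a sketch of the comparison, not a rigorous benchmark):

              #include <chrono>
              #include <iostream>
              #include <random>

              int main() {
                  const int N = 1000000;
                  volatile unsigned sink = 0;   // keeps the loops from being optimized away

                  std::mt19937 fast(12345);
                  auto t0 = std::chrono::steady_clock::now();
                  for (int i = 0; i < N; ++i) sink = sink + fast();
                  auto t1 = std::chrono::steady_clock::now();

                  std::random_device slow;      // on Linux this typically reads the system entropy source
                  for (int i = 0; i < N; ++i) sink = sink + slow();
                  auto t2 = std::chrono::steady_clock::now();

                  std::chrono::duration<double, std::milli> d1 = t1 - t0, d2 = t2 - t1;
                  std::cout << "mt19937:       " << d1.count() << " ms\n";
                  std::cout << "random_device: " << d2.count() << " ms\n";
              }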

              Extra memory per query is mostly a theoretical point, but even one extra variable per node can make a lot of difference. If you are not lucky, it can multiply your memory consumption by two because of padding. And if you want to avoid this, you can sometimes use balancing by size, but then generating even pseudorandom numbers adds some noticeable overhead.

              I am not sure which tree is faster, btw, splay or treap. It would be nice to see some comparisons. Obviously, both trees have cases where they are asymptotically faster, but in the general case of random queries I think splay is a bit slower (I may be wrong, obviously).

              • »
                »
                »
                »
                »
                »
                »
                »
                23 months ago, # ^ |
                Rev. 3   Vote: I like it +9 Vote: I do not like it

                Regarding generating truly random numbers, it is almost impossible to use a true RNG in a competitive programming context. In most implementations, std::random_device involves a filesystem call to read from /dev/urandom, and this is why it is slow. One advantage of using this over other PRNGs is that it uses entropy from environmental noise, but it is just a PRNG too (although it is also cryptographically secure). Another way of generating a "good-enough" random number (but for a one-time use, for instance, for seeding your PRNGs) is to allocate a single byte of data and use a hash of its address as a seed.

                I had implemented some pretty fast PRNGs quite a while back in 146874714, and they have similar guarantees to std::mt19937. The Lehmer PRNG is still good enough for a treap from what I can remember, and if you want to use it without remembering the magic constants, std::minstd_rand implements this PRNG. The wyhash PRNG has better quality but is a tiny bit slower than Lehmer. It is notable that both Lehmer (the 64-bit version) and wyhash (both the 32- and 64-bit versions) pass BigCrush and PractRand. Some references can be found here.

                Regarding splay vs treap, most of the fastest submissions here use splay tree.

    • »
      »
      »
      23 months ago, # ^ |
        Vote: I like it +17 Vote: I do not like it

      Why do you put Fenwick as easier than segtree…? It's harder to learn, and the only use case is if you need to optimize the constant factor.

      • »
        »
        »
        »
        23 months ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        The code is shorter.
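
        For reference, a minimal sketch (my own) of the whole structure, point update plus prefix sum; note that range() only works because addition has an inverse, which is the caveat discussed further down the thread.

        #include <vector>

        struct Fenwick {
            std::vector<long long> t;
            explicit Fenwick(int n) : t(n + 1, 0) {}
            void add(int i, long long v) {              // 1-indexed point update
                for (; i < (int)t.size(); i += i & -i) t[i] += v;
            }
            long long prefix(int i) const {             // sum of a[1..i]
                long long s = 0;
                for (; i > 0; i -= i & -i) s += t[i];
                return s;
            }
            long long range(int l, int r) const { return prefix(r) - prefix(l - 1); }
        };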

        • »
          »
          »
          »
          »
          23 months ago, # ^ |
            Vote: I like it +2 Vote: I do not like it

          If I couldn't use a template I would write a segtree instead of a Fenwick tree, because the chance of making a mistake is much lower.

          • »
            »
            »
            »
            »
            »
            23 months ago, # ^ |
              Vote: I like it +43 Vote: I do not like it

            Each to their own, I guess.

            • »
              »
              »
              »
              »
              »
              »
              23 months ago, # ^ |
                Vote: I like it 0 Vote: I do not like it

              Is there anything that a Fenwick tree can do but a segment tree can't? If not, why learn it?

              • »
                »
                »
                »
                »
                »
                »
                »
                23 months ago, # ^ |
                  Vote: I like it +14 Vote: I do not like it

                Should be none, in theory. Fenwick has more constraints than segment trees in terms of what operations it can use (i.e., for querying an arbitrary interval, the inverse of the query operation should be clearly defined; this is why you generally can't use it for multiplication or min/max). Still, the Fenwick Tree has a much smaller code size and reduces the constant factor on memory/time complexity, so IMO it is still worth learning even if you know how to use segtrees.

                • »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  23 months ago, # ^ |
                    Vote: I like it 0 Vote: I do not like it

                  Do you know of any resources that compare the time complexities of segment and Fenwick trees? I keep hearing that Fenwick has a smaller constant than a segment tree, but I don't know the exact mechanism behind it. Wikipedia states Fenwick can do a range query in O(4*log(N)), which is the same as a segment tree.

                • »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  23 months ago, # ^ |
                    Vote: I like it -8 Vote: I do not like it

                  https://en.algorithmica.org/hpc/data-structures/segment-trees/

                  You can see a comparison of common implementations of segment trees and the wide segment tree (which I would consider "not so common") here. Updates are on par with the bottom-up segment tree, but you can see that range queries are quite a bit faster.
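
                  For reference, this is roughly the bottom-up flavour being compared, in a range-minimum version I wrote for illustration (not the article's code); min is also an operation a plain Fenwick tree cannot answer on arbitrary ranges, since it has no inverse.

                  #include <algorithm>
                  #include <climits>
                  #include <vector>

                  // Bottom-up segment tree for range minimum; 0-indexed, inclusive queries.
                  struct SegTree {
                      int n;
                      std::vector<long long> t;
                      explicit SegTree(const std::vector<long long>& a)
                          : n((int)a.size()), t(2 * a.size(), LLONG_MAX) {
                          std::copy(a.begin(), a.end(), t.begin() + n);
                          for (int i = n - 1; i > 0; --i) t[i] = std::min(t[2 * i], t[2 * i + 1]);
                      }
                      void assign(int i, long long v) {           // point assignment a[i] = v
                          for (t[i += n] = v, i /= 2; i >= 1; i /= 2)
                              t[i] = std::min(t[2 * i], t[2 * i + 1]);
                      }
                      long long min_query(int l, int r) const {   // min of a[l..r]
                          long long res = LLONG_MAX;
                          for (l += n, r += n + 1; l < r; l /= 2, r /= 2) {
                              if (l & 1) res = std::min(res, t[l++]);
                              if (r & 1) res = std::min(res, t[--r]);
                          }
                          return res;
                      }
                  };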

    • »
      »
      »
      23 months ago, # ^ |
        Vote: I like it +18 Vote: I do not like it

      In addition to the very commonly needed persistency and the quite rare implicitness, as far as I understand, a splay tree also cannot insert or erase an element in $$$O(1)$$$ time when you don't need to find that element (e.g. when you already have a pointer to it).

      In comparison, a treap can do both in $$$O(1)$$$ expected time. This is not really commonly needed, but it is nice to have in some cases. Also, note that the dynamic optimality conjecture is not applicable here because of the difference in interfaces (you have the additional information of where the element you want to delete is).

      But, on the other hand, splay trees are nice in the sense that you do not need to optimize the number of times you query the same element, because those redundant queries add only $$$O(1)$$$ to the running time, as opposed to $$$\Theta(\log)$$$ for almost all other trees I know.

  • »
    »
    23 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    I'm curious, is a treap actually slower than a splay tree when used in a link-cut tree? (I have never actually used a treap.)

    • »
      »
      »
      23 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      I have never compared them either, but it should be. At least it is $$$\Theta(\log^2 n)$$$ vs $$$\Theta(\log n)$$$ in the worst case when all the queries are random.

    • »
      »
      »
      23 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      It is slower everywhere else, for example in small-to-large merging. I assume it's safe to generalize.

  • »
    »
    23 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Why would you want to learn deterministic non-splay trees for competitive programming?

»
23 months ago, # |
  Vote: I like it +21 Vote: I do not like it

You are right, but I love fhq treap more <3

  • »
    »
    23 months ago, # ^ |
      Vote: I like it +3 Vote: I do not like it

    I have never heard of the fhq treap; is it some variation of the usual treap? And are there any resources or a brief explanation of it online?

    • »
      »
      »
      23 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Yes. fhq treap is a powerful data structure invented by Fan Haoqiang.

      In general, it can support all the balanced-tree operations that Treap and Splay can, using just two operations, split and merge, and it supports persistence. Its constant factor is far smaller than Splay's. The only problem is that it cannot support LCT well.

      For beginners, it is easier to learn than Splay and easier to use than Treap. It is indeed a highly cost-effective data structure.
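
      A minimal sketch of the two core operations, keyed implicitly by position (my own illustration of the standard split/merge style, not a complete or authoritative implementation):

      #include <random>

      std::mt19937 rng(12345);                       // priorities just need to look random

      struct Node {
          Node *l = nullptr, *r = nullptr;
          int size = 1;
          unsigned prio;
          long long val;
          explicit Node(long long v) : prio(rng()), val(v) {}
      };

      int sz(Node* t) { return t ? t->size : 0; }
      void pull(Node* t) { t->size = 1 + sz(t->l) + sz(t->r); }

      // Split by position: a gets the first k elements, b gets the rest.
      void split(Node* t, int k, Node*& a, Node*& b) {
          if (!t) { a = b = nullptr; return; }
          if (sz(t->l) < k) { a = t; split(t->r, k - sz(t->l) - 1, t->r, b); }
          else              { b = t; split(t->l, k, a, t->l); }
          pull(t);
      }

      // Every element of a precedes every element of b.
      Node* merge(Node* a, Node* b) {
          if (!a || !b) return a ? a : b;
          if (a->prio > b->prio) { a->r = merge(a->r, b); pull(a); return a; }
          b->l = merge(a, b->l); pull(b); return b;
      }

      Extracting a subrange [l, r] is then split(root, r, left, rest) followed by split(left, l - 1, prefix, mid): mid holds exactly the range, and two merges put everything back.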

      • »
        »
        »
        »
        23 months ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        Just to confirm, are this and this the same data structure as fhq treap? They look quite similar to the usual treap.

        • »
          »
          »
          »
          »
          23 months ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

          Yes, it is. But I don't think it looks that similar to the usual treap. The similarity between them is that they are all based on BSTs. However, the FHQ treap uses split and merge to keep itself balanced, while structures like the ordinary treap and splay use rotations to keep balance.

          • »
            »
            »
            »
            »
            »
            23 months ago, # ^ |
              Vote: I like it +53 Vote: I do not like it

            I believe most CP people refer to the FHQ treap as just "treap".

            • »
              »
              »
              »
              »
              »
              »
              23 months ago, # ^ |
                Vote: I like it 0 Vote: I do not like it

              Really? So what do they call the ordinary treap?

              • »
                »
                »
                »
                »
                »
                »
                »
                23 months ago, # ^ |
                  Vote: I like it 0 Vote: I do not like it

                I think they are sometimes called the no-rotation treap and the rotation treap, which can both become just "treap" when the context is clear (e.g. persistent treap).

              • »
                »
                »
                »
                »
                »
                »
                »
                23 months ago, # ^ |
                  Vote: I like it 0 Vote: I do not like it

                For example, the treap in cp-algorithms is actually the FHQ treap.

                I don't see that many people talking about the "ordinary treap", so maybe "ordinary treap" or "rotation treap"?

                • »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  23 months ago, # ^ |
                    Vote: I like it +19 Vote: I do not like it

                  OK, it seems that many data structures based on tree+heap are called treaps. Now I get it. Thank you so much <3

  • »
    »
    23 months ago, # ^ |
      Vote: I like it +6 Vote: I do not like it

    FHQ is the future!

    You should write a tutorial and I will upvote it :)

»
23 months ago, # |
  Vote: I like it +18 Vote: I do not like it

One thing I've always wondered about splay trees when it comes to BBST operations (as a benchmark on how flexible the data structure is) is how to "extract" a certain subrange of the tree (in treaps, we do this by splitting the treap into two twice), and how to "merge" two trees (in treaps, this is simply the reverse operation of splitting). This makes manipulating and moving around subarrays very easy. Is there a way of doing this using a splay tree? Relevant problems can be found in this gym, but I haven't been able to find a single splay-tree solution for those kinds of problems.

  • »
    »
    23 months ago, # ^ |
      Vote: I like it +8 Vote: I do not like it

    Actually, my sample code deals with an operation 'REVOLVE' that needs a BBST.

    Maybe I should add it to the list of operations but I think it is easy enough once you understand how Splay works :)
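
    For readers wondering how the "extract a subrange" question above looks with a splay tree, here is a minimal self-contained sketch I put together (my own illustration, not the sample code from the article): splay the element just before the range to the root, splay the element just after the range to be its child, and the range is then exactly that child's left subtree. Two sentinel elements avoid the corner cases at the ends.

    #include <bits/stdc++.h>
    using namespace std;

    struct Node {
        Node *ch[2] = {nullptr, nullptr}, *par = nullptr;
        int size = 1;
        long long val, sum;
        Node(long long v) : val(v), sum(v) {}
    };

    int sz(Node* t) { return t ? t->size : 0; }
    long long sm(Node* t) { return t ? t->sum : 0; }
    void pull(Node* t) {
        t->size = 1 + sz(t->ch[0]) + sz(t->ch[1]);
        t->sum = t->val + sm(t->ch[0]) + sm(t->ch[1]);
    }
    int side(Node* t) { return t->par->ch[1] == t; }   // is t the right child of its parent?

    void rotate(Node* t) {
        Node *p = t->par, *g = p->par;
        int d = side(t);
        if (g) g->ch[side(p)] = t;
        t->par = g;
        p->ch[d] = t->ch[!d];
        if (t->ch[!d]) t->ch[!d]->par = p;
        t->ch[!d] = p;
        p->par = t;
        pull(p), pull(t);
    }

    void splay(Node* t, Node* goal = nullptr) {        // splay t until its parent is `goal`
        while (t->par != goal) {
            Node *p = t->par, *g = p->par;
            if (g != goal) rotate(side(t) == side(p) ? p : t);  // zig-zig / zig-zag
            rotate(t);
        }
    }

    Node* kth(Node* t, int k) {                        // k-th node (1-indexed) in in-order
        while (true) {
            if (k == sz(t->ch[0]) + 1) return t;
            if (k <= sz(t->ch[0])) t = t->ch[0];
            else k -= sz(t->ch[0]) + 1, t = t->ch[1];
        }
    }

    Node* build(const vector<long long>& a, int l, int r, Node* p) {  // balanced initial build
        if (l > r) return nullptr;
        int m = (l + r) / 2;
        Node* t = new Node(a[m]);
        t->par = p;
        t->ch[0] = build(a, l, m - 1, t);
        t->ch[1] = build(a, m + 1, r, t);
        pull(t);
        return t;
    }

    // Isolate sequence positions [l, r] (1-indexed; sentinels sit at tree positions 1 and n + 2)
    // and return the subtree that represents exactly that range.
    Node* extract(Node*& root, int l, int r) {
        Node* before = kth(root, l);       // tree position l holds the element just before the range
        splay(before); root = before;
        Node* after = kth(root, r + 2);    // tree position r + 2 holds the element just after it
        splay(after, root);
        return after->ch[0];
    }

    int main() {
        int n = 8;
        vector<long long> a(n + 2, 0);     // a[1..n] = 1..n; a[0] and a[n + 1] are sentinels
        for (int i = 1; i <= n; ++i) a[i] = i;
        Node* root = build(a, 0, n + 1, nullptr);
        cout << sm(extract(root, 3, 6)) << '\n';   // 3 + 4 + 5 + 6 = 18
    }

    The same two splays also set up a reversal or a REVOLVE: tag or detach after->ch[0] instead of just reading its sum.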

»
23 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Well, while I think the Splay Tree can be simple enough once you get the point of it, I think there are definitely easier "sequence manipulation data structures" than a splay tree that can do the same operations. Personally, as an example, I would suggest the skip list, but treaps should be easier than splay trees as well.

  • »
    »
    23 months ago, # ^ |
      Vote: I like it +13 Vote: I do not like it

    The issue is often not insertion but doing what a segment tree can do, i.e. finding the sum of a segment, reversing a segment, etc.

    • »
      »
      »
      23 months ago, # ^ |
      Rev. 2   Vote: I like it +13 Vote: I do not like it

      Skip lists can do range queries too! A higher node can store sums of values on the lower nodes for the interval represented by the "skip", for example. This blog explains the method in detail.

      • »
        »
        »
        »
        23 months ago, # ^ |
          Vote: I like it +3 Vote: I do not like it

        Interesting.

        Can you try to do the sample problem http://mirror.codeforces.com/gym/103997/problem/A with a skip list?

        • »
          »
          »
          »
          »
          23 months ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

          If you take a tree but keep the children in a linked list, with the parent only pointing to the first child, you basically get the skip list structure (after connecting the level links). It's still expected constant time to access each child from its parent, because there are only a constant number of children due to randomization.

          So that's why anything you can do with trees you can usually do with skip lists.

          That said, this particular problem is terrible for a skip list, because reversing children involves reversing a linked list and inserting it back into each level. You might have to use doubly linked lists? Does anyone see a different implementation?

          • »
            »
            »
            »
            »
            »
            23 months ago, # ^ |
              Vote: I like it +2 Vote: I do not like it

            You can maintain the original structure and the reversed structure simultaneously, and then swap (by "swap" here I mean "split and reconnect") the corresponding subsegments for reverse queries. While I have not tried this method on skip lists (I did try it on SGI ropes, though), it should technically work. I am quite unsure how it will work out with other queries, though.

        • »
          »
          »
          »
          »
          23 months ago, # ^ |
          Rev. 3   Vote: I like it +13 Vote: I do not like it

          Making progress on it rn. The problem should be quite difficult to do with skip lists, because the blog I linked only had information about how to use one like a Fenwick tree. Luckily, I figured out a way to use it like a segment tree myself. Basically, the approach is to save data on every node, at each height: we save the query answer for the interval bounded by the current node and the position just before the next node at that height, as the combination of the answers for the nodes below the range. While I have not written a full skip list implementation for it just yet, I have made an array-based proof of concept for how a skip list could work as a segment tree. The following is the implemented class, which performs RMQ to prove that it can do operations that a plain Fenwick tree cannot do.

          Implementation

          p.s. now for that problem I need to implement a full skip list and then add lazy propagation on top of it. I already see pain D:

          • »
            »
            »
            »
            »
            »
            23 months ago, # ^ |
            Rev. 2   Vote: I like it 0 Vote: I do not like it

            How do you implement the "REVOLVE" operation using a skip list? Actually it is trivial once you know how to reverse a subarray (you can do three reverse operations to rotate an array), so how to reverse is one of the more important things.

            In general, how do you splice a skip list? Not sure if it is possible, but splicing each linked list could work.

            • »
              »
              »
              »
              »
              »
              »
              23 months ago, # ^ |
                Vote: I like it 0 Vote: I do not like it

              In theory, removing the links that connect nodes inside the $$$[l,r]$$$ segment to nodes outside it (there should be an expected $$$O(\log N)$$$ such links before the segment and $$$O(\log N)$$$ after it), and then connecting the 2–3 resulting parts to new HEAD/NULL nodes, should do the job for splicing a skip list. However, reversing a segment should be much more difficult, as it is not guaranteed that a node (at a certain height) will have only two children at most. When there are three or more children, some concepts necessary for query operations, such as lazy propagation, become far more complex to get our heads around.

  • »
    »
    23 months ago, # ^ |
    Rev. 2   Vote: I like it +22 Vote: I do not like it

    Skip lists are basically randomized BSTs (like treaps), but they are harder to write (at least for me) than treaps. This is a good resource on why this claim is true. I haven't benchmarked it yet (and my following claim might be wrong), but I think treaps are faster than skip lists, so there is no extra advantage of writing a skip list. However, it might seem more familiar to people who think of nodes in segment trees as intervals visually (which is what skip lists also do). The real benefit of skip lists is in parallelization — unlike usual BSTs, they can be made highly parallel.

»
23 months ago, # |
  Vote: I like it +10 Vote: I do not like it

Never prefer a BBST to a Fenwick tree / segment tree when you can use either of them. Learn to embrace offline approaches / value compression. The complexity will bite you back at some point.

»
2 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Many thanks for this. I've so far looked at the following splay tree implementations.

So far, none of them fully satisfies me, but that may be due to my currently extremely superficial understanding of splay trees. I was led to splay trees again by this splay tree solution https://atcoder.jp/contests/abc276/editorial/5188 for https://atcoder.jp/contests/abc276/tasks/abc276_f , the code of which is currently incomprehensible to me. It requires both the sum of the elements greater than x and also the number of them. I had thought of using a pair<long long, int> to represent the sum and count respectively and plugging that into one of these splay implementations, but I am not confident that I can do this without spending much more time than I am willing to.
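
For what it is worth, here is a hedged sketch of that pair-of-(count, sum) idea using a Fenwick tree over compressed values instead of a splay tree (my own illustration, not the editorial's code; it assumes every value that will ever be inserted is known up front):

#include <bits/stdc++.h>
using namespace std;

struct CountSumBIT {
    int n;
    vector<long long> vals;                       // sorted unique values (known offline)
    vector<long long> cnt, sum;
    explicit CountSumBIT(vector<long long> v) : vals(move(v)) {
        sort(vals.begin(), vals.end());
        vals.erase(unique(vals.begin(), vals.end()), vals.end());
        n = (int)vals.size();
        cnt.assign(n + 1, 0);
        sum.assign(n + 1, 0);
    }
    void insert(long long x) {                    // add one copy of x
        int i = (int)(lower_bound(vals.begin(), vals.end(), x) - vals.begin()) + 1;
        for (; i <= n; i += i & -i) cnt[i] += 1, sum[i] += x;
    }
    pair<long long, long long> upto(long long x) const {          // (count, sum) of values <= x
        int i = (int)(upper_bound(vals.begin(), vals.end(), x) - vals.begin());
        long long c = 0, s = 0;
        for (; i > 0; i -= i & -i) c += cnt[i], s += sum[i];
        return {c, s};
    }
    pair<long long, long long> greater_than(long long x) const {  // (count, sum) of values > x
        auto all = upto(LLONG_MAX), below = upto(x);
        return {all.first - below.first, all.second - below.second};
    }
};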

I thought that perhaps someone could write and submit on AtCoder a solution using the splay tree implementation posted here, or another one of the splay tree implementations listed above. Also, I have noted that this splay tree implementation is not key-based, by which I mean doing the likes of lower_bound or upper_bound via a key. Thank you!