tezk's blog

By tezk, history, 20 months ago, In English

I tried to solve 1801C - Music Festival. First I compress the albums as in the editorial solution, then normalize all the coolness values so that the maximum $$$a_i$$$ being too big is not an issue, and then sort the albums in increasing order of their track with maximum coolness. This got me TLE on test case 25.

In short, I have a

vector<vector<int>> a(n);

The sort function is

sort(a.begin(), a.end(), [&] (auto a1, auto a2) {
    return (a1.back() < a2.back());
});

If, instead of sorting, I create a map that stores the positions of the albums for each maximum coolness value, the solution passes.

I didn't know about this, so I'm curious: what is the time complexity of the sort call in this case?
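For reference, a minimal sketch of the map-based idea mentioned above (illustrative only; the reading and DP parts are omitted, and the names are not from the actual accepted code):

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;
    vector<vector<int>> a(n);
    // ... read and compress each album to its prefix maxima, as before ...

    // Group album indices by the album's maximum coolness instead of
    // sorting the vector of albums itself.
    map<int, vector<int>> byMax;
    for (int i = 0; i < n; i++)
        if (!a[i].empty())
            byMax[a[i].back()].push_back(i);

    // Iterating over the map visits the albums in increasing order of their
    // maximum track, exactly like the sorted order, but no album vector is
    // ever copied or moved.
    for (auto& [mx, ids] : byMax)
        for (int id : ids)
            cout << "album " << id << " has max coolness " << mx << "\n";
}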

  • Vote: +17

»
20 months ago, # |
Rev. 2   Vote: +32

I think it's $$$O(n^2 \log n)$$$ because in the sorting lambda you are passing the vectors by value, not by reference, so you make a copy of the two vectors for every comparison, which is $$$O(n)$$$ per comparison. Changing auto to auto& gives AC.
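For completeness, a minimal sketch of the fixed comparator, taking the arguments by const reference so that each comparison only reads the two last elements instead of copying whole vectors:

// O(1) per comparison: no vectors are copied.
sort(a.begin(), a.end(), [&](const vector<int>& a1, const vector<int>& a2) {
    return a1.back() < a2.back();
});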

»
20 months ago, # |
  Vote: -8

If I were to bet, given the quite strict TL, I'm not sure I agree with aymanrs's hypothesis.

I've never seen a counter-test where passing vectors by value would degrade the complexity (I conjecture that it is $$$O(\log n)$$$ comparisons per element, yielding the correct overall complexity). Can someone show such a test, or a strategy to build one, and prove that I'm blatantly wrong?

However, I do think that the reason passing by value is slow here while passing by reference is not is memory allocation (one allocates a linear amount of memory, the other a linearithmic amount). This adds a considerable constant factor to the solution.

  • »
    »
    20 months ago, # ^ |
      Vote: +8

    one big vector + many small?

    • »
      »
      »
      20 months ago, # ^ |
      Rev. 3   Vote: 0

      Still, you would need to build a test case where the big vector gets compared with $$$\omega(\log n)$$$ elements, which is non-trivial as far as I'm aware (without digging into the internals of how std::sort works).

      • »
        »
        »
        »
        20 months ago, # ^ |
          Vote: -37

        If you use merge sort, the complexity will be $$$O((\sum \text{len}) \cdot \log n)$$$, because every element is passed to the comparator $$$O(\log n)$$$ times.

        But std::sort uses a quicksort-style algorithm: it picks a pivot, moves smaller elements to the beginning and larger elements to the end, and then sorts recursively. So if the long vector is picked as the pivot, it gets compared with $$$O(\text{len(current range)})$$$ elements.
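        A minimal sketch of that effect, using a Lomuto-style partition for illustration (not libstdc++'s actual scheme, but in both the chosen pivot is one operand of every comparison in the current range):

        #include <bits/stdc++.h>
        using namespace std;

        // Lomuto-style partition: the pivot is an operand of (r - l) comparisons,
        // one against every other element of the range [l, r].
        template <class T, class Cmp>
        int lomuto_partition(vector<T>& a, int l, int r, Cmp cmp) {
            T pivot = a[r];
            int i = l;
            for (int j = l; j < r; j++)
                if (cmp(a[j], pivot))   // if Cmp takes its parameters by value,
                    swap(a[i++], a[j]); // the whole pivot is copied on every call
            swap(a[i], a[r]);
            return i;
        }

        int main() {
            vector<vector<int>> a = {{5}, {1}, {9, 10}, {3}, {2}};
            auto by_last = [](const vector<int>& x, const vector<int>& y) { return x.back() < y.back(); };
            cout << "pivot index: " << lomuto_partition(a, 0, (int)a.size() - 1, by_last) << "\n";
        }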

  • »
    »
    20 months ago, # ^ |
      Vote: 0

    As oversolver said, the test case that got me TLE has one big vector and many small ones. I have verified that it really is the sorting that kills my solution.

    • »
      »
      »
      20 months ago, # ^ |
        Vote: +11

      I never said otherwise. I was debating whether it's because the complexity becomes quadratic or the constant factor is too big.

      • »
        »
        »
        »
        20 months ago, # ^ |
          Vote: +13

        I just tested some cases and it seems that std::sort makes no more than $$$O(\log n)$$$ comparisons per item.

        And talking about auto vs auto& in comparators in general: I have a lot of experience, even with pair<int,int>, where the reference gives a major performance gain.

      • »
        »
        »
        »
        20 months ago, # ^ |
        Rev. 9   Vote: +27

        When $$$n$$$ is sufficiently large, sorting an already ordered array causes the second element to be compared $$$\Theta(n)$$$ times. (However, I still don't know how to construct a test that reaches the full $$$\Theta(n^2\log n)$$$ complexity.)

        Code

        If you do random_shuffle first and then sort, the complexity is also correct. But I don't think there is a sorting algorithm that can guarantee $$$O(\log n)$$$ comparisons per element with worst-case luck.

        upd: sorting networks give such a guarantee: in bitonic sort, for example, each element takes part in only $$$\Theta(\log^2 n)$$$ comparisons even with worst-case luck.
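        The "Code" spoiler is not expanded here; a sketch of the kind of experiment described, counting how many comparisons involve the second smallest value when std::sort receives an already sorted array, could look like this:

        #include <bits/stdc++.h>
        using namespace std;

        int main() {
            int n = 1 << 20;
            vector<int> v(n);
            iota(v.begin(), v.end(), 0);   // already sorted: 0, 1, ..., n-1

            long long hits = 0;            // comparisons whose operands include the value 1
            sort(v.begin(), v.end(), [&](const int& x, const int& y) {
                if (x == 1 || y == 1) hits++;
                return x < y;
            });
            cout << hits << " comparisons involved the second element\n";
        }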

  • »
    »
    20 months ago, # ^ |
    Rev. 3   Vote: +26

    Let's keep things simple and assume std::sort is a merge sort. For merge sort, the conjecture that it makes $$$O(\log N)$$$ comparisons per element is blatantly wrong. It makes $$$O(N\log N)$$$ comparisons in total, but nowhere is it guaranteed that any single element is involved in only $$$O(\log N)$$$ of them. Here is one such trivial counter-test case.

    Spoiler

    I do not want to dig into the internals of how std::sort works. What I know is that std::stable_sort is implemented using merge sort. Here is a working example.
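    As an illustration of the same idea (my own construction, not necessarily the one in the spoiler): put one oversized value in front of an otherwise increasing array. In every merge, the run containing that value is drained down to it first, after which it is compared against the whole sibling run, so under a merge sort such as std::stable_sort it takes part in $$$\Theta(N)$$$ comparisons in total.

    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        int n = 1 << 20;
        vector<int> v(n);
        iota(v.begin(), v.end(), 0);
        v[0] = INT_MAX;                // one oversized element among small ones

        long long hits = 0;            // comparisons whose operands include the huge value
        stable_sort(v.begin(), v.end(), [&](const int& x, const int& y) {
            if (x == INT_MAX || y == INT_MAX) hits++;
            return x < y;
        });
        cout << hits << " comparisons involved the largest element\n";
    }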

    • »
      »
      »
      20 months ago, # ^ |
        Vote: +3

      JFYI: you can merge two sorted arrays with $$$O(\log n)$$$ comparisons per element.

      When you have two sorted arrays A and B, binary search how many elements you have to take from A before the first element of B. The search costs $$$O(\log(\text{ans}))$$$ comparisons, and then you move those elements in $$$O(\text{ans})$$$, so the total is $$$O(\text{ans})$$$.

      To do it, first check all powers of two until the first negative result, and then binary search between $$$2^i$$$ and $$$2^{i+1}$$$.

      The total will be $$$O(n)$$$ (see the sketch below).
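      A sketch of that merging scheme (exponential search to bracket the block, then a binary search inside the bracket, often called galloping; gallop_merge is just an illustrative name):

      #include <bits/stdc++.h>
      using namespace std;

      // For each element of b, find how many elements of a still precede it by
      // doubling the step (1, 2, 4, ...) and then binary searching inside the
      // last bracket, so a block of cnt elements costs O(log cnt) comparisons.
      vector<int> gallop_merge(const vector<int>& a, const vector<int>& b) {
          vector<int> out;
          out.reserve(a.size() + b.size());
          size_t i = 0;
          for (size_t j = 0; j < b.size(); j++) {
              size_t step = 1;
              while (i + step <= a.size() && a[i + step - 1] < b[j]) step *= 2;
              size_t lo = step / 2, hi = min(step, a.size() - i);
              size_t cnt = lower_bound(a.begin() + i + lo, a.begin() + i + hi, b[j]) - (a.begin() + i);
              out.insert(out.end(), a.begin() + i, a.begin() + i + cnt);  // move the whole block
              i += cnt;
              out.push_back(b[j]);
          }
          out.insert(out.end(), a.begin() + i, a.end());
          return out;
      }

      int main() {
          vector<int> a = {1, 3, 5, 7, 9}, b = {2, 4, 6};
          for (int x : gallop_merge(a, b)) cout << x << ' ';   // 1 2 3 4 5 6 7 9
          cout << '\n';
      }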

      • »
        »
        »
        »
        20 months ago, # ^ |
        Rev. 2   Vote: +16

        I never claimed that all algorithms take $$$O(N)$$$ comparisons for one element. I only claimed that naive textbook merge sort can take $$$O(N)$$$ per element.
        Also, could you elaborate on your analysis?
        Either I do not understand your algorithm, or it is $$$O(\log^2 N)$$$ comparisons per element, with the overall complexity being $$$O(N\log^2 N)$$$.

        Merge sort has $$$O(\log N)$$$ layers, and if we use your algorithm as the merging step, it takes $$$O(\log 2^i)$$$ comparisons on the $$$i$$$-th layer, totalling $$$O(\log^2 N)$$$ per element.
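        Spelling that total out for one element across the merge levels: $$$\sum_{i=1}^{\log_2 N} O(\log 2^i) = \sum_{i=1}^{\log_2 N} O(i) = O(\log^2 N)$$$.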

        • »
          »
          »
          »
          »
          20 months ago, # ^ |
            Vote: +3

          I was talking about the merge step only. In merge sort, yes, it will be $$$O(\log^2 N)$$$ comparisons per element.

    • »
      »
      »
      20 months ago, # ^ |
        Vote: 0

      Yes, that's indeed true. But for randomized quick sort on the other hand, the number of comparisons seems to be $$$O(\log n)$$$ per element on average.

      Proof sketch: an element $$$i$$$ takes part in a comparison in one of two ways:

      1. when $$$i$$$ is the pivot: on a level where $$$i$$$'s segment has $$$l$$$ elements, $$$i$$$ is chosen as the pivot with probability $$$1/l$$$ and then makes about $$$l$$$ comparisons, so the expected contribution is $$$\sum_l{P(\text{segment has }l\text{ elements}) \cdot \frac{1}{l} \cdot l} = O(1)$$$ per recursion level, i.e. $$$O(\log n)$$$ in total;
      2. when $$$i$$$ is compared to a pivot: the expected number of such comparisons equals the expected recursion depth, which is $$$O(\log n)$$$.

      As far as I'm aware, std::sort is more like quick-sort than merge-sort, so I thought maybe it behaves similarly.
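      As an aside, the textbook way to make the quicksort bound precise: the elements of ranks $$$i$$$ and $$$j$$$ are compared if and only if one of them is the first pivot chosen among the elements ranked between them, which happens with probability $$$\frac{2}{|i-j|+1}$$$. For a fixed element this gives $$$\sum_{j \neq i} \frac{2}{|i-j|+1} \le 4H_n = O(\log n)$$$ expected comparisons.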

      • »
        »
        »
        »
        20 months ago, # ^ |
        Rev. 2   Vote: 0

        About std::sort: I wrote a thread under your previous comment. It got downvoted a lot (lol), but you can open the comments.

        https://mirror.codeforces.com/blog/entry/114167?#comment-1014962

      • »
        »
        »
        »
        20 months ago, # ^ |
        Rev. 5   Vote: +8

        If I were the problem setter, here is one way I would try to hack quicksort-based solutions. Here, the total size is up to $$$2 \cdot 10^5$$$.

        Let's add $$$100$$$ vectors of length $$$1000$$$ each, and the remaining $$$100000$$$ vectors have length $$$1$$$. For a particular test case, one of the large vectors gets picked as the first pivot with probability about $$$0.001$$$. Add $$$100$$$ such test cases, and the probability of avoiding TLE is $$$(1-0.001)^{100} \approx 0.905$$$; that is too risky when only one submission is evaluated during system tests. It might still work out for educational rounds, where people can make $$$10$$$ submissions. The parameters can be tweaked; this works well when the total size is up to $$$2\cdot10^6$$$.
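        A sketch of a generator for one such test, with the sizes from above (I have not re-checked the exact input format or value range of 1801C, so the printing below is a placeholder to adapt):

        #include <bits/stdc++.h>
        using namespace std;

        int main() {
            mt19937 rng(12345);
            const int BIG = 100, LEN = 1000, SMALL = 100000;  // 100 * 1000 + 100000 = 2e5 values total
            cout << BIG + SMALL << "\n";
            for (int i = 0; i < BIG; i++) {                   // the long albums
                cout << LEN;
                for (int j = 0; j < LEN; j++) cout << ' ' << rng() % 200000 + 1;
                cout << "\n";
            }
            for (int i = 0; i < SMALL; i++)                   // the length-1 albums
                cout << 1 << ' ' << rng() % 200000 + 1 << "\n";
        }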

        • »
          »
          »
          »
          »
          20 months ago, # ^ |
            Vote: -8

          And what's wrong with having a pivot of length 1000? Only 100 comparisons on each level of recursion will look through all 1000 elements; the other comparisons stop at length 1.

          • »
            »
            »
            »
            »
            »
            20 months ago, # ^ |
            Rev. 3   Vote: 0

            The comparator in the blog copies its arguments instead of taking them by reference, so it makes about $$$10^5$$$ copies of a vector of length $$$1000$$$. That will probably TLE already for a total size of $$$2\cdot10^5$$$, and is a sure TLE if the total size is raised to $$$2\cdot10^6$$$.
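            Back-of-the-envelope, for the first partition level alone: about $$$10^5$$$ comparisons against the pivot, each copying roughly $$$10^3$$$ integers, is $$$10^5 \cdot 10^3 = 10^8$$$ copied integers, plus on the order of $$$2 \cdot 10^5$$$ vector allocations, before the recursion even starts.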