Xiaohuba's blog

By Xiaohuba, 5 months ago, In English

A Sequence Game

Idea from cmk666, Prepared by cmk666

Hint 1
Tutorial

B Even Modulo Pair

Idea from 244mhq, Prepared by NetSpeed1

Hint 1
Tutorial

C Dungeon

Idea from Link_Cut_qwq, Prepared by Xiaohuba

Hint 1
Hint 2
Hint 3
Tutorial

D Copy String

Idea from JoesSR, Prepared by JoesSR

Tutorial

E Journey

Idea from Link_Cut_qwq, Prepared by Xiaohuba

Hint 1
Hint 2
Hint 3
Tutorial

F1/F2 Chain Prefix Rank

Idea from JoesSR, Prepared by JoesSR

Tutorial

G Pointless Machine

Idea from Daniel777, Prepared by zjy2008

Hint 1
Hint 2
Hint 3
Tutorial

H PalindromePalindrome

Idea from zjy2008, Prepared by zjy2008

UPD: Unfortunately, the original analysis regarding the number of "relevant" pairs of disjoint substrings was incorrect. The proof was derived from an online blog concerning the range distinct palindrome substring query problem, but that proof turned out to be flawed. Consequently, we previously believed the complexity of the standard solution to be $$$O((n + q) \log n)$$$; however, after the contest, we discovered that it is actually $$$\Theta(n \log^2 n + q \log n)$$$ (and this lower bound is reachable). We apologize for any confusion caused.

Hint 1
Tutorial

»
5 months ago, # |

first.

»
5 months ago, # |

GuessForces

»
5 months ago, # |

Thanks for problem B, quite an interesting idea!

  • »
    »
    3 months ago, # ^ |

    But I think that if there is only one even number, there is no need to check whether we can make an answer of (x, y) where y > x and x is even: if y = kx + r and x is even, then kx is even, and since we know that y is odd, r must be odd, so (x, y) is not an answer in this case.

»
5 months ago, # |
Rev. 2

I know the numbers say otherwise (based on submissions)... but did anyone else feel D was easier than C? I solved D in under 40 minutes, but spent more than 90 minutes on C :(

»
5 months ago, # |

so B is just...brute force? uhhh man

  • »
    »
    5 months ago, # ^ |

    proving the complexity of brute force is challenging :(

    • »
      »
      »
      5 months ago, # ^ |

      Why?

      I noticed that if x is included, then some x + n < 2*x (with n odd) is always included; x covers all values > x at an even distance from x, and x + n covers all values > x at an odd distance from x. So all integers > x are exhausted once such a sequence is found.

      If x is odd and we take x + n (where n, the remainder, is odd), then x + n is even (odd + odd), and an even number can occur only once; therefore I have to search from 2*x + n (with n odd), and this keeps happening inductively.

      If x, x + n, y occur in the increasing sequence, then y > 2*x by the above argument, so the possible consecutive odd-modulus pairs get exhausted as the range's lower bound increases exponentially.

      Either way, the lower bound doubles every (< 3) numbers, making the process terminate with n < log2(10^9).
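To make the bound concrete, here is a hedged sketch (my own illustration, not the official solution): assuming the doubling argument above is right, any set with no valid pair has only about log2(10^9) ≈ 31 odd values plus a couple of stragglers, so if an answer exists at all, one already exists among the ~100 smallest distinct values.

```python
def find_even_mod_pair(a, cap=100):
    # Sketch under the assumption above: a "pairless" set is small,
    # so a valid pair (x, y) with (y mod x) even, if one exists,
    # already appears among the `cap` smallest distinct values.
    vals = sorted(a)
    # A repeated value gives y mod x = 0, which is even.
    for i in range(1, len(vals)):
        if vals[i] == vals[i - 1]:
            return vals[i], vals[i]
    for i, x in enumerate(vals[:cap]):
        for y in vals[i + 1:cap]:
            if (y % x) % 2 == 0:
                return x, y
    return None
```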

  • »
    »
    5 months ago, # ^ |
    Rev. 2

    B is not "just" brute force; the important observation is about bounding the naive solution. If your takeaway was that it was "just" brute force, then you are missing the point.

    • »
      »
      »
      5 months ago, # ^ |

      Yeah, I get it. What I meant to say is that, out of pure habit, I never for a moment considered brute-forcing an apparent n^2 search... I am just disappointed in myself for that, and this problem taught me a real lesson.

    • »
      »
      »
      5 months ago, # ^ |

      B is a pretty good problem -- a very tight mathematical observation. I had a vague feeling about bounding during the contest but could not formalize it, and then worked entirely in the wrong direction (and spent 2+ more hours..).

      Anyway, does considering the "special case" where y < 2 * x come naturally? It is natural in hindsight but feels difficult to perceive during the contest. I wonder if similar ideas show up frequently in CP (i.e., considering a manageable special case and arguing that this special case must appear, since otherwise the constraints on the input would be violated).

      • »
        »
        »
        »
        5 months ago, # ^ |

        It does come naturally. If x and y are odd, write y = x*q + r.

        Then r is even exactly when q = floor(y/x) is odd (x*q is odd iff q is odd, and odd minus odd is even). So if you want to craft an input with no possible answer, you must make floor(y/x) even for every pair. Taking the smallest even quotient, 2, each element must more than double the previous one, so you can make an array of size at most about 31. Hence the size is bounded.
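The quotient-parity argument above can be checked empirically. The small script below (an illustration, not a proof) greedily builds the longest set of odd numbers in which every pair has an odd remainder; each accepted value must more than double the previous one, so the set stays logarithmically small.

```python
def pairless_odds(limit):
    # Greedily collect odd y with (y mod x) odd for every x kept so far.
    # Any y with x < y < 2x is rejected (y mod x = y - x, which is even
    # for odd x and y), so consecutive kept values more than double.
    vals = []
    for y in range(3, limit, 2):
        if all((y % x) % 2 == 1 for x in vals):
            vals.append(y)
    return vals
```

Running `pairless_odds(10**5)` starts with 3, 7, 19, 43, ... and has fewer than 20 elements, consistent with the "array of size ~31" bound for values up to 10^9.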

    • »
      »
      »
      5 months ago, # ^ |

      Can you please explain the complexity of B, or suggest anything to help understand it? I still can't see how an O(n^2) passes with these constraints!

»
5 months ago, # |
Rev. 2

problem B is awesome. problem G is genius. problem F is nice (though i have no idea how to come up with a solution for F1 which does not solve F2 by just adding some data structure). finally, sorry for being negative but... problem E..? it is so... obscure... the problem does not feel natural at all. i don't understand why in this problem the cost of a path is defined as it is. making it just the maximum edge on the path would make the problem just slightly easier (except if i don't see some obvious solution) but the problem would make soooo much more sense. and i would even call it a great problem. but the real problem statement just broke my brain.

P.S. also, expression $$$w_{\max_{i=1}^k e_i}$$$ does not make any sense because we can't use an edge as a subscript. it should be $$$w_{\textsf{argmax}_{i=1}^k e_i}$$$

  • »
    »
    5 months ago, # ^ |

    https://mirror.codeforces.com/contest/2164/submission/347783243

    Can you help me find why my code fails on the 5th test case?

    • »
      »
      »
      5 months ago, # ^ |
              while (t < LLONG_MAX / j) {
                  t = t * j;
                  auto it = lower_bound(odd.begin() + i + 1, odd.end(), t);
                  if (it == odd.end()) {
                      break;
                  }
                  else {
                      long long pos = it - odd.begin();
                      if ((odd[pos] / odd[i]) % 2 == 1 || (odd[pos] % odd[i] == 0)) {
                          cout << odd[i] << " " << odd[pos] << endl;
                          return;
                      }
                  }
                  j += 2;
              }
      

      Maybe it's because whenever you increment j to j + 2, you should also reset t = odd[i] for the next iteration of j. Right now it is like odd[i] * 1 * 3 * 5 * 7 ..., which may not be right, I think!

  • »
    »
    5 months ago, # ^ |

    For E, I guess having the cost of the path defined in that way forces you to actually construct the reconstruction tree and propagate values downward instead of being able to do everything in one pass. It was educational for me since I haven't done many reconstruction tree problems (please recommend any good ones!) but I do agree that the cost definition felt contrived and took away from the beauty of the problem.

»
5 months ago, # |

Great problems.

During the contest I came up with a solution to F2 that is really hard to implement, and I finally completed it after the contest.

Spoiler
  • »
    »
    5 months ago, # ^ |
    Rev. 2

    I had a pretty similar idea, but the way I optimized it is by noticing that the only time a factorial for some node in the numerator and a factorial for its parent in the denominator aren't the same is in the "interval" that adding this node splits(for example, if the root has value $$$a$$$ then the relevant intervals for the root are $$$(0,a)$$$ and $$$(a,n+1)$$$, and if one of its children has value $$$b \gt a$$$ then the relevant intervals for that child are $$$(0,a)$$$, $$$(a,b)$$$ and $$$(b,n+1)$$$; it split $$$(a,n+1)$$$ into $$$(a,b)$$$ and $$$(b,n+1)$$$); so, every node other than the root, if in its subtree $$$l$$$ nodes are on the left side of the split and $$$r$$$ are on the right side, multiplies the answer by $$$\frac{l!r!}{(l+r+1)!}$$$, and the root multiplies the answer by $$$l!r!$$$.

»
5 months ago, # |

Is there any other solution for problem B?

»
5 months ago, # |

Upload codes

»
5 months ago, # |

For B, I came up with a pretty interesting solution. We store all numbers in a map whose key is the largest power of 2 less than or equal to the number. If one key has more than 2 values associated with it, the answer is just 2 of those 3 values whose remainders with respect to the key have the same parity; otherwise a brute-force solution works, because by the pigeonhole principle at most about 64 elements can exist with no key having more than 2 values.
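A sketch of that bucketing idea (my paraphrase, with made-up identifiers): two values x < y in the same power-of-two bucket satisfy y < 2x, so y mod x = y - x, which is even exactly when x and y share parity; by pigeonhole, three values in one bucket always contain such a pair.

```python
from collections import defaultdict

def find_pair_buckets(a):
    # Bucket by floor(log2(v)), then by parity. Two values in the same
    # (bucket, parity) group form a valid pair; otherwise at most ~62
    # values remain and brute force is cheap.
    buckets = defaultdict(lambda: defaultdict(list))
    for v in a:
        buckets[v.bit_length() - 1][v % 2].append(v)
    leftover = []
    for groups in buckets.values():
        for vs in groups.values():
            if len(vs) >= 2:
                x, y = sorted(vs)[:2]
                return (x, y)  # y < 2x and y - x even, so y mod x is even
            leftover += vs
    leftover.sort()
    for i, x in enumerate(leftover):
        for y in leftover[i + 1:]:
            if (y % x) % 2 == 0:
                return (x, y)
    return None
```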

»
5 months ago, # |

This contest was hard for me!

I fell a little too much, but for C I didn't understand why using the largest-damage sword is not good for stage 1.

I mean, if we use the greatest-damage sword, say z, and there are swords with damage x < z and y < z,

then at the end, after all the nonzero-c monsters are dealt with, we will have a sword with damage z or more,

and we will also be left with the other n-1 swords in vector a.

This is my submission: 347754985

  • »
    »
    5 months ago, # ^ |

    Oh, you missed one point here. You can obviously eliminate the non-zero C monsters with the help of the largest-damage sword, and you'll end up gaining a sword with damage max(x, max C of eliminated monsters). But you lose the more powerful swords you could have collected in this process. For example, suppose initially you have swords with power 1, 1, 1, 3, and you have non-zero C monsters with C values 4, 4, 4, 4 whose health values are all 1. If you kill all those monsters with the largest-damage sword, you'll end up with 1, 1, 1, 4 for killing the zero-C monsters. But if you had used the smaller-damage swords as well, you could have had swords with values 4, 4, 4, 4 to kill the zero-C monsters. I hope you get it. So ideally you should always use the lowest-damage sword that works to kill any monster, to get maximum profit.

  • »
    »
    5 months ago, # ^ |
    Rev. 2

    Because you might need to "upgrade" a worse sword.

    Let's say the input is

        1
        2 3
        1 6
        1 5 6
        5 0 0

    If you use your 6 sword on the (1, 5) monster, it does keep its level 6, but you can only kill one more monster with it afterwards. If instead you kill (1, 5) with the damage-1 sword, it gets upgraded to 5, and then you can kill both other monsters, one with each sword.

»
5 months ago, # |

When did codeforces add visualizers? This is so useful! Will this be the standard going forward?

https://mirror.codeforces.com/assets/contests/2164/E_ooxathoosh9Oob4quoiv.html

»
5 months ago, # |

Can anyone please tell me why my code is not working on problem C?

»
5 months ago, # |
Rev. 6

Admittedly, I had never heard of a reconstruction tree until several people mentioned it to me after the round. But during the contest I came up with a solution to E that does not use KRT or anything of the sort; in fact, I believe there are several such solutions. So, for other people who aren't familiar with KRT, here is a quick sketch of my solution. (It's probably very similar or equivalent to the editorial solution behind the scenes, but framed without a reconstruction tree.)

Like in the editorial, it is clear that you need an Eulerian circuit, and it is known that one exists if and only if all vertices have even degree. So we will add virtual edges between the odd degree vertices, then take the sum of the weights of all of the original edges and virtual edges.

Let $$$G_i$$$ denote the graph that contains all of the vertices, and the first $$$i$$$ edges. Then if $$$u$$$ and $$$v$$$ belong to the same connected component as $$$i$$$ in $$$G_i$$$, it is clear that we may add a virtual edge between $$$u$$$ and $$$v$$$ which has weight $$$w_i$$$. Furthermore, it's easy to see that this is a complete characterization of virtual edges we can add.

We have this notion that somehow only "suffix mins" of $$$w$$$ seem to be relevant, because for $$$i \lt j$$$, we have that $$$G_i$$$ is a subgraph of $$$G_j$$$, so if $$$w_j$$$ is smaller than $$$w_i$$$, we would like to just use this instead of $$$w_i$$$. However, this isn't quite true, because it might be the case that we want to connect two vertices in a different connected component from $$$j$$$ in $$$G_j$$$. So we introduce the notion of "connected component suffix mins," which consist of edges such that no lighter edge later gets directly connected to its connected component. (It's okay if a lighter edge gets indirectly connected, since remember that we are only adding virtual edges of weight $$$w_i$$$ when looking at graph $$$G_i$$$, and the stuff that gets indirectly connected have labels less than $$$i$$$.) Now, it is easy to see that these are the only relevant edges, because all other virtual edges can be exchanged for one with the weight of a relevant edge. Furthermore, we retain the nice property of monotonicity of suffix mins; that is, if you fix a particular pair of vertices, at some point they end up in the same connected component, and from then on, its connected component's suffix min is monotonically increasing.

We can find all of the connected component suffix mins by merging them small-to-large while we add edges one by one in index order via DSU. In other words, for each connected component, maintain a set of its relevant edges. Then suppose we are adding edge $$$i=\{u, v\}$$$. We merge the sets of relevant edges for $$$\texttt{find}(u)$$$ and $$$\texttt{find}(v)$$$, delete all elements with larger weights than $$$w_i$$$ (because they're not suffix mins), and insert edge $$$i$$$.

Now, we create a second DSU, where we insert the edges one by one in index order, while also maintaining the number of odd degree vertices in each connected component. Whenever we add a relevant edge $$$i$$$, if its connected component has two or more odd degree vertices, we can pair them off until there is at most one, adding an edge of weight $$$w_i$$$ for each pair. By the monotonicity property, it is clear that this is optimal, thus concluding the solution.
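The first pass can be sketched as follows (a toy illustration with invented names, not the commenter's actual code): process edges in index order with a DSU, keep each component's candidate weights sorted, merge small-to-large, and prune every weight larger than the current edge's weight, since the current (higher-index) edge supersedes it.

```python
import bisect

def component_suffix_mins(n, edges):
    # edges: list of (u, v, w), processed in index order.
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    cand = [[] for _ in range(n)]  # sorted candidate weights per root
    for u, v, w in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            if len(cand[ru]) < len(cand[rv]):  # small-to-large
                ru, rv = rv, ru
            parent[rv] = ru
            for x in cand[rv]:
                bisect.insort(cand[ru], x)
            cand[rv] = []
        # weights heavier than w are no longer suffix mins here
        del cand[ru][bisect.bisect_right(cand[ru], w):]
        cand[ru].append(w)  # w is >= everything left, list stays sorted
    roots = {find(i) for i in range(n)}
    return {r: cand[r] for r in roots}
```

Each weight is inserted O(log n) times amortized by the small-to-large argument, and every pruned weight is removed exactly once.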

  • »
    »
    5 months ago, # ^ |

    In fact, what you did is basically the same as the Kruskal Reconstruction Tree but done in a non-explicit way. The first dsu you did is the process of building the tree and updating the minimum costs, and the second one is the process of finding the answer from bottom to top in the reconstruction tree.

  • »
    »
    5 months ago, # ^ |

    I think the DSU solution to this problem, based on enumerating the edges by their ID, is doing the same thing as KRT: you did the same greedy while implicitly building that tree structure. The KRT solution essentially just makes the DSU solution's greedy updating procedure offline.

»
5 months ago, # |
Rev. 2

In problem B, we just have to run an O(n^2) loop to check whether a pair exists or not (no need to check any other cases like 1 even / multiple evens).

»
5 months ago, # |

Problem E

...

Now we have $$$f_i \le f_{f_{a_i}}$$$ which means it is optimal to pair up two vertex in the same subtree $$$p$$$ with cost $$$f_p$$$ as soon as possible.

MF what are $$$f_i$$$ and $$$f_{a_i}$$$?

»
5 months ago, # |
Rev. 2

I had no idea how to solve problem B without brute force. But after reading the editorial, I got goosebumps. It was a nice and interesting problem.

»
5 months ago, # |

B was harder than C

»
5 months ago, # |

As soon as I read the statement of C, I completely forgot that we only need to take max(c[i], x). I ended up spending 1.5 hours trying to solve an extended version of the problem where you only get a sword of strength c[i] after defeating the i-th monster. Totally wasted effort, lol. If anyone actually solved that “unwanted extended version,” I’d love to see your solution!

»
5 months ago, # |

This was very difficult, so very difficult. I only solved 1 problem.

»
5 months ago, # |

Can anyone explain problem H properly?

»
5 months ago, # |

B is such a good question. I came up with the approach after 2 hours. I want to share my first solution, which gave me TLE.

I came up with the following conclusion: since we need $$$y \bmod x$$$ to be even, the remainder $$$r = y - x \cdot d$$$ should be even. If we have 2 even numbers, that solves the case. But for the odd case, I went like this:

  • If $$$y$$$ and $$$x$$$ are both odd, then for the remainder to be even, $$$d= \lfloor \frac{y}{x} \rfloor$$$ must also be odd.
  • So now my task was to find a pair $$$x$$$, $$$y$$$ among the odd numbers such that $$$d= \lfloor \frac{y}{x} \rfloor$$$ is odd.
  • I approached this as follows:
// odd is the list of all odd numbers
for (auto x: odd) {
    if (x == 1) continue;
    for (ll low = x; low <= odd.back(); low += 2 * x) {
        // What I am trying to do here, is for x
        // I am trying to enumerate x * k, for k = 1, 3, 5, 7...
        // Then for a number y to give floor(y/x) odd, its range should be
        // [low, low + x - 1]
        ll high = low + x - 1; 
        auto [l, r] = find_l_r(low, high, odd);
        if (l == -1 || r == -1 || l > r) {
            continue;
        }
        int idx = l;
        if (odd[idx] == x) idx++;
        if (idx <= r) {
            cout << x << " " << odd[idx] << "\n";
            return;
        }
    }
}
  • Here is my find_l_r function; it just finds the leftmost index $$$l$$$ such that $$$odd_l \ge a$$$ and the rightmost index $$$r$$$ such that $$$odd_r \le b$$$:
auto find_l_r = [&](ll a, ll b, vector<int> &odd) -> pair<int, int> {
    int l = -1, r = -1;
    int low = 0, high = odd.size() - 1;
    while (low <= high) {
        int mid = (low + high) >> 1;
        if (odd[mid] >= a) {
            l = mid;
            high = mid - 1;
        } else {
            low = mid + 1;
        }
    }
    low = 0, high = odd.size() - 1;
    while (low <= high) {
        int mid = (low + high) >> 1;
        if (odd[mid] <= b) {
            r = mid; 
            low = mid + 1;
        } else {
            high = mid - 1;
        }
    }
    return make_pair(l, r);
};
  • The above approach is too naive and it will lead to TLE. I just wanted to optimize it. Does anyone have any idea how to optimize this?
»
5 months ago, # |

lucky -138

»
5 months ago, # |

Can someone tell me why my code is failing for C?

347839121

»
5 months ago, # |
Rev. 2

For problem E, should the graph we build be connected? I mean the one which has all the odd nodes. Also, could someone explain the solution a bit? I don't understand how we take the optimal edges.

  • »
    »
    5 months ago, # ^ |

    The way I understand it: we build the KRT first, then mark every odd-degree node with 1, and as we go up, if we have t marked nodes in our subtree, we pair them off and consider them even from then on?

  • »
    »
    5 months ago, # ^ |

    I'll explain an easy solution. First note that the number of odd-degree nodes in a connected graph is always even, so in the final graph all nodes can be matched. Now let's build the original graph step by step, adding edges in order. At some point, before adding the $$$i$$$-th edge, suppose you have stored the number of odd-degree vertices in each component. Note that we just need to pair these odd-degree vertices with other vertices in the same component. So we also maintain the weights of the edges used to connect these vertices.

    When adding an edge, we can always take a stroll to this edge, add it to our path, and join two odd-degree vertices in these components. Since this edge has the highest index among all the edges, the path value would be the weight of this edge. So for all the pairs we had made in the components, we can use this edge too; we just remove the ones which were previously joined using an edge with a higher weight. All we need is to simulate this. Here is my submission for reference: 347799843

»
5 months ago, # |

In problem B, if there is no valid pair of x and y, why doesn't it give TLE, since the time complexity is O(n^2)?

»
5 months ago, # |
Rev. 2

Problem H Part II proof 2 isn't making sense to me, does anyone understand it?

  • What do "full circular string" and "Runs(l,r,p)" mean?
  • Why isn't $$$S[l...l+p-1]$$$ a full circular string?
  • What is an "extremely long cyclic palindromic substring"?
  • What is a "loop point"?
  • What is "the exponent of a Run"?

zjy2008

»
5 months ago, # |

Just wanted to share that for problem F2 I used a linked list with square-root decomposition for inserting and querying specific positions. I kept a pointer to every 1000th element, so overall it worked in about 500*N operations.
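A minimal sketch of that kind of structure (block size and all details are my guesses, not the commenter's actual code): keep the sequence in blocks of roughly B elements, so both inserting at a position and reading a position scan at most n/B blocks plus one block.

```python
class SqrtList:
    """Sequence with positional insert/get in O(n/B + B) per operation."""
    def __init__(self, B=1000):
        self.B = B
        self.blocks = [[]]

    def insert(self, pos, val):
        for i, blk in enumerate(self.blocks):
            if pos <= len(blk):
                blk.insert(pos, val)
                if len(blk) > 2 * self.B:  # split oversized block
                    half = len(blk) // 2
                    self.blocks[i:i + 1] = [blk[:half], blk[half:]]
                return
            pos -= len(blk)
        raise IndexError("position out of range")

    def get(self, pos):
        for blk in self.blocks:
            if pos < len(blk):
                return blk[pos]
            pos -= len(blk)
        raise IndexError("position out of range")
```

With B near sqrt(n) both operations cost O(sqrt(n)); the commenter's fixed B = 1000 gives the stated ~500*N total.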

»
5 months ago, # |

UPD: The tutorials for problems E and H have been updated.

»
5 months ago, # |

I think the updated editorial of H still has some problems.

  • "$$$A$$$ and $$$B$$$ are palindromes, and $$$A$$$ is a prefix of $$$B$$$." Does it actually mean $$$B=AC$$$, where $$$A$$$ and $$$C$$$ are palindromes?

  • What if the period doesn't occur at least twice, such as $$$BA$$$? In that case, the conclusions about runs can't be used, and the number of effective palindrome pairs might be $$$O(n\log n)$$$, which means the total time complexity might be $$$O(n\log^2 n)$$$. I think one possible hack is $$$S[i]=\log(\text{lowbit}(i))$$$, which looks like abacabadabacaba....

»
5 months ago, # |

guessforces

»
5 months ago, # |

C is a great problem.

It taught me about a data structure called multiset.

»
5 months ago, # |
Rev. 3

Can anyone figure out why this https://mirror.codeforces.com/contest/2164/submission/349829120 is wrong for problem C?

Is there a way to see the hidden test cases?

»
5 months ago, # |
Rev. 2

Problem E can be solved (I guess unintentionally) in $$$O(n \log^2(n))$$$ by sorting edges by weight, storing vertices in a DSU and merging their edges (small to large), and then doing a BFS over edges with a smaller index than our current one. And the cherry on top: using priority_queue instead of set.

349991751

»
5 months ago, # |
Rev. 2

C can be solved by repeated binary search, right? O(n log n) [edit: I'm dumb, sorry]

»
5 months ago, # |

Considering newbies like me, please write solutions for problems like D in a more readable format, omg T_T

»
4 months ago, # |

How should I solve it using this approach: sort all the b_i's and rearrange the c_i's correspondingly; after that, pick up any sword with damage x, consider all the monsters with health <= x, and kill the monster with the max c_i value. But how do I find this monster with the max c_i value efficiently? Please help.
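One standard way to do that step (a hedged sketch with assumed rules, not the editorial's code): sort monsters by health and use a max-heap. Whenever the sword's damage x grows, push the c values of all newly reachable monsters (health <= x) into the heap; the heap top is then the monster with the maximum c among those you can kill.

```python
import heapq

def kill_order(swords, monsters):
    # Assumed rules (my reading of the thread, which may differ from
    # the actual problem): we fight with the strongest sword, a monster
    # with health h is killable when x >= h, and after a kill the
    # sword's damage becomes max(x, c).
    monsters = sorted(monsters)            # (health, c), by health
    x = max(swords)
    heap, i, kills = [], 0, []
    while True:
        while i < len(monsters) and monsters[i][0] <= x:
            heapq.heappush(heap, -monsters[i][1])  # negate: max-heap
            i += 1
        if not heap:
            break
        c = -heapq.heappop(heap)           # reachable monster with max c
        kills.append(c)
        x = max(x, c)
    return kills
```

Each monster is pushed and popped at most once, so this step is O(m log m) overall.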

»
4 months ago, # |

Is there any binary search approach for C?

»
4 months ago, # |

H can be accepted in $$$\mathcal{O}(q\sqrt{n}\log n)$$$; actually, the data isn't strong enough.

»
3 months ago, # |

Can someone, roughly, explain the time complexity of my code for D?

This is the "optimal" one: https://mirror.codeforces.com/contest/2164/submission/356135976 And this is the non-optimal one: https://mirror.codeforces.com/contest/2164/submission/356135935

I'm not sure why both of them passed, as it seems like K * N to me (the final loop).

I wrote the optimal one first and it passed, so I decided to check whether the l and r were necessary; somehow I got the same runtime, so I resubmitted, and it worked a little bit faster.

»
3 months ago, # |

B was really a great problem..