(Idea of the problem Div2A — ScreaMood)
(Developer of the problem Div2A — gop2024)
(Idea of the problem Div2B — kirillbogatiy)
(Developer of the problem Div2B — gop2024)
(Idea of the problem Div1A — Mr.Hakimov)
(Developer of the problem Div1A — Mr.Hakimov)
(Idea of the problem Div1B — gop2024)
(Developer of the problem Div1B — PeregudovSergey)
(Idea of the problem Div1C — ----------)
(Developer of the problem Div1C — ----------)
(Idea of the problem Div1D — gop2024)
(Developers of the problem Div1D — ---------- and gop2024)
Read this comment by saketh about another approach to this problem.
(Idea of the problem Div1E — gop2024)
(Developer of the problem Div1E — TheWayISteppedOutTheCar)
For Div1 D, can someone help me prove or disprove this strategy:
For any node L, to get the minimum sum we choose the two children u and v whose values dp[u] - 2*n*sz[u] and dp[v] - 2*n*sz[v] are smallest among all the children?
My code: 55909246
I'm not sure about your strategy, but the complexity of your code can be reduced to O(n) by obtaining the minimum and second minimum values directly (e.g. with a couple of bubble-sort-style passes or a single linear scan).
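For reference, a tiny sketch of the single-pass version (my own illustration, not taken from any submission in this thread):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Finds the smallest and second-smallest values of v in one O(n) pass.
// Assumes v has at least two elements.
pair<long long, long long> twoSmallest(const vector<long long>& v) {
    long long best = LLONG_MAX, second = LLONG_MAX;
    for (long long x : v) {
        if (x < best) { second = best; best = x; }
        else if (x < second) second = x;
    }
    return {best, second};
}
```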
I support the question.
It looks like your code also fails on saketh's test case here.
Correct answer is 1019. Your code outputs 1018.
Here's the input:
35 1 2 1 3 1 4 1 5 1 6 1 7 1 8 1 9 1 10 1 11 1 12 1 13 1 14 1 15 1 16 1 17 1 18 1 19 19 20 20 21 21 22 22 23 23 24 24 25 25 26 26 27 19 28 28 29 29 30 30 31 31 32 32 33 33 34 34 35
I just ran my code on this example. It's giving the correct output: 1019.
https://ideone.com/UAyyqj
Interesting. I guess the order of the input data matters. We need the degree-3 vertex to be the root.
35 1 2 2 3 2 4 2 5 2 6 2 7 2 8 2 9 2 10 2 11 2 12 2 13 2 14 2 15 2 16 2 17 2 18 2 19 1 20 20 21 21 22 22 23 23 24 24 25 25 26 26 27 1 28 28 29 29 30 30 31 31 32 32 33 33 34 34 35
https://ideone.com/kPEg0m
Thanks for the great editorial!
I have a question about the Div1C editorial. In the definition of x, isn't it supposed to be the maximum x (instead of the minimal one) that satisfies the mentioned condition?
If I'm wrong, can you please tell me why?
But money isn't a problem at all for Serge, so Serge is buying the most expensive dish if there is at least one remaining.
There is a typo in the editorial. Also, if you check the editorial's source code for that problem, you will see that it finds the maximal x.
In D, we only need to consider the two best dp values for each unique subtree size. With that observation, there's no need for CHT: we can just check all pairs quadratically and it will still be fast enough.
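A rough sketch of this pruning (my own illustration; combine() is a placeholder for the actual cost of picking two children, which is not spelled out here):

```cpp
#include <bits/stdc++.h>
using namespace std;

// cands holds (subtree size, dp value) for every child of the current node.
// We keep at most the two smallest dp values per distinct subtree size and
// then try every pair of survivors; combine() stands in for whatever cost
// the real dp assigns to a pair of chosen children.
long long bestPair(const vector<pair<long long, long long>>& cands,
                   const function<long long(long long, long long,
                                             long long, long long)>& combine) {
    map<long long, vector<long long>> best;      // size -> up to two smallest dp values
    for (auto [sz, dp] : cands) {
        auto& v = best[sz];
        v.push_back(dp);
        sort(v.begin(), v.end());
        if (v.size() > 2) v.pop_back();
    }
    vector<pair<long long, long long>> kept;     // (size, dp) survivors
    for (auto& [sz, v] : best)
        for (long long dp : v) kept.push_back({sz, dp});

    long long res = LLONG_MAX;                   // quadratic only in #survivors
    for (size_t i = 0; i < kept.size(); i++)
        for (size_t j = i + 1; j < kept.size(); j++)
            res = min(res, combine(kept[i].first, kept[i].second,
                                   kept[j].first, kept[j].second));
    return res;
}
```

The survivors number at most two per distinct child subtree size, so the double loop is quadratic only in the number of distinct sizes; the discussion below is about bounding that quantity over the whole tree.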
Sorry for my misunderstanding, but doesn't it have a complexity of $$$O(n^2)$$$ on a star, where every vertex has an edge to the same center vertex?
At each node, it’s quadratic in the number of distinct child subtree sizes. Your tree has a node with many children, but they have only one distinct subtree size.
I had the same idea. This is clearly $$$O(n \sqrt{n})$$$, but I'm wondering if there's a tighter bound.
I think it's way faster than that, like $$$n \log \log n$$$. Credit due to dinosaurs and danielfleischman.
Suppose $$$f(n)$$$ is an upper bound on work done on a tree with $$$n$$$ nodes. We need $$$f(n) \geq k^2 + \sum_i { a_i \cdot f(i) }$$$ for any $$$a_i$$$ where $$$\sum_i a_i \cdot i = n - 1$$$ and $$$a_i$$$ has $$$k$$$ non-zero elements.
Let's write $$$f(n)$$$ as $$$n \cdot g(n)$$$, and let's rewrite $$$\sum_i { a_i \cdot f(i) }$$$ as $$$\sum_{i < \sqrt{n}} { a_i \cdot i \cdot g(i) } + \sum_{i \geq \sqrt{n}} { a_i \cdot i \cdot g(i) }$$$.
The first sum is at most $$$\sum_{i < \sqrt{n}} { a_i \cdot i \cdot g(\sqrt{n}) }$$$ which is at most $$$n \cdot g(\sqrt{n})$$$. The second sum is at most $$$\sum_{i \geq \sqrt{n}} { a_i \cdot i \cdot g(n) }$$$ which is at most $$$\sqrt{n} \cdot g(n)$$$.$$$^\dagger$$$
Finally, $$$k \leq 2\sqrt{n}$$$, so $$$k^2 \leq 4n$$$. Thus it suffices for $$$f(n) \geq k^2 + \sum_i { a_i \cdot f(i) }$$$ that $$$n \cdot g(n) \geq 4n + n \cdot g(\sqrt{n}) + \sqrt{n} \cdot g(n)$$$.
Dividing by $$$n$$$ gives $$$g(n) \geq 4 + g(\sqrt{n}) + g(n)/\sqrt{n}$$$. $$$g(n) = 4\log \log n$$$ satisfies it for sufficiently large $$$n$$$.
$$$\dagger$$$ The root of a tree on $$$n$$$ nodes cannot have more than $$$\sqrt{n}$$$ child subtrees each with at least $$$\sqrt{n}$$$ nodes.
I think that $$$n \log \log n$$$ is also the worst case. We can construct a tree with a structure similar to the Sqrt-tree, in which each node with subtree size $$$x$$$ has about $$$\sqrt{x}$$$ children with distinct subtree sizes, each of at least $$$\sqrt{x}$$$ nodes. There will be about $$$\log \log n$$$ layers, each taking $$$\Omega(n)$$$ time.
Could you explain why $$$\sum_{i \geq \sqrt{n}} { a_i \cdot i \cdot g(n) }$$$ is at most $$$\sqrt{n} \cdot g(n)$$$ with more detail? For example, what if the node has only one child whose size is $$$n-1$$$ ?
I am wrong, sorry. Do you see how to fix it?
I don't know how to fix it either. One piece of progress: there exists a worst case in which $$$a_i \in \lbrace 0,1\rbrace$$$.
Proof: first observe that $$$f(a+b) \geq f(a)+f(b)$$$. Let $$$m$$$ be the largest index with $$$a_m > 0$$$. If some $$$a_i > 1$$$, then setting $$$a_i := a_i-1$$$, $$$a_m := a_m-1$$$, $$$a_{i+m} := a_{i+m}+1$$$ does not decrease the total time (after this operation, $$$m$$$ becomes $$$i+m$$$).
And I wrote a dp program which runs in $$$O(n^{2.5})$$$ to calculate the worst time. When n=500, the worst time is about 2700 operations. When n=5000, the worst time is about 31000 operations. It seems the time complexity is really something like $$$O(n \log \log n)$$$ or $$$O(n \log n)$$$.
If we assume that $$$f(n)$$$ is convex, it means that $$$f(x-1)+f(y+1) \ge f(x)+f(y)$$$ where $$$x \le y$$$. Following from this convexity and $$$a_i \in \lbrace 0,1\rbrace$$$, there is a worst case such that the sizes of the children of any node with subtree size $$$n$$$ are $$$1, 2, 3, \ldots, k, n-1-\frac{k(k+1)}{2}$$$.
Consider the heavy path from any node with subtree size $$$n$$$ to a leaf. Whenever we do $$$O \left( k^2 \right)$$$ work and move down to the heavy child, the size of the subtree decreases by $$$\Omega \left( k^2 \right)$$$, so the total work along this heavy path is $$$O(n)$$$.
Each time a node changes the heavy path it's on while moving to the root, the subtree size will be squared, so each node will be included in $$$O(\log \log n)$$$ heavy paths. It follows that the time complexity is $$$O(n \log \log n)$$$.
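Spelling that step out a little (my own filling-in, under the worst-case structure from the previous comment, where every light child of a node of subtree size $$$s$$$ has size at most $$$k \leq \sqrt{2s}$$$): if $$$s_0 < s_1 < \dots$$$ are the sizes of the light children at the successive light edges a node crosses on its way to the root, then

$$$s_{t+1} \geq \frac{s_t^2}{2} \quad\Longrightarrow\quad \log s_{t+1} \geq 2 \log s_t - 1,$$$

so the logarithm of the size roughly doubles at every crossing, and the number of crossings before the size reaches $$$n$$$ is $$$O(\log \log n)$$$.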
Good proof!
We can prove that $$$f(n)$$$ is convex as follows, i.e. that $$$f(n+1)-f(n) \geq f(n)-f(n-1)$$$.
If $$$f(n)$$$ is convex for $$$n \leq k$$$, then it can be proved that $$$f(n) \leq c \cdot n \log \log n + d$$$ for $$$n \leq k+1$$$ using your proof. Just take $$$f(n)=n \log \log n$$$, since the constants $$$c,d$$$ don't matter for now and $$$f$$$ represents the upper bound. We only need to prove that $$$g(x)=x \log \log x - (x-1) \log \log (x-1)$$$ is an increasing function.
Though $$$g$$$ isn't always increasing, it is increasing for $$$x > 10$$$. We can assign some constants to $$$f(x)$$$ for $$$x \leq 10$$$ to ensure $$$f(x)$$$ is convex on $$$x \leq 10$$$, and then add a suitable constant $$$d$$$ to $$$f(x)$$$ for $$$10 < x \leq k+1$$$ to ensure $$$f(x)$$$ is convex on $$$1 \le x \le k+1$$$.
So $$$f$$$ is convex.
Oh, we knew this solution, but we thought it was something like $$$n^{\frac{3}{2}}$$$. Many thanks for this comment.
I am struggling to even see the correctness. Can you please give me some details?
gop2024, for Div1 C, what do you mean by the line "Now to find the answer we can use a segment tree that maintains a balance between the number of dishes and the number of pupils for all suffixes of values"?
Which segment tree should we use, what balance do we need to maintain, and what do we add to the segment tree? Can you explain this in more detail?
Sort all pupils and dishes by value and add them from largest to smallest: dishes contribute $$$+1$$$, pupils contribute $$$-1$$$. The answer is the maximal $$$k$$$ such that the sum over $$$[k, 1000000]$$$ is $$$> 0$$$.
But how do we find this maximal k? What do we store in the nodes?
Use a segment tree that maintains the maximum suffix sum.
And check every suffix? That would be something like O(n log n). Or maybe there is a more efficient algorithm? Can you help me, please?
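One way to avoid checking every suffix (a sketch of my own, not the editorial's code; the struct name and the hard-coded example are just for illustration): store in every segment-tree node the total sum and the maximum suffix sum of its range, and find the rightmost position with a positive suffix sum by a single descent.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Segment tree over values 1..n storing, per node, the total sum and the
// maximum (non-empty) suffix sum of its range.
struct SuffixTree {
    int n;
    vector<long long> sum, suf;
    SuffixTree(int n) : n(n), sum(4 * n, 0), suf(4 * n, 0) {}

    void add(int v, int l, int r, int pos, int delta) {
        if (l == r) { sum[v] += delta; suf[v] = sum[v]; return; }
        int m = (l + r) / 2;
        if (pos <= m) add(2 * v, l, m, pos, delta);
        else add(2 * v + 1, m + 1, r, pos, delta);
        sum[v] = sum[2 * v] + sum[2 * v + 1];
        suf[v] = max(suf[2 * v + 1], suf[2 * v] + sum[2 * v + 1]);
    }
    void add(int pos, int delta) { add(1, 1, n, pos, delta); }

    // Rightmost k such that the sum over [k, n] is > 0, or -1 if none exists.
    // `carry` is the sum of all positions to the right of the current range.
    int query(int v, int l, int r, long long carry) {
        if (suf[v] + carry <= 0) return -1;   // no valid k inside this range
        if (l == r) return l;
        int m = (l + r) / 2;
        int res = query(2 * v + 1, m + 1, r, carry);
        if (res != -1) return res;
        return query(2 * v, l, m, carry + sum[2 * v + 1]);
    }
    int answer() { return query(1, 1, n, 0); }
};

int main() {
    // Usage on the idea above: +1 at each dish's cost, -1 at each pupil's money.
    SuffixTree st(1000000);
    st.add(8, +1);   // a dish that costs 8
    st.add(5, -1);   // a pupil with 5 togrogs
    cout << st.answer() << "\n";   // prints 8
    return 0;
}
```

Each query is a single root-to-leaf descent, so it costs O(log V) instead of checking all suffixes.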
Sir, my approach was the same for problem 1180B, but it failed on the 9th pretest. The reason was the big product value. I used 'long long int' and I am not aware of any data type bigger than that. I solved it in C. Please help: how can I overcome that?
https://mirror.codeforces.com/contest/1180/submission/55892852
Here is my solution; see the 9th test.
C++ can't handle big integers with built-in data types (at most there is __int128), so instead you can work with the logarithms of the numbers, and the operation performed becomes addition. Even if you implement it in Python to handle big integers, it will probably TLE, I guess.
You don't need to calculate the product, just work out its sign. This is easy, since after the loop all the a_i are negative (if x >= 0 then |-x-1| > |x|). As such, the product is negative if and only if n is odd.
See my code (in Python)
Sir, I had the same approach for 1179B, but I was confused about the case where it is not possible and we need to print -1. With this algorithm, even if some points repeat, it would be very hard to keep checking each visited point, and it would take a long time, resulting in TLE. Can you please give me an example where we can't visit all the cells and hence print -1? Thank you.
We will never print -1, because the construction mentioned above always gives a way to visit all the cells.
What is CHT?
Where have I written CHT? Or is it a general question?
It's a general question. Would you please tell me what CHT is?
No bro, I've also heard it for the first time. It sounds like a technique or algorithm, I don't know.
Edit: got this on the wiki: https://en.wikipedia.org/wiki/Circle_Hough_Transform
Hahaha I agree with quachtridat. CHT is Convex Hull Trick. Idk how you got that link, but maybe reading it a bit would tell you "Image processing" is not something you would think of here.
I think it is "Convex Hull Trick."
I would like to point out to the editorialist that it would be appreciated not to use abbreviations in the tutorial/explanation, so that people who don't know them don't get confused with something else. Even if you don't write the full form in the paragraph itself, maybe mention it at the end, like below, if you, say, talk about CHT and DSU.
Here (just an example):
CHT means Convex Hull Trick.
DSU means Disjoint Set Union, etc.
To me it is not hard to work out the acronyms, provided they are mentioned somewhere in a short paragraph; I got the CHT thing by quickly skimming through the tutorial for 1179D again when I saw his comment, but that is probably just me though :|
Also, I think it is just as good to write a term's acronym right after it, like "Convex Hull Trick (CHT)."
In Div2 problem B, why is the following statement required: "if (n != 3 || (v[0] != -3 || v[1] != -3 || v[2] != 2))"?
Even without this line the code gets accepted.
submission
Didn't you answer your own question? If it is accepted, then the statement is not required (this is true assuming the tests are not weak). And I can assure you, the tests are not weak: a lot of people had submissions fail on the system tests.
To properly answer your question: no, that line is not required.
Another tip: instead of trying to understand code provided, it is a better practice to read the explanation and try to write the code yourself.
" Now to find the answer we can use a segment tree that maintains a balance between the number of dishes and the number of pupils for all suffices of values. " What do they mean by maintaining balance and suffices of values? In div1 C.
Can someone please help me fix my code? It passed 8 pretests but blew up on the 9th one.
The link to the code is given below: https://mirror.codeforces.com/contest/1180/submission/55891316
Try this: 5 -3 -3 1 1 -1
I think you need to apply the operation to the minimum element (not the minimum by absolute value) when n % 2 == 1. My accepted output: 2 -3 -2 -2 -1; yours: -3 -3 -2 1 -1. In the second and third tests all values are equal after applying the operation to the non-negative elements, so it doesn't matter which element is changed.
(Sorry for my English.)
Nice editorial.
Maybe in problem A for Div2 you mean O(1)?
Although the editorial code calculates it in O(1), it’s still possible to calculate it in O(n).
Hey, can you help me understand the function used in the editorial code? How did he get that?
You can calculate the answer for small values of n by hand (draw a picture) and look for a pattern.
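For instance, the picture suggests the order-n rhombus has rows of widths 1, 3, ..., 2n-1, ..., 3, 1, which sum to 2(n-1)^2 + (2n-1) = 2n^2 - 2n + 1 (the pattern 1, 5, 13, 25, ...). A minimal sketch based on that closed form (my own derivation from the picture, not the editorial's code):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    long long n;
    cin >> n;
    // Rows of the rhombus have widths 1, 3, ..., 2n-1, ..., 3, 1.
    // Their sum is 2 * (1 + 3 + ... + (2n-3)) + (2n-1) = 2n^2 - 2n + 1.
    cout << 2 * n * n - 2 * n + 1 << endl;
    return 0;
}
```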
I think the solution of Problem B of Div 2 (1180B) is wrong, because it's written that if the product is already positive, it's the answer, and otherwise applying the operation to the minimal number is obviously optimal. But I got my solution accepted when I applied the operation to the maximal number in the array, and I found that more sensible. Am I right here? If not, please correct me.
The author's solution means that you should make all numbers maximal in absolute value (because abs(-a-1) > abs(a)). If we have an even count of numbers, all maximal in absolute value, that is the answer (because a negative times a negative is positive). But if the count of numbers is odd, we should find the minimum of these negative integers and apply the operation to it, turning the element with the maximal absolute value into a positive one. Maybe by "the maximal number" you mean the maximal in absolute value?
In case you're looking for a proof/ intuition behind why the approach in 1180B — Nick and Array — works, here is some food for thought:
When n is odd, increasing the magnitude of all the elements will leave us with a negative product because of the odd number of negatives. So we have to apply the transformation to one of the elements again, which will make the product positive.
Let's concentrate on the magnitudes of the numbers: we have a series of numbers to multiply, a1 * a2 * a3 * ... * an, and we have to decrease the magnitude of one of them by 1. Which element do you pick? You pick the one with the highest magnitude.
Why does this work?
We have to maximize (a1 * a2 * a3 * ... * an), so we can think of it as maximizing the sum of the logarithms (log|a1| + log|a2| + log|a3| + ... + log|an|). Because the log function flattens out, the decrease in log value from reducing |ai| by 1 is smallest when |ai| is the largest: we are looking further right on the log graph. That's why you choose the one with the highest magnitude and decrease it.
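A minimal sketch of the whole approach described above (my own code, not any of the linked submissions): make every element negative, which maximizes all magnitudes, and if n is odd flip the most negative element back, since by the logarithm argument it loses the least.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;
    vector<long long> a(n);
    for (auto& x : a) cin >> x;
    // a -> -a - 1 increases the magnitude of every non-negative element,
    // so after this loop all elements are negative with maximal magnitude.
    for (auto& x : a) if (x >= 0) x = -x - 1;
    // With an odd count of negatives the product is negative, so flip the
    // element with the largest magnitude (the minimum) back: it loses the
    // least in the "sum of logs" sense.
    if (n % 2 == 1) {
        int pos = min_element(a.begin(), a.end()) - a.begin();
        a[pos] = -a[pos] - 1;
    }
    for (int i = 0; i < n; i++) cout << a[i] << " \n"[i + 1 == n];
    return 0;
}
```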
In Div. 2 E / Div. 1 C, shouldn't it be the maximum x instead of the minimum?
In Div1 B, each cell (x, y) can be encoded as a single index x*m + (y - 1). The jumps alternate i -> j -> i -> ..., and since all the differences i - j and j - i are distinct, the algorithm is correct.
My code: 55987211
For Div1 C, it is not written anywhere that pupils leave after buying one dish, so it seems possible that a pupil buys further dishes with the remaining money, but the accepted solutions don't seem to handle these cases. For example, for the test case "2 1 99 1 1 1 2 1 100" the answer should be -1, but the output is 1. Can someone explain how?
Why did I get TLE in problem Div2 D when I used endl, but got AC with '\n'? TLE submission: https://mirror.codeforces.com/contest/1180/submission/56012176 AC submission: https://mirror.codeforces.com/contest/1180/submission/56012511
When you use endl, it is basically the same as '\n', but it also flushes cout and thus works a little bit slower. In fact, this optimization is usually minor; in this problem, replacing long long with int was enough.
Thank you.
In problem C, Div1, shouldn't the answer be the "maximum" x which satisfies the condition that the number of dishes with cost ≥ x is strictly more than the number of pupils who have more than x togrogs?
The std (reference solution) of Div1 D prints 180 while my program prints 181 on this input:
15
2 1
3 2
4 2
5 4
6 3
7 1
8 6
9 1
10 9
11 2
12 11
13 11
14 4
15 2
When we connect node 8 and node 10, we can get a better answer of 181.
So is there a bug in std? Or have I misunderstood the problem?
My submission: http://mirror.codeforces.com/contest/1179/submission/56058316
(By the way, the tests are so weak that both my code and the std got accepted...)
Can somebody help me with the following code: https://mirror.codeforces.com/contest/1180/submission/56155460
Your declaration takes a very large amount of memory.
This is a (long long) array of 101001111 elements of 64-bit size. The amount is approximately: 101001111 * 64 / (8 * 1024 * 1024) ~ 770.58 MB, which is larger than 256 MB allowed in the problem statement.
I still didn't understand Div 2 B. Why aren't we considering whether the count of positive numbers is even or odd? Because on converting an odd number of positive integers we will end up with a negative product. Please reply.
I find this approach easier and more intuitive. First, convert every element into a non-negative integer (zero or positive). Then sort the array and make the elements negative pairwise, so that you get the maximum product in the end. If the array length is odd, you don't need any extra condition: just leave the last element alone and it takes care of the maximum product by itself. For example, if your array is -2 -4 -1 0 0 1 2 3, then after converting to non-negatives it becomes 1 3 0 0 0 1 2 3; after sorting and making the elements pairwise negative you get 0 0 0 1 1 2 3 3 -> -1 -1 -1 -2 -2 -3 -4 -4. Now you can restore the original order by sorting with indices; to do that you can store the elements as pairs.
Can someone please explain the Div2 A editorial? I am not able to understand the sequence used in the editorial code.
For Problem D, Div. 1, from what I understood they say that $$$dp[L] = \min(dp[p_1] + dp[p_2] + (n - sz_{p_1} - sz_{p_2})^2)$$$. I agree with that, but then they say that we get $$$n^2 + dp[p_1] - 2 \cdot n \cdot sz_{p_1} + dp[p_2] - 2 \cdot n \cdot sz_{p_2} + 2 \cdot sz_{p_1} \cdot sz_{p_2}$$$. Shouldn't it be $$$n^2 + dp[p_1] - 2 \cdot n \cdot sz_{p_1} + dp[p_2] - 2 \cdot n \cdot sz_{p_2} + 2 \cdot sz_{p_1} \cdot sz_{p_2} + sz_{p_1}^2 + sz_{p_2}^2$$$?
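For reference (my own expansion, not a quote from the editorial), multiplying out the square gives

$$$(n - sz_{p_1} - sz_{p_2})^2 = n^2 + sz_{p_1}^2 + sz_{p_2}^2 - 2 \cdot n \cdot sz_{p_1} - 2 \cdot n \cdot sz_{p_2} + 2 \cdot sz_{p_1} \cdot sz_{p_2},$$$

so the squared terms are indeed part of the expansion; whether the editorial drops them because they are accounted for elsewhere in its derivation is a separate question.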
In Div1 D, what is the meaning of the sums A, B and C? Thanks in advance.
In Div2 C I was getting MLE because I was using a deque, but the same logic with a vector works fine. Can anyone please explain why that is?
Well, I spent almost 2 hours on problem B (Nick and Array). A nice and tricky problem anyway!