|
+39
The last contest had a very exciting convolution problem, let's keep that tradition going this year... Also, I won't do any problem this year for sure... Also, here's a rickroll (again): Click here! |
|
0
Problems are extremely hard, trust me |
|
On
anaconda666 →
CORNER SUBGRID COUNT CSES : PRAGMA TARGET(POPCNT) V/S PRAGMA O3 OPTIMIZATION, 4 weeks ago
0
Would writing a function to wrap |
|
+2
Part A:
Part B: https://mirror.codeforces.com/blog/entry/44575?#comment-291148 The only change is that the TST has 3 problems and a total score of 300 points for each day, so the APIO score (out of 300) and the TST score (out of 600) are added together (900 points maximum) to select the IOI team. There are no age restrictions on the IOI team (aside from being a high schooler); many made it while only in 10th grade, so one can go a maximum of 3 times. Usually 600-700 students are selected for round 1 of our national olympiad, through provincial or school selection tests. Part D is similar to the above. |
|
0
At least replace $$$l\leq r$$$ with $$$l,r$$$... Also, why did 20+ people solve Z? Was the problem statement leaked or something? |
|
0
It took me a while to see why it works, but it actually does. My sketch |
|
On
O1_TooEasy_VK →
Sqrt Decomposition (codeforces round 429 DIV1) problem D Destiny, 5 months ago
0
If an element occurs at least $$$(r-l+1)/k$$$ times, then you have at least a $$$1/k$$$ chance of picking it when choosing a random index in $$$a[L..R]$$$. So there's at most a $$$1-1/k$$$ chance of missing it in one try, and after $$$x$$$ iterations the chance of never picking the answer is at most $$$(1-1/k)^x$$$, hence the formula. Choosing a random index can be done simply as
|
|
+17
It's great to see another sqrt problem appearing in the round. |
|
On
O1_TooEasy_VK →
Sqrt Decomposition (codeforces round 429 DIV1) problem D Destiny, 5 months ago
0
Imagine what happens if K increases while L and R stay the same: clearly, the set S remains unchanged, so the above code's idea is wrong. For the second code, 351935076, instead of brute forcing (so-called "fixed deterministic sampling"), just pick $$$\log_{1-1/k}(p)$$$ random indices from the segment $$$[L..R]$$$ ($$$p$$$ is the failure rate, $$$10^{-9}$$$ is fine; also, you should precompute the number of iterations as a constant). |
|
+8
The theoretical rating (the rating of a participant who has a 50% chance of solving) is calculable from the rankings. For example, https://clist.by calculates it. The rating on CF is usually quite close to this rating, but maybe a bit lower (sometimes the [theoretical] rating can be inflated). |
|
0
Removed |
|
+1
The condition "is the LCA of some set of $$$k$$$ nodes" is equivalent to "has subtree size at least $$$k$$$", since such a set can always be chosen (e.g. pick the root of the subtree, then $$$k-1$$$ other nodes in it). Thus, we can solve this problem with a simple rerooting DP, similar to 1092F - Tree with Maximum Cost. |
|
+5
|
|
+8
Why is there a surge of people doing heavy-light partitioning in Div 3? I mean, was it that hard to do a simple tree-based solution? Look, it's not a 2400-rated D2E. |
|
0
Considering that the base and modulo in the editorial's solution are fixed, I would guess that test 97 is a hack case. Random fact |
|
0
Very similar problem to G |
|
0
That's some useful dimensional reduction you got there. I appreciate the effort |
|
+5
It's really weird that my solution passed just with a |
|
+2
I like to call it "heavy-light partitioning"; a few problems can be found here: https://mirror.codeforces.com/blog/entry/139745?#comment-1248383. The general strategy is to find a parameter that can be split into "heavy" and "light" values; in this case it's $$$k$$$: process the "light" $$$k$$$'s ($$$\leq \sqrt{n}$$$) and the "heavy" ones separately, then combine them somehow. |
|
0
If you prune the divisors after each iteration and combine that with offline processing, you can get $$$O(d)$$$ per query. Proof |
|
0
I think so, when |
|
0
I have replaced your |
|
0
Another problem that uses Taylor expansion: https://oj.vnoi.info/problem/vmsincos, though the approximation error on this problem doesn't matter that much (I used a 7-term Taylor expansion and it passed). |
|
0
Divide and conquer. You can see that it computes $$$f(a,b)$$$ by using a subproblem, $$$f(a - top_{bit},b - [top_{bit}\leq b]*top_{bit})$$$. |
|
+6
E can be solved using DNC too, see 311198305. |
|
+3
The set of gcds of all prefixes of an array with $$$a_i\leq n$$$ contains at most $$$O(log(n))$$$ distinct elements. |
|
0
Can anyone explain why this brute force solution passes? I haven't been able to prove the complexity, but I think it's probably $$$O(log^2(r))$$$. Fun fact About hacks in F |
|
+4
966E - May Holidays: combining HLD and sqrt decomp. 398D - Instant Messanger: heavy-light partitioning. 348C - Subset Sums: heavy-light partitioning again. 1580C - Train Maintenance: heavy-light partitioning AGAIN. 911G - Mass Change Queries: small to large and sqrt decomp. 342E - Xenia and Tree: sqrt decomp and rebuild. 522D - Closest Equals: block processing / Mo's algorithm. Bit counting: 617E - XOR and Favorite Number but extended. Sub-subsequence: sqrt decomposition with tries. Delete numbers: heavy-light partitioning AGAIN (yes, this is starting to get boring now). And a lot of other problems that can be solved with other data structures, but I like sqrt decomposition the most :) |
|
+3
I have noticed a huge inefficiency in your code. You used |
|
0
My idea for H: let $$$dp_0$$$ be the sum of all sequences ending at 0, with $$$dp_1$$$ defined the same way; let $$$nums_0$$$ be the number of subsequences ending at 0 before position $$$i$$$, with $$$nums_1$$$ defined the same way. The brute force code looks like this: So, how do we optimize this? Matrix multiplication looks good here. Let's use it: The only thing left is a segment tree, which I implemented in 305304142. |
|
+8
Some other ways to solve the problem:
There's a general idea of rebuilding the data structure every $$$O(\sqrt{n})$$$ queries. I think this idea is similar in that it keeps the block size bounded by $$$O(\sqrt{n})$$$, but it's more efficient because it only splits a single block. |
|
+1
I think that I have an $$$O(n^{5/3})$$$ solution. First, compress the numbers. Then, do Mo's algorithm with updates, keeping track of the counts of the numbers and whether they exist in the current segment. Update the exists-in-segment array using block decomposition (see technique 2 of https://mirror.codeforces.com/blog/entry/100910). Then, each query of type 2 costs $$$O(\sqrt{n})$$$ for getting the distinct count, for a total of $$$O(n\sqrt{n})$$$, plus $$$O(n^{5/3})$$$ for the main Mo's phase (assuming $$$q=O(n)$$$). If there were no constraint involving unique elements, you could use a segment tree with an ordered set data structure in each node to get $$$O(nlog^2(n))$$$ complexity. Edit 2: There's an asymptotically better solution that runs in $$$O(nlog^3(n))$$$ time. Consider the points $$$(i,nxt_i,a_i)$$$ and use Point Add Cuboid Sum. |
|
On
NahC0el →
Why does this code run for about 5s with C++20 but 1.2s with C++17/23?, 14 months ago
-8
|
|
0
A kind of brute force based matrix multiplication solution for F: https://mirror.codeforces.com/contest/2060/submission/301857001. It completes the max test in ~3.6s. The complexity is $$$O(klog^2(k)log(n))$$$. |
|
+3
|
|
0
It's possible, check out the only in-contest solution. The actual solution only improves the time of updates and queries on the $$$blocks$$$ array. |
|
0
If you do block decomposition of size $$$O(n^{1/4})$$$ on the $$$blocks$$$ array, you could also attain $$$O(1)$$$ update and $$$O(\sqrt{n})$$$ query complexity. I think it's more intuitive, but it seems a bit slower. |
|
0
No, the problem can be solved online. |
|
0
966E - May Holidays The problem that combines my two favourite techniques (HLD and sqrt decomp). I had great fun solving it. |
|
+6
|
|
+1
Another classical but rarely used idea: 1155E - Guess the Root. |
|
0
Even for $$$n$$$ up to $$$10^7$$$, the performance can be nearly equal to std::set. |
|
+13
You can simulate a set using $$$O(\sqrt{n})$$$ deques, and it can be made faster than PBDS (for $$$n,q\leq5*10^5$$$) while also being very memory efficient. Update: the worst case is now faster than PBDS, and this version is also much faster than the previous code; on average it's now 2x faster than PBDS. |
|
0
The complexity of problem C should be $$$O(nlog(a_i)+t(D(a_i)+f(a_i)))$$$, where $$$f(a_i)$$$ is the complexity of factorization. Using Pollard's rho algorithm, C can be solved with $$$a_i\leq10^{14}$$$ (or even $$$10^{18}$$$ if $$$t\leq100$$$). Is there any way to solve it faster than this time complexity (e.g. with the original constraint of $$$t\leq1000$$$? I don't think that fits in the 2s time limit). |
|
0
E can be solved using divide and conquer. Let $$$f(l,r)$$$ be the result when considering the array from $$$l$$$ to $$$r$$$, and let $$$mid=(l+r)/2$$$. Then $$$f(l,r) = f(l,mid)+f(mid+1,r)\ +$$$ (the sum over all arrays formed by merging a non-empty suffix of $$$[l,mid]$$$ with a non-empty prefix of $$$[mid+1,r]$$$, each contributing its sum modulo $$$M$$$). Take every non-empty prefix of $$$[mid+1,r]$$$ and sort them by their sum modulo $$$M$$$. For each non-empty suffix of $$$[l,mid]$$$, let $$$suf$$$ be its sum modulo $$$M$$$: prefixes with value less than $$$M-suf$$$ are counted directly, and for each prefix greater than or equal to $$$M-suf$$$ we subtract $$$M$$$ from the contribution (this works since $$$suf$$$ + (a prefix modulo $$$M$$$) is less than $$$2M$$$). To compute this quickly for each suffix, use binary search and prefix sums. Code: https://atcoder.jp/contests/abc378/submissions/59455453. Complexity: $$$O(nlog^2(n))$$$. |
|
0
Anyone solved E using divide and conquer? |
|
0
Basically, the unordered_map hack is that you insert many multiples of a specific prime (one of the bucket counts the table will use), so all of those numbers land in a single "bucket", which is essentially a plain list; searching it takes $$$O(n)$$$ time, so each operation becomes $$$O(n)$$$ and your total complexity goes up to $$$O(n^2)$$$, which is not what you want. To fix this you can use a custom hash function as described here, which makes the hash unpredictable so tests can't target that special prime. |
|
+3
My favourite algorithm is SPFA (or FFT) |
|
0
Can you give more details about the solution? |
|
0
It doesn't need to be online |
|
0
Are the queries guaranteed to be $$$O(1)$$$? Because I need to do $$$O(n\sqrt{n})$$$ of them. |
|
0
I have found an $$$O(n)$$$ algorithm after searching for it online: https://en.wikipedia.org/wiki/Lexicographically_minimal_string_rotation. |
|
On
risingStark →
A variation of binary search. Need some help in correcting the logic., 19 months ago
0
Maybe try ternary search since you're dealing with a convex function |
|
0
|
|
0
By the way, $$$O(n\sqrt{n}log(n))$$$ does work for subtask 2, which matches your constraints. |
|
0
There exists a variant of this problem that only deals with subtractions: https://oj.vnoi.info/problem/bedao_m23_e (obviously $$$O(n\sqrt{n}log(n))$$$ does not work here) |
|
On
code__void →
Need Suggestions on using "Mint" or (customizable int to keep value in certain range), 20 months ago
0
|
|
0
It has to do with optimization and the tasks. An optimized matrix multiplication code with |
|
0
Finally, it happened. |
|
0
No, https://judge.yosupo.jp has had GCC 14 for a few months now. |
|
0
Use |
|
0
The number of keys in the map in this case is constant (6), but map has a high constant factor, so it still affects access times by a lot. |
|
+1
Replacing endl with |
|
0
|
|
0
Doing |
|
0
See this article: https://mirror.codeforces.com/blog/entry/45485. It's a sparse table but with 2 layers. |
|
0
The init function is where you prepare anything needed to process the queries, e.g. a sparse table. |
|
+3
I just looped through all possible values of $$$n*a-b$$$, which takes $$$O(log(n*a-b))$$$ time. Then, computing the number of digits also takes that much time (it can be optimized a bit, though). |
|
0
The complexity is actually $$$O(log^2(n*a - b))$$$ for $$$a\neq 1$$$. I also did the same thing. |
|
0
Just simple algorithms like greedy, binary search, and sorting |
|
0
You need to do div2 A and B fast, 5 div4 problems, and 4 div3 problems. The div4 E's and div3 D's were usually rated 1400. |
|
0
Decompose the number K into Fibonacci numbers: https://oeis.org/A003714. |
|
0
E can be solved in $$$O((n+q)\log(max(h_i))\log(max(w_i)))$$$ time with the same amount of memory (if you choose nested hash tables as the container) or $$$O(max(h_i)max(w_i))$$$ memory (if you choose arrays, which are faster) using 2D BITs. This solution is 2x faster than the normal one: https://mirror.codeforces.com/contest/1722/submission/267849082, https://mirror.codeforces.com/contest/1722/submission/267849782. |
|
0
If you want a multiset count that works in $$$O(log(n))$$$, you can check out ordered_set. From cppreference: https://en.cppreference.com/w/cpp/container/multiset/count, note that the complexity is "Logarithmic in the size of the container plus linear in the number of elements found". https://mirror.codeforces.com/blog/entry/111217 mentions this as "Mistake 22". |
|
0
Very useful explanation, thanks |
|
0
How can you explain the $$$length \lt 5$$$ in E? Also C's code has formatting issues. |
|
0
In F, if you accumulate the total damage at the first iteration of binary search ( |
|
+12
An $$$O(nm)$$$ solution for E that uses xor hashes: Submission |
|
0
I almost used dynamic segment trees not knowing there is a much easier solution. |
|
On
mukel →
Concurrency, parallelism, multi-threading... non existing in programming contests?, 23 months ago
0
Some $$$O(nlog(n))$$$ algorithms can still be parallelized using threads (e.g. divide-and-conquer algorithms like merge sort). |
|
0
Code Algorithm |
|
-10
Spoiler |
|
+10
For example, in CodeTON Round 8 F there were 2 facts (shown as key "observations"):
Those 2 facts were left completely unproven in the editorial. Also, the editorial is hard to read and I can't understand it clearly, but that is not as important as the 2 unproven facts. |
|
+9
Sorry for the downvote, it was by accident. I do want to upvote. |
|
0
I solved D before A, B, C in div3/4s and regretted it, because the penalties added up to the point where I ranked lower than someone who did A, B, C first and D slowly. |
|
On
falcoln →
If you could add one new superpower to a programming language, what would it be and why?, 2 years ago
0
|
|
-12
When called with a range of length 1, the push function doesn't need the r variable, so one can create a push1 function that only pushes a single element. It actually gave me a 2x speedup in 1440E - Greedy Shopping. |
|
0
One can process everything that the program might receive in "O(1)" time, since the input is always bounded. How to process it is another story. |
|
On
I_love_Saundarya →
Can FFT be applied if the coefficients are double/floating point numbers ??, 2 years ago
0
Why wouldn't you like to shift the dot? Also for big floating point numbers: the mantissa can be FFTed. |
|
On
I_love_Saundarya →
Can FFT be applied if the coefficients are double/floating point numbers ??, 2 years ago
0
You can do long division |
|
On
I_love_Saundarya →
Can FFT be applied if the coefficients are double/floating point numbers ??, 2 years ago
0
Explain it more clearly. |
|
On
I_love_Saundarya →
Can FFT be applied if the coefficients are double/floating point numbers ??, 2 years ago
0
I don't understand what you are talking about. FFT is suitable for complex/floating point convolution, there's no need to multiply by some power of 10 to pass integers to FFT.
|
|
On
I_love_Saundarya →
Can FFT be applied if the coefficients are double/floating point numbers ??, 2 years ago
0
Why did you think that FFT is only applicable to integers? Normal FFT uses floating-point numbers in its calculations, and the result it returns is a floating-point convolution; the values are only rounded back to integers when multiplying polynomials with integer coefficients. |
|
0
That was my idea of using the terminal to do it. Of course the image needs to be resized to a square, but it should work. |
|
+8
I think that if G add the |
|
0
My reaction |
|
+1
The observation parts are quite hard, but there are a few easier problems. The implementation is extremely easy, though. |
|
0
Just check whether anyone participated in previous April Fools contests and got rated =) |
|
+9
- April Fools Contest |
|
+9
April Fools contest 2023: Completely REAL factorization of F Completely FAKE factorization of F (100% wrong) |