|
+96
Nice blog! As a researcher one of whose main areas of study is matrix multiplication, it's nice to see people interested in this topic.

One additional remark on this post: Q2 asks whether there exists an $$$o(n^\omega)$$$ time algorithm. This question actually has a negative answer — with a technical caveat: it depends on the computation model. My answer assumes we are working in a model that restricts "computation" to arithmetic operations, but it is consistent with current knowledge that, over some Boolean models, one can go beyond $$$n^\omega$$$, for example via table lookup.

The main reason is the nice self-similarity structure of the matrix multiplication problem, as already mentioned in the post: a fixed "algorithm" for a finite matrix multiplication can bootstrap itself into a genuine algorithm working for all $$$n$$$. Indeed, Strassen's algorithm shows that the fact "$$$2 \times 2$$$ matrix multiplication can be done with 7 multiplications" already yields the estimate $$$\omega \leq \log_2 7$$$. In hindsight, the academic community rephrased this idea through the notion of tensor rank. Glossing over some rigor, the tensor rank of $$$n \times n$$$ matrix multiplication is essentially the minimal number of "multiplications" required to multiply two such matrices. We then have two fundamental facts:

- Tensor rank to algorithm (generalization of Strassen's divide-and-conquer): if $$$n \times n$$$ matrix multiplication has tensor rank at most $$$r$$$, then there is an algorithm achieving exponent $$$\log_n r$$$.
- Algorithm to tensor rank: if an algorithm performs $$$T$$$ arithmetic operations to compute $$$n \times n$$$ matrix multiplication, then $$$n \times n$$$ matrix multiplication has tensor rank at most $$$O(T)$$$.

So what happens if we combine the two? Suppose there is an $$$o(n^c)$$$ time algorithm for matrix multiplication (and, as noted, we assume it only performs arithmetic operations). Then $$$N \times N$$$ matrix multiplication can be done in $$$o(N^c)$$$ operations; for sufficiently large $$$N$$$, this means $$$T \lt N^c$$$, so there is an algorithm achieving exponent $$$\log_N T \lt c$$$.

I deliberately avoided using the notation $$$\omega$$$ in the argument above, because the modern definition of $$$\omega$$$ is itself given through tensor rank: $$$\omega = \inf_{n \gt 1} \log_n r_n$$$, where $$$r_n$$$ is the tensor rank of $$$n \times n$$$ matrix multiplication. Under this definition, $$$\omega \leq \log_n r$$$ is essentially a tautology. This definition is also less ambiguous, since its value is independent of the computational model and only depends on the characteristic of the field. And you should believe it genuinely captures the complexity of matrix multiplication — unless you think there is some completely mind-blowing way to do arithmetic without actually doing it! |
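The bootstrap from a finite identity to an algorithm for all $$$n$$$ can be made concrete. Below is a minimal, unoptimized Python sketch of Strassen's recursion for square matrices whose size is a power of two; the seven products M1..M7 are exactly the "7 multiplications for $$$2 \times 2$$$" mentioned above.

```python
# Minimal sketch of Strassen's algorithm for n x n matrices, n a power of two.
# The seven recursive products realize "2x2 matrix multiplication with
# 7 multiplications"; recursing gives exponent log2(7) ~ 2.807.

def add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def strassen(A, B):
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    # Split both matrices into quadrants.
    A11 = [row[:h] for row in A[:h]]; A12 = [row[h:] for row in A[:h]]
    A21 = [row[:h] for row in A[h:]]; A22 = [row[h:] for row in A[h:]]
    B11 = [row[:h] for row in B[:h]]; B12 = [row[h:] for row in B[:h]]
    B21 = [row[:h] for row in B[h:]]; B22 = [row[h:] for row in B[h:]]
    # Seven multiplications instead of the naive eight.
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    # Reassemble the quadrants of the product.
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(add(sub(M1, M2), M3), M6)
    return ([C11[i] + C12[i] for i in range(h)] +
            [C21[i] + C22[i] for i in range(h)])
```

The recurrence is $$$T(n) = 7\,T(n/2) + O(n^2)$$$, i.e. $$$O(n^{\log_2 7})$$$; the same bookkeeping applied to any rank-$$$r$$$ identity for $$$n_0 \times n_0$$$ matrices gives exponent $$$\log_{n_0} r$$$.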
|
+44
Rollback should occur more frequently. |
|
+29
future red coder right here |
|
+29
I tried hard to recall when the last AtCoder official contest had a task with a partial score and realized I was still a primary school student then. |
|
+27
I don't think it is a good enough reason for it to be unrated. Refer to this blog. |
|
+20
Your contest performance has remained roughly the same since the creation of your account, and you've been a specialist for months; it seems like you are using an alt just to say this. You really seem a bit set on thinking you're better than other people... go practice instead. |
|
+16
your submission for E looks like AI-generated code bro. |
|
|
+12
Team UK:
|
|
+11
Answer to the Bonus Problem D: My approach greedily finds the longest palindromic subsegments, but the concept also works with the idea discussed in the editorial. Since each element occurs exactly twice in the array, there are only $$$n$$$ pairs of equal elements of the form $$$a_i = a_j$$$. Also, since the positions of each element are fixed, each center determines a unique set of palindromic pairs.

We iterate over the array; at the current index $$$i$$$ we look for the matching occurrence of $$$a_i$$$. If it is at an index $$$j \geq i+1$$$, we check each pair $$$(i, j), (i+1, j-1), (i+2, j-2), \ldots, (i+k, j-m)$$$, where $$$i+k \leq j-m$$$, marking the checked elements in a visited array. If at some point the selected subsegment turns out not to be palindromic, we simply set $$$i = x$$$, where $$$x$$$ is the index at which the palindromic condition first failed (i.e. $$$a_x \neq a_y$$$ for the indices $$$x$$$ and $$$y$$$ being compared), and repeat the process from there to find other possible palindromic subsegments. If instead the selected subsegment is a palindrome, we iterate over the visited array from $$$0$$$ to $$$n$$$, store the maximum mex found so far, and then set $$$i = j+1$$$, where $$$j$$$ was the index of the matching occurrence of the element at index $$$i$$$.

We can jump to $$$j+1$$$ because the longest palindromic subsegments are found greedily, and since each element occurs exactly twice, no subsegment of the palindromic subsegment $$$(i, j)$$$ can be present in any other palindromic subsegment. Hence we traverse the array only once and visit each element only once, so greedily finding the longest palindromic subsegments takes $$$O(n)$$$ operations.

As for computing the mex: $$$0$$$ can be present in at most $$$2$$$ palindromic subsegments, so for all other palindromic subsegments the mex is found in just one operation (it is simply $$$0$$$). At most $$$2$$$ subsegments take $$$O(n)$$$ time each to compute their mex, so in total this takes $$$O(2n-2+2n) = O(n)$$$. Therefore the total number of operations is $$$O(n) + O(n) = O(n)$$$. My submission uses the same idea. |
|
+11
that is not good bro! |
|
+11
Interesting! Any recommended reading? Can you share the book chapters you said you read? Also, is there any reason behind adding $$$\pm 0.1$$$ to $$$\omega$$$ in Q3 and Q4 instead of some other constant? |
|
+10
The first Premier Round with 2:30 duration. |
|
+10
Don't miss the contest! |
|
+9
Best Div 3 of the year? |
|
+9
wtf we have nearly identical solution even the naming convention for C this is my submission holy shit i hope i don't get banned lol. |
|
+9
instead of making suffix min array in problem E, we can simply track the max gain by maintaining two state variables... My Code |
|
+9
noted...and added the spoiler tag. thanks for the tip! |
|
On
Nasa →
An efficient, shorter and easier way to find LCA(Lowest common ancestor) in offline without Tarjan's, 42 hours ago
+9
Can be made online by a similar style of binary search. Let LCA(u, v) be a query with tin[u] < tin[v]. Consider the preorder traversal: we want to find the smallest i (<= tin[u]) such that max(tout[preorder[i]], tout[preorder[i+1]], ..., tout[u]) <= tin[v]; then the answer is preorder[i-1]. Implement it with a walk over a segment tree. To golf it further, do the walk in the style of https://mirror.codeforces.com/blog/entry/118682 |
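The walk described in the comment can also be sketched with a plain binary search backed by a sparse table for the range maximum of tout over the preorder (the segment-tree walk saves the extra log). A hedged Python sketch; `build_lca` and the half-open tin/tout convention are my own choices, not from the blog:

```python
# Online LCA via the observation above: with tin[u] <= tin[v], the answer is
# preorder[i-1] for the smallest i such that every node in preorder[i..tin[u]]
# has tout <= tin[v] (its whole subtree finishes before v starts).
# Convention: tin[x] is x's preorder index, subtree of x = indices [tin[x], tout[x]).

def build_lca(adj, root=0):
    n = len(adj)
    tin, tout, order = [0] * n, [0] * n, []
    seen = [False] * n
    stack = [(root, False)]            # iterative DFS: (node, exiting?)
    timer = 0
    while stack:
        v, exiting = stack.pop()
        if exiting:
            tout[v] = timer            # half-open end of v's subtree
            continue
        seen[v] = True
        tin[v] = timer
        timer += 1
        order.append(v)
        stack.append((v, True))
        for w in adj[v]:
            if not seen[w]:
                stack.append((w, False))

    # Sparse table: max of tout over contiguous preorder ranges.
    sp = [[tout[x] for x in order]]
    k = 1
    while (1 << k) <= n:
        prev = sp[-1]
        sp.append([max(prev[i], prev[i + (1 << (k - 1))])
                   for i in range(n - (1 << k) + 1)])
        k += 1

    def range_max(l, r):               # max tout over order[l..r], inclusive
        j = (r - l + 1).bit_length() - 1
        return max(sp[j][l], sp[j][r - (1 << j) + 1])

    def lca(u, v):
        if tin[u] > tin[v]:
            u, v = v, u
        lo, hi = 0, tin[u] + 1         # binary search for the smallest valid i
        while lo < hi:
            mid = (lo + hi) // 2
            if range_max(mid, tin[u]) <= tin[v]:
                hi = mid               # whole suffix finishes before v starts
            else:
                lo = mid + 1
        return order[lo - 1]           # lo >= 1: the root never passes the test

    return lca
```

With the segment-tree descent from the linked blog, `range_max` and the binary search fuse into a single $$$O(\log n)$$$ walk per query.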
|
On
Nasa →
An efficient, shorter and easier way to find LCA(Lowest common ancestor) in offline without Tarjan's, 47 hours ago
+8
yes |
|
+8
don't do it after this contest, so good luck |
|
On
abhijeetballabh_23 →
CodeBlitz — A Blitz-Style Duelling Platform for Competitive Programming, 38 hours ago
+8
Tried this out with my friend and honestly, we had a great time! It’s a really helpful and amazing platform to compete with friends, felt super engaging and actually fun while solving problems. Loved the concept! |
|
+8
Although I only solved A, B, C, it was a good contest! Hope I make it in the next few rounds, ISA |
|
+8
Problem authors definitely made the hacking interesting this round. Having a lot of fun! |
|
+8
I don't know if it can be called greedy. I just did simple DP, like we are not deleting leaves, and rerooted it. |
|
+8
My post contest discussion stream. |
|
+8
You're right! The point is that the paper I referred to also doesn't use the definition of $$$\omega$$$ — it just talks about a specific exponent $$$\log_n r$$$ achieved by a tensor of rank $$$r$$$. People sometimes do this for convenience. I'm also a big fan of this "speedup theorem" of Coppersmith–Winograd. It's quite a "philosophical" result and produces very interesting consequences, but it's probably not strong enough to imply what you mentioned. If one day we could prove a strengthened statement like "if $$$n \times n$$$ matrix multiplication has tensor rank $$$r$$$, then $$$\omega$$$ can be improved to $$$n^\omega \leq o(r)$$$," then it could be used to rule out the possibility $$$T = O(n^\omega)$$$ you described. As far as I know, however, the current speedup theorem only yields bounds of the form $$$n^\omega \leq (1 - o(1))\, r$$$. |
|
+8
Hmm… I did not say $$$T=O(n^{\omega})$$$. That is probably indeed not enough. I said $$$T=n^{\omega}$$$ is not possible. Or am I also wrong here? |
|
On
Nasa →
An efficient, shorter and easier way to find LCA(Lowest common ancestor) in offline without Tarjan's, 9 hours ago
+8
Nice approach! Enforcing $$$tin_v \leq tin_u$$$ yields a lot of useful observations. Another fun fact I found (it may not be useful): nodes in the prefix of the preorder of $$$v$$$ always satisfy this: if a node is not an ancestor of $$$v$$$, then it is not an ancestor of $$$u$$$ either. Perhaps it is useful for finding the closest node in an unweighted tree which is an ancestor of neither $$$v$$$ nor $$$u$$$, where closeness is defined as $$$d(x, v) + d(x, u)$$$. This problem splits into two cases: if one of $$$v$$$ and $$$u$$$ is a leaf, output $$$d(u, v) + 2$$$; otherwise, similarly to your approach, find the largest $$$i$$$ in the preorder such that $$$tout_{ord_i} \lt tout_v$$$. Another case, over the preorder range from the LCA to $$$u$$$, is similar; if we find such a node, the answer always uses it, and if this node lies outside the path $$$v \leadsto u$$$, then we also need to search the subtrees of $$$u$$$ and $$$v$$$, which uses the same min/max preorder-index logic, and take the minimum. |
|
On
Nasa →
An efficient, shorter and easier way to find LCA(Lowest common ancestor) in offline without Tarjan's, 47 hours ago
+7
The average distance between two random nodes in a random tree is asymptotically approximately $$$\sqrt{\frac{n\pi}{2}}-1$$$. Therefore, in random trees there is not much improvement, as $$$\log\left(\sqrt{\frac{n\pi}{2}}-1\right) \approx \frac{1}{2}\log(n)$$$. |
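The $$$\sqrt{\frac{n\pi}{2}}-1$$$ asymptotic is easy to sanity-check numerically: generate uniform labeled trees from random Prüfer sequences and average BFS distances from random roots. A rough Monte Carlo sketch (all helper names are mine; the tolerance is loose because a single tree's mean distance fluctuates at the same $$$\sqrt{n}$$$ scale):

```python
# Monte Carlo sanity check of the asymptotic average distance sqrt(n*pi/2) - 1
# between two random nodes of a uniform random labeled tree.
import heapq
import random
from collections import deque

def random_tree(n, rng):
    """Uniform labeled tree on n nodes via Prüfer-sequence decoding."""
    if n == 2:
        return [(0, 1)]
    prufer = [rng.randrange(n) for _ in range(n - 2)]
    degree = [1] * n
    for x in prufer:
        degree[x] += 1
    leaves = [i for i in range(n) if degree[i] == 1]
    heapq.heapify(leaves)
    edges = []
    for x in prufer:
        leaf = heapq.heappop(leaves)
        edges.append((leaf, x))
        degree[x] -= 1
        if degree[x] == 1:
            heapq.heappush(leaves, x)
    edges.append((heapq.heappop(leaves), heapq.heappop(leaves)))
    return edges

def avg_root_distance(adj, root):
    """Mean BFS distance from root to the other n-1 nodes."""
    n = len(adj)
    dist = [-1] * n
    dist[root] = 0
    q = deque([root])
    while q:
        v = q.popleft()
        for w in adj[v]:
            if dist[w] == -1:
                dist[w] = dist[v] + 1
                q.append(w)
    return sum(dist) / (n - 1)

def estimate_mean_distance(n, trees, roots_per_tree, seed=1):
    """Average distance between a random root and a random node over many trees."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trees):
        adj = [[] for _ in range(n)]
        for u, v in random_tree(n, rng):
            adj[u].append(v)
            adj[v].append(u)
        for _ in range(roots_per_tree):
            total += avg_root_distance(adj, rng.randrange(n))
    return total / (trees * roots_per_tree)
```

By symmetry, the mean distance from a random root to a random other node equals the mean pairwise distance, so the estimate should land near $$$\sqrt{\frac{n\pi}{2}}-1$$$ for moderately large $$$n$$$.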
|
On
Nasa →
An efficient, shorter and easier way to find LCA(Lowest common ancestor) in offline without Tarjan's, 47 hours ago
+7
For the idea in the blog, yes it is true. It worked 2x faster than normal binary jumping approach on https://cses.fi/problemset/task/1688/ but I am more interested in this optimization: https://mirror.codeforces.com/blog/entry/153399?#comment-1362593 . |
|
+7
Because of them, now I'll probably lose my rating again; that's why I ain't joining the damn contests. |
|
+7
14'th type is "AI slop blogs" |
|
+7
The example note in C made me question reality for a moment... |
|
+7
No, I do not :( |
|
+7
The rating distribution of the problems is really bad. For example, with a target rating of 1800, you get problems rated 1200, 1400, 1600, 1800. First: if I'm aiming for rating X, I should be given at least 1 task with a rating >X. Second: the 1200- and 1400-rated problems in that example are just a waste of time — meaning half of the recommended problems are trash. |
|
+7
Wait, isn't it the same as ThemeCP? |
|
+6
Romania:
Team 1, melonee: cadmiumky (Voicu Mihai-Valeriu), Ianis (Belu Ianis), Iancu007 (Sandea Iancu-Ioan), Stroe Ioana
Team 2, Ne Trebe Fanta: MateiKing80 (Neacsu Matei), Ema_Nicole (Gheorghe Ema-Nicole), Marghidan David-Vlad, Viespescu Carina-Maria
Guest 1, melonie: mircea_007 (Rebengiuc Mircea-Maxim), gets007 (Grecu Tudor-Stefan), Dobromir Vlad-Andrei, Dumitrescu Ana-Gabriela
Guest 2, Pomelos: anpaio (Iorgulescu Andrei-Paul), zygo (Mohanu Dominic), Laurra (Moldovan Laura), Giusca Dragos
Can't wait to see you in the great city of Piatra Neamt! |
|
+6
How about https://mirror.codeforces.com/profile/Fanboymessi? He solved E, G, H in 5 minutes? (Even StarSilk and every LGM I checked needed a minimum of 15 minutes.) |
|
+6
It's pretty crazy, isn't it? One of Anthropic's models also found a really old bug in OpenBSD apparently, so it could be that systems are either going to be very secure or very insecure in the near future. But I wonder if any of these bugs have already been discovered by the CIA / NSA (like EternalBlue), or if some of them were put there intentionally. Like, this privesc didn't exist before 2017, why is that? |
|
+6
United States of America Delegation:
Team Leader: Dylan Karpf
Team Name: The Anti-Bug Alliance
|
|
+6
Wonderful contest, fun problems without mind-boggling math tricks and complex data structures. The best Div 3, and I wish for more nice contests like this |
|
+6
There is no god but Allah, glorified and exalted be He |
|
+6
Hi Baitian! Thank you for your insight. I wanted to make the blog accessible to the general CP audience, so I had to make some tradeoffs between understandability and rigor. In particular, I am being a bit hand-wavy with what the "number of arithmetic operations needed to multiply matrices" means. I also decided against including tensor ranks so as not to scare people off :)

Indeed, if we only use arithmetic operations, we cannot get $$$o(n^{\omega})$$$ in full generality. If I understand correctly, this paper implies that even $$$T = N^c$$$ is not possible.

What I meant by Q2 is slightly different, though. At that point we had not yet given a precise definition of matrix multiplication, so we are treating it in an elementary sense, where we either multiply integer matrices or matrices modulo some prime as we normally do in CP. So Q2 is supposed to be just a simplified version of Q4. If I understand correctly, the reference you provided says that the answer to this question is yes if $$$\omega \gt 2$$$. (But if I understand correctly, it works only for constant-sized fields? In particular, we don't know that for Boolean matrix multiplication, right?)

Thank you for this link. I did not know about that! |
|
+5
Fast editorial |
|
+5
Thanks for the contest; I just participated to speed-run the first few problems. Spoiler
Btw, C is a nice problem. I looked at the other problems; they look interesting too. Congrats on this successful contest. |
|
+5
I'm genuinely impressed with problem G, how can one come up with that idea? |
|
+5
glad to know that there's solution without using Fenwick Tree/Segment Tree in problem F. |
|
+5
Problem F was a modification/extension of problem E. This is the first time I have seen this sort of "multi part" problem in a codeforces round despite it being quite common in national/international Olympiads. Personally, I find these sorts of problems quite enjoyable. Kudos to the problem authors. |
|
+5
Instant tutorial, the contest was great, and the author is Egyptian!!! I can't ask for a better thing, thank you, yse |
|
+5
Thanks for bringing to my attention the Egyptian Koshari dish, I will try! |
|
+5
I realised the solution for E in the last 10 minutes but ran out of time :( Solved it 15 minutes later though :P This was a fun first contest |
|
+5
Very nice problems! |
|
+5
The suffix minimum idea in E is actually neat! |
|
+5
Wow, this was my first contest. Solved problems A, B and C smoothly, struggled in D. But a great experience overall!!! |
|
+5
guys i made it :DD |
|
+5
Sure. SIMD operations and cache efficiency = fast, right? ʕ•ᴥ•ʔ |
|
On
Nasa →
An efficient, shorter and easier way to find LCA(Lowest common ancestor) in offline without Tarjan's, 6 hours ago
+5
pi??? |
|
+4
The blog was originally posted 11 days ago; however, it was not public. Now they have opened it. |
|
+4
I really liked the problemset |
|
+4
The people who use AI, their families all died already btw. An excellent contest. |
|
+4
Team Japan:
hirayuu_cf: 3rd time at IOI, 1 attempt left.
fact493: 1st time at IOI, 0 attempts left.
keisuke6: 1st time at IOI, 0 attempts left.
reil_st: 1st time at IOI, 0 attempts left. |
|
|
+3
H is a really cool problem, but how to solve G? Really good contest, thanks to authors |
|
+3
Man... the stack saved me at the last minute in E |
|
+3
delete cmt |
|
+3
From what I understand, this change has been made by Codeforces in their API, not by Cloudflare. Clist, Carrot, GetRating: none of them are working. |
|
+3
. |
|
+3
You are attributing your failure to external factors rather than your own lack of skill or knowledge. Sure, maybe you would've lost less rating if cheaters didn't exist, but that doesn't change how good you are; you are the one that wasn't able to solve problem C. Go practice instead of going on your silly villain arc. greateric gave great advice too. |
|
+3
I have done D using brute force. |
|
+3
Kevin114514 will win IOI2026 with AK <3 |
|
+3
$$$0.1$$$ is arbitrary. But it should be a small enough constant so that the questions stay interesting. For example, we know that $$$\omega \lt 2.4$$$. Thus, taking the constant to be $$$0.4$$$ would make both of these questions less interesting. In terms of reading material, I would recommend this expository article. You could also try some chapters from this book. |
|
+3
yoo you still alive dude? |
|
+3
I think it’s somehow related to the API issues, because it’s also been down for about 9 days. Even the Carrot extension isn't working lol |
|
+3
got it mr. Do "// Up to 60 extra reliable primes/coprime candidates guaranteeing DP fluidity" Y. Homwork |
|
+2
codeforces |
|
+2
They kept this editorial private until the contest ended. |
|
+2
Who used the complex Manacher algorithm for D, or is it just dumb me? :D |
|
+2
i don't really think that my delta is enough for my peak |
|
+2
i myself feel awful man ... i feel like reporting myself to the community |
|
+2
By the way, G is solvable without having to use PBDS / segtree structures, which makes for an easier implementation. If we twist the perspective a little bit from the provided solution, then pref[l-1] < pref[r] when l is odd, and pref[r] < pref[l-1] when l is even. Notice how in both cases the smaller side (l-1 and r respectively) is even, and the larger side is odd. And in each case, r appears later than l-1 and vice versa. Since l-1 and r have different parity, they cannot be equal, and hence each case goes either in the 'smaller' or the 'larger' category, which these two criteria fit exactly. This means that we can just count, for each pref[#even], the number of EVERY pref[#odd] (regardless of whether the odd index is larger or smaller than the even index) that is larger than it. Since this requires no element addition or removal, it can be done with sorting + binary search. |
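The sorting + binary search step can be sketched as follows; `count_smaller_pairs` and the variable names are illustrative, not from the editorial:

```python
# Count pairs (e, o) with e taken from the prefix values at even positions,
# o from the odd positions, and e < o: sort the odd values once, then
# binary-search the rank of each even value.
from bisect import bisect_right

def count_smaller_pairs(evens, odds):
    odds_sorted = sorted(odds)
    total = 0
    for e in evens:
        # number of odd-position values strictly greater than e
        total += len(odds_sorted) - bisect_right(odds_sorted, e)
    return total
```

Overall $$$O(n \log n)$$$ from the sort, with no element additions or removals, matching the comment's claim that no PBDS / segment tree is needed.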
|
+2
It is. Proof
|
|
+2
I created a video editorial for G. Drowning. |
|
+1
kskbl? |
|
On
Nasa →
An efficient, shorter and easier way to find LCA(Lowest common ancestor) in offline without Tarjan's, 47 hours ago
+1
I am not sure I have the right idea, but it seems like you are doing HLD? |
|
+1
I'm a super fan of Kuro_neko! |
|
+1
WTF damn how he/she was able to do it? |
|
+1
You should ask OpenAI for that. |
|
+1
they can't gain anything with this approach. |
|
+1
i hope we don't get reported then bruv!! |
|
+1
Thank God Codeforces, what would we do without you. |
|
+1
You seem to have an issue with greedy algorithms. If you take more than 15 minutes on those, your practice is probably not good (you solved them already!):
I didn't only pick greedy, but problems you failed hard on. It would be interesting if you could reply to this comment with the time you took to solve them again. Should I give up on 2500s and solve only 1200s until I stop overcomplicating? You know the answer already. But in case you are not convinced, https://mirror.codeforces.com/blog/entry/116371: However, going through too hard problems is just as bad as going through too easy problems. It is not worth spending 4h understanding a 3000 rated problem when you could learn much more concepts from 4 2300 rated problems in the same amount of time (if that's good for your skill level). |
|
+1
See the changes in the statement. Basically, only 6 possible subsequences were given in the example while 12 were possible. |
|
+1
Sorry, I misclicked and turned an upvote into a downvote. |
|
+1
I can't code, can't find a job, can't have a GF, what can I do, god? Please save me. Started CP at the age of 24, unemployed, aiming to become CMaster at CF. Fingers crossed :X) |
|
+1
Don't worry; as you can see, it's not a particularly difficult problem, so it's normal for ideas or coding styles to overlap (at least many people do it that way). |
|
+1
It was a great contest! |
|
+1
E and F are amazing! |
|
+1
This was my first contest, let's go! Solved A and B, and almost solved C; didn't know how to sort numbers that aren't divisible directly by 6, gotta work on my math |
|
+1
G's easier version is on LeetCode: 2426. Number of Pairs Satisfying Inequality |