|
0
I had my WA2 on
|
0
Problem E
I've only seen hexagon coordinates used in some real-life navigation cases (see H3), and they seemed extremely non-useful for contests, but this problem is a really good application. And indeed it's not obvious at first glance why it should work like that, but then the math checks out. Best problem of 2026 I've solved so far! Not even worried about losing the rating
|
+7
There is no button to rate the problem, but B is a really good one! I think this idea could have been the foundation of a much more difficult problem if the setting had been a bit different. Nevertheless, I've learned a new idea (an exchange argument to prove the function is optimal at the border point), so I'm happy!
|
+14
I liked my experience in IOI-style contests much more than in ICPC-style ones, as they allow you to approach some problems incrementally and it's rarer that you feel stuck. In other words, detailed scoring helps you not under-perform when your strategy is good but you are unlucky with a problem. However, points management is an important part of the strategy, and having a public leaderboard removes this part of the challenge. For contests with public standings, the Codeforces C1/C2/C3 problem split looks like a better alternative to me. Another thing: it's not clear how to make the change without making the whole contest points-based. Basically, if you have 1-2 problems with partial scores and the rest is 0/100, the knapsack of possible scores has little variety, so we'll likely see the same totals across all the teams, so it's not even useful as a tie-breaker. What could work is a heuristics (!) problem as a tie-breaker, but AFAIK getting a non-zero score on heuristics is usually time-consuming, so it would hurt the main contest part.
|
+3
Proud to announce that randomly picking 300/600 leaves as $$$r$$$ in F and then solving for each one in $$$O(n)$$$ doesn't pass the tests (WA/TL). Which is a good thing for the contest authors but a bad thing for me :).
|
+1
RMQ is replaceable by DSU here, so it's actually $$$O(n \alpha)$$$.
|
0
This would be a strong case for C1/C2 then, or D1/D2. Like, after B I don't feel the second part of C is too complex.
|
+3
I might be completely out of touch, but that did not look like a typical div1+div2. I had a feeling of solving a regular div1, tbh (which might be my personal skill issue). A seemed not as simple as As usually are; B is just... what? (I couldn't even figure out what a good approach could be, there is a counterexample for everything.) D-E seemed interesting and not that much more difficult than B. What I did not figure out is why C has quadratic limits. After you've precalced the answer for each single cross, placing 2 crosses is as easy as taking the lca-ish point between the two crosses, which is basically a point in the overlap, and it can be processed bottom to top by merging segments and taking the max, so an $$$O(n \alpha)$$$ solution is definitely possible, and maybe even a linear one.
|
0
You still have to sort after grouping. You can do some polar magic, but I was not sure, and it seemed like 1e9 is not that bad for atan2 precision.
|
+1
Did anyone have problems in E when sorting by atan2? I switched to atan2l and it seemed to start working, but I am not sure, as I also did some +-1 with prefix sums. But the plain atan2 submission failed only 2 tests...
|
0
Nice contest! How to solve G? Seems like just some segment tree with good merging and lazy updates, but the order seems tricky: if we have a lazy "flip + set to 0" and a lazy "add x", it's not clear what to do first. Is there a common pattern for this, like storing the timestamp of the lazy operation?
|
+10
As a developer of Xeppelin contest watcher, glad to see the tool mentioned — even though the context is unexpected. Several thoughts on the topic:
|
|
0
a = [1, 2, 2, 2, 3], b = [2, 1, 3, 5, 4]. You can take neither the min nor the max element with a=2 if you want it to fit nicely into a 1-2-3 sequence.
|
+1
If you reorder pairs by A increasing, then the LIS on the B's gives you some valid non-intersecting subset, and it is easy to show that this is the solution if all A's are different. Now, if A's collide, you can break the tie by B decreasing. Then you can't take two elements with the same A, so the LIS is still the answer, and the general LIS algorithm is all you need.
|
+19
I think the single vote is attached to all the problems — after I clicked "good problem" on D, my "awesome problem" on E became a "good problem". It was a great contest! Happy New Year everyone :)
|
0
If someone is interested, I figured out this does not break the solution because: 1) if we had to decrease 5->3, we create a new node anyway, so the last one is not used; 2) if we don't decrease the value, the 5->4 decrease makes the adjacent components non-final anyway.
|
0
Is there a way to hack your own solutions after the contest? I think the editorial solution for E is wrong, even though it passes the tests, as it doesn't do proper DSU rollbacks: on a grid like this, if we reduce the middle 5 by 1 and then reduce all the other 5s by 2, they will still have min neighbor = 1, even though the picture is now
|
+20
I wanted to comment that a typical ICPC contest should have 10-15 problems, so the judges actually had to remove something from the set (and this is usually the case). While trying to verify that, I found https://icpc.global/regionals/rules which states:
Which means that technically a Codeforces div2 round with A-F problems might be held as a regional? That's funny.
|
0
This is smart, will definitely remember this! thanks |
|
+14
I know no one really cares at this point, but I figured out the difference between AC and not-AC in E:
because when you do vertices you propagate both the best and the second-best distance and break the BFS order for some updates that wait in the queue. And you can probably overwrite something while doing so. Because of that, doing the ==inf check in BFS seems to not work at all, while min= with multiple repetitions seems to work well enough to be AC with some random shenanigans.
|
+1
Ok, this is kind of the same as what I had in mind when coding my first submission https://atcoder.jp/contests/abc429/submissions/70450122, but instead of checking == with INF, I was just applying min= to the array of distances. Still don't understand what the difference is. Apparently I don't know BFS...
|
0
OK, F is really stupid... If you cross a vertical line 3 times, you could've crossed it once and that is strictly better. After that it's clear that you kind of want prefix and suffix achievability DPs and to merge them together on updates — that's the classic dp-in-segtree trick.
|
0
What is E? I have no idea how to find the $$$k$$$ shortest distances to some set of vertices. What I managed to AC:
I have no idea:
And F also seemed difficult, btw. The only good observation is that you cross every vertical line either 1 or 3 times.
|
+8
Just upsolved div1B, as my solution failed on the added systest. And now I am confused. I did this (343430329) following the editorial, where I use ~516MB (twice as much won't fit, right?). I have the input and $$$n \times n \times m$$$ memory (wlog $$$n \le m$$$). When my submission failed after the retest (343345702) I thought it was fair enough, as I hadn't really proved my solution's memory consumption (I do a rectangle search and then delayed range-minimum updates with a priority-queue trick for a better constant factor). But now I look at it and I see that instead of storing an $$$n\times n \times m$$$ dp, I store the push-min updates on ranges that I process afterwards. The number of updates is the sum of the lengths of the long-side borders of the rectangles from $$$\mathbb{S}$$$. So it is like $$$2 \times n \times n \times m$$$. I have up to a 4x factor, as I am not careful when borders are shared, but that is rather easy to handle (if you got an MLE to fix, lol). So it seems that even with patches my solution doesn't fit. But it is also clear I use only 2 times more memory (well, there is std::vector overhead, but I can manage). And the solution seems to me, well, careful enough, not abusing sets or segment trees. So I don't understand two things, regardless of the missing maxtest:
upd: actually, when I say 1040MB, I also assume I store 4-byte integers — but 3-byte integers can close this gap as well.
|
0
First spoiler: I agree, did the same. Second: I can believe that the cost is small, but 3 seems like a magic number. Will wait for the editorial.
|
+8
Cool C! $$$a_0$$$ brings some extra cases that seem a bit artificial, but it probably makes the problem less OEIS-able. I don't understand the reason for having 25e4 in B instead of 1e5 because, well, it is $$$O(n\cdot m \cdot \min(n, m))$$$ anyway, right? And roughly the same memory; it's clear the constant factor is bad. Sets weren't working for me on the go, so I had to do some stupid rewrites for 20 mins. What is D1? The problem looks like a classic "dp with super-dooper search" (keep a list of records, recalc the dp over the first K, the last K and random K elements), but that doesn't seem to fit (I tried!). Some submissions seem to do some searches (I tried ternary, doesn't work).
|
0
Same for me. No idea why it makes such a big difference in this case. Spent quite a while on it.
|
+21
I had to push hard enough to make my MITM pass; changing map to unordered_map helped and I am not sure why :)
|
+22
Hey! How does the contribution work there? For example, you did the RMQ section; if someone wants to do the RSQ section -- what should they do? I assume, write some articles on prefix sums, BIT etc., and also find some problems on CF for practice. Should there be a video? Whom should one contact to add it to the EDU section? And how do we ensure no one else is contributing to the same non-existent section at the same time? A genuine question, as the post has a call to action but the action itself is a bit unclear.
|
0
I think E had a bad quality: it doesn't implicitly tell you that you have to precompute binomials. I told myself that multitest times 5k factorizations should be enough and got 50/51 AC, 1/51 TLE. My idea was to say that the answer is $$$\frac{sum!}{c_1! \cdot c_2! \cdot \ldots}$$$, do two pointers, and keep track of how many factorials contain each number from 1 to $$$sum$$$, so I know how much it contributes to the answer. The limits are tight enough that this doesn't pass, and not tight enough to convince me the approach shouldn't work and I should do something else instead. F's intended solution is nice. I just AC'ed with a hashset to keep track of which transitions I had already used.
|
+26
Remove all users from Codeforces but tourist. Give tourist k invite links. When a person gets invited, they also get k invite links. Wait until Codeforces has many users again. When someone cheats, you ban all of their invited subtree as well as their direct parent. If you want to make it more realistic, instead of removing the userbase you can just mark everyone "verified/banned/unofficial".
|
+31
Div1 E1 is the kind of problem that makes people submit the worst ideas they've got. My local stress test started passing at 3:14 and that was not enough to fix minor stuff and submit :cry:. Not sure what the purpose and the intended approach of the problem were, need the editorial. Div1 B and C are sweet, F is interesting (like, I don't even understand how one can solve bamboo in 2 rounds). I definitely had some trouble with the way some problems were formulated — maybe this is even good against LLMs understanding the problems, but unfortunately it also works against me understanding the problems.
|
+48
As a tester, I told all my friends to participate, as the round is really high quality and has lots of nice problems!
|
+8
As a contestant, I wouldn't say that the contest was significantly different from the ones held in recent years. What was different, though, is the distribution of team skills. There are still a few teams who can solve 10+ problems from the set in 5 hours (congrats SpbSU!), but IMO there were exceptionally many teams who could solve 10+ in, say, 6 hours. Given less time, some of them scored 9 and some of them scored 8. And it doesn't seem possible to fairly split a group of 20-30 teams with really similar high skill. Compared to last year's set, I would say 8 problems in 2025 (40th place) is at least 7 problems in 2024 (23rd place), and 9 problems in 2025 (17th place) is at least 8 problems in 2024 (11th place). And this is an optimistic estimate from me, as I personally think the 2024 WF had a slightly easier problemset. I would assume the 50th WF will be more difficult to compensate for that. Personally, we had a rather bad, but not horrible, performance and ended up 77th, which is twice as bad as our worst virtual result of 40th place.
|
0
Actually I found my bug and now it just works, so I still have to submit, but I'm more or less confident in this one. Damn, was so close: forgot to copy-paste $$$\times \binom{r-l}{i-l}$$$ once. It would have been a get-to-GM round for me.
|
0
I failed on one of the samples, but that sounded good to me:
|
|
+8
From the score distribution I thought to myself that three subproblems is a bad sign. Surprisingly, it was quite reasonable: you definitely have some advancements to make, and it's good to be rewarded gradually here. Good round. Was really close to solving D, but couldn't figure out one of the samples on paper in the last several minutes — wasn't counting one of the cases.
|
0
That's for sure, considering you solved 15 problems across 2 sets of 6 problems (which ig will overlap, giving a total of ~10 unique problems in a set). So even in the best case it's only $$$\frac{2}{3}$$$ similarity to your experience :)
|
+16
As a tester, I think you might want to check out this round! |
|
On
ahsoltan →
Order Capital Round 1 (Codeforces Round 1038, Div. 1 + Div. 2) Editorial, 9 months ago
+40
I solved D at 0:21 (and even got an intermediate result of global top-2!). It is hilarious that I spent literally less than a minute thinking about the bounds and told myself: we can solve it in $$$10\,000$$$ steps because after each step we wait no more than $$$deg(v)$$$, and we know $$$\sum deg(v) = m$$$ and $$$n\sim m \sim 5000$$$, right? I feel I got really lucky. In retrospect it seems really not obvious why 10k is enough. Definitely a kind of "I believe this should work" problem; idk if this is good for a Div2D.
|
On
Proof_by_QED →
EPIC Institute of Technology Round Summer 2025 (Codeforces Round 1036, Div. 1 + Div. 2) Editorial, 10 months ago
0
Actually it can be done deterministically with the (messy) "median of $$$\frac{n}{5}$$$ groups of 5 elements" approach, see this.
|
On
Proof_by_QED →
EPIC Institute of Technology Round Summer 2025 (Codeforces Round 1036, Div. 1 + Div. 2) Editorial, 10 months ago
0
The constraint is not even needed; you can get the k smallest elements in $$$O(n)$$$, see nth_element in C++.
|
On
Proof_by_QED →
EPIC Institute of Technology Round Summer 2025 (Codeforces Round 1036, Div. 1 + Div. 2), 10 months ago
+22
There is some, as the purpose (I guess) is to show that the output is guaranteed to be small without giving away the solution at the same time. Still, 17 seemed too specific and might mislead someone (myself included). 20 or 50 would serve the same purpose.
|
+30
Great idea! And a good topic to go with in this format. And what about an advanced stream on FFTs? Are you planning one? I would learn a ton from it, I guess — I never went further than the intermediate level. I believe it's the case for many people: they learned how to multiply polynomials, and polynomials just don't usually come up. So they end up using FFT to take dot products with different shifts, and that's it.
|
0
I still don't know the model solution, but basically after I figured out $$$f(x, y) \le 2 \min(x, y)$$$ I started filtering out all small elements and brute-forcing only on elements in the window $$$[\frac{ans}{2}, \infty)$$$. If there are too many, you take only the top-$$$K$$$ (probably top-K max and top-K min in the window should work). With a high enough K you might pass, so you need a big $$$n$$$ to punish big values of K. BTW, after that, the full solution is just to do a full recalc each time $$$a_i$$$ is bigger than twice the current ans. But that's not easy to notice, and one might try to just bruteforce more. Hence the constraints. 321166762
|
+2
Just to add a bit, even though some might just instantly remember this trick, I didn't, and here was my thought process:
When I said it's a classical one, I meant that all of this process felt familiar and clicked really quickly for me. I believe some similar problems have already been found in the comments.
|
+17
Completed the full set today. I guess that's the first time for me within the contest time. Feels good :) The problemset felt a bit more ICPC-style than usual, which is a good thing both for my performance and for my impression of the problems. E seems like a kind of classic problem (even though the topic itself is not that popular, so I liked it anyway!). Thanks for the contest, I needed this morale boost after several poor performances in recent weeks.
|
0
I found a cool greedy for D. It can be proven that you usually pay 1 coin for each $$$a_i \gt 0$$$, but you can take a segment $$$[l, r]$$$ matching the pattern $$$2(44)^*2$$$ and cover it for $$$r-l$$$. We greedily try to extend this pattern and that's it. Code: 316951310.
|
+15
dude... |
|
+5
F hint 2:
Why is that? Let's assume that for the highest bit I found a segment $$$[l, r]$$$ covering $$$i$$$. For the next bit there may exist a segment $$$[l',r']$$$ covering $$$i$$$ that is not "compatible" with $$$[l, r]$$$. This can (?) happen if there is a 1 between $$$i$$$ and both $$$r$$$'s and the number of zeroes in the suffix has a different parity.
|
+28
I didn't look into the status page to check submissions: I saw disbalanced stats on F in the overall stats and opened it next. Based on the scoring I thought they should be relatively similar. Even keeping in mind that someone was doing gpt-submissions, it still made sense for me to read it first: if the task is solvable by an LLM, maybe the statement is clearer than in E, and this is important for problems with close scoring. I haven't solved F, but it was a close call. So I believe this statement. However, I agree that the situation is a ridiculous one.
|
0
I spent a bit of time figuring out how ABC-strings are related to trees, so you don't have to. Consider the string a traversal of a binary tree: A is a left child, B is a right child, C is a return to the parent. So AC, BC or AACBCC are closed paths that start from the root, go through different vertices and return to the root. That's also why A + B = C in terms of the number of occurrences.
|
+8
On Ubuntu that's just I agree that installing it during the contest sucks. But tbh there is no real rule that you should use Python/C++, and using them just because everyone uses them will solidify them without any particular reason. My complaint about having the VM in Kotlin is that it's just an extra layer of abstraction without any particular addition to the problem. I am not sure it's really possible to distribute ASM code to everyone so everyone can run it, but my problem is that I need to study this specific machine to verify whether my ASM intuition works or not (and, well, I spent a while decomposing some parts of it). Also, the single code file is not documented, while for any real ASM you can find information online. And LLMs are probably better at converting between ASM dialects than at a custom emulator. There should be a better sweet spot, but I haven't done my research on that; maybe I'm wrong.
|
0
Just solved D in a cool way not mentioned in the editorial; let me explain it here, I find it easier in some sense. Let's do binary search over the answer and then build $$$b_i := a_i \ge x$$$. We know how many cuts we have to do (something like $$$B := \lfloor\frac{n-1}{k}\rfloor$$$), and we know how many elements we will have at the end ($$$N := \frac{n - B \cdot k}{2}$$$). Out of N, we should have more than half equal to one, so it's $$$A := \frac{N}{2} + 1$$$ ones that we keep until the end. Now, instead of noticing the fact about indices mod k for 1's (I hadn't while solving), let's build the initial array by reversing the operations. In fact we can do it left to right, saying that we either add a single element that lives until the end or we take a segment of $$$k$$$. We can notice that it doesn't matter in which order we destroy such segments, so this decomposition always works. So, we have to construct the array, having A ones and B segments. The supported operations are "append a segment" and "append the current element and process it if it's 1". I don't know how to solve this directly, but the trick is to do a $$$dp_{a,b}$$$ that denotes the minimal current index such that we have already taken $$$a$$$ 1's and $$$b$$$ segments. It has $$$O(1)$$$ transitions if you do $$$O(n)$$$ precalc. The issue is the number of states, but one can notice that we need $$$dp_{A, B}$$$ with $$$A = O(k)$$$ and $$$B = O(\frac{n}{k})$$$, so the total number of states is linear. I think this is a cool approach, I liked it very much. 313844486
|
+16
I'm crying out loud |
|
+16
OMG, it feels like bravery was rewarded in this problem. We got too scared by $$$n = 2000$$$ and completely ignored our binsearch + perfect matching idea. And it even runs faster than my implementation of what I thought to be the same as the model solution... I can see why it runs fast. If everything is the same, you basically get $$$n$$$ edges. If everything is different, you get a perfect matching on the first try. And in between — well, yeah, it halves, and also you should have enough vacant numbers when you take $$$\frac{n}{2}$$$ different $$$a_i$$$'s and generate $$$a_i \times j$$$, as each two generators can't overlap in more than half of their occurrences. What can I say, well-deserved qualification to the WF, congrats!
|
0
Is it true, btw, that the second solution is exactly the same as the maxflow from the editorial, to the extent that one node is split into several? Instead of left to right it builds the graph right to left, but it seems like no extra steps are involved (maybe an additional $$$+E$$$). TBH I still don't get the $$$O(n\log n)$$$ bound on the number of edges, so to me both solutions seem like a hyper-optimized cubic one :)
|
+18
I have two solutions for K: the one we didn't believe in during the onsite (passing in 1930ms on CF) and the one that I think more or less matches the intended. AC 1900ms: Kuhn's algorithm on the graph where the left part has the numbers we are matching to ($$$a[i] \cdot j$$$) and the right part is $$$i$$$. We build $$$O(n^2)$$$ edges in $$$O(n^2\log n)$$$ (or with hashtable lookups, but it seems sorting would be faster). Then Kuhn with 2 default optimizations: timer-based. AC 1300ms: on a failed path push, remove all future and existing edges to the right-part vertices that we failed to propagate the path from. Each time we fail we looked at X edges and we remove more than X edges, and we succeed $$$n$$$ times, so it's more or less clear why it's $$$O(nE)$$$; it's also fast because most successful steps require much less than $$$E$$$ for the dfs. Here $$$E$$$ should be something close to $$$O(n\log n)$$$, as this is pretty close to the jury's solution (though done a bit backwards). 308914819 Is it supposed to be this way? The gap between the two solutions in terms of how hard they are to come up with is pretty big. The first one really just squeezes into the TL. The second one is not that fast itself. Was it the same experience for the onsite participants with 5+ attempts? Would appreciate some comments in general as well. I am thinking a lot now about this "remove edges for not-pushed vertices" optimization and how useful it could be in other problems.
|
0
Auto comment: topic has been updated by KiKoS (previous revision, new revision, compare). |
|
0
It probably doesn't; I do indeed use the fact that we have groups of equal values. It is just the next (not obvious to me) step — that the array transforms without swapping elements.
|
0
Oh, that's smart: we basically never swap elements. Then there exists a clean solution, you are right! Still not sure it is easily identifiable on the spot; I hadn't thought about it enough during the contest because pbds was not that difficult to include.
|
0
ABC felt a bit too difficult, even though I enjoyed them. E is alright — maybe a bit more enjoyable if we solved for the first player, but then it'd be too easy. The E-F gap is hard, but maybe it's my skill issue :). Overall it felt kind of balanced. What I can't understand is the intended solution for D: whatever you do, you need to keep track of two sorted sets and be able to recover an element by index and vice versa. This is either a lot of pain to implement, or a no-brainer with pbds. For div1 that's alright either way (I used pbds), but I feel sorry for less experienced participants: if you don't know pbds, you'll likely have a really hard time debugging the implementation yourself, even though the greedy idea is a good one for this position.
|
On
atcoder_official →
Daiwa Securities Co. Ltd. Programming Contest 2024(AtCoder Beginner Contest 383) Announcement, 17 months ago
+3
You can sort the edges by weight. As the cost is defined as the max weight of the edges on the path, you can add them one by one. When A and B become connected through a new edge, it means $$$dist(A, B) = w_{edge}$$$. Then it's a greedy idea: if you can pair A, B with the minimal cost in the set, you should do it. Otherwise, the cost would be worse. Consider freshly connected A, B and still-unconnected C, D. If you don't connect (A, B) now because you consider (A, C) + (B, D) more optimal, then you'll wait until C is connected to the (A, B) component and D is connected to the (A, B) component as well. At that moment (C, D) are in the same component too (but maybe they were connected even earlier). Then these two inequalities hold: $$$cost(A, B) \le cost(A, C)$$$ and $$$cost(C, D) \le cost(B, D)$$$. So $$$cost(A, B) + cost(C, D) \le cost(A, C) + cost(B, D)$$$, and it is better to pair them greedily. Not sure this is a strict proof, but it seems strict enough.
|
+10
Orienteering is extremely cool. You can have fun at both beginner level and if you decide to improve the skills and run advanced races. |
|
0
https://mirror.codeforces.com/problemset/submission/1361/294653357 https://mirror.codeforces.com/problemset/submission/690/294596558 etc., still doesn't work :(. All CEs that I submit after the redirect.
|
0
I tried too, didn't work for me. Maybe I'm doing it wrong, just submitting some CE to the opened tab |
|
0
It is indeed clear that you do the first $$$cx$$$ normally and then you should only get the ones that are divisible by $$$x$$$, and there will be somewhere around $$$\frac{m}{x}$$$ of them. However there can be $$$y > m$$$ that converts to $$$y \oplus x \le m$$$, and you also need to handle this case. But then it turns out your $$$[0;cx]$$$ can intersect with your $$$[m-x,m]$$$. So you start if-ing the cases when they overlap, and then you just add even more ifs found by stress tests. Not saying it is a bad problem overall, but it is definitely overkill for a C2. There probably is an elegant way to solve it without that many corner cases, but I don't believe this is a problem that a not-that-experienced participant can always finish even after getting the main ideas. Also I think 1e5 is a bad constant, as $$$x \le 10^6$$$. $$$x$$$ should be enough, and less than $$$x$$$ is not enough -- imagine $$$m = 10000{x}_2$$$ (with $$$x = \ldots101_2$$$); then you can take $$$m - x + 2$$$ and it will give you a xor bigger than $$$m$$$. Probably the easiest way was just to solve $$$m \le 4x$$$ the naive way, and otherwise do the first $$$2x$$$, the last $$$x$$$, and then formulas.
|
+2
What was that with C2? I have no idea what I did for the solution; I just stress-tested and if-ed the found cases until it passed. Gcd round indeed.
|
+24
Liked C a lot:
Maybe a bit harder than a regular D2C, but definitely solvable (and via many approaches, which is not always the case). It's generally hard to create a good problem for position C, and today it was just awesome.
|
+7
Same. If it provides any additional information: I tried submitting from public wifi and then from a mobile hotspot, both failed. upd: Cloudflare Ray ID: 8b7b5c8d3e756fea
|
+17
Enjoyed the round a lot, especially D and E are great. Thanks to coordinators, authors and testers! |
|
+51
|
|
0
Hi! I'm sorry you didn't like the editorial. Still, the solution (in my opinion) has two main ideas:
|