Comments

I had my WA2 on h = ceil(sqrt(3) * r + 5), which gives good enough precision when r is big but breaks when r is small. I just removed the +5 and it worked /shrug/

Problem E
On simplelife's Codeforces Round 1089 (Div. 2) Editorial, 4 weeks ago
+7

There is no button to rate the problem, but B is a really good one! I think this idea could have been the foundation of a much more difficult problem if the setting were a bit different. Nevertheless, I learned a new idea (an exchange argument to prove the function is optimal at the boundary point), so I'm happy!

On Qingyu's Partial Scores in ICPC Contests, 5 weeks ago
+14

I liked my experience in IOI-style contests much more than ICPC-style ones, as they let you approach some problems incrementally, and it's rarer that you feel stuck. In other words, detailed scoring helps you not under-perform when your strategy is good but you are unlucky with a problem.

However, points management is an important part of the strategy, and having a public leaderboard removes this part of the challenge. For contests with public standings, the Codeforces C1/C2/C3 problem split looks like a better alternative to me.

Another thing: it's not clear how to make the change without making the whole contest points-based. Basically, if you have 1-2 problems with scores and the rest is 0/100, the knapsack of possible scores has little variety, so we'll likely see the same points across all the teams; it's not even useful as a tie-breaker.

What could work, though, is a heuristics (!) problem as a tie-breaker. But AFAIK getting a non-zero score on heuristics is usually time-consuming, so it would hurt the main part of the contest.

On OtterZ's Codeforces Round 1087 (Div. 2) Announcement, 5 weeks ago
+3

Proud to announce that sampling 300/600 random leaves as $$$r$$$ in F and then solving for each one in $$$O(n)$$$ doesn't pass the tests (WA/TL).

Which is a good thing for contest authors but bad thing for me :).

On awang11's Codeforces Round 1085 (Div. 1 + Div. 2) Editorial, 7 weeks ago
+1

RMQ is replaceable by DSU here, so in $$$O(n \alpha)$$$ actually.

On awang11's Codeforces Round 1085 (Div. 1 + Div. 2), 7 weeks ago
0

This would be a strong case for C1/C2 then, or D1/D2. Like, after B, I don't feel the second part of C is too complex.

On awang11's Codeforces Round 1085 (Div. 1 + Div. 2), 7 weeks ago
+3

I might be completely out of touch, but that did not look like a typical Div. 1 + Div. 2. It felt like solving a regular Div. 1, tbh (which might be my personal skill issue). A seemed not as simple as A's usually are; B is just... what? (I couldn't even figure out what a good approach might be; there is a counterexample to everything.) D and E seemed interesting and not that much more difficult than B.

What I did not figure out is why C has quadratic limits. After you've precalced the answer for each single cross, placing 2 crosses is as easy as taking the LCA-ish point between the two crosses (basically a point in their overlap), which can be processed bottom to top by merging segments and taking the max, so an $$$O(n \alpha)$$$ solution is definitely possible, and maybe even a linear one.

On atcoder_official's AtCoder Beginner Contest 442 Announcement, 3 months ago
0

You still have to sort after grouping. You can do some polar magic, but I was not sure, and it seemed like 1e9 is not that bad for atan2 precision.

On atcoder_official's AtCoder Beginner Contest 442 Announcement, 3 months ago
+1

Did anyone have problems in E when sorting by atan2?

I switched to atan2l and it seemed to start working, but I am not sure, as I also did some ±1 tweaks with prefix sums. Still, the plain atan2 submission failed only 2 tests...
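For what it's worth, the "polar magic" that avoids atan2 entirely is to compare angles by half-plane first and then by an exact integer cross product. A sketch of my own (not anyone's submission; `sort_by_angle` is a made-up name), which is precision-free even for 1e9 coordinates:

```python
from functools import cmp_to_key

def half(p):
    # 0 = upper half-plane including the positive x-axis, 1 = the rest
    x, y = p
    return 0 if (y > 0 or (y == 0 and x > 0)) else 1

def angle_cmp(p, q):
    # assumes p, q are nonzero integer vectors
    hp, hq = half(p), half(q)
    if hp != hq:
        return hp - hq
    cross = p[0] * q[1] - p[1] * q[0]  # exact: Python ints never overflow
    return -1 if cross > 0 else (1 if cross < 0 else 0)

def sort_by_angle(points):
    # counter-clockwise order starting from the positive x-axis
    return sorted(points, key=cmp_to_key(angle_cmp))
```

Equal-angle points compare as equal, so the sort is stable within a ray; that is usually what prefix-sum sweeps over angles need.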

On atcoder_official's AtCoder Beginner Contest 441 Announcement, 3 months ago
0

Nice contest! How to solve G? It seems like just a segment tree with good merging and lazy updates, but the ordering seems tricky: if we have a lazy "flip + set to 0" and a lazy "add x", it's not clear which to apply first. Is there a common pattern for this, like storing the timestamp of the lazy operation?
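One common pattern (a sketch of my own, covering only the assign/add pair; a flip tag typically composes independently of the value tags) is to keep the pending tag in a normal form like "optional assignment, then addition". Composing a new operation onto the pending tag is then unambiguous and no timestamps are needed:

```python
class Tag:
    # Pending lazy tag in normal form: first (optionally) assign `set_val`,
    # then add `add_val`.
    def __init__(self, set_val=None, add_val=0):
        self.set_val = set_val
        self.add_val = add_val

    def compose(self, new):
        # `new` is applied AFTER self; the result stays in normal form.
        if new.set_val is not None:
            # a new assignment wipes out everything accumulated so far
            self.set_val, self.add_val = new.set_val, new.add_val
        else:
            self.add_val += new.add_val

    def apply(self, x):
        if self.set_val is not None:
            x = self.set_val
        return x + self.add_val
```

The point is that "set to 0, then add x" and "add x, then set to 0" are simply different tags, so the order question never arises at push-down time.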

As a developer of Xeppelin contest watcher, glad to see the tool mentioned — even though the context is unexpected. Several thoughts on the topic:

  • As a cheater who knows coding and a bit of reverse engineering, you can falsify the contest log not only from Xeppelin but from any tool.
  • Generally, as a cheater in an online competition, you can cheat without leaving any sign of cheating.
  • You can't unify the setup for 10k participants.
  • Sending a contest log is no different from recording the screen, which some official competitions already ask you to do.
  • I believe that unless we are talking about the top 100 in a contest with prizes, contestants should not worry about cheaters, because cheaters deny you nothing but rating points. And rating points are virtual, so you shouldn't care about them (or if you do care, you should channel that energy into self-improvement).
  • CF is rather good at cleaning the standings post-contest; I don't see any Large Language Models above Legendary Grandmaster Models in the results of contests where it matters.
  • The real issue contestants face because of LLM cheating is skewed standings during the contest. It would be great to fix, but I have no idea how that could be possible.
On atcoder_official's AtCoder Beginner Contest 439 Announcement, 4 months ago
0

a = [1, 2, 2, 2, 3] b = [2, 1, 3, 5, 4]

You can take neither the min nor the max element with a=2 if you want it to fit nicely into the 1-2-3 sequence.

On atcoder_official's AtCoder Beginner Contest 439 Announcement, 4 months ago
+1

If you reorder the pairs by increasing A, then computing the LIS on the B's gives you some valid non-intersecting subset, and it is easy to show that this is the answer if all A's are different.

Now, if A's collide, break ties by decreasing B. Then you can't take two elements with the same A, so the LIS is still the answer.

And the general LIS algorithm is exactly what you need.
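The steps above can be sketched like this (my own illustration; `max_pairs` is a made-up name, and `bisect` gives the usual $$$O(n \log n)$$$ LIS):

```python
import bisect

def max_pairs(a, b):
    # Sort pairs by A ascending, breaking ties by B DESCENDING, so that two
    # pairs with equal A can never both appear in a strictly increasing
    # B-subsequence. Then a plain strict LIS on the B's is the answer.
    pairs = sorted(zip(a, b), key=lambda p: (p[0], -p[1]))
    tails = []  # tails[k] = smallest possible tail of an increasing run of length k+1
    for _, y in pairs:
        i = bisect.bisect_left(tails, y)
        if i == len(tails):
            tails.append(y)
        else:
            tails[i] = y
    return len(tails)
```

On the counterexample from the comment above (a = [1, 2, 2, 2, 3], b = [2, 1, 3, 5, 4]) this picks 3 pairs, e.g. (1,2), (2,3), (3,4).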

On twosquares' Good Bye 2025 Editorial, 4 months ago
+19

I think a single vote is attached to all the problems: after I clicked "good problem" on D, my "awesome problem" on E became a "good problem".

It was a great contest! Happy New Year everyone :)

On kingmessi's Codeforces Round 1067 (Div. 2), 5 months ago
0

If someone is interested, I figured out this does not break the solution because:

1) if we had to decrease 5->3, we create a new node anyway, so the last one is not used;
2) if we don't decrease the value, the 5->4 decrease makes the adjacent components non-final anyway.

On kingmessi's Codeforces Round 1067 (Div. 2), 5 months ago
0

Is there a way to hack your own solutions after the contest? I think the editorial solution for E is wrong, even though it passes the tests, as it doesn't do proper DSU rollbacks.

On a grid like

100 100 1 100 100
5 5 5 5 5

if we reduce the middle 5 by 1, and then reduce all the other 5's by 2, they will still have min neighbor = 1, even though the picture is now

100 100 1 100 100
3 3 4 3 3
On TheScrasse's Codeforces Round 1066 (Div. 1 + Div. 2), 5 months ago
+20

I wanted to comment that a typical ICPC contest should have 10-15 problems, so the judges actually had to remove something from the set (and this is usually the case). While trying to verify that, I found https://icpc.global/regionals/rules which states:

At least six problems will be posted.

Which means that, technically, a Codeforces Div. 2 round with problems A-F could be held as a regional? That's funny.

On chromate00's Codeforces Round 1058 Editorial, 6 months ago
0

This is smart, will definitely remember this! thanks

On atcoder_official's AtCoder Beginner Contest 429 Announcement, 6 months ago
+14

I know no one really cares at this point, but I figured out the difference between AC and non-AC in E:

  • queue has vertices: not AC
  • queue has {vertex, dist, root}: AC.

Because when the queue holds bare vertices, you propagate both the best and the second-best distance together and break BFS order for some updates waiting in the queue, and you can probably overwrite something while doing so. Because of that, the =inf check in BFS seems to not work at all, while min= with multiple repetitions seems to work well enough to be AC with some random shenanigans.
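A sketch of the AC variant (my own illustration, assuming unit-weight edges and "two nearest distinct sources" as the goal): each queue entry is (vertex, dist, root), and a vertex accepts at most two distinct roots, so no state is ever overwritten and plain BFS order stays valid:

```python
from collections import deque

def two_nearest_sources(n, edges, sources):
    # For each vertex, distances to its 2 nearest sources with DISTINCT roots.
    # Queue items are (vertex, dist, root): each (vertex, root) slot is filled
    # at most once, so FIFO order alone guarantees minimal distances.
    g = [[] for _ in range(n)]
    for u, v in edges:
        g[u].append(v)
        g[v].append(u)
    best = [{} for _ in range(n)]  # vertex -> {root: dist}, at most 2 entries
    q = deque((s, 0, s) for s in sources)
    for s in sources:
        best[s][s] = 0
    while q:
        v, d, r = q.popleft()
        for to in g[v]:
            if r in best[to] or len(best[to]) >= 2:
                continue  # this root already reached `to`, or `to` is full
            best[to][r] = d + 1
            q.append((to, d + 1, r))
    return best
```

With weighted edges the same state space works with a priority queue instead of a deque.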

On atcoder_official's AtCoder Beginner Contest 429 Announcement, 6 months ago
+1

OK, this is kind of the same as what I had in mind when coding my first submission https://atcoder.jp/contests/abc429/submissions/70450122, but instead of checking = with INF, I was just applying min= to the array of distances. I still don't understand what the difference is. Apparently I don't know BFS...

On atcoder_official's AtCoder Beginner Contest 429 Announcement, 6 months ago
0

OK, F is really simple in hindsight... If you cross a vertical line 3 times, you could have crossed it once, and that is strictly better. After that, it's clear that you want prefix and suffix achievability DPs and to merge them on updates; that's a classic dp-in-segtree trick.

On atcoder_official's AtCoder Beginner Contest 429 Announcement, 6 months ago
0

What is the solution to E? I have no idea how to find the $$$k$$$ shortest distances to some set of vertices.

What I managed to AC:

  • store the top-$$$k$$$ optimal distances for different points (d + color)
  • do a BFS where you push all colors together, and do it Levit-style, i.e. until everything stabilizes
  • when more than k distances are stored, prune
  • (might be unimportant) remember if you removed something at some stage, to account for it when calculating the answer
  • (important) shuffle the original queue
  • (important) take k = 10 and not k = 5 or k = 2, otherwise WA

I have no idea:

  • Why it runs in time
  • Why the choice of $$$k$$$ matters for WA: do I drop something that should be used later?

And F also seemed difficult, btw. The only good observation I had: for any vertical line, you cross it either 1 or 3 times.

On chromate00's Codeforces Round 1058 Editorial, 7 months ago
+8

Just upsolved Div. 1 B, as my solution failed on the added systest. And now I am confused.

I did this (343430329) following the editorial, where I use ~516 MB (twice as much won't fit, right?). I have the input plus $$$n \times n \times m$$$ memory (wlog $$$n \le m$$$).

When my submission failed after the retest (343345702), I thought that was fair enough, as I hadn't really proved my solution's memory consumption (I do rectangle search and then delayed range-minimum updates with a priority-queue trick for a better constant factor). But now I look at it and see that instead of having an $$$n\times n \times m$$$ dp, I store the push-min updates on ranges that I process afterwards. The number of updates is the sum of lengths of the long-side borders of the rectangles from $$$\mathbb{S}$$$. So it is like $$$2 \times n \times n \times m$$$. I have up to a 4x factor, as I am not careful when borders are shared, but that is rather easy to handle (if you got an MLE to fix, lol).

So it seems that even with patches, my solution doesn't fit. But it is also clear that I use only 2 times more memory (well, there is std::vector overhead, but I can manage). And the solution seems to me, well, careful enough, not abusing sets or segment trees.

So I don't understand two things, regardless of the missing maxtest:

  • Am I wrong about my solution? Is it really so bad to use 2x memory (1040 MB?) that it was not intended to pass?
  • Constraints + limits seem to be in a gray area where they don't communicate what is and isn't expected to pass (sqrt time? sqrt time + memory? sqrt log time + sqrt memory? plain sqrt memory?). This is sometimes alright when the model solution passes with a big gap, but here we have $$$O(nm\sqrt{nm})$$$ time and memory as well. Why not make the constraints 2 times easier or harder, to be more transparent about the problem?

upd: actually, when I say 1040 MB, I also assume I store 4-byte integers; 3-byte integers could close this gap as well.

On Proof_by_QED's Codeforces Round 1058 (Div. 1, Div. 2), 7 months ago
0

The first spoiler I agree with; I did the same. The second: I can believe the cost is small, but 3 seems like a magic number. Will wait for the editorial.

On Proof_by_QED's Codeforces Round 1058 (Div. 1, Div. 2), 7 months ago
+8

Cool C! $$$a_0$$$ brings some extra cases that seem a bit artificial, but it probably makes the problem less OEIS-able.

I don't understand the reason for having 25e4 in B instead of 1e5 because, well, it is $$$O(n\cdot m \cdot \min(n, m))$$$ anyway, right? And roughly the same memory; clearly the constant factor is bad. Sets weren't working for me on the fly, so I had to do some stupid rewrites for 20 minutes.

What is D1? The problem looks like a classic "dp with super-duper search" (keep a list of records, recalc the dp over the first K, last K and random K elements), but it doesn't seem to fit (I tried!). Some submissions seem to do some searches (I tried ternary search; it doesn't work).

On atcoder_official's AtCoder Beginner Contest 427 Announcement, 7 months ago
0

Same for me. No idea why it makes such a big difference in this case. Spent quite a while on it.

On atcoder_official's AtCoder Beginner Contest 427 Announcement, 7 months ago
+21

I had to push hard to make my MITM pass; changing map to unordered_map helped, and I am not sure why :)

On Arpa's I Filled an Empty Codeforces Page!, 7 months ago
+22

Hey! How does contributing work there? For example, you did the RMQ section; if someone wants to do the RSQ section, what should they do? I assume write some articles on prefix sums, BIT, etc., and also find some CF problems for practice. Should there be a video? Whom should one contact to add it to the EDU section? And how do we ensure no one is contributing to the same non-existent section at the same time?

A genuine question, as the post has a call to action but the action itself is a bit unclear.

On atcoder_official's AtCoder Beginner Contest 425 Announcement, 7 months ago
0

I think E had a flaw in that it doesn't explicitly tell you that you have to precompute binomials. I told myself that multitest times 5k factorizations should be enough and got 50/51 AC, 1/51 TLE. My idea was to say that the answer is $$$\frac{sum!}{c_1! \cdot c_2! \cdot \ldots}$$$, do two pointers, and keep track of how many factorials contain each number from 1 to $$$sum$$$, so I know how much it contributes to the answer. The limits are tight enough that this doesn't pass, and not tight enough to convince me the approach shouldn't work and that I should do something else instead.
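For reference, the standard binomial precalc that avoids per-query factorizations looks like this (a generic sketch of mine; the modulus is an assumption, not taken from the problem):

```python
MOD = 998244353  # assumed prime modulus, common in AtCoder problems

def build_binomials(n):
    # Precompute factorials and inverse factorials once in O(n),
    # so every C(a, b) afterwards is O(1).
    fact = [1] * (n + 1)
    for i in range(1, n + 1):
        fact[i] = fact[i - 1] * i % MOD
    inv_fact = [1] * (n + 1)
    inv_fact[n] = pow(fact[n], MOD - 2, MOD)  # Fermat's little theorem
    for i in range(n, 0, -1):
        inv_fact[i - 1] = inv_fact[i] * i % MOD

    def C(a, b):
        if b < 0 or b > a:
            return 0
        return fact[a] * inv_fact[b] % MOD * inv_fact[a - b] % MOD

    return C
```

With multitests, the build is done once for the global maximum of `sum`, not per test case.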

F's intended solution is nice. I just AC'ed with a hashset to keep track of which transitions I had already used.

On LilyWhite's Impossible Ideas for the Problems Caused by AI, 7 months ago
+26

Remove all users from Codeforces except tourist. Give tourist k invite links. When a person gets invited, they also get k invite links. Wait until Codeforces has many users again.

When someone cheats, you ban their whole invited subtree as well as their direct parent.

If you want to make it more realistic, instead of removing the userbase you can just mark users "verified/banned/unofficial".

On TheScrasse's Codeforces Round 1053 (Div. 1, Div. 2), 7 months ago
+31

Div1 E1 is the kind of problem that makes people submit the worst ideas they've got. My local stress test started passing at 3:14, and that was not enough to fix minor stuff and submit :cry:. Not sure what the purpose and intended approach of the problem were; I need the editorial.

Div1 B and C are sweet; F is interesting (I don't even understand how one can solve the bamboo case in 2 rounds). I definitely had some trouble with the way some problems were formulated. Maybe that even works against LLMs understanding the problems, but unfortunately it also works against me understanding the problems.

On BernatP's Codeforces Global Round 29, 7 months ago
+48

As a tester, I told all my friends to participate, as the round is really high quality and has lots of nice problems!

On chenjb's Comments on World Finals Baku's Problemset Distribution, 8 months ago
+8

As a tester, I enjoyed the round

As a contestant, I wouldn't say the contest was significantly different from those held in recent years. What was different, though, is the distribution of team skills. There are still few teams who can solve 10+ problems (congrats, SPbSU!) from the set in 5 hours, but IMO there were exceptionally many teams who could solve 10+ in, say, 6 hours. Given less time, some of them scored 9 and some scored 8. And it doesn't seem possible to fairly split a group of 20-30 teams with really similar high skill.

Compared to last year's set, I would say 8 problems in 2025 (40th place) is worth at least 7 problems in 2024 (23rd place), and 9 problems in 2025 (17th place) at least 8 in 2024 (11th place). And this is an optimistic estimate from me, as I personally think the 2024 WF had a slightly easier problemset.

I would assume 50th WF will be more difficult to compensate for that.

Personally, we had a rather bad, but not horrible, performance and ended up 77th, which is twice as bad as our worst virtual result of 40th place.

On wuhudsm's Codeforces Round 1040 (Div. 1, Div. 2), 9 months ago
0

Actually, I found my bug and now it just works, so I still have to submit, but I'm more or less confident in this one.

Damn, was so close: I forgot to copy-paste $$$\times \binom{r-l}{i-l}$$$ once. It would have been a get-to-GM round for me.

On wuhudsm's Codeforces Round 1040 (Div. 1, Div. 2), 9 months ago
0

I failed on one of the samples, but this sounded good to me:

  • first of all, any written number is something small, like 12, because from each side you can't place more than 6 guys (the first one placed on the left blocks everything to the left of him and half of the segment between him and the target).
  • then you notice that when you put the first element, it separates the problem into two unused blocks to the left and right of it.
  • each block can't push values outside of it except at its two sides.
  • So you do $$$dp[l][r][push_A][push_B]$$$, where you solve on a subsegment and see how much it can push for the bigger problems.
  • Now you take some intermediate $$$l \le i \le r$$$ and see how to get $$$a_i$$$ out of the left and right blocks' sub-pushes:
$$$dp_{l,r,push_A,push_B} = \sum dp_{l, i-1, push_A, x} \times dp_{i+1,r,a_i-x,push_B} \times \binom{r-l}{i-l}$$$
  • Based on the block, you know that one of the push-values will increase by one.
On wuhudsm's Codeforces Round 1040 (Div. 1, Div. 2), 9 months ago
+8

From the score distribution, I told myself that three subproblems is a bad sign. Surprisingly, it was quite reasonable: you definitely have incremental progress to make, and it's good to be rewarded gradually here.

Good round. Was really close to solving D, but couldn't figure out one of the samples on paper in the last several minutes; I wasn't counting one of the cases.

On wuhudsm's Codeforces Round 1040 (Div. 1, Div. 2), 9 months ago
0

That's for sure, considering you solved 15 problems across 2 sets of 6 problems (which, I guess, overlap, giving a total of ~10 unique problems per set). So even in the best case it's only $$$\frac{2}{3}$$$ similarity to your experience :)

On hugopm's Codeforces Round 1039 (Div. 2), 9 months ago
+16

As a tester, I think you might want to check out this round!

I solved D at 0:21 (and even got an intermediate result of global top-2!). It is hilarious that I spent literally less than a minute thinking about the bounds and told myself: we can solve it in $$$10\,000$$$ steps, because after each step we wait no more than $$$deg(v)$$$, and we know $$$\sum deg(v) = m$$$ and $$$n\sim m \sim 5000$$$, right?

I feel I got really lucky. In retrospect, it seems really not obvious why 10k is enough. Definitely the kind of "I believe this should work" problem; idk if that's good for a Div2 D.

Actually, it can be done with the messy /median of $$$\frac{n}{5}$$$ groups of 5 elements/ approach in deterministic time, see this.

The constraint is not even needed; you can get the k smallest elements in $$$O(n)$$$, see nth_element in C++.
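A sketch of that claim (my own illustration, not anyone's submission): a randomized quickselect returns the k smallest elements in expected linear time, which is the guarantee nth_element-style selection relies on:

```python
import random

def k_smallest(a, k):
    # Returns the k smallest elements of `a` (in arbitrary order) in
    # expected O(n) time via three-way-partition quickselect.
    if k <= 0:
        return []
    if k >= len(a):
        return list(a)
    pivot = random.choice(a)
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    if k <= len(less):
        return k_smallest(less, k)            # answer lies entirely below pivot
    if k <= len(less) + len(equal):
        return less + equal[:k - len(less)]   # pad with copies of the pivot
    return less + equal + k_smallest(greater, k - len(less) - len(equal))
```

Each level discards a constant fraction of the elements in expectation, hence the linear expected total work.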

There is some, as the purpose (I guess) is to show that the output is guaranteed to be small without giving away the solution at the same time. Still, 17 seemed too specific and might mislead someone (myself included). 20 or 50 would serve the same purpose.

+30

Great idea! And a good topic to go with in this format.

And what about an advanced stream on FFTs? Are you planning one? I would learn a ton from it, I guess; I never went further than the intermediate level. I believe it's the case for many people: they learned how to multiply polynomials, and polynomials just don't usually come up. So they end up using FFT to take dot products with different shifts, and that's it.

On PvPro's Codeforces Round 1026 (Div. 2), 11 months ago
0

I still don't know the model solution, but basically, after I figured out $$$f(x, y) \le 2 \cdot \min(x, y)$$$, I started filtering out all small elements and brute-forcing only on elements in the window $$$[\frac{ans}{2}, \infty)$$$. If there are too many, you take only the top-$$$K$$$ (probably the top-K max and top-K min in the window should work). With a high enough K you might pass, so you need a big $$$n$$$ to punish large values of K.

BTW, after that, the full solution is just to do a full recalc each time $$$a_i$$$ is bigger than twice the current answer. But that's not easy to notice, and one might try to just brute-force more. Hence the constraints. 321166762

On PvPro's Codeforces Round 1026 (Div. 2), 11 months ago
+2

Just to add a bit: even though some might instantly remember this trick, I didn't, and here was my thought process:

  • I need some sort of connections from one $$$(x, y)$$$ to $$$(x', y')$$$, and I need to be able to create a sort of Hamiltonian path
  • I say "sort of" because I need to alternate horizontal and vertical movements
  • Let's just duplicate each vertex so that the states "I just did a horizontal move" and "I just did a vertical move" are separated
  • I now add states like $$$(x, *, H)$$$ and $$$(*, y, W)$$$: a point plus the direction of the last move.
  • I put an asterisk there because of an important observation: when you do a move with constant x, you can go from any $$$y$$$ to any $$$y'$$$.
  • So now I can do something like $$$(x, *, H) \to (x, y) \to (*, y, W) \to (x', y) \to (x', *, H) \to \ldots$$$, and I want to find a sort of Hamiltonian path covering all $$$(x, y)$$$.
  • A Hamiltonian path is too expensive to find, so tweak the graph a bit to look for an Euler path instead. From the sequence above it's more or less clear that $$$(x, y)$$$ should become the edge between $$$(x, *)$$$ and $$$(*, y)$$$.
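The final reduction can be sketched as follows (my own illustration, assuming the resulting multigraph is connected with at most two odd-degree nodes, i.e. an Euler path exists): each point becomes an edge between its row-node and column-node, and Hierholzer's algorithm orders the points:

```python
from collections import defaultdict

def order_points(points):
    # Each point (x, y) is an edge between row-node ('R', x) and column-node
    # ('C', y); an Euler path over these edges visits every point exactly once
    # while alternating same-x and same-y steps.
    adj = defaultdict(list)  # node -> list of (edge_id, other_node)
    for i, (x, y) in enumerate(points):
        adj[('R', x)].append((i, ('C', y)))
        adj[('C', y)].append((i, ('R', x)))
    start = next(iter(adj))
    for v in adj:  # start at an odd-degree node if there is one
        if len(adj[v]) % 2 == 1:
            start = v
            break
    used = [False] * len(points)
    ptr = defaultdict(int)   # per-node pointer into its adjacency list
    path = []                # edge ids, collected in reverse Euler order
    stack = [(start, None)]
    while stack:
        v, eid = stack[-1]
        while ptr[v] < len(adj[v]) and used[adj[v][ptr[v]][0]]:
            ptr[v] += 1
        if ptr[v] == len(adj[v]):
            stack.pop()
            if eid is not None:
                path.append(eid)
        else:
            i, to = adj[v][ptr[v]]
            used[i] = True
            stack.append((to, i))
    return [points[i] for i in reversed(path)]
```

The per-node pointer makes the whole traversal linear in the number of points.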

When I said it's a classical one, I meant that this whole process felt familiar and clicked really quickly for me. I believe some similar problems have already been found in the comments.

On PvPro's Codeforces Round 1026 (Div. 2), 11 months ago
+17

Completed the full set today. I guess that's the first time for me during the contest time. Feels good :)

The problemset felt a bit more ICPC-style than usual, which is good both for my performance and for my impression of the problems. E seems like a kind of classic problem (even though the topic itself is not that popular, so I liked it anyway!).

Thanks for the contest, I needed this morale boost after several poor performances in recent weeks.

On wuhudsm's Codeforces Round 960 (Div. 2) Editorial, 12 months ago
0

I found a cool greedy for D. It can be proven that you usually pay 1 coin for each $$$a_i \gt 0$$$, but you can take a segment $$$[l, r]$$$ matching the pattern $$$2(44)^*2$$$ and cover it for $$$r-l$$$. We greedily try to extend this pattern, and that's it. Code: 316951310.

On NemanjaSo2005's Codeforces Round 1019 (Div. 2) — Editorial, 12 months ago
+15

dude...

On NemanjaSo2005's Codeforces Round 1019 (Div. 2) — Editorial, 12 months ago
+5

F hint 2:

You can look at all the bits separately and answer for each.

Why is that? Let's assume that for the highest bit I found a segment $$$[l, r]$$$ covering $$$i$$$. For the next bit there may exist a segment $$$[l',r']$$$ covering $$$i$$$ that is not "compatible" with $$$[l, r]$$$. This can (?) happen if there is a 1 between $$$i$$$ and both $$$r$$$'s, and the number of zeroes in the suffix has a different parity.

On Blagoj's Codeforces Round 1019 (Div. 2), 12 months ago
+28

I didn't look at the status page to check submissions: I saw disbalanced stats on F in the overall stats and opened it as the next one. Based on the scoring, I thought they should be of relatively similar difficulty. Even keeping in mind that someone was making GPT submissions, it still made sense for me to read it first: if the task is solvable by an LLM, maybe the statement is clearer than E's, and this matters for problems with close scoring.

I haven't solved F, but it was a close call. So I believe this statement, "I don't think banning everyone who solved F but didn't solve E would be inaccurate", is a bit too much: some people's strategy was definitely affected by the standings.

However I agree that the situation is a ridiculous one.

On MidnightCodeCup's MCC Qual Wrap Up, 13 months ago
0

I spent a bit of time figuring out how ABC-strings are related to trees, so you don't have to.

Consider the string as a traversal of a binary tree: A goes to a left child, B goes to a right child, and C returns to the parent. So AC, BC or AACBCC are cyclic paths that start from the root, go through distinct vertices and return to the root. That's also why #A + #B = #C in terms of numbers of occurrences.
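A small sketch of this interpretation (my own illustration): A/B descend into a fresh left/right child, C returns to the parent, and the walk is a valid root-to-root cycle through distinct vertices iff it never reuses a child slot and never leaves the root upward:

```python
def is_root_cycle(s):
    # Each stack entry is [left_used, right_used] for one vertex on the
    # current root-to-here path; reusing a slot would revisit a vertex.
    stack = [[False, False]]
    for ch in s:
        if ch in 'AB':
            side = 0 if ch == 'A' else 1
            if stack[-1][side]:
                return False  # that child subtree was already entered
            stack[-1][side] = True
            stack.append([False, False])
        elif ch == 'C':
            if len(stack) == 1:
                return False  # can't go above the root
            stack.pop()
        else:
            return False
    return len(stack) == 1  # must end back at the root
```

Since every A or B pushes one vertex and every C pops one, a valid string automatically satisfies #A + #B = #C.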

On MidnightCodeCup's MCC Qual Wrap Up, 13 months ago
+8

On Ubuntu that's just snap install kotlin, btw. And for Windows/Mac, any LLM should give you a short instruction for getting the CLI version.

I agree that installing it during the contest sucks. But tbh there is no real rule that you must use Python/C++, and using them just because everyone does will solidify them without any particular reason.

My complaint about having the VM in Kotlin is that it's just an extra layer of abstraction that adds nothing to the problem. I am not sure it's really possible to distribute ASM code so that everyone can run it, but my problem is that I need to study this specific machine to verify whether my ASM intuition works or not (and, well, I spent a while decomposing some parts of it).

Also, the single code file is not documented, while for any real ASM you can find information online. And LLMs are probably better at converting between ASM dialects than at handling a custom emulator. There should be a better sweet spot, but I haven't done my research on that; maybe I'm wrong.

On GlowCheese's Editorial of Codeforces Round 963 (Div. 2), 13 months ago
0

Just solved D in a cool way not mentioned in the editorial; let me explain it here. I find it easier in some sense.

Let's do binary search over the answer and then build $$$b_i := a_i \ge x$$$. We know how many cuts we have to do (something like $$$B := \lfloor\frac{n-1}{k}\rfloor$$$), and we know how many elements we will have at the end ($$$N := \frac{n - B \cdot k}{2}$$$). Out of N, more than half should be equal to one, so it's ($$$A := \frac{N}{2} + 1$$$) 1's that we keep until the end.

Now, instead of noticing the fact about indices mod k for the 1's (I hadn't while solving), let's build the initial array, reversing the operations. In fact, we can do it left to right, saying that we either add a single element that lives until the end, or we take a segment of $$$k$$$. We can notice that it doesn't matter in which order we destroy such segments, so this decomposition always works.

So, we have to construct the array, having A ones and B segments. The supported operations are "append a segment" and "append the current element and count it if it's a 1". I don't know how to solve this directly, but the trick is to do a $$$dp_{a,b}$$$ denoting the minimal current index such that we have already taken $$$a$$$ 1's and $$$b$$$ segments. It has $$$O(1)$$$ transitions if you do an $$$O(n)$$$ precalc. The issue is the number of states, but one can notice that we need $$$dp_{A, B}$$$ with $$$A = O(k)$$$ and $$$B = O(\frac{n}{k})$$$, so the total number of states is linear.

I think this is a cool approach; I liked it very much. 313844486

On theSkeleton's a ton of cheeting, 14 months ago
+16

// Additional redundant loop to obfuscate the code style further.

I'm crying out loud

On Petr's European Championship 2025 (EUC) Editorial, 14 months ago
+16

OMG, it feels like bravery was rewarded in this problem. We got too scared by $$$n = 2000$$$ and completely ignored our binsearch + perfect matching idea. And it even runs faster than my implementation of what I thought was the same as the model solution...

I can see why it runs fast. If everything is the same, you basically get $$$n$$$ edges. If everything is different, you get a perfect matching on the first try. And in between, well, yeah, it halves, and also you should have enough vacant numbers when you take $$$\frac{n}{2}$$$ different $$$a_i$$$'s and generate $$$a_i \times j$$$, as any two generators can't overlap in more than half of their occurrences.

What can I say, well-deserved qualification to WF, congrats!

On Petr's European Championship 2025 (EUC) Editorial, 14 months ago
0

Is it true, btw, that the second solution is exactly the same as the maxflow from the editorial, to the extent that one node is split into several? Instead of left to right it builds the graph right to left, but it seems no extra steps are involved (maybe an additional $$$+E$$$).

TBH I still don't get the $$$O(n\log n)$$$ bound on the number of edges, so to me both solutions look like a hyper-optimized cubic one :)

On Petr's European Championship 2025 (EUC) Editorial, 14 months ago
+18

I have two solutions for K: the one we didn't believe in during the onsite (passing in 1930ms on CF) and the one that I think more or less matches the intended:

AC 1900ms: Kuhn's algorithm on the graph where the left part has the numbers we are matching to ($$$a[i] \cdot j$$$) and the right part is $$$i$$$. We build $$$O(n^2)$$$ edges in $$$O(n^2\log n)$$$ (or with hashtable lookups, but sorting seems faster). Then Kuhn with 2 default optimizations: timer-based used[v] for an O(1) clear after each successful augmenting path, and shuffle(all(g[v]), rng) so we don't go into the same vertex too often and hopefully find a free index to pair with among the first elements of the adjacency list. Overall it should be $$$O(n^3)$$$, as we have $$$n$$$ vertices and $$$n^2$$$ edges, but in reality the $$$O(n^2\log n)$$$ construction took most of the time in my TLE upsolving attempts. 308915147

AC 1300ms: on a failed augmenting-path attempt, remove all future and existing edges to the right-part vertices from which we failed to propagate the path. Each time we fail, we inspected X edges and remove more than X edges, and we succeed $$$n$$$ times, so it's more or less clear why it's $$$O(nE)$$$; it's also fast because most successful steps need much less than $$$E$$$ for the DFS. Here $$$E$$$ should be something close to $$$O(n\log n)$$$, as this is pretty close to the jury's solution (though done a bit backwards). 308914819
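For reference, the two "default optimizations" look roughly like this in a generic Kuhn implementation (my own sketch, not the submission itself):

```python
import random

def kuhn_matching(n_left, adj):
    # Kuhn's bipartite matching with timer-based `used` (O(1) reset per
    # augmenting path) and shuffled adjacency lists (to hit free right-side
    # vertices early). adj[v] lists right-part neighbours of left vertex v.
    match_r = {}          # right vertex -> matched left vertex
    used = [0] * n_left   # timer value of the last visit to each left vertex
    timer = 0
    for g in adj:
        random.shuffle(g)

    def try_augment(v):
        if used[v] == timer:
            return False
        used[v] = timer
        for to in adj[v]:
            if to not in match_r or try_augment(match_r[to]):
                match_r[to] = v
                return True
        return False

    matched = 0
    for v in range(n_left):
        timer += 1        # replaces clearing the whole `used` array
        if try_augment(v):
            matched += 1
    return matched, match_r
```

The edge-deletion optimization from the 1300ms solution would additionally prune adj entries of every right vertex the failed DFS could not augment from.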

Is it supposed to be this way? The gap between the two solutions, in terms of how hard they are to come up with, is pretty big. The first one only just squeezes into the TL. The second one is not that fast by itself. Was it the same experience for the onsite participants with 5+ attempts?

Would appreciate some comments in general as well. I am now thinking a lot about this "remove edges for not-pushed vertices" optimization and how useful it could be in other problems.

Auto comment: topic has been updated by KiKoS (previous revision, new revision, compare).


On Cocoly1990's Good Bye 2024: 2025 is NEAR, 16 months ago
0

It probably doesn't; I indeed use the fact that we have groups of equal values. It is just the next (not obvious to me) step: that the array transforms without swapping elements.

On Cocoly1990's Good Bye 2024: 2025 is NEAR, 16 months ago
0

Oh, that's smart: we basically never swap elements. Then there exists a clean solution; you are right! Still not sure it's easily identifiable on the spot; I hadn't thought about it enough during the contest, because pbds was not that difficult to include.

On Cocoly1990's Good Bye 2024: 2025 is NEAR, 16 months ago
0

ABC felt a bit too difficult, even though I enjoyed them. E is alright; maybe it would be a bit more enjoyable if we solved for the first player, but then it'd be too easy. The E-F gap is large, but maybe it's my skill issue :). Overall it felt reasonably balanced.

What I can't understand is the intended solution for D: whatever you do, you need to keep track of two sorted sets and be able to recover an element by index and vice versa. This is either a lot of pain to implement, or a no-brainer with pbds. For div1 that's alright either way (I used pbds), but I feel sorry for less experienced participants: if you don't know pbds, you'll likely have a really hard time debugging the implementation yourself, even though the greedy idea is a good one for this position.

You can sort the edges by weight. As the cost is defined as the maximum weight of the edges on the path, you can add them one by one. When A and B become connected through a new edge, it means $$$dist(A, B) = w_{edge}$$$.

Then comes the greedy idea: if you can pair A, B with the minimal cost in the set, you should do it; otherwise, the cost would only get worse. Consider freshly connected A, B and still unconnected C, D. Suppose you don't connect (A, B) now because (A, C) + (B, D) looks more optimal; then you'll wait until C is connected to the (A, B) component and D is connected to the (A, B) component as well. At that moment (C, D) are in the same component too (and maybe they were connected even earlier). Then these two inequalities hold: $$$cost(A, B) \le cost(A, C)$$$ and $$$cost(C, D) \le cost(B, D)$$$.

So, $$$cost(A, B) + cost(C, D) \le cost(A, C) + cost(B, D)$$$, and it is better to pair them greedily. Not sure this is a rigorous proof, but it seems rigorous enough.

On DNRWhat do you enjoy besides sport programming, 17 months ago
+10

Orienteering is extremely cool. You can have fun both at the beginner level and if you decide to improve your skills and run advanced races.

On pwnedTrain Better with ThemeCPs (+ Website)!, 17 months ago
0
On pwnedTrain Better with ThemeCPs (+ Website)!, 17 months ago
0

I tried too; it didn't work for me. Maybe I'm doing it wrong, just submitting some CE to the opened tab.

On YouKn0wWhoCodeTON Round 9 (Div. 1 + Div. 2, Rated, Prizes!), 17 months ago
0

It is indeed clear that you handle the first $$$cx$$$ values normally, and after that you should only get values divisible by $$$x$$$, of which there are about $$$\frac{m}{x}$$$. However, it can happen that $$$y > m$$$ while $$$y \oplus x \le m$$$, and you need a special case for that too. But then it turns out your $$$[0; cx]$$$ range can intersect with your $$$[m - x, m]$$$ range.

So you start adding ifs for the cases where they overlap, and then you just add even more ifs found by stress tests.

Not saying it is a bad problem overall, but it is definitely an overkill for C2. There probably is an elegant way to solve it without that many corner cases, but I don't believe this is a problem that a less experienced participant can always finish even after getting the main ideas.

Also I think $$$10^5$$$ is a bad constant, as $$$x \le 10^6$$$: $$$x$$$ should be enough, and less than $$$x$$$ is not enough. Imagine $$$m = 10000x_2$$$ (with $$$x = \ldots101_2$$$); then you can take $$$m - x + 2$$$ and it will give a xor bigger than $$$m$$$.

Probably the easiest way was to solve $$$m \le 4x$$$ naively, and otherwise handle the first $$$2x$$$ and the last $$$x$$$ values directly and use formulas for the rest.

On YouKn0wWhoCodeTON Round 9 (Div. 1 + Div. 2, Rated, Prizes!), 17 months ago
+2

What was that with C2? I have no idea what I did for the solution; I just stress-tested and added ifs for the found cases until it passed.

Gcd round indeed.

On HoriCodeforces Global Round 27, 18 months ago
+24

Liked C a lot:

  1. Solving for evens through solving for odds, always a cool trick. I used $$$solve(2n) = [\ldots, solve(2^{\lfloor\log(2n)\rfloor} - 1), 2n]$$$.
  2. A lot of ways to solve for odds: I used $$$[\ldots, 3, 1, n - 1, n]$$$, as $$$(n \mathbin{\&} (n - 1)) = n - 1$$$ for odd numbers and $$$1 \mathbin{\&} 3 = 1$$$, which we can push to the answer as well.
  3. $$$n \ge 5$$$ is a good constraint: I didn't have to handle small cases manually.

Maybe a bit harder than a regular D2C, but definitely solvable (and via many approaches, which is not always the case). It's generally hard to create a good problem for position C, and today it was just awesome.

On moradiya84Why can't i submit??, 20 months ago
+7

Same. In case it provides any additional information: I tried submitting from public wifi and then from a mobile hotspot, and both failed. upd: Cloudflare Ray ID: 8b7b5c8d3e756fea

On ZhouShang2003Pinely Round 4 (Div. 1 + Div. 2) Editorial, 21 months ago
+17

Enjoyed the round a lot, especially D and E are great. Thanks to coordinators, authors and testers!

++respect

On ch_egorCodeforces Round #751 Editorial, 5 years ago
0

Hi! I'm sorry you didn't like the editorial. Still, the solution (in my opinion) has two main ideas:

  1. Calculate dp using BFS:

~~~~~
queue<int> q;
q.push(0);
while (!q.empty()) {
    int x = q.front(); q.pop();
    FOR EACH i WHERE (i > x && i - a_i <= x) IS TRUE {
        for (int j : gr[i]) { // gr is the reversed graph of "i falls to i + b_i" edges
            if (dp[j] > dp[x] + 1) {
                dp[j] = dp[x] + 1;
                q.push(j);
            }
        }
    }
}
~~~~~

  2. The indices in "FOR EACH i WHERE (i > x && i - a_i <= x) IS TRUE" can be found in $$$O(n \log n)$$$ total, if we find each i with segtree.index_of_min([x, maxn]), where the segtree is built on the array i - a_i and every processed position is set to $$$+\infty$$$ so it is extracted only once.