Comments

Yes, that is why I said solving problems is the best way to improve.

Um_nik has the best advice for this: just solve more problems.

There is no algorithm for solving these problems, just a collection of heuristics that you learn. "Taking a random idea" and using it to solve the problem is a terrible approach, because if your idea were truly "random" there is a 0% chance of it working. I'm guessing that by "random" you mean the first idea that comes to mind, which is often much better than random thanks to your experience.

Some heuristics are:

1. Try the simple stuff first; if it ends up failing you don't waste much time.
2. If you are stuck, try to prove some lemmas that you think would be helpful (e.g. "if I could guarantee X, the problem is much easier").
3. Try solving an easier version of the problem; this often gives intuition for the original problem.
4. When trying to prove something, first try to disprove it. It's often easier to find counterexamples, and this saves a lot of time.

That said, just memorizing these heuristics isn't gonna be that helpful. Solving problems is the best way to improve.

Radewoosh's suggestion here: https://mirror.codeforces.com/blog/entry/78358. Basically a CF Wiki where helpful blog posts are stored. If implementing this is too hard, I think a lot of the Gym's code could be reused, i.e. needing "coach mode" to edit, rating difficulty, etc. The only extra thing needed would be sorting by "tags", but in the meantime we could just make sure to have descriptive titles.

https://mirror.codeforces.com/blog/entry/64625 does a pretty good job of explaining

On wtmoo: Ambiguous reasons for muting, 6 years ago
+8

I don't see how anyone could read "Fact: If you do competitive programming then you will die" and think it was genuine hate tbh, unless English is your second language (which could be the case, idk). "You will die" != "I will kill you".

Say x ends in 1000 in binary. Then no interval of size 1, 2, or 4 covers x (if we had such an interval a, then a - len(a) >= x), and we have already proven that nothing of the same size can intersect, so the only possible ranges are the larger ones. You can generalize this argument for other range sizes.

riela never jokes

I don't think that this is a feature (or at least I haven't been able to find one); I just use this list or a search engine if I'm looking for a specific topic.

Sounds interesting. Personally, what I'd find useful (and that isn't really common) are hints for problems before the editorial. I think these are much more helpful than just giving the whole answer in the editorial, as they give people the chance to solve at least part of the problem on their own.

Yeah, your idea sounds pretty much exactly like what I did, and I agree that your segtree's constant is probably too large. Sorry, we made the bounds kinda tight to help counter potential cheese solutions.

Solving F using segment tree was the intended solution, some people had nicer solutions which ksun described

My understanding of a wildcard slot is that, since the number of people who are guaranteed a spot can vary depending on how many regional champions get medals at NAC, we need to fill in the rest. Of course, they could have just said that the remaining spots will be filled based on position at NAC, but since they didn't, I guess the judges will choose the remaining wildcards based on some feat in the contest (still probably strongly correlated with placement).

To get blue you generally just need good implementation skills and very basic algorithm/data structures. Being able to solve A-C quickly should get you blue.

Here are some tips I try to follow:

- Don't look at the solution until you have no leads, and even then take a break and come back to the problem later to see if you can find a new lead.
- Upsolve.
- Don't search by topic unless it is something completely new (figuring out which techniques solve a problem is sometimes the most important part).
- Don't overreach, that is, don't try to solve problems that are too far above your level.
- If you solved a problem you found difficult, take a quick break, then come back to the problem and see how you could have solved it faster.

All this being said, if you are stuck at gray it is probably your implementation skills, and I'd recommend the AtCoder Beginner Contests for improving those.

I'd recommend E869120's pathway; it's much more detailed and also lists the needed data structures:

https://mirror.codeforces.com/blog/entry/66909

It's because (x & -x) gets you the least significant set bit of x.

Let a = [A]10...00. Then (thanks to two's complement) -a = [A inverted]01...11 + 1 = [A inverted]10...00. Bitwise ANDing the two gives 0...010...00, i.e. the lsb of a.

For the covering, it's because the length of the interval corresponding to a (1-indexed) x is equal to 2^(position of x's lsb), i.e. the value of the lsb itself.

You can check out my blog if this isn't clear.

If we are suggesting types of contests, I'd also like to add themed contests, like the AtCoder Educational DP Contest. Personally I'd find these more educational than the current educational rounds.

0

Heaps have a pretty small constant so it should pass. Everything I can think of involves maintaining a sorted order in some way, so I can't really think of an O(n) approach.

+18

Am I missing something, or can we just use two heaps?

A similar idea applies, just to a lesser degree. The upper bound on ratings in Div 2 is either 1900 or 2100 depending on the contest, and those ratings are doable. Of course, not many people will solve the high-rated problems, but that is the point of having those problems: to distinguish people based on their skill. If a large number of people were finishing every problem, that would be hard.

I believe the approximation for problem ratings is something like: if a problem is rated 400 points above your rating, you have about a 10% chance of solving it, so problem ratings of 2000-2500 are reasonable given the number of contestants.

Your approach is wrong; I recommend reading the editorial.

You are using O(n^2) amounts of memory, which is too much.

When you access an element of a map in C++ with operator[], if the element isn't there, the default value is inserted into the map, so your dfs blows up the memory.

For magic portals, you are right in your approach.

To iterate through all pairings, we can recursively generate them by, at each step, pairing some fixed "first" value with every remaining value. For example, for 1 2 3 4 5 6, we would pair 1 with each of [2,6] and recursively generate pairings of the rest of the array. The branches never overlap (they contain a different pair), so we generate every pairing exactly once.

This is also the usaco problem WORMHOLE, whose solution can be found here

For Surprise Birthday I do think brute force would work actually, if you think about it in the right direction.

Instead of counting the number of ways to go from a string to our encrypted string, count the number of ways to go from our encrypted string to some other string using the reverse operation, where we transition like so

aaaaa aaaaa X -> aaaaa X
aaaaa X aaaaa -> aaaaa X or X aaaaa
X aaaaa aaaaa -> X aaaaa

We have O(n^2) possibilities for the string and O(n) transitions from each string to a different one (both following from the fact that the new string must be either a prefix or a suffix of the old string), so we can use a BFS approach to get our solution. With naive string comparison, we can find the transitions in O(n^2), giving an O(n^4) solution. If this is too slow, we can precompute all of the comparisons in O(n^3) time using an unordered map or similar (though with the size of n, I feel the constants matter more).

This sounds pretty cancerous to implement though, so I haven't, and I'm not certain that this works.

+1

I believe you just treat it as a linear Diophantine equation, use the extended Euclidean algorithm to get solutions (where both counts are positive), then minimize their sum.

See this article on linear Diophantine equations for more details.

It's not guaranteed that two reachable numbers share a bit (the AND checks two numbers for adjacency, not the whole sequence).

For example: 8 12 4

We can reach the 4 from 8, but they don't share any bit.

Let's say the first value in the array is 3, and m is 6.

After each operation, we get the following values: 3, 4, 5, 0, 1, 2.

If we increase 3 through the operation, we don't want to make it 4 or 5, as that just makes getting a nondecreasing sequence harder; we want it to become less than 3.

However, to make it 1 or 2 we need to go past zero. Zero is better for nondecreasing sequences, and it is also easier to get to.

Therefore, our only choices for the first element are zero or its initial value.

+30

Sure.

First we notice that the number of overall operations is equal to the maximum number of operations used on a single index.

Next we notice that, if we can solve the problem in <= X operations, then we can also solve it in <= X + 1 operations. Feasibility is monotonic in X, so we can binary search for the minimum X.

The question is, how do we check feasibility for a given X? Well, notice that it is never worse for the leftmost value to be lower. However, we can only increase a value until it wraps around to zero. Thus we either leave the first value unchanged or set it to zero (checking that the number of operations required is <= X). Next, we move on to the next index, this time setting the "base" value equal to the previous index's final value, to ensure our sequence is nondecreasing. If at any point the current value is less than the base and X operations on it are not enough to make up the difference, then X is infeasible.

Now we have O(log m) searches and O(n) checking, so we have an overall O(n log m) solution.

Huh, that's odd; it fixed itself on my end as well. When it was 49, I checked on a different computer and it showed 49 there too. I guess my router had an error or something.

Thanks for pointing that out, it's fixed now.

Well, for the lower bound to be the smallest number, we need to show a construction in which every layer contains one of the segments covering the maximally covered area [i, i+1].

First off, note that the maximally covered segment is in the middle as a segment [i,i+1] is covered by (i+1)*(n-i) segments (taking one starting point <=i and ending point >=i+1). (There are two such areas when n is even)

Then we can construct a solution: starting from i = 0, take any segment starting at i and ending at n - i, and split it at some point; one of the resulting segments will cover the maximally covered area. Repeat this for all such split points, increment i, and repeat.

It is clear that every layer includes a unique segment covering the maximally covered area, and every such segment is included in exactly one layer. Thus the number of layers equals the number of segments covering the maximal area, which is exactly the lower bound. So the lower bound is achievable, and it is the smallest number of layers.

To be fair, 6 and 7 can be combined by putting the segment with the even numbers first, but that's still 8 cases.

Here is another way of thinking about it:

Consider two guests a,b with a being friends with b and x, while b is friends with a and y.

If a is processed first, then x and b will become friends, then when b is processed a and y, x and y will become friends.

If b is processed first, then a and y will become friends, then when a is processed b and x, x and y will become friends.

Either way the same endstate is reached.

Thus we can switch any adjacent guests without changing the endstate, so we can order the processing of the guests in any way we want (sort of like bubble sort) without changing the endstate.

On ODT: Codeforces Round #449 Editorial, 8 years ago
0

Sorry, I stupidly conflated two different n's in my answer, so it doesn't work. Now I'm lost as well: I don't see how there can be no linear term in n in the complexity, since n terms must be read and written.

On ODT: Codeforces Round #449 Editorial, 8 years ago
0

Here is how I believe it is solved:

First you make a (binary search) tree with each index of the segment as a leaf. Thus a tree of height n can store 2^(n-1) indices. The number of internal nodes is linear in the number of leaves (n - 1, to be exact), so traversing the worst case (the whole tree) takes linear time. But note that this tree stores an exponential number of indices, and traversing an exponential amount of space in linear time is the same as traversing a linear amount of space in logarithmic time. Thus each query takes O(log n) time, so with m queries the problem is solved in O(m log n) time.

Edit: I'm stupid and confused two different n's; this solution doesn't work.

On ODT: Codeforces Round #449 Editorial, 8 years ago
+3

When you look at the problem, you notice that the strings are defined recursively. Storing each string is infeasible, however: each f_n uses f_(n-1) twice and therefore takes more than double the previous string's space, so storing each string would quickly exceed the memory limit. As N is limited by 10^6 and Q is limited by 10, we see that a linear-per-query solution is sufficient.

To answer a query Q(n (string number), k (character number)), you first need to check whether the character is within the length of string f_n. These lengths can be precomputed in linear time by letting L[i] = min(L[i-1]*2 + (length of the added parts), 10^18) (we don't need to know lengths above 10^18 due to the restriction on k). As the editorial notes, we really only need to calculate this until n is about 60.

Now that we are able to check whether an answer exists, we need to actually calculate it. Each valid query asks for a character in one of five parts: the start of the string ("What are you doing..."), f_{n-1}, the middle of the string (" Are you busy... "), f_{n-1} again, or the end of the string ("\"?"). If the query asks for a character in the start, middle, or end, it can return that character directly. Otherwise, it recurses into a query on f_{n-1} (adjusting which character it asks for by subtracting the preceding lengths). This repeats until a character is found. (Note that the base case is at n = 0, so it takes at most n recursions.)

Thus we have an O(NQ) solution, which is acceptable.

0

The answer is NO, as it is impossible to seat four groups of 2 without two groups being adjacent. Here is an illustration of the closest possible seating to a proper answer (note: "-" represents a division): aa-bbcc-dd. In this case seats 4 (b) and 5 (c) are adjacent, so this seating doesn't work. In short: it's impossible to arrange these people such that no two people from different groups are adjacent, so the answer is NO.

+1

You check whether a node is a leaf with adj[cur].size() == 1, but that doesn't work when node 1 has only one child. This causes your method to return immediately with ans = 0. Here is a simpler test case where your code fails: 4 1 2 2 3 3 4

Clearly, given a color c and a number of repaints m, you could bash through all possible intervals and find the maximum valid substring, but this would take too long: with n(n+1)/2 possible intervals and q queries, that's O(n^2 * q) time.

We can resolve this issue because there are only 26n possible distinct queries (there are 26 letters in the alphabet and m is constrained to [1, n]). Therefore we run through every possible query and store its answer in an array Ans[c][m] (this takes O(n^2 * 26) time).

To run through every possible query, we look at every color and every possible subinterval, and find the minimum number of recolors t needed to make that interval valid for that color. We compare the size of this interval to the size currently stored in Ans[c][t], and keep the maximum of the two.

However, this only finds the maximum length when exactly m characters are recolored, but in some cases fewer than m recolors are needed. For example, take the string axaxa with favorite color a: at most 2 recolors will ever be needed, even if more are available. Therefore, for each color we iterate through [1, n] and, if Ans[c][j] is less than Ans[c][j-1], we replace Ans[c][j] with Ans[c][j-1]. Using the above example, Ans[a][2] would be 5 while Ans[a][3] would be 0, so we set Ans[a][3] to 5. (Restating: the maximum valid substring given m recolors equals the maximum over all p with 1 <= p <= m of the answer using exactly p recolors; we originally only found answers which used exactly m.)

Then we can answer each query in O(1) using this 2-D array, as the solutions are precomputed.