mohammedehab2002's blog

By mohammedehab2002, 4 years ago, In English

1516A - Tit for Tat

The general approach to minimizing an array lexicographically is to try to make the first element as small as possible, then the second element, and so on. So greedily, in each operation, we'll pick the first non-zero element and subtract $$$1$$$ from it, and we'll add that $$$1$$$ to the very last element. You can make the implementation faster by doing as many operations as you can on the first non-zero element simultaneously, but it's not necessary.
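For concreteness, here's a minimal sketch of this greedy, assuming the usual input format of $$$t$$$ test cases, each giving $$$n$$$, $$$k$$$, and the array:

#include <bits/stdc++.h>
using namespace std;

int main()
{
    int t;
    scanf("%d",&t);
    while (t--)
    {
        int n;
        long long k;
        scanf("%d %lld",&n,&k);
        vector<long long> a(n);
        for (auto &x:a)
            scanf("%lld",&x);
        //drain the earliest non-zero elements into the last one
        for (int i=0;i+1<n && k>0;i++)
        {
            long long take=min(a[i],k); //do all operations on a[i] at once
            a[i]-=take;
            a[n-1]+=take;
            k-=take;
        }
        for (auto x:a)
            printf("%lld ",x);
        printf("\n");
    }
}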

Code link: https://pastebin.com/pBsychs2

1516B - AGAGA XOOORRR

So let's try to understand what the final array looks like in terms of the initial array. The best way to see this is to look at the process backwards. Basically, start with the final array, and keep replacing an element with the $$$2$$$ elements that xor-ed down to it, until you get the initial array. You'll see that the first element turns into a prefix, the second element turns into a subarray that follows this prefix, and so on. Hence, the whole process of moving from the initial to the final array is like we divide the array into pieces, and then replace each piece with its xor, and we want these xors to be equal. A nice observation is: we need at most $$$3$$$ pieces. That's because if we have $$$4$$$ or more pieces, we can take $$$3$$$ pieces and merge them into one. Its xor will be the same, but the total piece count will decrease by $$$2$$$. Now, checking if you can divide it into $$$2$$$ or $$$3$$$ pieces is a simple task that can be done by bruteforce. You can iterate over the positions you'll split the array, and then check the xors are equal using a prefix-xor array or any other method you prefer.

Additional idea: for $$$2$$$ pieces, you don't even need bruteforce. It's sufficient to check the xor of the whole array is $$$0$$$. Hint to see this: write the bruteforce.
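As a concrete sketch, here's what the bruteforce with a prefix-xor array might look like for a single test case (the quadratic loop over split points is just one way to do it):

#include <bits/stdc++.h>
using namespace std;

//returns true if the array can be reduced to at least 2 equal elements
bool can(const vector<int>& a)
{
    int n=a.size();
    vector<int> pre(n+1,0);
    for (int i=0;i<n;i++)
        pre[i+1]=pre[i]^a[i];
    if (pre[n]==0) //2 equal pieces exist iff the total xor is 0
        return true;
    for (int i=1;i<n;i++) //pieces are a[0..i-1], a[i..j-1], a[j..n-1]
        for (int j=i+1;j<n;j++)
        {
            int x=pre[i],y=pre[j]^pre[i],z=pre[n]^pre[j];
            if (x==y && y==z)
                return true;
        }
    return false;
}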

Code link: https://pastebin.com/tnLpW23C

Bonus task: can you find an $$$O(n)$$$ solution? What if I tell you at least $$$k$$$ elements have to remain instead of $$$2$$$?

1516C - Baby Ehab Partitions Again

First of all, let's check if the array is already good. This can be done with knapsack dp. If it is, the answer is $$$0$$$. If it isn't, I claim you can always remove one element to make it good, and here's how to find it:

Since the array can be partitioned, its sum is even. So if we remove an odd element, it will be odd, and there will be no way to partition it. If there's no odd element, then all elements are even. But then, you can divide all the elements by $$$2$$$ without changing the answer. Why? Because a partitioning in the new array after dividing everything by $$$2$$$ is a partitioning in the original array and vice versa. We just re-scaled everything. So, while all the elements are even, you can keep dividing by $$$2$$$, until one of the elements becomes odd. Remove it and you're done. If you want the solution in one sentence, remove the element with the smallest possible least significant bit.
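Here's a minimal sketch of the whole solution, assuming the output is the number of removed elements followed by their 1-based indices:

#include <bits/stdc++.h>
using namespace std;

int main()
{
    int n;
    scanf("%d",&n);
    vector<int> a(n);
    int sum=0;
    for (auto &x:a)
    {
        scanf("%d",&x);
        sum+=x;
    }
    //knapsack: which subset sums are achievable?
    vector<bool> dp(sum+1,false);
    dp[0]=true;
    for (int x:a)
        for (int j=sum;j>=x;j--)
            dp[j]=dp[j]||dp[j-x];
    if (sum%2==1 || !dp[sum/2]) //already good
        printf("0\n");
    else
    {
        //remove the element with the smallest least significant bit
        int best=0;
        for (int i=1;i<n;i++)
            if (__builtin_ctz(a[i])<__builtin_ctz(a[best]))
                best=i;
        printf("1\n%d\n",best+1);
    }
}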

Alternatively, by a very similar reasoning, you can start by dividing the whole array by its $$$gcd$$$ and remove any odd element (which must exist because the $$$gcd$$$ is $$$1$$$,) but I think this doesn't give as much insight ;)

Code link: https://pastebin.com/aiknVwkZ

1516D - Cut

Let's understand what "product=LCM" means. Let's look at any prime $$$p$$$. Then, the product operation adds up its exponent in all the numbers, while the LCM operation takes the maximum exponent. Hence, the only way they're equal is if every prime divides at most one number in the range. Another way to think about it is that every pair of numbers is coprime. Now, we have the following greedy algorithm: suppose we start at index $$$l$$$; we'll keep extending our first subrange while the condition (every pair of numbers is coprime) is satisfied. We clearly don't gain anything by stopping when we can extend, since every new element just comes with new restrictions. Once we're unable to extend our subrange, we'll start a new subrange, until we reach index $$$r$$$. Now, for every index $$$l$$$, let's define $$$go_l$$$ to be the first index that will make the condition break when we add it to the subrange. Then, our algorithm is equivalent to starting with an index $$$cur=l$$$, then replacing $$$cur$$$ with $$$go_{cur}$$$ until we exceed index $$$r$$$. The number of steps it takes is our answer. We now have $$$2$$$ subproblems to solve:

calculating $$$go_l$$$

To calculate $$$go_l$$$, let's iterate over $$$a$$$ from the end to the beginning. While at index $$$l$$$, let's iterate over the prime divisors of $$$a_l$$$. Then, for each prime, let's get the next index whose element this prime divides. We can store that in an array that we update as we go. If we take the minimum across these occurrences, we'll get the first index whose element isn't coprime to $$$a_l$$$. Let's set $$$go_l$$$ to that index. However, what if the condition breaks because of $$$2$$$ other elements, neither of which is at index $$$l$$$? A clever way to get around this is to minimize $$$go_l$$$ with $$$go_{l+1}$$$, since $$$go_{l+1}$$$ covers all the elements coming after $$$l$$$.

counting the number of steps quickly

This is a pretty standard problem solvable with the binary lifting technique. The idea is to perform many jumps at a time, instead of $$$1$$$. Let's calculate $$$dp[i][l]$$$: the index we'll end up at if we keep replacing $$$l$$$ with $$$go_l$$$ $$$2^i$$$ times. Clearly, $$$dp[i][l]=dp[i-1][dp[i-1][l]]$$$ since $$$2^{i-1}+2^{i-1}=2^i$$$. Now, to calculate how many steps it takes from index $$$l$$$ to index $$$r$$$, let's iterate over the numbers from $$$log(n)$$$ down to $$$0$$$. Let the current one be $$$i$$$. If $$$dp[i][l]$$$ is less than or equal to $$$r$$$, we can jump $$$2^i$$$ steps at once, so we'll make $$$l$$$ equal to $$$dp[i][l]$$$ and add $$$2^i$$$ to the answer. At the end, we'll make one more jump.
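Here's a self-contained sketch of the jumping part. The $$$go$$$ array is a small hypothetical example standing in for the computation described above, with $$$go_l=n+1$$$ acting as a sentinel meaning the subrange starting at $$$l$$$ reaches the end:

#include <bits/stdc++.h>
using namespace std;

const int LOG=18;

int main()
{
    int n=5;
    vector<int> go={0,3,3,4,6,6}; //hypothetical values for illustration (1-based; go[0] unused)
    vector<vector<int>> dp(LOG,vector<int>(n+2));
    for (int l=1;l<=n+1;l++)
        dp[0][l]=(l<=n?go[l]:n+1);
    for (int i=1;i<LOG;i++)
        for (int l=1;l<=n+1;l++)
            dp[i][l]=dp[i-1][dp[i-1][l]];
    //answer a query [l,r]: take every jump that doesn't exceed r, plus one final piece
    int l=1,r=5,ans=1;
    for (int i=LOG-1;i>=0;i--)
        if (dp[i][l]<=r)
        {
            ans+=1<<i;
            l=dp[i][l];
        }
    printf("%d\n",ans); //3 pieces: [1,2], [3], [4,5]
}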

Code link: https://pastebin.com/Ng314Xc8

1516E - Baby Ehab Plays with Permutations

Let's think about the problem backwards. Let's try to count the number of permutations which need exactly $$$j$$$ swaps to be sorted. To do this, I first need to refresh your memory about (or maybe introduce you to) a greedy algorithm that does the minimum number of swaps to sort a permutation. Look at the last mismatch in the permutation, let it be at index $$$i$$$ and $$$p_i=v$$$. We'll look at where $$$v$$$ is in the permutation, and swap index $$$i$$$ with that index so that $$$v$$$ is in the correct position. Basically, we look at the last mismatch and correct it immediately. We can build a slow $$$dp$$$ on this greedy algorithm: let $$$dp[n][j]$$$ denote the number of permutations of length $$$n$$$ which need $$$j$$$ swaps to be sorted. If element $$$n$$$ is in position $$$n$$$, we can just ignore it and focus on the first $$$n-1$$$ elements, so that moves us to $$$dp[n-1][j]$$$. If it isn't, then we'll swap the element at position $$$n$$$ with wherever $$$n$$$ is so that $$$n$$$ ends up in the right position, by the greedy algorithm. There are $$$n-1$$$ positions element $$$n$$$ can be in, and after the swap, you can have an arbitrary permutation of length $$$n-1$$$ that needs to be sorted; that gives us $$$n-1$$$ ways to go to $$$dp[n-1][j-1]$$$. Combining, we get that $$$dp[n][j]=dp[n-1][j]+(n-1)*dp[n-1][j-1]$$$.
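A direct sketch of this recurrence, for small $$$n$$$ (the modulus $$$10^9+7$$$ is an assumption):

#include <bits/stdc++.h>
using namespace std;
const long long MOD=1000000007; //assumed modulus

//dp[i][j]: permutations of length i that need exactly j swaps to be sorted
vector<vector<long long>> calcDp(int n,int k)
{
    vector<vector<long long>> dp(n+1,vector<long long>(k+1,0));
    for (int i=0;i<=n;i++)
        dp[i][0]=1; //only the identity needs 0 swaps
    for (int i=1;i<=n;i++)
        for (int j=1;j<=k;j++)
            dp[i][j]=(dp[i-1][j]+(i-1)*dp[i-1][j-1])%MOD;
    return dp;
}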

Next, notice that you don't have to do the minimum number of swaps in the original problem. You can swap $$$2$$$ indices and swap them back. Also, it's well-known that you can either get to a permutation with an even number of swaps or an odd number, but never both (see this problem.) So now, after you calculate your $$$dp$$$, the number of permutations you can get to after $$$j$$$ swaps is $$$dp[n][j]+dp[n][j-2]+dp[n][j-4]+...$$$. Now, let's solve for $$$n \le 10^9$$$.

Sane solution

Notice that after $$$k$$$ swaps, only $$$2k$$$ indices can move from their place, which is pretty small. That gives you a pretty intuitive idea: let's fix a length $$$i$$$ and then a subset of length $$$i$$$ that will move around. The number of ways to pick this subset is $$$\binom{n}{i}$$$, and the number of ways to permute it so that we need $$$j$$$ swaps is $$$dp[i][j]$$$. So we should just multiply them together and sum up, right? Wrong. The problem is that double counting will happen. For example, look at the sorted permutation. This way, you count it for every single subset when $$$j=0$$$, but you should only count it once. A really nice solution is: force every element in your subset to move from its position. How does this solve the double counting? Suppose $$$2$$$ different subsets give you the same permutation; then, there must be an index in one and not in the other. But how can they give you the same permutation if that index moves in one and doesn't move in the other?

So to mend our solution, we need to create $$$newdp[n][j]$$$ denoting the number of permutations of length $$$n$$$ which need $$$j$$$ swaps to be sorted, and every single element moves from its position (there's no $$$p_i=i$$$.) How do we calculate it? One way is to do inclusion-exclusion on the $$$dp$$$ we already have! Suppose I start with all permutations which need $$$j$$$ swaps. Then, I fix one index, and I try to subtract the number of permutations which need $$$j$$$ swaps to be sorted after that index is fixed. There are $$$n$$$ ways to choose the index, and $$$dp[n-1][j]$$$ permutations, so we subtract $$$n*dp[n-1][j]$$$. But now permutations with $$$2$$$ fixed points are excluded twice, so we'll include them, and so on and so forth. In general, we'll fix $$$f$$$ indices in the permutation. There are $$$\binom{n}{f}$$$ ways to pick them, and then there are $$$dp[n-f][j]$$$ ways to pick the rest so that we need $$$j$$$ swaps. Hence: $$$newdp[n][j]=\sum\limits_{f=0}^{n} (-1)^f*\binom{n}{f}*dp[n-f][j]$$$. Phew!
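A sketch of this inclusion-exclusion, assuming a precomputed table $$$C$$$ of binomial coefficients and the $$$dp$$$ table from above:

#include <bits/stdc++.h>
using namespace std;
const long long MOD=1000000007; //assumed modulus

//newdp[n][j]: permutations needing j swaps where no element stays in place
long long newdp(int n,int j,const vector<vector<long long>>& dp,const vector<vector<long long>>& C)
{
    long long res=0;
    for (int f=0;f<=n;f++) //f = number of fixed points we fix
    {
        long long term=C[n][f]*dp[n-f][j]%MOD;
        res=(res+(f%2?MOD-term:term))%MOD;
    }
    return res;
}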

If you have no idea what the hell I was talking about in the inclusion-exclusion part, try this problem first.

Code link: https://pastebin.com/3CzuGvtw

Crazy solution

Let $$$[l;r]$$$ denote the set of the integers between $$$l$$$ and $$$r$$$ (inclusive.)

Let's try to calculate $$$dp[2n]$$$ from $$$dp[n]$$$. To do that, we need to understand our $$$dp$$$ a bit further. Recall that $$$dp[n][j]=dp[n-1][j]+(n-1)*dp[n-1][j-1]$$$. Let's think about what happens as you go down the recurrence. When you're at index $$$n$$$, either you skip it and move to $$$n-1$$$, or you multiply by $$$n-1$$$. But you do that exactly $$$j$$$ times, since $$$j$$$ decreases every time you do it. So, this $$$dp$$$ basically iterates over every subset of $$$[0;n-1]$$$ of size $$$j$$$, takes its product, and sums up!

$$$dp[n][j]=\sum\limits_{s \subset [0;n-1],|s|=j} s_1*s_2 \ldots *s_j$$$

Now, let's use this new understanding to try and calculate $$$dp[2n]$$$ from $$$dp[n]$$$. Suppose I pick a subset of $$$[0;2n-1]$$$. Then, a part of it will be in $$$[0;n-1]$$$ and a part will be in $$$[n;2n-1]$$$. I'll call the first part small and the second part big. So, to account for every subset of length $$$j$$$, take every subset of length $$$j_2$$$ of the big elements, multiply it by a subset of length $$$j-j_2$$$ of the small elements, and sum up. This is just normal polynomial multiplication!

Let $$$big[n][j]$$$ denote the sum of the products of the subsets of length $$$j$$$ of the big elements. That is:

$$$big[n][j]=\sum\limits_{s \subset [n;2n-1],|s|=j} s_1*s_2 \ldots *s_j$$$

Then, the polynomial multiplication between $$$dp[n]$$$ and $$$big[n]$$$ gives $$$dp[2n]$$$! How do we calculate $$$big$$$ though?

Notice that every big element is a small element plus $$$n$$$. So we can instead pick a subset of the small elements and add $$$n$$$ to each element in it. This transforms the formula to:

$$$big[n][j]=\sum\limits_{s \subset [0;n-1],|s|=j} (s_1+n)*(s_2+n) \ldots *(s_j+n)$$$

Let's expand this summand. What will every term in the expansion look like? Well, it will be a subset of length $$$l$$$ from our subset of length $$$j$$$, multiplied by $$$n^{j-l}$$$. Now, let's think about this backwards. Instead of picking a subset of length $$$j$$$ and then picking a subset of length $$$l$$$ from it, let's pick the subset of length $$$l$$$ first, and then see the number of ways to expand it into a subset of length $$$j$$$. Well, there are $$$n-l$$$ elements left, and you should pick $$$j-l$$$ elements from them, so there are $$$\binom{n-l}{j-l}$$$ ways to expand. That gives us:

$$$big[n][j]=\sum\limits_{l=0}^{j} \binom{n-l}{j-l}*n^{j-l}* \sum\limits_{s \subset [0;n-1],|s|=l} s_1*s_2 \ldots *s_l$$$

But the interior sum is just $$$dp[n][l]$$$! Hurray! So we can finally calculate $$$big[n][j]$$$ to be:

$$$big[n][j]=\sum\limits_{l=0}^{j} \binom{n-l}{j-l}*n^{j-l}*dp[n][l]$$$

And then polynomial multiplication with $$$dp[n]$$$ itself would give $$$dp[2n]$$$. Since you can move to $$$dp[n+1]$$$ and to $$$dp[2n]$$$, you can reach any $$$n$$$ you want in $$$O(log(n))$$$ iterations using its binary representation.

Code link: https://pastebin.com/yWgs3Ji6

Bonus task-ish: the above solutions can be made to work with $$$k \le 2000$$$ with a bit of ugly implementation, but I don't know how to solve the problem with $$$k \le 10^5$$$. Can anyone do it? The sane solution seems far off, and I don't know if it's possible to do the convolution from $$$dp$$$ to $$$big$$$ quickly in the crazy one.


By mohammedehab2002, 4 years ago, In English

It's me again!

Codeforces round #717 will take place on Apr/21/2021 16:35 (Moscow time), an hour earlier than the usual timing. It's rated for the second division, but, as usual, first division participants can take part out of competition.

The problems were created by me, and some of them are based on the Egyptian IOI qualification. I'd like to thank:

You'll be given 5 problems, but this time you'll have 2 hours to solve them.

UPD: the scoring distribution will be 500-1000-1500-2000-2750.

UPD: the editorial.

UPD: congratulations to the winners:

Div.1:-

  1. LJC00118
  2. antontrygubO_o because he has to get a mention
  3. Mehrdad_Sohrabi
  4. zscoder
  5. SSRS_

Div.2:-

  1. ShuijunDainifei
  2. Dolodu123
  3. alya_wow
  4. PolarFlea
  5. Kazaoka_Mari

Good luck & Have fun :D


By mohammedehab2002, 4 years ago, In English

1514A - Perfectly Imperfect Array

If any element is not a perfect square, the answer is yes. Otherwise, the answer is no, because $$$a^2*b^2*...=(a*b*...)^2$$$.

Code link: https://pastebin.com/s83sFt3G

1514B - AND 0, Sum Big

Let's start with an array where every single bit in every single element is $$$1$$$. It clearly doesn't have bitwise-and equal to $$$0$$$, so for each bit, we need to turn it off (make it $$$0$$$) in at least one of the elements. However, we can't turn it off in more than one element, since the sum would then decrease for no reason. So for every bit, we should choose exactly one element and turn it off there. Since there are $$$k$$$ bits and $$$n$$$ elements, the answer is just $$$n^k$$$.
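A minimal sketch, assuming $$$t$$$ test cases and answers modulo $$$10^9+7$$$:

#include <bits/stdc++.h>
using namespace std;
const long long MOD=1000000007; //assumed modulus

int main()
{
    int t;
    scanf("%d",&t);
    while (t--)
    {
        long long n,k,ans=1;
        scanf("%lld %lld",&n,&k);
        while (k--) //n choices (who gets the 0) for each of the k bits
            ans=ans*(n%MOD)%MOD;
        printf("%lld\n",ans);
    }
}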

Code link: https://pastebin.com/0D8yL5WW

1514C - Product 1 Modulo N

So first observe that the subsequence can't contain any element that isn't coprime with $$$n$$$. Why? Because then its product won't be coprime with $$$n$$$, so when you take it modulo $$$n$$$, it can't be $$$1$$$. In mathier words, $$$gcd(prod \space mod \space n,n)=gcd(prod,n) \neq 1$$$. Now, let's take all elements less than $$$n$$$ and coprime with it, and let's look at their product modulo $$$n$$$; call it $$$p$$$. If $$$p$$$ is $$$1$$$, you can take all these elements. Otherwise, you should take them all except for $$$p$$$. Note that $$$p$$$ is one of them: it's less than $$$n$$$, and it's coprime with $$$n$$$ because $$$gcd(p \space mod \space n,n)=gcd(p,n)=1$$$, as all the elements in the product are coprime with $$$n$$$. Removing $$$p$$$ divides the product by $$$p$$$, and since $$$p$$$ is invertible modulo $$$n$$$, the remaining product becomes $$$1$$$ modulo $$$n$$$.
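A minimal sketch, using the fact that the candidates are exactly $$$1, 2, \ldots, n-1$$$ and printing the subsequence's length followed by its elements:

#include <bits/stdc++.h>
using namespace std;

int main()
{
    int n;
    scanf("%d",&n);
    vector<int> take;
    long long p=1;
    for (int i=1;i<n;i++)
        if (__gcd(i,n)==1)
        {
            take.push_back(i);
            p=p*i%n;
        }
    vector<int> ans;
    for (int x:take) //drop the element equal to p (when p isn't 1)
        if (p==1 || x!=p)
            ans.push_back(x);
    printf("%d\n",(int)ans.size());
    for (int x:ans)
        printf("%d ",x);
    printf("\n");
}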

Code link: https://pastebin.com/JnjZ3SQm

Bonus task: solve it for $$$n \le 10^{12}$$$.

1514D - Cut and Stick

Suppose the query-interval has length $$$m$$$. Let's call an element super-frequent if it occurs more than $$$\lceil\frac{m}{2}\rceil$$$ times in it; let its frequency be $$$f$$$. If there's no super-frequent element, then we can just put all the elements in $$$1$$$ subsequence. Otherwise, we need the partitioning. Let's call the rest of the elements (other than the super-frequent one) good elements. One way to partition is to put all the $$$m-f$$$ good elements with $$$m-f+1$$$ occurrences of the super-frequent element; then, put every remaining occurrence of the super-frequent element in a subsequence on its own. The number of subsequences we need here is then $$$1+f-(m-f+1)=2*f-m$$$. There's no way to improve this, because: for every subsequence we add, the number of occurrences of the super-frequent element minus the number of good elements is at most $$$1$$$, so by making it exactly $$$1$$$ in each subsequence, we get an optimal construction. Now, the problem boils down to calculating $$$f$$$. Note that calculating the most frequent element in general is a well-known slow problem. It's usually solved with Mo's algorithm in $$$O(n\sqrt{n}log(n))$$$, maybe with a tricky optimization to $$$O(n\sqrt{n})$$$. However, notice that we only need the most frequent element if it occurs more than $$$\lceil\frac{m}{2}\rceil$$$ times. How can we use this fact?

Randomized solution

We can pick ~$$$40$$$ random elements from our range to be candidates for the super-frequent element, then count their occurrences and maximize. If there's a super-frequent element, the probability it's not picked is at most $$$2^{-40}$$$, which is incredibly small.

To count the number of occurrences of an element in a range, we can carry for each element a vector containing all the positions it occurs in increasing order. Then, upper_bound(r)-lower_bound(l) gives us the number of occurrences in $$$O(log(n))$$$.
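A sketch of that counting primitive (the array bound and the 1-based positions are assumptions):

#include <bits/stdc++.h>
using namespace std;

//occ[v] holds every position where value v occurs, in increasing order
//(fill it while reading the array: occ[a[i]].push_back(i))
vector<int> occ[300005];

//number of occurrences of v inside [l,r]
int countOcc(int v,int l,int r)
{
    return upper_bound(occ[v].begin(),occ[v].end(),r)-lower_bound(occ[v].begin(),occ[v].end(),l);
}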

Code link: https://pastebin.com/APHEtfge

Deterministic solution

Observe that if a range has a super-frequent element, and we split it into $$$2$$$ ranges, then this element must be super-frequent in one of them. Now suppose we create a segment tree where every node $$$[l;r]$$$ returns an element in the range, and suppose every node merges the $$$2$$$ elements returned by its children as follows: count their occurrences in $$$[l;r]$$$ and pick whichever occurs more. In general, that doesn't return the most frequent element. However, if there's a super-frequent element, it must return it! That's because if there's a super-frequent element in $$$[l;r]$$$, it must be super-frequent in one of its children, so by induction the segment tree works. The time complexity is $$$O(nlog^2(n))$$$.

Code link: https://pastebin.com/qeGxT0W2

There are also tons of $$$O(n\sqrt{n})$$$ solutions that should pass if the operations are simple enough.

Bonus task: solve it in $$$O(nlog(n))$$$.

1514E - Baby Ehab's Hyper Apartment

Throughout the editorial, I'll call the first type of queries OneEdge and the second type ManyEdges.

The basic idea behind this problem is to find a few edges such that every path that could be traversed in your graph could be traversed using only these edges. With that motivation in mind, let's get started.

The first observation is: the graph has a hamiltonian path. To prove this, suppose you split the graph into $$$2$$$ halves, each containing some of the nodes. Then, we'll proceed by induction. Suppose each half has a hamiltonian path. I'll describe a way to merge them into one path. First, let's look at the first node in each path and ask about the edge between them. Suppose it's directed from the first to the second one. Then, I'll start my new merged path with the first node, remove it, and repeat. This works because no matter which node follows it, it sends an edge out to it. This is best described by the following picture:

We start with the $$$2$$$ hamiltonian paths we got by induction, then we query that red edge. We find that it's from the grey node to the white node. We then put our grey node as the start of the path and continue doing that with the rest of the nodes, and we don't care which node will follow it, because the edge is out from the grey node either way!

If everything falls into place in your mind, you should recognize that this algorithm is merge sort. We just run merge sort on the nodes of the graph, using the comparator OneEdge. That gives you a hamiltonian path in $$$nlog(n)$$$ queries.

Now that we have a hamiltonian path, every edge that goes forward in it is useless, since you can just walk down the path itself:

So let's focus on the edges going backwards. Suppose we iterate through the hamiltonian path from its end to its beginning, looking at the edges going back from the nodes we met. An edge going backwards from the current node is important only if it goes back further than any of the edges we've met so far. That's because we can take a (much less direct) route instead of this edge if it doesn't go so far back:

Now with a few more edges we can form all the paths! How do we get these edges? We can use $$$2$$$ pointers. Let's iterate through the hamiltonian path from its end to its beginning, carrying a pointer $$$p$$$ that tells us how far back the edges we met so far can go. To update $$$p$$$, let's ask ManyEdges from the node we're currently at, to the first $$$p$$$ nodes in the hamiltonian path. While the answer is $$$1$$$, we'll keep decreasing $$$p$$$. This algorithm calls ManyEdges $$$2n$$$ times, since every time we call it, it either returns $$$0$$$ and the node we're at decreases, or it returns $$$1$$$ and $$$p$$$ decreases.

Code link: https://pastebin.com/RU1yA697

Bonus task: can you get the hamiltonian path using insertion sort instead?


By mohammedehab2002, 4 years ago, In English

Hi everyone!

Codeforces round #716 will take place on Apr/19/2021 16:35 (Moscow time). It's rated for the second division, but, as usual, first division participants can take part out of competition.

The problems are based on the Egyptian IOI qualification, and they were created by me and mahmoudbadawy. I'd like to thank:

This may be the most and only balanced round I've ever set. You'll be given 5 problems and 2:15 hours to solve them.

UPD: the scoring distribution will be 500-1000-1500-2000-2500.

UPD: the editorial is out.

UPD: congratulations to the winners:

Div.1:-

  1. neal
  2. Savior-of-Cross
  3. Geothermal
  4. cookiedoth
  5. Tlatoani

Div.2:-

  1. alya_wow
  2. DatChelsea
  3. Trump_Constructs_China
  4. Dolodu123
  5. Kazaoka_Mari

Good luck & Have fun :D


By mohammedehab2002, history, 4 years ago, In English

Hey everyone!

I'd like to invite you to participate in the ECPC 2019 Kickoff contest on gym on Wednesday, 08:30 UTC. The contest was originally held in Cairo in 2019, and we woke up today and decided to publish it.

The problems were created by me, TripleM5da, KhaledKEE, Mohammad_Yasser, Dark, and Jester. The contest will include 14 problems, and you'll have 5 hours to solve them.

We hope you enjoy it :D


By mohammedehab2002, 4 years ago, In English

We invite you to participate in the CodeChef October Lunchtime — the 3-hour contest which offers 5 challenging problems to be solved, next Saturday, October 31st, 19:30 to 22:30 IST.

Also, if you have some original and engaging problem ideas, and you’re interested in them being used in CodeChef's contests, you can share them here.

The members of the problem setting panel are:

Prizes: The top 10 Indian and top 10 Global school students from ranklist will receive certificates and CodeChef laddus, with which they can claim cool CodeChef goodies. Know more here.

Video Editorials: You will now also be able to solve your doubts with the help of our amazing educators and their helpful video editorials. Don't forget to check out our YouTube Channel soon after the contest ends. Hit the Subscribe button so you don’t miss out on any video editorials in the future.

Good luck and have fun!

UPD: Thanks everyone for participating! Here are some quick solution sketches. You can find more detailed editorials on CodeChef.

ANDOR
SUBMEXS
GSUB
COPAR
SSO
CDSUMS
EFLIP


By mohammedehab2002, history, 5 years ago, In English

1364A - XXXXX

Let's start with the whole array. If every element in it is divisible by $$$x$$$, the answer is $$$-1$$$; if its sum isn't divisible by $$$x$$$, the answer is $$$n$$$. Otherwise, we must remove some elements. The key idea is that removing an element that is divisible by $$$x$$$ doesn't benefit us, but once we remove an element that isn't, the sum won't be divisible by $$$x$$$. So let the first non-multiple of $$$x$$$ be at index $$$l$$$, and the last one be at index $$$r$$$. We must either remove the prefix all the way up to $$$l$$$ or the suffix all the way up to $$$r$$$, and we'll clearly remove whichever is shorter.

Code link: https://pastebin.com/j2Y8AJBA

Alternatively, we can notice that this means the answer is either a prefix or a suffix, so we can simply bruteforce them all.

1364B - Most socially-distanced subsequence

TL;DR the answer contains the first element, last element, and all the local minima and maxima, where a local minimum is an element less than its 2 adjacents, and a local maximum is an element greater than its 2 adjacents.

Let's look at the expression in the problem for 3 numbers. If $$$a>b$$$ and $$$b>c$$$ or if $$$a<b$$$ and $$$b<c$$$, $$$|a-b|+|b-c|=|a-c|$$$, so it's never optimal to use $$$a$$$, $$$b$$$, and $$$c$$$ in a row because you can use just $$$a$$$ and $$$c$$$ and achieve a shorter subsequence. If you keep erasing your $$$b$$$'s from the original permutation, you'll end up with the first element, the last element, and the local minima and maxima. You can see that erasing any of them would decrease the expression, so this is the optimal answer.
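A minimal sketch (assuming $$$t$$$ test cases and that we print the kept length followed by the subsequence):

#include <bits/stdc++.h>
using namespace std;

int main()
{
    int t;
    scanf("%d",&t);
    while (t--)
    {
        int n;
        scanf("%d",&n);
        vector<int> p(n);
        for (auto &x:p)
            scanf("%d",&x);
        vector<int> ans;
        for (int i=0;i<n;i++) //keep the endpoints and every local minimum/maximum
            if (i==0 || i==n-1 || (p[i]>p[i-1])!=(p[i+1]>p[i]))
                ans.push_back(p[i]);
        printf("%d\n",(int)ans.size());
        for (int x:ans)
            printf("%d ",x);
        printf("\n");
    }
}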

Code link: https://pastebin.com/e2HHuKFY

1364C - Ehab and Prefix MEXs

The key observation is: if for some index $$$i$$$, $$$a_i \neq a_{i-1}$$$, then $$$b_i$$$ must be equal to $$$a_{i-1}$$$, since it's the only way to even change the prefix $$$MEX$$$. We can use this observation to fill some indices of $$$b$$$. Now, how do we fill the rest? Something special happens if we avoid using any element from $$$a$$$ again: if we look at the first $$$i$$$ numbers in $$$b$$$, $$$a_i$$$ will indeed be excluded, so $$$MEX({b_1, b_2, \ldots, b_i}) \le a_i$$$. Now we need to make it as big as possible. The logical thing to do is to fill the rest of $$$b$$$ with the numbers not in $$$a$$$ in increasing order. It turns out this construction always satisfies the conditions. Indeed, if we look at the first $$$i$$$ elements in $$$b$$$, every element less than $$$a_i$$$ will be present because $$$a_i \le i$$$ and we added the rest of the elements in increasing order.

Code link: https://pastebin.com/x9VtuBym

1364D - Ehab's Last Corollary

The common idea is: if the graph is a tree, you can easily find an independent set with size $$$\lceil\frac{n}{2}\rceil$$$ by bicoloring the vertices and taking the vertices from the more frequent color. Otherwise, the graph is cyclic. Let's get a cycle that doesn't have any edges "cutting through it." In other words, it doesn't have any pair of non-adjacent vertices connected by an edge. If its length is at most $$$k$$$, print it. Otherwise, take every other vertex (take a vertex and leave a vertex) and you'll end up with a big enough independent set. How to find such cycle?

First solution

Let's do a dfs in our graph. The very first time we hit a node that has a back-edge, we take the back-edge that goes to the deepest possible node to close our cycle. This cycle can't have any edges crossing it because none of our node's ancestors has a back-edge (by definition.)

Code link: https://pastebin.com/wsCXuzGy

Second solution

Let's get any cycle in the graph. Now, let's iterate over the edges that don't belong to the cycle. Whenever we meet one that "crosses the cycle," we use it to cut the cycle into 2 cycles with smaller length and keep either of them. When we finish, we'll have our desired cycle.

Code link: https://pastebin.com/ezwEURKW

1364E - X-OR

The common idea is: if we find the index that contains $$$0$$$, we can query it with every element in $$$p$$$ and finish in $$$n$$$ queries (if you didn't do that, pleaaase share your solution.) How to get this index?

First solution

Let's try to make a magic function that takes an index $$$i$$$ and tells us $$$p_i$$$. Assume you have an array $$$z$$$ such that $$$z_j$$$ is some index in the permutation whose value has a $$$0$$$ in the $$$j^{th}$$$ bit. Building our magic function with it turns out to be very easy. We'll just return $$$query(i,z_0)$$$&$$$query(i,z_1)$$$&$$$\ldots$$$&$$$query(i,z_{10})$$$. Why does that work? If $$$p_i$$$ has a $$$1$$$ in the $$$j^{th}$$$ bit, this expression will also have a $$$1$$$ because $$$p_i$$$ will make every single clause have a $$$1$$$. If it has a $$$0$$$, $$$query(i,z_j)$$$ will also have a $$$0$$$, making the whole expression have a $$$0$$$!

But how do we find $$$z$$$? This turns out to be very easy. We'll query random pairs of indices, see where the result has a $$$0$$$, and update $$$z$$$. We stop once we fill every index in $$$z$$$. This works quickly because for any bit, at least half the numbers from $$$0$$$ to $$$n-1$$$ will have a $$$0$$$.

Now we already have an $$$nlog(n)$$$ solution (call our magic function with every index,) but how to make fewer calls? Let's carry an index $$$idx$$$ that's supposed to have the index of $$$0$$$ in the end, and let $$$p_{idx}$$$ be stored in $$$val$$$. Initially, $$$idx$$$ is $$$1$$$ and $$$val$$$ could be found with our magic function. Now, let's iterate over the permutation. We'll query the current index, $$$i$$$, with $$$idx$$$. If the result isn't a subset of $$$val$$$, $$$p_i$$$ can't be $$$0$$$, so let's throw it in the trash. Otherwise, we'll make $$$idx$$$ equal to $$$i$$$ and use our magic function to update $$$val$$$.

Code link: https://pastebin.com/kBQGrEqP


Second solution

Thanks, Utkarsh.25dec for this solution.

I'll describe a way to start with $$$n$$$ candidates to be $$$0$$$ and end up with $$$\sqrt{n}$$$ candidates. Let's query random pairs until we find a pair whose bitwise-or has at most $$$\frac{log(n)}{2}$$$ set bits. Take one of the 2 indices in the pair (let's call it $$$i$$$) and query it with every candidate you have, and take the bitwise-and of the results. That will give you $$$p_i$$$. Now, let's make the numbers whose query result with $$$i$$$ is $$$p_i$$$ (hence, a subset of $$$p_i$$$) our new candidates. Since $$$p_i$$$ has at most $$$\frac{log(n)}{2}$$$ ones, the number of its subsets is $$$\sqrt{n}$$$, and we have our desired result!

Now, to find the index of $$$0$$$, we'll just do this recursively until we have 2 candidates. We'll keep querying them with random indices until the results differ. The one giving a smaller result is our $$$0$$$.

Code link: https://pastebin.com/zMV5CPAz


Third solution

Thanks, Mohammad_Yasser for this solution.

Assume you have 2 candidates for $$$0$$$ called $$$a$$$ and $$$b$$$ such that one of them is the index of $$$0$$$ at the end of our algorithm, and we always know $$$(p_a|p_b)$$$. Let's iterate over our indices in a random order and try to update $$$a$$$ and $$$b$$$. Assume the current index is $$$c$$$. Let's query to get $$$(p_b|p_c)$$$. We have 3 cases:

  • If $$$(p_a|p_b)<(p_b|p_c)$$$, $$$p_c$$$ can't be $$$0$$$, so we'll throw it away.
  • If $$$(p_a|p_b)>(p_b|p_c)$$$, $$$p_a$$$ can't be $$$0$$$, so we'll throw it away and change $$$a$$$ to be $$$c$$$.
  • Otherwise, $$$p_b$$$ can't be $$$0$$$ because that would imply $$$p_a=p_c$$$ (recall that $$$p$$$ is a permutation.) So we can throw it away and change $$$b$$$ to be $$$c$$$. But notice that we now don't know $$$(p_a|p_b)$$$, so we're gonna have to make one more query, since we need to keep track of it.

After we're done, we can narrow our 2 candidates down to 1 with the same way described in the previous solution.

Code link: https://pastebin.com/Trifp8p3



By mohammedehab2002, 5 years ago, In English

Hi everyone!

Codeforces round #649 will take place on Jun/13/2020 18:05 (Moscow time). It's rated for the second division, but, as usual, first division participants can take part out of competition.

The problems were created by me. I'd like to thank my forever orzable coordinator antontrygubO_o; my incredible army of testers dorijanlendvaj, 300iq, Osama_Alkhodairy, AmShZ, taran_1407, TigranMec, _Aaryan_, Mohammad_Yasser, zoooma13, lavish315, Utkarsh.25dec, far_from_NOOB, and Laggy; and, of course, you-know-who for the amazing codeforces and polygon platforms.

This time, in an effort to kill type-races and because I'm lazy, you'll be given 5 problems and 2 hours to solve them.

UPD: the scoring distribution will be 750-1000-1500-2000-2500.

UPD: the editorial is out.

UPD: congratulations to the winners!

Div.1:-

  1. Um_nik
  2. 244mhq
  3. hank55663
  4. noimi
  5. neal

Div.2:-

  1. SanyaSaske
  2. Chloristendika
  3. el_risitas
  4. Ehaam
  5. RinkaSnow

Good luck & Have fun :D


By mohammedehab2002, history, 5 years ago, In English

Hi!

Codeforces round #628 (vintage Codeforces round #2) will take place on 14.03.2020 17:35 (Moscow time). It's rated for the second division, but, as usual, first division participants can take part out of competition.

The problems were created by me. I'd like to thank, and orz, antontrygubO_o for coordinating the round; ramchandra, pajenegod, aryanc403, taran_1407, Kuroni, mcdx9524, dorijanlendvaj, Andreasyan, 300iq, zoooma13, Osama_Alkhodairy, Mohammad_Yasser, and TripleM5da (special shout-out) for testing the round; and of course MikeMirzayanov for the great codeforces and polygon platforms.

You'll be given 6 problems and 2 hours to solve them.

UPD: the scoring distribution will be 500-750-1250-1750-2500-2750.

UPD: the editorial is out.

UPD: congratulations to the winners!

Div.1:-

  1. tmwilliamlin168
  2. jqdai0815
  3. Um_nik
  4. imeimi
  5. HIR180

Div.2:-

  1. davooddkareshki
  2. rainboy
  3. PouyaNavid
  4. _Lucien
  5. socho

Good luck & Have fun!


By mohammedehab2002, history, 5 years ago, In English

1325A - EhAb AnD gCd

$$$a=1$$$ and $$$b=x-1$$$ always work.

Code link: https://pastebin.com/ddHKD09B

First AC: sevlll777

Bonus task: can you count the valid pairs?

1325B - CopyCopyCopyCopyCopy

Let the number of distinct elements in $$$a$$$ be called $$$d$$$. Clearly, the answer is limited by $$$d$$$. Now, you can construct your subsequence as follows: take the smallest element from the first copy, the second smallest element from the second copy, and so on. Since there are enough copies to take every element, the answer is $$$d$$$.

Code link: https://pastebin.com/hjcxUDmY

First AC: socho

1325C - Ehab and Path-etic MEXs

Notice that there will be a path that passes through the edge labeled $$$0$$$ and the edge labeled $$$1$$$ no matter how you label the edges, so there's always a path with $$$MEX$$$ $$$2$$$ or more. If any node has degree 3 or more, you can distribute the labels $$$0$$$, $$$1$$$, and $$$2$$$ to edges incident to this node and distribute the rest of the labels arbitrarily; since no simple path can pass through all 3 of these edges, every path will have $$$MEX$$$ at most $$$2$$$. Otherwise, the tree is a bamboo, and it doesn't actually matter how you label the edges, since there will be a path with $$$MEX$$$ $$$n-1$$$ anyway.

Code link: https://pastebin.com/u4H7Dtbd

First AC: vintage_Vlad_Makeev

1325D - Ehab the Xorcist

First, let's look at some special cases. If $$$u>v$$$ or $$$u$$$ and $$$v$$$ have different parities, there's no array. If $$$u=v=0$$$, the answer is an empty array. If $$$u=v \neq 0$$$, the answer is $$$[u]$$$. Now, the length is at least 2. Let $$$x=\frac{v-u}{2}$$$. The array $$$[u,x,x]$$$ satisfies the conditions, so the length is at most 3. We just need to figure out whether there's a pair of numbers $$$a$$$ and $$$b$$$ such that $$$a \oplus b=u$$$ and $$$a+b=v$$$. Notice that $$$a+b=a \oplus b+2*(a$$$&$$$b)$$$, so we know that $$$a$$$&$$$b=\frac{v-u}{2}=x$$$ (surprise surprise.) The benefit of getting rid of $$$a+b$$$ and looking at $$$a$$$&$$$b$$$ instead is that we can look at $$$a$$$ and $$$b$$$ bit by bit. If $$$x$$$ has a one in some bit, $$$a$$$ and $$$b$$$ must both have ones, so $$$a \oplus b=u$$$ must have a 0. If $$$x$$$ has a zero, there are absolutely no limitations on $$$u$$$. So, if there's a bit where both $$$x$$$ and $$$u$$$ have a one, that is to say if $$$x$$$&$$$u\neq0$$$, you can't find such $$$a$$$ and $$$b$$$, and the length will be 3. Otherwise, $$$x$$$&$$$u=0$$$ which means $$$x \oplus u=x+u$$$, so the array $$$[u+x,x]$$$ works. Can you see how this array was constructed? We took $$$[u,x,x]$$$ which more obviously works and merged the first 2 elements, benefiting from the fact that $$$u$$$&$$$x=0$$$.
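A minimal sketch of the whole case analysis:

#include <bits/stdc++.h>
using namespace std;

int main()
{
    long long u,v;
    scanf("%lld %lld",&u,&v);
    if (u>v || (u%2)!=(v%2))
        printf("-1\n");
    else if (u==0 && v==0)
        printf("0\n");
    else if (u==v)
        printf("1\n%lld\n",u);
    else
    {
        long long x=(v-u)/2;
        if (u&x) //can't merge, we need 3 elements
            printf("3\n%lld %lld %lld\n",u,x,x);
        else //u&x=0, so u+x=u^x and 2 elements suffice
            printf("2\n%lld %lld\n",u+x,x);
    }
}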

Code link: https://pastebin.com/7XuMk1v8

First AC: jqdai0815

1325E - Ehab's REAL Number Theory Problem

Notice that for each element in the array, if some perfect square divides it, you can divide it by that perfect square, and the problem won't change. Let's define normalizing a number as dividing it by perfect squares until it doesn't contain any. Notice that any number that has 3 different prime divisors has at least 8 divisors, so after normalizing any element in the array, it will be $$$1$$$, $$$p$$$, or $$$p*q$$$ for some primes $$$p$$$ and $$$q$$$. Let's create a graph where the vertices are the prime numbers (and $$$1$$$,) and the edges are the elements of the array. For each element, we'll connect $$$p$$$ and $$$q$$$ (or $$$p$$$ and $$$1$$$ if it's a prime after normalizing, or $$$1$$$ with $$$1$$$ if it's $$$1$$$ after normalizing.) What's the significance of this graph? Well, if you take any walk from node $$$p$$$ to node $$$q$$$, multiply the elements on the edges you took, and normalize, the product you get will be $$$p*q$$$! That's because every node in the path will be visited an even number of times, except $$$p$$$ and $$$q$$$. So the shortest subsequence whose product is a perfect square is just the shortest cycle in this graph!

The shortest cycle in an arbitrary graph takes $$$O(n^2)$$$ to compute: you take every node as a source and calculate the bfs tree, then you look at the edges that go back to the root to close the cycle. That only finds the shortest cycle if the bfs source is contained in one. The graph in this problem has a special condition: you can't connect 2 nodes with indices greater than $$$\sqrt{maxAi}$$$. That's because their product would be greater than $$$maxAi$$$. So that means ANY walk in this graph has a node with index $$$\le\sqrt{maxAi}$$$. You can only try these nodes as sources for your bfs.

Code link: https://pastebin.com/4ixLQyvg

Bonus task: try to prove the answer can't exceed $$$2\sqrt{maxAi}$$$.

1325F - Ehab's Last Theorem

Let $$$sq$$$ denote $$$\lceil\sqrt{n}\rceil$$$.

A solution using DFS trees

If you're not familiar with back-edges, I recommend reading this first.

Let's take the DFS tree of our graph. Assume you're currently in node $$$u$$$ in the DFS. If $$$u$$$ has $$$sq-1$$$ or more back-edges, look at the one that connects $$$u$$$ to its furthest ancestor. It forms a cycle of length at least $$$sq$$$. If $$$u$$$ doesn't have that many back-edges, you can add it to the independent set (if none of its neighbors was added.) That way, if you don't find a cycle, every node only blocks at most $$$sq-1$$$ other nodes, the ones connected to it by a back-edge, so you'll find an independent set!

Code link: https://pastebin.com/b9phCSW8

First AC: imeimi

A solution using degrees

There's a pretty obvious greedy algorithm for finding large independent sets: take the node with the minimal degree, add it to the independent set, remove it and all its neighbors from the graph, and repeat. If at every step the node with the minimal degree has degree $$$<sq-1$$$, that algorithm solves the first problem. Otherwise, there's a step where EVERY node has degree at least $$$sq-1$$$. For graphs where every node has degree at least $$$d$$$, you can always find a cycle with length $$$d+1$$$. To do that, we'll first try to find a long path then close a cycle. Take an arbitrary node as a starting point, and keep trying to extend your path. If one of this node's neighbors is not already in the path, extend that path with it and repeat. Otherwise, all of the last node's $$$d$$$ neighbors are on the path. Take the edge to the furthest and you'll form a cycle of length at least $$$d+1$$$!

Code link: https://pastebin.com/1VwZYPCj

First AC: imeimi after only 11 minutes!

There are probably other solutions and heuristics. Can you share yours?


By mohammedehab2002, 6 years ago, In English

1174A - Ehab Fails to Be Thanos

If all elements in the array are equal, there's no solution. Otherwise, sort the array. The sum of the second half will indeed be greater than that of the first half.

Another solution is to see if they already have different sums. If they do, print the array as it is. Otherwise, find any pair of different elements from different halves and swap them.

Code link: https://pastebin.com/FDXTuDdZ

1174B - Ehab Is an Odd Person

Notice that you can only swap 2 elements if they have different parities. If all elements in the array have the same parity, you can't do any swaps, and the answer will just be like the input. Otherwise, let's prove you can actually swap any pair of elements. Assume you want to swap 2 elements, $$$a$$$ and $$$b$$$, and they have the same parity. There must be a third element $$$c$$$ that has a different parity. Without loss of generality, assume the array is $$$[a,b,c]$$$. You'll do the following swaps:

  • Swap $$$a$$$ and $$$c$$$: $$$[c,b,a]$$$.
  • Swap $$$b$$$ and $$$c$$$: $$$[b,c,a]$$$.
  • Swap $$$a$$$ and $$$c$$$: $$$[b,a,c]$$$.

In other words, you'll use $$$c$$$ as an intermediate element to swap $$$a$$$ and $$$b$$$, and it'll return to its original position afterwards! Since you can swap any pair of elements, you can always sort the array, which gives the lexicographically smallest answer.

Code link: https://pastebin.com/xhqGXLiu

Time complexity: $$$O(nlog(n))$$$.

1174C - Ehab and a Special Coloring Problem

Let's call the maximum value in the array $$$max$$$. Let the number of primes less than or equal to $$$n$$$ be called $$$p$$$. Then, $$$max \ge p$$$. That's true because a distinct number must be assigned to each prime, since all primes are coprime to each other. Now if we can construct an answer wherein $$$max=p$$$, it'll be optimal. Let's first assign a distinct number to each prime. Then, assign to every composite number the same number as any of its prime divisors. This works because for any pair of numbers $$$(i,j)$$$, each of them is given the same number as one of its divisors, so if they're coprime (don't share a divisor), they can't be given the same number!
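A minimal sketch using a sieve: every prime gets a fresh number, and every composite inherits the number of its smallest prime factor (printing $$$a_2, \ldots, a_n$$$):

#include <bits/stdc++.h>
using namespace std;

int main()
{
    int n;
    scanf("%d",&n);
    vector<int> color(n+1,0);
    int cnt=0;
    for (int i=2;i<=n;i++)
        if (!color[i]) //i is prime: give it a fresh number
        {
            cnt++;
            for (int j=i;j<=n;j+=i)
                if (!color[j]) //composites inherit their smallest prime's number
                    color[j]=cnt;
        }
    for (int i=2;i<=n;i++)
        printf("%d ",color[i]);
    printf("\n");
}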

Code link: https://pastebin.com/tDbgtnC8

Time complexity: $$$O(nlog(n))$$$.

1174D - Ehab and the Expected XOR Problem

The main idea is to build the prefix-xor of the array, not the array itself, then build the array from it. Let the prefix-xor array be called $$$b$$$. Now, $$$a_l \oplus a_{l+1} \dots \oplus a_r=b_{l-1} \oplus b_{r}$$$. Thus, the problem becomes: construct an array such that no pair of numbers has bitwise-xor sum equal to 0 or $$$x$$$, and its length should be maximal. Notice that "no pair of numbers has bitwise-xor sum equal to 0" simply means "you can't use the same number twice". If $$$x \ge 2^n$$$, no pair of numbers less than $$$2^n$$$ will have bitwise-xor sum equal to $$$x$$$, so you can just use all the numbers from 1 to $$$2^n-1$$$ in any order. Otherwise, you can think of the numbers forming pairs, where each pair consists of 2 numbers with bitwise-xor sum equal to $$$x$$$. From any pair, if you add one number to the array, you can't add the other. However, the pairs are independent from each other: your choice in one pair doesn't affect any other pair. Thus, you can just choose either number in any pair and add them in any order you want. After you construct $$$b$$$, you can construct $$$a$$$ using the formula: $$$a_i=b_i \oplus b_{i-1}$$$.
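A minimal sketch of this construction. One subtlety made explicit here: the empty prefix contributes $$$b_0=0$$$, so when $$$x < 2^n$$$, the value $$$x$$$ itself (the pair partner of $$$0$$$) can't be used either:

#include <bits/stdc++.h>
using namespace std;

int main()
{
    int n;
    long long x;
    scanf("%d %lld",&n,&x);
    vector<long long> b;
    vector<bool> used(1<<n,false);
    used[0]=true; //b_0=0 is an implicit prefix value
    if (x<(1<<n))
        used[x]=true; //0's partner
    for (int i=1;i<(1<<n);i++)
        if (!used[i])
        {
            b.push_back(i);
            used[i]=true;
            if ((i^x)<(1<<n))
                used[i^x]=true; //i's partner can no longer be used
        }
    printf("%d\n",(int)b.size());
    for (int i=0;i<(int)b.size();i++) //recover a from the prefix-xor array
        printf("%lld ",b[i]^(i?b[i-1]:0));
    printf("\n");
}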

Code link: https://pastebin.com/0gCLC0BP

Time complexity: $$$O(2^n)$$$.

1174E - Ehab and the Expected GCD Problem

Let's call the permutations from the statement good. For starters, we'll try to find some characteristics of good permutations. Let's call the first element in a good permutation $$$s$$$. Then, $$$s$$$ must have the maximal possible number of prime divisors. Also, every time the $$$gcd$$$ changes as you move along prefixes, you must drop only one prime divisor from it. That way, we guarantee we have as many distinct $$$gcd$$$s as possible. Now, there are 2 important observations concerning $$$s$$$:

Observation #1: $$$s=2^x*3^y$$$ for some $$$x$$$ and $$$y$$$. In other words, only $$$2$$$ and $$$3$$$ can divide $$$s$$$. That's because if $$$s$$$ has some prime divisor $$$p>3$$$, you can divide it by $$$p$$$ and multiply it by $$$4$$$. That way, you'll have more prime divisors.

Observation #2: $$$y \le 1$$$. That's because if $$$s=2^x*3^y$$$, and $$$y \ge 2$$$, you can instead replace it with $$$2^{x+3}*3^{y-2}$$$ (divide it by $$$9$$$ and multiply it by $$$8$$$), and you'll have more prime divisors.

Now, we can create $$$dp[i][x][y]$$$, the number of ways to fill a good permutation up to index $$$i$$$ such that its $$$gcd$$$ is $$$2^x*3^y$$$. Let $$$f(x,y)=\lfloor \frac{n}{2^x*3^y} \rfloor$$$. It means the number of multiples of $$$2^x*3^y$$$ less than or equal to $$$n$$$. Here are the transitions:

If your permutation is filled until index $$$i$$$ and its $$$gcd$$$ is $$$2^x*3^y$$$, you can do one of the following $$$3$$$ things upon choosing $$$p_{i+1}$$$:

  • Add a multiple of $$$2^x*3^y$$$. That way, the $$$gcd$$$ won't change. There are $$$f(x,y)$$$ numbers you can add, but you already added $$$i$$$ of them, so: $$$dp[i+1][x][y]=dp[i+1][x][y]+dp[i][x][y]*(f(x,y)-i)$$$.

  • Reduce $$$x$$$ by $$$1$$$. To do that, you can add a multiple of $$$2^{x-1}*3^y$$$ that isn't a multiple of $$$2^x*3^y$$$, so: $$$dp[i+1][x-1][y]=dp[i+1][x-1][y]+dp[i][x][y]*(f(x-1,y)-f(x,y))$$$.

  • Reduce $$$y$$$ by $$$1$$$. To do that, you can add a multiple of $$$2^x*3^{y-1}$$$ that isn't a multiple of $$$2^x*3^y$$$, so: $$$dp[i+1][x][y-1]=dp[i+1][x][y-1]+dp[i][x][y]*(f(x,y-1)-f(x,y))$$$.

As for the base case, let $$$x=\lfloor log_2(n) \rfloor$$$. You can always start with $$$2^x$$$, so $$$dp[1][x][0]=1$$$. Also, if $$$2^{x-1}*3 \le n$$$, you can start with it, so $$$dp[1][x-1][1]=1$$$. The answer is $$$dp[n][0][0]$$$.
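A sketch of this $$$dp$$$, rolling over $$$i$$$ and keeping only $$$y \le 1$$$ (the modulus $$$10^9+7$$$ is an assumption):

#include <bits/stdc++.h>
using namespace std;
const long long MOD=1000000007; //assumed modulus

int main()
{
    long long n;
    scanf("%lld",&n);
    int mx=0;
    while ((2LL<<mx)<=n)
        mx++; //mx = floor(log2(n))
    //f[x][y] = number of multiples of 2^x*3^y that are <= n
    vector<array<long long,2>> f(mx+2);
    for (int x=0;x<=mx+1;x++)
    {
        f[x][0]=n/(1LL<<x);
        f[x][1]=n/((1LL<<x)*3);
    }
    vector<array<long long,2>> dp(mx+2,{0,0}),ndp(mx+2,{0,0});
    dp[mx][0]=1; //start with 2^mx
    if (mx && f[mx-1][1])
        dp[mx-1][1]=1; //or with 2^(mx-1)*3 if it fits
    for (long long i=1;i<n;i++)
    {
        for (auto &row:ndp)
            row={0,0};
        for (int x=0;x<=mx;x++)
            for (int y=0;y<2;y++)
            {
                long long cur=dp[x][y];
                if (!cur)
                    continue;
                //keep the gcd: add one of the unused multiples of 2^x*3^y
                ndp[x][y]=(ndp[x][y]+cur*((f[x][y]-i)%MOD))%MOD;
                if (x) //drop one factor of 2
                    ndp[x-1][y]=(ndp[x-1][y]+cur*((f[x-1][y]-f[x][y])%MOD))%MOD;
                if (y) //drop the factor of 3
                    ndp[x][0]=(ndp[x][0]+cur*((f[x][0]-f[x][1])%MOD))%MOD;
            }
        swap(dp,ndp);
    }
    printf("%lld\n",dp[0][0]);
}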

Code link: https://pastebin.com/N8FRN9sA

Time complexity: $$$O(nlog(n))$$$.

1174F - Ehab and the Big Finale

Let $$$dep_a$$$ be the depth of node $$$a$$$ and $$$sz_a$$$ be the size of the subtree of node $$$a$$$. First, we'll query the distance between node 1 and node $$$x$$$ to know $$$dep_x$$$. The idea in the problem is to maintain a "search scope", some nodes such that $$$x$$$ is one of them, and to try to narrow it down with queries. From this point, I'll describe two solutions:

HLD solution:

Assume that your search scope is the subtree of some node $$$u$$$ (initially, $$$u$$$=1). How can we narrow it down efficiently? I'll pause here to add some definitions. The heavy child of a node $$$a$$$ is the child that has the maximal subtree size. The heavy path of node $$$a$$$ is the path that starts with node $$$a$$$ and every time moves to the heavy child of the current node. Now back to our algorithm. Let's get the heavy path of node $$$u$$$. Assume its other endpoint is node $$$v$$$. We know that a prefix of this path contains ancestors of node $$$x$$$. Let the deepest node in the path that is an ancestor of node $$$x$$$ be node $$$y$$$ (the last node in this prefix). I'll now add a drawing to help you visualize the situation.

So, recapping, $$$u$$$ is the root of your search scope, $$$v$$$ is the endpoint of the heavy path starting from $$$u$$$, $$$x$$$ is the hidden node, and $$$y$$$ the last ancestor of $$$x$$$ in the heavy path. Notice that $$$y$$$ is $$$lca(x,v)$$$. Now, we know that $$$dist(x,v)=dep_x+dep_v-2*dep_y$$$. Since we know $$$dep_x$$$, and we know $$$dep_v$$$, we can query $$$dist(x,v)$$$ to find $$$dep_y$$$. Since all nodes in the path have different depths, that means we know $$$y$$$ itself!


Now, if $$$dep_x=dep_y$$$, $$$x=y$$$, so we found the answer. Otherwise, we know, by definition, that $$$y$$$ is an ancestor of $$$x$$$, so it's safe to use the second query type. Let the answer be node $$$l$$$. We can repeat the algorithm with $$$u=l$$$! How long will this algorithm take? Note that $$$l$$$ can't be the heavy child of $$$y$$$ (because $$$y$$$ is the last ancestor of $$$x$$$ in the heavy path), so $$$sz_l \le \lfloor\frac{sz_y}{2} \rfloor$$$, since it's well-known that only the heavy child can break that condition. So with only 2 queries, we managed to cut down at least half of our search scope! So this algorithm does no more than $$$34$$$ queries (actually $$$32$$$ under these constraints, but that's just a small technicality).

Code link: https://pastebin.com/CM8QwdUf

Centroid decomposition solution:

As I said, assume we have a search scope. Let's get the centroid, $$$c$$$, of that search scope. If you don't know, the centroid is a node that, if removed, the tree will be broken down to components, and each component's size will be at most half the size of the original tree. Now, $$$c$$$ may or may not be an ancestor of $$$x$$$. How can we determine that? Let's query $$$dist(c,x)$$$. $$$c$$$ is an ancestor of $$$x$$$ if and only if $$$dep_c+dist(c,x)=dep_x$$$. Now, if $$$c$$$ is an ancestor of $$$x$$$, we can safely query the second node on the path between them. Let the answer be $$$s$$$, then its component will be the new search scope. What if $$$c$$$ isn't an ancestor of $$$x$$$? That means node $$$x$$$ can't be in the subtree of $$$c$$$, so it must be in the component of $$$c$$$'s parent. We'll make the component of $$$c$$$'s parent the new search scope! Every time, the size of our search scope is, at least, halved, so the solution does at most $$$36$$$ queries.

Code link: https://pastebin.com/hCNW5BfQ


By mohammedehab2002, 6 years ago, In English

Hi!

I'm back with not one, not two, but three contests, although I have no promises about when to expect them....

The first of them, codeforces round #563, will take place on 03.06.2019 17:05 (Moscow time). It's rated for the second division, but, as usual, first division participants can take part out of competition.

I'm the problemsetter of the round. I'd like to thank KAN for coordinating the round (and his patience .. try coordinating ~20 problems), arsijo for helping with the preparation, Um_nik, _overrated_, Aleks5d, wiwitrifai, pllk, Bedge, Ivan19981305, and PrianishnikovaRina for testing the round, and MikeMirzayanov for the great codeforces and polygon platforms.

In this round, you'll be given 6 problems and 2 hours to solve them.

UPD: I decided to drop the 3 seconds rule. The scoring distribution is 500-1000-1500-1750-2500-2500. That means you should probably read both E and F :D

Good luck & Have fun!

UPD: here's the editorial.

UPD: congratulations to the winners!

Div.1+Div.2:-

  1. tribute_to_Ukraine_2022
  2. E869120
  3. 800iq
  4. cerberus97
  5. Anadi

Div.2:-

  1. 800iq
  2. Alex18mai
  3. Mikaeel
  4. prick
  5. wasyl

See you in the second round :D


By mohammedehab2002, 6 years ago, In English

1088A - Ehab and another construction problem

Well, the constraints allow a brute-force solution, but here's an $$$O(1)$$$ solution:

If $$$x=1$$$, there's no solution. Otherwise, just print $$$x-(x \space mod \space 2)$$$ and $$$2$$$.

Code link: https://pastebin.com/LXvuX8Ez

Time complexity: $$$O(1)$$$.

1088B - Ehab and subtraction

Let $$$s$$$ be the set of numbers in the input (sorted and distinct). In the $$$i^{th}$$$ step, $$$s_i$$$ is subtracted from all bigger or equal elements, and all smaller elements are already $$$0$$$. Thus, the answer in the $$$i^{th}$$$ step is $$$s_i - s_{i-1}$$$ ($$$s_0 = 0$$$).
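A minimal sketch (assuming the input is $$$n$$$, $$$k$$$, then the array, with one output line per step):

#include <bits/stdc++.h>
using namespace std;

int main()
{
    int n,k;
    scanf("%d %d",&n,&k);
    set<int> s; //sorted distinct values
    for (int i=0;i<n;i++)
    {
        int x;
        scanf("%d",&x);
        s.insert(x);
    }
    int prev=0;
    for (int x:s)
    {
        if (!k)
            break;
        printf("%d\n",x-prev); //s_i - s_{i-1}
        prev=x;
        k--;
    }
    while (k-->0)
        printf("0\n"); //everything is already 0
}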

Code link: https://pastebin.com/bpz1YxBe

Time complexity: $$$O(nlog(n))$$$.

1088C - Ehab and a 2-operation task

The editorial uses 0-indexing.

Both solutions make $$$a_i = i$$$.

First solution, $$$n$$$ adds and $$$1$$$ mod

First, let's make $$$a_i = x*n + i$$$ (for some $$$x$$$). Then, let's mod the whole array with $$$n$$$ (making $$$a_i = i$$$). If the "add update" changed one index, we can just add $$$i + n - (a_i \space mod \space n)$$$ to index $$$i$$$. The problem is, if we make $$$a_i = x*n + i$$$, then update an index $$$j > i$$$, $$$a_i$$$ will be ruined. Just start from the back of the array!

Code link: https://pastebin.com/dBfhNBL8

Second solution, $$$1$$$ add and $$$n$$$ mods

Note: for any $$$a$$$, $$$b$$$: if $$$b > a$$$, $$$a \space mod \space b = a$$$. Additionally, if $$$a \ge b > \frac{a}{2}$$$, $$$a \space mod \space b = a - b$$$.

Let's add $$$5 \cdot 10^5$$$ to the whole array, loop over $$$a_i$$$ (in order), and mod prefix $$$i$$$ with $$$a_i - i$$$. Why does this work? Notice that $$$a_i \space mod \space (a_i - i) = a_i - (a_i - i) = i$$$ (the second note). Also, $$$a_i$$$ won't be changed afterwards (the first note).
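A minimal sketch of this second solution, assuming the output is the operation count followed by lines "1 i x" (add $$$x$$$ to the first $$$i$$$ elements) and "2 i x" (mod the first $$$i$$$ elements by $$$x$$$):

#include <bits/stdc++.h>
using namespace std;

int main()
{
    int n;
    scanf("%d",&n);
    vector<long long> a(n);
    for (auto &x:a)
        scanf("%lld",&x);
    printf("%d\n",n+1); //1 add + n mods
    printf("1 %d %d\n",n,500000); //add 5*10^5 to the whole array
    for (int i=0;i<n;i++)
        a[i]+=500000;
    for (int i=0;i<n;i++) //mod prefix i with a_i-i, making a_i=i (0-indexed)
        printf("2 %d %lld\n",i+1,a[i]-i);
}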

Code link: https://pastebin.com/L6suPC1f

Time complexity: $$$O(n)$$$.

1088D - Ehab and another another xor problem

This problem is particularly hard to explain :/ I recommend simulating the algorithm on a small example as you read.

Let's build $$$a$$$ and $$$b$$$ bit by bit from the most significant to the least significant (assume they're stored in $$$curA$$$ and $$$curB$$$). Then, at the $$$i^{th}$$$ step, $$$a \oplus curA$$$ and $$$b \oplus curB$$$ have all bits from the most significant to the $$$(i+1)^{th}$$$ set to $$$0$$$. Notice that whether $$$x$$$ is greater or less than $$$y$$$ is judged by the most significant bit in which they differ (the one that has $$$1$$$ is bigger). Let's query with $$$(curA \oplus 2^i, curB)$$$ and $$$(curA, curB \oplus 2^i)$$$. In both queries, the two values being compared can only differ in the $$$i^{th}$$$ bit (or a bit less significant). Now, if the results of the queries are different, $$$a$$$ and $$$b$$$ have the same value in this bit, and this value can be determined by the answer of the respective queries ($$$1$$$ if the second query's answer is $$$1$$$, $$$0$$$ otherwise). If the queries give the same result, $$$a$$$ and $$$b$$$ must differ in this bit. How to know which of them has a $$$1$$$ and which has a $$$0$$$? We know that the greater between them (after setting the processed bits to $$$0$$$) has a $$$1$$$ and the other has a $$$0$$$. The trick is to keep track of the greater between them. Before all queries, we send $$$(0, 0)$$$ to know the greater. Every time they differ in a bit, the greater may change. It'll simply change to the answer of the $$$2$$$ queries we sent! In other words, we know when we sent the queries that after making $$$a$$$ and $$$b$$$ equal in this bit, some other bit became the most significant bit in which they differ. Also, we know who has a $$$1$$$ in this bit (the greater in this query). Thus, we'll keep the answer of this query for the future, so when this bit comes, we don't need additional queries.


Code link: https://pastebin.com/b9zgKuJ6

Time complexity: $$$O(log(n))$$$.

1088E - Ehab and a component choosing problem

Assume you already chose the components. Let the sum of nodes in the $$$i^{th}$$$ component be $$$b_i$$$. Then, the expression in the problem is equivalent to $$$average(b_1, b_2, \ldots, b_k)$$$. Assume we only bother about the fraction maximization problem and don't care about $$$k$$$. Then, it'll always be better to choose the component with the maximum $$$b_i$$$ and throw away the rest! This is because of the famous inequality:

$$$max(b_1, b_2, \ldots, b_k) \ge average(b_1, b_2, \ldots, b_k)$$$, and the equality only occurs if all $$$b_i$$$ are equal!

This means that the maximum value of the fraction is simply the maximum sum of a sub-component in the tree. To calculate it, let's root the tree at node $$$1$$$, and calculate $$$dp[node]$$$: the maximum sum of a sub-component that contains $$$node$$$. Now, I'll put the code, and explain it after.

void dfs(int node,int p,bool f)
{
    dp[node]=a[node];
    for (int u:v[node])
    {
        if (u!=p)
        {
            dfs(u,node,f);
            dp[node]+=max(dp[u],0LL); //only extend with children that help
        }
    }
    if (f) //first pass: find the maximum sub-component sum
    ans=max(ans,dp[node]);
    else if (dp[node]==ans) //second pass: cut maximal components greedily
    {
        dp[node]=0; //the parent can't use this component anymore
        k++;
    }
}

ans denotes the maximum sub-component sum.

First, we call dfs(1, 0, 1) and calculate the dp of all the children of node. For every child u, we extend the component of node with the component of u if dp[u] > 0, and do nothing otherwise. Now we've solved the first half of our problem, but what about maximizing $$$k$$$? Notice that all the components you choose must have a sum of weights equal to ans (because the equality occurs if and only if all $$$b_i$$$ are equal); you just want to maximize their count. Let's calculate our dp again, with dfs(1, 0, 0). Assume dp[node] = ans. We have $$$2$$$ choices: either mark the node and its component as a component in the answer (but then other nodes won't be able to use them, because the components can't overlap), or wait and extend the component. The idea is that there's no reason to wait. If we extend the component with some nodes, they won't change the sum, and they may even contain another sub-component with maximal sum that we're merging into our component and wasting! Thus, we'll always go with the first choice, making dp[node] = 0 so that its parent can't use it, and increasing $$$k$$$ :D

Code link: https://pastebin.com/8pCrTfuP

Time complexity: O(n).

1088F - Ehab and a weird weight formula

First, let's reduce the problem to an ordinary MST. We know that each edge $$$\{u, v\}$$$ adds $$$\lceil \log_2(dist(u, v)) \rceil \cdot \min(a_u, a_v)$$$ to $$$w$$$. In fact, it also adds $$$1$$$ to $$$deg_u$$$ and $$$deg_v$$$, which contributes $$$a_u + a_v = \min(a_u, a_v) + \max(a_u, a_v)$$$. Thus, the problem is an ordinary MST on the complete graph where each edge $$$\{u, v\}$$$ has weight $$$(\lceil \log_2(dist(u, v)) \rceil + 1) \cdot \min(a_u, a_v) + \max(a_u, a_v)$$$!

Let the node with the minimum weight be $$$m$$$. Let's root the tree at it.

Lemma: for every node $$$u$$$ and a child $$$v$$$ of it, $$$a_v > a_u$$$. In simpler words, the weights increase as we go down the tree.

Proof: the proof is by contradiction. Assume $$$a_v \le a_u$$$. Then, the condition in the problem (that every node has an adjacent node with less weight) isn't satisfied for $$$v$$$, so $$$v$$$ must have a child $$$k$$$ such that $$$a_k < a_v$$$. However, the condition isn't satisfied for $$$k$$$ either, so $$$k$$$ needs another child, and that child needs another child, etc. (the tree would be infinite), which is clearly a contradiction.

From that, we know that the weights decrease as we go up the tree and increase as we go down.

Back to the MST problem. From Kruskal's algorithm, we know that the minimal edge incident to every node will be added to the MST (because the edges are sorted by weight). Let's analyze the minimal edge incident to a node $$$u$$$, and let its other end be $$$v$$$. Except for node $$$m$$$, $$$v$$$ will be an ancestor of $$$u$$$. Why? Assume we fix the distance part and just want to minimize $$$a_v$$$: we keep going up the tree (it's never optimal to go down, since the weights increase there) until we reach the desired distance. Now, since the minimal edges incident to the nodes will all be added to the MST (by Kruskal's algorithm), and they're distinct (because, otherwise, you'd be saying that $$$u$$$ is an ancestor of $$$v$$$ and $$$v$$$ is an ancestor of $$$u$$$), THEY ARE THE MST. Now, the problem just reduces to finding the minimal edge incident to every node and summing them up (except for $$$m$$$). To do that, we'll fix $$$\lceil \log_2(dist(u, v)) \rceil$$$ (let it be $$$k$$$), and get the $$$2^k$$$-th ancestor with the well-known sparse table (binary lifting).

Code link: https://pastebin.com/vzJqh8si
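
A sketch of the counting step (a sketch under assumptions, not the official code): it assumes $$$0$$$-indexed nodes, input as $$$n$$$, the weights, then the $$$n - 1$$$ edges, and array sizes that should be adjusted to the real constraints. Here up[k][u] is the $$$2^k$$$-th ancestor clamped at the root, which is exactly the best candidate for $$$\lceil \log_2(dist) \rceil = k$$$, because the weights shrink as we go up:

#include <bits/stdc++.h>
using namespace std;

const int N=200005,LOG=18; // adjust to the real constraints
int n,par[N],up[LOG][N];
long long a[N];
vector<int> adj[N];

int main()
{
    scanf("%d",&n);
    int m=0;
    for (int i=0;i<n;i++)
    {
        scanf("%lld",&a[i]);
        if (a[i]<a[m])
            m=i; // the global minimum becomes the root
    }
    for (int i=0;i<n-1;i++)
    {
        int u,v;
        scanf("%d%d",&u,&v);
        u--;v--;
        adj[u].push_back(v);
        adj[v].push_back(u);
    }
    // BFS from m to get parents (iterative, to avoid deep recursion)
    queue<int> bfs;
    vector<bool> vis(n,false);
    bfs.push(m);
    vis[m]=true;
    par[m]=m; // the root's "parent" is itself, which clamps the lifting
    while (!bfs.empty())
    {
        int u=bfs.front();bfs.pop();
        for (int v:adj[u])
            if (!vis[v])
            {
                vis[v]=true;
                par[v]=u;
                bfs.push(v);
            }
    }
    for (int i=0;i<n;i++)
        up[0][i]=par[i];
    for (int k=1;k<LOG;k++)
        for (int i=0;i<n;i++)
            up[k][i]=up[k-1][up[k-1][i]];
    long long ans=0;
    for (int u=0;u<n;u++)
    {
        if (u==m)
            continue;
        long long best=LLONG_MAX;
        for (int k=0;k<LOG;k++) // candidate: the ancestor at distance 2^k (or the root)
            best=min(best,(k+1LL)*a[up[k][u]]+a[u]);
        ans+=best; // u's minimal incident edge; these edges form the MST
    }
    printf("%lld\n",ans);
    return 0;
}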

Time complexity: $$$O(n \log(n))$$$.


By mohammedehab2002, 6 years ago, In English

Hi!

I'm back with a new contest, a new color, and a new batch of xor problems.

Codeforces round #525, rated for the second division, is taking place on Dec/04/2018 17:35 (Moscow time). As usual, first division participants can take part out of competition.

I'm the problemsetter of the round. I'd like to thank 300iq for the great effort coordinating the round, isaf27, cdkrot, budalnik, and vintage_Vlad_Makeev for testing the round, scanhex for translating the statements to Russian, mahmoudbadawy for giving his opinions about the problems, and MikeMirzayanov for the great codeforces and polygon platforms.

Like my previous round, you'll be given 6 problems and 2 hours to solve them.

After the contest, I'll be on the community Discord server to discuss the problems.

UPD: the scoring distribution will be 500-1000-1500-2000-2500-3000.

UPD: something wrong happened and the editorial was deleted. I'll post it as soon as possible :(

UPD: the editorial has been re-written.

Good luck & Have fun!

UPD: congratulations to the winners!

Div.1+Div.2:-

  1. Madball
  2. Shayan
  3. ei133333
  4. paula
  5. Kuroni

Div.2:-

  1. paula
  2. DXC
  3. 0101-1001
  4. problem_destroyer420
  5. knil_GMO


By mohammedehab2002, history, 7 years ago, In English

959A - Mahmoud and Ehab and the even-odd game

It's easy to see that if $$$n = 0$$$, the next player loses. If $$$n$$$ is even, Mahmoud will choose $$$a = n$$$ and win. Otherwise, Mahmoud has to choose $$$a < n$$$; $$$n$$$ is odd and $$$a$$$ is even, so $$$n - a$$$ is odd, and Ehab will then subtract it all and win. Therefore, if $$$n$$$ is even, Mahmoud wins; otherwise, Ehab wins. $$$n = 1$$$ doesn't follow our proof, yet Ehab still wins there, because Mahmoud won't even be able to choose an $$$a$$$.

Code link (me) : https://pastebin.com/X3D08tg9

Code link (mahmoudbadawy) : https://pastebin.com/4u3RHE7n

Time complexity : O(1).

Bonus task : If there were multiple integers, and each player can choose which integer to subtract from, who will win?

Solution

959B - Mahmoud and Ehab and the message

It's easy to see that for every word, the minimum cost of sending it is the minimum cost of sending any word in its group. For each group, we'll maintain the minimum cost of sending a word in it (let it be $$$cost_i$$$), and for each word, we'll maintain its group (let it be $$$group_i$$$). For every word $$$i$$$ in the message, we'll add $$$cost_{group_i}$$$ to the answer.

Code link (me) : https://pastebin.com/3RFeEkgD

Code link (mahmoudbadawy) : https://pastebin.com/sR5eZy7d
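
A minimal sketch, assuming the input format is $$$n$$$, $$$k$$$, $$$m$$$, then the $$$n$$$ words, then their costs, then the $$$k$$$ groups given as index lists, then the $$$m$$$-word message:

#include <bits/stdc++.h>
using namespace std;

int main()
{
    int n,k,m;
    cin>>n>>k>>m;
    vector<string> words(n);
    for (auto &w:words)
        cin>>w;
    vector<long long> cost(n);
    for (auto &c:cost)
        cin>>c;
    map<string,int> group;               // word -> group id
    vector<long long> best(k,LLONG_MAX); // group id -> cheapest cost in it
    for (int g=0;g<k;g++)
    {
        int x;
        cin>>x;
        while (x--)
        {
            int idx;
            cin>>idx;
            idx--;
            group[words[idx]]=g;
            best[g]=min(best[g],cost[idx]);
        }
    }
    long long ans=0;
    while (m--)
    {
        string w;
        cin>>w;
        ans+=best[group[w]]; // pay for the cheapest synonym
    }
    cout<<ans<<"\n";
    return 0;
}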

Time complexity : $$$O((n + m) \log(n) \cdot len)$$$.

Bonus task : Try to solve the problem if the input was given as pairs of words that are synonyms (assuming synonymy is transitive).

Solution

959C - Mahmoud and Ehab and the wrong algorithm

The first tree

For n ≥ 6, you can connect nodes 2, 3, and 4 to node 1 and connect the rest of the nodes to node 2. The real vertex cover is the set {1, 2} of size 2 while the found vertex cover will have size min(3, n - 3). As n ≥ 6, that value will be 3 which is incorrect.

For n < 6, the answer doesn't exist.

The second tree

There are multiple ways to construct it. One easy way is the star tree: connect all the nodes to node 1. The real and the found vertex cover will be simply {1}. Another easy way is a path: connect node $$$i$$$ to node $$$i + 1$$$ for all $$$1 \le i < n$$$. The real and the found vertex cover both have size $$$\lfloor \frac{n}{2} \rfloor$$$.

Code link (me) : https://pastebin.com/7J8B9fXx

Code link (mahmoudbadawy) : https://pastebin.com/54jZ8sGM
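
A sketch of the construction, assuming the required output is the first tree's $$$n - 1$$$ edges (or $$$-1$$$ if it doesn't exist) followed by the second tree's edges:

#include <bits/stdc++.h>
using namespace std;

int main()
{
    int n;
    scanf("%d",&n);
    if (n<6) // no tree fools the algorithm
        puts("-1");
    else
    {
        for (int i=2;i<=4;i++)
            printf("1 %d\n",i); // nodes 2, 3, 4 hang off node 1
        for (int i=5;i<=n;i++)
            printf("2 %d\n",i); // the rest hang off node 2
    }
    for (int i=2;i<=n;i++) // a star: the algorithm gets it right
        printf("1 %d\n",i);
    return 0;
}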

Time complexity : O(n).

Bonus task : Try to find an elegant proof that the answer for n < 6 doesn't exist for the first tree.

Solution

959D - Mahmoud and Ehab and another array construction task

Common things : let's call a number "ok" if it could be inserted into array $$$b$$$ as a new element without breaking any of the conditions (i.e., it should be coprime with all the previously inserted elements). Let's call the maximum number that could be inserted in the worst case $$$mx$$$. For each integer from $$$2$$$ to $$$mx$$$, we'll precompute its prime divisors with a sieve.

First solution by me

Create an std::set that contains all the numbers from $$$2$$$ to $$$mx$$$. That set holds all the "ok" numbers and will be updated each time we insert a new element into array $$$b$$$. We'll insert the elements into array $$$b$$$ greedily one by one. At index $$$i$$$, let $$$cur$$$ be the minimum number in the set greater than or equal to $$$a_i$$$, i.e., the set's lower_bound(a[i]). If $$$cur$$$ isn't equal to $$$a_i$$$, the lexicographically-greater condition is satisfied and we're no longer restricted to $$$a$$$, so, starting from index $$$i + 1$$$, we'll greedily choose $$$cur$$$ to be the first (minimum) number in the set instead. We'll insert $$$cur$$$ into $$$b$$$. Each time, we'll remove all the integers that aren't coprime with $$$cur$$$ from the set: we loop over the multiples of its precomputed prime divisors and remove them.

Code link (me) : https://pastebin.com/bg3Hi6r2

Second solution by KAN

Let used[i] indicate whether the prime $$$i$$$ is already a factor of one of the elements in $$$b$$$ (so we shouldn't use it). Each time we insert an element into $$$b$$$, we update used by iterating over the element's precomputed prime divisors and marking them all used. We'll start inserting elements into $$$b$$$ greedily one by one. To check whether a number is "ok", we iterate over its precomputed prime divisors and check that none of them is used. While $$$a_i$$$ is "ok", we keep inserting it into $$$b$$$. Eventually, we reach an integer that isn't "ok". In this case, we iterate naively until we find an integer that is "ok" and insert it into $$$b$$$. The lexicographically-greater condition is now satisfied, and we can insert whatever we want (no restriction to $$$a$$$). Notice that from now on, $$$b$$$ will be sorted in increasing order: if it weren't, we could sort it and reach a better answer without breaking any of the conditions. The naive solution is to loop starting from $$$2$$$ until we find an "ok" integer for each $$$i$$$. However, as the array is sorted, we can loop starting from $$$2$$$ the first time and then loop starting from $$$b_{i-1} + 1$$$, saving a lot of loops that we're sure will fail!

Code link (me) : https://pastebin.com/Xh2QgqUf

Time complexity : $$$O(mx \cdot \log(mx))$$$. $$$mx$$$ has an order of $$$n \log(n)$$$ because the $$$n$$$-th prime is expected to be $$$O(n \log(n))$$$ and the number of primes less than $$$n$$$ is expected to be $$$\frac{n}{\ln(n)}$$$.

959E - Mahmoud and Ehab and the xor-MST

For convenience, let $$$n$$$ be the label of the last node, not the number of nodes (i.e., $$$n = n_{input} - 1$$$).

Denote $$$lsb(x) = x \& (-x)$$$: the value of the least significant bit set to $$$1$$$ in $$$x$$$. The answer is $$$\sum_{u=1}^{n} lsb(u)$$$, which means that node $$$u$$$ is connected to node $$$u \oplus lsb(u)$$$ for all $$$1 \le u \le n$$$ (node $$$u$$$ is connected to node $$$u$$$ without that bit).

Formal proof

Now let's see how to calculate that quickly.

Math solution

Let $$$f(x)$$$ be the number of integers $$$y$$$ such that $$$1 \le y \le n$$$ and $$$lsb(y) = x$$$; then the answer is $$$\sum_{x} x \cdot f(x)$$$. $$$f(i) > 0$$$ if and only if $$$i$$$ is a power of $$$2$$$, so this sum is equivalent to $$$\sum_{k \ge 0} 2^k \cdot f(2^k)$$$. Basically, the first number $$$y$$$ such that $$$lsb(y) = x$$$ is $$$x$$$ itself, and after that they come with period $$$2x$$$. Take $$$4$$$ to see that: the integers $$$y$$$ such that $$$lsb(y) = 4$$$ are $$$\{4, 12, 20, 28, ...\}$$$. Therefore, $$$f(x) = \lfloor \frac{n - x}{2x} \rfloor + 1$$$ for $$$1 \le x \le n$$$ where $$$x$$$ is a power of $$$2$$$.

Code link (me) : https://pastebin.com/dNuR9k0Y
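
The whole math solution fits in a few lines (a sketch assuming the input is the single integer $$$n_{input}$$$ and that $$$64$$$-bit arithmetic suffices for the constraints):

#include <bits/stdc++.h>
using namespace std;

int main()
{
    long long n;
    scanf("%lld",&n);
    n--; // n is now the label of the last node
    long long ans=0;
    for (long long x=1;x<=n;x*=2)
        ans+=x*((n-x)/(2*x)+1); // x * f(x), summed over the powers of two
    printf("%lld\n",ans);
    return 0;
}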

DP solution

Let's see how the sequence of $$$lsb(x)$$$ is constructed. We start with $$$\{1\}$$$, and at the $$$i$$$-th step, we copy the sequence, concatenate it to itself, and add $$$2^i$$$ in the middle.

Let $$$f(x) = \sum_{i=1}^{x} lsb(i)$$$. Let $$$dp[i] = f(2^i - 1)$$$.

You can see from the pattern above that $$$dp[i] = 2 \cdot dp[i - 1] + 2^{i - 1}$$$ for $$$i > 1$$$ (with the base case $$$dp[1] = 1$$$). Let's find a recurrence for $$$f(x)$$$. Denote $$$msb(x)$$$ as the value of the most significant bit set to $$$1$$$. The sum can be split into $$$2$$$ parts: the sum from $$$1$$$ to $$$msb(x)$$$ and the sum from $$$msb(x) + 1$$$ to $$$x$$$. You can see that in the second sum, $$$lsb(i)$$$ can never be equal to $$$msb(x)$$$, so we can remove that bit safely without affecting the answer. Removing that bit is like xoring with $$$msb(x)$$$, which makes the sum start at $$$1$$$ and end at $$$x \oplus msb(x)$$$, which is $$$x - msb(x)$$$. Therefore, $$$f(x) = f(msb(x)) + f(x - msb(x))$$$. The first part can be calculated with the help of our dp, because $$$msb(x)$$$ is a power of $$$2$$$ (namely $$$f(2^i) = dp[i] + 2^i$$$), and the second part goes recursively. Basically, for each $$$i$$$ such that the $$$i$$$-th bit of $$$x$$$ is set to $$$1$$$, we add $$$dp[i] + 2^i$$$ to the answer.

Code link (me) : https://pastebin.com/wnhBZx2v

Time complexity : O(log(n)).

959F - Mahmoud and Ehab and yet another xor task

Let's solve a simpler version of the problem. Assume the queries only ask you whether the answer is $$$0$$$ or positive, instead of the exact answer. We can answer all the queries offline: we keep a set containing all the possible xors of subsequences and update it for each prefix. Initially, the set contains only $$$0$$$ (the xor of the empty subsequence). For each index $$$i$$$ in the array, we can update the set by adding $$$x \oplus a_i$$$ to the set for all $$$x$$$ in the set. The intuition behind it is that there's a subsequence with xor equal to $$$x$$$ (as $$$x$$$ is in the set), and if we add $$$a_i$$$ to it, its xor will be $$$x \oplus a_i$$$, so we should add that to the set. That's a slow way to update the set, but we have some observations:-

  1. If $$$x$$$ is in the set and $$$y$$$ is in the set, $$$x \oplus y$$$ must be in the set. To see that, let $$$x$$$ be the xor of some elements and $$$y$$$ be the xor of other elements. $$$x \oplus y$$$ must be the xor of the non-common elements (because the common elements annihilate), so it must be in the set.
  2. If $$$x$$$ is in the set and $$$y$$$ isn't in the set, $$$x \oplus y$$$ can't be in the set. This can be proved by contradiction. Assume $$$x \oplus y$$$ is in the set. Then, by the first observation, $$$x \oplus (x \oplus y)$$$ must be in the set. This is equivalent to $$$y$$$, which we said isn't in the set. Therefore, $$$x \oplus y$$$ isn't in the set.

Basically, if $$$a_i$$$ is already in the set, we do nothing, because updating the set would do nothing but extra operations, according to the first observation; and if $$$a_i$$$ isn't in the set, we don't even waste a single operation without extending the set! That makes the total complexity $$$O(n + maxA_i)$$$ or $$$O((n + maxA_i) \log(maxA_i))$$$, depending on the implementation, because each element is added to the set exactly once.

To solve our problem, let's see the naive dynamic-programming solution. Let $$$dp[i][x]$$$ be the number of subsequences of the first $$$i$$$ elements with xor $$$x$$$. Then $$$dp[i][x] = dp[i - 1][x] + dp[i - 1][x \oplus a_i]$$$. The intuition behind it is exactly the same as the intuition behind the set construction. Let's prove that $$$dp[i][x]$$$ is equal for all $$$x$$$ belonging to the set! Assume this holds for $$$i - 1$$$ and see what happens in the transition to $$$i$$$ (notice that it holds for $$$i = 0$$$). Let $$$j$$$ be the value of $$$dp[i - 1][x]$$$ for all $$$x$$$ belonging to the set. If $$$a_i$$$ is in the set and $$$x$$$ is in the set, $$$x \oplus a_i$$$ is in the set (observation #1). Therefore, $$$dp[i - 1][x] = j$$$ and $$$dp[i - 1][x \oplus a_i] = j$$$, which makes $$$dp[i][x] = 2j$$$ for all $$$x$$$ in the set. Notice that the set doesn't change, so $$$dp[i][x] = 0$$$ for all $$$x$$$ that aren't in the set. If $$$a_i$$$ isn't in the set, we have $$$3$$$ cases for $$$x$$$. If $$$x$$$ is in the set, $$$x \oplus a_i$$$ isn't in the set, so $$$dp[i][x] = j + 0 = j$$$. If $$$x$$$ is to be added to the set in this step, $$$x \oplus a_i$$$ is in the set, so $$$dp[i][x] = 0 + j = j$$$. Otherwise, $$$dp[i][x] = 0$$$. To summarize, we'll maintain the set; for each element of the array, if it's in the set, we just multiply $$$j$$$ by $$$2$$$, and otherwise, we update the set. We then answer all the queries for that prefix (saying $$$0$$$ or $$$j$$$) depending on whether $$$x$$$ is in the set.

Code link (me) : https://pastebin.com/Kfi0NWTi
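
A sketch of the offline solution, assuming all values and queried xors are below $$$2^{20}$$$ and each query is a pair $$$(l, x)$$$ asking about the prefix of length $$$l$$$, with answers modulo $$$10^9 + 7$$$:

#include <bits/stdc++.h>
using namespace std;

const int MOD=1e9+7,B=1<<20;

int main()
{
    int n,q;
    scanf("%d%d",&n,&q);
    vector<int> a(n+1);
    for (int i=1;i<=n;i++)
        scanf("%d",&a[i]);
    vector<vector<pair<int,int>>> at(n+1); // at[l] = queries (x, index) on prefix l
    vector<long long> ans(q);
    for (int i=0;i<q;i++)
    {
        int l,x;
        scanf("%d%d",&l,&x);
        at[l].push_back({x,i});
    }
    vector<bool> in(B,false); // the "set" as an array...
    vector<int> elems;        // ...and as a vector, for cheap extension
    in[0]=true;
    elems.push_back(0);
    long long j=1; // every element of the set is the xor of exactly j subsequences
    for (int i=0;i<=n;i++)
    {
        if (i>0)
        {
            if (in[a[i]])
                j=j*2%MOD; // the set doesn't grow; every count doubles
            else
            {
                int sz=elems.size();
                for (int t=0;t<sz;t++) // extend with x^a_i for every x in the set
                {
                    in[elems[t]^a[i]]=true;
                    elems.push_back(elems[t]^a[i]);
                }
            }
        }
        for (auto &[x,id]:at[i])
            ans[id]=in[x]?j:0;
    }
    for (int i=0;i<q;i++)
        printf("%lld\n",ans[i]);
    return 0;
}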

Time complexity : O(n + maxAi) if you implement the "set" with a vector and an array.

Bonus task : Can you make this solution work online? Can you do that with $$$maxA_i < 2^{30}$$$?

Solution


By mohammedehab2002, 7 years ago, In English

Hello codeforces!

I'm glad to announce that codeforces round #473 for the second division will take place on Tuesday April 3rd 16:05 UTC. As usual, first division participants can take part out of competition.

I'm the problemsetter and the editorialist of this round. I'd like to thank mahmoudbadawy for creating the testdata and testing the round, FalseMirror, Livace, demon1999, and vintage_Vlad_Makeev for testing the round, KAN and Ahmad_Elsagheer for giving their great opinions and thoughts and helping in round preparation, arsor for helping translate the problems, and MikeMirzayanov for the great codeforces and polygon platforms.

You'll be given 6 problems and 2 hours to solve them.

UPD : the scoring distribution will be 500-1000-1250-1750-2000-2500.

UPD : Editorial and bonus tasks.

Good luck and Have fun!

UPD congratulations to the winners!

Div.1:-

  1. Um_nik
  2. dotorya
  3. kmjp
  4. natsugiri
  5. Lewin

Div.2:-

  1. StopBullying
  2. taeyeon_ss
  3. Tsuare
  4. readers2
  5. ajinkya1p3


By mohammedehab2002, history, 7 years ago, In English

I want to change the registration time for a contest in a private group, or register all the people in the group. Does anybody know how to do this?

Thanks in advance.


By mohammedehab2002, history, 7 years ago, In English

862A - Mahmoud and Ehab and the MEX

One can see that in the final set, all the elements less than $$$x$$$ should exist, $$$x$$$ itself shouldn't exist, and any element greater than $$$x$$$ doesn't matter. So we'll count the number of elements less than $$$x$$$ that don't exist in the initial set and add that count to the answer. If $$$x$$$ exists, we'll add $$$1$$$ to the answer, because $$$x$$$ should be removed.

Time complexity : O(n + x) .

Solution link (me) : https://pastebin.com/ALfcu8Ab .

Solution link (mahmoudbadawy) : https://pastebin.com/yXLkmA5F .

862B - Mahmoud and Ehab and the bipartiteness

The tree itself is bipartite, so we can run a dfs to partition the tree into the $$$2$$$ sets (this is called bicoloring). We can't add an edge between any $$$2$$$ nodes in the same set, and we can add an edge between every $$$2$$$ nodes in different sets. So, let the number of nodes in the left set be $$$l$$$ and the number of nodes in the right set be $$$r$$$: the maximum number of edges that can exist is $$$l \cdot r$$$, but $$$n - 1$$$ edges already exist, so the maximum number of edges to be added is $$$l \cdot r - (n - 1)$$$.

Time complexity : O(n) .

Solution link (me) : https://pastebin.com/w3bF7gKS .

Solution link (mahmoudbadawy) : https://pastebin.com/PMpte7nC .
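
A minimal sketch (bicoloring with a dfs, then the formula above); the recursion is assumed to fit in the stack for the given limits:

#include <bits/stdc++.h>
using namespace std;

const int N=100005;
vector<int> adj[N];
long long cnt[2]; // how many nodes of each color

void dfs(int u,int p,int c)
{
    cnt[c]++;
    for (int v:adj[u])
        if (v!=p)
            dfs(v,u,c^1); // children take the opposite color
}

int main()
{
    int n;
    scanf("%d",&n);
    for (int i=0;i<n-1;i++)
    {
        int u,v;
        scanf("%d%d",&u,&v);
        adj[u].push_back(v);
        adj[v].push_back(u);
    }
    dfs(1,0,0);
    printf("%lld\n",cnt[0]*cnt[1]-(n-1)); // l*r minus the existing edges
    return 0;
}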

862C - Mahmoud and Ehab and the xor

n = 2, x = 0 is the only case with answer "NO" .

Let $$$pw = 2^{17}$$$.

First, print $$$1, 2, ..., n - 3$$$ (the first $$$n - 3$$$ positive integers), and let their bitwise-xor sum be $$$y$$$. If $$$x = y$$$, you can add $$$pw$$$, $$$pw \cdot 2$$$, and $$$pw \oplus (pw \cdot 2)$$$; otherwise, you can add $$$0$$$, $$$pw$$$, and $$$pw \oplus x \oplus y$$$. We handled the case $$$x = y$$$ in a different way because, if we added $$$0$$$, $$$pw$$$, and $$$pw \oplus x \oplus y$$$ in that case, it would be like adding $$$0$$$, $$$pw$$$, and $$$pw$$$: $$$pw$$$ appears twice, so we'd get a wrong answer.

Handle n = 1 (print x) and n = 2 (print 0 and x) .

Solution link (mahmoudbadawy) : https://pastebin.com/w67KUY5u .
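
A sketch of the construction, assuming the required output is "YES" plus $$$n$$$ distinct non-negative integers not exceeding $$$10^6$$$ whose xor is $$$x$$$ (or "NO"):

#include <bits/stdc++.h>
using namespace std;

int main()
{
    int n,x;
    scanf("%d%d",&n,&x);
    const int pw=1<<17; // bigger than n and x, so the last 3 numbers stay distinct
    if (n==2&&x==0)
        puts("NO");
    else
    {
        puts("YES");
        if (n==1)
            printf("%d\n",x);
        else if (n==2)
            printf("0 %d\n",x);
        else
        {
            int y=0;
            for (int i=1;i<=n-3;i++)
            {
                printf("%d ",i);
                y^=i;
            }
            if (x==y)
                printf("%d %d %d\n",pw,2*pw,pw^(2*pw));
            else
                printf("0 %d %d\n",pw,pw^x^y);
        }
    }
    return 0;
}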

862D - Mahmoud and Ehab and the binary string

In the editorial, we suppose that the answer of a query is the number of correctly guessed positions, which is equal to $$$n$$$ minus the Hamming distance. For convenience, the solutions in this editorial consider the answer of a query to be $$$n$$$ minus the real answer.

Common things : let $$$zero(l, r)$$$ be a function that returns the number of zeros in the interval $$$[l;r]$$$ minus the number of ones in it. We can find it in one query after a preprocessing query. The preprocessing query is $$$1111...$$$; let its answer be stored in $$$all$$$. Now make a query with a string full of ones except for the interval $$$[l;r]$$$, which is full of zeros, and let this query's answer be $$$cur$$$; then $$$zero(l, r) = cur - all$$$. That's because $$$all$$$ is the number of ones in the interval $$$[l;r]$$$ plus some trash, and $$$cur$$$ is the number of zeros in the interval plus the same trash.

First solution by mahmoudbadawy

Let's have a searching interval, initially $$$[1;n]$$$ (the whole string), and repeat the following until we reach our goal. Let $$$mid = (l + r) / 2$$$, and query to get $$$zero(l, mid)$$$. If it's equal to $$$mid - l + 1$$$, this interval is full of zeros, so we can print any index in it as the index with value $$$0$$$ and continue searching for an index with value $$$1$$$ in the interval $$$[mid + 1;r]$$$. If its value is equal to $$$l - mid - 1$$$, this interval is full of ones, so we can print any index in it as the index with value $$$1$$$ and continue searching for a $$$0$$$ in the interval $$$[mid + 1;r]$$$. Otherwise, the interval contains both values, so we can continue searching for both in the interval $$$[l;mid]$$$. In every case, the searching interval's length is divided by $$$2$$$, so we perform $$$O(\log(n))$$$ queries.

Second solution by me

Let's send $$$1111...$$$ and let the answer be $$$ans1$$$, then send $$$0111...$$$ and let the answer be $$$ans0$$$. We now know the value at the first index ($$$1$$$ if $$$ans1 > ans0$$$, $$$0$$$ otherwise). We can then binary search for the first index where the not-yet-found value exists: it's the first $$$x$$$ where $$$zero(2, x)$$$ differs from the value it would have if positions $$$2$$$ through $$$x$$$ all held the already-found bit (that value is $$$x - 1$$$ if the found bit is $$$0$$$, and $$$-(x - 1)$$$ if it is $$$1$$$).

Solution link (me) : https://pastebin.com/Bc6q7TKv .

Solution link (mahmoudbadawy) : https://pastebin.com/RMyLDMxw .

862E - Mahmoud and Ehab and the function

Let's write $$$f(j)$$$ in another way:-

$$$f(j) = \left|\sum_{i=1}^{n} (-1)^{i-1} \cdot a_i + \sum_{i=1}^{n} (-1)^{i} \cdot b_{i+j}\right|$$$

Now we have $$$2$$$ sums. The first one is constant (doesn't depend on $$$j$$$). For the second sum, we can calculate all its possible values using the sliding-window technique. Now we want a data structure that takes the value of the first sum and chooses the best second sum from all the choices.

Observation: we don't have to try all the possible values of the second sum to minimize the expression. If the first sum is $$$c$$$, we can try only the least value greater than or equal to $$$-c$$$ and the greatest value less than $$$-c$$$ ($$$-c$$$, not $$$c$$$, because we are minimizing $$$|c + second|$$$, not $$$|c - second|$$$). The absolute value is the distance between the two values on the number line, so any other value is farther than at least one of the chosen ones. To do this, we can keep all the values of the second sum sorted and try the elements numbered lower_bound(-c) and lower_bound(-c)-1, choosing the better (in short, we try only the values close to $$$-c$$$).

Now we have a data structure that, given the value of the first sum, gets us the minimum value of the expression in $$$O(\log(n))$$$. Next, we want to keep track of the value of the first sum.

Let the initial value be $$$c$$$. In each update, if the length of the updated interval is even, the sum won't change, because $$$x$$$ is added as many times as it's subtracted. Otherwise, $$$x$$$ is added to $$$c$$$ or subtracted from $$$c$$$, depending on the parity of $$$l$$$ (the left bound of the interval).

Time complexity : $$$O(n + (m + q) \log(m))$$$.

Solution link (me) : https://pastebin.com/u828DjcS .

Solution link (mahmoudbadawy) : https://pastebin.com/dA3K8nfK .

862F - Mahmoud and Ehab and the final stage

First, let's get rid of the LCP part.

Observation: $$$LCP(s_l, ..., s_r) = \min_{l \le i < r} LCP(s_i, s_{i+1})$$$. That lets us transform the LCP part into a minimization part by making an array $$$lcp$$$ where $$$lcp_i = LCP(s_i, s_{i+1})$$$. You can calculate it naively, and when an update happens at index $$$a$$$, you should update $$$lcp_a$$$ (if it exists) and $$$lcp_{a-1}$$$ (if it exists) naively.

Now the problem reduces to finding a ≤ l ≤ r ≤ b that maximizes the value:-

$$$(r - l + 1) \cdot \min_{l \le i < r} lcp_i$$$. If we have a histogram where the $$$i$$$-th column has height $$$lcp_i$$$, then the size of the largest rectangle that fits in the columns from $$$l$$$ to $$$r - 1$$$ is $$$(r - l) \cdot \min_{l \le i < r} lcp_i$$$. That's close to our formula, not the same, but it's not a problem (you'll see how to fix it later). So, to get rid of the finding-$$$l$$$-and-$$$r$$$ part, we can make that histogram, and the answer for a query will be the largest rectangle in the sub-histogram that contains the columns from $$$a$$$ to $$$b - 1$$$. One of the ways to solve it is to try some heights $$$h$$$ and see the maximum width $$$w$$$ we can achieve if $$$h$$$ were the height, maximizing with $$$h \cdot w$$$. To fix the slight difference in formulas, we'll just maximize with $$$h \cdot (w + 1)$$$!!

Let $$$BU$$$ be a value that we'll choose later. We have $$$2$$$ cases for our largest rectangle's height $$$h$$$: either $$$h \le BU$$$ or $$$h > BU$$$. We'll solve both problems separately.

For $$$h \le BU$$$, we can maintain $$$BU$$$ segment trees. Segment tree number $$$i$$$ has $$$1$$$ at index $$$x$$$ if $$$lcp_x \ge i$$$ and $$$0$$$ otherwise. When we query, it should get us the longest subsegment of ones in the query range. Let's see what we need for the merging operation: if we want the answer for the longest subsegment of ones in a range $$$[l;r]$$$, let $$$mid = (l + r) / 2$$$; then the answer is the maximum of the answer of $$$[l;mid]$$$, the answer of $$$[mid + 1;r]$$$, and the maximum suffix of ones in the range $$$[l;mid]$$$ added to the maximum prefix of ones in the range $$$[mid + 1;r]$$$. So we need to keep all this information in our node, and also the length of the interval. As it's a well-known problem, I won't go into more detail. Back to our problem: we can loop over all $$$h \le BU$$$; let the answer for the query on range $$$[a;b - 1]$$$ in segment tree number $$$h$$$ be $$$w$$$. The maximum width of a rectangle of height $$$h$$$ in this range is $$$w$$$, and we'll maximize our answer with $$$h \cdot (w + 1)$$$.

For $$$h > BU$$$, let's call a column of height greater than $$$BU$$$ big. The heights we'll try are the heights of the big columns in the range; we don't have to try any other heights greater than $$$BU$$$. There are at most $$$\frac{tot}{BU}$$$ big columns (where $$$tot$$$ is the total length of the strings in the input). Let's keep them in a set. When an update happens, you should add the column to the set or remove it, depending on its new height; the set's size can't exceed $$$\frac{tot}{BU}$$$ at any time. Let's see how to answer a query: loop over the big columns in the range $$$[a;b - 1]$$$ only. If $$$2$$$ of them aren't consecutive, then the column right after the first can't be big, and neither can the column right before the second; if it were big, it would be in our set. So we can use this observation by making a new histogram with the big columns in the range only, putting a column with height $$$0$$$ between any non-consecutive two, and getting the largest rectangle in this histogram the stack way, for example, in time linear in the number of big columns in the range. The stack way will get us the maximum width $$$w$$$ we can achieve for a rectangle containing column number $$$i$$$, and we'll maximize with $$$lcp_i \cdot (w + 1)$$$.

Also, the answer for our main formula can be an interval of length one; everything mentioned so far doesn't cover this, so you should maintain another segment tree that gets the maximum length of a string in a range.

Maximize over everything we got, and you have the answer. Now it's time to choose $$$BU$$$: it's optimal in time to choose $$$BU$$$ near $$$\sqrt{tot}$$$ (reason in tfg's comment below).

Optimization: the longest-subsegment-of-ones problem is solved by $$$BU$$$ segment trees, and each one has $$$4$$$ integers in each node. You can make them $$$2$$$ integers (max prefix and suffix of ones) and make only one more segment tree that has the rest of the integers. That would divide the memory by $$$2$$$.

Time complexity :

Thanks to vintage_Vlad_Makeev for making it harder and more interesting .

Solution link (vintage_Vlad_Makeev) : https://pastebin.com/vQ4RJqh0 .

Solution link (mahmoudbadawy) : https://pastebin.com/t3Vetzwf .


By mohammedehab2002, history, 8 years ago, In English

766A - Mahmoud and Longest Uncommon Subsequence

If the strings are the same, any subsequence of $$$a$$$ is indeed a subsequence of $$$b$$$, so the answer is $$$-1$$$. Otherwise, the longer string can't be a subsequence of the other (if they are equal in length and aren't the same, neither can be a subsequence of the other), so the answer is the maximum of their lengths.

Code : http://pastebin.com/aJbeTTjw

Time complexity : O(|a| + |b|).

Problem author : me.

Solution author : me.

Testers : me and mahmoudbadawy.

766B - Mahmoud and a Triangle

First solution :-

Let $$$x$$$, $$$y$$$, and $$$z$$$ be the lengths of $$$3$$$ line segments such that $$$x \le y \le z$$$. If they can't form a non-degenerate triangle, line segments of lengths $$$x - 1$$$, $$$y$$$, and $$$z$$$, or of lengths $$$x$$$, $$$y$$$, and $$$z + 1$$$, can't form one either, so we don't need to try all the combinations: if we try $$$y$$$ as the middle one, we only need to try the maximum $$$x$$$ that is less than or equal to $$$y$$$ and the minimum $$$z$$$ that is greater than or equal to $$$y$$$. The easiest way to do so is to sort the line segments and try every consecutive $$$3$$$.

Code : http://pastebin.com/NsCkbQFS

Time complexity : $$$O(n \log(n))$$$.
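
A minimal sketch of this solution:

#include <bits/stdc++.h>
using namespace std;

int main()
{
    int n;
    scanf("%d",&n);
    vector<long long> a(n);
    for (auto &x:a)
        scanf("%lld",&x);
    sort(a.begin(),a.end());
    for (int i=0;i+2<n;i++)
        if (a[i]+a[i+1]>a[i+2]) // strict inequality = non-degenerate
        {
            puts("YES");
            return 0;
        }
    puts("NO");
    return 0;
}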

Second solution :-

Depending on the note from the first solution, if we try to generate a sequence such that, after sorting, every $$$3$$$ consecutive line segments form a degenerate triangle, it will be $$$1, 1, 2, 3, 5, 8, 13, ...$$$, which is the Fibonacci sequence. Fibonacci is a fast-growing sequence: $$$fib(45) = 1134903170$$$. Notice that Fibonacci maximizes $$$n$$$ with "NO" as the answer, which means the answer is indeed "YES" for $$$n \ge 45$$$. For $$$n < 45$$$, you can do the naive $$$O(n^3)$$$ solution or the first solution.

Code : http://pastebin.com/82XcJfgp

Let $$$x$$$ be the number that satisfies these inequalities:-

$$$fib(x) \le maxA_i$$$.

$$$fib(x + 1) > maxA_i$$$.

Time complexity : $$$O(x^3)$$$ or $$$O(x \log(x))$$$.

Problem author : me.

Solutions author : me.

Testers : me and mahmoudbadawy.

766C - Mahmoud and a Message

Let $$$dp[i]$$$ be the number of ways to split the prefix of $$$s$$$ ending at index $$$i$$$ into substrings that fulfill the conditions. Let it be 1-indexed; our base case is $$$dp[0] = 1$$$, and our answer is $$$dp[n]$$$. Now let's calculate it for every $$$i$$$. Let $$$l$$$ be the minimum possible index such that the substring from $$$l$$$ to $$$i$$$ satisfies the condition, and let $$$x$$$ be a moving pointer: at the beginning, $$$x = i - 1$$$, and it decreases. Every time we decrease $$$x$$$, we recalculate $$$l$$$ depending on the current character: $$$l = max(l, i - a_{s[x]})$$$. While $$$x$$$ is greater than or equal to $$$l$$$, we add $$$dp[x]$$$ to $$$dp[i]$$$. To find the longest substring, find the maximum $$$i - x$$$. To find the minimum number of substrings, there is an easy greedy solution: find the longest valid prefix, delete it, and do the same again until the string is empty; the number of times this operation is repeated is the answer. Or see the dynamic-programming solution in the code.

Code : http://pastebin.com/4JiXSwfU

Time complexity : $$$O(n^2)$$$.

Try to find an $$$O(n)$$$ solution (I'll post a hard version of some problems on this blog soon).

Problem authors : me and mahmoudbadawy.

Solution authors : me and mahmoudbadawy.

Testers : me and mahmoudbadawy.

766D - Mahmoud and a Dictionary

Let's build a graph containing the words. For every relation in the input, add a new edge with weight $$$0$$$ if the words are equal and $$$1$$$ if they are opposites. If adding the edge doesn't make the graph cyclic, the relation is valid; otherwise, it may be valid or invalid, so we'll answer those offline. Check whether adding an edge makes the graph cyclic using union-find, like in Kruskal's algorithm, and suspend answering the relations that would make the graph cyclic. Now we have a forest of trees. Let $$$cum[i]$$$ be the xor of the weights on the edges of the path from the root of node $$$i$$$'s component to node $$$i$$$; calculate it using dfs. To find the relation between $$$2$$$ words $$$u$$$ and $$$v$$$, check whether they are in the same component using union-find. If they aren't, the answer is $$$3$$$; otherwise, the answer is $$$(cum[u] \oplus cum[v]) + 1$$$. Now, to answer the suspended relations, find the relation between the $$$2$$$ words and check whether it matches the input relation; then answer the queries.

Code : http://pastebin.com/WqwduaYs
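
The invariant is easiest to see as a weighted union-find (parity on the path to the root). This is a sketch of that core only: it handles the cyclic case online instead of offline as in the editorial, and the I/O wrapping (mapping words to indices, printing the answers) is omitted. The demo words in main are hypothetical:

#include <bits/stdc++.h>
using namespace std;

struct DSU
{
    vector<int> par,rel; // rel[u] = xor of edge weights from u up to its root
    DSU(int n):par(n),rel(n,0)
    {
        iota(par.begin(),par.end(),0);
    }
    pair<int,int> find(int u) // returns {root, parity of the whole path}
    {
        if (par[u]==u)
            return {u,0};
        auto [r,p]=find(par[u]);
        rel[u]^=p;  // fold the parent's path into u's edge
        par[u]=r;   // path compression
        return {r,rel[u]};
    }
    bool unite(int u,int v,int w) // impose parity w between u and v
    {
        auto [ru,pu]=find(u);
        auto [rv,pv]=find(v);
        if (ru==rv)
            return (pu^pv)==w; // cyclic case: valid iff consistent
        par[ru]=rv;
        rel[ru]=pu^pv^w;
        return true;
    }
    int query(int u,int v) // 1 = equal, 2 = opposite, 3 = unknown
    {
        auto [ru,pu]=find(u);
        auto [rv,pv]=find(v);
        return (ru==rv)?(pu^pv)+1:3;
    }
};

int main()
{
    DSU d(3); // hypothetical words 0="hot", 1="cold", 2="icy"
    d.unite(0,1,1);              // hot and cold are opposites
    d.unite(1,2,0);              // cold and icy are equal
    printf("%d\n",d.query(0,2)); // prints 2: hot and icy are opposites
    return 0;
}

For an input relation of type $$$t$$$ between words $$$u$$$ and $$$v$$$, unite(u, v, t - 1) returns whether the relation is valid, and query(u, v) returns the $$$1$$$/$$$2$$$/$$$3$$$ code directly.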

Time complexity : O((n + m + q)log(n) * maxL) where maxL is the length of the longest string considering that union-find works in constant time.

Problem author : mahmoudbadawy.

Solution author : me.

Testers : me and mahmoudbadawy.

Wait for a hard version of this problem.

766E - Mahmoud and a xor trip

If we have an array $$$ans[i]$$$ which represents the number of paths that make the $$$i$$$-th bit set to $$$1$$$, our answer will be $$$\sum_{i} 2^i \cdot ans[i]$$$.

Let $$$arr[i][x]$$$ be the binary value of the $$$x$$$-th bit of the number attached to node $$$i$$$ (just to make work easier).

There are $$$2$$$ types of paths from node $$$u$$$ to node $$$v$$$, where $$$u$$$ is less in depth than or equal to $$$v$$$: paths going down, which are paths with $$$lca(u, v) = u$$$, and other paths. Let's root the tree at node $$$1$$$ and dfs; let the current node be $$$node$$$. Let $$$dp[i][x][j]$$$ be the number of paths going down from node $$$i$$$ that make the $$$x$$$-th bit's value equal to $$$j$$$. A path going down from node $$$i$$$ is a path going down from a child of $$$i$$$ with node $$$i$$$ concatenated to it, so let's calculate our dp that way. A path that isn't going down is a concatenation of $$$2$$$ paths going down from $$$lca(u, v)$$$. Now we can calculate $$$ans$$$. See the code for the formulas.

Code : http://pastebin.com/n2a3kijD

Time complexity : $$$O(n \log(a_i))$$$.

Problem author : me.

Solution author : me.

Tester : mahmoudbadawy.
