Hello Codeforces!
On Aug/27/2022 17:35 (Moscow time) Educational Codeforces Round 134 (Rated for Div. 2) will start.
The series of Educational Rounds continues as a Harbour.Space University initiative! You can read the details about the cooperation between Harbour.Space University and Codeforces in the blog post.
This round will be rated for the participants with rating lower than 2100. It will be held on extended ICPC rules. The penalty for each incorrect submission until the submission with a full solution is 10 minutes. After the end of the contest you will have 12 hours to hack any solution you want. You will have access to copy any solution and test it locally.
You will be given 6 or 7 problems and 2 hours to solve them.
The problems were invented and prepared by Adilbek adedalic Dalabaev, Vladimir vovuh Petrov, Ivan BledDest Androsov, Maksim Neon Mescheryakov and me. Also huge thanks to Mike MikeMirzayanov Mirzayanov for great systems Polygon and Codeforces.
Good luck to all the participants!
Our friends at Harbour.Space also have a message for you:
Harbour.Space University has partnered with Vodafone Business, a company rich in tradition that specializes in telecommunications, and with Oneragtime, a fund-as-a-platform specializing in sourcing, financing and scaling early-stage tech startups from across Europe, to offer motivated tech talents Master's degree scholarships in Data Science and Front-end Development combined with work experience.
Candidates will be working on the following tasks:
Data Scientist at Vodafone:
- Data Science: Descriptive and predictive analytics: selection, definition, and execution of adequate algorithms, models, tests, visualizations, etc.
- Technology: Selection, definition, and execution of adequate programming languages/frameworks, file formats, data storage solutions, automatic data flows, etc.
- Architecture: Define and/or follow best practices for Big Data & Analytics use cases while extracting, transforming, storing, and feeding data to/from different data sources using on-premises solutions as well as public cloud services.
- Management: Report to project managers, clients, external and/or internal teams. Estimate resources and timelines for different tasks and follow up during the full project life cycle.
- Innovation: Awareness of and continuous training in state-of-the-art techniques, models, frameworks, and technical approaches to be applied to Data Science activities.
Front-End Developer at Oneragtime:
- Define the technology stack / tools to be used in the frontend
- Suggest projects that will improve the product or code base
- Write scalable, maintainable, reusable, and well-tested software that adheres to best practices
- Make technical time estimation on future software deliveries
- Document solutions with clear and concise explanations
- Collaborate with Product Owners, Growth Hacker, UX / Product Designers
All successful applicants will be eligible for a 100% tuition fee scholarship (22.900 €/year) provided by Vodafone Business for Data Science and by Oneragtime for Front-end Development.
CANDIDATE’S COMMITMENT
Study Commitment: 3 hours/day. You will complete 15 modules (each three weeks long) in one year. The daily class workload is 3 hours, plus homework to complete in your own time.
Work Commitment: 4+ hours/day. Immerse yourself in the professional world during your apprenticeship. You'll learn from the best and get to apply your newly acquired knowledge in the field from day one.
University requirements
- Bachelor's degree in the field of Mathematics, Statistics, Data Science, Computer Science or similar
- English proficiency
Work requirements
Data Science:
- Advanced knowledge and experience in SQL, Python, Spark/Scala, and bash
- Experienced use of Big Data technologies: Spark, HDFS, Kafka, etc.
- Hands-on experience with Data Science techniques: feature engineering,
Front-end Development:
- Proficient in HTML/JSX, CSS, and with strong fundamentals in JavaScript ES6
- Understand Vue
- Familiar with UX/UI principles: Figma
- Familiar with Webflow
- Expertise with Webpack, Gulp, and similar frontend build tools (npm)
- Proficient in unit testing tools, e.g. Jest or similar
- Proficient in either Sass, LESS, TailwindCSS, Bootstrap, Flexbox, or similar tools
- Understanding of REST API
- Familiarity with Docker is appreciated
- Familiar with how SQL works (bonus: if you understand PostgreSQL)
UPD: Editorial is out
I hope this time it's going to be a good round
Hoping for a balanced Edu round this time.
The previous Edu was so bad, with a bad problem set... Hope this one will be better than before <3
More like:
Educational Codeforces Round 134 [Rated for Div. 1]
After seeing the comments about previous bad edu rounds, I'm a little bit afraid of participating in this round.
New Specialist arriving.......
Why are you revealing my new title before the beginning of the contest?
It may be my new title.
It was a sarcastic comment.
Hope you become specialist in today's round, all the best!
Not in a sarcastic way, just a bit confident that I would make it this time. But whenever I am very close to becoming a specialist, my contest goes badly. I feel depressed, but I will not give up.
About the work and study opportunities... why would anyone with these skills do a Master's or an internship if they are not even paid enough to live in the city? I mean, with the required skills, you can get a real job and get paid more.
WAforces
How to solve C ?
Video Solution for Problem C
Iterate from the end of array a.
For minimum, for each $$$a_i$$$, we look for the smallest $$$b_j$$$ such that $$$b_j \geq a_i$$$. Note that $$$j \leq i$$$ (since it must always be valid to map $$$a_i$$$ to $$$b_j$$$). It's valid to map $$$a_i$$$ to $$$b_j$$$ because the elements between $$$a_j$$$ to $$$a_{i - 1}$$$ can be mapped to the elements $$$b_{j + 1}$$$ to $$$b_i$$$. Obviously, we can't map $$$a_i$$$ to anything smaller than $$$b_j$$$. So the minimum $$$d_i = b_j - a_i$$$, where $$$b_j$$$ is the smallest element in $$$b$$$ that's $$$\geq a_i$$$.
Maximum is trickier. Consider the first index $$$i$$$ where $$$a_{i + 1} > b_i$$$. Now for each $$$j \leq i$$$, I claim there will always exist a valid mapping from $$$a_j$$$ to $$$b_i$$$. Proof: for $$$j < i$$$, map $$$a_k$$$ to $$$b_{k - 1}$$$ for every $$$k$$$ from $$$j + 1$$$ to $$$i$$$ (this is valid because $$$i$$$ is the first index where $$$a_{i + 1} > b_i$$$, so $$$a_k \leq b_{k - 1}$$$ for all $$$k \leq i$$$), and map $$$a_j$$$ to $$$b_i$$$.
Furthermore, $$$b_i$$$ is the maximum choice for $$$a_j$$$ because $$$a_{i + 1}$$$ and above cannot map to $$$b_i$$$ and below, so by the pigeonhole principle, the elements $$$b_{i + 1}$$$ and above can only be occupied by $$$a_{i + 1}$$$ and above. Therefore, $$$d_j = b_i - a_j$$$, where $$$i$$$ is the first index for which $$$a_{i + 1} > b_i$$$ and $$$j \leq i$$$. We can then treat $$$a_{i + 1}$$$ as the new start, look for the next index with this property, and so on.
My submission: https://mirror.codeforces.com/contest/1721/submission/169851465
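For concreteness, here is a minimal C++ sketch of the approach described above (my own rewrite, not the linked submission); it assumes, as in the problem, that both arrays are sorted and that a valid array $$$d$$$ exists, so $$$a_i \leq b_i$$$ for every $$$i$$$:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        int n;
        scanf("%d", &n);
        vector<long long> a(n), b(n);
        for (auto &x : a) scanf("%lld", &x);
        for (auto &x : b) scanf("%lld", &x);
        vector<long long> mn(n), mx(n);
        // Minimum: for each a[i], the smallest b[j] with b[j] >= a[i];
        // since both arrays are sorted, a pointer that only moves right suffices.
        for (int i = 0, j = 0; i < n; i++) {
            while (b[j] < a[i]) j++;   // never runs past the end because a[i] <= b[i]
            mn[i] = b[j] - a[i];
        }
        // Maximum: split indices into blocks [l..r], where r is the first index with
        // r == n-1 or a[r+1] > b[r]; every a[j] in the block can be mapped to b[r].
        for (int l = 0; l < n; ) {
            int r = l;
            while (r + 1 < n && a[r + 1] <= b[r]) r++;
            for (int j = l; j <= r; j++) mx[j] = b[r] - a[j];
            l = r + 1;
        }
        for (int i = 0; i < n; i++) printf("%lld ", mn[i]);
        printf("\n");
        for (int i = 0; i < n; i++) printf("%lld ", mx[i]);
        printf("\n");
    }
    return 0;
}
```

Both passes are linear, so the whole thing is $$$O(n)$$$ per test case after reading the input.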
Who else thought that question B will get solved by BFS/Dijkstra's Algo?
I thought it was DP and wasted a lot of time, then noticed the number of test cases, thought for about 20 minutes and figured out the greedy, then made a mistake with a bracket and wasted another 30 minutes before finally figuring it out.
That would have taken way too much running time
You never want to go up or left. Any path constructed by moving only right or down takes n + m - 2 steps to reach (n, m). So I don't think Dijkstra or BFS is intended.
It was my first idea as well, but a quick glance at the bounds should immediately tell you that it's too slow. Since there's no "sum of $$$N$$$ is at most" condition and you can have up to $$$10^6$$$ cells to calculate distances to, the total runtime can get up to $$$10^{10}$$$, which is way too high.
I gave up on C. Can anyone at least explain the problem? Post-contest, obviously.
Video Solution for Problem C
Iterate from the end of array a.
What is wrong with my approach to problem C?
Approach: For maximum: for each b[i], find the largest j such that a[j] <= b[i], then mx[i] = b[j] - a[i].
For minimum: find the leftmost j such that b[j] >= a[i], then mn[i] = b[j] - a[i].
Counter-test: a = {1 1 1 5}, b = {4 4 5 6}. Following your logic, mx[1] = 4, but it's actually 5.
Input:
1
3
6 8 10
9 11 12
Your output:
3 1 1
5 4 2
The correct output:
3 1 1
6 4 2
One array D where D[1] has the maximum possible value is the following:
6 1 1
So after adding D[i] to A[i] for each 1 <= i <= n, the resulting array is:
12 9 11
And after we sort it in non-descending order we will obtain the same array as the one in the input.
For the minimum, can we use the lower_bound function?
yep
It's also possible to use the fact that the arrays are sorted and maintain a pointer that only moves rightwards.
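A tiny self-contained illustration of both options (made-up sorted data, not tied to any submission): lower_bound finds the smallest $$$b_j \geq a_i$$$ in $$$O(\log n)$$$ per query, while the rightward-only pointer does the same in amortized $$$O(1)$$$.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    // made-up sorted arrays for illustration only
    vector<long long> a = {2, 3, 7}, b = {4, 5, 8};
    int n = a.size();
    for (int i = 0, j = 0; i < n; i++) {
        // binary-search variant: smallest b[j] >= a[i]
        long long viaLowerBound = *lower_bound(b.begin(), b.end(), a[i]) - a[i];
        // two-pointer variant: j only ever moves to the right
        while (b[j] < a[i]) j++;
        long long viaPointer = b[j] - a[i];
        printf("%lld %lld\n", viaLowerBound, viaPointer);  // both print the same minimum d_i
    }
    return 0;
}
```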
The first Edu round where I finally did well and solved ABC unexpectedly :>
How to solve D?
same question here
Solve for bits from most significant to least significant.
For any bit b, iterate through a and store the prefix in a multiset. Set all bits to 0 except the bits you've already set, and of course, this current bit.
Iterate through b and try to find the inverse prefix in the multiset and remove it. Set all bits to 1 except the bits you've already set, and of course, the current bit.
(By inverse prefix, we mean the prefix that when xor'd with the key in the map will produce 1 on every bit. So to find an inverse prefix, take a prefix, and xor it with the 1-mask to get the actual prefix).
If the multiset has nothing left after you're done, you can set this bit. Store the info that this bit is already set and move on.
Probably my implementation will get hacked, but here it is: 169888210
My solution has a similar idea, but I personally find it easier to understand:
For each bit i from most to least significant: sort a increasing and b decreasing. Compute the answer for the current state of a and b; let's call it k.
If the ith bit of k is set, then set the final answer's ith bit to 1.
Otherwise, set all ith bits in a and b to 0.
This works, because you greedily only consider bits which are used in the final answer when sorting the arrays and disregard all other bits. This solution works in $$$O(32 \cdot n \log n)$$$
code
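In case the spoiler above is unavailable, here is a short sketch of that greedy as I understand it (my own code, assuming $$$a_i, b_i < 2^{30}$$$ as in the statement, not the commenter's submission):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        int n;
        scanf("%d", &n);
        vector<long long> a(n), b(n);
        for (auto &x : a) scanf("%lld", &x);
        for (auto &x : b) scanf("%lld", &x);
        long long ans = 0;
        for (int bit = 29; bit >= 0; bit--) {
            sort(a.begin(), a.end());                  // a increasing
            sort(b.rbegin(), b.rend());                // b decreasing
            long long k = ~0LL;                        // AND of a[i] ^ b[i] over all i
            for (int i = 0; i < n; i++) k &= a[i] ^ b[i];
            if (k >> bit & 1)
                ans |= 1LL << bit;                     // keep this bit in the final answer
            else {
                for (auto &x : a) x &= ~(1LL << bit);  // discard the bit everywhere
                for (auto &x : b) x &= ~(1LL << bit);
            }
        }
        printf("%lld\n", ans);
    }
    return 0;
}
```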
Can you explain why we have to sort the arrays in such a way?
You want to partition $$$a$$$ into 2 subsets:
all $$$a_j$$$ with the ith bit 1
all $$$a_j$$$ with the ith bit 0
You do the same for b
For the highest bit it's easy to see that sorting will work to partition them into 2 subsets, because all $$$a_j$$$ with the ith bit 1 are strictly greater than all $$$a_k$$$ with the ith bit 0. So sorting a in increasing order will put all $$$a_j$$$ with the ith bit 0 in the lower part and all $$$a_j$$$ with the ith bit 1 in the higher part.
In order to increase the xor you greedily match all $$$a_j$$$ with ith bit 1 with $$$b_j$$$ with ith bit 0 and vice versa.
Now as you move down to the lower bits, you want to keep the matching of the previously sorted partitions (if they are part of the final answer), so you keep those bits as they were. Otherwise you erase them.
So to answer your original question (i hope this clears it up a bit):
Sorting a increasingly and b decreasingly matches the numbers with 0s at the ith bit in a with the numbers with 1s at the ith bit in b, by putting the 0s ahead of the 1s in each partition of a (and reversed in b).
Thank you very much!
great solution thank you
D is annoying implementation, unless I'm missing something obvious
You are missing
You can mask bits
Really? The implementation is fairly straightforward (though somewhat long) if you use a
set<pair<vector<int>, vector<int>>>
Right, I see now. I used
set<pair<int, int>>
and overcomplicated everything
In problem E, Prefix Function Queries, why am I getting TLE on test 17? I used the concept of the GFG problem. It's clearly O(|s| + q*|t|).
Submission link
The issue is that while loop. Computing prefix function is amortized $$$\mathcal O(n)$$$, but when you have queries and can "rollback" some suffix of the string, you break the amortized analysis and devolve to $$$\mathcal O(nq)$$$.
Thanks. Then how to solve E?
To speed up the while loop, we note that the loop is just repeatedly jumping
len = lps[len - 1]
until we find a character that matches. Thus, we can compute a jump table jump[i][c]
that marks the first index we jump to from index $$$i$$$ before reaching character $$$c$$$. This gives us an $$$\mathcal O((|s| + q|t|) \cdot \Sigma)$$$ solution where $$$\Sigma = 26$$$ is the alphabet size. Submission for reference
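For reference, a small sketch of how such a table can be precomputed in the standard prefix-automaton form (my own formulation on a hypothetical string, not necessarily identical to the table in the linked submission). The query part would then walk this table while the matched length stays inside $$$s$$$, and fall back to at most $$$|t| \leq 10$$$ direct comparisons once it reaches the appended characters.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    string s = "abacaba";                      // hypothetical input string
    int n = s.size();
    vector<int> pi(n, 0);
    for (int i = 1; i < n; i++) {              // standard prefix function
        int len = pi[i - 1];
        while (len && s[i] != s[len]) len = pi[len - 1];
        if (s[i] == s[len]) len++;
        pi[i] = len;
    }
    // go[i][c]: new matched length if the current matched length is i and we read c,
    // i.e. the result of repeatedly following len = pi[len - 1] until a match (or 0).
    vector<array<int, 26>> go(n);
    for (int i = 0; i < n; i++)
        for (int c = 0; c < 26; c++) {
            if (s[i] == 'a' + c) go[i][c] = i + 1;
            else go[i][c] = (i == 0) ? 0 : go[pi[i - 1]][c];
        }
    printf("go[%d]['b' - 'a'] = %d\n", n - 1, go[n - 1]['b' - 'a']);
    return 0;
}
```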
I'm in the same situation as you. 169890422
This is far from "clearly" $$$O(|s| + q|t|)$$$.
Look at this code path. The
len = lps[len - 1]
line might happen many times as $$$i$$$ is not increased. The "vanilla" KMP will run in $$$O(|s|)$$$ because there is some amortized analysis saying it does. But now that you have all these different queries, it may happen that the parts of KMP that took a long time will happen every time.
D is so obvious, but the memory limit and implementation is so fucking annoying.
B was very irritating
you are too
Can anybody tell me where's the solution?
thanks!
Or is there no solution yet?
I think the solutions (if you mean the tutorial/editorial) will be published after the open hacking phase.
Oh, thanks
Actually they're just lazy. Gotta wait more :(
I would like to report that problem D of this round had already been discussed (main idea as well as code) on StackOverflow over a year ago, and I believe many have used this, which would give them an unfair advantage in today's round.
Can someone explain an easy way to implement C? I was doing it with sets but I am getting TLE on TC 5. My solution was to check, for every a_i, whether there exists a range of values b_l..b_r greater than or equal to a_i; the smallest d_i would be b[l] - a_i and the largest d_i would be b[r] - a_i.
Your idea is correct. You can optimize it with the two-pointers technique.
Anyone care to explain their solution to D?
Be greedy on the MSB and check if it's possible. If you are trying to make ans, then you only need to care about the bits which are set in ans.
So iterate from i = 31 to 0.
Then check if it is possible and update ans = tar accordingly. To check if it is possible, notice that you only care about the bits which are set in the target variable. Store tar&b[i] in a freq map, and for each a[i] check whether (a[i]&tar)^tar exists in the map.
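A minimal sketch of that feasibility check and the surrounding greedy, based on the description above (names like possible and tar are my own, not from a specific submission):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Can we pair every a[i] with a distinct b[j] so that (a[i] ^ b[j]) & tar == tar?
bool possible(const vector<int>& a, const vector<int>& b, int tar) {
    map<int, int> freq;
    for (int x : b) freq[x & tar]++;              // store tar & b[i]
    for (int x : a) {
        int need = (x & tar) ^ tar;               // the masked partner a[i] requires
        auto it = freq.find(need);
        if (it == freq.end() || it->second == 0) return false;
        it->second--;
    }
    return true;
}

int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        int n;
        scanf("%d", &n);
        vector<int> a(n), b(n);
        for (auto &x : a) scanf("%d", &x);
        for (auto &x : b) scanf("%d", &x);
        int ans = 0;
        for (int bit = 29; bit >= 0; bit--)       // greedy from the most significant bit
            if (possible(a, b, ans | (1 << bit))) ans |= 1 << bit;
        printf("%d\n", ans);
    }
    return 0;
}
```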
Is my approach for E correct (didn't have time to code it):
First create a hash for each prefix of the string so that we can efficiently compute hashes of substrings. Now for each prefix check whether it is a suffix of the original string in $$$O(N \log N)$$$: first store the hashes of prefixes in a map, then traverse all suffixes and try to match them to prefixes using the map.
Now, for each $$$i$$$, we traverse the intervals $$$(i,i)$$$, $$$(i,i+1)$$$, $$$(i,i+2)$$$, ..., $$$(i,i+9)$$$ and update the maximum prefix that ends at $$$i-1$$$ and is also a suffix of the original string, using the previously computed value. Now all that is left is, for each query and each prefix of $$$t$$$, to add the maximum value + its length if its hash exists in the original string.
Is this valid? It seems good when checking the samples by hand.
Can anyone tell me why it's giving TLE on the 3rd test case in (B)?
Your algorithm is probably O(m*n). m and n can be up to 1000, and there can be up to 10^4 test cases, so the worst-case running time would be 1000 * 1000 * 10^4 = 10^10 operations.
https://leetcode.com/problems/minimum-cost-homecoming-of-a-robot-in-a-grid/ recalled when I made that mistake in this problem
Nooo, I saw the timer was at 3 minutes, so I thought I had enough time to finish writing D, but the contest ended about 10 seconds after I continued writing code... Could've gotten to expert if I had solved D :(
But nevertheless, great round! Thanks for a nice Edu!
How to optimize D? I got TLE on test 4.
My algorithm: I maintained a vector of pairs of vectors. Initially we have the pair of the two given vectors. Now, from the highest bit to the lowest bit, check whether complementary bits are set in each pair of vectors. If yes, divide each vector into two vectors which can be paired with one another.
Submission link : 169890132
Make a check if the vectors are empty. In this case, don't push them into the set.
Thank you so much, how could I miss that :(
I missed that too at first (and probably a lot of other people did as well).
Thanks, man. I had the same idea and thought it was too slow, because of the TLE (and I can't seem to figure out how and why). You saved my day!
Strangely when I added this check into my code, my runtime only went from 1450ms to 1403ms: 169971754 vs 169857453. My solution checks for impossibility first before doing anything else though, so that might be it. Still, I'd expect the difference to be more substantial.
This is because you are using a set instead of a vector, and only one instance of empty arrays is inserted.
Oh shoot, thanks! Don’t know how that escaped me
Don't add pairs of empty vectors.
Instead of copying the divisions to new vectors, you can simply rearrange and operate on indices, similar to quicksort partitioning.
Submission: https://mirror.codeforces.com/contest/1721/submission/169880265
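As a small illustration of the in-place idea (made-up data, not the linked submission): stable_partition reorders a range by the current bit while preserving relative order, and the recursion then operates on index sub-ranges instead of freshly copied vectors.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    vector<int> a = {5, 12, 7, 9, 3};   // made-up values
    int bit = 3;                        // partition by bit 3 (value 8)
    auto mid = stable_partition(a.begin(), a.end(),
                                [&](int x) { return !(x >> bit & 1); });
    // [a.begin(), mid) now holds numbers with bit 3 == 0, [mid, a.end()) those with 1;
    // recurse on the two index ranges instead of copying them into new vectors.
    printf("split at index %d:", (int)(mid - a.begin()));
    for (int x : a) printf(" %d", x);
    printf("\n");
    return 0;
}
```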
I tried to solve it with the same idea. I also got TLE on test 4. I tried to make it better by using pairs of numbers that describe a range over the original vectors. I am getting MLE on test 15. I have no idea why it is using so much memory. Can anyone help me understand why my code is using so much memory when it should use much less? (Just a few vectors should not reach the memory limit.)
https://mirror.codeforces.com/contest/1721/submission/169912449
Edit: I found the problem. It was because I was pushing unnecessary pairs into my vector, just like pushing empty vectors.
Checking to see if you can achieve the bits before creating the new pairs would probably help. In your solution you potentially waste a lot of time pushing pairs of vectors into things and then discarding them.
[Deleted]
I'm always stuck at DIV2D
One day you will not.
Shouldn't Problem B be m rows and n columns? M is Y-axis and N is X-axis.
The $$$x$$$-axis goes from top to bottom almost always in CP.
Please no
I'd prefer they didn't use x for the Y-axis coordinate.
God, I misinterpreted problem B and thought that the robot was moving to a certain point given as part of the input... Wasted a lot of time due to this.
Making a checker for F seems very interesting. Does anyone know how it was done?
It probably only checks, for each 2-query, that the sum of edge indices matches the value printed on the last 1-query.
To check if the removal of a vertex results in a smaller maximum matching, it is sufficient to check that there is no alternating path from an unmatched vertex to its matched side (it doesn't matter which maximum matching we use for this check). This can be rephrased into dynamic connectivity.
Note that the checker doesn't need to be fully online, so there's lots of options for the problemsetter here. Whether the sum of edges is achievable is probably not tested, although there are some sums that are clearly not achievable, so there are at least some cases where a WA can be created this way.
It was kinda painful to implement.
The interactor doesn't check everything, but it checks a lot. First of all, the fact that deleting a subset of vertices decreases the matching is checked using the following assumptions:
Checking the sum of numbers in a matching is not perfect. You can, for example, print that the sum of edges is -1e18, and the interactor won't mind it. It relies on the fact that your solution doesn't know the moment when it has to process the query of type $$$2$$$ beforehand, so you can actually print some nonsense, but most probably you will get caught.
I submitted two solutions for problem B because the first one was hackable, and now that solution has been hacked. Will my most recent submission count? I am asking because my rank dropped after the old solution for problem B got hacked.
Finally I will make it to specialist :D
Anyone want to try to hack 169846064? Basically, I took the naive approach of recomputing the prefix function each query and put a trie on top of it to reuse computations. I made a hack where it runs in 763 ms (as compared to the current 311 ms) but maybe someone knows how to make a better test.
I think E is too easy and standard and there should be a harder "string suffix structures" problem for me to solve.
How to solve E?
Today's contest was really educational. But I couldn't solve more than two problems.
Can anyone please tell me why my code is giving a wrong answer for this test case?
Problem D
My idea is to start from the highest bit and try to pair a[i] and b[i] where the two bits differ.
This contest sucks. _HDH noob.
Brute force can get AC on problem E because $$$|t|$$$ is too small. AC Submission.
That's too bad. I think problem E is really a bad problem.
Bruh, I knew how to solve D with 1h left, but I got so many bugs that it took me until 1h past the contest to finally AC...
Same :(
Am I the only one who solved D with binary search?
I thought about doing that, but I ultimately decided against that. I assume you mean binary searching on the answer, right? I didn't do that b/c I don't think it's guaranteed that if a value x works, then all values below x work.
Yes, I did that. Actually, "if a value x works, then all values below x work" is an incorrect statement lol, but in the loop I checked whether we can make a number which contains a 1 in all bits where a certain m contains a 1. And it seems we can find the asked number bit by bit.
Seems that if you can check whether answer $$$x$$$ is valid, you can get the optimal answer directly. So binary search is unnecessary.
Am I the only one who did E using binary lifting and tree DP? All other submissions seem way less complicated.
Can anyone help me improve this solution? I am just trying to group the array elements based on the count of set bits.
Could anyone explain why my solution for C is giving TLE when it should clearly run in O(n log n)? Submission: 169910737
The time complexity of s.erase(value) on a multiset is O(log(size) + count), since it removes every copy of that value; with many equal elements this degrades to linear.
s.erase(iterator) removes a single element in amortized O(1), while s.erase(val) can be slow for the reason you explained.
Ok my bad,
Maybe the problem is that erasing from an empty set is undefined behavior.
https://mirror.codeforces.com/blog/entry/20553
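A tiny example of that pitfall (my own snippet, not from the discussion above): erasing by value removes every copy, while erasing by iterator removes exactly one element.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    multiset<int> s = {5, 5, 5, 7};
    s.erase(5);                            // erases ALL copies of 5: O(log n + count)
    printf("%d\n", (int)s.size());         // prints 1
    multiset<int> t = {5, 5, 5, 7};
    t.erase(t.find(5));                    // erases a single copy via an iterator
    printf("%d\n", (int)t.size());         // prints 3
    return 0;
}
```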
Sorry to pester you with my silly doubts, but using multiset.lower_bound() is still giving me TLE. Is this normal? Submission: 169937456
Your AC code 169944009
Thank you so much aryanc403 and indreshsingh. I was really sad after the contest since this question was quite easy but I couldn't figure out my mistake.
Can someone tell me what is wrong with my code?
Question C — Min-Max Array Transformation (Div 2)
Submission Id — https://mirror.codeforces.com/contest/1721/submission/169917249
Here is a counter-example:
The maximum difference for $$$a_1$$$ should be 0. To see why, observe that $$$a_2 = 2$$$ cannot be mapped to $$$b_1 = 1$$$ (since $$$2 > 1$$$), which means that {$$$a_2, a_3$$$} must map to {$$$b_2, b_3$$$}. By the pigeonhole principle, this means $$$a_1$$$ can't map to {$$$b_2, b_3$$$}. Therefore, $$$a_1$$$ must be mapped to $$$b_1$$$ no matter what, so both the minimum and maximum $$$d_1$$$ are equal to $$$0$$$.
The error in your code is how you compute the maximum. Greedily picking the largest possible $$$k$$$, which only decrements if
idx==k-1
or b[k-1] == b[idx]
is insufficient. My approach, instead, was to find the next index $$$k$$$ for which $$$a_{k + 1} > b_k$$$, which means indices $$$\leq k$$$ cannot be mapped to $$$b_{k + 1}$$$ or above, but they can be mapped to $$$b_k$$$ instead, to yield the maximum possible difference.
Hope you can understand my submission easily and can find your mistake. 169935846
is this problem D? image link
To not keep you waiting, the ratings are updated preliminarily. Tomorrow I will remove cheaters and update the ratings again!
Can anyone explain problem D?
The third test-case provides a nice hint. If my explanation is too long, feel free to skip to the end for the algorithm steps.
Suppose the answer is $$$x$$$. Consider the leftmost 1 bit (MSB) of $$$x$$$. Let's say it's at position $$$k$$$. Elements $$$a_i$$$ with a 0 in bit $$$k$$$ would be paired with a $$$b_i$$$ that has a 1 in bit $$$k$$$, and vice versa. This means that if you were to sort $$$a$$$ according to bit $$$k$$$ only (ignoring all higher bit positions), and if you were to reverse-sort $$$b$$$ according to bit $$$k$$$ only, then the bitwise XOR of $$$a_i$$$ and $$$b_i$$$ would always yield a 1 in bit $$$k$$$.
Okay, so it's easy to find the first $$$1$$$ in $$$x$$$, but how do we find the next $$$1$$$? Let's say that the second 1 from the left in $$$x$$$ is at bit position $$$k'$$$. This time, it's not enough to look only at bit $$$k'$$$ to pair $$$0$$$s in $$$a$$$ with $$$1$$$s in $$$b$$$ and vice versa, because we also need to make sure the relationship in bit $$$k$$$ is preserved.
Essentially, bit $$$k$$$ partitions $$$a$$$ and $$$b$$$ into two (where the $$$0$$$-partition in $$$a$$$ has the same size as the $$$1$$$-partition in $$$b$$$ and vice versa) and then within each partition pair, we need to make sure bit $$$k'$$$ achieves a similar partition. It follows that if we
(a) sort $$$a$$$ according to bit $$$k$$$ and reverse-sort $$$b$$$ according to bit $$$k$$$ as mentioned before,
(b) for the partition in $$$a$$$ that has the same bit $$$k$$$, we sort these elements according to bit $$$k'$$$, while for the corresponding partition in $$$b$$$ with the opposite bit $$$k$$$, we reverse-sort these elements according to bit $$$k'$$$
Then the bitwise XOR between $$$a$$$ and $$$b$$$ should now have both bit $$$k$$$ and bit $$$k'$$$ yielding $$$1$$$s. This partition idea may sound complicated to implement, until you realize that this is basically how numbers are sorted anyway! Sort by the most significant bit first, and then for those with the same MSB, we sort by the second most significant bit, and so on. It's just plain sorting! The only difference is that we skip the bit positions in which we're not able to form such pairs (i.e., some partition formed by $$$a$$$ does not have the same size as the corresponding partition formed by $$$b$$$), where $$$x$$$ will have 0 in such bit positions. But if we ignore these bit positions, then a sorted $$$a$$$ will map perfectly to a reverse-sorted $$$b$$$.
This leads to the following algorithm:
My submission: https://mirror.codeforces.com/contest/1721/submission/169955540
In my opinion, rounds created by awoo always contain interesting problems. Thanks a lot, keep creating.
Are you serious? Last edu round?
Educational rounds — more like negative delta rounds :(
Got JUST enough for specialist ^_^ Let's gooo
Damn! You're right at the border... Congrats on becoming unrated for the upcoming Div. 4 LOL.
haha, true
Ohhh, this was a nice contest.
Hope you will make contests again.
In problem 1 we can directly take a character array of size 4 and sort it, then just count how many times adjacent elements change.
Alternatively, we can just enter each of the four characters into a set. The answer is the size of the set minus 1. 169957444
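A minimal sketch of that set idea (assuming the 2x2 grid is given as two rows of two characters, as in the statement):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        char r1[3], r2[3];
        scanf("%s %s", r1, r2);
        set<char> colors = {r1[0], r1[1], r2[0], r2[1]};
        // every extra distinct color needs exactly one repaint operation
        printf("%d\n", (int)colors.size() - 1);
    }
    return 0;
}
```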
Hello. Can anyone help me understand why this solution WA2. Thank you!
https://mirror.codeforces.com/contest/1721/submission/169843966
(Please, don't downvote)
I haven't examined the code itself, but it seems you're calculating maximum incorrectly. Consider the following test-case:
The maximum difference for $$$a_1$$$ should be 0. To see why, observe that $$$a_2 = 2$$$ cannot be mapped to $$$b_1 = 1$$$ (since $$$2 > 1$$$), which means that {$$$a_2, a_3$$$} must map to {$$$b_2, b_3$$$}. By the pigeonhole principle, this means $$$a_1$$$ can't map to {$$$b_2, b_3$$$}. Therefore, $$$a_1$$$ must be mapped to $$$b_1$$$ no matter what, so both the minimum and maximum $$$d_1$$$ are equal to $$$0$$$.
I'm not sure what you were trying to do for maximum, but my approach was to find the next index $$$k$$$ for which $$$a_{k + 1} > b_k$$$, which means indices $$$\leq k$$$ cannot be mapped to $$$b_{k + 1}$$$ or above, but they can be mapped to $$$b_k$$$ instead, to yield the maximum possible difference. 169851465
Thank you.
Come on, why do Edu Round editorials take so long?
Can anyone tell me why I get WA on test 2? Please tell me how to correct it. Thanks a lot.
please don't downvote!
my record
I didn't understand your program code, but here is a counter-test:
The minimum values for $$$d_1$$$ and $$$d_2$$$ are both 0, since either of them can be mapped to $$$b_1 = 1$$$. You can find the minimum $$$d_i$$$ by simply looking for the smallest $$$b_j$$$ such that $$$b_j \geq a_i$$$.
Here's an interesting solution I found for E, not sure if it's new. Implementation here, featuring enough spaghetti to feed a family of five for a week.
We do some precomputation first. Use a
map<int, int> m
, where the keys are hashes of substrings of $$$S$$$, and the corresponding value is the maximal number $$$i$$$ such that the length $$$i$$$ prefix and suffix of $$$S$$$ are the same. To calculate this, we iterate $$$i$$$ independently of the substring, and check if the prefix and suffix have the same hashes in $$$O(1)$$$ time. If so, we update m
on the substrings $$$S[i+1..i+1+j]$$$ (technically their hash values). As we'll see later, we only care about $$$j \leq 10$$$, so this takes $$$O(|S|\log |S|)$$$ time, which is fast enough (we can use an unordered_map
or gp_hash_table
as well, but time is not an issue). Note that if we double-check our hash equality by directly comparing the substrings, we TLE on TC 36, which is all a's, since comparing strings/generating substrings is linear in the length of the string, so our program runs in $$$O(|s|^2)$$$ in that case.

We consider every $$$|S|+i$$$ more or less independently (I will use uppercase letters). Let $$$T_i=T[1..i]$$$ denote the length $$$i$$$ prefix of $$$T$$$. Suppose we have some prefix string $$$x$$$ of $$$S+T_i$$$ that's also a suffix string. There are three possibilities:
(1): Both the prefix and suffix strings lie in both $$$S$$$ and $$$T_i$$$. In this case, we can break $$$x$$$ up into $$$x_1,x_2,x_3$$$ (in that order) defined as follows: $$$x_1$$$ is the portion of the suffix string lying in $$$S$$$, $$$x_3$$$ is the portion of the prefix string lying in $$$T_i$$$, and $$$x_2$$$ is the remainder of $$$x$$$. To check if any $$$x$$$ of this type work, we iterate over $$$|x_3|:=j$$$, checking if the length-$$$j$$$ prefix and suffix of $$$T_i$$$ are the same. If so, then $$$x$$$ works if and only if the rightmost occurrence of $$$x_2$$$ in $$$S$$$ is as far right as it can be (i.e. occupies starting position $$$|S|-|x_2|$$$). In this case, the length-$$$|x_1|$$$ suffix of $$$S$$$ is also $$$x_1$$$, so we can use our precomputed
m
. There are at most $$$10$$$ values for $$$|x_3|$$$, so iterating over them is fast.

(2): The prefix string lies entirely in $$$S$$$, but the suffix string does not lie entirely in $$$T_i$$$. This is more or less a simpler version of (1). We split $$$x$$$ into $$$x_1,x_2$$$ such that $$$x_2$$$ is the portion of the suffix string lying in $$$T_i$$$, and $$$x_1$$$ is the remainder. Since $$$x_2$$$ must be $$$T_i$$$ by definition, $$$T_i$$$ must appear in $$$S$$$ as a substring. Further, $$$x_1$$$ must also be the suffix string of $$$S$$$ when this happens. We can use
m
for this as well. Note that there's only one possible value for $$$x_2$$$.

(3): The suffix string lies entirely in $$$T_i$$$. If $$$|S| \geq i$$$, then that means the prefix string lies entirely in $$$S$$$. There are at most $$$10$$$ values for $$$x$$$, so we can iterate $$$j$$$ from $$$1$$$ to $$$i$$$ (inclusive) and check if $$$S[1..j]=T_i[i-j+1...i]$$$. Otherwise, we have to join $$$S$$$ and $$$T_i$$$, since the prefix string could partially lie in $$$T_i$$$ as well, but the actual check is the same. Since this case only occurs when $$$|S|\leq 10$$$, joining $$$S$$$ and $$$T_i$$$ is quick (if you join $$$S$$$ and $$$T_i$$$ no matter what, you should TLE). Either way, you iterate through at most $$$10$$$ possible values for $$$x$$$. Ignore the comment at the top lol
To find the final answer, we iterate through all $$$O(1)$$$ cases and update our maximum value each time. The final time complexity should be $$$O((|s|+q)\log |s|)$$$ which works if you're not being stupid.
Auto comment: topic has been updated by awoo (previous revision, new revision, compare).
Can someone find out why on earth this code got RE? Thanks in advance.
Edit: solved. (Found that the sort caused an infinite loop :( )