steveonalex's blog

By steveonalex, 12 months ago, In English

I couldn't find any guide on this on Codeforces, which made me tweak out.

HackMD blog if you prefer it: HackMD: Debug interactive problem more effectively on Codeforces

Interactive problems are probably the bane of your existence on Codeforces: they don't appear very often, they can fail on the sample testcase without you even knowing, and debugging them is hard, among other things.

Thus, I will present an easy way (the easiest I know of) to test and debug interactive problems, which should also minimize the number of bugs you generate while implementing them.

1. How to write better code in general.

Problem statement:

Given $$$N$$$ ($$$N \leq 10^{18}$$$). Jury has hidden an integer point $$$(x, y)$$$ on the 2D plane. It is guaranteed that $$$0 \leq x, y \leq N$$$. You have to somehow guess where the point is. You can ask the following question:

- ? i j: You ask about the point $$$(i, j)$$$, and the jury gives you the Manhattan distance between the hidden point and the point that you asked.

When you are done asking, you should answer by ! i j, which means that you think the hidden point's coordinates are $$$(i, j)$$$.

Task: Find the hidden point in the least number of queries possible.

Example interaction:
Solution:

Ok, very easy problem, very cool. Here, let's look at how lil Timmy — our punching bag, a very young and gullible competitive programmer — would implement the solution.

lil Timmy's code:

He submitted the solution after only testing it on the sample input (which is just how a normal human tests interactive problems), and got a WA on test 2 faster than he could blink. What gives?

Turns out, he used int on line 6, which led to his demise: the jury's output can be as large as $$$2*10^{18}$$$, while int can only store numbers up to about $$$2*10^{9}$$$, leading to overflow.

And this is the problem with how people normally write interactive code. They write input and output directly inside of the code. While you could do fine with it, there are some obvious problems:

  1. The code looks really ugly.
  2. Humans err. Sometimes your finger slips and you write the interaction wrong, and suddenly you're scratching your head for tens of minutes, possibly hours, looking for that one pesky mistake.
  3. What if you read the interaction instruction wrong, and have to modify everything again? This simply isn't good code.
  4. It is also very unreadable. While this won't really matter in a 2-hour Codeforces round, it can hinder debugging in longer contests or team contests, and you will also have a hard time reading your own code when revisiting it after months.

And this is why you should just put everything inside of functions.

Fixed lil Timmy's code:
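The original code block isn't reproduced in this crosspost, so here is a sketch of what the fixed version might look like. The two-query strategy below (ask $$$(0, 0)$$$ and $$$(N, 0)$$$, then solve the small linear system) is just one possible solve(); the important part is that all I/O lives in ask() and answer(), with long long fixed in exactly one place:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Every interaction goes through these two helpers, so the distance type
// (long long, since the answer can reach 2e18) is fixed in one place.
ll ask(ll i, ll j){
    cout << "? " << i << " " << j << endl;   // endl also flushes
    ll d; cin >> d;
    return d;
}

void answer(ll i, ll j){
    cout << "! " << i << " " << j << endl;
}

// With d1 = ask(0, 0) = x + y and d2 = ask(n, 0) = (n - x) + y,
// we get x = (n + d1 - d2) / 2 and y = d1 - x.
pair<ll, ll> recover(ll n, ll d1, ll d2){
    ll x = (n + d1 - d2) / 2;
    return {x, d1 - x};
}

void solve(ll n){
    ll d1 = ask(0, 0);
    ll d2 = ask(n, 0);
    auto [x, y] = recover(n, d1, d2);
    answer(x, y);
}
```

main() just reads $$$n$$$ and calls solve(n). Note that the algebra is isolated in recover(), which is exactly the part you can unit-test offline.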

By putting every important interaction inside a function, we ensure that every operation behaves the same way, which helps avoid goofy bugs such as using int instead of long long, and lets us focus on getting the logic correct instead. This also makes our code much more readable.

Sure, it might seem finicky on this beginner problem, but the benefits become evident on problems that require a lot more implementation and logic, such as 2106G2 — Baudelaire (hard).

Additionally, this setup naturally leads to my way of debugging interactive code, which I shamelessly copied from how OI contests handle interactive problems.

2. How to debug interactive problems.

It's easy: just make a .h library file and write the stress tester there. There really isn't much to it. Here is an example of how I might write the stress tester for the above problem.

Library file: findpoint.h
Code file: findpoint.cpp
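The two files aren't embedded in this crosspost, so below is a sketch of what findpoint.h might contain. The grader replaces ask() and answer() with local versions that know the hidden point; a toy two-query solve() is inlined here only so the sketch runs standalone (in the real setup, solve() lives untouched in findpoint.cpp):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// ---- grader state: the hidden point and a query counter ----
ll hidden_x, hidden_y, query_count;

// Local stand-in for "? i j": answers with the Manhattan distance.
ll ask(ll i, ll j){
    ++query_count;
    return llabs(i - hidden_x) + llabs(j - hidden_y);
}

// Local stand-in for "! i j": reports whether the guess is correct.
bool answer(ll i, ll j){
    return i == hidden_x && j == hidden_y;
}

// Toy solver, included only to make this sketch self-contained.
bool solve(ll n){
    ll d1 = ask(0, 0);            // d1 = x + y
    ll d2 = ask(n, 0);            // d2 = (n - x) + y
    ll x = (n + d1 - d2) / 2;
    return answer(x, d1 - x);
}

// Stress test: random hidden points; fails on a wrong answer
// or on using more queries than expected.
bool stress(int tests, ll n, unsigned seed){
    mt19937_64 rng(seed);
    for (int t = 0; t < tests; t++){
        hidden_x = rng() % (n + 1);
        hidden_y = rng() % (n + 1);
        query_count = 0;
        if (!solve(n) || query_count > 2) return false;
    }
    return true;
}
```

In the real library file, main() would just call something like stress(1000, n, seed) and print which test failed; it is omitted here so the grader functions stay front and center.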

Explanation: I commented out the main function and the two interaction functions answer and ask in the main code file. Then, in the library file, I rewrote the two interaction functions and the main function to handle the stress testing.

Then just press compile and hey, it ran 1000 tests correctly. Now delete #include "findpoint.h" and uncomment the main, ask and answer functions, which takes literally 3 keystrokes (1.8 seconds in freedom units), and you are done.

Additional notes:

  • If you don't know what .h files are, and what I just did means, either ask ChatGPT or look up C++ docs. Come on, figuring out how to use .h files can't possibly be that hard.
  • You shouldn't have a single cin or anything equivalent in the main solve function to make it easier to stress test (run the code a lot of times, with different inputs). In my code, I read n and pass it into the function. The principle should be the same if the input is an array, a graph or a tree. Try not to use cin and instead pass everything by parameters in the function.
  • Of course, this won't have any use if implementing the grader is harder, more tedious, or both, than implementing the solution itself. These problems are few and far between on Codeforces however.
  • Yes, strategy matters. Even if most problems only take 5-20 minutes to set up the grader, 20 minutes is 20 minutes, and sometimes it is just not worth it.
  • You can write the grader directly inside the main code instead of a separate .h file, and indeed I did that in my early days too. However, I realized that it makes the code really messy and also interferes with the main logic. It's cleaner to separate what you don't intend to submit.
  • You could also do the #ifdef LOCAL shenanigans, but from my experience it takes like 2 seconds to comment and uncomment all of these, so no, I don't like that. You can do it if you want though.


By steveonalex, history, 15 months ago, In English

The official solution isn't exactly the clearest, so I guess I will present my own solution to this problem.

HackMD blog if you prefer it: HackMD: My Solution to IOI17 P6 — Ancient Books

1. Abridged Statement:

Given $$$n$$$ tables, each table has a book. The table is numbered from $$$0$$$ to $$$n-1$$$, and each book has a distinct value from $$$0$$$ to $$$n-1$$$ (let's denote the value of the book on the $$$i^{th}$$$ table as $$$p_i$$$). Initially, lil Johnson starts at table number $$$s$$$ ($$$0 \leq s \lt n$$$), and is not holding any book.

Lil Johnson has four possible operations:

  1. If lil Johnson isn't holding a book, and there is a book in the current table, then he can pick it up.
  2. If lil Johnson is holding a book, and there is no book in the current table, then he can put that book down.
  3. If lil Johnson is holding a book, and there is also a book in the current table, then he can replace the book.
  4. Lil Johnson can go to the left or to the right, i.e. if he is currently at table $$$s$$$, then he can go to table number $$$s-1$$$ or $$$s+1$$$ (obviously, he has to maintain $$$0 \leq s \lt n$$$ at all times).

The first three operations can be done instantaneously, and the fourth operation takes 1 second. What is the minimum time lil Johnson needs to sort the array $$$p$$$ and come back to where he started?

Example: $$$p = [1, 0], s = 0$$$. Here, Lil Johnson picks up the book at table 0, moves to table 1 to replace the book, returns to table 0, and places the book down. The total time is 2 seconds.

Constraint:

  • $$$p$$$ is a permutation consisting of distinct integers from $$$0$$$ to $$$n-1$$$.
  • $$$n \leq 10^6$$$, $$$0 \leq s \lt n$$$.
  • Time limit: 2 seconds. That means the complexity must be $$$O(n)$$$ or $$$O(n * log(n))$$$, or more if you are an optimization connoisseur.

2. The lower bound of the answer

For the sake of simplicity, we will assume that $$$p_0 \neq 0$$$, and $$$p_{n-1} \neq n-1$$$.

You can imagine that each time we do the operation (4), we are moving at most one book closer to its intended destination. Therefore, the answer would be at least $$$\sum^{n-1}_{i = 0}|p_i - i|$$$.

So can we always achieve this lower bound? No, but it is helpful to know when this can be achieved.

To make my life easier, just interpret "wasted time" as any additional time over the lower bound of the answer. For example, if the answer is 420 while the lower bound is 400, then we call those 20 seconds "wasted time".

One obvious case is when drawing the edges $$$i \rightarrow p_i$$$ forms a single cycle (or circle for uncultured peasants). That is because you can bring the book at table number $$$s$$$ to table number $$$p_s$$$, then the book at table number $$$p_s$$$ to table number $$$p_{p_s}$$$, and so on. This gives us an insight: just sort the cycles, and if you can conveniently switch cycles while you are at it, without incurring any cost, then great, just do it!


Figure 1: What the cycle looks like.

Another case is when you come across another cycle while sorting the current one. Then you can jump to that cycle and get back later.


Figure 2: What the "intersecting" cycles look like. As you can see, while lil Johnson is on the $$$[0, 2]$$$ cycle, he can actually hop onto the $$$[1, 3]$$$ cycle and sort that cycle, then come back. The operations would look like this.

  1. Pick up $$$p_0$$$ book.
  2. Go to table number $$$1$$$ and replace the book. Note that we are switching to cycle $$$[1, 3]$$$ here.
  3. Go to table number $$$3$$$, replace the book, then go back to table number $$$1$$$ and replace the book. Now we are back to the $$$[0, 2]$$$ cycle.
  4. Go to table number $$$2$$$, replace the book, then go back to $$$0$$$ and place the book down. The entire thing costs $$$8$$$ seconds.

So, we know that we do not have to waste any time switching to a different cycle, as long as we encounter a vertex of that cycle while traversing.

Therefore, our algorithm for checking whether the answer reached our lower bound would look something like this:
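The code block from the original post isn't shown here, so below is a possible sketch of that check (the function name is mine). It precomputes, for every index, the leftmost and rightmost index of its cycle, then grows the range from $$$[s, s]$$$ until it stops absorbing new cycles:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Grow [s, s] by repeatedly absorbing the full span of every cycle the
// current range touches; the fixed point of this process is the largest
// range reachable with zero wasted time.
pair<int,int> zero_waste_range(const vector<int>& p, int s){
    int n = p.size();
    // lo[i] / hi[i]: leftmost / rightmost index of the cycle through i
    vector<int> lo(n), hi(n);
    vector<bool> seen(n, false);
    for (int i = 0; i < n; i++) if (!seen[i]){
        int mn = i, mx = i;
        for (int j = i; !seen[j]; j = p[j]){
            seen[j] = true; mn = min(mn, j); mx = max(mx, j);
        }
        for (int j = i; ; j = p[j]){
            lo[j] = mn; hi[j] = mx;
            if (p[j] == i) break;
        }
    }
    int l = s, r = s, nl = lo[s], nr = hi[s];
    while (nl < l || nr > r){
        while (l > nl){ --l; nl = min(nl, lo[l]); nr = max(nr, hi[l]); }
        while (r < nr){ ++r; nl = min(nl, lo[r]); nr = max(nr, hi[r]); }
    }
    // the lower bound is attained iff [l, r] covers every i with p[i] != i
    return {l, r};
}
```

Each index is absorbed at most once, so the whole check runs in $$$O(n)$$$.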

Explanation of the algorithm:

  • Consider the "0 wasted movement" range, which is initially $$$[s, s]$$$. As we can reach every vertex inside this range, we also have access to every cycle that each vertex in the range belongs to i.e. we can switch to these cycles while we are sorting the current cycle, without incurring additional time.
  • Since we can jump to those cycles, we can also travel to the minimum and maximum elements of the cycles these vertices belong to. Just imagine that our range "eats" every cycle of each of its vertices, which makes it bigger, which allows it to "eat" more. The process ends when it cannot expand any more.

3. Snipe some easy subtask.

From the above code, we can determine how far to the left or right Lil Johnson can move without incurring additional wasted time.

Let's call a range "stretched" if it cannot be expanded further without incurring additional wasted time. For example, $$$p = [2, 3, 4, 1, 0, 5]$$$. The range $$$[1, 1]$$$ is not stretched, because Lil Johnson can expand the range to $$$[1, 3]$$$. Similarly, $$$[1, 3]$$$ is not stretched, because we can expand it to $$$[0, 4]$$$, but $$$[0, 4]$$$ and $$$[5, 5]$$$ are stretched.

When Lil Johnson arrives at the border of the "travellable" range, he can either move left or right and extend the current range at the cost of 2 seconds of wasted walking time (you have to go back too). That is, if the current range is $$$[l, r]$$$, then he can expand it to $$$[l-1, r]$$$ or $$$[l, r+1]$$$.

Let's not forget what our goal is. We need to sort the permutation while incurring the fewest cost, which means we need to expand our "travellable" range to $$$[0, n-1]$$$, while ensuring the cost is minimized.

This lets us arrive at a range DP solution, featuring three operations: expand to the left, expand to the right, and stretch the current range:
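The original code isn't shown here, so below is my own compact reconstruction of that DP. Since every expansion costs exactly 2 wasted seconds, the DP can be run as a BFS over stretched ranges; this is only meant for the small subtasks (the $$$s = 0$$$, $$$n \leq 10^6$$$ subtask needs a more careful linear implementation, and the stretch helper here is deliberately simple rather than fast):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Range DP over stretched ranges, run as a BFS: every transition
// ("step out of the range and come back") costs exactly 2 wasted
// seconds, so BFS order equals cost order. Assumes p[0] != 0 and
// p[n-1] != n-1, as in the post, so the goal is the range [0, n-1].
int min_wasted(const vector<int>& p, int s){
    int n = p.size();
    // lo[i] / hi[i]: leftmost / rightmost index of the cycle through i
    vector<int> lo(n), hi(n);
    vector<bool> seen(n, false);
    for (int i = 0; i < n; i++) if (!seen[i]){
        int mn = i, mx = i;
        for (int j = i; !seen[j]; j = p[j]){
            seen[j] = true; mn = min(mn, j); mx = max(mx, j);
        }
        for (int j = i; ; j = p[j]){
            lo[j] = mn; hi[j] = mx;
            if (p[j] == i) break;
        }
    }
    // free operation: absorb every cycle the range touches
    auto stretch = [&](int l, int r){
        for (bool grew = true; grew; ){
            grew = false;
            for (int i = l; i <= r; i++){
                if (lo[i] < l){ l = lo[i]; grew = true; }
                if (hi[i] > r){ r = hi[i]; grew = true; }
            }
        }
        return make_pair(l, r);
    };
    map<pair<int,int>, int> dist;
    queue<pair<int,int>> q;
    auto start = stretch(s, s);
    dist[start] = 0; q.push(start);
    while (!q.empty()){
        auto [l, r] = q.front(); q.pop();
        int d = dist[{l, r}];
        if (l == 0 && r == n - 1) return d;   // wasted seconds
        if (l > 0){
            auto nx = stretch(l - 1, r);
            if (!dist.count(nx)){ dist[nx] = d + 2; q.push(nx); }
        }
        if (r < n - 1){
            auto nx = stretch(l, r + 1);
            if (!dist.count(nx)){ dist[nx] = d + 2; q.push(nx); }
        }
    }
    return -1;   // cannot happen for a permutation
}
```

The final answer is the lower bound $$$\sum^{n-1}_{i = 0}|p_i - i|$$$ plus this wasted time. For example, $$$p = [1, 0, 3, 2]$$$, $$$s = 0$$$ has lower bound 4 and wasted time 2: the range $$$[0, 1]$$$ is stretched, and one paid expansion to the right absorbs the $$$[2, 3]$$$ cycle.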

Note that the number of distinct states in our dp table is $$$(s+1) * (n - s)$$$; therefore, this easily solves the $$$s = 0, n \leq 10^6$$$ subtask and the $$$n \leq 10^3$$$ subtask. That is an easy 70 points right there.

4. The last subtask (the only hard one)

Let's call the current (stretched) range we are working with $$$[l_0, r_0]$$$. Let us also assume there exists a cycle whose span $$$[x_1, x_2]$$$ strictly contains the current range, i.e., ($$$x_1 \lt l_0 \leq r_0 \lt x_2$$$).

Claim: Consider any "stretched" range $$$[l_x, r_x]$$$, such that it contains $$$[l_0, r_0]$$$, and it also contains a cycle whose span strictly contains $$$[l_0, r_0]$$$. Let's call them "beautiful". Now consider the smallest of them, let's call it $$$[l_1, r_1]$$$. We claim that regardless of which direction you expand $$$[l_0, r_0]$$$, $$$[l_1, r_1]$$$ will be the first "beautiful" range you reach.

Proof:

This means we just need to calculate the fastest way to get to $$$[l_1, r_1]$$$. Observe that since it is the first range in the way that strictly contains $$$[l_0, r_0]$$$, expanding left will not make expanding right faster, and vice versa.

Therefore, it is best to expand in one direction until we encounter $$$[l_1, r_1]$$$. So we can just simulate the process and choose whichever direction incurs the lower cost.


If no cycle strictly contains the current range, then the range can only expand independently to the left or right. In this scenario, we calculate the cost of moving entirely to the left and to the right separately, then sum these costs to determine the total time.

Time complexity: $$$O(n)$$$ or $$$O(n * log(n))$$$, depending on the implementation.

My code


By steveonalex, 17 months ago, In English

I didn't see any documentation or blog on this topic, so I guess it's Codeforces blog time.

Blog in HackMD if you prefer it

Prerequisite:

  • Convex Hull Trick.

1) The OG Convex Hull Trick:

This blog is not a tutorial to Convex Hull Trick, so I would assume that you guys already know about it before reading this. However, I'll briefly recap for the sake of completeness.

Problem Statement: Given $$$n$$$ lines of the form $$$y = a_i * x + b_i$$$ and $$$q$$$ random queries $$$x_j$$$. For each query, calculate the value $$$max(a_1*x_j + b_1, a_2*x_j + b_2, ..., a_n*x_j + b_n)$$$.

  • Constraint: $$$n, q \leq 3*10^5$$$, $$$a_i, b_i, x_j \leq 10^9$$$.
  • Time limit: 1s

This is a well-known problem, and you can find tutorials for it basically everywhere. In short, we sort all the lines in ascending order of their slope, and remove all of the redundant lines, as shown in my beautiful painting below (a line is redundant if it lies entirely "below" its two adjacent lines). This is solvable in $$$O(n)$$$ using a stack and some geometry.

Figure 1: How a redundant line looks.


From this, we observe that the resulting function is convex (since slopes are sorted). Each line is optimal on one continuous segment, starting and ending at its intersections with its two adjacent lines in the hull.

Figure 2: What the convex hull looks like, alongside its intersections.


Once the convex hull is constructed, the problem is basically just binary searching over the intersections of the convex hull to find the optimal line for the queried point $$$x_j$$$.

Sample Code
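The sample code isn't embedded in this crosspost, so here is a minimal sketch of the construction and query described above (my own reconstruction; the __int128 comparison is just to dodge overflow with $$$10^9$$$-sized coefficients):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

struct Line { ll a, b; };            // y = a*x + b

// With slopes l1.a < l2.a < l3.a, the middle line l2 is redundant iff,
// at the meeting point of l1 and l3, it is not strictly above them.
bool bad(const Line& l1, const Line& l2, const Line& l3){
    return (__int128)(l2.a - l1.a) * (l1.b - l3.b)
         <= (__int128)(l1.b - l2.b) * (l3.a - l1.a);
}

struct CHT {
    vector<Line> hull;               // slopes strictly increasing
    void build(vector<Line> lines){
        sort(lines.begin(), lines.end(), [](const Line& x, const Line& y){
            return x.a < y.a || (x.a == y.a && x.b < y.b);
        });
        for (const Line& ln : lines){
            // equal slopes: only the larger intercept can matter
            if (!hull.empty() && hull.back().a == ln.a) hull.pop_back();
            while (hull.size() >= 2 && bad(hull[hull.size()-2], hull.back(), ln))
                hull.pop_back();
            hull.push_back(ln);
        }
    }
    ll value(int i, ll x) const { return hull[i].a * x + hull[i].b; }
    // values along the hull at a fixed x rise then fall, so binary search
    // for the peak instead of storing intersection points explicitly
    ll query(ll x) const {
        int lo = 0, hi = (int)hull.size() - 1;
        while (lo < hi){
            int mid = (lo + hi) / 2;
            if (value(mid, x) < value(mid + 1, x)) lo = mid + 1;
            else hi = mid;
        }
        return value(lo, x);
    }
};
```

Usage: build once with all lines, then call query(x) per query.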

Time complexity: $$$O(n * log_2(n))$$$ for the preprocessing, and $$$O(log_2(n))$$$ for each query.

2) Extended CHT problem:

Problem Statement: Given $$$k$$$ and $$$n$$$ lines of the form $$$y = a_i * x + b_i$$$ and $$$q$$$ random queries of the form $$$x_j$$$. First, we denote $$$c_i = a_i * x_j + b_i$$$. For each query $$$x_j$$$, find the $$$k$$$ largest values of the array $$$c$$$.

  • Constraint: $$$n, q \leq 3*10^5$$$, $$$k \leq 10$$$, $$$a_i, b_i, x_j \leq 10^9$$$.
  • Time limit: 5s.

Since the lines in the convex hull are sorted by slope, we observe that the further a line is from the queried point, the less relevant it is.

But why is that? Let's consider two adjacent lines $$$(b)$$$ and $$$(c)$$$ to the left of $$$x_j$$$ ($$$(c)$$$ is further from $$$x_j$$$). These two lines are equal at their intersection point, but because $$$(b)$$$'s slope is greater than $$$(c)$$$'s, the value of $$$(b)$$$ at $$$x_j$$$ ends up being greater.

Figure 3: Illustration of the lines $$$(b)$$$ and $$$(c)$$$ to the left of $$$x_j$$$, and how $$$(b)$$$ is more relevant than $$$(c)$$$.


Thus, we only need to focus on the $$$k$$$ nearest lines from $$$x_j$$$, both to the left and to the right.

Figure 4: How the algorithm may work.


However, there is a flaw in this approach. For example, for $$$k = 2$$$, what if the "redundant line" is actually the second largest line?

Figure 5: How the second largest line might not be on the convex hull.


There is an easy fix! We will keep track of all of the "redundant lines" from our first run of constructing the CHT data structure, and we will use these lines to make a second CHT. So for the previous example, it would look like this.

Figure 6: What the 2-layer CHT would look like.


Then we do the same thing for the second CHT, i.e. brute-force through the $$$k$$$ nearest lines to the queried point, both to the left and to the right.

Extending to the general case is pretty simple. We can just make $$$k$$$ CHTs, each one built from all of the redundant lines of the previous CHT. We know that this is sufficient, because on each layer we only need to go to the left and right at most $$$k$$$ times, and we only need to dive down at most $$$k$$$ layers (anything on the $$$k+1^{th}$$$ layer is just not needed, since all of the lines on the previous layers are better).

Sample Code
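The sample code isn't embedded in this crosspost; a compact sketch of the layered structure might look like this (my own reconstruction: each layer is built from the lines rejected by the previous one, and each query inspects the lines around the optimum of every layer, keeping the $$$k$$$ largest values overall):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

struct Line { ll a, b; };

// Middle line l2 is redundant iff, at the meeting point of l1 and l3,
// it is not strictly above them (__int128 avoids overflow).
bool bad(const Line& l1, const Line& l2, const Line& l3){
    return (__int128)(l2.a - l1.a) * (l1.b - l3.b)
         <= (__int128)(l1.b - l2.b) * (l3.a - l1.a);
}

// One hull layer; lines discarded during construction are collected
// so that the next layer can be built from them.
struct Layer {
    vector<Line> hull, rejected;
    void build(vector<Line> lines){
        sort(lines.begin(), lines.end(), [](const Line& x, const Line& y){
            return x.a < y.a || (x.a == y.a && x.b < y.b);
        });
        for (const Line& ln : lines){
            if (!hull.empty() && hull.back().a == ln.a){
                rejected.push_back(hull.back()); hull.pop_back();
            }
            while (hull.size() >= 2 && bad(hull[hull.size()-2], hull.back(), ln)){
                rejected.push_back(hull.back()); hull.pop_back();
            }
            hull.push_back(ln);
        }
    }
    ll value(int i, ll x) const { return hull[i].a * x + hull[i].b; }
    int best(ll x) const {               // index of the max line at x
        int lo = 0, hi = (int)hull.size() - 1;
        while (lo < hi){
            int mid = (lo + hi) / 2;
            if (value(mid, x) < value(mid + 1, x)) lo = mid + 1; else hi = mid;
        }
        return lo;
    }
};

struct MultiCHT {
    int k; vector<Layer> layers;
    void build(vector<Line> lines, int k_){
        k = k_;
        for (int i = 0; i < k && !lines.empty(); i++){
            Layer L; L.build(lines);
            lines = L.rejected;
            layers.push_back(L);
        }
    }
    // k largest values of a_i*x + b_i, in decreasing order
    vector<ll> query(ll x) const {
        vector<ll> cand;
        for (const Layer& L : layers){
            int p = L.best(x);
            int from = max(0, p - k), to = min((int)L.hull.size() - 1, p + k);
            for (int i = from; i <= to; i++) cand.push_back(L.value(i, x));
        }
        sort(cand.rbegin(), cand.rend());
        if ((int)cand.size() > k) cand.resize(k);
        return cand;
    }
};
```

Note that query returns values only; carrying line ids along is a mechanical change.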

Time complexity: $$$O(n * k * log_2(n))$$$ for the preprocessing, and $$$O(k^2 + k*log_2(n))$$$ for each query.

The complexity in both the preprocessing and querying could be further optimized, but I'll leave it as an exercise for readers.

Are there any problems that feature this algorithm? Well uhh... I don't know, this is like mythical stuff that you will probably never encounter all your life. But now you have :))


By steveonalex, 18 months ago, In English

I know this round is from like 2-3 months ago but screw that, let's dig it up because I just solved it recently. Also, I haven't seen anyone doing the same thing as me.

1) The problem statement:

Problem link: 2003F — Turtle and Three Sequences

Given two positive integers $$$n$$$, $$$m$$$ ($$$n \geq m$$$), and three sequences $$$a$$$, $$$b$$$, $$$c$$$ of size $$$n$$$. Find a sequence of indices $$$p_1, p_2, ..., p_m$$$ of length $$$m$$$, such that $$$p_1 \lt p_2 \lt ... \lt p_m$$$, $$$a_{p_1} \leq a_{p_2} \leq ... \leq a_{p_m}$$$, and $$$b_{p_i} \neq b_{p_j}$$$ $$$\forall i \neq j$$$, maximizing $$$c_{p_1} + c_{p_2} + ... + c_{p_m}$$$.

Constraint:

  • $$$n \leq 3000$$$, $$$m \leq 5$$$.
  • $$$1 \leq a_i, b_i \leq n$$$, $$$1 \leq c_i \leq 10^4$$$.
  • Time Limit: 3 seconds.

You can read the intended solution in this link: Editorial.

Spoiler:

I would recommend you to try the problem out before scrolling further (or just don't, because the problem is rated 2800, so if you are a fellow blue hardstuck-er then you don't really stand much of a chance anyway).

2) My solution:

Prerequisite:

  • Fenwick Tree.
  • Constant optimization skill.

It would be best if we start from something manageable first. $$$m \leq 2$$$ is pretty simple, you literally just have to brute-force it; it literally cannot get any easier than this, so let's move on.

For $$$m = 3$$$, it is more challenging, but you can solve it by iterating through all pairs $$$(i, j)$$$ $$$(i \lt j)$$$, and for each pair, finding the best index $$$k$$$ to the right of $$$j$$$ in $$$O(n)$$$, so the total complexity is $$$O(n^3)$$$. To optimize this, observe that we don't really need to keep track of that many candidates for $$$k$$$. We only need to maintain an array $$$suff_j$$$, containing up to $$$m$$$ candidate tuples $$$(a_k, b_k, c_k)$$$ with the largest $$$c_k$$$, such that $$$k \gt j$$$, $$$a_k \geq a_j$$$, and all $$$b_k$$$ are distinct (because at worst, we only have to skip $$$m-1$$$ of the $$$m$$$ candidates). We can precalculate $$$suff_j$$$ in $$$O(n^2)$$$, so the final complexity for $$$m = 3$$$ is $$$O(n^2*m)$$$.
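To make the candidate-list idea concrete, here is a sketch of one way $$$suff_j$$$ could be computed (names are mine, and this direct pass is $$$O(n^2 * m)$$$ rather than the tighter bound mentioned above):

```cpp
#include <bits/stdc++.h>
using namespace std;

typedef array<int,3> T;   // (a_k, b_k, c_k)

// Maintain at most m tuples, all b's distinct, sorted by c descending.
// m candidates suffice: at worst m-1 of them get skipped later for
// repeating a b value that is already used.
void offer(vector<T>& cand, const T& t, int m){
    for (T& e : cand)
        if (e[1] == t[1]){                  // same b: keep the larger c
            if (t[2] > e[2]) e = t;
            sort(cand.begin(), cand.end(),
                 [](const T& x, const T& y){ return x[2] > y[2]; });
            return;
        }
    cand.push_back(t);
    sort(cand.begin(), cand.end(),
         [](const T& x, const T& y){ return x[2] > y[2]; });
    if ((int)cand.size() > m) cand.pop_back();
}

// suff[j]: up to m candidates with k > j and a_k >= a_j.
vector<vector<T>> build_suff(const vector<int>& a, const vector<int>& b,
                             const vector<int>& c, int m){
    int n = a.size();
    vector<vector<T>> suff(n);
    for (int j = 0; j < n; j++)
        for (int k = j + 1; k < n; k++)
            if (a[k] >= a[j]) offer(suff[j], {a[k], b[k], c[k]}, m);
    return suff;
}
```

The $$$pref_i$$$ array below is symmetric: scan to the left with $$$a_k \leq a_i$$$ instead.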

$$$m = 4$$$ is pretty much the same thing, except you also have to maintain the array $$$pref_i$$$, which contains up to $$$m$$$ tuples $$$(a_k, b_k, c_k)$$$ with the largest $$$c_k$$$, such that $$$k \lt i$$$, $$$a_k \leq a_i$$$, and all $$$b_k$$$ are distinct. Then iterate through all pairs $$$(i, j)$$$ just like in the previous algorithm, and iterate through all the candidate tuples in $$$suff_j$$$ and $$$pref_i$$$. The complexity of this is $$$O(n^2*m^2)$$$.

$$$m = 5$$$ is pretty tough, however. Previously, calculating the $$$m$$$ best candidates to the left of $$$i$$$ and to the right of $$$j$$$ was pretty easy, but how do we take it a step further? Iterating through all possible pairs of candidates (not just candidates, pairs of candidates) to the left of $$$i$$$ or to the right of $$$j$$$ is pretty infeasible, so we only have one choice: maintaining the array $$$between_{i, j}$$$, containing up to $$$m$$$ tuples with the largest $$$c_k$$$, such that $$$i \lt k \lt j$$$, $$$a_i \leq a_k \leq a_j$$$, and all $$$b_k$$$ are distinct.

We can use data structures for range maximum queries, like a Fenwick Tree, to calculate this array. Here's how: for each $$$i$$$, iterate $$$j$$$ from $$$i+1$$$ to $$$n$$$. For each $$$j$$$, get $$$m$$$ candidate tuples such that $$$a_k \leq a_j$$$, then update the Fenwick Tree with the tuple $$$(a_j, b_j, c_j)$$$. Once you have obtained the three arrays, you just do the same thing as in the previous subtask, except you also iterate through $$$between_{i, j}$$$. The complexity of this is $$$O(n^2*m*(m^2+log(n)))$$$.

As the complexity might suggest, you indeed have to go pretty crazy on the constant optimization here (probably anything other than Fenwick Tree won't even pass, who knows, my program runs in 2.93s, which is like one Minecraft tick away from getting TLE).

Can we extend this for $$$m = 6$$$? Nope, I think I've pushed this idea to its limit already. Which is scary to think about, because the author set the constraint on $$$m$$$ precisely equal to $$$5$$$, while the intended solution can go beyond that. Did the author know about this solution, and set $$$m = 5$$$ because of it? And if yes, why didn't they just write this solution into the editorial? Guess we'll never know.


By steveonalex, 3 years ago, In English


Hi guys, this is my first blog on Codeforces, so if there are any mistakes or suggestions, feel free to correct me down in the comment section. Anyway, I discovered a nice way to solve static range query problems using "Divide and conquer", and I'm eager to share it with you guys.

Pre-requisites:
• Prefix Sum.

Problem 1:

Given an array $$$A$$$ of $$$N (N \leq 10^{5})$$$ integers, your task is to answer $$$q (q \leq 10^{5})$$$ queries in the form: what is the minimum value in the range $$$[l, r]$$$?

For now, let's forget about Segment Tree, Square Decomposition, Sparse Table and such. There's a simple way to solve this problem without any use of these fancy data structures.

First, let's start with $$$L_{0} = 1$$$, $$$R_{0} = n$$$, and $$$M_{0} = \left\lfloor { \frac{L_{0} + R_{0}}{2} } \right\rfloor$$$. Let's just assume that every query satisfies $$$L_{0} \leq l \leq M_{0} \lt r \leq R_{0}$$$. We maintain two prefix-minimum arrays:
• $$$X[i] = min(A[i], A[i+1], ..., A[M_{0}-1], A[M_{0}])$$$
• $$$Y[i] = min(A[M_{0}+1], A[M_{0}+2], ..., A[i-1], A[i])$$$

The answer to the query $$$ [ l_{0} , r_{0} ] $$$ is simply $$$min(X[l_{0}], Y[r_{0}])$$$. But what about the queries that don't satisfy the aforementioned condition? Well, we can recursively do the same thing for $$$L_{1} = L_{0}, R_{1} = M_{0}$$$, and $$$L_{2} = M_{0} + 1, R_{2} = R_{0}$$$, hence the name "Divide and conquer". The recursion tree is $$$log N$$$ layers deep, each query appears in no more than $$$log N$$$ layers, and in each layer you do $$$O(N)$$$ operations. Therefore this algorithm runs in $$$O((N + q) * log N)$$$, with $$$O(N + q)$$$ memory.
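Here is a sketch of how this divide and conquer could be implemented for Problem 1 (offline, i.e. all queries known up front; 0-indexed, and the names are mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

struct Query { int l, r, id; };

// Offline divide & conquer RMQ as described above. A query straddling the
// midpoint is answered with a single min() of two running-minimum arrays;
// the remaining queries are passed down to the two halves.
void dnc(int lo, int hi, vector<Query> qs,
         const vector<int>& A, vector<int>& ans){
    if (qs.empty()) return;
    if (lo == hi){
        for (const Query& q : qs) ans[q.id] = A[lo];
        return;
    }
    int mid = (lo + hi) / 2;
    // X[i-lo] = min(A[i..mid]) for i <= mid, Y[i-lo] = min(A[mid+1..i])
    vector<int> X(hi - lo + 1), Y(hi - lo + 1);
    X[mid - lo] = A[mid];
    for (int i = mid - 1; i >= lo; i--) X[i - lo] = min(A[i], X[i - lo + 1]);
    Y[mid + 1 - lo] = A[mid + 1];
    for (int i = mid + 2; i <= hi; i++) Y[i - lo] = min(A[i], Y[i - lo - 1]);
    vector<Query> qleft, qright;
    for (const Query& q : qs){
        if (q.r <= mid) qleft.push_back(q);
        else if (q.l > mid) qright.push_back(q);
        else ans[q.id] = min(X[q.l - lo], Y[q.r - lo]);  // the one combine
    }
    dnc(lo, mid, qleft, A, ans);
    dnc(mid + 1, hi, qright, A, ans);
}
```

Call dnc(0, n - 1, queries, A, ans) once; each query is answered exactly at the level where it first straddles the midpoint.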

So... Why on earth should I use it?

While this technique has practically the same complexity as Segment Tree, there is an interesting property: You only perform the "combine" operation once per query. Here's a basic example to show how this property can be exploited.

Problem 2:

Define the cost of a set as the product of its elements. Given an array $$$A$$$ of $$$N (N \leq 2*10^{5})$$$ integers and a positive integer $$$k$$$ ($$$k \leq 20$$$), define $$$g(l, r, k)$$$ as the sum of the costs of all subsets of size $$$k$$$ of the set {$$$ A[l], A[l+1], ..., A[r-1], A[r] $$$}. Your task is to answer $$$q (q \leq 5 * 10^{5})$$$ queries of the form: what is $$$g(l, r, k)$$$ modulo $$$10^{9} + 69$$$?

Naive Idea
Now I'll assume that you've read the naive idea above (you should read it). Notice how combining the values of two ranges runs in $$$O(k^2)$$$. However, if one of the two ranges has length $$$1$$$, they can be combined in $$$O(k)$$$. This means the prefix arrays can be constructed in $$$O(N * k)$$$.
Why is this important? Let's not forget that in the naive Segment Tree idea, the bottleneck is the convolution calculation, and we wish to trade it for less expensive operations, which is exactly what our aforementioned divide & conquer technique helps with, since you only do the expensive "combine" operation once per query. And besides, unlike with a Segment Tree, you can calculate the answer right away using the formula $$$\sum_{t=0}^{k} X[l][t] * Y[r][k - t]$$$.
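The $$$O(k)$$$ extension and the combine formula can be checked on a tiny example. Below is a sketch (names are mine; the arrays hold elementary symmetric sums, i.e. sums of products of $$$t$$$-subsets, with the empty subset contributing $$$1$$$ at index $$$0$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll MOD = 1000000069;   // 1e9 + 69, as in the statement

// Extending the element set by v costs O(k):
// e_{S∪{v}}[t] = e_S[t] + v * e_S[t-1]   (mod MOD)
vector<ll> extend(vector<ll> e, ll v){
    for (int t = (int)e.size() - 1; t >= 1; t--)
        e[t] = (e[t] + v % MOD * e[t - 1]) % MOD;
    return e;
}

// Sums of products of all t-subsets of vals, for t = 0..k.
vector<ll> esp(const vector<ll>& vals, int k){
    vector<ll> e(k + 1, 0);
    e[0] = 1;                        // empty subset
    for (ll v : vals) e = extend(e, v);
    return e;
}

// Cross-midpoint combine: t elements from the left part, k-t from the right.
ll combine(const vector<ll>& X, const vector<ll>& Y, int k){
    ll s = 0;
    for (int t = 0; t <= k; t++) s = (s + X[t] * Y[k - t]) % MOD;
    return s;
}
```

For example, splitting $$$\{1, 2, 3, 4\}$$$ into $$$\{1, 2\}$$$ and $$$\{3, 4\}$$$ and combining with $$$k = 2$$$ reproduces $$$1 \cdot 2 + 1 \cdot 3 + 1 \cdot 4 + 2 \cdot 3 + 2 \cdot 4 + 3 \cdot 4 = 35$$$.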

This runs in $$$O(N * log N * k + q * (k + log N))$$$ time and $$$O(N * k + q)$$$ memory, which is much better than the Segment Tree idea.

Related problems:

Vietnamese problems:
• MofK Cup Round 1 — E: Xor Shift
• Contest Link: DHBB 2023 Upsolve Contest; Problem Link: DHBB — 11th Grader Division — Problem 3: HKDATA (click on the link and join the contest, then click on the problem link)

English problems:
• USACO 2020 January Contest, Platinum Problem 2. Non-Decreasing Subsequences

Thanks for reading :-D

UPD1: added an English problem, suggested by szaranczuk. UPD2: minor adjustments.
