robinyqc's blog

By robinyqc, 32 hours ago, In English

GPT-o3 recently achieved a 2700 rating in OpenAI's Codeforces test, which means fewer than 200 active users worldwide have a higher rating (less than a single page on the CF leaderboard).

I used to feel unbothered by this because there were many people with ratings higher than GPT. Back then, my rating was also higher. I knew it would surpass me eventually, but I didn't expect it to happen so quickly—just about a year. Now, I feel slightly panicked: the only advantage I have over GPT is that I'm cheaper (I heard o3 costs a lot to solve a problem, while I can solve them for free, lol). I genuinely enjoy solving problems, but that doesn't stop me from feeling anxious. While I know my passion will keep me engaged in competitive programming, I still want to seek support.

I recalled that Um_nik once wrote a blog titled On AI ruining "solving math problems with computer", which expressed his views on AI. I wonder if he anticipated AI advancing this quickly. Back then, I thought the article made sense, but now, replacing the "1600" in it with "2700" and "Div2A/B" with "Div1C/D" feels really strange and unsettling.

As described in that article, the CF community once again has people "running around yelling 'We will all die, somebody do something about AI.'" I used to think this was just unnecessary panic, but now I feel it's somewhat justified. I might soon become one of them!

These are my current thoughts: mild panic and anxiety. Many users with similar ratings to mine have expressed their own feelings. Kozliklekarsky's comment perfectly captures my sentiments: Is this the real life? Is this just fantasy?

I also have some thoughts about the future. Currently, o3 is prohibitively expensive. If its price drops in the future, people will undoubtedly start using it for competitive programming. (Don't tell me “o3's price will never drop”—it already achieved a 2700 rating, so I can't bring myself to believe it's impossible.)

This has both good and bad implications. On the bright side, GPT's level surpasses mine, which means I can use it as a resource to learn and improve. On the downside, it also means I could use GPT to cheat on CF and achieve a rating higher than my actual skill level. Sure, such a rating would be fake, but we're talking about a top 200 global ranking! Ordinary people have vanity. If I could achieve this ranking, I'd be laughing in my dreams. For an 1800 rating, I believe 85% of people could resist the temptation to cheat. But for a 2700 rating—my dream rating—how many could truly resist?

The influx of high-level accounts would also greatly impact the mindset of top competitors. Moreover, it feels like there's no way to prevent this from happening. For instance, if I asked GPT, “How do I solve this problem?” and then learned its approach and wrote the code myself, how could anyone prove it was AI-assisted? I admit this leans towards conspiracy thinking, but I can't help pondering it.

What are your thoughts on AI's current rating in competitive programming?


By robinyqc, 5 months ago, In English

Hi everyone!

Here I want to share a method for counting dominating pairs in high dimensions, where the number of points ($$$n$$$) is fairly small but the dimensionality ($$$k$$$) is large (like $$$n = 32768, k = 8$$$).

The problem of counting dominating pairs in $$$k$$$ dimensions is as follows:

For two $$$k$$$-tuples $$$a = (a_0, a_1, \dots, a_{k - 1})$$$ and $$$b = (b_0, b_1, \dots, b_{k - 1})$$$, define $$$dom(a, b) = \prod\limits_{i = 0}^{k - 1}[a_i \geq b_i]$$$, which equals $$$1$$$ if every $$$a_i$$$ is greater than or equal to $$$b_i$$$ (that is, $$$a$$$ dominates $$$b$$$), and $$$0$$$ otherwise.

Given $$$n$$$ $$$k$$$-tuples $$$p_0, p_1, \dots, p_{n - 1}$$$, for each $$$i \in [0, n)$$$ compute $$$g(i) = \sum\limits_{j = 0}^{n - 1} dom(p_i, p_j)$$$.

When $$$k = 2$$$, the problem simplifies to counting inversions, solvable directly using a Fenwick Tree.
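For reference, here is a minimal sketch of this $$$k = 2$$$ case with a Fenwick Tree (my own illustration, not code from the post, and all names in it are made up). It sorts the points by the first coordinate and, for each group of equal first coordinates, inserts the whole group into the tree before querying, so that ties are counted correctly; note that $$$j = i$$$ itself is counted, matching the definition of $$$dom$$$ above.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Illustrative sketch only. g(i) = #{ j : p[j][0] <= p[i][0] and p[j][1] <= p[i][1] }.
struct Fenwick {
    int n; vector<int> t;
    Fenwick(int n): n(n), t(n + 1, 0) {}
    void add(int i) { for (; i <= n; i += i & -i) t[i]++; }
    int query(int i) const { int s = 0; for (; i > 0; i -= i & -i) s += t[i]; return s; }
};

vector<long long> dominating_counts_2d(const vector<array<long long, 2>>& p) {
    int n = p.size();
    // Compress the second coordinate to 1..m.
    vector<long long> ys(n);
    for (int i = 0; i < n; i++) ys[i] = p[i][1];
    sort(ys.begin(), ys.end());
    ys.erase(unique(ys.begin(), ys.end()), ys.end());
    auto yid = [&](long long y) { return int(lower_bound(ys.begin(), ys.end(), y) - ys.begin()) + 1; };

    // Process points in increasing order of the first coordinate.
    vector<int> ord(n);
    iota(ord.begin(), ord.end(), 0);
    sort(ord.begin(), ord.end(), [&](int a, int b) { return p[a][0] < p[b][0]; });

    Fenwick fen(ys.size());
    vector<long long> g(n);
    for (int l = 0; l < n; ) {
        int r = l;
        while (r < n && p[ord[r]][0] == p[ord[l]][0]) r++;                     // group of equal first coordinates
        for (int s = l; s < r; s++) fen.add(yid(p[ord[s]][1]));                // insert the whole group first,
        for (int s = l; s < r; s++) g[ord[s]] = fen.query(yid(p[ord[s]][1]));  // then query, so ties are included
        l = r;
    }
    return g;
}
```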

For $$$k = 3$$$, one can consider a Divide and Conquer (D&C) approach, achieving a complexity of $$$O(n \log^2 n)$$$, which is excellent. Alternatively, a K-D Tree approach achieves $$$O(n \sqrt{n})$$$ complexity, also a good option.

For $$$k = 4$$$, nesting D&C inside D&C, or simply using a K-D Tree, is still possible but less attractive: the complexity advantage over brute force shrinks.

At $$$k = 5$$$, the Divide and Conquer approach is notably less efficient than brute force, and the K-D Tree offers no significant advantage over brute force.

Instead, the brute force itself can be optimized using bitset.

Brute Force Approach

An obvious brute force approach is to enumerate $$$i, j$$$ and check whether $$$p_i$$$ dominates $$$p_j$$$ in each dimension in $$$O(k)$$$ time. The overall complexity is $$$O(n^2 k)$$$.

A key property is that $$$dom[(a_{0..i + 1}), (b_{0..i + 1})]$$$ can equal $$$1$$$ only if $$$dom[(a_{0..i}), (b_{0..i})] = 1$$$. Therefore, the enumeration order can be swapped: first enumerate the dimension $$$d$$$, maintain the domination relation over the first $$$d$$$ dimensions, and extend it to dimension $$$d + 1$$$:

Let $$$b_{d, i, j} = dom(p_{i, 0..d - 1}, p_{j, 0..d - 1})$$$. Then:

$$$ b_{d + 1, i, j} \gets b_{d, i, j} \cdot [p_{i, d} \geq p_{j, d}] $$$

In practice, the index $$$d$$$ can be dropped: each $$$(i, j)$$$ pair evolves independently and layer $$$d + 1$$$ depends only on layer $$$d$$$, so the update can be done in place:

$$$ b_{i, j} \gets b_{i, j} \cdot [p_{i, d} \geq p_{j, d}] $$$

The total complexity is $$$O(n^2 k)$$$.
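As a stepping stone towards the bitset version, a sketch of this dimension-swapped brute force with a plain boolean matrix might look as follows (my illustration; the points are assumed to be given as an $$$n \times k$$$ array):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Illustrative sketch only: O(n^2 * k) brute force with the dimension loop outermost.
// b[i][j] = "p_i dominates p_j on the dimensions processed so far".
vector<long long> dominating_counts_bruteforce(const vector<vector<long long>>& p) {
    int n = p.size(), k = p.empty() ? 0 : p[0].size();
    vector<vector<char>> b(n, vector<char>(n, 1));      // empty prefix of dimensions: everything dominates
    for (int d = 0; d < k; d++)
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                b[i][j] &= (p[i][d] >= p[j][d]);        // keep the pair only if it survives dimension d
    vector<long long> g(n);
    for (int i = 0; i < n; i++)
        g[i] = count(b[i].begin(), b[i].end(), (char)1);
    return g;
}
```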

Bitset Optimization

This update can be optimized with bitset. Store $$$b$$$ as a 2D array bitset<N> b[N], where $$$b_i$$$ holds the bits $$$b_{i, j}$$$ for all $$$j$$$. To handle the transition condition $$$[p_{i, d} \geq p_{j, d}]$$$, sort the points by their $$$d$$$-th coordinate to obtain $$$q$$$, where $$$q_i$$$ is the point at index $$$l_i$$$ of $$$p$$$ (that is, $$$q_i = p_{l_i}$$$).

Enumerate $$$i$$$ from $$$0$$$ to $$$n - 1$$$ in this sorted order while maintaining a bitmask $$$r$$$ in which bit $$$j$$$ is set iff $$$p_{j, d} \leq q_{i, d}$$$ (when moving from one $$$i$$$ to the next, only a few new bits have to be set, and ties must be grouped so that equal coordinates are already included). Then update $$$b_{l_i} \gets \left(b_{l_i} \text{ AND } r\right)$$$.

The time complexity is $$$O(\frac{n^2 k}{w})$$$ and the memory usage is roughly $$$\frac{n^2}{z}$$$ bytes, where $$$w = 64$$$ is the machine word size and $$$z = 8$$$ is the number of bits per byte, with excellent constant factors.
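A sketch of the bitset version could look like this (my own illustration of the idea above, not the hidden implementation; MAXN is an assumed compile-time bound on $$$n$$$, and ties are handled with a second pointer so that equal coordinates are included in $$$r$$$ before any point of that value is updated):

```cpp
#include <bits/stdc++.h>
using namespace std;

const int MAXN = 32768;                 // assumed bound on n; memory is about n^2 / 8 bytes (128 MB here)
bitset<MAXN> b[MAXN], r;                // b[i] holds the bits b_{i,j}; r is the sweep mask

// Illustrative sketch only. Returns g(i) = number of j with p_i dominating p_j (j = i included).
vector<int> dominating_counts_bitset(const vector<vector<long long>>& p) {
    int n = p.size(), k = p.empty() ? 0 : p[0].size();
    assert(n <= MAXN && k >= 1);        // k >= 1 so the padding bits above n get cleared by the first AND
    for (int i = 0; i < n; i++) b[i].set();

    vector<int> ord(n);
    for (int d = 0; d < k; d++) {
        iota(ord.begin(), ord.end(), 0);
        sort(ord.begin(), ord.end(), [&](int a, int c) { return p[a][d] < p[c][d]; });
        r.reset();
        for (int i = 0, ptr = 0; i < n; i++) {
            // set bit m for every point m with p[m][d] <= p[ord[i]][d], ties included
            while (ptr < n && p[ord[ptr]][d] <= p[ord[i]][d]) { r[ord[ptr]] = 1; ptr++; }
            b[ord[i]] &= r;
        }
    }
    vector<int> g(n);
    for (int i = 0; i < n; i++) g[i] = b[i].count();
    return g;
}
```

The answer for point $$$i$$$ is simply the popcount of $$$b_i$$$ at the end.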

Space Optimization

If space constraints are strict, use grouping.

Split the points into groups of $$$S$$$ (where $$$S \geq w$$$) consecutive points $$$p_{l..l + S - 1}$$$, and process one group at a time, keeping only the values of $$$b_{l} \dots b_{l + S - 1}$$$ in memory. Analyzing the complexity:

  • Sorting takes $$$O(nk\log n)$$$ time;
  • Calculating $$$r$$$ for one group takes $$$O(nk)$$$ (a full sweep over all $$$n$$$ points for each dimension); with $$$\frac nS$$$ groups, this part costs $$$O(\frac{n^2k}{S})$$$ in total (without the space optimization there is only one group, so it is just $$$O(nk)$$$);
  • Calculating the array $$$b$$$ for each group takes $$$O(\frac{Snk}{w})$$$, resulting in a total complexity of $$$O(\frac{n^2k}{w})$$$.

Since $$$S \geq w$$$, the overall time complexity is still $$$O(\frac{n^2k}{w})$$$, while the memory usage drops to roughly $$$\frac{Sn}{z}$$$ bytes.
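Here is a standalone sketch of the grouped variant (again my illustration, not the post's hidden code; S is the group size, and the per-dimension sort orders are precomputed once so each group's sweep only looks them up):

```cpp
#include <bits/stdc++.h>
using namespace std;

const int MAXN = 32768;                 // assumed bound on n
const int S    = 1024;                  // group size, S >= machine word size; memory ~ S * MAXN / 8 bytes
bitset<MAXN> b[S], r;

// Illustrative sketch only: same answer as the full-memory version, but only S bitsets are kept at a time.
vector<int> dominating_counts_grouped(const vector<vector<long long>>& p) {
    int n = p.size(), k = p.empty() ? 0 : p[0].size();
    assert(n <= MAXN && k >= 1);
    // Precompute, for every dimension, the point indices sorted by that coordinate.
    vector<vector<int>> ord(k, vector<int>(n));
    for (int d = 0; d < k; d++) {
        iota(ord[d].begin(), ord[d].end(), 0);
        sort(ord[d].begin(), ord[d].end(), [&](int a, int c) { return p[a][d] < p[c][d]; });
    }
    vector<int> g(n);
    for (int lo = 0; lo < n; lo += S) {                  // process points lo .. hi - 1 as one group
        int hi = min(n, lo + S);
        for (int i = lo; i < hi; i++) b[i - lo].set();
        for (int d = 0; d < k; d++) {
            r.reset();
            for (int i = 0, ptr = 0; i < n; i++) {
                int cur = ord[d][i];
                while (ptr < n && p[ord[d][ptr]][d] <= p[cur][d]) { r[ord[d][ptr]] = 1; ptr++; }
                if (lo <= cur && cur < hi) b[cur - lo] &= r;     // only points of this group are updated
            }
        }
        for (int i = lo; i < hi; i++) g[i] = b[i - lo].count();
    }
    return g;
}
```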

Implementation (C++)

Related problem: CF1826E.

There is also a zh-cn version of this post. If you have any interest, try this link.


By robinyqc, 16 months ago, In English

Given a contiguous range of values $$$V$$$ and the length of the sequence, $$$n$$$, how can a sequence with no duplicates be generated uniformly at random? For a sequence $$$S$$$, we say it has no duplicates if and only if $$$\forall i\neq j, S_i\neq S_j$$$.

This is very useful for sampling. For example, you have an array $$$a$$$, and you want to select some samples to calculate their variance. The usual way is to generate an index sequence and pick the elements at the generated indices. For example, in Python we use random.sample to do this.

Here I will show some ways to generate such a sequence.

Method 1

Let's consider the simplest algorithm. We can put the entire value range directly into an array and then shuffle it. If we take the first $$$n$$$ elements, we get a generation complexity of $$$O(|V|)$$$. However, shuffling has a high constant factor, so the whole process can be seen as $$$O(|V|w)$$$, where $$$w$$$ is a constant factor. Is there a better approach?

Actually, there's an even simpler idea, but with a flaw: just randomly pick $$$n$$$ numbers to construct a sequence. The flaw here is the possibility of generating duplicate numbers. How can we fix this? A simple solution is to put the entire range of values into a treap, select elements from it, and then remove them. This approach would have a complexity of $$$O(|V| + n \log n)$$$, but with a large constant factor, which is not ideal. A better fix is also quite simple: put the whole range of values into a std::vector $$$\boldsymbol x$$$, then randomly choose an index $$$y$$$, put the corresponding element $$$\boldsymbol x[y]$$$ into the target vector $$$\boldsymbol z$$$, swap $$$\boldsymbol x[y]$$$ with the last element of $$$\boldsymbol x$$$, and then pop it. The total complexity is $$$O(|V| + n)$$$, but due to constants associated with operations like random number generation, the entire process can be considered as $$$O(|V| + nw)$$$. Since $$$n \leq |V|$$$, this approach is consistently no worse than the previous one, and may be better in certain cases.
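A minimal sketch of this swap-and-pop approach (my own illustration; vlo and vhi are assumed to be the endpoints of the range $$$V$$$, and $$$n \leq |V|$$$ is required):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Illustrative sketch only: uniformly generate n distinct values from [vlo, vhi]
// by a partial Fisher-Yates shuffle over the pool of all values.
vector<long long> sample_method1(long long vlo, long long vhi, int n, mt19937_64& rng) {
    vector<long long> x;
    for (long long v = vlo; v <= vhi; v++) x.push_back(v);      // the pool costs O(|V|) memory
    vector<long long> z;
    z.reserve(n);
    for (int i = 0; i < n; i++) {
        uniform_int_distribution<size_t> dist(0, x.size() - 1);
        size_t y = dist(rng);
        z.push_back(x[y]);
        x[y] = x.back();        // overwrite the chosen slot with the last element (swap-then-pop)
        x.pop_back();
    }
    return z;
}
```

For example, sample_method1(1, 100000, 10, rng) returns 10 distinct values from $$$[1, 100000]$$$; the pool itself still costs $$$O(|V|)$$$ time and memory, which is the limitation Method 2 addresses.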

Analyzing correctness: fix a value $$$v$$$ in the range, and suppose we are making the $$$i$$$-th selection from a range of size $$$|V|$$$. The probability that none of the previous selections picked $$$v$$$ is:

$$$ \prod_{j=1}^{i-1} \frac {|V|-j}{|V|-j+1}=\frac {|V|-i+1}{|V|} $$$

Given that $$$v$$$ is still available, the probability of selecting it at the current step is $$$\dfrac 1{|V|-i+1}$$$. Therefore, the probability that $$$v$$$ becomes the $$$i$$$-th selected element is $$$\dfrac 1{|V|}$$$. This is indeed a uniform distribution.

Method 2

It's the same naive idea as before: just generate without worrying about duplicates. What if we encounter duplicates? Don't worry about it! Remember how we calculate quadratic residues? We pick a number at random, and if it's a duplicate, we don't care. We can just pick another one. The only concern is that after many selections, we might run into duplicates again and again.

However, since we are picking numbers randomly, randomness is built in :) So we have to calculate the expectation.

First, let's solve a similar problem:

Perform a task. Each time there's a probability $$$p$$$ of failure and a probability $$$1-p$$$ of success. After a failure, we keep trying until success, then stop. Find the expected number of steps to reach success. The expected number can be calculated as follows

$$$ \begin{aligned} E &= \sum_{x=1}^{\infty} p^{x-1}(1-p)x \\ &= \sum_{x=1}^{\infty} p^{x-1}x - \sum_{x=1}^{\infty} p^xx \\ &= 1 + \sum_{x=1}^{\infty} p^x(x+1) - \sum_{x=1}^{\infty} p^xx \\ &= 1 + \sum_{x=1}^{\infty} p^xx + \sum_{x=1}^{\infty} p^x - \sum_{x=1}^{\infty} p^xx \\ &= \sum_{x=0}^{\infty} p^x \end{aligned} $$$

This is just a geometric series! It becomes

$$$ \lim_{n\to \infty}\frac {p^{n+1}-1}{p-1}=\frac 1{1-p} $$$

So the expected number of attempts needed for one success is the inverse of the success probability! (Although it's a classic result and seems quite intuitive QwQ)

We want to find the complexity of randomly choosing a number that did not appear, so it suffices to compute an upper bound. Obviously, the maximum complexity occurs when selecting the $$$n$$$-th number. At this point, the probability of success is $$$\dfrac {|V|-n+1}{|V|}$$$. Let $$$|V|=(n-1)k$$$, then the expected complexity is

$$$ O(\dfrac 1{\frac {|V|-n+1}{|V|}})=O(\frac {|V|}{|V|-(n-1)})=O(\frac {k}{k-1}) $$$

So if we choose $$$n$$$ numbers, the total expected number of draws will be $$$O(n\cdot\dfrac k{k-1})$$$. However, checking whether a number has appeared before may require a self-balancing BST due to the large range of values, resulting in a complexity of $$$O(n\log n\cdot\dfrac k{k-1})$$$. Since the complexity no longer depends on $$$|V|$$$, this is a significant improvement!
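Here is a sketch of Method 2 with std::set playing the role of the self-balancing BST (my illustration, not the blog's code):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Illustrative sketch only: rejection sampling, keep drawing until we hit an unseen value.
// Requires n <= |V|; expected O(n log n * k / (k - 1)) where |V| = k(n - 1).
vector<long long> sample_method2(long long vlo, long long vhi, int n, mt19937_64& rng) {
    uniform_int_distribution<long long> dist(vlo, vhi);
    set<long long> seen;                            // balanced BST for the "seen before?" check
    vector<long long> z;
    z.reserve(n);
    while ((int)z.size() < n) {
        long long v = dist(rng);
        if (seen.insert(v).second) z.push_back(v);  // insert succeeds only for new values
    }
    return z;
}
```

Swapping std::set for std::unordered_set gives the hash-table variant discussed in Optimization 2 below.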

Frankenstein's Monster (Patchwork Solution)

However, the above algorithm cannot handle every scenario. For instance, when $$$n=|V|$$$, the later selections incur an enormous expected cost, and the bound above degrades to a shocking $$$O(n^2)$$$.

Then, similar to the way some $$$O(n\sqrt n)$$$ solutions choose between two algorithms, we can pick whichever method is faster. Clearly, if $$$k$$$ is large, the $$$O(n\log n\cdot\dfrac k{k-1})$$$ method above (Method 2) is preferred; otherwise, the $$$O(|V|+n)$$$ method (Method 1) is adopted. But where exactly is the cutoff? By solving the inequality below, we can determine the threshold:

$$$ |V|+n=kn\leq n\log n\cdot\frac k{k-1} $$$

Which leads to:

$$$ k^2n-kn(\log n+1)\leq 0 $$$

From this, we derive $$$k\leq \log n+1$$$. Therefore, let's set a threshold $$$B=\log n+1$$$. When $$$k\leq B$$$, we adopt Method 1 (the $$$O(|V|+n)$$$ approach); otherwise, we go with Method 2. Analysis of the complexity: when $$$k\leq B$$$, since $$$|V|=k(n-1)$$$, the cost is $$$O(|V|+n)=O(n \log n)$$$. When $$$k>B\geq 2$$$, $$$\dfrac k{k-1}<2$$$ can be regarded as a constant, so the cost is also $$$O(n \log n)$$$. The overall complexity is $$$O(n \log n)$$$.

Further Optimization

Optimization 1

Of course, there's more room for optimization. For instance, if $$$n\log n\geq |V|$$$, we can directly apply the $$$O(|V|)$$$ approach discussed earlier, or use std::sort for constant-factor optimizations in time and space. This results in a time and space complexity of $$$O(\min\{|V|,n\log n\})$$$. However, given the reasonable threshold derived earlier, this optimization may not be critical.

Optimization 2

Another optimization concerns the time complexity. Checking whether a number has appeared can be done not only with a BST but also with a hash table. This would improve the time complexity, but the constant factors can be very large. In this case, the complexity of the second method would become $$$O(n\cdot \dfrac k{k-1})$$$. Solve the equation:

$$$ |V|+n=kn\leq n\cdot\frac k{k-1} $$$
$$$ k^2n-2kn\leq 0 $$$

Solving this, we get $$$0<k\leq 2$$$. So, let's set $$$B=2$$$. If $$$k\leq B$$$, use the first approach; otherwise, use the second one. Further analysis of the complexity: When $$$k\leq 2$$$, $$$|V|=k(n-1)$$$, and $$$k$$$ can be regarded as a constant. Thus, the complexity is $$$O(|V|+n)=O(n)$$$; when $$$k>2$$$, $$$\dfrac k{k-1}<2$$$, which can be seen as a constant, resulting in $$$O(n)$$$ complexity. The total complexity is $$$O(n)$$$.

Optimization 3

Still related to checking whether a number has appeared. When $$$|V|$$$ is not large and there's enough memory space, we can use the common 'vis' array for the check. Using std::bitset can greatly optimize constants. When $$$|V|\leq 10^8$$$, consider using this method for constant optimizations.
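A sketch of this vis-array variant (my own illustration; MAXV is an assumed compile-time bound on $$$|V|$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;

constexpr size_t MAXV = 100000000;       // assumed bound |V| <= 1e8, about 12.5 MB of bits
bitset<MAXV> vis;                        // static, zero-initialized once

// Illustrative sketch only: same rejection sampling as Method 2, but the membership check is a bit test.
vector<long long> sample_method2_bitset(long long vlo, long long vhi, int n, mt19937_64& rng) {
    uniform_int_distribution<long long> dist(vlo, vhi);
    vector<long long> z;
    z.reserve(n);
    while ((int)z.size() < n) {
        long long v = dist(rng);
        if (!vis[v - vlo]) { vis[v - vlo] = 1; z.push_back(v); }
    }
    for (long long v : z) vis[v - vlo] = 0;   // reset only the touched bits so vis can be reused
    return z;
}
```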

Optimization 4

In many cases, we don't need strictly uniform randomness. In that case, we can divide the value range evenly into $$$K$$$ blocks, sample from each block separately (for example, $$$\dfrac nK$$$ numbers from each), and combine the results. This keeps the time complexity unchanged while reducing the space complexity to $$$O(\dfrac nK)$$$ or $$$O(\dfrac {|V|}K)$$$.

Method 3 / Optimization 5

While trying to fall asleep, I thought of another method, Method 3. I'm not sure whether it's correct, so if you think it is wrong, please tell me in the comments. Specifically, we first randomly select $$$n$$$ numbers, then sort them and remove duplicates (radix sort can be used to speed this up). Suppose $$$m$$$ distinct numbers remain; we then still need $$$n - m$$$ more, so we repeat the process for the missing amount, deduplicating against what we already have, until $$$n$$$ distinct numbers are collected. I can't prove the complexity here, so let's just present a table:

Data Table

$$$Exp$$$ represents the average number of times required to obtain at least $$$n-100$$$ numbers using the method mentioned above.

It's evident that when $$$\dfrac n{|V|}$$$ is relatively small, the expected number of attempts is very small. Therefore, within the above method, if $$$k$$$ is large ($$$\dfrac n{|V|}$$$ is very small), applying this method to generate the first $$$n-100$$$ or first $$$x\%$$$ numbers, and then using Method 2 to generate the rest, can slightly optimize the constant factors in time and space.

Attached is the code for generating the table:

Code to Generate the Table

After writing the implementation of the method above, I started to wonder: if I replace Method 2 with Method 3 entirely, will it be faster?

It seems that in Method 3 the constant becomes too large when the ratio $$$\dfrac n{|V|}$$$ is unfavorable. But compared to the constant of Method 2, is it truly large? Besides, in our usage the ratio should be less than $$$0.5$$$, so the constant reaches a maximum of about $$$20$$$. Is this really faster?

Indeed it is.

Here is the table using only Method 3 (unsigned int):

Method 3 Data Table

The table above shows the performance of Method 3 when it is used to generate all $$$n$$$ numbers on its own. We never use it in the $$$\dfrac n{|V|}=0.8$$$ regime, so the results look good.

Generating $$$6\times 10^6$$$ numbers this way is 3 to 6 times faster than the hash-table approach.

It appears to be slower than the code above when the value range is very small. However, the std::bitset approach strictly outperforms the hash-table approach for small value ranges. Combining these methods gives an excellent overall solution!

Method 3 can be optimized even further! Notice that the numbers already collected are sorted among themselves; only the newly generated numbers are unordered. So we only need to sort the newly generated numbers and merge them into the existing ones, which saves a constant factor. A sketch of this merge-based variant is given below.
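Here is a sketch of this merge-based Method 3 (my own illustration, not the hidden final implementation; the output is a uniformly random set of $$$n$$$ distinct values returned in sorted order, which is enough for sampling, and $$$n \leq |V|$$$ is required):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Illustrative sketch only: repeatedly draw the missing amount of numbers, sort only the
// new batch, and merge it into the already-sorted result while dropping duplicates.
vector<long long> sample_method3(long long vlo, long long vhi, int n, mt19937_64& rng) {
    uniform_int_distribution<long long> dist(vlo, vhi);
    vector<long long> have;                        // kept sorted and duplicate-free
    while ((int)have.size() < n) {
        int need = n - (int)have.size();
        vector<long long> batch(need);
        for (long long& v : batch) v = dist(rng);
        sort(batch.begin(), batch.end());          // only the new batch needs sorting
        vector<long long> merged;
        merged.reserve(have.size() + batch.size());
        merge(have.begin(), have.end(), batch.begin(), batch.end(), back_inserter(merged));
        merged.erase(unique(merged.begin(), merged.end()), merged.end());   // drop duplicates
        have.swap(merged);
    }
    return have;
}
```

In a more tuned version, std::sort on the batch could be replaced by the radix sort mentioned above.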

For Optimization 3, there's even more to gain. If we want to generate sequences many times, we can reuse one pre-allocated bitset for better constants: clearing it with std::bitset::reset is faster than declaring a new one each time (and resetting only the bits that were actually set is faster still).

Now it's time for the code.

Final Implementation

In fact, the program is much faster than the Python approach (random.sample): almost 10x faster at $$$n=10^6$$$.

Then we can check whether the program is correct using the program below for $$$n=10^6,a=1,b=10^9$$$, where $$$a$$$ is the lower bound, $$$b$$$ is the upper bound:

Test Program
Result

We know mathematically that the expected mean is $$$\dfrac {a+b}2$$$ and the variance is $$$\dfrac {(b-a+1)^2-1} {12}$$$. The measured results are consistent with these predictions.
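For reference, such a sanity check could look like the sketch below (my own stand-in, not the hidden test program; it uses a simple hash-set generator in place of the final implementation, and $$$a$$$, $$$b$$$, $$$n$$$ match the values above):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    const long long a = 1, b = 1000000000LL;    // value range [a, b]
    const int n = 1000000;                      // sample size
    mt19937_64 rng(12345);

    // Stand-in generator (swap in the final implementation here): rejection sampling with a hash set.
    uniform_int_distribution<long long> dist(a, b);
    unordered_set<long long> seen;
    vector<long long> s;
    s.reserve(n);
    while ((int)s.size() < n) {
        long long v = dist(rng);
        if (seen.insert(v).second) s.push_back(v);
    }

    // Empirical mean and variance of the sample, compared with the theoretical values.
    long double mean = 0, var = 0;
    for (long long v : s) mean += v;
    mean /= n;
    for (long long v : s) var += (v - mean) * (v - mean);
    var /= n;

    long double m = b - a + 1;
    cout << "mean:     " << (double)mean << "  expected " << (double)((a + b) / 2.0L) << "\n";
    cout << "variance: " << (double)var  << "  expected " << (double)((m * m - 1) / 12.0L) << "\n";
}
```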


In this English edition, I have only kept the important content. There is also a Chinese edition of this article, which came first; it contains a few more interesting things, but they are not important.

Thanks to ChatGPT and DeepL for helping me improve the English phrasing of this blog.
