platelet's blog

By platelet, history, 22 months ago, In English

Given $$$k,m$$$ and $$$n$$$ numbers $$$a_i$$$, compute each $$$a_i\times k\bmod m$$$

I came up with a method that is almost twice as fast as the compiler implementation (when $$$m$$$ is a constant), which can effectively speed up the NTT and some DPs.

First of all

$$$ a_i\times k\bmod m=\{a_i\times \frac km\}\times m\tag1 $$$

where $$$\{x\}=x-\lfloor x\rfloor$$$ is the fractional part. Equation $$$(1)$$$ holds because $$$a_i\times k\bmod m$$$ depends only on the fractional part of $$$\frac{a_i\times k}m$$$.
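For example, with $$$a_i=5$$$, $$$k=3$$$, $$$m=7$$$: $$$\{5\times\frac37\}\times7=\{\frac{15}7\}\times7=\frac17\times7=1=15\bmod7$$$.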

Let $$$\frac km\approx\frac p{2^{64}}$$$, where $$$p$$$ is equal either to $$$\lfloor\frac km\times2^{64}\rfloor$$$ or to $$$\lceil\frac km\times2^{64}\rceil$$$; which one to choose, we will decide later. Then

$$$ \begin{aligned} a_i\times k\bmod m&\approx\{a_i\times \frac p{2^{64}}\}\times m\\ &=\frac{a_ip\bmod 2^{64}}{2^{64}}\times m\\ &=\frac{a_ip\bmod 2^{64}\times m}{2^{64}} \end{aligned} \tag2 $$$

There are two places where we need to choose whether to round up or down:

  • $$$p=\lfloor\frac km\times2^{64}\rfloor$$$ or $$$p=\lceil\frac km\times2^{64}\rceil$$$.
  • The final expression in $$$(2)$$$ is not always an integer, so we need to decide whether to round it up or down.

We choose $$$p=\lceil\frac km\times2^{64}\rceil$$$, which slightly enlarges the answer, while the final expression in $$$(2)$$$ is rounded down, which slightly lessens it. We'll prove that this pair of choices gives the correct answer as long as $$$a_i\le\frac{2^{64}}m$$$.

Proof
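
One way to see why this bound suffices (a sketch; it assumes $$$0\le k<m$$$, so that $$$p<2^{64}$$$ fits in an unsigned 64-bit word): since $$$p=\lceil\frac{k\cdot2^{64}}m\rceil$$$, we can write $$$pm=k\cdot2^{64}+r$$$ with $$$0\le r<m$$$, and write $$$a_ik=qm+s$$$ with $$$s=a_ik\bmod m$$$. Then

$$$ a_ip=\frac{a_ik\cdot2^{64}+a_ir}m=q\cdot2^{64}+\frac{s\cdot2^{64}+a_ir}m $$$

The last fraction is an integer (it equals $$$a_ip-q\cdot2^{64}$$$). If $$$a_i\le\frac{2^{64}}m$$$, then $$$a_ir\le a_i(m-1)<2^{64}$$$ and $$$s\le m-1$$$, so this integer is less than $$$2^{64}$$$ and therefore equals $$$a_ip\bmod2^{64}$$$. Hence

$$$ \frac{(a_ip\bmod2^{64})\times m}{2^{64}}=s+\frac{a_ir}{2^{64}} $$$

and since $$$0\le a_ir<2^{64}$$$, rounding down gives exactly $$$s=a_ik\bmod m$$$.

An implementation for $$$m=998244353$$$:
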
const int P = 998244353;

void calc(int n, int k, int a[]) {
    // p = ceil(k * 2^64 / P)
    unsigned long long p = (((unsigned __int128)k << 64) + P - 1) / P;
    for (int i = 0; i < n; i++)
        // a[i] * p wraps modulo 2^64; multiplying by P and keeping the
        // high 64 bits yields a[i] * k mod P
        a[i] = (unsigned)a[i] * p * (unsigned __int128)P >> 64;
}

A few notes.

  • The code uses unsigned __int128, so it can only be used on a 64-bit architecture.
  • (unsigned)a[i] * p is automatically reduced modulo $$$2^{64}$$$.
  • * (unsigned __int128)P >> 64 is a 64-bit multiplication that keeps only the high 64 bits of the product (they end up in the rdx register), so it is as fast as multiplying two unsigned long longs.
  • a[i] is cast to unsigned because converting int to unsigned and then to unsigned long long is faster than converting directly to unsigned long long, which requires sign extension. (A short usage sketch follows this list.)
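
A minimal usage sketch (assuming the calc routine above is in scope; the array contents and the multiplier $$$k=3$$$ are just illustrative):

#include <cstdio>

int main() {
    int a[4] = {1, 2, 998244352, 123456789};
    calc(4, 3, a);               // a[i] <- a[i] * 3 mod 998244353
    for (int i = 0; i < 4; i++)
        printf("%d ", a[i]);     // prints: 3 6 998244350 370370367
    return 0;
}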

Speed test.

code

It contains two parts.

  • The first part measures the reciprocal throughput: the time taken when the CPU can overlap the operations (modern CPUs exploit instruction-level parallelism on a single core). It contains a total of $$$50000\times50000$$$ modular multiplications.
  • The second part measures the latency: the time taken when each modular multiplication has to wait for the result of the previous one, so the operations run sequentially with no parallelism. It contains a total of $$$50000\times25000$$$ modular multiplications. (A sketch of such a benchmark follows this list.)
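
The exact benchmark is in the spoiler above; a minimal sketch of how such a throughput-vs-latency measurement could be structured (the function names, the multiplier 12345 and the use of std::chrono are illustrative, not the author's exact code):

#include <chrono>
#include <cstdio>

const int P = 998244353;

// Throughput: the multiplications are independent of each other,
// so the CPU can overlap them via instruction-level parallelism.
unsigned long long throughput(int k, int rounds, int n) {
    unsigned long long p = (((unsigned __int128)k << 64) + P - 1) / P;
    unsigned long long sum = 0;
    for (int r = 0; r < rounds; r++)
        for (unsigned i = 0; i < (unsigned)n; i++)
            sum += (unsigned long long)((i + (unsigned)r) * p * (unsigned __int128)P >> 64);
    return sum;
}

// Latency: each multiplication consumes the previous result,
// so the operations form a dependency chain and cannot overlap.
unsigned latency(int k, int rounds, int n) {
    unsigned long long p = (((unsigned __int128)k << 64) + P - 1) / P;
    unsigned x = 1;
    for (int r = 0; r < rounds; r++)
        for (int i = 0; i < n; i++)
            x = (unsigned)(x * p * (unsigned __int128)P >> 64) + 1;
    return x;
}

int main() {
    using namespace std::chrono;
    auto t0 = steady_clock::now();
    volatile unsigned long long s = throughput(12345, 50000, 50000);
    auto t1 = steady_clock::now();
    volatile unsigned x = latency(12345, 50000, 25000);
    auto t2 = steady_clock::now();
    printf("Throughput: %.2f ms\n", duration<double, std::milli>(t1 - t0).count());
    printf("Latency:    %.2f ms\n", duration<double, std::milli>(t2 - t1).count());
    (void)s; (void)x;
    return 0;
}

The compiler-modulo rows of the table would use the same loops with the multiply-and-shift replaced by a plain % P.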

Possible output:

Throughput test(50000 * 50000):
Compiler's signed modulo:   1954.83 ms
Compiler's unsigned modulo: 1746.73 ms
My modulo:                  1160.47 ms

Latency test(50000 * 25000):
Compiler's signed modulo:   4329.33 ms
Compiler's unsigned modulo: 3945.29 ms
My modulo:                  2397.97 ms

By the way, a few general facts.

  • Modular multiplication by a constant is almost 4 times faster when the operations can run in parallel than when they run serially (the same holds for my modular multiplication).
  • Converting an int to unsigned and then to long long is faster than converting it directly to long long, but negative numbers will give wrong results (a small illustration follows this list).
  • Taking an unsigned long long modulo a constant is faster than taking a long long modulo a constant.
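
A small illustration of the second point (the values are just for demonstration):

#include <cstdio>

int main() {
    int x = -1;
    // Through unsigned: -1 first wraps to 4294967295, which is then
    // zero-extended, so no sign-extension instruction is needed,
    // but the value is no longer -1.
    long long via_unsigned = (unsigned)x;
    // Direct conversion sign-extends and keeps the value -1.
    long long direct = x;
    printf("%lld %lld\n", via_unsigned, direct);  // 4294967295 -1
    return 0;
}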

Comparison with other methods

Comparison with Barrett reduction and Montgomery multiplication:

  • The purpose of my method is to compute $$$a\times b\bmod m$$$ for fixed $$$m$$$ and $$$b$$$, while Barrett reduction and Montgomery multiplication compute $$$a\times b\bmod m$$$ for fixed $$$m$$$ only. But my method is faster than the other two methods.

  • The derivation of my method is similar to Barrett reduction, so they both work when $$$m < 2^{32}$$$, while Montgomery multiplication works when $$$m < 2^{64}$$$ and $$$m$$$ is odd.

Extensions

It is also possible to compute $$$(a_1b_1+a_2b_2+\cdots+a_nb_n)\bmod m$$$, but $$$\sum a_i$$$ cannot exceed $$$\frac{2^{64}}m$$$.

Let $$$p_i=\lceil\frac{b_i}m\times2^{64}\rceil$$$.

$$$ (\sum a_ib_i)\bmod m=\lfloor\frac{(\sum a_ip_i)\bmod 2^{64}\times m}{2^{64}}\rfloor $$$
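
A hedged sketch of this extension for $$$m=998244353$$$ (the function name dot_mod is illustrative and not from the original post; in practice the $$$p_i$$$ would typically be precomputed once and reused):

const int P = 998244353;

// (a[0]*b[0] + ... + a[n-1]*b[n-1]) mod P, valid as long as sum(a[i]) <= 2^64 / P
unsigned dot_mod(int n, const unsigned a[], const unsigned b[]) {
    unsigned long long acc = 0;  // accumulates sum(a[i] * p[i]) modulo 2^64
    for (int i = 0; i < n; i++) {
        // p_i = ceil(b[i] * 2^64 / P)
        unsigned long long p = (((unsigned __int128)b[i] << 64) + P - 1) / P;
        acc += a[i] * p;
    }
    // high 64 bits of acc * P, as in the formula above
    return (unsigned)((unsigned __int128)acc * P >> 64);
}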

»
22 months ago, # |

Hey, thanks for your blog post! May I suggest that you place your proof and code in the blog post, perhaps under spoiler tags, rather than making them separate blog posts? It is more convenient for the reader to see them without moving to another page, and it also takes up less space in the "Recent actions" section.

Have you compared your idea with other existing ones, such as Montgomery multiplication or Barrett reduction? pajenegod has a very comprehensive article about them here.

  • »
    »
    22 months ago, # ^ |

    Thanks for your suggestion. I've changed the external links to spoiler tags and compared my method to the others.

»
22 months ago, # |
Rev. 3

Daniel Lemire came up with this method about four years ago, and I describe it in my book. I heard there were some efforts to replace the algorithms used in GCC and LLVM with it, but I don't know if they went anywhere.

You can also use it for faster division-by-constant and divisibility checks, but it has the drawback that it needs 4x the machine word width for intermediates (Barrett reduction-like methods typically need 2x). It is also less efficient than Montgomery multiplication if you need to execute a chain of modular operations and not just one modular reduction. Many optimized number theory algorithms, including NTT, largely rely on (SIMDized versions of) Montgomery multiplication.

  • »
    »
    22 months ago, # ^ |

    When $$$k=1$$$ my method is the same as Lemire Reduction, but there are some differences:

    • The purpose of my method is to compute $$$a_i\times k \bmod m$$$ multiple times for fixed $$$m,k$$$, while Lemire Reduction reduces a single number modulo $$$m$$$.
    • When computing $$$a_i\times k \bmod m$$$, my method works for $$$m<2^{32}$$$ with two 64-bit multiplications. If you calculate $$$a_i\times k$$$ first and then use Lemire Reduction, it only works for $$$m<\sqrt[3]{2^{64}}$$$ and needs three 64-bit multiplications.

    Maybe my method is an improvement of Lemire Reduction: it works for $$$m<2^{32}$$$ and is useful in competitive programming (common moduli are $$$10^9+7$$$ and $$$998244353$$$).

»
22 months ago, # |

Auto comment: topic has been updated by platelet (previous revision, new revision, compare).

»
12 months ago, # |

wow it is so strong. stO \platelet/ Orz