The GCC compiler lets you embed inline assembly. This can be useful, for example, for multiplying two 64-bit numbers modulo a 64-bit modulus.
The point is that when multiplying two 64-bit registers, the processor stores the 128-bit result in the register pair rdx (high part) and rax (low part). Division works similarly: the dividend is taken from the registers rdx and rax, after which the quotient is stored in rax and the remainder in rdx.
Using this knowledge, you can implement an equivalent of the following function:
inline long long mul(long long a, long long b) {
    return (__int128)a * b % 1000000014018503;
}
Like this:
inline long long mul(long long a, long long b) {
    long long res;
    asm(
        "mov %1, %%rax\n"                 // a -> rax
        "mov %2, %%rbx\n"                 // b -> rbx
        "imul %%rbx\n"                    // rdx:rax = a * b
        "mov $1000000014018503, %%rbx\n"  // load the modulus
        "idiv %%rbx\n"                    // rax = quotient, rdx = remainder
        "mov %%rdx, %0\n"                 // the remainder is the result
        : "=r"(res)
        : "a"(a), "b"(b)
    );
    return res;
}
We declare res as an output operand and a and b as input operands; in the template they are referred to as %0, %1 and %2 respectively. The instructions are written in the standard AT&T syntax.
Now you can write hashes with a 64-bit modulus, which is equivalent to using a pair of hashes with 32-bit moduli, without resorting to __int128.
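As a usage sketch (the name poly_hash and the base are hypothetical choices for illustration), a simple polynomial string hash built on top of mul() could look like this:

long long poly_hash(const char *s) {
    const long long MOD = 1000000014018503LL;  // the modulus hard-coded in mul()
    const long long BASE = 131;                // arbitrary base, for illustration only
    long long h = 0;
    for (; *s; ++s)
        h = (mul(h, BASE) + (unsigned char)*s + 1) % MOD;  // keep the value below the modulus
    return h;
}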
You haven't declared any of the fixed registers you clobber with this code, so it's terrible undefined behavior: if the compiler was using rax for anything, you are toast. Also, 64-bit idiv is very slow on some systems: you may find a floating-point-based method much faster. (And for hashing applications you can probably use Montgomery reduction instead of an "ordinary" modmul for even better performance.)
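A minimal sketch of what declaring the clobbers could look like (the generic "r" constraints replace the original "a"/"b" ones so that all the fixed registers can be listed in the clobber section; the name is hypothetical):

inline long long mul_clobbered(long long a, long long b) {
    long long res;
    asm(
        "mov %1, %%rax\n"                 // a -> rax
        "mov %2, %%rbx\n"                 // b -> rbx
        "imul %%rbx\n"                    // rdx:rax = a * b
        "mov $1000000014018503, %%rbx\n"  // load the modulus
        "idiv %%rbx\n"                    // rax = quotient, rdx = remainder
        "mov %%rdx, %0\n"                 // the remainder is the result
        : "=r"(res)
        : "r"(a), "r"(b)
        : "rax", "rbx", "rdx", "cc"       // tell GCC which fixed registers get overwritten
    );
    return res;
}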
I see a few problems with this code:
- rbx should be preserved (source), otherwise it may cause trouble when combined with GCC-generated code.
- __attribute__((naked)) (source).
That being said, using x86 instructions directly is significantly faster than running an __int128 division (which calls a large, slower function, __modti3) when you only need 64 bits of modulus and output. Here is my version of the function in assembly (Windows calling convention):
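For illustration only (this is not the original listing; the name mulmod_win and the use of a file-scope asm block are assumptions), a standalone routine following the Windows x64 calling convention (arguments in rcx and rdx, result in rax) with the same modulus might look roughly like this:

asm(
    ".text\n"
    ".globl mulmod_win\n"
    "mulmod_win:\n"
    "    mov    %rcx, %rax\n"               // a -> rax
    "    imul   %rdx\n"                     // rdx:rax = a * b
    "    movabs $1000000014018503, %rcx\n"  // load the 64-bit modulus
    "    idiv   %rcx\n"                     // rax = quotient, rdx = remainder
    "    mov    %rdx, %rax\n"               // return the remainder
    "    ret\n"
);
long long mulmod_win(long long a, long long b);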
Codeforces has a gym called "Fast modular multiplication", where I tested how fast the inline assembly is.
So, the inline assembly is slightly faster, but not significantly so.
It's also possible to implement an inline assembly version without any function call overhead:
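Presumably something along these lines (a sketch, not the commenter's exact code; the name is hypothetical): an extended-asm version whose operands are tied to the registers imul and idiv need, so GCC can inline it with no call and no extra moves:

static inline long long mulmod_inlined(long long a, long long b) {
    long long rem;
    const long long mod = 1000000014018503LL;
    asm(
        "imulq %2\n\t"          // rdx:rax = a * b (a arrives in rax via the "+a" constraint)
        "idivq %3\n\t"          // divide rdx:rax by mod: quotient -> rax, remainder -> rdx
        : "=&d"(rem), "+a"(a)   // remainder comes out in rdx; rax is consumed by the quotient
        : "r"(b), "r"(mod)
        : "cc"
    );
    return rem;
}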
But almost all CPU cycles are spent on executing the super slow division instruction in all code variants. The __int128 variant without inline assembly is also using the same division instruction (after some extra checks to ensure that division overflows won't be triggered).
Yes, and it is faster, ~1248 ms. And your function can be inlined (this removes one jmp and two mov instructions) even though the generated assembly is the same (identical up to the order of a couple of instructions and a ud2 opcode) (on Linux): https://godbolt.org/z/r5MeMxdbM
On Windows your function produces one extra mov, but inlining removes one jmp and one mov.
If you really need int64 multiplication, better consider this variant:
There is a proof that it works here: https://github.com/kth-competitive-programming/kactl/blob/main/doc/modmul-proof.pdf
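Presumably the variant meant here is the long-double trick that the linked proof covers; a sketch of it (per that document, it works for moduli up to roughly 7.2e18):

typedef unsigned long long ull;
typedef long long ll;

ull modmul(ull a, ull b, ull M) {
    // Both products wrap modulo 2^64, so their difference recovers the
    // remainder up to at most one extra multiple of M, which the second
    // line corrects for.
    ll ret = a * b - M * (ull)(1.L / M * a * b);
    return ret + M * (ret < 0) - M * (ret >= (ll)M);
}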
This is much faster, and it is incorrect only for some int64 values that are almost certainly much larger than any modulus you would choose.
Thanks!
All hail our emperor Stas