Kitamasa Method!

Revision en4, by Siyah, 2025-12-16 19:34:17

From Matrix Exponentiation to Kitamasa Method

--- Hi, I'm AmirMohammad (commonly known as Danet). As a non-native speaker, I naturally had some assistance with translation and editing. If you spot any technical or linguistic errors, I would be grateful if you pointed them out!

Faster Computation of Linear Recurrence DP

Problem Statement

We are given a linear recurrence relation of order ( k ):

$$$ \begin{align} dp[n] &= c_1 \cdot dp[n-1] + c_2 \cdot dp[n-2] + \dots + c_k \cdot dp[n-k] \end{align} $$$

Along with the first ( k ) values:

$$$ \begin{align} dp[0],\ dp[1],\ \dots,\ dp[k-1] \end{align} $$$

Our goal is to compute ( dp[n] ) efficiently for very large ( n ).

Classical Solution: Matrix Exponentiation

State Definition

To compute the recurrence, we always need the last ( k ) values. So we define the state vector as:

$$$ \begin{align} V(i) &= [\, dp[i],\ dp[i-1],\ dp[i-2],\ \dots,\ dp[i-k+1] \,] \end{align} $$$

This vector fully describes the DP state at step ( i ).

Transition Rule

There exists a fixed matrix ( M ) such that:

$$$ \begin{align} V(i+1) &= M \cdot V(i) \end{align} $$$

Applying this transition repeatedly, starting from the initial state ( V(k-1) = [\, dp[k-1], \dots, dp[0] \,] ), which consists exactly of the given values:

$$$ \begin{align} V(n) &= M^{\,n-k+1} \cdot V(k-1) \end{align} $$$

So the problem reduces to exponentiating a matrix.

Transition Matrix

The transition matrix has the following form:

$$$ \begin{align} M &= \begin{bmatrix} c_1 & c_2 & \dots & c_k \\ 1 & 0 & \dots & 0 \\ 0 & 1 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & \dots & 1 & 0 \end{bmatrix} \end{align} $$$
  • The first row applies the recurrence
  • The remaining rows only shift values downward

Time Complexity

$$$ \begin{align} \text{Matrix multiplication} &= O(k^3) \\ \text{Binary exponentiation} &= O(\log n) \\ \Rightarrow \text{Total} &= O(k^3 \log n) \end{align} $$$

This is correct but becomes too slow when ( k ) is large.
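To make the classical approach concrete, here is a minimal C++ sketch (the modulus ( 10^9+7 ) and the function names are my own illustrative choices, not fixed by the method itself):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>
using namespace std;

using u64 = uint64_t;
const u64 MOD = 1'000'000'007;  // assumed modulus; pick your own
using Mat = vector<vector<u64>>;

// Multiply two k-by-k matrices modulo MOD: O(k^3).
Mat mul(const Mat& A, const Mat& B) {
    int k = A.size();
    Mat C(k, vector<u64>(k, 0));
    for (int i = 0; i < k; i++)
        for (int l = 0; l < k; l++)
            for (int j = 0; j < k; j++)
                C[i][j] = (C[i][j] + A[i][l] * B[l][j]) % MOD;
    return C;
}

// dp[n] for dp[i] = c[0]*dp[i-1] + ... + c[k-1]*dp[i-k],
// given init = {dp[0], ..., dp[k-1]}.  Total: O(k^3 log n).
u64 solve_matrix(const vector<u64>& c, const vector<u64>& init, u64 n) {
    int k = c.size();
    if (n < (u64)k) return init[n];
    Mat M(k, vector<u64>(k, 0)), R(k, vector<u64>(k, 0));
    for (int j = 0; j < k; j++) M[0][j] = c[j];   // first row: the recurrence
    for (int i = 1; i < k; i++) M[i][i - 1] = 1;  // remaining rows: shifts
    for (int i = 0; i < k; i++) R[i][i] = 1;      // R starts as the identity
    u64 e = n - k + 1;                            // V(n) = M^(n-k+1) * V(k-1)
    while (e) {                                   // binary exponentiation
        if (e & 1) R = mul(R, M);
        M = mul(M, M);
        e >>= 1;
    }
    // V(k-1) = [dp[k-1], ..., dp[0]]; dp[n] is the first entry of R * V(k-1)
    u64 ans = 0;
    for (int j = 0; j < k; j++)
        ans = (ans + R[0][j] * init[k - 1 - j]) % MOD;
    return ans;
}
```

For Fibonacci (( c_1 = c_2 = 1 ), ( dp[0] = 0 ), ( dp[1] = 1 )), `solve_matrix({1, 1}, {0, 1}, 10)` yields ( F(10) = 55 ).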


Key Observation

We do not actually need the entire matrix ( M^n ).

Important facts:

  • Only the first row affects ( dp[n] )
  • All other rows are pure shifts
  • The transition itself is linear and fixed

This suggests that the problem can be solved without matrices.


Shift Operator Interpretation

Introduce a symbolic operator ( x ) that represents one DP transition.

Interpretation:

$$$ \begin{align} dp[1] &\leftrightarrow x \cdot dp[0] \\ dp[2] &\leftrightarrow x^2 \cdot dp[0] \\ dp[n] &\leftrightarrow x^n \cdot dp[0] \end{align} $$$

So:

  • ( x ) behaves exactly like applying the recurrence once
  • ( x^n ) means applying the recurrence ( n ) times

This operator plays the same role as the matrix ( M ), but algebraically.

Characteristic Polynomial

From the recurrence:

$$$ \begin{align} dp[k] &= c_1 dp[k-1] + c_2 dp[k-2] + \dots + c_k dp[0] \end{align} $$$

Replace ( dp[i] ) with powers of ( x ):

$$$ \begin{align} x^k &\equiv c_1 x^{k-1} + c_2 x^{k-2} + \dots + c_k \end{align} $$$

Move everything to the left-hand side:

$$$ \begin{align} x^k - c_1 x^{k-1} - c_2 x^{k-2} - \dots - c_k &= 0 \end{align} $$$

Define the characteristic polynomial:

$$$ \begin{align} p(x) &= x^k - c_1 x^{k-1} - c_2 x^{k-2} - \dots - c_k \end{align} $$$

This polynomial encodes the recurrence completely.


Main Idea of Kitamasa Method

We want to compute ( x^n ).

Using polynomial division:

$$$ \begin{align} x^n &= Q(x)\,p(x) + R(x) \end{align} $$$

Taking modulo ( p(x) ):

$$$ \begin{align} x^n &\equiv R(x) \pmod{p(x)} \end{align} $$$

Where:

$$$ \begin{align} \deg R(x) & \lt k \end{align} $$$

So the remainder can be written as:

$$$ \begin{align} R(x) &= r_0 + r_1 x + \dots + r_{k-1} x^{k-1} \end{align} $$$

Mapping this back to DP:

$$$ \begin{align} dp[n] &= \sum_{i=0}^{k-1} r_i \cdot dp[i] \end{align} $$$

Conclusion:
Computing ( dp[n] ) is equivalent to computing
( x^n \bmod p(x) ).
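Before speeding anything up, this conclusion can be checked with a deliberately naive sketch: keep the remainder ( R(x) ), multiply it by ( x ) one step at a time, and replace ( x^k ) using the relation above. This runs in ( O(n k) ), so it is purely illustrative (modulus and names are my own assumptions):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>
using namespace std;

using u64 = uint64_t;
const u64 MOD = 1'000'000'007;  // assumed modulus

// Coefficients r[0..k-1] of x^n mod p(x), computed naively in O(n*k).
// c = {c1, ..., ck}, so x^k == c[0] x^{k-1} + c[1] x^{k-2} + ... + c[k-1].
vector<u64> xn_mod_p_naive(const vector<u64>& c, u64 n) {
    int k = c.size();
    vector<u64> r(k, 0);
    if (n < (u64)k) { r[n] = 1; return r; }
    r[k - 1] = 1;                        // start from x^{k-1}
    for (u64 step = k - 1; step < n; step++) {
        u64 top = r[k - 1];              // this becomes x^k after the shift
        for (int i = k - 1; i > 0; i--)  // multiply by x, then reduce x^k
            r[i] = (r[i - 1] + top * c[k - 1 - i]) % MOD;
        r[0] = top * c[k - 1] % MOD;
    }
    return r;
}

// dp[n] = sum_i r_i * dp[i], mapping the remainder back to the DP.
u64 dp_from_remainder(const vector<u64>& r, const vector<u64>& init) {
    u64 ans = 0;
    for (size_t i = 0; i < r.size(); i++)
        ans = (ans + r[i] * init[i]) % MOD;
    return ans;
}
```

For Fibonacci this reproduces the worked example below: `xn_mod_p_naive({1, 1}, 4)` gives the coefficients of ( 3x + 2 ), i.e. `{2, 3}`.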


Fibonacci Example

Consider Fibonacci numbers:

$$$ \begin{align} f(n) &= f(n-1) + f(n-2) \end{align} $$$

Here ( k = 2 ), and the characteristic polynomial is:

$$$ \begin{align} p(x) &= x^2 - x - 1 \end{align} $$$

Which implies:

$$$ \begin{align} x^2 &\equiv x + 1 \end{align} $$$

Computing f(3):

$$$ \begin{align} x^3 &= x^2 \cdot x = x(x+1) = x^2 + x = (x+1) + x = 2x + 1 \end{align} $$$

Therefore:

$$$ \begin{align} f(3) &= 2f(1) + f(0) \end{align} $$$

Computing f(4):

$$$ \begin{align} x^4 &= x^2 \cdot x^2 = (x+1)(x+1) = x^2 + 2x + 1 = (x+1) + 2x + 1 = 3x + 2 \end{align} $$$

Thus:

$$$ \begin{align} f(4) &= 3f(1) + 2f(0) \end{align} $$$

---

Binary Exponentiation on Polynomials

Assume we already know:

$$$ \begin{align} x^n &\equiv R(x) \pmod{p(x)} \end{align} $$$

To compute ( x^{2n} ):

$$$ \begin{align} x^{2n} = (x^n)^2 &\equiv R(x)^2 \pmod{p(x)} \end{align} $$$

Steps:

  1. Multiply two polynomials of degree ( < k )
  2. Reduce the result modulo ( p(x) )

Each step costs ( O(k^2) ).
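Putting the pieces together, here is a sketch of the full method in C++, using naive ( O(k^2) ) polynomial multiplication and reduction (again, the modulus is an assumption of mine and the names are illustrative):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>
using namespace std;

using u64 = uint64_t;
const u64 MOD = 1'000'000'007;  // assumed modulus
using Poly = vector<u64>;       // Poly[i] holds the coefficient of x^i

// Multiply a and b, then reduce with x^k == c[0] x^{k-1} + ... + c[k-1].
// O(k^2).
Poly mulmod(const Poly& a, const Poly& b, const vector<u64>& c) {
    int k = c.size();
    Poly t(2 * k, 0);
    for (int i = 0; i < k; i++)
        for (int j = 0; j < k; j++)
            t[i + j] = (t[i + j] + a[i] * b[j]) % MOD;
    for (int d = 2 * k - 2; d >= k; d--) {  // eliminate degrees >= k, top-down
        for (int j = 0; j < k; j++)         // x^d = x^{d-k} * x^k
            t[d - 1 - j] = (t[d - 1 - j] + t[d] * c[j]) % MOD;
        t[d] = 0;
    }
    t.resize(k);
    return t;
}

// dp[n] for dp[i] = c[0]*dp[i-1] + ... + c[k-1]*dp[i-k],
// given init = {dp[0], ..., dp[k-1]}.  Total: O(k^2 log n).
u64 kitamasa(const vector<u64>& c, const vector<u64>& init, u64 n) {
    int k = c.size();
    if (n < (u64)k) return init[n];
    Poly r(k, 0), base(k, 0);
    r[0] = 1;                    // r = 1 (= x^0)
    if (k == 1) base[0] = c[0];  // degenerate case: x == c1 when k = 1
    else base[1] = 1;            // base = x
    for (u64 e = n; e; e >>= 1) {
        if (e & 1) r = mulmod(r, base, c);  // r accumulates x^n mod p(x)
        base = mulmod(base, base, c);       // base = x^(2^bit) mod p(x)
    }
    u64 ans = 0;                 // map back to the DP: dp[n] = sum r_i * dp[i]
    for (int i = 0; i < k; i++) ans = (ans + r[i] * init[i]) % MOD;
    return ans;
}
```

As a sanity check, `kitamasa({1, 1}, {0, 1}, 10)` returns ( F(10) = 55 ), and the order-3 recurrence `{1, 1, 1}` with initial values `{0, 1, 1}` gives the tenth Tribonacci number, 149.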


Final Complexity

$$$ \begin{align} \text{Polynomial multiplication} &= O(k^2) \\ \text{Exponentiation steps} &= O(\log n) \\ \Rightarrow \text{Total} &= O(k^2 \log n) \end{align} $$$

Using FFT:

$$$ \begin{align} \text{Total} &= O(k \log k \log n) \end{align} $$$
