Siyah's blog

By Siyah, history, 4 months ago, In English

From Matrix Exponentiation to Kitamasa Method

---

Hi, I'm AmirMohammad (commonly known as Danet). Naturally, since I am not a native speaker, the translation and editing process involved some assistance. If you spot any technical or linguistic errors, I would be grateful if you pointed them out!

Faster Computation of Linear Recurrence DP

Problem Statement

We are given a linear recurrence relation of order $$$k$$$:

$$$ \begin{align} dp[n] &= c_1 \cdot dp[n-1] + c_2 \cdot dp[n-2] + \dots + c_k \cdot dp[n-k] \end{align} $$$

Along with the first $$$k$$$ values:

$$$ \begin{align} dp[0],\ dp[1],\ \dots,\ dp[k-1] \end{align} $$$

Our goal is to compute $$$dp[n]$$$ efficiently for very large $$$n$$$.

Classical Solution: Matrix Exponentiation

State Definition

To compute the recurrence, we always need the last $$$k$$$ values. So we define the state vector as:

$$$ \begin{align} V(i) &= [\, dp[i],\ dp[i-1],\ dp[i-2],\ \dots,\ dp[i-k+1] \,] \end{align} $$$

This vector fully describes the DP state at step $$$i$$$.

Transition Rule

There exists a fixed matrix $$$M$$$ such that:

$$$ \begin{align} V(i+1) &= M \cdot V(i) \end{align} $$$

Applying this transition repeatedly:

$$$ \begin{align} V(n) &= M^{\,n-k+1} \cdot V(k-1) \end{align} $$$

Here $$$V(k-1)$$$ is built directly from the given initial values, so the problem reduces to exponentiating a matrix.

Transition Matrix

The transition matrix has the following form:

$$$ \begin{align} M &= \begin{bmatrix} c_1 & c_2 & \dots & c_k \\ 1 & 0 & \dots & 0 \\ 0 & 1 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & \dots & 1 & 0 \end{bmatrix} \end{align} $$$
  • The first row applies the recurrence
  • The remaining rows only shift values downward

Time Complexity

$$$ \begin{align} \text{Matrix multiplication} &= O(k^3) \\ \text{Binary exponentiation} &= O(\log n) \\ \Rightarrow \text{Total} &= O(k^3 \log n) \end{align} $$$

This is correct but becomes too slow when $$$k$$$ is large.
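Before moving on, the matrix approach can be sketched in C++ (my own illustrative code, not part of the original post; the modulus `MOD` is an assumption):

```cpp
#include <vector>
#include <cstdint>

using u64 = std::uint64_t;
const u64 MOD = 1'000'000'007;  // assumed modulus

using Matrix = std::vector<std::vector<u64>>;

// O(k^3) matrix multiplication modulo MOD
Matrix mul(const Matrix& A, const Matrix& B) {
    int k = A.size();
    Matrix C(k, std::vector<u64>(k, 0));
    for (int i = 0; i < k; i++)
        for (int l = 0; l < k; l++)
            for (int j = 0; j < k; j++)
                C[i][j] = (C[i][j] + A[i][l] * B[l][j]) % MOD;
    return C;
}

// dp[n] for dp[n] = c[0]*dp[n-1] + ... + c[k-1]*dp[n-k], init = {dp[0..k-1]}
u64 linear_rec_matrix(std::vector<u64> c, std::vector<u64> init, u64 n) {
    int k = c.size();
    if (n < (u64)k) return init[n];
    // Build the transition matrix M exactly as in the article
    Matrix M(k, std::vector<u64>(k, 0)), R(k, std::vector<u64>(k, 0));
    for (int j = 0; j < k; j++) M[0][j] = c[j];   // first row: the recurrence
    for (int i = 1; i < k; i++) M[i][i - 1] = 1;  // remaining rows: shifts
    for (int i = 0; i < k; i++) R[i][i] = 1;      // R = identity
    u64 e = n - (k - 1);                          // V(n) = M^(n-k+1) * V(k-1)
    while (e) {
        if (e & 1) R = mul(R, M);
        M = mul(M, M);
        e >>= 1;
    }
    // V(k-1) = [dp[k-1], dp[k-2], ..., dp[0]]; only the first row matters
    u64 ans = 0;
    for (int j = 0; j < k; j++) ans = (ans + R[0][j] * init[k - 1 - j]) % MOD;
    return ans;
}
```

For Fibonacci (`c = {1, 1}`, `init = {0, 1}`), `linear_rec_matrix(c, init, 10)` gives 55.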


Key Observation

We do not actually need the entire matrix $$$M^n$$$.

Important facts:

  • Only the first row affects $$$dp[n]$$$
  • All other rows are pure shifts
  • The transition itself is linear and fixed

This suggests that the problem can be solved without matrices.


Shift Operator Interpretation

Introduce a symbolic operator $$$x$$$ that represents one DP transition.

Interpretation:

$$$ \begin{align} dp[1] &\leftrightarrow x \cdot dp[0] \\ dp[2] &\leftrightarrow x^2 \cdot dp[0] \\ dp[n] &\leftrightarrow x^n \cdot dp[0] \end{align} $$$

So:

  • $$$x$$$ behaves exactly like applying the recurrence once
  • $$$x^n$$$ means applying the recurrence $$$n$$$ times

This operator plays the same role as the matrix $$$M$$$, but algebraically.

Characteristic Polynomial

From the recurrence:

$$$ \begin{align} dp[k] &= c_1 dp[k-1] + c_2 dp[k-2] + \dots + c_k dp[0] \end{align} $$$

Replace $$$dp[i]$$$ with powers of $$$x$$$:

$$$ \begin{align} x^k &\equiv c_1 x^{k-1} + c_2 x^{k-2} + \dots + c_k \end{align} $$$

Move everything to the left-hand side:

$$$ \begin{align} x^k - c_1 x^{k-1} - c_2 x^{k-2} - \dots - c_k &= 0 \end{align} $$$

Define the characteristic polynomial:

$$$ \begin{align} p(x) &= x^k - c_1 x^{k-1} - c_2 x^{k-2} - \dots - c_k \end{align} $$$

This polynomial encodes the recurrence completely.


Main Idea of Kitamasa Method

We want to compute $$$x^n$$$.

Using polynomial division:

$$$ \begin{align} x^n &= Q(x)\,p(x) + R(x) \end{align} $$$

Taking this modulo $$$p(x)$$$:

$$$ \begin{align} x^n &\equiv R(x) \pmod{p(x)} \end{align} $$$

Where:

$$$ \begin{align} \deg R(x) & \lt k \end{align} $$$

So the remainder can be written as:

$$$ \begin{align} R(x) &= r_0 + r_1 x + \dots + r_{k-1} x^{k-1} \end{align} $$$

Mapping this back to DP:

$$$ \begin{align} dp[n] &= \sum_{i=0}^{k-1} r_i \cdot dp[i] \end{align} $$$

Conclusion:
Computing $$$dp[n]$$$ is equivalent to computing $$$x^n \bmod p(x)$$$.
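This conclusion translates directly into code. The following is my own compact Kitamasa sketch (not from the original post; `MOD` is an assumed modulus, and `mulmod` performs the naive $$$O(k^2)$$$ multiply-and-reduce):

```cpp
#include <vector>
#include <cstdint>

using u64 = std::uint64_t;
const u64 MOD = 1'000'000'007;  // assumed modulus
using Poly = std::vector<u64>;

// (a * b) mod p(x), where p(x) = x^k - c[0] x^{k-1} - ... - c[k-1]
Poly mulmod(const Poly& a, const Poly& b, const Poly& c) {
    int k = c.size();
    Poly r(2 * k - 1, 0);
    for (int i = 0; i < k; i++)
        for (int j = 0; j < k; j++)
            r[i + j] = (r[i + j] + a[i] * b[j]) % MOD;
    // Reduce using x^d = c[0] x^{d-1} + ... + c[k-1] x^{d-k}, top degree first
    for (int d = 2 * k - 2; d >= k; d--) {
        for (int j = 0; j < k; j++)
            r[d - 1 - j] = (r[d - 1 - j] + r[d] * c[j]) % MOD;
        r[d] = 0;
    }
    r.resize(k);
    return r;
}

// dp[n] for dp[n] = c[0] dp[n-1] + ... + c[k-1] dp[n-k], init = {dp[0..k-1]}
u64 kitamasa(const Poly& c, const Poly& init, u64 n) {
    int k = c.size();
    if (n < (u64)k) return init[n];
    Poly r(k, 0), x(k, 0);
    r[0] = 1;                                   // r = 1
    if (k == 1) x[0] = c[0]; else x[1] = 1;     // the operator x, mod p(x)
    while (n) {                                 // binary exponentiation
        if (n & 1) r = mulmod(r, x, c);
        x = mulmod(x, x, c);
        n >>= 1;
    }
    // x^n = r_0 + r_1 x + ... + r_{k-1} x^{k-1}  =>  dp[n] = sum r_i * dp[i]
    u64 ans = 0;
    for (int i = 0; i < k; i++) ans = (ans + r[i] * init[i]) % MOD;
    return ans;
}
```

For Fibonacci (`c = {1, 1}`, `init = {0, 1}`), `kitamasa(c, init, 10)` reproduces $$$x^{10} \equiv 55x + 34$$$ and returns 55.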


Fibonacci Example

Consider Fibonacci numbers:

$$$ \begin{align} f(n) &= f(n-1) + f(n-2) \end{align} $$$

Here $$$k = 2$$$, and the characteristic polynomial is:

$$$ \begin{align} p(x) &= x^2 - x - 1 \end{align} $$$

Which implies:

$$$ \begin{align} x^2 &\equiv x + 1 \end{align} $$$

Computing f(3):

$$$ \begin{align} x^3 &= x^2 \cdot x = x(x+1) = x^2 + x = (x+1) + x = 2x + 1 \end{align} $$$

Therefore:

$$$ \begin{align} f(3) &= 2f(1) + f(0) \end{align} $$$

Computing f(4):

$$$ \begin{align} x^4 &= x^2 \cdot x^2 = (x+1)(x+1) = x^2 + 2x + 1 = (x+1) + 2x + 1 = 3x + 2 \end{align} $$$

Thus:

$$$ \begin{align} f(4) &= 3f(1) + 2f(0) \end{align} $$$

---

Binary Exponentiation on Polynomials

Assume we already know:

$$$ \begin{align} x^n &\equiv R(x) \pmod{p(x)} \end{align} $$$

To compute $$$x^{2n}$$$:

$$$ \begin{align} x^{2n} &= (x^n)^2 \equiv R(x)^2 \pmod{p(x)} \end{align} $$$

Steps:

  1. Multiply two polynomials of degree $$$< k$$$
  2. Reduce the result modulo $$$p(x)$$$

Each step costs $$$O(k^2)$$$.


Final Complexity

$$$ \begin{align} \text{Polynomial multiplication} &= O(k^2) \\ \text{Exponentiation steps} &= O(\log n) \\ \Rightarrow \text{Total} &= O(k^2 \log n) \end{align} $$$

Using FFT:

$$$ \begin{align} \text{Total} &= O(k \log k \log n) \end{align} $$$


By Siyah, history, 5 months ago, In English

Hi, I'm AmirMohammad (commonly known as Danet). I was inspired to write this article after completing my Calculus 1 course, to bridge the gap between theory and implementation. Naturally, since I am not a native speaker, the translation and editing process involved some assistance. If you spot any technical or linguistic errors, I would be grateful if you pointed them out!

1. Mathematical Foundations and Definitions

The trigonometric and hyperbolic function families are intrinsically linked through complex analysis, specifically via Euler's formula:

$$$ e^{ix} = \cos x + i \sin x $$$

1.1 Trigonometric Functions (Circular Functions)

These functions are defined geometrically from a point $$$(x, y)$$$ on the unit circle (radius 1), where $$$\theta$$$ is the angle subtended at the origin:

$$$ \begin{align} \sin \theta &= y \\ \cos \theta &= x \\ \tan \theta &= \frac{\sin \theta}{\cos \theta} = \frac{y}{x} \\ \csc \theta &= \frac{1}{\sin \theta} \\ \sec \theta &= \frac{1}{\cos \theta} \\ \cot \theta &= \frac{\cos \theta}{\sin \theta} \end{align} $$$

Key properties include periodicity ($$$2\pi$$$) and boundedness ($$$|\sin x| \le 1, |\cos x| \le 1$$$).

1.2 Hyperbolic Functions

These functions relate to the coordinates on the unit hyperbola $$$x^2 - y^2 = 1$$$. They are fundamentally defined in terms of the exponential function:

$$$ \begin{align} \sinh x &= \frac{e^x - e^{-x}}{2} \\ \cosh x &= \frac{e^x + e^{-x}}{2} \\ \tanh x &= \frac{\sinh x}{\cosh x} = \frac{e^x - e^{-x}}{e^x + e^{-x}} \\ \operatorname{csch} x &= \frac{1}{\sinh x} \\ \operatorname{sech} x &= \frac{1}{\cosh x} \\ \coth x &= \frac{\cosh x}{\sinh x} \end{align} $$$

Unlike trigonometric functions, hyperbolic functions are generally unbounded and strictly monotonic over their respective domains.
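As a quick sanity check, the exponential definitions above can be compared against the library functions. This is a small sketch of my own; `sinh_def` and `cosh_def` are hypothetical helper names:

```cpp
#include <cmath>

// sinh and cosh written directly from their exponential definitions
double sinh_def(double x) { return (std::exp(x) - std::exp(-x)) / 2.0; }
double cosh_def(double x) { return (std::exp(x) + std::exp(-x)) / 2.0; }
// The hyperbolic identity cosh^2(x) - sinh^2(x) = 1 should hold to
// within a few ulps for moderate x.
```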


2. C++ Standard Library Implementation (<cmath>)

In C++, trigonometric and hyperbolic functions are primarily accessed through the functions defined in the <cmath> header (or <math.h> in C). These functions operate on floating-point types: float (sinf), double (sin), and long double (sinl).

2.1 IEEE-754 Standard Context

Modern C++ floating-point arithmetic strictly adheres to the IEEE-754 standard (usually IEEE 754-2008). This governs the representation of numbers, rounding modes, and handling of special values:

  • Denormalized numbers: Used for extreme precision near zero.
  • Infinity ($$$\pm \infty$$$): Result of operations like $$$1/0$$$ or exponential overflow.
  • Not a Number (NaN): Result of invalid operations, such as $$$\sqrt{-1}$$$ or $$$\tan(\pi/2)$$$ if not handled by domain reduction.

2.2 Hardware Acceleration

Contemporary x86-64 processors (Intel/AMD) possess specialized Floating Point Units (FPUs) that include dedicated instructions for transcendental functions:

  • x87 and SIMD instructions: The legacy x87 FPU provides FSIN, FCOS, and FYL2X, though these are slow and effectively deprecated. Modern compilers instead favor SSE/AVX and AVX-512 vector instructions combined with software polynomial kernels, which can compute multiple function evaluations simultaneously (SIMD processing).
  • Latency vs. Throughput: While a single sine calculation might have high latency (many clock cycles), the hardware can often start new calculations before the previous one finishes (high throughput).

3. Algorithmic Approximations Used Internally

When hardware instructions are unavailable or the required precision exceeds hardware capabilities (e.g., for long double), the standard library relies on sophisticated mathematical approximations, typically implemented in the system's math library (libm).

3.1 Range Reduction (Crucial Step)

Since trigonometric functions are periodic, input values $$$x$$$ far from the origin (e.g., $$$x = 10^{18}$$$) must be reduced to a small fundamental interval, typically $$$[0, 2\pi]$$$ or $$$[-\pi, \pi]$$$.

$$$ x_{\text{reduced}} = x - 2\pi \cdot \text{round}\left(\frac{x}{2\pi}\right) $$$

Accurate calculation of $$$\pi$$$ and the rounding process itself are crucial to minimizing the error introduced during this initial step. If range reduction is inaccurate, the final result will be catastrophically wrong, regardless of the approximation quality.
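A minimal sketch of this reduction using the standard std::remainder function, which returns $$$x$$$ minus the nearest integer multiple of the divisor; real libm implementations use a far more careful multi-word representation of $$$2\pi$$$ (Payne-Hanek reduction) to keep the error small for huge arguments:

```cpp
#include <cmath>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

// Reduce x into [-pi, pi]. The only error source beyond std::remainder's
// exact computation is the rounding of the double constant 2*pi itself.
double reduce_angle(double x) {
    return std::remainder(x, 2.0 * M_PI);
}
```

After reduction, periodic functions agree: sin(reduce_angle(x)) tracks sin(x) up to the reduction error.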

3.2 Polynomial Approximations

For the reduced interval, the functions are approximated using polynomial series.

Taylor/Maclaurin Series

For very small $$$x$$$ (close to zero), the Taylor expansion provides the fastest, most accurate path:

$$$ \sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \dots $$$

The C++ implementation often switches to this series when $$$|x|$$$ is below a certain threshold (e.g., $$$10^{-4}$$$).
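A sketch of such a truncated Maclaurin evaluation (illustrative only; production libm kernels use tuned minimax coefficients rather than raw factorials):

```cpp
#include <cmath>

// Truncated Maclaurin series for sin, intended for small |x|.
// Each term is derived from the previous one to avoid recomputing powers
// and factorials: t_{n} = t_{n-1} * (-x^2) / ((2n)(2n+1)).
double sin_taylor(double x, int terms = 8) {
    double term = x, sum = x;
    for (int n = 1; n < terms; n++) {
        term *= -x * x / ((2.0 * n) * (2.0 * n + 1.0));
        sum += term;
    }
    return sum;
}
```

For $$$|x| \le 1$$$, eight terms already push the truncation error well below machine epsilon.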

Chebyshev Polynomials

For larger arguments within the reduced domain, Chebyshev polynomials are superior to raw Taylor series because they minimize the maximum absolute error over the interval (minimax property). They provide higher uniform accuracy than a truncated Taylor series of the same degree.

3.3 CORDIC Algorithm

The Coordinate Rotation Digital Computer (CORDIC) algorithm is an iterative method often favored in hardware or resource-constrained environments. It calculates trigonometric functions using only additions, subtractions, shifts, and look-up tables of $$$\arctan(2^{-n})$$$. It is efficient because it avoids the costly multiplications and divisions associated with polynomial evaluation.
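A rotation-mode CORDIC sketch of my own. A hardware implementation would precompute the gain and the arctangent table in fixed point rather than calling std::sqrt and std::atan at runtime:

```cpp
#include <cmath>

// Rotation-mode CORDIC: computes cos and sin of an angle in [-pi/2, pi/2]
// by rotating the vector through angles atan(2^-i), choosing the direction
// that drives the residual angle z toward zero.
void cordic_sincos(double angle, double& c, double& s, int iters = 40) {
    // CORDIC gain: the vector grows by prod sqrt(1 + 2^-2i), so start at (K, 0)
    double K = 1.0;
    for (int i = 0; i < iters; i++) K /= std::sqrt(1.0 + std::ldexp(1.0, -2 * i));
    double x = K, y = 0.0, z = angle, pow2 = 1.0;  // pow2 = 2^-i
    for (int i = 0; i < iters; i++) {
        double d = (z >= 0) ? 1.0 : -1.0;
        double nx = x - d * y * pow2;   // shift-and-add rotation step
        double ny = y + d * x * pow2;
        x = nx; y = ny;
        z -= d * std::atan(pow2);       // table value atan(2^-i)
        pow2 *= 0.5;
    }
    c = x; s = y;
}
```

With 40 iterations the result is accurate to roughly $$$2^{-40}$$$, close to double precision.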

3.4 Hyperbolic Approximations

For $$$\sinh x$$$ and $$$\cosh x$$$:

  • For small $$$|x|$$$, Taylor series are used.
  • For moderate $$$|x|$$$, the definition using std::exp(x) is robust, provided $$$x$$$ is not so large that $$$e^x$$$ overflows the double range ($$$x \gtrsim 709.78$$$).
  • For very large $$$x$$$, say $$$x \gt 25$$$:

$$$ \sinh x \approx \frac{e^x}{2}, \quad \cosh x \approx \frac{e^x}{2} $$$
  • For $$$\tanh x$$$: As $$$x \to \infty$$$, $$$\tanh x \to 1$$$. If $$$x$$$ is large and positive, $$$\tanh x \approx 1$$$; if $$$x$$$ is large and negative, $$$\tanh x \approx -1$$$. Returning $$$\pm 1$$$ directly avoids evaluating $$$\frac{e^x - e^{-x}}{e^x + e^{-x}}$$$, where both exponential terms overflow to infinity for $$$|x| \gtrsim 710$$$ and the division would yield NaN.

4. Performance and Precision Trade-offs

In competitive programming, balancing speed and 15-16 digits of precision is a constant concern.

4.1 Double Precision Reality

Standard double provides approximately 15.9 decimal digits of precision. The relative error of a good library implementation is typically within 1-2 units in the last place (ULPs) of the true mathematical result, assuming the input argument $$$x$$$ itself is perfectly represented.

4.2 Performance Implications

While the FPU is fast, calculating transcendental functions is significantly slower than basic arithmetic:

  • Addition/Subtraction: $$$\approx 1$$$ clock cycle.
  • Multiplication/Division: $$$\approx 3$$$-$$$5$$$ clock cycles.
  • sin/cos: $$$\approx 40$$$-$$$80$$$ clock cycles (dependent on pipeline stage).

If millions of function calls are required, vectorized parallelism is essential.

SIMD-parallel sine computation using C++17 Parallel STL:

#include <iostream>
#include <vector>
#include <numeric>
#include <execution> // Requires C++17 and TBB in some compilers
#include <cmath>
#include <algorithm>

void parallel_sin_calc() {
    // Initialize 1 million input values
    std::vector<double> xs(1'000'000);
    std::iota(xs.begin(), xs.end(), 0.0); // Fill with 0, 1, 2, ...

    // Parallel algorithms require forward iterators, so pre-size the output
    // instead of using std::back_inserter (which would not compile here)
    std::vector<double> ys(xs.size());

    // Use a parallel, vectorization-friendly execution policy
    std::transform(std::execution::par_unseq, xs.begin(), xs.end(), ys.begin(),
                   [](double x) { return std::sin(x); });

    std::cout << "Computed " << ys.size() << " sine values in parallel.\n";
}


5. Robust Usage and Pitfalls

Incorrect use of trigonometric identities or poor domain management leads to catastrophic cancellation or overflow.

5.1 Domain Management for Inverse Functions

Inverse trigonometric functions (asin, acos) have restricted domains: acos(x) and asin(x) are only defined for $$$x \in [-1, 1]$$$. Inputs slightly outside this range due to accumulated rounding error must be clamped:

$$$ \text{safe\_acos}(x) = \texttt{std::acos}(\max(-1.0, \min(1.0, x))) $$$

Calling acos(x) for any x even marginally greater than 1.0 (for example, 1.0 + 1e-15, which does not round back down to 1.0 in double) results in NaN.
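A hedged sketch of such clamping using C++17 std::clamp (`safe_acos` is a hypothetical helper name):

```cpp
#include <cmath>
#include <algorithm>

// Clamp into [-1, 1] before calling acos, so tiny rounding overshoots
// from earlier computations cannot produce NaN
double safe_acos(double x) {
    return std::acos(std::clamp(x, -1.0, 1.0));
}
```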

5.2 Stability Near Zero

The stability of expressions involving differences of nearly equal numbers is critical.

Cosine Near Zero

Calculating $$$\cos(x)$$$ where $$$x$$$ is near 0 yields a value very close to 1. If you subsequently compute $$$1 - \cos(x)$$$, you suffer severe loss of precision (catastrophic cancellation).

  • Bad: 1.0 - std::cos(x)
  • Good: Use the half-angle identity $$$1 - \cos x = 2\sin^2(x/2)$$$. (The analogous dedicated function for exponentials is std::expm1(x), which computes $$$e^x - 1$$$ accurately near zero.)
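A sketch contrasting the two approaches (`one_minus_cos` is a hypothetical helper name):

```cpp
#include <cmath>

// Numerically stable 1 - cos(x): the half-angle identity avoids the
// catastrophic cancellation of computing 1.0 - std::cos(x) directly.
double one_minus_cos(double x) {
    double s = std::sin(x / 2.0);
    return 2.0 * s * s;
}
// For x = 1e-8, the true value is about 5e-17, far below the spacing of
// doubles near 1.0, so the naive subtraction typically returns exactly 0.
```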

Tangent near Odd Multiples of $$$\pi/2$$$

$$$\tan x$$$ approaches $$$\pm\infty$$$ when $$$x \to \frac{\pi}{2} + k\pi$$$. If the input $$$x$$$ is slightly off due to previous calculations, the result might be a huge finite number instead of the expected large positive/negative infinity, which can break subsequent divisions. Range reduction must be extremely precise here.

5.3 Angle Normalization for Inverse Tangent

Always use the two-argument arctangent function, std::atan2(y, x), instead of the one-argument version std::atan(y/x):

  • atan(y/x) loses all information about the signs of $$$x$$$ and $$$y$$$, restricting the result to $$$(-\pi/2, \pi/2)$$$.
  • atan2(y, x) correctly returns the angle in the full range $$$(-\pi, \pi]$$$, using the signs of both arguments to determine the quadrant.
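A small illustration of the difference (assuming a hypothetical `angle_of` helper):

```cpp
#include <cmath>

// Full-quadrant polar angle of the point (x, y).
// atan(y/x) would map (1, 1) and (-1, -1) to the same value; atan2 does not.
double angle_of(double x, double y) {
    return std::atan2(y, x);
}
```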


6. Sample Code Demonstration and Verification

Verifying library performance against known constants or simple geometric results is standard practice. We use $$$\pi/6$$$ (30 degrees) as a test case.

#include <iostream>
#include <cmath>
#include <iomanip>

// Define PI if M_PI is not guaranteed (common in some non-POSIX environments)
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

int main() {
    // Test 1: Trigonometric functions at PI/6 (30 degrees)
    double x = M_PI / 6.0;
    double s = std::sin(x);
    double c = std::cos(x);

    double s_ref = 0.5;
    double c_ref = std::sqrt(3.0) / 2.0;

    std::cout << std::fixed << std::setprecision(18);
    std::cout << "--- Trigonometric Test at PI/6 ---\n";
    std::cout << "Calculated sin(x):   " << s << "\n";
    std::cout << "Reference sin(x):    " << s_ref << "\n";
    std::cout << "Absolute Error(sin): " << std::fabs(s - s_ref) << "\n\n";

    std::cout << "Calculated cos(x):   " << c << "\n";
    std::cout << "Reference cos(x):    " << c_ref << "\n";
    std::cout << "Absolute Error(cos): " << std::fabs(c - c_ref) << "\n";

    // Test 2: Hyperbolic function at x=2
    double h_sinh = std::sinh(2.0);
    double h_ref = (std::exp(2.0) - std::exp(-2.0)) / 2.0;
    std::cout << "\n--- Hyperbolic Test at x=2 ---\n";
    std::cout << "Calculated sinh(2):  " << h_sinh << "\n";
    std::cout << "Reference sinh(2):   " << h_ref << "\n";
    std::cout << "Absolute Error(sinh): " << std::fabs(h_sinh - h_ref) << "\n";

    return 0;
}

The output consistently shows absolute errors in the range $$$10^{-16}$$$ to $$$10^{-15}$$$, confirming that the double-precision implementation is operating near machine epsilon ($$$\epsilon \approx 2.22 \times 10^{-16}$$$).


7. Advanced Topics and Extended Precision

For applications demanding accuracy beyond 16 digits (e.g., high-precision orbital mechanics or physics constants), standard double is insufficient.

7.1 long double

The long double type often maps to 80-bit extended precision (on x86 architectures) or 128-bit quadruple precision (on ARM or sometimes Linux/GCC). This provides approximately 18 to 33 decimal digits of precision, respectively. Compiler support and hardware mapping for long double are less standardized than for double.

7.2 Arbitrary Precision Libraries

When extreme accuracy (e.g., 50 or 100 digits) is required, external libraries such as GMP/MPFR must be used, which implement the same underlying Taylor/Chebyshev approximations but with dynamic precision management.


8. Conclusion

Trigonometric and hyperbolic functions are fundamental building blocks of scientific computing, accessible in C++ via the highly optimized <cmath> header. While hardware acceleration speeds up execution, performance critically depends on minimizing the number of required calls. Correctness, however, relies on careful management of floating-point stability, especially avoiding catastrophic cancellation near singularities or zeros, and utilizing robust functions like atan2 for multi-quadrant angular calculations. Mastering these numerical nuances is paramount for reliable solutions in complex computational tasks.


By Siyah, history, 2 years ago, In English

It is sad that

marzipan tried to convince others to write the blog "goodbye 23" to get contribution,

but his blog got -3500 [so far].


By Siyah, history, 2 years ago, In English

Hi, Is there a way to download my submitted file using the Codeforces API?

I use this, but it is not working.

PLEASE HELP

Tags: api

By Siyah, history, 2 years ago, In English

Hey, I wanted to get the most upvoted post on CF, but I got negative contribution instead.

So I decided to go for the most downvoted post on CF.

please help me :((


By Siyah, history, 3 years ago, In English

OK — I GOT -20 , IGNORE THIS FUCK THIS LIFE

Hi ^~^

Part One [Useful Functions]:

1.1 — Power Function:

[Binary Exponentiation] is a trick that allows us to calculate $$$a^n$$$ in $$$O(\log n)$$$.

The idea is that we can traverse through all the bits of a number from LSB to MSB in $$$O(\log n) $$$ time.

Write $$$n$$$ in base $$$2$$$.

The number has exactly $$$ \lfloor \log_2 n \rfloor + 1 $$$ digits in base 2, so we only need to perform $$$O(\log n) $$$ multiplications if we know the powers $$$a^1, a^2, a^4, a^8, \dots, a^{2^{\lfloor \log n \rfloor}}$$$.

Implementation
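The spoiler above is collapsed in this view; a standard sketch of binary exponentiation (my own, with the modulus as a parameter) would look like this:

```cpp
#include <cstdint>
using u64 = std::uint64_t;

// a^n mod m with O(log n) multiplications: square a at each bit of n,
// multiplying into the result whenever the bit is set
u64 binpow(u64 a, u64 n, u64 m) {
    u64 res = 1 % m;
    a %= m;
    while (n) {
        if (n & 1) res = res * a % m;
        a = a * a % m;
        n >>= 1;
    }
    return res;
}
```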

1.2 — GCD-Function:

The [Euclidean algorithm] allows us to calculate $$$\gcd(a,b) $$$ in $$$O(\log \min(a, b))$$$. The idea is to repeatedly subtract the smaller number from the larger one until one of the numbers is zero.

For Time Complexity and Binary GCD you can read This.

Implementation

Note that you can calculate $$$\text{lcm}(a,b)$$$ as $$$\frac{a}{\gcd(a,b)} \cdot b $$$.
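The collapsed spoiler aside, a typical sketch of the Euclidean algorithm together with the lcm formula (my own illustration):

```cpp
#include <cstdint>
using i64 = std::int64_t;

// Euclidean algorithm, O(log min(a, b)): replacing subtraction with the
// modulo operation skips many repeated subtractions at once
i64 gcd(i64 a, i64 b) { return b == 0 ? a : gcd(b, a % b); }

// Dividing before multiplying keeps the intermediate value small
i64 lcm(i64 a, i64 b) { return a / gcd(a, b) * b; }
```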

1.3 — Factorial & nCr & ...:

Sometimes you need to calculate $$$\binom n k $$$

For that, we first precompute all factorials modulo $$$ mod $$$ in $$$O(N)$$$.

Implementation

We can also precompute the inverses of $$$fac[i]$$$: inverting each one separately costs $$$ O(N \log mod) $$$, but a single modular inverse followed by a backward sweep brings it down to $$$ O(N + \log mod) $$$.

Implementation
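The collapsed implementation aside, here is my own sketch of the factorial table with inverse factorials computed by one Fermat inverse plus a backward sweep (assumes MOD is prime):

```cpp
#include <vector>
#include <cstdint>
using u64 = std::uint64_t;
const u64 MOD = 1'000'000'007;  // assumed prime modulus

// binary exponentiation (see section 1.1)
u64 binpow(u64 a, u64 n, u64 m) {
    u64 r = 1; a %= m;
    for (; n; n >>= 1, a = a * a % m) if (n & 1) r = r * a % m;
    return r;
}

struct Binomial {
    std::vector<u64> fac, inv_fac;
    Binomial(int n) : fac(n + 1), inv_fac(n + 1) {
        fac[0] = 1;
        for (int i = 1; i <= n; i++) fac[i] = fac[i - 1] * i % MOD;
        // One inverse via Fermat's little theorem, then sweep downward:
        // inv_fac[i-1] = inv_fac[i] * i, so the table costs O(N + log MOD)
        inv_fac[n] = binpow(fac[n], MOD - 2, MOD);
        for (int i = n; i >= 1; i--) inv_fac[i - 1] = inv_fac[i] * i % MOD;
    }
    u64 C(int n, int k) const {
        if (k < 0 || k > n) return 0;
        return fac[n] * inv_fac[k] % MOD * inv_fac[n - k] % MOD;
    }
};
```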

1.4 Fibonacci in 20 lines:

As you know, you can calculate the $$$n$$$-th Fibonacci number with matrix exponentiation.

here

It can be proved that:

$$$ \begin{align} F(2n-1) &= F(n)^2 + F(n-1)^2 \\ F(2n) &= (F(n-1) + F(n+1)) \cdot F(n) = (2 F(n-1) + F(n)) \cdot F(n) \end{align} $$$
Implementation
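The collapsed implementation aside, a fast-doubling sketch of these identities (my own, working modulo an assumed prime MOD):

```cpp
#include <cstdint>
#include <utility>
using u64 = std::uint64_t;
const u64 MOD = 1'000'000'007;  // assumed modulus

// Returns {F(n), F(n+1)} using the doubling identities from the text
// (2*F(n+1) - F(n) equals F(n) + 2*F(n-1), so this is the same formula):
//   F(2m)   = F(m) * (2*F(m+1) - F(m))
//   F(2m+1) = F(m)^2 + F(m+1)^2
std::pair<u64, u64> fib(u64 n) {
    if (n == 0) return {0, 1};
    auto [a, b] = fib(n >> 1);                    // a = F(m), b = F(m+1)
    u64 c = a * ((2 * b + MOD - a) % MOD) % MOD;  // F(2m)
    u64 d = (a * a + b * b) % MOD;                // F(2m+1)
    if (n & 1) return {d, (c + d) % MOD};
    return {c, d};
}
```

`fib(n).first` is $$$F(n) \bmod MOD$$$ in $$$O(\log n)$$$ time.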

Thanks kien_coi_1997 and I_love_tigersugar.

1.5 Built-in useful function:

        vector<int> a(n);

        iota(a.begin(), a.end(), 1);
    // a = 1, 2, 3, ..., n

        shuffle(a.begin(), a.end(), mt19937(chrono::steady_clock::now().time_since_epoch().count()));
    // a = random permutation of a
    // (random_shuffle is deprecated and removed in C++17; use shuffle with an rng)

        vector<int> ps(n);
        partial_sum(a.begin(), a.end(), ps.begin());
    // ps[i] = a[0] + a[1] + ... + a[i]  (ps[i] = ps[i-1] + a[i])

        vector<int> h(n);
        adjacent_difference(a.begin(), a.end(), h.begin());
    // h[0] = a[0]
    // (i > 0) h[i] = a[i] - a[i-1]

        cout << accumulate(a.begin(), a.end(), x);
    // prints x + a[0] + a[1] + ... + a[n-1]

        cout << inner_product(a.begin(), a.end(), b.begin(), 234) << "\n";
    // prints 234 + sum(a[i] * b[i])

Thanks Igorjan94 for this.

Was this blog helpful?


By Siyah, history, 3 years ago, In English

I was doing CP, but... In queue . . .

UPD : FIXED


By Siyah, history, 3 years ago, In English

Hello ^^

I have seen many editorials that have said TODO for 10-11 years now.

Can they be completed?

For example: Blog 1, Blog 2

and so on


By Siyah, history, 3 years ago, In English

Hi, today this user used a public computer and didn't log out of his account afterward, so we are writing this blog to educate people about the importance of logging out :)


By Siyah, history, 3 years ago, In English

Hi^^,

Can someone explain to me Problem I of SEERC 2020? [I didn't understand the editorial.]

And share the code if possible.

Link to the problem: PROBLEM I


By Siyah, history, 3 years ago, In English

Hi ,

I was looking at blogs with the tricks tag when I came across something interesting.

On the page dedicated to each tag, which lists the blogs on that topic, the preview of each blog shows the raw source of the post rather than the rendered text.

Look at the picture below to see what I mean.


By Siyah, history, 3 years ago, In English

In your opinion, what is the most important factor for a good contest? And can you share some good contests?
