wil93's blog

By wil93, 13 years ago, In English

It is relatively easy to see the equivalent form a*b + c = d*(e + f), so you can store all possibilities for the left side of the equation and all possibilities for the right side.
I came up with an O(N^3 log N) algorithm which exceeds the time limit, and another one (same complexity) which passes in 11 seconds (the best solutions run in around 2 seconds).
My first algorithm used STL maps, and the code is very elegant:

1st part:
for each (i, j, k):
    map_1[i*j + k]++
    map_2[i*(j+k)]++   (only if i is non-zero)

2nd part:
For each element X in map_1 I check whether there exists an element Y in map_2 such that X.first == Y.first; if I find it, I increase Answer (initially 0) by X.second * Y.second.
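To make the two parts concrete, here is a minimal self-contained C++ sketch of this map approach. It is not the original code: the reading of the n input values into a vector s (over which i, j, k range) is my assumption about the input format, not something stated in the post.

#include <cstdio>
#include <map>
#include <vector>

int main() {
    int n;
    scanf("%d", &n);
    std::vector<long long> s(n);
    for (int i = 0; i < n; i++)
        scanf("%lld", &s[i]);

    // 1st part: count every value reachable by each side of a*b + c = d*(e + f)
    std::map<long long, long long> map_1, map_2;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            for (int k = 0; k < n; k++) {
                map_1[s[i] * s[j] + s[k]]++;           // left side
                if (s[i] != 0)
                    map_2[s[i] * (s[j] + s[k])]++;     // right side, this factor must be non-zero
            }

    // 2nd part: pair up equal values coming from the two sides
    long long answer = 0;
    for (std::map<long long, long long>::iterator x = map_1.begin(); x != map_1.end(); ++x) {
        std::map<long long, long long>::iterator y = map_2.find(x->first);
        if (y != map_2.end())
            answer += x->second * y->second;
    }
    printf("%lld\n", answer);
    return 0;
}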

Analysis: the 1st part's "for each (i, j, k)" is O(N^3) and the body of the inner loop requires O(2 * log N^3) = O(6 * log N) = O(log N), so the first part requires O(N^3 log N) in total. The 2nd part's "for each X" is O(N^3) while its body requires O(log N^3) = O(log N), so the second part also requires O(N^3 log N) in total.
2 * O(N^3 log N) = O(N^3 log N).
Now: N is at most 100, so the number of operations performed won't be more than 7 * 100^3 = 7,000,000 (definitely runnable within a 2-second time limit), and even if we count the constant factor (which is 2 * 3 * 2 = 12) the number of operations is 84,000,000, still far from the "edge" of 200,000,000 (for the 2-second time limit).

So why is this STL container so slow?

I switched to vectors "abc" and "def" ("def" is sorted after the precomputation), and then for each element in abc I count the number of its copies in def via the lower_bound and upper_bound functions. This runs in time (although on my PC it is slower than the map version). But the complexity is the same! The improvement is only in the constant factor.
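A sketch of this vector version, under the same input assumptions as the map sketch above:

#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    int n;
    scanf("%d", &n);
    std::vector<long long> s(n);
    for (int i = 0; i < n; i++)
        scanf("%lld", &s[i]);

    // precomputation: all values of a*b + c and of d*(e + f)
    std::vector<long long> abc, def;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            for (int k = 0; k < n; k++) {
                abc.push_back(s[i] * s[j] + s[k]);
                if (s[i] != 0)
                    def.push_back(s[i] * (s[j] + s[k]));
            }

    std::sort(def.begin(), def.end());

    // for each left-side value, count its copies among the right-side values
    long long answer = 0;
    for (size_t i = 0; i < abc.size(); i++)
        answer += std::upper_bound(def.begin(), def.end(), abc[i])
                - std::lower_bound(def.begin(), def.end(), abc[i]);

    printf("%lld\n", answer);
    return 0;
}

The sort costs O(N^3 log N) and each pair of binary searches is O(log N^3), so the asymptotic complexity is indeed the same as the map version.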

13 years ago, # |
I'm trying to solve it using a hash table, but I'm getting TLE. My program runs in about 1 s on random tests. Do you know some special test cases that cause TLE?
  • 13 years ago, # ^ |

Based on the problem, the number of combinations of (a,b,c) is N^3, not more than that. For each of them you MUST spend x time units (where x is the time the hash table needs to find / insert). So I think random test cases already "do it well" (there aren't special cases).

If you got WA instead, then you can look for some special test case (like having a zero in the input).
  • 13 years ago, # ^ |

1) do not use dynamic memory in your hash table

2) the size of the hash table must be about n^3

    UPD

3) do not use vectors to resolve collisions, use linked lists (and not std::list either; write it yourself). A rough sketch of such a table follows below.
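Not the commenter's code, but a minimal sketch of such a hash table: statically allocated arrays only, chaining done with index-based linked lists, and a bucket count of roughly n^3. The bucket count, hash function and array sizes here are my own choices.

#include <cstring>

const int LOGB = 21;
const int TABLE = 1 << LOGB;        // number of buckets, roughly n^3 for n = 100
const int MAXE = 1000005;           // maximum number of stored keys (about n^3)

int head[TABLE];                    // head[b] = index of the first node in bucket b, or -1
long long key_[MAXE];               // stored keys
long long cnt_[MAXE];               // how many times each key was inserted
int nxt[MAXE];                      // index-based "next" pointers for collision chains
int used = 0;

void init() {
    memset(head, -1, sizeof(head));
    used = 0;
}

int bucket(long long k) {
    unsigned long long h = (unsigned long long)k * 0x9E3779B97F4A7C15ULL;
    return (int)(h >> (64 - LOGB)); // take the top LOGB bits as the bucket index
}

void insert(long long k) {
    int b = bucket(k);
    for (int i = head[b]; i != -1; i = nxt[i])
        if (key_[i] == k) { cnt_[i]++; return; }
    key_[used] = k;
    cnt_[used] = 1;
    nxt[used] = head[b];
    head[b] = used++;
}

long long count(long long k) {
    int b = bucket(k);
    for (int i = head[b]; i != -1; i = nxt[i])
        if (key_[i] == k) return cnt_[i];
    return 0;
}

This could be used in place of map_2 from the sketches above: call init() once, insert() every d*(e + f) value, then add count(a*b + c) to the answer for every left-side value.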

»
8 years ago, # |

Experiencing the same issue. Got TLE with map and unordered_map, while the same code (complexity-wise) implemented using vectors, sort, lower_bound and upper_bound gives AC.

Any idea why this is so?

»
8 years ago, # |

My code using unordered_map got AC within 0.69 s. The same code using map got AC in 1.7 s.