shash42's blog

By shash42, history, 5 years ago, In English

I invite everyone to the 3rd contest of the ICO Preparatory Series 2018-19, to be held on CodeChef. It is aimed at helping students prepare for the International Olympiad in Informatics, and the topics can be very important for ICPC as well.

The contest guarantees original problems and fast editorials.

Date/Time: 17th November, 13:30 UTC
Contest Link

Setters and Testers: AsleepAdhyyan, shash42, Jagreet Das Gupta

There are 250 laddus for the top 3 Indian school students.

Hope you enjoy the contest and find the problems interesting!

By shash42, history, 6 years ago, In English

I have read on various posts on Codeforces (some very old, some more recent) that amortized-complexity algorithms lose the amortized property when we perform rollbacks. An example: this comment by Retrograd.

Rollback here means storing changes in some data structure and reverting to a previous state when required.

While I would love a general answer, I am specifically asking about DSU rollbacks with path compression. Is there any particular reason why they should not work? Intuitively, to return to the old state I perform exactly as many operations as I did to reach the new state.

Some might also point out that between a union and the moment we roll it back, there could be a hundred-odd queries, each causing path compression. We lose all of that compression after the rollback and have to redo it for further queries. However, I am not sure this really breaks the complexity: every individual query still has the same cost, doesn't it? Note that this is rollbacks, not a persistent DSU.

While I know union by size versus path compression is just O(N log N) versus O(N·α(N)), there is a particular problem that got me interested: Codechef — Chef and Graph Queries. I implemented an O(N√N log N) solution with Mo's algorithm and DSU rollbacks using union by size, which TLEs despite many small optimizations. The tester's solution, on the other hand, does path compression with rollbacks. (I am aware that a Mo's + DSU solution without any rollbacks also exists, and it is quite beautiful, but the question isn't about that.)

I am attaching a rough implementation just for the idea:

Code
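For concreteness, here is a minimal sketch of a rollback DSU with union by size and no path compression, so every union is exactly reversible. The struct and all names are my own illustration, not the implementation from the post:

```cpp
#include <numeric>
#include <utility>
#include <vector>

struct RollbackDSU {
    std::vector<int> parent, sz;
    // One history entry per unite() call: (new root, absorbed root),
    // or (-1, -1) when the unite was a no-op. This keeps rollbacks aligned.
    std::vector<std::pair<int, int>> history;

    explicit RollbackDSU(int n) : parent(n), sz(n, 1) {
        std::iota(parent.begin(), parent.end(), 0);
    }

    // No path compression: find never mutates, so unions are cleanly reversible.
    int find(int x) const {
        while (parent[x] != x) x = parent[x];
        return x;
    }

    bool unite(int a, int b) {
        a = find(a);
        b = find(b);
        if (a == b) {
            history.push_back({-1, -1});
            return false;
        }
        if (sz[a] < sz[b]) std::swap(a, b);  // union by size
        parent[b] = a;
        sz[a] += sz[b];
        history.push_back({a, b});
        return true;
    }

    // Undo the most recent unite() exactly.
    void rollback() {
        auto [a, b] = history.back();
        history.pop_back();
        if (a >= 0) {
            sz[a] -= sz[b];
            parent[b] = b;
        }
    }
};
```

With union by size alone, find is O(log N) worst-case per call, which is what makes each rollback cost the same as the operation it reverses.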

To sum it up: if we are really just reversing X operations, why should the total cost be more than 2X? Why do amortized algorithms fail under rollbacks? Thanks.

By shash42, history, 6 years ago, In English

I was going through many online blogs on 2D subrectangle queries and updates using lazy propagation. I was surprised to find that subrectangle minimum and maximum queries were apparently not possible in sub-linear time.

This is probably because most 2D data-structure approaches are based on fixing an origin and then answering a query as
Q(x1, y1, x2, y2) {x1 <= x2 and y1 <= y2} = Q'(x2, y2) - Q'(x1-1, y2) - Q'(x2, y1-1) + Q'(x1-1, y1-1),
where Q'(x, y) is the prefix query from the origin up to and including (x, y). [This example shows a range sum query.] This is basic 2D prefix sums, but with segment trees / a 2D BIT to handle updates. Inclusion-exclusion like this works for invertible operations such as sums, but not for min/max.
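To make the origin-based decomposition concrete for sums, here is a minimal static 2D prefix-sum sketch (no updates; the function names are my own, not from the post):

```cpp
#include <vector>

// p[i][j] = sum of a[0..i-1][0..j-1] (exclusive prefix), so p has one
// extra row and column of zeros and no special cases at the borders.
std::vector<std::vector<long long>>
build_prefix(const std::vector<std::vector<long long>>& a) {
    int n = (int)a.size(), m = a.empty() ? 0 : (int)a[0].size();
    std::vector<std::vector<long long>> p(n + 1, std::vector<long long>(m + 1, 0));
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < m; ++j)
            p[i + 1][j + 1] = a[i][j] + p[i][j + 1] + p[i + 1][j] - p[i][j];
    return p;
}

// Inclusive rectangle sum over [x1..x2] x [y1..y2] by inclusion-exclusion.
long long rect_sum(const std::vector<std::vector<long long>>& p,
                   int x1, int y1, int x2, int y2) {
    return p[x2 + 1][y2 + 1] - p[x1][y2 + 1] - p[x2 + 1][y1] + p[x1][y1];
}
```

The subtraction in rect_sum is exactly why this scheme cannot answer min/max: min has no inverse, so overlapping prefix rectangles cannot be "cancelled out".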

However, I came up with a different approach. [Please note that there is currently some bug in the code that is taking a long time to track down, so I thought I would first ask whether I am on the right track before spending more time debugging, as I have already spent a whole day on this.]
We maintain a segment tree over rows where each node holds another segment tree over columns, i.e. the naive nested approach. Now a node with range (x, x, y, y) holds the value at that point (arr[x][y]), not the sum from the origin to there.

Spoiler

Updates are handled in normal segment-tree fashion: if a node's range is completely within the update's target range, we apply the update and propagate it lazily to the ranges inside the node's range. To do this we first isolate the idx nodes within the update's target x-range, and then for each of these nodes we set the idy nodes within the update's target y-range. Since there are logarithmically many of each, the complexity is O(log N log M). The two lazy arrays are explained after the update code.

Spoiler
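Since the actual update code is hidden in the spoiler, here for reference is the standard 1D analogue of this lazy mechanism (range add, range sum). This is the textbook construction, not the 2D code from the post:

```cpp
#include <vector>

// Standard 1D lazy segment tree: range add, range sum.
struct LazySegTree {
    int n;
    std::vector<long long> sum, lazy;

    explicit LazySegTree(int n_) : n(n_), sum(4 * n_, 0), lazy(4 * n_, 0) {}

    // Apply a pending "+v" to a node covering [l, r].
    void apply(int node, int l, int r, long long v) {
        sum[node] += v * (r - l + 1);
        lazy[node] += v;
    }

    // Push this node's pending update down to its two children.
    void push(int node, int l, int r) {
        if (lazy[node] == 0) return;
        int mid = (l + r) / 2;
        apply(2 * node, l, mid, lazy[node]);
        apply(2 * node + 1, mid + 1, r, lazy[node]);
        lazy[node] = 0;
    }

    void update(int node, int l, int r, int ql, int qr, long long v) {
        if (qr < l || r < ql) return;
        if (ql <= l && r <= qr) { apply(node, l, r, v); return; }
        push(node, l, r);
        int mid = (l + r) / 2;
        update(2 * node, l, mid, ql, qr, v);
        update(2 * node + 1, mid + 1, r, ql, qr, v);
        sum[node] = sum[2 * node] + sum[2 * node + 1];
    }

    long long query(int node, int l, int r, int ql, int qr) {
        if (qr < l || r < ql) return 0;
        if (ql <= l && r <= qr) return sum[node];
        push(node, l, r);
        int mid = (l + r) / 2;
        return query(2 * node, l, mid, ql, qr) +
               query(2 * node + 1, mid + 1, r, ql, qr);
    }

    void update(int l, int r, long long v) { update(1, 0, n - 1, l, r, v); }
    long long query(int l, int r) { return query(1, 0, n - 1, l, r); }
};
```

In 1D a single lazy array suffices because updates only ever flow downward; the whole difficulty discussed below is that in 2D they must flow in two directions.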

Finally, the lazy part. We have two lazy arrays, lazy and lzy. lazy propagates over x-coordinates whereas lzy propagates over y-coordinates. This is important (they cannot be merged into a single 2D array) because of the way I chose to propagate:
Imagine a 2D space of (idx, idy) coordinates. Let the children of idx be below the node and the children of idy be to its left. We need to propagate in a way that compresses the rectangle between (0, 0) and (idx, idy) towards the origin, so an update needs to go both down and left. If we do this simultaneously in a single 2D array as:

Spoiler

the paths will start crossing and create a mess. For example, an update could go left-then-down as well as down-then-left, and thus update the same node twice. To prevent this, we impose a rule: an update that has been propagated left once cannot be propagated down. So updates first go down, and then each node they visit propagates them to the left. This is why we need two arrays: values in lzy cannot be propagated down, whereas values in lazy can be propagated leftward once (after which they become part of lzy and thus cannot go down) and can also be propagated down. Notice how calling the query function into a junk variable, lazyupd, helps propagate updates for all the nodes we visit (log N of them for each update on idx).

Spoiler

Finally I come to the queries. We do the junk-variable-based lazy propagation here too; the rest is fairly standard.

Spoiler

While this code is still a little buggy for some reason (it would be great if someone could help me fix it), I want to know whether the general idea is correct. This should definitely be better than a quad-tree or similar approach, unless I am missing some key point. Please go through the post and let me know your opinion; if it is correct, I will write another blog post, hopefully turning this one into my first tutorial :D

By shash42, history, 6 years ago, In English

As part of the Indian Computing Olympiad prep series being held on CodeChef, we have decided, by popular demand, to host a contest modelled after the ICPC. All problems will still be within the IOI syllabus, providing a contest beneficial to both ICPC and IOI aspirants.

We are sorry we were late in posting this on CF; the contest starts in less than an hour!

Contest Details:

Codechef Announcement

You can look forward to original problems and fast editorials! Any suggestions are appreciated! :D

Contest Link

By shash42, history, 6 years ago, In English

As students prepare for IOI 2018, to be held in Tokyo, Japan, we present a contest series on CodeChef to help them prepare for their respective national selection rounds. The problems and leaderboard will be completely based on the IOI, and all contests will have original problems related to popular concepts that the setters (who are themselves eligible for IOI 2018) have noticed in Indian selection rounds over the past years.

Even though the contests are modeled after Indian rounds, they can be beneficial for aspirants from all countries. The contests will have CF Div. 2 difficulty. Best of all, subtask difficulty ranges widely, ensuring that programmers of all skill levels stay engrossed for 3 intense hours.

The topics that will be covered are:

  1. Clever Ad-hoc Problems (minimal Math required)
  2. Greedy Algorithms
  3. Sorting and Searching
  4. Dynamic Programming
  5. Basic Graph Algorithms
  6. Basic Data Structures
  7. Miscellaneous Algorithms (Easy)

More details are available at: https://discuss.codechef.com/questions/113355/invitation-for-ico-preparatory-series

The first contest is on Friday, 13th October 2017, from 8 PM to 11 PM IST.

Contest Link: https://www.codechef.com/ICOP1801

Note: Only C, C++ and Java are allowed.

Hope you enjoy the contests and wish you luck!
