galen_colin's blog

By galen_colin, 6 months ago

Spoiler: it's clickbait. Sorry. Titles like those are fun.

...but, it's the same style of clickbait as my roadmap [spoiler 2: if you haven't seen that, it markets itself as a standard roadmap (y'know, soulless and devoid of purpose), but then a few minutes in, says "ok but roadmaps are stupid and here's the advice that you really need" and completely changes the direction of the video]. That's my style of clickbait, and that's the vibe this blog goes for.

So... hear me out, please.

The start

First, a couple of meta-notes. I will likely exaggerate my tone a bit here for dramatic effect. Don't take it personally, please. Also, this applies to almost all ratings. At the very least, I recommend you process the main point of this blog and see whether or not you already do something similar (for most people... probably not?).

The problem

I'm sure if you've looked at the recent actions tab for more than a few nanoseconds, you've noticed that there is an uncountably infinite number of blogs asking questions like "what's the best way to practice?" or "what's the optimal way to reach rating X (often red)?", and you've possibly made a few more observations about such blogs:

(1) The people writing these blogs have clearly not tried to answer their own questions.

(2) Some people do answer, and they almost always provide zero justification for their answers, saying "source: trust me bro". Some of these answerers are even lower rated than the poster of the blog, which, combined with the previous point, is... suspicious.

(3) Very few people actually follow the advice that is given, even if it's actually good.

(4) There's often an enormous amount of emphasis on topic-based learning.

...and that doesn't even cover what goes on in DMs. This list is not exhaustive, but it's enough to make a point.

Many people (with an enormous intersection with the people who write such blogs) have a plan like the following for improvement in CP (let's denote this plan as $$$P_1$$$): "Let's find the best practice strategy possible (maybe by asking some red or posting a help blog), then implement that for the rest of my life without questioning it" (perhaps worded a bit dramatically). The "without questioning it" part is evident from observation (2) and somewhat from (1), and (3) indicates that people are doing strange things with the information they get — maybe trying to combine everything into some ultimate strategy? I'm not sure, but it doesn't really matter.

I'll say it plainly: if you write a blog asking for help like that, you're asking the wrong questions.

A note: if you actually find the optimal practice strategy, then $$$P_1$$$ is not that bad, because you'll be improving at the best possible rate. But there's a bit of an issue with $$$P_1$$$. To illustrate what it is, let's look at some of the best existing blogs about improvement/meta stuff on CF (not exhaustive, and in no particular order):

(a) Self-deception: maybe why you're still grey after practicing every day by -is-this-fft- — this blog was genuinely groundbreaking. I'd never seen insight of this caliber on CF before. The role of learning psychology in CP is vastly underappreciated as-is, and this blog was a good look into an incredibly significant factor stopping people from improving.

(b) How to practice Competitive Programming [Um_nik version] by... Um_nik — this is a really good framework for how to look at practice and what it should do for you. The "ask a friend who solved the same problem" idea is a great solution to the issue of missing out on knowledge from not seeing editorials.

(c) My opinion on how to practice competitive programming by Radewoosh — this blog illustrates a very simple point: if your brain wants to learn something (e.g. because you enjoy it), you'll have a much easier time learning it. I think the blog itself is more anecdotal than formal in illustrating this, but it's still a very important idea I've rarely seen addressed on CF.

These blogs are great. They're fucking amazing. They introduce new ideas to the CF improvement meta and answer a lot of questions people may have. But, the issue: any such blog is finite, but there are infinite possible ways that someone could be practicing wrong.

For example, here's something that none of these address — what if you think about problem-solving wrong? What if you try to solve everything like a school problem (or LeetCode), just throwing the knowledge you have at the problem and hoping it cracks? That doesn't work when a problem requires structural insight or observations (which most CF problems do). If you practice with that problem-solving framework in mind, it doesn't matter how much good practice you do — you'll be training for the wrong thing, and thus be forever doomed to stay stuck. There are so many issues like this, where even if you do everything else perfectly, leaving them unaddressed will make you fail nonetheless (or, at least, seriously drag you down).

In a more general sense, these blogs provide great necessary conditions for having a good practice strategy, but they are far from sufficient, because there are infinite possible things that could be going wrong. There are a lot of blogs and theories and such about how your practice should work. Many of these theories are not wrong. But even if you combine all of the (correct) theories together and somehow manage to absorb all of that information, you still won't be able to find the perfect practice strategy, because it can't possibly cover everything that someone could be doing wrong.

So, here's my first bold (literally) claim — no existing resource is good enough to find a good enough practice strategy for everyone*, simply because there are too many necessary conditions to enumerate like this, and many people are doing certain things wrong that are often too hard or too specific to see coming without actually experiencing them yourself (so no such resource would even think to mention them).

* There is maybe one exception I know of, by nor. But I don't think that blog puts nearly enough emphasis on the part I'm about to talk about.

A solution

So, then, what? What's the solution to this? Often people turn to coaching — to have some experienced pair of eyes look into your soul, identify the things you're doing wrong, and correct your strategy. There are some issues with that too. First, most coaches don't actually do that, and instead just serve to enhance your own practice. But if your practice is wrong, it doesn't matter how much someone else enhances it; it'll still be wrong.

Another issue is that it's hard for an external party to even know what questions to ask someone to draw out their issues. Usually, people who do things wrong do them wrong because they think they're right. So if someone asks "what do you think you're doing wrong?", the wrong things won't even be part of the conversation. You almost have to know what the issues are beforehand to be able to diagnose them... which runs into the same problem as before.

No. There's something better. How about this — you should learn to diagnose your own issues. Here's the thing — nobody has the knowledge of you that you do. Nobody has access to your mind but you. You, and only you, know exactly what's going on in your thoughts.

Whenever someone asks me how to debug, I usually give them something of this form: look at your code and inspect every line. Ask yourself, "Does this match what my solution idea actually wants it to do?". If your code passes that check, then your code is maybe fine, and the issue may lie in your idea. So then, do the same to your idea — for every piece of your idea (you can use your code to guide this), ask yourself "Does this make sense to be doing? Does it give me correct and necessary information to solve the problem? Does it fit within time/memory/query/etc. constraints? In what ways/cases could this be wrong?".

If your idea passes the check too, then check that your understanding of the problem is correct. If all of that works, then you missed something in that process, so repeat it, but more carefully. Don't mentally "skip" anything — ask about every line, no matter how obviously correct it may be, because... maybe it isn't. The subtlest of bugs show up in quite unexpected places.
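To make the line-by-line check concrete, here's a small made-up example (the function and the bug are invented for illustration, not taken from any specific problem): a "first index with a[i] >= x" binary search, with the questions you'd ask written as comments.

```python
# Hypothetical example of the line-by-line check. The idea: return the
# first index i with a[i] >= x, or len(a) if there is none.
def lower_bound(a, x):
    lo, hi = 0, len(a)    # check: invariant is "the answer lies in [lo, hi]";
                          # hi = len(a) covers the "no such index" case
    while lo < hi:        # check: the loop ends exactly when the range collapses
        mid = (lo + hi) // 2
        if a[mid] < x:
            lo = mid + 1  # check: a[mid] < x, so mid itself is excluded; matches the idea
        else:
            hi = mid      # check: a[mid] >= x, so mid must stay a candidate.
                          # Writing "hi = mid - 1" here looks plausible but throws
                          # away a valid candidate, which is exactly the mismatch
                          # that asking "does this line do what my idea wants?" catches.
    return lo

assert lower_bound([1, 3, 3, 7], 3) == 1
assert lower_bound([1, 3, 3, 7], 8) == 4
```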

Well... I see no reason why you can't apply this process to your practice strategy too. So here's the first real bit of advice: at semi-regular intervals, "debug" your practice strategy. You get a lot of feedback from problems and contests. For each problem you fail to solve or solve too slowly, there are some ideas you missed, or some wrong directions you took, or some knowledge you didn't have, etc.

This feedback often forms patterns — if you repeatedly fail problems due to some knowledge factor, that's a sign that you should brush up on that knowledge. If you miss observations, maybe that's a sign that you give up too early or spend too long dwelling on ideas that don't work. If you fail problems and the editorial often just tells you "lol just be smarter", that's probably a sign that you should work on your creativity or "raw problem-solving" (i.e. spend more time just thinking/spend longer on problems).
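One low-tech way to surface these patterns is to actually keep a log of failed or slow solves and tally the causes. A minimal sketch (the log format and the categories here are made up; use whatever matches your own feedback):

```python
from collections import Counter

# Hypothetical practice log: (problem, why the solve failed or was slow).
log = [
    ("1850C", "missed observation"),
    ("1857D", "missed observation"),
    ("1862E", "didn't know the technique"),
    ("1866B", "missed observation"),
    ("1873F", "implementation bug"),
]

# Tally the causes; whatever dominates is the pattern to act on.
for reason, count in Counter(r for _, r in log).most_common():
    print(f"{reason}: {count}")
# Here "missed observation" dominates: a sign to spend more time thinking
# per problem, rather than to grind more topics.
```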

This feedback tells you what you should do to optimize your thought process. But it also tells you a lot about how you should practice to address these weaknesses. If you're trying to learn every topic under the sun, but every editorial you see tells you that you missed an observation... isn't that a sign that learning topics is maybe overrated for you (and the same feedback applies for the "solving problems school/LC-style" thing mentioned earlier)?

If you consistently fail binary search problems... isn't that a sign that you should spend more time on binary search? If you consistently don't fail binary search problems... isn't that a sign that your binary search is fine, and you probably don't need to direct much time to it? If you do much better in practice than in contests, isn't that a sign that you need to spend more time getting used to the contest environment/pressure (i.e. do virtuals or more contests)?

I could go on, but doing so would be pointless. The main thing is — you should learn to recognize these signs yourself. And how? It's exactly like how normal practice "should" work: you see that your strategy (either your thought process or your practice strategy) is failing for some reason, you use the feedback given to you by problems/contests about how your process is failing or the parts it doesn't cover, you make theories on why your practice fails and how to fix it, then you test those theories and see if the feedback you get supports or rejects them. That's the scientific method, and you (probably) won't get anywhere without it!

Obviously, if you're not good at making theories (and you probably won't be at the start), the theories won't be good, and you may even lead yourself in the wrong direction. But, just like problem-solving, the more you practice it, and support or disprove your theories (also using the feedback), the better you'll be at making theories, and the better you'll be at improving your strategy.

Hopefully you can see how powerful this is. Now, instead of having a fixed strategy and thus a fixed rate of improvement, you're constantly trying to improve your strategy, and you're even constantly improving the rate at which you improve your strategy. If you're familiar with calculus, think derivatives. Already, we have cubic growth, instead of linear (which you'd have if you just stuck with one strategy). Magic, isn't it?

But... why even stop at cubic? We can have a polynomial of arbitrarily large degree — you can go so far as to inspect your process of improving your theorycrafting and get quartic growth, and so on — though at some point it becomes impractical and unhelpful. Cubic is probably good enough.
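(To spell the analogy out, one loose way to model it: let $$$s(t)$$$ be your skill and $$$r(t)$$$ the quality of your strategy, so $$$s'(t) = r(t)$$$. A fixed strategy, $$$r(t) = c$$$, gives $$$s(t) = \Theta(t)$$$; improving the strategy at a constant rate, $$$r'(t) = c$$$, gives $$$s(t) = \Theta(t^2)$$$; improving the rate at which you improve it, $$$r''(t) = c$$$, gives $$$s(t) = \Theta(t^3)$$$. Each extra layer of reflection adds one degree to the polynomial. This is a loose model, not a law, of course.)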

But the main point is — you don't have to ask things like "how do I improve" or "what am I doing wrong" if you're capable of answering that for yourself. You don't have to be confused about what to do if you can answer your own questions.

Yet, it's still somewhat useful to have someone look at your practice plan and tell you what's wrong with it. And those legendary blogs I mentioned earlier are still incredibly useful too, because they contribute many unique ideas. But they should be useful for the purpose of improving your theorycrafting, rather than just for statically making your strategy a bit better. Use that feedback as actual feedback, and think about it, and use it to improve your improvement (and the improvement of your improvement, and so on), rather than just blindly following it. You can use others' knowledge and insights, but like anything else, you have to learn from it and absorb it into your own thought process, rather than just applying it without thinking.

Examples

Let's look at some commonly asked questions and see how this framework can allow you to answer them for yourself (e.g. with the feedback you get from solve attempts). Just thinking about things from the meta-improvement perspective can lead you to some pretty novel philosophy.

1. Something that often shows up is stuff of the form "what topics do I need to know to solve div2C?". Think about what I've said in this blog, and how you could answer that for yourself. Try to answer before reading what I have to say on it.

Answer

2. More generally — "should I practice more on topics or solve randomly?"

Answer

3. Stuff like "what difficulty should I do? How do I choose problems to practice on?"

Answer

4. Another common thing, regarding editorials — "how long to spend per problem?" and/or "how to read editorials?"

Answer

5. More of a freebie — stuff like "Should I see tags/hidden test cases/difficulty/etc. while practicing?"

Answer

There are obviously many more questions. Many of them have answers of a similar nature. You should not memorize my answers; you should understand how I reached them (they usually directly involve the feedback you get), and understand (i.e. think about) how to find such answers yourself (similar to the points made in question 4). Your end goal is to be able to come up with such answers yourself, so you can effectively critique your own practice.

Essentially, think thoroughly about everything. Then think thoroughly about how you think about everything. Then think thoroughly about how you think about how you think about everything. Then...

I'm willing to answer some more questions from comments, if (and only if) they're good questions, to better illustrate through example what I've spent all this time blabbing about.

Final notes

You could argue that this is a massive dose of overthinking. Most LGMs have probably not formalized their philosophy to this extent. But... people spend a lot of time thinking about this stuff, yet they only address the first layer — just trying to find the best strategy rather than looking into how to answer their own questions or why those strategies are actually effective (or not!).

I'm not the first to recommend something like this, but maybe I can be the one to emphasize its power to everyone. It would be easy now, for each "how to improve" post, to merely link this blog and say "you're asking the wrong questions". And the chance that they actually process everything I'm saying is not the best. But it's better than just giving them some aimless advice and leaving them to deal with all their issues that they probably don't even know are issues.

The self-deception blog does raise an interesting point — we're often biased in how we look at ourselves. So how do we critique our practice if we can't even see it properly? I think a general approach for this is to treat learning as objectively as possible. You start out with some random vector of preconceptions of how learning should be done. Many of these will be wrong... of course, because it's random. Your goal is to fine-tune this vector and point everything in the right direction. No attachment to ego, no satisfaction from "knowing what you're doing" (because you almost certainly don't), just objectivity. Then you'll likely be able to see yourself clearly.

That's probably not sufficient to overcome certain biases. I'm not an expert on it, so I don't exactly know how to go about that, but there's almost certainly a way — psychology is a very broad field. But I will say — you should also incorporate the possibility of bias into the theories you make about your practice (or about your theorycrafting), and that will already do a lot for you.

A possible question — is a meta-improvement process like this necessary for good improvement? Did reds discover this by accident (or on purpose)? Is this some (no longer) well-kept secret within the upper echelon of CF?

...conspiracy theories aside, most people who got really good probably do this to some extent — for example, I did. I started on USACO, which (at the time) was much more of a game of "guess the right topic to apply". Then I came to CF thinking that same method would keep working, and I quickly learned that, many times, I failed to see a solution simply because I didn't think thoroughly enough and missed something. So I adjusted my general strategy using that feedback and converged to the right problem-solving approach from it.

I would call that quadratic growth, maybe — doing some meta theorycrafting about practice, but not to the extent outlined in this blog, and definitely not with as deep an understanding of it as I have now. I'm sure many others have had similar thoughts... but I'd love to hear it directly from you, whether you have or haven't.

Meta notes

I was going to wait until LGM to make posts like this, to have some extra validation for the theories. But I've still improved a lot recently; if you want some direct validation for these theories, use my virtual rating. And I'm not waiting until LGM anymore because, recently, I'm not sure if that will even happen anytime soon, or at all (the exact reason for that will be apparent in another blog).

So... this blog is not an isolated incident. I plan to do more, and talk about some really deep psychology shit, and dump the "treasures" I've discovered about learning itself into the previously untainted seas of Codeforces, and hopefully start/continue some insightful conversation. I've had very little to do lately (have been quite sick for a while after WF, and doing most other things takes a lot of energy that I don't have at the moment), so this should all come fairly quickly. Look forward to it.

Tags: hi

+207

»
6 months ago, # |
  +8

Interesting insights, good luck on reaching LGM!

»
6 months ago, # |
  -58

Obvious one-liner "idea" conveyed through a wall of a text. Downvote.

»
6 months ago, # |
  +10

Nice blog! Thanks for the mention — in the blog post you linked, I did mention taking feedback from the knowledge graph (which implies that people would also take feedback from how they grow their knowledge graph) as well as concrete ways to develop it, but this blog makes the point more explicit.

I have some comments of my own on the content of this blog, and I am curious what people other than me think about them. Apologies if it looks like I am picking out sentences without context — I urge the reader to go and read the line in the blog and the context around it before reading my comments. I agree with all these points in moderation; what follows is only a discussion of when these things become impractical/counterproductive/completely false. (Also, some of these points might look like they're already addressed in the blog, but I implore the reader to look a bit deeper.)

you should learn to recognize these signs yourself

This is true — introspection is very underrated, and almost everyone would benefit from introspecting for 10 more minutes a day than they already do (and people who are learning something new seriously would need to spend way more time introspecting).

However, I feel that there is an exploration-exploitation tradeoff in all aspects of real life, and only in the limit that we have infinite time (i.e., asymptotically) does n-level introspection make sense for $$$n > 2$$$ or maybe even $$$n > 1$$$. Our brains need time to adapt to certain processes. We also find it harder to do things hierarchically (for a concrete example, most people I talked to find it harder to deal with segment trees compared to simple arrays) — simply due to very limited working memory, so adding more meta stuff tends to bog people down, making anything more than 2nd order pretty cumbersome.

Practically speaking, my solution around higher order meta things (which I find is much easier) is to pick someone's brain about things that they do well (often concerning completely different fields), collaborate with them if possible, and use the second-hand experience to improve the meta parts of how I think, and do it myself for a while to make it subconscious — this reduces the exploration part by a great deal (and is probably what Newton meant by standing on the shoulders of giants) and gives me sufficient experience to be able to make confident decisions. This is because if there is a best (or near-optimal) way to learn for humans, it should be successful across fields, and if it is not, then you're probably spending too much time on things that don't matter. It has the added benefit of giving you access to a vast ocean of knowledge from others, too, so you prevent yourself from generalizing things "incorrectly" at worst and have a much more cohesive knowledge data structure in your head in the average case.

That's the scientific method, and you (probably) won't get anywhere without it!

This makes sense in the context of the blog, but it also assumes that the scientific method is the only correct way to go about these things, which is a bit misleading. In my opinion, it makes sense to take feedback about obvious red flags in your learning methodology, but not to treat it as a perfect source of truth (even as a model) as the scientific method dictates. (Here, by the scientific method, I mean coming up with a hypothesis, treating it as the truth, deducing things to make it testable (optional), testing it out, and then accepting/rejecting the hypothesis.)

For example, if you go down the path of higher-order introspection, you would suffer from self-observational errors (as in, things you perceive are affected by the act of perceiving them — as lower-level concrete examples, think of self-deception, stage fright, performance anxiety, or even the difference in performance that comes from having a different mindset when you have nothing to lose). The scientific method would here give you biases that would compound (using it is invalid anyway since accounting for errors here is impossible, but let's say you use it for the sake of using it).

If you think of your 1st order strategy as gradient descent (as in, you decide to go in the direction that makes the most sense, which is the gradient of the utility function), then self-observational errors correspond to not being able to compute even the loss function accurately (since observing changes things), and hence the partial derivative with respect to your strategy (and derivatives of noisy things are pretty useless, especially if you only have one realization of the given random variable). Now, if your 2nd order strategy is to tweak your gradient descent algorithm, these errors will compound and lead to larger errors. One can argue that if there is a global optimum, it is feasible to run this no matter what (unless the algorithm is unstable, which is quite likely by itself), but even then, it makes sense to stop after a few iterations, as you would most likely be bouncing around in a neighborhood of the global optimum. Note how this also sounds similar to what Goodhart's law implies.

I think what's more important is to first be accurate in self-assessment as well as build intuition (and not strong biases) about how things affect you. These come only after a decent amount of experience — first-hand or second-hand — with the method, which again brings us to exploration-exploitation. You also don't want derivatives that correspond to the person you were before you had all the knowledge you now have — you want derivatives that account for all the knowledge you have right now (and also a sufficiently informative description of how this derivative will evolve with time in different scenarios, which is sometimes asking for a lot) — this becomes especially important/hard if you try to be very systematic and learn only from your own mistakes. Conventional wisdom and second-hand experience come to the rescue in such cases and give you much more robust priors and gradient computing tools. I think the reason people turn to writing blogs is also guided by such logic subconsciously since second-hand experience works decently pretty much everywhere.

Most of the fastest learners I know are quite good at assessing themselves and making confident decisions based on knowledge that they acquire in a smaller amount of time compared to others (and most of the time, without the scientific method), and that knowledge is almost always augmented by knowledge sourced from other people.

We still don't understand how learning works, and most self-experimentation in terms of learning has too many confounding factors, so using the scientific method is the same as expecting reliable conclusions without sufficient data (which any data scientist can tell you is a rookie mistake). Applying it to the data you get from others is also incomplete because, as mentioned in the blog, only you can have perfect knowledge of what is going on in your head (and combined with the confounding factors, it is not very reliable either). So you have to, at some point, start looking at some sort of aggregates and try things out without any apparent rigorous scientific basis.

Again, I'd like to emphasize that when I say "scientific method" here, I mean building a hypothesis, treating it as the absolute truth, and testing it out, as opposed to having uncertainty around the truth of the hypothesis — the major flaw here is to assume that things are perfectly describable and, as a corollary, not infinitely complex (why is it not possible for nature to have a different set of laws of physics for every square centimeter of the universe? And why is it not possible for different/seemingly opposite laws of learning to exist for a person at the same time?). It is missing this uncertainty that tunnel-visions us into a fake sense of knowledge, and I think that's pretty dangerous when it comes to something as critical as building your thought process.

For example, here's something that none of these address — what if you think about problem-solving wrong?

I have been re-evaluating my biases about this over the past year since I wrote my blog — and now I believe that there are no "wrong" ways to learn, just less correct ways to do so. Let's keep aside practicalities for a moment (I agree that there are some opinions around learning that are generally more practical and better for most people, as can be seen from the fact that I wrote a blog on learning explaining my own opinion). For instance, it is possible that you are fed with knowledge from different fields of study/walks of life, and when you sleep, you're forced to reconcile those pieces of information, coming to realizations that are completely novel and deeper than ones that all of us have been thinking about (emergence). I think it is more about what we know works rather than what does not work. IMO, we should be a bit more careful in claiming that certain things don't work at all since, in complex systems, there are more surprises than one would expect from their naturally simplistic intuition, which favors Occam's razor.

I also think about it in this way — whenever we reason about something mathematically, we assume certain simple axioms. In those systems, there are things you can't prove/disprove, and to do that, you would need stronger axioms. In the end, things boil down to what set of axioms you need to model real life well enough. Given that it is easier to rule out impossibility (just show an instance of the thing claimed to be impossible) than rule out possibility, it makes sense that things in real life that are absolutely (or even practically) impossible are very few in number. And this is not accounting for weird things like quantum entanglement that were thought not to be observable macroscopically but turned out to be. Simple axioms seldom work in modeling real life, and those that work are probably already known. Similarly, well-accepted theories are good enough until they are not. But that's a discussion for another day.

This goes into a discussion on what truth is, when something is the truth, when something should be perceived as the truth, and so on, but the main point is that what the truth would be for someone is not the same for someone else. A computer would probably learn better in different ways than a human would, and a modern computer definitely learns better in different ways than an old computing device did.

»
6 months ago, # |
  0

What do you suggest for peaks and drops in the journey? Recently, I was able to solve a Div. 2 C rated 1600 during the contest, but I normally struggle to solve 1300s and 1400s. Since that contest, I have had 3 bad contests in a row. What would you do in this situation, when you know that you're better than what you've performed recently?

  • »
    »
    6 months ago, # ^ |
      0

    Fluctuations and lucky/unlucky contests happen; there is nothing you can do about it, we are all human. However, you can get better to increase your chances of performing well. So don't demotivate yourself after a bad contest (and vice versa, don't be so sure that you are underrated after a good one).

  • »
    »
    6 months ago, # ^ |
    Rev. 4 | +12

    As someone who has a very inconsistent performance graph, I can relate.

    I don't know if it will help you much, but I can tell you what's probably wrong for me (and maybe you'll find yourself in similar cases?).

    • I haven't seriously trained CP for 2 years (because I had more important things to do), and I only took some CF rounds casually on weekends. This led to a loss of implementation speed and accuracy (which is critical in the case of CF). Maybe you should consider training your speed (if, say, you solved ABC and you're at the bottom of the ABC solvers).
    • I used to think that in 2022 I was way better at problem solving than I am currently. However, my rating was even more inconsistent (and lower). Again, the explanation is quite simple: my practice method at that time was to pick a hard OI problem and solve it after hours of thinking. It was good practice for my national OI. However, it is probably not optimal at all for CF, as the problem style is different, you don't even have the same amount of time, etc.
    • At that time, I also hated certain types of problems, and when they appeared in any contest, I would usually give up on them very early (in the case of CF, I often quit mid-round because I don't like the problem). Try to think about whether you have a similar issue (it can be more subtle; for example, maybe you're still trying to solve the problem, but you're doing so passively and not actively, which is almost like quitting).

    You should also try to think about your strong/weak topics (in my case, I'm pretty bad at constructive problems because I basically never practiced them). Think of your rating as a weighted average: for each "topic"/"problem style", you have a certain rating. Now, depending on the contest, each "problem style" has a certain probability of appearing, and your rating on the platform is the weighted average of your ratings on each "problem style" (weighted by the appearance probability). Maybe you are 1600-rated on certain problems, but you're weaker on others. If the gap is too large, this can lead to high variance in your performances.
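    (In symbols, the model above is roughly $$$R = \sum_s p_s R_s$$$, where $$$p_s$$$ is the probability that style $$$s$$$ appears and $$$R_s$$$ is your rating on that style; the formula is just a restatement of the previous paragraph, not an actual rating mechanism. The larger the spread among the $$$R_s$$$, the larger the contest-to-contest variance.)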

    Sorry for the wall of text, I guess I could have explained this in less words x)

  • »
    »
    6 months ago, # ^ |
      +37

    you know that you're better than what you've performed recently

    No, you are not; the poor performance in these contests already proves you wrong, as it is probably the most straightforward way to assess your ability. Don't deceive yourself. And being able to solve some hard problem doesn't mean you won't brick on easier ones. But what you can do when you do badly in a contest is to think about why you performed badly and try to fix it. I personally think of a bad contest as dropping some rating in exchange for a chance to learn what you are weak at, and it's your job to improve yourself to prevent it from happening again.

    (Sorry for the above text being a bit harsh, but I have also been through something like this, and I wish I had realized it earlier.)

    • »
      »
      »
      6 months ago, # ^ |
        0

      No need to be sorry, I know that. That one thing has been the finishing touch: I have not been able to finish the implementation perfectly. I get almost all the way there with the observations, but there's that one thing I've been missing a lot lately.

      • »
        »
        »
        »
        6 months ago, # ^ |
        Rev. 2 | +26

        In that case, maybe just try to be more careful when making observations (but not too careful either, as that would take too much time), sometimes even with a proof.

        Or, if you're saying you miss a lot on implementation, then before actually implementing, try to have a rough idea of what you need to implement and what data structures you need to maintain; breaking the implementation into small functions that each do one thing would help.

        (I'm not sure whether you mean making a false observation or failing to implement something.)

        • »
          »
          »
          »
          »
          6 months ago, # ^ |
            +3

          Lemme break it down further: it's like I come up with a somewhat-correct approach. Take problem C from the recent Div. 2: I had the idea of how things would go, how we would distribute k, and how the score would be calculated optimally, but I missed the binary search during the contest. Usually I can think of binary search (upper bounds, lower bounds) pretty quickly, but I couldn't think of it there. Idk if that clears up what I wanna say, but I hope it does.

          • »
            »
            »
            »
            »
            »
            6 months ago, # ^ |
              +11

            Instead of finding excuses to deceive yourself, you should just practice more. If you really think your ability is better than your current rating, then believe in the law of large numbers: your rating will eventually converge to where it should be.

            A good way to judge is to take the average performance/rating of the rounds you've done in a month and compare it to the previous month. If you didn't improve at all, then you know that you should practice more or switch your practice strategy.

  • »
    »
    6 months ago, # ^ |
      0

    Based on the blog, I think you should probably think about it

    • »
      »
      »
      6 months ago, # ^ |
        0

      How? I can't relate this issue to the blog. Can you elaborate, please?

      • »
        »
        »
        »
        6 months ago, # ^ |
        Rev. 6 | 0

        The blog basically says to learn how to reflect on and adjust your methods through trial and error yourself. So, according to the blog and your comment: you think you are performing worse than is feasible for you, so list the reasons why that might be the case, and change something about your mindset based on the reasons you listed. See if the change helps; as you practice making changes, you'll gain intuition to make better changes over time.

        Along with the above, I would say that whatever happens in a span of less than max(5 contests, 1 month, 50 problems) is mostly a matter of probability, so just practice more (as I say in a comment below, I think for a beginner nothing matters more than further exposure, hence the standard advice). You need to reflect/adjust more when you see longer trends of failure, though.

      • »
        »
        »
        »
        6 months ago, # ^ |
          0

        Reflect on why you've done poorly in recent contests and try to work on that. Try very hard to answer your own questions. You will become much stronger this way!

        If your explanation for why you've done poorly is not great, it's ok. After a while, ask yourself why your self-diagnoses are bad and work on that. That will make your getting-stronger process stronger, which will make you even stronger-er in the long run!

        • »
          »
          »
          »
          »
          6 months ago, # ^ |
            0

          Is there any example of how to improve the reasoning? The reasons I wrote were ones I could barely come up with. SuperJ6 and you, please.

          • »
            »
            »
            »
            »
            »
            6 months ago, # ^ |
              0

            You mean like making observations? I'm not really sure.

            When I felt I was bad at making observations, I forced myself to upsolve without reading editorials (only reading an editorial after 2 months of trying without success, a.k.a. almost never), and when I did read editorials, I would try to modify my thought process to be able to come up with the ideas myself (also reading as little as possible before trying to solve by myself again).

            This worked great for me, but note that I tailored that strategy specifically for myself. It might not work for you. Instead, reflect on what might work for you and do that!

            Both of these ideas are better explained in Colin's blog btw

            • »
              »
              »
              »
              »
              »
              »
              6 months ago, # ^ |
                0

              Not the observations in problem solving; I mean the observations you make when trying to figure out the mistakes and errors, like what's listed in this blog.

              • »
                »
                »
                »
                »
                »
                »
                »
                6 months ago, # ^ |
                  0

                An idea that helped me: derive observations from concrete data (e.g. from past contests) instead of relying on memory or imagination, as these are biased (e.g. by ego, which is very hard to remove in my case).

                But again, I don't know where you are failing, so even though I can give a general tip, the real answer is the same as in my previous comment: figure it out by reflecting on it (yes, reflecting about reflecting, so meta!).

  • »
    »
    6 months ago, # ^ |
    Rev. 2 | 0

    Well, first off, you should probably stop cheating.

    • »
      »
      »
      6 months ago, # ^ |
        0

      It was that one time only, and I regretted it a lot. I never did it after that, and the achievement in the blog was nowhere close to cheating. There are stupid people who were my competitors in the national OI, and they commented that stuff. The competition where I won was an onsite contest, and the PCs had firewalls, so it was not possible.

»
6 months ago, # |
Rev. 5 | 0

I agree with the general idea that you should run some gradient descent on your practice strategy (and any strategy); this is an important general thing to learn in life, and it is all you really need. However, maybe in line with nor's comment, it can take a long time to get to that point from scratch, it is hard to analyze without sufficient knowledge, and you can end up bouncing around the global optimum at some point.

So, especially in the beginning for a relatively constrained task like CP, I think it is best to give pretty generic guidelines (for yourself, or to give to others as advice) that make it as easy as possible to comprehend the idea of how to get a next problem and to start analyzing fundamental similarities between problems (getting out of the plug-and-play mindset). As long as it's not completely wrong, following an extremely simple strategy along these lines, without too much meta-analysis on how to improve, is probably best in the beginning, because you need exposure more than anything, and good generic advice should work for anyone to get towards their global optimum at the start without wasting time overthinking. Humans are pattern-recognition machines and will get better the most just with more data in the beginning. I think one will likely naturally begin to increase meta-analysis as necessary when they get up to a level where it's worth thinking about, and then run gradient descent on their practice method once they have gained sufficient knowledge, where it will make a significant difference and they are likely to tweak in the right direction.

Anyway, that is the basis behind my practice method (I love self-advertisement), and I think it is the best generic advice to give someone before they're able to tweak it (hence why I wrote it).

»
6 months ago, # |
  +27

I was recently at ICPC Luxor and discussed similar topics with some GMs and above. Most of them said "eww" or "ugh" when I said my training strategy involved reflecting on my weak points and on my practice strategy. I found this really surprising.

What do you think explains this? My hypotheses were that they are either so talented that this was never necessary for them (they do whatever and quickly improve anyway), or so biased that they don't realize they're already doing it subconsciously.

»
6 months ago, # |
  +75

This is something that I had thought about, but I quickly realised the blog basically said exactly nothing. There are some words, "have most people formalised their philosophy to this extent?", which are confusing to me, as I cannot see what there is to formalise. Here is what I believe has been conveyed through the blog:

  • That your training strategy can be changed, depending on training results, actual performance, your targets, etc.
  • You gave a few examples of what types of trainings could have what effects
  • You can gain some positive value if you think "If it is broken, just fix it"

The first point could be new to some people: if you are not aware of it, then realising its consequences could be helpful, as it gives you an entirely new space of methods to explore. It is also objectively true, but for this discussion we shall assume it to be an obvious fact.

The second point is likely true. It is also objectively useful: more understanding = more precise decisions.

Now for the third point.

I know you did not explicitly say "Find broken thing and fix it". You said "we should look for signs that strategies are not working". However, what should one do with such signs? Throughout the blog, I seem to find "do whatever training is necessary to improve this component" as the only solution; all your suggestions seem to be based on this basic idea. I am afraid that you assumed this is the universal truth, and so everything that follows from it is logical.

To me, this is just a basic 800-rated starting heuristic: the first thing you should try. If you have not thought about this seriously, use this strategy; it is fine. From experience, it has some positive gains. However, note that I said "from experience": there is no obvious reason why this must be true, and I shall present some reasons why.

Let's suppose there are three critical components to CP. For example, it could be {DP, Constructives, Data Structures}, or {Core Ideas, Implementation Ideas, Implementations}. Let your skills be $$$x_1, x_2, x_3$$$; a reasonable model is to assume your skill is $$$R = \min(x_1, x_2, x_3)$$$. (There are lots of reasons why this is min, not sum, but this comment would get very long exploring this.)

There is a simple strategy: Let's improve the weakest aspect.

This is equivalent to "find broken thing and fix it". In this blog, you are choosing something that you failed on and improving it; I argue this is the same thing, as whatever you failed on in practice is likely the weakest aspect.

Assume $$$x_1$$$ is your weakest aspect and you have improved it by $$$\epsilon$$$. Provided that $$$x_1$$$ was the weakest aspect by a margin of at least $$$\epsilon$$$, your rating has increased by $$$\epsilon$$$, whereas if you try to improve the other aspects by $$$\epsilon$$$, there will be no increase in $$$R$$$. So this must be a good strategy, right?

The trouble lies in considering the long run. In the long run, you will always improve each of the aspects. If training $$$A$$$ always improves component 1 by $$$\delta$$$, and training $$$B$$$ always improves component 2 by $$$\sigma$$$, then doing them in order $$$A,B$$$ or in order $$$B,A$$$ gives the same result, even if one of them gives you a bigger partial skill along the way; partial skill should not be something too important.

Therefore, in order to show that this strategy truly has some gains, you must justify why $$$(A,B) > (B,A)$$$, that is, that there is some advantage to learning $$$A$$$ before $$$B$$$, be it speed, efficiency, or something else. This highly depends on what the components actually are, and just because the component trained by $$$A$$$ is weaker may not be enough of a reason.

Experienced CPers will immediately spot that the above only gives you a locally optimal solution under adjacent swaps. A global optimum, even with perfect information, is even harder to imagine.

"Improve the weakest aspect" is a safe baseline strategy, but there could well be other, much more powerful strategies that are more efficient. Any strategy that takes note of this non-commutativity is far more useful.

This is why I think "fix whatever is broken" is bad: it does not look at the full picture. However, there is a far more important issue that makes it directly wrong: cross-strategies.

There exist (this is an opinion) some strategies that can improve all aspects at once. Such strategies would not seem desirable if you are in the "fix whatever is the weakest" mindset, since they are slow at fixing your precise target. However, they will once again be better in the long run. If you have three strategies that improve each component at rates of $$$(1,0,0), (0,1,0), (0,0,1)$$$, then a fourth strategy of $$$(0.4, 0.4, 0.4)$$$ would be much more useful. Unfortunately, cross strategies are even harder to detect and verify, because they have less immediate impact on the overall skill $$$R$$$.
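As a quick sanity check of this point, here is a toy simulation under the min-model (just a sketch: the rates are the ones above, and the session count is arbitrary):

```python
# Toy min-model: skill R = min(x1, x2, x3); each training session applies one
# strategy's rate vector, using the (1,0,0)/(0,1,0)/(0,0,1) vs (0.4,0.4,0.4)
# numbers from above.
def simulate(strategies, sessions, pick):
    x = [0.0, 0.0, 0.0]
    for _ in range(sessions):
        rate = strategies[pick(x)]
        x = [xi + ri for xi, ri in zip(x, rate)]
    return min(x)

pure = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]

# "Fix the weakest": always apply the pure strategy for the lowest component.
fix_weakest = simulate(pure, 300, pick=lambda x: x.index(min(x)))
# Cross strategy: apply (0.4, 0.4, 0.4) every session.
cross = simulate([(0.4, 0.4, 0.4)], 300, pick=lambda x: 0)

print(fix_weakest, cross)  # 100.0 vs 120.0: the cross strategy wins in the long run
```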

What I wish to conclude is that, if

  1. There is no nontrivial non-commutativity, i.e. no order $$$(A,B)$$$ that is better than $$$(B,A)$$$
  2. Your training methods are diverse, and can improve every component reliably and semi-efficiently
  3. You are a sensible human being, and will improve on components that are significantly behind

then what dominates your rate of improvement will be the efficiency of each of those individual training methods (and their diversity). Under the scope of this blog, we assume you will not be getting new training methods or improving the efficiency of each of these methods. In this case, any attempt to optimise the meta-strategy would be futile.

Finally, I want to make some direct deductions that are basically my opinions. I don't have serious justifications for each of these, and I am only 70% to 80% sure about them. I don't want to make strong assertions, since I don't think my rating is good enough to imply they are universal truths for everyone (I may do that when I am 3600-rated or similar).

  • You can reach LGM without ever thinking about meta-training strategies or training strategies, and only be moderately slower than if you had. Of course, this is under moderate assumptions of points 2 and 3 from above.

  • Generally, improving the weakest aspect is slightly better: this serves as the non-commutativity justification for the simple strategy of "fix the weakest". This is from experience and a few cases, but each of them is pretty minor. A simple example: improving implementation allows me to be more aware of edge cases while working on ideas.

  • Nontrivial non-commutativity exists, but it highly depends on what you are good at. However, I won't share an example until I have absolutely proved it to be true for myself.

  • Cross strategies exist, i.e. there are ways to improve many aspects at once and be overall more efficient. Similarly, I can't share an example because I still haven't completely convinced myself.

  • »
    »
    6 months ago, # ^ |
      +16

    Why are you turning the blog into math?

    • »
      »
      »
      6 months ago, # ^ |
        0

      Maybe because math is (mostly) a consensus-like science, and people who need to create or adapt things quickly often get tired of non-consensus-like political demagogy. Math is a world relatively independent of reality (where you can't tell good from evil, or find transcendental numbers, and many other things), after art and literature, that demonstrates the ability to verify simplified hypotheses and abstract thinking (modelling). Yes, it sometimes lacks critical thinking and common sense, but "the best way to understand is to explain to others"; without experience you would have to find other ways to map additional knowledge experimentally. Simply put: because nothing better has been invented yet.

      • »
        »
        »
        »
        6 months ago, # ^ |
          0

        Math is only consensus-like when you live in an abstract-nonsense world with agreed-upon axioms. But as you try to model real-world structures beyond human constructions, compressing functionally infinite parameters into a small number comprehensible to humans, you are bound to have similar politics in all but the simplest examples. Only some survival of the fittest among models then determines what is "right".

        • »
          »
          »
          »
          »
          6 months ago, # ^ |
            0

          Off-topic: there was a post about counting pairs with (A[i] & A[j]) = 0, and people around tried SOS DP, but a solution with bitset passed (link). Would you guess that the next part of this message will be 'smart/dumb' without this knowledge? Without it, this would be more non-consensus-like than consensus-like.

          But that is actually a feature of programming languages: operating over finite alphabets makes operating with numbers useful everywhere, not strings or anything else. The great thing about infinite alphabets and different math objects is that they mostly won't appear in real life in clear form (so is that invariant vs. the economical 'good vs. evil'?), so you can operate with them as you want (because that is the "abstract nonsense", as long as you don't try to find out how the axioms appeared). And when it comes to politics, you have to count every vote and PR, etc., so that is far from "the fittest", because they eventually have to clusterize (and there comes negative selection).

  • »
    »
    6 months ago, # ^ |
    Rev. 2 | 0

    The idea I get from the blog is that the spaces of practice strategies and solving strategies are connected differentiable spaces, and if you know nothing about them, you can only get information by a random walk at first. If you don't walk at all, you get 0 information. Once you get some information on either, you can see which directions work and head that way; as you get more information, you can guess the slope of both better, and they help you understand each other. The key is not that it's a very optimal walk; the key is that it is a strategy that will get you to the optimum eventually (according to the blog).

    So while I agree with what I can understand from briefly reading your justification that this is not an optimal walk, it doesn't matter to the blog; the blog only cares that it is an easy meta-strategy anyone can follow, with minimal extra info, that will eventually lead you in the right direction. However, for a beginner, I would say this just isn't the best advice either, because the goal is to take big steps in a somewhat-right direction right away, not to gain as much information about the meta as possible; and it is harder to see correlations between practice and solving until you are near the optimum of solving.

    I agree the blog does not say very much with lots of words, and it's definitely not what I would call a rigorous meta-strategy either. But it says the main thing that beginners need to hear, in a way that maybe makes them pay more attention: "learn to reflect on whatever you're doing".

    • »
      »
      »
      6 months ago, # ^ |
        +11

      I agree that I actually missed some points of the post, since it is long, and sometimes what could be implied by the blog is much more than what is actually written, and also more than what is actually read by the average reader.

      I agree the main takeaway from this blog is that: If your learning strategy is null/not-clearly-defined, then take this "Gradient Descent" or "Fix whatever is broken" method, and you will see some gains.

      What I disagree about the blog is that

      • I think it is far less impactful than what the blog implies. (I think it is something like 10-15% over a long time. Good to have, but not so bad without.)
      • I think it is far more difficult to get it right than what the blog implies

      My comments illustrate the issues that caused me to think this way.

      Specifically, about the "gradient descent" method: I wrote that it is very basic, and may actually not provide benefits, or may hurt in the long run (see what I wrote for elaboration).

  • »
    »
    6 months ago, # ^ |
      0
    • You can reach LGM without ...

    Who is "You" here? Do you also mean given that there exists some meta-strategy and necessary preconditions someone can reach LGM with, you think they can also do it (slower) without ever thinking about meta-strategy? I feel I agree in this case but definitely wondering if you literally mean 70-80% sure anyone can reach LGM also without ever thinking about meta-training assuming points (2) and (3)

  • »
    »
    6 months ago, # ^ |
    Rev. 2 | 0

    Thanks for the comment, I love hearing others' thoughts on this stuff. I'll split this comment into sections, because, y'know, there's a lot to say.

    Why this blog exists
    About improving the min
    Why the blog is long

    Given all else, I admit to being sick while writing the blog (and I still am). Maybe I was a bit too assertive at the start about who it would benefit. Maybe it was a bit incoherent. Maybe this is, too. Maybe if I read it back again and see that it's more of a mess than I expected, I'll just take it down. Sorry. I really like the ideas you bring up — if you want to have a more serious conversation about it later (in a better place than a comment section lol), I'd be happy to.

    • »
      »
      »
      6 months ago, # ^ |
        +3

      Thank you for replying in so much detail.

      I see that one thing I missed is that the blog originated from situations that lower-rated (grey/green/cyan range) people face (some wording in the blog did not help with this). I read it under the assumption that it is meant for people who have reached orange — the "median" rating on the Codeforces scale. I think that the two sensibility assumptions should already hold for people who have reached orange. I agree that these sensibility assumptions could very well not be satisfied for lower-rated people, in which case your blog is certainly helpful.

      It is true that there are lots of people who get stuck and cannot improve in the grey-cyan range. I agree it is a very difficult and complicated issue. I should make it clear that I am extremely unqualified to say anything about this: due to my background in maths, I was immediately purple when I started CP, and I have frequently worked with students in the purple-red ability range. This also explains why (somewhat a bias of mine) I unconsciously assumed more than I should for most people.

      I think your blog does propose a plausible and sensible explanation/solution. Then, given what I commented, I propose we also explore the two sensibility assumptions and view the issue from these two standpoints. I may also discuss the min model, since it can serve as an explanation.

      I also think that for this particular purpose, it would have been better to focus on a very basic, modest, but concrete "fix the weakest / fix whatever is broken" meta-strategy: if people are stuck due to not having a good meta-strategy (I don't know how often this "if" applies), then that is probably enough. For the reasons given in my comment, I don't think simply introducing the whole idea of "meta-optimisations" is all that beneficial (again, beneficial over this concrete strategy), especially when it is just a pure "you can do this" without lots of specifics and insights offered.

      I am open to some more serious conversations, though I will be honest that what I said is almost everything I know on this topic. Right now, I am betting on two very big non-trivial non-commutativities, which may really make or break my actual and CP careers, and these "experiments" take years to realise, so I haven't got much more than what I wrote.

  • »
    »
    6 months ago, # ^ |
      +16

    adhd torture ahh comment

»
6 months ago, # |
  +48

The blog is unreasonably long, and after I used GPT to summarize it, I saw nothing new.

»
6 months ago, # |
  +1

For people saying the blog is a waste of time and could have been wrapped up in a few paragraphs: my friend, you missed the entire point. Stuff like this, where you have to analyze your way of practicing (let's say, self-inspection), needs heart-to-heart talks where the writer pours his deep thinking into words, and it can't just be... you know... summarized into bullet points, because if you do that, it won't hit you as hard as it might have if you'd read the entire blog. Now I will show you what I mean by that.

I copy-pasted the entire blog into GPT and asked it to summarize it. This is what it threw at me:

"The blog explores various questions commonly asked by individuals looking to improve their problem-solving skills, particularly in competitive programming. It encourages a meta-improvement perspective, where individuals learn to answer their questions by analyzing their practice and feedback.

Identifying Necessary Topics: Rather than asking what topics are needed to solve specific problems, the author suggests practicing problems and noting unfamiliar topics. If a topic consistently appears in solutions, it's likely important.

Random Practice vs. Focused Topics: Practice should balance random problem-solving with focused topic practice. If failures stem from knowledge gaps, focus on learning. If failures are due to missed ideas, more practice is needed.

Choosing Problem Difficulty: The advantages and disadvantages of practicing easier versus harder problems are discussed. The decision should be based on personal feedback from problems.

Time Spent on Problems and Editorials: The time spent on problems and reading editorials depends on practice goals. The aim is not to memorize solutions but to understand problem-solving approaches.

Practice Environment: Simulating contest conditions, without additional information like tags or difficulty, prepares individuals for real competitions.

The blog emphasizes critical self-analysis and suggests that effective improvement involves not just finding the best strategy but understanding why strategies are effective or not. It also touches on overcoming biases and suggests treating learning objectively. Finally, it suggests that many successful individuals in competitive programming likely engage in some form of meta-improvement process, even if not as deeply as outlined in the blog."

Whoever has read both the entire blog and the GPT summary will understand that the summary doesn't hit you the way the blog did.