alecs's blog

By alecs, history, 14 months ago, In English

I've always found pure dynamic programming problems to be easier than pure greedy problems (at the same rating) and I don't really understand why. I wonder if other people feel the same.

When I have a solution for a DP problem, it just... makes sense. I don't really understand how the states and the recurrences come up in my mind, but it feels really intuitive. When I try to explain my solution to a student, however, I just get stumped. I can't find an easy, step-by-step approach for explaining it (I'm a tutor and it really annoys me).

It was just a "Eureka!" moment for me (I solved a couple of problems by myself a long time ago, and from then on, it became easy) and I didn't overthink it until now.

For context, I've never watched tutorials / been taught by someone, I just knew the solution for the maximum non-increasing subsequence problem and that's it.

When explaining, I always try to answer these questions myself:

  • Why are there $$$n$$$ states? Why do we need all of them? Why aren't $$$n - 1$$$ states sufficient? How did I come up with $$$n$$$ states in the first place?
  • How did I come up with the recurrence?
  • Why is it correct?
  • Do we cover all cases with this dynamic programming approach?
  • How did I find the base cases? Why did I choose those?
  • Why are the standard base cases: $$$0$$$ / $$$1$$$ for counting problems, $$$-\infty$$$ / $$$v$$$ (some specific value $$$v$$$) for maximum problems, $$$+\infty$$$ / $$$v$$$ (some specific value $$$v$$$) for minimum problems?
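To make the base-case question concrete, here is a sketch using the classic coin-change problem (my own toy example, not from the post; `count_ways` and `min_coins` are hypothetical names). The base cases follow exactly the pattern above: $$$1$$$ for the counting version, and $$$0$$$ (a specific value) plus $$$+\infty$$$ for the minimization version.

```python
INF = float("inf")

def count_ways(coins, target):
    # Counting version: dp[0] = 1 (one way to make sum 0: take nothing),
    # all other sums start at 0 (no ways found yet).
    dp = [0] * (target + 1)
    dp[0] = 1
    for c in coins:
        for s in range(c, target + 1):
            dp[s] += dp[s - c]
    return dp[target]

def min_coins(coins, target):
    # Minimization version: dp[0] = 0 (a specific value v),
    # everything else starts at +inf so unreachable states never win the min.
    dp = [INF] * (target + 1)
    dp[0] = 0
    for s in range(1, target + 1):
        for c in coins:
            if c <= s and dp[s - c] + 1 < dp[s]:
                dp[s] = dp[s - c] + 1
    return dp[target]

print(count_ways([1, 2, 5], 5))   # 4 ways: 5, 2+2+1, 2+1+1+1, 1+1+1+1+1
print(min_coins([1, 2, 5], 11))   # 3 coins: 5+5+1
```

Note how an "impossible" state simply keeps its initial value ($$$0$$$ or $$$+\infty$$$), which is why those initializations are the natural choice.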

I can find decent explanations for myself personally, but expressing them to a person who doesn't have this acquired intuition feels impossible.

So, the questions for you are:

  • Is there an algorithmic way of clearly teaching / explaining dynamic programming solutions?
  • Do you feel that greedy problems are generally harder than dynamic programming problems?

Thanks for reading my blog!! :)


»
14 months ago, # |

It depends on how much a person is a mathematician who tries to fold everything they encounter, versus a person who, on the contrary, likes to unfold everything and solve things more globally. But it's strange to say that DP is easier than greedy, since it depends entirely on the problem. Moreover, DP can be very different: for example, counting the number of some kind of sequences and simply finding the cost of an optimal path are completely different DPs, and one may go well while the other goes badly. If I had to generalize how I see DP, it is simply the principle of, instead of looking at the problem globally, breaking it into a bunch of similar subproblems and then using them to solve the problem globally. In the end, most DP problems become easier if you add dimensions until you find some obvious solution, and then try to reduce them or make the transitions faster.

»
4 months ago, # |

I personally think dynamic programming approaches are more intuitive due to the step-by-step structure of dynamic programming. Similar to proofs by induction, the idea of computing the answer for $$$n - 1$$$ steps, understanding the relationship between consecutive steps, and hence being able to compute the next step makes a lot of sense. I think it is simply more intuitive to more people. If you understand what you need to compute, then the relationships and transitions between states also make sense.

It also seems less abstract and less general. You don't need to understand every case of the problem; you can break it down into a few cases and resolve them using the answers for the "previous" step. In greedy ideas, on the other hand, understanding more general and abstract ideas and the structure of the problem plays a key role. Since you need to understand the problem very well in order to see why a greedy approach is correct, it seems harder to come up with.

I personally think that teaching any kind of idea or approach in a simple manner stems from asking the right questions, and from "teaching" how to ask the right questions. For example, in DP problems the right question is often: if I know the answer for case $$$n - 1$$$, can I compute the answer for $$$n$$$? Is there some sort of structure between steps, and how can I use it?

For example, in LIS (Longest Increasing Subsequence), if I know the answer for the first $$$n - 1$$$ elements, I can't easily compute the answer for $$$n$$$. However, any LIS among the first $$$n$$$ elements either

  • contains the $$$n$$$-th element, or
  • is an LIS of the first $$$n - 1$$$ elements.

So we can understand the connection between consecutive steps. What happens if I remove the $$$n$$$-th element from the LIS in the first case? Then it has some last element among $$$1 \ldots n - 1$$$. So on and so forth; with the right questioning, most of the questions you have listed get an answer.
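The case analysis above translates directly into the standard quadratic DP for LIS. Here is a sketch in Python (my own code, not the commenter's), where `dp[i]` is the length of the longest increasing subsequence ending at index `i`:

```python
def lis_length(a):
    # dp[i] = length of the longest increasing subsequence ending at a[i].
    # This mirrors the case analysis: an LIS among the first i+1 elements
    # either uses a[i] (extending the best earlier subsequence whose last
    # element is smaller than a[i]) or is already an LIS of the first i.
    n = len(a)
    if n == 0:
        return 0
    dp = [1] * n  # base case: each element alone is a subsequence of length 1
    for i in range(n):
        for j in range(i):
            if a[j] < a[i]:
                dp[i] = max(dp[i], dp[j] + 1)
    return max(dp)

print(lis_length([3, 1, 4, 1, 5, 9, 2, 6]))  # 4, e.g. 1, 4, 5, 6
```

The question "what is the last element of the subsequence?" is exactly what turns the unworkable state "answer for the first $$$n - 1$$$ elements" into the workable state "answer ending at element $$$i$$$".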

I am not a teaching expert, but I believe that if you ask the right questions and let the student think for themselves, navigating (perhaps with a little help) their thought process through those questions, it will help them understand the topic more easily.

Finally, just a thought: the issue you are facing might be related more to the student's lack of interest than to their intuition. I think that when a student is genuinely interested in a topic, intuition often follows naturally. Although I am not confident about that last claim :)