
By MBBN, history, 8 hours ago, In English

Matroid Union / Intersection — what finally made sense to me

If you’ve ever solved a problem where greedy almost works but keeps failing in strange ways — you were probably dealing with a matroid problem without realizing it.

For a long time, I skipped anything that mentioned “matroid”. It always felt like theory that wouldn’t help in a real contest.

But after a few more such problems, I noticed a clear pattern: the fix was never random — it always came down to revising earlier decisions.

The key realization for me was: greedy is not wrong, it’s just incomplete. Sometimes you need to repair your choices.

Once I understood that these problems are basically “greedy + fixing using chains”, everything started making sense.

This is the simplest way I think about matroid union / intersection.


Quick example (why greedy fails)

Suppose:

  • you pick edges greedily (like building a matching)
  • you choose edge A first because it looks good
  • later, you cannot add two better edges because A blocks them

But if you:

  • remove A
  • add those two edges

→ the solution improves

Greedy got stuck because it couldn't "undo" itself.

The correct approach allows: → removing old choices to make better ones

This is exactly the idea of augmenting paths.
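As a toy illustration (my own example, not from any specific problem): on a path graph, greedy matching that happens to take the "attractive" middle edge first gets stuck at size 1, while one remove-and-replace repair reaches size 2.

```python
# Path graph 1-2-3-4; greedy considers the "attractive" middle edge first.
edges = [(2, 3), (1, 2), (3, 4)]

def greedy_matching(edge_list):
    """Add each edge if both endpoints are still free; never undo anything."""
    used, matched = set(), []
    for u, v in edge_list:
        if u not in used and v not in used:
            matched.append((u, v))
            used |= {u, v}
    return matched

stuck = greedy_matching(edges)      # [(2, 3)] -- size 1, no other edge fits
repaired = [(1, 2), (3, 4)]         # remove (2, 3), add the two side edges
print(len(stuck), len(repaired))    # 1 2
```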


Visual intuition (augmenting path)

Think of current chosen set S and an element e we want to add.

We try to "fix" conflicts using a chain:

e  --->  x  --->  y  --->  z

Where:

  • e is not in S
  • arrows mean "can replace"

Interpretation:

e (try to add)
 |
 v
x (must remove because of M1)
 |
 v
y (can be added back via M2)
 |
 v
z (must be removed ...)

Eventually we reach a node where: → we can add without breaking anything

Then we flip everything along the path:

add e
remove x
add y
remove z
...

Final size of S increases by 1

This is exactly like augmenting paths in matching.
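In code, "flipping everything along the path" is just a symmetric difference between S and the path's elements — a minimal sketch (the names here are mine):

```python
def augment(S, path):
    """Flip membership along an alternating path: elements outside S are
    added, elements inside S are removed. An odd-length path (add, remove,
    add, ...) grows S by exactly one."""
    return S ^ set(path)  # symmetric difference does both at once

S = {"x", "z"}
flipped = augment(S, ["e", "x", "y", "z", "w"])  # add e, y, w; remove x, z
print(sorted(flipped))  # ['e', 'w', 'y']
```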


1) Problem structure

We have a set E.

We want S ⊆ E such that:

  • S satisfies constraint M1
  • S satisfies constraint M2
  • |S| is maximized

Each constraint alone → greedy works
Together → greedy may fail


2) What matters about matroids

The only thing I actually use:

  • valid sets behave nicely (subsets stay valid)
  • you can fix bad choices using exchange

Exchange (informal): if one valid set is larger than another, some element of the larger one can be moved into the smaller one while keeping it valid.

This is exactly why greedy works when there is only one constraint.
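For the graphic matroid, that single-constraint greedy is literally Kruskal: sort, then add an element whenever the set stays independent. A minimal sketch (my own toy code, maximizing total weight of a forest):

```python
class DSU:
    """Union-find, used as the independence test for the graphic matroid."""
    def __init__(self, n):
        self.p = list(range(n))
    def find(self, x):
        while self.p[x] != x:
            self.p[x] = self.p[self.p[x]]  # path halving
            x = self.p[x]
        return x
    def union(self, a, b):
        a, b = self.find(a), self.find(b)
        if a == b:
            return False  # would create a cycle -> dependent
        self.p[a] = b
        return True

def max_weight_forest(n, weighted_edges):
    """Greedy on one matroid: sort by weight, add if still independent."""
    dsu, total = DSU(n), 0
    for w, u, v in sorted(weighted_edges, reverse=True):
        if dsu.union(u, v):
            total += w
    return total

edges = [(5, 0, 1), (4, 1, 2), (3, 0, 2), (2, 2, 3)]
print(max_weight_forest(4, edges))  # 5 + 4 + 2 = 11; edge (3, 0, 2) is skipped
```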


3) Why greedy fails here

Naive approach:

  • iterate over elements
  • add an element if the set stays valid

Fails because:

  • adding may break M1
  • fixing M1 may break M2

So one-step fixes are not enough.


4) Correct way to think

Instead of greedy:

“I want to add this element. If it breaks something, can I fix it? If that fix breaks something else, can I continue fixing?”

So we allow a chain of swaps.

That’s the whole idea.


5) Core algorithm (augmenting path idea)

Maintain current set S.

For each element e:

1) If e can be added directly: add it

2) Otherwise: try to fix using swaps

3) This becomes: → search for an augmenting path (BFS/DFS on exchange graph)

4) If found: flip elements along the path

5) Size of S increases by 1

If no augmenting path exists: current S is maximum
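The whole loop fits in one compact oracle-based sketch (unoptimized; `indep1` / `indep2` are independence oracles I'm treating as black boxes, and the names are mine). BFS gives a shortest augmenting path, which is what the standard correctness argument needs:

```python
from collections import deque

def matroid_intersection(n, indep1, indep2):
    """Max common independent set over elements 0..n-1.
    indep1 / indep2: oracle(frozenset) -> bool."""
    S = set()
    while True:
        outside = [x for x in range(n) if x not in S]
        X1 = [x for x in outside if indep1(frozenset(S | {x}))]       # addable in M1
        X2 = set(x for x in outside if indep2(frozenset(S | {x})))   # addable in M2
        adj = {v: [] for v in range(n)}                              # exchange graph
        for y in S:
            for x in outside:
                T = frozenset((S - {y}) | {x})
                if indep1(T): adj[y].append(x)   # y -> x: swap keeps M1 happy
                if indep2(T): adj[x].append(y)   # x -> y: swap keeps M2 happy
        prev, q, end = {x: None for x in X1}, deque(X1), None
        for x in X1:                              # a source that is also a sink
            if x in X2:
                end = x
        while q and end is None:                  # BFS: shortest augmenting path
            v = q.popleft()
            for w in adj[v]:
                if w not in prev:
                    prev[w] = v
                    if w in X2:
                        end = w
                        break
                    q.append(w)
        if end is None:
            return S                              # no augmenting path: S is maximum
        while end is not None:                    # flip along the path
            S ^= {end}
            end = prev[end]

# Usage: bipartite matching, with edges as elements and one partition
# matroid per side of the bipartition.
edges = [(0, 0), (0, 1), (1, 0)]
left = lambda T: len({edges[i][0] for i in T}) == len(T)
right = lambda T: len({edges[i][1] for i in T}) == len(T)
print(len(matroid_intersection(len(edges), left, right)))  # 2
```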


6) Circuits (important detail)

When adding e breaks a matroid:

There exists a minimal dependent set → circuit

Only elements inside this circuit can fix the issue.

So:

  • we don't try removing everything
  • only circuit elements are valid candidates

In matching terms, this behaves like alternating cycles.
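A concrete circuit (my illustration, using the graphic matroid): adding an edge to a forest creates exactly one cycle, namely the new edge plus the tree path between its endpoints — and only edges on that path may be removed to fix things.

```python
from collections import defaultdict

def circuit_in_forest(forest_edges, new_edge):
    """Graphic matroid: the unique circuit of S + e is new_edge plus the
    tree path between its endpoints. Returns indices of those path edges."""
    adj = defaultdict(list)
    for i, (u, v) in enumerate(forest_edges):
        adj[u].append((v, i))
        adj[v].append((u, i))

    def dfs(u, target, parent, path):
        if u == target:
            return True
        for v, i in adj[u]:
            if v != parent:
                path.append(i)
                if dfs(v, target, u, path):
                    return True
                path.pop()
        return False

    a, b = new_edge
    path = []
    return path if dfs(a, b, None, path) else []  # []: no circuit, e adds freely

forest = [(1, 2), (2, 3), (3, 4), (4, 5)]
print(circuit_in_forest(forest, (2, 4)))  # [1, 2] -> edges (2,3) and (3,4)
```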


7) Graph view (precise version)

This is the exact moment where the problem becomes identical to matching.

Think of this as a directed graph:

  • Nodes = elements

Edges represent valid exchanges:

  • e → y if y is in the circuit formed when adding e in M1 (so S − y + e is independent in M1)
  • y → e if S − y + e is independent in M2 (removing y makes room for e)

Now the problem becomes:

→ Can we find an augmenting path starting from e?

This is very similar to augmenting paths in bipartite matching.

In fact, bipartite matching is exactly the intersection of two partition matroids (one per side of the bipartition).
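To make that claim concrete, here's a brute-force check (toy instance of mine) that the largest edge set independent in both partition matroids is exactly a maximum matching:

```python
from itertools import combinations

# Elements are edges of a bipartite graph; left side {0,1,2}, right side {0,1}.
edges = [(0, 0), (0, 1), (1, 0), (1, 1), (2, 1)]

def indep_left(T):   # M1: at most one chosen edge per left vertex
    lefts = [edges[i][0] for i in T]
    return len(lefts) == len(set(lefts))

def indep_right(T):  # M2: at most one chosen edge per right vertex
    rights = [edges[i][1] for i in T]
    return len(rights) == len(set(rights))

best = max(len(T)
           for r in range(len(edges) + 1)
           for T in combinations(range(len(edges)), r)
           if indep_left(T) and indep_right(T))
print(best)  # 2: the right side has only two vertices
```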



8) Exact math (clarified)

For matroid intersection (Edmonds' min-max theorem):

max |S| = min over X ⊆ E of: r1(X) + r2(E − X)

For matroid union, the analogous rank formula is:

max |S| = min over X ⊆ E of: |E − X| + r1(X) + r2(X)

In both cases, correctness of the augmenting path algorithm follows from the exchange property.
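A tiny numeric sanity check of the min-max formula max |S| = min_X (r1(X) + r2(E − X)), on a pair of partition matroids I made up for illustration:

```python
from itertools import chain, combinations

E = [0, 1, 2, 3]
part1 = {0: 'a', 1: 'a', 2: 'b', 3: 'b'}  # M1: at most one element per part
part2 = {0: 'x', 1: 'y', 2: 'x', 3: 'y'}  # M2: at most one element per part

def indep(T, part):
    labels = [part[e] for e in T]
    return len(labels) == len(set(labels))

def rank(X, part):  # partition matroid rank = number of distinct parts hit
    return len({part[e] for e in X})

subsets = list(chain.from_iterable(combinations(E, r) for r in range(len(E) + 1)))
max_common = max(len(T) for T in subsets if indep(T, part1) and indep(T, part2))
min_side = min(rank(X, part1) + rank(set(E) - set(X), part2) for X in subsets)
print(max_common, min_side)  # equal, as the theorem promises
```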


9) Final takeaway

What I actually remember in contest:

  • greedy works because of exchange
  • two constraints → greedy may fail
  • fixing requires chains → augmenting paths

If a problem feels like: “I need to replace earlier choices again and again”

then this is matroid union / intersection.

Once you see this pattern, matroid problems stop feeling like theory — they just become “greedy + repair”.


10) Practice problems

