I recently attended an ICML tutorial on submodular optimization and I enjoyed it so much that I continued to read a review/tutorial paper: “Submodularity in Action: From Machine Learning to Signal Processing Applications”.

As I read the paper, I want to summarize what I’m learning on this blog. Initially, I was very ambitious and wanted to recreate something similar to the ICML tutorial. I soon realized that that would probably take an entire semester, not a week. So instead, I’m breaking things down into much smaller posts 🙂

Let me begin with the definition of submodular functions. The core concept in submodular functions is diminishing returns: a set function $f : 2^V \to \mathbb{R}$ over a ground set $V$ is submodular if

$$f(e \mid A) \ge f(e \mid B) \quad \text{for all } A \subseteq B \subseteq V \text{ and } e \in V \setminus B,$$

where the marginal gain $f(e \mid A)$ is defined as:

$$f(e \mid A) := f(A \cup \{e\}) - f(A).$$

So, if you consider $f$ as some reward function, $f(e \mid A)$ is the additional reward you get if you add the element $e$ into the set $A$.

The best way to visualize this is through the “covering” problem, e.g., covering an area with Internet access. As you add more transmission towers, more area is covered, i.e., has Internet access. Initially, every new tower you build covers an area that newly gains Internet access. However, once you have a few towers, the area covered by a new tower will start to overlap with already-covered areas, and the amount of newly covered area decreases. See the image below:
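This diminishing-returns behavior is easy to check numerically. Below is a minimal sketch using a hypothetical toy coverage function (the tower names and covered cells are made up for illustration):

```python
from itertools import chain

# Hypothetical toy data: each tower covers a set of area cells.
towers = {
    "t1": {1, 2, 3, 4},
    "t2": {3, 4, 5, 6},
    "t3": {5, 6, 7},
}

def coverage(S):
    """f(S) = number of cells covered by the towers in S (submodular)."""
    return len(set(chain.from_iterable(towers[t] for t in S)))

def gain(e, S):
    """Marginal gain f(e | S) = f(S ∪ {e}) - f(S)."""
    return coverage(S | {e}) - coverage(S)

# The gain of adding tower t2 shrinks as the base set grows:
print(gain("t2", set()))          # 4: all of its cells are new
print(gain("t2", {"t1"}))         # 2: only cells 5 and 6 are new
print(gain("t2", {"t1", "t3"}))   # 0: everything is already covered
```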

The concept of submodularity is important for two reasons:

- “Diminishing returns” scenarios occur frequently in the real world, and the concept has gained a lot of attention recently due to its applications in machine learning. For instance, feature selection can be modeled as a submodular optimization problem where you want to find the features that explain the variance of the output the most. Clustering can also be formulated as a submodular optimization problem: choosing subsets that maximize a combinatorial dependence function.
- Submodularity is a discrete-function counterpart of convexity. Thanks to this similarity, some submodular optimization problems can be solved efficiently. Unconstrained submodular minimization can be solved exactly by exploiting the connection to convexity, and for many constrained submodular optimization problems, there are greedy algorithms that provide approximation guarantees. (I will cover this in later posts!)
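As a taste of the greedy approach mentioned above, here is a minimal sketch of the classic greedy algorithm for maximizing a monotone submodular function under a cardinality constraint (the coverage data is a made-up toy example); in this setting the greedy solution is within a factor $1 - 1/e$ of optimal (Nemhauser, Wolsey, and Fisher, 1978):

```python
# Toy coverage instance (made-up data): f(S) = number of cells covered.
cover = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}, "d": {1, 6}}

def f(S):
    covered = set()
    for e in S:
        covered |= cover[e]
    return len(covered)

def greedy_max(f, ground, k):
    """Pick k elements, each time adding the one with the largest marginal
    gain f(S ∪ {e}) - f(S).  For monotone submodular f, the result is
    within a factor (1 - 1/e) of the optimum."""
    S = set()
    for _ in range(k):
        e = max(ground - S, key=lambda x: f(S | {x}) - f(S))
        S.add(e)
    return S

S = greedy_max(f, set(cover), k=2)
print(sorted(S), f(S))  # picks {'a', 'c'}, covering all 6 cells
```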

In this post, I am just going to talk about one thing:

Any set function can be represented as a difference between two submodular functions.

This is a really useful property because it lets you bring in techniques for maximizing a difference of two convex functions (as in DC programming) to maximize an arbitrary set function.

Now let’s prove why this is true!

Theorem 1. For any set function $h$ defined over a ground set $V$, there exist submodular set functions $f$ and $g$ on $2^V$ such that:

$$h(S) = f(S) - g(S) \quad \text{for all } S \subseteq V. \tag{1}$$

**(Proof of Theorem 1)**

Suppose $h$ is any set function on $2^V$ and $f$ is any submodular function on $2^V$ (e.g., $f \equiv 0$). And, let $g$ be such that:

$$g(S) := f(S) - h(S),$$

so that $h = f - g$ by construction. If the submodularity condition

$$g(A \cup \{e\}) - g(A) \ge g(B \cup \{e\}) - g(B) \tag{2}$$

is satisfied for every $A \subseteq B \subseteq V$ and $e \in V \setminus B$, then $g$ is submodular and we are all set!

If not, find a triple $(A, B, e)$ that violates (2), i.e.,

$$g(A \cup \{e\}) - g(A) < g(B \cup \{e\}) - g(B). \tag{3}$$

(Note that necessarily $A \subsetneq B$: for $A = B$ the two sides of (3) would be equal.) We will show that we can add a correction term to $f$ (and hence to $g = f - h$) that removes the triple violating condition (2) while maintaining the submodularity of $f$.

To do that, we will first prove a small lemma.

Lemma 1. Let $f$ be a submodular function on $2^V$. Fix $T \subseteq V$ and an integer $0 \le k \le |T|$, and let $\psi$ be another set function on $2^V$ such that:

$$\psi(S) := \min(|S \cap T|,\ k).$$

Then, $f + \alpha\,\psi$ is also a submodular function on $2^V$ for any $\alpha \ge 0$.

**(Proof of Lemma 1)**

Since a nonnegative combination of submodular functions is submodular, it suffices to show that $\psi(S) = \min(|S \cap T|, k)$ itself is submodular. That is, we have to show that for all $A \subseteq B \subseteq V$ and for all $e \in V \setminus B$,

$$\psi(A \cup \{e\}) - \psi(A) \ge \psi(B \cup \{e\}) - \psi(B). \tag{4}$$

Note that the marginal gain of $\psi$ is always $0$ or $1$: adding $e$ increases $\psi$ by $1$ exactly when $e \in T$ and the count $|S \cap T|$ has not yet reached the cap $k$. Let’s divide this into three cases: i) $e \notin T$, ii) $e \in T$ and $|B \cap T| < k$, iii) $e \in T$ and $|B \cap T| \ge k$.

i) $\psi(A \cup \{e\}) - \psi(A) = 0$ and $\psi(B \cup \{e\}) - \psi(B) = 0$, since adding $e$ does not change $|S \cap T|$. Hence, (4) holds.

ii) Since $A \subseteq B$, we have $|A \cap T| \le |B \cap T| < k$, so adding $e$ increases both counts while they are still below the cap. Hence, $\psi(A \cup \{e\}) - \psi(A) = 1 = \psi(B \cup \{e\}) - \psi(B)$, and (4) holds with equality.

iii) $\psi(B \cup \{e\}) - \psi(B) = 0$, since $\psi(B) = k$ is already at the cap. For $A$, either $|A \cap T| < k$, where $\psi(A \cup \{e\}) - \psi(A) = 1$, or $|A \cap T| \ge k$, where $\psi(A \cup \{e\}) - \psi(A) = 0$. I.e., the marginal gain at $A$ is $0$ or $1$.

Hence, for either case, $\psi(A \cup \{e\}) - \psi(A) \ge 0 = \psi(B \cup \{e\}) - \psi(B)$. $\blacksquare$
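Since the case analysis is finite, we can also sanity-check the lemma by brute force on a small ground set. The sketch below exhaustively verifies submodularity of $\psi(S) = \min(|S \cap T|, k)$ for one (arbitrarily chosen) $T$ and $k$:

```python
from itertools import combinations

# Exhaustive check on a small ground set, for one hypothetical choice
# of T and k (any T ⊆ V and 0 <= k <= |T| works).
V = frozenset(range(4))
T = frozenset({0, 1, 2})
k = 2

subsets = [frozenset(c) for r in range(len(V) + 1) for c in combinations(V, r)]

def psi(S):
    return min(len(S & T), k)

def is_submodular(f):
    """Check f(A ∪ {e}) - f(A) >= f(B ∪ {e}) - f(B) for all A ⊆ B, e ∉ B."""
    return all(
        f(A | {e}) - f(A) >= f(B | {e}) - f(B)
        for B in subsets
        for A in subsets
        if A <= B
        for e in V - B
    )

print(is_submodular(psi))  # True
```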

Now, back to the proof of Theorem 1. Let $(A, B, e)$ be the triple violating (3), and let

$$\alpha := \big(g(B \cup \{e\}) - g(B)\big) - \big(g(A \cup \{e\}) - g(A)\big) > 0.$$

Let $\psi(S) := \min(|S \cap (B \cup \{e\})|,\ |B|)$, i.e., take $T = B \cup \{e\}$ and $k = |B|$ in Lemma 1, and update

$$f \leftarrow f + \alpha\,\psi.$$

Then, $f$ is still a submodular function following Lemma 1. Now let $g \leftarrow g + \alpha\,\psi$; the identity $g = f - h$ still holds, since both functions changed by the same amount. Moreover, since $A \subsetneq B$, we have $|A \cap T| = |A| < |B| = k$, so $\psi(A \cup \{e\}) - \psi(A) = 1$ while $\psi(B \cup \{e\}) - \psi(B) = 0$: the update raises the left-hand side of (2) for this triple by exactly $\alpha$ and removes the violation.

We can repeat this process until there is no triple that violates (2). The process terminates: because $\psi$ is submodular, adding $\alpha\,\psi$ to $g$ never decreases the slack of (2) for any other triple, so no new violations are created, and the (finite) number of violating triples strictly decreases at each step. At the end, both $f$ and $g$ are submodular and $h = f - g$. $\blacksquare$
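The constructive proof above can be run directly on a tiny ground set. The sketch below starts from a random (almost surely non-submodular) set function $h$, repeatedly applies the correction step with $\psi(S) = \min(|S \cap (B \cup \{e\})|, |B|)$, and checks that the result is a valid decomposition (this brute-force version enumerates all subsets, so it is for illustration only):

```python
from itertools import combinations
import random

V = frozenset({0, 1, 2})
subsets = [frozenset(c) for r in range(len(V) + 1) for c in combinations(V, r)]

def violations(g):
    """All (A, B, e) with A ⊆ B, e ∉ B that violate condition (2) for g."""
    return [
        (A, B, e)
        for B in subsets
        for A in subsets
        if A <= B
        for e in V - B
        if g[A | {e}] - g[A] < g[B | {e}] - g[B] - 1e-12
    ]

# A hypothetical arbitrary set function h (almost surely not submodular).
rng = random.Random(0)
h = {S: rng.uniform(-1.0, 1.0) for S in subsets}

f = {S: 0.0 for S in subsets}          # the zero function is submodular
g = {S: f[S] - h[S] for S in subsets}  # invariant: h = f - g

while True:
    bad = violations(g)
    if not bad:
        break
    A, B, e = bad[0]
    T, k = B | {e}, len(B)
    alpha = (g[B | {e}] - g[B]) - (g[A | {e}] - g[A])
    for S in subsets:
        corr = alpha * min(len(S & T), k)  # alpha * psi(S), psi submodular
        f[S] += corr
        g[S] += corr

assert not violations(f) and not violations(g)
assert all(abs(h[S] - (f[S] - g[S])) < 1e-9 for S in subsets)
print("h = f - g with f, g submodular")
```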

That’s it for today, and I hope I can come back with another submodular optimization post or quantum algorithm post soon 🙂