I recently stumbled upon a problem from Stanford Local ACM Programming Contest (SLPC) 2011.

(contest info)

(try the problem yourself here)

So the problem is, in short: given trigonometric expressions such as $$$x(\sin^2 x + \cos^2 x) - x$$$, you have to determine whether each expression is identically zero.

The intended solution, I am assuming, is to

1) convert the input string into a function of $$$x$$$ that evaluates the expression as written,

2) input multiple values of $$$x$$$ into that function, and

3) confirm the identity if the function always returns (approximately) zero, within the bounds of floating-point precision error.
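The three steps above can be sketched like this (a minimal Python illustration; the helper name `is_identically_zero`, the sampling range, and the tolerance are my own assumptions, not the contest's reference solution):

```python
import math
import random

def is_identically_zero(f, trials=30, tol=1e-9):
    """Heuristically decide whether f(x) == 0 for all x by sampling
    random points and checking each result against a tolerance."""
    for _ in range(trials):
        x = random.uniform(-10.0, 10.0)
        try:
            if abs(f(x)) > tol:
                return False
        except (ZeroDivisionError, ValueError):
            continue  # skip points outside the expression's domain
    return True

# the sample identity from the statement: x*(sin^2 x + cos^2 x) - x
expr = lambda x: x * (math.sin(x) ** 2 + math.cos(x) ** 2) - x
```

Random sampling can in principle give a false positive, but for expressions built from trig functions and polynomials, a handful of random points is overwhelmingly likely to expose any non-identity.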

Unaware of this idea, I initially tried to solve this problem by building a recursive reduction algorithm.

Using the F# language, I defined a custom algebraic data type together with a set of rewrite rules, such as $$$\sin^2 x + \cos^2 x = 1$$$, so that the expressions could be reduced according to the actual formulas of basic arithmetic and trigonometry.

**the F# code I wrote for the ADT**

...Which turned out to be very difficult. Every time I found a case that could not be reduced under the current set of rules, I had to add new rules to cover the generalized pattern of that case, and that went on and on, often causing runtime errors.
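To show what the rewrite-rule approach looks like, here is a minimal Python sketch (the original was an F#-style discriminated union; frozen dataclasses mirror it, and the types, names, and the single rule here are purely illustrative):

```python
from dataclasses import dataclass

# A tiny expression ADT: a variable, a constant, a sum, a product,
# or an application of sin/cos.
@dataclass(frozen=True)
class Var: pass
@dataclass(frozen=True)
class Const: value: float
@dataclass(frozen=True)
class Add: left: object; right: object
@dataclass(frozen=True)
class Mul: left: object; right: object
@dataclass(frozen=True)
class Sin: arg: object
@dataclass(frozen=True)
class Cos: arg: object

def reduce_expr(e):
    """Bottom-up reduction. Every pattern not covered by a rule is
    returned unchanged -- which is exactly why the rule set keeps
    growing as new unreduced cases appear."""
    if isinstance(e, Add):
        l, r = reduce_expr(e.left), reduce_expr(e.right)
        # rule: sin(t)*sin(t) + cos(t)*cos(t) -> 1
        if (isinstance(l, Mul) and isinstance(r, Mul)
                and l.left == l.right and r.left == r.right
                and isinstance(l.left, Sin) and isinstance(r.left, Cos)
                and l.left.arg == r.left.arg):
            return Const(1.0)
        if l == Const(0.0):
            return r
        if r == Const(0.0):
            return l
        return Add(l, r)
    if isinstance(e, Mul):
        l, r = reduce_expr(e.left), reduce_expr(e.right)
        if Const(1.0) in (l, r):  # rule: 1 * t -> t
            return r if l == Const(1.0) else l
        return Mul(l, r)
    return e
```

Even this toy version hints at the combinatorial problem: `cos(t)*cos(t) + sin(t)*sin(t)` (the summands swapped) already slips past the rule as written.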

After I realized that I didn't actually have to reduce the expressions and could just substitute some float values for $$$x$$$, the rest was very easy compared to the tedious process I had undergone.

Being angry at myself for consuming energy on things that did not come to fruition, I came up with some questions, including:

1) If this problem were intended to be solved the way I first attempted (by reducing all the expressions, not by substituting arbitrary values for $$$x$$$), how would the input constraints have been different? Would the problem even be suitable for a programming contest?

2) The problem guarantees that functions are nested no deeper than $$$\sin(\sin x)$$$. How much more difficult would reducing these expressions be without that constraint?

3) I assume somebody smart has already built an algorithm, or a piece of software, that can do the algebraic reduction I attempted. I think WolframAlpha can do it given enough computation time, but are there any "lightweight" tools out there that can perform tasks like this?

I am not a math/CS/engineering major, so I have little of the knowledge required to build such an algorithm myself.

After some not-too-deep research, I found that there is something called a "computer algebra system" (CAS), which seems capable of performing tasks like mine, but to a much greater extent.

But I don't need all of the functionality such software provides; I am just wondering what knowledge I should be familiar with in order to write a program that handles simple algebraic expressions.

Several months ago I attempted to write a program that does this for polynomials and rational expressions. It has remained unfinished ever since, but it actually works on some problems like this one.
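For polynomials, unlike general trig expressions, a canonical form makes the identity check exact rather than numeric: expand everything into a map from exponent to coefficient, and two expressions are equal iff their maps match. A minimal Python sketch of that idea (my own illustration, not the unfinished program mentioned above):

```python
from collections import defaultdict

def poly_add(p, q):
    """Add two polynomials stored as {exponent: coefficient} dicts,
    dropping zero coefficients so the representation stays canonical."""
    r = defaultdict(int, p)
    for e, c in q.items():
        r[e] += c
    return {e: c for e, c in r.items() if c != 0}

def poly_mul(p, q):
    """Multiply two polynomials in the same representation."""
    r = defaultdict(int)
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[e1 + e2] += c1 * c2
    return {e: c for e, c in r.items() if c != 0}

# (x + 1)*(x - 1) - (x^2 - 1) should reduce to the zero polynomial {}
x_plus_1 = {1: 1, 0: 1}
x_minus_1 = {1: 1, 0: -1}
lhs = poly_mul(x_plus_1, x_minus_1)   # x^2 - 1
diff = poly_add(lhs, {2: -1, 0: 1})   # subtract x^2 - 1
```

An expression is identically zero exactly when its canonical form is the empty dict, with no floating-point tolerance involved.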

So... to sum it up, my question to the Codeforces community is,

1) How would you write a program (in any language) that can reduce algebraic expressions, under the constraints of the problem explained above?

2) Which subfields of math/CS, precisely, should I study in order to write such an algorithm myself?

Much thanks.

Symbolic computation, or computer algebra, is the field you're looking for.

Problems like this are in general very difficult. If enough operations are allowed in the input (for example $$$\sin$$$, $$$\pi$$$, rational constants, and absolute value), then deciding whether an expression is identically zero is actually impossible: it has been proven (Richardson's theorem) that no algorithm can solve the problem. I would have to search more to find out whether it is still impossible with the constraints you've provided.

Oh wow thanks!