Ahh, trig identities... a rite of passage for any precalculus student.
This is a huge stumbling block for many students, because up until this point, most have been perfectly successful (or at least have gotten by) in their classes by learning canned formulas and procedures and then doing a bunch of exercises that just change a \(2\) to a \(3\) here and a plus to a minus there. Now, all of a sudden, there's no set way of going about things, no "step 1 do this, step 2 do that". Students have to rely on their intuition and "play" with an identity until they prove that it's correct.
And to make matters worse, many textbooks --- and, as a result, many teachers --- make this subject arbitrarily and artificially harder for the students.
They insist that students are not allowed to work on both sides of the equation, but instead must specifically start at one end and work their way to the other. I myself once subscribed to this "rule", because it's how I'd always been taught, and I always fed students the old line of "you can't assume the thing you're trying to prove because that's a logical fallacy".
Then one of my Honors Precalculus students called me on it.
He asked me to come up with an example of a trig non-identity where adding the same thing to both sides would lead to a false proof that the identity was correct. After some thought, I realized not only that I couldn't think of one, but that, mathematically, there's no reason one should exist.
To begin with, one valid way to prove an identity is to work with each side of the equation separately and show that they are both equal to the same thing. For example, suppose you want to verify the following identity:
\[\dfrac{\cot^2{\theta}}{1+\csc{\theta}}=\dfrac{1-\sin{\theta}}{\sin{\theta}}\]
Trying to work from one side to the other would be a nightmare, but it's much simpler to show that each side is equal to \(\csc{\theta}-1\). This in fact demonstrates one of the oldest axioms in mathematics, as written by Euclid: "things which are equal to the same thing are equal to each other."
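Here's how that plays out. On the left, the Pythagorean identity \(\cot^2\theta=\csc^2\theta-1\) lets us factor and cancel; on the right, we simply split the fraction:
\[
\frac{\cot^2\theta}{1+\csc\theta}
= \frac{(\csc\theta-1)(\csc\theta+1)}{1+\csc\theta}
= \csc\theta-1
\qquad\text{and}\qquad
\frac{1-\sin\theta}{\sin\theta}
= \frac{1}{\sin\theta}-1
= \csc\theta-1.
\]
Two independent chains of equalities, each ending at \(\csc\theta-1\), and neither one ever assumed what we were trying to prove.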
But what about doing the same thing to both sides of an equation?
There are two important points to realize about what's going on behind the scenes here.
The first is that if your "thing you do to both sides" is a reversible step --- that is, if you're applying a one-to-one function to both sides of an equation --- then it's perfectly valid to use that as part of your proof because it establishes an if-and-only-if relationship. If that function is not one-to-one, all bets are off. You can't prove that \(2=-2\) by squaring both sides to get \(4=4\), because the function \(x\mapsto x^2\) maps multiple inputs to the same output.
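To see this in the bluntest possible terms, here's a quick numeric demonstration (a toy example of my own, not from any textbook):

```python
# Reversible vs. non-reversible steps, in miniature.
# Adding a constant is one-to-one, so it preserves truth in both directions;
# squaring is not one-to-one, so it can turn a false equation into a true one.
a, b = 2, -2
print(a == b)          # False: 2 != -2
print(a + 5 == b + 5)  # False: adding 5 kept the two sides distinguishable
print(a**2 == b**2)    # True!  squaring collapsed 2 and -2 to the same output
```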
It baffles me that most Precalculus textbooks mention one-to-one functions in the first chapter or two, yet completely fail to connect the concept to solving equations.* A notable exception is UCSMP's Precalculus and Discrete Mathematics book, which establishes the following on p. 169:
Reversible Steps Theorem
Let \(f\), \(g\), and \(h\) be functions. Then, for all \(x\) in the intersection of the domains of functions \(f\), \(g\), and \(h\),
1. \(f(x)=g(x) \Leftrightarrow f(x)+h(x)=g(x)+h(x)\)
2. \(f(x)=g(x) \Leftrightarrow f(x)\cdot h(x)=g(x)\cdot h(x)\) [We'll actually come back to this one in a bit -- there's a slight issue with it.]
3. If \(h\) is 1-1, then for all \(x\) in the domains of \(f\) and \(g\) for which \(f(x)\) and \(g(x)\) are in the domain of \(h\), \[f(x)=g(x) \Leftrightarrow h(f(x))=h(g(x)).\]
Later, on p. 318, the book says:
"...there is no new or special logic for proving identities. Identities are equations and all the logic that was discussed with equation-solving applies to them."
Yes, that whole "math isn't just a bunch of arbitrary rules" thing applies here too.
The second important point, which you may have noticed while looking at the statement of the Reversible Steps Theorem, is that the implied domain of an identity matters a great deal. When you're proving a trig identity, you are trying to establish that it is true for all inputs that are in the domain of both sides. Most textbooks at least pay lip service to this fact, even though they don't follow it to its logical conclusion.
To illustrate why domain is so important, consider this example:
\[\dfrac{\cos{x}}{1-\sin{x}} = \dfrac{1+\sin{x}}{\cos{x}}\]
To verify this identity, I'm going to do something that may give you a visceral reaction: I'm going to "cross-multiply". Or, more properly, I'm going to multiply both sides by the expression \((1 - \sin x)\cos x\). I claim that this is a perfectly valid step, and what's more, it makes the rest of the proof downright easy by reducing it to everyone's favorite Pythagorean identity:
\[
\begin{aligned}
(\cos{x})(\cos{x}) &= (1+\sin{x})(1-\sin{x})\\
\cos^2{x} &= 1-\sin^2{x}\\
\sin^2{x} + \cos^2{x} &= 1 \quad\blacksquare
\end{aligned}
\]
"But wait," you ask, "what if \(x=\pi/2\)? Then you're multiplying both sides by zero, and that's certainly not reversible!"
True. But if \(x=\pi/2\), then the denominators on both sides of the equation are zero, so that value isn't even in the domain of the identity in the first place. For any value of \(x\) that doesn't zero out either denominator, though, \((1-\sin x)\cos x\) is nonzero, and multiplying both sides of an equation by a nonzero quantity is a reversible operation and therefore completely valid.
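As a sanity check (just a quick numerical spot-check of my own, not part of the proof), we can sample both sides at points where neither denominator vanishes:

```python
import math

# Spot-check cos(x)/(1 - sin(x)) == (1 + sin(x))/cos(x) at points that
# avoid x = pi/2 + k*pi, where a denominator would be zero.
for x in [0.1, 1.0, 2.0, 3.0, 4.0, 5.0]:
    lhs = math.cos(x) / (1 - math.sin(x))
    rhs = (1 + math.sin(x)) / math.cos(x)
    assert math.isclose(lhs, rhs), f"mismatch at x = {x}"
print("both sides agree at every sampled point")
```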
Now, this isn't to say that multiplying both sides of an equation by a function can't lead to problems --- for example, if \(h\) is the zero function, then \(f(x)\cdot h(x)=g(x)\cdot h(x)\) holds for every \(x\), no matter what \(f\) and \(g\) are. Subtler cases can cause trouble too: suppose \(f\) and \(g\) are equal everywhere except at a single point \(a\), where, say, \(f(a)=1\) and \(g(a)=2\). If it just so happens that \(h(a)=0\), then \(f\cdot h\) and \(g\cdot h\) will be equal as functions, even though \(f\) and \(g\) are not themselves equal.
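To make that concrete, here's a quick script (with hypothetical functions of my own invention, not from any textbook) where \(h\) vanishes exactly where \(f\) and \(g\) disagree:

```python
# f and g agree everywhere except at x = 0, but h(0) = 0 hides the difference:
# f*h and g*h end up equal as functions even though f and g are not.
def f(x): return 1 if x == 0 else x
def g(x): return 2 if x == 0 else x
def h(x): return x  # h(0) = 0, right where f and g disagree

for x in [-2, -1, 0, 1, 2]:
    print(x, f(x) == g(x), f(x) * h(x) == g(x) * h(x))
# At x = 0 this prints "0 False True": multiplying by h erased the discrepancy.
```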
The real issue here can be explained via a quick foray into higher mathematics. Functions form what's called a ring --- basically meaning you can add, subtract, and multiply them, and these operations have all the nice properties we'd expect. But preserving that if-and-only-if relationship when multiplying both sides of an equation by a function requires a special kind of ring called an integral domain: a ring in which it's impossible to multiply two nonzero functions together and get the zero function.
Unfortunately, functions in general don't form an integral domain --- not even continuous functions, or differentiable functions, or even infinitely differentiable functions do! But if we move up to the complex numbers (where everything works better!), the set of analytic functions --- functions that can be written as power series (infinite polynomials) on an open domain --- is an integral domain. And most of the functions that precalculus students encounter turn out to be analytic**: polynomial, rational, exponential, logarithmic, trigonometric, and even inverse trigonometric. This means that when proving trigonometric identities, multiplying both sides by the same (nonzero) function is a "safe" operation.
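In case the claim about infinitely differentiable functions sounds surprising, here's the standard counterexample, sketched briefly:
\[
f(x)=\begin{cases} e^{-1/x} & \text{if } x>0 \\ 0 & \text{if } x\le 0 \end{cases}
\qquad\text{and}\qquad g(x)=f(-x).
\]
Both \(f\) and \(g\) are infinitely differentiable and neither is the zero function, yet \(f(x)\,g(x)=0\) for every \(x\), since wherever one is nonzero the other vanishes. Two nonzero functions multiplying out to zero is exactly what an integral domain forbids.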
So in sum, when proving trigonometric identities, as long as you're careful to use only reversible steps (what a great time to spiral back to one-to-one functions, by the way!), you are welcome to apply all the same algebraic operations that you would when solving equations, and the chain of equalities you establish will prove the identity. Even "cross-multiplying" is fair game, because any input that would make a denominator zero falls outside the identity's domain anyway.*** Since trigonometric functions are generally "safe" (analytic), we're guaranteed never to run into any issues.
Now, none of this is to say that there isn't intrinsic merit to learning how to prove an identity by working from one side to the other. Algebraic "tricks" --- like multiplying by an expression over itself (\(1\) in disguise!) to conveniently simplify certain expressions --- are important tools for students to have under their belts, especially when they encounter limits and integrals next year in calculus.
What we need to do, then, is encourage our students to come up with multiple solution methods, and perhaps present working from one side to the other as an added challenge to build their mathematical muscles. And if students are going to work on both sides of an equation at once, then we need to hold them to high standards and make them explicitly state in their proofs that all the steps they have taken are reversible! If they're unsure whether a step is valid, have them investigate it until they're convinced one way or the other.
If we're artificially limiting our students by claiming that only one solution method is correct, we're sending the wrong message about what mathematics really is. Instead, celebrating and cultivating our students' creativity is the best way to prepare them for problem-solving in the real world.
--
* Rather, I would say it baffles me, but actually I'm quite used to seeing textbooks treat mathematical topics as disparate and unconnected, like how a number of Precalculus books teach vectors in one chapter and matrices in the next, yet never once mention how they are so beautifully tied together via transformations.
** Except perhaps at a few points. The more correct term for rational functions and certain trigonometric functions is actually meromorphic, which describes functions that are analytic everywhere except a discrete set of points, called the poles of the function, where the function blows up to infinity because of division by zero.
*** If you extend the domains of the trig functions to allow for division by zero, you do need to be more careful. Not because there's anything intrinsically wrong with dividing by zero, but because \(0\cdot\infty\) is an indeterminate expression and causes problems that algebra simply can't handle.