If you ever take an introductory physics course, there will almost certainly come a time when you find yourself analyzing an Atwood machine using Newton's Laws. The analysis yields simultaneous equations for the tension in the rope, *T*, and the acceleration of the system, *a*:

*T* − *mg* = *ma*

*Mg* − *T* = *Ma*.

If all you want is the acceleration of the system, then a quick way to eliminate the tension variable would be to add the two equations. That works nicely in the present case, because the coefficients of *T* in the respective equations are equal and opposite.
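Spelled out, adding the left-hand sides and the right-hand sides eliminates *T* in one stroke:

```latex
(T - mg) + (Mg - T) = ma + Ma
\quad\Longrightarrow\quad
(M - m)g = (M + m)a
\quad\Longrightarrow\quad
a = \frac{M - m}{M + m}\,g .
```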

The tactic *works* here—but why is it allowed in the first place? Have you ever wondered *why* you are allowed to add equations? After all, equations aren't numbers; they are more like sentences. Why, then, can you add them? And if you can *add* equations, then come to think of it, could you also *multiply* two equations? Would that be allowed? Would it ever be helpful?

In case you have ever wondered about this, here are some observations about what is going on.

**Theorem** ("adding equations"). If A = B and C = D, then we must also have A + C = B + D.

**Proof**. Clearly, A + C = A + C. And since A = B and C = D, on the right-hand side of this equation we may replace A with B and C with D, which yields A + C = B + D.

This shows that "you can add simultaneous equations" and the result will be an equation that also holds, provided the original two equations do. The sum of two things viewed one way must be equal to the sum of those things when each is viewed in an equivalent way.
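The substitution argument is so mechanical that a proof assistant will accept it almost verbatim. Here is a minimal Lean 4 sketch (the choice of the integers is mine, for illustration; the argument works over any type with addition):

```lean
-- If A = B and C = D, then A + C = B + D:
-- rewriting with both hypotheses reduces the goal to B + D = B + D,
-- which holds by reflexivity.
theorem add_equations {A B C D : Int} (h₁ : A = B) (h₂ : C = D) :
    A + C = B + D := by
  rw [h₁, h₂]
```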

In the theorem above, the quantities A, B, C, and D don't have to be two-variable expressions. The quantities could be matrices, squares of partial derivatives, indefinite integrals, really any sorts of objects that can be added together.

In particular, it is possible to add one-variable equations, although we don't find ourselves doing that very often. For example, suppose that a number *x* is known to satisfy both of the following equations:

2*x* + 5 = 8 + 5*x*

−2*x* − 6 = −8 − 4*x*.

Of course, either of these equations would be pretty easy to solve in itself. But an even faster way to find *x* is to add the two equations. The answer emerges immediately, as you can check for yourself.
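To see the mechanics concretely, here is a small Python sketch that adds the two equations above coefficient by coefficient and solves the result. (The tuple encoding of an equation is my own illustrative choice, not anything from the text.)

```python
# A linear equation a*x + b = c*x + d is stored as its coefficients (a, b, c, d).

def add_equations(eq1, eq2):
    # Adding two equations is just the "A + C = B + D" step from the theorem:
    # each side of the sum is the sum of the corresponding sides.
    return tuple(u + v for u, v in zip(eq1, eq2))

def solve(eq):
    """Solve a*x + b = c*x + d for x (assumes a != c)."""
    a, b, c, d = eq
    return (d - b) / (a - c)

eq1 = (2, 5, 5, 8)      # 2x + 5 = 5x + 8
eq2 = (-2, -6, -4, -8)  # -2x - 6 = -4x - 8

combined = add_equations(eq1, eq2)  # (0, -1, 1, 0), i.e. -1 = x
print(solve(combined))  # -1.0
```

The combined equation has no work left in it: the *x* terms on the left cancel, and the constants on the right cancel, leaving −1 = *x*.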

Here is another one-variable example: Find all real numbers *x* satisfying both of the following equations:

In this case, solving either of the equations directly would be a little daunting. But you might be able to get somewhere if you add the two equations and look carefully at the resulting equation.

**Multiplying equations**
You can also multiply simultaneous equations. Here's why: if A = B and C = D, then from the fact that AC = AC we must also have AC = BD. The product of two things viewed one way must be equal to the product of those things when each is viewed in an equivalent way.

Multiplying equations doesn't come up nearly as often as adding. But here is an example in which the technique proves useful. Find all pairs of real numbers (*x*, *y*) satisfying both of the following equations:

Try it by multiplying the two equations. You should find the value of one of the variables immediately.

Here is another example: Show, by multiplying the equations, that any solution (*x*, *y*) of the system

*x* + 1 = 5 − *y*

1 + *x* = 5 + *y*

must be located a distance of 5 units away from the point (−1, 0).
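Multiplying left-hand sides and right-hand sides, per the rule above:

```latex
(x+1)(1+x) = (5-y)(5+y)
\quad\Longrightarrow\quad
(x+1)^2 = 25 - y^2
\quad\Longrightarrow\quad
(x+1)^2 + y^2 = 25 .
```

The last equation is the circle of radius 5 centered at (−1, 0), so every solution of the system lies 5 units from that point.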

You don't have to graph anything to solve this problem, but here is what a graph of the situation looks like. Curves are shown for each of the two given linear equations, as well as the nonlinear equation that results from multiplying them together:

**Another way to look at it**

Let the symbols p, q, s, and t stand for any sort of objects—they could be indefinite integrals, Medicare billing codes, filenames, or letters in an encrypted message. And suppose F is just about any function of these quantities taking two arguments. Then, after all, it's pretty easy to see that

p = q and s = t together imply F(p, s) = F(q, t).

In the algebraic problems we considered above, F was either the sum function F(j, k) = j + k or the product function F(j, k) = j × k. But as this formulation shows, the reasoning we were using was much more general than any particular notions of adding, multiplying, or even algebra.
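The general claim is easy to test mechanically. A small Python sketch (the particular choices of F below are mine, for illustration—any two-argument function works):

```python
def check(F, p, q, s, t):
    """Given p = q and s = t, verify that F(p, s) = F(q, t)."""
    assert p == q and s == t
    return F(p, s) == F(q, t)

print(check(lambda j, k: j + k, 3, 3, 4, 4))               # addition: True
print(check(lambda j, k: j * k, 3, 3, 4, 4))               # multiplication: True
print(check(lambda j, k: f"{j}/{k}", "a", "a", "b", "b"))  # filenames: True
```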

**Concluding note**
Some colleagues of mine have been kind enough to discuss these topics with me over the past few days, and our discussion has included some useful perspectives and takeaways, so I thought I'd touch on a couple of those here at the end.

First, the language of "adding equations," or subtracting them, or taking linear combinations of them, etc., is universal in classrooms, and it serves a function in the discourse, so I'm not recommending that teachers or students avoid this language. Rather, if there is a point for practitioners here, it is that when students are taught to add equations, they should also be taught why the technique works.

Second, while adding equations and multiplying equations can both be justified using the same logic (and I showed examples of both in order to throw that logic into relief), I should not give the impression that they are equally important in the curriculum. Adding equations is much more important than multiplying them. Linear systems are ubiquitous; they are more central to the curriculum than nonlinear systems, and moreover adding and subtracting equations is a special case of taking linear combinations of equations, an early phase in the study of linear algebra. Here, for example, is a page from Gilbert Strang's *Introduction to Linear Algebra*:

Finally, as this image shows, there is a graphical side of the story that plays out in tandem with the syllogistic reasoning that I have related. It is interesting, and useful, for students to follow how the lines described by the equations transform as the equations are combined, always maintaining the same intersection point.