Thursday, November 22, 2007

Remedial Math: For Dummies?

These are some early thoughts on the growing debate about remedial math instruction in higher education. I'm still coming up to speed on the data and the issues in this area; in the meantime I welcome all comments, corrections, and criticisms of the ideas that follow (which are really more like instincts at this point...).


Background: A 2006 article by Bettinger and Long, "Institutional Responses to Reduce Inequalities in College Outcomes: Remedial and Developmental Courses in Higher Education," in Dickert-Conlin and Rubenstein (Eds.), Economic Inequality and Higher Education: Access, Persistence and Success, New York: Russell Sage Foundation Press; Achieve's 2006 report Ready or Not: Creating a High School Diploma that Counts; Sternberg's must-read 1997 book Beyond the Classroom; and Phipps's 1998 Institute for Higher Education Policy report, College Remediation: What It Is, What It Costs, and What's at Stake.


The article by Bettinger and Long gives a good overview of the research and debate about remediation in higher education. Some notable quotes (see the article for the references):

Greene and Foster (2003) found that only 32 percent of students leave high school at least minimally prepared for college. The proportion is much smaller for Black and Hispanic students (20 and 16 percent, respectively). Greene and Foster (2003) define being minimally "college ready" as: (i) graduating from high school, (ii) having taken four years of English, three years of math, and two years of science, social science, and foreign language, and (iii) demonstrating basic literacy skills by scoring at least 265 on the reading NAEP.



By 1995, 81 percent of public four-year colleges and 100 percent of two-year colleges offered remediation (NCES, 1996).


It appears that states and colleges know little about whether their remediation programs are successful along any dimension.


Moreover, a study of 116 two-year and four-year colleges found only a small percentage performed any systematic evaluation of their programs (Weissman, Rulakowski, and Jumisko, 1997).


On one hand, the courses may help under-prepared students gain the skills necessary to excel in college. On the other hand, by increasing the number of requirements, extending the time to degree, and effectively restricting the majors available to students (due to the inability to enroll in advanced coursework until remedial courses are completed), remediation may negatively impact college outcomes such as persistence and long-term labor market returns.


Achieve's Ready or Not report has this to say:

Most high school graduates need remedial help in college. More than 70 percent of graduates quickly take the next step into two- and four-year colleges, but at least 28 percent of those students immediately take remedial English or math courses. Transcripts show that during their college careers, 53 percent of students take at least one remedial English or math class. The California State University system found that 59 percent of its entering students were placed into remedial English or math in 2002. The need for remedial help is undoubtedly surprising to many graduates and their parents — costly, too, as they pay for coursework that yields no college credit. (p.3)


This issue has been percolating for a while in both K-12 and higher ed. Sternberg in 1997 listed ten policy priorities, the eighth of which was to end remedial courses at four-year institutions. Sternberg writes (p. 192),

The current practice of providing remedial education in such basic academic skills as reading, writing, and mathematics to entering college students is disastrous. It has trivialized the significance of the high school diploma, diminished the meaning of college admission, eroded the value of a college degree, and drained precious resources away from bona fide college-level instruction.

I realize that despite whatever [reforms] are put into place at the elementary and secondary school levels, there will be students who pass through the system without some of the requisite academic skills. But these students should not be entering four-year colleges and universities. Rather, students who have managed to complete high school but who lack the necessary college entry skills should be required to pursue remedial coursework at local community and two-year colleges before they can apply for admission to more advanced institutions of higher education.


Weighing against Sternberg's proposal are the findings in Phipps's 1998 IHEP report. With respect to college remediation, Phipps argues that it is a core function of higher education; that it is not expanding in size or scope (Bettinger and Long disagree); that it is actually quite cost-effective; and that a number of undesirable consequences would ensue if it were to disappear.

The back-and-forth gets confusing, in part because some authors (like Sternberg) make a firm distinction between four-year vs. two-year institutions, while others (like Phipps) do not.

What seems to be missing from these debates, however, is any serious discussion of what is actually being taught in remedial classes - especially in math. I would like to see someone take a careful, critical look at the typical content covered in college remedial math courses. I suspect that such a study would show that most of the material in these courses is useless for most of the college students forced to take them.

Achieve's Ready or Not report contains the following lofty-sounding quote from a Purdue math professor:

The ability to understand and apply the mathematical content typically taught in an Algebra II course is vital to a student’s success in science and social sciences courses required by our university.


After reading this, just for the heck of it I visited Purdue University's website. I wondered what is actually required in order to graduate from Purdue with a degree in, say, nursing.

Nursing is not exactly science or social science, but it is science-based, and at Purdue it turns out to be a very rigorous program. Here is the four-year plan of study.

BIOL 203, Human Anatomy and Physiology
CHM 111, General Chemistry
ENGL 106, English Composition
NUR 102, Dynamics of Nursing
BIOL 204, Human Anatomy and Physiology
CHM 112, General Chemistry
PSY 120, Elementary Psychology
SOC 100, Introductory Sociology
Humanities elective
NUR 104, Foundations for Nursing Practice
NUR 105, Foundations for Nursing Practice-Clinic
NUR 206, Health Assessment
NUR 207, Health Assessment-Clinic
PHPR 202, Introductory Pharmacology
Guided philosophy elective
BIOL 221, Introduction to Microbiology
F&N 303, Essentials of Nutrition
NUR 208, Lifespan Human Development
NUR 214, Introduction to Pathophysiology
PSY 350, Abnormal Psychology
NUR 302, Adult Nursing I
NUR 303, Adult Nursing I-Clinic
NUR 312, Nursing of Childbearing Families
NUR 313, Nursing of Childbearing Families-Clinic
Guided statistics elective
NUR 304, Psychosocial Nursing
NUR 305, Psychosocial Nursing-Clinic
NUR 306, Adult Nursing II
NUR 307, Adult Nursing II-Clinic
NUR 310, Public Health Science
Guided sociology elective
NUR 402, Public Health Nursing
NUR 403, Public Health Nursing-Clinic
NUR 412, Pediatric Nursing
NUR 413, Pediatric Nursing-Clinic
Free elective
NUR 404, Leadership in Nursing
NUR 408, Research in Nursing
NUR 409, Senior Capstone Clinic
NUR 410, Issues in Professional Nursing
Humanities elective
Free elective

If I managed to get the HTML right, then you'll see that I color-coded the courses. Courses with a low math demand are blue. Courses with a medium math demand are green. Courses with a high math demand are black. (The plural in this case turns out to be superfluous.) This was just a very rough take; I didn't look at any syllabi. But I claim there is a strong message here - and it's a message that may strike many as counterintuitive: Very little math is required to succeed in nursing at Purdue. What little math there is could probably be handled perfectly well through tutoring arrangements for those students lacking even the basics.

As far as that one statistics elective is concerned, it turns out that the stated prerequisite for the eligible statistics courses is a two-semester sequence called Math 153-154 - or else the equivalent in high school preparation. Now, like many universities, Purdue is wary of the word "remedial." Having a lot of "remedial" students tends to compromise a university's image. In Purdue's case, the only courses deemed remedial are courses at the satellite campuses; the preferred word at the main (West Lafayette) campus is "preparatory." This kind of parsing notwithstanding, I'm going to classify Math 153 and Math 154 as remedial, because (1) they are prerequisites for the plan of study, but not listed in the plan of study; and (2) they cover material traditionally taught in high school Algebra II/Trig courses.

Here is the first midterm exam for Math 154.

1. Find the angle that is complementary to 26 degrees 9' 40''.
2. Express theta = 4.6 in degrees, minutes, and seconds, to the nearest second.
3. Find the reference angle for theta =122 radians, to the nearest hundredth of a radian.
4. Jupiter is the fifth planet from the sun and by far the largest. Jupiter is twice as massive as all the other planets combined (the mass of Jupiter is 318 times that of earth). Its orbit is 483,633,704 miles and its diameter is 88,846 miles. A nautical mile on a planet is the distance on the surface subtended by a central angle of 1' from its center. Approximate the number of land miles in a nautical mile on Jupiter to the nearest tenth of a mile.
5. Find the exact values of [the short leg and the hypotenuse] of the given right triangle [long leg 8 units, angle 30 degrees between long leg and hypotenuse]
6. Stonehenge in Salisbury Plains, England, was constructed using solid stone blocks weighing over 97000 pounds each. Lifting a single stone required 550 people, who pulled the stone up a ramp inclined at an angle of 8 degrees. To the nearest tenth of a foot, approximate the distance that a stone was moved along the ramp in order to raise it to a height of 34 feet above the level ground.
7. Approximate sec(78 degrees 18') to four decimal places.
8. csc(x)/cot^2(x) is equivalent to which of the following? csc(x)cot(x), sec(x)cot(x), csc(x)tan(x), cos(x)sin(x), sec(x)tan(x)
9. Find the exact value of tan(theta) if theta is in standard position and the terminal side of theta is in quadrant II and is parallel to the line 3x+5y = 9.
10. If sec theta = 8 and cot theta < 0, find the exact value of sin theta.
11. Let P(t) = (-7/25, -24/25) be the point on the unit circle that corresponds to t. Find the exact value of P(-t+pi).
12. Complete the statement: As x -> pi/2+, tan(x) -> ?.
13. Approximate, to the nearest 0.01 radians, all angles theta in the interval [0, 2pi) that satisfy the equation cot(theta) = -2.3412.
14. Which of the following statements are true about the graph of y = -2 + cos(x)? The graph intercepts the y-axis at -2; The graph intercepts the x-axis at the origin; (pi/2, -2) is a point on the graph; The graph is always below the x-axis.
15. Find the equation of the graph shown below, in the form y = a sin(bx+c), with a>0, b>0, and least positive real c.

I think there is a serious content-validity problem here. How many students at Purdue actually need to know any of this? In particular, is Math 154 any sort of reasonable preparation for future nurses - or future lawyers, psychologists, social scientists, doctors, or even cell biologists? To me, Math 154 looks like welfare for college math instructors. I think we should stop taking university faculty at their word when they say that large fractions of incoming students "need" remediation. And we should start questioning whether these faculty members' syllabi accurately define the skills necessary for college.

Achieve (and many others) seem to want to use the college remediation crisis as a hammer to swing at our high schools. I would certainly agree that high school math education could benefit from a couple of solid whacks; but I am not inclined to accept the universities' terms in this debate without question. My intuition is that the hammer has to swing both ways.

Personally, I suspect Sternberg is right (about math at least) when he says that four-year institutions should abolish remedial math courses. As Sternberg notes, such a policy would result in many students attending two-year colleges in order to build up their math skills. However, given the way college math requirements tend to get exaggerated, I would want to take extra care to ensure that the two-year/four-year bar is set in the appropriate place, based on realistic studies of what it actually takes to get a C or better in university gateway courses.

Instead of using SATs, ACTs, or other tests to place students in non-credit-bearing courses, universities could redirect their remediation expenditures towards studying how these test scores relate (or don't relate) to the grades students ultimately receive in the university's gateway science and math courses. Universities could publish these statistics, as well as statistics on how these scores relate to attrition rates in various majors. Knowing the proportion of students with ACT scores of X earning grades of Y in courses Z, students can make informed decisions about whether to attempt a given major, whether to get a tutor for a required course, and how to design a schedule that allows enough time to devote to the class in question. Advising could help ensure that students interpret the data properly.

My intuition is that many students who "lack algebra II/Trig" could just as well skip the two-year college altogether and go directly to State U. Let them sign up for those introductory engineering courses if they want. Let them get tutoring, use existing academic support mechanisms, learn the crucial content in real time, and work as hard as they can to catch up - and let them earn college credit when they pass. If, instead, they fail or withdraw, then they'll find that there are lots of interesting things to do in a university besides engineering and physics. And they won't have spent a year treading water in remedial classes before coming to that decision point.

Monday, October 8, 2007

75th Anniversary Lecture

Bennington College threw itself a great party last weekend to celebrate its 75th Anniversary. Over 400 alumni came back for a three-day weekend of lectures, performances, exhibitions, classes, dinners, dances, and parties. President Elizabeth Coleman delivered an exhilarating keynote address, about which there is much, much more to say. Right now I'm just going to quickly upload the presentation I made on Sunday morning.

It was basically a lecture, although I also gave everyone in the audience a packet with illustrations and other things to read in parallel.


The lecture hall was filled to capacity - 75 enthusiastic people, from alumni to parents to students. I was touched by how willing everybody was to follow me through some peculiar twists and turns. There were some terrific questions at the end, touching on all of the threads in the talk, from the literary to the mathematical to the pedagogical.

A PDF of the presentation is here. I'm not sure how well it works when read on paper. It was composed to be read aloud, and I think that my voice, my presence, and my audience all combined to create the rewarding event that we all experienced. My family being there didn't hurt either. :)

Monday, September 17, 2007

Rochester Turning

Just a heads-up to those who are interested...

Today I heard from a friend who is writing for a local political blog called Rochester Turning. Admittedly, I'm not a technophile by any means, but this kind of site, a group blog, was something new to me. Because the blog has so many writers, it's updated constantly - much more often than a single-person blog. The blog is about local politics in the upstate-NY area. Essentially, it seems to me that they are using the weblog technology to create, at very low cost, a burgeoning news site that bypasses the conventional media. Some of their press is here.

It's also interesting to see how intensely the writers have thrown themselves into the work. These are professionals and busy people. But many of the posts say things like, "...I'm here at the convention with X and Y...", and, "...I took a long lunch to hear the speech by Z..." This is not armchair stuff. These people are putting some significant hours into learning their landscape, as a way of doing something to take back this country.

I'm glad to see it. And I'm wondering what more I myself can do. We have left this sort of work to the worst of us for too long.

Monday, September 10, 2007

Goldbach Variations

After my last post (see especially the comments), I got to thinking about prime numbers and prime decompositions. The most famous unsolved problem along these lines is Goldbach's Conjecture, which hypothesizes that every even integer greater than 2 is a sum of two primes. (Examples: 4 = 2 + 2, 18 = 7 + 11, etc.) Goldbach's conjecture has been verified for every even number up to 100,000,000,000,000,000 (10^17) by the Portuguese mathematician Tomás Oliveira e Silva and his research group; their results are here. But nobody knows whether the conjecture is true or not. There could be a counterexample just around the corner.
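Nobody needs Oliveira e Silva's machinery to play with the conjecture at small scales. A quick Python sketch (my own sanity check, checking a far humbler range) verifies it for every even number up to 10,000:

```python
def is_prime(n):
    """Trial-division primality test; plenty fast for small n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def goldbach_pair(n):
    """Return primes (p, q) with p + q = n, or None if no pair exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# Every even number from 4 up through 10,000 has a decomposition:
assert all(goldbach_pair(n) for n in range(4, 10001, 2))
print(goldbach_pair(18))  # (5, 13) -- the 7 + 11 decomposition works too
```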

Driving over to the nursing home to visit my parents the other day, it occurred to me to change the word "sum" in Goldbach's conjecture to "difference." Here then is the conjecture:

Every even number is a difference of two primes.


Examples: 2 = 5 - 3 and 65,036 = 65,053 - 17.

I have verified the conjecture out to 65,036. For me to go beyond that would require a little more investment of time.
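For anyone who wants to repeat the check, here is one straightforward way to do it in Python (a brute-force search; note that failing to find a pair below the cap proves nothing):

```python
def is_prime(n):
    """Trial-division primality test; plenty fast for small n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def difference_pair(n, search_cap=100000):
    """Return primes (p, q) with p - q = n, or None if none is found
    with q below search_cap (absence of a pair proves nothing)."""
    for q in range(2, search_cap):
        if is_prime(q) and is_prime(q + n):
            return (q + n, q)
    return None

# Check the conjecture for every even number up to 2,000:
assert all(difference_pair(n) for n in range(2, 2001, 2))
print(difference_pair(65036))  # (65053, 17), the example above
```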

I also made bold to send Professor Oliveira e Silva the conjecture, and he very kindly answered, saying that it was not known whether this conjecture is true, but that it does appear to be so.

Stopping at Burger King to pick up some hamburgers for my folks, I jotted down on a napkin a more general problem: given integers M and N, for what integers K is it the case that K is a weighted average of primes,

K = (Mp + Nq)/(|M| + |N|)


for some primes p and q? With M = N > 0 we have Goldbach's conjecture. With M = -N, we have (OK, until I hear otherwise, let's just say it) "Zimba's conjecture." (Henceforth abbreviated ZC.)
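A brute-force search makes the generalization concrete. This is just a sketch (the search cap is arbitrary), but it shows how the special cases fall out:

```python
def is_prime(n):
    """Trial-division primality test; plenty fast for small n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def weighted_average_pair(K, M, N, cap=10000):
    """Search for primes p, q with (M*p + N*q) / (|M| + |N|) == K."""
    W = abs(M) + abs(N)
    for p in range(2, cap):
        if not is_prime(p):
            continue
        r = W * K - M * p          # need N*q == r exactly
        if r % N == 0:
            q = r // N
            if 2 <= q < cap and is_prime(q):
                return (p, q)
    return None

# M = N = 1 makes K an average of two primes (Goldbach, restated);
# M = 1, N = -1 makes 2K a difference of two primes (ZC):
print(weighted_average_pair(9, 1, 1))   # (5, 13): 9 = (5 + 13)/2
print(weighted_average_pair(2, 1, -1))  # (7, 3):  2 = (7 - 3)/2
```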


Number theory is a valuable subject for an educator like myself, because some of the discipline's hardest questions are so near the surface. Or as the number theorist G.H. Hardy put it, "...there are theorems, like 'Goldbach's Theorem,' which have never been proved and which any fool could have guessed."

***

More notes, as I read up on this:

In 1849, Alphonse de Polignac conjectured that every even number is the difference of two consecutive primes in infinitely many ways. This has not been proven, but it would imply ZC if true. (Ref)

The thread continues in the comments below: a reference for the ZC, and another try at a conjecture - this one new for sure...!

Saturday, September 1, 2007

The Prime of Life

In my post Where Credit Is Due, I posed the question, Can somebody tell me when I became middle-aged? Well, the answer is, "today." Today I'm 38 years old - and according to the actuarial table here, 38 is the very age when a man's present age equals his expected remaining years of life. However, a bit of linear interpolation on the data suggests that I've got a little time left. I won't pass the halfway point of the table until June 7th of next year, around dinnertime. No need to rush into that motorcycle purchase just yet.

I also made this graph using the data from the actuarial table. It shows the risk of dying in the next year for men of different ages, from age 10 to age 50. Some interesting patterns....



(Before anybody panics, let me point out that the vertical scale only goes up to 1%.)

It so happens there's a piece on mortality statistics in this month's American Scientist - one of the best magazines of any kind published today. The article is here, in the Marginalia section.

***

A couple of random notes, and then I've got some Key Lime Pie to attend to.

(1) In that same issue of American Scientist, there is a courageous article on the scandalous state of modern cosmology - and by extension, the deep confusion within contemporary theoretical physics as a whole. The article is "Modern Cosmology: Science or Folktale?".

(2) A friend has generously mailed me a copy of The Black Swan: The Impact of the Highly Improbable. I'll give it a more serious look soon, but on first flip-through, I have to say, it should have been called The Black Swan: The Impact of the Highly Unreadable. Somebody get this guy an editor! Better yet, give the whole thing to Malcolm Gladwell, and let him condense it down to a nice pithy piece for the New Yorker. Having said that, the bibliography looks very valuable, and the graph on page 276 is immediately convincing. There is obviously a lot here, although the author's pretentious style keeps him in the foreground, at the expense of his message.

(3) Today it struck me that 38 can be written in four different ways as the sum of a prime and a perfect square: 38 = 1^2 + 37 = 3^2 + 29 = 5^2 + 13 = 6^2 + 2. Amazing! In number theory, you frequently see decompositions into squares and decompositions into primes, but I have never seen a mixed decomposition problem like this.
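This is easy to confirm by machine. A quick sketch that enumerates every such decomposition:

```python
def is_prime(n):
    """Trial-division primality test; plenty fast for small n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def prime_plus_square(n):
    """All ways to write n = k**2 + p with k >= 1 and p prime."""
    return [(k, n - k * k) for k in range(1, int(n ** 0.5) + 1)
            if is_prime(n - k * k)]

print(prime_plus_square(38))  # [(1, 37), (3, 29), (5, 13), (6, 2)]
```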

Thursday, August 23, 2007

On Mansfield's 2007 Jefferson Lecture

Later this fall, I'll be giving a lecture as part of the celebration surrounding Bennington College's 75th Anniversary. (Listening to lectures is considered to be a celebratory activity in academia.) I'm still considering what to talk about. The temptation is always there to do something with "the two cultures," or (particularly in the Bennington environment) to address art/science connections. But it's hard to avoid tepid generalities in that sphere. As a reminder of what not to do, I clipped the following paragraph from Harvey Mansfield's 2007 Jefferson Lecture, "How to Understand Politics: What the Humanities Can Say to Science". (See also the 05/09/07 article in the Chronicle of Higher Education.) Mansfield is a famous Harvard political scientist, but the first time I heard of him was in connection with his amusing system for assigning "ironic grades" in his courses, which made the papers a few years ago.

In this passage from the lecture, Mansfield is contrasting literature with science:

Literature, to repeat, besides seeking truth, also seeks to entertain - and why is this? The reason is not so much that some people have a base talent for telling stories and can't keep quiet. The reason, fundamentally, is that literature knows something that science does not: the human resistance to hearing the truth. Science does not inform scientists of this basic fact, and most of them are too consistent in devotion to science to learn it from any source outside science such as common sense. The wisdom of literature arises mainly from its attention to this point. To overcome the resistance to truth, literature makes use of fictions that are images of truth. To understand the fictions requires interpretation, an operation that literature welcomes and science hates for the same reason: that interpreters disagree. Literature is open to different degrees of understanding from a child's to a philosopher's, and yet somehow has something for everyone, whereas science achieves universality by speaking without rhetoric in a monotone, and succeeds in addressing only the company of scientists. Science is unable to reach the major part of humanity except by providing us with its obvious benefits. Literature takes on the big questions of human life that science ignores - what to do about a boring husband, for example. Science studies the very small and the very large, surely material for drama but not exploited by science because in its view the measure of small and large is merely human. Literature offers evidence for its insights from the observations of writers, above all from the judgment of great writers. These insights are replicable to readers according to their competence without the guarantee of scientific method that what one scientist sends is the same as what another receives. While science aims at agreement among scientists, in literature as in philosophy the greatest names disagree with one another.


One clue that this is gibberish is that you can interchange the words "science" and "literature" everywhere in this paragraph, and many of the resulting sentences remain true. Here are a few truisms resulting from this search-and-replace operation:


1. Science knows something that literature does not: the human resistance to hearing the truth.

Galileo knew this well. (So do you, if you've ever argued with a Creationist.)


2. Science is open to different degrees of understanding from a child's to a philosopher's.

We are obviously teaching children something in fifth-grade science class. And I'm guessing that that something is...science.


3. Literature is unable to reach the major part of humanity.

This truism holds because the major part of humanity doesn't read literature.

Well, I should qualify that: the major part of American humanity doesn't read literature. The 2004 NEA study Reading at Risk concluded that only slightly more than a third of adult males now read literature. Overall, less than half of adults in the study read any literature during 2002, the year covered by the questionnaire. "Reading literature" here means reading any novels, plays, short stories, or poetry. For example, reading a Left Behind novel would count as reading literature, for the purposes of the study.

It is a sad thing to realize that literature is unable to reach the major part of humanity, just as it is a sad thing to realize that science is unable to reach the major part of humanity. But I don't think that the explanations for either circumstance are to be found within science and literature themselves.


4. Science takes on the big questions of human life that literature ignores.

Where did it all come from...what is it all made of...where is it all going...what control can we exert over the forces that buffet us...why cannot my father raise himself from his bed?

These, I would argue, are some very big questions of human life, which literature can take up, but not take on.


5. In science as in philosophy the greatest names disagree with one another.

I think of the Einstein-Bohr debates...or of the debate between Einstein and Newton, which unfolded over centuries.


6. Science, besides seeking truth, seeks to entertain.

Guess what: science is fun. And maybe I'm a rarity for trying to use a little rhetoric and trying to avoid monotone in my scientific papers, but I don't think so. We all compete for each other's attention, and a little zip in the sauce never hurts. It's no different in the marketplace of scientific advances. For that matter, I think that excellent writing can be the clearest and most effective writing, even in a purely scientific context.


We could go on like this, but I have probably already become boring, so let me try to wrap this up quickly. (Also the semester is about to start, so I'd better get hopping on that.)

The question I came away with after reading this paragraph is, What does Mansfield think science is? By "science" he seems to mean some corpus of settled material, with its dry prose and careful skirting of controversy. He seems never to have witnessed the liveliness of scientists in their actual milieu: their daily bushwhacking along a dark and bewildering research frontier, as well as their Friday afternoon gab sessions, when everybody kicks back and debates the big picture. Listen in on one of these conversations, and you will realize that even two coauthors on the same journal article can differ importantly in the way they view their results.

Mansfield seems to view scientists as dispassionate masters of a circumscribed and uncontroversial text. But scientists disagree constantly with one another and argue heatedly amongst themselves, sometimes even about textbook material. Scientists also spend 90% of their day confused and off-balance: at a loss to understand their data; crumpling up the fiftieth attempt to get a calculation right. But no one in Mansfield's part of the campus seems to see their struggle and confusion; no one registers the rise and fall of their anxieties and ambitions. Perhaps this is because the scientists run academia now, and it is not in their interests to let their humanity show.

Sunday, August 19, 2007

Where Credit is Due

I remember the night Visa found me. It was 1993 and I was living in England. I was working late at the Mathematical Institute in Oxford. The phone rang. I was used to getting calls from my friends at all hours of the night - for one thing because of the time difference with the United States, and for another thing because my friends are the kind of people who are awake at all hours of the night no matter what time zone they're in. But there was no friend on the other end of this line. Visa had found me.

I left college in 1991 with more debt than I could handle and a bad habit of letting myself forget about it for months at a time. During my senior year of college I had fought a pretty good bout against American Express. I won on points: they wrote off the debt, and I gave back the card. But with Visa I got on my bicycle. I thought if I rode it all the way to England, I'd be safe. I didn't count on my mother turning me in.

***

My grad school years were lean, but they were also fat, because in Berkeley we lived like paupers but ate like kings. If one of us got a fellowship check, all of us ate Roquefort. I liked to say back then that my friends and I would be the first graduate students in the history of graduate school to come down with gout.

None of this lent itself particularly well to paying bills. I remember when I was first beginning to date a particular young woman, and the two of us went back to my apartment for the first time. We walked in, and I flicked the light switch. Nothing. PG&E had shut me down. Gamely, she lit some candles, then said she'd be just a minute in the bathroom. Soon, word came through the closed door: no toilet paper. Gamely, she accepted the paper coffee filters I passed through to her. Gamely, she paid my electric bill the next day. Surprisingly after such a beginning, our relationship lasted another two and a half years.

But to return to the issue of credit. In my fourth year at Berkeley, Capital One, an unknown but clearly hungry new company, offered me my first credit card in years, with a thousand-dollar limit. I took them up on the offer. On the very same day the card arrived, I maxed it out with a plane ticket to Amsterdam. Somebody over at Capital One lost their job that day, I'm pretty sure. But it was legitimate! I was scheduled to give a paper at a conference on quantum mechanics. But I didn't have funding for the airfare, so the card arrived just in time. My relationship with Capital One has lasted to this day, and it's had all the ups and downs of any relationship, though in this case mostly to do with APRs.

***

People ask me mathematical questions all the time. A friend once sent me the following cryptic email:

situation: test.
pool of possible questions: 5
possible amount on test to choose from: 2 or 3.
number of questions to answer: 1.

If I just choose two questions to study, what are the odds of having one of those show up on the test as a posed question to answer?
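For the record, this is a small counting problem: the chance that none of the posed questions was studied is C(3, k)/C(5, k), where k is how many of the 5 pool questions appear on the test. A sketch (assuming the test poses either 2 or 3 of the 5, all choices equally likely):

```python
from math import comb

def p_at_least_one_studied(pool=5, studied=2, posed=2):
    """P(at least one studied question is among the posed questions)."""
    # Complement: every posed question comes from the unstudied ones.
    return 1 - comb(pool - studied, posed) / comb(pool, posed)

print(p_at_least_one_studied(posed=2))  # 0.7
print(p_at_least_one_studied(posed=3))  # 0.9
```

So the odds are 70% or 90%, depending on whether the test poses 2 or 3 questions.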


More recently, a friend asked how best to pay off a high-interest credit card. She'd been paying $600 per month on a $10,000 balance at 21.9% interest, and she was now considering cashing in a 403(b) account to eliminate the debt. After finding out some more about her situation, I advised taking out a home equity line of credit to pay off the debt. Assuming she stops using the card completely, in three years she can have the home equity loan paid off, and by that time she can also have $10,000 in cash reserves in the bank - all for the same $600 per month she's paying now on the credit card. (The caveat, of course, is that you really do have to stop using the credit card; otherwise, in three years you'll just find yourself maxed out again.)
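The arithmetic behind that advice is easy to simulate. A sketch in Python - the home equity rate here is my own assumption (say 8% APR), since the actual quote will vary, and interest earned on the savings is ignored:

```python
# Assumed: $10,000 home equity line at 8% APR, $600/month, 36 months.
balance, saved = 10_000.0, 0.0
for month in range(36):
    if balance > 0:
        balance = balance * (1 + 0.08 / 12) - 600  # pay down the line
        if balance < 0:
            saved, balance = -balance, 0.0         # overshoot goes to savings
    else:
        saved += 600                               # redirect payment to savings

print(f"banked by month 36: ${saved:,.0f}")  # comfortably past $10,000
```

Under these assumptions the line is retired in about a year and a half, leaving roughly eighteen $600 deposits - which is how the same monthly payment yields both a paid-off loan and $10,000 in reserves by year three.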

In my own life, I have often wondered, if I owe X amount on a credit card at Y% interest, and I pay Z per month, then how long will it take me to pay off the card, assuming I don't charge anything else on it? The mathematics required to answer this is not trivial. There are online calculators that will do the computation for you - see this one for example. But the downside of using a calculator is that you don't get any insight into the problem you're facing. So I sat down one day several years ago and derived the answer once and for all. Here it is:



In this formula, b_0 is the initial balance, p is what you have decided to pay every month, and i is the daily interest rate, that is, your APR divided by 365. Note that the payoff time N turns out to be a function not of b_0 and p separately, but rather of the ratio p/b_0. Physicists will have seen this coming, thanks to their penchant for dimensional analysis. (The answer to the problem - a number of months - carries no dollar signs; so the dollar signs on p and b_0 have to cancel out, and the only way that can happen is for them to enter as the ratio p/b_0.)
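In case the formula image doesn't survive reformatting, here's a sketch of the same computation in code, using the common monthly-compounding approximation r = APR/12 rather than the daily rate i of the original derivation:

```python
from math import log

def payoff_months(b0, p, apr):
    """Months to retire a balance b0 with fixed monthly payment p at annual
    rate apr. Solves the recurrence b_{n+1} = b_n*(1 + r) - p for the n
    at which the balance reaches zero, with r = apr/12."""
    r = apr / 12
    if p <= r * b0:
        raise ValueError("payment doesn't cover the interest; the balance never shrinks")
    return log(p / (p - r * b0)) / log(1 + r)
```

Scaling b0 and p by the same factor leaves the answer unchanged, which is exactly the dimensional-analysis point: only the ratio p/b_0 matters.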

The derivation is here.

Getting into the details of this problem gives you a gut-level feel for an important financial fact: payments to the credit card company are mathematically the same as investments that grow with a guaranteed rate of return. Millionaires pay ludicrous fees to hedge funds in exchange for a guaranteed rate of return. Schmucks like us can do the same thing just by overpaying our credit card bill.

For convenience, I have put together the following table, suitable for printing out and stashing away in the utility drawer (click to enlarge):



For example, suppose your starting balance is $10,000, your interest rate is 12.9%, and you figure you can afford to pay $400 per month. Your monthly payment equals 4% of the starting balance. So look down the 4% column until you get to the row for an APR of 13%. You see that it will take you 29 months to pay off the card, or about two and a half years. (After a few months of making payments, call the company every so often to request a lower APR.)

Another example. Your starting balance is $10,000, your interest rate is 12.9%, and you really want to have this card paid off in a year, because you're planning to refinance your house in 18 months, and you want your credit score to be as high as possible. [Can somebody tell me when I suddenly became this middle-aged??] Looking at the table in the 13% row, you see that in order to have a 12-month payoff time, you're going to have to pay 9% of the starting balance, or $900, every month.
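Since the table itself is an image, here's a hedged sketch that regenerates a few of its cells, again under the monthly-compounding approximation (treat the exact month counts as approximate):

```python
from math import log

def months(pct, apr):
    """Payoff time when the fixed monthly payment is the fraction pct of the
    starting balance. Only the ratio enters, so the balance itself never appears."""
    r = apr / 12
    return log(pct / (pct - r)) / log(1 + r)

for apr in (0.10, 0.13, 0.18, 0.22):
    row = [round(months(pct, apr)) for pct in (0.04, 0.06, 0.09)]
    print(f"{apr:.0%}: {row}")
```

The 13% row reproduces the two examples above: about 29 months at a 4% payment, and about 12 months at 9%.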

***

I liked Jean Chatzky's advice on her financial webpage. But the trouble with Chatzky's program is that it has nine steps. Who can remember nine steps? Who can carry them out? The people who can follow nine steps are not the people with maxed-out credit cards. So, as a recovered credit-card delinquent, I thought I would share my own program, which is so simple it has only one step:

Tithe 10 percent.

Tithing 10 percent means that any time any money whatsoever comes into your household - be it a paycheck, tax refund, honorarium, royalty, or even a good night's poker winnings - sit down that very night and send a check to your credit card company for 10 percent of whatever amount you brought in that day. Don't even wait for your account statement; just keep those checks moving out the door.

(If you have more than one credit card bill, divide your 10% tithe among the various cards in a ratio that makes the most sense to you.)

If your net income is, say, $30,000 per year, then tithing will divert $3,000 a year to your credit card, and what's more, it'll do so on a continual basis, hammering away at the debt before interest can pile up. They say interest never sleeps; well, don't let your payments sleep either. Think of your steady stream of checks as a flurry of jabs that keeps the credit card companies constantly on their heels.

If your tithe isn't bringing the balances down, then you can increase the percentage, make an extra payment when the account statement comes in the mail, and, most important of all, STOP USING THE CARD.

The system works, because in all honesty you can probably spare 10% of whatever you make, especially if you part with it immediately. After all, with the money gone, you can't very well blow it on Roquefort, can you?

Saturday, August 4, 2007

Flop on Pop

My new baby daughter loves to sleep on papa.





Unfortunately, this tends to pin papa down. Here's a quick review of some of the things that go through your mind when even a slight shift of position could throw your entire household into screaming chaos.

1. The best way to stay sane when you can't move a muscle is to invent word games on the spot and then try to play them. I have created dozens of these (see Word Puzzles for the Seriously Smart), and I'm constantly thinking of more. One game I'm playing lately is to think of what I call "unambiguous words." These are words that have only one meaning. By that I mean that the word has only one definition in your dictionary of choice. So far I have thought of the following examples, which have only one meaning in the American Heritage Dictionary, 4th Edition:

pub
schadenfreude
anyone
mattock
logy
vim
withered
bagel
inveigh
inure
iterate
venerate
emanate


I think it would be fun to assemble hundreds of these words and then use them to write poetry or short fiction. Would the paucity of meaning and the poverty of connotation lead to flat writing? Or would every word appear to be, in virtue of its specificity, "le mot juste"?

2. SLEW strikes me as an interesting word, because it can be interpreted as a noun - as in, "a slew of examples" - or as a past-tense verb (Cain slew Abel). So another game I'm playing is trying to find more words like this. The solutions tend to be rather choice. Here's what I have so far:

slew
dove
rung
spoke
felt
dug
spelt
moped
stove

I love these examples because (1) as verbs, they can only be past-tense; and (2) the noun sense and the verb sense have absolutely no conceptual connection to each other. A word like CUT is a past-tense verb and a noun, but CUT is also a present-tense verb, so it violates (1). A word like THOUGHT is a past-tense verb and a noun, but the verb sense and the noun sense are obviously related conceptually. I'd love to see more examples satisfying (1) and (2); feel free to add them in the comments section.

3. As we all know, there is no such thing as "up" or "down." Better to think of it as "away from the center of the earth" and "towards the center of the earth." Apropos of nothing, I wonder if you could raise a child in such a way that she understood this from the beginning. For example, you would never allow yourself to say things in front of the child like "What goes up, must come down." Instead you'd say, "What goes out, must come back in." Or when a song came on the radio like "Love Lift Us Up Where We Belong," you'd say, "What they really mean, honey, is Love Push Us Out Where We Belong."

Eh. Probably wouldn't work.

4. Although you can't move when you're sitting in that rocking chair, the upside is that you have lots of time to think about moving. Here is something I devised while thinking about locomotion and how it works on planet Earth.

***

The cosmic speed limit of 300,000,000 meters per second imposed by Einstein's theory of relativity is well-known. But it was only recently that I realized there's a way in which ordinary Newtonian physics also places some practical limits on your ability to move quickly from point A to point B on this planet.

To see how this comes about, let's suppose you plan to travel a distance D, beginning in a state of rest and arriving at your destination in a state of rest. Suppose also that your mode of travel relies on friction with the ground to make it work.

I should say, restricting yourself to a friction-based form of locomotion is not as limiting as it may seem. If you plan to run, walk, cartwheel, ride a bike, drive a car, or piggyback on the shoulders of a friendly robot, you'll be using friction to get where you're going. Among animals also, friction underlies the hopping of a toad, the inching of an inchworm, and the slithering of a sidewinder. For eons of evolutionary time, friction has been the basic engine used by man and beast for traveling on land.

An acceleration generated by friction will always scale as b*g, where g = 9.8 is the strength of the earth's gravitational field in standard units, and the dimensionless coefficient b characterizes the roughness of the two surfaces involved - say, the pavement and the soles of your shoes. The presence here of a material property such as the coefficient of friction needs no explanation. The reason for the presence of g is that, as it turns out, the maximum friction force attainable between two surfaces scales directly with the strength of the contact force pressing the two surfaces together. This is why you use "elbow grease" to get out a stain: by pressing harder as you scrub, you are making available a larger friction force to pull the dirt loose. In the case of locomotion, it's the earth's gravity that applies the elbow grease, pressing you to the ground. Hence, the maximum friction force you can use to push yourself forwards ultimately scales with the gravitational field strength g.

The implication of the acceleration-scale b*g is that, for a journey powered solely by friction, the time required to cover a distance D will scale as (D/bg)^(1/2). Turning the reasoning around, we find that the greatest distance D you can cover under "friction power" within a fixed time T is given by D ~ b g T^2.

Plugging in some numbers, we find that in a lifetime of threescore and ten, the greatest possible distance you can cover - and still come to rest when the good Lord says you must - will be on the order of 10^(19) meters. That's about a thousand light-years, or some hundred million times the earth-sun distance, meaning that the friction limit is not one that we'd actually bump up against in practice!
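The numbers above can be checked in a few lines, taking g = 9.8 m/s^2, a coefficient of friction b of order unity, and the rest-to-rest bound T = 2*(D/bg)^(1/2):

```python
from math import sqrt

G = 9.8   # m/s^2

def max_distance(T, b=1.0):
    """Greatest rest-to-rest distance coverable under friction power in
    time T, from inverting T = 2*sqrt(D/(b*g))."""
    return b * G * (T / 2) ** 2

lifetime = 70 * 365.25 * 24 * 3600      # threescore and ten, in seconds
D_max = max_distance(lifetime)          # ~1.2e19 m
```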

Here I've taken the coefficient of friction b to be of order unity, which is typically the case. Of course, the coefficient of friction does vary in value, depending on what sort of surface you're walking on (rough or slippery); and likewise it depends on what sort of material your bootsoles are made of.

So, one moral of the story: If you want to go far in life, wear track spikes.

***

The Newtonian limit of 10^(19) meters is actually not as strict as the Einsteinian limit, which is c*T ~ 7*10^(17) meters. But the two limits coincide when b g T^2 ~ c T, or when b ~ c/gT ~ 0.01. On a highly polished planet, the Newtonian limit would actually be the sharper of the two.

***

The friction limit is not fundamental physics. For example, it doesn't apply to travel by jet (or, what is much the same, travel by rocket). Rockets move by harnessing a controlled explosion near the rear of the craft. Essentially, the debris from the explosion bangs against the backside of the ship, knocking it forwards. Using a jet engine, you can lift gently off the ground, accelerate forwards at a great rate, coast at high speed, and then reverse the thrusters to bring you back to a state of rest, touching down gently at your destination. The material properties of the intervening land will have nothing to do with your trip time.

An ordinary propeller plane is a closer analogy to the friction limit. This is because a propeller plane depends for its propulsion on the viscosity of the air - and viscosity is more or less the fluid equivalent of friction between solids. A similar mechanism would be an oceangoing ship's propeller screw: it depends for its effectiveness on the viscosity of water. As Rayleigh observed in the 19th Century, if water had no viscosity - today we can produce such "superfluids" in the laboratory - then a ship's propeller would be useless; the ship would merely agitate in place, going nowhere, as the propeller blades slipped through the water without generating any thrust. Of course, as Rayleigh probably also observed, if water had no viscosity, ships would hardly need propeller screws; you could just give the ship a good shove, and it would glide through the water for miles.

***

In a watery environment such as the sea, organisms have a choice between using friction on the seafloor to drive them forwards, or using the viscosity of the water to generate thrust. Creatures who swim leave creeping things in their wake; but on land, where the parameter values are vastly different, the best horizontal runners and the best horizontal flyers enjoy similar speeds, topping out in the 70 mph range.

***

Earlier, I argued qualitatively that a friction-powered trip cannot be completed in a time less than about T ~ (D/bg)^(1/2). A detailed analysis shows that the precise lower limit is T_min = 2(D/bg)^(1/2). The analysis leading to this bound assumes that during the trip, your center of mass remains within a plane parallel to the ground. However, if you can launch yourself through the air Incredible-Hulk-style, then you can actually get to your destination faster. This is because, during the liftoff phase, you'll be pressing into the ground with a force greater than your own weight, and this extra "elbow grease" will allow you to generate a larger horizontal friction force than you otherwise could. Of course, you'll waste some time rising into the air and coming back down; but if you launch yourself at the shallowest attainable angle (arctan(1/b)), then the gambit proves worthwhile. In fact, it turns out that a distance D can be leapt in only a time T_min = (2D/bg)^(1/2). The factor of 2 in the ground-based strategy has become a factor of 2^(1/2), which is a time savings of about 30%.
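A two-line comparison of the two bounds makes the 30% figure concrete:

```python
from math import sqrt

g, b, D = 9.8, 1.0, 10.0                 # order-unity friction, a 10 m dash
t_ground = 2 * sqrt(D / (b * g))         # staying level the whole way
t_leap = sqrt(2 * D / (b * g))           # one leap at angle arctan(1/b)
savings = 1 - t_leap / t_ground          # = 1 - 1/sqrt(2), about 0.29
```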

If D is a distance of many meters, then leaping all the way to the destination will be out of the question for a mere human. But thinking of D as being only a meter or two, we see that the strategy of loping is superior to that of a gliding walk. While this observation hardly explains exactly how or exactly why humans spontaneously break into a run to save time - that depends on metabolism and the shapes of our bodies - it may go some ways towards explaining why running as a strategy exists among animals at all.

***

If you find this kind of thing at all interesting, let me recommend a wonderful book, Life's Devices, by Steven Vogel. This book is the On Growth and Form of our times.

***

My scientific hero, John Bell, once said that impossibility proofs in physics are proof of a lack of imagination. My proof that you can't travel a distance D in a time less than (2D/bg)^(1/2) is no different. For, rather than leaping the distance D, you could simply invent the starting block and get to your destination as fast as you like. (Don't forget the anabolic steroids.)

I sometimes wonder if the cosmic speed limit imposed by Einstein's relativity theory has a similarly simple countermeasure.

***

If you've made it this far, I end with a scrap of creative writing inspired by the foregoing.

***


The Ribenians secrete a thin fluid from their pores, which renders their bodies nearly frictionless. Life in their society is very difficult for this reason.

Efforts to clothe the Ribenians proved fruitless. After many attempts, a shoe was placed on the foot of a child. It slid off as soon as he rested his foot on the ground. Adults, bound in linen cloth, soon faced into their wrappings, or writhed out of them entirely.

The Ribenians organize themselves into clans. Each clan lives in a hollow, where the clan members wriggle over one another in a great tangle of bodies. Life in the clan is a continual struggle to reach the higher levels, where there is more light and air, and where one may drink fresh rainwater instead of ground-seepage. The traditional technique for ascent is to migrate to the bottom of the hollow, then push directly downwards with the feet, with enough force to propel oneself upwards through the layered bodies to reach the surface. The overzealous individual emerges with too much residual momentum, expelling himself from the clan. He glides smoothly along the gently rising grade of the hollow. Slowing continuously, he may eventually fall back towards the clan, or he may slide over a crest and into a neighboring clan's hollow. Such events allow the Ribenians to maintain genetic diversity in their tribes.

Friday, July 6, 2007

Reverie on the Principle of Equivalence

Dry under my porch, I lean against a weathered wooden table, drinking coffee and watching a steady rain.

As I look out toward the mountains, the scene reinterprets itself. The table, my house, the mountains: all of us hurtle upwards, rising faster and faster into the sky, splashing our way through a motionless mist.

Might I only slow, and stop, and find myself among sparkling constellations, each round suspended droplet spinning slowly like a liquid star.

Still gaining speed, I set down my cup, which now seems heavy as it pins the saucer to the table.

How long can we keep climbing like this?

Sunday, June 17, 2007

Inertia and Determinism

For one reason or another, I often find myself driving the Taconic State Parkway at night (if you've ever done that, you know what I'm saying). On one of these trips I was heading back up to Vermont after a dinner meeting in New York. It was already late when I left the city, and out on the dark road I was keeping myself awake by thinking of some fiendish physics problems to give to my students. (Such are the revenge fantasies of physics professors.) I thought of this one:

A particle moving in one dimension has the trajectory x(t) = t^4.

(a) What is the velocity of the particle at time t=0?
(b) What is the net force acting on the particle at time t=0?
(c) In view of (a) and (b), why does the particle move at all?


I ran through the answers in my mind: (a): The particle is at rest at time t=0. (b): There is no net force on the particle at time t=0. (c): (c): (c):

My grip on the wheel loosened, and my eyes focused on the far distance, as I realized that I didn't really know the answer to part (c) myself!

The Taconic is basically a giant deer park, especially in the middle of the night, so for safety's sake I had to drop the problem and get back to scanning the verges. But over the next year I thought about the problem from time to time, struggling to clarify my thinking on some subtle issues. Finally I wrote up some of the results, and the resulting article, entitled "Inertia and Determinism," has been accepted for publication in The British Journal for the Philosophy of Science.

I continue to reflect on the curious trajectory of this project - from its whimsical origins in my teaching practice, to its fruition as a published research paper. And meanwhile, my Year of Isaac Newton continues: this summer I'm writing the solutions for the end-of-chapter problems in my manuscript on Newton's Laws. Hopefully I'll know how to answer my own questions this time....

***

Just for the sake of interest, I'll end by excerpting some of the less mathematical material from the paper; for references, see the PDF:

From J. Zimba, "Inertia and Determinism," to appear, Brit. J. Phil. Sci.:

"Beginning in the 19th Century with Ernst Mach and Rouse Ball, and continuing on to more recent times, commentators on Newtonian mechanics have universally asserted that the Law of Inertia follows immediately from the Second Law. Does it? If we are willing to insist, as an axiom on a par with the Laws of Motion themselves, that non-Lipschitz forces do not belong to the theory, then the answer is yes. But if we do not wish to make this a priori restriction on the kinds of forces that may appear in the theory, then we can say two things. First, the Law of Inertia itself stands incomplete, or at least ambiguous, until a specific approach to [[problems like the x(t) = t^4 problem]] is selected. And, second, whatever approach is selected, the completed Law of Inertia will no longer follow mathematically from the Second Law. The Law of Inertia would instead act as a boundary condition, selecting the physical trajectory from among the many mathematical solutions to the Second Law differential equation."

...

"Within the community of mathematicians and physicists, it is often taken for granted that Newtonian mechanics has a deterministic structure. At times, this is even made a matter of definition. Arnold defines classical mechanics as the study of 'the motion of systems whose past and future are uniquely determined by the initial positions and initial velocities of all particles of the system.' Landau and Lifshitz go beyond mere definition, claiming that determinism is in fact an observed feature of classical systems. But such observations could only apply to the small class of non-chaotic systems. And for that matter, to the best of our knowledge, our world is not, in fact, deterministic; so the claim that determinism has ever been observed is open to dispute.

"Mathematicians like Arnold probably want to impose smoothness conditions because doing so makes theorems easier to prove. Then, having imposed the smoothness condition, they don't want to feel that they are leaving out any interesting behaviors; so they define classical mechanics to be the very mathematical object they are studying. Maybe the mathematicians are correct that their theorems are not leaving out any interesting behaviors. But I don't think they can be correct in saying that the smoothness conditions of their treatises are mandated by observed facts about determinism."

...

"One response to this example might be to attempt to repair [the Law of Inertia], or to strengthen it, leading to a conception of the Law of Inertia so strong that it ensures determinism in all possible situations. But it is unclear whether such a program is mathematically possible. Another approach would be to give up the attempt to complete the Laws of Motion, and simply conclude - despite the prejudices of history - that Newtonian mechanics is, and always has been, an indeterministic picture of the universe."

Sunday, June 3, 2007

This Time, It's Personal

Tomorrow morning I'm flying to Detroit. I'll be returning to Vermont the very next day on a chartered plane. With me on the charter will be some crew members, some medical personnel, and both of my parents, who'll be secured in gurneys for the flight. After we land at the Bennington regional airport, an ambulance will carry my parents and me to the Prospect House nursing home, which stands at the edge of the campus where I teach.

When we get to the nursing home, an aide and I will wheel both of my parents into my dad's room. My mom will want to see where her husband will be staying. Next I'll wheel my mom into her room, so she can meet her roommate. I'll hang some family pictures on the walls of both rooms. I'll make a list of what their rooms will need in order to be comfortable. Then I'll wheel my dad over to visit with my mom in her room. Later, my wife will come over from work, and she and I will sit with my parents for dinner. When dinner is over, my parents will each go back to their rooms and sleep among strangers in strange beds.

Apparently, my dad was quite a traveler in his day. I've seen pictures of him that were taken in rural Mexico during the 1960's, and he's told stories of a trip to California back in 1955. (He hired a farmer to fly him around Orange County in a cropduster, back when it was all farmland there.) When my sister and I were three or four years old, he brought us back some souvenirs from a trip to Morocco. But that trip in 1973 or thereabouts was apparently the last hurrah. In the 1970's the bottom started dropping out of Detroit, and the restaurant my parents operated began losing money. In 1985, after years of struggle, my dad finally sold the business and the land it was on - land that had once been pasture on his father's farm.

By 1999 my parents were doing better, thanks to the income my dad was earning as a screw machine operator. A strong man, and a hard worker ever since his childhood on the farm, he was now (at the age of 72) working sixty-hour weeks in a hot factory. I know how hot it was, because I worked with him there back in the summer of 1992, when I was between terms at Oxford. My dad and I were both on second shift. As a Rhodes Scholar/Night Janitor, I was responsible for cleaning the front offices, as well as the shop floor and bathrooms. (Let me tell you sometime about the hazing rituals that factory workers have for new college-boy janitors. For now, I'll just say that one of them involves playfully permuting the functions of various bathroom fixtures.)

That summer my dad and I both reported to the night foreman, who was an asshole named Hugh, but everybody called him Baby Huey when he wasn't around. One night I got some kind of food poisoning, and I was lying flat on my back out on the oily shop floor, moaning and holding my stomach. Huey came by and saw me lying next to my mop bucket, and he said, "What's wrong?" I said, "I'm sick." He said, "You know, your dad works." (His tone added a parenthetical, "Unlike you.")

In 1999 I had found more agreeable work as a doctoral student in physics at Berkeley. On the academic schedule I had plenty of free time, and my mom and I thought it would be nice if my dad could do some traveling, like when he was young. So I planned a trip to Istanbul and Rome. I flew from Oakland to Detroit, where I picked up my dad, and together we flew from Detroit to Istanbul for the first leg of the trip.

When I saw him, the first thing I noticed was that he was walking on his tiptoes with an odd shuffling motion. When I mentioned it, he said that he'd been healing slowly from his hernia operation. (He had finally gotten around to having an operation for a hernia that he got while working construction back in 1985 - his first job after selling the restaurant.) This made sense, so I didn't think much of it.

That shuffling gait was the onset of a neurodegenerative disease, one that resembles Parkinsons in its symptoms, while not responding to Parkinsons treatments. Today my dad is in bed almost 24 hours a day. Apart from the motor control and related issues, he's in fairly good health. He's just completely helpless.

When I graduated from Williams College in 1991, my whole family came out for the commencement ceremonies. During the weekend, we took a side trip to Mount Equinox in southern Vermont. My mother, who grew up in Chattanooga living amidst the Smoky Mountains, pronounced Vermont to her liking. She said that if there were anyplace she would ever go to live besides back to Chattanooga, it would be Vermont.

Were she born in a different time and place, my mother would have been a senator. Though uneducated, she has an iron will and rare qualities of intellect. As recently as six months ago she was enjoying her usual pastimes, which include correcting the poker players' mistakes on ESPN, reading 50 books a month, and doing the Sunday Times crossword with fearful automaticity.

Then in January she came down with pneumonia and septicemia, spending the next month in and out of the hospital. The doctors also diagnosed her with emphysema and a heart condition. (It had been decades since she'd had a physical.) Her illnesses have aged her: dulled her wits, left her weak. Since February she's been mostly confined to her bed at home, breathing compressed oxygen.

My half-brother Wayne, who lives in my parents' basement and who is frankly a little slow, has been my dad's primary caregiver these past several years. As long as my mom was healthy, things were stable. The situation had its drawbacks, but it kept my parents in their home, which is what was most important to them.

But since February, when my mom returned home from the hospital, my parents have both needed care 24 hours a day from home health aides, to the tune of $24,000 per month. As my parents' attorney-in-fact, I have been signing the checks. From the start, simple arithmetic said they would have to move.

None of my parents' options were good. My brothers and sisters and I have chosen as well as we could for them. On Tuesday, my mom will see the green hills of Vermont again, and my dad will have one more plane ride.

Sunday, May 27, 2007

The Fundamental Theorem of Weight Loss

Background:

"Low-Carb Diets Get Thermodynamic Defense", on Nature.com.

"Is a calorie a calorie? Biologically speaking, no" in Letters to Am. J. Clin. Nutr. 2004;80:1445-54.

Feinman RD and Fine EJ, Thermodynamics of Weight Loss Diets, Nutr. Metab. (Lond.) 2004 Dec 8;1(1):15.

Fine EJ and Feinman RD, "A calorie is a calorie" violates the second law of thermodynamics, Nutr. J. 2004 Jul 28;3:9.


Recently I gave my physics students the following scenario, basically just for fun:

Imagine what would happen if food scientists were to invent a kind of intense "supersweetener" with 3500 calories in a single ounce. If you were to ingest an ounce of this sweetener, then how much weight would you gain?

A few of the students knew offhand that 3500 calories is the equivalent of a pound of fat. So everyone figured, well, if you ingest 3500 calories, then you should gain a pound.

But obviously, if you ingest only one ounce of material, then your weight could not increase by any more than one ounce.

The students hated this answer! For one thing, many of them had forgotten (or never known in the first place) that mass is always conserved in chemical reactions. And they had also never really viewed human metabolism in the abstract as just one great big chemical reaction; even the biochemistry students were down in the details of ATP and such.

So I said, well, just imagine that you're standing on a scale when somebody hands you the ounce of supersweetener. As soon as they put it in your hand, the scale will tick up an ounce. And nothing more will happen when you swallow it. The scale doesn't know or care whether you're holding the supersweetener in your hand, or in your mouth cavity, or in your stomach cavity. And as you digest the supersweetener, the molecules will separate and go here and there, but until they emerge from your body and find their way into the environment, the scale reading won't change one bit.

There's a Fundamental Theorem of Calculus, a Fundamental Theorem of Algebra, and a Fundamental Theorem of Poker. I nominate the following as the Fundamental Theorem of Weight Loss:

Weight loss per day = mass in - mass out.

(You'll have to forgive the conflation of weight with mass; I'm assuming that all weight loss programs will take place in a static and uniform gravitational field, so that it will not cause a problem.)

According to the Fundamental Theorem of Weight Loss, if you want to lose weight, then your challenge is simply one of routinely defecating, urinating, sweating, vomiting, and exhaling more mass than you ingest on any given day. (You could also amputate something, deliver a baby, clip your toenails, get a haircut, or hawk a really big loogie.)

By the way, people suffering from eating disorders have long understood the Fundamental Theorem. With a ruthless logic, the anorexic minimizes the "mass in", while the bulimic maximizes the "mass out" using the ultimate weight-loss "foods", namely laxatives and purgatives, which trigger mass losses in excess of their own mass.

But if "mass in minus mass out" is all there is to weight loss, then why all the talk about counting calories? Don't calories make you fat?

Calories do in fact work well as an indicator of the kinds of foods that tend to make you fat. So in view of the Fundamental Theorem, calories must basically be a rating of how much the human digestive system "grabs onto" different foods. Eat a piece of celery, and most of its mass will find its way out of your body through defecation (cellulose passes through) or urination and respiration (much of the mass of celery is water). But eat a Snickers bar, and your body's going to say hey, let's hang onto that good stuff. Eat an extra two-ounce Snickers bar every day, at something like a fifty percent mass retention rate, and at the end of a year you'll be 20 pounds heavier.
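The back-of-envelope arithmetic, spelled out (the fifty percent retention rate is the rough guess above, not a measured figure):

```python
OZ_PER_LB = 16
bar_oz = 2                  # one two-ounce candy bar per day
retention = 0.5             # assumed fraction the body hangs onto
yearly_lb = bar_oz * retention * 365 / OZ_PER_LB   # about 22.8 lb per year
```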

So here's an idea: Instead of printing calorie counts on food labels, why not show the weight you will actually retain by eating the item in question? For example, the label on a candy bar with a net weight of 2 ounces could say something like, "Retained Weight 1 ounce." In other words, of the 2 ounces of input mass, your body's going to hang onto 1 ounce for the long term.

Labeling foods this way might make it easier psychologically for people to resist foods that are going to make them fat. The motivation factor would be clearer because you're no longer trying to avoid the abstract threat of a calorie; instead you're scoring yourself by the very same metric that shows up on the bathroom scale. If you knew exactly how much of that candy bar was still going to be with you in the morning, you might pass it up. (Thanks to the Fundamental Theorem, I can now visualize the act of eating a candy bar as amounting to a process of melting the chocolate down and smearing it all over my midsection. Want to eat a whole pizza? Why not just save time and staple it to your shirt front. Will that Twinkie go straight to your thighs? Well, not all of it - just half an ounce or so.)

Something else the students challenged me on is the question of exercise. Isn't the goal of exercising to burn calories? If "mass in minus mass out" is really all there is to weight loss, then how does exercising help you to lose weight?

Somehow, exercising must turn out to be an exercise in the expulsion of mass, the key mechanism presumably being breathing out CO2. CO2 molecules don't weigh much, but each one weighs quite a bit more than the O2 molecule you breathe in to fuel the metabolic process - about 37% more (44 g/mol versus 32 g/mol), because the CO2 carries away a carbon atom.
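The molar-mass arithmetic behind that figure is a two-line check (standard atomic masses, nothing assumed beyond the chemistry):

```python
# Each exhaled CO2 is heavier than the inhaled O2 that produced it,
# because it carries off a carbon atom along with both oxygens.
M_C, M_O = 12.011, 15.999        # atomic masses in g/mol
m_O2 = 2 * M_O                   # ~32 g/mol breathed in
m_CO2 = M_C + 2 * M_O            # ~44 g/mol breathed out
print(f"CO2 outweighs O2 by {100 * (m_CO2 / m_O2 - 1):.1f}%")  # ~37.5%
```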

There are charts that tell you how many calories you will burn by exercising in various ways for given lengths of time. But maybe the charts should cut to the chase and tell you how much mass you can expect to lose by exercising in various ways for given lengths of time. Personally, if my goal is to change what the scale says, then I'd prefer for everything in the conversation to be couched in the scale's units.

We might for example have a universal table like the following, which I sketched out using the rough conversion 8 Calories "=" 1 g of fat (sources here and here):

Butter: 90 grams retained out of every 100 grams consumed (sigh)
Bagels: 25 grams retained out of every 70 gram bagel consumed
Beef tenderloin: 1.2 ounces retained out of every 4 ounces consumed
Carrots: 0.2 ounces retained for every 4 ounces consumed
Jogging: 17 minutes to lose 1 ounce (for a 190-lb person)
Raking leaves: 33 minutes to lose 1 ounce (for a 190-lb person)
Rowing: 14 minutes to lose 1 ounce (for a 190-lb person)
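The food rows of the table can be generated by a quick script; a minimal sketch, taking the 8 Calories "=" 1 gram conversion at face value and using ballpark calorie counts that I am assuming for illustration:

```python
# Sketch: convert a food's calorie count into "retained weight,"
# per the rough conversion of 8 Calories per gram retained.
CAL_PER_GRAM_RETAINED = 8

def retained_grams(calories):
    """Grams of body mass retained, by the 8 Cal/g rule of thumb."""
    return calories / CAL_PER_GRAM_RETAINED

# Ballpark calorie counts (assumed, not authoritative):
foods = {
    "butter, 100 g": 717,    # ~717 Cal per 100 g
    "bagel, 70 g": 200,      # ~200 Cal per bagel
    "candy bar, 2 oz": 230,  # ~230 Cal per bar
}

for name, cal in foods.items():
    print(f"{name}: about {retained_grams(cal):.0f} g retained")
```

Reassuringly, the candy bar comes out near 29 grams, about an ounce: the "Retained Weight 1 ounce" label proposed above.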

***

Postscript: I first thought of the Fundamental Theorem of Weight Loss back in 2004, but this past semester was the first time I tried using the example in class. Well, my students were pretty skeptical of the whole notion. With my pride thus challenged, I went to the web later that night and found two experts, Dr. Richard Feinman of SUNY Brooklyn Health Sciences Center and Dr. Eugene Fine of the Albert Einstein College of Medicine, who have been publishing technical papers on metabolism and diet for quite some time. The papers linked to at the top are very much concerned with the question of whether "a calorie is a calorie is a calorie."

The two men were kind enough to respond to my emails, and I'm hoping that they will come to Bennington sometime to discuss their work. They verified the truth of the "mass in minus mass out" thesis - as an application of basic physical law, it could hardly have been wrong - and they also had many more interesting things to say. Two brief excerpts from their emails:

"Calories in the context of diet are a nutritional invention with many unfortunate and misleading consequences, but the concept has become so entrenched that it is impossible to discuss weight change without making reference to this usage." (Fine)

"This established, the remarkable thing is that, under most conditions, where careful measurements are made, a calorie IS a calorie, that is, the calories in food predicts weight gain or loss between diets. The above, however, means that this is not a thermodynamic effect but rather the specific characteristic of living systems." (Feinman)

So calories seem to work well as a proxy for mass retention.

Sunday, May 6, 2007

Understanding Exponential Growth

Ever since I saw this awful page about exponential growth (namely zebu.uoregon.edu/1999/es202/l3.html, which I complained about earlier), I've been considering the question of what a good piece of curriculum would look like for teaching exponential growth.

With apologies for diving right in, let me share a few working hypotheses:

1. Fluency with the mathematics of the exponential function does not automatically or inevitably lead to a rich intuitive grasp of exponential growth.

I arrived at this hypothesis by reflecting on my own educational trajectory. Though I mastered this body of mathematics as a teenager, I would say that my instinctive feel for exponential growth has become strong only in the past few years. (In fact, I wonder if my technical facility with mathematics actually shielded me from ever having to develop a rich mental idea of exponential growth.)

2. Fluency with the mathematics of the exponential function is not even necessary for having a rich intuitive grasp of exponential growth.

This is stating the case strongly; but for now I'm interested in pushing this perspective as far as I can.

3. A great piece of curriculum for exponential growth would be a valuable, eye-opening, and even transformative experience for a wide variety of audiences, including college students of all kinds, college faculty members, and adults outside of academia.

***

I don't have this magic piece of curriculum yet, but what I assembled recently for my Rediscovering Math class was perhaps a small start. A portion of what we covered is reproduced below. A leisurely ramble it may be; but I would also say that here & there it contains some real mathematical insights about exponential growth. (With respect to Hypothesis #1 above, I should say that I arrived at some of these insights for the first time as I prepared to teach this class!)

***

We begin, as one might expect, with vampires.

Vampires

A biologist on the Bennington faculty pointed me to this amusing paper by two physicists that aims to debunk various items of folklore about ghosts, vampires, and zombies. The vampire section was one of those surprising examples of exponential growth. The authors point out that according to standard (pre-Anne Rice) vampire lore, vampire-human ecology is simply a non-starter. The authors argue that with vampires feeding on people who turn into vampires who feed on people who turn into vampires who feed...and so on...then it would only take about three years for the entire world's food supply to run out!

When I saw this argument, my first reaction was embarrassment that the absurdity of vampire population dynamics had always been right in front of my face without my ever having noticed it. My second reaction was to defensively poke holes in the argument. For example, the authors conclude by reductio ad absurdum that there's no such thing as vampires (or else we'd all be vampires by now); but we might alternatively conclude from the reductio that we're all just about to be vampires, or that vampires must have natural enemies, such as werewolves. (As in the "Blade" movies. Perhaps Hollywood understands exponential growth better than most.)

I encourage you to read the article to get a sense of how the numbers work out. But for the sake of time, let's move on.
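For a rough sense of the arithmetic (my own back-of-envelope sketch, not the paper's exact model): suppose each vampire feeds once a month and each victim becomes a vampire, so the vampire count doubles monthly. Assuming a mid-2000s world population of about 6.5 billion:

```python
import math

# Doubling monthly from a single vampire: how many doublings
# until the vampires have consumed the entire human population?
population = 6.5e9                          # assumed world population
months = math.ceil(math.log2(population))   # doublings needed
print(months, "months, or about", round(months / 12, 1), "years")
# prints: 33 months, or about 2.8 years
```

Thirty-three doublings suffice, which is where the "about three years" figure comes from.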

Paradoxes of paper folding

There's an old saying that you can't fold a piece of paper more than seven times. In class, we tried it with 8.5x11 sheets of paper, and everybody managed the same number of folds—just six. Then we went out in the hallway and tried another folding experiment, this time folding a very long sheet of paper towels, over a hundred feet long. (We only used lengthwise folds in this case.) As it turned out, the difference between a single sheet of paper and a hundred-foot-long strip of paper was only a single fold! Seven, instead of six.

It was fascinating to enact the folding process for the long strip. After the first two or three folds, everything seemed to be going fine. Then, when we went from fold #5 to fold #6, the game was suddenly up. (More about the suddenness of exponential growth below.)

But why is paper folding an example of exponential growth at all? There's clearly some sort of doubling going on—or, what is the same, some sort of halving. And somehow this must be related to the difficulty of persisting in the folding process beyond a very few steps. But to draw the connection more clearly, I presented the students with a rough mathematical model of paper folding. The derivation is shown pictorially below; it leads to the equation L/t = 2^(2N), where L is the length of the strip of paper, t is the thickness of the paper, and N is the maximum number of folds obtainable. Though the model is crude, it does reveal the exponential nature of the process, and it shows N to be a function of the length-to-thickness ratio, as we would expect. Using this formula, we were able to estimate the number of times one would be able to fold a strip of paper that initially encircles the earth along the equator. (Guess how many!)
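Solving the model for N gives N = (1/2) log2(L/t), which can be turned into a small calculator; a sketch that assumes a paper thickness of about 0.1 mm:

```python
import math

# Crude folding model from the text: L/t = 2^(2N), so
# N = (1/2) * log2(L/t), rounded to the nearest whole fold.
def max_folds(length_m, thickness_m):
    return round(0.5 * math.log2(length_m / thickness_m))

print(max_folds(0.28, 1e-4))  # letter paper, ~0.1 mm thick: prints 6
# For the hallway strip the crude model overshoots the observed
# seven folds somewhat, as one expects from so rough a model.
```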



There's a nice echo of this in Philip and Phylis Morrison's excellent book The Ring of Truth, based on their 1987 PBS series. The Morrisons explain that expert Chinese chefs are able to make incredibly thin noodles called "Dragon's Beard" by repeatedly stretching the noodles by hand and cutting them in half at each step. In two minutes, the chef interviewed in the Morrisons' book has achieved twelve doublings, yielding about four miles of noodles, each noodle about twice the thickness of a human hair. Legendary chefs of the past were said to attain thirteen doublings.

Grains of rice

We all know the paradox of the Chinese emperor and the grains of rice. The way I tell the story, a wise man does a favor for the emperor, and the emperor asks what he might do in return. The wise man asks for 1 grain of rice to be placed on the first square of a chess board on the first day, 2 grains to be placed on the second square on the second day, 4 grains to be placed on the third square on the third day, and so on, doubling the number of grains each day. The emperor agrees, and long before the board's sixty-four squares are filled, all of the rice in the empire belongs to the wise man!

The night before class, I wondered whether I might attain a greater understanding of this paradox by acting out the wise man's challenge. So I sat on my kitchen floor and arranged 1+2+4+8+16+32+64+128+256 = 511 objects into a geometric pattern.


(The largest objects are dried beans, then it's rice grains, and then anise seeds.)
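The count of 511 is no accident: each tier doubles the one before, and such sums always land exactly one short of the next power of two.

```python
# The kitchen-floor pattern: tiers of 1, 2, 4, ..., 256 objects.
tiers = [2**k for k in range(9)]
total = sum(tiers)
print(total)  # prints 511, i.e. 2**9 - 1: one short of a tenth tier
```

So adding the next tier of 512 objects would have more than doubled the entire structure, which is exactly the difficulty described below.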

Just as in the experience of making "A Few Iron Posts of Observation", I found it instructive to "think with my hands" for a while. I sat peacefully on my kitchen floor, pushing the pieces around, planning the next stage, and repairing damage from the occasional errant finger or too-vigorous exhaling of breath. With my hands busy, my mind was free to roam. I reflected on the way my ever-shrinking materials (beans, rice, seeds) resembled the ever-shrinking computer chips that carry out our society's calculations. If I needed to take that next step to an outer tier of 512 objects, how would I fit them into the structure? What objects could I use? Salt grains? How then would I manipulate such tiny objects and put them in the proper places? How would I better control my breathing and other destructive effects?

Likewise, how will we continue to shrink our processors to reach the next tier? What will we make them out of, and how will we assemble their circuits? How will we protect them from environmental interference? Can our ingenuity keep up with Moore's Law forever?

The Megamountain

The latest model I've come up with for explaining exponential growth is something I call "the Megamountain." Here's how it goes.

We're going to imagine climbing a mountain. First, think about what it would be like if the mountain had the same steepness all the way up. What would this mountain look like? Try to draw it.

Next, think about what it would be like if the steepness of the hill kept increasing steadily as you went up. What would this mountain look like?

These first two mountains look something like the cartoons shown below.



Now for the mountain you don't want to climb: The Megamountain. On the Megamountain, the steepness of the mountain at any point is proportional to the altitude at that point.

That's the rule of the Megamountain: The steepness is proportional to the altitude.

If you think about this rule carefully, then you begin to realize that the Megamountain is a runaway situation. Because if you're high up, then [by the rule] it's steep; but, if it's steep, then because of that your next step gains a lot of altitude; but [by the rule] that means it's now going to be even steeper; but that means your next step will gain altitude even faster than before; but that means it'll now be even steeper; and...AAAHHH! It makes my head hurt to think about it!

When I think about what it would be like to climb the Megamountain, I actually get a panicky feeling that I can't possibly keep on going this way. I don't even want to take that next step, because every step is feeding a vicious cycle.

(By the way, you'll notice that I'm not going to try to draw the Megamountain. That's because it can't be drawn; not really. Sure, you can plot a graph of y = e^x, but in the end, you're going to find yourself plotting only the region around x-values of order unity. And in this region, the curve looks roughly similar to a parabola, so you haven't shown what is special, and terrifying, about runaway exponential growth.)
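Though the Megamountain can't usefully be drawn, its rule can be stepped forward numerically, and doing so reproduces exponential growth. A minimal sketch (the step size and proportionality constant are arbitrary choices of mine):

```python
import math

# The Megamountain rule: at every step, the rise is proportional
# to the current altitude. Walking one unit of horizontal distance
# in tiny steps recovers the exponential function.
altitude, k, dx = 1.0, 1.0, 0.0001
for _ in range(10000):              # 10000 steps of size 0.0001
    altitude += k * altitude * dx   # steepness proportional to altitude
print(altitude)                     # close to e = 2.71828...
```

This is just Euler's method applied to dy/dx = ky, which is the calculus-class statement of the Megamountain rule.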

A collection of runaway situations

There are a lot of runaway situations like the Megamountain, including:

* Unchecked population growth: The number of babies born is proportional to the number of people already here. More babies make more people make more babies make more people....

* Gestation: Suppose you had to build a hundred billion houses in 9 months. I think you would quickly hit upon the idea of building houses that build houses. This is how we get from a single fertilized egg cell to a big fat baby in only 9 months. The cell is a house that builds houses. Like grains of rice on a chess board, the number of cells added is twice the number of cells that were there before. More cells make more cells make more cells make more cells....

* Chain reactions. A uranium nucleus splits into two, and the two products strike two more uranium nuclei, causing them both to split in two; their four products strike four more uranium nuclei, and so on. This process is called fission; it was actually named for the biological process of cell division (which was called fission first). In the same way that a big, chubby baby begins with a single cell, here a single subatomic "pop" is magnified, in a millisecond, into an explosion that can level a whole city.

Mathematically, a pregnancy is a runaway chain reaction in the uterus...an explosion of a kind, but one that takes 9 months to unfold.

* Compound interest. The amount of money credited to your account is proportional to the amount of money already there. More money makes more money makes more money makes more money....

But you know, I have had savings accounts, and I have never exactly had the feeling that my money was undergoing an explosive chain reaction! The reason is that the interest rate is so small. It's true that if you wait long enough, your money will double, then double again, and eventually bankrupt the Chinese empire. But the question is, how long will it take to double?

The rule of 72: Divide 72 by the interest rate, and that's how many years it will take to double.

Example: You have a CD earning 4 percent interest. Divide 72/4 = 18, so your money will take 18 years to double. After 18 more years, it will double again.
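The rule of 72 is an approximation to the exact doubling time ln(2)/ln(1+r); a quick comparison shows how close it runs at everyday interest rates:

```python
import math

def rule_of_72(rate_percent):
    return 72 / rate_percent

def exact_doubling_years(rate_percent):
    # Solve (1 + r)^t = 2 for t, assuming annual compounding.
    return math.log(2) / math.log(1 + rate_percent / 100)

for r in (2, 4, 8):
    print(f"{r}%: rule of 72 says {rule_of_72(r):.1f} years, "
          f"exact answer is {exact_doubling_years(r):.1f} years")
```

At 4 percent the exact answer is about 17.7 years, so the rule's 18 is plenty close for mental arithmetic.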

Warning: The cost of goods and services is also growing exponentially, at a rate of about 4 percent (at least), so by doubling your money in 18 years you are really just keeping up. Your $100 today will double to $200, but that $200 will only buy what $100 buys today. And if your money earns less than inflation, you are actually losing money in real terms. This is called "inflation risk," and it's the reason you have to put at least some of your money into higher-risk, higher-return investments.

The suddenness of exponential growth

I like to show people these two crude movies that I made a long time ago. Both movies are cartoon visions of what it might be like to ride in a spaceship that splashes down on the north pole. In the first movie, the spaceship moves at a constant speed. In the second movie, the spaceship moves at an exponentially increasing speed.

(The views are through a porthole on the spaceship. Sorry about the aspect ratio - something got screwed up when I put the videos on YouTube.)

* Note, a clearer version of the first movie is available here.

* Note, a clearer version of the second movie is available here.






Whenever I show people these videos, they can hardly believe the second video. It says a thousand words about the way exponential growth can sneak up on you—and how unstoppable it is, once it gains momentum.

Where does the suddenness of exponential growth come from mathematically? One way to think about it is to recognize that when we "run the clock in reverse," exponential growth is a continual process of cutting in half. This means that any process of exponential growth must spend a very long time at very small values. And in any graphical or visual sense, the point is that one very small number is going to look visually just like another, even if the two numbers in question differ by many orders of magnitude. (On a graph with values ranging from 0 to 1, a value of 0.0003 is going to be indistinguishable from a value of 0.0000000008—even though you'd much rather your chance of winning the Lotto were 0.0003 instead of 0.0000000008!)

Additionally, when you throw in the fact that the rate of change of any changing quantity involves taking simple differences, you see why the rate of change can remain small even when the underlying numbers are actually growing by orders of magnitude. (The difference between two small numbers is necessarily small, even when the two numbers differ by orders of magnitude.)

All of this is why you can watch something "growing exponentially in time" and wonder why it's just sitting there. It's just sitting there, sitting there, sitting there, and BANG! All of a sudden it explodes. The explosion happens when your numbers take the crucial steps from "smaller" to "small" to order-unity. Prior to order-unity, it looks like nothing is happening; after order-unity, it's too late to do anything.
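You can see that "sitting there, sitting there, BANG" shape in the bare numbers. Tracking 2^t as a fraction of its final value over 64 doublings:

```python
# Fraction of the final value reached after t of 64 doublings.
# Half of all the growth happens in the single last doubling.
final = 2.0 ** 64
for t in (16, 32, 48, 60, 63, 64):
    print(f"after {t} doublings: {2.0 ** t / final:.2e} of the final value")
```

Three quarters of the way through the process, the quantity still stands at about fifteen millionths of its final value; at the halfway point it is a few parts in ten billion.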

***

Ultimately, "understanding exponential growth" might have little to do with being able to solve certain classes of transcendental equations. It might instead depend on having a gnawing feeling in your gut that exponential growth is an unstoppable force: not only an unclimbable mountain, but an insatiable feeding machine that will devour anything in its path; and a monster that will lie in wait, lie in wait, lie in wait, and then leap forward in the blink of an eye.

***

But to end on a more positive note, we should also remember that exponential growth can also be a resource. (Anybody need four miles of noodles? Just fold 12 times!) In other words, the best weapon we have against the exponential function might be the exponential function itself. This idea arises for example in the theory of quantum computing, which, when it gets here, will be a process in which the computational resources at our disposal grow exponentially with the number of particles in the processor. Previously intractable problems will become solvable in an instant!

Exponential growth as a resource also comes up in tipping point phenomena - you tell two friends, and they tell two friends, and so on and so on and so on. The numbers stay small, but they're working their way up the orders of magnitude, until we reach the fateful stage of order-unity. We usually think of this model in connection with epidemics and fads. But what if the thing we're spreading is instead a message of positive social change: such as one about changing our habits of energy consumption? That would be fighting fire with fire.