Archive for the 'Adventures in Mathematics' Category

Killing Babies

Cross posted at Agricola


In this morning’s statistics class, when Professor J*#^% asked someone for a value, a student answered with “.4.” The response prompted the professor to note that he (and most statisticians) likes values taken to at least three decimal places… .4257 being a much preferred answer to .4.

The incident prompted him to tell us a brief story about precision.

Seems that our professor was in a Differential Equations class way back when. The class was discussing the results of a test when one student protested a 10 point deduction for placing a minus (-) in a section of his answer when none was called for. The (since deceased) professor responded with this:

Son, I was in school with a boy that eventually went on to become a pharmacist. He was pretty good, too. Well, one day, a customer came to him with a prescription, and he set right to work on filling that prescription. He performed the necessary calculations, measured out the correct proportions, and mixed that prescription for the customer. Only thing was, he put a minus (-) in his equation that shouldn’t have been there. The customer took that prescription home and gave it to her baby. The baby died. I took those ten points off your grade because I don’t want you killing any babies.

So, Professor J*#^% and his classmates embraced their professor’s sage advice. From that moment forward, whenever they compared grades, the question became: “How did you do?” and the answer was: “Oh, I killed two babies” or “Killed one baby”, or on a really bad test, “Killed 4 babies”.

Now, whether you, dear reader, are appalled or not, rest assured that every one of us in that statistics class knows (forever) the value of precision.

Math Wars…Resumed

Before I resumed my education and began this journey in Discovery Informatics, I did as much research as possible. Among those efforts was a meeting with the Assistant Chairman of the Mathematics Department. I disclosed my dream, my background, and then got to the point: could I, at my age and with my lack of background in math, possibly get through the DI program? His response was brief, brutal, and very honest: if you struggle with pre-calculus and algebra, you probably shouldn’t be in the program.

Fair enough. The A in algebra boosted my spirits, but the C+ in pre-calculus scared me. Then it was on to Calculus I…….a mighty battle from which I emerged scarred and, to a certain extent, wiser.

Today, I walked into my Calc II class. Yes, there stood my old friend, the Assistant Chairman. He began the class with a brief slide presentation: the last dozen or so semesters of Calc I students who earned either an A, A-, or B+ in the class. Know, from my Statistics classes, that this represents a sample of sufficient size that we can assume a normal distribution, aka the bell curve, in the grades. Note, too, that he did not include in his sample population those students who earned a grade lower than B+ (like me). He then showed the grade distribution of those students in Calc II.

The median was a C+. There were plenty of grades worse than that (I know, and you should too, that the median is the 50th percentile). Some freshman whippersnapper, fresh off his AP score and thus placed in this class, heretofore considered by his high school classmates a genius, stated to the professor that he would, without doubt, get an A. The prof begged to differ, stating that half of us will drop or fail, and that of the rest, only 2 or 3 will get an A. Added the prof: “You might get an A, and I hope you do, but numbers don’t lie.”

Whatever sangfroid I might have felt disappeared completely during this exchange of data, to be replaced with that old familiar sensation….gut-wrenching fear. Pulse racing, blood pressure elevated, the room suddenly too warm, I struggled to breathe. I thought that I had trained myself to suppress these periods of anxiety (which primarily arrived just before any test), but NO!

So the battle resumes. Visits to the math lab, visits to the professor’s office, Sundays spent studying, and anxiety like you don’t know in the days before each test (four, plus a cumulative final); these will be my routines this semester.

Wish me luck, I’m gonna need a lot of it……..

e

I spent some time last night reviewing some math fundamentals in an attempt to move my brain out of el mundo de Español and back into the world of functions, algorithms, and the basics of calculus. You should know, too, that I have spent the summer collecting URLs for math sites that might be of some use to me as I travel deeper into the abstraction jungle. One site last night was particularly helpful…..on the subject of e, the mathematical constant. If you are like me, you might ask, what is e? And, what is it good for? (Not absolutely nothing…..The song. Sorry….)

Believe me when I tell you that my calculus textbook is worthless in terms of an explanation. For a simpler (?) explanation, let’s go to Wikipedia:

The mathematical constant e is the unique real number such that the function e^x has the same value as the slope of the tangent line, for all values of x.[1] More generally, the only functions equal to their own derivatives are of the form Ce^x, where C is a constant.[2] The function e^x so defined is called the exponential function, and its inverse is the natural logarithm, or logarithm to base e. The number e is also commonly defined as the base of the natural logarithm (using an integral to define the latter), as the limit of a certain sequence, or as the sum of a certain series (see representations of e, below).

The number e is one of the most important numbers in mathematics,[3] alongside the additive and multiplicative identities 0 and 1, the constant π, and the imaginary unit i.

The number e is sometimes called Euler’s number after the Swiss mathematician Leonhard Euler. (e is not to be confused with γ – the Euler–Mascheroni constant, sometimes called simply Euler’s constant.)

Since e is transcendental, and therefore irrational, its value cannot be given exactly as a finite or eventually repeating decimal. The numerical value of e truncated to 20 decimal places is:

2.71828 18284 59045 23536…

e is the unique number a such that the value of the derivative (the slope of the tangent line) of the exponential function f(x) = a^x (blue curve) at the point x = 0 is exactly 1. For comparison, the functions 2^x (dotted curve) and 4^x (dashed curve) are shown; they are not tangent to the line of slope 1 (red).

Got that?

Good, neither did I.

But this guy does, and does a helluva job explaining it.

In a nutshell:

e is the base amount of growth shared by all continually growing processes. e lets you take a simple growth rate (where all change happens at the end of the year) and find the impact of compound, continuous growth, where every nanosecond (or faster) you are growing just a little bit.

e shows up whenever systems grow exponentially and continuously: population, radioactive decay, interest calculations, and more. Even jagged systems that don’t grow smoothly can be approximated by e.

Just like every number can be considered a “scaled” version of 1 (the base unit), every circle can be considered a “scaled” version of the unit circle (radius 1), and every rate of growth can be considered a “scaled” version of e (the “unit” rate of growth).

So e is not an obscure, seemingly random number. e represents the idea that all continually growing systems are scaled versions of a common rate.

Now we’re getting somewhere. I understand the principle of compound interest. Who knew it came from calculus?

To continue:

Why not take even shorter time periods? How about every month, day, hour, or even nanosecond? Will our returns skyrocket?

Our return gets better, but only to a point. Try plugging different values of n into our magic formula to see the total return:


n          (1 + 1/n)^n
------------------
1          2
2          2.25
3          2.37
5          2.488
10         2.5937
100        2.7048
1,000      2.7169
10,000     2.71814
100,000    2.718268
1,000,000  2.7182804
...

The numbers get bigger and converge around 2.718. Hey… wait a minute… that looks like e!

Yowza. In geeky math terms, e is defined to be that rate of growth if we continually compound 100% return on smaller and smaller time periods:

\displaystyle{growth = e = \lim_{n\to\infty} \left( 1 + \frac{1}{n} \right)^n}

This limit appears to converge, and there are proofs to that effect. But as you can see, as we take finer time periods the total return stays around 2.718.
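
Just to convince myself, here is a quick sketch in Python (my own check, not from the article) that recomputes the table above and compares it with the built-in constant:

import math

# Watch (1 + 1/n)^n creep toward e as n grows.
for n in [1, 2, 3, 5, 10, 100, 1_000, 10_000, 100_000, 1_000_000]:
    print(f"{n:>9}  {(1 + 1/n) ** n:.7f}")

print(f"   math.e  {math.e:.7f}")  # 2.7182818...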

But what does it all mean?

The number e (2.718…) represents the compound rate of growth from a process that grows at 100% for one time period. Sure, you start out expecting to grow from 1 to 2. But with each tiny step forward you create a little “dividend” that starts growing on its own. When all is said and done, you end up with e (2.718…) at the end of 1 time period, not 2.

Now, of course, I understand why the bankers and financial advisers get so stoked about compound growth rates. This also gives me a peek behind the curtain as to why calculus offers so much to the rest of the natural sciences. Every discipline seeks answers about rates of change, and by golly, it sure does look like e helps them all measure those rates. To wrap things up…..

The big secret: e merges rate and time.

This is wild! e^x can mean two things:

  • x is the number of times we multiply a growth rate: 100% growth for 3 years is e^3
  • x is the growth rate itself: 300% growth for one year is e^3.

Won’t this overlap confuse things? Will our formulas break and the world come to an end?

It all works out. When we write:

\displaystyle{e^x}

the variable x is a combination of rate and time.

\displaystyle{x = rate \cdot time}

Let me explain. When dealing with compound growth, 10 years of 3% growth has the same overall impact as 1 year of 30% growth (and no growth afterward).

  • 10 years of 3% growth means 30 changes of 1%. These changes happen over 10 years, so you are growing continuously at 3% per year.
  • 1 period of 30% growth means 30 changes of 1%, but happening in a single year. So you grow at 30% for a year and stop.

The same “30 changes of 1%” happen in each case. The faster your rate (30%) the less time you need to grow for the same effect (1 year). The slower your rate (3%) the longer you need to grow (10 years).

But in both cases, the growth is e^.30 = 1.35 in the end. We’re impatient and prefer large, fast growth to slow, long growth but e shows they have the same net effect.

So, our general formula becomes:

\displaystyle{growth = e^x = e^{rt}}

If we have a return of r for t time periods, our net compound growth is e^rt. This even works for negative and fractional returns, by the way.
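
To make that concrete, here is a minimal sketch in Python (the function name continuous_growth is mine, not the article’s) showing that 3% for 10 years and 30% for 1 year really do land in the same place:

import math

def continuous_growth(rate, time, principal=1.0):
    # Net result of continuous compounding at `rate` for `time` periods: principal * e^(rt).
    return principal * math.exp(rate * time)

print(continuous_growth(0.03, 10))  # 10 years at 3%  -> ~1.3499
print(continuous_growth(0.30, 1))   # 1 year at 30%   -> ~1.3499, same net effect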

Example Time!

Examples make everything more fun. A quick note: We’re so used to formulas like 2^x and regular, compound interest that it’s easy to get confused (myself included). Read more about simple, compound and continuous growth.

These examples focus on smooth, continuous growth, not the “jumpy” growth that happens at yearly intervals. There are ways to convert between them, but we’ll save that for another article.

Example 1: Growing crystals

Suppose I have 300kg of magic crystals. They’re magic because they grow throughout the day: I watch a single crystal, and in the course of 24 hours it creates its own weight in crystals. (Those baby crystals start growing immediately as well, but I can’t track that). How much will I have after 10 days?

Well, since the crystals start growing immediately, we want continuous growth. Our rate is 100% every 24 hours, so after 10 days we get: 300 * e^(1 * 10) = 6.6 million kg of our magic gem.
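
A quick check of that arithmetic in Python (my own sanity check):

import math

crystals = 300 * math.exp(1.0 * 10)   # 300 kg, 100% per day, 10 days
print(f"{crystals:,.0f} kg")          # ~6,607,940 kg, i.e. about 6.6 million kg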

Example 2: Maximum interest rates

Suppose I have $120 in an account with 5% interest. My bank is generous and gives me the maximum possible compounding. How much will I have after 10 years?

Our rate is 5%, and we’re lucky enough to compound continuously. After 10 years, we get $120 * e^(.05 * 10) = $197.85. Of course, most banks aren’t nice enough to give you the best possible rate. The difference between your actual return and the continuous one is how much they don’t like you.
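
Same check for the account, again just a sketch:

import math

balance = 120 * math.exp(0.05 * 10)   # $120 at 5%, compounded continuously, 10 years
print(f"${balance:.2f}")              # $197.85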

Example 3: Radioactive decay

I have 10kg of a radioactive material, which appears to continuously decay at a rate of 100% per year. How much will I have after 3 years?

Zip? Zero? Nothing? Think again.

Decaying continuously at 100% per year is the trajectory we start off with. Yes, we do begin with 10kg and expect to “lose it all” by the end of the year, since we’re decaying at 10 kg/year.

We go a few months and get to 5kg. Half a year left? Nope! Now we’re losing at a rate of 5kg/year, so we have another full year from this moment!

We wait a few more months, and get to 2kg. And of course, now we’re decaying at a rate of 2kg/year, so we have a full year (from this moment). We get 1 kg, have a full year, get to .5 kg, have a full year — see the pattern?

As time goes on, we lose material, but our rate of decay slows down. This constantly changing growth is the essence of continuous growth & decay.

After 3 years, we’ll have 10 * e^(-1 * 3) = .498 kg. We use a negative exponent for decay: we want a fraction (1/e^(rt)) vs. a growth multiplier (e^(rt)). [Decay is commonly given in terms of “half life”, or non-continuous growth. We’ll talk about converting these rates in a future article.]
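
And the decay example, where the negative exponent does the work:

import math

remaining = 10 * math.exp(-1.0 * 3)   # 10 kg decaying continuously at 100%/year, 3 years
print(f"{remaining:.3f} kg")          # 0.498 kg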

More Examples

If you want fancier examples, try the Black-Scholes option formula (notice e used for exponential decay in value) or radioactive decay. The goal is to see e^rt in a formula and understand why it’s there: it’s modeling a type of growth or decay.

And now you know why it’s “e”, and not pi or some other number: e raised to “r*t” gives you the growth impact of rate r and time t.

I think I understand e a little better now. How about you?

XKCD

Because this says what happens to me in so many of my math classes:

He’s Rooting For The Machines

As I cram down information on Boolean functions (where 1 + 1 = 1, always), my textbook informs me that a fellow named Claude Shannon is the source of this current pain.
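
(A tiny illustration of that arithmetic, sketched in Python on my own: in Boolean algebra “+” acts like OR and “·” like AND, so 1 + 1 stays 1.)

def b_or(x, y):   # Boolean addition
    return 1 if (x or y) else 0

def b_and(x, y):  # Boolean multiplication
    return 1 if (x and y) else 0

print(b_or(1, 1))   # 1, not 2
print(b_and(1, 1))  # 1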

What a familiar name. I encountered Dr. Shannon in my introductory Discovery Informatics course, where he was identified as the father of Information Theory.

A seriously smart guy, and arguably a father of the internet, the modern telephone network, and any other communication system you care to think about.

But, as is the case most of the time with these smart math guys, there is a funny side. From the text:

Shannon had an unconventional side. He is credited with inventing the rocket-powered frisbee. He is also famous for riding a unicycle down the hallways of Bell Laboratories while juggling four balls. Shannon retired when he was 50 years old, publishing papers sporadically over the following ten years. In his later years he concentrated on some pet projects, such as building a motorized pogo stick. One interesting quote from Shannon, published in Omni Magazine in 1987, is “I visualize a time when we will be to robots what dogs are to humans. And I am rooting for the machines.”

Back to Boolean functions………as I try to juggle a few other things while riding my unicycle down the slippery road of life…..

Commutativity and Life

(Cross posted at the other site)

Sitting in a math class, and the professor announces that the next topic will be a brief study of matrices (matrix is the singular form). Then he asks for a show of hands from those who have NOT had any previous experience with the topic. Up goes my hand, relieved to see that mine is not the only uncluttered mind, but saddened that there are so few of us. Those emotions are replaced when the professor announces that he will ‘go slow’ so that we midgets can keep up with the crowd. Thanks.


As he takes us through the steps of ever-increasing arithmetic manipulation, the point is made that some properties of matrices are commutative while others are not. It is the non-commutative properties that are of interest, he observes. For those of you who share my level of understanding, note that an arithmetic operation is commutative if changing the order of the operands returns the same result: 3 * 2 = 6 and 2 * 3 = 6.
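
Here is a quick sketch of the matrix point (using NumPy, my choice, not the professor’s): addition commutes, multiplication generally does not.

import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

print(np.array_equal(A + B, B + A))  # True: matrix addition commutes
print(A @ B)                         # [[2 1], [4 3]]
print(B @ A)                         # [[3 4], [1 2]], a different matrix
print(np.array_equal(A @ B, B @ A))  # False: multiplication does not commute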

As the link above reports:

Records of the implicit use of the commutative property go back to ancient times. The Egyptians used the commutative property of multiplication to simplify computing products.[6][7] Euclid is known to have assumed the commutative property of multiplication in his book Elements.[8] Formal uses of the commutative property arose in the late 18th and early 19th century when mathematicians began to work on a theory of functions. Today the commutative property is a well known and basic property used in most branches of mathematics. Simple versions of the commutative property are usually taught in beginning mathematics courses.

But, predictably, there is a large portion of mathematics that is not commutative. I knew it was just too good to be true. As the professor observed, there are many, many examples in life where the order of a process is very important. As examples, he pointed out that opening the window and sticking your head out of the car window are operations where the order of things is critical.

Wikipedia expands on the idea:

Noncommutative operations in everyday life

  • Washing and drying your clothes resembles a noncommutative operation: if you dry first and then wash, you get a significantly different result than if you wash first and then dry.
  • The Rubik’s Cube is noncommutative. For example, twisting the front face clockwise, the top face clockwise and the front face counterclockwise (FUF’) does not yield the same result as twisting the front face clockwise, then counterclockwise and finally twisting the top clockwise (FF’U). The twists do not commute. This is studied in group theory.

I’m confused but more impressed than ever with the nature of our existence. How can an idea as powerful as mathematics embrace contradictory behavior? Why do we think that mathematics can explain the physical world when it is riddled with inconsistency? Could it be that the nature of our existence transcends the universe of mathematics?

Am I having a metaphysical moment?

David Gale

My favorite math professor has, over the course of two semesters, introduced his classes to many prominent mathematicians through brief stories about their lives. Each recounting reminded us of the importance of the scientist and the frailty of their existence. From mild psychosis to paranoia, from greed to altruism, from lives filled with joy and happiness to those wrecked by tragedy and sadness, we have learned that the perfection of mathematics springs from the imperfection of the human condition. The question is not why, but how.

So, it is to be expected that I am alert for translations to the highest sphere…..

Via the Wall Street Journal:

David Gale (1921 – 2008)

Mathematician Who Loved Games Helped Unknot a Pairing-Up Puzzle

This spring’s medical-school graduates have just completed the nerve-wracking “match day,” in which they rank the hospitals where they would like to do their residencies and bite their fingernails until they find out where they will be placed.

Few realize that the algorithm pairing them with a teaching hospital was developed by a game-loving University of California, Berkeley, mathematics professor, David Gale.


Enamored of recreations from sudoku to the roller derby, Mr. Gale, who died March 7 at age 86, was a game-theory specialist often mentioned alongside a onetime collaborator, Nobel laureate John Nash, as a giant in the field.

Mr. Gale’s best-known contribution came as a solution to the “stable-marriage problem,” the question of how best to pair up an equal number of men and women, each of whom has his or her own preferences for a mate.

In a 1962 paper written with University of California, Los Angeles, professor Lloyd Shapley, Mr. Gale proposed a multistage process beginning with each man asking his top choice whether she will have him. Women with multiple offers tell one of the suitors “maybe” and all the others “no.” The rejected men move on to make offers to other women. If a woman gets a new offer from someone she likes better, she gives him a “maybe” and tells the earlier “maybe” that he is now a “no.”

After many rounds, as the rejected men turn to women who didn’t get any offers at first, everyone has paired up. Then each woman turns to her man and says “yes.”

Although offered as an academic solution to a theoretical problem, Mr. Gale’s paper proved a remarkably fertile contribution to real-world cases of “two-sided matching” such as the medical-residency example, where hospitals are choosing students at the same time as students are choosing hospitals. A related algorithm is used by school systems in Boston and New York to allocate slots in high schools.

“David’s work will be remembered for generations to come,” says Alvin E. Roth, a professor of economics at Harvard. Mr. Roth helped design the school-choice systems and has lately been working to apply the theory to the allocation of scarce kidney donations.

Mr. Gale inspired headlines as recently as last year when he challenged studies reporting that men had more lifetime heterosexual partners than women, a situation he labeled a logical impossibility.

Mr. Gale studied math at Princeton, where he was a doctoral candidate alongside Mr. Nash. He was known for devising elegant puzzles and games. Among these was Chomp, in which players take cookies from a board until a final, poison cookie must be removed by the loser. Simple enough for a preschooler to master, the game turns out to have mathematical subtleties that have inspired dozens of academic papers.

At dinner time, says his daughter, Katharine Gale, “he would ask us if we all toasted, how many clinks would there be. He would write matrices all over napkins.”

Once, in a flash of inspiration, “he wrote all over an airplane ticket. The airline refused to honor it and he had to buy a new one,” she says.

“He thought math was beautiful, and he wanted people to understand that,” Ms. Gale says.

–Stephen Miller
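
For the curious, here is a minimal sketch of the deferred-acceptance procedure the article describes, written in Python with names and preferences invented purely for illustration: men propose in order of preference, women hold their best “maybe” so far, and the rounds repeat until everyone is matched.

def gale_shapley(men_prefs, women_prefs):
    # Deferred acceptance: men propose, women hold their best offer so far.
    free_men = list(men_prefs)                      # men with no "maybe" yet
    next_choice = {m: 0 for m in men_prefs}         # index of the next woman each man will ask
    engaged = {}                                    # woman -> man she is currently holding
    rank = {w: {m: i for i, m in enumerate(prefs)}  # each woman's ranking of the men
            for w, prefs in women_prefs.items()}

    while free_men:
        man = free_men.pop(0)
        woman = men_prefs[man][next_choice[man]]
        next_choice[man] += 1
        if woman not in engaged:                    # no offer yet: she says "maybe"
            engaged[woman] = man
        elif rank[woman][man] < rank[woman][engaged[woman]]:
            free_men.append(engaged[woman])         # her earlier "maybe" becomes a "no"
            engaged[woman] = man
        else:
            free_men.append(man)                    # rejected; he will ask his next choice
    return engaged

# Hypothetical preferences, purely for illustration.
men = {"al": ["ann", "bea"], "bob": ["ann", "bea"]}
women = {"ann": ["bob", "al"], "bea": ["al", "bob"]}
print(gale_shapley(men, women))                     # {'ann': 'bob', 'bea': 'al'}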

Thank you, Professor Gale.


