Fluxions and Dragons:

Both Newton and Leibniz are credited as the inventors of calculus. We know that Newton came first and that Leibniz published first. In Newton's biography "Never at Rest" by Westfall we read that Leibniz may have had brief access to Newton's work, because Newton's editor, angry at the master (Newton had a difficult character), could have shown the piece to Leibniz without Newton knowing it.

Who really knows? Was a brief peek at Newton's work enough for Leibniz? That sounds very Mozart-like! In any case, Leibniz's approach is quite different from Newton's. In style, scope, nomenclature... Quite a different beast.

History is so messy. Most of the fame goes to Newton. However, we all learn Leibniz's approach, in school and even in college. Maybe because Leibniz was able to popularize his method across Europe. Maybe because his approach looks less dirty than Newton's...

When I learned that the calculus I was taught was Leibniz's, I thought: "Weird, but who cares, it must be a notation thing. The important thing is to be able to calculate, and both methods lead to the same results. History is for historians; in math and physics it is the result we care about." How wrong I was!

I have been studying Newton's "The Method of Fluxions and Infinite Series", published after Newton's death. My edition includes the annotations by John Colson, the 5th Lucasian Professor (Newton was the 2nd). These annotations are extremely useful, since Newton's explanations are, understandably, very short. He did not understand how dumb the rest of us are.

I prefer to call this work "Fluxions and Dragons" because it makes extensive use of infinite series, which are like dragons to me. And I like that it sounds a bit like "Dungeons and Dragons". When I explore this work I really have the feeling of going through tunnels and dungeons full of very exotic dragons. By the way, dragons are always treated as monsters that need to be killed. Please stop killing dragons, even in fiction!

I am still only halfway through this study and have already found many fascinating things. In the first part you learn how to deal with series, how to manipulate them in astonishing ways. When you are used to closed forms, Leibniz style, such use of series feels very strange. A closed form is, for example, sin(x), which in series form you would write as

x - x^3/3! + x^5/5! - &c

where &c is how Newton wrote etc. (et cetera, literally "and the rest"). How inelegant! An infinite dragon that is so difficult to manipulate! I was taught to think that sin(x) was the true form, while the series was just an expansion to approximate the true function when needed.

But no, the series form is just as much a definition of the function, as valid as the closed form. And Newton's work shows how to think in terms of series. Sure, it is quite a dirty approach, but it gives you a very different kind of power. For example, for derivatives (fluxions in Newton's language; recall that the usual names and nomenclature are almost all Leibniz's) you can use both methods and obtain the same results, but while Leibniz gives you the Moses tables of differentiation, Newton gives you a much deeper understanding of how such miraculous transformations take place, and the rules are much simpler!

For integration, we still don't know what the Moses-Leibniz tables look like. Are they complete? How many commandments of integration are there? We don't know. However, Newton's method allows you to perform rapid integration of functions for which, with Leibniz, you would not even know how to begin. The elegance of closed forms also has a clear downside.

I feel, once again, frustrated by how we are taught in school and college. I had a few very good teachers, good in the sense that they meant well and put a lot of effort into doing their job right. The rest were simply awful. Especially in college, where I can safely say more than 99% were authentic frauds. But even the good ones were bad in the sense that they did not really know what they were teaching.

Because if a teacher really knew calculus, they would have talked about Newton's 1st method all the time. They would also have talked about Leibniz's method, of course. And don't forget that Newton came up with a 3rd method of doing calculus! A purely geometrical method, the one he used in his Principia. I know there would be no time to cover everything in detail, but a good teacher should talk about these approaches, about the different powers each of them offers, and invite us to study them at home.

In recent years I have been trying to learn from primary sources (not exclusively, of course) and I have found an immense stream of amazing things that are nowhere to be found in textbooks. I am developing a kind of "hate" for most textbooks, which sell you a more "modern" view so as to be more "accessible" and more refined, as if the most recent in science were the most desirable. This is so wrong. What about the jewels and wonders that are only to be found in the primary sources? And even worse: where is the spirit of the creator? As time passes I find textbooks ever more aseptic, sterile and superficial.

Allow me to give an example that I recently discovered. It is about calculating fluxions. We all know that there is a "rule" for that when you face a polynomial. And, since series are concatenated polynomial terms, we should not worry about other types of functions, since all of them can be expressed as series. So, if we face a term A x^n, its fluxion is simply A n x^(n-1). You can state the rule as "bring the exponent down so that it acts as a multiplying prefactor, and then lower the exponent by one". So simple!

But wait, we are implicitly assuming that we are taking the derivative with respect to x, which is not necessarily the case. Let's follow Newton and take the fluxion with respect to time, and later define time as whatever you want. A notation that is a Newton original and that we still use is the dot notation, which unfortunately cannot be written here in plain text. It consists of drawing a small dot on top of a variable to indicate its time derivative. We can place more than one dot, indicating further derivatives, always with respect to time.

In plain text we could write [x·] instead for a first derivative, [x··] for a second derivative, &c. So the fluxion of A x^n can be written as

A n x^(n-1) [x·]

So the rule for the fluxion can also be stated as "multiply by the exponent (n) and then multiply by [x·]/x", which has the same effect:

(A x^n)(n[x·]/x) = A n x^(n-1) [x·]
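
By the way, this is trivial to check with a modern computer algebra system. Here is a quick sympy snippet of mine (obviously nothing like this appears in Newton's text, and the variable name xdot stands for the dotted x that plain text cannot show):

    import sympy as sp

    # a quick modern check: the fluxion of A*x^n is A*n*x^(n-1)*xdot,
    # which is the same as multiplying A*x^n by n*xdot/x
    t, A, n = sp.symbols('t A n')
    x = sp.Function('x')(t)
    xdot = sp.diff(x, t)

    term    = A * x**n
    fluxion = sp.diff(term, t)        # the time derivative, Newton style
    rule    = term * n * xdot / x     # "multiply by the exponent and by xdot/x"

    print(sp.simplify(fluxion - rule))   # 0: both forms agree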

So far, nothing very fancy. Now fasten your seatbelt, because the wonderful thing is approaching. Let's separate the two factors we multiply by. On the one hand, we multiply by [x·]/x. Let's keep this. On the other hand, let's be more flexible with the number we multiply by. What if, instead of n, the exponent of the term, we multiply by some other number? Sounds weird, doesn't it? Consider this equation:

x^3-ax^2+axy-y^3=0

We want its fluxion (its time derivative). If we follow the rules learned at school, we get

3x^2[x·]-2ax[x·]+a[x·]y+ax[y·]-3y^2[y·]=0

which, by the way, needs the chain rule (guess who invented that!). Then we can group terms like

(3x^2-2ax+ay)[x·]=(3y^2-ax)[y·]

which gives

[x·]/[y·]=(3y^2-ax)/(3x^2-2ax+ay).
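
If you want the computer to double-check this (a convenience Newton did not have; the sympy sketch below is my own), treat x and y as functions of time and let it apply the school rule:

    import sympy as sp

    t, a = sp.symbols('t a')
    x, y = sp.Function('x')(t), sp.Function('y')(t)
    xdot, ydot = sp.diff(x, t), sp.diff(y, t)

    f = x**3 - a*x**2 + a*x*y - y**3

    flux_eq = sp.diff(f, t)                    # the school-rule fluxional equation
    ratio   = sp.solve(flux_eq, xdot)[0] / ydot
    print(sp.simplify(ratio))                  # (3*y**2 - a*x)/(3*x**2 - 2*a*x + a*y), written with x(t), y(t)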

The same result is obtained if you follow the rule of multiplying each term by [x·]/x or [y·]/y and also by the corresponding exponent of the term.

Rewrite the previous equation so that all x exponents are explicitly written:

x^3-ax^2+ayx^1-y^3x^0 = 0

Then we take the progression of exponents, 3,2,1,0,..., and multiply each term by [x·]/x and by its member of the progression. The Moses rule for a derivative is equivalent to assigning 3 to the x^3 term, 2 to the x^2 term, and so on.

Of course, you also need to do the same for y. First, we write the equation so that it looks like a progression of y powers:

-y^3+0y^2+axy^1+(x^3-ax^2)y^0 = 0

and then assume the progression 3,2,1,0,..., assigning 3 to the y^3 term, 2 to the y^2 term, and so on, this time multiplying each term by [y·]/y and by its member of the progression.

The last step is to add the two expressions and set the sum equal to 0.
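
The whole procedure is mechanical enough to fit in a few lines of code. Here is a small sympy sketch of mine (the function name newton_fluxion and the symbols xdot, ydot are my own stand-ins for the dotted letters). With the ordinary progressions it reproduces the school-rule equation above; we will feed it other progressions in a moment.

    import sympy as sp

    x, y, a, xdot, ydot = sp.symbols('x y a xdot ydot')

    def newton_fluxion(eq, flowing):
        """Progression rule: for each flowing quantity v with fluxion vdot and
        progression shift s, multiply every term of eq by
        (exponent of v in that term + s) * vdot / v, then add all contributions."""
        total = sp.S.Zero
        for v, vdot, s in flowing:
            for term in sp.Add.make_args(sp.expand(eq)):
                total += term * (sp.degree(term, gen=v) + s) * vdot / v
        return sp.expand(total)

    f = x**3 - a*x**2 + a*x*y - y**3

    # zero shift means the progression 3,2,1,0,... for x and for y: the school result
    print(newton_fluxion(f, [(x, xdot, 0), (y, ydot, 0)]))
    # -> 3*x**2*xdot - 2*a*x*xdot + a*y*xdot + a*x*ydot - 3*y**2*ydot (up to term ordering)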

The jewel comes now: what if we follow this exact same process but instead of assuming the progression 3,2,1,0,... we assume another one? For example, let's assume the progression 4,3,2,1,... and see what we get.

4x^3[x·]/x-3ax^2[x·]/x+2ayx^1[x·]/x-1y^3x^0[x·]/x

Notice how we always multiply by [x·]/x and how we multiply each term by a number according to our progression. We can simplify to

4x^2[x·]-3ax[x·]+2ay[x·]-y^3[x·]/x

Now it is time to process the y's. Should we assume the progression 4,3,2,1,... here? Not necessarily! Let's go really wild and assume 2,1,0,-1,-2,..., so that we get

-2y^3[y·]/y+0+0-1(x^3-ax^2)y^0[y·]/y

which can be polished to

-2y^2[y·]-x^3[y·]/y+ax^2[y·]/y.

Now we collect the two expressions, add them, and set the sum equal to 0, which, after some rearranging, gives

[y·]/[x·]=(4x^2-3ax+2ay-y^3/x)/(2y^2+x^3/y-ax^2/y)

This is a very different result! You may ask: what if we assume 2,1,0,... for x and 4,3,2,... for y? Then we obtain

[y·]/[x·]=(2x^2-ax+y^3/x)/(4y^2-2ax-x^3/y+ax^2/y)

What about the general case, where we assume the progression m+3,m+2,m+1,... for x and n+3,n+2,n+1,... for y? Notice that we recover the "classical" result for m=n=0. But in general we obtain the following ratio of fluxions:

[y·]/[x·] = A/B

where

A = ((m+3)x^2-(m+2)ax+(m+1)ay-my^3/x)
B = ((n+3)y^2-(n+1)ax-nx^3/y+nax^2/y)
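
If you do not trust my algebra (you should not, by default), here is a small sympy check of mine that derives this general ratio directly from the progression rule, keeping m and n symbolic:

    import sympy as sp

    x, y, a, m, n, xdot, ydot = sp.symbols('x y a m n xdot ydot')

    f = x**3 - a*x**2 + a*x*y - y**3

    # progression rule with symbolic shifts: exponent + m for x, exponent + n for y
    flux = sp.S.Zero
    for term in sp.Add.make_args(f):
        flux += term * (sp.degree(term, gen=x) + m) * xdot / x
        flux += term * (sp.degree(term, gen=y) + n) * ydot / y

    A = (m+3)*x**2 - (m+2)*a*x + (m+1)*a*y - m*y**3/x
    B = (n+3)*y**2 - (n+1)*a*x - n*x**3/y + n*a*x**2/y

    # the fluxional equation flux = 0 does give ydot/xdot = A/B
    print(sp.simplify(sp.solve(flux, ydot)[0] / xdot - A/B))   # 0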

We see how we can obtain as many Fluxional Equations as we please, just by choosing m and n! Of course, you may object: not all of them can be correct! Wrong! All of them are correct! As John Colson says (and I am making extensive use of his annotations), "this variety of Solutions will beget no ambiguity in the Conclusion, as possibly might have been suspected."

I will give no proof of this here. Do it yourself if you want, or check it with some numerical cases. The basic procedure is to solve for x or y from the initial equation and then substitute into the Fluxional Equation. You will see that the result does not depend on either m or n.
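
Here is one such numerical check done in sympy (my sketch, following exactly the recipe just described):

    import sympy as sp

    x, y, a = sp.symbols('x y a')

    f = x**3 - a*x**2 + a*x*y - y**3              # the original equation, = 0

    def ratio(m, n, point):
        """ydot/xdot from the general Fluxional Equation, for the shifts m and n."""
        A = (m+3)*x**2 - (m+2)*a*x + (m+1)*a*y - m*y**3/x
        B = (n+3)*y**2 - (n+1)*a*x - n*x**3/y + n*a*x**2/y
        return sp.simplify((A / B).subs(point))

    # pick a = 1 and x = 2, then solve the original equation for y (the real root is y = 2)
    pt = {a: 1, x: 2}
    pt[y] = [r for r in sp.solve(f.subs(pt), y) if r.is_real][0]

    # every choice of (m, n) gives the same ydot/xdot at this point of the curve
    for m, n in [(0, 0), (1, -1), (-1, 1), (5, 7)]:
        print(m, n, ratio(m, n, pt))              # always 1 at this particular point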

Now, you may think this is interesting but not useful, because with our college method we arrived so quickly at a valid solution. Why should we be interested in other solutions? Well, because PERHAPS in some cases the Fluxional Equation obtained for m=n=0 is not the most elegant or simple one! You don't believe it? Let's work through the following example:

2y^3+x^2y-2cyz+3yz^2-z^3 = 0

where there are three flowing quantities. If we proceed with the progressions given by the exponents themselves, we obtain

2xy[x·]+(6y^2+x^2-2cz+3z^2)[y·]+(6yz-3z^2-2cy)[z·] = 0

while if we choose the progressions so as to produce "the greatest destruction of terms", which means 2,1,0,... for x and for y (so the y^0 term gets a -1) and 3,2,1,... for z, we arrive at

2xy[x·]+(4y^2+z^3/y)[y·]+(6yz-3z^2-2cy)[z·] = 0

Just by choosing the most suitable progressions before obtaining the fluxional equation, we end up with two fewer terms!

Of course, you can say this is not a big deal, since in the first, longer expression you could recognize the combination -2cz+3z^2 from the original equation (there it equals (z^3-2y^3-x^2y)/y) and substitute it. But that route is less elegant, less deep, and not always evident as the equation becomes more complicated.
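
For the skeptical, here is a one-print sympy check of mine that the short equation and the long one really say the same thing on the surface: their difference is exactly (the fluxion of y divided by y) times the original equation, which vanishes there.

    import sympy as sp

    x, y, z, c, xdot, ydot, zdot = sp.symbols('x y z c xdot ydot zdot')

    f = 2*y**3 + x**2*y - 2*c*y*z + 3*y*z**2 - z**3

    long_eq  = 2*x*y*xdot + (6*y**2 + x**2 - 2*c*z + 3*z**2)*ydot + (6*y*z - 3*z**2 - 2*c*y)*zdot
    short_eq = 2*x*y*xdot + (4*y**2 + z**3/y)*ydot + (6*y*z - 3*z**2 - 2*c*y)*zdot

    # the two fluxional equations differ by (ydot/y)*f, which is zero whenever f = 0
    print(sp.simplify(long_eq - short_eq - f*ydot/y))   # 0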

Another example: take the equation

3x^3+xy-2xy^2 = 0

Notice how the usual method yields

[y·]/[x·]=(9x^2+y-2y^2)/(4xy-x)

Now practice the other method yourself, with a generic m for x and n for y, and see that we get

[x·](3(m+3)x^2+(m+1)(y-2y^2))=[y·]((n+2)2xy-(n+1)x-3nx^3/y)

Now, this is subjective, but given this expression I would choose m=-1 and n=0, so that we obtain

[y·]/[x·]=(6x^2)/(4xy-x)
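
A tiny numerical sanity check of mine: pick a point on the curve, for example (x, y) = (1, 3/2), and both ratios agree there.

    import sympy as sp

    x, y = sp.symbols('x y')

    classical = (9*x**2 + y - 2*y**2) / (4*x*y - x)   # ydot/xdot from the school rule
    short     = 6*x**2 / (4*x*y - x)                  # ydot/xdot from m = -1, n = 0

    # (x, y) = (1, 3/2) satisfies 3x^3 + xy - 2xy^2 = 0, and there both ratios coincide
    pt = {x: 1, y: sp.Rational(3, 2)}
    print(classical.subs(pt), short.subs(pt))         # 6/5 6/5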

It takes a little practice to choose the right indices before applying the fluxion rule, but the results are clearly worth the effort.

Again, you can see directly from the original equation (divide it by x to get 3x^2+y-2y^2=0) that 9x^2+y-2y^2=6x^2+3x^2+y-2y^2=6x^2. But if you downplay the importance of this method only because you can reach the same place by a less elegant route, then you are already sucking the life out of the spirit of calculus, and you may be ready to produce modern textbooks.

Newton's method deeply relates series, progressions of indices and fluxions. I find this astonishing. And Newton's text is full of ultra-clever tricks like this that I have not seen anywhere else. Many times I have taken time (or total) derivatives of expressions and probably carried too many terms along, simply because none of my teachers knew that differentiation has degrees of freedom you can choose in advance to optimize the simplicity of the result. This is what distinguishes the good from the master. And, as I am neither good nor master, I at least want to learn from the latter.

Today we have talked about Fluxions. Another day we will talk more about Dragons. But before that, I have to keep learning.