Today I had to go invigilate a MATH 277 final as part of my TA requirements (we each have to invigilate/proctor two final exams; sometimes we get ones we've actually been TAs for and sometimes we don't. This was a case of the latter). It turns out that MATH 277 is the University of Calgary's version of MATH 275, or multivariate calculus. The test involved about 20 questions.
Our job as TAs, apart from making everybody sign in on the little attendance sheet, was mainly to just walk around in order to discourage cheating and to help anybody out who raised their hand.
So let me just quickly set the scene for you: a large gym full of 250+ students, a 2-hour exam, and lots and lots of calculus.
I bet you can guess what I was thinking about.
I was thinking about Leibniz!
I was wondering, as I walked down the aisles of seats, watching students write the elongated “s” for integration and the dx/dy (or variations of that) for differentiation, what Leibniz would think if he saw a roomful of people, in 2016, still using some of his original symbols. Like, how ridiculous is that? Calculus has been studied, expanded upon, and extended to a ton of different fields/uses since it was first developed, but we’re still using some of Leibniz’ original symbols.
And what would he think about calculus being taught as basically standard curriculum at universities? What would he think about the tons of different uses of calculus today?
I know I kind of talked about this in a previous post, but I actually think about this quite a bit. Especially today.
Yay calculus! Yay Leibniz!
Today we learned how to use complex analysis to solve real-valued integrals that would otherwise be very difficult to solve.
No complex variables in sight in that integral, right (assuming x is real-valued, haha)? Well, you can CONVERT IT TO A COMPLEX-VALUED INTEGRAL AND GET AN EASIER-TO-SOLVE PROBLEM!
That freaking blew my mind this morning in class. I’d go through the details of how to do this, but I’m a lazyass and don’t want to use Word’s equation editor to make like 30 different equations showing the steps to solve. Instead, I’ll link to Dr. Datta’s notes from class. Go to page 10 in the PDF (the page labeled “161”) for this example.
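The example from class lives in those notes; as a stand-in, here's the standard textbook instance of the trick, with a crude numeric check (a sketch of the technique, not the course's example):

```python
import math

# A standard textbook example of the trick (not necessarily the one from
# class): the integral of 1/(1 + x^2) over the whole real line.
# Extend the integrand to 1/(1 + z^2), close the contour with a big
# semicircle in the upper half-plane, and pick up the residue at z = i:
#   residue = 1/(2i), so the integral = 2*pi*i * (1/(2i)) = pi.

def real_integral(f, a, b, n=200_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

approx = real_integral(lambda x: 1 / (1 + x * x), -1000, 1000)
print(approx, math.pi)  # close; the small gap comes from truncating the range
```

The residue calculation hands you pi exactly, in one line, while the real-variable route needs the arctangent antiderivative (and nastier integrands have no such luck).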
Side note: if any of you ever end up going back to UI or know anyone who will be taking some upper-division math classes there, I highly recommend Dr. Datta. She’s very clear at explaining things, good at giving examples, gives reasonable homework, and is always willing to help.
Today we had our second test in Complex Variables. The test involved figuring out quite a few Maclaurin series for functions involving i. I’ve been ridiculously busy and thus haven’t had a chance to check out the dude behind the Maclaurin series…until now!
So. A Maclaurin series is simply a Taylor series centered at zero. According to the almighty Wikipedia, this type of series is named after Colin Maclaurin, a Scottish mathematician who lived from 1698 to 1746 (so during the Leibniz/Newton era and a little after). The reason centered-at-zero Taylor series are named after him is that he used them extensively in his Treatise of Fluxions when describing and characterizing points of inflection, minima, and maxima of smooth functions.
This dude was super smart. He entered college at 11 YEARS OLD and got a master's degree three years later. He became a professor at age 19 and got a personal recommendation from Newton to be appointed deputy to James Gregory, the mathematics professor at Edinburgh; when he later succeeded to Gregory's position, Newton was so impressed that he offered to pay his salary himself.
He also had a crapton of children (well, 7 children, which I guess probably wasn’t a crapton back then) and died of complications from edema.
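Since a Maclaurin series is just a Taylor series centered at zero, the definition is easy to sanity-check in code: for e^x the series is the sum of x^k / k!, and a few terms already reproduce e to machine precision (a minimal sketch):

```python
import math

# Partial sum of the Maclaurin (Taylor-at-zero) series for e^x:
#   e^x = 1 + x + x^2/2! + x^3/3! + ...
# Twenty terms nails e = e^1 to machine precision.

def maclaurin_exp(x, terms=20):
    """Sum of the first `terms` terms of the Maclaurin series for e^x."""
    return sum(x**k / math.factorial(k) for k in range(terms))

print(maclaurin_exp(1.0), math.e)  # essentially identical
```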
Alternate title: Claudia Makes Things Way More Complicated than They Need to Be Because She Sucks
We had this bonus question on our homework for Probability today:
Suppose X has a density defined by
Let FX(x) be the cumulative distribution of X. Find the area of the region bounded by the x-axis, the y-axis, the line y = 1, and the curve y = FX(x).
And I was like, “Aw, sweet! Areas of regions! CALCULUS!”
So first, I had to find the cumulative distribution function (cdf) of X. Easy. It’s just the integral of the density fX(x) from negative infinity to a constant b. In this case:
With 2 ≤ b ≤ 3. So that’s my curve y. The area I’m looking for, therefore, is this (the red part, not the purplish part):
Now anyone with half a brain would look at this and go, “oh yeah, that’s easy. I can find the area of the rectangle formed by the two axes, the line y = 1, and the line x = 3, then find the area of the region below the curve from 2 to 3, and subtract the latter from the former to get the correct area.”
Which works. Area of rectangle = 3, area of region below FX(x) = 0.25, area of region of interest = 2.75.
Or they could remember the freaking formula that was explicitly taught last week. Such areas can be calculated using:
But did I see either of those? Nooooooope.
I looked at the graph and was like, “how the hell do you find that?” I tried a few things that didn’t work, then realized that it would be a lot easier to figure out if I changed the integral from being in terms of x (or b, rather) to being in terms of y.
So then I just had to integrate. This gave me the right answer: 2.75!
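The density itself I'll have to assume: a stand-in consistent with every number quoted here (area under the CDF from 2 to 3 equal to 0.25, final answer 2.75) is f(x) = 3(x − 2)² on [2, 3]. A quick numeric check of both routes under that assumption:

```python
# Stand-in density (my assumption, chosen to match the numbers in the post):
# f(x) = 3(x - 2)^2 on [2, 3], which gives F_X(x) = (x - 2)^3 on [2, 3].

def F(x):
    """CDF for the stand-in density: 0 below 2, (x-2)^3 on [2, 3], 1 above 3."""
    if x < 2:
        return 0.0
    if x > 3:
        return 1.0
    return (x - 2) ** 3

def integrate(f, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

# Route 1: rectangle (area 3) minus the area under the CDF from 2 to 3.
rectangle_route = 3.0 - integrate(F, 2, 3)

# Route 2: integrate (1 - F) directly from 0 to 3, which for a
# nonnegative random variable is just E[X].
direct_route = integrate(lambda x: 1.0 - F(x), 0, 3)

print(rectangle_route, direct_route)  # both close to 2.75
```

Route 2 is (presumably) the formula from class: for a nonnegative random variable, the area between the CDF and the line y = 1 is exactly E[X].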
Moral of the story: don’t complicate things. But if you do complicate things, you might actually end up in a scenario where you’ll use something that you were taught back in calculus I but didn’t ever suspect you’d actually use. I had appreciated learning the handy-dandy technique of changing variables, but I didn’t think I’d be in a situation where I’d apply it. Shows what I know, eh?
It was a nice refresher, at least. I’ve missed calculus.
Calc III is over. :(
It’s a sad day! It was one of my favorite math classes.
I just seriously hope that the answers to the 10-question final were 4, 1, 4, 6, 6, 1, 6, 55/23, 4, and 1, because that’s what my answers were. We’ll see. I gave myself a 60-point leeway to get an A with my homework and midterm scores and I don’t THINK I made 60 points worth of mistakes, but who knows. I’m fantastic at screwing up. I missed one point on the midterm because I completely abandoned a negative sign like two steps into a cross product. FAIL!
Also, migraines suck.
I’M DOING IT AGAIN!
Today we learned about Green’s Theorem. So who is this Green fellow?
[Edit: Okay, originally I was just going to talk about Green ‘cause while Green’s Theorem is just a special case of Stokes’ Theorem, we haven’t learned the latter yet. But turns out both Green and Stokes are named George and that’s hilarious, so we must press on and speak of both.]
So who are these two fellows?
George Green lived from 1793 to 1841. Like what seems to be a large proportion of mathematicians at the time, he was British. He lived in Nottingham. There are two reasons why these are interesting facts:
1. Nottingham, at the time, wasn't really burning it up intellectually. Only about 25–50% of children received any sort of education, and Green himself attended an academy for only one year, when he was 8. It took him until age 36 to gather enough money (and free time) to afford a higher education (and he died at 47, so unfortunately he didn't have much time to enjoy it).
2. Despite the setbacks in his formal education, Green was a very smart dude. He was largely self-taught (obviously), and once he finally got to Cambridge he pretty much kicked ass. What's most interesting, though, is his math. Historians aren't exactly sure how Green reached the understanding of calculus necessary for developing his theorem. He likely used "Mathematical Analysis," the form of calculus Leibniz developed, but this was just after the Newton–Leibniz controversy over calculus, and England had pretty much forbidden the use of everything calculus-related that originated with Leibniz. 'Cause they had Newton's calculus. Never mind that Newton's notation was inferior and didn't lend itself to the applications and developments that Leibniz' notation did, and that banning the better notation and methods set England back in mathematical advancement for like a century.* But somehow Green got a hold of it, made his improvements, came up with his theorem, and was generally awesome. (LEIBNIZ POWER! Okay, I'm done.)
And what about George Stokes? Who was he? Stokes' life overlapped the end of Green's (he lived from 1819 to 1903). Stokes was Irish and rocked the fields of fluid dynamics, optics, and mathematical physics. He did quite a variety of things, so I'm just going to list a few.
- He came up with a way to calculate the terminal velocity of a sphere falling in a viscous fluid (Stokes’ Law!)
- He expressed a mathematical description of rainbows using a divergent series, something most people didn't yet really understand.
- He was secretary and then president of the Royal Society.
- He wrote a paper in which he tried to describe the variation in gravity across the earth’s surface.
- And, of course, Stokes’ Theorem in math.
Sorry, I’m going to keep doing these little mathematician snippets until…well, until I feel like stopping. So ha.
*I’m not bitter.
I dig my calc III teacher. He’s awesome. But I wish he’d do what I wish all math teachers would do when they introduce a theorem or lemma or rule: tell us a little bit about the person responsible for it, especially if the theorem/lemma/rule is named after the dude.
Like today we talked a lot about Fubini’s Theorem. We used it in like three examples. I used it on the homework I did right after class.
All the while without knowing who the heck this Fubini guy was.
So I checked him out this afternoon. Guido Fubini was an Italian mathematician who lived from 1879 to 1943. He was pretty into geometry and calculus for most of his life and moved around in different professorships in Europe before accepting an invitation to teach at Princeton in 1939 (partially to get away from the Nazis; he was Jewish).
So what the heck is this theorem, anyway?
Well. Let’s just look at rectangular domains first (because that’s all we’ve learned so far, haha…we’re doing non-rectangular domains tomorrow). So let’s look at a pretty double integral to start.
(P.S. I’m loving this chapter on double integrals already, simply because it means I have to write more integral signs. I FREAKING LOVE THAT SYMBOL.)
Say some rectangular region R is defined by the intervals [a,b] x [c,d]. If a function of two variables z = f(x,y) is continuous over R, then we can write the volume of the solid that lies below the surface z = f(x,y) and above the rectangle R as:
Cool? Cool. So what does Fubini’s Theorem state? Again, assuming z = f(x,y) is continuous over R and R is a rectangular region, Fubini’s Theorem allows us to switch the order of integration while still getting the same correct result at the end:
Which is pretty snazzy (there’s a few other statements in the theorem; I just chose this conclusion as the example to show here).
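The order swap is easy to see numerically. A tiny sketch with a made-up integrand f(x, y) = xy² on the rectangle [0, 2] x [0, 3], where the exact value is 2 · 9 = 18 whichever order you integrate in:

```python
# Numeric illustration of Fubini's order swap on a rectangle, using a
# made-up integrand f(x, y) = x * y^2 on [0, 2] x [0, 3].
# Exact value: (integral of x from 0 to 2) * (integral of y^2 from 0 to 3)
#            = 2 * 9 = 18, in either order of integration.

def integrate(f, a, b, n=500):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

f = lambda x, y: x * y * y

# Inner integral in y, outer in x:
dy_dx = integrate(lambda x: integrate(lambda y: f(x, y), 0, 3), 0, 2)
# Inner integral in x, outer in y:
dx_dy = integrate(lambda y: integrate(lambda x: f(x, y), 0, 2), 0, 3)

print(dy_dx, dx_dy)  # both close to 18
```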
But what I found most interesting about this theorem is that while double integration has been around for quite a long time, the theorem itself was only proved during Fubini's lifetime, in the late 1800s or early 1900s. (I can't find an exact date for it, but that's mainly because my internet's deciding to be a bitch right now.) Which makes sense, I guess, considering there exist cases where it doesn't hold, so it may not have been an "obvious" thing or easily provable…but still. Interesting passage of time before we got to this theorem.
GUYSGUYSGUYSGUYSGUYS I’m hyper.
You know what I would love to do (even though it would screw history and the rest of existence up because that’s how these things work)? I would love to take a graphing calculator and a calculus textbook, go back in time, and show them to Leibniz.
“Look!” I’d say. “See this little itty bitty machine? Look at all this nonsense it can do! Not only can I add, subtract, multiply, and divide in a fraction of a second, I can also find square roots, sines, cosines, and tangents, and GRAPH FUNCTIONS! This is your Step Reckoner on steroids. YOU helped pioneer this! EVERYBODY uses these now.
“And look at this textbook. This is what we use to teach calculus to people today. Let me show you some of these symbols. See what we’re using? dy/dx! And the elongated S! We’re still using YOUR symbols because they remain the clearest, easiest, most adaptable ones for this branch of math. AND THIS COMING FROM THE FAR-OFF YEAR OF 2013!!”
Ignoring the whole “somebody just time-traveled!” aspect, I think the calculator would really be the thing that would blow his mind. I mean, the Step Reckoner was massive and it just did the four basic operations. Plus, you know, the fact that the calculator now has this crazy-ass digital display thing. I’d totally help him take it apart and do the best I could explaining what the components were.
Of course I’d probably end up having to lean in reeeeeeeally close to him to do that.
Imagine a creation story where the Cosmos gives us two brother gods: Integration and Differentiation. They are responsible for two components of the Universe.
Integration—”The Great Summer”—is in charge of unity and space (well, area, but let’s just go with space). He wields integral symbols as weapons and lives in the sky.
Differentiation—”The Great Changer”—is in charge of division and, of course, change. He’s able to take the smallest components of the universe (hence the “division” aspect) and create a degree of change in it*. He has armor made out of barbs tangent to his skin and lives in the earth.
Something to draw, maybe…?
*Yes, I know taking the derivative of a function does not cause the change measured. Just work with me here.
We’ve started Taylor series in calc this week. Which is cool; I think I’m understanding what they are/how they work/why they’re important. But one thing I don’t know is who this Taylor fellow is.*
TO THE WIKIPEDIA-MOBILE!
So it looks like Brook Taylor was an early 18th-century British mathematician. He did some work on the then newly described calculus, doing work that Lagrange popularized in the late 1700s and, of course, coming up with Taylor's Theorem and Taylor series.
Here’s a cool demonstration of Taylor series approximation for various trig functions.
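That kind of demonstration is easy to sketch yourself: take partial sums of the Maclaurin series for sin x and watch them converge as terms get added (a minimal stdlib version):

```python
import math

# Partial sums of the Maclaurin series for sin x:
#   sin x = x - x^3/3! + x^5/5! - x^7/7! + ...
# More terms -> a better approximation (shown here at x = 1).

def sin_maclaurin(x, terms):
    """Sum of the first `terms` nonzero terms of the series for sin x."""
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(terms))

for terms in (1, 2, 3, 8):
    print(terms, sin_maclaurin(1.0, terms))
print(math.sin(1.0))  # the 8-term sum already matches this closely
```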
Edit: OH GODS HE STUDIED UNDER KEILL. Why, Taylor, whyyyyy?!
Edit 2: he was on the Royal Society committee set to hear Newton’s claims against Leibniz, too! Whaaaaat.
*I wish more teachers gave at least brief little intros on the people who come up with all this cool stuff. Especially when we’re learning about something expressly NAMED after someone.
I like to read about mathematicians as much as I like to read about math itself. I think the people and history behind math are just as important as the math itself. I’m sure a lot of people would debate me on that point, but I think math—the tool we use to understand the universe—can itself be understood so much more when given some context.
Heck, sometimes the simplest things can help give rise to phenomenal mathematical advancements.
Take calculus (surprise, surprise). Kepler, chilling out in the early 1600s before either Leibniz or Newton existed on the planet, was angered by a wine merchant whose methods for measuring the volume of a wine barrel were less than accurate. So he started thinking, "Hey, how do you go about calculating the volume of a weird shape like a wine barrel, anyway?" And thus Nova stereometria doliorum vinariorum, or New Solid Geometry of Wine Barrels, was born. He also started down the track of differentiation by wondering how one would create a wine barrel whose dimensions maximized the amount of wine it could hold.
I know that’s a small example, but I think just knowing that itty bitty bit of calculus history “anchors” that bit of math in time and space. At least more so than saying “and then at one point some dudes came up with integration.”
Which is usually how it’s taught (that or, “here’s how you do integration with no context whatsoever!”).
Haha, sorry. THIS IS WHY I WANTED TO TAKE HISTORY OF MATH. I love seeing how all these different aspects of history and people and theories and everything connect. It just makes everything make so much more sense.
So we’re doing trigonometric integrals in calculus and one of our homework problems is this little dude:
We rushed through this section of the chapter this morning ’cause we’re behind schedule and I’m a little shaky on them (also I’m dumb), so I went to the calc room in Polya to get some help. I showed one of the tutors in there this integral.
I told him how I thought we should start: since (1 − cos 2x) is the numerator of the half-angle formula for sin²x, we could multiply both sides of the half-angle formula by 2 to change (1 − cos 2x) into 2sin²x and then go from there.
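Spelled out, that rearrangement of the half-angle identity is just:

```latex
\sin^2 x = \frac{1 - \cos 2x}{2}
\quad\Longrightarrow\quad
1 - \cos 2x = 2\sin^2 x
```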
He said he’d never even thought about solving it like that, but when I asked him what the “normal” method would be for this integral, he didn’t know.
So is there another way of solving this?
One of our homework problems was to solve one of the examples that L'Hopital used in his original calculus textbook.
It took me a bit to figure it out ’cause I’m dumb, but after I finished it I actually wanted to find the original example in the original work. Surprisingly, it didn’t take too long to find. INTERNET POWERZ!
L’Hopital was French. His textbook is, therefore, obviously in French. Curse my unilingualism! I understand the numbers, though, haha.
How cool is it that we got to do this example? Seriously. I really, really hope they offer History of Mathematics class next fall.
Haha. Again, apologies for all the math-related blogs lately. But in all honesty, my life is about 60% math, 15% computer science, and 25% teaching statistics right now. A portion of one of those percentages will yield to some writing in a bit, but for now this is how it is.
If I’m annoying you, just ignore me for the rest of the semester.
What right do you have to be so damn awesome?
L’Hopital’s rule* just made my day. It is the COOLEST FREAKING THING, man!
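For anyone who hasn't met it: when f and g both tend to 0 (or both to ±∞) and the limit of f′/g′ exists, the rule says the limit of f/g equals it. The classic 0/0 case:

```latex
\lim_{x \to 0} \frac{\sin x}{x}
= \lim_{x \to 0} \frac{\cos x}{1}
= 1
```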
All of my readers who have had more advanced math are probably thinking “holy freaking crap, Claudia, shut UP with this fascination with all these things everybody else already knows,” to which I say, “NEVER! This stuff is beautiful and powerful and wondrous and gives me tinglies and should give you tinglies as well because IT ALL WORKS TOGETHER AND IT’S MIND-BLOWING HOW THE UNIVERSE FUNCTIONS SO SMOOTHLY WHEN THERE’S SO DAMN MUCH OF IT!”
Also, how in the hell can anyone fall asleep in Discrete Math? Multinomial Theorem = one sexy mofo. But I still suck at permutations/combinations. You’d think with all the stats stuff that such things would be somewhat intuitive to me now, but no.
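Since the Multinomial Theorem came up: its coefficients are just n! divided by the factorials of the group sizes, which is easy to compute directly (a small sketch; `multinomial` here is my own hypothetical helper, not a standard library function):

```python
import math

# Multinomial coefficient: n! / (k1! * k2! * ... * km!), where the k's
# are the group sizes and n is their sum. These are the coefficients in
# the Multinomial Theorem, and they also count arrangements of n items
# with repeats.

def multinomial(*counts):
    n = sum(counts)
    result = math.factorial(n)
    for k in counts:
        result //= math.factorial(k)  # exact: each division comes out even
    return result

# Distinct arrangements of the letters of MISSISSIPPI
# (11 letters: M x 1, I x 4, S x 4, P x 2):
print(multinomial(1, 4, 4, 2))  # 34650
```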
Okay, enough blogging. Gotta get back to CALC!
*Actually, the rule was most likely developed by Johann Bernoulli; he had tutored L’Hopital and L’Hopital published the rule in his own textbook in 1696 under his own name (though he noted his debt to Bernoulli in the preface). This ticked Bernoulli off and there are letters he sent to Leibniz in which he complained about L’Hopital publishing the rule without proper acknowledgement. Sigh. Calculus, man.
Edit: woah, L’Hopital died on my birthday in 1704 and Bernoulli died on my grandpa’s birthday in 1748. Freaky.
Unintentional hilarity: forums arguing over “calculus” versus “the calculus.”
“I would use the calculus to help with my diabeetus, plain old calculus for other purposes. Seriously, ‘the calculus’ reminds me of Wilford Brimley.”
“The Batman calls it the calculus.”
“I call them Fluxions, I’m old school.”
Haha, that’s all for today. I’m busy.
This gentleman is my new favorite living human being.
I’d add that linear algebra is an important middle step as well. A lot of stuff that I really enjoy in the field of statistics is stuff I wouldn’t understand nearly as well had I not taken linear algebra.
Basic statistics (like the stuff I’m teaching) –> Linear algebra –> more advanced statistics (FA, PCA, SEM) –> calculus (or taught concurrently with the previous) –> mathematical statistics
In my personal experience, I was able to get to SEM-level without calculus. I took calculus, but I never really used it in the context of stats.
But now that I’m taking it again, even at the basic level of 170, I’m seeing how this will apply to statistics (especially mathematical stats). And that’s super exciting.
So I don’t think this idea of “stats before calc” discounts the importance of calculus. Rather, I think it focuses on this idea of “practical versus theoretical” understanding. Statistics, especially very basic statistics, is something I think everyone should know. It’s practical, it’s applicable in every field. Calculus gives you a stronger understanding of WHY it’s so practical and applicable (at least in my opinion).
So yeah. Dr. Benjamin was also on the Colbert Report some time ago. I’ll have to find that vid.
Haha, speaking of the Report, I’m going to go watch the Maurice Sendak interviews again.
Got my “Newton v. Leibniz” paper work-shopped today and my teacher said it sounded like something out of The New Yorker. So that was pretty cool.
I’ll post it here once I edit it a little more. There are still a few parts I’m not happy with.
If there’s anyone else out there who really digs the history of science/philosophy of science/science in general, they might want to check out the works of Carl Djerassi. Dr. Djerassi, an emeritus professor at Stanford, writes “science-in-fiction.” This, he says, is different than science fiction but also different than biography, as it illustrates scientific history via the human, personal sides of some of the most prominent scientists and scientific events that we’ve seen. In addition to fiction, he also writes poetry, memoir, and plays. I recommend “Calculus” because…well, obvious reasons.
Anyway, check out some of his work if this sounds interesting to you. I just spent like two hours reading his stuff and researching him. Very cool dude.