Pedagogically, the challenge of teaching linear algebra is that you start with "here's systems of linear equations, we can put them into matrices and now here's row operations to solve them," and you end up with "now matrices are actually representations of linear operators on vector spaces, let's analyze the properties of this specific operator." Usually, this is also coupled with a reluctance to actually discuss vector spaces, since the meat of it involves abstract algebra, which usually comes after linear algebra.
Failing to tackle this challenge appropriately can leave students confused about properties that seem arbitrary (trace and determinant are big offenders here), or baffled when a textbook brings something up only to never mention it again (null space is a common example). On top of this, there is also the multiple-notation problem (admittedly not as bad as calculus, where there are too many notations for the derivative) and the minor issue that many of the algorithms taught in the book aren't used in practice because of numerical stability issues.
It has been so long since I took linear algebra (and I've taken abstract algebra courses since then) that I can't really compare this book to the approach I learned from. Skimming the book, the thing that jumps out at me most is that LU factorization and determinants are shoved surprisingly late in the book [1], while eigenvalues are "previewed" quite early. I'm not sure that's a good approach: LU factorization is important because backsolving with the L and U factors is more numerically stable than applying the inverse matrix (and the factors stay sparser, when you're dealing with sparse matrices), and the factorization works even if your matrix isn't square. Furthermore, determinants tie in better with row operations, and their weird application in Cramer's rule gives another way to solve a set of linear equations: you don't want to introduce Cramer's rule months after you've finished treating matrices as stepping stones to solving linear equations.
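To make the numerical point concrete, here's a quick NumPy/SciPy sketch (my own illustration, not from the book) of why you factor once and backsolve instead of forming the inverse:

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    A = np.array([[4.0, 3.0],
                  [6.0, 3.0]])
    lu, piv = lu_factor(A)           # one O(n^3) factorization, reusable

    b1 = np.array([10.0, 12.0])
    b2 = np.array([1.0, 0.0])
    x1 = lu_solve((lu, piv), b1)     # each solve is just O(n^2) backsubstitution
    x2 = lu_solve((lu, piv), b2)

    # The "invert the matrix" route gives the same answer here, but it's
    # slower for repeated solves and less numerically stable in general
    assert np.allclose(x1, np.linalg.inv(A) @ b1)

The factorization is paid for once and then reused across as many right-hand sides as you like.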
The book does cover vector spaces, although it does a bit of a dance to avoid covering abstract algebra. I'm not sure it's an effective introduction to vector spaces, although it could well suffice to ease the pedagogical trap mentioned earlier. On the other hand, if it's going to dive that far into vector spaces, it would probably be helpful to have some more sections on matrices over fields other than the real numbers (e.g., the complex numbers (make sure to mention the conjugate transpose and Hermitian matrices!), the rational numbers, and finite fields).
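For instance, a complex-field section would presumably want to show something like this (my sketch; the book may not use NumPy at all):

    import numpy as np

    A = np.array([[2.0,    1 - 1j],
                  [1 + 1j, 3.0   ]])
    A_H = A.conj().T                 # conjugate transpose ("Hermitian transpose")
    print(np.allclose(A, A_H))       # True: A is Hermitian

    # The payoff: Hermitian matrices have real eigenvalues even though
    # their entries are complex, and eigvalsh exploits that structure
    print(np.linalg.eigvalsh(A))     # [1. 4.]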
[1] Strassen's algorithm for matrix multiplication is described before LU factorization, to give you an idea of how weird the ordering ends up being.
I went through EE and CS. In EE we started using matrices exactly how you describe: here’s a system of linear equations, here’s how you write it in matrix form, here’s how you invert it to solve the original system. Turn the crank, answer pops out. I had my trusty HP49G, and I could solve linear systems all day.
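For anyone who missed that era, the turn-the-crank workflow looks roughly like this in NumPy (my sketch, not the course material):

    import numpy as np

    # System:  2x + y = 5
    #           x - y = 1
    A = np.array([[2.0,  1.0],
                  [1.0, -1.0]])
    b = np.array([5.0, 1.0])

    x = np.linalg.inv(A) @ b    # invert, multiply, answer pops out: [2. 1.]
    # (np.linalg.solve(A, b) does the same job without forming the inverse)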
Then in CS I took a computer graphics course and it was rotation and translation matrices all day every day.
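Something like this, presumably (my own sketch of the standard homogeneous-coordinates trick, not the course's code):

    import numpy as np

    def rotate_translate(theta, tx, ty):
        # 3x3 homogeneous transform: rotate by theta, then translate by (tx, ty)
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, tx],
                         [s,  c, ty],
                         [0,  0,  1]])

    p = np.array([1.0, 0.0, 1.0])                    # the point (1, 0), homogeneous
    print(rotate_translate(np.pi / 2, 2, 3) @ p)     # ~[2. 4. 1.]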
Then there was a digital communications course where we touched on orthogonal basis functions, and some matrix voodoo related to that and how to get orthogonal vectors out of the mess.
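The "voodoo" for getting orthogonal vectors out of the mess was presumably Gram-Schmidt orthogonalization; a minimal sketch (mine, not the course's):

    import numpy as np

    def gram_schmidt(V):
        # Orthonormalize the columns of V (classical Gram-Schmidt)
        Q = np.zeros_like(V, dtype=float)
        for j in range(V.shape[1]):
            q = V[:, j].astype(float)
            for i in range(j):
                q -= (Q[:, i] @ V[:, j]) * Q[:, i]   # subtract earlier components
            Q[:, j] = q / np.linalg.norm(q)
        return Q

    V = np.array([[1.0, 1.0],
                  [0.0, 1.0],
                  [1.0, 0.0]])
    Q = gram_schmidt(V)
    print(np.allclose(Q.T @ Q, np.eye(2)))   # True: columns are orthonormal
    # In practice np.linalg.qr is the numerically safer way to do this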
And then finally I took the required CS linear algebra course offered by the math department, where we started from scratch. Here’s a vector (psh, I know vectors!), here’s a vector space (hmmm this is new), and building the rest of it up from there. I really wish that had come earlier on, but I was very very happy to finally have a bit of a theoretical understanding of how these tools I’d been using actually worked.
I feel like my university only taught calculation, not theory, when it came to linear algebra. It’s like the equivalent of a “12 hacks to rotate a matrix” article. The theoretical books I find, however, give no explanation for the definitions, i.e., WHY the dot/cross products are defined the way they are. It’s as though they feel matrices are natural phenomena whose properties you should just memorize, which is also nonsense.
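To give one example of the kind of motivation I mean (my own illustration): the componentwise dot product is defined that way precisely so that it measures alignment, a · b = |a||b|cos(θ):

    import numpy as np

    a = np.array([3.0, 0.0])
    b = np.array([1.0, 1.0])

    dot = a @ b                     # algebraic definition: 3*1 + 0*1 = 3
    cos_theta = dot / (np.linalg.norm(a) * np.linalg.norm(b))
    print(np.degrees(np.arccos(cos_theta)))   # 45.0, the angle between a and b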
The entire field is defined by such terrible books. I’d love to be wrong, though, if somebody has a recommendation.
> Pedagogically, the challenge of teaching linear algebra is that you start with "here's systems of linear equations, we can put them into matrices and now here's row operations to solve them," and you end up with "now matrices are actually representations of linear operators on vector spaces, let's analyze the properties of this specific operator." Usually, this is also coupled with a reluctance to actually discuss vector spaces, since the meat of it involves abstract algebra, which usually comes after linear algebra.
There are many, many linear algebra books. There is not just one approach; it all depends on what audience the author, or author team, is writing to.
Indeed, trace and determinant were never properly introduced to me as an undergrad. Later in life, I got some intuition from Trefethen and Bau, but linear algebra remains alchemy to this bad math student.
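One hook that helped me (standard facts, my own demo): trace and determinant stop looking random once you see them as the sum and product of the eigenvalues, which is what makes them basis-independent:

    import numpy as np

    A = np.random.default_rng(0).normal(size=(4, 4))
    eig = np.linalg.eigvals(A)

    print(np.allclose(np.trace(A), eig.sum()))        # True: trace = sum
    print(np.allclose(np.linalg.det(A), eig.prod()))  # True: det = product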
Along these lines, my stats professor recommended a really nice book that offers a case studies based approach for grad level stats: http://www.statisticalsleuth.com/
I've been going through it by implementing the solutions in jupyter notebooks. They have the datasets and code in R so it's easy to work with and work out the solutions.
I have no clue how this has 52 votes and no comments. How am I supposed to know this is a good book? I'll highlight the goals of the book, since they explain more about the title.
> We place an emphasis on active learning and on developing students’ intuition through their investigation of examples. For us, active learning involves students – they are DOING something instead of being passive learners.
I found this goal the most interesting.
> To help students understand that mathematics is not done as it is often presented. We expect students to experiment through examples, make conjectures, and then refine their conjectures. We believe it is important for students to learn that definitions and theorems don’t pop up completely formed in the minds of most mathematicians, but are the result of much thought and work.
I upvoted with the intent to review it later. I personally found that after two semesters of linear algebra and a BS in Mathematics I didn’t know jack about linear algebra. I came to the conclusion that I should’ve studied physics or engineering if I’d wanted to actually learn how to use it!
I've been thinking about something along similar lines for a while and was wondering if there were any math/physics/science textbooks that not only took a more historical approach to teaching their subject, but also had a more "problem-based" motivation.
In other words, I'm looking for textbooks that start from the problems scientists were trying to solve when they came up with their discoveries, instead of presenting the theory ex nihilo and then reverse-engineering a purpose for that theory (which, to my recollection, is how 99% of textbooks work).