

I think a real problem in this area is the belief that there is “one true notation” and that everything is unambiguous and clearly defined. Yes, conventions have emerged, and people tend to use the same sort of notation in a given context, but in the main, the notation should be regarded as an aide-mémoire, something to guide you.
You say that you’re struggling because of “the math notations and zero explanation of it in the context.” Can you give us some examples? Maybe getting a start on it with a careful discussion of a few examples will unblock the difficulty you’re having.




> I think a real problem in this area is the belief that there is “one true notation” and that everything is unambiguous and clearly defined.
One main cause for this belief is that in programming there is one true notation (or rather, a separate one for each language) that is unambiguous and clearly defined.
I dislike maths notation as I find it lacks rigour.




Came here to say the same thing harshly and laced with profanity. I guess I can back off a bit from that now.
I was filled with crushing disappointment when I learned mathematical notation is “shorthand” and there isn’t a formal grammar. Same goes for learning that writers take “shortcuts” with the expectation the reader will “fill in the gaps”. Ostensibly this is so the writer can do “less writing” and the reader can do “less reading”.
There’s so much “pure” and “universal” about math, but the humans who write about it are too lazy to write about it in a rigorous manner.
I can’t write software w/ the expectation the computer “just knows” or that it will “fill in the gaps”. Sure, I can call libraries, write in a higher-level language to let the compiler make machine language for me, etc. I can inspect and understand the underlying implementations if I want to, though. Nothing relies on the machine “just knowing”.
It feels like the same goddamn laziness that plagues every other human endeavor outside of programming. People can’t be bothered to be exact about things because being exact is hard and people avoid hard work.
“We’ll have a face-to-face to discuss this; there’s too much here to put in an email.”




You seem to be complaining that math isn’t programming, that it’s something different, and you’ve discovered that you don’t like how mathematicians do math.
Math notation is the way it is because it’s what mathematicians have found useful for the purpose of doing and communicating math. If you are upset and disappointed that that’s how it is then there’s not a lot we can do about it. If there was a better way of doing it, people would be jumping on it. If a different way of doing it would let you achieve more, people would be doing it.
It’s not laziness, and I think you very much have got the wrong idea of how it works, why it works, and why it is as it is. Your anger comes across very clearly, and I’m saddened that your experience has left you feeling that way.
Maths is very much about communicating what the results are and why they are true, then giving enough guidance to let someone else work through the details should they choose. Simply giving someone absolutely all the details is not really communicating why something is true.
I’m not good at this, but let me try an analogy. A computer doesn’t have to understand why a program gives the result it does, it just has to have the exact algorithm to execute. On the other hand, if I want you to understand why when n is an integer greater than 1, { n divides (n-1)!+1 } if and only if { n is prime } then I can sketch the idea and let you work through it. Giving you all and every step of a proof using Peano axioms isn’t going to help you understand.
Similarly, I can express in one of the computer proof assistants the proof that when p is an odd prime, { x^2 = -1 has a solution mod p } if and only if { p=4k+1 for some k }, but that doesn’t give a sense of why it’s true. But I can sketch a reason why it works, and you can then work out the details, and in that way I’m letting you develop a sense of why it works that way.
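For the curious, both statements are easy to brute-force check for small numbers, which shows that they hold without conveying why, which is exactly the distinction I’m drawing. A throwaway Python sketch (entirely my own toy check, not anyone’s “official” code):
from math import factorial
def is_prime(n):
    # trial division; fine for this toy range
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))
# Wilson's theorem: for integer n > 1, n divides (n-1)! + 1 iff n is prime.
for n in range(2, 200):
    assert ((factorial(n - 1) + 1) % n == 0) == is_prime(n)
# For an odd prime p: x^2 = -1 (mod p) has a solution iff p = 4k + 1.
for p in (q for q in range(3, 200) if is_prime(q)):
    assert any((x * x + 1) % p == 0 for x in range(p)) == (p % 4 == 1)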
Math isn’t computing, and complaining that the notation isn’t like a computer program is expressing your disappointment (which I’m not trying to minimise, and which is probably very real), but it is missing the point.
Math isn’t computing, and “Doing Math” is not “Writing Programs”.




I really, really appreciate your reply and its tone. Thank you for that. You’ve given me some things to think about.
I often wish people were more like computers. It probably wouldn’t make the world better but it would make it more comprehensible.




Thanks for the pingback … I appreciate that. And thanks for acknowledging that I’m trying to help.
It might also help to think of “scope” in the computing sense. Often you have a paragraph in a math paper using symbols one way, then somewhere else the same symbols crop up with a different meaning. But the scope has changed, and when you practise, you can recognise the change of scope.
We reuse variable names in different scopes, and when something is introduced exactly here, only here, and only persists for a short time, sometimes it’s not worth giving it a long, descriptive name. That’s also similar to what happens in math. If I have a loop counting from 1 to 10, sometimes it’s not worth doing more than:
for x in [1..10] {
/* five lines of code */
}
If you want to know what “x” means then it’s right there, and giving it a long descriptive name might very well hamper reading the code rather than making it clearer. That’s a judgement call, but it brings the same issues to mind.
I hope that helps. You may still not like math, or the notation, but maybe it gives you a handle on what’s going on.
PS: There are plenty of mathematicians who complain about some traditional notations too, but not generally the big stuff.




> We reuse variable names in different scopes
This example works against you. Scope shadowing is nearly universally considered bad practice, to the point that essentially every linter is preconfigured to warn about it, as are many languages themselves (e.g. Prolog, Erlang, C#).
To a programmer, you’re saying “see, we do it just like the things you’re taught to never ever do”
.
> You may still not like math, or the notation,
The notation is probably fine
What I personally don’t like is mathematicians’ refusal to provide easy reference material
Programmers want mathematicians to make one of these: https://matela.com.br/pub/cheatsheets/haskellcs1.1.pdf
It doesn’t have to be perfect. We don’t need every possibility of what ŷ (“y-hat”) or vertical double bars mean. An 85% job would be huge.




> Programmers want mathematicians to make one of these: https://matela.com.br/pub/cheatsheets/haskellcs1.1.pdf
There are lots of maths cheat sheets like that. Maths is big, like all-programming-languages big. Just like in programming, notations are reused in different areas with different meanings, and different authors sometimes use different notation for the same meaning. A universal cheat sheet is impossible (just like a general programming cheat sheet is), but many cheat sheets or notation reference pages exist for particular contexts, one of which is “the basics”, e.g. https://www.pinterest.nz/pin/734016439237543897/. Try searching or image searching for [math cheat sheet], [linear algebra cheat sheet], etc.
> mathematicians’ refusal to provide easy reference material
This is an absurd claim. There is no such general refusal. On the contrary, many mathematicians provide their students with relevant easy reference material constantly. We sometimes spend entire semester-long courses providing easy reference material, and there are many books with exactly the kind of cheat sheet you want inside the cover, or in an appendix or front matter (as well as the ones on the internet mentioned above).




> This example works against you. Scope shadowing is nearly universally considered bad practice
So you’ve never used the same variable name in two different scopes, ever? Like, if a function takes an argument “name”, no other function you ever write in any program can have a variable named “name” unless it is the exact same usage?
Or, as is commonly complained about in math, must every programmer in the world then use the variable “name” only for that use case, and otherwise come up with a new name?
Having different scopes doesn’t imply shadowing; it just means that you define something, use it, and then the scope ends and it no longer exists. No mathematician knows even close to every domain, so different domains of math use notation differently. It is like how different programmers program in different programming languages. It is such a waste to have so many programming languages, but people still do it for legacy reasons.
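To make the distinction concrete, here’s a toy Python sketch (names made up):
# Reuse across sibling scopes: two unrelated functions can both call a
# parameter "n". No linter objects to this, and math does the analogous thing.
def double(n):
    return 2 * n
def triple(n):
    return 3 * n
# Shadowing is different: an inner binding hides a live outer one.
# This is the thing linters actually warn about.
def outer():
    n = 10
    def inner(n):  # this parameter shadows the outer n
        return n + 1
    return inner(n)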




> Math notation is the way it is because it’s what mathematicians have found useful for the purpose of doing and communicating math.
That’s only really a good description of the most well-trodden areas, where people have bothered to iterate. I think a more realistic statement would be:
“Math notation is the way it is because some mathematician found it sufficient to do and communicate math, and others found it tolerable enough to not bother to change.”
Personally, though, my problem has always been where publications use letters and symbols to mean things that are just “known” in some subfield that isn’t directly referenced. It’s not a problem for direct back and forth communication during development, true, but it dramatically increases the burden on someone who wants to jump in.




I mostly agree with you.
That all said, it would still be quite nice if it were somehow more accessible. A lot of papers containing material that’s probably actually quite standardizable remain opaque to me, and the notation invariably falls by the wayside if there’s a code or language description available.
Many times, math notations have been thought to be minimal, or as clear as possible, only to fall by the wayside.
While this notation serves domain specialists well, it still leaves people like me somewhat confused.
A cheat sheet – even one covering just the practical norms – would go a long way.




Here’s a take from a mathematician-in-training, and it’s biased toward research-level math, or at least math from the last hundred years:
Math is difficult, and a lot of what we have is the result of the sharpest minds doing their best to eke out whatever better understanding of something they can manage. Getting any sort of explanation for something is hard enough, but to get a clear theory with good notation takes an order of magnitude more effort and insight. This can take decades more of collective work.
Imagine complaining about cartographers from a thousand years ago having sketchy maps in “unexplored” regions. Maps are supposed to be precise, you say, there’s actual earth there that the map represents! But it takes an extraordinary amount of effort to actually send people to these places to map it out — it’s hardly laziness. Mathematics can be the same way, where areas that are seemingly unrigorous are the sketches of what some explorers have seen (and they check that their accounts line up), then others hopefully come along and map it all in detail.
When reading papers, there’s a fine balance of how much detail I want to see. For unfamiliar arguments and notation, it’s great to have it explained right there, but I’ve found having too much detail frustrating sometimes, since after slogging through a page of it you realize “oh, this is the standard argument for such-and-such, I wish they had just said so.” You tend to figure that something is being explained because there is some difference that’s being pointed out.
I’ve been doing some formalization in Lean/mathlib, and it is truly an enormous amount of work to make things fully rigorous, even making it so that all notation has a formal grammar. A lot of it relies on Lean filling in unstated details, and on figuring out ways to get it to do that properly and efficiently, since otherwise the notation becomes completely unworkable.




> There’s so much “pure” and “universal” about math, but the humans who write about it are too lazy to write about it in a rigorous manner.
Are you sure it’s laziness? Maybe it’s a result of there not actually being any universal notation (not even within subfields), or maybe the exactness you refer to really isn’t necessary. This doesn’t mean that unclear exposition is a good thing. Mathematical writing (as with all writing) should strive towards clarity. But clarity doesn’t require the sort of minutely, perfectly consistent notation that would be required by a computer, because humans are better than computers at handling exactly those kinds of situations.
> People can’t be bothered to be exact about things because being exact is hard and people avoid hard work.
I think you have it wrong. People can’t be bothered to be that exact because they don’t need to be. People can understand things even if they are inexact. So can mathematicians. Honestly, this is a feature. If computers would just intuitively understand what I tell them to do, like a human assistant would, that would be a step up, not a step down, in human–computer interfaces.




> But clarity doesn’t require the sort of minutely, perfectly consistent notation that would be required by a computer
I made this point in another comment, but I think it bears repeating and elaboration: Consistency isn’t required (at least outside any single paper), but explicitness would be a tremendous boon.
Software incorporates outside context all the time, but it pretty much always does so explicitly (though the explicitness may be transitive, i.e. dependencies of dependencies). Math papers often assume context that is not explicitly noted in the citations, nor in those papers’ citations, etc.
Instead, some of the context might only be found in other papers that cite the same papers you are tracking down. You sometimes need to follow citations both backward and forward from every link in the chain. And unlike following citations backward (i.e. the ones each author considered most relevant), the forward links aren’t curated, and many (perhaps most) will be blind alleys (there may also be cycles in the citation graph, but these are relatively rare). But somehow you have to collect knowledge of (or at least passing familiarity with) an encyclopedic corpus just to recognize and place the context left implicit in any one paper, in order to understand it.
It’s maddening.
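In programming terms, the spelunking I’m describing looks roughly like this (a toy Python sketch with made-up paper IDs, just to show the shape of the search):
citations = {          # paper -> the papers it cites (backward links)
    "target": ["A", "B"],
    "A": ["C"],
    "B": ["C"],
    "C": [],
    "D": ["A"],        # D is invisible from "target"'s own references...
}
# ...but D may hold the implicit context, reachable only by indexing
# the uncurated forward links (who cites whom).
cited_by = {}
for paper, refs in citations.items():
    for ref in refs:
        cited_by.setdefault(ref, []).append(paper)
def context_candidates(start):
    # Walk both backward (citations) and forward (cited_by) links.
    seen, stack = set(), [start]
    while stack:
        p = stack.pop()
        if p in seen:  # guards against the occasional cycle
            continue
        seen.add(p)
        stack.extend(citations.get(p, []))
        stack.extend(cited_by.get(p, []))
    return seen
print(context_candidates("target"))  # the combinatorial explosion in miniature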




I totally agree. I think that many mathematical papers aren’t explained as well as they could be. My advisor was pretty adamant that papers should not be written in the proof-chasing style you describe, and that the author should clearly include the arguments they need (citing the authors they might have learned them from) unless those arguments are truly standard. No “using a method similar to [author] in Lemma 5 of [some paper]”; instead, just include it in your paper and make sure it fits in well.
That is just an example of bad exposition, in my opinion. It’s also not technically “unclear” in any notational sense, so it’s a bit of an aside from this argument. But I agree with you 100% that it is bad, bad, bad. This is a perfect example of why arguments like “does this proof make Coq happy” totally miss the point.




People can also understand each other through combinations of obscure slang, garbled audio, thick accents, and drunken slurring. It’s still an unpleasant way to communicate.
Shall we be satisfied with the same low standards in a technical field, because it is how it is?
Hands-on users of math notation are complaining that it sucks. I’m not sure why a dismissive “works for me” is so often the default response.




> Hands-on users of math notation are complaining that it sucks. I’m not sure why a dismissive “works for me” is so often the default response.
Are you sure this is because the notation is unclear/imprecise, and not because you just don’t like it? I like certain programming languages and certain programming styles and really don’t like others. But in every case (those I like and those I don’t), they are 100% “clear”. The code compiles and executes, after all, so there really isn’t much of an argument that somehow they’re underspecified.
The same thing exists in mathematics. There are certain fields of math whose traditional notation/style/approach/etc. are totally incomprehensible to me. There are also many mathematicians who would say the same about my preferences as well.
So my point is that all people are _different_. Some people like certain things and some people like others. How can you hope to please everyone simultaneously? In my experience, there is no field at all that is as precise as mathematics. Sure, “code” is precise, but (imo) professional programmers are nowhere near as precise in general design or conversation as mathematicians are. So I find the attack on supposedly bad mathematical notation a bit odd.
Mathematicians constantly try to come up with better methods of explaining things. They put more effort into it than basically any field in my experience. The problems are really that we as humans don’t all think the same and that mathematics is just plain hard. We’ve improved mathematical communication immensely throughout history and we will continue to do so. But we’ll never reach some sort of perfect communication style because no single such style could ever exist.




> Hands-on users of math notation are complaining that it sucks. I’m not sure why a dismissive “works for me” is so often the default response.
It is really easy to complain. People also complain about every popular programming language, but it is really hard to make something that is actually better. It is easy to make something that you yourself think is better, but it is hard to make something that is better in practice.




There are formal grammars. The formal grammars are really hard to understand, in my humble opinion. The best examples, I think, are Coq (see e.g. https://en.wikipedia.org/wiki/Coq) and Lean (see e.g. https://en.wikipedia.org/wiki/Lean_(proof_assistant)).
Yes, we are too lazy to be 100% formal, and many times we are too lazy to be even mostly formal. This is mostly because we target our writing at other mathematicians, who have no need to see every small step, and including every step makes the proofs long. On the other hand, I do feel that, generally speaking, mathematicians should show more of their work and skip fewer steps.
I find your statement “People can’t be bothered to be exact about things because being exact is hard and people avoid hard work.” to be very true. Being precise is difficult.




> I dislike maths notation as I find it lacks rigour.
I see this a lot from programmers, but in essence, you seem to be complaining that maths notation isn’t what you want it to be, but is instead something else that mathematicians (and physicists and engineers) find useful.




As someone who’s studied math and CS extensively: it’s not that mathematicians don’t need that rigor, it’s that only certain subfields have a culture of this kind of notational rigor. You absolutely see little bubbles of research, 2–4 professors, get sealed off from the rest of the research community because their notational practices are so sloppy that no one wants to bother, whereas others make it easy to understand their work.
CS as a field just seems to have a higher base standard for explaining its notation and ideas. It helps in cross-collaboration by making it significantly easier to self-study.
Related to this, I’d say math books have a significantly worse pedagogical culture in regards to both notation and defining prerequisites. It’s very common for a math book to say “we expect readers to have taken a discrete math course” and then not define notation, despite the fact that the topics covered in discrete math vary greatly from school to school and may not overlap. Math professors frequently have to paper over these problems at uni as they realize the class does not understand some notation. CS is just better about this, and I can only explain it as part of the culture and tradition.




> CS is just better about this, and I can only explain it as part of the culture and tradition.
CS professors write just as incomprehensible math as everyone else; as you can see, many people here bring up examples of CS professors writing incomprehensible math in their papers.




Moreover, you might think that Lisp notation would improve it, but CS papers using S-expressions are just as incomprehensible, even to a seasoned Lisp programmer.
Math notations are two-dimensional and don’t suffer very badly from structural ambiguities, so that actually fixes almost nothing.
The problem in unfamiliar math notations is rarely the chunking of which clump is a child of which clump.
E.g., say that some paper uses angle brackets, with some deep meaning that you can learn about if you recurse three levels down into the list of references.
I’m not confused that in ⟨A p⟩, the “A p” thing is a child of the angle brackets; and calling it (frob (A p)) doesn’t help much in this regard.
However, at least you can search the literature for the word frob more easily than for angle brackets.





I think “useful” is doing a lot of work here. A lot of math notation exists clearly to gatekeep. It’s often nonsensical. It’s a shame, because it really makes mathematicians look bad (read: annoying) to those who can see through it. It’s not hard to see through it or anything, but it is obnoxious. All you need is an English explanation of the notation, and then you’re good, but often all of the sources on the topic are written in the same obnoxious babble language.
Take sequential Monte Carlo / sequential importance sampling for instance. This powerpoint on it is clownishly bad: http://people.eecs.berkeley.edu/~jordan/courses/260spring10…
This is supposed to be an algorithm implemented in code. It’s essentially illegible without code examples, which it doesn’t feature. Code examples tell you what the cipher signifies; at no point does the cipher provide any value to the learner. Fanciful Bayes-theoretical statements and so on basically reduce to “iteratively build enlarging valid states.” Given the fact that this simple statement is missing, I question whether the professor has some sort of communication disorder or is just a troll. Similar to pomo philosophers, it’s probably a mix.
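For contrast, here is roughly what the whole apparatus boils down to in code: a minimal bootstrap particle filter sketch of my own (the 1-D random-walk model and all the constants are made up, not taken from the slides):
import math, random
N = 1000
particles = [random.gauss(0.0, 1.0) for _ in range(N)]  # initial guesses
def smc_step(particles, observation, proc_std=0.1, obs_std=0.5):
    # 1. Propagate: push each particle through the process model.
    particles = [x + random.gauss(0.0, proc_std) for x in particles]
    # 2. Weight: score each particle against the new observation.
    weights = [math.exp(-(observation - x) ** 2 / (2 * obs_std ** 2))
               for x in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # 3. Resample: keep particles in proportion to their weights.
    return random.choices(particles, weights=weights, k=len(particles))
for obs in [0.1, 0.2, 0.35, 0.5]:  # a fake observation stream
    particles = smc_step(particles, obs)
print(sum(particles) / len(particles))  # estimate of the latent state
Propagate, weight, resample. That loop is the “iteratively build enlarging valid states” I mean.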




Lecture powerpoints are bad everywhere, since you are meant to listen to the lecturer speaking about them; they aren’t meant to be read independently like this.
Try to understand programming based on a programming lecture powerpoint; it is usually impossible.
Edit: also, you can’t write code for what he is talking about in that lecture. Code cannot deal with infinities or continuous values. You’d get approximations, which isn’t the same thing, and then you’d need to prove that those approximations are good enough, which would have to be done without code anyway.




You may not realize that in a given field, the same variable representing the same basic thing may be negated depending on the part of the world the paper is published from. This can be fine if it’s your subfield: you happen to know to be careful with said variable. I don’t personally dig into a lot of disparate maths in different papers very often, but this is the single biggest complaint my polyglot friend has. The second biggest is when he has to read and parse the math from a dozen unrelated papers in a field to find out what some random undefined variable means in the one paper he actually cares about.




I graduated in physics, so I am no stranger to math notation quirks, and I think I also understand their usefulness at times (conciseness in notation, etc.). But notation can be dangerous, too, as soon as it lures you into doing transformations that are invalid.
It doesn’t help that the notation is often poorly defined, and sometimes a weird mix of notations is presented.
Overall the situation is also not pleasant for math people changing topics, or physicists reading papers from physical chemistry professors who ‘grew up’ in mathematical chemistry.




Yes. What’s wrong with changing math notation? Why wouldn’t you do it if you know that it would make it easier for others to approach? What’s the rationale behind doing exactly nothing to make the notation more approachable for the masses?




Math notation has evolved to be what it is because it is useful for the actual doing of math, and the communication of math to those who have sufficient background. It’s not deliberately designed to keep people out, and there are literally hundreds of thousands of books that introduce people to the notations used, to help onboard them.
Haskell is unreadable to one who has not trained in it or similar languages … why don’t they make the syntax more readable? Or C++ with its modern templating … why don’t they change the syntax to make it more readable?
You might be tired of wandering into someone else’s area of expertise and telling them:
You must change! You must make it more accessible!
Believe me, mathematicians are tired of nonmathematicians wandering up and saying:
Look! Computer programs are easy and intuitive and everyone can understand them, even without training! Make math like that!
Do you really believe that math notation is deliberately designed to make it hard for people untrained in math to learn how to use it? Do you really believe that no one has tried to make it more accessible?
Do you really believe you know more about why math notation is what it is than mathematicians and trained mathematics educators do?




> It’s not deliberately designed to keep people out,
It looks that way, to many people, even in this thread.
> why don’t they change the syntax to make it more readable?
They do, actually. Quite often, at that. It’s called releasing a new version.
> Look! Computer programs are easy and intuitive and everyone can understand them, even without training! Make math like that!
No. Computer code is as far from intuitive as it can be. Nobody says otherwise. So you don’t need to do anything to get there; the notation’s good on that front (meaning: completely non-intuitive).
That’s where the IDEs come in. And debuggers. And other tools. Lots of tools. They really help. You could use them, because the IDEs-for-math already exist. In college I had exactly one semester to familiarize myself with one of them, and it was never mentioned again until graduation.
> Do you really believe that math notation is deliberately designed to make it hard for people untrained in math to learn how to use it?
Why, do you believe it’s not possible for it to be that way? See: https://en.wikipedia.org/wiki/Pythagoras#Prohibitions_and_re…
> Do you really believe that no one has tried to make it more accessible?
Why did they fail? (If they didn’t, where’s the exponential growth in first-year mathematicians-in-training?)
> Do you really believe you know more about why math notation is what it is than mathematicians and trained mathematics educators do?
I’m 100% not interested in why it is like this; it’s not my problem, so I really wouldn’t know. Would you be interested in how at some point you had to write `class X(object):` and that it later changed to simply `class X:`? Would you go hunt on the mailing list to see who exactly came up with the idea? Or why they thought it would be better that way? Would you be interested in that if you just had to write 10 lines of Python to scrape some web site?





> Did you just use an example from 2600 years ago to make a point?
Yes? What’s wrong with that?
I’m pointing out the most widely known example to make a point, which is: “it is possible to design notation specifically for keeping outsiders out”. I’m not saying that modern math notation is like that. I think, as a layman, that it probably evolved over a long time and so is full of idiosyncrasies that made perfect sense back when they were introduced (my GP seems to describe it in similar terms, so I hope I’m not that far removed from reality).




> It’s not deliberately designed to keep people out
Surely you must realize that you’re protesting this because it has this reputation, though?
And surely you must realize that it has this reputation for a reason?
When I was a teenager and took my first calculus course, I struggled with summation for three days. When I finally went to my dad he looked at me funny and said “your teacher is an idiot, isn’t he? It’s a for loop.”
I had been writing for loops for seven years at that age. I almost cried. It was like a lightswitch.
The problem was always that nobody had ever actually explained what the symbol meant in any practical way. Every piece of terminology was explained with other terminology, when there was absolutely no reason to do so.
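For anyone stuck the same way: the practical explanation fits in four lines of Python (my own toy rendering of “the sum of i squared, for i from 1 to 10”):
total = 0                # the running sum that the big sigma denotes
for i in range(1, 11):   # i goes from the lower bound (1) to the upper bound (10)
    total += i * i       # the expression written to the right of the sigma
print(total)             # 385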
Mathematics has the reputation for impermeability and unwelcomingness for a reason.
It’s because you guys are ignoring us saying “we want to learn, please write out a cheat sheet” and replying “yes, but don’t you see” instead of just building the easy on-ramp that every other field on earth has built.
.
>> You might be tired of wandering into someone else’s area of expertise and telling them:
>
> You must change! You must make it more accessible!
No, we generally just fix the problem. If people are saying “this isn’t accessible enough,” we just work on it.
I would like for you personally to be aware of Bret Victor’s work. He’s incredibly potent and clear on these topics.
Programmers work really really hard on learnability and understandability. This is a big deal to us. That’s why we can’t understand why it’s not a big deal to you.
http://worrydream.com/LearnableProgramming/
We have, in fact, mostly given up on waiting for you, and started to make our own tooling to understand your work, using obvious principles like live editors and witnessable effects.
http://worrydream.com/MediaForThinkingTheUnthinkable/
Edit: those are the talk notes. Wrong link, sorry. I should have used this instead: https://vimeo.com/67076984
This is a big part of how we criticize ourselves, is for failing to provide the tooling to allow new modes of approach.
https://www.youtube.com/watch?v=PUv66718DII
We frequently think of our programming languages as new modes for thought. This line of discussion is particularly popular in the Lisp, Haskell, and Forth communities, though it crops up at some level everywhere.
We frequently think that the more opaque the language, the less useful it is in this way.
That’s why programming languages, which are arguably 70 years old as a field, have so much more powerful tools for teaching and explanation than math, which is literally older than spoken language.
You guys don’t even have documentation extraction going yet. We have documentation where you have a little code box and you can type things and try it. You can screw with it. You can see what happens.
This is why we care about things like Active Reading and explorable explanations.
http://worrydream.com/ExplorableExplanations/
This is why we care about things like live reactive documents. It really changes your ability to intuitively understand things.
http://worrydream.com/Tangle/
Math hasn’t grokked non-symbolic communication since Archimedes; that’s why it took nearly two thousand years to catch up with him.
We are asking you to come into step with the didactic tools of the modern world. It’s not the 1850s anymore. We have better stuff than blackboards.
Are these flat symbolic equations cutting it for you guys to communicate with one another? Sure.
Are they cutting it for you guys to onboard new talent, or make your wealth available to the outside? No. (Do you realize that there is an outside to you, which isn’t true of most technical fields anymore?)
These problems are not unique to mathematics, of course. Formal logic is similar. Within my own field of programming, the AI field is similar, as is control theory, and as database work tends to be. They don’t want to open the doors. You have to spend six years earning it.
But the hard truth is there are more difficult fields than mathematics that have managed to surmount these problems, such as physics (which, no, is not applied mathematics), and I think it might be time to stop protesting and start asking yourself “am I failing the next generation of mathematicians?”
An example of who I believe to be a genuinely good math communicator in the modern era is 3Blue1Brown.
.
>> Believe me, mathematicians are tired of nonmathematicians wandering up and saying:
>
> Look! Computer programs are easy and intuitive and everyone can understand them, even without training! Make math like that!
Then fix the problem.
It IS fixable.
.
> Do you really believe that math notation is deliberately designed to make it hard for people untrained in math to learn how to use it?
Given the way you guys push back on being asked to write simple reference material?
No, but I understand why they do.
.
> Do you really believe that no one has tried to make it more accessible?
No. Instead, I believe that nobody has succeeded.
Try to calm down a bit, won’t you? People tried to explain Berkeley sockets in a simple way for 12 years before Beej showed up and succeeded. The Little Schemer was 16 years after Lisp.
Explaining is one of the very hardest things that exists.
We’re not saying you didn’t try! The battlefield is littered with the corpses of attempts to get past Flatland.
We’re just saying “you haven’t succeeded yet and this is important. Keep trying.”
.
> Do you really believe you know more about why math notation is what it is than mathematicians and trained mathematics educators do?
No. The literal ask is for you to repair that. Crimeny.




> Surely you must realize that you’re protesting this because it has this reputation, though?
I’ve never heard anyone make this accusation until I read it here on HN today. The reputation doesn’t seem to be widespread.
> Programmers work really really hard on learnability and understandability. This is a big deal to us. That’s why we can’t understand why it’s not a big deal to you.
How to better teach math is one of the most studied topics in education, since it is so extremely important for so many outcomes. People learn programming faster because programming is simply easier, not because more effort has gone into making programming easy. It hasn’t; far more effort has been put into making math easy, and the math we have is the result of all that work.
https://en.wikipedia.org/wiki/List_of_mathematics_education_…
> Given the way you guys push back on being asked to write simple reference material?
Nobody pushes back on writing simple reference manuals. There are tons of simple reference manuals for math everywhere on the internet, in most math papers, in most math books, everywhere! Yet still people fail to understand it. Many billions have been put into trying to improve math education, trying to find shortcuts, trying anything at all. You are simply ignorant in thinking that there is some quick-fix, super-easy-to-implement thing that would magically make people understand math. There isn’t. It is possible that math education could be improved, but it won’t be a simple thing.





> there is one true notation (or rather, a separate one for each language) that is unambiguous and clearly defined.
This is such a disingenuous take. How many of the source code files you write are 100% self-contained and well-defined? I’d bet not a single one of them is. You reference libraries, you depend on specific compiler/runtime/OS versions, you reference other files, etc. If you take a look at any of these scientific papers you call “badly defined”, did you really go through all of the referenced papers and check whether they defined the things you didn’t get? If not, then you can’t be sure that the paper uses undefined notation. If you argue that it is too much work to go through that many references: well, that is what you would have to do to understand one of your program files.




One can look at the source code to a program, the libraries it uses, the compiler for the language, and the ISA spec for the machine language the compiler generates. You can know that there are no hidden unspecified quantities because programs can’t work without being specified.
When you get down to the microcode of the CPU that implements the ISA, you might have an issue if it’s ill-specified. But you might be talking about an ISA like RISC-V, specified at a level sufficient to go down to the gates, or an ISA like the 6502, where the gate-level implementations have been reverse-engineered.
You can take programming all the way down to boolean logic if you need to, and the tools are readily available. They don’t rely on you “just knowing” something.




> One can look at the source code to a program, the libraries it uses, the compiler for the language, and the ISA spec for the machine language the compiler generates. You can know that there are no hidden unspecified quantities because programs can’t work without being specified.
I doubt you can actually do that and understand it all. A computer can, but I doubt you, the human, can do that and get a perfect picture of any non-trivial program without making errors. Human math is a human language first and foremost; its grammar is human language, which is used to define things and symbols. This lets us write things that humans can actually read and understand in their entirety, unlike a million lines of code or CPU instructions.
Show me a program written by 10 programmers over 10 years and I doubt anyone really understands all of it. But we have mathematical fields that hundreds of mathematicians have written over centuries, and people are still able to understand it all perfectly. It is true that a computer can easily read a computer program, but since we are arguing about teaching humans, you would need to show evidence that humans can actually read and understand complex code well.




> because programs can’t work without being specified.
Someone hasn’t read the C spec, with all the behavior it explicitly specifies as undefined.
Programs working on real systems is very different from those systems being formally specified. I suspect that if you only had access to the pile of documentation and no real computer system – if you were an alien trying to reconstruct it, for example – you’d hit serious problems.




Undefined behavior isn’t a feature. A spec isn’t an implementation, either.
All behavior in an implementation can be teased out if given sufficient time.
> if you were an alien trying to reconstruct it, for example – you’d hit serious problems.
I can’t speak to alien minds. Considering the feats of reverse-engineering I’ve seen in the IT world (software security, semiconductor reverse-engineering) or cryptography (the breaking of the Japanese Purple cipher in WWII, for example), I think it’s safe to say humans are really, really good at reverse-engineering other human-created systems from close to nothing. Starting with documentation would be a step up.




> All behavior in an implementation can be teased out if given sufficient time.
Can it? Given what? You would need to understand how the CPU is supposed to execute the compiled code to do that. In order to understand the CPU you would need to read the manual for its instruction set, which is written in human language and hence not any better defined than math. At best you get the same level of strictness as math.
If you assume you already have a perfect knowledge of the CPU workings, then I can just assume that you already have perfect knowledge of the relevant math topic and hence don’t even need to read the paper to understand the paper. Human knowledge needs to come from somewhere. If you can read a programming language manual then you can read math. Every math paper is its own DSL in this context with its own small explanations for how it does things.




> Every math paper is its own DSL in this context with its own small explanations for how it does things.
That’s really the point, though: not every piece of software defines its own DSL, nor does it necessarily incorporate a DSL from some library or framework (which in turn may or may not borrow from other DSLs, etc.). It is also impossible to incorporate something from other software without actually referencing it explicitly.
Math, though, is more like prose in this respect. While any given novel probably has a lot of structure, terminology, and notation in common with other works in its genre, unless it is extremely derivative it almost certainly has a few quirks and innovations specific to the author, or even unique to that particular work, that you can absorb while reading or puzzle out from context, as long as you accept that the context is quite a lot of other works in the genre (this is more true of some genres/subfields than others). Unlike novels, at least in math papers (but not necessarily books) you get explicit references to the other works the author considered most relevant, but those references are not usually sufficient on their own, nor necessarily complete, and you have to do more spelunking or happen to have done it already.
Finally, like prose, with math you have to rely on other (subsequent) sources to point out deficiencies in the work, or figure them out on your own. Math papers, once published, don’t usually get bug fixes and new releases; you’re expected to be aware (from the context that has grown around the paper post-publication) of what the problems are. Which means reading citations forward in time as well as backward for each referenced paper. The combinatorial explosion is ridiculous.
It would be great if there were something like tour guides published that just marked out the branching garden paths of concepts and notation borrowed and adapted between publications, but textbooks tend to focus on teaching one particular garden path.




> It is also impossible to incorporate something from other software without actually referencing it explicitly.
No, some programming languages just inject symbols based on context. You’d have to compile with the right dependencies for it to work, so from the file alone it is impossible to know what a symbol is supposed to be.
And even if they reference some other file, that file might not even be present in the codebase; instead some framework says “fetch this file from some remote repository at this URL on the internet”, and then it fetches some file from the node repository, which could be a different file tomorrow for all we know. This sort of time variance is non-existent in math, so to me math is way more readable than most code.
And you have probably seen a programming tutorial or similar that uses library functions which no longer exist in modern versions, tells you to call a function that lives in a library the tutorial forgot to mention, or any of the other things that can go wrong.





Formulas would also be easier to read if they did not name all their variables and functions with a single character.
If programmers wrote code like that (even Fortran programmers use 3 characters), no one would be able to understand the code…




As someone trained in mathematics, I can tell you that using single character variables allows one to focus better on the concepts abstractly which is one of the goals of mathematics. That is to say, it is a practice wellsuited to mathematics.
It doesn’t carry over to programming where explicit variables are better suited. In mathematics one is dealing with relatively few concepts compared to a typical program so assigning a single letter (applied consistently) to each is not a problem. This is not so in programming, except for a few cases like using i and j for loop variables (back when programs had explicit loops).
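A contrived Python contrast to illustrate the difference (my own example):
# Single letters suit a tight mathematical scope: the shape of the
# formula stays visible at a glance.
v = [1.0, -2.0, 3.5]
s = sum(x * x for x in v)
# In a long-lived program, the same quantity wants a descriptive name,
# because the reader meets it far from where it was defined.
residuals = [1.0, -2.0, 3.5]
total_squared_error = sum(r ** 2 for r in residuals)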




As for programmers, forget about the names. Does every C source file that uses pointer arithmetic include an explanation of how it works? Nope. They just use it and assume the reader understands it, or is clever enough to ask for help or read up on the language.
Mathematical writing is similar. At some point you have to assume an audience, which may be more or less mathematically literate. If you’re writing for graduate students or experts in a domain, you don’t include a tutorial and description of literally every term, you can assume they’re familiar with the domain jargon (just like C programmers can assume that others who read their program understand pointers and other program elements). Whenever something is being used that is unique to the context, a definition is typically provided, at least if the writer is halfway decent.
If the audience is assumed to be less mathematically literate (like a Calculus course textbook audience), then more terms will be defined (chapter 1 of most Calculus books include a definition of “function”). But a paper on some Calculus topic shouldn’t have to define the integral, it should be able to use it because the audience will be expected to understand Calculus.






This is pretty understandable because:
1. They explain all abbreviations at the top.
2. There is a lengthy text explaining the formula.
3. It’s mathematically pretty easy if you know partial differentiation.
Also, scientists and engineers write pretty horrible code….




It’s not horrible, it’s different; it has different goals and different audiences. Context is king, and the bulk of professional programmers’ criticism of scientist code comes from a lack of context and a different set of priorities.
Coming from a more science-based background, I often think programmers write horrible code, as I search in vain for where anything actually happens in a sea of abstractions.





> in programming there is one true notation (or rather, a separate one for each language) that is unambiguous and clearly defined
Yes this is why we all use Hungarian notation and GNU indentation.




I’m glad I’m not the only person like this. I’ve never liked traditional math notation and have found it about as useful as traditional musical notation: that is, hard to read for the layman, and for no other reason than “this is how people have been doing it for a long time”. Maybe I’m in the minority, but when I read a CS paper I mostly ignore the maths and then go to the source code or pseudocode to see how the algorithm was implemented.




> …for no other reason than “this is how people have been doing it for a long time”.
I disagree. Math notation has evolved to be as it is because it is useful for the purpose of doing math. If there were some way of doing it better, people would be evolving toward it.
In some ways they are … people are using computer algebra packages more for a lot of the grunt work, and are using proof assistants to verify some things, but there’s a lot of math that’s still done by sketching why something is true and letting the reader work through it. Math notation isn’t about executing algorithms, it’s about communicating what the result is, and why it works.
“Doing Math” is not “Writing Programs”, so math notation is different.




> If there were some way of doing it better, people would be evolving toward it.
I don’t see why it wouldn’t be some kind of local maximum. Maybe there are better ways, but they are sufficiently far away from current notation that they aren’t even thought about.




Hum… setting math aside: music notation maps linearly from the symbols to the instrument positions and time.
It’s absolutely not used only because “this is how people have been doing it for a long time”. It’s a very efficient notation to decode.




> I think a real problem in this area is the belief that there is “one true notation” and that everything is unambiguous and clearly defined.
No, that belief isn’t the problem; the actual status quo itself is obviously the problem. There are numerous notations, and authors don’t explain which they are using, assuming everyone has recursively read all of their references depth-first before reading their paper.







Sure there is. Read these and you will understand most of the math papers people here struggle with: https://mathblog.com/mathematicsbooks/
Of course you still won’t be able to understand most math papers written by pure mathematicians, but it should be fine for whatever you need in CS. I know all the topics on that page; it is just a very fleshed-out math major.





But why are you reading research papers with math in them without having studied math? If you want to understand them fully then you need to take the relevant courses; people spend years learning these things. You don’t have to read them all, just the branch relevant to the paper.




Why are you telling OP what his problem is? Shouldn’t you address his pain points, not your rationalization of them?
I’ve written this many times already and am a bit tired of it, so just a quick summary:
– programmers[1] also use cryptic notation and tend to think in concepts rather than syntax
– nevertheless, programmers spend a lot of time commenting the code, documenting it, specifying it, and so on.
– why can’t mathematicians emulate that? What is so wrong about attaching an additional few pages to every paper that nobody wants to do it? Pages with explanations of the syntax used, even the common bits. And you know what else they could do? Link to external resources with explanations! But no. This is not happening. Do their PDFs have a size limit or something? Is inserting a link into a paper considered some kind of blasphemy?
I don’t know the reason, but in all the discussions on this topic mathematicians almost always underestimate the importance of knowing the syntax. It’s much more important for comprehension than they tend to admit. And in the end they do exactly nothing to make the syntax more approachable for newcomers. And then newcomers are outgoers in a heartbeat. It’s so obvious that I can’t help thinking it’s premeditated…
EDIT: [1] Among many others, of course.




Mathematicians do document and comment, that’s what papers and textbooks are: commentary on the math. They don’t throw out formulae and equations and call it a day. Attaching a full tutorial for every level of reader is tantamount to attaching Stroustrup’s C++ books to every C++ program, or K&R to every C program. You wouldn’t do that, you’d expect the reader to ask you for references or to seek them out themselves.




I think the GP post is criticizing the lack of documentation of syntax. Math papers tend to document semantics, whereas the reader’s understanding of the syntax is presumed.




> or K&R to every C program.
That’s actually doable… 😉 K&R is rather terse, what, 1/5 of Stroustrup or something like that. But I digress.
More on topic: there’s also a class of programs that DO come with a book attached – or rather, multiple books, for every level; if not included outright in the distribution then at least linked to in the “learn” tab on a homepage. They’re called programming languages. So, it can be done. That’s all I want to say.




> What is so wrong about attaching an additional few pages to every paper that nobody wants to do it? Pages with explanations of the syntax used, even the common bits.
Programs don’t do this, why do you expect every math paper to do it?
> Link to external resources with explanations!
This is called a bibliography; every book that isn’t so old that it is itself the defining work, and every paper, includes one. In many textbooks there are also appendices which cover (some of) the foundational material. And most include sections (often in the front and back covers) that show the symbols and their names, if not their definitions.




> Programs don’t do this, why do you expect every math paper to do it?
Well, I don’t. It was you moving the goalpost. I talked about “a few pages”, and you made “a book” out of it. I simply don’t agree with you here and so I have very little to add at this point, sorry.
> This is called a bibliography; every book that isn’t so old that it is itself the defining work, and every paper, includes one.
No. A bibliography is like a list of libraries you depend on. It has literally nothing to do with explaining the syntax close to where it’s used.
> appendices which cover (some of) the foundational material.
Ha, ha, ha. No. If it’s not front and center, then it doesn’t count. I’m sorry, but I’m really tired of this subject. I would be willing to compromise more if that wasn’t the case, believe me.
> show the symbols and their names, if not their definitions.
Ok. Putting that on the cover is a bit strange, but ok. That’s a nice, but very small, step in the right direction. Please iterate and improve upon it!
EDIT: again, because I missed it at first:
> Programs don’t do this, why do you expect every math paper to do it?
Programs do come with man pages! And tutorials, interactive tours, contextual help, and more. Emacs comes with 3 books, and a tutorial. (GNU) libc has a book to it. Firefox has a whole portal (MDN) as its documentation. Visual Studio comes with MSDN and a huge amount of explanatory material. And when it comes down to code, you have autocompletion, go to definition, search for callers; you can hover over a symbol and you get a popup with documentation and types; you can also trace execution, stop the execution, rewind the execution (if you have a good debugger), and experiment with various expressions evaluated at different points.
The most important difference between math and programming (or CS) is that programmers can (and do) build automated tools that help the next generation of newbies get into programming, while mathematicians can’t. It’s just that they don’t want to admit this is a weakness, and instead fortify themselves further in their ivory towers.
TLDR: I just can’t see how you can even put math papers and programs on the same scale in terms of accessibility!




> Programs do come with man pages! And tutorials, interactive tours, contextual help, and more. Emacs comes with 3 books, and a tutorial. (GNU) libc has a book to it. Firefox has a whole portal (MDN) as its documentation. Visual Studio comes with MSDN and a huge amount of explanatory material. And when it comes down to code, you have autocompletion, go to definition, search for callers; you can hover over a symbol and you get a popup with documentation and types. I just can’t see how you can even put math papers and programs on the same scale in terms of accessibility!
You are comparing big teams and products to a single guy writing a paper intended for a niche audience, to be read maybe a few hundred times if he is lucky. People make mistakes and sometimes forget to document everything. They do try to document everything, as can be seen in their papers, where most things are documented well, but sometimes they miss things, and unlike code you don’t have compiler warnings telling you about it. And given how few people read those papers, it isn’t worth investing in a team to go through and update all of those papers to properly add definitions for everything they missed.
The equivalent to those programs in math would be high school textbooks, and they are extremely well documented and easy to read in most cases.




> it isn’t worth investing in a team to go through and update all of those papers to properly add definitions for everything they missed.
Thank you. There’s nothing else left to discuss.




Thanks for understanding. Math is a small field without money for things like this; there is no way anyone should expect those niche papers to be as well documented as big programming projects used by millions.
If you still think that is a problem, then start some open source organization to fix it. Nobody has done that yet, since so few people care about math papers, but since you feel so strongly about this you could do it; someone has to be the one to start it.




No, I mean, well, it’s very understandable when you describe it that way. Actually, I think your post here changed my perception of the problem the most out of all discussions I had on the subject. It made me think about people who are behind the papers. I somehow missed it. Thank you.
(And, sorry for being a jerk in this thread. I said too much in a few places, exactly because I didn’t think of innocent mathematicians who might read it. I’m still convinced that there is a lot that math can borrow from CS and SE, but I’m definitely going to argue this differently.)




I wrote one math paper before I went into programming. It is a lot of work, like code reviewing but much, much longer. It isn’t fun. A big reason I got into programming is because that process is so much work. Of course I, the professor who reviewed it, and the professors who looked at it afterwards understood it, but I can’t guarantee that someone who hasn’t read a lot about research-level topology or combinatorics will easily understand much at all. However, I doubt that anyone who hasn’t done those things will ever read it, since it is an uninteresting niche topic. I’d be surprised if even 10 people read it fully.




Yeah, I didn’t think about it at all – I didn’t realize that what I’m saying is basically demanding that people work for free (and on things that won’t be useful to anyone in 99% of cases), and that’s on top of the already huge effort of writing the paper in the first place. Honestly, I was behaving like people who open tickets in an open source project just to demand that someone implement a particular feature, just for them, and right now. I dislike such behavior, and realizing that I was doing the same hit me hard 🙂




Note that the OP is asking about college-level math, not cutting-edge papers.
Textbooks routinely have a list of symbols and their definitions.
But, from my experience, notation is rarely the problem. I’d bet that the root cause of OP’s frustration is a lack of understanding of the concepts, not the notation. (But, of course, it’s hard to say more without specific examples.)




> I think a real problem in this area is the belief that there is “one true notation” and that everything is unambiguous and clearly defined.
Just to back up this point: probably every university-level math book I’ve read introduces and explains all the notation it uses, in the preface and/or as concepts are introduced.
There are lists at Wikipedia [1] and other places, but I’m not sure how valuable they are out of context.
[1] https://en.wikipedia.org/wiki/Glossary_of_mathematical_symbo…





It’s not entirely unlikely that I am remembering just the good stuff 🙂 But I was surprised how many books would define even the most common notation, like ⊂, ∀, and ∃.
I guess if you call your book “Introduction to…” you ought to do that. And it seems that all books were called that, regardless of how narrow and advanced the rest of the title was 🙂




Often books assume some prerequisites; the question here is the level of those prerequisites. Some books try to include all the necessary background, others assume a pre-existing base level of knowledge.
Different authors, different books, different audiences, and different contexts.





What you’re looking at is calculus, specifically differentiation. This is pretty core to understanding physics, because so much of physics depends on the time-evolving state of things. That’s fundamentally what’s happening here.
The triangle, for example, is the uppercase Greek letter delta, which in calculus represents ‘change of’. You might have heard of ‘delta-T’ with respect to ‘change of time’.
In calculus, uppercase delta means ‘change over a finite time’ vs lowercase delta meaning ‘instantaneous change’. The practical upshot, for example, is that the lowercase is the instantaneous rate of change at an instant in time, whereas the uppercase is the change over a whole interval (e.g. the average rate of change per second from time = 0 seconds to time = 3 seconds).
If you are trying to grok this, I would suggest an introductory calculus or precalculus resource. It doesn’t have to be a uni textbook – higher-level high school maths usually covers this. In this particular case, Khan Academy would be my recommendation, because it is at about the right level (we’re not talking esoteric higher-level university knowledge here) and it is eminently accessible. For example, this link may be a good starter in this instance:
https://www.youtube.com/watch?v=MeUKzdCBps
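To make that concrete, here is a minimal Python sketch of the finite-vs-instantaneous distinction, using x(t) = t² as a stand-in position function (my own toy example, not from any paper):

    # average vs. instantaneous rate of change for x(t) = t**2
    def x(t):
        return t ** 2

    # uppercase delta: change over a finite interval, here t = 0 s to t = 3 s
    avg_rate = (x(3.0) - x(0.0)) / (3.0 - 0.0)   # Δx/Δt = 9/3 = 3.0

    # instantaneous rate at t = 3 s: the exact derivative dx/dt = 2t
    inst_rate = 2 * 3.0                          # = 6.0

    print(avg_rate, inst_rate)                   # 3.0 6.0

The average over the whole interval and the rate at its endpoint are genuinely different numbers, which is exactly the uppercase/lowercase distinction described above.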




You say “There’s a formula with a triangle …” without telling me where. That’s not real helpful, and you’re making me do the work to find out what you’re talking about. If you want assistance to get started, you need to be more explicit.
However, I have done that work, so I’ve looked, and in the second column of page 210 there’s a “formula with a triangle”:
t_c = 5 · 10^{-5} sqrt(V / Dt)
… where the “D” I’ve used is where the triangle appears in the formula.
But that can’t be it, because just two lines above it we have:
“For a pulse of width Dt, the critical time …”
So that’s stating that “Dt” is the width of the pulse, and should be thought of as a single term.
So maybe that’s the wrong formula, or maybe it was just a bad example. Trying to be more helpful: the “triangle” is a Greek capital delta and means different things in different places. However, it is often used to mean “a small change in”.
https://en.wikipedia.org/wiki/%CE%94T
FWIW … at a glance I can’t see where that result is derived, it appears simply to be stated without explanation. I might be wrong, I’ve not read the rest of the paper.




I feel you’re coming at this without appreciating your body of prior knowledge. Intended or not, your statement “But that can’t be it, because just two lines above it we have…” assumes a whole lot of knowledge.
You and I both know that it reads as one term, but for someone unfamiliar with calculus but exposed to algebra, they are drilled to understand separate graphemes as separate items, because the algebraic ‘multiply’ is so often implied, e.g. 3x = 3 * x as two individual ‘things’.
I think there’s merit in explaining the concept of delta representing change, because it’s not obvious. For example, when I was taught the concept in school, my teacher explicitly started with doing a finite change with numbers, then representing it in terms of ‘x’ and ‘y’, then merged them into the delta symbol. That’s a substantial intuitive stepping stone and I think it’s pretty reasonable that someone may not find this immediately apparent.




I agree completely that I’m coming at this with a lot of background knowledge, but if I’m reading in an unfamiliar field and I see a symbol I don’t recognise, I look in the surrounding text to see if the symbol appears nearby. As I say, “Δt” appears immediately above … that’s a clue. As you say, it’s drilled in at school that everything is represented by a single glyph, and if these are juxtaposed then it means multiplication, and that is another thing to unlearn.
But I think the problem isn’t the specifics of the “Δ”, it’s the meta-problem of believing that symbols have a “one true meaning” instead of being defined by their scope.
I agree that explaining the delta notation would be helpful, but that’s like giving someone a fish, or making them a fire. They are fed for one day, or warm for one night; it’s the underlying misconceptions that need addressing, so they can learn to fish and be fed, or to make a fire and be warm, for the remainder of their life.




I absolutely agree with your comments regarding teaching the underlying approach to digesting a paper. You definitely raise good points, especially the ‘one true meaning’ comment. I should state that I’m not discounting the value of your point, especially given this clarification; however, when I reflect on my own experience learning this, the times I learnt best were via an initial explanation, then a worked example, then the customary warning of corner cases and here-be-dragons.
Edit: I also think, on reflection, that a significant part of your ability to grok a new paper, per your comments, is your comfort in approaching these concepts due to your familiarity. Think of learning a new language – once you have a feel for it, you’re likely more comfortable exploring new concepts within it, but when you’re faced with it from the start you probably feel very lost and apprehensive.
I feel that understanding calculus is a fairly fundamental step in the ‘language of maths’, teaching that symbols don’t necessarily represent numbers but can represent concepts (e.g. delta being change). This isn’t something you encounter until then, but once you do, you begin to understand the characters associated with integrals, matrices, etc. in a way that you may not have previously with algebra alone.






> with calculus but exposed to algebra they are drilled to understand separate graphemes as separate items
But most will already be familiar with the family of trigonometric functions such as sin and cos; there’s log, and possibly exp and sqrt. There’s min and max; advanced math has inf and sup.




I think that this is indeed the formula in GP’s question. And indeed, sometimes math notation is obtuse like that. It looks like 2 terms, but the triangle goes together with the t as a single term. At other times it might be called “dt”, and despite looking like a multiplication of 2 variables (d and t, or triangle and t in this case) it’s just a single variable with a name made of 2 characters.
The important thing here is that “For a pulse of width Dt” is the definition of this variable, but this can be easily missed if you’re not used to this naming convention.




That’s because “Δ” means “a change of” or “an interval of”. So, Δt is “an interval of time”. It is like a compound word, really. It conveys more information than giving it an arbitrary, singleletter name.
This convention is used in a whole bunch of scientific fields, like quantum mechanics, chemistry, biology, mechanics, thermodynamics, etc.
It’s also very useful in how it relates to derivatives, which is a crucial concept in just about any kind of science you could care to mention.
So yes, there is a learning curve, but we write things this way for good reasons, most of the time.
Multiplication should be represented by a (thin) space in good typography, to avoid this sort of thing. Not doing so is sloppy and invites misreading. The same goes for omitting parentheses around a function’s argument (e.g. sin 2πθ instead of sin(2πθ)).
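For what it’s worth, here is a small LaTeX sketch of the conventions being argued for (the examples are mine):

    % explicit parentheses and thin spaces remove the ambiguity
    $y = \sin(2\pi\theta)$   % argument parenthesised: unambiguous
    $y = \sin 2\pi\theta$    % argument bare: the reader must guess its extent
    $\Delta t$               % one two-character symbol, not Delta times t
    $a\,\Delta t$            % a multiplied by Δt, set off by a thin space (\,)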




> it’s just a single variable with a name made of 2 characters.
I have this same problem with programming, when I have to deal with code written by nonmathematicians. They tend to use all these stupid variables with more than one letter and that confuses the heck out of me.




Sorry, I didn’t mean to make you work for me, but it’s a PDF and I didn’t know how to explain the position better (maybe I should have told you “the first formula on page X”).
For you it was a D, for me it was a triangle, and I didn’t get the meaning of that Dt. Maybe it’s just too advanced a paper for my knowledge.




BTW … you say:
> Maybe it’s just too advanced a paper for my knowledge.
Maybe it is, for now … the point being that if you start at the beginning, chip away at it, search for terms on the ’net, read it multiple times, try to work through it, and then ask people when you’re really stuck, that’s one way of making progress.
You can, instead, enroll in an online course, or night school, and learn all this stuff from the ground up, but it will almost certainly take longer. Your knowledge would be better grounded and more secure, but learning how to read, investigate, search, work, then ask, is a far greater skill than “taking a course”.
Others have answered your specific question about the delta symbol, but there are deeper processes/problems/questions here:
* Not all concepts or values are represented by a single glyph; sometimes there are multi-glyph “symbols”, such as “Δt” in your example.
* When you see a symbol you don’t recognise, read the surrounding text. The symbol will almost always be referenced or described.
* The notation isn’t universal. Often it’s an aid to your memory, to write in a succinct form the thing that has been described elsewhere.
* In these senses, it’s very much a language more akin to natural languages than computer languages. The formulas are things used to express a meaning, not things to be executed.
* Specific questions about specific notation can be answered more directly, but to really get along with mathematical notation you need to “read like math” and not “read like a novel”.
* None of this is “correct”; all of it is intended to give you a sense of how to make progress.




I’m just saying “D” because I can’t immediately type the symbol here and it was easier just to use that. Not least, I didn’t know if that was the formula you meant.
But as I say, immediately above the formula it says:
“For a pulse of width ∆t, the critical time …”
So that really is saying exactly what that cluster of symbols means. There will be things like this everywhere as you read stuff. Things are rarely completely undefined, but you are expected to be reading along.
And you need to work. I just typed this into DDG:
“What does ∆t mean?”
The very first hit is this:
https://en.wikipedia.org/wiki/Delta_%28letter%29
That gives you a lot of context for what the symbol means, and this is the sort of thing you’ll need to do. You need to stop, look at the thing you don’t understand, read around in the nearby text, then type a question (or two, or three) into a search engine.




I’ll use this as an example for the point I’m trying to make in my comment https://news.ycombinator.com/item?id=29341727
Please don’t take this the wrong way. It is not meant to be demeaning, and it is not meant to be gatekeeping (quite the contrary!). But: If you do not know what a derivative is, then learning that that symbol means derivative (assuming that it does, I have not actually looked at what you link to) will help you next to nothing. OK, you’ll have something to google, but if you don’t already have some idea what that is, there is no way you will get through the paper that way.
I hope you take this as motivation to take the time to properly learn the fundamentals of mathematics (such as for example calculus for the topic of derivatives).




The triangle, or “delta”, is used to indicate a tiny change in the following variable.
Let’s say you go on a journey, and the distance you’ve travelled so far is “x” and the time so far is “t”.
Then your average velocity since the beginning is x / t .
But, if you want to know your current velocity, that would be delta x divided by delta t .
The delta is usually used in a “limiting” sense – you can get a more accurate measurement of your velocity by measuring the change in x during a tiny time interval. The tinier the interval, the more accurate the estimate of current velocity.
What I’m talking about here are the first steps in learning differential calculus. You could look for that at khanacademy.org. You might also benefit from looking at their “precalculus” courses.
Just keep plugging away at it; the concepts take a while to seep in. Attaining mathematical maturity takes years.
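A minimal Python sketch of that limiting idea, using a made-up journey where the distance travelled is x(t) = t² (so the true velocity is 2t):

    def x(t):                 # distance travelled after t seconds (made up)
        return t ** 2

    t = 3.0
    for dt in (1.0, 0.1, 0.01, 0.001):
        print(dt, (x(t + dt) - x(t)) / dt)
    # prints (approximately): 7.0, 6.1, 6.01, 6.001 ...
    # the estimate Δx/Δt settles on the true velocity, 2t = 6

The tinier the interval, the closer the estimate, just as described above.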





Yes, small changes usually use lowercase delta, e.g. δt. Not to be confused with the derivative symbol dt, nor with the partial derivative symbol ∂t !
Before I continued my maths learning after high school (i.e. before UK A-levels), I learnt the Greek alphabet to make it easier to understand maths notation, as I could then ‘voice’ (internally) all the funny glyphs adopted from Greek.
At uni I learnt how to properly write an ampersand (for logic classes) and how to write aleph and beth (for pure maths, particularly transfinite numbers).
Some professors have a fondness for the more confusing Greek letters (lowercase xi, lowercase eta) … is it n or eta, epsilon or xi, …





But this is a physics paper, and that isn’t how you use uppercase delta in physics. It is just a range. In physics you do a ton of approximations all the time, in ways mathematicians hate (you don’t care about errors smaller than you can measure), so an uppercase delta is often approximated with derivatives and so on, but it isn’t a derivative. Math in physics is far more practical and uses very different techniques from math in math, often because physicists invented the math first and mathematicians later went and formalized it.





Everyone is talking about the Δ symbol, but the real problem that you’ll encounter will be later in the paper where they start talking about H(ω), which is the Fourier transform of the impulse function (equation 4 and following). You’ll need to know a fair bit about Fourier transforms and impulse responses and filter design to get through this section. The notation is the least of the problems.
One place to start is https://en.wikipedia.org/wiki/Impulse_response
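If it helps to poke at it numerically, H(ω) for a toy filter can be sampled with numpy’s FFT. This is a made-up 3-tap moving average, not the paper’s equation 4:

    import numpy as np

    h = np.array([1/3, 1/3, 1/3])   # impulse response h[n] of a moving-average filter
    H = np.fft.fft(h, n=256)        # frequency response H(ω), sampled at 256 points

    print(abs(H[0]))                # ≈ 1.0: the filter passes DC unchanged
    print(abs(H[128]))              # ≈ 0.33: it attenuates the highest frequency

But the point above stands: the notation is the least of it; you need to know what a transfer function is for before any of this reads as anything.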




Wikipedia is truly atrocious for learning math; the articles are like man pages, in that they precisely describe the concepts in terms that will only make sense if you already know the thing. They just aren’t written for pedagogy.
As in 300 BC, so today: there’s no royal road to geometry.






A few points:
1. You’re reading a journal article. It will assume you know the notation not just of the broader discipline (e.g. physics/electrical engineering), but of the sub-discipline and at times the sub-sub-discipline. Journal papers are explicitly written not to be easy for beginners to comprehend.[1] Notation will be only one problem you’ll face.
2. As has been pointed out, this is not a mathematics paper. Mathematicians have their own notation, as do physicists and engineers. As mentioned in the above bullet, they can have their own notation even in sub-disciplines (e.g. circuit folks use “j” for the imaginary unit, and semiconductor folks use “i”). There is a lot of overlap in notation amongst these parties, but you should never assume that because you know one notation you’ll easily understand the math written by other fields.
3. Most introductory textbooks will explain the basic notation. Unfortunately, I often do find gaps where higher-level textbooks use notation they don’t explain (i.e. they assume you’ve seen it before) but that is not covered in the prior textbooks.
4. Finally, sorry to say this, but “delta” (the triangle) representing change is used in almost all sciences and engineering. It was heavily used in my high school as well. If you’re struggling with this, you really need to read some introductory textbooks in, say, physics.
[1] I’m not kidding. I’ve spent time in academia and I’ve complained about how obtuse some articles are, and almost universally the response is “We write for other experts, not for new graduate students”. One professor took pride in the fact that in his field one can comprehend only about one page of a paper per day – and this coming from someone who is an expert. These people have issues.




Looks like you need to grind through an elementary calculus book, with the exercises. You may think you can build intuition by reading just the definitions, but half of the understanding is tacit, and you get it through the exercises.
If you’re trying to get into signal processing, it’ll involve calculus with complex numbers, and knowledge of that is often gained by plodding through proofs and exercises over and over.






There pretty much is one true notation. There could be some slight variations, like bolding vectors, putting an arrow over them or not distinguishing them at all from scalars. But 95% of the time everyone uses the same notation.




I don’t know your background, but I wonder how broad it is in terms of mathematical topics. Consider the notations used in Algebraic Topology vs Category Theory vs Algebraic Number Theory vs Analytic Combinatorics vs Complex Analysis.
This isn’t a criticism, it’s just that notations vary wildly in those areas, and there’s lots of crossover of notations, not all of which agree with each other.
I’m not an expert, but I’ve had some exposure to the problem(s).




I studied diff geo at PhD level and math stat at undergrad level, plus a sprinkling of category theory, some discrete mathematics and some physics, so I’ve been exposed to most of these.
I presumed we were talking about basic mathematics here, since new notation is the least of your worries when you’re thinking about fibre bundles and cohomologies, but I can’t really think of any significant notation that overlaps with a different meaning between the fields I’ve come across. Could you give some examples?




I’m trying to be more general than specific questions at the mid-undergrad level, because, looking in from the outside, people seem to think that if only the notation weren’t so mysterious then they could understand everything. But this comment — https://news.ycombinator.com/item?id=29344238 — gives a flavour, talking about coming across “π” in different contexts and having to give it different interpretations.
But I remember sketching an algorithm to someone and just inventing notations on the fly as I did so, knowing that they would simply be ways to remember the underlying ideas.
Even so, at 1st-year undergrad the notations used in Mathematical Physics vary from those used in Introductory Graph Theory, and again from Real Analysis. But once the reader knows the underlying semantics, the actual notation is mostly a non-issue (as you know).




Alright, but are there really any overlapping concepts between graph theory and analysis? There can’t be many!
The comment you linked to is pretty strange: given the limited number of symbols in the Greek and Latin alphabets, there’s obviously going to be a lot of reuse, but I can’t see how that could really cause any confusion, unless you’re just grabbing books from the shelf and opening them at random. And even then, it should almost always be clear from context whether π is a number or a plane, and if it’s a function that will be visually distinguished.
I’ve seen non-mathematicians use words as names of variables and functions; it always makes me shudder. I unsuccessfully tried to introduce Hebrew letters as an alternative, when I discovered how to use them in LaTeX, but it never caught on…
I actually find math notation incredibly intuitive and effective, I think it’s close to optimal. In fact it’s only after getting into programming that it even occurred to me how elegant and magical it is. I understand what things mean and can write things myself, without being able to exactly explain how, or to translate it into a fully specified system that a computer would understand.






It sounds like you’re trying to read papers that assume a certain level of mathematical sophistication without having reached that level. Typical engineering papers will assume at least what’s taught in 2 years of college-level mathematics, mainly calculus and linear algebra, and no, they aren’t going to explain notation used at that level.
But it isn’t just about the notation. You also need to understand the concepts the notation represents, and there aren’t really any shortcuts to that.
These days there are online courses (many freely available) in just about every area of mathematics, from pre-high-school to intro graduate level.
It’s possible for a sufficiently motivated person to learn all of that mathematics on their own from online resources and books, but it isn’t going to be an easy task or one that you can complete in a few weeks/months.




The author explained his problem and asked for resource recommendations.
Your response is to scold him for having the problem he already said he had and instead of recommending resources you told him to go look on the internet.
And you implied he doesn’t have motivation.




I’m a math professor, and my students find it revelatory to understand math as I talk and draw.
Math notation is not math, any more than music notation is music. Notably, the Beatles couldn’t read sheet music, and it didn’t hold them back.
The best comparison would be reading someone else’s computer code. At its best, computer code is poetry, and the most gifted programmers learn quickly by reading code. Still, let’s be honest: reading other people’s code is generally a wretched “Please! Just kill me now!” experience.
Once you realize math is the same (it’s not about you), you can pick your way forward with realistic expectations.




Great insight! I’ve definitely encountered people who are mathematically inclined but cannot read or write math. Now it makes sense to me.
Also, I’ve found the converse to be true: there are people who can manipulate mathematical symbols very well but don’t actually understand the big picture or general direction. The analogy would be people who can write and read musical notes (even transposing to different keys) without hearing them in their head (I was one of them).




Super answer! I wish you were one of my professors, and I had excellent professors.
If I may humbly add: try making your own notation and playing around with it. Very rapidly one realizes just how hard a problem good notation is.




All math notation was created by mathematicians who wanted to quickly represent something, either to:
– better see the structure of the problem; or
– reduce the amount of ink they need to write the problem
Very similar to how programmers use functions, in fact.
To this end, mathematicians in different fields have different notation, and often this notation overlaps with different meaning. Think how Chinese and Japanese have overlapping characters with different meanings.
As others have stated, there is no “one true notation” — all notation is basically a DSL for that math field.
Instead, choose a topic you are interested in, find an introductory text, and start reading. It will almost certainly explain the notation. Unfortunately, even within a field, notation can vary, but once you have a grasp of one you will probably grasp the rest quickly enough.
I will mention, though, that some notation is “mostly” universal. Integrals, partial derivatives, and more that I can’t recall right now all use basically the same notation everywhere, since they underlie a lot of other math fields.




For about $5 you can find an old (around 1960–1969) edition of the “CRC Handbook of Standard Mathematical Tables”. I’ve owned two of the 17th edition, published in 1969, because back then hand calculators didn’t exist and many of the functions used in mathematics had to be looked up in books – like the square root of 217. Engineers used these handbooks extensively back then.
Now, of course, you have the internet and it can tell you what the square root of 217 is. Consequently, the value of these used CRC handbooks is low and many are available on eBay for a few dollars. Pick up a cheap one and in it you will find many useless pages of tables covering square roots and trigonometry, but you will also find pages of formulas and explanations of mathematical terms and symbols.
Don’t pay too much for these books, because the internet and handheld calculators have pretty much removed the need for them, but that is how I first learned the meanings of many mathematical symbols and formulas.
You might also look for books of “mathematical formulas” in your local bookstores. Math is an old field, and the notations you are stumbling over have likely been used for 100 years – like the triangle you were wondering about. (Actually, the triangle is the uppercase Greek letter delta. Delta T refers to an amount of time, usually called an interval of time.)
Unfortunately, because math is an old subject, it is a big subject – so big that no one person is expert in every part of it. The math covered in high school is kind of the starting point. All branches of mathematics basically start from there and spread out. If you feel you are rusty on your high school math, start there and look for a review book or study guide in those subjects, usually called Algebra 1 and Algebra 2. If you recall your Algebra 1 and 2, take a look at the books on precalculus. The normal progression is one year for each of the following courses, in order: Algebra 1, Geometry, Algebra 2, Pre-Calculus, and Calculus. This is just the beginning of math proficiency, but by the time you get through Calculus you will be able to read the paper you referenced.
Is it really a year for each of those subjects? It can be done faster, but math proficiency is a lot of work. Like learning to be a good golfer: it would be unusual to become a 10 handicap in less than 5 years of putting in hours of golf each and every week.
Calculus is kind of the dividing line between high-school math and college-level math. Calculus is the prerequisite for almost all other higher-level math. With an understanding of Calculus one can go on to look into a wide range of mathematical subjects.
Some math is focused on its use to solve problems in specific areas; this is called applied math. In applied math there are subjects like Differential Equations, Linear Algebra, Probability and Statistics, Theory of Computation, Information & Coding Theory, and Operations Research.
Alternatively, there are areas of math that are studied because they have wider implications but not because they are trying to solve a specific kind of problem; this is called pure math. In pure math there are subjects like Number Theory, Abstract Algebra, Analysis, Topology & Geometry, Logic, and Combinatorics.
All of these areas start off easy and keep getting harder and harder. So you can take a peek at any of them, once you are through Calculus, and decide what to study next.





> […] I find it really hard to read anything because of the math notations and zero explanation of it in the context.
I suggest finding contexts first, and exploring math within those contexts. Different subfields have their own conventions and notation.
For example, you might be working in category theory, and see an arrow labeled “π”. When I see that, I think, “Ah, that’s probably a projection! That’s what π stands for!”
Or you might be in number theory, and see something like π(x). When I see that, I think, “Ah, that’s the prime number counting function! That’s what π stands for, ‘prime’!”
Or you might be in statistics, and see (1/√(2π)) e^(−x²/2). When I see that, I think, “Ah, that’s the number π! It’s about 3.14.”
Or you might see a big ∏ which stands for “product”.
The fact that such a common symbol, π, stands for four different things in four different contexts can be a bit confusing. So if you want to learn mathematical notation, pick a context that you want to study (like linear algebra), and look for accessible books and videos in that subfield. The trick is finding stuff that is advanced enough that you’re getting challenged, but not so advanced that it’s incomprehensible. A bit of a razor’s edge sometimes, which is unfortunate.





As a starting point you can check out the notation appendices from my books:
https://minireference.com/static/excerpts/noBSmathphys_v5_pr…
https://minireference.com/static/excerpts/noBSLA_v2_preview….
You can also see this excerpt here on set notation https://minireference.com/static/excerpts/set_notation.pdf
That covers most of the basics, but I think your real question is how to learn all those concepts, not just the notation for them, which will require learning/reviewing the relevant math topics. If you’re interested in post-high-school topics, I would highly recommend linear algebra, since it is a very versatile subject with lots of applications (more so than calculus).
As ColinWright pointed out, there is no one true notation, and sometimes authors of textbooks will use slightly different notation for the same concepts, especially for more advanced topics. For basic stuff, though, there is kind of a “most common” notation that most books use, and in fact there is a related ISO standard you can check out: https://people.engr.ncsu.edu/jwilson/files/mathsigns.pdf#pag…
Good luck on your math studies. There’s a lot of stuff to pick up, but most of it has “nice APIs” and will be fun to learn.





Could it be that you are trying to read things that are a bit too advanced? Maybe look for some first year university lecture notes? In general, if you cannot follow something, try to find some other materials on the same subject, preferably more basic ones.




Try reading a good undergraduate calculus textbook. It will be hefty and a bit wordy, and it may take a few months to go through, but calculus requires surprisingly little prior knowledge – even the concept of a limit should be defined in the textbook (the famous epsilon-delta).
Also remember that math notations are meant for people. If you learn the sigma summation notation and find yourself wondering “So I understand what Σ_{i=0}^{10} is, but what is Σ_{i=0}^{−1}?”, then you’re wondering about irrelevant stuff. If a math notation is confusing to use, good mathematicians will simply not use it and will devise an alternative way to express the idea (or redefine it more clearly for their purpose).
Also, don’t skip exercises. Try to solve at least 1/3 of them after each chapter. Exercises are the “actually riding a bike” part of learning how to ride a bike.
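One habit that helps: sigma notation maps directly onto a loop, so you can sanity-check any sum in code. A minimal Python sketch (the bounds are my own example; the second sum assumes the degenerate-upper-bound reading above):

    # Σ_{i=0}^{10} i² is just a loop; note the upper bound is inclusive in math
    print(sum(i ** 2 for i in range(0, 11)))   # 385

    # Σ_{i=0}^{-1} runs over an empty range, so it is 0 by convention
    print(sum(i ** 2 for i in range(0, 0)))    # 0, the empty sum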




I had been in the same situation for years: read a paper, encounter the first equation, scratch my head and search around trying to understand it, give up. That changed half a month ago, after watching the Linear Algebra and Calculus courses at https://www.youtube.com/c/3blue1brown/playlists?view=50&sort….
Let me explain a little bit. Just like with a foreign language you stopped learning and using after high school, what prevents you from using it fluently is not just the vocabulary and grammar, but also the intuition and the understanding of the language as a whole. Luckily, math is a human-designed language, with linear algebra and calculus being the fundamentals. And again, learning them is about building intuition for why and how they are used, so that whenever you encounter a transformation you think in terms of vectors and matrices, and of derivatives for anything related to rates of change. By using carefully designed examples and visual representations, Grant Sanderson greatly smoothed the learning curve in the video courses. Try it out and you’ll see.
Beyond that, different fields do have slightly different notation. When you first encounter them, just grab some introductory books or online courses and skim over the very first chapters.




There is no single authoritative source for mathematical notation. That said, there are a lot of common conventions. You could do worse than this NIST document if it’s just a notation question:
https://dlmf.nist.gov/front/introduction
Of course, if the real problem is that you need to learn some mathematical constructs, that is a different problem. The good news is that there’s a lot of material online, the bad news is that not all of it is good… I often like Khan Academy when it covers the topic.
I wish you luck!




One of the best things I figured out is that, at least for work from the last 70 years or so, it’s pretty easy to find the “first”, foundational paper for a particular construct. There the authors have to explain their notation for the very first reader, and they have the vibe of working with the new idea in the raw, rather than 40 years later when it has matured. One example I use for this is Hamming codes: some of the recent explanations don’t build them from first principles, but the original article explains it very clearly.
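They really are small enough to rebuild from scratch. A minimal Python sketch of Hamming(7,4); the positional trick is the standard one, and the function names are mine:

    from functools import reduce
    from operator import xor

    def hamming74_encode(d):
        """Encode 4 data bits as a 7-bit codeword (positions 1-7, 1-indexed)."""
        bits = [0] * 8                          # bits[0] is unused padding
        bits[3], bits[5], bits[6], bits[7] = d  # data bits
        for p in (1, 2, 4):                     # parity bits at power-of-two positions
            # parity bit p covers every position whose binary index has bit p set
            bits[p] = reduce(xor, (bits[i] for i in range(1, 8) if i & p and i != p))
        return bits[1:]

    def hamming74_correct(codeword):
        """The XOR of the set positions *is* the error position (0 means no error)."""
        bits = [0] + list(codeword)
        syndrome = reduce(xor, (i for i in range(1, 8) if bits[i]), 0)
        if syndrome:
            bits[syndrome] ^= 1                 # flip the single corrupted bit
        return bits[1:]

    cw = hamming74_encode([1, 0, 1, 1])
    garbled = list(cw)
    garbled[4] ^= 1                             # corrupt position 5
    assert hamming74_correct(garbled) == cw

The syndrome being the error’s position, written in binary, is exactly the first-principles insight the original paper builds up to.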




When I started taking quantum chemistry, the professor wrote up on the board:
E Ψ = H Ψ
and we all joked that you could just cancel the Ψ, so E = H.
Several very kind people explained vector calculus to me (“bold means a matrix, and this dot means matrix multiplication”), but to be honest I still can’t read math notation; show me anything in numpy, though, and I’ll understand it immediately.
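For what it’s worth, the joke points at the real content: H is an operator (a matrix, once discretised), E is a number, and E Ψ = H Ψ only holds for special vectors Ψ. In numpy terms it is an eigenvalue problem; a minimal sketch with a toy matrix (my own example, not any particular molecule):

    import numpy as np

    H = np.array([[2.0, 1.0],
                  [1.0, 2.0]])    # a toy Hermitian "Hamiltonian"

    E, Psi = np.linalg.eigh(H)    # eigenvalues E, eigenvectors as the columns of Psi
    for k, e in enumerate(E):
        # H Ψ = E Ψ holds column by column; that is why the Ψ doesn't cancel
        assert np.allclose(H @ Psi[:, k], e * Psi[:, k])
    print(E)                      # [1. 3.]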




Naively, I would say the following:
1) Search YouTube for multiple videos by different people on the topic you want to learn. Watch them without expecting to understand them at first. There is a delayed effect: each content creator will explain it slightly differently, and you will find that it makes sense once you’ve heard it explained several different times and ways.
I will read the chapter summaries of a 1k-page math book repeatedly until I understand the big picture. Then I will repeatedly skim the chapters I least understand until I understand their big picture. I need to know the terms and concepts before I try to understand the formulas. I will do this until I get too confused to read more; then I will take a break for a few hours/days and start again.
2) You have to rewrite the formulas in your own language. At first you will use a lot of long descriptions, but quickly you will get tired and start to abbreviate. Eventually you get to the point where you will prefer the terse math notation, because it is just too tedious to write it out in longer words.
3) You might have to pause the current topic you are struggling with and learn the math that underlies it. This means a topic that should take 1 month to learn might actually take 1 year, because you need to understand all that it is based on.
4) Try to find an applied implementation. For example, photogrammetry applies a lot of linear algebra. It is easier to learn linear algebra if you find an implementation of photogrammetry and try to rewrite it. This forces you to completely understand how the math works. You should read the parts of the math books that you need.




I think the problem is that there is no authoritative text that I know of, and, as ColinWright says, the same ideas can be notated differently by different fields, or sometimes by different authors in the same field (though often they converge if they are in the same community).
Wikipedia has been helpful sometimes, but otherwise I have found that reading a lot of papers on the same topic is useful. However, this is kind of an “organic” and slow way of learning the notation common to a specific field.




The Greek alphabet would like to thank all the scholars for the centuries of overloading and offer a “tee hee hee” to all of the students tormented by attendant ambiguities.
Tough love, kids.




Maybe a problem is trying to learn it by reading it.
I was a college math major, and I admit that I might have flunked out had I been told to learn my math subjects by reading the textbooks on my own, without the support of the classroom environment. It may be that the books are “easy to read if a teacher is teaching them to you.”
Talking and writing math also helped me. Maybe it’s easier to learn a “language” if it’s a two way street and involves more of the senses.
Perhaps a substitute to reading the stuff straight from a book might be to find some good video lectures. Also, work the chapter problems, which will get your brain and hands involved in a more active way.
As others might have mentioned, there’s no strict formal math notation. It’s the opposite of a compiled programming language. In fact, math people who learn programming are told first: “The computer is stupid; it only understands exactly what you write.” In math, you’re expected to read past and gloss over the slight irregularities of the language, fill in gaps, and react to the sudden introduction of a new symbol or notational form by just rolling with it.




My advisor’s advice was basically “find a notation that you yourself like and understand well, and stick to it consistently”. He said this in the context of having seen many standard notations before (so he’s not saying to reinvent the wheel), but his point was that notations and ways of thinking are personal. Try to be clear and precise (for yourself and others), but realize that you are crafting something that reflects you and your way of thinking.
It’s kind of a cop-out, but to be fair it’s basically what I would say for programming as well. Try to write code that is simultaneously clear to yourself and clear to others. There’s no perfect method; just constantly self-critique and try to improve.





This blog post, which has been referenced several times on HN, was a godsend for me: https://www.neilwithdata.com/mathematics-self-learner
I also used to get hung up on “mathematical notation”. But it turns out the problem wasn’t the notation. I was just bad at math. Well, out of practice is more like it.
Once you have the fundamentals clearly explained and you’re doing some math on a regular basis, the notation, even obscure non-standard notation, becomes relatively intuitive.




Practice, just like you learned programming.
“The context” gives you the meaning of the notation, sadly. You have to kind of already know it to understand the notation properly.




You can also get sufficiently angry and just write out linear algebra books and whatnot in Agda / Coq / Lean, if it pisses you off so much (I’ve done a bunch of exercises in Coq).




I like the approach they took in Structure and Interpretation of Classical Mechanics, where the whole book is done in Scheme:
(define ((Lagrange-equations Lagrangian) q)
  (- (D (compose ((partial 2) Lagrangian) (Gamma q)))
     (compose ((partial 1) Lagrangian) (Gamma q))))




I should really pick that one up some day. It has an inspiring story: I believe the author wanted to understand classical mechanics and just wrote it out in Scheme.




Pretty much, yeah. And because they are literally a 100× programmer, they also extended Scheme to support the stuff you usually use a computer algebra system for, at the same time. After all, if your CAS can take the derivative of a function, why can’t your programming language?







It’s actually executable, which is part of why they wrote this particular book. The intent was to have a more uniform syntax for presenting the math and being able to (programmatically) use it.




Compare it to D(∂₂L ∘ Γ[q]) − ∂₁L ∘ Γ[q] = 0.
Of course, even that isn’t quite the standard notation; it’s using a less ambiguous notation which they invented for the book. From the preface (https://mitpress.mit.edu/sites/default/files/titles/content/…):
—
Classical mechanics is deceptively simple. It is surprisingly easy to get the right answer with fallacious reasoning or without real understanding. Traditional mathematical notation contributes to this problem. Symbols have ambiguous meanings that depend on context, and often even change within a given context.¹ For example, a fundamental result of mechanics is the Lagrange equations. In traditional notation the Lagrange equations are written
d/dt ∂L/∂q̇ⁱ − ∂L/∂qⁱ=0.
The Lagrangian L must be interpreted as a function of the position and velocity components qⁱ and q̇ⁱ, so that the partial derivatives make sense, but then in order for the time derivative d/dt to make sense solution paths must have been inserted into the partial derivatives of the Lagrangian to make functions of time. The traditional use of ambiguous notation is convenient in simple situations, but in more complicated situations it can be a serious handicap to clear reasoning. In order that the reasoning be clear and unambiguous, we have adopted a more precise mathematical notation. Our notation is functional and follows that of modern mathematical presentations.² An introduction to our functional notation is in an appendix.
Computation also enters into the presentation of the mathematical ideas underlying mechanics. We require that our mathematical notations be explicit and precise enough that they can be interpreted automatically, as by a computer. As a consequence of this requirement the formulas and equations that appear in the text stand on their own. They have clear meaning, independent of the informal context. For example, we write Lagrange’s equations in functional notation as follows:³
D(∂₂L ∘ Γ[q]) − ∂₁L ∘ Γ[q]=0.
The Lagrangian L is a realvalued function of time t, coordinates x, and velocities v; the value is L(t, x, v). Partial derivatives are indicated as derivatives of functions with respect to particular argument positions; ∂₂L indicates the function obtained by taking the partial derivative of the Lagrangian function L with respect to the velocity argument position. The traditional partial derivative notation, which employs a derivative with respect to a “variable,” depends on context and can lead to ambiguity.⁴ The partial derivatives of the Lagrangian are then explicitly evaluated along a path function q. The time derivative is taken and the Lagrange equations formed. Each step is explicit; there are no implicit substitutions.
—
(define ((Lagrange-equations Lagrangian) q)
  (- (D (compose ((partial 2) Lagrangian) (Gamma q)))
     (compose ((partial 1) Lagrangian) (Gamma q))))
I think you can see that the Scheme code is a direct and very simple translation of the equation.
And it has the advantage that you can run it immediately after typing it in, assuming you have a coordinate path to pass to it. They immediately go to a concrete example:
(define ((L-free-particle mass) local)
  (let ((v (velocity local)))
    (* 1/2 mass (dot-product v v))))
(define (test-path t)
  (up (+ (* 'a t) 'a0)
      (+ (* 'b t) 'b0)
      (+ (* 'c t) 'c0)))
(((Lagrange-equations (L-free-particle 'm))
  test-path)
 't)
⇒ (down 0 0 0)
As the book says, “That the residuals are zero indicates that the test path satisfies the Lagrange equations.”
They then give another example, symbolic this time:
(show-expression
 (((Lagrange-equations (L-free-particle 'm))
   (literal-function 'x))
  't))
⇒ (* (((expt D 2) x) t) m)
Quoted from https://mitpress.mit.edu/sites/default/files/titles/content/…





Mathematics is a lingo, and notations are mostly convention. Luckily people generally follow the same conventions, so my best advice, if you want to learn about a specific topic, is to work through the introductory texts! If you want to learn calculus, find an introductory college text. Statistics? There are traditional textbooks like An Introduction to Statistical Learning. The introductory texts generally do explain notation, which may become assumed knowledge in more advanced texts or, as you seem to want to read, academic papers. If those texts are still too difficult, then maybe move down to a high-school text first.
Think about it this way: a scientist, wanting to communicate his ideas to fellow academics, is not going to spend more than half the paper on pedantry, explaining notations which everyone in the field would understand. Else what would be the purpose of creating the notations? They might as well write their formulas and algorithms COBOL-style!
Ultimately mathematics, like most human-invented languages, is highly tribal and has no fixed rules. And I believe we are much richer for it! Mathematicians constantly invent new syntax to express new ideas. If there were some formal reference they had to keep on hand every time they needed to write an equation, that would hamper their speed of thought and creativity. How would one even invent something new if you needed to get the syntax approved first?
TL;DR: Treat math notation as any other human language. Find some introductory texts on the subject matter you are interested in to be “inducted” into the tribe.




Well, the real fun is deciphering a lowercase xi – ξ – when written on the blackboard (or whiteboard), especially compared to a lowercase zeta – ζ (fortunately far less commonly used).
As all the others have already told you, you don’t learn by reading alone.




Ah, yes. I remember the time I saw someone write something vaguely like the following:
[0,ξ[ = {x | 0 ≤ x < ξ}
which was fun to figure out when written in handwriting where ξ, {, and } all look the same.
If you can’t figure out what it’s supposed to be: it starts with a half-open interval, denoted [0,ξ[. This notation has some advantages but can make things hard to read.





Most textbooks come with a list of definitions.
Try to read it aloud.
“The Probability Lifesaver” has a lot of good study tips (some not even mathematics-related), most of which are not probability-specific. It’s a goldmine.




I think a good first resource would be the book and lecture notes of an introductory university course treating the specific domain you are interested in, because lots of things in notation are domain-specific. There are lots of good open university lectures out there; if you’re not sure where to start, MIT OpenCourseWare used to be a good first guess for accessing materials.
As a side note, I have an MSc in Physics with a good dollop of maths involved, and I am quite clueless when looking at a new domain, so it’s not as if a university degree in a non-related subject is of any help…




>… I’d really like to learn “higher level than highschool” math…
This sounds somewhat abstract, as the math field is vast. To find the next level up from where you believe you presently stand, I would try revisiting the college-level math which you probably experienced back in the day.
Generally, the textbooks rely on previous knowledge and gradually feed in the new concepts, including the math notation as needed in the new scope.
I find it easier to get a feel for the notation by actually writing it by hand. Indeed, it’s just an expression tool. Also, you may develop your own way of making notes as you go on dealing with math-related problems.
But at the core of this you are learning the concepts and an approach to reasoning. Of course, for this path to have any practical effect, you will need to memorize quite a bit: some theorems, some methods, some formulas, some applications. Internalizing the notation will help you condense all of that new knowledge.
Picking a textbook for your level is all that is needed to continue the journey!




I learned it by asking peers in grad school what stuff meant, working through the math myself (it was a slog at first), and then writing stuff out in LaTeX. When one is forced to learn something because one needs to take courses and to graduate, the human brain somehow figures out a way.
A lot of it is convention, so you do need a social approach, i.e. asking others in your field. For me it was my peers, but these days there’s Math Stack Exchange, Google, and math forums. Also, the first few chapters of an intro Real Analysis text are usually a good primer on the most common math notation.
When I started grad school I didn’t know many math social norms, like the unstated one that vectors (say x) are usually in column form by convention unless otherwise stated (in undergrad calc and physics, vectors were usually in row form). I spent a lot of time being stymied by why matrix and vector sizes were wrong and why x′ A x worked. Or that the dot product was x′x (in undergrad it was x·x). It sounds like I lacked preparation, but the reality is that no one told me these things in undergrad. (I should also note that I was not a math major; the engineering curriculum didn’t expose me much to advanced math notation. Math majors will probably have a different experience.)
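For anyone hitting the same wall, the column-vector convention is easy to see in numpy (a toy example of my own):

    import numpy as np

    A = np.array([[2.0, 0.0],
                  [0.0, 3.0]])
    x = np.array([[1.0],          # x as a column vector, the grad-school default
                  [2.0]])

    print((x.T @ A @ x).item())   # x' A x: (1x2)(2x2)(2x1) gives a scalar, 14.0
    print((x.T @ x).item())       # x' x: the dot product as a matrix product, 5.0

With the row-vector habit from undergrad you would reach for x @ A @ x.T instead, which is exactly where the mysterious shape errors come from.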




First, just to state the obvious, if you can accurately describe a notation in words, you can do an Internet search for it.
When that fails, math.stackexchange.com is a very active and helpful resource. You can ask what certain notation means, and upload a screenshot since it’s not always easy to describe math notation in words.
If you don’t want to wait for a human response, Detexify (https://detexify.kirelabs.org/classify.html) is an awesome site where you can hand draw math notation and it’ll tell you the LaTeX code for it. That often gives a better clue for what to search for.
For example, you could draw an upside-down triangle and see that one of the ways to express this in LaTeX is \nabla. Then you can look up the Wikipedia article on the nabla symbol. (Of course, in this case you could easily have just searched “math upside down triangle symbol”, and the first result is a Math Stack Exchange thread answering this.)





You might be better off picking an area and trying to work out the notation relating to that area, e.g. vectors / matrices / calculus etc. As Colin says below, there are often multiple equivalent ways of representing things across different fields and timeframes. I seem to remember the maths I studied in Elec Eng looking different from, but equivalent to, the way it was represented in other disciplines.





I have a masters in engineering, but there were a lot of pure math things that I never understood until recently. I found the same approach works as for learning software concepts and APIs: just start at the thing you don’t know and recursively explore the concepts until you find stuff you do know.




My suggestion to you is going to sound pithy, but it’s what worked for me: do problems. Lots and lots of problems.
Pick a direction (maybe discrete math, if you’re trying to do CS), get a book (I like Epp, as it is super accessible), and go, in order, through each chapter. Read, do the example problems, and do EVERY SINGLE PROBLEM in the (sub)chapter section enders.
It’s a time commitment, but if you really want to learn it, this is one way to do so. IMO finding the right textbook is key.




Math notation feels like a write-only language somehow.
I can read and understand undocumented code with relative ease. Reading math notation without any documentation seems pretty much impossible, otoh.




You get better at it the more you do it. A tip: actually change a mathematical exposition into a form you understand better (e.g. by writing it in a different notation and/or expanding it out in words to make the existing notation less dense). Basically, convert the presentation into the way you would personally like to see it.
If you do this enough, the process becomes easier and the original notation becomes easier to understand. But it takes a lot of time and patience (as I’m sure learning to understand undocumented code did as well).




Do you mean that all the introductory mathematics books you tried fail to properly explain the notation?
Or that the notation differs from book to book?
(In my case, I learned the notation via French math textbooks, and on the first day of college/uni we literally went back to “There is a set of things called the natural numbers, and we call this set N, and there is this one thing called 0, and there is a notion of successor, and if you keep taking the successor it’s called ‘+’, and…” etc.)
But then, the French, Bourbaki-style way of teaching math is veeeeeeeery strict about notation.




It can be quite provincial. Could you please post a link to a paper or website that has notation you’d like to understand? Which domains are you interested in particularly?




Related question: does anyone know of any websites/books that show mathematical notation and the computer code representing the same formula side by side? I find that seeing it in code helps me grasp it very quickly.






I learned most of my university math through “Calculus: A Complete Course”. But it’s a bit expensive, so I would recommend you buy an older edition of the book, for which you can find a free solutions PDF.
But you’ll have to be a bit realistic when going through the book: it’s going to take a good while.




I’ve run into this problem as well, and it’s put me off learning TLA+ and information theory, which bums me out. I assume there’s a Khan Academy class that would help, but it’s hard to find.




I have a notation problem. I want to write “approximately 24 volt” on my printed circuit board, but I have little space. I could write “≈24V”, but the wavy symbol makes it look like it is AC instead of DC. How to solve this without adding more characters or changing my circuit?




Use “=c.24V” (read as ‘equals circa 24 volts’; circa is Latin for ‘about’).
Or use the three-line version of approximately equal (it looks like a tilde above an equals sign: ≅).




If you don’t remember the notation, surely you don’t remember the material either, so why not just skim through the basic textbooks?




I found that there is a physicality/motion to the progression of notation that you learn by solving a lot of problems, especially solving them quickly during tests.





Is there any particular topic? I agree with other posters, though, that the notation is shorthand for the concepts, and you need the concepts, not the notation.




the notation you need to know should be defined somewhere in the book or paper you’re reading
if it’s not, try intuition
if that fails, email your mathematician friend and ask
don’t have a mathematician friend? there’s your next goal, go make one.




> if it’s not, try intuition
If it’s not, the book is badly written. Most of the time, you can’t rely on a specific bit of notation being consistent across books or articles. Smart arses who try to impress the reader with their fancy unique notations are the bane of scientists doing literature reviews.
90% of the time, there needs to be a keyword when a symbol is introduced, e.g. “where Λ is the time-dependent foo operator”, so you can get a textbook and find out what the fuck a “foo operator” is. Then, the first time, you spend a day learning what it is, and the next million times you mumble “what a stupid notation for such a straightforward concept”.




I hear this question asked quite often, particularly on HN. I think the question is quite backwards. There is little value in learning “math notation” on its own, even ignoring what many people point out (there is no one “math notation”). “Math notation”, at best, translates into mathematical concepts. Words, if you will, but words with very specific meanings. Understanding those concepts is the crux of the matter! That is what takes effort – and the effort needed is that of learning mathematics. After that, one may still struggle with bad (or “original”, or “different”, or “overloaded”, or “idiotic”, or…) notation, of course, but there is little use in learning said notation(s) on their own.
I’ve been repeatedly called a gatekeeper for this stance here on HN, but really: notation is a red herring. To understand math written in “math notation”, you first have to understand the math at hand. After that, notation is less of an issue (even though it may still be present). Of course the same applies to other fields, but I suspect that the question crops up more often regarding mathematics because it has a level of precision not seen in any other field. Therefore a lot more precision tends to hide behind each symbol than the casual observer may be aware of.




Look for an etymology dictionary on math notation? The biographical sketch of the person who introduced the equal sign is an interesting read.





Khan academy and Schaum’s Outlines are your friends.
Then some textbooks with exercises (e.g. Axler on lin alg).
The notation is usually an expression of a mental model, so just approaching via notation may cause some degree of confusion.





I sometimes think math notation is a conspiracy against the clever but lazy.
Being able to pronounce the Greek alphabet is a start, as you can use your ear and literary mind once you have that, but when you encounter <...>, as in an unpronounceable symbol, the meaningless abstraction becomes a black box and destroys information for you.
Smart people often don’t know the difference between an elegant abstraction that conveys a concept and a black-box shorthand for signalling pre-shared knowledge to others. It’s the difference between compressing ideas into essential relationships and using an exclusive code word.
This fellow does a brilliant job at explaining the origin of a constant by taking you along the path of discovery with him, whereas many “teachers” would start with a definition like “Feigenbaum means 4.669,” which is the least meaningful aspect to someone who doesn’t know why. https://www.veritasium.com/videos/2020/1/29/thisequationwi…
It wasn’t until decades after school that it clicked for me that a lot of concepts in math aren’t numbers at all, but refer to relationships, relative proportions, and the interactions of different types of things – which are in effect just shapes, but ones we can’t draw simply, so we can only specify them using notations with numbers. I think most brains have some low level of natural synesthesia, and the way we approach math in high school has been to impose a three-legged race on anyone who tries it instead.
Pi is a great example, as it’s a proportion in a relationship between a regular line you can imagine and the circle made from it. There isn’t much else important about it other than that it applies to everything, and it’s one of the first irrational numbers we found. You can speculate that a line is just a stick some ancients found on the ground, so its unit is “1 stick” long, which makes it an integer; but when you rotate the stick around one end, the circular path it traces has a constant proportion to its length, because it’s the stick and there is nothing else acting on it. Amazingly, that proportion describing the relationship pops out of the single integer dimension and yields a whole new type of unique number that is no longer an integer. The least interesting or meaningful thing about pi is that it is 3.141 etc. High school math teaching conflates computation and reasoning, and invents gumption traps by going depth-first into ideas that make much more sense in their breadth-first contexts and relationships to other things, which also seems like a conspiracy to keep people ignorant.
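(In symbols – my gloss of the stick story, not the original commenter’s: a stick of length L swung around one end traces a circle of circumference C = 2πL, so C/L = 2π no matter how long the stick is; the integer “stick units” drop out and only the irrational proportion remains.)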
Just yesterday I floated the idea of a book club salon for “Content, Methods, and Meaning”, where, starting from any level, each session 2–3 participants pick and learn the same chapter separately and do their best to give a 15-minute explanation of it to the rest of the group. It’s on the first-year syllabus of a few universities, and it’s a breadth-first approach to a lot of the important foundational ideas.
The intent is that I think we only know anything as well as we can teach it, so the challenge is to learn by teaching, and you have to teach it to someone smart but without the background. Long comment, but keep at it – dumber people than you have got further with mere persistence.




If math were a programming language, all mathematicians would be fired for terrible naming conventions and horrible misuse of syntax freedom.
Honestly, most math formulas can be turned into something that looks like C/C++/C#/Java/JavaScript/TypeScript code and become infinitely more readable and understandable.
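For instance – a sketch of the kind of translation I mean, where the formula choice (sample variance) and every name are my own example, nothing standard:

    // Sample variance: s^2 = (1 / (n - 1)) * Σ (x_i - mean)^2,
    // written out with descriptive names instead of single letters.
    function mean(values: number[]): number {
      return values.reduce((sum, v) => sum + v, 0) / values.length;
    }

    function sampleVariance(values: number[]): number {
      const average = mean(values);
      const sumOfSquaredDeviations = values
        .map(v => (v - average) ** 2)
        .reduce((sum, v) => sum + v, 0);
      return sumOfSquaredDeviations / (values.length - 1);
    }

    // sampleVariance([2, 4, 4, 4, 5, 5, 7, 9]) returns 32 / 7 ≈ 4.571

Every step – “x_i minus the mean, squared, summed, divided by n minus one” – is right there in words.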
Sadly, TypeScript is one of the languages that is attempting to move back to idiocy by having generics named with a single letter. Bastards.




If you haven’t already, I would start by learning the Greek alphabet and the sounds that the letters make. Conventions like Σ for sum and Δ for difference seem much less strange when you realize that they’re basically just S and D.
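For example (my illustration, not the parent’s): Σ_{i=1}^{n} aᵢ just means a₁ + a₂ + … + aₙ, i.e. “Sum the a’s”, and Δx just means x_after − x_before, i.e. “the Difference in x”.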




1] Learn the Greek alphabet if you haven’t already.
2] Dive deep into the history of math.
3] YouTube…
3Blue1Brown, Stand-up Maths, Numberphile, Khan Academy. These channels are your friends.
4] Don’t give up, and make it fun. Once you’re bit by the bug of curiosity and rewarded with understanding you’ll most probably be unstoppable, but still, it’s a long road. Better to focus on the journey.
Lastly, the notation is what it is because of the nature of math itself, coupled with the history of who was doing the solving, exacerbated by the cultural uptake. There has been and will continue to be new notation. It’s unfortunate that the barrier to learning a new concept is often parsing the syntax. Stick with it and stay curious, and those squiggles will take on new magical and profound meanings.






Math papers can be pretty sloppy, and you don’t realize this until you start working with formal mathematics—then it’s obvious.
Almost all hand “proofs” in math papers have minor bugs, even if they’re mostly correct in the big picture sense.
Even math designed to support programming (e.g. in computer graphics) is almost always incomplete/outright wrong in some meaningful way.*
But with a struggle, it’s still largely usable/useful.
I’ve used advanced mathematics most of my career to do work (i.e. read a paper, implement it), but the ability to actually use math to do new things in computer science that mattered only came to me after I learned TLA+, which took a few weeks of solid study before it clicked. Since then, it’s been a pleasure. My specs have never been this good!
Lamport’s video course on TLA+ is pretty good, but honestly I’ve read everything I can find on the topic so it’s difficult to know what helped me the most.
*I think this is because, short of doing formal mathematics, there’s no way to “test” your math. It’s the equivalent of expecting programmers to write correct code the first time with no tests, and without even running the code.





Through a really nice and helpful math prof who took time out of her day to explain it to those of us in the “I’m in trouble” additional course. Forever grateful for that; I would have failed otherwise.
Math notation becomes very readable as soon as the teacher writes an example out on the blackboard, and that is why I will never forgive Wikipedia / Wolfram / LaTeX for not having an interactive “notation to example expansion”. They had such a chance to reform the medium – to make it more accessible to beginners – and basically forgot about them.
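For what it’s worth, here is a toy sketch of the kind of “notation to example expansion” I mean – the function and its output format are entirely made up to illustrate the idea, not a real feature of any of those sites:

    // Expand a summation Σ_{i = lower}^{upper} term(i) into a concrete
    // worked example, the way a teacher would on the blackboard.
    function expandSum(lower: number, upper: number, term: (i: number) => number): string {
      const parts: string[] = [];
      let total = 0;
      for (let i = lower; i <= upper; i++) {
        const value = term(i);
        parts.push(String(value));
        total += value;
      }
      return `${parts.join(" + ")} = ${total}`;
    }

    // Hovering Σ_{i=1}^{3} i² could then show:
    // expandSum(1, 3, i => i * i) → "1 + 4 + 9 = 14"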




> I find it really hard to read anything because of the math notations and zero explanation of it in the context.
So many answers and no correct one yet. Read “How to Prove It: A Structured Approach” by Velleman and solve the exercises. It’s the best introduction I’ve seen so far. After finishing it you’ll have enough maturity to read pretty much any math book.


