Inadmissible theorems in research




One of my engineering friends told me how he once had to take a make-up calculus I exam because he had been hospitalised, and so self-studied a lot of the missed topics. On the make-up exam he used L'Hôpital's rule, even though it wasn't taught until one or two exams later. My friend told me that the professor wrote



'You are not yet allowed to use L'Hôpital's rule.'



So, I like to say that L'Hôpital's rule was inadmissible in that exam.



Now, it absolutely makes sense that, as a student, you're not allowed to use propositions, theorems, etc. from future topics, all the more from future classes, and especially in something as basic as calculus I. It also makes sense to adjust for majors: certainly maths majors shouldn't be allowed to use topics from discrete mathematics or linear algebra to gain an edge over their business, environmental science or engineering classmates (who take linear algebra later than maths majors at my university) in calculus I or II.



But after bachelor's, master's and maths PhD coursework, you're the researcher and not merely the student: say you're working on your maths PhD dissertation, or you've already finished the PhD.



Does maths research have anything inadmissible?



I can't imagine having something to prove, finding some paper that helps you prove it, and then going to your advisor only to be told, 'You are not yet allowed to use the Poincaré theorem', or, for something proven true more than 12 years ago, 'You are not yet allowed to use Cauchy's differentiation formula'.



Actually, what about outside maths, say in physics or computer science?





I would have said by virtue of being hospitalized, L'Hopital's rule should be fair game.
– Azor Ahai
Aug 29 at 21:02





Comments are not for extended discussion; this conversation has been moved to chat. Please do not post answers in the comments. If you want to debate the practice of banning L’Hôpital’s rule in an exam situation, please take it to chat. Please read this FAQ before posting another comment.
– Wrzlprmft
Aug 31 at 7:06





16 Answers



The error your friend made, such as it is, was not the use of l'Hôpital but the lack of a proof that it is correct. If he had stated l'Hôpital as a lemma and provided a sufficiently elementary proof, then presumably the lecturer would not have had an issue with the solution.



An analogous phenomenon happens in research mathematics. There are plenty of folklore results, where researchers are pretty sure the result is true, and the techniques for proving the result are known, but nobody happens to have written the proof down or at least published it. These can be found, for example, in the classical regularity theory for partial differential equations.



Should one provide a proof of such a result when using it as a tool? Sometimes people simply refer to the result without being explicit about it. Sometimes they prove it "because we cannot find a proof in the literature", even if the proof is simple or tangential to the point of the article at hand. There is no absolutely right solution in these cases.



I think that folklore results are as close to "inadmissible" as one gets in research mathematics; one should be careful about them, sometimes prove them, but sometimes they are also used without proof.





@Buffy The first paragraph is an introduction to the answer that is folklore. Right, Tommi Brander?
– BCLC
Aug 29 at 18:18





@BCLC: It is more common than you think. For just one example phrasing, see "it is folklore that" on Google Scholar.
– user21820
Sep 10 at 4:10





Tommi Brander, in @user21820's link, is the first paper (which is by Terry Tao) related to 'classical regularity theory for partial differential equations'?
– BCLC
Sep 25 at 15:37





@BCLC Uhhh... I guess? This is not a precise classification schema. Why do you ask?
– Tommi Brander
Sep 25 at 19:08





@TommiBrander well the paper looks like a good example of your example
– BCLC
Sep 25 at 23:32



Does maths research have anything inadmissible?



No, but trying to prove X without using Y is still a very useful concept even in research, because it can lead to interesting generalizations, or new proof techniques that can be applied to a larger set of problems.



For instance, in some sense the Lebesgue integral is "just" trying to prove the properties of integrals without using the continuity of the integrand, and the theory of matroids is "just" trying to prove the properties of linearly independent vectors without using most of the vector space structure.



So this is far from being a pointless exercise, if that's what you had in mind.
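To make the matroid example above concrete, here is a minimal sketch (standard definitions, included only as illustration and not part of the original answer): a matroid on a finite ground set $E$ is a family $\mathcal{I}$ of "independent" subsets of $E$ such that

    (I1) $\emptyset \in \mathcal{I}$;
    (I2) if $A \in \mathcal{I}$ and $B \subseteq A$, then $B \in \mathcal{I}$;
    (I3) if $A, B \in \mathcal{I}$ and $|A| < |B|$, then there is some $x \in B \setminus A$ with $A \cup \{x\} \in \mathcal{I}$.

Taking $E$ to be a finite set of vectors and $\mathcal{I}$ the linearly independent subsets recovers the motivating example; anything proved from (I1)-(I3) alone (for instance, that all maximal independent sets have the same size) is a property of linear independence obtained "without using" the vector space structure.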





This is an excellent answer. There is a very broad phenomenon that can be paraphrased as "constraint breeds creativity." E.g. there is a reason that people have been writing haikus for more than eight hundred years. But one of the essences of "creative constraints" is that they are largely self-imposed.
– Pete L. Clark
Aug 29 at 22:00





@FedericoPoloni I’m not familiar with that use of punctuation, and I don’t think it’s commonly understood. I think you probably mean to write “the Lebesgue integral is ‘just’ trying to prove …”, which uses more conventional punctuation and grammar to express what I think you’re trying to express.
– Konrad Rudolph
Aug 30 at 9:52






@KonradRudolph FWIW, I think the original was fine, although I don't have a strong preference. (Native English speaker)
– Yemon Choi
Aug 30 at 12:07






An important note, though: I consider there to be a very significant difference between proving results using fewer hypotheses or axioms, and "pretending" not to know theorems which are consequences of the hypotheses you do assume. Banning l'Hopital, while assuming stronger results like the mean value and squeeze theorem, is both ill-defined (the first lemma of my solution can just be a proof of l'Hopital) and of dubious benefit.
– user168715
Aug 31 at 6:53





@PeteL.Clark There's even a relevant XKCD about that.
– Nic Hartley
Sep 1 at 1:49



In the sense that you are asking, I cannot imagine there ever being a method that is ruled inadmissible because the researcher is "not ready for it." Every intellectual approach is potentially fair game.



If the specific goal of a work is to find an alternative approach to establishing something, however, it could well be the case that one or more prior methods are ruled out of scope, since they would assume the very result you want to establish by an independent path. For example, the constant e has been derived in multiple ways.
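For instance, two of the standard, logically independent characterizations of $e$ (textbook facts, included here only as illustration) are

    $e = \lim_{n \to \infty} \left(1 + \tfrac{1}{n}\right)^{n} = \sum_{k=0}^{\infty} \frac{1}{k!},$

and a project whose point is to derive $e$ along one of these routes may deliberately rule the other out of scope.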



Finally, once you step outside of pure theory and into experimental work, one must also consider the ethics of an experimental method. Many potential approaches are considered inadmissible due to the objectionable nature of the experiment. In extreme cases, such as the Nazi medical experiments, even referencing the prior work may be considered inadmissible.





Ah, you mean like if you want to, say, prove Fourier inversion formula probabilistically, you would want to avoid anything that sounds like what you already know to be the proof/s of the Fourier inversion formula because that would defeat coming up with a different proof? Or something like my question here? Thanks jakebeal!
– BCLC
Aug 29 at 14:50





Re outside of pure: Okay now that seems pretty obvious in hindsight (i.e. dumb question for outside of pure). I think it's far less obvious for pure
– BCLC
Aug 29 at 15:24




It is worth pointing out that theorems are usually inadmissible if they lead to circular reasoning. If you study maths you learn how mathematical theories are built, lemma by lemma and theorem by theorem. These theorems and their dependencies form a directed acyclic graph (DAG).



If you are asked to reproduce the proof of a certain theorem and you use a "later" result, that result usually depends on the theorem you are supposed to prove, so using it is not just inadmissible for educational reasons; it would actually lead to a circular, and hence incorrect, proof in the context of the DAG.



In that sense there cannot be any inadmissible theorems in research, because research usually consists of proving the "latest" theorems. However, if you publish a shorter, more elegant or more beautiful proof of a known result, you might have to look out for inadmissible theorems again.
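As a toy illustration (a hypothetical sketch, not part of the original answer; the theorem names and dependency edges below are made up), one can check mechanically whether citing a "later" result to prove an "earlier" one would create a cycle in the DAG:

    # Toy dependency graph: each theorem maps to the results its proof uses.
    # The names and edges are illustrative only.
    deps = {
        "limit_laws": [],
        "squeeze_theorem": ["limit_laws"],
        "limit_sin_x_over_x": ["squeeze_theorem"],
        "derivative_of_sin": ["limit_sin_x_over_x"],
        "mean_value_theorem": ["limit_laws"],
        "lhopital": ["derivative_of_sin", "mean_value_theorem"],
    }

    def creates_cycle(deps, theorem, cited):
        """Would proving `theorem` by citing `cited` be circular?

        It is circular exactly when `theorem` is reachable from `cited`
        by following dependency edges, i.e. `cited` ultimately rests on `theorem`.
        """
        stack, seen = [cited], set()
        while stack:
            current = stack.pop()
            if current == theorem:
                return True
            if current in seen:
                continue
            seen.add(current)
            stack.extend(deps.get(current, []))
        return False

    print(creates_cycle(deps, "limit_sin_x_over_x", "lhopital"))         # True
    print(creates_cycle(deps, "limit_sin_x_over_x", "squeeze_theorem"))  # False

Here, using l'Hôpital to establish $\lim_{x \to 0} \sin(x)/x$ is flagged as circular, because in this toy graph l'Hôpital rests on the derivative of $\sin$, which rests on that very limit.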





+1 for bringing up explicitly what seems to have only been implicit, or mentioned in comments to other answers. I have a hazy memory of marking someone's comprehensive graduate exam in Canada where the simplicity of the algebra of n-by-n matrices (which carried non-negligible marks) was proved by appealing to Wedderburn's structure theorem...
– Yemon Choi
Aug 30 at 12:09





This is the right answer to my mind. It would be strengthened by explaining what this has to do with l'Hôpital, as in Nate Eldredge's comment. But what does DAG stand for?
– Noah Snyder
Aug 30 at 12:20






@NoahSnyder: DAG doubtless stands for directed acyclic graph.
– J W
Aug 30 at 13:13





@JW: Thanks! I was expecting it was a technical term in pedagogy or philosophy of science, not math.
– Noah Snyder
Aug 30 at 13:21





The acyclic bit of DAGs is probably worded a bit carelessly. It's common enough to have theorems A and B that are essentially equivalent, such that A can be proven from B and vice versa. This creates an obvious cycle, but it doesn't matter. There are then at least two acyclic subgraphs that connect the theorem to prove and its axioms, the axioms being the graph roots. IOW, while any particular proof is acyclic, the union of them is not.
– MSalters
Aug 30 at 14:53



While there are indeed no inadmissible theorems in research, there are certain things that one sometimes tries to avoid.



Two examples come to mind:



The first is the classification of finite simple groups. The classification itself is not particularly complicated, but the proof is absurdly so. This makes mathematicians working in group theory prefer to avoid using it when possible. It is in fact quite often explicitly pointed out in a paper if a key result relies on it.



The reason for this preference was probably, originally, to some extent that the proof was too complicated for people to have full confidence in it, but my impression is that this is no longer the case; the preference is now due to the fact that relying on the classification makes the "real reason" for the truth of a result more opaque and thus less likely to lead to further insights.



The other example is the huge effort that has gone into trying to prove the so-called Kazhdan-Lusztig conjecture using purely algebraic methods.



The result itself is algebraic in nature, but the original proof uses a lot of very deep results from geometry, which made it impossible to use it as a stepping stone to settings not allowing for this geometric structure.



Such an algebraic proof was achieved in 2012 by Elias and Williamson, when they proved Soergel's conjecture, which has the Kazhdan-Lusztig conjecture as one of several consequences.



The techniques used in this proof allowed just the sort of generalizations hoped for, leading first to a disproof of Lusztig's conjecture in 2013 (a characteristic $p$ analogue of the Kazhdan-Lusztig conjecture), and then to a proof of a replacement for Lusztig's conjecture in 2015 (for type $A$) and 2017 (in general), at least under some mild assumptions on the characteristic.





Didn't Elias and Williamson put the KL conjecture on an algebraic footing, or am I misremembering things?
– darij grinberg
Aug 29 at 15:08





@darijgrinberg They did indeed. I actually meant to add that, but forgot it again while typing. I have added some details about it.
– Tobias Kildetoft
Aug 29 at 17:04




There are cases where researchers restrict themselves from using certain theorems. Example:



Atle Selberg, "An elementary proof of the prime-number theorem", Ann. of Math. (2) 50 (1949), 305–313.



The author restricts himself to using only "elementary" methods (in a technical sense).
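For context (a standard statement, not taken from Selberg's paper): the prime number theorem says that the prime-counting function $\pi(x)$ satisfies

    $\pi(x) \sim \dfrac{x}{\log x}$ as $x \to \infty$,

and "elementary" here means that the proof avoids complex analysis, not that it is easy.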



Other cases may be proofs in geometry using only straightedge and compass. Gauss's work shows that the regular 257-gon can be constructed with straightedge and compass. I would not consider that to be "a new proof of a known result".





So same as jakebeal?
– BCLC
Aug 29 at 17:03





That case is different because the researchers are just showing a new proof of a known theorem, one that is simpler (or more elegant) than the known proofs. In maths there is a kind of consensus that simpler proofs are better (for many reasons; for instance, they are easier to check and usually depend on weaker results), so an elementary proof is an original research result even if it is a proof of the "same type" as the existing ones (e.g. a simpler algebraic proof when another algebraic proof is already known).
– Hilder Vitor Lima Pereira
Aug 30 at 13:21





@HilderVitorLimaPereira if I may nitpick a bit, the elementary proof of the prime number theorem is regarded by most people who have studied it as neither simpler nor more elegant than the analytic family of proofs. It is however more “elementary” (specifically, does not use complex or Fourier analysis), which is also a very important and interesting feature. Certainly its discovery was a major research result, so in that sense you make a good and valid point.
– Dan Romik
Aug 30 at 15:46





@DanRomik I see. Yes, when I said "weaker results" I was actually thinking about more elementary results, in the sense that they use theories that do not depend on a deep sequence of constructions and other theorems, or that are considered basic knowledge in the maths community. Thank you for that comment.
– Hilder Vitor Lima Pereira
Aug 30 at 16:03





@HilderVitorLimaPereira maybe that thought could be called "weaker claims"?
– elliot svensson
Aug 31 at 14:20



It is perhaps worth noting that some results are in a sense inadmissible because they aren't actually theorems. Some conjectures/axioms are so central that they are widely used, even though they haven't yet been established. Proofs relying on these should make that clear in the hypotheses. However, it wouldn't be that hard to have a bad day and forget that something you use frequently hasn't actually been proved yet, or that it is needed for a later result you want to use.





Perhaps Poincaré was a bad example because it was a conjecture with a high bounty for quite some time, but let's pretend I used something that has been proven for decades. Your answer is now...?
– BCLC
Aug 29 at 15:13





There is (unfortunately...) a whole spectrum between "unequivocal theorem" and "conjecture" in combinatorics and geometry, due to the rigorous methods lagging behind the sort of arguments researchers actually use.
– darij grinberg
Aug 29 at 15:14





@BCLC Actually, the Poincare Conjecture was widely 'used' before its proof. The resulting theorems include a hypothesis of 'no fake 3-balls'. But I also know of a paper proving a topological result using the generalised continuum hypothesis.
– Jessica B
Aug 29 at 15:17





@darijgrinberg I disagree with your assertion. If something is believed true, no matter with what level of confidence, but is not an “unequivocal” theorem (i.e., a “theorem”), then it is a conjecture, not “somewhere on the spectrum between unequivocal theorem and conjecture”. I challenge you to show me a pure math paper, published in a credible journal, that uses different terminology. I’m pretty sure I do understand what you’re getting at, but others likely won’t, and your use of an adjective like “unequivocal” next to “theorem” is likely to sow confusion and lead some people to think ...
– Dan Romik
Aug 29 at 20:42





@DanRomik: I guess I was ambiguous. Of course these things are stated as theorems in the papers they're published in. But when you start asking people about them, you start hearing eehms and uuhms. I don't think the problem is concentrated with certain authors -- rather it's specific to certain kinds of combinatorics, and the same people that write very clearly about (say) algebra become vague and murky when they need properties of RSK or Hillman-Grassl...
– darij grinberg
Aug 29 at 20:45




In intuitionistic logic and constructive mathematics we try to prove things without the law of excluded middle, which excludes many of the normal tools used in maths. And in logic in general we often try to prove things using only a defined set of axioms, which often means that we are not allowed to follow our "normal" intuitions. Especially when proving something in multiple axiomatic systems of different strength, some tools only become available towards the end (in the more powerful systems) and are as such inadmissible in the weaker systems.
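A classic textbook illustration of what the law of excluded middle buys you (included only as an example; it is not part of the original answer): the non-constructive proof that there exist irrational numbers $a, b$ with $a^b$ rational. Either $\sqrt{2}^{\sqrt{2}}$ is rational, in which case take $a = b = \sqrt{2}$; or it is irrational, in which case take $a = \sqrt{2}^{\sqrt{2}}$ and $b = \sqrt{2}$, since

    $\left(\sqrt{2}^{\sqrt{2}}\right)^{\sqrt{2}} = \sqrt{2}^{\,2} = 2.$

The case split never tells us which pair actually works, so the argument, as it stands, is inadmissible in a constructive setting.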





That is a great thing to do, but not the same as having parts of math closed off from you by an advisor unless you are both working in that space. The axiom of choice is another example that explores proof in a reduced space. I once worked in systems with a small set of axioms in which more could be true, but less could be proved to be true. Fun.
– Buffy
Aug 29 at 20:43






In the same vein, working in reverse mathematics usually requires one's arguments to be provable from rather weak systems of axioms, which leads to all sorts of complications that would not be present using standard sets of assumptions.
– Andrés E. Caicedo
Aug 30 at 20:29



To answer your main question: no. Nothing is disallowed. Any advisor would (or at least should) allow any valid mathematics. There is nothing in mathematics that is disallowed, especially in doctoral research. Of course, this assumes acceptance (now settled) of the Poincaré theorem; prior to an accepted proof you couldn't depend on it.



In fact, you can even write a dissertation based on a hypothetical (If Prof Buffy's Large Theorem is true, then it follows that...). You can explore the consequences of things not proven. Sometimes it helps connect them to known results, leading to a proof of the "large theorem" and sometimes it helps to lead to a contradiction showing it false.



However, I have an issue with the background you have given on what is appropriate in teaching and examining students. I question the wisdom of the first professor disallowing anything that the student knows. That seems shortsighted and turns the professor into a gate that allows only some things to trickle through.



Of course, if the professor wants to test the student on a particular technique he can try to find questions that do so, but this also points up the basic stupidity of exams in general. There are other ways to assure that the student learns essential techniques.



A university education isn't about competition with other students and the (horrors) problem of an unfair advantage. It is about learning. If the professor or the system grades students competitively, they are doing a poor job.



If you have the 20 absolutely best students in the world and grade purely competitively, then half of them will be below average.





I feel like you have misunderstood the question.
– Jessica B
Aug 29 at 15:05





@Buffy: The question wasn't actually about the class. The question was about whether "inadmissible" stuff exists at the graduate level.
– cHao
Aug 29 at 15:54





One reason to "disallow" results not yet studied is that it helps to avoid circular logic. A standard example: a student is asked to show that $\lim_{x \to 0} \sin(x)/x = 1$. The student applies L'Hôpital's rule, taking advantage of the fact that the derivative of $\sin(x)$ is $\cos(x)$. However, the usual way of proving that the derivative of $\sin(x)$ is $\cos(x)$ requires knowing the value of $\lim_{x \to 0} \sin(x)/x$. If you "forbid" L'Hôpital's rule in solving the original problem, you prevent this issue from arising.
– Nate Eldredge
Aug 29 at 16:48
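To spell out the circularity mentioned in the comment above (a standard computation, added only as illustration): the difference quotient for $\sin$ is

    $\dfrac{\sin(x+h) - \sin(x)}{h} = \sin(x)\,\dfrac{\cos(h) - 1}{h} + \cos(x)\,\dfrac{\sin(h)}{h},$

so evaluating $\frac{d}{dx}\sin(x)$ as $h \to 0$ already requires knowing $\lim_{h \to 0} \sin(h)/h$, which is exactly the limit L'Hôpital's rule was invoked to compute.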





Well, you can have a standing course policy not to assume results not yet proved. This is sufficiently common that the instructor may have assumed it went without saying. Or, the downgrade may have actually been for circular logic, but the reasoning was explained poorly or misunderstood.
– Nate Eldredge
Aug 29 at 16:56





I think L'Hopital's rule is uniquely pernicious and results in students failing to learn about limits and immediately forgetting everything about limits, in a way that has essentially no good parallels elsewhere in the elementary math curriculum. So I don't think you can substitute in something else and make it the same question. Someone who uses L'Hopital to compute, say, $\lim_{x \to 0} \frac{x^2}{x}$ isn't showing a more advanced understanding of the material, they're showing they don't understand the material!
– Noah Snyder
Aug 30 at 12:52




I don't think there are inadmissible theorems in research, although obviously one has to take care not to rely on assumptions that have yet to be proven for a particular problem.



However, in terms of PhD or postdoc work, I feel that some approaches may be rather "off-topic" for not-really-academic reasons. For example, if you secure PhD funding to study topic X, you should not normally use it to study Y. Similarly, if you secure a postdoc in a team which develops method A, and you want to study your competitor's method B, your PI may want to keep the time you spend on B limited, so that it does not exceed the time you spend developing A. Some PIs are quite notorious in the sense that they won't tolerate you even touching some method C, for their own important reasons; so even though you have full academic freedom to go and explore method C if you like it, it may be "inadmissible" to do so within your current work arrangements.





Thanks Dmitry Savostyanov! This sounds like something I had in mind, but this is for applied research? Or also for theoretical research?
– BCLC
Aug 29 at 15:10






Even in pure maths, people can be very protective sometimes. And people in applied maths can be very open-minded. It's more about personal approaches to science, perhaps.
– Dmitry Savostyanov
Aug 29 at 15:11



I'm going to give a related point of view from outside of academia, namely a commercial/government research organisation.



I have come across researchers and managers who are hindered by what I call an exam mentality, whereby they assume that a research question can only be answered with the data set provided, and that they cannot make reference to other data, results, studies, etc.



I've found this exam mentality to be extremely limiting; it comes about because the researcher or manager has a misconception about research, indoctrinated by their (mostly exam-based) education.



The fact of the matter is that declining to use data/techniques/studies on arbitrary grounds stifles research. It leads to missed opportunities for commercial organisations to make a profit, missed consequences when governments introduce new policy, missed side-effects of new drugs, etc.



I will add a small example from Theoretical Computer Science and algorithm design.



It is a very important open problem to find a combinatorial (or even LP based) algorithm that achieves the Goemans-Williamson bound (0.878) for approximating the MaxCut problem in polynomial time.



We know that using semidefinite programming techniques, an approximation factor of $\alpha \approx 0.878$ can be achieved in polynomial time. But can we achieve this bound using other techniques? Slightly less ambitiously, but probably equally important: can we find a combinatorial algorithm with an approximation guarantee strictly better than 1/2?



Luca Trevisan has made important progress in that direction using spectral techniques.
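For readers unfamiliar with the Goemans-Williamson approach, here is a minimal sketch of its rounding step (hypothetical toy code, not taken from any of the papers mentioned; it assumes the SDP has already been solved and that the columns of V are the resulting unit vectors):

    import numpy as np

    def hyperplane_round(V, weights, trials=1000, seed=0):
        """Goemans-Williamson rounding step: V[:, i] is the unit vector
        assigned to vertex i by a (pre-solved) MaxCut SDP; weights[i][j]
        is the weight of edge {i, j}. Draws random hyperplanes through
        the origin and returns the best cut found."""
        rng = np.random.default_rng(seed)
        d, n = V.shape
        best_value, best_sides = -1.0, None
        for _ in range(trials):
            r = rng.normal(size=d)          # normal vector of a random hyperplane
            sides = np.sign(V.T @ r)        # +1 / -1 side for each vertex
            value = sum(weights[i][j]
                        for i in range(n) for j in range(i + 1, n)
                        if sides[i] != sides[j])
            if value > best_value:
                best_value, best_sides = value, sides
        return best_value, best_sides

    # Toy example: the 5-cycle with unit edge weights (its maximum cut is 4).
    n = 5
    weights = [[0] * n for _ in range(n)]
    for i in range(n):
        weights[i][(i + 1) % n] = weights[(i + 1) % n][i] = 1

    # Hand-picked feasible SDP solution: unit vectors spread around a circle
    # so that adjacent vertices are nearly antipodal.
    angles = 4 * np.pi * np.arange(n) / n
    V = np.vstack([np.cos(angles), np.sin(angles)])

    print(hyperplane_round(V, weights, trials=200))

The embedding used here is merely a feasible SDP solution chosen by hand; a real implementation would obtain V from an SDP solver, and the 0.878 guarantee concerns the expected weight of the cut produced by a single random hyperplane relative to the SDP optimum.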



In research you would use the most applicable method (that you know) to demonstrate a solution, and would possibly also be in situations where you are asked about or offered alternative approaches to your solution (and then you learn a new method).



In the example where L'Hôpital's rule was "not permitted", it may be that the exam question could have been worded better: it sounds like a "solve this" question, written on the assumption that only the methods taught in the course are known to students and therefore only those methods will be used in the exam.





There was no ambiguity in the question. L'Hôpital's rule wasn't introduced to us until our third or fourth exam. My engineering friend was taking a make-up for either our second exam or our midterm or both (I forget). It would've been like using the sequence definition of continuity in the first exam of an elementary analysis class when the class teaches sequences last (like mine did).
– BCLC
Aug 29 at 14:46





I understand that, but when it was introduced has no bearing on whether students may already know how to use it. It would be the same as asking, "Show that the first derivative of x^2 is 2x", and then telling students who solved it using implicit differentiation that that is not allowed and they should have used explicit differentiation.
– Mick
Aug 29 at 14:51






Mick, but it was a make-up exam. It would be unfair to students who took the exam on time because we didn't know L'Hôpital's rule at the time?
– BCLC
Aug 29 at 14:56






It's not about being fair. It's about math building on itself. Often you're expected to solve things a certain way in order to ensure you understand what the later stuff allows you to simplify or ignore. If there was an intended method, it should have been in the instructions. But it's a common assumption that if you haven't been taught it, you don't know it yet.
– cHao
Aug 29 at 15:51






Without denying the other suggestions on why it might be disallowed, fairness to other students is irrelevant. The purpose of an exam is to assess or verify what you have learned, not to decide who wins a competition.
– WGroleau
Aug 30 at 12:15



Well, in pure maths research I am sure brute-force approximations by computer are disallowed, except as a way to motivate interest in the topic, to narrow the area to be explored, or perhaps to suggest an approach to a solution.



Maths research requires equations that describe an exact answer, and a proof that the answer is correct by derivation from established mathematical facts and theorems. Computer approximations may use ever smaller intervals to narrow the range of an answer, but they never actually reach the infinitely small limit in the L'Hôpital style.
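As a tiny illustration of that point (hypothetical toy code, not from the answer): a numerical sweep can suggest the value of $\lim_{x \to 0} \sin(x)/x$, but no finite sample of $x$ values constitutes a proof.

    import math

    # Evaluate sin(x)/x at ever smaller x: the values creep towards 1,
    # which suggests, but does not prove, that the limit is 1.
    for k in range(1, 8):
        x = 10.0 ** (-k)
        print(f"x = {x:.0e}   sin(x)/x = {math.sin(x) / x:.12f}")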



The separate area of computerised derivation basically just automates what is already known. I am sure many places leave researchers free to use such computerisation to speed up the documentation of work, as far as such software goes. I am sure that plenty of human guidance is still needed to formulate the problem, introduce postulates and choose which available solution steps to try. But the key thing is that any such software-assisted derivation would have to be verified by hand, before any outside review, both for software error and to check that the techniques stay within the allowed boundaries (the "if" portion of theorems, etc.).



And after such hand checks...how many mathematical researchers would credit computer software for assistance?



Well, I saw applied mathematicians cite software as a quick check method, for colleagues to verify the reasonableness of the work, as far back as the 1980s. Since applied mathematics sometimes takes an almost engineering-style view of practical results, I suppose they still present computer approximations as a quick demonstration AFTER the formal derivations. And I hear that applied maths sometimes solves the nearest tractable approximation of a problem when a solution to the exact problem still evades them. So again, more room for assistance from computer-based derivation. I am not sure that such operations-research-type topics fit everyone's definition of mathematical research, though.





Please try to avoid leaving two separate answers; you should edit your first one
– Yemon Choi
Sep 2 at 3:36





I find this answer slightly misses the point of the original question, since it seems more about the use of computers than anything else, and doesn't really address the OP's question about whether there are situations when one should not make use of certain theorems while doing research
– Yemon Choi
Sep 2 at 3:37



In shorter terms: yes, computer approximation techniques are often used in a shotgun manner to look for areas of potential convergence on solutions, as in "give me a hint", especially in applied-maths topics where real-world boundaries can be described.



Again, there is the question of whether real-world problems other than fundamental physics are true maths research, or the much looser applied maths or even operations research.



But in the actual derivation of new theorems from underlying, proven theorems, computers are largely limited to being documentation tools, similar to word processors for prose. They are still becoming more and more important for speeding up the routine equation checking of documented work, much as word processors check spelling and grammar for prose, and there remain many areas where a human must override or redirect.





I find this answer slightly misses the point of the original question, since it seems more about the use of computers than anything else
– Yemon Choi
Sep 2 at 3:35





Also, don't create two new user identities. Register one which can be used consistently
– Yemon Choi
Sep 2 at 3:37



The axiom of choice (and its corollaries) is pretty well accepted these days in the mathematical community, but you might occasionally run across a few old-school mathematicians who think that it is "wrong", and therefore that any corollary you use the axiom of choice to prove is also "wrong". (Of course, what it even means for the axiom of choice to be "wrong" is a largely philosophical question.)








