Sunday, February 5, 2012

Universal Map of Mathematics


Mathematics has been done backwards.
   Mathematics is supposed to be a logical structure built upon postulates, things taken to be true (two points determine a line), and assumptions, limitations placed on the items (the triangle is isosceles).  That logic and structure are supposed to be the framework upon which the proofs are built, but currently proofs are treated as disconnected globules of thought.  Each proof is initiated, its steps are carried out, and the conclusion is written without any consideration of connections to other proofs.  The formal structure is effectively disregarded.
   In the simplest mapping, each proof is taken as a node, functioning as an input-output box.  The simplified schematic at right shows the initial row as the postulates and assumptions, P & A, with each higher row showing the interconnections leading to each theorem, T.  Each theorem is numbered one row higher than the highest-numbered row among its predecessors; for example, each theorem in row 3 has at least one predecessor in row 2.
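   As a rough sketch of that numbering rule (my own code, not anything taken from the schematic): if each proof is stored as a node with the set of its predecessors, postulates and assumptions sit at level 0 and every theorem's row is one more than the highest row among its predecessors.  The names P1, P2, T_a, T_b are invented for illustration.

```python
# A sketch: postulates/assumptions are level 0, each theorem sits one row
# above its highest predecessor.
def assign_levels(predecessors):
    """predecessors maps each item to the set of items it depends on."""
    levels = {}

    def level(item):
        if item not in levels:
            deps = predecessors.get(item, set())
            levels[item] = 0 if not deps else 1 + max(level(d) for d in deps)
        return levels[item]

    for item in predecessors:
        level(item)
    return levels

# Hypothetical toy graph: P1, P2 are postulates; T_a uses both, T_b builds on T_a.
deps = {"P1": set(), "P2": set(), "T_a": {"P1", "P2"}, "T_b": {"T_a", "P2"}}
print(assign_levels(deps))   # {'P1': 0, 'P2': 0, 'T_a': 1, 'T_b': 2}
```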
   In order to construct this map, the necessary inputs would have to be clearly identified for each theorem, which means each theorem's conclusion would also have to be clearly identifiable.  To facilitate this it would be best to start with a minimum logic vocabulary for mathematics: the minimum number of logic words necessary to express all mathematical proofs.
   To get an estimate of the size of the minimum vocabulary, a large number of proofs, maybe 1 000, could be read through a word-counting program, with each new word added under the next higher index number.  The first time the word "the" is recorded it would add 1 to the index; any subsequent occurrence of "the" would be ignored.  A listing would also be made of each different word.  This would only be a preliminary estimate: the words "is" and "are" would be recorded separately but are logically one word.  The total number of logic words is probably well under 1 000.  There is also a difference between the minimum and the minimum useful set of words.  Certain strings of words may occur frequently; rather than repeatedly writing out the full string, it would be simpler to choose a substitute word to replace it.
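   A minimal sketch of that counting pass might look like the following; the tokenization and the two sample sentences are placeholders of my own, and, as noted above, words like "is" and "are" would still have to be merged by hand afterwards.

```python
import re

# Each new word gets the next index the first time it appears; repeats are ignored.
def index_vocabulary(texts):
    index = {}                                   # word -> index of first appearance
    for text in texts:
        for word in re.findall(r"[a-z]+", text.lower()):
            if word not in index:
                index[word] = len(index) + 1
    return index

# Two invented proof fragments stand in for the ~1 000 real proofs.
proofs = ["The angles at the base are equal.",
          "The base angles are equal, so the triangle is isosceles."]
vocab = index_vocabulary(proofs)
print(len(vocab))   # 10 distinct words; "is" and "are" are still counted separately here
```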
   It might be best to write the logic words without using any ordinary language, to reduce ambiguity.  The words could be written in a manner similar to Chinese, compiling each from a list of symbols to create a unique character for each word.  Other options are to write the words as members of a numbered list (word 0531) or as clusters of letters (nfw could be the word for an impossible statement, short for 'no way').
   Contained within this model, since each theorem is treated as a single equal node, is the implicit assumption that every theorem has the same logic length, which does not seem to be correct or even to make sense.
   The basic concept of this process is that a theorem is a transform performed on postulates; theorems take the postulates from their initial state into the forms created by the theorems.  Instead of initiating a proof, forming it, concluding it and then initiating the next proof, the transformation would be continuous through the proofs.  The way math is currently done is the equivalent of treating the postulates as buckets of water: the postulate is poured into the pipe representing the first theorem, collected into a second bucket, and then poured into the next theorem pipe.  The model I am suggesting is to connect the pipes and let the postulates flow through them uninterrupted.  The postulates are like little paper boats that flow down the theorem pathways under the influence of mathematical proof.  Equally bad analogies are electricity flowing through wires and switches, or light flowing through waveguides, in which case additional assumptions can be considered to change the light's polarity.  In any case the point is that there is a continuous flow of mathematical logic and not discrete lumps of disconnected proof.
   There is another implication to this: for every theorem there is an associated Big Theorem.  As the illustration at right attempts to show, the Big Theorem is the totality of all proofs, postulates and assumptions needed to arrive at the theorem's result.  It includes all predecessor theorems.  It is the total transform of the postulates into the theorem result.
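   In terms of the same predecessor structure sketched earlier, the Big Theorem of a given theorem is simply the set of everything it depends on, directly or indirectly.  A small illustrative sketch, again with made-up names:

```python
# Sketch: the Big Theorem of an item is everything it rests on, found by walking
# the predecessor links back to the postulates.
def big_theorem(item, predecessors):
    seen, stack = set(), [item]
    while stack:
        for dep in predecessors.get(stack.pop(), set()):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

deps = {"P1": set(), "P2": set(), "T_a": {"P1", "P2"}, "T_b": {"T_a", "P2"}}
print(big_theorem("T_b", deps))   # {'T_a', 'P1', 'P2'}: everything T_b depends on
```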
    The Big Theorem, when written out, is inefficient, since most of it is shared with the Big Theorems of other theorems.  A large part of the logic block would have to be rewritten for each theorem.
    Another way of writing the theorem structure is to use matrix notation.  The diagram at right illustrates this process: the postulates and assumptions are listed as a right vector and the proof process is written as a matrix.
   The process of mathematical proof can be considered a form of logical addition: the logic content of postulate A or theorem B is added to theorem C under assumption D.  It is the combining of different logic statements that should be emphasized.  Thus the postulates and assumptions are combined as specified by the matrix to form the first set of theorems, T1.  In set terms, T1 is the set of all theorems whose predecessors are only postulates and assumptions, without any intermediate theorems.  It is a well-defined set and therefore mathematically valid.
   With this matrix notation, the question of each theorem having the same logic length arises again.  Since all the theorems are contained in a single vector, that would again suggest an equal logic length.  Fortunately, the matrix notation also provides an explanation.  The right vector might have 50 terms, but a given proof might use only 2 of them.  There has to be more than 1, since a single unmodified input would only lead to a tautology.  So the matrix would have 50 columns, 48 of which would have an entry of 0 for that proof.  Effectively, every theorem would have an algebraic length of 50, but most of the terms would be unused.  It is inefficient but causes no mathematical ambiguity and so is fully acceptable.  The number of inputs and the logic length of a theorem are not proportional, but by analogy one could imagine the different logic lengths written in parallel like vectors.  The longest logic length is then taken as the norm, and the others are boosted to the same length by essentially appending ba-blah ba-blah ba-blah to each of the shorter theorems until the written logic lengths are equal.  It is the mathematical equivalent of adding junk DNA.  Again, it is inefficient but causes no mathematical difficulties.  It is the equivalent of having a shorter man wear platform shoes or stand on a step ladder to appear the same height as a taller man.
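   As a toy illustration of that padding, assuming each proof is stored as a list of statements (the proof names and steps below are entirely hypothetical):

```python
# Every proof is padded out to the length of the longest one, the filler steps
# carrying no logic content (the "junk DNA").
proofs = {"T_a": ["s1", "s2", "s3", "s4", "s5"],
          "T_b": ["s1", "s2"],
          "T_c": ["s1", "s2", "s3"]}
norm = max(len(steps) for steps in proofs.values())
padded = {t: steps + ["ba-blah"] * (norm - len(steps)) for t, steps in proofs.items()}
print(padded["T_b"])   # ['s1', 's2', 'ba-blah', 'ba-blah', 'ba-blah']
```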
   For the second theorem level, T2, the set definition is that every theorem in T2 must have as predecessors only the postulates, assumptions and theorems of T1.  Similar definitions extend to T3, T4, etc.  In theory the process can be repeated endlessly, although the vectors and matrices would become enormous and unwieldy.
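   A small sketch of that matrix bookkeeping, under the assumption that each theorem's row in a 0/1 matrix marks which earlier items it combines; the items and entries are invented for illustration.  T1 falls out as the rows whose marked columns are all at level 0, T2 as the rows whose marked columns are at levels 0-1, and so on.

```python
# Rows are theorems, columns are the items each one draws on (mostly zeros, as
# noted above).  Everything here is a made-up example.
items = ["P1", "P2", "A1", "T_a", "T_b", "T_c"]
uses = {
    "T_a": [1, 1, 0, 0, 0, 0],   # T_a combines P1 and P2
    "T_b": [0, 1, 1, 0, 0, 0],   # T_b combines P2 and A1
    "T_c": [0, 0, 1, 1, 0, 0],   # T_c combines A1 and T_a
}
level = {"P1": 0, "P2": 0, "A1": 0}          # postulates and assumptions at level 0
current = 1
while len(level) < len(items):
    placed = [t for t, row in uses.items()
              if t not in level
              and all(level.get(items[j], current) < current
                      for j, used in enumerate(row) if used)]
    if not placed:                            # guard against circular dependencies
        break
    for t in placed:
        level[t] = current
    print(f"T{current}:", placed)             # T1: ['T_a', 'T_b']   T2: ['T_c']
    current += 1
```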
   The reason theorems can be freely combined is that they all have the equal value of being true; in physics this would be an equipotential surface.  This surface could be taken as having the value 0.  This leads to the symbolically nice result that any addition of theorems, since they are all true, will also produce a true result, or symbolically 0 + 0 + 0 = 0.  Within a given theorem there are definition statements and logic statements; since all the logic statements must be true, this yields 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 = 0.  All the statements are true and the result is true.
   A theorem can be considered to originate on the plane of truth, the 0 plane, at 0+, with its result expressed at 0- (see above).  At 0+ all the theorems must overlap, since they can be taken in any grouping.  That means 0+ must be degenerate, collapsing to a single point at the origin.  0- can be given a different end point for each theorem.  In order to write the map, the degeneracy is best ignored and the theorems unspooled, counting each passage through the origin as an increase of 1 in the counter for theorem levels.
   There is the question of what happens when one moves off the plane of truth, the 0 plane.  It might be fuzzy logic.
   Each logic statement in a theorem is true, the theorem is true, and the Big Theorem associated with the theorem is true.  So what is the actual definition of a theorem?  It would appear to be the logic length between branching points: the distance from one statement strong enough to support branching to the next statement in that logic string capable of supporting branching (see right).  The v's indicate theorem steps and the horizontal lines mark points where theorems join to form new theorems.  This is not displayed in theorem-level steps; instead, the theorems are interleaved.  It is more compact and efficient but not necessarily as easy to follow as proof steps.  There is no need to consider the theorems as independent; they are written that way only because the human mind needs to break things down into small chunks.  The larger, sprawling, continuous proof structure actually makes more sense.
   The theorems would be written as the theorem level number, then a library number for the theorem, and then additional identifiers by type; e.g. 5.0354.06.09 would be a theorem at the 5th level with a reference number of 0354, which might be the order in which it was entered, followed by two group identifiers, 06 and 09, which could stand for algebra and number theory.  The library number could be either specific to theorem level 5 or run across all proofs at all levels.  Postulates and assumptions would be assigned level 0.
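   A small sketch of composing and parsing such identifiers; the field widths are taken from the example above rather than any fixed standard.

```python
# Identifiers of the form level . library number . group identifiers,
# e.g. 5.0354.06.09.
def make_id(level, library, groups):
    return ".".join([str(level), f"{library:04d}"] + [f"{g:02d}" for g in groups])

def parse_id(ident):
    level, library, *groups = ident.split(".")
    return int(level), int(library), [int(g) for g in groups]

print(make_id(5, 354, [6, 9]))      # 5.0354.06.09
print(parse_id("5.0354.06.09"))     # (5, 354, [6, 9])
```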
   Initially, each theorem would be assigned an arbitrarily high proof level, such as 10 000, to hold and position it temporarily.  When a given theorem is identified as an input to another theorem it would be assigned level 9 999, and the declining levels would continue until a theorem is reached that has only postulates and assumptions as predecessors.  That theorem would then be assigned level 1, and the logic string to which it is attached would be renumbered accordingly in sequence.
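   One possible reading of that provisional numbering, sketched in code: the theorem of interest is parked at 10 000, each named input is parked one level lower, and once a chain bottoms out on postulates and assumptions the whole string is shifted so the bottom theorem sits at level 1.  The shifting rule is my own assumption about how the renumbering would work.

```python
# Park the target at 10 000, park each input one level lower, then renumber the
# chain so the theorem resting only on postulates/assumptions lands at level 1.
def provisional_levels(target, predecessors, start=10_000):
    prov, frontier = {target: start}, [target]
    while frontier:
        t = frontier.pop()
        for dep in predecessors.get(t, set()):
            if predecessors.get(dep):                      # a theorem, not a postulate
                if prov.get(dep, start + 1) > prov[t] - 1:
                    prov[dep] = prov[t] - 1
                    frontier.append(dep)
    shift = 1 - min(prov.values())                         # renumber so the bottom is 1
    return {t: lvl + shift for t, lvl in prov.items()}

deps = {"P1": set(), "P2": set(), "T_a": {"P1", "P2"}, "T_b": {"T_a", "P2"}}
print(provisional_levels("T_b", deps))   # {'T_b': 2, 'T_a': 1}
```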
   Using a limited logic vocabulary and specifying tight protocols for theorem writing could allow for self-assembly: someone writes the proof in the standard form, indicating the necessary antecedents, and those antecedents are automatically connected when they are entered.  This allows the map to be built in pieces and then machine-assembled, similar to DNA sequencing.  The entire map would exist only in digital storage.
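   A minimal sketch of such a self-assembling registry: each proof is entered together with the identifiers of its antecedents, and the links resolve automatically as entries arrive.  The class and field names are illustrative only.

```python
# Every entry declares its antecedents; the connections exist as soon as both
# ends have been entered.
class ProofMap:
    def __init__(self):
        self.antecedents = {}                      # entry id -> declared antecedent ids

    def enter(self, ident, antecedents):
        self.antecedents[ident] = list(antecedents)
        missing = [a for a in antecedents if a not in self.antecedents]
        print(ident, "entered; still waiting on", missing or "nothing")

m = ProofMap()
m.enter("1.0001", ["P.0001", "P.0002"])   # a theorem entered before its antecedents
m.enter("P.0001", [])                     # the links complete themselves as pieces arrive
m.enter("P.0002", [])
```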
   If a theorem has more than one proof, it would exist at more than one location, e.g. 5.0354, 6.1042, 8.0017.  They would be linked in the database: 5.0354 = 6.1042 = 8.0017.  For further development, I believe the correct listing is the one with the lowest proof level.  I do not think that causes any ambiguity.
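   The linking and the choice of canonical entry are simple to express; a tiny sketch, assuming identifiers of the level.library form above:

```python
# The same theorem proved three ways lives at three locations; the lowest proof
# level is taken as the canonical listing.
locations = ["5.0354", "6.1042", "8.0017"]
canonical = min(locations, key=lambda loc: int(loc.split(".")[0]))
print(" = ".join(locations), "->", canonical)   # 5.0354 = 6.1042 = 8.0017 -> 5.0354
```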
  The mapping allows users to move around and see the connections between theorems.  It might be useful in teaching, to demonstrate the most important points in developing proofs.  It might also show regions with a low density of proofs, or places where theorems are close in some topological sense, suggesting where a bridging theorem might be useful.  The other important reason is completeness: this is actually the physical structure of mathematics.  Mathematical theorems can be viewed as a Tinkertoy set, with the theorems as connectors and the logic as rods.  Alternatively, the theorems can be viewed as streams of logic statements which connect at branching points.  Either description is preferable to the current structure of unconnected random thoughts floating in some undefined protoplasm.
  The Map would allow for a new field of Mathematical Topology, the study of mathematics itself as a topological structure.
   Mathematics is being done backwards because the structure, the mapping, should be laid out before proofs are added; currently proofs are created without any structure to their format, as though they were disembodied random thoughts.
