F-35 shown obsolete on previous post
The best place to hide nuclear missiles for protection is to put them on submarines in Lake Superior. That would require an agreement with Canada, which owns roughly half of that lake. The second-best option is to place them in Lake Michigan, which is wholly American.
The sketch at right shows a submarine hull with missile launch tubes. The current missiles have a range of 6,500 nautical miles according to Jane's. That is a range of about 7,500 statute miles or 12,000 kilometers. A range of 9,000 statute miles would allow all targets except Australia, New Zealand and a large part of Indonesia to be reached. Increasing the range could be achieved by reducing the warheads to 2 or 3 and increasing the fuel in the last stage by the amount of weight saved. If that does not produce the 9,000 miles, a new, bigger missile would have to be designed.
To allow the submarine to remain submerged, a nuclear reactor is one possibility, but nuclear reactors in Lake Superior are politically impossible. The other option is hydrogen-oxygen fuel cells.
The missiles would require a hull of probably at least 15 meters in height. Allowing 5 pressure tanks for gas on each side, for a total of ten, and since water is H2O (two volumes of hydrogen for each volume of oxygen), that would allow 6 hydrogen and 3 oxygen tanks. Liquid hydrogen has a density of about 70 kilograms per cubic meter. A standard volume of gas is 22.4 liters, or about 45 standard volumes per cubic meter; hydrogen has a molecular weight of 2, so that yields about 90 grams per cubic meter at standard temperature and pressure. Standard industrial gas cylinders are routinely pressurized to 400 atmospheres, so 500 atmospheres should be obtainable. That would be 90 × 500 / 1000 = 45 kilograms of hydrogen per cubic meter. Allowing each cylinder to be 3 meters in diameter (3 × 5 = 15 meters, the height of the hull) and 50 meters long, each would hold 3 × 3 / 4 × pi × 50, or about 350 cubic meters. For 6 cylinders, that would be roughly 2,100 cubic meters total and a hydrogen weight of at least 90,000 kilograms. The energy content of hydrogen is about 141 megajoules per kilogram; dividing by 3,600 seconds per hour and by 1,000 to convert to kilowatts yields 39 kilowatt-hours per kilogram. Allowing even 60% efficiency for the fuel cells gives about 24 kilowatt-hours per kilogram, or (× 90,000) a total of roughly 2,200,000 kilowatt-hours. Allowing 700,000 kilowatt-hours for movement, that would be about 930,000 horsepower-hours, or 460 hours at 2,000 horsepower, 230 hours out and 230 hours back.
If the submarine deploys for 100 days, the remaining energy would allow roughly 600 kilowatts of continuous usage; a typical American home uses about 1 kilowatt. The submarine could therefore deploy for a considerably longer time.
The tenth cylinder could be half potable water and half sewage tank, although the fuel cell would produce even more water. Of its 350 cubic meters, at least 160 could be potable water, or 160,000 liters. If the crew has 25 members using 40 liters per person per day, that would be 1,000 liters per day, or enough water for 160 days. Usage could probably be reduced to 20 liters a day: 4 for drinking, 6 for a very efficient lather-and-rinse shower, and some for cooking and washing clothes, with that water reused to flush toilets.
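A quick sketch of the arithmetic in the last few paragraphs, written as a short calculation. The tank dimensions, pressures and per-person water figures are the assumptions stated above, not engineering data.

```python
import math

# Assumed figures from the text above (not engineering data).
CYL_DIAMETER_M = 3.0          # cylinder diameter
CYL_LENGTH_M = 50.0           # cylinder length
N_H2_CYLINDERS = 6            # hydrogen cylinders
PRESSURE_ATM = 500            # storage pressure
H2_STP_G_PER_M3 = 90          # ~45 mol/m3 times 2 g/mol at STP
H2_ENERGY_MJ_PER_KG = 141     # higher heating value of hydrogen
FUEL_CELL_EFFICIENCY = 0.60

cyl_volume = math.pi * (CYL_DIAMETER_M / 2) ** 2 * CYL_LENGTH_M   # ~353 m^3 each
h2_mass_kg = (N_H2_CYLINDERS * cyl_volume                         # total H2 volume
              * H2_STP_G_PER_M3 * PRESSURE_ATM / 1000)            # ~95,000 kg

kwh_per_kg = H2_ENERGY_MJ_PER_KG / 3.6 * FUEL_CELL_EFFICIENCY     # ~23.5 kWh/kg
total_kwh = h2_mass_kg * kwh_per_kg                               # ~2.2 million kWh

transit_kwh = 700_000
transit_hours = transit_kwh / 0.746 / 2000                        # hours at 2,000 hp
hotel_kw = (total_kwh - transit_kwh) / (100 * 24)                 # 100-day deployment

water_liters = 160_000
water_days = water_liters / (25 * 40)                             # 25 crew at 40 L/day

print(f"H2 stored:          {h2_mass_kg:,.0f} kg")
print(f"Total energy:       {total_kwh:,.0f} kWh")
print(f"Transit at 2000 hp: {transit_hours:,.0f} h")
print(f"Hotel load:         {hotel_kw:,.0f} kW over 100 days")
print(f"Potable water:      {water_days:.0f} days")
```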
The submarine at right would not be a thing of beauty. It would be designed to travel to a given location and then settle on the bottom for 2 to 4 weeks. At the end of that time it would rise up and sail back to port. The crew would spend a total of maybe 4 months away from their families, but in short periods of 2 to 4 weeks. At least one other crew would operate the submarine, and the rest of the year might be for maintenance. Illustrated are 6 rows of missile tubes at the front of the sub and an area at the back with the crew on the upper level and the fuel cell, motors and mechanical equipment on a lower level. To the rear of the sub could be attached an escape capsule for the crew in case of casualty.
Lake Superior covers about 32,000 square miles, roughly 80,000 square kilometers. Lake Michigan is about 22,000 square miles. Both of them are too large to be targeted with nuclear missiles, and they can also be placed under fairly high security watch. It may require that no one can use sonar in the lake without permission, to prevent accidental discovery; that would include fish finders, which would produce an uproar. The deepest point in Lake Superior is about 1,300 feet, roughly 400 meters, and a pressure hull can withstand that pressure.
From what I have read, Lake Superior rarely freezes over completely, so ice should not interfere with missile launches.
Saturday, February 18, 2012
Thursday, February 9, 2012
Research
F-35 shown obsolete on previous post
The system of giving research grants for a particular project is counterproductive; it undermines how research is actually done.
Research involves coming up with ideas and then evaluating them; making a commitment to one course of study in advance does not help the situation. It makes far more sense to give a grant over time, 3 years would be good, to an individual who shows promise in doing research. At the end of the 3 years the individual's work would be evaluated, and any additional grant would either be cancelled, continued or increased. The evaluation would be a peer-reviewed process. There is a danger of a buddy system forming, with the evaluators all helping each other, but it is still the best possible system. The only defense against it becoming corrupted is to audit the process; the difficulty there is that the only people who can do an effective audit are the same people who do the peer reviews. That means, ultimately, that there has to be a hope of finding enough qualified reviewers to dilute friendships, and a hope that they are interested enough in integrity to report suspicious clusters of mutual rewards.
The current system is to grant money for a specific project. There are several problems with this. The project may turn out to have little chance of success, in which case the researcher can either give the money back, and lose the ability to hold a research staff together, or keep doing the research even knowing it will be a failure. There is also the fact that it defies how people think. A researcher may start doing some work and realize a side issue may be more interesting; under the current system he risks jail if he diverts money to something which might be more productive. Another failure is that it prevents researchers from pooling money and dividing work. Granting money over a period of time obviates these problems. The researcher has the freedom to pursue any item of interest, although it might not produce a renewed grant.
The goal of steering grants to areas considered to be of high interest can be achieved by allocating the renewal grants in blocks. For instance, in medical research, 50% of the renewal grants could be designated for those who did the best cancer-related research. That would motivate researchers to work on cancer, since all of the other research areas would be competing for the other 50% of the money. The odds of being renewed for non-cancer research would be lower than for cancer research, motivating researchers to try to do cancer research. However, they would be free to pursue any interesting work and still have a chance of renewal. In the long run, it would produce better overall research.
The money itself needs to be traceable to prevent theft, so all disbursements would have to be issued by a control officer and be clearly identified as to the recipient.
Sunday, February 5, 2012
Universal Map of Mathematics
F-35 shown obsolete on previous post
Mathematics has been done backwards.
Mathematics is supposed to be a logical structure based upon postulates, things taken to be true (two points determine a line), and assumptions, limitations on the items (the triangle is isosceles). The logic and structure are supposed to be the framework upon which the proofs are built, but currently proofs are treated as disconnected globules of thought. Each proof is initiated, the proof steps done, and then the conclusion written without any consideration of connections to other proofs. The formal structure is actually disregarded.
In the simplest mapping, each proof is taken as a node, functioning as an input-output box. At right is a simplified schematic: the initial row is the postulates and assumptions, P & A, and each row higher shows the interconnections leading to each theorem, T. Each successive row is numbered one higher than the highest-numbered row of its predecessor theorems; each theorem in row 3 has at least one predecessor in row 2.
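As a minimal sketch of this node view, the row rule can be written out directly: postulates and assumptions sit in row 0 and each theorem sits one row above its highest predecessor. The proof names and dependencies below are invented for illustration.

```python
from functools import lru_cache

# Toy dependency map: each theorem lists its direct predecessors.
# P* and A* are postulates/assumptions (row 0); the theorem names are invented.
predecessors = {
    "T_a": ["P1", "A1"],          # only row-0 inputs, so row 1
    "T_b": ["P2"],                # row 1
    "T_c": ["T_a", "P2"],         # highest predecessor is in row 1, so row 2
    "T_d": ["T_c", "T_b", "A1"],  # highest predecessor is in row 2, so row 3
}

@lru_cache(maxsize=None)
def row(node):
    """Postulates/assumptions are row 0; a theorem sits one row above its highest predecessor."""
    if node not in predecessors:
        return 0
    return 1 + max(row(p) for p in predecessors[node])

for theorem in predecessors:
    print(theorem, "-> row", row(theorem))
```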
In order to construct this map the necessary inputs would have to be clearly identified for each theorem. That means that the theorem conclusion would also have to be clearly identifiable. In order to facilitate this it would be best to start with a minimum logic vocabulary for mathematics, the minimum number of logic words necessary to express all mathematic proofs.
To get an estimate of the size of the minimum vocabulary, a large number of proofs, maybe 1,000, could be read through a word-counter program, each new word being added with a higher index number. The first time the word "the" is recorded it would add 1 to the index; any subsequent reading of "the" would be ignored. A listing would also be made of each different word. This would be a preliminary estimate; the words "is" and "are" would be recorded separately but are logically one word. The total number of logic words is probably well under 1,000. There is a difference between the minimum and the minimum useful words. It may occur that certain strings of words occur frequently; rather than repeatedly writing the full string, it would be simpler and better to choose a substitute word to replace the string.
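A rough sketch of such a counter, assuming the proofs are available as plain-text files; the file names are placeholders.

```python
import re
from collections import Counter

def vocabulary(proof_texts):
    """Index each distinct word across a collection of proof texts.

    A word gets the next index number the first time it appears;
    repeats do not advance the index but are still counted.
    """
    index = {}          # word -> first-seen index
    counts = Counter()  # word -> number of occurrences
    for text in proof_texts:
        for word in re.findall(r"[a-z']+", text.lower()):
            counts[word] += 1
            if word not in index:
                index[word] = len(index) + 1
    return index, counts

# Placeholder usage: the file names are invented for illustration.
texts = []
for name in ["proof_001.txt", "proof_002.txt"]:
    with open(name, encoding="utf-8") as f:
        texts.append(f.read())

index, counts = vocabulary(texts)
print("distinct words so far:", len(index))
print("most common:", counts.most_common(10))
```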
It might be best to write the logic words without using any ordinary language, to reduce ambiguity. The words could be written similarly to Chinese, compiling each from a list of symbols to create a unique character for each word. Other options are to write the words as members of a numbered list (word 0531) or as a cluster of letters; nfw could be the word for an impossible statement, short for 'no way'.
Contained within this model, since each theorem step is treated as equal, is the assumption that every theorem has the same logic length, which does not seem correct or sensible.
The basic concept of this process is that a theorem is a transform performed on postulates; theorems take the postulates from their initial state into the forms created by the theorems. Instead of initiating a proof, forming it, concluding it and then initiating the next proof, the transformation would be continuous through the proofs. The way math is currently done is the equivalent of treating the postulates as buckets of water: the postulate is poured into the pipe representing the first theorem, collected into a second bucket, and then poured into the next theorem pipe. The model I am suggesting is to connect the pipes and let the postulates flow through them uninterrupted. The postulates are like little paper boats that flow down the theorem pathways under the influence of mathematical proof. Equally bad analogies are electricity flowing through wires and switches, or light flowing through waveguides, in which case additional assumptions can be considered to change the light's polarity. In any case the point is that there is a continuous flow of mathematical logic and not discrete lumps of disconnected proof.
There is another implication to this, that for every theorem there is associated a Big Theorem. As the illustration at right attempts to show, the Big Theorem is the totality of all proofs, postulates and assumptions needed to arrive at the theorem result. It includes all predecessor theorems. It is the total transform of the postulates to the theorem result.
The Big Theorem, when written out, is inefficient, since most of it is repeated in the Big Theorems of other theorems. A large part of the logic block would have to be rewritten for each theorem.
Another way of writing the theorem structure is to use matrix notation. The diagram at right illustrates this process. The postulates and assumptions are listed as a right vector and the proof process is written as a matrix.
The process of mathematical proof can be considered a form of logical addition: the logic content of postulate A or theorem B is added to theorem C under assumption D. It is the combining of different logic statements which should be emphasized. Thus the postulates and assumptions are combined as specified by the matrix to form the first set of theorems, T1. In set notation, it is the set of all theorems whose predecessors are only postulates and assumptions, with no theorems among them. It is a well-defined set and therefore mathematically valid.
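As a minimal sketch of what this matrix form might look like (the entries here are invented for illustration): a 1 in the matrix means that postulate or assumption enters the proof, a 0 means it does not, and the "multiplication" is read as the logical addition described above, written here as ⊕.

```latex
\begin{pmatrix} T_1^{(1)} \\[2pt] T_1^{(2)} \end{pmatrix}
=
\begin{pmatrix}
1 & 0 & 1 & 0 \\
0 & 1 & 1 & 1
\end{pmatrix}
\begin{pmatrix} P_1 \\ P_2 \\ A_1 \\ A_2 \end{pmatrix}
\quad\Longrightarrow\quad
T_1^{(1)} = P_1 \oplus A_1, \qquad
T_1^{(2)} = P_2 \oplus A_1 \oplus A_2
```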
With this matrix notation the representation of each theorem having the same logic length again arises. Since all the theorems are contained in a single vector, that would again indicate an equal logic length. Fortunately, the matrix notation also provides an explanation. In the right vector there might be 50 terms, but for a given proof only 2 might be used. There has to be more than 1, since a single unmodified input would lead to a tautology. So the matrix would have 50 columns, 48 of which would have an entry of 0. Effectively, all of the theorems would have an algebraic length of 50, but many of the terms would be unused. It is inefficient but causes no mathematical ambiguity and so is fully acceptable. The number of inputs and the logic length of each theorem are not proportional, but by analogy one could imagine the different logic lengths written in parallel like vectors. The longest logic length is then taken as the norm, and the others are padded to the same length by essentially adding ba-blah ba-blah ba-blah ba-blah to each of the other theorems until the written logic lengths are the same. It is the mathematical equivalent of adding junk DNA. Again, it is inefficient but causes no mathematical difficulties. It is the equivalent of having a shorter man wear platform shoes or stand on a step ladder to appear to be the same height as a taller man.
For the second theorem level, T2, the set definition is that all of the theorems in T2 must have as predecessors only the postulates, assumptions and theorems of T1. Similar definitions can be extended for T3, T4, etc. The process in theory can be endlessly repeated, although the vectors and matrices would become enormous and unwieldy.
The reason why theorems can be freely combined is that they have the equal value of being true; in physics this would represent an equipotential surface. This surface could be taken as having value 0. This leads to the symbolically nice result that any addition of theorems, since they are all true, will also produce a true result, or symbolically 0 + 0 + 0 = 0. Within a given theorem there are definition statements and logic statements; since all the logic statements must be true, this yields 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 = 0. All the statements are true and the result is true.
A theorem can be considered to originate on the plane of truth, the 0 plane, at 0+, and its result would be expressed at 0-, see above. At 0+ all the theorems must overlap, since they can be taken in any grouping. That means that 0+ must be degenerate to a single point at the origin. 0- could be expressed with a different end point for each theorem. In order to write the map, the degeneracy is best ignored and the theorems unspooled, counting each passage through the origin as an increase of 1 in the counter for theorem levels.
There is the question of what happens when one moves off the plane of truth, 0 plane. It might be fuzzy logic.
Each logic statement in a theorem is true, the theorem is true, and the Big Theorem associated with the theorem is true. So what is the actual definition of a theorem? It would appear to be the logic length between branching points, the distance from one statement strong enough to support branching to the next statement in that logic string capable of supporting branching, see right. The v's indicate theorem steps and the horizontal lines the points where theorems join to form new theorems. This is not displayed by theorem-level steps; instead, the theorems are interleaved. It is more compact and efficient but not necessarily as easy to follow as proof steps. There is no need to consider the theorems as independent; they are written that way only because the human mind needs to break things down into small chunks. The larger, sprawling, continuous proof structure actually makes more sense.
The theorems would be written as the theorem level number, then a library number for the theorem, and then additional identifiers by type; e.g. 5.0354.06.09 would be a theorem at the 5th level with a reference number of 0354, which might be the order in which it was entered, and then two group identifiers, 06 and 09, which could be algebra and number theory. The library number could be kept either per theorem level (here 5) or across all proofs at all levels. Postulates and assumptions would be assigned level 0.
Initially, each theorem would be assigned an arbitrarily high proof level, such as 10,000, to temporarily hold and position it. When a given theorem is identified as an input to another theorem it would be assigned level 9,999. The declining theorem levels would continue until a theorem has only postulates and assumptions as predecessors; it would then be assigned level 1, and the logic string to which it is attached would be renumbered accordingly in sequence.
Using a limited logic vocabulary and specifying tight protocols on theorem writing could allow for self-assembly: someone writes the proof form indicating the necessary antecedents, and the antecedents would be automatically connected when they are entered. This allows the map to be built in bits and then machine-assembled, similar to DNA sequencing. The entire map would exist only as digital storage.
If a theorem has more than one proof, it would exist at more than one location, e.g. 5.0354, 6.1042, 8.0017. They would be linked in the database: 5.0354 = 6.1042 = 8.0017. For further development, I believe the correct listing is the lowest proof level. I do not think that causes any ambiguity.
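A small sketch of how these identifiers and proof links might be stored. The numbering scheme follows the text; the specific identifiers are the invented examples above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TheoremID:
    level: int          # proof level (0 = postulate or assumption)
    library: int        # library/reference number
    groups: tuple       # type identifiers, e.g. (6, 9) for algebra, number theory

    @classmethod
    def parse(cls, text):
        """Parse an identifier like '5.0354.06.09'."""
        parts = text.split(".")
        return cls(int(parts[0]), int(parts[1]), tuple(int(p) for p in parts[2:]))

    def __str__(self):
        return f"{self.level}.{self.library:04d}" + "".join(f".{g:02d}" for g in self.groups)

# Alternate proofs of the same theorem are linked; the canonical entry
# is the one at the lowest proof level, as suggested in the text.
alternate_proofs = [TheoremID.parse(s) for s in ("5.0354.06.09", "6.1042.06.09", "8.0017.06.09")]
canonical = min(alternate_proofs, key=lambda t: t.level)
print("canonical listing:", canonical)   # 5.0354.06.09
```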
The mapping allows users to move around and see the connections between theorems. It might be useful in teaching, to demonstrate the most important points in developing proofs. It might also show places with a low density of proofs, or where theorems are close in some topological manner, suggesting where a bridging theorem might be useful. The other important reason is completeness: this is actually the physical structure of mathematics. Mathematical theorems can be viewed as a Tinkertoy set with the theorems as connectors and the logic as rods. Alternatively, the theorems can be viewed as streams of logic statements which connect at branching points. Either description is preferable to the current structure of non-connected random thoughts floating in some undefined protoplasm.
The Map would allow for a new field of Mathematical Topology, the study of mathematics itself as a topological structure.
Mathematics is being done backwards because the structure, the mapping, should be built before proofs are added; currently proofs are created without any structure to their format, as though they were disembodied random thoughts.
Saturday, February 4, 2012
Universal Map of Mathematics II
F-35 shown obsolete on past post
The post Universal Map of Mathematics should be read first.
Now, we will go into bad science fiction.
Under the mapping, each theorem represents a quasi-vector: its origin is the theorem designators of its predecessors, say 5.0103, 4.0658, 3.1057, and its termination is the theorem designator 6.0098. The goal would be to treat that as an actual vector.
The triple point of the origin means that the theorem space would have to fold into itself for the 3 points to coincide. This also leads to the conclusion that folds create new theorems, and to the possibility of arbitrarily creating new theorems by random folds, as long as there is no internal contradiction between the source theorems. The other result is that an extension of the outer surface of the mathematical topology would also constitute a new theorem. The analogy is the vectors (1,1,1) and (2,0,1); by inspection the vector (-1,-1,2) is normal, perpendicular, to both of them. If 2 vectors are normal, the sum of the products of their respective coordinates, x, y, z, will be 0. In this case, 1×(-1) + 1×(-1) + 1×2 = -1 - 1 + 2 = 0 and 2×(-1) + 0×(-1) + 1×2 = -2 + 0 + 2 = 0, so the third vector is normal to both of the others.
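A minimal check of the dot-product arithmetic above; the vectors are the ones in the text.

```python
def dot(u, v):
    """Dot product; two vectors are normal (perpendicular) when this is 0."""
    return sum(a * b for a, b in zip(u, v))

u, v, n = (1, 1, 1), (2, 0, 1), (-1, -1, 2)
print(dot(u, n))   # 0, so n is normal to u
print(dot(v, n))   # 0, so n is normal to v
```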
If the theorems can be treated as vectors, then outer normals to the surface of the mathematical topology would automatically constitute new mathematics. As the diagram above shows, the new vectors exist if a component of them is normal to the surface or if they are entirely normal to the surface.
For them to be vectors, they must have a well-defined magnitude and direction. Neither is easy to define, and may not even be possible to define, in this case.
The designators of the theorems, .06, .09, would have to be given actual spatial meaning in some coherent form, such as at right. Minimally, target areas of mathematical definition would have to be formed; to be fully rigorous there would have to be a specific point indicating each field and a meaning assigned to scaling off between those points.
There is a related problem of the representation of the theorems themselves. If the theorem is designated by level and position related to meaning, 5th level, number theory, there could be a need to add another dimension to allow for multiple proofs at that point. In this additional dimension they could be labeled consecutively out from the indicator point. But even then, the theorem must connect with multiple other theorems and these connections are probably best handled by a pole in yet another dimension, the sections of the pole corresponding with the different theorem connections and being extended with each additional connection.
The connections could be treated as rubber strings which stretch from pole to pole. The location of the conclusion of the joining of 2 or more theorems would have to remain constant under reciprocal relations: the theorem joining 5.0603 with 4.1053 would have to have the same destination as the one joining 4.1053 with 5.0603. Each would create an angle measured from the connector aimed towards the target space, and the corresponding angle and distance would have to land at the same target.
This leads to another related problem: the postulates and assumptions would have to be assigned meaningful locations in an initiating plane.
I believe the only way to do this is through statistical means. The initial postulates and assumptions would be used to create the theorems T1. Those theorems would be assigned descriptors of type, such as number theory, and a provisional location point. Using a large number of theorems, the postulates and assumptions would be arranged to minimize the error of theorems missing their designated target points, assuming that distances between postulates are treated as vectors and reciprocity is required. A second statistical pass would then be made using the assigned locations of the postulates, and the target points would be adjusted by the outcome of the postulate vectors. In a few passes back and forth, a most likely location for the postulates and the target points could be determined. Going to theorem level T2, the accuracy would decline and might have to be re-rectified. This would probably have to be repeated at each proof level. It would assign a meaning to each designator, however tenuous it might be.
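A toy sketch of the back-and-forth statistical passes described above, under heavy simplifying assumptions: postulates and theorem targets live in a 2-D plane, a theorem's predicted location is just the average of its predecessors' locations, and the two passes alternately move targets toward predictions and postulates in whatever direction reduces the residual error. The theorem names and dependencies are invented.

```python
import random

# Invented toy data: each theorem lists the postulates it depends on.
theorems = {"T1": ["P1", "P2"], "T2": ["P2", "P3"], "T3": ["P1", "P3"]}
postulates = {p: [random.random(), random.random()] for p in ("P1", "P2", "P3")}
targets = {t: [random.random(), random.random()] for t in theorems}

def predicted(t):
    """Predicted target: average of the positions of the theorem's predecessors."""
    preds = [postulates[p] for p in theorems[t]]
    return [sum(c) / len(preds) for c in zip(*preds)]

for _ in range(20):  # alternate the two statistical passes
    # Pass 1: nudge each target toward the location its predecessors predict.
    for t in theorems:
        pred = predicted(t)
        targets[t] = [0.5 * (a + b) for a, b in zip(targets[t], pred)]
    # Pass 2: nudge each postulate to reduce the residual of the theorems that use it.
    for p in postulates:
        using = [t for t in theorems if p in theorems[t]]
        shift = [0.0, 0.0]
        for t in using:
            pred = predicted(t)
            shift = [s + (a - b) for s, a, b in zip(shift, targets[t], pred)]
        postulates[p] = [c + 0.1 * s / len(using) for c, s in zip(postulates[p], shift)]

err = sum(sum((a - b) ** 2 for a, b in zip(targets[t], predicted(t))) for t in theorems)
print("residual error:", round(err, 6))
```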
This could then allow for an arbitrary selection of theorem points, an arbitrary fold in theorem space, and the ability to approximately predict where the result of a theorem combining those points would land, if the theorem were valid. Which brings up the joke of the mathematician giving driving directions: "No, I didn't say that this road would take you to where you want to go; what I said was, 'If this road goes where you want, then it is the shortest route there.'"
The entire process would produce a probability of the theorem's outcome location, which might be useful. In theory it opens the possibility for machine driven mathematics, computers generating vectors without having to understand the underlying mathematics.