Philosophy of science-the first two and a half millennia.

How does science work? And what's all this about quantum mechanics?

Moderators: AMod, iMod

Post Reply
uwot
Posts: 6093
Joined: Mon Jul 23, 2012 7:21 am

Philosophy of science-the first two and a half millennia.

Post by uwot »

Oops! Managed to delete the whole thread. Oh, here it is again.

Philosophy of science-the first two and a half millennia.

What really matters?

Which of these three propositions do you most agree with?

A scientific theory must be:

A logically coherent explanation.
Supported by evidence.
Useful.

If you are firmly of the opinion that one of them is the defining feature of science, then in philosophical terms you are either a rationalist, an empiricist or a pragmatist. If you happen to be a scientist, then it is likely that your main interest is theoretical, experimental or instrumental. More generally, you might just like to have an idea about how something works, find out how it works or just make it work. When philosophers of science are doing what they are paid for, one of the key things they consider is what blend of the above makes some human activity a science. On the face of it, it shouldn't be all that difficult; there are only three variables, how hard can it be?

Keeping our feet on the ground.

Rather than try and make sense of abstract ideas, it’s probably easier to follow the example of many philosophers of science and look at an example from history. The story of gravity is useful, because it is something that we all experience, it involves all of the above points, and it’s still a mystery. There is a simple story about how ideas about gravity developed up to the 20th century, according to which it’s a hop, skip and a jump from Aristotle to Galileo to Newton; from an explanation to demonstration to a useful equation.

Aristotle

If longevity were any measure, then by far the most successful theory of gravity is Aristotle’s explanation, or rather his two explanations. One of those ideas is based on the behaviour of the Greek ‘elements’: earth, water, air and fire. We all know that stones sink in water; that air bubbles to the surface and flames leap upwards, because we have seen the evidence. In an effort to put the evidence in ‘scientific’ terms, Aristotle explained that it is the ‘nature’ of the different elements to move in particular ways: earth and water move down, air and fire move up. In other words, there is something about the elements that makes them move the way they do, but does calling it ‘nature’ make it science?

Aristotle went on to make a rudimentary mathematical claim: the more of a particular nature an object contains (the more earth or air it is made of), the faster it will fall or rise. In other words, velocity is proportional to mass. This is an hypothesis that can be tested relatively easily, and at a simple level would be useful to know, if true. So does that make it science?

To be more precise, what Aristotle actually believed is that terrestrial elements move towards or away from the centre of the universe, and that the Earth, being mostly earth, was therefore in the middle. Since it had reached its destination, Aristotle argued that it wasn't moving. The evidence he offered for this claim was that when you drop something, it lands at your feet; if the Earth were moving, it should land as far away as the Earth had moved in the time it took to fall. His explanation for why the sun, moon, planets and stars apparently move round the Earth was that they are set in a series of concentric spheres that are spinning around the north and south poles. The spheres are made of aether, the fifth element, which unlike the terrestrial elements moves in circles; Aristotle explained this by claiming that there are only two types of basic motion, linear and circular. All 'random' motion can be described as a combination of those two, and since earth, water, air and fire all move in straight lines, there must be an element that moves in circles. The premises are a bit shaky, but if you accept them, it's a logically coherent explanation. This model was developed by Ptolemy into a mathematical description that was reasonably successful at predicting the positions of the heavenly bodies. So it was supported by the available evidence, and if the position of the stars is important to you, say for divination or religious observance, it's useful.

We now know that practically everything Aristotle said about gravity is wrong, but the explanations taken as a whole make so much sense that for two thousand years, they resisted all challenges. Aristarchus of Samos had argued in the 3rd century BC that the sun is the centre of the universe, an idea that was revived by Copernicus in the 16th century AD. But if the Earth isn't at the centre, then Aristotle's explanation for why stones fall, and that they fall straight down, must be wrong.

Responsibility for the publication of Copernicus' book was handed to Andreas Osiander. He added a preface in which he argued that different explanations can be supported by the same evidence; it doesn't matter to the calculations whether people choose the explanation they find most plausible, or the one they find most useful to work with. As Osiander said, "If they provide a calculus consistent with the observations, that alone is enough," and although Copernicus' mathematics wasn't as developed as Ptolemy's, bits of it were easier to work with. So some mathematicians and astronomers adopted Copernicus' model, not necessarily because they believed the explanation, but because it was useful.

Galileo

Pisa, late in the 16th century: Galileo is at the top of the leaning tower, preparing to drop different sized cannon balls to prove that they fall at the same rate. As it happens, very few people think that Galileo actually performed this experiment; instead he was puzzled by a consequence of Aristotle's belief that heavier things fall faster than light ones. What would happen if two different weights were tied together? According to Aristotle, the heavier weight should fall faster, and since the lighter one will be holding the heavier one up, the string will be pulled tight. Overall, the speed should be something between the speeds of the two different weights. On the other hand, since the two weights are joined, they and the string are effectively one thing with a combined weight, so the combined speed should be faster than either individual speed. Those two outcomes can't both be right, and while Galileo may or may not have dropped weights from a tower, he did do a lot of experiments rolling different weights down slopes that flatly contradicted Aristotle's claim that more weight equals more speed.
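The contradiction Galileo spotted can be put in a few lines. This is just a sketch; the proportionality constant and the weights are arbitrary numbers, chosen only to make Aristotle's 'speed proportional to weight' concrete:

```python
# Aristotle's claim: fall speed is proportional to weight, v = k * m.
k = 1.0                     # arbitrary proportionality constant
heavy, light = 10.0, 5.0    # arbitrary weights

v_heavy = k * heavy
v_light = k * light

# Reading 1: the light weight retards the heavy one, so the tied pair
# falls at some speed strictly between the two individual speeds,
# i.e. slower than the heavy weight alone.
reading1_upper_bound = v_heavy

# Reading 2: tied together, the weights and string are one object of
# combined weight, so the pair falls faster than either weight alone.
reading2 = k * (heavy + light)

# The two readings cannot both hold: reading 2 exceeds reading 1's bound.
assert reading2 > reading1_upper_bound
```

Whatever values you pick for k and the weights, the assertion holds, which is the point: the contradiction is internal to Aristotle's claim, before any experiment is done.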

Then there were the observations Galileo made with his telescope. They didn't rule out the possibility that the Earth was at the centre, but they clearly showed that the universe was not exactly as Aristotle had described it. The moons of Jupiter, for instance, demonstrate that not everything revolves around the Earth, but you could get round that by positing that there are ethereal spheres centred on Jupiter. But then if you keep making up more stuff to explain awkward facts, are you making claims about how the world actually works, or about how your explanation works?


Newton

On 28 November 1660 in London, a group of scientists announced the formation of a "College for the Promoting of Physico-Mathematical Experimental Learning". Hearing of the plan, King Charles II gave his approval and within two years a charter was signed creating the "Royal Society of London". The motto of the Royal Society is 'Nullius in verba', usually translated as 'take nobody's word for it'. In 1660, that still meant Aristotle.

In 1687 the Royal Society published the Philosophiæ Naturalis Principia Mathematica by one of its fellows, Isaac Newton, in which he described his law of universal gravitation. The legend of the apple falling on his head has about as much credibility as Galileo on the Tower of Pisa, but the story goes that this was the event that made him realise that the reason stones fall to the ground is the same reason that moons go around planets and planets go round the Sun. There aren’t two ‘forces of gravity’, there is only one. Physicists like simplicity, and they particularly like unifying forces. Instead of it being the nature of earth to move towards the centre, Newton demonstrated that every particle in the universe is attracted to every other. And you can forget about ethereal spheres.
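The unification can be sketched in a few lines of Python. The numbers are modern measured values, not Newton's, and the function name is mine; the point is just that one inverse-square rule covers both the falling stone and the orbiting Moon:

```python
import math

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24      # mass of the Earth, kg
R_earth = 6.371e6       # Earth's surface radius, m
r_moon = 3.844e8        # mean Earth-Moon distance, m
T_moon = 27.32 * 86400  # sidereal month, s

def grav_accel(M, r):
    """Acceleration toward a mass M at distance r, by the inverse-square law."""
    return G * M / r**2

# The same law gives the stone's fall at the surface...
g_surface = grav_accel(M_earth, R_earth)   # roughly 9.8 m/s^2

# ...and the Moon's 'fall' towards the Earth,
g_moon = grav_accel(M_earth, r_moon)       # roughly 0.0027 m/s^2

# which matches the centripetal acceleration of the Moon's orbit.
v_orbit = 2 * math.pi * r_moon / T_moon
a_centripetal = v_orbit**2 / r_moon        # also roughly 0.0027 m/s^2
```

The two Moon figures agree to about one percent (the remaining discrepancy comes from treating the orbit as a circle about the Earth's centre), which is essentially the check Newton made: no separate celestial mechanism needed.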

While even Newton recognised that a lot of evidence was needed to corroborate his law, as time progressed it became clear that it was extremely successful in accounting for the positions of the known planets, which at the time extended only as far as Saturn. Almost a century later, in 1781, William Herschel recognised that a point of light which earlier astronomers had mistaken for a star was the planet Uranus. By 1845, by which time Uranus had completed most of an orbit, it was clear that it was not behaving as Newton's law demanded. By then, such was the confidence in Newton that mathematicians in Paris and Cambridge began calculating the mass and position of a body that could account for the anomalies. Using the results of Urbain Le Verrier as his guide, Johann Gottfried Galle identified the planet Neptune which, like Uranus, had previously been mistaken for a star. Newton's idea of a 'force of gravity' explained the motion of the planets, it was supported by a wealth of evidence and it had been usefully applied in the discovery of a 'new' planet.

But is the idea of 'a force of gravity' any more informative than Aristotle's appeal to 'nature'? Does 'there is a force' tell us any more than 'there is a nature'? The ink was barely dry on the first edition before people started objecting that Newton had introduced a force without a mechanism; for all the explanatory power of 'the force of gravity', there is no explanation for how gravity works. Much of the challenge came from followers of Rene Descartes. He had also been interested in the movement of the planets, but his main concern was to give a logical explanation of their orbits. This he did by invoking the idea of vortices, according to which space is composed of infinitesimal 'corpuscles' that behave like a fluid. These are swept around the Sun, a little like water is dragged around the plughole, and they in turn pull the planets along with them.

When in 1713 Newton published a second edition, he felt compelled to add an essay called the General Scholium in which he directly challenged the idea of vortices. Newton pointed out that the orbits of comets are too eccentric to fit the model and that they cut across planetary vortices with no apparent effect, “And therefore the celestial spaces, through which the globes of the planets and comets move continually in all directions freely and without any sensible diminution of motion, are devoid of any corporeal fluid…”

Having dismissed that explanation, Newton included a passage known by a phrase that occurs in it, hypotheses non fingo: I make no hypotheses. "But hitherto I have not been able to discover the cause of those properties of gravity from phenomena, and I make no hypotheses. For whatever is not deduced from the phenomena, is to be called an hypothesis; and hypotheses, whether metaphysical or physical, whether of occult qualities or mechanical, have no place in experimental philosophy."

To Newton, the explanation of how something works isn’t essential to science; as long as the mathematical model gave us the power to predict and manipulate our environment, the job of physics was done. As the passage concludes: “And to us it is enough, that gravity does really exist, and act according to the laws which we have explained, and abundantly serves to account for all the motions of the celestial bodies, and of our sea.” The explanation isn’t that important; as Osiander had said, what matters is, can you use it?

Newton could just as easily have called the force of gravity the nature of gravity. The real difference between Aristotle's 'nature' and Newton's 'force' is not so much the logical explanation, it is the abundance of evidence and the quality, and therefore usefulness, of the mathematics. But if being right is a criterion for science, then we'd have to throw out Newton with Aristotle.

Einstein

Uranus was not the only planet that appeared to be breaking Newton's law. In fact Le Verrier had been working on anomalies in Mercury's orbit since 1840; however, when his predictions were tested by observations of the transit of Mercury in 1843, they didn't match. But, with the success of Neptune behind him, Le Verrier returned to the problem of Mercury, again calculating the mass and position of a planet that could explain the behaviour. So confident was he that he even gave it a name: Vulcan. Sharing his optimism, astronomers began looking for the new planet, and some claimed to have found it; one, Edmond Modeste Lescarbault, was even awarded the Légion d'honneur for doing so. But on closer inspection, all the claims proved to be unfounded. There is no planet Vulcan; the Newtonian explanation was not supported by the evidence. Something else was causing the discrepancy.

There is another simple explanation of how Einstein discovered relativity.
In 1865, the Royal Society published James Clerk Maxwell’s A Dynamical Theory of the Electromagnetic Field. In it Maxwell showed that electromagnetism can be described mathematically as a wave that travels through space at the speed of light. But waves, as a rule, require a medium; after all, a wave on the ocean isn’t a wave if there is no ocean. Against the advice of Newton, Maxwell was prepared to offer an explanation: “that light and magnetism are affections of the same substance, and that light is an electromagnetic disturbance propagated through the field according to electromagnetic laws.”

The hypothetical substance, the explanation, became known as the luminiferous aether. Unlike the swirling corpuscular medium proposed by Descartes, this was believed to be static, something that the Earth and all other celestial bodies were moving through; more like a fog than a whirlpool. Given that the speed of light through the aether was calculated to be constant, its speed relative to an observer should vary according to whether the observer is moving towards the source or away from it, much as the relative speed of waves depends on whether you are running into the ocean or onto the beach. Were that so, it should be possible to detect the difference in relative velocity. The most famous experiment was conducted by the American physicists Albert Michelson and Edward Morley in 1887, but they found no such relative motion.

For a while, physicists scratched their heads and produced explanations for how the luminiferous aether could produce the baffling results. Then, in 1905, Albert Einstein put forward his special theory of relativity. Much as Newton had done with Descartes' corpuscles, Einstein jettisoned the luminiferous aether and, again like Newton, hypothesised completely empty space to create a mathematical description that accurately accounts for the evidence.

However, when in 1915 Einstein published his theory of general relativity, he took a different view. A feature of general relativity is that it explains gravity by proposing that, rather than being a vacuum as assumed in special relativity, space is a medium which is warped by the presence of matter. There is no explanation for how matter warps spacetime, any more than there is an hypothesis about how gravity works in Newton's law; 'warped spacetime' is an explanation without an explanation, but again, in order to be useful, that doesn't matter. As the evidence shows, the field equations Einstein deduced are more accurate than Newton's law.


20th century philosophy of science. Popper, Kuhn and Feyerabend.

In 1919, British physicist and astronomer Arthur Eddington led an expedition to Principe, off the coast of Africa, to photograph a total eclipse of the Sun. The aim was to test general relativity by measuring how much the light from stars was bent by the Sun's gravity. The results showed that the deflection was twice what Newtonian gravity could account for, and much closer to Einstein's prediction. When the results were published, they made headlines around the world and turned Einstein into the byword for scientific genius he remains to this day.

At the time, a young Karl Popper was attending the University of Vienna. He was impressed by the fact that general relativity made such definite predictions. It was a bold strategy, because if the evidence didn't support it, the theory would be shown to be wrong. He decided that this was a defining feature of science: a theory could only count as scientific if it could in principle be shown to be wrong; it has to be falsifiable. According to that view, Aristotle's claim that the speed of a falling body is proportional to its mass is scientific, because a simple experiment can determine whether it is true or not. As Galileo had shown, it isn't, but according to Popper it's still scientific, because truth isn't a defining feature of science.

As Popper was developing his idea, some scientists were pointing out that, actually, that's not how scientists work. Ludwik Fleck, a biologist, introduced the idea of a 'thought collective': a group of scientists who share some common theory and working practices, their own 'scientific method', and collaborate to develop that structure to its fullest potential. Michael Polanyi, a professor of chemistry, made a similar point. Science, in his experience, was not an objective method that could simply be prescribed and followed; rather it is scientists putting into practice the philosophy and methods they have been taught by particular scientists. Essentially, once they have been initiated into a thought collective, they contribute to that collective thought. The physicist Max Planck, who like Einstein never fully accepted the interpretations of quantum mechanics of scientists younger than himself, made the observation that "a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it."

A biologist, a chemist and a physicist were all saying that in their professional experience, science did not work as philosophers thought it should: there isn’t one scientific method; there are many. In 1962, Thomas Kuhn published The Structure of Scientific Revolutions which made everyone pay attention to the growing conviction that science is not the pristine objective enterprise that philosophers had been trying to describe.

The structure referred to in the title has three parts. There is a ‘pre-science’ period when there is some feature of the world for which there is no explanation. People speculate and offer suggestions until one comes along in which a sufficient number of scientists see enough potential to commit time and resources to it. If experiments designed in the context of the explanation produce results that match the theory, rather than try to destroy the idea, as Popper recommended, scientists collaborate to enhance the new paradigm. If the paradigm is any good, this can be a very productive period, because the explanation will give the scientists a conceptual framework to explore, which will raise questions that wouldn’t occur outside the paradigm; this “puzzle solving” is what Kuhn called ‘normal science’.

No matter how good the theory underpinning a paradigm is, though, we can never know that some discovery will not undermine it. It happened to Aristotle, it happened to Newton, and no scientist can guarantee it won't happen to our current models. If, or more likely when, that happens, Kuhn argued, there can at first be some tinkering to protect the paradigm; but as the anomalies build up and eventually plunge the paradigm into crisis, a new paradigm will be required, one which can account for everything the old paradigm could explain and the stuff it couldn't, just as general relativity explains behaviour that Newtonian gravity can't.

Paul Feyerabend was one of four people personally thanked by Kuhn in the preface to The Structure of Scientific Revolutions; he had also turned down an offer to be Popper's research assistant, and having started his academic career as a physicist, he was well qualified to make a judgement. As the history of gravity shows, explanation, demonstration and usefulness have all played a critical role at some point, and Feyerabend was concerned that any prescriptive scientific method would have suppressed some part of that history. The only possible prescription for science that could accommodate every stumble and leap is methodological anarchy, or as Feyerabend put it, 'anything goes'. He took the view that by far the most important criterion is that a theory should be useful. It didn't matter to whom, or what for; insisting that there are objective criteria that everyone should agree to is oppressive. In everyday life no one likes being told what to think or do, and scientists are no exception.

Feyerabend gave an insight into what people are like, him in particular: “Having listened to one of my anarchistic sermons, Professor Wigner exclaimed: ‘But surely, you do not read all the manuscripts which people send you, but you must throw most of them into the wastepaper basket.’ I most certainly do. ‘Anything goes’ does not mean that I shall read every single paper that has been written-God forbid!-it means that I make my selection in a highly individual and idiosyncratic way, partly because I can’t be bothered to read what doesn’t interest me-and my interests change from week to week and day to day-partly because I am convinced that humanity and even science will profit from everyone doing their own thing.”

He needn’t have worried; whatever anyone thinks should or shouldn’t qualify as science, the fact is that ‘science’ is done by people, and some of those people are rationalists, some are empiricists and some are pragmatists, and no matter what rules are imposed, people break them.
TimeSeeker
Posts: 2866
Joined: Tue Sep 11, 2018 8:42 am

Re: Philosophy of science-the first two and a half millennia.

Post by TimeSeeker »

Useful. The scale of utility (in increasing order) is explanation, prediction, control ( https://www.stat.berkeley.edu/~aldous/1 ... hmueli.pdf ). Where I suppose "control" can simply be interpreted as "very precise prediction" - a bounded confidence interval. I also regard all theories with exclusively-explanatory utility in the same category as the Bible. Feel-good stories.

I used to side with Feyerabend on this for a long while, until I convinced myself that the scientific method is applied information theory/computational science. And I would even go as far as to say that falsification is a law of physics, given the isomorphism between information theory and statistical mechanics.

I still love Feyerabend's spirit none the less, because while one may (eventually) crystalize their thinking in a computational framework (Plato would be so proud we've re-discovered forms/patterns) - the road there is very much epistemic anarchism and pragmatism!

On paradigm shifts: I think computation is a superset of physics. And I think quantum physicists are starting to see it this way. And statistical modeling is (finally) trickling down even into places like sociology.
thedoc
Posts: 6473
Joined: Thu Aug 30, 2012 4:18 pm

Re: Philosophy of science-the first two and a half millennia.

Post by thedoc »

uwot wrote: Thu Sep 20, 2018 2:23 pm
Which of these three propositions do you most agree with?

A scientific theory must be:

A logically coherent explanation.
Supported by evidence.
Useful.
All of the above.
Dubious
Posts: 4000
Joined: Tue May 19, 2015 7:40 am

Re: Philosophy of science-the first two and a half millennia.

Post by Dubious »

The first two. Whether it's useful or not at the time is immaterial since it may get to be very useful in the future.

Also a logically coherent explanation doesn't have to be true but is more likely to be true if supported by evidence, that being essential.
TimeSeeker
Posts: 2866
Joined: Tue Sep 11, 2018 8:42 am

Re: Philosophy of science-the first two and a half millennia.

Post by TimeSeeker »

Dubious wrote: Fri Sep 21, 2018 4:20 am The first two. Whether it's useful or not at the time is immaterial since it may get to be very useful in the future.
It may also end up never looked at again by humanity. Don't you think relevance and efficiency are important, given that we have finite human minds and finite time to do science with?

We have stockpiled a whole lot of solutions waiting for problems, while the ocean of problems right in front of us remains untackled!
Dubious wrote: Fri Sep 21, 2018 4:20 am Also a logically coherent explanation doesn't have to be true but is more likely to be true if supported by evidence, that being essential.
Having explained Evolution/Natural selection (with far more evidence than necessary), how is it useful asserting it to be true?
uwot
Posts: 6093
Joined: Mon Jul 23, 2012 7:21 am

Re: Philosophy of science-the first two and a half millennia.

Post by uwot »

TimeSeeker wrote: Thu Sep 20, 2018 3:47 pmUseful. The scale of utility (in increasing order) is explanation, prediction, control. Where I suppose "control" can simply be interpreted as "very precise prediction" - a bounded confidence interval.
So what role does a "bounded confidence interval" (assuming it is a synonym for "very precise prediction") play in manipulating our environment, i.e. making it practically useful?
TimeSeeker wrote: Thu Sep 20, 2018 3:47 pmI also regard all theories with exclusively-explanatory utility in the same category as the Bible. Feel-good stories.
It's also what philosophers call, and scientists dismiss as, metaphysics.
TimeSeeker wrote: Thu Sep 20, 2018 3:47 pmI used to side with Feyerabend on this for a long while, until I convinced myself that the scientific method is applied information theory/computational science.
That's one way of looking at it. But again, how is "applied information theory/computational science" actually applied? I understand, at least on some level, how this scientific method is useful, but are the means of gathering the data and, say, making rockets or developing drugs excluded from 'science' on this interpretation?
TimeSeeker wrote: Thu Sep 20, 2018 3:47 pmAnd I would even go as far to say that falsification is a law of physics given the isomorphism between information theory and statistical mechanics.
I will have to take your word about "the isomorphism between information theory and statistical mechanics". But in what way is falsificationism a law of physics?
TimeSeeker wrote: Thu Sep 20, 2018 3:47 pmI still love Feyerabend's spirit none the less, because while one may (eventually) crystalize their thinking in a computational framework (Plato would be so proud we've re-discovered forms/patterns) - the road there is very much epistemic anarchism and pragmatism!
Well, there have always been mathematical realists who believe that maths is discovered, rather than invented.
TimeSeeker wrote: Thu Sep 20, 2018 3:47 pmOn paradigm shifts: I think computation is a superset of physics. And I think quantum physicists are starting to see it this way. And statistical modeling is (finally) trickling down even into places like sociology.
It's certainly true that the amount of data collected, particularly in big projects like CERN and Human Genome Sequencing, is so vast that it can only productively be processed by computers. What role do you think human beings still have in designing algorithms to search for particular patterns, and what criteria do they use?
uwot
Posts: 6093
Joined: Mon Jul 23, 2012 7:21 am

Re: Philosophy of science-the first two and a half millennia.

Post by uwot »

thedoc wrote: Fri Sep 21, 2018 1:35 am
uwot wrote: Thu Sep 20, 2018 2:23 pm
Which of these three propositions do you most agree with?

A scientific theory must be:

A logically coherent explanation.
Supported by evidence.
Useful.
All of the above.
Fair enough, but do you think that, for instance, General Relativity is important because the field equations work, or because it explains gravity as the result of 'warped spacetime', even though that may not be true?
uwot
Posts: 6093
Joined: Mon Jul 23, 2012 7:21 am

Re: Philosophy of science-the first two and a half millennia.

Post by uwot »

Dubious wrote: Fri Sep 21, 2018 4:20 am The first two. Whether it's useful or not at the time is immaterial since it may get to be very useful in the future.

Also a logically coherent explanation doesn't have to be true but is more likely to be true if supported by evidence, that being essential.
I've posted this before, but in it Richard Feynman explains the usefulness of different explanations: "one gives a man different ideas than the other", and that "every theoretical physicist that's any good knows six or seven different theoretical representations for exactly the same physics" which are logically and even mathematically equivalent. But then Feynman was also a Popperian who believed that we can never know that a scientific theory is true; we can only know that it is wrong. Or as he put it: "It doesn't matter how beautiful your theory is, it doesn't matter how smart you are. If it doesn't agree with experiment, it's wrong." So you're in good company.

https://www.youtube.com/watch?v=NM-zWTU7X-k&t=74s
TimeSeeker
Posts: 2866
Joined: Tue Sep 11, 2018 8:42 am

Re: Philosophy of science-the first two and a half millennia.

Post by TimeSeeker »

uwot wrote: Fri Sep 21, 2018 11:02 am So what role does a "bounded confidence interval" (assuming it is a synonym for "very precise prediction") play in manipulating our environment, i.e. making it practically useful?
Let me take a few steps back to make sure we are on the same page. We have to agree on utility and knowledge first. And in particular the distinction between useful and useless knowledge.

The regress problem in epistemology is unsolved. Foundationalism is a flawed strategy. JTB says "justified true belief". Justify your justification: game over. It's all nonsense because there is a meta-problem in epistemology: How does the knower know that (s)he has knowledge?

The knower may have had knowledge, but the world is constantly changing - knowledge becomes stale. Or maybe it was never knowledge to begin with? Language evolves - 'knowledge' can be mis-interpreted/mis-understood/mis-read. What PROPERTIES does knowledge have? What can I DO to verify that I still have knowledge? Note that I reject any such notions of 'objectivity'. Science is a completely subjective endeavor! It's all about the observer. The map is not the territory.

The only way around this is verificationism, which requires an experiment, which makes the scientific method recursive - i.e. algorithmic. If you want to explore this line of reasoning more, Google "Elephants don't play chess".

For now, here is my thought experiment: Tomorrow I may or may not die. This meets the JTB criterion for knowledge. And it's less useful than toilet paper.

An important observation to make is that this statement doesn't tell me "nothing" - quite the opposite. It tells me everything. Every possible thing that could happen to me is covered by "I may or may not die tomorrow".

But if you recognise that the above statement tells you exactly the same thing as a coin - then we could agree that we have an objective epistemic standard for useless knowledge. It predicts nothing OR everything. And so useful knowledge is somewhere in between. Precision. Information.

"Tomorrow I may or may not die" translates into two testable hypotheses:

A: There's a 50% chance of me dying in the next 24 hours.
B: There is also a 50% chance of me not-dying in the next 24 hours.

Flip a coin. And so you have a feedback loop - a mechanism to keep yourself honest that you "know" something.

With a little practice you can turn almost any uncertainty into a bunch of yes/no experiments (1 bit of uncertainty = 1 yes/no question). Will I make it to the airport by 15:30? Will I pass my exams? And so - useful knowledge is predicting better than a coin.
You can improve your knowledge and measure your own progress - by making better and better predictions.

There is no getting away from asking the right question...

And from this point of departure: useful knowledge that predicts with near 100% accuracy IS by definition - control.
Will I be drinking coffee within 30 seconds? Yes! (goes to coffee machine to demonstrate control (and maybe gets stuck in a queue))

Now, while all of the above may be "common sense" - the important point is the isomorphism between hypothesis testing (asking yes/no questions) and bits of information.
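That isomorphism can be sketched numerically. The following is an illustration using Shannon's binary entropy, not something from the post: a fair coin carries exactly 1 bit of uncertainty, and the better your yes/no prediction, the fewer bits are left unresolved.

```python
from math import log2

def binary_entropy(p: float) -> float:
    """Bits of uncertainty left in a yes/no prediction held with probability p."""
    if p in (0.0, 1.0):
        return 0.0  # a certain prediction carries no remaining uncertainty
    return -p * log2(p) - (1 - p) * log2(1 - p)

# A fair coin: maximal uncertainty, 1 bit = 1 yes/no question.
print(binary_entropy(0.5))   # 1.0

# A 95%-confident prediction: most of the bit is already resolved.
print(binary_entropy(0.95))  # roughly 0.29 bits
```

On this measure, "predicting better than a coin" just means driving the remaining entropy below 1 bit.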

I'll get to your other points a little later. My attention needs to go elsewhere presently.
uwot
Posts: 6093
Joined: Mon Jul 23, 2012 7:21 am

Re: Philosophy of science-the first two and a half millennia.

Post by uwot »

TimeSeeker wrote: Fri Sep 21, 2018 12:54 pmIt's all nonsense because there is a meta-problem in epistemology: How does the knower know that (s)he has knowledge?
What difference will that make to her behaviour?
TimeSeeker wrote: Fri Sep 21, 2018 12:54 pmNow, while all of the above may be "common sense" - the important points is the isomorphism between hypothesis testing (asking yes/no questions) and bits of information.
If that means that experimental results=data, which ultimately can be digitised, then I'm not sure why you think I might disagree.
TimeSeeker
Posts: 2866
Joined: Tue Sep 11, 2018 8:42 am

Re: Philosophy of science-the first two and a half millennia.

Post by TimeSeeker »

uwot wrote: Fri Sep 21, 2018 2:13 pm What difference will that make to her behaviour?
Economics/Expected value theory. I consider the probability * cost (time, money, harm, etc.) of error.
For example, a 1 in 20 chance of burnt toast is acceptable to me; a 1 in 20 chance of a parachute not opening isn't.
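Purely as an illustration of that probability-times-cost reasoning (the cost figures and the tolerance threshold are made up for the sketch, not from the post):

```python
def expected_cost(p_error: float, cost_of_error: float) -> float:
    """Expected value of an error: probability of the error times its cost."""
    return p_error * cost_of_error

# Same 1-in-20 error rate, wildly different stakes (costs in arbitrary units):
toast = expected_cost(1 / 20, 1.0)        # burnt toast: trivial cost
parachute = expected_cost(1 / 20, 1e6)    # parachute failure: catastrophic cost

RISK_TOLERANCE = 10.0  # the arbitrary "gut feel" threshold the post mentions

print(toast <= RISK_TOLERANCE)      # True  -> acceptable risk
print(parachute <= RISK_TOLERANCE)  # False -> unacceptable risk
```

The same error probability lands on opposite sides of the threshold once the cost term is included, which is the whole point of the burnt-toast/parachute comparison.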

Bottom line though. It's all best guess with best information available. Until your "gut feel" (arbitrary risk tolerance threshold) tells you that you are comfortable with taking the leap of faith. And if you've made an error... you will get feedback from reality soon enough (falsification).
uwot wrote: Fri Sep 21, 2018 2:13 pm If that means that experimental results=data, which ultimately can be digitised, then I'm not sure why you think I might disagree.
It means experiments are disambiguations. Ambiguity is doubt/uncertainty (at least that's how I parse my emotions).

If you don't disagree - great. Most people object to thinking of themselves as computers. I find it most useful!
Dubious
Posts: 4000
Joined: Tue May 19, 2015 7:40 am

Re: Philosophy of science-the first two and a half millennia.

Post by Dubious »

uwot wrote: Fri Sep 21, 2018 11:24 am
But then Feynman was also a Popperian who believed that we can never know that a scientific theory is true, we can only know that it is wrong; or as he put it: "It doesn't matter how beautiful your theory is, it doesn't matter how smart you are. If it doesn't agree with experiment, it's wrong." So you're in good company.
That idea, it seems to me, is self-evident. Falsification implies negation which is absolute when referring to that whose main or upholding thesis has been disproven.

Conversely, surviving theories which reflect our current conceptions of reality exist within a probability index measured by their ability to blend with observed facts. Though falsification may negate the theory as a whole, it may still hold value in certain of its details which may be transcribed into another theory not unlike an organ transplant. The operating room of science has always been a messy one. A theory may be dead but some of its parts may still be useful elsewhere.

Also, in my view, the one thing still called a theory but no longer within a probability range that justifies calling it that is evolution, of which Earth is both evidence and manifestation. What remains theoretical are some of its details, those functions which drive the process onward...or backward.
TimeSeeker
Posts: 2866
Joined: Tue Sep 11, 2018 8:42 am

Re: Philosophy of science-the first two and a half millennia.

Post by TimeSeeker »

Dubious wrote: Fri Sep 21, 2018 9:52 pm That idea, it seems to me, is self-evident. Falsification implies negation which is absolute when referring to that whose main or upholding thesis has been disproven.

Also, in my view, the one thing still called a theory but no longer within a probability range that justifies calling it that is evolution, of which Earth is both evidence and manifestation. What remains theoretical are some of its details, those functions which drive the process onward...or backward.
I can agree with that, and want to add that, by the same criteria by which evolution was nominated a theory and deemed "true", one can make the case for "the universe is a computer simulation" and prove it true.

Neither is falsifiable. But the latter has better explanatory AND predictive utility and far more supporting evidence.

I guess I am making an argument for constructivist epistemology. And it is but a small step from there to Genesis 1:27...

And so we have a universe grounded in the notions of “information” and “entropy”. Nobody has any idea what those are. But they are the foundations of physics.

So I guess in a few hundred(?) years, when somebody has figured it out, people who still think like me will be seen as “religious zealots”. Today - I get to call myself “progressive”.
uwot
Posts: 6093
Joined: Mon Jul 23, 2012 7:21 am

Re: Philosophy of science-the first two and a half millennia.

Post by uwot »

Dubious wrote: Fri Sep 21, 2018 9:52 pmFalsification implies negation which is absolute when referring to that whose main or upholding thesis has been disproven.
One of the criticisms of falsificationism is that experimental tests are themselves theory-laden. On a simple level, if a brick doesn't fall to the floor, there's something wrong with the theory that bricks always fall to the floor. On the other hand, when you are analysing the data from, say, the Large Hadron Collider, and you don't get the results that your theory predicts, there could be any number of reasons why that might be the case. Any one of the theories that combined to build a machine the size of a small city could be wrong.
Dubious wrote: Fri Sep 21, 2018 9:52 pmConversely, surviving theories which reflect our current conceptions of reality exist within a probability index measured by their ability to blend with observed facts. Though falsification may negate the theory as a whole, it may still hold value in certain of its details which may be transcribed into another theory not unlike an organ transplant. The operating room of science has always been a messy one. A theory may be dead but some of its parts may still be useful elsewhere.
And by the same token, body parts from elsewhere might keep the first theory breathing.
Dubious wrote: Fri Sep 21, 2018 9:52 pmAlso, in my view, the one thing still called a theory but no longer within a probability range that justifies calling it that is evolution, of which Earth is both evidence and manifestation. What remains theoretical are some of its details, those functions which drive the process onward...or backward.
Well, the full title of Darwin's book was 'On the Origin of Species by Means of Natural Selection'. That living beings adapt to their environment is not a theory, it is demonstrably the case. The theory bit is the natural selection; at the time, the prevailing opinion was that god did it, but of course you are right; there is much about the actual mechanism that remains theoretical.