In recent months, I’ve written a couple of posts that dance around the subject of knowledge. I would like to explore that topic further here.
There are two basic categories of knowledge—a priori and a posteriori. The first type, a priori knowledge, is the purest type of knowledge. This is knowledge that is not based upon observation or evidence, but upon pure logic and reason. I have heard it described as “something you can prove without leaving your couch”—something like “I think, therefore I am”, or mathematical proofs.
Then there is a posteriori knowledge—knowledge which is based upon observation. Within this category are all scientific theories and philosophies based upon input from the world around us. Most scientific knowledge we have, of course, falls into the a posteriori category.
So true knowledge (a priori) is to say that two trees plus three trees equal five trees. This is irrefutable; there is no scenario in which it is not the case. But secondary knowledge (a posteriori) is to say that these five trees will produce oranges this year.
What is interesting to me is that when you boil through all of the difficulty, science ultimately comes down to this—we observe situations that predictably repeat themselves. These observations are facts. From these facts, we try to derive underlying laws or principles which we believe explain them. But never forget that the facts themselves are more ‘real’ than the principles we use to describe them. So even within the a posteriori knowledge of science, there is a division of knowledge.
To put it another way, you might say that all knowledge can be broken into four strata, in decreasing order of certainty: a priori knowledge (like most mathematics and some philosophy); a posteriori facts which have been directly observed (data); well-documented a posteriori theories based upon those facts (laws/theories); and hypotheses which are not yet well-documented. Using these strata, we give top priority, and very little question, to mathematics and data; we remain appropriately skeptical of scientific laws and theories; and we remain extremely skeptical of hypotheses not thoroughly documented.
However, scientists today have (wrongfully) changed the debate. Instead of the well-reasoned layers of knowledge listed above, all knowledge is now divided into two types: well-accepted science and everything else. If the scientific community accepts something, it is called a “scientific fact”, and anyone who disagrees is “anti-science” or unintellectual.
Of course, this is absurd. A moment’s effort should demonstrate to you that a logical person should always remain largely skeptical of scientific theories and laws; this is in fact what all good scientists should do. There are several reasons that we should always retain healthy skepticism:
Limited sample size.
As we noted before, all science is based upon the data available to us. And the data (facts) we use as the basis for science are highly limited. We have performed observational science for only a few hundred years, and only in the most recent hundred or so have we had the technological precision to gather reliable data.
Let me try to put it in perspective. The earth sits in one small part of a galaxy some 100,000 light-years across, and we have studied the universe seriously for only about 400 years of its history. All of our conclusions rest upon this tiny sample: one small planet, observed for a brief period of time.
What if I told you that I knew everything I needed to about the Earth and its history, based upon studying one hydrogen atom for 0.84 seconds? You’d think I was pretty crazy, right? But mathematically, that is exactly what we are saying.
Our sample size—a roughly 6,000 km planet in a 100,000-light-year galaxy, studied for 400 years of a 15-billion-year history—is proportionally equivalent to studying one hydrogen atom out of the entire planet for 0.84 seconds and assuming that the conclusions we draw apply to the whole of the earth’s history!
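The kind of proportion this analogy rests on can be sketched numerically. This is an illustrative sketch only, using the round figures above; whether the hydrogen-atom equivalence works out exactly depends on which measures (linear size, volume, atom counts) you choose to compare:

```python
# Rough sampling-fraction sketch using the round numbers from the text.
# Illustrative only: the exact "one hydrogen atom for 0.84 s" equivalence
# depends on which measures (length, volume, atom count) you compare.

LIGHT_YEAR_M = 9.461e15            # metres in one light-year

earth_diameter_m  = 1.2742e7       # ~12,742 km
galaxy_diameter_m = 100_000 * LIGHT_YEAR_M

years_observed     = 400
universe_age_years = 15e9          # the figure used in the text

spatial_fraction  = earth_diameter_m / galaxy_diameter_m   # linear comparison
temporal_fraction = years_observed / universe_age_years

print(f"spatial fraction (linear): {spatial_fraction:.2e}")
print(f"temporal fraction:         {temporal_fraction:.2e}")
```

Both fractions are vanishingly small, which is the essence of the sample-size complaint, whatever specific equivalence one constructs from them.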
Do I need to describe how ridiculous a statement that is? Yet this is what we do in science—we ignore how small our sample size is, and from the few observations we have, we draw big conclusions. Very big. Too big, for my taste. Another way to put it: because you observed one bag of luggage come off the carousel at the correct airport, it is safe to assume that every bag of luggage in the world will be properly handled…for the next 17.6 million years.
The assumption of uniformity.
To perform any science at all, we must assume that everything we have observed is typical of the universe as a whole.
Now this may seem logical, but keep in mind—this is a bold, major assumption. We have no particular reason to assume that the measurements we make here would hold true in another galaxy, or that the universe is basically the same everywhere. We simply assume it, because otherwise we are limited in what we can investigate. And after a while, the assumption gets forgotten. If we consistently measure the speed of light as having a certain value in our corner of the universe, we take it for granted that it has the same value everywhere else.
Imagine that you are a tiny creature raised on one particular iceberg in the middle of the Antarctic. You measure gravity’s effects, over and over, for years. You apply the assumption above, and conclude that gravity always accelerates objects at 9.81 m/s². The problem? A bad assumption. If you could measure on top of the Himalayas, or on the moon, you would learn that gravitational acceleration is not everywhere the same. And all of your theories, built upon that one vantage point, would turn out to be false.
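The gravity example can be checked directly from Newton's law, g = GM/r². A quick sketch using standard textbook constants—the slight decrease at Everest's altitude and the much smaller lunar value both fall out of the same formula:

```python
# Surface gravity from Newton's law: g = G * M / r**2.
# Constants are standard textbook values.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2

M_EARTH = 5.972e24     # mass of the Earth, kg
R_EARTH = 6.371e6      # mean radius of the Earth, m
EVEREST = 8_848        # Everest's height above sea level, m

M_MOON = 7.342e22      # mass of the Moon, kg
R_MOON = 1.737e6       # radius of the Moon, m

def g(mass, radius):
    """Gravitational acceleration at distance `radius` from mass `mass`."""
    return G * mass / radius**2

g_sea     = g(M_EARTH, R_EARTH)            # roughly 9.8 m/s^2
g_everest = g(M_EARTH, R_EARTH + EVEREST)  # slightly less than at sea level
g_moon    = g(M_MOON, R_MOON)              # roughly 1.6 m/s^2

print(f"sea level: {g_sea:.3f}  Everest: {g_everest:.3f}  Moon: {g_moon:.3f}")
```

So the iceberg-dweller's "constant" 9.81 m/s² is only a local measurement, exactly as the paragraph argues.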
We do not currently have the technology to perform such observations in other parts of the solar system or our galaxy—much less the universe as a whole. And if it turns out that one of our commonly-used measurements is not in fact consistent across time or space, then a whole lot of science falls down.
So never forget that, for all the great knowledge science can gather, it remains a house of cards built upon a few foundational assumptions and measurements. And if the rules of the universe are not the same everywhere (and we have no way to gather evidence showing whether they are), then the whole house comes down.
We also must always remember to differentiate between what is knowable and what is unknowable—a distinction that scientists all too frequently fail to make. Take evolutionary theory, for example. Evolutionary theory cannot be evaluated in one big box; it is actually two separate theories which must be evaluated. On the one hand, you have what I call General Evolutionary Theory (GET)—the theory that mutational changes to the genome which improve the odds of survival are likely to spread throughout a population. On the other hand, you have Specific Evolutionary Theory (SET)—the theory that this process led to the development of life from bacteria up to mankind.
These are two vastly different theories. The primary difference being this—General Evolutionary Theory is knowable, and Specific Evolutionary Theory is unknowable. You can create experiments to test GET, at least in theory: take a population of animals in which one has a genetic mutation which should increase survival rate, isolate them from other breeding populations, and study whether that mutation spreads. (Note: such direct experiments into GET are hard to find, if they exist. There are many which look at data and suppose the results; but actually performing the experiment is more rare.)
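The expected outcome of the experiment just described can be sketched as a toy model. This is a generic single-locus selection recurrence, not any specific published experiment, and the mutation's fitness advantage `s` is an assumed parameter:

```python
# Toy model of the GET experiment above: the expected frequency of a
# beneficial mutation under simple one-locus selection. Each generation,
# p' = p(1 + s) / (1 + p*s), where s is the (assumed) fitness advantage.
# Deterministic: this tracks the expectation only, ignoring random drift.

def next_frequency(p, s):
    """Expected mutant frequency next generation, selection coefficient s."""
    return p * (1 + s) / (1 + p * s)

p = 0.01   # the mutation starts rare: 1% of the population
s = 0.10   # assumed 10% survival/reproduction advantage

for generation in range(200):
    p = next_frequency(p, s)

print(f"frequency after 200 generations: {p:.4f}")   # approaches fixation
```

In this idealized model the advantageous mutation spreads to near-fixation—which is exactly the kind of prediction the isolated-population experiment would test directly.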
But SET is actually unknowable, because all the data is historical. There is no experiment by which we can ever actually know whether man descended from bacteria. We can look at the fossil record and suppose; we can test GET and use it to draw conclusions. But these remain inferences, because the past is the past: we can never actually run the experiment and observe the data. Thus, Specific Evolutionary Theory is always unknowable. We can never do more than suspect it.
Yet this is not how these are presented! If you look into evolutionary texts, you will find that the “evidence” falls into one of two categories: data/facts (which draw no conclusions but simply describe something which happened), or experimental support for General Evolutionary Theory. Specific Evolutionary Theory is then built on speculation from the above.
So, for example, the existence of a fossil which seems transitional (the alleged “missing link”) is not evidence of SET! It simply shows that a species existed which looks like a cross between an ape and a man. It is data—not an observed change from apes into men. But this piece of data, combined with GET findings, leads people to speculate in favor of SET. And that is fine…as long as we remember that it is speculation.
It is critical that we remember this scope limitation: science can know only those things it can test and directly observe; where a theory involves looking at historical data and ‘connecting the dots’ from our past, science can only suspect.
Consider two recent news articles: here, and here. The first discovery—that some particles were apparently observed traveling faster than the speed of light—falls into the “knowable” category: we can directly measure it. The second falls into the “unknowable” category—we are unlikely ever to observe the phenomenon directly (only its effects)—and so it remains only a hypothesis.
Not all scientific knowledge is created equally, as you can see!
So it is critical as we hear of a scientific finding, or are debating scientific theory, that we ask the following questions:
1. What is the “level” of the knowledge being discussed? Is it (a) an a priori proof; (b) a piece of data (without suppositions or conclusions); (c) a well-founded a posteriori theory based upon data; or (d) a logical hypothesis or inference based upon one of the above?
2. Is the topic even theoretically knowable, or does it always remain unknowable/not directly testable?
3. Have you properly kept in mind the inherent limitations common in many theories (assumptions, scope, and sample size)?
Only then can you have a reasonable discussion of the situation.