What do you know, and how do you know that you know it?

One of the most basic questions in philosophy. Also a question which can rapidly dead-end, for instance with the favorite answer: you can't even prove that I exist! This is also a question where the answers often reveal more about your implicit assumptions than anything else. For instance, Descartes started with "I think, therefore I am" and ended up with "God exists".

I'd like to take a more practical look at an aspect of it. Because, juvenile and high-falutin' philosophy notwithstanding, this is a question that we answer every day in our actions. It is worth trying to answer it well.

I am commonly faced with the problem that I'd like to learn about something that I don't already know. How do I go about learning?

My usual initial strategy is to work off of recommendations. I've long had success in learning interesting things by asking someone who I believe knows something about the topic to recommend a good book. This is how I've found classics like Winning at New Products, Betrayal of Trust and Information Rules. Even if I don't know anyone personally, I can often get a good lead. For instance, see How to locate a good reference (was: Poker Probability Processor) for how I would tackle the problem of finding a good book on poker.

But does that always work? Consider someone who doesn't know Perl looking at Perl Black Book. It looks pretty good, right? But as I stated in Re: Re: Any Book Recommendation, I'd not recommend it. What's different?

What is different is that Perl is something that I think I know something about. I'm not about to trust random recommendations seen on the web over my own judgement. Furthermore I have enough knowledge on the topic to be aware of important factors (like attention to error reporting and security) which strongly affect quality but are invisible to most readers.

A few years ago, Matt Wright's Script Archives would have made another good example of recommendations going wrong. Today the situation is somewhat better: if you find out about Matt Wright and do a Google search for criticism of him, you'll turn up lots of cogent criticism. My usual procedure when I get a recommendation is to look for both positive and negative commentary before I act on it. Of course people who aren't so careful could easily still be misled - but not much can realistically be done about people who only look at one side before making up their minds.

So go off of recommendations. But double-check them however you can. With your own knowledge if at all possible. Is this enough to steer you right?

Unfortunately, no. As Why People Believe Weird Things makes painfully clear, it is very easy to come to believe in something that is objectively rather dubious. (An incidental note about that book: the author got a lot of mail of the form, "I enjoyed your book very much, but you got chapter X wrong - that's actually true." People disagreed on which chapter was mistaken...) From within your own "knowledge", you discount anything that supports the mainstream consensus, and you're conversely far more likely to accept whatever fits your beliefs.

Wouldn't it be nice to believe that this just happens to cults and to weird people? Unfortunately it doesn't. As Kuhn pointed out in The Structure of Scientific Revolutions, this is how scientists work. Furthermore it is a good thing. For it is only when you've internalized a belief system about how to think about a topic (let's call that a "paradigm") that you're able to really focus your thinking on that topic and can start coming up with useful ways to test it. Sure, you're probably wrong, but as Francis Bacon pointed out centuries ago, truth comes out of error more easily than out of confusion.

Of course errors aren't good; you want misunderstandings to be corrected. Which is why science engages in a pattern of concerted "destruction testing" of its paradigms: pushing them to the boundaries of their applicability, and looking hard at the anomalies. Eventually this causes one of those infamous "paradigm shifts", where an existing paradigm develops bad enough problems that people are forced to look for a new one.

Or at least so says a simplified version of scientific history. As normally happens, history takes a theme and feeds it back on itself until a smooth stream folds into chaos. This pattern of holding on to beliefs, challenging them, and re-evaluating from time to time happens at all levels in science, ranging from classical mechanics down to which genes are important in specific developmental pathways.

For one random example, see my comments on the disagreement between researchers on whether amyloid plaques have anything to do with Alzheimer's disease.

Of course this doesn't just apply to science. Nothing is special about scientists - they are just people, with no tools for generating knowledge that the rest of us lack. But the scientific process is special, in two ways. It limits itself to questions which (by consensus) there are clear ways to tackle. (The "social sciences" typically don't so limit themselves - which is why most people don't count them as "hard sciences".) And it engages in systematic attempts to push its theories to the limit. The rest of us don't generally get to choose which questions we have opinions on, and we rarely challenge our own beliefs.

So where do we stand now? To learn about something, a good starting place is to work off of recommendations, analyzed to the best of your knowledge. Unfortunately much of what you think you know is tied up in self-reinforcing belief systems. There isn't much you can do about this inevitable fact beyond trying to challenge your own beliefs from time to time.

So what are some of these self-reinforcing belief systems? They include things like belief in alien abductions, the Bible, evolution, trickle-down economics, and weapons of mass destruction in Iraq.

So how do you test them? Well for one thing, the next time that you are about to reject what someone has to say as "obviously" being wrong, why not stop and ask yourself how you know what you think you do? If you are honest, you might find that your beliefs tie into a self-reinforcing knot, and there is nothing that you could say to really convince someone who disagrees with you.

That's an interesting experience. I highly recommend that everyone try it more often. (Including me.)

Re: What do you know, and how do you know that you know it?
by hv (Prior) on Aug 02, 2004 at 12:44 UTC

    Interesting meditation. :)

    As for what I know that I know: I'd limit that to mathematics of a particular formal sort, that I've proved myself, along the lines of "given these axioms, and these rules of logic, this conclusion follows". I'd like to see a project to prove all known maths in this way, using declared rules of logic and steps small enough to permit computer verification - it boils down to nothing but symbolic manipulation. (I even registered "axiomattic.org" for a while, hoping to start the project myself.)

    An important aspect of this approach is the definition of 'axiom'. Sometimes axioms are defined as "things that are so obviously true that they need no proof", but that is not how I use the term - if I did I'd have to deny all axioms. Instead, axioms are simply "the things assumed true for the purposes of this proof", and the result of a proof is "assuming these axioms and these rules, this necessarily follows".
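
    As a small illustration of the flavour - a minimal sketch only, using the modern proof assistant Lean purely as a stand-in for whatever system such a project might adopt - here the "axioms" are just the stated hypotheses, and the checker verifies by pure symbolic manipulation that the conclusion follows:

        -- "Given these axioms, and these rules of logic, this conclusion follows."
        -- The hypotheses h1 and h2 play the role of axioms: assumed true for
        -- the purposes of this proof, not claimed to be obviously true.
        theorem chain {P Q R : Prop} (h1 : P → Q) (h2 : Q → R) : P → R :=
          fun hp => h2 (h1 hp)   -- modus ponens twice, checked mechanically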

    Within the realm of programming, I had a very fortunate and salutary experience around 1991, working on a 6800-based machine with 48K of RAM at a time when I was familiar with every line of code in the O/S. I encountered an unexpected error which halted the machine, and because of the limited RAM, saving and printing out the entire core was a practical thing to do.

    Tracking back from the location of the error showed that some memory had been corrupted ... by an error elsewhere, in which some memory had been corrupted ... by an error elsewhere, in which some memory had been corrupted ... in a very strange way. It turned out that one instruction had been changed from 0xA8 0x01 (LDAA+ 1 - load register A with the byte pointed to by the pointer register + 1) to 0xA9 xxx, an invalid opcode which nonetheless by symmetry should represent an instruction like 'STS#' - store stack immediate.

    The processor had actually skipped a byte, then stored the stack pointer over the following two bytes, and the rest of the mayhem was caused by that. What changed the 0xA8 to 0xA9? Probably an alpha particle: examining that byte of memory showed that it had a parity error. (With the help of a colleague, I was able to account for every byte in memory other than that one.)
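
    The parity check itself is simple arithmetic. Here is a minimal Perl sketch of the idea - not the actual hardware mechanism, just an illustration of why any single flipped bit is detectable:

        use strict;
        use warnings;

        # Parity of a byte: the XOR of its eight bits. Parity RAM stores this
        # extra bit alongside each byte; flipping any single data bit
        # (e.g. 0xA8 -> 0xA9) inverts the parity, flagging the corruption.
        sub parity {
            my ($byte) = @_;
            my $p = 0;
            $p ^= ($byte >> $_) & 1 for 0 .. 7;
            return $p;
        }

        printf "parity(0xA8) = %d\n", parity(0xA8);   # 3 bits set -> 1
        printf "parity(0xA9) = %d\n", parity(0xA9);   # 4 bits set -> 0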

    That experience in my formative years was very instructive: no matter how defensively you code, the fallibility of hardware means there are no guarantees, ever. This is why important applications use redundancy (see "#8: Why five computers?" under that link).

    A couple of years later I learned another important lesson. By now I was writing C code for DOS and Windows, using Microsoft's compiler which was buggy as hell. Every few days I'd track down a problem, contact MS technical support, work my way through the layers of gap-toothed flunkeys determined to point me at a page of a manual I knew better than they, eventually get through to a hallowed programmer who'd (on a good day) look at my test case, confirm that it was a bug, and promise me that it'd be fixed in the next release - which was 6 months away, would cost me more money, and would introduce myriad new bugs.

    After a while I noticed that my progress had slowed down purely because of my lack of trust in the tools - whenever my program failed to behave as expected, my first port of call was the debugger's facility to show the C code side by side with the generated assembler, to try and discover how the compiler had screwed up this time. But bad as the compiler was, the majority of the time the bug was in my code, and I wasted an awful lot of time ruling out the compiler each time before examining my own code.

    So from this I learnt the importance of good tools (and the value, only realised when I discovered the open source community a few years later, of being able to fix the tools yourself). I also learnt that chances are, it's me that screwed up this time. And I learnt, more generally, that I (and by observation others) will always look first to blame anything other than our own code.

    Update: s/cosmic ray/alpha particle/ after reading http://www.science.uva.nl/~mes/jargon/c/cosmicrays.html

    Hugo

      I'd like to see a project to prove all known maths in this way, using declared rules of logic and steps small enough to permit computer verification - it boils down to nothing but symbolic manipulation.

      It's called the Principia Mathematica. :)

      As for what I know that I know: I'd limit that to mathematics of a particular formal sort, that I've proved myself, along the lines of "given these axioms, and these rules of logic, this conclusion follows". I'd like to see a project to prove all known maths in this way, using declared rules of logic and steps small enough to permit computer verification - it boils down to nothing but symbolic manipulation. (I even registered "axiomattic.org" for a while, hoping to start the project myself.)

      What you describe sounds like Hilbert's Program. Unfortunately, Gödel's Incompleteness Theorem shows that we can never find an all-encompassing axiomatic system which is able to prove all mathematical truths, but no falsehoods.

        What he described is less ambitious than Hilbert's Program.

        He just wanted to have computers (which hopefully are less fallible than humans) systematically verify known mathematics. While I see this as interesting to attempt, it is also doomed to fail because more math is being produced faster than you can possibly verify it.

        Which brings up a dirty little secret of mathematics. You can consider a mathematical proof as being like a computer program that has been specced out in detail, but never run. Yeah, you think that it would work. But if you tried to implement it you'd run into little roadblocks. Most of them you could get around. But you never have any idea what is lurking in there, and no real way to find out.

        And no, this is not a trivial problem. In fact entire areas of mathematics have fallen apart because problems were found, and found to be serious. My current favorite example is the classification of finite simple groups. Nobody has really reviewed the whole proof. Everyone knows that the proof has flaws. And the situation is serious enough that there is a decades-long effort under way to find a different proof of the result! (The effort was started by some of the people who made their reputations working on the original proof.)

Re: What do you know, and how do you know that you know it?
by maetrics (Sexton) on Aug 02, 2004 at 10:13 UTC

    I'm not sure how this sits with me yet. The topic and beginning were interesting enough for me to read the rest. But instead of answering "How do you know what you know?" or "How does one go about learning?", it seems to have diffused into "Don't flame people for thinking differently". Maybe I missed something.

    Much of your conclusion could be strengthened by an understanding of forensic debate and the scientific process.

    All in all I did like the topics discussed in this post, so I voted ++.

Re: What do you know, and how do you know that you know it?
by xdg (Monsignor) on Aug 02, 2004 at 13:56 UTC

    My initial reaction was "what does this have to do with Perl?" However, it suddenly occurred to me that there is an obvious (to me, at least) Perl corollary:

    There is more than one way to do it

    From different starting points, reasonable people will reach very different conclusions about how to solve problems. This is normal and natural. The importance, ease, and cultural encouragement of testing as part of writing code is another parallel -- what do you know your code does, and how do you know it? Or at least: what do you assume you know about your code, and how do you check your assumptions? (Does this suggest the Perl community is naturally inclined to avoid the self-reinforcing belief trap?) Now, if only there were a Test::More suite for life...
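
    For anyone who hasn't seen it, here is a minimal sketch of what such a suite looks like (the add() function is just a hypothetical stand-in for whatever code you believe you understand):

        use strict;
        use warnings;
        use Test::More tests => 2;

        # "What do you know your code does, and how do you know it?"
        # Each test states a belief about the code, then checks it.
        sub add { my ($x, $y) = @_; return $x + $y }

        is( add(2, 2),  4, 'add() sums small integers' );
        is( add(-1, 1), 0, 'add() handles mixed signs' );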

    In all seriousness, to the point of the article, I find one of the best ways to avoid a self-reinforcing belief system is the debating team notion that one should be able to argue both sides of any argument equally well.

    -xdg

    Code posted by xdg on PerlMonks is public domain. It has no warranties, express or implied. Posted code may not have been tested. Use at your own risk.

      I actually started with the plan of ending with a discussion of some of the self-reinforcing belief systems dividing programmers (e.g. dynamic techniques vs B&D practices), but ran out of steam. (It was late at night...)

      However let me assure you that while TIMTOWTDI encourages questioning how you do things, nothing protects anyone from falling into this type of trap - not even being a Perl programmer. To the extent that you can be protected, one of the best protections is to be painfully aware of how easy it is to flip the bozo bit on information that would change your views, and to make yourself give the alternatives careful consideration.

Re: What do you know, and how do you know that you know it?
by BrowserUk (Patriarch) on Aug 02, 2004 at 09:52 UTC

    I agree++.


    Examine what is said, not who speaks.
    "Efficiency is intelligent laziness." -David Dunham
    "Think for yourself!" - Abigail
    "Memory, processor, disk in that order on the hardware side. Algorithm, algorithm, algorithm on the code side." - tachyon
Re: What do you know, and how do you know that you know it?
by zentara (Archbishop) on Aug 02, 2004 at 14:17 UTC
    My approach is to sit back and envision the "knowledge" in terms of "the big picture". Most knowledge applies to solving a specific problem, and the solutions often ignore its interaction with the "big picture". I often think of the old saying, "what does this have to do with the price of tea in China?"

    The main question which you must ask yourself is: what is "the big picture" to you? Is it corporate profits, or the world economy, or world peace, or even "how does it affect the creation fields in the quantum foam"?

    So it is up to the observer to know the limits of his "big picture", and to keep in the back of his mind that "there is always something bigger".

    I just saw a sig in a post here recently (maybe yours?)...

    An intellectual is someone whose mind watches itself. -- Albert Camus


    I'm not really a human, but I play one on earth. flash japh
Re: What do you know, and how do you know that you know it?
by jZed (Prior) on Aug 02, 2004 at 18:20 UTC
    I think William Gibson's idea that "the future is already here, it just isn't evenly distributed yet" applies. The current paradigms and the future ones co-exist. The orthodox and the cautious support the former; the free-thinkers and crackpots support the latter. Pejorative terms like "orthodoxy" and "crackpot" are part of the problem; ideas should not be accepted or rejected either on the basis that "everyone" believes them or that "no one" believes them. But any field of science needs to be able to distinguish between ideas which go beyond current assumptions and ideas which go beyond current evidence. Sometimes those are the same thing and sometimes they aren't.

    In Europe, a few centuries ago, there were folk beliefs that witches could fly. There was also a guy named Da Vinci who drew pictures of flying machines. Would rejecting the idea that humans would ever be able to fly have been a good thing? Nope. Would giving more weight to Da Vinci's approach than to the believers in witchcraft have been a good thing? Probably so. OTOH there are other aspects of medieval witchcraft which foreshadowed ideas (e.g. in psychology) that were just as much "ahead of their time" as Da Vinci's.

    Have you seen the movie "What the #$*! Do We Know!?"? A thoroughly confused look at the relation of paradigms to evidence, IM(NS)HO, but one that highlights that the romantic notion that new ideas are automatically good is no better than the notion that new ideas are automatically wrong.

    Your advice about self-introspection regarding things that are "obviously" wrong is good advice; I wish I could follow it more often.

      What you say has some justice, but is far from the full story. When all is said and done, most people labelled "crackpots" will not be vindicated by history. Furthermore the path to discovering better future paradigms tends to be laid down by people working under current paradigms. Currently accepted paradigms may be imperfect, but they got to being currently accepted through a testing process that is pretty good.

      I'm saying (among other things) that it is good for all of us to contribute to the testing process. That doesn't mean that we should entirely discard the results of other people's testing!

        I guess I wasn't very clear, since I was trying to say something similar to what you just said here - thanks for paraphrasing me more clearly. I guess the only part where I have a slightly different view is that crackpot ideas, even those that won't be vindicated by history, can contain some small spark of an idea that may ignite some other idea which will be vindicated. Science fiction is useful for sparking ideas even when it doesn't directly contribute to science. When someone tells me that Basque must be the oldest human language because the last known Neanderthals lived in the Iberian peninsula, I can imagine a nice science fiction story which would make the statement true, even if on another level I know that the statement is a series of weak or impossible links in a chain of improbable assumptions on a subject that is a magnet for quacks.
Re: What do you know, and how do you know that you know it?
by johndageek (Hermit) on Aug 02, 2004 at 19:54 UTC
    "What do you know, and how do you know that you know it?"

    I know of all the things I have perceived.

    Since I cannot validate what I have perceived, other than by more perception, my perceptions may be wrong.

    But since it is the only game in town (as far as I know), I think I will stay in the game for a while (sigh).

    "How do I go about learning?"

    I have a desire, need, want or similar feeling.

    Validate that I do not have an existing solution based upon previous experiences.

    Attempt to imagine a solution based upon previous experiences and information, as well as some suggestions from the little voices in my head (or whatever you wish to call those thoughts that rise unbidden and are not based upon previous experiences).

    Having reached the conclusion that I do not have any (or enough) information about the subject at hand, I will attempt to gather information about the subject from various sources, starting with sources that have proven reliable in the past. Then I begin building on that, evaluating each piece of information against my growing base of information, and accepting or rejecting each new piece into my gained knowledge.

    Once again, we run into the basic problem of perception. First, how accurate are my perceptions? Second, how accurate are the communication tools others (and I) use to relay their experiences to me? Third, since my own perceptions are doubted by me, why would I trust someone else's perceptions of a situation they are in (or have been in)?

    Once again we are in the quandary of doubting the only sources of information fed to the little bone-encased black box that rides about atop our necks.

    The fun thing about the discussion is - once you begin to doubt, where do you stop?

    Alternate ending: The fun thing about the discussion is - once you begin to believe, where do you stop?

    Enjoy!
    Dageek
Re: What do you know, and how do you know that you know it?
by Velaki (Chaplain) on Aug 04, 2004 at 12:37 UTC

    I start off with the axiom
    Knowledge is True and Justified Belief
    and take it from there.

    However, in testing a theorem, would the realization of skills acquisition be considered a lemma to the original proof, and thus be a concrete epistemological truth -- by direct observation -- of the skill itself? Hence, learning can be proven, with the side-effect of potential practicability thereto! And thus you KNOW a new skill -- like Perl Programming!

    Thoughts? Musings?

    -v
    "Perl. There is no substitute."
Re: What do you know, and how do you know that you know it?
by qq (Hermit) on Aug 03, 2004 at 19:46 UTC

    "...Socrates claimed that he being aware of his ignorance is wiser than those who, though ignorant, still claimed knowledge..." wikipedia