Critical thinking, an outline from my notes.
28-07-2012, 02:11 PM (This post was last modified: 31-07-2012 05:21 AM by fstratzero.)
After my de-conversion I was fooled by many conspiracy theories, the news, and other sources of information. I'd be proven wrong by other people, and that prompted a thought.
Quote: What if the way I think is broken? People have been thinking since the beginning of the species. I'm sure there must be information on how to think.
That's when I assailed the internet and found out about critical thinking. After reading some books, listening to podcasts, and watching YouTube videos, I found a new way to think! Finally I could think more clearly and accurately, and actually enjoy being corrected.
Loosely defined, it's a system of opening one's mind to logically analyze, without bias, both positions on a topic, and then to reach a conclusion that's closest to reality.
Now for the outline.
Enculturation - is the process by which a person learns the requirements of the culture by which he or she is surrounded, and acquires values and behaviours that are appropriate or necessary in that culture.
Ego Defenses - Ego defenses are psychological coping strategies that distort reality in order to protect ourselves from anxiety, guilt, and other bad feelings.
Denial - When we simply refuse to accept an unpleasant reality, we are using denial.
Projection - Projection is the defense mechanism by which we see in others a part of ourselves that we cannot accept and do not recognize.
Emotions - Emotions are an important mark of human experience. They are in part what separates humans from machines and the lower animals, for machines can compute but they can not experience joy. And animals may find themselves attached to others, but they do not love them. Emotions give our world taste and richness, joy and surprise, but also pain and sorrow. Emotions can affect and inspire thought, said William James, but he also said they can destroy it. Later in this book we look at how emotions can give birth to thinking, but for now our attention focuses on their inhibiting influence, on their capacity to bury, twist, and fragment the thinking process and take it to the depths of the irrational.
Rationalization - Of all the defense mechanisms, rationalization is perhaps the greatest inhibitor of clear thinking. Rationalization is distorted thinking that attempts to justify behavior motivated by self-interest or unacceptable drives. It serves to protect us from bad feelings by, for example, turning selfish motives into honorable ones.
Self Serving Biases - If our motives are good they do not need to be rationalized. But sometimes, in spite of our good motives, undesirable consequences occur, consequences that threaten our self-esteem. The actions of others can also threaten our self-esteem. Such ego-threatening situations can lead us to cognitive biases.
Schemata - Not only do we tend to think about the world according to what we want to see and what we need to see, we tend to think of it in terms of what we expect to see. We tend to perceive and think about others and situations in terms of the ideas we have already formed about them. These ideas are called schemata.
Cognitive Dissonance - is a discomfort caused by holding conflicting cognitions (e.g., ideas, beliefs, values, emotional reactions) simultaneously.
Expectations - Expectation is directly linked to the self-fulfilling prophecy. Whether such an expectation is truthful has little or no effect on the outcome. If people believe what they are told, or convince themselves of it, chances are they will see the expectation through to its inevitable conclusion.
*Six common thinking errors*
1) Personalization - taking something personally when it's not meant to be.
2) Polarized Thinking - black and white thinking
3) Over generalization - drawing broad conclusions on the basis of a single incident.
4) Catastrophizing - assuming the worst possible outcome, e.g., assuming a car crash must mean death.
5) Selective Abstraction - focusing on a small point of something, ignoring the bigger picture
6) Belief-Identity Duality - we are not our beliefs; rather, they are pieces of knowledge we've chosen to accept as true.
Deductive Thinking - is the process of reasoning from one or more general statements regarding what is known to reach a logically certain conclusion. Deductive reasoning involves using given true premises to reach a conclusion that is also true. Deductive reasoning contrasts with inductive reasoning in that a specific conclusion is arrived at from a general principle. If the rules and logic of deduction are followed, this procedure ensures an accurate conclusion.
p1 All men are mortal.
p2 Socrates is a man.
c Therefore, Socrates is mortal.
Basically premise + premise = conclusion.
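The structure of the Socrates syllogism can be made concrete as set membership. This is a hypothetical sketch of my own (the sets and names are illustrative assumptions, not part of the notes):

```python
# Sketch: a deductive syllogism modeled as set membership.
# p1 "All men are mortal" becomes: the set of men is contained in the set of mortals.

men = {"Socrates", "Plato", "Aristotle"}  # illustrative set of men
mortals = set(men) | {"Hypatia"}          # p1: every man is also a mortal

subject = "Socrates"
assert subject in men                     # p2: Socrates is a man

conclusion = subject in mortals           # c: therefore, Socrates is mortal
print(conclusion)                         # True
```

If the premises hold (the membership assertions), the conclusion cannot come out false — which is exactly what distinguishes deduction from induction.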
Inductive Thinking - The philosophical definition of inductive reasoning is much more nuanced than simple progression from particular/individual instances to broader generalizations. Rather, the premises of an inductive logical argument indicate some degree of support (inductive probability) for the conclusion but do not entail it; that is, they suggest truth but do not ensure it. In this manner, there is the possibility of moving from generalizations to individual instances. Inductive reasoning consists of inferring general principles or rules from specific facts. A well-known laboratory example of inductive reasoning works like a guessing game. The participants are shown cards that contain figures differing in several ways, such as shape, number, and color. On each trial, they are given two cards and asked to choose the one that represents a particular concept. After they choose a card, the researcher says "right" or "wrong."
Categorical Syllogisms - a categorical syllogism is an argument consisting of exactly three categorical propositions (two premises and a conclusion) in which there appear a total of exactly three categorical terms, each of which is used exactly twice.
The following summarizes the basic rules for the valid categorical syllogism.
1. At least one affirmative premise (“All humans are mortal”)
2. At least one universal premise (“All humans are mortal” or “No humans are immortal”)
3. Exactly three terms
Logical Rules
1. If one of the premises is negative, the conclusion must be negative.
2. If both premises are positive, the conclusion must be positive.
3. If one of the premises is particular, the conclusion must be particular.
4. If one of the premises is singular, the conclusion must be singular.
5. The middle term must be distributed at least once.
6. A term distributed in the conclusion must be distributed in a premise.
Remember, even if a syllogism meets all these rules, if the premises are false, we cannot rely upon the conclusion.
Logical Fallacies - a logical fallacy is an error in reasoning, often due to a misconception or a presumption.
Fallacy of division
A fallacy of division occurs when one reasons that something true of a thing must also be true of all or some of its parts.
Fallacy of composition
The fallacy of composition arises when one infers that something is true of the whole from the fact that it is true of some part of the whole (or even of every proper part). For example: "This fragment of metal cannot be broken with a hammer, therefore the machine of which it is a part cannot be broken with a hammer." This is clearly fallacious, because many machines can be broken apart without any of their parts being breakable.
Ad hominem
An ad hominem argument is one that attempts to counter another's claims or conclusions by attacking the person rather than addressing the argument itself. True believers will often commit this fallacy by countering the arguments of skeptics by stating that skeptics are closed-minded. Skeptics, on the other hand, may fall into the trap of dismissing the claims of UFO believers, for example, by stating that people who believe in UFOs are crazy or stupid.
A common form of this fallacy is also frequently present in the arguments of conspiracy theorists (who also rely heavily on ad-hoc reasoning). For example, they may argue that the government must be lying because they are corrupt.
It should be noted that simply calling someone a name or otherwise making an ad hominem attack is not in itself a logical fallacy. It is only a fallacy to claim that an argument is wrong because of a negative attribute of someone making the argument. (i.e. “John is a jerk.” is not a fallacy. “John is wrong because he is a jerk.” is a logical fallacy.)
The term “poisoning the well” also refers to a form of ad hominem fallacy. This is an attempt to discredit the argument of another by implying that they possess an unsavory trait, or that they are affiliated with other beliefs or people that are wrong or unpopular. A common form of this also has its own name – Godwin’s Law or the reductio ad Hitlerum. This refers to an attempt at poisoning the well by drawing an analogy between another’s position and Hitler or the Nazis.
Argument from ignorance
The argument from ignorance basically states that a specific belief is true because we don't know that it isn't true. Defenders of extrasensory perception, for example, will often overemphasize how much we do not know about the human brain. It is therefore possible, they argue, that the brain may be capable of transmitting signals at a distance.
UFO proponents are probably the most frequent offenders. Almost all UFO eyewitness evidence is ultimately an argument from ignorance – lights or objects sighted in the sky are unknown, and therefore they are alien spacecraft.
Intelligent design is almost entirely based upon this fallacy. The core argument for intelligent design is that there are biological structures that have not been fully explained by evolution, therefore a powerful intelligent designer must have created them.
In order to make a positive claim, however, positive evidence for the specific claim must be presented. The absence of another explanation only means that we do not know – it doesn’t mean we get to make up a specific explanation.
Argument from authority
The basic structure of such arguments is as follows: Professor X believes A, Professor X speaks from authority, therefore A is true. Often this argument is implied by emphasizing the many years of experience, or the formal degrees, held by the individual making a specific claim. The converse of this argument is sometimes used: that someone does not possess authority, and therefore their claims must be false. (This may also be considered an ad hominem logical fallacy – see above.)
In practice this can be a complex logical fallacy to deal with. It is legitimate to consider the training and experience of an individual when examining their assessment of a particular claim. Also, a consensus of scientific opinion does carry some legitimate authority. But it is still possible for highly educated individuals, and a broad consensus to be wrong – speaking from authority does not make a claim true.
This logical fallacy crops up in more subtle ways also. For example, UFO proponents have argued that UFO sightings by airline pilots should be given special weight because pilots are trained observers, are reliable characters, and are trained not to panic in emergencies. In essence, they are arguing that we should trust the pilot’s authority as an eye witness.
There are many subtypes of the argument from authority, essentially referring to the implied source of authority. A common example is the argument ad populum – a belief must be true because it is popular, essentially assuming the authority of the masses. Another example is the argument from antiquity – a belief has been around for a long time and therefore must be true.
Argument from Final Consequences
Such arguments (also called teleological) are based on a reversal of cause and effect, because they argue that something is caused by the ultimate effect that it has, or the purpose that it serves. Christian creationists have argued, for example, that evolution must be wrong because if it were true it would lead to immorality.
One type of teleological argument is the argument from design. For example: the universe has all the properties necessary to support life, therefore it was designed specifically to support life (and therefore had a designer).
Argument from Personal Incredulity
I cannot explain or understand this, therefore it cannot be true. Creationists are fond of arguing that they cannot imagine the complexity of life resulting from blind evolution, but that does not mean life did not evolve.
Begging the Question
The term “begging the question” is often misused to mean “raises the question” (and common use will likely change, or at least add, this new definition). However, the intended meaning is to assume a conclusion in one’s question. This is similar to circular reasoning, in that the argument tries to slip a conclusion into a premise or question – but it is not the same as circular reasoning, because the question being begged can be a separate point, whereas with circular reasoning the premise and conclusion are the same.
The classic example of begging the question is to ask someone if they have stopped beating their wife yet. Of course, the question assumes that they ever beat their wife.
In my appearance on the Dr. Oz show I was asked – what are alternative medicine skeptics (termed “holdouts”) afraid of? This is a double feature of begging the question. By using the term “holdout,” the question assumes that acceptance has already become the majority position and is inevitable. But also, Oz begged the question that skeptics are “afraid.” This also created a straw man (see below) of our position, which is actually based on a dedication to reasonable standards of science and evidence.
Confusing association with causation
This is similar to the post-hoc fallacy in that it assumes cause and effect for two variables simply because they occur together. This fallacy is often used to give a statistical correlation a causal interpretation. For example, during the 1990’s both religious attendance and illegal drug use have been on the rise. It would be a fallacy to conclude that therefore, religious attendance causes illegal drug use. It is also possible that drug use leads to an increase in religious attendance, or that both drug use and religious attendance are increased by a third variable, such as an increase in societal unrest. It is also possible that both variables are independent of one another, and it is mere coincidence that they are both increasing at the same time.
This fallacy, however, has a tendency to be abused, or applied inappropriately, to deny all statistical evidence. In fact, this constitutes a logical fallacy in itself, the denial of causation. This abuse takes two basic forms. The first is to deny the significance of correlations that are demonstrated with prospective controlled data, such as would be acquired during a clinical experiment. The problem with assuming cause and effect from mere correlation is not that a causal relationship is impossible; it’s just that there are other variables that must be considered and not ruled out a priori. A controlled trial, however, by its design attempts to control for as many variables as possible in order to maximize the probability that a positive correlation is in fact due to a causation.
Further, even with purely epidemiological, or statistical, evidence it is still possible to build a strong scientific case for a specific cause. The way to do this is to look at multiple independent correlations to see if they all point to the same causal relationship. For example, it was observed that cigarette smoking correlates with getting lung cancer. The tobacco industry, invoking the “correlation is not causation” logical fallacy, argued that this did not prove causation. They offered as an alternate explanation “factor x”, a third variable that causes both smoking and lung cancer. But we can make predictions based upon the smoking causes cancer hypothesis. If this is the correct causal relationship, then duration of smoking should correlate with cancer risk, quitting smoking should decrease cancer risk, smoking unfiltered cigarettes should have a higher cancer risk than filtered cigarettes, etc. If all of these correlations turn out to be true, which they are, then we can triangulate to the smoking causes cancer hypothesis as the most likely possible causal relationship and it is not a logical fallacy to conclude from this evidence that smoking probably causes lung cancer.
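The tobacco industry's "factor x" scenario can be simulated in a few lines of Python. This is a hypothetical sketch of my own (all variable names, noise levels, and sample sizes are invented): a hidden confounder drives two variables, producing a strong correlation between them even though neither causes the other.

```python
# Sketch: a hidden confounder ("factor x") produces correlation without causation.
import random

random.seed(0)
n = 10_000
pairs = []
for _ in range(n):
    factor_x = random.random()           # hidden confounder, never observed directly
    a = factor_x + random.gauss(0, 0.1)  # first observed variable (e.g. "smoking-like")
    b = factor_x + random.gauss(0, 0.1)  # second observed variable (e.g. "cancer-like")
    pairs.append((a, b))

# Pearson correlation coefficient, computed by hand with the stdlib only
ma = sum(a for a, _ in pairs) / n
mb = sum(b for _, b in pairs) / n
cov = sum((a - ma) * (b - mb) for a, b in pairs) / n
sa = (sum((a - ma) ** 2 for a, _ in pairs) / n) ** 0.5
sb = (sum((b - mb) ** 2 for _, b in pairs) / n) ** 0.5
r = cov / (sa * sb)
print(round(r, 2))  # strongly positive, yet a never causes b (and vice versa)
```

The correlation is real and reproducible; only the causal interpretation is wrong. Distinguishing the two requires exactly the kind of independent predictions (dose-response, cessation effects, and so on) described above.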
Confusing currently unexplained with unexplainable
Because we do not currently have an adequate explanation for a phenomenon does not mean that it is forever unexplainable, or that it therefore defies the laws of nature or requires a paranormal explanation. An example of this is the "God of the Gaps" strategy of creationists: whatever we cannot currently explain is deemed unexplainable and was therefore an act of god.
False analogy
Analogies are very useful as they allow us to draw lessons from the familiar and apply them to the unfamiliar. Life is like a box of chocolates – you never know what you’re going to get.
A false analogy is an argument based upon an assumed similarity between two things, people, or situations when in fact the two things being compared are not similar in the manner invoked. Saying that the probability of a complex organism evolving by chance is the same as a tornado ripping through a junkyard and creating a 747 by chance is a false analogy. Evolution, in fact, does not work by chance but is the non-random accumulation of favorable changes.
Creationists also make the analogy between life and your home, invoking the notion of thermodynamics or entropy. Over time your home will become messy, and things will start to break down. The house does not spontaneously become more clean or in better repair.
The false analogy here is that a home is an inanimate collection of objects, whereas life uses energy to grow and reproduce – the addition of energy to the system of life allows for the local reduction in entropy that evolution requires.
Another way in which false analogies are invoked is to make an analogy between two things that are in fact analogous in many ways – just not the specific way being invoked in the argument. Just because two things are analogous in some ways does not mean they are analogous in every way.
False continuum
The idea that because there is no definitive demarcation line between two extremes, the distinction between the extremes is not real or meaningful: there is a fuzzy line between cults and religion, therefore they are really the same thing.
False dichotomy
Arbitrarily reducing a set of many possibilities to only two. For example, evolution is not possible, therefore we must have been created (assumes these are the only two possibilities). This fallacy can also be used to oversimplify a continuum of variation to two black and white choices. For example, science and pseudoscience are not two discrete entities, but rather the methods and claims of all those who attempt to explain reality fall along a continuum from one extreme to the other.
Genetic fallacy
The term “genetic” here does not refer to DNA and genes, but to history (and therefore a connection through the concept of inheritance). This fallacy assumes that something’s current utility is dictated by and constrained by its historical utility. This is easiest to demonstrate with words – a word’s current use may be entirely unrelated to its etymological origins. For example, if I use the term “sunset” or “sunrise” I am not implying belief in a geocentric cosmology in which the sun revolves about the Earth and literally “rises” and “sets.”
Inconsistency
Applying criteria or rules to one belief, claim, argument, or position but not to others. For example, some consumer advocates argue that we need stronger regulation of prescription drugs to ensure their safety and effectiveness, but at the same time argue that medicinal herbs should be sold with no regulation for either safety or effectiveness.
No True Scotsman
This fallacy is a form of circular reasoning, in that it attempts to include a conclusion about something in the very definition of the word itself. It is therefore also a semantic argument.
The term comes from the example: if Ian claims that all Scotsmen are brave, and you provide a counterexample of a Scotsman who is clearly a coward, Ian might respond, "Well, then, he's no true Scotsman." In essence, Ian claims that all Scotsmen are brave by including bravery in the definition of what it is to be a Scotsman. This argument does not establish any facts or new information, and is limited to Ian's definition of the word "Scotsman."
Non sequitur
In Latin this term translates to "doesn't follow." It refers to an argument in which the conclusion does not necessarily follow from the premises. In other words, a logical connection is implied where none exists.
Post-hoc ergo propter hoc
This fallacy follows the basic format of: A preceded B, therefore A caused B, and thereby assumes cause and effect for two events just because they are temporally related (the Latin translates to "after this, therefore because of this").
Reductio ad absurdum
In formal logic, the reductio ad absurdum is a legitimate argument. It follows the form that if the premises are assumed to be true it necessarily leads to an absurd (false) conclusion and therefore one or more premises must be false. The term is now often used to refer to the abuse of this style of argument, by stretching the logic in order to force an absurd conclusion. For example a UFO enthusiast once argued that if I am skeptical about the existence of alien visitors, I must also be skeptical of the existence of the Great Wall of China, since I have not personally seen either. This is a false reductio ad absurdum because he is ignoring evidence other than personal eyewitness evidence, and also logical inference. In short, being skeptical of UFO’s does not require rejecting the existence of the Great Wall.
Slippery slope
This logical fallacy is the argument that a position is not consistent or tenable because accepting the position means that the extreme of the position must also be accepted. But moderate positions do not necessarily lead down the slippery slope to the extreme.
Special pleading, or ad-hoc reasoning
This is a subtle fallacy which is often difficult to recognize. In essence, it is the arbitrary introduction of new elements into an argument in order to fix them so that they appear valid. A good example of this is the ad-hoc dismissal of negative test results. For example, one might point out that ESP has never been demonstrated under adequate test conditions, therefore ESP is not a genuine phenomenon. Defenders of ESP have attempted to counter this argument by introducing the arbitrary premise that ESP does not work in the presence of skeptics. This fallacy is often taken to ridiculous extremes, and more and more bizarre ad hoc elements are added to explain experimental failures or logical inconsistencies.
Straw man
A straw man argument attempts to counter a position by attacking a different position – usually one that is easier to counter. The arguer invents a caricature of his opponent’s position – a “straw man” – that is easily refuted, but not the position that his opponent actually holds.
For example, defenders of alternative medicine often argue that skeptics refuse to accept their claims because they conflict with their world-view. If “Western” science cannot explain how a treatment works, then it is dismissed out-of-hand. If you read skeptical treatment of so-called “alternative” modalities, however, you will find the skeptical position much more nuanced than that.
Claims are not a priori dismissed because they are not currently explained by science. Rather, in some cases (like homeopathy) there is a vast body of scientific knowledge that says that homeopathy is not possible. Having an unknown mechanism is not the same thing as being demonstrably impossible (at least as best as modern science can tell). Further, skeptical treatments of homeopathy often thoroughly review the clinical evidence. Even when the question of mechanism is put aside, the evidence shows that homeopathic remedies are indistinguishable from placebo – which means they do not work.
Tautology
Tautology in formal logic refers to a statement that must be true in every interpretation by its very construction. In rhetorical logic, it is an argument that utilizes circular reasoning, which means that the conclusion is also its own premise. Typically the premise is simply restated in the conclusion, without adding additional information or clarification. The structure of such arguments is A=B therefore A=B, although the premise and conclusion might be formulated differently so it is not immediately apparent as such. For example, saying that therapeutic touch works because it manipulates the life force is a tautology because the definition of therapeutic touch is the alleged manipulation (without touching) of the life force.
The Fallacy Fallacy
As I mentioned near the beginning of this article, just because someone invokes an unsound argument for a conclusion, that does not necessarily mean the conclusion is false. A conclusion may happen to be true even if an argument used to support it is not sound. I may argue, for example, Obama is a Democrat because the sky is blue – an obvious non sequitur. But the conclusion, Obama is a Democrat, is still true.
Related to this, and common in the comments sections of blogs, is the position that because some random person on the internet is unable to defend a position well, that the position is therefore false. All that has really been demonstrated is that the one person in question cannot adequately defend their position.
This is especially relevant when the question is highly scientific, technical, or requires specialized knowledge. A non-expert likely does not have the knowledge at their fingertips to counter an elaborate, but unscientific, argument against an accepted science. “If you (a lay person) cannot explain to me,” the argument frequently goes, “exactly how this science works, then it is false.”
Rather, such questions are better handled by actual experts. And, in fact, intellectual honesty requires that at least an attempt should be made to find the best evidence and arguments for a position, articulated by those with recognized expertise, and then account for those arguments before a claim is dismissed.
The Moving Goalpost
A method of denial: arbitrarily moving the criteria for “proof” or acceptance out of range of whatever evidence currently exists. If new evidence comes to light meeting the prior criteria, the goalpost is pushed back further – keeping it out of range of the new evidence. Sometimes impossible criteria are set up at the start – moving the goalpost impossibly out of range – for the purpose of denying an undesirable conclusion.
The Scientific Method - is a body of techniques for investigating phenomena, acquiring new knowledge, or correcting and integrating previous knowledge. To be termed scientific, a method of inquiry must be based on empirical and measurable evidence subject to specific principles of reasoning. The Oxford English Dictionary says that scientific method is: "a method or procedure that has characterized natural science since the 17th century, consisting in systematic observation, measurement, and experiment, and the formulation, testing, and modification of hypotheses."
Observation - The scientific method relies primarily on systematic observation of the world. It uses formal procedures for testing ideas about cause-and-effect relationships between variables.
Hypothesis - A hypothesis is a tentative statement about the relationship between two variables, usually in the form of a prediction: “If A, then B.” For example, if (1) we had observed that people dying of cancer are usually heavy cola drinkers, (2) we were aware that cancer rates were lower before cola was invented, and (3) there was considerable scientific debate about the safety of cola additives, then our thinking and observation might lead us to suspect that the cause of cancer is excessive cola drinking. We could express the hypothesis in an if-then statement, such as “If people drink large amounts of cola, then they are more likely to develop cancer.” This if-then hypothesis could be simplified into a single statement: “The cause of cancer is drinking too much cola.” No matter how the hypothesis is formulated, it must be tested for its truthfulness because the casual observations alone are not enough to support it.
Experimentation - The testing of the hypothesis is done through research and experimentation, and is the third step of the scientific method. There are many ways to conduct these studies, each with its own advantages and disadvantages, as we discuss later. For instance, in our cola example, we could feed large quantities of cola to chimpanzees and after a while compare their cancer rates with those of a group of chimpanzees that did not receive cola. Or we could find human beings with a history of excessive cola consumption and compare their cancer incidence with that of humans who avoid such consumption. Once the experiment or a data collection is complete, we move on to the last step of the scientific method, verification.
Verification - is the analysis of our data to see if the data support or deny the hypothesis. In our example, we would analyze the results of our experiment to see if the excessive cola drinkers did indeed have a higher incidence of cancer. If they did, then our hypothesis was supported (but not proven). If there was no difference between the groups, then we must go back to our first step to look for new observations or begin thinking about other cause-and-effect relationships that might explain our observations. This last step of the scientific method can be fortified through replication, which means running the study again to ensure that the results are reliable. It is especially helpful if other researchers replicate the results. Verification can also be fortified through prediction, which is the ability to use our study’s conclusions to reliably predict other outcomes.
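As a toy illustration of the verification step, the cola example can be sketched in Python. All counts below are invented, and the two-proportion z-test is my own stdlib-only addition for illustration, not part of the original notes:

```python
# Sketch: verification as comparing incidence between two groups (invented data).
cola_group = {"n": 500, "cases": 60}      # heavy cola drinkers
control_group = {"n": 500, "cases": 25}   # non-drinkers

rate_cola = cola_group["cases"] / cola_group["n"]        # 0.12
rate_control = control_group["cases"] / control_group["n"]  # 0.05

# Crude two-proportion z-test (normal approximation), stdlib only
p_pool = (cola_group["cases"] + control_group["cases"]) / (
    cola_group["n"] + control_group["n"]
)
se = (p_pool * (1 - p_pool) * (1 / cola_group["n"] + 1 / control_group["n"])) ** 0.5
z = (rate_cola - rate_control) / se

print(rate_cola, rate_control)  # 0.12 0.05
print(round(z, 1))              # ≈ 4.0 – a difference unlikely to be chance alone
```

A large z only means the hypothesis is supported, not proven – replication with new samples, and successful predictions from the hypothesis, are what fortify the result.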
Defining the Problem - We cannot solve a problem unless we know what it is. If our car doesn’t start, we wouldn’t want to define the problem as “Life is one frustration after another.” Even something like “My car doesn’t work right,” although more precise, is still an ill-defined problem. Defining the problem carefully means being as precise and as specific as possible. We could state, “My car doesn’t start,” or better, “My car doesn’t start in the morning,” or better yet, “My car doesn’t start on wet mornings.” This more specific definition enables us to identify the possible causes of the malfunctioning, and it shortens our path to the solution of the problem.
Discovering Causes - We can discover the causes of our problems by noticing the relationships among details. For example, if we notice that our concentration is impaired only when our children are playing in the house, we can be fairly confident that our children distract us.
Problems Without a Cause - Some problems calling for answers do not have a cause. For instance, members of a product-marketing committee gridlocked over a marketing strategy for a new product do not look for causes but for creative ideas to sell their product. Another example of a causeless problem is the classic nine-dot problem. The challenge is to connect all nine dots with only four straight lines without the pencil leaving the paper. A search for causes will not help you.
Removing Barriers - 1) One barrier that clearly stands in the way of problem solving is the irrational wish for a perfect solution and the related belief that one best solution exists.
2) Everyone has problem-solving ability. Problem solving requires creativity, which correlates only modestly with intelligence (Barron and Harrington, 1981). Thus, one does not have to be Immanuel Kant or Albert Einstein to be a good problem solver.
Gathering information - Trying to solve a problem without information is like trying to drive a car without a steering wheel. Consider a college student faced with a scheduling problem and in a dilemma about what courses to take this semester. If she chooses her courses without adequate information about the college’s policies and course scheduling, she may never see graduation. However, if she acquires information that answers the following questions, she will likely find a way to graduate on time: (1) what courses are required for the major? (2) do those courses require a prerequisite? (3) which of those courses are offered only once a year or every other year? (4) how many upper-level courses are required? (5) how many credits are allowed to transfer from other institutions? (6) what courses are offered during the summer session? (7) is independent study an option to meet a degree requirement when a scheduling conflict occurs? (8) are there any conditions under which course substitutions are allowed? (9) what general requirements must still be completed? And so on. Without gathering this information, she may not be able to solve her scheduling problem adequately.
Identify the Components - The more information we can collect, the better we are able to solve the problem. We can gain comprehensive information by identifying each known component of the problem and then obtaining information about each one. The components of any problem are the persons and objects involved in the problem, as well as the problem goal itself.