Search Results for Behaviorism
Behaviorism is a psychological theory that sees mankind as operating more like a machine than as a free agent. Its modern form arose in reaction to so-called armchair philosophers, depth psychologists and alleged mystics who tried to understand human motivation in terms of what went on inside the mind or soul. For behaviorists, what really counts is what we can directly observe—in a word, behavior.
This approach is traceable to thinkers like Thomas Hobbes, John Locke, David Hume, George Berkeley and David Hartley. Hobbes viewed man as a natural and social creature, while the others stressed the importance of the association of ideas.
In 1739, the so-called British empiricist philosopher David Hume wrote in A Treatise of Human Nature:
The qualities, from which…association arises, and by which the mind is after this manner conveyed from one idea to another, are three, viz. resemblance, contiguity in time or place, and cause and effect.¹
Most will say that the scientific study of behaviorism begins with the Russian physiologist Ivan Pavlov (1849-1936), who conditioned dogs to salivate not just at the sight of food but also at the sound of a bell that preceded feeding.
The American psychologist J. B. Watson (1878-1958) generalized these findings to human beings, emphasizing the importance of recency and frequency. This means that if we’ve smiled every time we’ve seen a child for the past ten years, we’re very likely to smile if we see a child today. The American B. F. Skinner (1904-1990) extended this system to include the idea of positive and negative reinforcement.
Pavlov’s type of learning is usually called classical conditioning, while Skinner’s is called operant conditioning. Skinner soon became the most popular advocate of behaviorism. He argues that past reinforcements determine behavior. We learn to repeat or decline behaviors based on their consequences. This is called the Stimulus-Response-Reinforcement (S-R-R) model.
Skinner also formulated the idea of shaping. By controlling the environmental rewards and punishments for behaviors, one is able to shape behavior. Psychologists also call this behavior modification.
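The S-R-R loop and the idea of shaping described above can be illustrated with a toy program. This is only an illustrative sketch, not anything drawn from Skinner's own work; the class and action names are hypothetical. A simulated agent simply repeats whichever behaviors the environment reinforces:

```python
import random

random.seed(42)  # for reproducibility of this toy run

class OperantAgent:
    """Toy model of operant conditioning: actions followed by positive
    reinforcement become more likely; punished actions become less likely."""

    def __init__(self, actions):
        self.weights = {a: 1.0 for a in actions}

    def act(self):
        # Choose an action with probability proportional to its weight.
        total = sum(self.weights.values())
        r = random.uniform(0, total)
        for action, w in self.weights.items():
            r -= w
            if r <= 0:
                return action
        return action

    def reinforce(self, action, reward):
        # Positive reward strengthens the behavior; negative weakens it,
        # floored so no behavior is ever completely extinguished.
        self.weights[action] = max(0.1, self.weights[action] + reward)

agent = OperantAgent(["press_lever", "groom", "wander"])
for _ in range(100):
    action = agent.act()
    # "Shaping": the environment rewards only lever pressing.
    agent.reinforce(action, 1.0 if action == "press_lever" else -0.1)

# After training, lever pressing dominates the agent's repertoire.
```

The loop is the S-R-R model in miniature: a stimulus situation, an emitted response, and a reinforcing consequence that adjusts future behavior.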
Critics of behaviorism say it depicts a soulless, mechanistic view of mankind. Instead of resembling a pleasure-seeking machine, critics say that human beings are uniquely free, replete with emotional, intuitive, intellectual and spiritual concerns extending well beyond the narrow confines of reward and punishment.
Daniel Dennett contends that human beings are Skinnerian, Popperian and also Darwinian creatures. This means that we learn from stimulus, response and reinforcement but we also have the inner ability to test our hypotheses prior to enacting them in the real world.
This challenges Skinner’s anti-mentalism, as does Dennett’s Darwinian component. According to Dennett we act partially in accord with ancestrally acquired knowledge. A good example of this is our capacity for language. Many believe that human beings are hard-wired to learn language. And we do, in fact, learn language if we’re raised in the right kind of environment, whereas a child parented by wolves in the wild won’t learn how to speak one.²
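Dennett's contrast between Skinnerian and Popperian creatures can be sketched in code. The functions below are purely illustrative (all names are hypothetical): a Skinnerian learner must try actions in the real world and learn from the consequences, while a Popperian one tests candidate actions against an inner model first, so that its hypotheses can die in its stead.

```python
def skinnerian_choice(actions, try_in_world):
    """Act first, learn from real consequences (possibly costly or fatal)."""
    for action in actions:
        if try_in_world(action):
            return action
    return None

def popperian_choice(actions, inner_model):
    """Simulate consequences internally; only actions that pass the inner
    test ever reach the world. The hypothesis 'dies' here, not the agent."""
    for action in actions:
        if inner_model(action):
            return action
    return None

# Example: choosing a route where one option is dangerous.
routes = ["cliff_path", "bridge", "ford"]
model = lambda r: r != "cliff_path"   # inner knowledge: cliffs are risky
print(popperian_choice(routes, model))  # -> "bridge"
```

The difference is only where the testing happens: in the world (Skinnerian) or in an internal model (Popperian).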
¹ David Hume, A Treatise of Human Nature (London: Collins, 1962), p. 54.
² Wittgenstein’s notion of a private language might seem to challenge this idea. But Wittgenstein, himself, argues that any kind of representation that isn’t socially shared cannot truly be language. More recently, the postmodern notion of connotation complicates this claim. Some postmoderns ask: If everyone understands signs differently, are we really communicating?
- Today’s Birthday: BURRHUS FREDERIC “B.F.” SKINNER (1904) (euzicasa.wordpress.com)
- Ep 191: What Was B. F. Skinner Really Like? (thepsychfiles.com)
- Positive Reinforcement and the General Public (ayahska.wordpress.com)
- New Textbook! Behavior Analysis and Learning, 5th Edition (psypress.com)
- 4 Fantastic Thinkers Who Helped to Shape Psychology (whatispsychology.biz)
- David Hume: Reason is Dead(ness) (pathtothepossible.wordpress.com)
- Artificial Artificial Intelligence (smashingboxes.com)
- “Networked Minds” Require A Fundamentally New Kind of Economics (videolectures.net)
- B.F. Skinner: The Man Who Taught Pigeons to Play Ping-Pong and Rats to Pull Levers (blogs.smithsonianmag.com)
- Behaviorism 101 (ronnekafrasergreen.wordpress.com)
George Berkeley (1685-1753) was the Anglican Dean of Derry (1724), Bishop of Cloyne (1734) and an important philosopher belonging to the school of idealism. Born in Ireland, Berkeley moved to Oxford in 1752; he is counted among the so-called British empiricists.
Berkeley believed that the material world exists as an idea created in our minds, ultimately by God. In his New Theory of Vision (1709), he argued that our sense of distance isn’t directly perceived but inferred from the repeated association of visual and tactile cues. All of existence, itself, is a group of interacting minds, connecting with archetypes, which themselves derive from God.
He uttered the famous line, perhaps adapted from Shakespeare,
To be is to be perceived or a perceiver.
This means that existence is either a mind or stimuli in a mind.
One way that Berkeley tried to support his view was to note that the idea of heat – what the philosopher John Locke called a “secondary quality” – is somewhat relative. If one of our hands is cold and the other hot, and we place them into warm water, the one hand feels hot and the other cold. Anyone can do this little experiment and see that it’s true. However, Berkeley added that Locke’s so-called “primary qualities” (e.g. shape, quantity) were also dependent on a perceiving mind. Berkeley, in fact, challenged the entire distinction between primary and secondary qualities, as elaborated upon at Wikipedia:
Berkeley maintains that the ideas created by sensations are all that people can know for sure. As a result, what is perceived as real consists only of ideas in the mind. The crux of his argument is that once an object is stripped of all its secondary qualities, it becomes very problematic to assign any acceptable meaning to the idea that there is some object. Not that we can’t picture to ourselves (in our minds) that some object exists apart from any perceiver—we clearly think we can do this—but rather, can we give any content to this idea in any particular case?¹
A slightly different take on the belief that the material world doesn’t exist independent of the mind has been popularized in many books reporting recent discoveries in sub-atomic physics, such as Gary Zukav’s The Dancing Wu Li Masters and Fritjof Capra’s The Turning Point.
- Influential Figures in My Life: Locke, Berkeley and Hume (jonathanhockey.wordpress.com)
- A View From Here (o50328b.wordpress.com)
- Behaviorism (earthpages.wordpress.com)
- The Mess of Me at the Moment (brittavalentin.wordpress.com)
- Part 9: Beyond Atheism – A History of Western Philosophy (coppellpianoshop.wordpress.com)
- Matter and Mind (middlepane.com)
- Rewrite Your Life (barbarasreality.wordpress.com)
Bad Faith (French, mauvaise foi) is a social-psychological and philosophical idea conceived by Jean-Paul Sartre in which one apparently ignores the possibility of actively choosing one’s commitments. Instead, one becomes a passive pawn for external forces, or merely avoids making a decision about what to commit to.
An example could, perhaps, be the Nazi guard who arbitrarily executes ordinary people for Adolf Hitler despite inner moral attitudes decrying this behavior.
The idea of bad faith is predicated on the assumption of a “gap of nothingness.”
The “gap of nothingness” concept suggests that human beings are not mere stimulus-response machines (à la behaviorism) but possess the psychological freedom needed to make responsible decisions in response to incoming stimuli. The illustration often given in undergraduate humanities courses, rightly or wrongly, is that animals will eat whenever hungry, whereas human beings usually delay eating until a personally or socially appropriate time.
I think Sartre has a very complex connotation to the term [bad faith]. Sometimes wide, sometimes narrow. Very closely related to the concept of authenticity, he has used the term to show the shackles that man chooses despite the knowledge of freedom, at least deep within.
More examples of bad faith can be found here: http://en.wikipedia.org/wiki/Bad_faith_%28existentialism%29
- Tangent: Bad Faith, part 1 (lancek4.wordpress.com)
- Tangent: Bad Faith, part 2 (lancek4.wordpress.com)
- Shareholder accuses Wausau Paper CEO of ‘bad faith,’ nominates slate to board (jsonline.com)
- Sartre on Bad Faith (psychologytoday.com)
- Paul Krugman: Broccoli and Bad Faith (economistsview.typepad.com)
- The Disease (epages.wordpress.com)
- BLOG: Chinese authorities plan to take action on bad faith utility model and design patent applications (iam-magazine.com)
- Bad Faith Insurance Companies (questadj.wordpress.com)
- ECommerce company Eyemagine found guilty of reverse domain name hijacking (tldmagazine.com)
Daniel Dennett (1942-) is an American philosopher and atheist who argues that the mind operates like a computer. For Dennett, the sum total of our experiences shapes and prods us from day one of our existence.
Does this include space for individual free-will? Dennett argues that, although some activities may seem intentionally planned and chosen by an agent or agents, behind that lies an original intention not derived from any individual agent or collection of agents—i.e. Nature has endowed us with an original intention to protect our genes, and everything follows from that.
For Dennett the conscious aspect of the self that expresses a particular viewpoint arises from the act of expressing that viewpoint, much like electricity is generated by the spinning of a rotor within a coil.
He usually refuses to debate with other thinkers because he is so thoroughly convinced that his terminology is right and theirs is riddled with errors.
My refusal to play ball with my colleagues is deliberate, of course, since I view the standard philosophical terminology as worse than useless — a major obstacle to progress since it consists of so many errors.¹
He also implies that his view is more comprehensive than other philosophical views because, being more abstract, it can account for differences among philosophers.
But theologians could use the same type of argument to account for differences between Dennett and other philosophers who, themselves, believe that their views are closer to the truth than Dennett’s. Indeed, theologians could maintain that theirs is the more comprehensive view, one which proves incorrect Dennett’s initial assumptions about original intentionality and its relation to consciousness. Specifically, the theologian could say that Dennett overlooks the two essential agencies of human free-will and divine inspiration.
Dennett’s views have sparked much debate, most likely because he employs technological metaphors to explain consciousness. He has also opened the door to speculation among those who believe that encoding human brain patterns within a computer’s memory might be a plausible ticket to immortality within the not-too-distant future. In this case, eternal life would reside – or, perhaps, be trapped – in a silicon chip or its technological successor.
¹ Daniel Dennett, The Message is: There is no Medium, cited at http://en.wikipedia.org/wiki/Daniel_Dennett
- A Conversation Between Richard Dawkins and Daniel Dennett (patheos.com)
- Is the Internet the End of Religion? (religiondispatches.org)
- Philosophy of Mind – “Zombies Within” – Chalmers, Dennett, Noë (zombielaw.wordpress.com)
- Daniel Dennett sorta zombies (zombielaw.wordpress.com)
- Dennett on atheism denial (whyevolutionistrue.wordpress.com)
- The Magic of Consciousness (popalx.wordpress.com)
- Full Length Talk – ‘How To Tell You’re An Atheist’ – Dan Dennett – YouTube – TheClergyProject (richarddawkins.net)
- William Lane Craig vs atheist Daniel Dennett on cosmology and fine-tuning (winteryknight.wordpress.com)
- Darwin & Turing: The Evolution of Artificial Intelligence (bigthink.com)
Charles Robert Darwin (1809-82) was an English naturalist whose On the Origin of Species by Means of Natural Selection (1859) proposed a view of evolution in which “natural selection” determines which species survive and which perish.
In opposition to Lamarck, Darwin believed that evolutionary changes were the result of mutations.¹ New species that happened to survive in physical environments (which also changed) replaced those species that did not.
For many followers of Darwin there is no master or divine plan guiding evolution. In 1871 he wrote The Descent of Man, which, according to the theory, traced mankind back to the anthropoids. The clarity of his exposition and the force of his ideas have influenced practically every aspect of modern society.
The Welsh naturalist Alfred Russel Wallace independently conceived the idea of natural selection around the same time as Darwin. Recent challenges to this view are still deemed quite suspect, but postmodern, New Age and religious trends towards seeking alternative ways of viewing evolution continue to challenge the current scientific paradigm, which ironically has come to resemble a religious belief.²
Pope Benedict XVI has supported the idea of Theistic Design, a view that some believe is similar to Intelligent Design. Benedict, however, questions aspects of evolutionary theory, arguing that it’s not truly scientific and cannot explain an implied rationality of the process it outlines.
¹ The following outlines how Darwin’s understanding of mutations differed from those of today.
Today, most scientists regard the term “mutation” as a description of a change in an individual gene, and more precisely as some minute alteration of the DNA of that gene, especially a nucleotide substitution. But the idea of mutation has changed considerably from the pre-Mendelian concepts of Darwin’s generation, who viewed “fluctuating variations” as the raw material on which evolution acted, to today’s up-to-the-minute genomic context of mutation. Source: http://www.cshlpress.com/default.tpl?action=full&–eqskudatarq=911&typ2=hpl
² Few realize how the unavoidably biased interpretation of experimental results can shape our worldview, in both the social and the so-called “hard” sciences. See also: http://en.wikipedia.org/wiki/Experimenter%27s_bias
- Charles Darwin (articles4friends.com)
- Darwin’s Doubt: Evolutionary Argument Against Naturalism (Kenneth Samples) (rodiagnusdei.wordpress.com)
- Turkey: Creationists Want To Airbrush Darwin Out Of Evolutionary Picture (eurasiareview.com)
- Charles Darwin (speculativefictionweblog.wordpress.com)
- Misrepresenting Darwin (choiceindying.com)
- What Darwin didn’t know (rodiagnusdei.wordpress.com)
- Go On a Virtual Journey with Charles Darwin (freetech4teachers.com)
- Biologist and Atheist Richard Dawkins on Charles Darwin – Brian Gallagher – Santa Barbara’s Independent (richarddawkins.net)
- South Korean Textbooks Embrace Creationism (newsfeed.time.com)
- The Pirates! With Charles Darwin! (freethoughtblogs.com)
Free Will is the belief that human beings have the ability to make choices. Most philosophers advocating the belief in free will agree that personal freedom has practical limits, but not all agree that the freedom to choose is limited with regard to ethics. That is, some say that we can always choose the good, even though we may not always be able to choose certain activities.
The view that we can always choose the good, however, is complicated. As both Catholic theologians and psychiatrists will say, personal culpability for doing bad things might be lessened by such factors as peer pressure (with teenagers), stress, trauma, emotional immaturity or instability, and so-called mental illness or mental injury. Of course, just what constitutes a bad thing is not always agreed upon among theologians and psychiatrists—masturbation being a good example.¹
J.-P. Sartre called the practical limits of personal freedom ‘freedom in facticity’, meaning that individuals have a limited range of choices, particularly with regard to available opportunities and activities.² But for Sartre individuals can choose to do ethically right or wrong actions, and to give or not give consent to issues involving ethics.
Meanwhile, the Protestant Christian reformer John Calvin believed that some people are predestined for hell and others for heaven.
Who can figure!
Related Posts » Behaviorism
¹ Here’s a good comment: http://www.debatepolitics.com/archives/40072-masturbation-religion-and-psychiatry.html
² When I was at school a common example you’d hear was, “Can someone in a wheelchair be a mountain climber?” Today, however, this example doesn’t really hold up because new attitudes about persons with so-called disabilities are, in many cases, contributing to these people being seen as persons with difference. And in many instances, truly extraordinary things are being achieved by persons different from statistical norms. See, for instance, The Blind Painter (below).
- Gap of Nothingness (earthpages.wordpress.com)
- Criticism of Daniel Dennets view of Freedom, Determinism, and the Human Mind (compassioninpolitics.wordpress.com)
- Existentialism is a Humanism (philosophystone.wordpress.com)
- Whole Dude – Whole Philosophy (tarangini.wordpress.com)
- Existentialism (socyberty.com)
David Hume (1711-76) was a Scottish philosopher who developed a naturalist perspective on all aspects of human life.
For Hume, the highest good is based on the pursuit of happiness. We are personally happy when we’re good to others, not due to some high spiritual reward but because this approach leads to a harmonious social whole. So personal and social well-being go hand in hand.
This means that morality isn’t based on austere rational principles but on the desire for enjoyment. Accordingly, Hume believes that reason cannot determine anything without experience. And he goes so far as to say that reason is the “slave of the passions.”
Hume’s metaphysics, in particular his critique of the belief in cause and effect, remains an important challenge to our conventional way of seeing. All we can be sure of, says Hume, is that certain events occur one after another in a given region and for a certain duration.
In billiards, for instance, the white ball appears to cause the motion of other balls when impacting them on the gaming table. But here’s the radical part. Hume says that all we can truly know is that, in the past, the first ball impacted and the other balls moved. We cannot prove that the first ball’s impact will always be followed by movement of the other balls. And for Hume, there is no rational way to demonstrate a causal connection:
Reason can never shew us the connexion of one object with another, tho’ aided by experience, and the observation of their constant conjunction in all past instances. When the mind, therefore, passes from the idea or impression of one object to the idea or belief of another, it is not determin’d by reason, but by certain principles, which associate together the ideas of these objects, and unite them in the imagination.¹
Put differently, from prior experience we build up a series of expectations and habitual ways of interpreting observations. Hume calls these “ideas.” But ideas are all they are. Although we expect the billiard balls to move, we have no way of proving or knowing that they always will.
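Hume's point about habit versus proof can be made concrete with a small sketch. This is illustrative only (the function and data are hypothetical, not anything from Hume): a predictor built from constant conjunction yields an expectation, never a demonstration.

```python
from collections import Counter

# Every observed impact so far has been followed by motion.
observations = [("impact", "motion")] * 1000

def expectation(event, history):
    """Predict the outcome most frequently conjoined with the event.
    This encodes a habit (custom), not a necessary connection."""
    outcomes = Counter(after for before, after in history if before == event)
    if not outcomes:
        return None  # no past conjunctions, so no habit and no expectation
    return outcomes.most_common(1)[0][0]

print(expectation("impact", observations))  # -> "motion"
# Nothing here demonstrates that the next impact *must* be followed by
# motion; the prediction only summarizes past conjunctions.
```

A thousand conjunctions and a million conjunctions produce the same kind of output: a summary of the past projected forward, which is exactly Hume's "custom of thought."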
At first, this may seem absurd. But Hume’s critique of causality had a profound effect on one of the most important thinkers in the history of Western philosophy, Immanuel Kant. Mortimer Adler says “…Kant tells us that David Hume awakened him from his dogmatic slumbers.”²
In addition, on a quantum level of reality, contemporary physicists claim that observations of subatomic particles support the ideas of probability and simultaneity instead of linear causality.
However, some say it’s invalid to compare quantum and macroscopic levels of reality because subatomic particles exist in an entirely different arena, and behave in different ways than the larger aggregate objects which they make up.
This debate continues to this day, the answer to which might depend on one’s core beliefs and related worldview. Or in Hume’s terms, one’s “customs of thought.”
¹ David Hume, A Treatise of Human Nature (1896 ed.), SECTION VI.: Of the inference from the impression to the idea, paragraph 278.
² Adler, Mortimer J. (1996). Ten Philosophical Mistakes. Simon & Schuster. p. 94, cited at http://en.wikipedia.org/wiki/Critique_of_Pure_Reason#cite_note-2
- Link blog: philosophy, hume, atheism, david-hume (pw201.livejournal.com)
- Causality becomes increasingly elusive (boingboing.net)
- Of the Delicacy of Taste and Passion by David Hume (belladeluna.wordpress.com)
- David Hume on Causation (socyberty.com)
- Hume on Rousseau (cafehayek.com)
- “Cause Is Not a Fact”: (brothersjuddblog.com)
- Of Hume and Bondage (opinionator.blogs.nytimes.com)
- On David Hume (myintelligentlife.wordpress.com)
Functionalism, in art and architecture, means the combining of aesthetics and efficiency. With intellectual roots in the 18th and 19th centuries, the Bauhaus movement of the 1920s and ’30s designed furniture for utility. In architecture, the idea that function should determine form was exemplified by Le Corbusier’s definition of a house as “a machine for living in.”

In social anthropology and sociology, functionalism (and structural functionalism) envisions society as a self-regulating organism. Social institutions, customs, beliefs and even social deviance all contribute to societal functioning. This approach was especially prominent in the sociological work of Emile Durkheim and Talcott Parsons.

In the philosophy of mind, functionalism presents a challenge to behaviorism. While strict behaviorism explains the mind by observing external causes and effects, functionalism tries to account for consciousness in terms of all inner and outer causes and effects. Philosophical functionalism considers the possibility, overlooked by behaviorism, of a multiplicity of inner causes and effects existing within the mind. » James (William)