Monday, January 25, 2010

What's the Right Amount of Homework?

When I ran for election to my local school board, one of my campaign planks was to promote homework. I lost. Many parents objected to my platform, often because homework would interfere with their kids' extracurricular activities or part-time jobs. One parent said to me, "We don't want any homework. My son needs that time to work at his job." I asked, "Why does he need to work?" She said, "Well, to pay for his truck for one thing." "Why does he need a truck?" I asked. Her reply: "Well, you dufus, to get to his job!"

A Duke University neuroscientist, Harris Cooper, published some of his findings on this topic in The Sacramento Bee on Jan. 17, 2010. He pointed out that an earlier Associated Press poll found that 57% of parents thought their kids got about the right amount of homework, another 23% thought there was too little, and 19% thought there was too much.

Cooper was interested not so much in parent opinion as in whether homework actually improves test performance. When he and his colleagues reviewed the published homework studies, they found that the effect varied by grade level. Comparing students who were assigned homework with otherwise similar students assigned no homework suggested that homework can improve students' scores on the class tests that come at the end of a topic. Students assigned homework did better on math in second grade, on English skills and vocabulary in third and fourth grade, on social studies in fifth grade, on American history in ninth through 12th grade, and on Shakespeare in 12th grade.

He found that practice assignments do improve scores on class tests at all grade levels. A small amount of homework may help elementary school students build study habits. For junior high students, homework appears to reach the point of diminishing returns after about 90 minutes a night. For high school students, the benefit continues to climb up to between 90 minutes and 2 1/2 hours of homework a night, after which returns diminish.

What nobody seems to have studied is what kind of homework is most effective. Options include busy work such as filling out worksheets, problems to solve, projects to complete, Web quests, essays to write, and various other kinds of tasks. I would expect that the nature of the homework makes a big difference in the effectiveness of learning and in attitudes toward school.

All forms of homework can aid memory, because rehearsing material soon after it is learned is a key to efficient memory formation. In my opinion, a teacher's failure to assign homework is educational malpractice.

Tuesday, January 19, 2010

Unreliable Memory


It is one thing to forget. It is quite another to remember, but remember wrongly. Everyday experience reveals how commonly people remember things wrongly. Discuss with almost anybody what each party said in a past argument or controversy, and it typically turns out that the two of you remember it differently. Somebody has to have it wrong. Such "false memories" commonly contaminate eyewitness reports of accidents and crimes.

This possibility came up in a recent court case in Massachusetts, in which a Catholic priest was convicted of sexually molesting a child. The accuser, now an adult, had ostensibly suppressed the memories, which surfaced later in psychological counseling. On the basis of this resurrected memory, the priest was convicted, and the conviction was upheld on appeal by the Massachusetts Supreme Court. Scholarly literature supporting the notion that real memories can be suppressed and later retrieved provided the basis for believing the charges against the priest.

However, there is other scholarly literature, apparently not persuasive in this case, asserting that this is "junk science" and that false memories are common. I concur with the statement in the UPI news release cited below: "Experiments have shown that false memories can be created that feel just as valid as real ones and cannot be distinguished from real memories."

Our legal system has not really come to grips with false memory. But there is a growing trend toward skepticism of eyewitness testimony, and it is increasingly hard to get a conviction when the only evidence against the accused is a single eyewitness report. Perhaps, in the interests of justice, that is best. There is a whole scholarly literature on false memory, including books, and I reviewed much of it in my memory book.

So, the real issue in court cases like this is that the resurrected memory may or may not be true. If there is no other evidence and it is only one person's word against another, how can you tell what the truth is? The same problem exists when people have differing recollections of something that happened in the past. Somebody got it wrong. Who got it right?


Source: UPI press release, http://www.upi.com/Top_News/US/2010/01/17/Repressed-memory-conviction-upheld/UPI-93911263709673/

Friday, January 15, 2010

Learning Versus Memory

Versus? Learning and memory are different, but like two sides of the same coin. What is the difference? Learning is the acquiring of new information or skills. Memory is the remembering of what was learned. You can’t have memory without learning. You can, of course, have learning that you forget.

Learning involves at least four major processes. It all begins with registering new information. This is the stage when information is detected and encoded in the brain. Paying attention obviously facilitates the registration process. Multi-tasking can create an information overload in which much of the information never gets registered. Example: a car driver who is wrapped up in a cell phone conversation may not realize she just ran a stop sign or cut off a driver in the next lane. Another example comes with reading. Reading comprehension (learning) depends heavily on the eyes actually seeing each cluster of words. The reader needs to focus on words, not letters, and needs to think about what the words mean. Likewise with images, what you learn from an image depends on the details you actually notice and think about.

Next is integration. The brain likes to classify, categorize, and organize its information. Thus, new information has to be fitted into existing learned schemas. This is the stage where associations are made with existing memory. Brains are really good at detecting and constructing relationships. If a given relationship is not immediately obvious, the brain may figure it out and remember it. Constructing such relationships is an integral part of the learning process.

Associations can be constructed subconsciously. If two things happen at the same time or go together in some other way, even the simplest of brains can learn the association. Moreover, cueing of relationships can produce what is called conditioned learning. We all have heard about Pavlov's dogs, but even animals as primitive as flatworms can exhibit conditioned learning. If worms are shown flashes of light, not much happens. If they are given mild electrical shocks to the body, the body contracts. If a flash of light is then delivered just before each shock, after enough repetitions the worm starts contracting as soon as it detects the light, before any shock is delivered.
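To see how such an association might strengthen with repeated pairings, here is a toy simulation in Python. It uses the classic Rescorla-Wagner learning rule purely as a stand-in for whatever the worm's nervous system actually does; the learning rate, number of pairings, and response threshold are made-up illustrative values, not data from the flatworm experiments.

    # Toy sketch of conditioned learning using the Rescorla-Wagner rule.
    # Assumption: learning_rate, trials, and threshold are arbitrary
    # illustrative values, not measurements from any real experiment.

    def simulate_conditioning(trials=20, learning_rate=0.3, threshold=0.8):
        """Pair a light (conditioned stimulus) with a shock (unconditioned
        stimulus) and track the strength of the light-shock association."""
        association = 0.0  # starts with no learned association
        for trial in range(1, trials + 1):
            # Rescorla-Wagner update: strength grows toward a maximum of 1.0,
            # with the biggest gains on early trials (when surprise is largest).
            association += learning_rate * (1.0 - association)
            responds_to_light_alone = association >= threshold
            print(f"Trial {trial:2d}: association = {association:.2f}, "
                  f"contracts to light alone: {responds_to_light_alone}")

    if __name__ == "__main__":
        simulate_conditioning()

The only point of the sketch is that the association builds gradually with each pairing, which is why "enough repetitions" matters in the flatworm example.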

Associations are still more powerful when they are consciously constructed. This is the stage where you ask yourself such questions as: Where does this information fit with what I already know? How does this relate to other things I could learn about? What value do I place on this information? How invested in using or remembering it should I be?

Then there is understanding. You can, as I did, pass college calculus by using the right formulas for given problem types, and yet not really understand what is going on with the equations. To understand, you need to answer such questions as: Is this consistent with what I thought I knew? What is missing or still confusing? What can I do with this information? What else does it apply to, and how can it be extended? What is predictable?

Learning is not complete without understanding. Understanding also creates a basis for generating insights and creative syntheses, and these in turn advance the depth and rigor of the original learning. Insights typically come from deduction or induction. Deduction is the Sherlock Holmes process of using one fact or observation to lead logically to another. Induction is the Charles Darwin process of using multiple, apparently unrelated facts or observations to construct a synthesis that accommodates them all.

Finally, there is learning to learn. This is the process of learning the paradigm, the “rules of the game,” that allows you to transfer one learned capability to new learning situations that are related. At this point, one has reached a threshold where the more you know, the more you can know.

One of the first experimental demonstrations of this phenomenon was by H. C. Blodgett in 1929. He studied maze behavior in rats, scoring how many errors they made in running the maze to find the location of a food reward. Rats ran the maze once per day on successive days. The control group ran the maze and found the food, with the number of errors decreasing slowly on successive days as the rats learned where in the maze the food was. Experimental groups ran the maze daily for three or seven days without any food reward. Naturally, they made many errors, because without a reward there was nothing to work toward. However, when they subsequently were given access to a food reward, the number of errors dropped precipitously on the very next day's trial. In other words, the rats had been learning about the maze (its layout, number of turns, and so on) during the initial explorations, even though no reward was available.

Blodgett called this "latent learning," an idea expanded and formalized some 20 years later in the "Learning Set" theory of Harry Harlow. Harlow studied visual discrimination learning in monkeys and observed that they solved visual and other kinds of discrimination problems more quickly as they were trained on a series of different but related problems.

These discoveries were born of necessity: the Harlow lab was so underfunded that the same monkeys had to be used over and over in a wide variety of experiments. Increasing the number of problems on which the monkeys were tested led to the observation that their general learning competence improved over time. This, of course, parallels the common experience of watching children's learning ability mature.

Harlow developed the prominent theory that learning any task builds implicit learning capabilities that can generalize to other, related learning situations. The concept links simpler trial-and-error learning to more advanced, insight-like learning, which he regarded as a mental ability that depends heavily on prior learning sets. The ability to form learning sets varies with species: monkeys do it better than dogs or cats, and humans do it best of all. The reasons for human superiority in learning no doubt include the rich connections among brain areas that can support and integrate more learned associations.