Escape your TV
To add to my end-of-year TV rant, I just found this link on the TV-B-Gone site...a guy blogging about his attempt to get off television. He's almost at the four-week mark : )
I like this idea of blogs that take you through a process with someone, as opposed to just rambling (like this one). I've seen journals of everything from weight loss attempts to the guy trying to get a job at Microsoft.
I've been toying with the idea of setting up a separate blog just to record the process that Bert and I are going through to train our new horses using the Parelli natural horsemanship system. It's a lesson in physical, mental, and emotional development for both the human and the horse (mostly the human), and because of the way the lessons are set up, you have something new to accomplish almost every time you play with your horse.
One interesting thing about the program is that it's really an attempt to understand how the horse is thinking, and especially how the brain of a prey animal operates differently from the brain of a human (predator). It's a great way to understand humans as well, though, because one thing they make VERY clear--you can't be simultaneously terrified and still use your logical left-brain. Taking the time to think is all the time you need to die, so when instincts take over, left-brain shuts down. To make a horse less afraid, you need to give him logical puzzles to solve... you need to get him to think. Conversely, to make him (or a human) less logical, just give him a reason to be terrified. Terror and logical, rational thinking are almost completely incompatible.
That reminds me of something... what is it... oh yeah. The recent election campaigns.
I suppose I could blog my attempt to learn to ollie, but that would consist of probably 42 entries of me landing on my ass, followed by the one successful attempt where I get so little air you'd need a nano-scale measuring device. So, I'll keep that to myself. I know it shouldn't be this hard, but considering that I didn't ride at all for the last, oh, 10 years... it's a Big Deal to me. I finally got a new board, now I just have to get on with it.
I'd love to hear what other people are doing in the temporary attempt-to-do-something blog world.
New Year's Resolutions/anti-TV rant
I've never made my New Year's Resolutions public before, but maybe it'll help motivate me, so here goes:
* Finish four books (not including the Tiger updates we're doing). I'll say more about those books later...
* Run the Bolder Boulder 10K in under 56 minutes. So far, my best time is 58. I'm really more of a four-mile type : ), and that extra 2 miles kicks my butt.
* Be insanely consistent with weight training.
* Learn to do an ollie, once and for all. (I can do only the affectionately-named "old school" footwork tricks, from the days when kick flips were done by hooking one toe under one side edge while simultaneously pressing on the opposite side edge.)
* Learn to really read music (I'm just faking it now), and learn to play Vince Guaraldi's "Linus and Lucy." I don't have a real piano, but I have an M-Audio keyboard controller and GarageBand (with the extra Jam Pack that adds a much better grand piano instrument), so I have no excuse.
* Certify on Parelli Level Two. (They say it takes six to eighteen months to complete, so this might be ambitious. Then again, I have an easy (brave) horse. On the other hand, I'm really a newbie and my horse is young and relatively untrained.)
* Change the world. There are so many causes and organizations I care about... but this year I want to be a lot more vocal and active about one in particular: helping more people wean themselves (and especially their kids) off television.
Uh-oh... I'm pulling a bait-and-switch here, because the rest of this post turns into a TV rant. So, don't read further unless you're OK with that : )
When I want to creep myself out, I walk around the neighborhood at 9 PM and count the number of houses in which I can see that blue glow. Television in the U.S. (and many other countries, but especially bad here) is so pervasive that it's like that story of the boiling frog, where if you put the frog in water and then slowly turn up the heat, he won't realize it's happening until it's too late. But if you dropped him in boiling water, he'd instantly know it was BAD and jump out.
Imagine an alien from a planet with intelligent, thoughtful life. He has no idea what television is (ignoring the fact that our signals are "out there") when he drops into the average U.S. neighborhood (city, rural, doesn't matter) and discovers that at night (and often day), the vast majority of people are sitting in front of a flickering screen with that kind of glazed look watching...what? (No matter how many people claim they're watching "educational" programs, the Nielsen ratings don't support that. My special favorites are the stats that show the hypocrisy of things like "red states" where folks left the voting booth claiming a vote for moral values, then proceeded to go home and make "Desperate Housewives" a hit.) It all sounds very sci-fi to me, because I'm thinking it would look EXACTLY like the whole country is sitting down for a nightly brainwashing.
I'm definitely not trying to insult anyone here; I owned a television until about five years ago, and it was on a lot. And not everyone who watches TV has a problem with it (although virtually nobody, according to the brain research, is entirely immune). And I'm not putting mindfully-watched movies (including TV shows on DVD) in this category. I LOVE my Netflix subscription, and watch some television programs on my iMac ("Curb Your Enthusiasm" and BBC's "The Office" are two favorites). TiVo also seems to be a great solution for a lot of folks.
But two things happened that made me get rid of normal television (although I do have a monitor for DVDs and to use my PlayStation 2):
* I noticed that when I was in an environment with no television, my stress level went way down. Whenever I stayed at a mountain cabin or even a B-and-B that just didn't put a TV in your room, I noticed how much better I felt mentally and physically.
* I kept learning more and more about the brain, and couldn't avoid learning about the effects of television. One of my favorite brain scientists, Richard Restak, has become famous as "the brain guy" for television, writing the companion books for various PBS specials, etc. He is like the Carl Sagan of the brain, and I love his books. But even the guy who makes a lot of money from television has suddenly begun to speak out about its dangers, especially in his post-9/11 book The New Brain: How the Modern Age Is Rewiring Your Mind (where he mentions studies including one suggesting that 9/11 survivors who watched a lot of television had a higher incidence of PTSD than those who watched less television).
(He also talks a little about TV in his newest book on how the brain is involved in fear and anxiety, "Poe's Heart and the Mountain Climber.")
TV isn't good for your brain in a wide range of ways. Just one of the problems is that it can lead to a reduction in left-brain logical thinking unless you're extremely careful (and capable) about making sure the news broadcasts are screened out. Commercial news broadcasts are driven largely by the "if it bleeds, it leads" approach, and those messages trigger the fight-or-flight response because your brain often can't distinguish between experienced and visualized terror. MRI scans show that the same parts of your brain light up when you watch high-resolution images as when you see the real thing.
The issue of whether watching violence on TV is a problem is still hotly debated, but some--like the American Academy of Pediatrics--aren't taking any chances, and have issued a recommendation that children under the age of two should not be exposed to television at all.
My personal complaints about TV aren't about violence or sex in programs (I personally disagree with the notion that this is the big problem), but about news, commercials, and most importantly--the addiction people have to watching television, and the way in which it's become a central focus of so many lives. (And there's that infamous, argued-over study about people's misconceptions about Iraq depending on what they used for their primary news source. )
When I see someone watching TV at night, I like to speculate what they might be doing if the TV disappeared. Pretty much anything is usually healthier and more fun from the optimal experience perspective:
* sex (not necessarily in that order ; )
* listening to music
* making music
* taking a walk
* playing a game
* having a conversation
* becoming involved
* attending a lecture
* listening to a radio show (much better to get bad news from radio, because it doesn't have the same power to trigger the fight-or-flight thing that visuals do)
* or even just day dreaming.
There's evidence that watching television lowers metabolism. The implication: you could end up burning fewer calories sitting on the couch watching television than simply sitting on the same couch, doing nothing, with the television off.
There are lots of studies correlating TV watching with obesity in both kids and adults.
Here are just a few TV-awareness sites. I am not endorsing them! And I think it's terribly naive of me to say "turn off your TV and you'll be healthier and the world will be better." Television can't be the scapegoat for everything, but I can't really think of an instance where someone was worse off for getting rid of the TV (assuming you have a radio and/or the internet to stay informed). And again, it all goes back to the "what would you be doing if you were NOT watching TV?" question.
I'm also not advocating that everyone get rid of their TV completely--I did it because I didn't have the willpower (neither did my kids) to turn it on only by conscious choice (as opposed to putting it on because it's there), and to completely screen out enough of the news promos and commercials.
But for stress reduction, fitness, and feeling awake, it's the best thing I've ever done.
(Of course, I do miss the Super Bowl commercials, so for those I either have to go to someone else's house or catch them somewhere on the internet. And I have to rely on others to point out shows worth renting or watching online, like Curb Your Enthusiasm or The Daily Show.)
Here's wishing everyone an awake new year!
Getting what you expect is boring.
Otherwise known as the "Oh Shit!/Oh Cool!" technique.
Earlier I blogged about how the brain is tuned for novelty, but tunes out that which is common or expected.
Some of the areas where this matters include training, filmmaking, advertising, and I suppose dating. Director/writer David Mamet says that the prime objective of a director is to present a story that is "both surprising and inevitable at the same time." Kind of the "OH!!" followed by "Oh... of course..." feeling.
AI and learning guru Roger Schank puts it this way in his e-Learning book,
"A good course must enable failures that surprise the student. Failure is the key to learning. We have to work hard to recover when things don't work out the way we expected...For this natural learning process to work in a course, the course must surprise its students. But, more than that, it must put students in a situation where they are entertaining predictions in the first place."
And from an article titled Information is Surprises:
"Information is surprises. We all expect the world to work out in certain ways, but when it does, we're bored. What makes something worth knowing is organized around the concept of expectation failure."
At Sun, we used to have a lot of battles over the evaluation form that customers filled out at the end of a training course. Instructors hated the fact that customers ranked things based on "Meets Expectations" (including the two ends of the expectations scale, "below expectations" and "exceeds expectations"). The instructors wanted it to be based on something less subjective, or at least customer "satisfaction", believing that a measurement of the quality of their work should not be tied to the customer's expectations.
But the business folks like Tom Peters and Seth Godin tell us that when it comes to things like word of mouth, expectation is EVERYTHING. I don't have links handy, but there are plenty of studies that show when someone's expectations are met, they won't talk about it... even if they believe that what they got was awesome! Even if expectations were high, everything is as it should be when they're met.
People talk about things that are surprising, or that really suck.
You talk about the waiter who went way above the call of duty. You talk about that movie that Ebert gave four stars but that you thought was one long and painful cliche.
So wake up the brain and do something surprising, or at least unexpected for a given context. If you're a teacher, a trainer, or an author of learning materials, for frick's sake don't have all your exercises, lessons, and stories simply confirm what the learner expects! If the learner spends a half-hour doing an exercise that does exactly what you said it would, that's valid practice, but not memorable.
Sometimes they need the practice and reinforcement, of course, so that doesn't mean you won't include the "confirmation" activities. But when you want them to really learn and remember something new, look HARD for opportunities where things don't work as expected. Places where something behaves counterintuitively, or radically differently from something that appears (at least on the surface) similar, are golden.
I've already talked about ways we try to use this in the books:
* Garden paths (things that look like sound approaches, but then blow up at the end).
* Counterintuitive examples.
* Using show-don't-tell on common mistakes.
* Examples that have a common framework, but often with a weird twist.
* Unusual visuals and metaphors.
I worked in the mid-'90s for one of the coolest new media companies, AND Interactive (later sold to TCI and then brought down in a spectacular flame-out), co-founded by Hollywood creative Allen Debevoise. My projects were managed by another Hollywood producer, John Valenti (yes, Jack's son), and the thing I remember most about them both (and John especially) is that the WORST thing you could do is be "on the nose."
The classic example for me was when I was developing an interactive CD-ROM of Oracle's annual report. It had lots of splashy graphics and video, etc. but I committed pretty much THE worst possible offense when I chose, for the financial section splash screen, a graphic of a cash register. But then, why stop with just ONE cliche...I went ahead and added a really cool sound effect of a cash register ch-ching!
That John ever let me enter the building again is a mystery, and the warning to never EVER be "on the nose" again still haunts me. And I've noticed that the phrase "on the nose" is now a popular way to criticize screenplay dialog that not only violates "show don't tell," but eliminates all possibility of subtext. (I think "thou shalt not be on the nose" is one of McKee's ten commandments of story.)
So just in case you needed one more reason to be surprising, unexpected, or simply weird--you can say it's just being brain-friendly. Yet another way to get past the brain's crap filter.
And if you're one of our authors, you can expect us to be looking for ways you've encouraged the "Oh Shit!" (yikes, can't believe it did that) experience, or the "Oh Cool!" (wow, that's amazing... I didn't know you could do that) feeling. : )
Why we love "users"
I think it was a whole 48 hours into this new blog when someone wrote a comment that criticized us for the word "users".
"You start off by insulting "users" (get a clue and find a better term - they're out there)"
We knew sooner or later we'd have to justify the word "users" to someone.
Why do people object to the word? The arguments I've been hearing for years (by a very small, but occasionally vocal group that does include some very smart folks) are that it's the same word drug dealers use for their customers. Fair enough. But just because drug dealers use it doesn't mean we shouldn't.
The chances of the word "user" disturbing just about anyone in our core audience are pretty remote. We did upset at least one guy, in this blog, but anyone who disapproves of the word "user" in this context probably won't like much of the other less-than-politically-correct and possibly insensitive things we do, say, and recommend.
Besides the drug reference, another complaint I've heard about "users" is that it has connotations of "he's a user" as in "he's using her". But again, that's a different context. I certainly won't stop saying, "I'm using my favorite hammer..." because it, what, implies I'm exploiting the poor defenseless tool? (I know, I know, this just means that I "don't get it".)
Finally, there's the argument that says "users" implies "using up". But again, that's a different context, and we're damn sure our users are smart enough to understand the difference without problems, consciously or otherwise.
Of the "better terms" that are "out there" as alternatives, the number one candidate is customer. And customer is a great word... but it's too restrictive for our purposes, because (in our opinion) it still implies a "buyer/seller" relationship. Customers pay for things. And when we say "users" we're talking about anything from organization members to music fans to web site visitors to forum guests and yes, of course, paying customers. But "customers" just didn't feel right to us because it was too specific.
The word "fans" is another good alternative, and if you inspire passionate users, you might very well end up with "fans". But the reason we don't like the word "fans" as much is because it puts the emphasis on the wrong thing... the provider. And given that our core belief is:
It doesn't matter what users think of YOU. What matters is what they think and feel about themselves as a result of interacting with (using) your product or service.
We think "users" is exactly the right word. It makes the USER the important thing. The one who cares about USEability and USEfulness, two words we really like. We don't care if we make fans. We aren't even motivated to create customers. We believe that if you can do things in such a way that you help people become passionate users, the rest takes care of itself. Because being passionate is strongly tied to happiness, and anything that makes people feel a little happier is a Good Thing.
We think that helping people develop a positive addiction to something good helps everyone. Oops... another drug reference.
Of course anything addictive can, in the extreme, become an obsession, but we trust you know what we mean.
Should we be so flippant with language? To use a word like "users" despite the implications? Sure. Because if we try to sanitize everything, to make sure that we offend and insult nobody, then we suck the life right out of it and end up delighting nobody. We'd rather piss off a pile of people than be bland but tolerable to everyone. Because once you start going down that road, you're screwed. (Oops--I said screwed.) There is rarely a way to make everyone happy and still be remarkable.
I do agree that perhaps 15 years ago, this might have been a valid complaint. But today, the word "user" has become so mainstream (even snowboarders and skiers at our local Copper Mountain ski resort are referred to in some signage there as "users") that it doesn't creep most people out, and especially not the younger, high-tech audience we interact with.
Ask a group of typical twenty-somethings if that word bothers them in any way. We did. They laughed. "You're joking... right?"
So, I'll go on using my scissors and my favorite spatula and my "inside voice" and my blue ski gloves and my cranberry bubble bath and NPR because... I'm a user. And a very happy one : )
Is your message memorable?
To be remembered, it must be memorable.
Yeah, that's a DUH thing. But we're constantly amazed by how many people ignore it.
(Us included, but we're trying.)
But focusing on that one simple concept can make a HUGE impact on whatever you're trying to do-- teach, sell, whatever.
After years of research and speculation, the neurobiologists are finally unlocking some of the deeper secrets of memory, led in large part by the work of Nobel winner Eric Kandel. Bert and I attended a presentation of his on the neurochemistry of memory, and it was... memorable : ) They can actually take a neuron (not from a human), large enough to see with the naked eye, stick it in a petri dish and... teach it. By altering the chemicals, they can put it in a state where it will never learn, and they can put it in a state where it learns after just one trial. (Read once, remember always.)
And they now know the agent responsible for my, um, less-than-A college final exam grades: CREB-2. Your brain is constantly doing a balancing act between CREB-1 (enables long-term memory formation) and CREB-2 (prevents it). It's all connected to protein synthesis, needed for encoding memories to long-term storage.
So if you're a teacher, trainer, author, advertiser... and you want to increase recall and retention, you're in for the fight of your life against CREB-2. Why is CREB-2 there? To save your life. Or at least your sanity. You obviously don't want to remember everything.
The big problem, of course, is that you aren't in control of your CREB-2. Your brain is making the decisions about what's important and what isn't. I talked about this a lot in one of my first blogs: Getting past the brain's crap filter.
How, then, do you get past someone's CREB-2 (crap filter)? How do you make something memorable? Exploit what the brain is tuned to pay attention to. Exploit what the brain thinks is important.
The rough part is that even when people TRY to tell their brain "this is important, this is important, this is important", the brain says, "no it isn't, no it isn't, no it isn't." So if you're trying to get people to remember something, the sad part is that even when they WANT to remember, it's not guaranteed. You know this, of course, since you've all tried to remember things you read and study, but it just doesn't happen the way you'd like or even need.
So what does the brain remember? There are two main roads to memory--the slow painful one, or the much faster one. The slow painful one is through repetition. Repeated exposure (or what Kandel and others call "trials") eventually works. It's as if your brain says, "this sure doesn't FEEL very important, but he's read this damn paragraph 17 times, so I guess it is..." The quick one is to use the chemistry of emotions. Or as Roger Schank puts it:
"You remember that which you feel."
I'm really blending two things here--getting their attention, and getting them to remember. And they are closely related, because they're tied to triggering things the brain thinks are worth paying attention to. But I'm still mixing them more than is technically correct, because it is certainly possible to get someone's attention without getting them to remember, but for the most part, the distinctions don't matter. All I'm concerned about now is how to make the brain care.
And the key is to evoke feelings. The stronger the feeling, the more likely the brain is to pay attention and record what's happening. If you register a big flatline on the emotional Richter scale (as you would during a dry lecture or reading a dull textbook), your brain takes that as a perfect sign that "this is SO not life threatening."
There's a catch, though: intense feelings of stress also act as a memory suppressor. So it can't be just any feeling, though most will do the job. That's why putting people in a learning situation where they're feeling stressed, pressured, incapable, overwhelmed, stupid, etc. just compounds the problems they already have trying to memorize the stuff you're teaching.
OK, so what kind of feelings can I use? ANYTHING ELSE! Not everything is appropriate, of course, but anything I can get away with could work. Anything that causes a chemical shift, however slight, is an improvement.
That means humor, shock, horror, surprise, delight, joy, sex, thrill, etc. The problem is that there's already so much of that; the noise level today is so high, especially as advertisers try to break through it. It takes a LOT more to, say, shock someone than it did even ten years ago, as people become desensitized. But context matters. In Colorado Springs, CO, I'd be shocked to see a billboard with a naked person on it. But when I worked in Hollywood, I wouldn't even notice the posters, billboards, and store displays featuring naked people (often of uncertain gender) selling everything from shoes to software, because they were so common.
A racy scene in even a mainstream novel isn't too surprising (although often still memorable), but in a programming textbook, even the slightest hint is unexpected. And sometimes unexpected is all you need.
The brain is highly tuned for novelty. It spent thousands and thousands of years scanning for the unusual, the moving, the changing, the doesn't-quite-pattern-match. USE THAT. The brain is tuned for sex... (like I had to actually tell you that : )), so USE THAT. I was about to add the requisite (where appropriate, of course), but then... using it where it is NOT quite appropriate works even better. Again, if you can get away with it. Please don't give me a morality lecture... I'll assume that everyone is using good judgment with respect to children, sexual harassment, etc. I'm just talking about how the brain works, period.
The brain is tuned for things perceived as scary or threatening. USE THAT. (Although that one is a little riskier, because too much stress leads to the opposite effect.) Shock and surprise are great, though. Again, anything you overuse will diminish in effectiveness, so the more variety of brain-triggering techniques, the better.
In other blogs, I'll focus on individual techniques. But of all the approaches to getting past CREB-2, the one that might be the best and easiest in most situations is simply "novelty." In other words, "don't do what is expected in that context." I think that'll probably be my next post...
Note to our authors: expect me, Bert, Eric, and Beth to be grilling you on what you're doing to get past the CREB-2. Even just a little cleverness, something just slightly off-center, something ordinary in one context but a little bizarre in this context, or anything that elicits even the slightest head tilt or slight smile can be a big improvement in a technical textbook, so it doesn't take a lot.
If you're an advertiser/marketer, on the other hand... wow. That's more of a challenge. On the other hand, people are so used to (and tuned out to) bullsh** that simply being brutally honest (once they stop being cynical that you're just PRETENDING to be honest) is a major out-of-context experience that will work. If everyone finally gets on the Hughtrain, though, that'll only work until it's become the norm. (I doubt that'll ever happen, but the world would be a much better place if it did!)
If you want something to be remembered, CREB-2 is the moat you gotta get past. Shock on.
Show-don't-tell applied to learning.
In a previous post today, learning isn't a push model, I made a reference to the novelist/filmmaker admonition "Show, Don't Tell."
It's a crucial concept for us in our learning books, and perhaps one of the best examples of how we've used this is in our newest book (written primarily by Beth and Eric) on Design Patterns. We use this principle in several different ways; the two main ones are:
1) Motivation/relevance. One of the learning theories we subscribe to says that a learner must be personally motivated to learn each particular piece of what you're trying to teach. If you can't make a case for why this thing you're trying to teach them is personally meaningful and beneficial, you have no reason to expect their brain to care to process or retain it.
(And if you really can't make a case for the topic, what the hell is it doing in the course or book in the first place? I can't believe how many things I've put in a book or class only to realize that I have no clue why anyone should care... I just put it there because it was in the API, or because everyone else always covers it, etc. We tell our authors to put all topics on trial for their life... make those little suckers DEFEND their reason for being in the book, and if they can't come up with something important, the author should be brave and cut them. This is easier said than done, of course, because all of us--me especially--have this fear that if we leave something out, critics will destroy us with, "How can they not teach THIS [insert critic's favorite topic] to beginners? It's a crime to leave that out!")
At any point in a class or book, a teacher/writer can do one of these things:
* Teach the new concept, fact, or procedure--whatever--without explaining why it matters. That's obviously the worst scenario. (We do it sometimes, but we always feel guilty.)
* Teach the new concept, and afterwards explain why it's useful. That's the second worst way to do it, because the motivation comes too late! Think back to times you've skimmed something or even zoned out during a lecture only to realize an hour later that you really did need to learn this. Haven't we all had the experience of saying, "Well why didn't you say that before?! I would have stayed awake if I knew it was important..."
* Teach the new concept, but explain in advance why it matters. This can work well if you do a really good job of finding a meaningful, personally relevant benefit. (In some later post I'll talk about ways in which you can help find and communicate those.) Although this is still a "telling" model, it's a far better way to improve the learning than giving the benefit only after you present the thing you're trying to teach.
* Before trying to teach the concept, provide a scenario or story or example that makes it obvious in a meaningful way, why this topic you're about to try to teach really matters. This is one way of applying show-don't-tell to a learning context, and we try to use it when we can. It is our opinion (and just an opinion, so all disclaimers apply) that this is usually the best way to help learning happen because it's the closest to generating the way you'd feel in real life.
Of course, this all depends on whether the scenario is meaningful to the learner. We've all been in lame-ass corporate training classes that attempt to provide motivation, but with something you could not care less about. Like "do this and your boss will praise you," when all you really care about is working more efficiently so you can get the hell out of the office an hour earlier. But as long as you've demonstrated that this thing really matters, you're way ahead.
2) The other benefit of show-don't-tell is that it reduces the amount of time you spend "lecturing". The beta reviewers of our books are always quick to offer comments like, "On pages 200-220 you just lectured!! Not acceptable!" Sometimes we do it because we're lazy, or pressed for time, or because we just fall back into the habits of "telling", since that's the easiest way for the writer/teacher to communicate. But it's the least effective for the learner.
As I've said before, the success of our books depends entirely on doing what we promise--delivering a better faster learning experience to the reader/learner. And for that to happen, the reader/learner can't just skim the book or get bored after the first couple pages of a chapter. We have to keep them turning the pages, or they won't be able to learn from it.
And the "show don't tell" model is one tool that helps keep people turning the pages. That's why it's so important to novelists, for example. (See what sci-fi writer Robert Sawyer has to say about it.) We don't for a heartbeat think that we're good writers. NONE of us secretly harbors that "what I really want to do is direct" thing (which, for a lot of tech book writers is more like, "what I really want to do is write a novel.") But that's the cool thing about applying this to things like programming or other tech topics--you don't have to be brilliant and clever, given that most traditional methods of presenting technical topics are nearly 100% tell. In other words, even a poor-man's "show" is usually better than a really good "tell", when it comes to learning experiences. Because "show" is closer to reality. Show is about "you do this thing like this, and this terrible thing happens over here, and see how you spend the rest of the month fixing the code because you designed it in such a brittle way?" (And especially helpful when that "brittle way" actually looked pretty promising on the surface... a garden path scenario I mentioned in the "push" blog.)
Keep in mind please that everything I talk about here is related to learning, and certainly does not apply to ALL forms of communicating information. There's a big difference between things like "briefings", where the whole point is to quickly communicate new information, vs. trying to inspire deep learning and retention. If I just want the new data, quickly, the last thing I want is to have to wade through some elaborate scenario designed to show me why it matters. I'm already motivated on that specific point. That's why much of what we advocate in this blog isn't what we'd do in, say, an hour-long presentation at a conference. And in fact, it isn't what I do in my blog entries, because I don't consider them to be Big Learning Experiences. They're much closer to info briefings, reminders, and simple personal thoughts about the topic. On the other hand, if I were teaching a course or writing a book on these topics, that would be a completely different picture, and there would have to be far less telling and a lot more showing.
Ok, that's it for now. It's an unseasonably nice day here in Colorado, so we're heading out to play with our too-smart horses : )
Please stop using *asterisks*!
I've been taken to task, and rightfully so, for using and *abusing* asterisks in my posts. I'm going to break myself of that habit, and please yell at me if I do it again. (Except for the one above). I've spent too many years having to mark up my own plain text using asterisks for emphasis, and now I keep forgetting that the beautiful and wonderful Typepad gives me tags for italics and bold! (So even worse, I mix the two--real tags and fake ones--throughout each post.)
I cannot promise that I'll do any better at spell-checking, but I'll work on it. I'm a horrible proof-reader, but at least I know enough about the brain to let myself (and all the *other*, I mean other bad proof-readers) off the hook--your brain sees what it knows you meant, not what's necessarily there. It's just one of those lovely ways in which the brain thinks it's doing you a nice big favor (why should I bother her with those pesky little typos? It's not like she doesn't know what it means...) and it's really hard to switch that off. People who can proof-read well should be paid a lot of money.
I also can't seem to get the Typepad spell-checking interface to work for me, but that's a different matter. Maybe I'll try pasting my posts into my little text editor and let it spell check, or maybe I'll try one more time to read the damn thing before I post it. I appreciate the feedback on this, and I really will make a better effort, but then, "I was told blogs weren't subject to the same expectations people have when they read articles." That's the excuse I was using, anyway. And maybe if I cut out all the asterisks and *fake* emphasis and stick with the tags consistently, people will be more inclined to look the other way on my typos.
I'm trying to find the balance so that I don't start treating every blog entry as an article, and start editing the hell out of it and reducing the number of times I'm willing to post. This is still new to me, and I'm trying to put an emphasis on posting more frequently rather than agonizing over it as I would an article. (Which is not to say I'm any better when I write articles, but at least then I have a real editor.)
So, thanks for being patient, and I really appreciate the feedback (thanks Fred!). I'll try to do better.
Learning isn't a push model
Back in my AI days (when I used to have a clue, or thought I did anyway ; ), the book Scripts, Plans, Goals, and Understanding was required reading for some of us. And I've been a fan of Roger Schank ever since. Of all the work that has influenced the direction of our learning principles, his has had the greatest impact. We were thrilled to see his work on intelligence move toward developing better theories (and tools for) learning.
We get letters from people who want to know more about the metacognitive topics we cover in the intro to the Head First books, and I'd suggest that anyone interested in learning theory put his e-Learning book high on your list. Maybe even at the top.
Some consider him an acquired taste, and he has a lot of detractors. He's one of the more outspoken critics of the education system in the US and slams just about everything from secondary schools to colleges to corporate training. A typical quote of his from the book, discussing corporate training:
"So what's wrong with training? Everything that's wrong with training can be stated in four words: it's just like school. The educational model in school does not work. That fact, however, hasn't deterred business from adopting this model, which is based on the belief that people learn through listening. Memorize the teacher's words; memorize the training book's policies and procedures. It's at this point in my public talks that audience members rise up in protest."
And one of my favorites:
"First and foremost: When learning isn't engaging, it's not learning. The movies, for all their faults, usually get this idea right. In the film Dead Poet's Society, Robin Williams plays a teacher who jumps on top of desks, makes the class laugh, tells great stories, and gets the class involved in what he's teaching. The educational establishment at the school hates the way Williams teaches, based on the premise that if students are having fun, something must be wrong. Listening to lectures and memorizing countless facts and figures aren't engaging activities for most people. Minds wander; real goals take over."
Another book that has some good research data is this one by long-time learning guru Ruth Clark.
Yes, both of these books happen to be focused on e-learning, but the principles apply whether you're doing classroom training, learning books, online training, or developing just-in-time performance support materials.
But regardless of differences among learning theories, one thing virtually everyone agrees on is this: Knowledge cannot be pushed into someone's head while they sit passively reading or listening. Knowledge is a co-creation... the learner must construct the new knowledge in his own head. And usually (or some say ALWAYS), the new knowledge must be mapped into something that's already IN the learner's head.
This notion of knowledge-as-co-creation is crucial for us. Which is one of the reasons we were horrified at the thought of creating learning books. Because for the most part, reading is just like listening. Worse, reading a fairly dry textbook is like listening to a dry lecture--pretty much THE weakest form of learning. So trying to make a book into a learning experience flies in the face of everything we believe in about learning (our backgrounds are in computer science, game development, entertainment, teaching, and looooong stints in artificial intelligence, including the field of intelligent tutoring systems (AI meets CBT)).
So our mission was, given the constraints of a book format, and knowing that learning is far less likely to happen if the learner is just... reading, what can we do to help get them involved and start flexing their brain cells? So we tried a bunch of different things, figuring that the more we can throw at it, the more likely it is that something will work at least some of the time, for the people who try to learn from the book. From the feedback we've gotten now, 18 months in, we know that some of what we did to help make this happen is working, and some of the things just bombed. And some of the things we didn't plan--things that are there simply as artifacts of trying to apply some other principle--turned out to be a key component of getting the learner involved.
Getting the learner to co-create knowledge isn't a simple task; lots of pieces have to come together. For example, if we provide the absolute best thought-provoking exercises, but can't get the reader to stay awake long enough to get to them, we lose. If we provide ways in which the reader can get involved and really build some brain cells, but the challenge level is simply too high (or just as bad, too low), we lose. If the material simply isn't stimulating enough to hook the reader in, and he won't stay with us start to finish, we lose (since we're not a reference book). In other words, our whole premise and promise to the user--that they'll learn more quickly, more deeply, and with less pain--depends on them really staying with it and doing the work. And we're painfully aware that if we don't deliver on that promise, they'll have no reason to ever buy another book in this format. We're in this for the long haul, so we're deeply dedicated to really making this work for people who want to learn.
We looked at the whole system in which an environment for learning occurs, and that's why we drew on so many domains to help us. Learning theory says the learner must be motivated, but says almost nothing about how to get people motivated. Learning theory says the learner must be engaged and involved, but says almost nothing about how to really make that happen. On the other hand, Hollywood, Madison Avenue, and good sales people have something to teach us about providing motivation. They understand seduction and communicating personally meaningful benefits. (More on that topic in other posts.) Game design has something to teach us about that as well. After all, kids and adults alike will spend hours and hours and hours immersed in thinking/planning/strategizing in the course of playing a video game.
We wondered, can we try to turn a technical book into something that will make people want to stay with it and keep turning the pages? And even if we can, will they be motivated to actually DO the exercises? What if they don't do the exercises? Can we provide OTHER ways to try to make learning happen? To get the user to think at a higher level... to process the new content in such a way that he constructs new knowledge in his head, rather than just passively reading?
Our answer was, "We don't know if we can, but yes, we can certainly try, and here's how..." Among the things we use to try to get people flexing brain cells are:
1) We use cliff-hangers, where the learner is drawn into the scenario only to be left hanging without the full answer, to help spur their curiosity into speculating on the solution or result.
2) We use debates/arguments/discussions between two characters (which could be people or even anthropomorphized parts of the system like the compiler vs. the virtual machine) where there isn't always a clear winner. Both sides might make compelling, convincing cases for their personal view, and this kind of forces the learner's brain into evaluating (one of those higher-level thinking tasks on Bloom's taxonomy), weighing the merits of each side, and drawing his own conclusions. Sometimes we have a definite side, but looking at the same scenario from more than one perspective is in and of itself a way to help inspire deeper thought processing of the concepts.
3) Knowing that most people claim to skip the lab exercises in a book that say, "Now go to your computer and type this in...", we have 40-50 in-book (workbook style) exercises you do with a pencil, right inside the pages of the book. We want readers/learners to have NO excuse for not doing the exercise when they're using the book the way they tell us they do--on the train, bus, plane, in the park, wherever they have a spare moment at lunch, etc. In other words, when they're not necessarily within easy reach of doing the real thing they're learning.
4) We make those exercises use a wide range of brain capabilities--so there are right-brained pattern-matching exercises, left-brained code troubleshooting and logic puzzles, draw this picture, answer this question, write this code, make this decision, etc.
5) We ask questions and provide exercises sometimes for things that we haven't fully explained, so that the learner must apply what he's already learned and extrapolate to figure out something he hasn't actually seen yet.
6) We provide "garden path" scenarios, where the learner is led down a road that looks so right, but turns out to be SO wrong. This is based on the theory (Roger Schank has a lot to say about this as well) that we learn a lot more from mistakes and surprises than from things that work as expected. Think about it... what are the things you're most likely to remember when you're working? When things go just as you expect, just as you were told, there's nothing memorable. But when you're humming along and suddenly the thing you expect fails, and you get just the opposite... you get that WTF?? feeling. And that is what you remember. So we try to provide at least a few of those visceral, "I won't make THAT mistake again!" experiences wherever we can. (And thanks to the wonderful Java APIs, those don't-work-the-way-you'd-think counterintuitive scenarios are plentiful in some of our books : )
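To make the garden path idea concrete, here's a small sketch of my own (not one of the exercises from the books) showing the kind of counterintuitive Java behavior I mean. Comparing autoboxed Integers with == looks right and even "works" for small test values, because the JVM caches Integer objects for values from -128 to 127 by default; then it blows up on bigger numbers:

```java
// A classic garden path: == on autoboxed Integers compares object
// references, not values. It happens to "work" only for small values
// that fall inside the default Integer cache (-128 to 127).
public class GardenPath {
    public static void main(String[] args) {
        Integer a = 100, b = 100;     // both autobox to the same cached object
        Integer c = 1000, d = 1000;   // each autoboxes to a distinct object

        System.out.println(a == b);      // true  -- looks like == "works"
        System.out.println(c == d);      // false -- surprise! WTF??
        System.out.println(c.equals(d)); // true  -- what you actually meant
    }
}
```

You sail along thinking == is fine, your tests with small numbers pass, and then one day a value over 127 shows up and you get exactly that "I won't make THAT mistake again!" moment.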
Of course, it's a little tricky using these techniques. They make our books suck as reference books, because you might flip back to a page and actually find yourself reading something that's just wrong--but we're 100% clear that our books are learning experiences, not references. So we have to use a lot of other things in our books to try to get people to read the whole topic, at least the first time through, in order to get the whole "Here I am doing this stuff and BAM! It blew up..." experience of the story. And there will always be some people who hate the approach precisely because it DOES include these tricks. They feel cheated that we deliberately led them down a garden path when we could have just told them how it really works to begin with. And while they have a valid point, that "telling not showing" approach (considered a really bad thing to do among novelists and filmmakers) is the weakest form of learning.
You might hate the approach, but it's closer to the messy, confusing, oh SHIT experience of what happens when you do these things in the real world, and it's guaranteed to be more memorable. If you can stand getting through it. That's part of why we have to do a lot of other things to try to make that "getting through it" more tolerable, or even interesting. (That's where the game design theory comes in; more on that later, but here's a hint: the flow state psychologists call "optimal experience" is the one game designers create when they get the challenge/skill/seduction blend right--the one that makes you want to get to the next level.)
Ok, that's it for now. My apologies for spelling errors and typos. Repeat to yourself: this is NOT an article; it's a blog. I'm trying to get better at this...
A Tiger-on-Tiger (OSX/Java) poem...
Just in case there's any remaining doubt about what an awful writer I am, this should nail it. My first geek poem. In the "Night Before Christmas" beat...
Twas the night after Christmas and inside my house,
I stayed up configuring my new Bluetooth mouse.
The Setup Assistant that I had dismissed
refused to stay closed, it was making me PISSED.
"Searching for keyboard..." it kept on repeating
Then "Zero were found" it wouldn't stop pleading.
Updated the firmware, installed a new driver
But nothing that led to a cheerful high-fiver.
But then I remembered my OS is beta
I had to get Tiger; I just couldn't wait-a.
So here I sit looking at Mac 10-dot-four
and things just don't work quite like they did before.
But to run Java five on my brand new iMac
I can't use 10-three (without a bad hack).
Oh why won't McNealy acknowledge the Mac?
He hates the PC, but Sun's java we lack.
They write it for Windows, Solaris, and Linux
But won't do the Mac, sniff, got a Kleenix?
And because I'm a gambler, I did not dual-boot
Just installed OS Tiger--lock, stock, as root.
But oh how I love all of Tiger's new stuff
Even in beta, I can't get enough.
With 64 bits, just where I need it.
And I think a new screen font, the better to read it.
There's Dashboard, and XCode, now IChat AV,
the RSS feeds do look cool to me.
XCode two-dot-oh will help with my Java
for those late night sessions, fueled on Kava.
I do think I might be missing some data,
but that's what I get for trusting a beta.
But even in beta 10-four is WAY stronger
than Windows XP or a beta of Longhorn.
It works with my iPod, my Wacom, and reader.
The new improved kernel--a real Unix leader.
Now if only my wireless mouse would "pair up"
I'd go back to surfing, see what I scare up.
Oh look there's my problem! Can it really be?
"Switch the back of your keyboard toward the LED"
It turns out my mouse was OK all along,
And it wasn't my OS that had gone wrong.
My wireless *keyboard* just wasn't quite right,
but now that I've fixed it, I can stay up all night...
coding with enums, generics, and varargs,
lots of cool new features for programming die-hards.
I love the new Java, and the new 10-dot-four
but what did they give them the same code name for?
Tiger on Tiger is kind of confusing,
Still I am thrilled that it's what I'm using.
So thank-you to Apple, and thank-you to Sun.
Now if I could just get the Wireless Toolkit to run...
Update: Head First Java
We're in the final few days of wrapping up the new version of Head First Java. The main differences are that we tigerfied the content in the book (which doesn't have a big impact on most of it) to include autoboxing, the enhanced for, generics, covariant returns, using String.format(), String.split() (instead of the legacy StringTokenizer), and a couple of other things. But the biggest change in the book is that we're adding a new chapter on Collections--promoting it from an appendix topic to a first-class chapter topic. In the first edition, we covered ArrayList throughout the book, but discussed nothing else about collections until the appendix.
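For anyone curious what "tigerfied" looks like in practice, here's a tiny sketch of my own (the class name and data are made up, not from the book) that touches several of those Java 5 features at once--split(), generics, autoboxing, the enhanced for, and String.format():

```java
import java.util.ArrayList;
import java.util.List;

public class TigerSampler {
    public static void main(String[] args) {
        // String.split() takes over from the legacy StringTokenizer
        String line = "beer,wine,soda";
        String[] drinks = line.split(",");

        // Generics give you a typed list; autoboxing wraps the ints for you
        List<Integer> counts = new ArrayList<Integer>();
        counts.add(3);   // int autoboxed to Integer
        counts.add(7);

        // The enhanced for loop, with String.format() for the output
        for (int c : counts) {   // Integer auto-unboxed back to int
            System.out.println(String.format("count: %d", c));
        }
        System.out.println(drinks.length + " drinks");
    }
}
```

Pre-Tiger, that same code would have needed a StringTokenizer loop, explicit Integer wrapping and unwrapping, casts coming out of the List, and an index-based for.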
Given how shockingly fast O'Reilly is at getting our books from PDF-to-bookstore, I expect it'll be available in February. We'll post updates here and on javaranch as we narrow down the real dates. People have asked if we'll be putting a new "big-head" person on the cover, but the answer is no. : ) While the last two books have had women on the covers (see our about page for cover pictures), we're keeping our original Java guy for this new second edition of HF Java.
We'll probably have another You Pick The Cover contest for the *new* HF books the way we did with the HF Design Patterns book. Stay tuned for that...
I'm hoping that in a week or two we'll be able to tell you more about the next HF books coming out. I can tell you that none of the next batch is on Java... There are currently six other people besides Bert and me who are acting as the primary co-authors of the next HF books. We're VERY excited about that--we've had three HF author "bootcamps" so far, and it's taken quite a while for everyone to ramp up, but we've seen some chapters coming in that are really cool. If only we could make the tech book market come back even just a LITTLE...