Thursday, December 23, 2010
N-gram fetishism
The intertubes are abuzz over the latest & coolest toy released from Google Labs: the Google Books Ngram Viewer. What is it? And why am I writing two posts about Google technology in a single week???
Google Labs' info page explains: "When you enter phrases into the Google Books Ngram Viewer, it displays a graph showing how those phrases have occurred in a corpus of books (e.g., 'British English', 'English Fiction', 'French') over the selected years." ("How" here means "how frequently.")
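For the curious, here's a rough sense of the mechanics. Below is a minimal sketch, in Python, of computing a phrase's relative frequency per year from a tiny corpus of (year, text) pairs -- purely illustrative, and nothing like the scale, tokenization, or smoothing of Google's actual pipeline:

from collections import Counter

def ngram_frequency_by_year(corpus, phrase):
    """Relative frequency of `phrase` per year in a toy corpus.

    corpus: iterable of (year, text) pairs -- a stand-in for the
    millions of scanned books behind the real Ngram Viewer.
    Returns {year: phrase occurrences / total n-grams that year}.
    """
    n = len(phrase.split())
    counts, totals = Counter(), Counter()
    for year, text in corpus:
        words = text.lower().split()
        ngrams = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
        counts[year] += ngrams.count(phrase.lower())
        totals[year] += len(ngrams)
    return {year: counts[year] / totals[year] for year in totals if totals[year]}

# A two-"book" corpus, one book per year:
corpus = [
    (1957, "the end of the world arrives quietly"),
    (2006, "the end of the road is the end of the world"),
]
print(ngram_frequency_by_year(corpus, "the end"))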
Brenna Ehrlich wrote about it for Mashable.com. Nathan Bransford blogged about it. Patricia Cohen wrote about it for the NY Times. It was a topic of e-mail and conversation and conference calls at my office this week. Geoffrey Nunberg, of UC Berkeley's School of Information, and staff reporter Mark Parry wrote about it in the Chronicle of Higher Education. The most far-reaching claims about n-gram analysis of the Google Books corpora may be those made in last week's article in Science Magazine, Quantitative Analysis of Culture Using Millions of Digitized Books. Nunberg's article considers those claims critically.
Okay, so I tried it out for myself.
What, I wondered, can the Google Books Ngram Viewer tell me about my novel manuscript, Consequence, which (FWIW) is being considered for representation even as I type by literary agents across the continent?
Consequence is about activists who are up in arms over looming environmental catastrophe. Here's an elevator speech: San Francisco political activists get in over their heads when peaceful protest collides with gun runners, road rage, and conspiracy to pilot six kilotons of truck bomb into the heart of a midwestern research facility.
So what does the Google Books Ngram Viewer tell me about how these ideas have been represented in English language books during the period 1930 to 2008? (I'm figuring that period overlaps well with books that folks interested in Consequence are likely to pick up in a bookstore.)
What I see in the graph shown in this post (click the image for a larger view) is that the appearance in books of the words catastrophe, terrorism, and consequence has held pretty steady over a period of 78 years. Books that treated the environment spiked in the late 90s and have been falling through the aughts ("environmental" and "ecology" produce graphs of roughly the same shape over this period; "genetic engineering," which is the sort of environmental catastrophe that the novel's activists focus on, graphs with a flatter tail but rises and falls in parallel with "environment").
And that all means ... what?
That people are always up for a story about terrorism and its consequences? That environmental concerns are passé? That environmental concerns are poised for resurgence? That there used to be a glut of books about the environment but now there's space for new books on the topic to emerge? That catastrophe never goes out of fashion?
When I study the graphs (not very hard, I confess) I come to a simple conclusion. Google's ngram viewer has predicted that some people will want to read Consequence if and when it is published, and others won't bother.
Did I need an app for that?
What I predict is that ngram viewing at the level of crude inquiry enabled by the new Google tool -- nifty and fun as it is to play with -- will prove to be just another technological fetish, probably sooner rather than later.
Geoffrey Nunberg, in his CHE article on Google's Ngram Viewer, links to a May 2010 article in that same journal about Google Books and what it means or doesn't to novels, reading, and humanist scholarship: The Humanities Go Google, by Mark Parry. In it, Professor Katie Trumpener of Yale University (Comp Lit & Film Studies) is quoted describing the kind of analysis made easy by the new ngram viewer as "one that could yield a slew of insignificant numbers with 'jumped-up claims about what they mean.'"
Indeed.
So I think I'll stop here.
What do you think of the ngram viewer? What have you learned from it in the service's first week of operation?
I'm going to take a week's posting break. Enjoy the holidays, and I'll see you back here at One Finger Typing in January!
Labels:
books,
e-books,
Google,
reading,
technology,
technology and literature,
writing
Monday, December 20, 2010
Reading level in Google search
This isn't my breaking news: Barry Schwartz of searchengineland.com noticed it ten days ago and the news was slashdotted Thursday, and a friend did me the favor of setting up a link to how One Finger Typing blog posts rate on Google's reading-level scale. Nundu, a Google employee with a very thin profile, announced the new feature on 9 December in the Google Web Search Help Forum.
As you can see from the screenshot, 25% of this blog's posts appear to score as "Basic" and 75% as "Intermediate" reading level.
Alas, the statistical display is actually misleading. Only some of this blog's posts are classified. Four, to be exact, as of mid-December 2010.
The first thing I wonder when I see this sort of claim, involving automated classification, is what methodology was used. That is, how does Google Search measure reading level?
Nundu, in that help forum post, says: "Re: how we classify pages -- The feature is based primarily on statistical models we built with the help of teachers. We paid teachers to classify pages for different reading levels, and then took their classifications to build a statistical model. With this model, we can compare the words on any webpage with the words in the model to classify reading levels. We also use data from Google Scholar, since most of the articles in Scholar are advanced."
Okay, that's pretty cool. Teachers rate texts, professors write texts, then Google's Natural Language Processing magic matches search result content to content of known or rated texts. Those that are similar to "basic" texts are themselves rated "basic," and so forth. There are a fair few known and reliable ways to measure 'similarity' between texts, and while Google doesn't say which it uses, I'm fairly confident that they know what they're doing -- what with all the content at their disposal, and their strong financial interest in getting 'similarity' right in order to successfully present search results.
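Since Google hasn't said which measure it uses, here is only a hedged, minimal illustration of the general idea: a nearest-neighbor toy in Python using cosine similarity over word-count vectors, with the rated example texts invented for the sketch (Google's actual model is surely statistical and far richer than this):

import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity between two texts' word-count vectors (0 to 1)."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def classify_reading_level(page, rated_examples):
    """Assign the level of the most similar rated example.

    rated_examples: list of (level, text) pairs, e.g. teacher-rated pages.
    A nearest-neighbor toy, not Google's model.
    """
    return max(rated_examples, key=lambda ex: cosine_similarity(page, ex[1]))[0]

rated = [
    ("Basic", "the cat sat on the mat and the dog ran"),
    ("Advanced", "the epistemological ramifications of quantification remain contested"),
]
print(classify_reading_level("my dog ran to the cat on the mat", rated))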
Reading level analysis has been a part of Google Docs for a while. If you've got a document in Google Docs and use the Word Count tool, numeric ratings are given on three different scales of "readability": Flesch Reading Ease, Flesch-Kincaid Grade Level, and the Automated Readability Index.
For example, one of the search-classified posts on One Finger Typing -- Nominative determinism in fiction -- is given a Flesch Reading Ease score of 61.13 by Google Docs, indicating it's "easily understandable by 13- to 15-year-old students." That's about on a par with Reader's Digest, but easier to digest than a Time magazine article, or so says Wikipedia. Google Search classifies this score as "Basic."
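The Flesch Reading Ease formula itself is public: 206.835 minus 1.015 times (words per sentence) minus 84.6 times (syllables per word). Here's a back-of-the-envelope Python version; the syllable count is approximated by counting vowel groups, so its scores will drift a bit from whatever Google Docs computes:

import re

def count_syllables(word):
    """Crude syllable estimate: count groups of vowels (minimum 1)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch Reading Ease = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).

    Higher is easier; scores around 60-70 roughly correspond to text
    "easily understandable by 13- to 15-year-old students."
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

print(round(flesch_reading_ease("The cat sat on the mat. The dog ran fast."), 1))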
I'm curious to know the full distribution of classifications across my blog posts to date, but I guess Google's servers have to chew over the intertubes a bit more before that's as easy as a few keystrokes. I'm not so eager to stare at my own blog's navel that I'm going to iterate through every one of my posts with the Word Count feature in Google Docs.
In the meantime, it's good to know that I'm not pitching prose exclusively to university professors. One hopes to be more broadly accessible than that, no offense intended to any university professor who ever has or might ever read & comment on One Finger Typing.
Related posts on One Finger Typing:
Google's new Blogger interface
Google yanks APIs, developers caught with pants around ankles
N-gram fetishism
Labels:
Google,
reading,
technology,
writing
Thursday, December 16, 2010
Immodest budget proposals for a broken California
Once and future California governor Jerry Brown is trying to get a jump start on the Sisyphean task of fixing the state's broken budget.
Is California's budget the most broken among U.S. states? The Wall Street Journal reported last week that "States are reporting billions in midyear budget shortfalls, and the crunch is likely to continue for at least several more years, a new report says. [...] Illinois had the largest midyear shortfall relative to the size of its budget among the 15 states with deficits: $13 billion, or 47% of its general-fund budget."
Let's look at how California is grappling with its fiscal nightmare.
On Wednesday of last week, the SF Chronicle reported, "Gov.-elect Jerry Brown convened an unusual summit Wednesday, bringing together lawmakers and state financial officials for an in-depth look at the state's myriad fiscal problems, and he signaled that the days of relying on gimmicks to bridge the budget deficit are over."
Just the day before yesterday, the Chron's columnist Kathleen Pender wrote about one of a depressing number of those budgetary gimmicks that was baked into the 'resolution' of the 2010-11 budget this past fall. As Pender wrote in Loss of estate tax leaves hole in state budget: "The proposed tax deal in Congress would fail to deliver about $2.7 billion in estate tax revenues California was counting on receiving this fiscal year and next, but some say the state should never have expected those revenues in the first place. 'It sounds like your budget estimators were counting chickens that not only hadn't hatched, the eggs hadn't even been laid yet,' says Howard Zaritsky, an attorney and estate planning consultant in Rapidan, Va."
It's not like this was a surprise. As Reuters reported the day the budget was passed, "California lawmakers on Friday approved a state budget filled with spending cuts and creative accounting to fill a $19.1 billion deficit, 100 days after a spending plan should have been in place. Governor Arnold Schwarzenegger said he hoped to sign the budget package as soon as Friday evening, but critics fear his successor, to be elected on November 2, will immediately face a new shortfall as rosy revenue assumptions prove unfounded."
Creative accounting. California's government has been pretty good at that over the past decade. But maybe a new kind of creativity is called for to climb out of the deep deep budgetary hole that we citizens permitted our governator and legislature to dig.
As Brown summited last week, I was hanging out with an old friend on his last day of a visit back to the Bay Area. Stuart lived in Berkeley and Oakland in the 1980s and 1990s, between Brown administrations. He reeled off a range of options the new governor might consider, and has kindly permitted me to spin them in this post.
These ideas are not gimmicks. They've been tried and proven in many a Third World country, and the last has even been piloted here in the U.S.
And so, without further ado:
1. Call in the IMF
The IMF is great at restructuring economies, and because California's is the 8th largest in the world maybe it'd be a fun challenge for the intrepid imposers of austerity measures. Heck, with a hole $28 billion deep, austerity measures are exactly where the state's going anyway.
2. Invite Bono to organize a charity concert
George Harrison and Ravi Shankar did it for Bangladesh. Bono organized Live 8, and was part of Band Aid and played at Live Aid.
What would Bono call a charity concert to save California? "Kool Aid"? "Anti-tax crusAid Aid"?
Maybe Linda Ronstadt would play if her former boyfriend asked really, really nicely.
3. Apply for AIDS funding from the United Nations
Why not? California has been home to nearly 15% of the 1,073,124 HIV/AIDS diagnoses in the United States (cumulative through 2008). Yes, it's true that there are currently about 33 million people living with HIV (not cumulative, current) ... a staggering number ... but still, it couldn't hurt to ask UNAIDS, right?
4. Beg Hugo Chavez to subsidize the state's oil habit
Venezuela's 'revolutionary' leader has given a break to Honduras, the Dominican Republic, Cuba, Costa Rica ... he even sold oil cheap to "poor populations and low-income populations" here in the U.S., quoting from his announcement of the plan, on Democracy Now (20 Sept 2005). The Washington Post described one part of that program as an effort to "bring 7.5 million gallons of deeply discounted heating oil to as many as 37,000 low-income households in Maryland, Virginia and the District and free heating oil to some homeless shelters."
Sounds like just the ticket. Go on, Gov.-elect Brown. Ask Mr. Chavez. Please, please, pretty-please?
I suppose it's best to close on an amusing note -- you know, amusing in a gallows-humor kind of way. Here's a thought from that SF Chronicle article of last week:
"Mac Taylor, the state legislative analyst who joined Brown at the forum, said any real fixes would be difficult. 'No matter how you resolve the budget problem ... it's not going to have a good effect on the economy,' Taylor said."
No sh*t, Sherlock.....
Labels:
politics
Monday, December 13, 2010
Graffiti
A new graffito recently appeared on the unisex bathroom wall at one of my favorite cafés in Berkeley, Café Milano, just across the street from Sproul Plaza: "Life is short. Spend it making history."
I keep half an eye on graffiti on the campus and in cafés ... it's a way to track a certain segment of the zeitgeist. I find it amusing, aggravating, fascinating, or just plain dumb. Depending.
My friend and University of Chicago colleague, Quinn Dombrowski, has a much more regular and rigorous habit. She has photographed nearly 1500 pieces of graffiti from U Chicago alone; and has also conducted foraging expeditions to Brown University (930 photos), the University of Colorado (262), Arizona State University (507), and UC Berkeley (142).
Legacy readers of One Finger Typing will recall that Quinn's book, Crescat Graffiti, Vita Excolatur: Confessions of the University of Chicago, was an element of my early post Forays into self-publishing. Crescat Graffiti... is available on Amazon; and, as I mentioned some months back, generated a nice portfolio of media attention, from a Chicago Tonight TV appearance to the Wall Street Journal, to Der Spiegel.
Quinn's latest project in this vein is to analyze graffiti at the five schools she has sampled to categorize them topically and rate their "interestingness." Her methodology was unveiled on 28 November, and while one can take issue with metrics and methods at least Quinn is explaining what she's up to. The first (and "least interesting") school analyzed on her blog is Arizona State; Colorado's graffiti came up for review on Friday; and Berkeley is next. Quinn is "publishing the data from least interesting to most interesting, to end the year on a good note." As an alumnus and employee I am obliged to be disappointed that Berkeley didn't percolate to the top of Quinn's analysis -- but, hey, it's her rubric and she does have her own loyalties....
I will say that there's something about "interestingness" as Quinn measures it that doesn't capture dimensions of what I think makes some graffiti zingy and sweet. I didn't ask how she'd score "Life is short. Spend it making history." But I don't think it would earn as many points under her scoring system as graffiti that referenced or quoted "high culture" (you know, the kind of stuff university students are supposed to espouse), or even pop culture. The thing about the graffito in Café Milano is that it's not so much derivative as prescriptive, in a joyful, engaged, activist sort of way. Seeing it warmed this grizzled rabble-rouser's heart.
When I searched for "Life is short. Spend it making history." I didn't find an original source. Anybody out there recognize it as a quote? By whom, and in what book or song or film or revolutionary screed?
Check out Quinn's collection, and the analysis on her blog. I'd be interested to hear what kind of graffiti makes your heart skip a beat. Tell me in the comments?
Labels:
culture,
self-publishing
Thursday, December 9, 2010
Unvarnished truth is hard to swallow
A summary for a study titled Global Warming Warnings Can Backfire begins: "From Priuses to solar panels and plastic-bag bans (and even green dating!), it seems that everyone’s going green. The message that our world is in danger if we do not take action is also everywhere: from images of baby polar bears drowning to frightening images of a parched barren future. The push to go green is based in good intentions, but an upcoming study in Psychological Science shows that the popular 'do or die' global-warming messages can backfire if the situation is presented too negatively."
The study is slated for publication in January's Psychological Science; the authors are Matthew Feinberg & Robb Willer.
(Referral credit where it's due: UC Berkeley's press release factory published Dire messages about global warming can backfire, new study shows on 16 November, but I didn't come across the story until our student newspaper, The Daily Californian, picked it up in an article a week later.)
Earlier this week I wrote about finding Nevil Shute's On The Beach (1957) problematic for presenting the danger of nuclear war in clean, controlled, and airbrushed colors. As I wrote in my blog post of Monday, when I watched the film and reread the book -- each a classic -- over Thanksgiving weekend I found On the Beach a bit ridiculous. I think Cormac McCarthy's The Road (2006) gives a more honest view of what apocalypse-on-Earth might look like, should things come to that.
Each book packs a moral wallop. Each warns of a future only a maniac would want to see happen, a future most would go some distance to forestall. But I had a hard time taking On The Beach seriously because its characters go to their deaths with such stiff upper lips, conforming like docile children to social strictures that are about to flatline. I had to pinch myself from time to time to remember I wasn't reading a farce. McCarthy's The Road? No such problem.
But the Feinberg and Willer study to be published next month suggests that of the approaches taken by these novels, published nearly fifty years apart, the kinder, gentler message is more likely to reach its readers.
Again from the Psychological Science summary: "Results showed that those who read the positive messages were more open to believing in the existence of global warming and had more faith in science’s ability to solve the problem, than those exposed to doomsday messages, who became more skeptical about global warming."
I have to confess that when I read that sentence Mary Poppins fairly leaped to mind ...
... but not in a very delightful way.
So let's dig a little deeper into Feinberg & Willer:
"Our study indicates that the potentially devastating consequences of global warming threaten people’s fundamental tendency to see the world as safe, stable and fair. As a result, people may respond by discounting evidence for global warming," said Willer [...]. "The scarier the message, the more people who are committed to viewing the world as fundamentally stable and fair are motivated to deny it," agreed Feinberg.
But perhaps motivation to deny isn't the only issue the study is surfacing, or even the most important.
Feinberg & Willer's study, its summary explains, finds that people are "open to believing" a serious problem exists when they have "more faith in science’s ability to solve the problem." These are two sides of a single coin, though, aren't they?
If a doomsday message suggests that a very large problem can't be solved, people don't want to believe the problem exists. That's heads. Tails might be: people are willing to believe a very large problem exists, so long as it's not really very large (because it can and will be resolved -- and, perhaps much more tellingly, somebody else is going to take care of it). Whether the world is, indeed, "safe, stable and fair" -- whatever a person's convictions might be on the question -- is something else again.
But let's shift gears for a moment, and consider another angle on global warming.
I saw the preview several weeks ago in a theater, but have not watched this year's documentary Cool It!, which portrays Danish statistician Bjorn Lomborg and his conviction that climate change is a "manageable crisis" (as the Boston Globe put it). Lomborg is selling big engineered solutions. For example, according to Reuters, "Geo-engineering that could [...] be used to reflect sunlight into space." (Yeah, right.)
I suspect this film is a variant of the wishful human tendency Feinberg & Willer have found in their subjects.
Judging from the trailer and reviews, Cool It! pushes a message that Al Gore and the 2006 documentary An Inconvenient Truth are massive downers, man. This is not only the frame of Cool It!, it is also the frame of Feinberg and Willer's study: "Overall, the study concludes, 'Fear-based appeals, especially when not coupled with a clear solution, can backfire and undermine the intended effects of these messages.'"
While some of what I gleaned from glimpses into the movie looked on the mark to me -- if all we can throw at climate change are efficient light bulbs and hybrid cars it's certainly not going to go away -- Lomborg appears to lean toward giving smart scientists big money to solve problems for humankind, rather than engaging people to behave differently than we have to-date under the influence of free market capitalism, and the burdens and bonuses of industrial civilization.
The effect of behavior to-date, of course, is that we're heading for climate-induced catastrophe before the end of the current century, according to a supermajority of scientists. There's still a chance to head off the worst of what's predicted, most think, but doing so will require widespread and radical changes in our behavior (which, most scientists agree, lies at the root of today's climate change) and the economies that influence them.
The chance that the worst can be engineered into submission by a vanguard of geeks and their electronic slide rules? Not so much, in this skeptic's humble opinion. And if a "clear solution" -- like reflecting sunlight back into space -- is actually a pipe-dream, it doesn't actually add value to an appeal, whether "fear-based" or not.
What would be required to motivate radical changes in human behavior and the economies that influence them? I think that question has an easy answer, generally speaking: it would require, first, that humans recognize and acknowledge that current behaviors are leading us headlong into deep doo-doo. From there, compared to from a state of denial, it's a much shorter distance to shouldering the responsibility necessary to fix a problem that's too big to be delegated to faceless bureaucrats and professionals.
But, as Feinberg and Willer show in their forthcoming study, Lomborg's message is of a type people are more likely to take in -- precisely because it suggests that there's an easy way out. It certainly would be easier to leave it to engineers to solve their equations, while the rest of us, as George W. Bush urged Americans in September 2001, "Do your business around the country. Fly and enjoy America's great destination spots. Get down to Disney World in Florida. Take your families and enjoy life, the way we want it to be enjoyed."
But the easier thing is unlikely to help, any more than going to Disney World made the world safe from murderous lunatics willing to fly airplanes into buildings.
Thomas Friedman, responding to diplomatic dirty laundry recently aired by WikiLeaks, listed a series of behavior patterns that have seriously -- and perhaps irrevocably -- weakened the United States in relation to putative allies as well as declared enemies. Friedman wrote this past Saturday in a New York Times op-ed, "Geopolitics is all about leverage. We cannot make ourselves safer abroad unless we change our behavior at home. But our politics never connects the two."
And, indeed, that failure to connect is precisely what Feinberg and Willer -- and Nevil Shute -- have shown that people prefer to do.
If climate change is real (which the vast majority of scientists believe is so), and if there may still be a way out (which also seems to be a widely shared assessment), what kind of messages will motivate the engagement and commitment necessary to fix it? A message that softens the raw horror of what will come if we fail? Or one that sticks you right smack in the sights of the worst that can happen?
I know what kind of a message strikes me deeply, and I suppose that makes me an outlier in Feinberg and Willer's universe. Y tu?
Thanks to Pierre J. for the photo of the nuclear explosion.
Labels:
books,
energy,
politics,
technology
Monday, December 6, 2010
Apocalyptic fiction: the distance between Nevil Shute and Cormac McCarthy
Last month, riffing off Kazuo Ishiguro's novel Never Let Me Go and the recent film based on that novel, I blogged about Dystopias in fiction. Thinking in that vein, and after a conversation with a friend who recently read Consequence, my own dystopia-inflected novel manuscript, I decided to reread Nevil Shute's On The Beach. I first read Shute's 1957 novel in the mid-1970s or so.
Warning: this post contains spoilers for both Nevil Shute's On The Beach and Cormac McCarthy's The Road.
Looking through DVDs in the public library just before Thanksgiving I came across the 1959 film version of On The Beach, starring Gregory Peck, Ava Gardner, and Fred Astaire. I watched, and found it both riveting and insufferable: riveting for Shute's core idea -- that life on earth might well be annihilated by radiation, in inexorable slow-motion, following a nuclear war; and insufferable for the entirely unlikely decency, civility, and obliqueness with which the characters and their society meet the end of life and civilization. I didn't remember thinking the book insufferable the first time I read it, but we're talking long-distance memory here.
I returned the DVD to the library the day after Thanksgiving, and promptly violated my intention to observe Buy Nothing Day by stopping in at a used bookshop and picking up a copy of Shute's novel. In my mind I was already comparing Shute's story to Cormac McCarthy's The Road, published half a century after On The Beach and covering parallel ground in a very different register.
I'd say On The Beach as a novel is somewhat less airbrushed than the film in its approach to the question of life ending on earth with a post-bang whimper. Perhaps the censors had something to do with the film's attenuated portrayal of the characters' darker moments; perhaps it was the nature of movies in the late 1950s. I was interested enough to keep turning pages; I finished the novel before I had to return to work on Monday. And yet ... as Shute portrayed it, the tidiness attending the death of the last great city left in the world struck me as ridiculous.
It wasn't just the way people of Melbourne waited peaceably -- drinking more than I might, but, hey, the novel is set in Australia. Take, for example, the primness with which radiation sickness is shown. People are sick "off camera," even in the book; they hide their symptoms from one another out of politesse, and hide behind gentle words -- "tired," "spasm," "ill," "trembling" -- a superficial glance and then quickly looking away. Take, for another example, that emotional responses to impending doom range from disciplined self-control to denial to the occasional, brief, and stilted outburst. I use the word "range" with reservations.
Sure, it's credible that a government might give out cyanide pills to spare people the agony of radiation sickness, as Shute imagined; and that intervention might well spare a city some of the chaos one reads in descriptions, say, of plague in 14th century Europe. But it doesn't sufficiently explain Shute's clean city streets in North America -- little out of order, hardly any evidence of looting, no dead bodies -- when the novel's characters pay an exploratory visit by submarine. Nor does it explain the conveniently uniform disinterest the novel's principal characters evince in examining the bodies of the dead -- characters whose role in their society is to understand what has happened in areas affected by radiation in order to plan for what's coming. Best to let that ugly stuff alone, Shute seems to have supposed, so the reader need not be bothered with the gruesome reality of mass death.
Vague sops are thrown to the reader to explain Shute's omissions: likening human behavior to a dog's tendency to slink off to die in hiding, or dwelling on how little of a coastal city a submarine's periscope can take in as the sub lies offshore. These, in my reading, don't answer Shute's portrayal of pristine landscapes and cityscapes in the wake of the death of every living being in an entire landmass.
And then there were the novel's women.
Moira has retreated into drink and sexual abandon; yet, as she comes out of her hedonistic fog, disappointment at shallow and fruitless relationships doesn't get in the way of faultless restraint in pursuing American submarine captain Dwight Tower, whose fantasy attachment to his annihilated family is a wee bit unbalanced. Mary Holmes' deep refusal to see the writing on the wall is just plain wacky, and her husband Peter indulges her wackiness: She lived in the dream world of unreality, or else she would not admit reality; he did not know. In any case, he loved her as she was. How sweet, dissociated, and in keeping with 1950s bourgeois decorum.
All that unbalance and wackiness suggests that there's something fundamentally unsettled in these characters who are holding it all together on the surface (and how could there not be?) ... but Shute's masking of this novel's main event -- the emotional responses of Dwight, Peter, Moira, Mary, and the others to the end of their lives and their world -- struck me as strange to the point of silliness, and thoroughly distracting.
In the end, Shute gets it right about human fallibility, though he was more of an optimist in 1957 than I can manage nowadays.
Here's Peter again, in the final chapter of the book, mourning the lost opportunity to educate humanity out of the "silliness" that led to a war whose after-effects are destroying life on our planet: "Newspapers," he said. "You could have done something with newspapers. We didn't do it. No nation did, because we were all too silly. We liked our newspapers with pictures of beach girls and headlines about cases of indecent assault, and no government was wise enough to stop us having them that way. But something might have been done with newspapers, if we'd been wise enough." Mary responds: She did not fully comprehend his reasoning. "I'm glad we haven't got newspapers now," she said. "It's been much nicer without them."
More about human fallibility -- and people's receptivity to hard, unvarnished truths -- in my next blog post.
Here I'll conclude with a brief nod to Cormac McCarthy's The Road (2006). In this novel too, some cataclysm -- possibly war, the author doesn't say -- has devastated life on Earth. Not everybody's dead yet, but things are looking pretty grim. There is nothing growing that can be eaten. People scavenge, and there's not much left. A man and his son are making their way to the coast, certain that they won't survive another winter in the interior of the North American continent. Of the few survivors left, most seem to be living in nomadic, cannibalistic tribes, enslaving their human prey and driving them like cattle before eating them. The protagonist dies toward the end of the book. His son, a reader is led to believe, won't last much longer.
It's a very different take on apocalypse from Shute's gallant, duty-bound, brandy- and port- and whiskey-swilling Americans and Australians. Drawn out for years, as the death of a planet might actually be, McCarthy's apocalypse lacks the convenience of a concise final act. McCarthy tends more toward a take on humankind that emerged during the blackout riots in New York City in July 1977; or war in the Balkans or in Rwanda in the 1990s, with attendant rapes, hacked limbs, ethnic genocide, and other forms of anarchically depraved inhumanity; or the Armenian Genocide following World War I; or the holocausts of Hitler, Stalin, and Mao; or the killing fields of the Khmer Rouge.
To me, McCarthy's depiction of the end of human life and civilization seems more likely to foretell an actual future. Sure, there will be people who behave well in even the most extreme circumstances (as McCarthy's protagonist and his son do, more or less). Pretty much everyone I know hopes that in dire straits s/he would turn out to be one of these, one of "the good guys," one of those who, in McCarthy's words, "carry the fire." But when the seas rise and the plagues spread and the radiation burns -- when food and water and shelter run out -- life won't just go on 'normally' until the last of us gently subsides. To imagine it might seems willfully blind, misguided, utopian. Cf. the history of the twentieth century, excerpted above. And let's not forget that understanding of the fragility of humane behavior is baked into nearly every theology on earth: everybody's got some concept or other of hell.
Nevil Shute's On The Beach deracinated nastiness from the human culture he portrayed, except in a remote and abstract set of off-stage actors -- those "silly" people who started the war to end the world. Yet in its time, On The Beach had a devastating effect on readers, as its author clearly intended: "The most haunting evocation we have of a world dying of radiation after an atomic war," said the New York Times (according to Amazon's collection of editorial reviews). On The Beach introduced the reality, taken for granted today, that humankind now holds the power to destroy most, and perhaps all, complex life-forms on Earth. And the possibility that we might just do it.
If Cormac McCarthy had published The Road in 1957 instead of a half-century later, perhaps no one would have read his novel. In that era, The Road might have been judged pornographically raw. In 2007, the author was awarded the Pulitzer Prize.
On The Beach is a classic, no doubt. But it is so deeply embedded in its time and culture, in its fetish for order and authority, that I can't read it except as an anachronism.
The Road may be a difficult novel to get through for some -- one Facebook friend, after finishing McCarthy's book, posted a status asking why anyone would even write such a depressing story -- but I think it depicts a future that seems credible to open-eyed readers in the 21st century.
Warning: this post contains spoilers for both Nevil Shute's On The Beach and Cormac McCarthy's The Road.
Looking through DVDs in the public library just before Thanksgiving I came across the 1959 film version of On The Beach, starring Gregory Peck, Ava Gardener, and Fred Astaire. I watched, and found it both riveting and insufferable: riveting for Shute's core idea -- that life on earth might well be annihilated by radiation, in inexorable slow-motion, following a nuclear war; and insufferable for the entirely unlikely decency, civility, and obliqueness with which the characters and their society meet the end of life and civilization. I didn't remember thinking the book insufferable the first time I read it, but we're talking long-distance memory here.
I returned the DVD to the library the day after Thanksgiving, and promptly violated my intention to observe Buy Nothing Day by stopping in at a used bookshop and picking up a copy of Shute's novel. In my mind I was already comparing Shute's story to Cormac McCarthy's The Road, published half a century after On The Beach and covering parallel ground in a very different register.
I'd say On The Beach as a novel is somewhat less airbrushed than the film in its approach to the question of life ending on earth with a post-bang whimper. Perhaps the censors had something to do with the film's attenuated portrayal of the characters' darker moments; perhaps it was the nature of movies in the late 1950s. I was interested enough to keep turning pages; I finished the novel before I had to return to work on Monday. And yet ... as Shute portrayed it, the tidiness attending the death of the last great city left in the world struck me as ridiculous.
It wasn't just the way people of Melbourne waited peaceably -- drinking more than I might, but, hey, the novel is set in Australia. Take, for example, the primness with which radiation sickness is shown. People are sick "off camera," even in the book; they hide their symptoms from one another out of politesse, and hide behind gentle words -- "tired," "spasm," "ill," "trembling" -- a superficial glance and then quickly looking away. Take, for another example, that emotional responses to impending doom range from disciplined self-control to denial to the occasional, brief, and stilted outburst. I use the word "range" with reservations.
Sure, it's credible that a government might give out cyanide pills to spare people the agony of radiation sickness, as Shute imagined; and that intervention might well spare a city some of the chaos one reads in descriptions, say, of plague in 14th century Europe. But it doesn't sufficiently explain Shute's clean city streets in North America -- little out of order, hardly any evidence of looting, no dead bodies -- when the novel's characters pay an exploratory visit by submarine. Nor does it explain the conveniently uniform disinterest the novel's principal characters evince in examining the bodies of the dead -- characters whose role in their society is to understand what has happened in areas affected by radiation in order to plan for what's coming. Best to let that ugly stuff alone, Shute seems to have supposed, so the reader need not be bothered with the gruesome reality of mass death.
Vague sops are thrown to the reader to explain Shute's omissions: likening human behavior to a dog's tendency to slink off to die in hiding, or dwelling on limits of a submarine's periscope to view much of coastal cities as a sub lies offshore. These, in my reading, don't answer Shute's portrayal of pristine landscapes and cityscapes in the wake of the death of every living being in an entire landmass.
And then there were the novel's women.
Moira has retreated into drink and sexual abandon; yet, as she comes out of her hedonistic fog, disappointment at shallow and fruitless relationships doesn't get in the way of faultless restraint in pursuing American submarine captain Dwight Towers, whose fantasy attachment to his annihilated family is a wee bit unbalanced. Mary Holmes' deep refusal to see the writing on the wall is just plain wacky, and her husband Peter indulges her wackiness: She lived in the dream world of unreality, or else she would not admit reality; he did not know. In any case, he loved her as she was. How sweet, dissociated, and in keeping with 1950s bourgeois decorum.
All that unbalance and wackiness suggests that there's something fundamentally unsettled in these characters who are holding it all together on the surface (and how could there not be?) ... but Shute's masking of this novel's main event -- the emotional responses of Dwight, Peter, Moira, Mary, and the others to the end of their lives and their world -- struck me as strange to the point of silliness, and thoroughly distracting.
In the end, Shute gets it right about human fallibility, though he was more of an optimist in 1957 than I can manage nowadays.
Here's Peter again, in the final chapter of the book, mourning the lost opportunity to educate humanity out of the "silliness" that led to a war whose after-effects are destroying life on our planet: "Newspapers," he said. "You could have done something with newspapers. We didn't do it. No nation did, because we were all too silly. We liked our newspapers with pictures of beach girls and headlines about cases of indecent assault, and no government was wise enough to stop us having them that way. But something might have been done with newspapers, if we'd been wise enough." Mary responds: She did not fully comprehend his reasoning. "I'm glad we haven't got newspapers now," she said. "It's been much nicer without them."
More about human fallibility -- and people's receptivity to hard, unvarnished truths -- in my next blog post.
Here I'll conclude with a brief nod to Cormac McCarthy's The Road (2006). In this novel too, some cataclysm -- possibly war, the author doesn't say -- has devastated life on Earth. Not everybody's dead yet, but things are looking pretty grim. There is nothing growing that can be eaten. People scavenge, and there's not much left. A man and his son are making their way to the coast, certain that they won't survive another winter in the interior of the North American continent. Of the few survivors left, most seem to be living in nomadic, cannibalistic tribes, enslaving their human prey and driving them like cattle before eating them. The protagonist dies toward the end of the book. His son, a reader is led to believe, won't last much longer.
It's a very different take on apocalypse from Shute's gallant, duty-bound, brandy- and port- and whiskey-swilling Americans and Australians. Drawn out for years, as the death of a planet might actually be, McCarthy's apocalypse lacks the convenience of a concise final act. McCarthy tends more toward the take on humankind that emerged during the blackout riots in New York City in July 1977; or war in the Balkans or in Rwanda in the 1990s, with attendant rapes, hacked limbs, ethnic genocide, and other forms of anarchically depraved inhumanity; or the Armenian Genocide during World War I; or the holocausts of Hitler, Stalin, and Mao; or the killing fields of the Khmer Rouge.
To me, McCarthy's depiction of the end of human life and civilization seems more likely to foretell an actual future. Sure, there will be people who behave well in even the most extreme circumstances (as McCarthy's protagonist and his son do, more or less). Pretty much everyone I know hopes that in dire straits s/he would turn out to be one of these, one of "the good guys," one of those who, in McCarthy's words, "carry the fire." But when the seas rise and the plagues spread and the radiation burns -- when food and water and shelter run out -- life won't just go on 'normally' until the last of us gently subsides. To imagine it might seems willfully blind, misguided, utopian. Cf. the history of the twentieth century, excerpted above. And let's not forget that understanding of the fragility of humane behavior is baked into nearly every theology on earth: everybody's got some concept or other of hell.
Nevil Shute's On The Beach deracinated nastiness from the human culture he portrayed, except in a remote and abstract set of off-stage actors -- those "silly" people who started the war to end the world. Yet in its time, On The Beach had a devastating effect on readers, as its author clearly intended: "The most haunting evocation we have of a world dying of radiation after an atomic war," said the New York Times (according to Amazon's collection of editorial reviews). On The Beach introduced the reality, taken for granted today, that humankind now holds the power to destroy most, and perhaps all, complex life-forms on Earth. And the possibility that we might just do it.
If Cormac McCarthy had published The Road in 1957 instead of a half-century later, perhaps no one would have read his novel. In that era, The Road might have been judged pornographically raw. In 2007, the author was awarded the Pulitzer Prize.
On The Beach is a classic, no doubt. But it is so deeply embedded in its time and culture, in its fetish for order and authority, that I can't read it except as an anachronism.
The Road may be a difficult novel to get through for some -- one Facebook friend, after finishing McCarthy's book, posted a status asking why anyone would even write such a depressing story -- but I think it depicts a future that seems credible to open-eyed readers in the 21st century.
Thursday, December 2, 2010
Fifteen authors: reflections on a Facebook 'you show me yours' list
The challenge came over my Facebook wire the Wednesday before Thanksgiving: "List fifteen authors, including poets and playwrights, who have influenced you and will always stick with you. Do this in no more than fifteen minutes. Tag at least fifteen friends, including me, because I'm interested in seeing what authors my friends choose. [...]"
I bit. Here are the authors I posted, alphabetically by name:
- Pat Barker
- Carlos Castaneda
- J.M. Coetzee
- Umberto Eco
- Hermann Hesse
- James Joyce
- Ian McEwan
- John Milton
- Haruki Murakami
- W.G. Sebald
- Will Shakespeare
- Gary Snyder
- J.R.R. Tolkien
- Mark Twain
- William Butler Yeats
Following the rules, I cobbled together my list in a very few minutes. I took a few minutes to browse my bookshelves, which skewed the results toward authors whose work I happen to have handy. The quantity limit guaranteed that I'd leave off a load of authors who have influenced me and will always stick with me, but the time constraint meant there was little room to measure and consider which of those other authors should have bubbled up into the top fifteen slots. And there's the circumstantial factor too: I compiled this list at a certain hour on a certain day. If you ask me next week or next month, I might come up with a very different set.
Disclaimers aside, though, the list typed above is the one I came up with on the spur of the moment. Circling back, here are some thoughts about why each author came to mind.
Pat Barker - This author's Regeneration trilogy (the other two novels are The Eye in the Door and The Ghost Road) brought the human cost of World War I to vivid, harrowing life for me. Her novel Life Class, also about the first world war, read less powerfully to me but remains an important element in my mental map of a war that ended forty years before I was born yet whose human and geopolitical effects continue to reverberate in this century. Barker is certainly not the only author of vivid work that circles WWI: Sebastian Faulks' Birdsong comes to mind; and the novels of Joseph Roth ... Louis de Bernières, Ernest Hemingway, T.E. Lawrence ... but Barker is the one whose work came most vividly to my mind the day before Thanksgiving.
Carlos Castaneda - Was the author really a disciple of a shaman named Don Juan? Or any shaman at all? I never really cared. Castaneda was among the most evocative of several authors to extend the borders of reality, as I conceive(d) it, out into metaphysics as I grew into adulthood (yes, of course there was more to it than reading...). Also cf. Hermann Hesse, below; and Michael Murphy, author of Jacob Atabet, who didn't make this list.
J.M. Coetzee - Lucid prose, brutal geographical and human landscapes, impassable moral conundrums, deep sympathies running through all of it. There's a reason Coetzee was awarded the Booker Prize twice, and the Nobel Prize for Literature.
Umberto Eco - Eco has proven that being a medievalist can be fun. The Name of the Rose was the best book I've ever read about a library. I usually find obsession with mysterious secret societies (and their close relatives, conspiracy theories) to be numbingly silly, but Foucault's Pendulum kept me awake for nights on end.
Hermann Hesse - There's something faintly embarrassing about my youthful enthusiasm for Hesse's orientalist fantasias, but there you have it. Like Castaneda and Eco on this list, Hesse was instrumental in opening my eyes to worlds behind the world's surfaces.
James Joyce - I blogged about Ulysses during Banned Books Week this year ... ever since I read his work during my years as a student, Joyce has stood, in my mind, for a fundamental shift in literature, in which the conception of heroism in western "high" culture was radically democratized.
Ian McEwan - Incandescent sentences. McEwan writes with a scalpel, I'm sure of it. His novels' endings often disappoint, but nothing could keep me from his delicious prose.
John Milton - I can't think of John Milton without remembering the late Julian Boyd, one of the most inspiring professors with whom I had the privilege to study at UC Berkeley. Professor Boyd was ill during the term I took his Milton seminar. I learned afterward, when I visited during the next term's office hours to thank him for an unforgettable introduction to Paradise Lost, that he was bedridden the whole quarter, and only dragged himself to campus by force of will when it was time to teach. He remembered our class only as a series of incoherent rants. But out of his riveting lectures, lunatic or not, I came to appreciate how rigorous application of etymological resonance could slither past conscious 'defenses' to work an author's moral instruction on a reader.
Haruki Murakami - This author's melancholy nostalgia, the fluid borders he draws between dream and reality, his fascination with Western pop culture, his fearlessness in depicting human depravity without stooping to dissociated voyeurism -- not to mention his love of cats -- have compelled my attention repeatedly, from The Wind-Up Bird Chronicle to Norwegian Wood to Kafka on the Shore. His rendering of atrocities during Japan's occupation of China is indelibly burned into my literary and political memory; and Kafka Tamura's wonderfully intricate search for his origins and himself has entranced me through multiple readings and one of the best reading group discussions ever.
W.G. Sebald - Sebald's meditative peregrinations -- across Europe, time, and all the world -- evoke as powerfully as any modern writer's work the individual's helpless transfixion before the tides of history. The grainy black and white photographs interleaved through his prose (is it history, memoir, fiction, journalism?) deepen the mystery and amplify the dreaminess of Sebald's exacting observation. I've blogged about Sebald's work in Time, History, and Human Forgetting and More on place in fiction.
Will Shakespeare - I don't need to explain this one, right?
Gary Snyder - In sharp observation and incisive analysis, in poetics and in politics, with a keen eye and a wry wit, Snyder teaches that human culture is a rich melodic line amid a symphony of parts and players in the great music of being. His Back on the Fire (essays) was my top pick in an April blog post, Books everyone should read.
J.R.R. Tolkien - Exposing me to Professor Boyd's lessons on Milton (see above) before I was old enough to understand them, Tolkien gave the gift of myth reimagined to the latter half of the twentieth century, casting deeply imprinted stories (Norse mythology, Beowulf, Wagner's Der Ring des Nibelungen) into philologically rich yet accessible and gripping adventures in a fantastic and oddly, enchantingly, wishfully plausible world. Condolences to George Lucas and J.K. Rowling, who failed (in this viewer's and reader's opinion) even to come close.
Mark Twain - Is it the bitter humor? The fantabulous hyperbole? The skewering of provincial America? A rich rendering of boyhood longings for independence and adventure? It's everything. To read Twain as a boy is to learn that books can make one giddy, and to relish the ride.
William Butler Yeats - Lush verse, elitist arrogance, hopeless romanticism, a view of Ireland through mythically-tinted lenses ... but, really, it's the finely tuned language that echoes, year after year, in the mind's library.
And who else might have made the cut on a no-limits list? It's hardly possible to be exhaustive. But authors offered by my Facebook friends gave me a lot of ideas.
The professor of English who recruited me into this Fifteen Authors game listed a number of children's writers among the grow'd-up sort, several of whom certainly influenced me in ways that will always stick with me: Dr. Seuss, Roald Dahl, Beverly Cleary.
Other FB friends listed authors I might have included had they come to mind in the moment ... Annie Dillard, Gabriel Garcia Marquez, Alexander Pope, Tom Robbins, Henrik Ibsen.
Then there's the science & speculative fiction read in my teens and twenties: Isaac Asimov, Margaret Atwood, Ray Bradbury, Arthur C. Clarke, Philip José Farmer, Robert Heinlein, Ursula Le Guin.
Not to mention Homer, Ovid, Nathaniel Hawthorne, Charles Dickens, Joseph Roth (whom I did mention, actually, above), C.P. Cavafy, T.S. Eliot, Thomas Wolfe, George Orwell, Kurban Said, Walker Percy, José Saramago .........
I mean, really. Fifteen?
Labels: books, reading, technology and literature
Monday, November 29, 2010
Google signals its next social media move
The Voice of Google published announcements in recent months that functionality in Google Groups will be pared back radically come January. This is not important enough to make any newspaper's front page, but I do think it's a signal of the company's impending moves in the social media space -- of the next battle in the search giant's war with Facebook.
Warning: this post starts with a shaggy dog story, but I've marked it off with a sub-heading so you can skip down to the rumor and speculation bits if you'd rather...
Shaggy Dog Story: functionality stripped from Google Groups
My on-line writer's coven uses Google Groups as a platform for discussion, critique, and other sorts of feedback on the work-in-progress of its members. Google's offering meets our needs pretty well. It allows us to restrict content-access to our group's members, upload files (usually the pages offered up for critique), and link out to other useful Google apps, like Calendar (where we maintain our schedule) and Docs (where we evolve and publish the group's guidelines).
So I was kind of irritated when Google threw an announcement over the transom in late September notifying us that file upload would no longer be supported. My irritation didn't last long, because there was a workable, if somewhat awkward, fix: we created a Google Site, and -- in advance of the scheduled January shutoff of Google Groups' support for files -- began to upload to the new Site instead. (We're not uploading to Google Docs because these are not, for the most part, files that we edit collaboratively: they're pages-in-progress that writers share with other group members.)
The "Welcome Message" feature that permits linking to our posting schedule on Google Calendar easily accommodates another link from Google Groups to the new file upload URL. Groups remains the core of our on-line activity, from which we link to the ancillary stuff. Problem solved.
Only it isn't.
A few weeks after the first pare-back bulletin, Google announced that the Google Groups "Welcome Message" would go away too: "Google Groups will no longer be supporting the Welcome Message feature. Starting January 13, you won't be able to edit your welcome messages, but you will still be able to view and download the existing content."
The "Welcome Message" is a feature that allows a group to customize its Google Groups home page. We use it in a very thin, but essential way, as I mentioned above: to link to all the other Googlicious products and features we use, like Calendar, Sites, and Docs.
So Google Groups is losing the ability to upload files, create web pages, and even to have a "home" page? It's being stripped down to a bare-bones discussion forum, supporting only member management & mailing list / forum functionality? Okay, I thought. Such is the price of using services offered for free. If you get more than you pay for you can't say much about the nature of the gift. So, I figured, we can adjust to this change too. We'll just move the center of gravity of our group from Google Groups to Google Sites.
But wait! Turns out there's no supported gadget for integrating discussion forums from Google Groups into a Google Site. Not good.
Why, I wondered, hasn't Google provided integration? Looking at the Google Sites forums it was clear that a number of people had asked for it, and had even fashioned some clumsy work-arounds. I had a sneaking suspicion what was up, but I thought I'd ask ... and complain a bit ... by starting a new thread in the Google Sites user forum. In Groups discussion widget for Sites integration I asked for real, supported, unclumsy integration -- which, despite the workarounds and unsupported gadgets other Sites users have suggested or provided, remains an ugly hole in the Google appscape.
The thread drew some posts from other complainants, but, to date, no response from the Google-plex itself.
Birth pangs of Google's latest foray into social-media?
I suspect that stripping functionality from Google Groups is part of a repositioning of Google's offerings. That repositioning, I'd say, is part of the search giant's next steps in the ongoing battle against Facebook for dominance of eyeballs-on-the-intertubes.
In late June of this year, rumors began to appear of a new offering from the biggest search engine on Earth. "Google Me" was the rumored offering's name (which many found pretty silly-sounding, as I do). To many technology-watchers it sounded like a Facebook rival in the making. The rumor mill perked up again as hints were dropped that "Google Me" might make its debut this fall.
The nature of this new offering began to emerge from the fog a couple months ago, as PC World reported, when CEO Eric Schmidt announced in mid-September that "Google will be adding 'a social layer' into its suite of search, video and mapping products."
A couple days later, the same PC World journalist, Brennon Slattery, cited unnamed sources who described that layered approach to Google's social media strategy -- something quite different from a one-stop social media destination like Facebook: "Google Me will produce an activity stream generated by all Google products. Google Buzz has been rewritten to be the host of it all. And the reason Google Buzz isn't currently working in Google Apps is because they'll use the latest Buzz to support the activity stream in Apps... All Google products have been refactored to be part of the activity stream, including Google Docs, etc. They'll build their social graph around the stream."
These hints became more solid earlier this month when, as the U.K.'s Telegraph reported, Hugo Barra -- Google's director of mobile product management -- said: "We are not working on building a traditional social network platform. We do think 'social' is a key ingredient ... but we think of it more broadly. We think of social as an ingredient rather than a vertical platform."
And hence: an explanation for what's happening to Google Groups this fall. Stripping away functionality is, I'd wager, part of what Slattery's unnamed sources described as a refactoring of "all Google products" to fit, as an "ingredient," into the ill-named "Google Me."
If Google Groups delivers only discussion forum functionality, it is likely to fit more seamlessly into Google's new modes for digital interaction -- as part of a larger constellation of engagement between Google users.
I'm guessing that Groups' stripped-down functionality isn't well supported in Google Sites for a reason. Google's plan, I think, is to steer users of current Google offerings (like my writing group) into "activity streams" that are part of its new social-media strategy ... not into that flat old static web page thing that Sites enables.
Will it work out for the better as far as my writing group is concerned?
Time will tell. It's clear that Google wants to do better than Orkut in the social-media space that Facebook currently dominates. How that will fall out for mere mortals is anybody's guess.
Labels: Google, technology, writing
Thursday, November 25, 2010
Eating insects
(or: Does entomophagy bug you?)
A week ago today I had a solo evening free in Cambridge, Massachusetts; I'd come east for a series of work-related meetings.
I took the T from my hotel to Harvard Station, walked around the campus a bit, and then around Harvard Square. I picked up a free newspaper and found a decent-looking Thai restaurant. The chicken curry with mango looked pretty good, and when the server brought it out it tasted pretty good too. One of a pair of young Japanese women sitting at the next table turned my way to ask what my dish was called. She spoke English with a thick accent, and I'd heard her order plates of drunken noodles for both herself and her friend. The rest of their conversation had been in Japanese. I wondered whether drunken noodles was the only dish she recognized on the English-language menu, and whether she was planning to order whatever I was having next time around.
The newspaper I'd picked up to keep me company over dinner was a weekly called The Phoenix, one of many freebies in stands along the streets surrounding the campus. An article featured on the cover had caught my eye: Eat me: Delicious insects will save us all. The lead paragraph: "Insects are a more sustainable protein source than cows or pigs, they're more nutritious, and they're being taken seriously. The United Nations has thrown its weight behind insect consumption, and more and more people are recognizing that bugs could be a solution to a host of emerging problems, including world hunger and environmental woes."
It's Thanksgiving today, so why not share some of what I learned from The Phoenix on this topic, and, after returning to my high-speed-internet-equipped hotel room, on the intertubes too.
Fascinating facts:
- The world's total meat supply quadrupled between 1961 and 2007, during which time per capita consumption more than doubled, according to the New York Times in 2008.
- "An estimated 30 percent of the earth’s ice-free land is directly or indirectly involved in livestock production, according to the United Nation’s Food and Agriculture Organization, which also estimates that livestock production generates nearly a fifth of the world’s greenhouse gases — more than transportation," according to that same NYT article.
- "To produce one kilogram of meat, a cricket needs 1.7 kilogram of feed -- significantly less than a chicken (2.2), pig (3.6), sheep (6.3), and cow (7.7)." This according to Arnold van Huis, an entomologist based at Wageningen University in the Netherlands, in a recent opinion-piece in The Scientist, to which the article in The Phoenix called my attention.
- "Additionally, the edible proportion after processing is much higher for insects -- it's 80 percent in crickets -- than for pork (70 percent), chicken (65 percent), beef (55 percent), and lamb (35 percent)." Ibid.
- The UN is really into this insects-as-food thing. Check out the Edible forest insects page, complete with video, on the site of the Food and Agriculture Organization (FAO) of the United Nations.
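Putting the feed-conversion ratios and edible proportions quoted above side by side gives a rough sense of how lopsided the comparison is. Here's a minimal back-of-the-envelope sketch in Python; the numbers are the ones quoted above, and the only assumption (mine, not van Huis's) is that the feed figure for each animal can be paired directly with its edible proportion.

```python
# Back-of-the-envelope comparison using the figures quoted above:
# kg of feed needed per kg produced, and the edible proportion of
# that kg after processing. Pairing the two is my own shortcut,
# and the pork/beef percentages are applied to pig/cow respectively.
feed_per_kg = {"cricket": 1.7, "chicken": 2.2, "pig": 3.6, "cow": 7.7}
edible_fraction = {"cricket": 0.80, "chicken": 0.65, "pig": 0.70, "cow": 0.55}

for animal in feed_per_kg:
    # feed required per kilogram of *edible* product
    feed_per_edible_kg = feed_per_kg[animal] / edible_fraction[animal]
    print(f"{animal:>8}: {feed_per_edible_kg:.1f} kg feed per edible kg")
```

By that crude reckoning a cricket needs a bit over 2 kg of feed per edible kilogram, versus roughly 14 kg for a cow -- more than six times as much.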
Arnold van Huis, the entomologist quoted above, really gets around. The UN's FAO site links to his research group in Holland. The U.K.'s Guardian published an article in August titled Insects could be the key to meeting food needs of growing global population and van Huis seemed to be the expert behind the curtain in that article too. But this isn't just some Dutch scientist's fetish. Follow the Guardian link, and check out the photo in that article of skewered scorpions waiting for hungry customers at a food stall in Beijing. I saw virtually the same scene when I visited Wangfujing market in that city about five years ago. This insect-eating business is for real. No, I didn't sample any myself ... in Beijing I stuck to the bin tang hu lu, skewers of candied hawthorn fruits dipped in a sugar syrup that hardens to a sweet, crackly carapace. Delicious.
The article in The Phoenix gives recipes for Roasted snack crickets à la carte, Mealworm Chocolate Chip Cookies, and a Mealworm Stir-Fry. The cricket recipe is very simple. There are only two ingredients: live crickets and salt. The Mealworm Stir-Fry was pictured, in color, in the print copy of the newspaper. Honestly? My mango curry was easier on the eyes.
Happy Thanksgiving!!
Thanks to Xosé Castro for the image of deep-fried crickets from a market near Chiang Mai, Thailand.
Monday, November 22, 2010
Happiness is a warm focus
Sometimes I apologize to friends or colleagues for being unable to do more than one thing at a time. As if it's a deficiency, and proves I'm stupid. In my heart-of-hearts, though, I don't feel sorry. I like focusing. And last week, on a flight to Boston, as I took off my noise-canceling headphones and came up for air from a focused edit of a report my project is preparing for a funding agency, our project director (who was sitting across the aisle) passed over a section of that morning's New York Times. He was showing me an article titled When the mind wanders, happiness also strays.
The article riffs off a study done by Harvard psychologists Matthew A. Killingsworth and Daniel T. Gilbert, published in Science magazine. The study showed that when people's minds wander, they tend to wander into unhappy territory. Therefore, people are happier if they stay focused on what's before them. They proved this with an iPhone app, naturally, because there's an iPhone app for everything.
But, hey -- sarcasm aside, really -- that means I'm not stupid, right? I'm happy!
Here's my favorite paragraph from the NYT article. It's about what people who aren't psychologists have to say on the topic of focus: "What psychologists call 'flow' -- immersing your mind fully in activity -- has long been advocated by nonpsychologists. 'Life is not long,' Samuel Johnson said, 'and too much of it must not pass in idle deliberation how it shall be spent.' Henry Ford was more blunt: 'Idleness warps the mind.' The iPhone results jibe nicely with one of the favorite sayings of William F. Buckley Jr.: 'Industry is the enemy of melancholy.'"
Perhaps you're familiar with the concept of multitasking. I don't mean the kind computers do; I mean the kind humans do, or are supposed to do according to certain management consultants. This is one of those brain-burps with staying power that purport to be about getting things done. The kind of thing that seems to ripple regularly out of business schools and into the management 'culture' of unsuspecting organizations, never mind that pretty much everybody else seems to know better.
Here's the three-sentence Wikipedia definition: "Human multitasking is the performance by an individual of appearing to handle more than one task at the same time. The term is derived from computer multitasking. An example of multitasking is listening to a radio interview while typing an email."
Later in the Wikipedia article, author and psychiatrist Edward Hallowell is quoted describing multitasking as a "mythical activity in which people believe they can perform two or more tasks simultaneously."
I'm with Dr. Hallowell on that one. I know it's mythical to me. The only 'multitasking' I'm good at is listening to music while I cook, clean, or drive. I can do those tasks while distracted because I've been doing them for so many years that they no longer involve much cognition: they're mostly reflex, muscle memory, sense-and-response. Still, I'd bet that if there were a reliable way to measure, somebody could prove that I don't cook, clean, or drive as well while listening to music as I do when the radio's off.
Managers seem to be hardwired to respond to the myth of multitasking. There was a time in the mid-1990s when every job listing I saw called out multitasking as a desirable trait in prospective employees (and I saw a lot of job listings then, because I was working in UC Berkeley's Human Resources department). Answer e-mail, juggle phone calls, edit a report, manage a few student interns, balance a budget, design a web page, analyze a contract, draft a meeting agenda, normalize a database. A day in the life...
It's a great fantasy if you're a manager: hire one worker, get multiple workers' productivity. The fantasy gets even sillier when it involves a worker whose value is directly tied to her ability to focus and think deeply about a problem. Imagining that such a worker can have her day sliced into modular chunks that can be plugged into any number of projects and problems is ... well, it's just not real.
There's all kinds of mythbusting out on the intertubes about multitasking, so it's hardly necessary for me to do a literature review on the topic. But I will give a shout-out to one of the most practical and thoughtful computer programmers who Writes About Stuff on the intertubes, Joel Spolsky, of "Joel on Software" fame. Spolsky wrote -- nearly ten years ago, in an article titled Human Task Switches Considered Harmful -- about the friction inherent in giving computer programmers more than one project to work on at a time. Spolsky knows a lot about this, because giving programmers tasks to work on is his job. I believe what he writes in this article because it is exactly aligned with my own experience (as a computer programmer, as a colleague of computer programmers, and otherwise). To wit:
"The trick here is that when you manage programmers, specifically, task switches take a really, really, really long time. That's because programming is the kind of task where you have to keep a lot of things in your head at once. The more things you remember at once, the more productive you are at programming. A programmer coding at full throttle is keeping zillions of things in their head at once: everything from names of variables, data structures, important APIs, the names of utility functions that they wrote and call a lot, even the name of the subdirectory where they store their source code. If you send that programmer to Crete for a three week vacation, they will forget it all. The human brain seems to move it out of short-term RAM and swaps it out onto a backup tape where it takes forever to retrieve."
And now, according to a bunch of Harvard psychologists and their iPhone app, that same human brain, deprived of the focus it really really wants, is in a bad mood.
Who needs it? Don't multitask ... be happy.
Thanks to Andrew McMillain & Wikimedia Commons for the image of The Thinker at the San Francisco Legion of Honor.
Labels: culture, technology
Thursday, November 18, 2010
Drafting vs. editing
I'm in a transition mode. Until a couple of weeks ago I'd been revising (and revising and revising and revising) my novel project, Consequence, for ... well, let's just say it's been a few years since I "finished" a first complete draft of the mss. Because it's how I work, I was also editing as I drafted (sometimes going back over a chapter or two, sometimes taking a deep breath and running through from the beginning to wherever I was). Because it's how editing goes, there was always a bit of drafting -- new chapters, new scenes -- during the editing phase. Red pen, black pen.
This month I launched into an agent-querying phase. Now, there's no guarantee I'm done with Consequence. In fact, the best I can hope for is that an agent will take a shine to the manuscript and ask me to make changes. Then an editor will take a shine to the project and ... ask me to make more changes. The worst I can expect? That nobody in the precarious world of 21st century publishing will bite, and I'll have to decide whether and how to have at it again.
In the meantime, I'm taking index cards and post-its and journal entries I've written over the past ten or so months about the next novel project I'm contemplating, and putting flesh on what are currently some fairly skimpy bones. Do I have a name for this next project? Well, yes, sort of, but it's so provisional that I'm not going to say. Not yet. Wouldn't want to jinx it.
Drafting vs. editing: what I can tell you is that what I'm doing with index cards and post-its and journal entries now feels really different from what I was doing with manuscript pages in October.
I do like the sudden wealth of possibility -- I mean, it's fiction, anything could happen -- but drafting does feel oddly free-form after having a built structure to work in for quite a long time. I've found another thing to appreciate about blogging, too: facing the blank blog-post page a couple of times a week for most of this year means this free-form business isn't altogether foreign as I circle back to it.
But it feels soooooooooooooooooooooo slow! Thinking my way slowly slowly slowly into a new world, one detail at a time, one character's features, one scene's outline or tone, one turn of the plot, a bit of backstory ... the sudden change of tempo foregrounds the easily forgotten truth that Consequence was built up out of tiny accretions over years. Did I slip into a fantasy that it sprang up overnight, fully marshaled to face the red pen?
I canvassed my writing group last week about whether they like drafting or editing better, and in the course of some lively exchange got a mix of responses. One of our group said unequivocally that he always prefers drafting to editing, that editing is "a slow slog" for him. Others riffed on the way the process of building a work of long-form fiction involves a lot of back and forth: drafting, editing what you've drafted, letting the work settle for a bit, thinking of a brilliant new twist on the bus or in the shower, finding new scenes that need to be written, finding that pages on end were actually about building background for one's self, the writer, and don't need to be in the novel at all.
And then there's the part where the inmates take over the asylum, the characters assert themselves and determine what's next ... whether the author planned it that way or not. That, actually, can be the most fun, and the most fluid part of a story. Of course, once the characters settle back onto the page the author still has to revise. And revise again. And -- well, you get it.
At the Galleria dell'Accademia in Firenze (a.k.a. Florence, Italy), Michelangelo's four unfinished sculptures, titled "Prisoners," are displayed on either side of what amounts to the path to a soaring, domed space where the great artist's "David" is exhibited. Those rooms hold my touchstone images about what it feels like to write, no matter that Michelangelo worked in stone and I work in words, and never mind that what he achieved is infinitely greater than I can even dream of inventing. The visceral sense of emergence one has looking at these massive blocks from which the sculptor began to chip away the spalls that obscured his vision, the power rearing out of the marble as the figures are 'revealed' in stone that has itself imprisoned them, stone from which they are being freed by the sculptor's chisel ... and then the breathtaking perfection of the finished David, towering over the next room. On a very good day, editing feels like it's a part of that same game.
(If only one's prose read even a fraction as heroically and beautifully as Michelangelo's unfinished work ...)
Related posts on One Finger Typing:
Mental floss
Craft and art: erasure and accent
Aleksandar Hemon on Narrative, Biography, Language
Does a writer need a writers' group?
Monday, November 15, 2010
Matrixed Higher Education
Henry Kissinger is famously credited with noting (even though the fame doesn't justifiably belong to him) that "academic politics are so vicious precisely because the stakes are so small." I don't think that's what's going on in the on-line education kerfuffle.
In June I blogged about the effort UC Berkeley law school dean Christopher Edley has been leading to "pilot" on-line learning dispensed by a leading research university. That effort is moving ahead on schedule, as reported by the Chronicle of Higher Education in May, with a call for proposals for up to 25 pilot courses, according to a UC-published puff piece earlier this month. But, bowing to the kinds of faculty balkiness that Dean Edley warned of when he worried that "the coalition of the willing among frontline faculty who would like to pursue this idea will be stopped dead in their tracks by the bureaucracy," this month's announcement from UC's central "Office of the President" devoted over 25% of its word count to a section titled "Faculty concerns to be studied."
That was enough backpedaling for Nanette Asimov to write an article for the San Francisco Chronicle two days later, titled UC leaders downplay plans for online courses. As she spins it, "University of California students will be able to enroll in the schools' first top-tier, UC-quality online courses by January 2012, but UC officials have strongly scaled back their expectations about what such courses can achieve. [...] the new effort was intended to see whether UC might pull off a fully online degree program as rigorous as what the selective university offers in its classrooms. For now, that plan is on hold."
In a related item, did you catch Bill Gates on the topic of technology replacing higher ed, at the Techonomy Conference 2010, in August? "The self motivated learner," Gates predicted over the summer, "will be on the web, and there will be far less place-based [education]. [...] College -- except for the parties -- needs to be less place-based. Place-based activity in that college thing [sic] will be five times less important than it is today."
Sharp-eyed monitors of zeitgeist-past will recall that in the mid-1970s Bill Gates dropped out of a small, well-endowed institution of higher education called Harvard, based in a place known as Cambridge, Massachusetts.
A statistical aside. Census figures for October 2008 show 11,378,000 students enrolled as undergraduates in institutions of higher education. Harvard College currently enrolls 6,655 undergraduates. Dividing the oranges by the apples, we can guesstimate that Harvard enrolls roughly one out of every 1,710 college students in the United States. The fact that Gates was admitted to Harvard in the first place means -- wait for it -- he was never your typical student. That he, like Facebook founder Mark Zuckerberg, dropped out of Harvard and still became a billionaire doesn't predict much about education as it applies to the rest of us. I will say straight out that I am grateful for my place-based education, and grateful that nobody tried to take it away from me, stick me in front of a One Laptop Per Child screen, and call it an 'equivalently' rigorous experience.
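For anyone inclined to double-check that back-of-the-envelope arithmetic, here's a minimal sketch in Python, purely illustrative, using nothing but the two enrollment figures quoted in the paragraph above:

```python
# Back-of-the-envelope check of the "one out of every N" guesstimate above.
# Inputs are the figures cited in the paragraph: the October 2008 census
# estimate of US undergraduates and Harvard College's stated enrollment.
us_undergraduates = 11_378_000
harvard_undergraduates = 6_655

ratio = us_undergraduates / harvard_undergraduates
print(f"Harvard enrolls roughly one out of every {round(ratio):,} US undergraduates")
# prints: Harvard enrolls roughly one out of every 1,710 US undergraduates
```

Nothing fancy; the point is only the order of magnitude, not the precise figure.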
See, here's the thing.
The CHE reports that Suzanne Guerlac, a UC Berkeley faculty member in French, believes that "[o]ffering full online degrees would undermine the quality of undergraduate instruction [...] by reducing the opportunity for students to learn directly from research faculty members. 'It's access to what?' asked Ms. Guerlac. 'It's not access to UC, and that's got to be made clear.'" I'd like to second that opinion.
Look at and listen to the Bill Gates video embedded above. All three minutes of it. Listen to the part where he talks about the value of a "full immersion environment" to education. He's careful to speak of this value only in the context of elementary and secondary schooling, which he likens to baby-sitting in one breath, before turning around in the next and saying that the "full immersion environment" is what enables students to learn successfully. Wha??? It's babysitting and it's the most effective form of education?
Notice that he doesn't actually explain why he thinks this full immersion stuff works for younger students but suddenly loses its applicability when kids reach college level. Pay close attention to the special hand-waving transitions at 1:14-1:16 ("That's very different than saying, okay, for college courses...") and 1:34-1:36 ("The self-motivated learner will be on the web..."). Why is it different? What about the learner (I hate that word) who is not self-motivated? Later in the video he's making a case for on-line education because it's cheaper, not because it is an effective mode of education.
It would be an ideologue's mistake to say there's nowhere along the Spectrum of What's Good that on-line education fits. The thing to remember, though, is that on-line learning is not the same thing as face-to-face, interactive experience with peers and teachers. Sure, it may be good for teaching some things. Maybe even some useful things, probably better in some areas than others. But what of the way students learn from the other students in a seminar or discussion section, and from the interactions those students have with a professor or a graduate student instructor?
On-line education isn't visceral, and it lacks the enormous incentives to learning inherent in synergies, competitiveness, sympathy and other forms of interpersonal zing that crop up in engaged rooms full of students and teachers. Ever wonder why people pay $40 or $80 or $120 for two or three hours of live theatre (or sports, or music), then complain about a cable TV bill that costs that much per month? It's the value, people.
And therein lies the issue behind the issue.
Across the United States, public funding for higher education is being pared back. In many states, this is true for elementary and secondary school education as well. In September of last year, UC Berkeley Chancellor Robert J. Birgeneau and Vice Chancellor Frank D. Yeary wrote an op-ed piece for the Washington Post. They argued that "Public universities by definition teach large numbers of students and substantially help shape our nation," describing how the top ten public universities enroll 350,000 undergraduates compared to a sixth as many for the eight Ivy League schools. They went on to argue that strong public support for public higher ed means that public universities "have an admirable cross-section of ethnically and economically diverse students. In essence, their student bodies look like America. They are the conduits into mainstream society for a huge number of highly talented people from financially disadvantaged backgrounds, as well as the key to the American dream of an increasingly better life for the middle class."
Then they describe how, "over several decades there has been a material and progressive disinvestment by states in higher education. The economic crisis has made this a countrywide phenomenon, with devastating cuts in some states, including California. Historically acclaimed public institutions are struggling to remain true to their mission as tuitions rise and in-state students from middle- and low-income families are displaced by out-of-state students from higher socioeconomic brackets who pay steeper fees."
Public funding for higher ed is being pared back internationally too. The government of the U.K., as part of a drastic package of austerity measures outlined last month, has proposed "to cut education spending and steeply increase tuition for students," drawing tens of thousands of protesters to London, as reported by the NY Times on 10 Nov.
Why is this happening?
- Is it to hurry along the well-documented shift in wealth from working- and middle-classes to an economic elite?
- Does it follow from that shift in wealth that a smaller cohort of educated workers is required to drive the economy?
- Is public divestment from higher ed an effect of economic globalization, in which a lion's share of 'the smarts' doesn't need to come from the United States and Europe any more because nations like China and India are supposed to be doing such a bang-up job educating their vast populations, and business owners can pay lower wages for their labor?
- Is it an effect of 'higher productivity,' in which fewer workers plus better technology produce greater wealth -- and -- I can never get this straight what with the competing narratives -- does this mean that there are fewer jobs to go around or that this magically-increasing wealth is somehow supposed to lift all boats?
- Is diminishing public support for higher ed an effect of fetishizing "the free market," an ideology that has been applied with an overly broad and absolutist brush?
I don't know the answers to these questions, but I'm inclined to believe those answers are what is driving the stampede to 'on-line education' in the U.S. and abroad ... not because it's better, but because it's cheaper and because some groups of policymakers imagine that diminishing the quality of your average citizen's education is, well, not a big deal. Not to them.
It's not the availability of technology -- it's the ends that social, economic, and political forces are driving toward, using technology as a means. You can't build a case on a three minute video of Bill Gates talking at a conference up at Lake Tahoe, but the brand of "vision" without substance that he was espousing earlier this year does raise my antennae.
There's something about on-line education -- and the agendas that may be driving it -- that makes me nervous. I don't know about you, but I'm not so keen on living in a world engineered and operated by people whose educational background and trained compliance could have been lifted straight out of The Matrix. Let alone governed by some shadowy Architect!
Labels:
higher ed,
politics,
technology