Sunday, August 30, 2009

my late-entry sxsw panel proposal: plz to vote for me

File under: shameless self-promotion

This year, the South by Southwest Interactive Festival is crowdsourcing its lineup of panelists by allowing people to vote thumbs-up or thumbs-down on panel proposals.

I recently submitted my proposal, which is on the Free / Libre / Open Education (FLOE) movement.

You guys, I really really want to get accepted. Will you go to the site and vote for me?

You can read my proposal here.

RIP Ed Rondthaler, foenetic speler

Edward Rondthaler, who died August 19 at the age of 104, was a lifelong typophile and a champion of the movement to simplify the English language by simplifying, and phoneticizing, word spellings.




Thursday, August 27, 2009

why I chose openness: David Wiley, I've completed my homework assignment!

In a recent post on his blog iterating toward openness, David Wiley makes a request of all adherents to the "openness" movement who read his blog:

Without any special authority to do so, may I please give you a homework assignment? Would you please blog about why you choose to be open? What is the fundamental, underlying goal or goals you hope to accomplish by being open? What keeps you motivated? Why do you spend your precious little free time on my blog, reading this post and this question? If each of us put some thought and some public reflective writing into this question, the field would likely be greatly served. The more honest and open you are in your response, the more useful the exercise will be for you and for us.


The assignment is the result of a previous post in which Wiley wrote:
While I think everyone in the field of “open education” is dedicated to increasing access to educational opportunity, there is an increasingly radical element within the field – good old-fashioned guillotine and molotov type revolutionaries. At the conference I heard a number of people say that things would be greatly improved if we could just get rid of all the institutions of formal education. I once heard a follow up comment, “and governments, too.” I turned to laugh at his joke, but saw that he was serious. This “burn it all down” attitude really scares me.


As you can imagine if you know even a small chunk of the history of projects like the Free / Libre / Open Source Software movement (it keeps getting more words tacked onto it for a reason), Wiley's post generated some fierce responses. So the request, and Wiley's decision to back away from his initial stance, appears to be an effort to consider the broad range of issues that attract people to the openness movement in general, and open education in particular.

Back in 1984, Seymour Papert said this:
There won't be schools in the future.... I think the computer will blow up the school. That is, the school defined as something where there are classes, teachers running exams, people structured in groups by age, following a curriculum-- all of that. The whole system is based on a set of structural concepts that are incompatible with the presence of the computer... But this will happen only in communities of children who have access to computers on a sufficient scale.


Twenty-five years later, we are forced to conclude that one of the following is probably true:

  • The computer didn't actually blow up anything at all; schools are basically the same as they always were, with the same curricula, approaches, and values; or
  ‱ The computer did blow up the school, but nobody noticed that the school had been blown to bits, and we kept operating as if education still serves the purpose it served 50, 100, or more years ago.

Either way, the results are the same: schools equip kids with a set of mindsets and skillsets that prepare them increasingly less well for the culture into which they will emerge.

Perhaps Papert's mistake was in attributing intention to the computer; if, to further extend the metaphor, the computer really was an explosive device, then it had no ability to decide how, when, and where to detonate.

I was drawn to the open education movement because it attempts to do on purpose what we thought computers would do by default: blow wide open the walls, and therefore the constraints, surrounding education. In arguing against the binary nature of the notion of "openness," Wiley argues that "[i]n the eyes of the defenders of the 'open source' brand, if you’re not open enough you’re not open at all.... It is just as inappropriate for you to try to force your goals on others as it is for others to try to force their goals on you."

Of course he's right, but on the other hand, things aren't always quite so clearcut in the field of education. I shouldn't be able to force my values on educators, researchers, and administrators who disagree with my approach to teaching and learning; but if I leave them be, then they are free to inculcate young people with exactly the wrong set of skills, ideals, and values--the kind that reify outdated, unfair, and wrongheaded assumptions about how the world can, does, and should work.

I am, as I hope I have made clear, an increasingly radical element within the field. I am a revolutionary. And this is precisely what drew me to openness as a movement. In fact, I wish the open education movement would embrace a more inclusive name, perhaps something like Free / Libre / Open Education, or FLOE. In fact, I think I'll start calling it exactly that.

I agree with Wiley that the term "open" is problematic, but for the exact opposite reason that Wiley gives. I think people are too likely to call almost anything open, even if the door is only open a centimeter. If it's open exactly that far, and there's a doorstop behind it preventing it from opening any further, then that door is effectively closed.

Related posts by other writers:
David Wiley: A few notes about openness (and a request)
Jeremy Brown: Bard Quest 2: Wiley’s motivation, TomaĆĄevski’s motivation, and the real reason people get into Open Education
Jared Spurbeck: Why my creative work is "open"
davidp: Optimal, not ideal

Wednesday, August 26, 2009

why I am a technological determinist

I'm fascinated by danah boyd's recent post intended for the New Media Consortium's upcoming Symposium for the Future. In her post, she cautions new media theorists to avoid what she labels "technological determinism." She explains:

Rejecting technological determinism should be a mantra in our professional conversations. It's really easy to get in the habit of seeing a new shiny piece of technology and just assume that we can dump it into an educational setting and !voila! miracles will happen. Yet, we also know that the field of dreams is merely that, a dream. Dumping laptops into a classroom does no good if a teacher doesn't know how to leverage the technology for educational purposes. Building virtual worlds serves no educational purpose without curricula that connects a lesson plan with the affordances of the technology. Without educators, technology in the classroom is useless.


boyd's point is well taken, though I'd be hard pressed to find a single new media scholar who embraces the kind of technological determinism she describes in the above passage. There may have been a time when the "if we build it, they will come" mindset was commonplace, but virtually no serious thinker I have encountered, either in person or in text, actually believes that new media technologies can or should offer quick fixes to society's ills.

The problem, as I see it, is a two-part one. The first issue is one of terminology: Increasingly, we talk about "technology" as this set of tools, platforms, and communication devices that have emerged from the rise of the internet. This is useful insofar as it allows new media thinkers to converge as members of a field (typically labeled something like digital media and learning or the like), but it does so at the expense of the deep, complicated and deeply intertwined history of technologies and what we call "human progress." In truth, social media platforms are an extension of communications technologies that reach back to the beginning of human development--before computers, television, motion pictures, radio, before word processing equipment, to telegraphs, typewriters, Morse code, pencils, paper, the printing press...all the way back to the very first communication technology, language itself.

"Technology" is not a monolith, and there is a distinct danger in presenting it as such, as boyd does in her final paragraph:

As we talk about the wonderfulness of technology, please keep in mind the complexities involved. Technology is a wonderful tool but it is not a panacea. It cannot solve all societal ills just by its mere existence. To have relevance and power, it must be leveraged by people to meet needs. This requires all of us to push past what we hope might happen and focus on introducing technology in a context that makes sense.


The second problem is a rhetorical one. New media theorists have found themselves engaged in a mutually antagonistic dance with those who prefer to focus on what they see as the negative cultural effects of digital technologies. For better or worse, people engaged directly in this dance find themselves coming down more firmly than they might otherwise in one of these camps and, because the best defense is a good offense, staking out a more strident position than they might take in private or among more like-minded thinkers. Thus, those who dislike Twitter feign disdain, repulsion, or fear and are labeled (or label themselves) luddites; and those who like Twitter find themselves arguing for its astronomical revolutionary potential and are labeled (or label themselves) uncritical utopianists.

In fact, media theorists have been targets of the "technological determinism" accusation for so long that they refuse to acknowledge that technologies actually can and often do determine practice. Homeric verse took the structure it did because the cadences were easy for pre-literate poets and orators to remember. The sentences of Hemingway, Faulkner, and many of their literary contemporaries shortened up because they needed to be sent by telegraph--leading to a key characteristic of the Modernist movement. The emergence of wikis (especially, let's face it, Wikipedia) has led to a change in how we think about information, encyclopedias, knowledge, and expertise.

A more accurate--but more complex and therefore more fraught--way to think about the relationship between humans and their technologies is that each acts on the other: We design technologies that help us to communicate, which in turn impact how we communicate, and when, and why, and with whom. Then we design new technologies to meet our changing communications needs.

Again, virtually no media theorist that I know of would really disagree with this characterization of our relationship to technologies--yet say it too loudly in mixed company, and you're likely to get slapped with the technological determinism label. I say this as someone who has been accused more than once, and in my view wrongly, of technological determinism.

Overly deterministic or not, however, I agree with boyd that technologies do not offer a panacea. More importantly, she argues against the use of terms like "digital natives" and, presumably, its complement, "digital immigrants." These are easy terms that let us off the hook: people under 30 get something that people over 30 will never understand, and there's nothing you can do about this divide. As boyd explains,

Just because many of today's youth are growing up in a society dripping with technology does not mean that they inherently know how to use it. They don't. Most of you have a better sense of how to get information from Google than the average youth. Most of you know how to navigate privacy settings of a social media tool better than the average teen. Understanding technology requires learning. Sure, there are countless youth engaged in informal learning every day when they go online. But what about all of the youth who lack access? Or who live in a community where learning how to use technology is not valued? Or who tries to engage alone? There's an ever-increasing participation gap emerging between the haves and the have-nots. What distinguishes the groups is not just a question of access, although that is an issue; it's also a question of community and education and opportunities for exploration. Youth learn through active participation, but phrases like "digital natives" obscure the considerable learning that occurs to enable some youth to be technologically fluent while others fail to engage.


The key question on the minds of researchers in digital media and learning is not (or should not be) how we can get computers into the hands of every student but how we can support participation in the valued practices, mindsets, and skillsets that go along with a networked, digital society. Answering that question well requires an ability to engage with the complex, thorny, and socially charged issues that boyd and others have identified in their research and writings. It requires development of a common language within the broad digital media and learning community and an ability to communicate that language to the vast range of stakeholders who are paying attention to what we say and how we say it.


Related posts by other writers:

danah boyd: Some thoughts on technophilia
Kevin Kelly: Technophilia

Monday, August 24, 2009

eppur si muove: a defense of Twitter

Recently, media scholar (and, full disclosure, my former boss) Henry Jenkins published a new post on his always-mind-blowing blog, Confessions of an Aca/Fan. This post focuses on the affordances and, in his view, the limitations of Twitter.

The post itself is the result of a Twitter exchange wherein one of Henry's followers, @aramique, wrote: "you theorize on participatory models over spectatorial but i've noticed your whole twitter feed is monologue." Ultimately, Henry responded with this: "yr questions get Twt's strengths, limits. but answer won't fit in character limits. Watch for blog post soon." Then, in his blogpost, he begins with this:

I will admit that there is a certain irony about having to refer people to my blog for an exchange that started on Twitter but couldn't really be played out within the character limits of that platform. But then, note that armique's very first post had to be broken into two tweets just to convey the emotional nuances he needed. And that's part of my point.

From the start, I've questioned whether Twitter was the right medium for me to do my work. I've always said that as a writer, I am a marathon runner and not a sprinter. I am scarcely blogging here by traditional standards given the average length of my posts. Yet I believe this blog has experimented with how academics might better interface with a broader public and how we can expand who has access to ideas that surface through our teaching and research.


Jenkins, who makes it clear that his blog is his primary focus for online communication and that Twitter is a space for him to both direct traffic to his blog and track who follows his links, and when, and how, argues that though Twitter has its value as a social media platform, it has also resulted in some losses. His main concern centers on Twitter's defining feature: its brevity. As Twitter grows in popularity, he explains, deep, thoughtful commentary on his blogposts has decreased:

Most often, the retweets simply condense and pass along my original Tweet. At best, I get a few additional words on the level of "Awesome" or "Inspiring" or "Interesting." So, in so far as Twitter replaces blogs, we are impoverishing the discourse which occurs on line.


"[I]n so far as people are using (Twitter) to take on functions once played on blogs," he writes, "there is a serious loss to digital culture."

I guess I'm approximately as serious about blogging as a medium as the next guy who posts tens of thousands of words each month, but I'm not sure I share Henry's concern. There were, after all, those who worried that blogs would lead to the decline of serious and thoughtful intellectual conversation. But as Henry's blog (and hundreds or thousands of others like it) demonstrates, blogs can in fact afford both a higher level of expression and a greater capacity for circulation of those ideas. The phenomenon of the blog also--and this was a key element of the initial concern about the decline and fall of civilization at the hands of the weblog--means anybody with internet access, basic typing skills, and a couple of ideas about anything at all can express, post, and circulate them. Blogs even support circulation of the most ignorant, repulsive claptrap a person can imagine. The onus is therefore on the consumer, and no longer the producer, to filter out the white noise in search of real music. The fear, real or imagined, was that the general public would not be able to filter intelligently and would therefore accept any nonsense they read online.

Actually, this fear is not a new one. The same anxiety was prevalent among educated elites when the universal literacy movement began to take hold. It was the same fear that gripped members of "high culture" when movies, then radio, then television, then YouTube became increasingly popular and available. See, that's the peculiar feature of democratizing technologies: Elites no longer get to decide what's culturally valuable and filter it out before it reaches the unwashed masses. Now we all get to decide, and that's precisely what leads the privileged class--even members of this class who are pro-democracy--to react so strongly that they try to stamp it out.

It's the same cry I hear from people who oppose Twitter: There's so much meaningless noise. It's leading to a decline in critical thinking. Jenkins writes that

there is an awful lot of relatively trivial and personal chatter intended to strengthen our social and emotional ties to other members of our community. The information value of someone telling me what s/he had for breakfast is relatively low and I tend to scan pretty quickly past these tweets in search of the links that are my primary interests. And if the signal to noise ration is too low, I start to ponder how much of a social gaff I would commit if i unsubscribed from someone's account.


Twitter, for all its seeming triviality, is one of the most complex, nuanced social media environments I've ever participated in. It's layered over with the kind of community expertise required for authentic, valued participation in a vast range of social networking sites, both online and offline. Add to that the fact that Twitter users bring to their engagement with the site any number of social motivations; multiply that by the nearly limitless number of possible subsets of Twitter followers the typical user might communicate with; and square that by the breathtaking creativity that the 140-character limit both supports and fosters.

This is what's most difficult to explain to a new Twitter user, and what's nearly intuitive for those who have internalized the tacit norms of the space: No tweet can be interpreted in isolation. No Twitter stream exists wholly independently of any other. Twitter's depth exists precisely in the delicate intertwining of inanity with complexity. Yes, most of the time I skip over people's breakfast tweets. But I don't always skip over them. Much of the time I click on the links Henry posts. But I don't always click on them.

Sure, Twitter is no substitute for a series of deep, thoughtful blogposts. But my sense is that the vast majority of Twitter users know this, and don't bother trying to turn Twitter into a blog, or even a microblog--though it may seem like it on the surface.

And even if some users really are trying to do exactly that, it's much easier to focus on Twitter's constraints than on the deep, breathtaking creativity it affords. I follow lots of Twitter users who are very good at linking to interesting, useful websites; and I follow a smaller number of users who are very good at the more difficult work of leveraging the technology in infinitely creative ways.

I wanted to offer an example of this creativity, but it's impossible to demonstrate outside of its context. You'd have to follow users' hashtags, or see how they fit an idea into 140 characters, or read a surprising tweet exactly in context.

Here's the closest I can come:
@jennamcjenna: can someone link me to an article that tells me something completely mind-blowing? It doesn't matter what topic. (8:52 PM Jun 16th from web)

@dizzyjosh: @jennamcjenna try http://bit.ly/eQf3m http://bit.ly/zCUQM http://bit.ly/Sh06v http://bit.ly/Ks9qG http://bit.ly/PgNqT http://bit.ly/PgNqT


Related posts by other writers:

danah boyd: Twitter: "pointless babble" or peripheral awareness + social grooming?
Henry Jenkins: The Message of Twitter: "Here It Is" and "Here I Am"






Friday, August 21, 2009

stop saying 'ATM machine,' and other exhortations of a participatory culture theorist

I hate grammatical redundancy. Some of the best examples of this are:
  • ATM Machine (Automated Teller Machine Machine)
  • PIN Number (Personal Identification Number Number)
  • ISBN Number (International Standard Book Number Number)
There's actually a term for this: RAS syndrome, or Redundant Acronym Syndrome syndrome.

"But," said my buddy Dan, with a look of pure glee, "you say ATM Machine like everyone else, right?"

"I do not," I answered. And I don't.

"That's a dilemma," Dan said, still gleeful. "The English major part of you conflicting with the participatory culture theorist, who says that whatever the people decide is right."

He was ribbing me, but in truth it's a fair enough critique. After all, some of the most influential books on participatory culture and the social revolution include the following titles, all of which intentionally fly in the face of common attitudes toward morality, ethics, and human progress:

Here Comes Everybody (Clay Shirky)
The World is Flat (Thomas Friedman)
Wikinomics: How Mass Collaboration Changes Everything (Don Tapscott)
Tribes: We Need You to Lead Us (Seth Godin)

And, I'll just admit it, my blog is absolutely peppered with sweeping declarations: Print journalism isn't viable. Young people are leading the social revolution. The question isn't 'is it moral?', but 'is it popular?'

Why, after all, isn't the question 'is it moral?' Simply put, most of the time when people ask that question about aspects of the social revolution, what they're actually asking is more along the lines of 'is this better or worse than the experiences and culture I'm used to?' This is a matter of personal preference, and there's no accounting for taste.

Some of my friends think wearing a wristwatch makes it easier for them to make it to their meetings on time; some of my friends think watches just make them more time-conscious and anxious. If suddenly a critical mass of people started wearing watches and pressuring the rest of their culture to wear watches too, some of my friends would be thrilled (everybody will have to be on time now!), some would be upset (we're all going to start caring more about the time than about each other!), and some wouldn't care at all (*shrug* it's just another tool to help me get through my day.).

Some people think online social networks signal the decline of community. Some people think new, valuable community structures have emerged around these networks. And some people just think online Scrabble is a fun way to spice up a boring work day. All of these people are right, but arguing about whether we're better or worse off (or the same) is pointless. Is a wristwatch-wearing culture better than one that uses sundials? Your answer depends on a lot of things: whether you make your living off of sundial manufacturing; whether you can personally afford a watch; whether you were someone who cared a lot about keeping track of the time in the first place; and whether you think a watch looks good on your wrist.

Please don't accuse me of absolute moral relativism, though; even participatory culture theorists have their limits. It's wrong to force everyone to wear wristwatches, for example, just as it's wrong to ban sundials. If democracy, freedom of the press, or free speech falter when print journalism hits its death throes, I will be among the throngs calling for social change. Participatory media platforms tend, as all previous platforms have, to silence certain groups (nonwhites, nonstraights, older participants, less educated [formally or informally] participants); this is painful and wrong.

And RAS syndrome will always be wrong, no matter what percentage of the population adopts the phrase "ATM machine."





Footnote upon the construction of the masses:
some people are young and nothing
else and
some people are old and nothing
else
and some people are in between and
just in between.

and if the flies wore clothes on their
backs
and all the buildings burned in
golden fire,
if heaven shook like a belly
dancer
and all the atom bombs began to
cry,
some people would be young and nothing
else and
some people old and nothing
else,
and the rest would be the same
the rest would be the same.

the few who are different
are eliminated quickly enough
by the police, by their mothers, their
brothers, others; by
themselves.


all that's left is what you
see.

it's
hard.

(Charles Bukowski, The Days Run Away Like Wild Horses over the Hills, 1969)



Thursday, August 20, 2009

how to think like a good {fill in the blank}

"The message of Wikipedia," writes Michael Wesch, "is not 'trust authority' but 'explore authority.' Authorized information is not beyond discussion on Wikipedia, information is authorized through discussion, and this discussion is available for the world to see and even participate in."

This comes from Wesch's January 2009 Academic Commons article, "From Knowledgable to Knowledge-able: Learning in New Media Environments." The piece is part of an issue dedicated to exactly this problem: How do we teach and learn in a cultural moment when the very definitions of "knowledge," "teaching," "learning," and even "information" are being called into question?

Wesch focuses on the brick-and-mortar university, arguing that despite growing recognition among higher-ed faculty and administration that university teaching and learning desperately need to shift away from their authoritarian roots, a series of physical, social, and cognitive structures stymies this effort at nearly every turn. The physical deterrents are, Wesch argues, the easiest to recognize, and they

are on prominent display in any large “state of the art” classroom. Rows of fixed chairs often face a stage or podium housing a computer from which the professor controls at least 786,432 points of light on a massive screen. Stadium seating, sound-absorbing panels and other acoustic technologies are designed to draw maximum attention to the professor at the front of the room. The “message” of this environment is that to learn is to acquire information, that information is scarce and hard to find (that's why you have to come to this room to get it), that you should trust authority for good information, and that good information is beyond discussion (that's why the chairs don't move or turn toward one another). In short, it tells students to trust authority and follow along.

This is a message that very few faculty could agree with, and in fact some may use the room to launch spirited attacks against it. But the content of such talks are overshadowed by the ongoing hour-to-hour and day-to-day practice of sitting and listening to authority for information and then regurgitating that information on exams.


Then there are the social structures that work against change in higher education, a key feature of which is the ongoing pressure to standardize curricula and use (easily quantified) standardized assessments for accountability purposes. Wesch writes:

When I speak frankly with professors all over the world, I find that, like me, they often find themselves jury-rigging old assessment tools to serve the new needs brought into focus by a world of infinite information. Content is no longer king, but many of our tools have been habitually used to measure content recall. For example, I have often found myself writing content-based multiple-choice questions in a way that I hope will indicate that the student has mastered a new subjectivity or perspective. Of course, the results are not satisfactory. More importantly, these questions ask students to waste great amounts of mental energy memorizing content instead of exercising a new perspective in the pursuit of real and relevant questions.


This is, perhaps, one of the most significant dangers inherent in re-mediating assessment: The risk of re-mediating the wrong aspects of current assessment strategies. Rewriting a multiple-choice test is surely not the answer, but it's often, and understandably, what innovative and new media-friendly educators do. The results of this effort may not be satisfactory, after all, but they're better than nothing. And short of overhauling an entire course, it's often a useful stopgap measure.

And what of overhauling an entire course? Wesch, recognizing that "our courses have to be about something," argues for a shift away from "subjects" (English, History, Science) and toward "subjectivities"--ways of approaching and thinking about content. One simple way to understand this shift is to consider the difference between learning the steps of the scientific method and developing the mindset of a profession that embraces the scientific method as a useful approach to experimentation.

The "subjectivities" approach is, in fact, the favored approach of many graduate programs. My sister, who is beginning law school this fall, is immersed in a cognitive apprenticeship designed to make her think, act, and speak like a lawyer. As a new doctoral student in Indiana University's Learning Sciences program, I'm undertaking the same apprenticeship. A series of courses, including IU's Professional Seminar in the Learning Sciences and Theory and Method in the Learning Sciences, are intended to equip new grad students with the Learning Sciences mindset.

This approach, however, gives rise to a key question: If the "subjectivities" approach is intended to help learners think, act, and speak like a {fill in the blank}, then who decides how a {fill in the blank} is supposed to think, act, and speak?

Jim Gee offers a fascinating critique of "learning to think like a lawyer" in his book Social Linguistics and Literacies. He argues that success in law school is slanted toward people who think, act, and speak like white, middle-class men, explaining that:

[t]o write a competent brief the student has to be able to read the text being briefed in much the same way as the professor does.... Students are not taught these reading skills—the ones necessary to be able to write briefs—directly. Briefs are not, for instance, turned in to the professor; they are written for the students' own use in class.... One of the basic assumptions of law school is that if students are not told overtly what to do and how to proceed, this will spur them on essentially to teach themselves. Minnis argues that this assumption does not, however, work equally well for everyone. Many students from minority or otherwise non-mainstream backgrounds fail in law school.


(A female friend who recently completed law school agrees with this argument, and struggled mightily with the inequities inherent in her program and inside the field of law in general. I've written about her experience here.)

This issue is certainly not limited to law school; it's a thorny problem in every program designed to help students think like a {fill in the blank.} I understand that this is an issue that IU's Learning Sciences program has grappled with recently, and I imagine this is the reason that the Professional Seminar in the Learning Sciences, previously a required course, has now been made optional.

What do I know, right? I haven't even started my first semester in the program yet. But it seems to me that if this issue is worth grappling with (and I believe it is), it's worth grappling with alongside the program's apprentices. I'm for making the course mandatory and then using it to expose, discuss, and clarify the very issues that led to the faculty's decision.

Here we can take a page out of the Wikipedia lesson book. There's no point in simply trusting authority when the social revolution supports not just questioning, not just opposing, but actually exploring authority. After all, thinking like a good {Learning Scientist} is about much more than embracing a set of approaches to teaching, learning, and knowledge; it's also about questioning, contesting and exploring the very foundation of the field itself.

Wednesday, August 19, 2009

@danieltosh really knows how to work a crowd

Regular readers of this blog know what a fan I am of comedian Daniel Tosh and his new show, Tosh.0. My love is simple and pure: The show culls the most humiliating moments from millions of online videos, and Tosh exercises the most exacting wit in elaborating on the humiliation.

Here's something else Tosh does well: cultivate his Twitter presence. He livetweets during his show each week, responding to viewer questions and proddings, and both during and between shows he uses Twitter not the way many celebrities do but exactly the way normal people do.

Recently, he posted a twitpic of his summer haircut:



I'm fairly certain he's making fun of twitpic users here, with the wood-paneled cabinets, the slightly tilted head, the direct, semi-flirty eye contact. But, see, he's not just making fun of twitpic; he's also using it with sincerity, for exactly the purpose god vested it with.


Here's what his livetweets look like:



And here's a sample of toshtweets during his off hours:


You guys, this is a comic who's in full command of his medium. It's a bonus for me that his medium happens to be the internet, of which I am a fairly big fan.

If you're interested, the show's on Thursday night at 10 PM Eastern Time on Comedy Central.

barney frank gets all snarky

You may have seen this already:




It's true what Frank says, that it's a tribute to the First Amendment that this kind of vile, contemptible behavior is not only completely legal but easily disseminated.

At the same time, even though Barney Frank is right to refuse to engage with a disruptive public, he still kinda looks like a big jerk. When you have the floor like he does, you really do need to hold yourself to a higher standard. You don't get to use the platform to insult the public, even if they're trying to do it to you.

new advice on surviving the zombie apocalypse, this time with math!

the second in a two-part series on how to survive a zombie invasion

As I've mentioned in previous posts (here, here, here, and here), I believe that an all-out zombie apocalypse is likely to wipe out the vast majority of humanity, with the exception of those armed with guns, food, and new media.

Forewarned is forearmed, I always say. Which is why it's worthwhile to examine a recent article that considers various zombie survival scenarios through a mathematical lens.

The article, "When Zombies Attack!: Mathematical Modelling of an Outbreak of Zombie Infection," argues this:

An outbreak of zombies infecting humans is likely to be disastrous, unless extremely aggressive tactics are employed against the undead. While aggressive quarantine may eradicate the infection, this is unlikely to happen in practice. A cure would only result in some humans surviving the outbreak, although they will still coexist with zombies. Only sufficiently frequent attacks, with increasing force, will result in eradication, assuming the available resources can be mustered in time.


Further, the authors agree with my argument that previously successful responses to widespread and contagious infectious diseases will be ineffective against a zombie invasion.

You have to have some basic expertise in reading mathematical equations to parse the results of the various models the authors present, as the article is populated with such arguments as this:
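For readers who lack that expertise, here's the flavor of it. The paper's basic model (an "SZR" setup) tracks susceptible humans S, zombies Z, and the removed (dead) R. What follows is my own paraphrase from memory rather than a quotation from the article, so treat the notation as approximate:

\[
\begin{aligned}
S' &= \Pi - \beta S Z - \delta S \\
Z' &= \beta S Z + \zeta R - \alpha S Z \\
R' &= \delta S + \alpha S Z - \zeta R
\end{aligned}
\]

Here \(\Pi\) is the birth rate, \(\delta\) the ordinary (non-zombie) death rate, \(\beta\) the rate at which zombies convert humans, \(\zeta\) the rate at which the dead rise again, and \(\alpha\) the rate at which humans manage to destroy zombies. The "hit hard and hit often" conclusion quoted below comes from bolting impulsive-eradication terms onto this basic setup.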



In fact, while the models presented are intended to offer realistic analyses of the efficacy of various zombie-attack responses, the authors point out that the odds of a widespread zombie infection are fairly low. Still, they argue that the article has value even given the low risk of zombie attack. First, they write, "possible real-life applications may include allegiance to political parties, or diseases with a dormant infection."

Additionally, the authors explain:

This is, perhaps unsurprisingly, the first mathematical analysis of an outbreak of zombie infection. While the scenarios considered are obviously not realistic, it is nevertheless instructive to develop mathematical models for an unusual outbreak. This demonstrates the flexibility of mathematical modelling and shows how modelling can respond to a wide variety of challenges in ‘biology’.


It's a pity these mathematicians, who have clearly completed some of the best scientific work to date on the zombie apocalypse, fail to recognize the plausibility of a zombie invasion. Still, despite their cavalier approach to the subject matter, they are willing to end with this:
In summary, a zombie outbreak is likely to lead to the collapse of civilisation, unless it is dealt with quickly. While aggressive quarantine may contain the epidemic, or a cure may lead to coexistence of humans and zombies, the most effective way to contain the rise of the undead is to hit hard and hit often. As seen in the movies, it is imperative that zombies are dealt with quickly, or else we are all in a great deal of trouble.

Tuesday, August 18, 2009

the sleeping alone review of films: district 9

summary: don't bother.

This afternoon, just before showtime for the alien-adventure flick District 9, I learned from my friend Emily that the United States engaged in a decades-long bombing campaign of Laos as part of the so-called "Secret War" in Indochina (which also included bombing campaigns in Vietnam and Cambodia). Apparently, Laos is the most heavily-bombed nation per capita in the world.

District 9 is not about bombs, but it is about how we react to people whose way of life we don't understand. In this case, the people are actually aliens, and we react to them the way we always have: by dumping them in tent cities and shooting them in the head.

Aside from a few unique features--a floating mothership hovering above not Manhattan, not Washington, D.C., but Johannesburg, South Africa; a physically vulnerable alien populace; and a weak, simpering hero--there's nothing new or particularly interesting to see here. Well, there is one thing: a heinously cursory take on the nature of human compassion. "Go home!" the humans shout at the alien refugees; and the contractors hired to keep the aliens in line strike them with the butts of their guns; and the aliens scrounge through trash heaps, dress themselves up in tattered human clothes, and demonstrate familial bonds by hugging their children close. It's enough to make you shout at the screen: Okay, already. We get it.

The filmmakers appear conflicted: They wanted to make a touching movie about refugees, empathy, and humanity; but they also wanted a blockbuster with neat special effects and impressive, extraterrestrial explosions. Maybe someone thought it would be a brilliant idea to combine a touching story of refugee camp residents with the excitement of an alien invasion. It turns out that whoever came up with that brilliant idea was wrong.

District 9 is rated R. It contains serious gore of the sort that lands on and sticks to the camera lens, an extended gross-out metamorphosis scene, and a plot so lame it's obscene.

where to move if you want to survive the zombie apocalypse

the first in a two-part series on how to survive in case of zombie invasion

Though theories on this vary, it seems safe to assume that when the zombie apocalypse comes (as come it certainly will), it will start slow and pick up steam quickly in a fairly predictable pattern. We've seen this pattern before in the emergence of previous epidemics, including the bubonic plague, smallpox, HIV, and swine flu.

The difference, of course, is that traditional precautions--handwashing, safe sex, and face masks--won't protect you in the event of a zombie epidemic. Here's what you'll need to survive the zombie apocalypse: Guns, food and water, and access to new media. Through a complex triangulation system that accounts for these key factors, I have pinpointed the geographic location that offers the highest chance of survival: Mobile, Alabama.

Guns: Priority Number One
Because of their effectiveness in destroying brains from a safe distance, guns are by far the most effective weapon against zombies. This means, of course, that your best bet of survival is to reside in the United States. With 90 guns per 100 people, America leads the entire world in small arms ownership--which is a steaming hot pile of insanity during civilized times but a cache of awesomeness when the zombies invade. With an accompanying, world record-level firearm-related death rate, America has also proven its collective ability to aim for the whites (or, as the case may be, the sickly yellows) of the enemy's eyes.

The deadliest states also, coincidentally enough, happen to be those with the highest gun ownership rates: Louisiana (45.6%), Alabama (57.2%), Alaska (60.6%), Mississippi (54.3%) and Nevada (31.5%). This makes it easy to narrow the field of competitors for Safest City in Case of Zombie Apocalypse.

Food and New Media: More Closely Linked than Previously Thought
All the guns in the world won't save you if you don't know how to deploy them. Even given that the majority of U.S. residents are at least passingly familiar with what a zombie is and how to kill it, it still seems fairly likely that the first wave of human casualties will stem from surprise-induced paralysis. Early survivors will be those among us who are naturally attuned to running from danger.

That's right, the geeks will survive us all.

And what do you think they'll do first? Why, head to their technology, of course. It's likely that the first reports of the zombie apocalypse will spread via Twitter, Facebook, or user forums on free software sites. Alert social media users will be able to become informed about the invasion, learn from the early failures and successes of human resistance, and prepare themselves for the onslaught. Preparations will include gathering the abovementioned weaponry, along with sufficient supplies to allow survivors to outlast the epidemic. As zombies, contrary to some reports, don't die off because they are already dead, the epidemic is likely to last a long time.

We have lost our ability to grow or hunt for our own food, and this is especially true of the geeks among us. In general, however, geeks are highly adept at foraging, given the right circumstances. For geeks, the right circumstances include: a supermarket. Additionally, while it's possible that internet access will not outlast the zombies, survival odds increase for those who have prolonged access to networked technologies. That, in all likelihood, means a major metropolitan area. That rules out Alaska (unless--and this seems unlikely--zombies prove vulnerable to cold).

The Finalists: Louisiana, Alabama, Mississippi, Nevada
Of the remaining states with large weapons stores, we can rule out Nevada's major cities, Vegas and Reno, for the obvious reason that zombies have already invaded them. This leaves just three states: Louisiana, Alabama, and Mississippi. While the survival odds for residents of any of these states are approximately even, one last factor serves as a tie-breaker: the relative health of their citizens. All three states rank near the bottom of the life expectancy scale and at or near the top in obesity rates.

The Winner: Mobile, Alabama

With a population of just under 200,000 and an ideal seaside location in one of the most gun-friendly states in the U.S., Mobile offers food, shelter, a temperate climate, access to cubic tons of water that's just begging for desalination, and enough firepower to blow the heads off of as many zombies as can find their way to this southern city. As an added bonus, Mobile boasts a subtropical climate that's ideal for producing small, year-round rooftop gardens, just in case the Wal-Marts, Save-A-Lots, Winn-Dixies, and Circle K's run out before the zombies do.
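For the curious, here's roughly what my "complex triangulation system" boils down to, rendered as a toy Python sketch. The gun-ownership figures are the ones cited above; the metro populations are rough round numbers, and the weights are entirely made up for illustration, so don't bet your brains on them:

# Toy zombie-survival scorer. The filters mirror the logic above:
# guns first, then access to a major metro (supermarkets + bandwidth),
# with cities that already contain zombies disqualified outright.
# All numbers besides the gun-ownership rates are rough placeholders.

CANDIDATES = {
    # city: (gun ownership %, rough metro population, has major metro, zombies already present)
    "New Orleans, LA": (45.6, 1_200_000, True, False),
    "Mobile, AL":      (57.2,   400_000, True, False),
    "Anchorage, AK":   (60.6,   380_000, False, False),  # no big, warm metro
    "Jackson, MS":     (54.3,   540_000, True, False),
    "Las Vegas, NV":   (31.5, 1_900_000, True, True),    # zombies already invaded
}

def survival_score(guns_pct, metro_pop, has_metro, already_zombies):
    """Higher is better. Disqualify cities with no metro access or an
    existing zombie presence; otherwise reward firepower and lightly
    penalize crowds (every neighbor is a future zombie)."""
    if already_zombies or not has_metro:
        return 0.0
    return guns_pct / 100 - metro_pop / 10_000_000

if __name__ == "__main__":
    for city, stats in sorted(CANDIDATES.items(),
                              key=lambda kv: survival_score(*kv[1]),
                              reverse=True):
        print(f"{city:>18}: {survival_score(*stats):.3f}")

Run it and Mobile comes out on top, which is reassuring, since that's also where the non-computational version of the analysis landed.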

Tomorrow: a mathematical approach to surviving the zombie apocalypse.

Monday, August 17, 2009

what is new media literacy?

Until about a month ago, I worked for a research group called Project New Media Literacies. During my tenure there, the group's Creative Manager, Anna van Someren, produced the following video to describe our work:


I love this video and think it does a fantastic job describing the focus of Project New Media Literacies. What it doesn't do, however, is answer the question my friend Kathleen asked me the other day: "What is new media literacy?"

Here's my answer: It's like print literacy, only different.


A short definition of print literacy

Think about learning how to read. You start by figuring out words and sounding out short sentences. Kids spend a lot of time learning vocabulary, practicing with different kinds of texts, and writing their own texts. The whole point of that is to increase learners' fluency with the words, symbols, and markers that comprise a language, so that when they encounter an unfamiliar type of text they'll be able to decipher it in context. By learning how to read this




you can also (theoretically) develop an ability to read this:


and this:


and this:


And even if you can't exactly decipher everything included in the examples above, most people would likely at least know what kind of text they were looking at and, even if they didn't know what opah was or whether it tasted good, they would at least know how much it cost to find out.


New Media Literacy
Keep in mind that, though we tend not to think too much about this, there are tons of technologies involved in the creating and communicating of print messages. Word processors are communication technologies, of course, but so are pencils, quill pens, telegraphs--even language itself is a technology--an invention devised to support a specific kind of communication.

New media literacy starts from the premise that digital technologies like email, Twitter, chatrooms and so on are simply new communication resources that can be clustered in the same category as pens, paper, and the printing press. While they're the same class of technologies, however, the types of communication these new digital technologies support are significantly different from those supported by print technologies.


One interesting feature of print literacy is that while it's related to oral literacy--the ability to speak and understand a language--oral literacy can in theory, and often does in practice, exist without print literacy. This is because what it takes to interpret the symbols that make up a spoken language (deciphering a series of intentionally ordered sounds) is fundamentally different from what it takes to interpret the symbols of a printed language (deciphering a series of intentionally ordered visual symbols). Print literacy, however, is built on the shoulders of oral literacy: While we can easily imagine a powerful public speaker being functionally illiterate, it's practically impossible to picture someone who is able to read but unable to speak or understand the language she can read.

Print literacy and new media literacy are connected and separate in a similar way. In order to master new media platforms and social communication tools, you have to possess a fluency with print media (in addition, increasingly, to visual and sound-based media formats). On the SAT, here's how this all would play out:

oral literacy:print literacy::print literacy:new media literacy


In other words, print literacy is necessary but not sufficient, because the conditions surrounding print media in social communication environments are fundamentally different from those surrounding print media in, for example, a textbook.

The goals behind new media literacy education are, however, the same as those behind print literacy education: To support learners' facility with a set of texts and allow them to navigate new media platforms with relative fluency. There's no point in learning how to edit Wikipedia, for example, if it doesn't offer us skills that carry over into other collaborative knowledge-building environments. Twitter might (though I doubt it) flounder and fail within five years, but that doesn't mean learning how to engage with it is pointless. Through Twitter, we can learn how to build and participate in a community that features a largely invisible audience, persistence of information, and tacit but fairly firm rules for engagement. If we can learn how to jump into the world of Twitter, then, we might also learn how to navigate this:


or this:





or this:


All of the above technologies are built on combinations of oral and print literacies, but that doesn't mean knowing how to speak, read, write and understand are enough. The words may be the same, but the social competencies required to decipher them inside of their context are different.

There: new media literacy. It's kind of like learning how to read and write and kind of not like that at all.

Monday, August 10, 2009

blogging burns more calories than watching tv or sleeping

I recently went nearly completely offline for ten full days as I packed and moved from Boston to Indiana, having the unbelievable good fortune along the way to witness the first minutes in the life of my brand new niece, Morgan Brianna DeGeer. During this time, I only found the time and connectivity to publish two blog entries, post four tweets, and enter five Facebook status updates.

Being offline was hard but not impossible, thank god; I halfway expected to suffer from serious irritability and sudden fits of rage and sadness. What I missed most was my daily morning routine of waking up, reaching over to the passenger side of my bed, and grabbing hold of my laptop. This is a routine I'll be glad to get back to.

And I'm not alone, according to a recent New York Times article that describes an increasingly typical A.M. routine:

This is morning in America in the Internet age. After six to eight hours of network deprivation — also known as sleep — people are increasingly waking up and lunging for cellphones and laptops, sometimes even before swinging their legs to the floor and tending to more biologically urgent activities.


Some people--including some interviewed for the NYTimes article--may decry this new trend as unnatural, antisocial, or unhealthy. I can't speak for the experience of others, but for myself, I disagree with this analysis. (And here I risk being part of what another New York Times article calls a potentially problematic anti-print media "drumbeat." "This drumbeat," Michael Sokolove writes in the piece about the faltering of Philadelphia's major newspapers,

a relentless declaration that print is doomed, may be a problem in and of itself, making it easy to cast anyone who wants to save print as a Luddite.)


Perhaps "lunging" for cellphones and laptops before emptying your bladder might be considered unhealthy, but only if you think of the lunging as on par with waking up and reaching for the TV remote. Watching television, after all, is the ultimate passive activity, burning a mere 68 calories an hour (to the 46 calories burned per hour of dead sleep). But for a lot of people, opening a laptop is practically the diametric opposite of turning on the tv: Instead of watching something someone else made, they get to make something for themselves and others, to build something new out of nearly endless buckets of clay that get replenished by the day, the hour, the minute.

In a previous career trajectory, I was a newly minted poet freshly emerged from an M.F.A. program. Most mornings, I woke up early, flipped open a notebook, and wrote. That activity seems to me now to have been innately self-contained and self-absorbed, existing as it did in an intentional vacuum. I don't know how many calories blogging burns per hour, but I know it generates both light and heat for me and, I hope, for other people who land here. It's why I haven't yet been swayed by accusations that blogging, tweeting, and working with social networks are vain, self-centered, and self-aggrandizing acts: When leveraged in the most interesting ways, these media platforms become not only the materials molded out of clay but the raw materials from which others may build their own designs.



Oh, and here's a pic of my new niece. I swear she is exactly as gorgeous as this photo suggests.

Tuesday, August 4, 2009

why Twitter's not trivial, by a print journalist who trivializes Twitter

From the opinion section of the Detroit Free Press comes further evidence that even the most social media-friendly print journalists either: a.) don't really understand the value of social media; or b.) have yet to master the finer points of conveying irony through the written word.

This most recent proof comes from the Freep's deputy managing editor, Steve Dorsey, in a column about Thursday's Twitter hack. The column, "Tweetless Thursday was a shock," makes the sound and fully reasonable argument that in its approach to reporting the outage, mainstream media "once again...underestimated the value of social media to its users."

Dorsey points to the Associated Press description of the event:
“For Twitter users, the outage meant no tweeting about lunch plans, the weather or the fact that Twitter was down.”

As Dorsey explains,
The site has played a big role in several recent news stories ranging from the social turbulence following the Iranian elections to the first picture of a plane that landed in New York’s Hudson River. Locally, the earliest headlines and photos after the tanker explosion on I-75 at 9 Mile were on Twitter....

Some use Twitter solely as an input channel, to update their Facebook status. Others value its conversational nature. I take advantage of both. Ultimately the biggest value of Twitter is a weeded out, faster connection to things I didn’t know before.

Dorsey had me until the final paragraph of his piece: "Embrace it or not, but don’t ignore Twitter. Give it a try: it’s free (and it’s addictively fun!)."

Okay, so after an entire piece that argues for taking Twitter more seriously, he ends by encouraging his readers to try it not because of its social value but because it's free and addictive? All we can do now is hope the irony was intentional.

putting the "our" in "open source": on the dearth of women in the open source programming movement

In case you haven't seen it yet, I wanted to link you to Kirrily Robert's keynote at this year's O'Reilly Open Source Convention. Robert's keynote, "Standing Out in the Crowd," focused on the dearth of female developers in the open source movement. She offers this image from the 2008 Linux Kernel Summit:


Image credit: Jonathan Corbet, lwn.net


Robert writes:
This is a normal sort of open source project. I’ll give you a minute to spot the women in the picture. Sorry, make that woman. She’s on the right. Can you see her?

While women are a minority in most tech communities, Robert explains, the gender disparity in open source development is more pronounced than in other technology disciplines. While women make up between 10% and 30% of the tech community in general, they comprise about 5% of Perl developers, about 10% of Drupal developers, and (according to an EU-funded survey of open source usage and development, called FLOSSPOLS) about 1.5% of open source contributors in general.

Robert surveyed female developers to find out why women seem to be so reluctant to contribute to open source projects; the most common reason was some variation of "I didn't feel welcome." She points to a pair of innovative projects whose members have actively worked to recruit women. One is the Organization for Transformative Works' (OTW) Archive of Our Own (or AO3); the other is Dreamwidth, a blogging and community platform forked from the LiveJournal codebase. Both projects focused on recruiting women, not to be inclusive but because they felt it was essential for the success of the projects.

The entire talk is worth a read-through or a listen, but I want to highlight one key point from the set of strategies she offers for recruiting diverse candidates: Find potential users of the application and teach them programming, instead of recruiting good programmers and teaching them about the value of the application. She says:

If you’re working on a desktop app, recruit desktop users. If you’re writing a music sharing toolkit, recruit music lovers. Don’t worry about their programming skills. You can teach programming; you can’t teach passion or diversity.


I've been thinking about this very aspect of the open education movement since the Sakai 2009 Conference I attended last month. Sakai offers an open source collaborative learning environment for secondary and higher education institutions, emphasizing openness of knowledge, content, and technology. This embrace of openness was evident in every aspect of the conference except one: the notable absence of educators among the panelists and in the audience.

If you want a good open education resource, you need to start by recruiting open source-friendly educators. Otherwise, you run the risk of developing a highly robust, highly functional tool that's limited only in its ability to offer the features educators actually want.
 
