Sunday, September 13, 2009

weighing in on the natives / immigrants metaphor

Just FYI, "digital" isn't actually a language, no matter how badly Marc Prensky wants it to be.

Prensky's notion of "digital natives" and "digital immigrants" has gained cultural traction because it gives us a way to talk about the generational differences in approaches to technology. We get it when he writes that

[a]s Digital Immigrants learn – like all immigrants, some better than others – to adapt to their environment, they always retain, to some degree, their "accent," that is, their foot in the past. The “digital immigrant accent” can be seen in such things as turning to the Internet for information second rather than first, or in reading the manual for a program rather than assuming that the program itself will teach us to use it. Today's older folk were "socialized" differently from their kids, and are now in the process of learning a new language. And a language learned later in life, scientists tell us, goes into a different part of the brain.



My mom prints emails that interest her and trusts the information delivered in print form to her front door, but not the information delivered digitally to her computer screen; the kids I work with don't really bother with email and gather digital data like it's Super Mario Brothers coins. Ha! we say. Digital immigrants! Digital natives!

Fine. Except "digital" is not a language.

"Digital" is a way of conveying information. "Digital" is a cultural tool for delivering language, not the language itself.

And that's just the tip of the iceberg when it comes to the problems with the natives / immigrants metaphor. More troublesome is the question of who gets to decide which of us are the natives and which are the immigrants. We need to consider how this metaphor--taken up so widely in our cultural conversations--continues to reify a divide in participation based on gender, class, and ethnicity.

Even those who subscribe to the Prensky metaphor have to concede that not all young people can be considered "natives" by his definition, and not all old people can be considered "immigrants." When we make the sweeping proclamation that kids these days are digital natives, what we're really doing is identifying the type of kid whose practices and ways of being in the world have gone mainstream.

Had we but world enough, and time, this cultural approach, Prensky, were no crime. But what we actually have is a desperate divide: (largely middle and upper class, largely white) kids with excess time and access to resources and support for developing a technological fluency; and (largely lower class, often nonwhite) kids without the resources or support to develop the kinds of social competencies that will enable them to join the larger cultural conversation.

The digital natives / digital immigrants metaphor is yet another tool that gets used, intentionally or unintentionally, to support our culture's dominant Discourse, dominated as it is by the same members of the privileged classes who have historically monopolized cultural conversations.

One of the most thrilling aspects of the social revolution is its potential to overthrow gender, class, and ethnic divides. So far, we haven't come anywhere near realizing even a fraction of this potential, and sweeping terms like Prensky's--steeped as they are in a long history smacking of hegemony--make the revolutionary potential of new media technologies increasingly difficult to realize.



Related posts by other writers:
Marc Prensky: Digital Natives, Digital Immigrants--A New Way To Look At Ourselves and Our Kids
Marc Prensky: Overcoming Educators' Digital Immigrant Accents: A Rebuttal
Henry Jenkins: Reconsidering digital immigrants...
John Palfrey: Born Digital
danah boyd: some thoughts on technophilia
Timothy VanSlyke: Digital Natives, Digital Immigrants: Some Thoughts from the Generation Gap

Saturday, September 12, 2009

new media literacy with an attitude

In an act that feels like the intellectual equivalent of flipping through old photo albums, I recently re-read the Project New Media Literacies white paper, Confronting the Challenges of Participatory Culture: Media Education for the 21st Century.

That white paper (now, FYI, a book published by MIT Press) was the guiding force behind all my work for the two years I spent with ProjectNML, first as an education outreach coordinator and then as a curriculum specialist. Now, as an emerging learning scientist (and a kind of social justice-y one, at that), I've had the chance to reconsider the piece through a fresh lens. I'm still a fan, though from my new outsider's perspective I find myself dwelling on some problem areas even as I admire the core of the paper's intellectual work.

the awesomeness: a nod to open source culture
First, and to my great surprise (I had not remembered this), the white paper actually describes the new media era as "an open-source culture based on sampling, appropriation, transformation, and repurposing" (p. 20). The authors argue that "we must push further" in considering the social aspects of literacy, especially in a culture that increasingly values collaborative knowledge-building and collective meaning-making.

Readers of this blog know that I embrace the free/libre/open source movement as a crucial driver of real and lasting social change, so I'm thrilled to see this nod to open-source culture.

In general, however, I'm disappointed that this notion is not taken further. Perhaps this paper aligns more with the open-source contingent than with the free/libre contingent. The difference, briefly, is that open source adherents generally want to see better products made available to more people, and believe that opening up the source materials will support this; free/libre adherents generally believe in making source materials available so as to destabilize economic, political, and cultural institutions. As a free/libre kind of guy, I can attest that some of us worry the key tenet of this white paper doesn't go quite far enough:

Not every member must contribute, but all must believe they are free to contribute when ready and that what they contribute will be appropriately valued.


Too often, the rules of contribution are set nearly in stone before any potential contributor comes along--which means that questions of what will be contributed, and by whom, and where, and why, and so on have been decided by some vaguely defined powers that be. In America and many Western countries, this isn't an issue of literal censorship; it's more a question of paradigm. Take, for example, the recent finding that 87% of all Wikipedia editors are male. Or consider the dearth of women in the open source movement. Or what about Anna Everett's argument that "[f]rom 1995 to the present...the structured absences of black [and by extension other minority] bodies that have marked most popular imaginings of the brave new world order were in danger of reifying [or naturalizing] an updated myth of black intellectual lag, or black Technophobia"?

on participatory democracy
Which brings me to my next point: considering the nature of "participation." Having worked with Henry Jenkins directly for two years, I know how deeply he values civic engagement. This includes political involvement as well as the kinds of everyday interactions in which consumers-turned-producers engage. I happen to know that his current project focuses on "participatory democracy," and, good lord, I can't wait to read what comes out of the ethnographic work he and his team are doing now.

The New Media Literacies white paper doesn't go far enough in this area. It couldn't possibly, of course, since it provided the groundwork for Henry's current focus on civic participation. My concern, though, is that this paper is being taken up in so many ways, by so many different types of people, and with such great force, that it may be interpreted as the definitive argument about new media education. As such, it merits a caveat.

This paper does give a brief nod toward the notion that "civic participation" may not necessarily be about politics, by explaining that
the new participatory culture offers many opportunities for youth to engage in civic debates, to participate in community life, to become political leaders, even if sometimes only through the “second lives” offered by massively multiplayer games or online fan communities.
In the end, however, this section of the white paper asserts that the primary goal of a civics education is "to help learners to connect decisions in the context of our everyday lives with the decisions made at local, state, or national levels."

Political engagement is about more, much, much more, than elections and the law. Indeed, as Jennifer Earl and Alan Schussman argue, in a culture where corporate entities play an increasingly powerful and important role in the lives of citizens, many people "are protesting against corporations themselves in hopes of directly changing corporate policies or products." This is, as I wrote in an earlier post,

the new model of civic engagement, a type of activism that goes largely unrecognized by political scientists, cultural theorists, and pollsters but that offers a new model of democratic participation—the struggle over ownership and definition of public spaces, both physical and virtual. It's hard to identify, harder to measure, because it's deeply integrated into the everyday activities of an entire generation whose lives, identities, and self-making increasingly extend into virtual spaces.
Part of the difficulty in designing and defining civics education is that learners don't seem to care about politics. Why should they care about something that Jenkins et al. rightly identify as happening far outside their spheres of consequence, when they have so much more power, so much greater ability to influence decision-making, within the social networks and community spaces they navigate online every day? Facebook users, angry at a change in the terms of service, pushed back hard enough to get admins to reverse their stance. MySpace users have worked for ad-free zones. Fans of TV series have successfully petitioned to keep floundering shows on the air long past their intended expiration date.

Like it or not, this is civic engagement too, and it has its value alongside the kinds of actions that push for health care reform, the election or rejection of politicians, and the overturning of unjust laws.

...and a word on copyright, intellectual property, and breaking the law
I do not know Henry's official stance on copyright laws and intellectual property, but this white paper appears to take a fairly conservative view on these things. In addressing the new media literacy practice of appropriation, the authors write:
Artists build on, are inspired by, appropriate and transform other artists’ work. They do so by tapping into a cultural tradition or deploying the conventions of a particular genre. Beginning artists often undergo an apprenticeship, during which they try on for size the styles and techniques of other, more established artists. Even well established artists work with images and themes that have some currency within the culture. Of course, this is not how we generally talk about creativity in schools, where the tendency is to discuss artists as individuals who rise upon or stand outside any aesthetic tradition....

Many of the forms of expression that are most important to American youth accent this sampling and remixing process, in part because digitization makes it much easier to combine and repurpose media content than ever before.... Despite the pervasiveness of these cultural practices, school arts and creative writing programs remain hostile to overt signs of repurposed content, emphasizing the ideal of the autonomous artist. Yet, in doing so, they sacrifice the opportunity to help youth think more deeply about the ethical and legal implications of repurposing existing media content, and they often fail to provide the conceptual tools students need to analyze and interpret works produced in this appropriative process. (pp. 32-33)

Let's take this one step further: Sometimes, creative appropriation is mistaken for copyright violation; and, importantly, some things that would currently be considered copyright violation are simply a product of a legal system threaded through with corruption and corporate greed (cf. the Disney Corporation). Sometimes, people purposely violate intellectual property and copyright laws for political purposes. Indeed, scads of creative types oppose the current copyright system for precisely these reasons. There is great value in teaching not only the longstanding tradition of appropriation but also the political ramifications of appropriative practices.

I was recently called a conspiracy theorist by a colleague. I wasn't offended: I am a conspiracy theorist. I believe that a deep if largely unintentional conspiracy exists whose goal is to help maintain the status quo at all costs. The institution of education is an important element in achieving this goal; it transmits a set of core values, beliefs, and discourses that allow continued cultural domination by a vastly outnumbered subset of our culture. There is deep value in challenging our culture's primary Discourse, even if a complete overthrow would be impossible (and maybe even undesirable).

About a year and a half ago, I heard Ernest Morrell call media literacy "a matter of life and death" for the urban youth he works with in Los Angeles. At the time, I thought he was exaggerating for effect. I no longer think he was exaggerating.

on virtual worlds as petri dishes

avoiding the pitfalls of researching games


Yesterday I attended a talk by virtual worlds economist Edward Castronova.

I wasn't as impressed as I had hoped to be, given how much I love Castronova's writing--though my friends later told me not to judge Castronova too quickly. The talk was intended, they explained, as a presentation to a general audience, so Castronova perforce needed to simplify and streamline his big ideas.

Okay, fine, so I won't perform my typical Final Judgment on Ted Castronova. But I do want to take up one point that I found problematic, not only in the context of his talk but in the larger context of research into games and virtual worlds.

The talk was called "Virtual Worlds as Petri Dishes," and it was linked to his recent paper, "A Test of the Law of Demand in a Virtual World: Exploring the Petri Dish Approach to Social Science." Castronova's basic argument, in the paper and in the talk, was that virtual gaming environments offer a rich space for researching a wide range of social issues. Because Castronova is an economist, his focus is on how economic principles apply (or don't apply) in virtual worlds. In his talk, he argued that many people refuse to consider the possibility that human behavior in game spaces can offer us insight into human behavior in the "real" world--even though it's clear that at least certain key human traits carry over into virtual environments.

It's an interesting point, and one well worth exploring. But where Castronova went wrong yesterday was when he made this point, paraphrased below:

Game makers and policy makers are basically the same--they both need to create worlds that work for the people operating inside of them.

I shot my hand up.

"This point," I said, "seems fairly central to your research" (he nodded) "and it also seems fairly simplistic--especially since if you're a game maker and I don't like your game, I can just go play another one."

"Well," he answered (and remember that I'm paraphrasing in an effort to avoid misattribution--though I'm fairly certain I'm getting the points right), "the same thing is true in the real world--people can invest internationally, or they can relocate to another jurisdiction."

"Yeah, if (my voice was shaking; I was nervous) they have the money to do that--but there are tons of people who don't have the choice to just move to another country if they don't like the one they're living in."

"We say the same thing about Mexico, but lots of Mexicans who don't like where they're living leave every day."

"But lots don't," I said. "Lots (my voice was shaking; I was mad) stay right where they are."

Castronova ultimately ceded the point, kind of, but the larger issue isn't whether I got him to admit I was right. The larger issue is the sweeping claim that the choices people make in games are the same sorts of choices they can make in real life.

They're not. For one thing, people choose to play games, or not to play them, and they make choices about which kinds of games they want to play, and when, and for how long. We don't, by contrast, choose to be alive--and almost without exception, being alive means living in a society whose rules were created outside of our influence. We don't get to decide whether to play; we only get to decide how to react to the rules of the game. We can't choose not to play at all.

Sure, tons of people relocate, looking for a game whose rules suit them better. The wealthy can move to Switzerland or Casablanca or Canada; the most desperate poor can sneak across a border and live each day at risk of being sent home. But everybody in the middle is kinda stuck where they are, and if they're not happy with the game they're playing, they can try to game the system right back. Or they can try to crash the game, if only for a short while.

It may very well be, as Castronova argues, that human behavior in virtual worlds mirrors human behavior in the "real" world. But if we agree that virtual worlds can serve as research petri dishes, then we also need to, as Castronova himself said yesterday, "test the test tube." We can't ignore the fact that games are worlds that people enter by choice, often as a refuge from the compulsory "games" they play every day, regardless of their physical location. We can't ignore the difference in stakes. We can't ignore the differences in mindfulness, attention, and choice. We can't ignore the fact that gamemakers' primary goal is to make a world fun enough for people to want to stay, whereas policymakers' primary goal is to make a world livable enough for people to want to survive.

I agree with Castronova that research into games and virtual environments holds enormous promise. But as a believer in games research, I get anxious when people make sweeping proclamations that don't hold up under closer scrutiny. If we want our work to be taken as seriously as we believe it should be, then we have a responsibility to present it in serious ways. Games research is as complex, nuanced, and knotty as any other kind of social research, and we owe it to interested publics to explain it as exactly that.

Thursday, September 10, 2009

in case you were looking for a reason to like Yoko Ono

This is "L'Eclipse," from Sean Lennon's 2006 album Friendly Fire.


Wednesday, September 9, 2009

what is learning (in new media)?

Alert blogtrollers may have seen multiple posts recently with titles identical to the one accompanying this post--that's because we've been asked by learning scientist and new media researcher Kylie Peppler to address this very concern. The question--what is learning in new media?--is too broad for anyone to address within the context of a single blogpost, but if we all set to work, we might get that turkey stripped down to its bones by the end of the night.

My chunk of the turkey is time.

When I joined Twitter, I lurked for months and months without tweeting a thing. When I finally did join the community as a good, earnest citizen, I started out slowly and picked up speed as I learned to negotiate the community's norms and embrace the valued practices of the space. Now, a year and a half later, I can communicate fairly clearly the spoken and tacit norms of the Twitterspace.

I did the same thing with Facebook, Wikipedia, and blogging--looking around for months before joining the community. By doing so, by taking the time to consider the space I was entering, I was able to reflect on others' practices before offering up my own. I read thousands of blogs before starting my own. I worked with friends to learn how to edit Wikipedia. And I was coerced by another friend into joining Facebook; the rest was up to me.

I recently spent some time working with Scratch, a simple visual programming language designed for young learners. As the site explains,


Scratch is designed to help young people (ages 8 and up) develop 21st century learning skills. As they create and share Scratch projects, young people learn important mathematical and computational ideas, while also learning to think creatively, reason systematically, and work collaboratively.


I've designed exactly two projects in Scratch; the first was about a year ago, when a colleague spent the morning helping me work up a little thing I call Jimmy Eats World.


To play this project, click the green flag in the upper right.

I'm annoyed with myself that I didn't make the flying hippo actually disappear at the end of the project, and if I wanted to I could open up the program and make it so. Or I could turn the main sprite, the walking cat, into a hammerhead shark announcing my blog's url.
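For readers who haven't poked around inside Scratch: the fix I'm describing really is just a few blocks' worth of work. Here's a sketch in the plain-text shorthand people often use for Scratch scripts--the sprite name and the end-of-game signal are illustrative, not the actual contents of my project:

```
// a script attached to the hippo sprite
when green flag clicked
show
wait until <game is over>   // e.g., a broadcast sent when the project ends
hide                        // the hippo disappears at the end
```

That's the whole repair: snap a "wait until" and a "hide" block under the sprite's green-flag hat block, and the hippo vanishes on cue.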

I could do that if I wanted to, because I am a highly resourceful independent learner who has the passion and the time to devote to projects like this. I find them personally and epistemologically meaningful--I feel enriched, and I feel that the time I devote to these kinds of projects makes me a better, more useful and proficient blogger and educational researcher.

Time, the friend of the highly resourceful learner, is the enemy of teaching. Time: There's never enough and even if there were, it couldn't be spent on tinkering. There's content to cover, and not just in the name of high-stakes tests. A teacher's job--one made ever more challenging by the social revolution--is to equip learners with the knowledge, proficiencies, and dispositions that will suit them well for future learning. There comes a time when the teacher must say, It's time to stop with Scratch and start on something else.

Which is a deep shame, because it's the tinkering, the ability to immerse oneself in participatory media or a learning platform, that fosters a real fluency with the space.

This is a key feature of what it means to learn in new media: the choice to engage with certain tools, to join up with certain affinity spaces, beyond the time required by schools. Clay Shirky writes that the days are gone when we could expect to do things only for money; we're in an era when the greatest innovations emerge not for money but for love.

If learning in new media takes time, passion, and some combination of intrinsic and extrinsic motivations, then on its surface school seems anathema to a new media education. In fact, it may be that engagement with participatory practices is exactly what schools need at a time when they are struggling to remain relevant to the real-world needs, experiences, and expertises that await learners beyond school.

Thursday, September 3, 2009

why educated elites are lame, by a member of the educated elite

Great piece this morning at Technollama about the struggle between technophobia and technophilia.


The blog's author, Andres Guadamuz, cites a quote he attributes to Douglas Adams:
“There’s a set of rules that anything that was in the world when you were born is normal and natural. Anything invented between when you were 15 and 35 is new and revolutionary and exciting, and you’ll probably get a career in it. Anything invented after you’re 35 is against the natural order of things.”


(If Adams is right, then I better speed up my tech-immersion: I turn 32 on Monday. Also, I'm officially putting all innovative technology designers on notice: Everything awesome must be designed and made public within the next three years.)

The Adams quote aligns with that old political axiom: If you're not liberal when you're young, you have no heart; if you're not conservative when you're old, you have no brains.

Both point to a key characteristic of most humans: the impulse toward self-preservation. For all their whinging about the dangers of participatory media, many if not most anti-tech curmudgeons will, when backed into a corner, acknowledge the democratic possibilities of new media, even if they're not sure those possibilities will ever be realized. As I explained in a recent post, I believe that people who engage in pro/con debates about social media become more strident in public than they are in private.

Guadamuz addresses a key issue of what he calls "the war between old media and the internet": the fact that when everyone is a potential media outlet, a lot of what gets published is drivel and dross. He writes:

We will always need some authoritative and well-written version of events, and for that the traditional publishing mechanisms will continue to exist. However, social media has come to allow more people’s voices to be made available. Is this a good thing? I personally think that it is a fundamental and empowering change in society, one that could potentially create a more participative and rich intellectual environment. Is there a lot of dross out there? Certainly! But there is also a lot of dross in traditional media, as any thinking person who picks up a copy of Heat magazine will attest to.


Guadamuz is right, of course. But, more importantly, the dross that exists on the internet has always existed; it's just that until the emergence of participatory media, the educated elites never had to lower themselves to engaging with it. Lame ideas, poorly designed creative works, ignorant or bigoted political stances, and individual identity work had no avenue for widespread expression, and so the people in charge got to act like none of the above actually existed. And, for all cultural intents and purposes, none of the above actually did exist.

Let me put a finer point on this: The decline-in-quality argument is an elitist stance in reaction to the transformative democratic potential of social media for the unwashed masses. I say this as an educated elite, as someone who has benefited as much as the next guy from the ability to participate in the dominant group's dominant Discourse.

People have always had stupid ideas and uninformed opinions; but what makes social media so powerfully transformative is that it allows people not only to communicate those ideas and opinions but also to refine, clarify, and potentially reject them. Private opinions brought out in the sun are nearly always better than private opinions, privately held--especially when nearly everybody has approximately the same ability to express and circulate their beliefs.

This does not, by the way, make me an uncritical utopianist. I am aware that participatory media platforms allow stupid ideas to gain adherents and to therefore gain power. I'm also aware that these platforms make it so that more stupid ideas are more readily available, and that it becomes increasingly difficult--both individually and culturally--to separate the wheat from the chaff.

That's ok by me. Worrying about how to filter more opinions is way better than worrying about how to provide more people with platforms for expressing their opinions.*



*Note: We still need to worry about how to provide more people with platforms for expressing their opinions.

Wednesday, September 2, 2009

"Things fall apart"? SRSLY?

Over at the HASTAC blog, Cathy Davidson has posted a fantastic piece about a so-called "Facebook Exodus."

Davidson's post is called "Is Facebook the Technology from Hell?" and it tackles a New York Times article by Virginia Heffernan that suggests that

Things fall apart; the center cannot hold. Facebook, the online social grid, could not command loyalty forever. If you ask around, as I did, you’ll find quitters.... [W]hile people are still joining Facebook and compulsively visiting the site, a small but noticeable group are fleeing — some of them ostentatiously.


Davidson, while acknowledging her affinity for other Heffernan-authored pieces, rightly attacks this article for sloppy research and a bald-faced refusal to interpret data rationally. First, Davidson explains,

The "small but noticeable group" she documents are her friends. Their reasons are the ones that any wise FB user needs to be cautious of. Privacy, mostly. Of course FB is datamining. It's "free," right? Well, no. As every Cat in the Stack user knows by now, the "information is free" fantasy has been over for a long, long time. If it is free, they are gathering information that they can sell on the backend. There is no free lunch and no free Internet.

While it's certainly true, Davidson adds, that Facebook's popularity is declining among younger users and that it likely won't remain the behemoth it is now forever, there's no reason to think it will turn into the "online ghost town" Heffernan believes it's doomed to become--a ghost town, by the way, "run by zombie users who never update their pages and packs of marketers picking at the corpses of social circles they once hoped to exploit."

But Davidson's most important point is this:

methodology, people! We have to hold mainstream media responsible in the same way we hold the Internet bloggers and writers responsible. One's five friends are not necessarily the best filter on the world.


It is passing peculiar that the journalistic revolution--everybody, Clay Shirky writes, is a potential media outlet--is being covered by journalism's old guard, the very people whose vocations are threatened by new media platforms like Facebook, Twitter, blogging applications and forums. Humans have consistently proven their ability to see only what they want to see and ignore the rest; print journalists, for all their training in "objectivity" and "fairness," are really no different.

Of course not all print journalists are focused on studiously ignoring the social revolution, despite the overwhelming likelihood that it will come at the cost of their entire field as we know it today. For proof, just follow any journalist who actively uses Twitter as god intended it (I recommend David Carr, David Pogue, and Rachel Maddow).

Still, the question remains: Given the inherent bias of print media outlets toward print media outlets, how do we decide what to trust? Is it true that Facebook, Twitter, and the like are suffering a decline in popularity, that online reportage is less reliable than print outlets, or, indeed, that print journalism is really in the dire straits it claims to be in?
 
