Saturday, April 30, 2016

On Employment

A lot of games coming out around now are Open World style, a result of Grand Theft Auto V making a billion dollars or so. Open world designs mean that objectives can’t be narrowly focused into “get from one end of this level to the other” or “beat this opponent”; instead you’re given an objective and frequently a small set-piece in which to accomplish it. Frequently this turns into a to-do list called a “quest list” that shows you your objectives in the open world, your progress toward completing them, where to go to complete them, and sometimes the rewards for doing so. It’s a pretty straightforward design that appeals to a lot of people and functions well in a logical system of checks, triggers and variables. Despite this setup’s obvious popularity, there’s always been a certain amount of criticism of the relatively robotic enjoyment these games offer, whether it’s criticizing the rote repetition of tasks or the arbitrary way these tasks tend to decide their completion. One of the larger issues with open world design is that the bigger the world, the more limited the set of things players can actually do in it begins to feel. Grand Theft Auto addressed this by stuffing its open world with all kinds of minigames, ownable properties, clothing options, a virtual stock market and so on. Other games are a bit more limited in scope, usually providing little more than a virtual treasure hunt or two on top of the typically combat-oriented gameplay.
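That “checks, triggers and variables” loop is simple enough to sketch. Here’s a toy version in Python — every name here is hypothetical, not taken from any actual engine; it’s just the shape of the thing: world events set flags, and each quest is a bundle of flag checks plus a reward paid out on completion.

```python
# Toy sketch of an open-world quest list: each quest is a set of flag
# checks plus a reward, polled whenever the world state changes.
# All names are hypothetical, not from any real engine.

class Quest:
    def __init__(self, name, required_flags, reward):
        self.name = name
        self.required_flags = required_flags  # conditions for completion
        self.reward = reward                  # e.g. gold paid on completion
        self.complete = False

    def check(self, world_flags):
        """Mark complete once every required flag is set in the world."""
        if not self.complete and self.required_flags <= world_flags:
            self.complete = True
            return self.reward
        return 0

world_flags = set()
quests = [Quest("Slay the dragon", {"dragon_dead"}, reward=100),
          Quest("Deliver the letter", {"letter_delivered"}, reward=10)]

def trigger(flag):
    """A world event fires a trigger, which may complete quests."""
    world_flags.add(flag)
    return sum(q.check(world_flags) for q in quests)

gold = trigger("dragon_dead")  # completes the first quest, pays out 100
```

Clear task, clear reward, clear next objective — the whole design fits in thirty lines, which is part of why it’s everywhere.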
The reason this design is so popular – clear tasks, clear rewards, clear direction toward the next task – has a lot to do with how we set up our real world society. In an ideal presentation of life on planet earth, you’re born, you age a bit until you reach schooling, you’re taught certain things and then evaluated and typically rewarded based on your ability to remember or apply those things, then you graduate and move on into a position of employment where ideally you’re given a series of tasks and you complete them in exchange for money, or essentially the right to live, and to live well if you perform exceptionally well. This is the fundamental backbone of our society, from which all our ideology springs. And it’s not bad, as simulations go! It works in games, no reason it wouldn’t work IRL. Except, of course, that life offers you endless potential actions outside of your questlist, which is itself fallible because frequently your tasks are unclear, your rewards are uncertain, and the path to your next objective is totally unknowable.
What games fundamentally do that work and regular living and basically all of that fail to do is give you a clarity of purpose, a set progression through a series of conditions that ends with you the victor, triumphant in your prowess (or else with you eventually getting frustrated or bored and moving on). The closest thing life has to offer to that kind of clarity is schooling, and even that is subject to myriad systemic issues that prevent it from being as capable and objective as a machine-executed series of rules interpreting your input.
So, games are work, yes, but games are an idealized form of work that cleaves more to the mental construct of what a good and functional society should look like. There’s a lot of stuff out there yammering about games as an escapist fantasy but it rarely bothers to really examine what folks are escaping into and what folks are escaping from. There’s a lot of writing out there too about power fantasies and it’s inarguable that this is a genre, but to what degree are these a power fantasy and why? Take Skyrim, for instance. You’re given a massive map full of kinda samey looking barrows and little medieval towns and you’re frequently employed to slay dragons and promptly rewarded for doing so. Meanwhile, as a digital avatar, you don’t need to eat, you don’t need to sleep, you don’t really need shelter, and diseases are cured as easily as tapping a shrine. You have a clear purpose: defeat the dragon king and stop the dragons from destroying the world.
I’m gonna segue here a bit and talk about the gig economy. Presently a number of older industries are being upended by vastly smaller tech companies who can offer a similar level of service at vastly reduced cost by almost totally eliminating a workforce and handing the responsibility, and the profits, to other individuals who are doing little more than using an app. The app provides accountability and payment processing, the user provides the accommodations. It sounds great on paper because it frankly is great. With Uber/Lyft and their rating system, the companies have greater control over the conduct of their drivers than taxi companies ever did. With airbnb the prices are cheaper and the trips are way more authentic than anything a big chain hotel can deliver, all with little ratings-chasing perks that would be an upcharge in a cab or hotel.
Consider the “employee’s” point of view, though. What relationship does this have to an idealized work environment? For one, the app buzzes and gives you a quest to pick up a passenger, take them to a certain point on the map and drop them off, transferring your payment when you’re done. Similarly airbnb turns the jobs of the concierge, the bellhop, and the cleaning maid all into one role performed by the airbnb host for the duration of the renters’ stay.

Saturday, March 26, 2016

On being an interloper

As part of my coursework in college as an undergrad I created a couple of ethnographies after doing participant observation in a couple of different communities in my life. I chose to focus on communities that I was already at least tangentially a part of rather than trying to integrate a new community or two into my already somewhat busy social/personal life. This felt ideal to me, as I could kill two birds with one stone, though in retrospect maybe this was ultimately a bad idea: I already felt somewhat alienated from these groups, and trying to focus on collecting information and observing with a detached eye only exacerbated the alienation. At least, I think so? This is a rough line of questioning.

I already didn't really feel like I belonged to the individual groups in question for a bunch of other reasons. I was too young, I was not well established as a New Orleanian with just a few years of presence under my belt, and in a larger sense I didn't really have anything to offer folks. When I later ended up cutting ties after an abortive attempt at demanding accountability I realized too that the folks I'd been hanging out with just had largely different priorities in the world socially and politically. In short, I just really, really, didn't belong.
Which is fine as heck for participant-observation work. There's a lot of opinion written and I think a general consensus that a well-done ethnography requires the author to have at some point achieved "acceptance" within the group; the breakthrough comes when the elders put you through some rite of passage or finally share with you some sacred knowledge. To me this just sounds like a narrative convenience. The stories of your encounters with the tribe build and build to a climax of acceptance and you just coast along from there into a doctoral degree. It's easy, it's intuitive, it fits an individualist narrative. I don't think it's accurate at all. In fact I don't think there really needs to be any amount of acceptance to produce good and useful work, and I think that what acceptance you do receive should be thoroughly examined as its own individual social event. Folks in the tribe may never "truly" accept you, but the whole concept itself needs to be examined in its own context. What would acceptance have meant for me within my communities? Folks start calling me to show up versus me just showing up? Folks put some amount of responsibility on me to organize gatherings? Better interviews that were more probing? I think most of my interviews went great, personally.

I think part of the issue is that no matter what I did, in American culture writ large I'm already an interloper. My politics are incredibly radical, even if the bubble I've built insulates me from that. I'm very gay, but not even in the right kind of acceptable gay way, more in the total disregard for social conventions kinda way. My personal background is highly unusual. Many of my personal habits are basically anti-social. I put a lot of work into passing as a reasonable human being when there's money on the line but if I'm not getting paid I honestly can't bother and I can't really jibe with people who do bother. While the groups I did study were on some level or other unusual within America, they were only unusual on one or two vectors, and over the course of my research I found again and again that folks involved were actually fairly conservative. Many aspired to be weirder and sought the sort of authenticity that's ascribed to folks outside the norm, but their attempts were basically superficial.

I think ultimately the largest issue is that I just couldn't relate to the folks in the groups, nor could those folks really relate to me. They didn't have the temperament or shared experience or really even the time to do so. A lot of it was probably ageism. Some of it was probably politics. Some of it is just trapped in that modern individualist alienation from others around you. From my perspective I guess this was ultimately helpful, if only in teaching me what sort of things I want to avoid in life. I keep finding out later too that folks were somewhat more invested in my presence than they appeared at first glance. Maybe I could go back, but what would that mean? I came of my own volition (well, if we're being honest I showed up because that's where my love was at the time), turned my participation into an advantage for me, and quit when I found that my principles were clashing with my participation. Would I be returning because I'm desperate for human connection? Would I be returning to search for some glimmer of something that looks like emotional fulfillment? Am I returning out of academic curiosity for the growth and shaping of the group? I guess the worst thing might be confrontations with folks I feel like I individually affronted, and maybe it's worth going back if only to try and achieve some personal emotional closure. Maybe the time away will have graduated me from interloper to invested party.

Sunday, March 13, 2016

What Did We Learn

My Grandma has this belief that we’re put here on this earth to live our lives the way they unfold and for our immortal spirits to learn something from the lives we lead. It could be something very big, it could be something very small, but regardless we’re here to learn and learn we will. My friend has the belief that we’re essentially the same spirits repeated again and again, that the two of us met thousands of years ago and have been friends before and we’ll meet again in the future (assuming it exists, he’s a bit of an end timesy guy when he’s down). Reincarnation is a super common belief, even in Abrahamism, where the incarnations of the immortal soul are off in some new fantastic world (hell or paradise or limbo), arguably because the alternative, that our lives are incredibly fleeting and go from dust to dust in the blink of an eye, not only is kinda scary to contemplate, it sits wrong with our estimation of ourselves and those around us.
Are we avoiding this on purpose? One of the many facets of modern life that seems to go badly is the obsession with preserving our selves, our money, our possessions, our will beyond the end of our lives. Whether it’s complex tax schemes to keep the money within our genetic offshoots or putting our name on as many big buildings as we can afford to fund, the screaming terror of mortality tends to manifest in these putrid displays of wealth and enforced posthumous filial worship. It’s not good! Inheritance schemes are pretty similar to bad cholesterol, in that they form plaques within the greater systems of human existence and make it harder for those systems to flow smoothly. Or in other words, it makes it harder for new people to earn that wealth while preserving a handful of folks who, by dint of their father’s or grandfather’s or great-grandfather’s efforts, can just sit around at home and literally earn more than half this country will ever see in a lifetime of holding two or three simultaneous jobs.
It’s not like this is new or anything; the wealthy of Egypt would arrange to inter themselves along with their money, an arguably better system than inheritance at least. The scary truth is that whether or not our consciousness persists after death, everything we’ve made in this world is done for. It doesn’t matter anymore. And we all know it, right? It’s another one of those things where people are aware of this factually but it doesn’t really translate into the kind of behavioral shifts you’d expect if people really believed it was true. C’est la vie, ça ira, etc.
While we’re still on this earth though, we still gotta deal with earthly stuff. Our messy relationships, our tough decisions, our mistakes. Ideally every time you make a mistake you just, boom, you’ve learned a thing and now you know it and you’re slightly more perfect. Obviously life doesn’t work this way, and in fact a lot of things aren’t even framed as mistakes when they are. Even the concept of a mistake is tied to a personal ethical system. Maybe you think it’s a mistake to cause harm directly but indirect harms are pretty much a-ok. It makes me wonder sometimes what we could possibly be learning when the basic premises of our lives are so different. Maybe that’s the point and you have to learn something that’s buried under a facet of a facet of existence, like, maybe we need to learn exactly how to hurt people specifically. Who the heck knows?
All this gets even more complicated as trauma enters our lives and molds our ability to understand and appreciate our world. Every scar makes approaching life just a little stranger and affects the way we approach situations in both conscious and unconscious ways. Is it still a mistake if it’s a result of the emotional mindset caused by a past trauma? To what degree does your ability to make decisions really come into play with mistakes?
When I’m feeling more or less ok I’m happy to share my own take that reality lacks any real dimension of personal decision, that what we do is set in stone from the start and we’re just here to ride the emotional rollercoaster. It’s a little nihilistic, at least inasmuch as we live in a society that is absolutely obsessed with not just agency, but a sort of personal individualistic agency that makes things like “by your own bootstraps” and “welfare queens” make sense and destroys even very smart folks’ ability to understand systems as systems and not as the result of individual interaction within those systems (e.g. victim-blaming). I think it’s worth it though. All of that stuff is nonsense. Individuals don’t have any agency in the systems they’re trapped in. Stuff changes, of course. It just changes as a result of collective work that’s largely outside the hands of any particular person. You create the new culture you want to live in with your like-minded humans and it butts up against the existing culture and hopefully your culture wins that conflict.
So hey, what do you learn then if your life is basically on rails? Well heck you can learn dang anything. Your “mistakes” are just happenstance. Learn from them and try to avoid them or don’t! Whatever you’re going to do is pretty much already going to happen. There’s not a lot of sense in fussing about it. Really there’s not a lot of sense in fussing about anything. We still do it, I still do it, it’s just a human thing, but it’s not really useful in any real sense.

Thursday, July 23, 2015

Re: some mario maker early preview coverage

So I don’t normally do this because a) console wars are literally the dumbest possible conflict and b) gamers in general tend to be aggressively wrong in a way matched only by hardened rightists so the folks who need to hear this probably won't, but this particular article irked me in just the right way that I want to respond to it.
To contextualize this discussion here I want to point out that this generation of video game hardware has, across the board, sold worse than the last generation of video game hardware. The Sony Playstation 4 is the only home console currently doing well, and it’s doing about as well as the ps3 (last gen’s sales loser) did at its peak. Likewise the handheld market has decreased overall, with the 3ds doing about half as well as the DS did in its heyday. There are a couple of reasons for this, but most of them just come back to the current economic doldrums all of western society is facing as a result of a bunch of terrible neoliberal decisions.
Even so, the Wii U is firmly in third place, even behind the terribly lagging Xbone, but the quote re: the system not selling as well as even the gamecube ignores that at about this point in the gamecube’s life (two and a half years in) it had sold about as many units, and that overall, current hardware can’t be expected to sell as many units as the seventh generation did, let alone the sixth.
Concerning the Wii being “seen as slightly faddish,” this is some typical gamer rigmarole where anything that sells to “casuals” in an unacceptably high amount (literally every facebook game, mobile gaming, the wii, so on) is in some way an abhorrent aberrance to the true gaming community, which only buys “serious” consoles without “gimmicks.” It’s this bullshit language that helps maintain the atmosphere where anyone insufficiently versed in gaming shibboleths (ability to manipulate complex controllers to move a character in 3d space, willingness to spend hundreds of dollars on a computer that only plays games instead of a few dollars on a game for a computer you already own) is perpetually an outsider despite the theoretical definition of “gamer” being “one who plays games.” It’s both a failure of empathy and a matter of taking the skills built into gaming for granted.
The Wii sold specifically and explicitly on a platform of making games easier for average people to get into, and it’s ironically this same platform that created the Mario being valorized in this same article. That it sold tremendously well is an explicit demonstration of the validity of this approach and the Wii U’s problem isn’t that the wii’s popularity is a flash in the pan, but that the wii u is poorly positioned in the market. Instead of retaining the market that the wii successfully capitalized on, the wii u chose a terrible name and returned to a more complex control scheme that alienated their non-gamer market.
The article attempts to position the problem, as so many comments sections do, as a problem with the tablet screen, suggesting that consumers didn’t respond well to it. This is mostly conjecture, but I’d suggest the issue is less with the screen itself, which is actually broadly popular with actual wii u owners, than with the aforementioned failure to position itself in the market and, more importantly, a failure to interest developers in creating unique experiences for the screen. Not mentioned at all in this article are Nintendoland and Game and Wario, arguably the two games that most significantly utilize the various features of the screen and demonstrated a variety of possible control schemes for future games, none of which have been later reused even by Nintendo itself. (That said, it looks like the upcoming star fox heavily leans on the design of the metroid game in Nintendoland.)
Which brings me to the next issue here, where the article suggests that Nintendo has dragged its feet about putting out major franchises. This shit is a goddamn gamer Gregorian chant at this point, and a chant that frustratingly ignores two things. One, Mario is their biggest franchise by a very wide margin and they won’t stop making goddamn Mario shit. Mario Kart 8 has something like a 60% attachment rate and there hasn’t been a year since the console came out that some kind of Mario shit hasn’t come out. What gamers mean is “why hasn’t [series that doesn’t sell as well as Mario] come out yet,” which is number two: Nintendo is having to make HD games now, which have much longer development times and thus much greater development costs. So far this generation Nintendo has explicitly been trying to offset those costs by outsourcing a great deal of its design work to other companies (smash 4 with namco, wonderful 101 and the new star fox with platinum, hyrule warriors with koei tecmo) and this is 100% the reason Nintendo is shy of creating new entries in franchises that aren’t guaranteed to sell Mario or Zelda numbers (it’s also why Splatoon almost had Mario characters until the team convinced Nintendo it could be made on the cheap, and indeed they got it out in about a year with just four maps and three major weapon types, with more coming as free dlc). Speaking of DLC, this is why Nintendo suddenly seems so confident in creating and putting it out: Nintendo’s DLC, like all DLC, is designed to offset the costs of production, which I’ll reiterate are dramatically higher than the costs of producing a Wii game.
Nintendo is literally dealing with exactly the same issue the rest of the industry is, which is that development costs have vastly outstripped the profitability of selling games at $60, and what we’re seeing across the market is publishers scrambling to deal with this paradigm in all kinds of wildly unpopular ways. This is not a case of blatantly terrible decision making, except most prominently the name of the system (market positioning), but a case where Nintendo is having to adapt to the status quo of other console-makers and is trying to still make a profit. Microsoft loses money on every single xbox sold; Sony presently is only making money through its games division as the rest of its electronics are crashing and burning. Nintendo pretty much only makes games, so they can’t afford to take a bunch of business risks, hence a million fucking Marios.
Tl;dr the wii wasn’t a fad, the controller isn’t the problem, and gamers have no idea how games get made.  And the fucking NX is a handheld.

Monday, July 20, 2015

Six Totally Unexpected Reasons Reviewing Games is Harder Than You Think

I gotta stretch this out into more of a personal blog entry to cohere this into something that isn’t just a real facile aphorism. So, I’m reading Nathan Rabin’s latest year of flops on av club, since I guess the spinoff website thing didn’t work out and Nathan is back at the site that loves/hates him. Anyway it’s a review of So You Wanna Marry Harry, a singularly puerile (and those who know me know I don’t use that term lightly [nah I’m just fucking with ya]) reality show where a handful of ladies are apparently coerced into believing they’re competing for the affections of Prince Harry, who is I guess British royalty of some sort. It’s a reality show, so it has reality show morals, so of course the standpoint of the show is that the winner should be someone who “deserves” it, which means someone who is honest and genuine and unassuming and whatever other traits society has deemed love-worthy. This is the biggest thing Nathan wrote about in his review, that the typical moral construction of the reality show narrative comes across as particularly flat and tasteless when built on a construct of deceit above and beyond most reality shows; “Harry” was to be interested in the most “genuine” of possible suitors while pretending to be British royalty.
This makes for an interesting angle to talk about, and invites the reader to find value in a theme not necessarily explicit in the text. In short, it co-operates with a good review.
Naturally this got me thinking about video games. One of the more pervasive concepts in critical readings of games is ludonarrative dissonance, the mismatch between the themes presented by the narrative of a video game and the mechanics present within it. A good general example is when games present you with an objective that is implied to be time-limited (e.g. we must find the bomb before it blows up the city!) but in reality the game will simply wait for you to eventually complete that objective before moving on. Another common example is presenting characters within the narrative who are supposedly morally correct and relatively pacifistic (usually as opposed to morally bankrupt and deadly antagonists) who violently murder hundreds or thousands of faceless humans over the course of the game. In practical terms this is no big deal: the gameplay mechanics allow players time to explore or practice without the pressure of a time limit, or offer a series of combat challenges that break up the platforming sections and pad out the time spent with the game. In terms of video game thematics, though, this dissonance can create ravines of meaning that make it difficult to extrapolate coherent themes out of a game. Ultimately it fosters a certain kind of cynicism in both the reviewer and the player: the story doesn’t really matter because the mechanics are just going to undermine it anyway.
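The ticking-bomb example is easy to make concrete. Here’s a hypothetical sketch in Python of how that mission actually works under the hood — the narrative announces a deadline, but the simulation never runs a clock; the objective just waits indefinitely for the player:

```python
# Illustrative sketch of the "ticking bomb that never ticks."
# The mission text claims urgency, but nothing in the update loop
# ever decrements a timer. All names are hypothetical.

class BombMission:
    def __init__(self):
        self.narrative_text = "We must find the bomb before it blows up the city!"
        self.defused = False

    def update(self, player_found_bomb):
        # Note what's missing: no countdown, no failure state.
        # The "time limit" exists only in the narrative layer.
        if player_found_bomb:
            self.defused = True

mission = BombMission()
for _ in range(10_000):                    # the player wanders off for hours...
    mission.update(player_found_bomb=False)

city_still_standing = not mission.defused  # ...and the city never explodes

mission.update(player_found_bomb=True)     # the quest resolves whenever you get around to it
```

The dissonance lives in the gap between `narrative_text` and `update`: the words promise a consequence the mechanics never implement.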
I could be wrong, but I think this is one of the reasons why the notion that game reviews should be “objective” clings to life in a way that criticism in other media doesn’t have to deal with. It is already widely accepted that game stories are bad, and ludonarrative dissonance is but one part of that puzzle (the other parts being how incredibly stereotypical most of these narratives are under guise of adherence to genre tradition and the relative disinterest most publishers have in foregrounding narrative as an essential part of the product they’re selling rather than simply a tool of market positioning) but the problem with bad game stories ripples outward and affects how we think about and talk about games.
One of my favorite game reviewers is a weird dweeb named Tim Rogers, who writes reviews not as straightforward gamepro-style 300 word affairs but as 14-20k word anecdotes about his life that usually border on shaggy dog stories. When he does write about the games in question, he mostly writes about how the game /feels/ and what the mechanics do to create that feeling. Narrative is rejected wholly as an interesting aspect of games.
Roger Ebert talked about games once and caught a really silly reputation among the gaming crowd, up there with shibboleths about Uwe Boll and Jack Thompson in those days. He suggested that games weren’t Art as he understood it, that the nature of games, the structure, the objective, the win condition, the series of rules themselves precluded games from being Art. It’s not a very unreasonable position, but it struck a nerve with a segment of folks hoping for a cultural legitimacy beyond Hollywood stereotypes of nerdy losers. Ebert only elaborated once before his death, mostly just reinforcing his position and pointing out that a lot of the counterarguments were fundamentally misunderstanding his position.
I don’t wholly agree with him, but I think it’s a worthwhile position that can be explained pretty well through the analogy of playing an instrument. While the music produced through instruments is widely regarded as Art and has been for some time, I think you’d be hard-pressed to find someone who would describe chords and scales and hertz values as art, rather than as a necessary study in the process of creating art, much as learning proportions is a necessary study in painting. The paintings are the art, and the process is sometimes art. The rules governing both are not themselves art. This holds true for games as well. What resonates with players is the experience of playing the game, the emergent narratives they’re experiencing. The experience can be and is often shared through youtube or meandering anecdotes and these representations can too be art, but the actual process of crunching variables or detecting player input or wrapping a virtual skeleton in a bitmap? Not art.
The reason I bring up this particularly navel-gazey and moot critical argument is because it’s directly related to how narrative is often confused as the artistic part of games. Shadow of the Colossus, for instance, is often described as a sort of “Rosebud of games” owing largely to its relatively somber narrative, rare for its time. The gameplay of Shadow of the Colossus, however, is fairly typical of a 3rd person action title. You have a horse, you ride it to a boss fight, which is more accurately a platforming segment followed by tapping a button, and… that’s about it. Sometimes you perform platform action to get to the boss to platform on it.
This is definitely a reductive description, but think about the elements being reduced here. What happens to a game if you strip away the music, the background textures, the user interface, the narrative? You’re left with a series of systems, appreciable mostly as elegant objects designed to produce an outcome, which in game terms is usually just creating a functional input/output feedback loop from the player. It’s all that other stuff that’s designed to make players believe they’re an assassin in 1500s Damascus. So you can see how the two collide. On the one hand you have a series of systems designed to produce, more or less, a Skinner box response from players, and on the other you have another set of artistic systems designed to make you believe that the skinner box you’re participating in is actually driving forth a narrative about invading an alien planet in the past to save the present from being destroyed.
This duality, plus the fact that both sides of the system are constantly improving, makes critically discussing games kinda weird and endlessly debatable. Games are unique among forms of entertainment in that, by their very interactive nature, it’s difficult to achieve the kind of suspension of disbelief that foregrounds the narrative over the tools used to convey it. It’s much harder to believe that you’re a pirate when you’re conveying all of your piratical things by using tiny buttons and some sticks than it is to believe that you’re simply watching pirates do their day to day business through an omniscient window that works kind of like our dreams do.
Not terribly long ago, games were pretty incapable of presenting a convincing narrative at all and were instead more interested in simply providing compelling things to look at (this paradigm lasted until about midway through the original playstation era, plus lots of later games doing it for retro reasons), and to a certain degree the present backlash against critical analysis is a nostalgic yearning for the times of yesteryear (as is most of rightism, really) when gaming magazines would simply check a few boxes and rate games based almost entirely on technical competence + audiovisual appeal. It’s this particular style of critical analysis that led to the current Metacritic paradigm and it’s this style that quite a lot of smaller outlets are explicitly writing against.

Sunday, June 14, 2015

End-of-History Illusion

Sometime last year I had a conversation with a person I admire very much about leftism, since she’s one of the more involved people in the IWW, and I mentioned that one of the biggest things that kept me out of left activism is the tense and really toxically personal infighting. Flash forward maybe a few months and I got heavily involved with a left activist group founded out of a group of really toxic infighters.


Let’s switch gears a bit and talk about ethics for a bit. My major in college (and frankly my ongoing passion) was anthropology, which is the broad study of humans as a whole. The modern discipline is divided into Linguistics, Archaeology, Physical Anthropology (sometimes referred to as Biological Anthropology), and Cultural/Social Anthropology. Each part of the discipline has its own questions and concerns but as a field that explicitly deals with people in all possible forms across the entire planet and throughout history, there’s an overt need for examining the ethical procedures by which study is done. Anth as a discipline has a history of ethically nebulous figures performing spurious research and is likewise fraught with a century of attempts to counteract these individuals through codes and creeds and coercion. The feuds are as epic and legendary as any across other disciplines, and there’s no sign of a real conclusion so long as the AAA refuses to maintain blacklist powers.

Point is, I sat in a lot of ethics courses where students were kinda uncomfortable with making any strong statements either way and the professor was no dang help. Being involved in general leftism is kind of like that, really. That, or you get the other reaction, where every ethical violation, no matter how convoluted, demands to be respected at once. Then there’s the issue where people decide that their ethical commitments stop at their specific identities, and the concern that ethical disagreements should be swept under the rug in the name of preserving the community, which ultimately means less that a community is being preserved and more that the cracks are being waxed over and forgotten just long enough for the whole thing to blow up later.

And it’s all entirely an exercise in futility, since The Discourse itself doesn’t really help anyone; it just entrenches whatever ideological point of view can outlast the others as an epistemological fact. One of the other takeaways from anthropological theory courses was that, consistently across a century and a half of theories of cultural formation and perpetuation, there’s rarely any suggestion that individuals might have agency in the creation or formation of culture. Instead, myriad theories assume that culture is essentially too large to ever really be in the control of a single person or a single group. A metaphor might be: the French nation created Napoleon, rather than Napoleon creating France.

So ultimately leftism and leftist movements might themselves be ridiculously inept and it doesn’t matter, since whether leftism succeeds or racism or sexism or homophobia ends is out of any individual group’s hands. Economic forces are probably going to drive us toward something that looks very leftward simply due to technological development, the same way capitalism successfully globalized thanks to the Long Peace created by nuclear weapons and communications technology.

But of course that still leaves us with in-fighty leftist movements. There’s definitely a put-up-or-shut-up element to ongoing involvement, a sort of “hey if you’re really committed you’re gonna be here” kind of morality both for the groups themselves and for the sort of turgid call-out exercises popular among a certain crowd, where anything that feels more like you’re doing something is preferable to feeling like you’re not doing something.

The thing is, struggle sessions are easy. Arguments are easy! Holy shit is rationalization easy. Literally if you don’t want anything to be your fault and you have even a small understanding of what makes people tick it’s incredibly simple to build dozens of justifications for anything you do. Despite my couching anthropological ethical violations as largely historical in nature, this is only the case because present ethics are exactly the sort of wobbly, finicky issues that can be propped up with twigs, leading to pointless repartees between two sides that are both plausibly correct. Ethical violations continue anon, depending on who you’re reading.  

What’s hard is creating strong, lasting communities of people who’re mutually invested in each other’s wellbeing. It’s tremendously difficult, even as it’s a patently obvious necessity for any kind of radical organization. There are several reasons this is so difficult, and they largely come down to how consumerist individualism has thoroughly entrenched a primacy of the self in the modern West, combined with an understanding of the internet as a customized content delivery device first, communications platform third. But at their core, leftist groups exist as organizational vectors for a particular political bent. If you’re not a leftist, you’re not in a leftist group. What this means is that your entire time and involvement in that group hinges on your political beliefs, which in turn leads to constant reexaminations and redefinitions of what those beliefs are.

An effective counter tactic should at this point become clear: take the politics out of the leftist spaces. Create groups that have reasons to exist beyond leftism itself. Create a set of rules that explicitly bend toward a leftist angle and suppress rightist talk within the group as much as you like, but decenter the politics and you decenter the infighting. Do this and you stand that much better of a chance of creating a space where leftism ceases to be a trial of purity and begins to be understood as simply the way things should be, an unspoken expectation that reaches beyond the rational, argumentative political thought-process and into the centers of the brain that drive cultural creation and interpretation. Do this and create a new culture all our own. 


Wednesday, October 8, 2014

On Reading Art

There is a vast and inescapable gulf between creators and the folks who enjoy stuff that’s been created. This gulf is inevitable. Art is created to convey messages, from top to bottom. Even art that’s created for the sole purpose of “this thing is aesthetically appealing” is still conveying a message about what “aesthetically appealing” is. Plot that’s merely created as a device to move a storyline is still conveying a message. In our post-post-modern times (this ironicist age) we’re inclined to reject the notion that the author has a message in the first place, that the work’s sole value is in the interpretations derived by the fandom of an object. This attitude is especially appealing in an environment driven by individualistic consumer capitalism, where works of art are ultimately only important inasmuch as they provide some compelling experience for us as individuals and can be later appropriated to build a self-image. If the author is irrelevant and our individual experiences paramount, anything can be perceived as supporting any kind of self-conceptualization we can come up with. The actual narrative or message of a story is ultimately irrelevant compared to our individual experience with a story. This is why we can tell bald morality fables even to folks diametrically opposed to the self-limiting concept of morals. The story doesn’t matter, just that it helped momentarily raise your dopamine levels and distracted you from your own mortality.

This is why Final Fantasy VII’s clear naturist spiritualism can be utterly, 100% ignored in favor of, you know, “Aerith dies! Look how evil/badass Sephiroth is. Yuffie is mai waifu” and so on. The story is a fairly standard paean to anti-pollution sentiment or general Gaianism, but it’s literally the last thing you hear about Final Fantasy VII, and the folks way into the game aren’t forming anti-pollution initiatives or standing outside at climate change rallies. The message (respect your planet because it’s the source of all life) doesn’t really matter to the people most invested in the actual work. And it’s endemic to every kind of story. Folks who’re big on Neon Genesis Evangelion don’t come away with the idea that all people are more or less one and the same, that difference is an illusion created by an Absolute Terror field or psychological damage. The list goes on.

It’s possible to read a narrative and accept and internalize its message over its presentation. It’s not common, but it does happen. My contention is just that perhaps in this modern ironicist age, as we’ve broadly accepted the conclusions of post-modernism (that all meaning is constructed) and taken them to their illogical conclusion (meaning is false and sentimentality is lying), we’re less and less equipped with the ability to read a narrative for the message it’s attempting to convey, and consequently modern artists are less and less interested in conveying a message. The very act of working a message into art is inauthentic; the message is assailable, the expression artificial and dishonest and untrue. The art itself, of course, is unoriginal; everything that can be made has been made already. In a culture where the authentic expression of oneself is a moral imperative, inauthenticity and unoriginality are anathema. This creates a tension within the arts community, whereby artists have to confront the conflict between awareness of a lack of originality/authenticity/honesty in their work and the overarching need to be original/authentic/honest.

Different artists solve this different ways, but I’m more concerned about the legions of folks left in the gutters, creatively paralyzed as a result of failing to meet an unrealistic internalized standard of expression created by the proliferation of mass culture. You may notice parallels between what I just wrote and the creation and sustention of beauty standards that leave millions of folks bodily and personally insecure (not to mention gender standards, wealth standards, ethnicity standards, all kinds of normativity). This is intentional, as all of these processes are the tandem result of mass media. Normativity in artistic presentation/consumption is just as ruinous as any other normativity.

What I’m here to tell you today: don’t be afraid to make any art. It’s easy enough to write out, but much harder to internalize. Don’t be afraid to write whatever ridiculous dreck you want. Don’t let your internal editor endlessly compare your work to anyone else’s. Don’t be afraid to read narratives for what they say. Don’t be afraid to embrace a philosophy or a politic or a position. We’re all going to be dead sooner than later and no one will remember us accurately so there’s really no reason to worry about it. It’s out of your hands.

Monday, September 29, 2014

On humans and hierarchies

There’s been this sort of interesting idea kicking around in my head: essentially, the only judgment humans can make without needing a cultural reference to back it up is whether a thing is good or bad. Indeed, whether a thing is good or bad is often what many descriptions boil down to. A critical review may expound on the myriad factors involved in a work, but ultimately these factors fall upon a dividing line of good or bad.

Once we get to comparative judgment our tools only become slightly more complex: we place objects as greater or lesser than their peers. We might organize one object as more in one aspect and less in another, but the result is still the same. This cup is larger than that cup. This cup is more orange than that cup.

It is through this means that humans create hierarchies, or structures of existence. It’s a habit that is as close to universal as behaviors come, and can be seen across time and place and subject, whether it’s a manga placing its characters on a number line of relative strength, or Catholicism determining the importance of angels by their distance from god, or a bored student arranging her writing utensils in order from shortest to longest.

Why is this so important to humans? Are we destined to be the universe’s organizers, to find and categorize all living or nonliving things? The irony is palpable, as all things are merely extensions of one continuous object.

Tuesday, July 22, 2014

Skylanders: Swap Force

Three years after its initial iteration, Skylanders shows no sign of slowing down, cannibalizing entire store aisles with cartoon bits of plastic and innovation. The first real adaptation of NFC technology in video games has been a wild success, nailing a vulnerable target market (children) with consumer-capitalist dream toys: little devices that a video game requires in order to function. A game that takes all of the best elements of grindy lootfests aimed at older players and combines them with a compulsive and coherent marketing structure doesn’t merely suggest purchasing as many toys as you can; it demands that you acquire them or face a drop in euphoric hormones.
There are 80 Skylanders characters now, ten in each of eight basic elements. Swap Force, the latest iteration, adds an additional eight movement types spread across its unique mix-and-match figures, while Giants, the previous iteration, had eight larger-than-normal figures required for play. In the optimal configuration, players only need about eight figures (one of each element, or in the case of Swap Force, one of each element and one of each movement type. These two requirements do coincide, a small mercy) in order to unlock every area in each game and collect all the secrets. This optimization is obscured, however, by both the game’s target market (5-12 year old boys) and features in the game itself. Throughout the game world you’ll find “soul gems” that unlock new powers and feature a promotional video for Skylanders you don’t have. They’re toy commercials dolled up as super-secret rewards. On top of that there’s an extensive collection screen that encourages you to seek out and complete the full collection of little dudes, with little fluff details and links to the in-game advertisements. Even the mechanics of the game encourage you to collect more. Beyond the gates that bar entry to all but specific kinds of Skylanders, the number of available lives you have for a given level is hard-limited by the number of Skylanders you own. The more Skylanders, the more lives you have to play with.
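That coverage arithmetic (one figure per element, doubling as one per movement type) is really a tiny set-cover problem. Here’s a minimal sketch of the idea; the tag names and the `min_figures` helper are my own hypothetical illustration, not anything from the actual game:

```python
# Hypothetical element and movement tags, eight of each, as in Swap Force.
ELEMENTS = {"air", "earth", "fire", "life", "magic", "tech", "undead", "water"}
MOVEMENTS = {"bounce", "climb", "dig", "rocket", "sneak", "speed", "spin", "teleport"}

def min_figures(roster):
    """Greedy set cover: repeatedly pick the figure that covers the most
    still-needed tags (elements + movement types), until all gates open."""
    needed = ELEMENTS | MOVEMENTS
    chosen = []
    pool = list(roster)
    while needed and pool:
        best = max(pool, key=lambda fig: len(set(fig) & needed))
        if not set(best) & needed:
            break  # remaining figures add nothing new
        chosen.append(best)
        needed -= set(best)
        pool.remove(best)
    return chosen

# When each element is paired with a distinct movement type, every figure
# satisfies two gate requirements at once, so eight figures cover everything.
aligned = list(zip(sorted(ELEMENTS), sorted(MOVEMENTS)))
print(len(min_figures(aligned)))  # prints 8
```

The point the greedy sketch makes is the one above: because the element and movement requirements coincide on the same figures, eight careful purchases unlock everything, and it’s the game’s presentation, not its mechanics, that hides this from kids.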
It’s brilliant, from top to bottom, and the game would be so easy to condemn if not for the fact that it’s well made and well designed. Attacks have an appropriate amount of friction, enemies are smartly varied, the level design is engaging. All told this game plays as a thoughtful Diablo variant for children.
But that’s just a physical description of the game. If you’re wondering whether it’s worth picking up, wonder no more: a bunch of outlets have given Swap Force perfect scores, and the game is indubitably fun. What’s more interesting are the questions that the game itself and its runaway success bring up. Why do we sell these things to children? What is it about kids that makes marketing a consumerist wet dream to them so much more lucrative than selling to adults? A cynic might suggest that adults are too jaded for this kind of thing to work on them, that kids with their inherently more trusting nature are more likely to fall for bald marketing pushes such as these. I don’t think that’s a sufficient answer, as I’ve watched plenty of adults buy and collect plenty of stupid things in my life. I think it has more to do with what we consider childish in America. Collecting things just for the sake of collecting things has simply never been in the stable of sane activities for mature adults. Instead we describe adult collecting as a somewhat strange and shameful hobby, to be kept secret and gently mocked when it sees the light of day. At the extreme we consider it a form of hoarding and we put these folks on trial on television, a warning to the rest of us to become anxious about our personal lives. This attitude is slowly changing, though. We’re learning to understand and appreciate the collection impulse through things like mobile games, which feature more and more “get this thing to complete your virtual collection” hooks. Maybe in the future there’ll be a more adult-oriented form of Skylanders, with sexy women and hooded, goateed bald dudes. Or ideally games will have gotten over that impulse too and truly become something transcendent and imaginative. A game with an NFC pass-along mechanic, say, where you send one object from person to person to accrue social power, each person leaving a small stamp on the figure in game terms.
Or a game that works in conjunction with a 3D printer: rather than putting a physical object into the game, it uses the game to produce a physical object. Lots of interesting places for this tech to go.
One thing would be missing in a more adult-oriented version of Skylanders: sheer whimsy. The game is silly as all get out, from the ultra-serious announcement of silly enemy characters (“Grumblebum Blunderbuss”) to the goofy hat options to the characters quipping lines throughout play. It’s cutesy and mostly charming, at least until the cutscenes. The plot of Swap Force is utterly ridiculous, and ridiculous in the worst “talking down to children” sort of way, featuring “evilizer” devices powered by solidified evil and cartoony, unbelievable villains. The only saving grace is Patrick Warburton doing his Kronk voice as a self-important airship pilot. The game is aggressively kid-oriented, even to the point of rendering its powerups as a variety of foods kids would find appealing: hot dogs, hamburgers, even a Kid Cuisine TV dinner. Marketing for the game is tailored to the inevitable adult purchasing the $75(!) starter set, extolling its value and virtues as unequivocally providing a fun experience for children along with some sort of nebulous real-world benefit.
It’s gross, really. Reading marketers selling kid stuff always gives me the heebie-jeebies. These children aren’t old enough to work or drive or technically sign the 63-page EULA(!) that innocuously appears under a button push on the title screen (the EULA, of course, states that by playing the game and not returning it to the store immediately, you agree to these terms; contract lawyers are the devil), yet here we are, marketers playing on unexamined personal wants to inspire kids to pester their parents into buying the stuff. I’m never sympathetic to the argument that parents should just be the dividing line between advertisers and their children, because these advertisers are well aware that they’re creating conflict within a family, palpable interpersonal drama that can be resolved (if only for the moment) by purchasing a thing. It’s bald emotional manipulation and it’s gross. It’s such a dishonest way to make money.