An old (and far more talented) friend of mine responded to the discussion Rob and I had about frame rate:
The choice to go from 24 fps to 48 fps was made because some filmmakers really hated the strobing effect when the camera pans in 3-D versions of movies. Their solution was to up the frame rate—giving the filmmaker more information to play around with. Honestly, the 24 fps strobing never bothered me, because if you are telling your story right, little nitpicks like that don’t enter the mind of your audience.
For reasons unclear even to me, I responded to his gentle correction with A Brief and Inadequate History of Special Effects:
I didn’t want to get too technical in the podcast, but I was hinting at this: 3-D created a problem that didn’t previously exist, and the solution is worse than the original problem. No more strobing, but now the effects are so obviously “special” that we may as well be watching the original Clash of the Titans. An incredible film, don’t get me wrong; it just required a superhuman suspension of disbelief. Which at the time was fine, because “special effects” like George Reeves flashing across the sky were meant to be “special,” outside of the ordinary, and didn’t need to look as if they were of this world or obeyed its laws of physics.
I tend to think George Lucas ruined this fantastical acceptance of the specialness of “special” effects when he married recognizably modernist styles with space stations and star ships—the Millennium Falcon could’ve been a Le Corbusier, the stormtroopers come from the mind of an Italian fascist, and half the scenery consisted of the same brutalist style that litters my campus. Point being, his realist aesthetic made “special” effects look quaint, the people who loved them rubes, and that’s where we’ve been ever since. Realism or naught! Realism or naught! (With a few exceptions, Del Toro notably among them.)
So I could understand why Jackson wanted The Hobbit to accede to the demands of the regnant style, but in doing so he utterly ruined his film. I mentioned in the podcast that the best scene in the film, Bilbo’s encounter with Gollum, looked like exactly what it was: Martin Freeman in front of a green screen talking to a man in a ping-pong-ball-covered suit. (I know that’s not how they do it anymore, but you know what I mean.) It looked like Jackson had decided to avoid the uncanny valley by introducing its monstrous child to an actual human being and hoping the audience wouldn’t be able to tell the difference. I’m not going to say it made me want to cry, but I’m not going to deny I teared up a bit at the sheer waste of it all.
Like you, I’m more interested in the story, so if the technological advances can be integrated into it—like the conference tables in Avatar—I’m fine with that because it complements the narrative. But I don’t even think we need 3-D. It took us millions of years to develop the particular sort of stereoscopic vision we have, and our brains react to an “occupied periphery” the same way now as they did before: by flooding our bodies with hormones that make us nervous, tense, excited, afraid, etc. Since our eyes still point forward, you don’t need anything more fancy than an IMAX to occupy our peripheries, and I’m fine with that.
I thought I was talking about special effects and their more cloyingly “special” forebears, but the real sore spot for me here is the blind lionization of a limited definition of “realism.” Don’t misunderstand me: I find relocating fantastic narratives to a world that resembles ours an admirable endeavor. Heath Ledger’s interpretation of the “Joker” outstrips Jack Nicholson’s because we don’t need a vat of quasi-mystical chemical slurry to believe that a child of neglect and poverty might come to resent those he believes kicked him down to choke him out. I’m all for grounding narratives that occur in fictional worlds in ones that mostly obey the rules of ours. I’m on board with Battlestar Galactica and (though I’ll never admit it) I even watch Arrow. But the “reality” of “realism” has to amount to more than a little extra grease smeared on the walls of some backlot “Brooklyn.” Because when “competitive realism” becomes a sport, the audience always loses. Embracing filth for love of the slop as an ethos would be one thing, but embracing it as an aesthetic out of devotion to an empty notion of what constitutes “realism” is more than just a thing:
Consider the most common way to impart unrehearsed immediacy to a scene: the shaky cam, proud descendant of the cameras carried by war reporters who (we imagine) ran alongside the men whose deaths they documented. Because deep in the ancestral soul of every shaky cam is a connection to the atavism whose jittering eye (we imagine) once captured soldiers piling up in Norman shallows. Because the essence of the physical circumstances of war correspondents (we imagine) is transferred not just into the tool they used, the shaky cam, but into scenes whose style bears a family resemblance to ones shot with them. I added “we imagine” to the previous sentences because many people believe in what amounts to a form of idolatry when it comes to the shaky cam shot: God the War Correspondent infuses His essence into the totem of His Shaky Cam in such a way that all evidence of shakiness in film represents an invocation of His Brave Reportage.
Which is the height of insanity considering the ubiquity of shaky cams and shots designed to resemble them. Otherwise we must believe that the Great War Correspondent is present when a teenage girl on a soap opera throws a temper tantrum and slams her bedroom door behind her. Because shaky cams capture plenty of those. Not to mention celebrities. His Journalistic Eminence must love celebrities. Just turn on the television between 5 p.m. and 6 p.m. and witness the celebrity-chasing that passes for “local news” now. Those cams are all a-shaking and there’s not a single noble soldier dying lonely on a foreign shore in sight.
We’ve established that the shaky cam’s war-oriented history is partly responsible for why it’s considered “better” at creating “realistic” representations of the world. We’ve now also taken our first step toward understanding why “realism” is associated with misery: its tools are. Think about it: a shaky cam can almost perfectly approximate the swiveling eye-level perspective of a human head, so if its operator breaks into a run the resulting images are nearly identical to what you would have seen were you the one doing the running. That makes sense. But before we continue I want you to take a look at this picture:
It’s from science. Which one isn’t important because, for now, I just want to compare the movement of the human head relative to the body while walking and running. Humans have evolved to walk chill. Just look at our head bob as we strut around in our hilariously skinny jeans. But look what happens when we hear a car backfire in one of “those” neighborhoods: our stride lengthens and center of gravity lowers, meaning we’re more stable than we were before we nearly shat our tiny pants. Now take a look at our running-head. What happened to our swagger? I’ll be brief: because a bipedal gait is inherently unstable, our head has a tendency to pitch forward when we run; and because evolution looks unkindly upon individuals who flee danger by planting their face where their feet ought to be, we’ve evolved a robust musculoskeletal system that keeps our head up and eyes front when we pick up the pace.*
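If it helps to see that contrast in toy numbers, here is a minimal sketch in Python. The step rates and bob amplitudes are illustrative guesses of mine, not Bramble and Lieberman’s measurements; the point is only the shape of the comparison.

```python
# A toy model of vertical eye-level movement, walking vs. running.
# The step rates and amplitudes below are illustrative guesses, not
# published gait data: walking is modeled with a bigger bob, running
# with a smaller bob at a faster cadence.
import math

def head_bob(step_hz, bob_amplitude_cm, seconds=2.0, samples_per_sec=24):
    """Vertical head displacement (cm), modeled as a simple sinusoid."""
    n = int(seconds * samples_per_sec)
    return [bob_amplitude_cm * math.sin(2 * math.pi * step_hz * i / samples_per_sec)
            for i in range(n)]

walking = head_bob(step_hz=2.0, bob_amplitude_cm=4.0)  # relaxed, unflexed gait
running = head_bob(step_hz=3.0, bob_amplitude_cm=1.5)  # stabilized gait

print(f"walking: peak-to-peak eye-level change {max(walking) - min(walking):.1f} cm")
print(f"running: peak-to-peak eye-level change {max(running) - min(running):.1f} cm")
```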
So our head bobs up and down atop an unflexed musculoskeletal apparatus when we walk, but when we break into a run that same apparatus yanks our head back and stabilizes our body in a way that prevents our head from bobbing. Meaning that when we run our eye-level rarely varies, whereas when we walk it’s constantly moving, which is the opposite of the “realistic” effect provided by the shaky cam. Consider this randomly selected clip:
While the frightened children walk, the handheld camera is canted, but the level of framing barely bobs at all. We have no bobbing where a “realism” conforming to human biomechanics would require it. But when the frightened children start running at 1:33, the level of framing bobs up a foot and down another with every step. Meaning we have excessive bobbing where a “realism” conforming to human biomechanics would demand none. Shots from a shaky cam are only more “realistic” if we define “realism” as “an aesthetic commitment to seeing something and representing its opposite.” We’re not about to do that. So where do we stand now?
We know that shots from shaky cams look more “realistic” by means of an accident of journalistic history and by virtue of the fact that they represent the world not as it is but as its opposite. Keep in mind that this is the shot whose realist credentials most would consider unimpeachable, and you begin to see what an aesthetic commitment to realism entails: the perpetual recreation of the contingent circumstances in which the shaky cam shot became popular, and secondary elaborations on those contingent circumstances that borrow the “realist” credentials of the original while doing the opposite of what happens in reality. Considering that this is the strongest case a committed realist could make, you can see the kinds of problems that might arise were a spirit of “competitive realism” to sweep through a generation of filmmakers. How can they be more “realistic” than journalism’s happy accident and the opposite of evolutionary development? What’s more “realistic” than the opposite of perceived reality?
Because not even they can answer questions that make no sense, they’ll change the terms of the debate to features common to the happy accident: the shaky cam will be used in scenes in which battles rage and men are confused. Then they’ll extend the purview of “battles” to include arguments and cast a net wide over all manner of confusions. Now a man will “battle” with his balance and his wife until he runs from an apartment confused because he’s sober and single, and in order to imbue the scene with the “realism” it requires, it’ll have to be filmed by a trampolining meth addict. My example’s admittedly extreme, but you see my point: if historical accidents and perceptual inaccuracies become the standard for “realism,” a competition can only result in increasingly random exaggerations. (That they’re mistaken for transparent representations of reality only makes the situation more infuriating.)
All of which is only to say that there’s nothing realistic about cinematic “realism,” but there is something more realistic about films that aspire to it than, say, animated features or movies starring Muppets. I’ll grant you that. But once you start talking about crafting something that’s “more realistic” than previous films, you run into a whole host of problems. In literature, when a group of young writers tried to be more-real-than-the-realists, the result was literary naturalism, and the aesthetic of the current crop of directors seems aligned with them—its cities are sullied, its fields despoiled, its masses uneasy—but the aesthetic of the literary naturalists was built on science. Bad science, I admit, but science nonetheless. Literary naturalists had a reason to believe the boot on their neck would be crusted in shit if not covered in blood: their science told them so. Not so for the current realists, for whom the legacy of perceptually incorrect happy accidents suffices. Theirs is an empty aesthetic turned pissing-contest and we’re the ones getting the golden shower.
What does this have to do with Peter Jackson filming The Hobbit in 48 frames per second? If, as my friend says, Jackson shot at that frame rate because he wanted to avoid a nearly unnoticeable strobing, he presumably did so because he felt that it would break the illusion on which cinema depends. If the audience notices the limitations of the camera, it becomes aware that there’s a camera between it and the world depicted on-screen. Since we don’t see strobe effects of the sort in the real world, we shouldn’t see them on the screen, because they’re unrealistic. His solution was to eliminate the strobing by making sure that illusion was never created in the first place. Because you can’t be ripped from a world you’re not immersed in.
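For anyone who wants the strobing complaint in numbers, here is a back-of-the-envelope sketch. The pan speed and screen width are made-up illustrative values; the only point is that doubling the frame rate halves how far the image jumps between successive frames of a pan.

```python
# Back-of-the-envelope judder arithmetic: how far the image jumps
# between frames during a camera pan, at 24 fps versus 48 fps.
# Both constants below are made-up illustrative values.
SCREEN_WIDTH_DEG = 50.0   # horizontal field of view occupied by the screen
PAN_SPEED_DEG_S = 25.0    # pan speed, in degrees per second

for fps in (24, 48):
    jump_deg = PAN_SPEED_DEG_S / fps
    jump_pct = 100 * jump_deg / SCREEN_WIDTH_DEG
    print(f"{fps} fps: the image jumps {jump_deg:.2f} degrees per frame "
          f"(~{jump_pct:.1f}% of screen width)")
```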
Of course I don’t actually believe that’s what Jackson intended, but I do believe he thought it was his turn to up the competitive realist ante. The long explanation above is meant, in part, to communicate why Jackson would agree to participate in this competition. The conventions are so naturalized it’s almost impossible not to think of them as realistic, so it takes some heavy-duty defamiliarization to recognize them for what they are. Put differently, I think Del Toro would’ve made a Hobbit in a style that recognized that the “special” in “special effects” isn’t something to run away from, because such effects are only slightly less contrived than their “realist” counterparts and can be far more effective when telling a story, say, about wizards and magic rings.
*For more on that, see Dennis Bramble and Daniel Lieberman, whose hipster I borrowed above, and much of whose work can be downloaded free of charge from Lieberman’s site.
This is one of those posts where I can't tell if I made my point too strongly or not strongly enough. It happens sometimes, in the hinterlands between "appropriate blogging material" and "academic essay." Gah.
Posted by: SEK | Monday, 28 January 2013 at 05:52 PM
Interesting post--this gets at much that has bothered me about "competitive realism," as you put it. I wonder, though, if Cloverfield is the best example to use here. If memory serves, there's an actual diegetic camera involved in that film, which would account for the shakiness, as someone is actually trying to film while also hauling ass.
Posted by: KJE | Wednesday, 30 January 2013 at 11:47 AM
You're right. I chose it because it's the only film that's had showings cancelled on account of the shakiness, but it's diegetic shakiness. Like I said, this is halfway between a tossed-off blog post and an academic article, and if it turns into the latter, I'm definitely going with a different example.
Posted by: SEK | Wednesday, 30 January 2013 at 12:30 PM
Dagnabbit... This post really got me thinking, and I wrote a pretty in-depth response trying to tease out the genealogy of shaky cam and linking the problem of competitive realism back to Oscar Wilde.
And it looks like it disappeared into the internet aether. Or is it just not up yet? I was going to show this whole thing to my wife, but if I need to rewrite it, I'll get cracking.
Posted by: mxyzptlk | Thursday, 31 January 2013 at 02:23 PM
I didn't comment because I was waiting to see if somebody else would step in with something smart, but a couple disjointed thoughts:
1) I think there's a connection to be drawn between this argument and your previous post about Jackson playing with genre conventions. There are incentives in Hollywood which reward both experimentation _and_ (in other shows) the ability to reliably match audience expectations and genre conventions (e.g., the reliability of "Hill Street Blues" from that post).
I'm not sure which category I'd place 48fps effects into, but it seems like part of what you're talking about is the tension between those two compass points -- the desire to be able to experiment without giving anything up ("realism" in this case), which is always going to be fraught.
2) David Bordwell's post on the Paranormal Activity series was wonderful -- even though I have no interest in the movies themselves -- and it has interesting things to say about camera conventions.
3) I'm not sure what you mean when you say, "I tend to think George Lucas ruined this fantastical acceptance of the specialness of “special” effects when he married recognizably modernist styles with space stations and star ships." Looking backwards, I think one of the stylistic aspects of Star Wars (and Star Trek) which makes people nostalgic is the ships which are obviously plastic models. They are emotionally communicative.
4) It just seemed a little harsh when you said, "It looked like Jackson had decided to avoid the uncanny valley by introducing its monstrous child to an actual human being and hoping the audience wouldn’t be able to tell the difference." I haven't seen the scene in question, but you seem to be arguing that not only was the execution flawed, his very conception of how to film the scene was mistaken. Without any basis for this belief, I'd want to keep open the space of, "the impulse could be used productively in some cases, but it failed in this one."
Posted by: NickS | Friday, 01 February 2013 at 11:49 AM
I'm still bouncing off (or around) the post a bit, but I'm taking your comments as license to do so in writing, rather than just in my own head (in other words, preemptive apologies for rambling).
If the audience notices the limitations of the camera, it becomes aware that there’s a camera between it and the world depicted on-screen.
The more I think about this post, the more I keep coming up against two questions which are somewhat outside of its scope:
1) What are the circumstances in which cinema asks us, as the audience, to forget the presence of the camera (or computer, or other intermediary), and what are the circumstances in which the audience is invited to consider that camera?
2) What exactly defines a special effect?
I realize that you are talking specifically about "realism" as a cinematic style in the sentence quoted above, but it still seems to me that there are lots of circumstances (including shaky-cam shots) where the movie foregrounds its own nature as cinema.
As one example, as soon as you started talking about 3-D, I immediately thought about Avatar. But for all the people who praised Avatar as convincingly immersive, I would argue that it highlights its own technical virtuosity. In that case "realism" is clearly an (artificial) technical achievement, not an unmediated experience.
As another example, consider the movie Primer. It too highlights a certain sort of technical virtuosity -- not in its special effects, but in its ability to create a convincing narrative on a micro-budget. The audience is at least potentially conscious of the ways in which the requirements of the process affect the finished product and that awareness is a virtue not a flaw.
Secondly, what is a special effect? I remember my brother talking at one point about Seijun Suzuki, and marveling at his ability to produce impressive results with extremely low-tech effects. In White Tiger Tattoo (the only film I've seen), my brother talked about one shot, near the climax of the movie, when the protagonist is fighting a number of thugs and there's a sudden cut to a shot in which the action is captured from below the floor, looking up.
In a major fight scene, it directs our attention to his footwork -- making the fight more abstract and less visceral, but it works, somehow, fitting into the emotional flow of the movie rather than distracting.
Is that a special effect?
I have a bunch of other examples I've been thinking about -- When Fred Astaire films an extended dance sequence as a single take is that "realism" or is that highlighting craft for the audience? What does it say about Star Wars that it has a variety of characters who are obviously guys in costumes but that the shot of the Death Star exploding* is clearly meant to raise the bar for special effects, not only compared to the other effects in the movie but to the state of the art?
But I should probably stop there.
* One last footnote, appropriate to this blog: I was thinking about the Death Star because there's a Spider-Man comic in which the super-villain ("The Blaze", IIRC) is created by a couple of film school effects students who just wanted to see if they could pull it off, and one of them references the Death Star as an iconic special effect.
Posted by: NickS | Friday, 01 February 2013 at 03:47 PM
Me again, still babbling but hopefully getting closer to the point of the post.
What would you say is the competition in "competitive realism"?
If it's a competition to follow most slavishly the conventions of "realism," or to apply the conventions of realism to the widest range of possible stories, then it is, by definition, a foolish project of attacking every problem with the same hammer -- the post provides examples of how that goes awry, but the premise contains its own flaws.
If the competition is to either (a) heighten the elements which the typical viewer will respond to as "realism" or (b) develop special effects able to show the widest possible range of imaginary images in a camera style that matches "realism," then it's easy to see how that competition would often lead to bad results, but also why people could be motivated to compete on those grounds.
(a) almost guarantees that the resulting film will date badly -- as conventions evolve and shift, an exaggerated version of those conventions will be even more out of place.
(b) is a natural part of the development of special effects -- there will be times when the impulse of "can we pull off this effect" will get ahead of, "how can this effect serve the story?"
But do any of those descriptions sound to you like what you were writing about in the post?
Posted by: NickS | Saturday, 02 February 2013 at 01:02 PM
I've been paying too much attention to my field of vision when running over the last few days, and I think you made a factual error here. It's a big mistake to use biomechanics as a proxy for the felt experience of running/walking when it's an experience that isn't hard to check for oneself.
Now it might be just me (and if so disregard all the following), but my field of vision looked more jittery to me when I was running than walking. In both cases there's a distinct shake at every footfall; I'm willing to believe it's of smaller amplitude when running but the difference was barely perceptible to me. At first I thought the shake was more violent when running because you hit the ground harder but that difference was barely perceptible too. But footfalls are a lot more frequent when running, hence more shakes per second, hence more subjective jitters.
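Putting rough, made-up numbers on that intuition (a sanity check, not data):

```python
# Sanity check for "more shakes per second": even if each footfall
# shake is smaller when running, the faster cadence can add up to
# more shaking per second. All values are made-up for illustration.
walk_steps_per_s, walk_shake_cm = 2.0, 1.0
run_steps_per_s, run_shake_cm = 3.0, 0.8  # smaller shake, more often

print(f"walking: {walk_steps_per_s:.0f} shakes/s, "
      f"{walk_steps_per_s * walk_shake_cm:.1f} cm of shake per second")
print(f"running: {run_steps_per_s:.0f} shakes/s, "
      f"{run_steps_per_s * run_shake_cm:.1f} cm of shake per second")
```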
I don't think that detracts too much from your overall analysis, because those shakes are extremely regular and unlike shakycam, but there it is. Don't base an argument on Science! if the scientific fact in question isn't actually that relevant. Note also that the shakycam in found-footage movies like Cloverfield (which I haven't watched, so correct me if I'm wrong) isn't meant to simulate the shaking of the head, but the shaking of an amateur camera in one's hands. This makes shakycam while running or in emotionally tense situations much more reasonable, not that I have much experience with filming things.
It also occurred to me that in emotionally stressful situations shakycam might not simulate the shaking of the head so much as quick movements of the eyes, but next time I'm in an emotionally stressful situation I'll probably be too emotionally stressed to check. (Of course, even if it were the case that our eyes dart around more when stressed, it wouldn't make shakycam in this situation realistic, because our brain tracks those movements in a way it can't when it's the camera moving, which is why shakycam makes people want to throw up.)
Posted by: Caravelle | Friday, 01 March 2013 at 03:38 AM
I buy the argument about shakycam being a manifestation of competitive realism but I'm not so sure about 48fps. What's so great about 24fps anyways?
Posted by: Pseudonym | Monday, 08 April 2013 at 10:01 PM