With the critical and detached perspective we can now afford on the
video game production of the last ten or fifteen years, one of the dominant phenomena
of its evolution readily appears to us: the increasingly significant recourse
to the cinematographic heritage.
“This is a video game,” we often read about
older games like Super Metroid or the
original X-COM, or even about more
recent examples like Deus Ex or Dark
Souls. Few (or no) cinematics!
Expressive and/or emergent gameplay! This is what video games are about! This
is their distinctive and unique quality: interactivity, choice, player agency,
or something to that effect. The story must be told through gameplay, not
through cutscenes! But since cinematic action
games are the most prominent form of video games right now on the mainstream
scene, and in the public eye, at least when it comes to home consoles, does that
mean we have to forgo the autonomy of our art form? Are video games, or
what remains of them, still able to survive today without the crutch of cinema?
Are they about to become a subordinate art form, depending on another, more
traditional one?
This problem, it must be said, is nothing new: I’m writing here about how diverse art forms influence one another. If video games were two or three thousand years old, we would probably see more clearly that their evolution doesn’t escape the laws that also govern their artful brethren. But they’re only about 42 years old (counting from 1971, the first commercially sold video game), so our historical perspectives are prodigiously squeezed.
Adaptation, imitation, borrowing: of these accusations, surely all art
forms are guilty. But what may fool us about video games, just like cinema
before them, is that these phenomena did not appear at the beginning of their
artistic evolution. Indeed: the autonomy of video games’ expressive means and
the novelty of their subject were never as potent as in the first twenty years of
their life. Sure, a new-born art may try to imitate its elders, and then work
from there until it can slowly define its own laws and themes; after all, this is how we learn: imitating our parents until we’re
mature enough to develop our own personality. It seems less clear, though, why an infant art form should continue to
incorporate more and more alien aesthetics as time goes by, as if its own
capacity for invention, for specific creation, were inversely proportional to
its expressive force. Such a paradoxical evolution, from “purer” video games like Pac-Man or Pong to a heavily
cinematic one like The Last of Us, could
easily be described as decadence, a decline.
To say that, though, is to forget that video games were not developed in the sociological conditions that once defined traditional art forms. Not so long ago, until the ’70s maybe, the history of art was moving towards the autonomy and the specificity of each art form. This concept of “pure” art (pure literature, pure cinema, etc.) isn’t a hollow expression; it refers to an aesthetic reality as difficult to define as its existence is to contest. Or at least to contest that it was once the goal of modernism: getting rid of figurative painting, for example, in order to move toward the essence of painting, i.e. the composition of colors and geometric forms on a flat canvas. But if there’s one thing that cinema taught us in the course of the twentieth century, it’s how an art form can be fundamentally impure.
Alain Badiou once described cinema
as a culmination of every art form: cinema is not the seventh art, but the
addition of the previous six plus one (“+1” refers to cinema’s own specificity).
It is not an art, defined by its
uniqueness, but rather a specific conjunction of diverse art forms that we once
thought of as alien (more on that later). Through this impurity, cinema brought
forth new artistic processes like recycling, sampling, editing, mixing, etc., operations
that came to define our current age of impurity – indeed, the quest for a pure
art is now obsolete, and in that sense, claiming the supremacy of older,
“purer” video games appears quite vain. After all, video games are a product of
this age, the natural offspring of cinema (see my last article) and as such,
they’re also defined by their impurity – albeit in a different way than cinema.
Everything about video games is a
reminder of this fundamental impurity, from the idea of shared-authorship
between the player and the artist to their identity torn between industry and
art (a trait they share with cinema). And to some extent, they’re also a
culmination of previous art forms, or at least of some of them. Theater and
literature are less present, but even the most seemingly “pure” video games are
taking cues from painting, architecture, sculpture, music and, certainly,
cinema. Just like cinema, then, video games have their own unique way of assembling
diverse art forms. And just like cinema, they are composed of moving images, so
it’s only natural that they look up to their elder.
But saying that video games appeared “after” cinema, or that they’re both similarly
impure, doesn’t mean that video games are
necessarily some kind of follower, or that they’re aiming for a similar
experience, just as cinema was not theatre even though they share some features
like actors, scenes, dialogs, etc. Cinema and video games may be both
impure, but they are so in their own specific way. In cinema, mise en scène is what brings together
the tools of painting (the framing), of theater (the actors), of literature
(the storytelling), etc., and assembles these disjointed parts into a coherent
whole that we call a “movie”. As such, mise
en scène cannot exist by itself, in a sort of “pure” state, because it is a
link, a conjunction. It needs at least two disparate elements in order to
exist.
This is how we should think of
gameplay: not as some pure, isolated element, but as a unique way to coordinate
diverse artistic tools into a coherent whole. There is no such thing as “pure”
gameplay: if we define gameplay, Wikipedia-wise,
as the specific way in which players interact with a game, then the rules and
mechanics that constitute the gameplay must be conveyed to the players somehow, through
graphics and/or text. After all, I’m writing about video games, not about board or card games: the representational
means here are extremely important. And this “video” aspect will always be
heavily influenced by previous art forms, no matter how abstract or minimalist
the image is.
The key word here is “influence”. A
better one would be “adaptation”: in
order to respect the cinematographic experience, its essence, one cannot just
imitate its images. And this is what video games do: they do not imitate
cinema, but adapt some cinematographic tools for their own purposes, or incorporate
them in their own language. For example: cutscenes. At first glance, a cutscene
is a pure imitation of the cinematic form. The usual complaint is that they do
not belong in a video game because a cutscene is basically a scene from a movie that
negates the essential interactivity of video games. But, hey, there is no such
thing as a cutscene outside of video games. There is no cutscene in a movie,
and a movie is not a long cutscene. It’s a movie. When we make a three-hour movie out of The Last of Us, we do
not watch these images with the same mindset we have when we encounter them in the
context of the full game because a cutscene has some functional purposes with
no equivalent in cinema: cutscenes, on the most basic level, are a tool to
establish the player’s next objective. When watching a cutscene, controller in
hand, we’re not only thinking “oh, look at what’s happening!” but also “oh,
this is what I need to do next”. And anyway, a three-hour movie made of
cutscenes from The Last of Us is
radically different from what a real movie of The Last of Us would be like if it were conceived as one. For
one, the dramaturgy would be way more focused: it makes sense as it is in the
context of the game because these cutscenes inform the gameplay and affect how
we approach each section by giving different objectives and/or motivations to
the characters, but the same story quickly appears redundant and a bit stale
when condensed in a movie form. This is not a problem in the game: it becomes
one when you cut the game out and try to make a movie out of it.
What I’m saying is that the way cutscenes are conceived is inherently videogame-y: they may borrow the visual language of cinema, but only to the same extent that cinema borrows the dialogs of theater. In both cases, the “alien” aesthetic is given a new purpose when transferred into a new context: the framing and the editing of a scene can enhance, nuance or contradict the words spoken by an actor; likewise, a cutscene can enhance the gameplay by showing you how cool your new gun is or nuance it by trying to convince you that killing is bad (and vice versa, gameplay can also nuance a cutscene). A more apt comparison would be music scores in movies: music and cinema are two different art forms, working in quite different ways. I could argue that music does not belong in a movie because it is superimposed over the images and often serves as a crutch to create a mood or sustain an emotion when the images are unable to do so themselves. In most Hollywood movies, music is omnipresent, overflowing every scene, telling us how to feel: this is basically how we talk about cutscenes, a recourse to another form to express something the movie/game is unable to do through its own “pure” means. And in fact, a good movie score works exactly like a good cutscene: either by contrast or emphasis, a score works with the images to create a particular mood, while a good cutscene does the same with the gameplay in regard to storytelling. Cutscenes, like music, are not necessarily a crutch: they may often be, but they can also create a unique experience that neither cinema nor video games could achieve alone.
Video games may borrow some
cinematographic elements, mainly a linear, non-interactive plot and some storytelling
devices to go with it, but by doing so they do not relinquish their “true
nature” as video games, because transferring these elements into a new context
grants them a new meaning. Cutscenes are never isolated, so we should not think
of them in isolation: they’re followed or preceded by a sequence of gameplay,
and the interrelations between an interactive sequence and a non-interactive
one can be quite complex (see Metal Gear
Solid, especially Sons of Liberty).
This doesn’t mean that “purer” video
games don’t exist, that they’re less valuable or that knowing what’s unique to
a specific medium isn’t worthwhile; on the contrary, you need to understand the
nature of your own medium before beginning to annex new ones. And this is exactly
the point of this article: if video games
are able today to properly tackle the cinematographic domain, it’s first and
foremost because they are confident enough in themselves to become transparent
in front of their object. They can finally aspire to faithfulness: not the
illusory fidelity of a tracing, but an intimate understanding of their own
aesthetic structures, the prerequisite needed to honor their commitment
toward cinema. The multiplication of cinematic video games shouldn’t bother the
critic worried about the purity of his art form; on the contrary, these games
are the token of its progress.
The idea of valuing “pure” video
games (or cinema) over “impure” ones doesn’t make much sense: impurity is at
the very foundation of video games. Hell, it’s already right there in their
name: a game in video form, a mix of two different, previously unrelated elements,
moving images and games.
“But come on,” the nostalgic partisan of Video Games with a capital V
(independent, autonomous, specific, pure of any compromise) will say at this
point, “why put all this artistic effort in service of a cause we do not need?
Why make Uncharted when we can already see Indiana
Jones on the big screen? Even if
these cinematic video games are accomplished, surely you will not claim that
they’re more valuable than their inspiration, that Uncharted is more essential than Indiana Jones, or, above all, that they’re more important than a video game with an
equal artistic value but working on a theme unique to video games? The Last
of Us is a good imitation of movies, but
is it more valuable than The Road
(the book) or than Starseed Pilgrim?
You’re saying this is progress, but in the long run it will only sterilize
video games by reducing them to an annex of cinema. Give back to cinema what
belongs to it, and to video games what can only belong to them.”
For one: if video games are looking increasingly cinematic, it is a fact
that we can merely record and try to understand because there’s nothing we can
do about this. Like architecture, video games and cinema are functional
arts: a house is meaningful only if we can live in it, just like cinema and
video games need a minimal audience to exist (and in their cases, this minimum
is immense). Or, following another system of reference, we should say that for
video games (and cinema) existence precedes essence. The critic, even in
his most adventurous extrapolations, must proceed from this existence. It may seem like naive pragmatism,
but in front of an art form, humility is in order. There’s no point in arguing about
how games should be: the critic must look at what games are right now, what
they’re trying to do, and try to understand what it means.
And more importantly: why not look
to cinema? Cinema, a more evolved art
form, and in some of its incarnations more challenging and educated, offers
video games complex characters and, concerning the relations between form
and content, a precision and a subtlety that we were not used to seeing in our
interactive medium. Why not use this knowledge? For example, I will not
pretend that Uncharted is as
valuable as Raiders of the Lost Ark: the latter is far superior in every
way, mainly because it belongs to the body of work of a major artist, whose
whole cinema revolves around his search for a responsible spectacle (I amply
talked about Spielberg here, here and here). And this is exactly what video
games can learn from cinema: not auteurism (I’m not sure it is needed, or even
relevant, in the context of games), but how to use moving images in a
meaningful way, beyond the simple ability to entertain, in order to propose a
vision of the world.
Everything now looks as if the specific themes of video games have been
completely worn out by the techniques that gave birth to them. In order to arouse
emotion, it’s not enough anymore to render a lively 3D open-world. Video games
entered, without a doubt, the age of the scenario; or maybe we’re witnessing a
meaningful inversion between form and content. Not that the former has become irrelevant,
but all these new techniques (convincing actor models, credible AI, smooth
animation, etc.) are bringing us towards a transparency of the subject that we
can now appreciate for itself, and for which we are more and more demanding. Maybe we lost something in the
process, something unique to video games, but it is foolish to continue hoping
for some kind of “pure” video games. And why would we? They have been insulated
long enough as it is. Looking to cinema is sometimes seen as some kind of
apostasy, a negation of what video games can be in order to become something
that they aren’t when really cinema has been part of video games since their
inception. As endless debates have shown us, defining exactly what a video game
is isn’t easy, so why raise arbitrary fences around an imaginary “core purity”
that we have to preserve at all costs from the constant assaults of alien art
forms?
I’m more inclined to salute this
openness to others.
(The passages in italics above are
all my clumsy translations of an essay by French film theorist André Bazin, For an Impure Cinema: In Defense of Adaptation (1958).
Obviously he didn’t write about video games, so I had to adapt his sentences for
this new context. Most of the time, it means that I substituted the words
“video games” for “cinema” and “cinematic” for “theatrical”, thus changing his
argument “cinema is impure so it’s normal that it adapts literature and
theatre” to “video games are impure so it’s normal that they’re looking out to
cinema”. Some of my changes were more substantial, but I tried to stay as close
as possible to his ideas while transferring them to the domain of video games. I
complemented this fragmentary translation (the original article is quite a bit
longer) with some ideas concerning video games more specifically (and an
anachronistic reference to Badiou, whom Bazin couldn’t possibly have known), or else
it wouldn’t make much sense; the end result is a schizophrenic article,
well-suited (I hope) for its schizophrenic subject. My opinion on the subject
is also quite schizophrenic, so I will come back shortly with some important
nuances that were too cumbersome in the context of the translation. Since Bazin’s
original text seems unavailable online, and because I don’t want any confusion
between what belongs to him and my own words, I decided to differentiate the
sentences and arguments that I lifted directly (or mostly so) from him by
putting them in italics.)