Thursday, November 07, 2013

Who Needs a Story Anyway?

I began to write this a couple of months ago but never bothered to finish it: I wasn't sure whether I was actually talking about the game in question or just using it as an arbitrary starting point for some philosophical musings dear to me. But since I wrote it, and since it doesn't seem completely without interest, why not put it online? We'll see how it does. And if nothing else, it'll at least offer some kind of counterpoint to my recent posts on cinematic video games: a critical perspective on a game with minimal storytelling. Anyway, here it is: why the Wii U may be the most moving (as in emotional, expressive, beautiful) video game console yet.

I bought a Wii U last spring mainly because of Ian Bogost's non-review on Gamasutra: a console expressing self-doubt? Color me intrigued. My wallet didn't approve of my inquiries into the alleged conscience of a video game console, but even though I've barely touched it since, the philosophical leanings of my mind were rewarded despite the protestations of my bank account.

Playing solo, the two-screen setup is barely more than a gimmick, feeling a lot like a DS with your television acting as a bigger version of the upper screen (or at least it felt that way in the few games I played). And just like the DS, hardly any game uses the two screens in a meaningful or innovative way. Having a map of your surroundings always open on the smaller screen may be practical, but it's nothing more than that; it doesn't affect the gameplay in any way, nor does it lead to a new kind of experience. In a game like Mass Effect 3 (which I haven't played, so, I suppose…), I'm still shooting dudes in the face (as the official saying goes) most of the time, only now I know exactly where I am while doing so. Sure, this game wasn't designed for the Wii U in the first place, so it may be normal that the second screen remains underused, but it was one of the most publicized features of this port, and it is the only way most games use this new screen. I still need to be convinced that this screen in my hand enhances my experience somehow, or, better, can lead me to new ones.

But my philosophical investigation was scarcely aimed at the single player experience anyway: I was way more interested in the possibilities offered by the asymmetrical gameplay promised by the multiplayer games. And on this matter, it is, indeed, a whole different affair: the Wii U becomes a perfect, ludic representation of our relation with space and time in our modern digital world.

Sunday, September 29, 2013

Honoring cinema

This post and the last one were written for the Blogs of the Round Table at Critical Distance, a monthly invitation for video game bloggers to discuss a proposed topic. The theme this time was "What's the Story?", storytelling in video games. You can find the other entries by following the previous link.

My last article was a bit dishonest. I almost scrapped the entire text a couple of times to write instead about how it would be cool to transfer André Bazin's defense of impure cinema to the context of video games, and how it is not quite possible. I do think that video games are fundamentally impure; I love the idea of cinematic video games, and I don't think there's anything wrong with linear storytelling or with cutscenes in an interactive medium. The problem I had was that as soon as I began to write about a particular game, my "defense" of cinematic video games didn't look so much like a defense anymore. In truth, I am much more ambivalent about the reality of cinematic video games than my article implied: let's say, then, that it was an ideal defense of these games.

So, here are the nuances I left out last time, with some additional musings on the subject, with an ethical twist, leading to a long coda on The Last of Us.

Saturday, September 21, 2013

For Impure Video Games: In Defense of Cinematic Storytelling

With the critical and detached perspective we can now afford on the video game production of the last ten or fifteen years, one of the dominant phenomena of its evolution promptly appears to us: the increasingly significant recourse to the cinematic heritage.

"This is a video game," we can often read about older games like Super Metroid or the original X-COM, or even about more recent examples like Deus Ex or Dark Souls. Few (or no) cinematics! Expressive and/or emergent gameplay! This is what video games are about! This is their distinctive and unique quality: interactivity, choice, player agency, or something to that effect. The story must be told through gameplay, not through cutscenes! But since cinematic action games are the most prominent form of video games right now on the mainstream scene, and in the public eye, at least when it comes to home consoles, does it mean that we have to forgo the autonomy of our art form? Are video games, or what remains of them, still able to survive today without the crutch of cinema? Are they about to become a subordinate art form, dependent on another, more traditional one?

Thursday, August 15, 2013

Imitation of Life (4): Film is Dead, Long Live Video Games!

I opened this series of articles about CGI with the idea that "Video games are not cinematic and they will never be", a radical statement that I would not repeat today without a load of nuances; here are some of them (a lot of them actually: be warned, this post is very long! So go grab a cup of coffee, or the entire Bodum, just to be sure…)

Monday, July 15, 2013

Imitation of Life (3): A World Past

In my last apocalyptic article, I presented computer-generated imagery as a threat to the photographic image, but what is so dangerous about CGI?

As previously discussed, in Terminator 2: Judgment Day, James Cameron stages a duel between these two forms of images, CGI vs. moving photography. The T-1000 (Robert Patrick), the new Terminator made of liquid metal, is an almost perfect representation of CGI: the T-1000 can blend into its environment or imitate a human body, and he's a fluid entity, able to metamorphose into almost any shape he wants. Just like CGI, the T-1000 exhibits the appearance of the real object he transforms into, but it's no more than that, an appearance, because he's unbound by most of the physical laws that would normally define this object; the T-1000 looks like reality, but does not act at all like reality as we know it (he can pass through metal bars or reconstitute himself once melted). This is as far as the comparison can go, though, because the robot is still a concrete being, made of metal, unlike CGI and its nature as digital information living on some hard drive. Even so, Cameron found in the T-1000 an apt representation of the metamorphic abilities of CGI and its desire to imitate the realism of the photographic image. In one of the most frightening scenes of the movie, the T-1000 becomes the floor behind an unsuspecting guard in the asylum: for a moment, we perceive him as if he were a real floor, just like a floor in a movie can be made with CGI. The T-1000 acted as a sort of prophecy about the future of the CGI image, a prediction now fully realized: we cannot know anymore whether the environments the characters move through are a real, physical space, a digitally created one, or a mix of both. Nowadays, any floor may hide a T-1000.

On the opposite front, there's the original Terminator (Arnold Schwarzenegger), a big, slow, solid machine that moves through space and time following the same physical laws as we do... except for the time travel part; I mean that he has to move from one place to another using his legs, at a natural speed, and that he cannot walk through a wall except by breaking it. In comparison with the T-1000, he's an artefact from the past, a glorious reminder of an obsolete technology. Actually, he may not be so obsolete, since the Terminator, the old-school special effect conceived around the particularities of the photographic image, destroys the T-1000, the evil CGI. In this way, the movie explicitly argues for the pre-eminence of the photographic over CGI – but at the same time, Terminator 2 was an obvious showcase for the possibilities of CGI, a technological landmark in this domain. I have no proof other than my own experience, but I'm pretty sure that in the minds of the audience (and the movie industry), the T-1000 made a greater impression than the Terminator. In the fiction, the T-1000 came from the future in order to erase the past, and indeed, for the audience, he was a vision of the future, of what movies could become; maybe we didn't immediately understand, though, that this new technology creates an image without a past, which complicates, when incorporated into film, our perception of the photographic image as a "world past".

Friday, June 28, 2013

Imitation of Life (2): The Fall of Man

Let's begin with the obvious: Hollywood doesn't like change. So, any novelty Hollywood movies may bring has to be firmly counterbalanced by the most rigorously classical visual style possible. This is what I meant last time when I wrote about how Hollywood movies first represented computer-generated imagery in an ambiguous way: CGI was a new technology Hollywood admired as much as it feared, and we find this contradiction in most movies featuring CGI, from the 80's up to the end of the 90's (to take a more recent example than the one I will discuss at length below: in The Matrix, the digital world is presented as a falsehood that we must tear apart to go back to the real analog world, but at the end, Zion, the human city in the real world, is saved by Neo, a digital super-God).

Although Hollywood never hesitated to publicize the many virtues of CGI, filmmakers and producers alike had several reasons to be anxious about it: CGI was a threat to the photographic image and, more importantly, to the classical language of Hollywood movies (which was conceived around the limitations and possibilities of the photographic image anyway). So, while the movies presented a new kind of image, CGI, they continued to implicitly champion the image of old, photography – a sure way to slowly introduce the radical visual innovations made possible by CGI while anchoring them in the tradition of Hollywood cinema. OK, we have this new CGI thing, said Hollywood, but don't worry, our movies will remain the same. Nowadays, movies rarely think about CGI because CGI is a given, an official tool in cinema's language: the threat has been neutralized, so to speak, in the sense that the limitless possibilities of CGI have been harnessed and restrained by the classical language of Hollywood cinema. In theory, CGI can do anything, but right now, for better and for worse, it continues the narrative tradition previously established by the photographic image.

Saturday, June 08, 2013

Imitation of Life (1)

What is cinema? Does this question still make sense today, in this age of CGI, digital cameras, heavy post-production effects, with movies distributed in a variety of formats, from the smallest screen (cellphones) to the biggest (IMAX), with the proliferation of home-made videos, with television series and video games increasingly looking like movies (and movies increasingly looking like video games), with the ubiquity of all kinds of moving images, with the rise of new media, etc., etc.? "What was cinema" seems like a better question, but I'm not ready yet to declare cinema dead, as it has been quite fashionable to do for some time (since the advent of the VCR in the 80's at least). Like I wrote last time, I'm mostly interested, for this blog, in computer-generated imagery and how it can be incorporated (or not) into the photographic image, but CGI is merely one of many factors responsible for the profound changes happening now in cinema. In this context of ever-changing technologies and constant metamorphosis of the moving image, clinging to the idea of a "pure" cinema, like I did in my introduction last week, is clearly hopeless.

But is there such a thing as "pure" cinema in the first place? Most film theorists (if not all) would say no, there isn't. There are many reasons for this (the industrial nature of cinema, the fact that it's a collective effort, Alain Badiou's mix of art and non-art), but I want to focus on one for now, more related to our subject: cinema, by essence, is an amalgam of many art forms. It's less the seventh art than the addition of the previous six. So, why is it so different now with CGI? Can't we just say cinema has integrated animation like it did theater? In a way, yes, cinema is moving in a new direction due to the influence of CGI, but animation is an intruder in the photographic image, in a way that wasn't true of other art forms. Indeed, cinema is impure, but even if it can borrow heavily from theater (for the mise en scène), literature (for the screenwriting), painting (for the framing) and music, none of these challenge the specificity of cinema: the succession of photographic moving images. CGI, though, as a form of digital animation, is an intruder, a radical Other that is tearing apart the fabric of the photographic image. It's not necessarily a problem, but then we have to find a way to wed these two opposite forms in a meaningful way; that is the question I want to address in this series titled Imitation of Life: how did cinema approach it in movies mixing CGI with the photographic image?

Tuesday, May 28, 2013

Things to Come

Videogames are not cinematic and they will never be.

I'm not saying this because I hate these so-called "cinematic" videogames, or because I'm a ludologist who cares only about mechanics and gameplay – quite the contrary: I prefer my videogames with a story, and it's quite difficult for me to write purely about gameplay, without the support of a fiction. It's not even the gamer in me who's speaking, but the film lover, which I am first and foremost: calling cutscene-heavy, high-production-value, story-centric videogames "cinematic" demonstrates a profound lack of respect for what cinema really is. I think I've said it before: I'm coming to videogames through the "video" angle more than the "game" itself, so my take on this is slightly different from the usual one. In general, we complain about this cinematic leaning in recent videogames because it implies a loss of interactivity, a simplification, or even a rarefaction, of the rules of the game, which the designers replace with a more classical, linear narration over which the player has almost no control. This is true, of course, but really, most of the time, I don't mind; it makes for a different experience, less "gamey" maybe, but it can be compelling and meaningful nonetheless.

But when I say that videogames can never be cinematic, I'm thinking about images, not interaction: I'm an old-school cinephile, already nostalgic for the disappearing celluloid, and a Bazinian at heart, so essentially I think it's impossible to emulate cinema through computer-generated imagery. I'm well aware that what we mean by "cinematic" in videogames relates to the use of camera angles, movement, staging, lighting, etc., and not to the way the images are produced, but that is a superficial understanding of cinema's visual language, as if the content of the images and their ethical relation to reality were insignificant, when actually that is where the very essence of cinema lies. For sure, our conception of cinema has drastically changed in the last twenty years and CGI is pretty much a part of cinema's language now, so it may seem foolish or backward-thinking to dismiss everything CGI-related in the name of some pure idea of what cinema once was. Well, I'm not dismissing CGI per se (it is not "evil" or inherently bad), but rather its current use and its confusion with cinema. CGI and cinema are too different in essence to be considered similar means of expression: while an artist working in cinema has to use the real world as his first (or even only) expressive material, CGI is similar to painting or animation in that the artist has to create from scratch everything he wants to represent. How can videogames be "cinematic" when computer-generated imagery is closer in spirit to painting and animation than to traditional photographic cinema?

Thursday, May 09, 2013

Tomb Raider (2013): Surviving a Tutorial

I started playing the new Tomb Raider recently and, as I already knew, the first hour or so is a series of never-ending QTEs. And as I also already knew, the brand new Lara Croft is represented as vulnerable and terrified, as opposed to our usual invincible, arrogant hero. What I didn't know, though (but should have guessed), is that these two elements are quite contradictory: simply put, a hand-holding, heavily scripted QTE-fest of a tutorial does not convey, at all, vulnerability and terror. There was not one moment during that whole sequence where I felt vulnerable, because everything was so scripted and pre-determined that nothing seemed threatening. At least not to me as a player: I was watching a vulnerable character, yes, but I sure wasn't playing one. In fact, in the few moments I was playing, in control of Lara, I was just like the usual invincible, confident hero I had played before in every other third-person action game – I mean, how can I fail at pressing W? I know where the W key is, after all. Pressing W for half an hour can feel meaningful when playing Proteus, because this minimalism suits the contemplative experience the game offers, but it doesn't work as well when you're running to get out of a cave that is collapsing around you: the triviality and impossible-to-fail action of pressing W just doesn't match the representation of chaos and imminent threat on the screen.

The opening sequence of Tomb Raider (the first twenty minutes in particular) is as bad a case of ludonarrative dissonance as it gets, cramming into as few minutes as possible all the biggest problems with how AAA videogames envision interactive storytelling nowadays. This is a bit sad, because the intentions were good (I want to play a vulnerable character for a change) and the writing is above average, for the most part, so let's honor this eloquent case study by taking it apart.

Friday, April 12, 2013

Lincoln (2012), Steven Spielberg

I didn't have a proper conclusion for my two articles on Spielberg's cinema, but I have found it now with Lincoln, his latest film, which also happens to be a good follow-up to my last post on ethics. I must say that this is not exactly a review, because I want to focus mainly on one scene that I will use to introduce a new angle from which we can view his cinema; in lieu of proper criticism, I'll add some general observations at the end.

As always with Spielberg, his movie is an answer to an earlier one; in this case, Lincoln replies to Saving Private Ryan (among others, but especially). Both movies open on a similar representation of the war: in SPR, it was a long, virtuosic set-piece of the Normandy landing, the most famous scene of the movie but also the worst. A little nuance is in order, but with its presentation of violence in a frontal, ostentatious manner, this fluid camera moving cleverly around the scene, travelling from the characters to the gruesome deaths of unknown soldiers and back again, as if this violence was taking place especially for this omniscient camera, which always happened to be at the right place at the right moment, with all this technical skill on display, the whole landing no longer seemed chaotic or arbitrary; instead we felt mostly the absolute mastery of the filmmaker, who was using all his ingenuity to set up the most impressive spectacle possible (and it is impressive, but this doesn't really serve the purpose). Lincoln begins on the bloody fields of the Civil War, but this time the violence lasts about one minute: Spielberg turns away from the war itself and heads towards his main character, a Lincoln in conversation with two black soldiers. From now on, the filmmaker isn't interested in the action, but in the ideas behind it (which, incidentally, coincides with his announcement that he will no longer make action movies).

Friday, April 05, 2013

To Kill Or Not To Kill

In the past week, I've been having a little back-and-forth with Joel Goodwin on his blog Electron Dance about ethical choices as they are currently depicted in videogames. As my answer to his last comment grew and grew, and as I realized that I was no longer arguing but restating Goodwin's argument in my own words, I thought it would be best to develop it here more fully, as a sort of addendum to my article on The Illusion of Choice.

I first intervened on his blog (on the last part of his excellent series on Dishonored) to comment on this comparison: "The ethical choice of Dishonored and Bioshock is artificial, as worthless as the "trolley problem", a popular thought experiment in ethics. Here's the cut-down version of the trolley problem: five people will die unless you throw a switch in which case only one person will die. There are variations of the problem but basically Spock said it best with "the needs of the many outweigh the needs of the few"." My main argument was that the ethical choices in these videogames are more meaningful than the trolley problem because they appear in the context of a precise narrative. In a sense, we should not consider these choices from the point of view of the player (what would I do?), but rather from that of the fictional character controlled by the player (what would Corvo do?), just like in any other narrative medium dealing with ethics – and unlike the trolley problem, which exists in a vacuum. I still agree with that part, but I would now retract the rest of my argument and propose instead, as Goodwin did, that this narrative meaning doesn't make these choices less hollow. In fact, such a context is exactly why we should not even qualify them as "ethical" in the first place.

Saturday, March 23, 2013

Videogames as Possibilities

Let's summarize what I said in my last two articles: on the one hand, the representational aspect of videogames is always shifting, in the sense that the moving image in front of the player is different in each playthrough, and that no two players will ever see the same game in the same way. On the other hand, we should not forget that the mechanics governing the movement of this image are quite fixed, and are not wholly under the player's control, because it is the designer who decides exactly how much freedom he's going to allow in his game. Our question now: what lies between these two extremes of player agency and designer control?

In one word: possibilities. So, here's my proposal: instead of Sid Meier's "game as a series of interesting decisions", let's try out "game as a series of interesting possibilities". Or even better: "a good videogame is a series of interesting possibilities", because I'm thinking mainly here of videogames (although I think this qualitative statement can also stretch to traditional games), and I'm less trying to understand what games are than what games are good at. Truth be told, I'm not fond of definitions (that's usually where name-dropping Wittgenstein is expected), and this is not a rigorous academic paper, so consider this idea a modest proposal, without any pretension of being all-encompassing or definitive. Still, I have my reasons: I prefer thinking in terms of "possibilities" because it stands closer to this intersection of player and designer, while "decisions" puts the focus on the player. These decisions have been designed beforehand, so the designer is not completely neglected in Meier's canonical definition, but it's the player who ultimately decides, and therefore the emphasis is on him. "Possibilities", though, is probably closer to the designer's end, because he created these possibilities in the first place, but it also represents how the player sees a game, how he experiences it: "this or that may happen, I may do this or not, etc." Also, "possibilities" is more inclusive since it can cover those so-called not-games like Proteus or Dear Esther, in which the player doesn't have a lot of options, even though these games are rich in possibilities (the procedurally generated island of Proteus or the random selection of the fragmented narration in Dear Esther). It thus removes the idea of challenge, which I do not find necessary: to use a classic example, how is Snakes & Ladders challenging? The player has no decision to make (he merely follows the dice), but the game is fun because of its numerous possibilities, which are born out of the game's design, the careful arrangement of snakes and ladders on the board.

Friday, March 08, 2013

The Illusion of Choice

“A game is a series of interesting decisions.” We all know this famous assertion made by Sid Meier (does anyone know when and in which context he said it, or do we just have to take it for granted because it has been repeated so many times that it became its own truth?), but what did he mean by “interesting decisions”?

Let's take strategy games, maybe the most "gamey" genre of games, or at least the one closest to traditional games, and the one Meier is renowned for: they're a precarious balancing act, where every decision must lead to various consequences, preferably with some degree of unpredictability, or else there's no strategy at all, just an optimal tactic that will work in every situation. But in a narrative-centric game, what makes a decision interesting is completely different, and is not necessarily coherent with what would be best from a purely ludic perspective: for example, it is not always wise to present equally seductive rewards when a player has to choose between a "good" and an "evil" option. The designer would say we should not penalize a player for preferring one path or the other, because who will want to be "good" if the game becomes dull, or too hard, or too easy? But what does it say, from the point of view of ethics, when a game presents such "interesting decisions" based on a system of rewards?

Friday, March 01, 2013

The Protean Form of Videogames

Maybe this can explain why I find videogame criticism so difficult, or at least so alien to me, with my background in cinema: it is the only form of criticism relying both on what the critic hasn't experienced himself and on an object that appears differently to each player. The videogame critic has to write about an object so fluid that nobody else (not even him) will encounter it again the same way he did. Sure, seeing a movie in the (usually) respectful ambiance of an almost empty morning press screening is clearly different from seeing the same movie the night of the premiere in a crowded, frantic theater, or from seeing it in the comfort of your own sofa, but we're more or less able to abstract the physical context of our encounter with a movie and concentrate our criticism on the moving images themselves, even if we know that this context contributed to our appreciation. And sure, every individual brings his own experience to an artwork, but in traditional art forms, these subjectivities still confront the same immutable object made by the artist; an artwork may exist only once it's interpreted, and therefore the same object may produce different artworks (like I wrote before, my Citizen Kane is not your Citizen Kane), but every spectator should be able to describe the same physical object that sustains each of their unique interpretations. Indeed, an interpretation is only as good as its capacity to properly encompass the whole of an artist's work, in the most coherent way possible, so it's not an entirely subjective process.

Friday, February 15, 2013

The Experience of Art

I should have learned by now: never announce an article that is not yet written. I will not answer (for now) the questions raised at the end of my last article. I want to write about videogames, but strangely, every time I begin an article on the subject, I struggle with my ideas and come back to what I'm more comfortable with: cinema, authors, the nature of art, Spielberg, etc. Maybe all those numerous re-definitions of what games are (or are not) are confusing me, and now I just can't grasp the concept anymore? I don't know, but for now, I have an article on my favorite subject, criticism, which is going to lead, hopefully, to another one on videogame criticism, and we will then be a little closer to the subject – but it's another not-yet-written text, so I'm not going to promise anything…

If it wasn't clear already: I come to cinema and videogames through the perspective of art. Without directly addressing the "videogames as art" question, I'll just say that I believe all videogames have the potential to be art, and in the end that's all that matters. Likewise, not all movies can be described as artworks, but cinema has the potential to produce artworks, so for me all movies should be considered on that level. I don't even understand what the point of doing otherwise would be, unless you have a very low opinion of criticism and just want to know where to invest your entertainment money. I guess many people are looking for just that, a consumer's guide, but they're the ones we should convince that there is more to cinema and videogames than a good way to spend some time. And I guess, also, that for most people this consumer's-guide approach makes more sense with videogames: after all, the first thing we associate with games is "fun", as if there's nothing else to look for in a game, or rather as if everything else is tangential to the "fun" factor (it's partly true for cinema also, but we're more accustomed to the idea of movies as something more than pure entertainment). Videogames are still struggling to be considered a "serious" expressive medium, but in order to achieve this, the first step would be to offer good criticism; we have to prove that we can write about games seriously before we can convince an outsider of their value. All this has been said before over the last decade, and it's not difficult nowadays to find meaningful videogame criticism, but I think we still lack a proper theoretical framing akin to what auteurism was for cinema, something that could reach outside of the (relatively) small videogame community. One could argue that this is exactly what New Games Journalism did in 2004, and while it is undeniable that Kieron Gillen's manifesto inspired some important pieces of videogame criticism, I'm not sure it really helped to show how videogames can be important to people who are not New Games Journalists themselves.

Friday, February 01, 2013

Author as Style

What is an author? Or rather: how does the idea of “author” fit into an interpretation of a work of art?


Let's begin with the obvious: The Death of the Author, by Roland Barthes, published in 1968, a famous essay arguing against intentionalism, or what we can call biographical criticism, i.e. interpretation relying on the author's intentions, or on what we know of the author's life. For Barthes, on the contrary, the coherence of a text comes first and foremost from the reader, who gives the text its meaning, the author himself being nothing more than the person who happens to write the text. This person, the artist, the facts of his social life or his opinions expressed outside of his texts: all this is trivial; only the work itself matters (although the context of its publication is still important). Analysis must then concentrate on the writing itself, on the style, because "the language speaks, not the author". But then, what about my two articles on Spielberg, which tried to define him as an author? Surely I must think Spielberg is able to impose his will, or his intentions, on his creation, because what would be the point of analyzing his whole body of work if the fact that these movies were all made by an individual who goes by the name of Steven Spielberg is ultimately irrelevant? And, if we follow the logic of anti-intentionalism to its extreme, if we effectively kill the notion of the author in interpretation, how can we even tell the difference between a man-made work of art and a pile of garbage aimlessly thrown together by the wind, since both of them are created without intention?

Thursday, January 17, 2013

Coming soon

Well, I never finished my Spielberg articles, and now I feel too far away from them to be able to write a proper conclusion. But I just want to clarify one thing before moving forward: although I first intended to, I didn't include Spielberg's dramas in my analysis, mainly because it was already long enough and I didn't want to linger on the subject. This omission may suggest that they are, in some ways, lesser movies, or that they're not concerned with the same ideas as their more spectacular brethren, but all of Spielberg's movies are, indeed, interrelated, and a proper, complete picture of his cinema isn't possible without them. Maybe I'll come back to it someday (when I finally see Lincoln?)

Now, for what's coming next: I will slowly come to the subject of videogames, as the subtitle of my blog says I should, by first developing a bit the notion of the author I lightly touched upon in one of my previous articles, before trying to see how it can fit into the context of videogames. My analyses of Welles and Spielberg were both means to implicitly introduce the critical approach I want to apply to videogames: both filmmakers laid down in their movies their aesthetic philosophy, which I tried to outline while myself using an approach somewhat similar to the one I was describing (at least that was the intent). These are two of my closest friend-filmmakers, not so much because of what they think, but more essentially because of how they think. This is probably still vague for now, but I'll try to make it clearer, starting soon enough.