Archive for the 'Animation' Category
KT: I do not understand Beowulf. I don’t understand why the director who made one of the great modern Hollywood films, Back to the Future, and several very good ones, including Who Framed Roger Rabbit? and Cast Away, now has this fixation with creating nearly photo-realistic 3D digital images.
DB: I think that on the whole Zemeckis’ films have become weaker since Bob Gale stopped working with him. I like the sentiment of I Wanna Hold Your Hand and the crass misanthropy of Used Cars, and I think the Back to the Future trilogy mixes both in clever ways. But now almost every Zemeckis film seems to be less about telling a story than solving a technical problem. How to best merge cartoons and humans (Roger Rabbit)? How to push the edge of spfx (Death Becomes Her, Forrest Gump, Contact)? How to make half a movie showing only one character (Cast Away)? There’s a stunting aspect to this line of work, though I grant that it can lead to technical breakthroughs, as in Gump.
KT: I also don’t understand why most of the supposedly state-of-the-art effects technology looks distinctly cruder than the CGI in The Lord of the Rings, the first part of which came out six years ago. I don’t understand why studios that are trying to push 3D to a broad audience make a film with silly action aimed at teenage boys. That’s an OK strategy when you’ve got a $30 million horror film, but a budget of $150 million demands a lot broader appeal.
DB: And the evidence so far indicates Beowulf doesn’t have that appeal. The obvious comparison is 300, from earlier this year. According to Box Office Mojo it cost about $65 million to make, but it reaped $70 million domestically in its first weekend and wound up with $450 million theatrically worldwide. Beowulf grossed $27.5 million in its first US weekend and currently sits at about $146 million worldwide. I’d think that this has to be a disappointment. Recall too that the Imax screenings have higher ticket prices, so there are fewer eyeballs taking in Angelina Jolie’s pumps, braid, and upper respiratory area.
In addition, sources suggest that a 3-D version of Beowulf will not be available on DVD, so the sell-through takings—the real source of studio profit—may be significantly smaller than average.
KT: It seems to me that the people who are pushing 3-D so hard and hoping for it to become standard in filmmaking are forcing it on the public too soon. It’s still fiendishly difficult and expensive to shoot live action material in digital 3-D, so most projects are animated. One approach, taken in Beowulf, is to motion-capture real people and animate the characters to make them appear as much like the real people as possible. The problem is that they still have a weird look about them, like moving dolls. People have complained about the dead-eyed gaze of the characters in The Polar Express, and though there’s apparently been an improvement between the two films, the eyes don’t always look as though they’re focusing on anything. It can be done, though; the extended-edition Lord of the Rings DVD supplements about Gollum show how much effort went into making his eyes have a realistic sheen and flicker.
I was also struck by how clunky some of the animation looked. Beowulf is supposedly state of the art, and it certainly had the budget of a major CGI film. Yet some of the rendering and motion-capture was distractingly crude. I noticed that particularly on the horses. Their coats looked pre-Monsters, Inc., and their movements at times reminded me of kids rocking plastic toys back and forth. I suspect this effect had something to do with a lack of believable musculature. If you look at the way the cave troll was done in The Lord of the Rings: The Fellowship of the Ring (again, demonstrated in the DVD supplements), there was a specific program to simulate the way muscles move on skeletons, even the skeletons of imaginary creatures. Now, six years later, we see these things that look like hobby-horses—and that all run alike.
The backgrounds were often strange as well, simple and flat-looking, like painted backdrops. There were some exceptions, with the seascapes and rocky crags pretty realistically done. But just plain hillsides and groups of tents and so on looked almost sketchy in comparison with the moving figures, and I noticed that at times some fog would be put in, presumably to cover that problem up. There certainly was some very good animation as well, most notably the dragon, but there was no consistency of visual style.
DB: I’d go farther and say that 3-D hasn’t improved significantly since the 1950s. It ought to work: just replicate the eyes’ binocular disparity by setting two cameras at the proper interval or, now, by manipulating perspective with software. Yet in films 3-D has always looked weirdly wrong. It creates a cardboardy effect, capturing surfaces but not volumes. Real objects in depth have bulk, but in these movies, objects are just thin planes, slices of space set at different distances from us. If our ancestors had seen the world the way it looks in these movies, they probably wouldn’t have left many descendants.
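A rough way to see the geometry: in the simplest textbook model of stereography (parallel cameras with a horizontal image shift, which is only a sketch, not a description of any actual 3-D rig), the on-screen parallax of a point depends on the camera interval and the convergence distance. Plugging in an assumed 65 mm interval (roughly average human eye separation) and an assumed 5-meter screen distance shows why distant objects all flatten toward the same maximum parallax, which may be one source of the cardboard effect:

```python
# Idealized stereo-parallax model; all numbers are illustrative assumptions.

def screen_parallax(interaxial_mm, convergence_mm, depth_mm):
    """On-screen horizontal parallax of a point at a given depth.

    Zero parallax at the convergence distance (the screen plane);
    approaches the full interaxial separation as depth goes to infinity.
    """
    return interaxial_mm * (1 - convergence_mm / depth_mm)

EYE_SEP = 65.0     # assumed camera interval / interpupillary distance, mm
SCREEN = 5000.0    # assumed convergence (screen) distance, mm

for z in (2500, 5000, 20000, 100000, 1000000):
    print(f"{z:>8} mm -> parallax {screen_parallax(EYE_SEP, SCREEN, z):7.2f} mm")
```

In this toy model, everything from 20 meters out to a kilometer is squeezed into the last dozen-odd millimeters of parallax, which squares with the impression of thin planes stacked in depth rather than true volumes.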
It would take a perceptual psychologist to explain why 3-D looks fake. Whatever the cause, I’d speculate that good old 2-D cinema is better at suggesting volumes exactly because the cues to depth are less specific and so we can fill in the somewhat ambiguous array.
By the way, in watching a 3-D movie I seem to go through stages. First, there’s some adjustment to this very weird stimulus: I can’t easily focus on the whole image and movement seems excessively fuzzy. Then adaptation settles in and I can see the 2 ¼-D image pretty well. But adaptation carries me further and by the end of the movie I seem to see the image as less dimensional and more simply 2-D; the effects aren’t as striking. But maybe this is just me.
Back to the Future, or at least 1954
DB: I’d like to think about it from a historical perspective for a while. The industry seems to be repeating a cycle of efforts that took place in 1952-1954. The American box office plunged after 1947 as people strayed to other entertainments, including TV, and so the industry tried to woo them back with some new technology. Today, as viewers migrate to videogames, the Internet, and movies on portable devices, how can theatres woo their customers? Answer: Offer spectacle they can’t get at home.
Beowulf brings together at least three factors that eerily remind me of the early 1950s.
(1) Obviously, 3-D. The first successful 3-D feature of the era was Bwana Devil, released in November of 1952. It was an uninspired B movie, but it launched the brief 3-D craze. Columbia, Warners, and other studios made major pictures in the format, most notably House of Wax (1953). But the costs of shooting and screening 3-D were high, with many technical glitches, and apart from novelty value, the process didn’t guarantee a big audience. The fad ended in spring of 1954, when all studios stopped making films in the format.
The process has been sporadically revived, notably in the 1980s (Comin’ at Ya, Jaws 3-D) and once more it fizzled. So, ignoring the lessons of history and chanting the mantra that Digital Changes Everything, we try it again.
(2) Big, big screens. In September of 1952, Cinerama burst on the scene with its huge tripartite screen and multitrack sound. It attracted plenty of viewers, but it could be used only in purpose-built venues. Like 3-D, its technology could never replace ordinary 35mm as the industry standard. The contemporary parallel is Imax, which though very impressive will not replace orthodox multiplex screens—too expensive to install and maintain, pricy tickets. Like 3-D, it’s a novelty. (1)
(3) The sword-and-sandal costume epic. It’s a long-running genre, but it was significantly revived in the late 1940s, and it proved a natural fit for the new widescreen technologies—not Cinerama but its more practical offshoots, which gained wider usage.
From the standard-format Samson and Delilah (1949), David and Bathsheba (1951), and Quo Vadis (1951) it was a short step to The Robe (1953) and The Egyptian (1954) in CinemaScope, The Ten Commandments (1956) in VistaVision, The Vikings (1958) in Technirama, Hercules (1959) in Dyaliscope, Solomon and Sheba (1959) and Spartacus (1960) in Super Technirama 70, and Ben-Hur (1959) in anamorphic 70mm Panavision. It is, incidentally, a pretty dire genre; the peplum might be the only genre that has given us no great films since Cabiria (1914) and Intolerance (1916).
In parallel fashion, the revival of the beefcake warrior film with Gladiator (2000) coincided with innovations in CGI and thus furnished new forms of spectacle for Troy (2004), Kingdom of Heaven (2005), and 300 (2007). This trend paved the way for Beowulf. When screens get bigger, Hollywood hankers for crowds, oiled biceps, big swords, and nubile ladies in filmy clothes. Not to mention soundtracks with pounding drums, wailing sopranos, and choirs chanting dead or made-up languages. And the conviction that Greeks, Romans, and those other ancient folks spoke with British accents. The innovation of Beowulf is to turn a Nordic hero into a Cockney pub brawler.
In other words, it’s 1954 again. So if we ask, Will it all last? I’m inclined to answer, Did 1954?
KT: Yes, the studios see the new technology as one more way to lure people away from their computers and game consoles and into the theaters. Maybe that will work to some extent. There’s no doubt that a lot of the people who have seen Beowulf have praised it as a fun experience and as having effectively immersive 3-D effects. I’m surprised at how many positive comments there are on Rotten Tomatoes, where the average score from both amateur and professional reviewers is 6.5 out of 10. That’s not exactly dazzling, but it’s a lot higher than I would give it.
Even so, the film hasn’t lured all that many people away from their other activities. You’ve mentioned that Beowulf hasn’t done all that well at the box-office. It did much better in theaters that showed it in 3-D. If it hadn’t been for the 3-D gimmickry, it would probably have been dead in the water from the start. And if 3-D effects remain on the level of gimmickry, they will soon wear out their welcome. Presumably the people who have been going to Beowulf are to a considerable extent those who are already interested in 3-D, and I can’t believe there are huge numbers of people really passionate about the idea of someday being able to watch lots of films in 3-D. If more films like Beowulf come out—ludicrous, bombastic action with distracting animation problems—they’re not likely to make the prospect any more attractive.
Eventually somebody—James Cameron or Peter Jackson, perhaps—will make the first great 3D film, and then maybe the passion will spread.
3-D in 2-D
DB: I’m doubtful that there will ever be a great 3-D film, and especially from those directors. But one last historical note.
I think that ordinary mainstream cinema has been setting us up for the flashiest 3-D flourishes for some time. One of the goals of the Spielberg-Lucas spearhead was to amp up physical action, to make it more kinetic, and this often showed up as in-your-face depth. Spielberg used a lot of deep-focus effects to create a punchy, almost comic-book look, and who can forget the opening shot of Star Wars, with that spacecraft arousing gasps by simply going on into depth forever? Seeing the movie in 70mm on release, I was struck by how the last sequence of Luke’s attack mission was maniacally concerned with driving our eye along the fast track of central perspective. Did it foreshadow the tunnel vision of videogame action?
In any case, I think that aggressive thrusting in and out of the frame was integral to the style of the new blockbuster. Since then, our eyes have been assaulted by plenty of would-be 3-D effects in 2-D. In Renny Harlin’s Driven (2001), the crashing race cars spray us with fragments.
Jackson is definitely in the Spielberg line, favoring steep depth and big foreground elements. In King Kong, the primeval creatures lunge out at us, heave violently across the frame, and fling their victims into our laps.
Such shock-and-awe shots recall American comic-book graphics. These affinities are at the center of 300, as we’d expect. For example, a cracking whip curls out at us in slow motion, like a two-panel series.
Beowulf draws on this thrusting imagery, but inevitably it doesn’t seem fresh because so many 2-D films have already used it. Maybe the most original device is having the camera pull swiftly back and back and back, letting new layers of foreground pop in and shrink away. This is viscerally arousing in 3-D, but aren’t there precedents for it—in the Rings, in animated films, or some such?
Zemeckis tries to transpose into 3-D the style of what I’ve called, in The Way Hollywood Tells It, intensified continuity. This style favors rapid cutting, many close views, extreme lens lengths, and lots of camera movement. I found Zemeckis’ restless camerawork even more distracting than in 2-D. So I’m wondering if current stylistic conventions can simply be transposed to 3-D, or do directors have to be more imaginative and make fresher choices?
KT: That pull-back effect may be viscerally arousing, but in Beowulf it was usually pretty gratuitous and, for me at least, it called attention to itself in a way that was often risible. I don’t think there’s anything in Rings as crude as the shots in Beowulf that you’re talking about. In the opening scene of the latter, in the mead-hall, the camera zips into the upper part of the room, with rafters, chains, torches, and even rats whizzing in from the sides of the frame. None of that contributes to the narrative.
The flashiest backward camera movement, or simulated camera movement, I can think of in Rings is the one in the Two Towers scene where Saruman exhorts his army of ten thousand Uruk-hai to battle. The final shot is a rapid track backward through the ranks of soldiers holding flag poles. The point is to stress the enormous numbers of soldiers. There’s no gratuitous thrusting-in of set elements from the sides, just the cumulative effect of so many similar figures. The simulated camera also at one point “bumps” one of the flagpoles, causing it to wobble, but I take it that that’s an attempt to add a certain odd realism to the “camera” movement, not a knowing nudge to the audience. In Rings, the virtual camera usually follows action rather than moving independently through space. It tends to go forward or obliquely rather than backward.
As to transferring classical Hollywood style to 3-D or finding a whole new set of conventions to fit 3-D, Beowulf offers an object lesson. It uses a combination of the two. At times we have conventional conversations using shot/reverse shot, and at other times we have the swoopy-glidey style you described, with the camera zipping around the space and trying to see it from every angle within a few seconds.
The odd thing is, neither one works. The very close shot/reverse-shot views of the digital characters make them look unnatural and emphasize the not infrequent failure of their eyes to connect with each other. The swoopy-glidey camera movements are silly and don’t stick to the narrative.
I’m not optimistic enough to think that directors can come up with a whole set of “fresher stylistic choices” to make 3-D work. Maybe Sergei Eisenstein, with his meticulous attention to every aspect of such topics, could have thought the issue through, but his solution would probably not be viable for the Hollywood studios. My own thought is that directors working in 3-D should probably stick to classical Hollywood style and avoid flashy stylistic effects. So far, the more blatantly 3-D something looks on the screen, the less it makes 3-D seem like something we want to watch on a regular basis. Think of the best films of this year: Zodiac, The Assassination of Jesse James by the Coward Robert Ford, Ratatouille, Across the Universe, and so on. Would any of them be better in 3-D? Probably not.
Plus, I don’t like watching movies through something. Movies should just be the screen and you. The 3-D glasses are definitely better now than in the earliest days of the cardboard, red/green versions. Still, the Imax glasses we used in watching Beowulf were heavy enough to leave a groove on my nose. Make the mistake of touching the lenses, and you’ve got a blur on one half of your vision of the film. In short, I think that 3-D still has to prove itself, and Beowulf didn’t add any evidence.
(1) PS 9 December: DB: I originally added in regard to Imax: “It’s actually waning in popularity, except in newly emerging markets like China.” This disparaging comment was misleading. Yesterday I learned from Screen International that Imax recently signed a deal with AMC to install 100 systems in 33 US cities. Oops! Today Paul Alvarado Dykstra of Austin’s Villa Muse Studios kindly wrote to point out the Hollywood Reporter’s coverage, which gives background on the costs of installing and maintaining an Imax facility.
I was, obscurely, thinking of the traditional Imax programming of travelogues and documentaries. I failed to register that these have been largely displaced by screenings of blockbuster features, catching fire in 2003 with The Matrix Reloaded and proving successful with The Polar Express and other titles. Imax is now largely an alternative venue for megapictures, and its seesawing financial performance may have been steadied by moving into the features market.
PS 26 December: Travel has delayed our timely linking, but we couldn’t neglect Mike Barrier’s in-depth critique of Beowulf here.
PPS 5 January 2008: Harvey Deneroff has a comprehensive and judicious discussion of the 3D situation here.
This past Friday, October 19, marked a milestone in Madison movie-going. We got our first permanent digital projection set-up, and not just digital, but 3D. The first film to be presented there was Tim Burton’s The Nightmare Before Christmas.
We’ve had several big changes in the local theater scene recently. Madison, I gather, is considered a pretty good moviegoing market for its size. In May the first purpose-built Sundance Cinemas art-film multiplex opened. A more modest house, the two-screen Hilldale, was shut down and demolished; not such a bad thing, since it was not aging well. We now have two multiple-screen theaters showing nothing but art films, Westgate and Sundance, as well as the Orpheum theater, the University of Wisconsin union, and the Communication Arts Department’s Cinematheque showing art films part-time. Overall the city has something like 60 screens for a population of just over 200,000.
Film vs. Digital: The Film Scholar’s Perspective
As film historians, David and I like to work directly with films. This summer he posted an entry on the joys and possibilities of analyzing films on a flatbed editing table. We used to insist that all the frame enlargements we used to illustrate our books be from film. We acquired hundreds of trailers and patiently cut them up, mounting them in slide frames to use in lectures. Naturally they looked great. For older films, we went to archives and used special photographic equipment to capture frames. Eventually DVD technology got good enough that we had to admit frames captured from them could look as good on the pages of a textbook as reproductions from our slides or negatives. Now Film Art and our other books contain illustrations from a mixture of sources.
But obtaining images for illustrations is a very different thing from watching a movie projected digitally in a theater. The images from a strip of real film have a distinctive look to them. They convey a sense of life. It’s a subtle thing, but the minute grains that make up the blacks, whites, and colors in the frames shimmer slightly from frame to frame. (That’s why a freeze frame makes an image look suddenly grainy. There’s no play of the particles from frame to frame to overlap and create a richness.) Digital projection throws visual information on the screen far faster, eliminating the shimmer and creating instead a fixed-looking image.
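The averaging effect described above can be illustrated with a toy simulation (purely illustrative; the grain strength, frame count, and pixel count below are invented numbers, not measurements of actual film stock). Each projected frame carries an independent random grain pattern, and over a second of viewing the eye integrates many patterns, so the effective grain is much weaker than in any single frame—which is why a freeze frame, repeating one pattern, suddenly looks grainy:

```python
import random
import statistics

random.seed(42)

FRAMES = 24    # one second of film at 24 fps (assumption for the demo)
PIXELS = 2000  # sample points standing in for an image

# Each frame = true image value (0.5 grey) plus independent random grain.
frames = [[0.5 + random.gauss(0, 0.1) for _ in range(PIXELS)]
          for _ in range(FRAMES)]

# A freeze frame shows one grain pattern; projection lets the eye
# average many, shrinking the noise roughly by a factor of sqrt(FRAMES).
freeze_noise = statistics.pstdev(frames[0])
averaged = [sum(f[i] for f in frames) / FRAMES for i in range(PIXELS)]
integrated_noise = statistics.pstdev(averaged)

print(f"single-frame grain: {freeze_noise:.3f}")
print(f"24-frame average:   {integrated_noise:.3f}")
```

The averaged noise comes out roughly a fifth the size of the single-frame noise, which is the statistical face of the “shimmer” that digital projection’s fixed image lacks.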
So we’re not in a great hurry to see 35mm projection disappear, and we watch the growth of digital exhibition with both interest and some trepidation.
The Slow Progress of Digital Projection
Digital projection has been around for years now, but for most of those years we lovers of 35mm film could largely ignore it. The earliest digital screenings of films in theaters took place way back in 1998, with satellites beaming The Last Broadcast into the few American houses with the proper equipment. Every now and then other digital advances would be touted, but the installation of digital projection equipment around the world has been slow.
Some of the reasons for that slowness are obvious. As usual with such technological breakthroughs, there have been competing systems. There still are. The cost of such a projection system is around $150,000. Tell your theater-chain owner holding nine sites, each with ten screens, that he or she needs to make an expenditure of $13.5 million and see what the response is. Especially if that owner has fairly new, expensive 35mm projectors already installed. The theater owners have argued that the distributors should pay part of the costs. The distributors, naturally, want the exhibitors to pay.
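The arithmetic facing that hypothetical owner is stark enough to sketch on the back of an envelope, using the figures above:

```python
# Back-of-the-envelope conversion cost for the hypothetical chain above.
COST_PER_SCREEN = 150_000   # rough price of one digital projection system, USD
SITES = 9
SCREENS_PER_SITE = 10

total = COST_PER_SCREEN * SITES * SCREENS_PER_SITE
print(f"${total:,}")  # $13,500,000
```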
Both groups would gain advantages from digital. Among those are savings in printing up thousands of copies of a given film, usually at over a thousand dollars apiece, and shipping the very heavy prints via overnight courier service. There would be no physical prints going through projector gates and suffering scratches and breakage.
Assuming theater owners will end up paying for most or all of the conversion, they would want to charge extra in order to make up the costs of the equipment. There are undoubtedly people who think digital is superior, and they would pay a higher admission price. A lot of moviegoers, however, couldn’t care less whether the moving image they see in the multiplex is coming out of a 35mm projector or a digital one—and they’re probably not about to pay a dollar or two more to go in the digital auditorium that’s playing The Bourne Ultimatum rather than the one with the old-fashioned 35mm system.
Until now, that is. Modifications of projection systems that allow for 3-D projection have finally given exhibitors a selling point that will get patrons into theaters at advanced prices. Our local multiplex that is showing Nightmare is charging two dollars more for it than for films on its other screens. Apparently people will pay. Nationwide this weekend Nightmare is estimated to make $5,245,000 in 564 theaters, for a per-screen average of $9,122. That’s the highest average in the top 25 films. In contrast, 30 Days of Night, the top grosser at $16 million, has a $5,604 per-screen average.
The question remains, will people remain willing to pay extra for 3-D once its novelty value wears off?
2007 is increasingly being touted as the year when digital installations finally sped up dramatically. The conversion will still be a slow process, but for the first time it becomes plausible to think that in the not too distant future digital might kill off 35mm projection. In the October 13-14 issue of The Hollywood Reporter, Gregg Kilday, its editor, suggests that 2009 will be the breakthrough year for digital 3-D. Two major 3-D projects, DreamWorks’ animated Monsters vs. Aliens and James Cameron’s Avatar, will debut within a few months of each other. The real question, he says, is whether enough theaters will be outfitted with digital projection by then. With big theater chains now adopting the new technology, it seems quite possible. (See here for an excellent summary of the current situation of theater conversion and 3-D films in the pipeline.)
Alone with my 3D Glasses
As I was sitting at my desk last Wednesday afternoon, finishing up my David Cronenberg entry, I got a call from a friend of ours, Tim Romano. Tim is an expert projectionist, perhaps the best in town. He’s the one theaters often call on to test new equipment and screen the big blockbusters ahead of time to check for flaws. Not surprisingly, Tim was testing the new digital projector by running Nightmare over and over, as if for a regular day’s screenings, but without an audience. Tim suggested that I might want to drop in and catch a screening. Naturally I headed for what we call Point, which is really the Marcus Point UltraScreen Cinema.
As the name suggests, Point is part of the Marcus Theatres Corporation’s chain of 47 theaters (594 screens), scattered across Wisconsin, Illinois, Minnesota, Ohio, North Dakota, and Iowa. Some of the chain’s other theaters will be getting digital projection installations as well. The name Marcus is a big one here in the Midwest, since The Marcus Corporation also owns or manages 20 hotels in the area. Its theater chain is the seventh largest in the country.
Arriving at Point, I received my 3-D glasses from the ticket seller, obtained my snack of choice (Junior Mints), and headed for what proved to be an otherwise empty auditorium. I had missed the opening, but having seen the film a couple of times, I figured that didn’t matter. I was here to see what digital 3-D looks like.
When we go to the movies, David and I usually sit in the center of rows three to five, depending on the size of the screen. This time I sat in row five, but I quickly found the edges of the eyepieces were not wide enough to take in the entire screen. There’s an aisle across the auditorium behind the fifth row, so I sat at the front of the upper section of the house, approximately row eight. That worked fine.
I’m not sure the glasses I had were the same kind that are being used for commercial screenings. Still, my impression is that if you’re used to being down front, sitting a bit further back works well for this film.
Nightmare was originally made on 35mm film using stop-motion to animate puppets. It was transformed into a 3-D film using technology from a company called Real D, which also has been the major player in the move to install digital projection equipment in theaters.
Tim wasn’t, however, projecting Nightmare on a Real D projection setup. Just recently Dolby has moved into digital projection in a big way, and the Point installation is part of its rollout of competing equipment. (For details and other cinema chains using Dolby, see here.) Dolby may have the edge, since its projector works with existing white screens, while Real D’s requires a special silver one.
Happily, the new polarized glasses work better than earlier models. My experience with the older glasses was that one’s eyes sometimes had to struggle to resolve the images onscreen into three dimensions, and one could end up with a headache by the end of the movie. Dolby’s digital 3-D system makes perception of the depth effects automatic and effortless.
Speaking of the glasses, I was amused to note that publicity for the film (see the poster above) shows the characters wearing what look very much like sunglasses. Most probably people still think of 3-D glasses as clunky and cumbersome. This is clearly an attempt to counter that image and make the glasses look cool, like the ones sported by characters in Men in Black or The Matrix.
The depth effect is definitely impressive, even in a retro-fitted film like Nightmare. Because it was not originally produced in 3-D, most of the depth appears to extend behind the screen. I remember seeing 3-D films made in the 1950s that strove to thrust objects out at the audience—most effectively in House of Wax, when a carnival barker bounces a paddle-ball directly toward the camera. I rather prefer the depth behind the screen to the depth in front, which tends to be distracting. Whatever the promoters say about people wanting to feel themselves in the movie, I would prefer not to have projectiles coming at my face or actors tumbling into my lap.
The movement of the characters was fine most of the time. I found that rapid action caused a ghosting, blurred effect that was quite different from the rock-steady three-dimensionality of static or slowly moving figures. That ghosting may simply be an artifact of the process of turning 2-D into 3-D, and perhaps films made in 3-D will not have it.
Then there’s the matter of grain and other qualities of 35mm film. Certainly one can’t object to the lack of dust and scratches. The absence of grain is a little disconcerting, but it’s really hard to judge in an animated film like this. Presumably, though, it’s not significantly different from 2-D digital projection, which I’ve never seen. I suppose I shall get used to it.
The Current Trend
So Nightmare provides only a limited indication of what the future holds.
Thus far animated films have made up the bulk of the digital 3-D films released. It’s much easier and cheaper to build 3-D into a CGI cartoon (e.g., Disney’s Meet the Robinsons) than to use it on the set of a live-action film. Indeed, the current generation of 3-D features that include actors have employed motion-capture technology. The proof of that pudding will come with Robert Zemeckis’s Beowulf, due out November 16.
The real future of 3-D will depend on its attracting adult audiences, and that means that more live-action films will need to be made. That’s not happening now because the technology remains too expensive, cumbersome, and touchy to be easily used on set. The technology is bound to advance, though.
In the meantime, possibly we can expect that for the near future, digitally equipped auditoriums will be a little like Imax is now. A multiplex might have one Imax screen, but it would never put Imax in all its other theaters. Similarly, a theater owner might be willing to pay that $150,000 to convert one projection system in a multiplex, with digital 3D remaining a special attraction, one which people are willing to pay a little extra for.
If such a scenario is fulfilled, then we may see 35mm and digital share the multiplexes of the world for a long time. You wouldn’t hear me complaining.
When I was teaching, students often came to my office hours with variants of this question. “I want to direct Hollywood movies. When should I start trying to get into the industry?”
My answer usually ran this way:
Your prospects are almost nil. It’s tremendously unlikely that you will become a professional director. You may wind up in some other branch of the industry, though. In any case, start now. Aim for summer internships while you’re in school, and try to get a job in LA immediately after graduation—preferably through a friend, a relative, a relative of a friend, or a friend of a relative.
My answer still seems reasonable, but recently I’ve begun to wonder if it holds up across history. Suppose we refocus our goggles on the macro scale. At what ages do most Hollywood directors make their first feature? If we can come up with a solid answer, it would be one instance in which studying film history can have very practical consequences.
No country for old men
In any field, most people would expect to have their careers moving along by the time they hit their early thirties. But aspiring filmmakers may think that there’s a long period of preparation–working in a mailroom, sweating over screenplays–before they finally get to direct. Directing films is a top-of-the-heap role. Who would trust a young first-timer with a multimillion-dollar investment and authority over far more experienced performers and technicians?
Nearly everybody, actually.
Executives tend to be oldish and not fully aware of what the target movie audience likes. A few years ago a friend of mine, consulting for the studios, went to a meeting of suits and was told that all the new digital sampling and mixing software was purely an effort to steal music. He replied that there were legitimate creative uses of that technology. “For example?” one exec asked. My friend said, “Well, there’s Moby.” Nobody at the table knew who Moby was.
So hiring young people in touch with their peers’ tastes makes sense. Indeed, many of today’s top filmmakers started making features quite young. I haven’t made an exhaustive survey, but picking directors more or less at random yields some pretty interesting results.
Note: It’s hard to know how long a movie took to make, so I’ll use the release date as my benchmark. With the exceptions noted at the end, I haven’t checked people’s birthdays against day and month release dates. As a result, the ages of directors that I’ll be giving fall within a year of the releases. That works well enough to establish an approximate age range for a general tendency. And since it takes at least a year to get a project finished and released, we have to remember that most of these people started work on their first feature quite a bit earlier than is indicated by the age I give.
Start with directors who broke through in the 1990s and 2000s. Many were about 30 when their first features came out: Keenen Ivory Wayans (above; I’m Gonna Git You Sucka), Michael Bay (Bad Boys), David Fincher (Alien 3), and Spike Jonze (Being John Malkovich). A tad older at their debuts were Alex Proyas at 31 (The Crow), Cameron Crowe at 32 (Say Anything), James Mangold at 32 (Heavy), Karyn Kusama at 32 (Girlfight), McG at 32 (Charlie’s Angels), David Koepp at 33 (The Trigger Effect), Allison Anders at 33 (Border Radio), and Mark Neveldine at 33 (Crank).
Actually these are the older entrants. A great many current directors completed their first feature in their twenties. Quentin Tarantino, Reginald Hudlin, Roger Avary, and Joe Carnahan debuted at 29, Sofia Coppola and Brett Ratner at 28, Peter Jackson and F. Gary Gray at 26.
Go a little further past 33 and you’ll find a few big names like Alexander Payne (35), Nicole Holofcener (36), Simon West (36), and Ang Lee (38). Almost no contemporary directors broke into the game past 40. The most striking case I found is Mimi Leder, who signed The Peacemaker when she was 45.
On the whole, the directors who start old have found fame and power playing other roles. When you’re David Mamet, you can tackle your first feature at 40. (He’s my age; don’t think this doesn’t make me depressed.) If you’re a producer like Bob Shaye or Jon Avnet, you can start directing in your fifties. Actors, especially stars, get a pass at any age. Mel Gibson made his first feature at 37, Steve Buscemi at 39, Barbra Streisand at 41, Anthony Hopkins at 59.
Putting aside such heavyweights, the lesson seems to be this. In today’s Hollywood, a novice director is likely to start making features between ages 26 and 34. It’s also very likely that he or she will have already done professional directing in commercials, music videos, or episodic TV, as many of my samples did.
But what about the Movie Brats?
Is today’s youth boom a new development? The Baby Boomers will yelp, “No! Everybody knows that we invented the director-prodigy.” Look:
*Dementia 13: Francis Ford Coppola, 24. If you don’t want to count that, You’re a Big Boy Now was released when Coppola was 27.
*Who’s That Knocking at My Door?: Martin Scorsese, 25.
*Dark Star: John Carpenter (above), 26.
*THX 1138: George Lucas, 27.
*Night of the Living Dead: George Romero, 28.
*The Sugarland Express: Steven Spielberg, 28. If you count the TV feature Duel, subtract 2 years.
*Targets: Peter Bogdanovich, 29.
*Dillinger: John Milius, 29.
*Blue Collar: Paul Schrader, 32.
*Good Times (with Sonny and Cher): William Friedkin, 32.
*Easy Rider: Dennis Hopper, 33.
True, the 70s also revved up veterans like Altman and Ashby, but it was the new generation, we’re always told, who made the difference. Hollywood was run by old men who had brought the studio system to the brink of ruin. The twentysomethings smashed the barricades and made a place for young filmmakers.
It might have seemed that way at the time, but the belief was probably due to a quirk of history. The 1960s and early 1970s were in one respect a unique period in the history of American cinema. During those years, the studios had the widest range of age and experience ever assembled. For the first time in history, it looked like a retirement home.
Look at it this way. In the 1960s, films were being made by directors who had started in the 1950s (e.g., Arthur Penn, Sidney Lumet), as you’d expect. But at the same time there were films by directors who had started in the 1940s: Fred Zinnemann, Sam Fuller, Robert Aldrich, Anthony Mann, Nicholas Ray, Vincente Minnelli, Robert Rossen, Robert Wise, Mark Robson, and many others. Furthermore, some of the biggest names of the 1960s were directors who had started in the 1930s, like George Cukor, Carol Reed, George Stevens, Otto Preminger, and Joseph Mankiewicz. More remarkably, the 1960s saw the release of films by masters who had begun in the 1920s, notably Hitchcock, Hawks, Capra, and Wyler.
And believe it or not, several directors who had begun in the 1910s were still active nearly fifty years later. John Ford is the most obvious case, but Chaplin, Allan Dwan, Henry King, Fritz Lang, George Marshall, and Raoul Walsh signed films in the 1960s. When Sidney Lumet celebrates a half-century of directing with the upcoming release of Before the Devil Knows You’re Dead, he will be considered a rare bird in today’s cinema; but in a broader historical context he doesn’t stand alone.
Interestingly, the arrival of the Movie Brats and the blockbuster brought nearly all the veterans’ careers to a halt. No matter what decade an old-timer started, by the mid-1970s he was most likely dead, retired, or making flops. Perhaps the Movie Brats’ sense of Hollywood as a haunted house was created in part by the felt presence of six decades of history.
The studio years
But what about those codgers? At what age did they get started? A look at the evidence suggests that the rise of the twentysomething Movie Brats has plenty of precedents.
The Hollywood studios ran a sort of apprentice system. In the 1920s and 1930s, short comedies and dramas were the era’s equivalent of music videos and commercials—ephemeral material that allowed youngsters to learn their craft. For example, George Stevens started as a cameraman, and at age 26 graduated to directing short comedies. Three years later he made his first feature (The Cohens and Kellys in Trouble, 1933).
Stevens wasn’t atypical. In the 1920s and 1930s, major directors seem to have started as young as they do now. King Vidor began in features at 25, Rouben Mamoulian at 26. Mervyn LeRoy’s first feature was released when he was 27, the same age that Keaton, Capra, and Sam Taylor entered the field. Byron Haskin and W. S. Van Dyke made their debuts at 28. Hawks started at 30, as did Victor Fleming, Lewis Milestone, and Joseph H. Lewis. Clarence Brown signed his first solo feature at 31, the same age as Jules Dassin and George Cukor at their debuts.
Even more remarkable is what went on during the 1910s. As Hollywood was becoming a film capital, it became a playground for twentysomethings. Lois Weber, Allan Dwan (who may have directed 400 movies), James Cruze, Reginald Barker, George Marshall, George Fitzmaurice, Raoul Walsh, and many others directed their first features before they were thirty. (At this point, a feature film could be around an hour. Kristin explains here.) Cecil B. DeMille was on the senior side, having directed The Squaw Man (1914) at age 33. Griffith was then even older of course, but having started in 1908, somewhat earlier than this generation, his authority waned as theirs rose. And he had started directing within my golden zone, at age 33.
Why so many youngsters? In the 1910s, the industry was growing rapidly and it needed a lot of labor. Few people went to college, so teenagers could take low-level jobs at the studios straight out of high school, or even without a secondary education. Moreover, Kristin reminds me that at this time life expectancy was much lower than today; in 1915 a white male on average would live only 53 years. Beginners hurried to get started, and bosses probably found comparatively few oldsters around.
At the other extreme, the World War II years seem to have been a tough time to break in. Anthony Mann, Nicholas Ray, Don Siegel, Richard Brooks, Robert Aldrich, Robert Rossen, and other gifted directors had to wait until their mid- to late thirties before taking the director’s chair in the second half of the 1940s. The 1950s again put a premium on youth, with fresh blood arriving especially from the East Coast. The most famous instances are Stanley Kubrick (Hollywood feature debut at 28), John Frankenheimer (debut at 30), and John Cassavetes (Hollywood debut at 32).
In short, Hollywood has always favored fresh-faced novices. It’s not surprising. Young people will work exceptionally hard for little pay. They have reserves of energy to cope with a milieu dominated by deadlines, delays, long hours, and rough justice. They are more or less idealistic, which allows them to be exploited but which also infuses the creative process with some imagination and daring. The executives need wisdom, but the talent needs, as they say, a vision.
Some last questions
Of course in every era a few older hands start directing features comparatively late. Lloyd Bacon, auteur of the masterpiece Footlight Parade, didn’t make a feature (Broken Hearts of Hollywood, 1926) until he was thirty-seven, after dozens of shorts. Daniel Petrie was forty when his 1960 feature The Bramble Bush was released, and Julie Dash was forty when Daughters of the Dust came out. But you have to look hard to find careers starting this late. Most of the evidence I’ve seen indicates that first features skew young.
First question, then: Is the tendency to seize talent young unique to Hollywood, or is it a characteristic of other popular film industries? My hunch is that it has been a worldwide phenomenon. Eisenstein, who was 27 when he made Potemkin, was among the oldest of his cohort of Soviet directors; at the other end of that scale were Grigori Kozintsev and Leonid Trauberg, who collaborated on their first film when they were 19! In Japan, Ozu, Mizoguchi, and Kurosawa started directing in their twenties. Michael Curtiz was 30 when he made his first feature, which happened to be Hungary’s first as well. The very name of the French New Wave was borrowed from a sociologist’s catchphrase for postwar twentysomethings. Today, in Europe or Asia, most beginning directors are in their mid-to-late twenties or early thirties. I suspect that everywhere the film industry gobbles up youth, both in front of and behind the camera.
Second, a question for further research. Hollywood hires a lot of first-timers, but how many of them build long-lasting careers? It’s sometimes said that it’s easier to get a first feature than a second.
Third, who’s the youngest director to make a Hollywood feature? The most famous candidate for the Early Admissions program is Orson Welles, who was 25 when Citizen Kane premiered. But on this score at least his old rival William Wyler had him beat. Wyler was 24 when he released his first feature. The fact that his mother was a cousin of Carl Laemmle, boss of Universal, doubtless helped him start near the top.
So is Wyler, at 24, the winner? No. John Ford, Ron Howard, and Harmony Korine were 23 when their first features opened. But wait: Leo McCarey was only 22 when his first feature (Society Secrets) was released in 1921. M. Night Shyamalan’s first feature, Praying with Anger, achieved a limited release in 1993, when he was 22.
Yet I think that William K. Howard has the edge. Born on 16 June 1899, he signed two features in late 1921, when he was 22, putting him abreast of McCarey and Shyamalan. But Howard also co-directed a feature that was released on 22 May 1921—about three weeks before his twenty-second birthday. Brett and Sofia and PJ, you are so over the hill.
(Once we move out of the US, the clear winner would seem to be Hana Makhmalbaf. When she was ten, she served as AD for her sister Samira on The Apple, and she signed a documentary feature when she was fifteen. “I used to get excited over the words sound, camera, action. There was a strange power in these three words. That’s why I quit elementary school after second grade at age eight.”)
Finally, how does a little knowledge of film history help us steer hopefuls on the road to Hollywood? It gives us a sharper sense of timing and opportunity. My best revised answer would be this.
Start as young as you can, in any capacity. For directing music videos and commercials, the window opens around age 23. For features, the best you can hope for is to start in your late twenties. But the window closes too. If you haven’t directed a feature-length Hollywood picture by the time you’re 35, you probably never will.
Unless your last name is Bruckheimer, Bullock, Kidman, Cruise, or Bale.
PS 16 Sept 2007: Some corrections and updates: I’m not surprised that my more or less random sampling missed some things. So I’m happy to say that we have new contenders in the youth category.
Charles Coleman, Film Program Director at Facets in Chicago, pointed out that Matty Rich was only 19 when Straight Out of Brooklyn was released in 1991. So my candidate, William K. Howard, is second-best at best.
As for Hana Makhmalbaf, she’s been beaten too. Alert reader Ben Dooley wrote to tell me of Kishan Shrikanth, who was evidently ten years old at the release of his feature, Care of Footpath, a Kannada film featuring several Indian stars. Kishan, a movie actor from the age of four, has been dubbed the world’s youngest feature film director by the Guinness Book of World Records. You can read about him here, and this is an interview.
From Australia, critic Adrian Martin writes to point out that Alex Proyas made features before The Crow. “A clear case of Hollywood/Americanist bias!! (Just kidding.) Proyas had an extensive career in Australia before the USA, and that includes his first, intriguing feature, Spirits of the Air, Gremlins of the Clouds, in 1989. Which puts him around 27 at the time!”
Finally (but what is final in a discussion like this?) over at Mark Mayerson’s informative animation blog, there’s a discussion about animation directors, who tend to be quite a bit older when they sign their first full-length effort. Mark’s reflections and the comments point up some differences between career opportunities in live-action and animated features.
PPS 27 December 2007: Malcolm Mays, currently 17 years old, is shooting a feature under studio auspices. Read the inspiring story here.
Recently we went to see Ratatouille and both loved it. We thought it was the best Hollywood movie we’ve seen this summer.
KT: Last October, in the infancy of this blog, I posted an entry on Cars. There I said, “For me, part of the fun of watching a Pixar film is to try and figure out what technical challenge the filmmakers have set themselves this time. Every film pushes the limits of computer animation in one major area, so that the studio has been perpetually on the cutting edge.” For Cars it was the dazzling displays of light and reflections in the shiny surfaces of the characters.
Figuring out the main self-imposed challenge in a new Pixar film is like a game, and I avoid reading any statements about the films by their makers ahead of time.
At first glance Ratatouille might seem to be “about” fur. True, there are lots of rats with impressively rendered fur—but fur was the big challenge way back in Monsters, Inc. Surely Pixar wasn’t repeating itself. To be sure, Monsters, Inc. contained only one major furry character, Sulley, and his fur was the long wispy type that some stuffed animals have. Difficult to render, no doubt, but different from the rats’ fur, which is dense, short, and has to ripple with the movements of the animals.
Not only that, but in Ratatouille, we see furry rats in all sorts of situations: just running round, crawling out of water and various other liquids, and in one virtuoso throwaway shot, emerging all fluffed up from a dishwasher. It’s all very impressive, but I didn’t think that was the main technical feat that the filmmakers were aiming at.
I quickly became aware that there was something different about the settings. Pixar films always have eye-catching settings: the beautiful and convincing underwater seascapes of Finding Nemo, the huge vistas of the factory in Monsters, Inc., the stylized domestic settings in The Incredibles.
The Incredibles, the first film that Brad Bird directed for Pixar, was deliberately cartoony-looking, evoking the streamlined Populuxe look of 1950s cartoons. Ratatouille, his second effort, takes a very different approach. Here the settings are far more realistic and three-dimensional, approaching photo-realism in some of the Parisian street scenes. Often our vantage-point moves rapidly through these settings, twisting, turning, and plunging from high angle to low angle framings in a second. In the scene where the protagonist Remy is swept away from his family through the sewers until he emerges in Paris, the twists and turns of the pipes sweep by. Likewise, the camera explores the crevices of the restaurant’s crowded pantry.
The settings have a tangible and immediate presence beyond what we have seen in previous Pixar films, partly because so many objects in the surroundings are pulled into the action. Ingredients sit in bowls and jars that take up considerable portions of the kitchen set, and the rat dashes among them, sniffing to find the ones he needs for a new concoction. The lessons learned in Cars return here to make the shiny copper cooking utensils reflect their surroundings. The brick arch and floor tiles of the restaurant kitchen were individually tweaked, so that they don’t have the uniformity that CGI tends to give repetitive patterns. Dense combinations of bricks, tiles, wood panels, carpets, patterned wallpaper, glass, and Venetian blinds make every shot too busy for the eye to fully take in.
A delightful demonstration of all these features and more is given in the “Ratatouille QuickTime Virtual Tour.” 360-degree spherical space allows you to look up at the ceiling and down at the floor as well as scan the walls. (The download time is reasonably quick; you need to move your cursor into the window that opens and use it to scroll in any direction and at variable speeds. Use the shift key to zoom in and the control key to zoom out.)
DB: Two things, one general and one specific to Ratatouille:
(1) The idea of explaining artists’ works in terms of problems and solutions is common in art history and musicology, but not so common in film studies. It can be fruitful to consider that sometimes filmmakers face common problems and that they compete to solve them, or to find different problems they can solve.
I sometimes try to imagine what animators for other Hollywood studios thought when they walked out of Snow White and the Seven Dwarfs. Did the talents at Warners, Fleischer et al. just throw up their hands in despair? Disney must have been the Pixar of its day, challenging its rivals with a dazzling series of achievements along many dimensions. Disney had solved so many problems—of rendering color and depth, of catching detail and voluminous movement, of blending pathos with comedy—that the others could hardly compete in the same race. So it seems that they carved out other niches. Fleischer, though trying its hand at feature cartoons as well, concentrated on the familiar and presold comic-strip world of Popeye. Warners avoided child-oriented sentimentality and offered more insolent and whacked-out entertainment, personified in Bugs and Daffy. In today’s CGI realm, Pixar seems to set the pace, staking its claim before anybody else has realized the territory has opened up–though Aardman consistently offers something different.
(2) Along with the problems that you’ve mentioned, I was struck by what we might call a general task facing all animators: the need to display a sophisticated cinematic intelligence that fit contemporary tastes in live-action movies. So the pacing has to be fast (here, an average shot length of about 3.5 seconds), but it isn’t frantic. Today’s movies are overstuffed with details, so this is too; but here many stand out sharply. Central are all the minutiae of food and its preparation, which you mention below. Our friend Leslie Midkiff Debauche reports that her son, a chef, noticed the burn mark on Colette’s right forearm—a combat wound of the professional cook. Instead of the heavy satire and flatulence of the Shrek cycle, Pixar always gives us something that would engage us even if it weren’t animated.
Every director, I think, should study this film for lessons in making movement expressive. The velocity of our rats’ scampering depends on the surface they cross, and the differences in acceleration and braking are vivid. The vertigo-inducing river turbulence that carries Remy away from his clan displays the old Disney genius in rendering the behavior of water. There’s a caricatural difference in body language among all the characters, from the cadaverous Ego to the heaving movements of pudgy Emile and the spasmodic twirling of Linguini when Remy is at the controls. Shot scale is always well-judged. When there’s a moment of uncertainty about whether the kitchen team will support Linguini, the pause is accentuated by the fierce Horst taking a step forward toward the boy, in big close-up. Will this be the signal for the others to join him? The close-up conceals the key piece of information: Horst has stripped off his apron, and only when he lifts it into the frame do we realize that he’s walking out. Perhaps the visual expressiveness of silent filmmaking survives best today in animation.
KT: A secondary but still important challenge seemed to be the effort to find ways of rendering the textures of surfaces that are difficult to capture in animation. Most obviously the food—slices of carrots and tomatoes, stalks of celery—must look realistic and attractive if we are to believe that the dishes Remy devises are truly as scrumptious as the characters find them. (The film wisely sticks to soups, desserts, and, yes, ratatouille, sidestepping the problematic notion of an animal cooking other animals.)
Once you decide what you think the Pixar crew was working on extra hard in their newest film, it’s usually easy to find supporting evidence in interviews with the top people involved. For some reason Bird was reluctant to talk much about the big technical challenges for Ratatouille, but he gave a good summary in the interview on Collider.com:
I think our goal is to get the impression of something rather than perfect photographic reality. It’s to get the feeling of something so I think that our challenge was the computer basically wants to do things that are clean and perfect and don’t have any history to them. If you want to do something that’s different than that you have to put that information in there and the computer kind of fights you. It really doesn’t want to do that and Paris is a very rich city that has a lot of history to it and it’s lived in. Everything’s beautiful but it’s lived in. It has history to it, so it has imperfections and it’s part of why it’s beautiful is you can feel the history in every little nook and cranny. For us every single bit of that has to be put in there. We can’t go somewhere and film something. If there’s a crack in there, we have to design the crack and if you noticed the tiles on the floor of the restaurant, they’re not perfectly flat, they’re like slightly angled differently, and they catch light differently. Somebody has to sit there and angle them all separately so we had to focus on that a lot.
DB: This relates back to the idea that an animated film has to offer its own equivalent of what live-action has led viewers to expect. Since at least Alien and Blade Runner, we’ve come to equate realism with a worn-out world. No more spanking-clean spaceships, but rather creaky Gothic ones; no more shiny futures, only dilapidated ones. Bird acknowledges that once his team opts for more detailed settings, they have to look lived-in, rather than the rather generic ones we find in Toy Story and even The Incredibles. But then the food contrasts with this air of casual imperfection; it looks pristine.
KT: Speaking of food, in another interview, Bird expands on the difficulties of rendering it:
There was quite a bit of effort expended to make the food look delicious. Because if one of the things your movie is about is gourmet food, then you can’t have it not look delicious. And computers aren’t really very interested in making things look delicious. They’re interested in things looking clean and things looking geometrically precise, and usually hard not squishy – not tactile. Computers are great for perfection. They’re not great at organic things. We had to work really hard to get the food to look like you could taste it and smell it and enjoy it.
The interviews I’ve read don’t mention it, but the film also takes a small but impressive step toward solving the ever-difficult problem of rendering human skin. Most of the characters are given the usual smooth skin that we have come to expect in computer-animated films.
DB: Agreed! One of the things that put me off CGI animation years ago was the overpolished look of CGI surfaces. Volume without texture always looked plastic to me. But in Ratatouille, the Pixar team has made great progress in dirtying up the surfaces. That kitchen is full of spills and stains, but the faces are still pretty balloon-like, except for that villainous chef Skinner. He’s the most cartoony character, I suppose, and the range of expressions he passes through just in delivering a single line had me in stitches. The Termite Terrace legacy lives on in him.
KT: Yes, Skinner must have inspired the filmmakers. His face gets very sophisticated treatment. In most character animation, eyes and eyebrows are the main means of creating expressions in the upper half of the face. Several times, however, Skinner comes into extreme close-up, so that his expressions of rage and shock are complete with elaborate forehead wrinkles. There’s even a patch of pores on his nose. That degree of detail is used sparingly in this film, but perhaps we see a sign of things to come as the Pixar animators set up new hurdles to jump.
PS 25 August: For an interesting, more thematically oriented discussion of Ratatouille, see Michael J. Anderson and Lisa K. Broad’s entry on the Tativille blog.
PPS 31 August: Bill Desowitz has a lively and informative feature on the film, including behind-the-scenes interview material, at Animation World.