Yearly Archives: 2006

October 10, 2006

Art thought continued

It’s been a busy time and many plans for entries remain unrealized. Having delayed my Moby-Dick project until I can get back my access to the OED, I instead took up reading all of Western Literature, so as you can imagine I have some reports to post about that. Right now two, which is how many items of Western Literature I have thus far read. But they’re not done. Then there’s my very long and not-interesting enlistication of all the goofy live shows I saw in August – that’s taking forever to finish. Plus there’s an appendix to that, of the other stuff I read or saw in the past several months. Haven’t even started it. Also, that thing I wanted to write about John Williams that’s been sitting here half-finished for a year. I have some new thoughts for that. And then the whole issue of the meme dictionary. You haven’t heard the last of that, but it’s complicated. I feel like there’s a lot to attend to on this site just to live up to my own meager-grand plans for getting out my thoughts. Getting out my thoughts takes forever!

So anyway, none of that stuff is ready to post. But it’s late and I feel like I can probably get this one thing out without too much trouble.

Most of my thinking about those various books and music and shows and things has been colored, since that last entry, by my “art thought,” which has now taken on the character of being a whole “theory of art.” Really it’s just a certain “angle on art” that I’m finding very valuable. In addition to All of Western Literature I’m also reading The New Penguin History of the World, and early on – which is where I am – he says that, while we don’t know for sure what it was for, clearly cave art was carrying some burden of communication in an era before writing. Then he goes on to say that it probably had ritual or magical significance as well. I don’t doubt it, but the first part is what my thoughts have been focused on.

When I said last time that art is the projection of an individual’s mental model of the world – and that it thus allowed others to interpret the world on congruent models – I was speaking and thinking at the extremely analytic end of a spectrum that also includes very familiar ways of expressing the same thing. Things like “art expresses inner truths that we cannot put into words,” etc. are basically the same thought. But whenever I heard people saying things like this, I used to think they were talking about inner truths like “loss is a shadow crossing the soul” or whatever – essentially, inner experiences, and so probably emotional ones. Representation of the outer world can only be tied to this sort of “meaning” in art as a figurative depiction of an emotional state, or as a stimulus meant to provoke a quasi-emotional state such as “the appreciation of beauty.”*

But what I didn’t see is that any knowledge of the outer world is an inner experience. I guess a related thing that I didn’t see is what Plato was going on about. All that talk about the real essence of perfect “horseness,” unattainable by flawed earthly horses, struck me as near-nonsense deriving from a simple confusion of word with thing. “Words are just useful tags!” I wanted to shout at Plato. “People made them up to get things done!” And to some degree I still feel that way. But the deeper issue managed to escape me: that even just to think of things as “things” is an interpretation, is something people made up to get things done. Plato writes about “forms” and puts them in some world other than the real world, but – as far as I know – he never says that the world of forms is, in fact, the mental mechanism. In fact, I think he goes the other way and says that the world of forms is some heaven-type place outside of us. I don’t really know. Gotta make sure Plato’s on the list of All Western Literature. Yup, there he is. So all in due time.

We only have the five senses, but in the brain they’re all plugged into one another via a central program – I generally don’t like computer analogies like this because they seem smug and nerdy, but right now I need it – via a central program that is doing all kinds of complicated “world-simulation.” Without that program, the five senses would have nothing to say to one another. Well, maybe smell and taste would get along. But sight and touch would have no idea. The central program is necessary, is the root of what we’re doing as minds, and is so incredibly accurate in its predictions that we generally forget about it. But it has nothing directly to do with the world. The world is apparently made of some kind of sticky stuff. The program in our brain deals in very different terms.

Verbal communication is one of the functions of the program; it deals in the program’s terms. It cannot, without fancy workarounds, reprogram the program itself. Art, however, is about program maintenance. The program, though it doesn’t quite know why, does its best to portray the nature of the program itself. Then other brains, running a complementary routine, try to determine what they can about the program portrayed and, if the signals of prestige, sanity and efficacy are positive enough, incorporate some of that program into their own.

A better metaphor than this is cross-pollination – art is like the pollen strewn by desperate plants. Sexual reproduction in general. How do brains reproduce? Not just ideas, which for their meanings are already dependent on certain cultural programs, but cultural programs themselves? All culture is by nature reproductive. In fact, isn’t the value of all culture exactly that it reinforces itself and thus maintains solidarity? In this way, art can be seen as functioning exactly like any other feature of culture – it represents and thus communicates/reinforces the mental programming of its participants. A definition of “art,” if we need one, could then be: “any aspect or artifact of culture that has no utility other than to communicate and reinforce the mental program.” Interestingly, this particular definition does not resolve the “What Is Art” debates over Duchamp and other nudniks**, but they are left open for more interesting reasons. To me this is a very promising feature of this definition of art. The problem with the urinal in the museum is not that it is not beautiful or that it is not made by the artist or any of that – under this definition, the problem is that it quite possibly communicates nothing about the mental life of the artist and can in no way be incorporated into the mental life of the audience. It is merely a prop, a word, in some other kind of very specific, “in-program” communication, about art culture and the definitions of art. Obviously, this is up for debate. But isn’t that a more valuable debate than the usual one?

Art is this, among other things. Clearly it has some ritual significance as well, just like J.M. Roberts said. And the cultural apparatus that has developed around art is as gnarled and turgid as they get, so I’m not claiming that this sort of explanation is the explanation for all art. Far from it. But it does feel, to me, like the “pure” “essential” side of art, the side to which I want to attend. Conversely, on my trip through some irritating exhibits at the Tate Modern, I found that it made a good litmus test for dismissing works.

Right, I made notes about that and wanted to put them here. That was the whole point of my writing this second entry on the same topic. But all I’ve managed to do is reiterate, at greater length and more loopily, what I wrote last time. I can hear them quietly starting to play me off the podium so I guess I’ll post those notes later. Ugh. Not even the sense of having accomplished one of my meaningless self-appointed tasks. I still have it all ahead of me.

* Yes, it took me a long time to find just the right image. Previous choice was this. This was also a strong contender but the artist ruined it.

** This year, for the first time, I saw real, non-prankish works of Duchamp, at the Philadelphia Museum of Art, and they were good. So all right, so maybe he’s not a nudnik.

September 8, 2006

Art thought

A few years ago, while I was at the Museum of Natural History in New York, I had a thought about art. There’s some text on the wall next to one of the dioramas that talks about human evolution and development, and it says – in that slightly musty way – that the decisive step in human mental development was the ability to manipulate our environment in a non-reactive way. That is, to make plans and then execute them; to be able to envision something that did not yet exist and then bring it about. Basically, that humans acquired the ability to manipulate a mental model of the world that could then be used to guide actions in the real world. I don’t remember quite how they said it, and it’s possible that I’m overshooting their point in my paraphrase here. Anyway, this was meant to introduce the advent of tool-making. Then – at least as I recall it – right next to that was a display about the first artworks – those fat little blobby goddesses. Maybe I made this up or maybe they were explicit about it, but I seem to remember the museum telling me that these things date back as far as the first tools; that as soon as human beings were able to manipulate their environment in useful ways, they were also manipulating it in, shall we say, non-useful ways.

And it seemed suddenly clear to me that art was just the junk output of a useful but indiscriminate brain function. Primitive man thought about killing a tiger and then made a spear. The next day, primitive man thought about having sex and then carved a woman out of a rock. Not quite as useful.* But, evolutionarily speaking, the guy who can make a spear is going to win out, even if he does happen to waste his time spreading pigment on the wall in the shape of the animal he’s thinking about. Whatever he thought that was supposed to do, I assure you, it doesn’t do it.**

So basically I came away that day feeling like I’d seen the nature of all non-essential human culture – our brains developed a very useful tool-building mechanism based on manipulating the world to match what we see in our mind’s eye, but only about 5% of what gets poured into that mechanism produces useful output – the rest is just freaky nonsense. I mean, a PICTURE of a thing?

But today I was reading a little book about art that I picked up for free from a bin on the sidewalk outside a bookstore, and I had a different thought.

Culture both forms and reflects the way we see the world. This, in turn – our mental modeling of the world – is essentially unlike the world. Just as a map of how the body feels, based on how many nerve endings we have, is completely distorted compared to an actual body***, our maps of pretty much everything – the ways we think about them – are crucially distorted. I don’t know anything about neuroscience, but from being a human I do know that we process things by breaking them down into chunks and then identifying them, recognizing them. Once it’s gone through processing, the sloppy sensory truth has been torn apart and reordered. A tree encountering someone’s idea of a tree would never recognize itself. If you know what I mean.

A child’s drawing – or a caveman’s dolly – reveals the quality and tendencies of the artist’s attention to the world. The kid looks at the same people we look at, but there’s only so much of them that he’s aware of in a form he can manipulate mentally. He’s aware that people have legs – and, if he’s sharp, feet – but this big chunk called “legs” overwhelms any of the other things he might have noticed about the foldings of cloth, or the anatomy of the knee, or light and shadow, etc. If he could grasp those things in his mind the way primitive man could grasp the idea of a spear, you can be sure he’d draw them. After all, if he remembers the toes, he WILL draw them all. The world projects an image in the mind; art is the projection of this image back into the world. From it, by triangulation with what we know about the world, we can deduce a good deal about the mind that produced it. The experience of art appreciation is the experience of empathy with our recreated version of the mental life that the art implies. This is so profoundly basic an idea in art theory that it is rarely stated explicitly. It wasn’t stated in my book, incidentally.

And – so my thought goes – the projection of mental life is actually a vitally important tool, from an evolutionary standpoint, in the creation of a social system. Any cooperative endeavor and any coordinated society – or more basically, functional communication – depends on mental congruence among the members of the society. If two or more cavemen are going to cooperate on a plan of building spears, surrounding the tiger, taking him down, roasting him over a fire, and so on, they’re going to need to be chunking their worlds in very similar ways. The kid’s stick drawing of legs, arms, torso and smiley-face is acceptable to us because at a basic level, that is the endorsed chunking. If the kid drew knuckles, uvula, and navel inside a big circle, he wouldn’t be able to participate in society. Okay, that’s just silly. But if you go, like I did yesterday, to the British Museum, and look at the representations of people from one culture to another, you see that what is being agreed upon about what “seeing a person” consists of varies slightly and crucially. This one really struck me – those Assyrian winged man-bull doorframe statues all have five legs depicted: two for the front view, and four for the side view, with the corner leg appearing in both. Check out the image. Even though this viewing angle, where all five are simultaneously visible, is not only possible but in fact the most likely viewing angle, the artists – for many centuries – nonetheless felt that the visible inconsistency was less important than the symmetry and beauty of the two distinct views from the orthogonal angles. What does this say about the Assyrian mindset? I’m not sure how to articulate it. But it certainly says something, and if I had grown up Assyrian, it would have taught me something, about what matters and what doesn’t in the realm of seeing, and, ultimately, in the realm of processing the world at all.

That the artistic representation of the world reveals the mental parsing of the world is in fact the underlying philosophy behind Auerbach’s Mimesis, which I am very much enjoying. The only part of this thought that is specific to today for me is the idea that, because art reveals mental parsing, it becomes a tool for disseminating a shared mental grammar. That, evolutionarily, it would be advantageous to be constantly projecting one’s mental processes for one’s peers by illustrating what the world looks like after it has been processed. And that maybe, as children are internally primed to listen to speech and acquire language, we are always primed to look to art and subconsciously acquire new ways of breaking down and recognizing our sensory experience and putting it to use.

This functional theory of art as a tool for mental conformity could feasibly also account for the long contemporary crisis of art, with an argument along the lines of Henry Pleasants’ The Agony of Modern Music. To wit: that the romantic emphasis on art as the expression of an individual was a nineteenth-century aberration of necessarily limited scope since it distorts the essential nature of art. This trend ran the hundred years it had in it, and now we have long since exhausted the technical possibilities arising from the idea of art by and about individuals, but have so profoundly poisoned our idea of what art is that we cannot find our way backward.

I don’t know if I believe that, or any of the rest of it, but it’s interesting to think about.

* We don’t know what those little figures were about; they’re pretty attentive about the sex organs but they still might have been religious symbols by way of the concept of “fertility.” But no matter how you cut it, they’re still not a well-thought-out way to bring about children or crops or anything else Monsieur Caveman may have had in mind.

** Beth: “but couldn’t the purpose of the drawing be communication?” Well, yes. Right. But isn’t there still a definite difference between functional communication and art? Maybe cave paintings are going back too far to make that distinction, but only because we know so little about them. Is there any way of construing the Venus sculptures as functional communication? There may be, but I can’t see one.

*** I’ve seen this sort of thing illustrated somewhere, but when I went searching for a picture just now I couldn’t find it.

August 24, 2006

More talk about memes

I’ve just seen a bit of improvisational comedy.

Improv comedy purports to be something like on-the-spot writing – and maybe it is, in its way – but it feels like something else. Improv comedy tends to come off less like creative play than like the shuffling of pre-existing cultural molecules – what the kids today call “memes.”* Since the performers need to be in agreement during the unpremeditated performance, they’re forced to depend on strategies that they can be sure the others will “get” instantly – on concepts that are common property. Anyone watching improv is also there to appreciate the craft – the comedy itself is generally pretty pale when taken on its own merits – and for the most part that craft is the clever invocation and exchange of these existing notions. In this sense, memes, and the shared repository of memes that constitutes our common culture, are exposed in a particularly naked way during improv.

See, when I sat down to write this, all of the above felt like it would take about one sentence. But then I couldn’t figure out how to write that sentence and it turned into a whole paragraph. Very frustrating; the whole rhythm of my overall thought is thrown off; that was supposed to be the upswing but now it’s turned out to be the first movement, and I really don’t have the time to keep it up through what will apparently have to be a very long arc of speculation.** Let me try to just suggest the rest of what I wanted to say with a series of questions:

Is this (the shuffling of pre-existing cultural molecules) really any different from “real” creativity, or is it just a rougher-grained version of the same process? If these molecules (those that might be invoked during improv comedy) could be catalogued in a dictionary, approximately how many would there be? Do they really exist as discrete molecules in a way that such a dictionary is feasible, or are we tricked by our sense of recognition into thinking that they have Platonic existences outside of their specific usage, when in fact we are only recognizing rules of formulation (that is, recognizing individual snowflakes as being well-formed snowflakes and thinking this means we must have seen them before)?

If such a dictionary could be constructed – not only of comedic memes but of all dramatic (mimetic!) memes – could a grammar be specified to govern their usage? It certainly feels like such laws, or at least principles, exist – it is the elegance and charm with which these memes are deployed that we appreciate while watching improv… or reading a mystery… or taking in any form of art that does not attempt to disguise the fact that it is constructed from pre-existing molecules – and such judgments necessitate principles of taste, as well as underlying formal principles.

Listening to music, one often feels the same sense of continuously recognizing constituent gestures – “now he’s doing one of those; now he’s doing one of those, etc.” – but my efforts to isolate and catalogue these have failed instantly because the things are devilishly hard to disentangle from one another without them losing their essentials. Is that a case of the snowflake illusion, or is it just a difficulty arising from our poor ability to articulate the workings of music? Perhaps an attempt at isolating such things in spoken/dramatic culture is a more approachable precursor to determining the form of a musical equivalent.

Everybody likes concrete examples, so here’s a concrete example to finish. The other day an obscure movie that I haven’t seen was being described to me. In this movie, a man seeking to immigrate to the US tricks a woman into believing that he loves her so that he can marry her for his citizenship. Then, before they reach the border, events ensue such that he actually does fall in love with her. Then, at a crucial moment, while the man is elsewhere, a third party shocks the woman by revealing that he never loved her and was only using her – even though now he really does love her! When I was told this, I thought, “Well, sure, right. One of those.” The question is, one of what? Have I actually seen this snowflake before? My gut tells me I could make a lexicon of these and that such a lexicon is sitting in my brain right now. A commenter on this site once suggested starting a wiki of all memes, but before that process begins, a coherent theory of how they break down needs to be established.

* Long note on “meme”: I don’t like this word “meme” for what I’m talking about because the emphasis, as it was coined by Richard Dawkins, is on the fact that, in analogy with genetic information, such an idea propagates itself from person to person and is thus subject to principles of adaptation and evolution. What I want to talk about is the notion of a unitary cultural concept that is shared by many people, but without this emphasis on propagation, which is a limiting metaphor. After all, culture (and the molecules thereof) is not solely transmitted from person to person; concepts can, for example, lie dormant in books and films and whatever for years and then be picked up again by a new generation, now colored by all sorts of historical considerations. Or they can be disseminated to millions of people all at once on television, where in some ways they don’t really seem to their audience to have originated with humans at all. These kinds of events can no doubt find a place in a pseudo-genetic theory of human culture; my point is just that such a theory shouldn’t provide the terminology for the culture itself. Meme essentially means “a unit of imitable thought” when I want a word that means “a molecule of cultural convention.” Coin and suggest!

For your reference… Since my OED privileges have been wiped away, here’s the – ugh – American Heritage Dictionary on “meme”:
n.

A unit of cultural information, such as a cultural practice or idea, that is transmitted verbally or by repeated action from one mind to another.

[Shortening (modeled on GENE) of mimeme, from Greek mimēma, something imitated, from mimeisthai, to imitate. See mimesis.]

** When I started, promising only to write one paragraph, Beth said, “You’re writing a one-paragraph entry on memes? That’s not possible!” And she was right.

August 19, 2006

Creepy musical doodads

I was just now looking at this page from Vertigo. This is when Jimmy Stewart is pulling Kim Novak out of the bay after her apparently ghost-induced apparent suicide attempt.

[image: The Bay.jpg]

In the first two bars here, the strings and horns are trading off a little whiplash figure in sixteenth-notes while the rest of the orchestra plays nauseous chromatic scales. Then, at the double bar, the figure in sixteenth-notes becomes a figure in eighth-notes and is played by the strings and winds together. Herrmann’s idea, I think, is that the frantic moment is beginning to subside, so the rhythm is altered to be less wrenching. But to me this sort of switch – from sixteenth-notes to eighth-notes – is a little unnerving in its own right. The motif itself, as it passes from one rhythmic speed to another, is revealed in a disturbingly naked sort of way. I was put in mind of something similar in Janáček’s Sinfonietta (the only work by Janáček that I’ve heard enough times to hum – but my sense is that the basic technique is found in many of his other works as well). Janáček’s overall approach to composition seems to deal, like Herrmann’s, in discrete units of musical material: musical objects that one hears being altered, played with, and rearranged, but in a fabric where the seams are always turned outward. The patches of material do not melt into one another; they remain distinctly movable within the larger context. In Herrmann’s film music the short, repetitive packets are first of all useful for communicating to an audience listening with only “half an ear,”* but they have their own peculiar character, as well.

The particular strangeness that I find in Janáček and Herrmann has something to do with this foregrounding of musical figures as manipulable items. When individual musical items are so discrete and item-like (itemic?), they begin to be disturbing. The principle is the same as with Yves Tanguy:

[image: tanguy.jpg]
Globe de glace (1934)

Because these smudges and blobs are portrayed as objects, we are confronted with all the uncomfortable ways in which they do not live up to their responsibilities as objects. “What sort of horrible fucked-up objects are these?”

If it’s handled a certain way, the substance of music begins to take on the same, queasily insufficient quality. If musical motifs are “figures” to be handled and altered – are, in essence, things – then what sort of terrible off-world do they come from? Of course, that response is never conscious or nearly so stark. In practice, something like this Vertigo moment, which bluntly exposes the “material” quality of the material (to my ear, anyway), would just create a mild sense of oddness. But that’s plenty.

Why ‘change of rhythmic scale’ is what sets me to hearing music this way, I’m not sure. A visual analogy is tempting: it’s like moving the camera, pulling back from or zooming in on an object. The features that remain coherent and undistorted are more readily identifiable as an object and not just as a feature of the background. A parallax sort of thing.

Well, anyway, all those discrete musical “cells” can get unnerving in Herrmann. The technique flirts with clumsiness but the end result is something much more ominous than clumsy. Bartók also sometimes strikes me as having given his music the power to disturb by veering toward clumsiness. “Rough” is probably a better word: not a roughness of sound but a roughness of manipulation.

I recall being transfixed by a not-very-good piece, David Del Tredici’s Tattoo (1986), because of how viscerally uncomfortable it was making me. The whole piece is a huge, much-too-elaborate fabric built out of spiraling iterations of a single rhythmic figure that’s a sort of spiral in itself. The little thing appears in constant juxtaposition with itself on various scales and at various speeds. This is Del Tredici’s big technique, or was in the 80s, and he used it to death in all his works, so far as I can tell. It’s sort of the shameless, aggressive version of what I’m trying to point out in Herrmann and Janáček, and in that particular piece it really made my skin crawl. Really, like nails on some existential blackboard, like the guy in Sartre’s Nausea. I’m not sure if that’s what Del Tredici was going for – my explorations into his other music turned up a lot of grotesque self-indulgence and some other distasteful qualities, so it’s hard to guess what he wanted to achieve – but I give him credit for it.

The gothic, morbid qualities of Vertigo and Tristan und Isolde before it seem to me related to this sense of being confronted with the uncanny “stuff” of music, but that’s a different thought for another time. But there, I said it anyway.

* This is someone’s line about film music but I forget who. Aaron Copland or someone like that.

July 26, 2006

Broken Sword II: The Smoking Mirror (1997)

directed by Charles Cecil
written by Dave Cummins and Jonathan Howard
story and design by Charles Cecil, Dave Cummins, Jonathan Howard, and Steve Ince

developed for PC and PlayStation by Revolution Software
published for PC by Virgin Interactive Entertainment

~8 hrs

Another ridiculous piece of pulp from the waning days of the computer adventure game. As usual, I played it in search of pearls of game design, plot design, or puzzle design. But there were none to be had. I’m currently trying to piece together a pulpy Indiana Jones-type plot of my own, and my specific hope was that this game would spark some thought processes in that direction. But it didn’t.

I don’t need to complain at any length about the difference between junk and careless junk, because I have before. This was careless junk. The plot and game elements seemed to have been thrown into a salad spinner and left where they landed, then stitched together using the laziest possible game design. That is to say, a lot of “conversations” – click on the icon of an object or person you’ve encountered to ask about it. About 50% of the spoken dialogue in this game consists of myriad variations on the classic line “I wouldn’t know nothin’ about that!” – one per object per speaking character. When games went “talkie,” few game designers seemed to have considered that it takes a lot longer to listen than to read, and that listening to half-assed dialogue being spoken slowly is a huge drag compared to speed-clicking through half-assed printed text. The sub-adolescent sub-greeting-card sub-Bazooka-Joe “cracks” about every stupid object in the game – that underpants are a recurrent source of humor ought to give you a sense – are incredibly wearying, not to mention embarrassing, when performed by actual humans. A further source of weariness is the incredibly, infuriatingly slow walking animation that propels your character from one point of interest to another. Of the 8 hours I estimate I spent with this game, the majority were spent watching my choices play out in excruciatingly uneventful detail, one foot in front of the other, or else listening to every character in the game say, about every object in the game, “Gosh, a newspaper article about an upcoming total solar eclipse? I wouldn’t know nothin’ about solar eclipses!” This problem of “what do you actually do” is fundamental to all story-meets-game productions, but by 1997 there was enough accumulated wisdom on this subject that the designers should have known far better.

The actual downright incoherence of some sections of the game is evidence, to my eye, that this product was rushed to market, or else the budget was reduced after the design phase. Both the introduction and the ending are animated sequences that felt like just slightly less than a bare minimum, as though most of the storyboard had been pared away in desperation. At some points, a thing we haven’t yet heard about is suddenly assumed to be common knowledge: evidence of either a cut section or insufficient playtesting. Either way, shoddy stuff.

Plot: When the solar eclipse comes, an evil Mayan god will be released from a SMOKING MIRROR where he’s been imprisoned for centuries, and destroy all mankind, and that’s what the evil smuggler/general wants because he’s crazy or something. There are several sacred stones that can stop it from happening, and then the bad guys kidnap you because you have one, and then you get away, figure out what’s going on, find the other stones – one was buried by a pirate, the other is in the British Museum – and stop it. Hm. In summary it sounds almost like it works. But I assure you it doesn’t. The causal linkages suggested by my summary are not actually part of the gameplay.

The evil god, when he appears briefly in the final animated sequence, looks like Skeletor, which is to say not even remotely Mayan. That’s the last straw!

The previous game, Broken Sword: The Shadow of the Templars – or, as I bought it on its US release back in 1996, Circle of Blood (awesome!) – suffered from the same slow-walking, lame-talking problems, but the whole production felt much more cared-for, and the plot progression managed to be genuinely entertaining. The third game in the series (Broken Sword: The Sleeping Dragon (2003)) improved on the walk-cycle annoyance with a newfangled, fairly attractive 3D engine, and managed to keep the comedy at a good solid 12-to-14-year-old level – and, most importantly, it had a sense of atmosphere. Ridiculous as the word “taste” is in these surroundings, it really comes down to taste. Some of this crap is the good stuff and some isn’t. Is a skeletal Mayan god trapped in a magic mirror more stupid than a Templar conspiracy to harness cosmic energies? Absolutely it is.

What’s the lesson to learn here? That in writing my own bit of junk, I should be careful not to confuse the dumb with the merely stupid. Harder than it sounds! My sympathies do go out to Charles Cecil and company. But they failed. I guess the moral should be: Stupid is fine, but when in doubt, be smarter.

July 25, 2006

Harry Potter and the [Several Things] (2000-2005)

Harry Potter and the Goblet of Fire (2000)
Harry Potter and the Order of the Phoenix (2003)
Harry Potter and the Half-Blood Prince (2005)

by J.K. Rowling

Goblet of Fire, book four, was the best one. It had the feeling of being really comfortable with its own terms, like a sitcom that’s finally hit its stride.

There’s that comfort-pleasure we get from fictional characters being recognizably themselves; the warmly, status-quo-affirmingly formulaic joke that’s supposed to elicit an “Oh, Chandler!” Not the most edifying sort of pleasure; there’s something sleepy and doughy and stupid about reassurance-entertainment. It’s like the heat rising off a sleeping person’s body. But it is nonetheless a very desirable commodity, and it is not easily earned.

Though in our desperate need for comfort we sometimes try to snatch it out of thin air. This guy who visited my roommate in college actually said, with fond exasperation, “Oh, [Chandler]!” about a friend of ours that he had not yet met. I am tempted to use the word “American” in talking about what’s so sad about this pathetic over-readiness to be sleepily comfortable with a sitcom-life, but I don’t really believe in making pronouncements about national identity like that. Still, I bet they don’t do that sort of thing in China. For example.

In re: the fifth book. The first time I read it, I think I was dismayed by what seemed at the time like a nerdy, undeserved emphasis on characters less essential, less earned. Just like my impatient annoyance as a third grader finding that “Eowyn” and “Theodred” and so forth, introduced long after exposition time had come and gone, were actually going to figure in the plot. As if! Furthermore, my degrading memory had wiped away several secondary characters, especially those introduced in book three and then played down in book four, like “Sybil Trelawney” and “Remus Lupin.” It’s dismaying to return in search of the warm sitcom glow and realize that you’re watching an episode from that off-key season where they have a monkey.

On this read, however, “Cornelius Fudge” and even “Bellatrix Lestrange” still meant something to me, and as a result the book seemed less arbitrary and, you know, Trekkie. Nonetheless, by book five, a calculating soapiness has crept into the plotting. I’m not complaining about the kids flirting and dating each other – that stuff’s fun, particularly when it’s indulged at length in the sixth book – I’m talking about the main storyline, which becomes increasingly crabbed and finicky as the series plays out. Considering that she started with the broadest possible mythical strokes – young chosen one vs. legendary evil – she’s certainly worked herself into a lot of loopholes and thumb-twiddling. The recurring and confused issue of House-Elves typifies the way she’s maybe let her imagination run in too many different directions at once.

This state of affairs is reinforced, if not actually worsened, by book six, in which she systematically demystifies the bad guy and literally breaks the threat into a series of technicalities. It’s too late to be disappointed at this turn in the series, which has been happening gradually all along. Like I said about book three, it feels like she’s constantly working out clever solutions to having been backed into a corner. There are worse forms of entertainment. For my part, I find this sort of plotting inspiring to read – if I ever have to solve these problems, it tells me that there are always solutions and everybody will love them even when they’re complicated. Plus, the very ubiquity of the franchise makes it exciting to find out what happens next, since it involves us in a worldwide phenomenon – another “American” line of reasoning, there.

This last book owed the most obvious debt of any of them to The Lord of the Rings, if you ask me. I could swear it included a couple of shots described directly from the recent movie versions thereof. Not that there’s anything wrong with that; fantasy is all in good fun and good fun is community property. But it’s more satisfying to tour this funhouse when you can’t still hear the echoes of the group in front of you, if you know what I mean.

Hey, you know what was pretty good when I was in fourth grade? Those Lloyd Alexander books.

July 22, 2006

My Dinner With Andre (1981)

directed by Louis Malle
written by Andre Gregory and Wallace Shawn

One problem with the Netflix approach to movie-watching is that everything is part of a grand checklist, which can be deadening. In thinking back over my response to this movie it seems like the greater part of it was “I’m finally seeing My Dinner With Andre!” and that’s no good. I remember trying to write up a review of Die Hard a few years ago and realizing that my sense of checklistic satisfaction at finally having seen Die Hard completely overwhelmed anything I might have thought about the actual stupid movie.

That’s not to say that My Dinner With Andre doesn’t have anything more to offer than Die Hard; far from it. But my received knowledge about what goes on in My Dinner With Andre was pretty accurate; the movie was, for me, just the fleshing-out of the potential, secondhand My Dinner With Andre that I had already had outlined for me by pop cultural reference and, I think, by my dad telling me about the movie. So it didn’t have a lot of punch to it. But I’m certainly glad I saw it. Not only because now I’ve seen it, but because of the principle that makes the movie work in the first place: being present while a conversation plays out is intellectually engaging in a way that is not lessened by the conversation’s being on film – or, in my case, by one’s already knowing roughly what the conversation is about.

One thing that did surprise me was how simple the scripting was. There was no particular attempt to simulate the complicated back-and-forth of a real conversation; the two characters each offer their thoughts in a fairly stylized, formalized alternation. Maybe that’s how some people conduct conversations, or maybe the hocket I’m used to is more of a contemporary phenomenon than I imagine. I know it’s pretentious to say “hocket” but I’m proud of having thought of it, and if it’s new to you, then you just learned a cool word. But… right, that seems unlikely. People used to interrupt each other just as much as we do now, didn’t they? It’s so hard to be confident about a thing like that when one’s impression of the past is almost exclusively formed from works of fiction, which have always been, and for the most part continue to be, markedly unrealistic in their depiction of actual human speech.

And this was fiction too, so I ought not to have been surprised. But I was a little surprised, since it was a formal experiment about the experience of intellectual engagement and exchange that arises from conversation; you might think that would be dependent on the rhythms of “real” conversation. But it still worked, just in a slightly broader, more theater-based way. Ultimately, this captured two major things about conversations: the way they can suggest a wider interrelatedness of everything under discussion by assimilating digressions and reactions, and the way that they are fundamentally driven by the confrontation between two different points of view. But the formal side of the experiment – the “it’s just this conversation” factor – didn’t seem to have been worried over very much. It was left up to the viewer to think about that aspect of it; the movie was neither coyly pointed nor grittily “real” about it. In fact there’s a strange quasi-magical gesture at the end where the restaurant has mysteriously emptied around them without their noticing. It was hard for me to figure out how that sort of thing fitted into the “here’s a conversation we had” package. Again, more of a theatrical than filmic approach to the question of a dinner conversation. So that stuff surprised me.

Another thing that surprised me about the movie, slightly, was how fast-and-cheap-looking it was. A lot of badly-matched lighting and such.

As for the content: the movie could work just as well, or better, with fewer of Andre’s stories and a little more interchange of ideas; one’s sense of involvement rises considerably once they are trying to express something to one another, whereas most of what Andre says, at least in the first section, is mere storytelling. If you are like me (or like Wallace Shawn, or like almost anything other than Andre Gregory as here depicted) and not initially inclined to find the details of these stories specifically compelling, this section clearly goes on several stories longer than it needs to in order to get “the gist” across.

As I’m writing this I’m thinking about one of my problems with theater. It seems like the attitude motivating a lot of what happens on the stage is “people are interesting because they’re people, they go deep and even the insignificant things they say resonate if you are listening closely,” but then the people to whom we are meant to bring this attention are fake people who only go as deep as they’ve been programmed to go. It’s a question of finitude vs. infinitude, one that, you’ll pardon me, I relate to the problem of contemporary video-games. I can’t play these new games, these truly vast games where the selling point is something like, ahem, “you could explore the game-world for hours and hours and not even encounter the main quest” – and I can’t play them because this is a gogglingly enormous finitude, not a real infinitude, and being aware of that, I will be subconsciously aware that a partial exploration of what its creators have to offer is incomplete. The idea of a vast offering is meant to appeal to the desire for an inexhaustible entertainment, but players are unshakeably aware, deep down, that they are still within the realm of exhaustibility. Reading some Borgesian book with no end would be an incredibly different experience from reading a book that advertises itself as “so long that you’ll probably never finish it!” Of course we’d still want to try to finish it.

Finitude is a crucial feature of our notion of an artwork because it allows one to identify one’s experience as having been of that artwork and not of something else, or of only a part of that artwork. If artworks could not be closely correlated with the experiences they elicit, those experiences could not be clearly said to be of those artworks and so would be difficult to distinguish from experiences of real-life stimuli. If art is valuable because it is created, because it is filtered through human consciousness – that is to say, if a painting of a sunset is not necessarily just a poor substitute for a sunset – a great part of what makes it appreciable as such is that it is bounded. Photography, which borrows its substance as directly as possible from the real, non-art world, is knowable as art because it is bounded.

I seem to be wandering toward what ought to be the long-delayed follow-up to this old posting. I guess I’m just going to go there now under the unrelated heading of My Dinner With Andre. Most video games – any formalized interactive make-believe, but video games are the best example – are ostensibly mimetic*. That is, they portray people doing things, and our interaction consists of variously influencing those people or things. But that mimetic veneer is so thin that I would say it’s irrelevant to our experience. I mean, a plumber who punches bricks, kicks turtles, and eats mushrooms and flowers? Obviously that’s all garbage. The fact that “abstract” games like Tetris feel more or less like interchangeable kin with “mimetic” games like Super Mario Whatever makes clear (to me) that a player is interacting directly with the mechanics and disregarding the incoherent stab at mimesis. No man is Pac-Man. I would argue that even in story-oriented games – adventures and role-playing games and whatever – the player is always directly aware of the underlying engine. How many objects can you carry at once, and how many moves before the monster wakes up? These things feel like variables, not life. If there is a mimetic element, it is conjoined with these mechanics and offers a place for another part of the mind to vacation while the game is played, but it is distinct.

This is all to say that playing a video game is unlike being in the situation depicted.** An infinite video game, therefore, is as fanciful and undesirable as an infinite painting. After a while you feel disoriented by this monstrous painting and just want out. (Unlike life, one hopes.) An enormous painting, however, can impress by its hugeness. When I read À la recherche du temps perdu, I told people that I had come to terms with its enormity by just thinking of it as infinite and taking it in small doses as it pleased me. But obviously I was still aware of its finitude; otherwise I might well simply have stopped. Dealing with infinity is like dealing with a habit, not with an object. Soap operas are infinite, and it is the ritual event of watching, rather than the cumulative content, that drives their viewership. The cumulative content of a soap opera over any large span of time is generally contradictory and inassimilable.

I am thinking of all sorts of counter-examples and complications as I try to straighten this out. An amusement park is exhaustible (“let’s go on every ride!”), yet the actual experience had there is so personal and intermingled with reality that the offering feels unbounded. A game like Pac-Man is known to be infinite, but is practically (and intentionally) bounded by the player’s capacity. Still, a player who is skilled enough to play infinitely will only play toward the unknown but assuredly finite endpoint of a high score or a world record or whatever; actual infinite play has no appeal whatsoever. Life is bounded by death but savoring real-life experience doesn’t feel informed by that finitude except under morbid or pointedly philosophical conditions; real life, despite its infamous finitude, is the “infinite” experience with which I am contrasting artistic experience. But perhaps that illusion of infinite life (and the resulting sense that art is distinguished from life by its finitude) is specific to this era, or this culture, or my segment of the population defined in some other terms.

Well, enough. To bring it back to where it started – for one reason or another, despite all possible examples to the contrary, I feel convinced, at this point in time, that: life offers infinity; art doesn’t. When art purports to have enriched itself by incorporating the infinite, it seems to me confused. The only way art can truly incorporate the infinite is by leading us back to life itself rather than by encapsulating it. A huge video game is never going to lead back to real life – it’s a video game! – so in this sense it is bounded. Any suggestion that infinity lives within those bounds is either false and disregardable, in which case we have a game that begs to be exhausted but makes unthinkable demands on our time (check!), or else we have an infinity that is trivial and to be avoided as a habit, as with soap operas. In theater, there are several roads by which the art might lead us back to life in a profitable way, and thereby to inexhaustible potential significance, but these must be out-roads. A character in a play who says seven lines is only seven lines deep. A person in life who says those seven lines has infinite potential significance. For a play to benefit from that infinitude, it must resign itself to merely pointing toward that significance rather than containing it. The limited clay of a play can be molded into either a decorative shape, a very shallow bowl, or an arrow pointing toward the bottomless bowl of life. The shallow bowl, which purports to put a premium on depth, tends to seem pretentious and wasteful compared to the arrow.

Oh good lord, do I really think that? Obviously not. I’ve tangled myself terribly here. Plus this really doesn’t apply to My Dinner With Andre, where the “out-roads” back to real life were entirely conscientious and obviously the point. Except for when his stories went on too long. Okay. I think that was what I wanted to say. I should have saved all this video game crap for another time after all. Someone please help me end this mess.

Oh look, it’s over! Thanks for your help with that.

* I’m using it!
** A good rebuttal to this would be that, actually, a wide variety of games are very much like being in the situation depicted – flight simulators, first-person war games, and so forth. Good point. Nonetheless the point holds that the experience of playing these games is entirely distinct from living; flight simulator-ers might hope not to “crash” the “plane,” but the fact of interaction has not confused them into actual fear, the way a dream would. They are still participating in artifice. Furthermore, the screen may resemble a cockpit and the sorts of choices a player must make might be analogous to those made by a pilot, but the player still knows that this is so only because it has been programmed this way. For example, a flight simulator might be described as “totally realistic except for the trees, which you can fly right through” – surely the player does not think of these mysterious trees as being of some other “type” from the rest of the simulation. If things are comparable to life, it is because of the talent of the artist, and everyone is always aware of this. There’s no trompe l’oeil in video games, just as there’s no trompe l’oeil in life.

July 10, 2006

Sonatina

This is a no-frills sonatina movement. The first couple phrases got in my head on the subway and I wrote them and the rest down with as little reflection as possible, always trying to go for the most obvious solution. Right now it seems like the only way for me to keep things from getting too absurd or too condensed is to write very fast and freely resort to cliches. With that in mind, I am proud to have rounded off a complete movement that does what it’s supposed to do, utterly insubstantial though it may be.

The style here is, as usual, “things I think I heard in Stravinsky, but less tasteful.” Actually, this seems to me to be a secondhand imitation of Stravinsky, by way of some of the lesser American composers in the ’40s. If I consciously had anyone in mind, it was, believe it or not, Gail Kubik, whom you may know, if you know me personally, as the guy who wrote this score. I also have copies of a sonatina and a sonata he wrote, as well as a collection of short occasional pieces. I’d post the scores here but it’d be a copyright violation, despite their being quite difficult to find nowadays and certainly impossible to purchase new. Maybe someday I’ll feel reckless and just post them. Anyway, they all sound kinda like each other, and my little sonatina movement sounds a little like them all. To me, at least.

Here’s the audio of my piece.

There’s a score but it’ll take me a while to clean it up, get all the accidentals to be pretty, and put in markings and whatnot. I don’t feel like doing that now, but I’ll do it eventually and put it here.

July 9, 2006

The Paradine Case (1947)

directed by Alfred Hitchcock
screenplay by David O. Selznick
based on the novel by Robert Hichens (1933)
adapted by Alma Reville

125 min.

Traumatically boring. You always hear stories about bigshot producers (like David O. Selznick) throwing their weight around and bullying directors into changing storylines, casting different actors, etc. Always the idea in these stories is that some kind of thickheaded, cigar-chomping “I know what I likes” sensibility ends up getting dumped all over the helpless art. This movie was written by Selznick, so you could think of it as a sort of perfect realization of that producerial sensibility. It was certainly thickheaded. The funny thing is that Selznick’s idiotic screenplay has none of that good stuff that the cigar-chomping producers are supposed to like – sex, violence, spectacle, happy endings, etc. It was as though the challenge of simply making those things happen was too much for him. A great deal of the dialogue consists of people stating what we’ve just seen, like in a radio play, or discussing the plot situation as it stands. You could feel Selznick’s frustration as he tried to wrap his mind around what the hell people might do or say to each other, like the scene on Seinfeld when they sit down to write their sitcom screenplay and immediately agonize… until they have a breakthrough and decide to have the first character say “Hello” and the second character respond with “Hi.” Which is funny because not only is that worthless dialogue, it’s also bad dialogue. This movie was all like that; people saying needless things to each other and saying them awkwardly.

The photography was more stylish and intelligent than this material deserved, which is saying nothing, and which unfortunately redeemed the film not at all. I don’t know how it could have been redeemed, and it doesn’t seem like Hitchcock cared to try; looks like he just made the movie and got his paycheck. The idiocy of the script was so apparent to us that I have to assume he was well aware of what he had on his hands, but who knows.

This was off the on-demand movie service. It is interesting to note, we observed, that the plot of Jagged Edge, the last on-demand movie we watched, is essentially identical. I’d say that Joe Eszterhas had intentionally borrowed it except why on earth would he have done that? The point is: we picked the same stupid movie twice!

This movie includes a scene where Charles Laughton, as a leering, flabby old judge, fixes on the bared shoulder of his friend’s wife and begins making a creepy, drunken pass at her, sitting next to her and taking her hand. I was forcibly reminded of this. Isn’t Charles Laughton perfect casting? But his dialogue wasn’t as good.

July 5, 2006

King Kong (1933)

directed by Merian C. Cooper and Ernest B. Schoedsack
screenplay by James Ashmore Creelman and Ruth Rose
after an idea by Merian C. Cooper and Edgar Wallace

One paragraph about King Kong (1933). Cooper apparently said that the idea of a giant ape threatening New York City came to him in a dream and it was that dreamlike craziness that struck me this time. I mean, an enormous monkey? I wanted to pretend that I was a first-time viewer who didn’t know what this “Kong” was going to be – the build-up to his first appearance gives away only that he will be something terrible and powerful – but I have to assume that Kong’s monkey-dom was a non-secret even in 1933, given that he appears on all the posters. Still, in the moment where suddenly we see what she sees – a surreal, jerky, monstrous gorilla with a hypnotic stare – the movie takes a huge leap forward in force of personality. Up until that point, all genre indications point to a typical kidnapped-by-the-natives-and-fed-to-the-volcano plot. But the giant monkey staring into the camera tells us that the movie is not about that or anything else; it simply must be taken on its own terms. That’s still exciting today, even though every scene in the movie is now familiar. Dinosaurs fighting, a Broadway theater, the top of the Empire State Building… it defies any conventions of plot or formula; each sequence arises out of its own sheer need to exist and is the more involving for it. I think of Nabokov writing (I forget where) about the complete vitality of fairy tales, the way that each element of the storytelling retains its full savor. On the other hand, these particular elements – dinosaurs, Broadway and so forth – are recognizably all part of the 30s imagination, and the fact that some musty “Weird Tales” mindset may be the only thing that holds them together becomes another delectable aspect of the experience.

Second paragraph is just extras. The whole thing is just so junky, but done with such panache. It makes me happy to think that nobody really seems to want to say a word against King Kong – sort of like it makes me happy when I hear non-junky people praising junk food; those pleasures that draw us to non-nutritional things are real and it feels good to acknowledge and endorse them as a part of the human experience. I think that’s the main thing that drives the cult of dignifying and mythologizing Hollywood* – the idea that this stuff might be worthy of dignity is immensely reassuring. The recent DVD looked wonderful. Fay Wray is a lot more appealing onscreen than she would seem to be from still photographs. I’ve heard people go on about how great and important Max Steiner’s score is, but for my money it was plodding and unimaginative. The recent remake, in retrospect, had more thought behind it and, obviously, fancier thrills, but was lacking that sense of actual craziness. I personally came away from the actual craziness more delighted, because it’s so much harder to be critical of actual craziness, but I could understand someone who felt, given the roller-coaster aspirations, that bigger and more sentimental was actually better. But not me; on its own scale, and its own way, this was the more unambiguously satisfying experience.

* e.g. see: TCM, AMC, or any Oscar broadcast.