Welcome to Level Seven: S.H.I.E.L.D. at PaleyFest

I’ve already blogged more than once about the ABC series Marvel’s Agents of S.H.I.E.L.D., not because I think it’s a great show but because I think it could be a great show and because it’s getting better with almost every episode. Shows produced by Joss Whedon, even ones like S.H.I.E.L.D. where he isn’t involved in the production on a day-to-day basis, tend to start out slowly and hit their stride at the end of the first season or even the beginning of the second. Honestly, there are times when I wish I could look into the future and see whether S.H.I.E.L.D. is really going to live up to the promise I think it has, because I may just be wasting 60 minutes of my life each week by watching it. But I don’t think I am.

And, as if by magic, I got a peek into the future last weekend and saw next week’s episode of S.H.I.E.L.D. I can report that it’s getting even better, but it’s still not quite as good as I’d like it to be.

Felicia Day

Geek goddess Felicia Day hosts a panel of tiny little people from the ABC show Marvel’s Agents of S.H.I.E.L.D.

One of the advantages of living in Los Angeles, as I’ve been doing for the last five years, is that you occasionally get the opportunity to look into the future of television, even if it’s only a week into the future. Last Sunday I drove to the Dolby Theatre in Hollywood (that’s where the annual Academy Awards ceremony is held) to watch part of PaleyFest, an event put on annually by the Paley Center for Media. I was there to watch a panel on S.H.I.E.L.D. and it was an impressive panel indeed, at least in terms of who showed up. Basically the entire regular cast of the show — Clark Gregg, Ming-Na Wen, Brett Dalton, Chloe Bennet, Iain De Caestecker and Elizabeth Henstridge — were there, as were the show’s three showrunners: Jed Whedon (Joss Whedon’s brother), Maurissa Tancharoen (Jed’s wife) and Jeffrey Bell, along with executive producer Jeph Loeb. The panel was hosted by geek goddess Felicia Day, best known to Joss Whedon fans like myself for her roles in Dr. Horrible’s Sing-Along Blog and several episodes of Dollhouse.

Actually, you didn’t have to live in LA to see this panel. If you knew about it in advance, you could have watched it on streaming video using the PaleyFest app — and maybe you did. But if you were actually at the Dolby Theatre you got to see something extra, something that wasn’t included in the streaming video. You got to see the April 1 episode of Agents of S.H.I.E.L.D.

Jeph Loeb asked all of us to go home and tweet or blog about the episode or at least about how much we liked it. We were advised not to give out spoilers. Yeah, right.

I suspect by now hundreds of attendees have given out spoilers in defiance of Mr. Loeb’s request, so it would be completely redundant for me to do so here. Therefore all I’ll say is that it’s one of the best episodes so far, largely because it focuses purely on the show’s serial arc and doesn’t attempt to stand alone in any way. I suppose it would be a minor **SPOILER** to mention that it centers around J. August Richards’ character Deathlok and the oft-mentioned but never-seen character known as “the Clairvoyant.”

I love serial TV shows and generally don’t care for standalone episodes of those shows, so the mere fact that this episode stuck to the serial arc was enough to make me happy. But it also advanced the serial arc significantly with a few surprises and plot twists, which is more than I can say about most episodes to date. And that’s all you’ll get out of me without inserting flaming bamboo splinters under my fingernails. For more spoilers, you’ll have to go elsewhere. Try Google. Or Twitter.

As for the panel, I have surprisingly little to say about it. It was fun seeing the cast and producer/writers in person, along with the lovely Ms. Day, who I’ve followed on Twitter for a year or two now because she’s a great source of geeky news about Joss Whedon TV shows. In general, though, the conversation on the panel didn’t reveal anything that I didn’t already know or at least suspect, such as the fact that Clark Gregg, who plays Agent Phil Coulson on the show, is a really nice guy. Twice, he literally leaped off the stage and ran into the audience to give someone a hug, most touchingly when the huggee was a youngish female fan who appeared to have Down Syndrome and was having difficulty articulating her question to the cast. I laughed when Chloe Bennet mentioned that fanfic ‘shippers — fan fiction writers who like to invent relationships between fictional characters from television and elsewhere — had created a romantic couple out of her character (Skye) and Elizabeth Henstridge’s character (Simmons) and were calling the couple “Skimmons.” (A quick check on Google revealed that, yes, there’s actually fanfic about “Skimmons.” It had slipped right past me.) Otherwise, the conversation consisted largely of cast members answering banal questions from the audience like “What superhero would you like to be?” (For the record, not all of the questions were banal. If you were there and asked a question, I can assure you that it wasn’t one of the banal ones. Hey, you’re intelligent enough to be reading my blog, right?)

Also, if you were watching the streaming video you probably saw me without realizing it. When a heavyset guy in a Captain America sweatshirt got up to ask a question, I was the gray-haired guy with glasses over his shoulder on screen right. I reached up twice to touch my glasses, not as a signal to anyone viewing the video stream or as a nervous tic, but to make sure the guy I was seeing on the giant video screen above the panel was actually me. Sure enough, the guy on the video screen reached up to touch his glasses too.

Two other pieces of important news gleaned from the panel — actually, from Jeph Loeb’s introduction to the panel — are that, starting with the April 1 episode, there will be no more interruptions in the Agents of S.H.I.E.L.D. schedule for seven straight episodes, right up through the season finale in May, and that the April 8 episode of S.H.I.E.L.D. will be a direct follow-up to the movie Captain America: The Winter Soldier, which will be released to theaters on Friday, April 4. (Hey, that’s today! Better get tickets soon!)

If you saw the screening of next week’s episode or you’re reading this after it’s already appeared on TV, I’d be curious to hear what you thought of it. Feel free to leave a comment on this post.

And keep your fingers crossed that S.H.I.E.L.D. gets renewed (word has it that it will) and that it has the greatest second season of any show in the history of television. Or at least of any show executive-produced by Joss Whedon.

That would make a lot of people, including myself, very happy.

Video Games as Story, Video Games as Life: Part Two

(This is Part Two of an article about the evolution of computer games into a form of virtual reality, largely as witnessed by the fanatic computer gamer who writes this blog. If you’ve already read Part One, you might want to skim over it again, because I’ve added several paragraphs of material.)

In early 1997, a revolution occurred in computer gaming on DOS and Windows PCs.

I was lucky enough to be in a position to witness this revolution from the very beginning, something most gamers weren’t. In January of 1997, I was working on a project for a local software company and to help me accomplish it they loaned me a computer that was considerably better than the aging DOS/Windows machine I already owned. The loaner had a 200-MHz, first-generation Pentium processor (still more-or-less state-of-the-art at the time and vastly superior to the aging 66-MHz 486 machine I used for word processing and gaming) and something else that was on the bleeding edge of microcomputer technology: a 3DFX Voodoo graphics board.

For much of that decade, computer equipment manufacturers had been attempting to create 3D accelerator cards for microcomputers and gaming consoles that would allow programmers — mostly game developers — to write code for three-dimensional animation that would move polygons across the video display at considerably faster rates than were possible using purely software-based 3D programming, the type that I’d written about in my book Flights of Fantasy. The first PlayStation used one of these accelerators and its graphics were very good for the time (about 1996), if not quite revolutionary, though they were good enough to turn Lara Croft into the world’s first video game sex symbol (unless you count Zelda and Princess Peach). These graphics cards took over many of the more time-consuming tasks that the computer’s main processor would otherwise have to perform in drawing 3D images, freeing up the CPU to handle other work while the graphics card did most of the drawing.

Tomb Raider 1

Lara Croft on PlayStation 1: The world’s first 3D-accelerated sex symbol

Unfortunately, nobody had been able to produce a 3D accelerator card that had all the capabilities programmers really needed for truly realistic graphics, until a previously unknown company called 3DFX introduced the Voodoo Graphics chip set. A graphics accelerator board built around this chip set could not only draw 3D polygons on a computer display at unheard-of speeds, it could do so in resolutions of at least 640×480 pixels (considered fairly high-resolution on 14″ monitors) with more than 32,000 different colors on the screen simultaneously, an astonishing number when you consider that most computer games of the period ran in 320×200 mode and only put 256 colors on the screen at one time. 3DFX also released an API — application programming interface — for the chip set called Glide that simplified the task of writing programs that used the chips’ capabilities.
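
To put those numbers in perspective, here is some rough back-of-the-envelope arithmetic of my own (assuming one byte per pixel for 256-color modes and two bytes per pixel for the Voodoo’s 15/16-bit color; these figures are illustrative, not 3DFX specifications):

```python
# Rough framebuffer arithmetic for the two display modes mentioned above.
# Assumes 1 byte per pixel for 256-color modes and 2 bytes per pixel for
# 15/16-bit color; illustrative figures, not 3DFX specifications.

def framebuffer_bytes(width, height, bits_per_pixel):
    """Size of a single frame in bytes."""
    return width * height * bits_per_pixel // 8

dos_mode = framebuffer_bytes(320, 200, 8)      # typical DOS game mode
voodoo_mode = framebuffer_bytes(640, 480, 16)  # typical Voodoo-accelerated mode

print(f"320x200 at 8 bits/pixel : {dos_mode:,} bytes per frame")     # 64,000
print(f"640x480 at 16 bits/pixel: {voodoo_mode:,} bytes per frame")  # 614,400
print(f"That's roughly {voodoo_mode / dos_mode:.1f}x more pixel data per frame.")
```

In other words, the Voodoo wasn’t just drawing prettier pixels; it was pushing nearly ten times as much pixel data per frame as the software renderers it replaced, which helps explain why dedicated hardware mattered so much.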

The first game company to recognize the potential of the 3DFX chip set and make a game available that utilized it was, not surprisingly, Id Software, the company that had revolutionized 3D gaming and invented the first-person shooter with Wolfenstein 3D and Doom. In 1996, Id had released Quake, a game that I had played and enjoyed but that hadn’t struck me as the kind of quantum leap beyond Doom that Doom had been beyond Wolfenstein 3D. What was clever about Quake was buried deep inside its code, where the average player couldn’t see it. Id’s head programmer, John Carmack, had designed the Quake graphics engine so that it could easily take advantage of graphics hardware quite different from the cards it was initially intended to run on. Just before I received my loaner computer, Carmack had released a modification to Quake that anyone could download freely from Id’s Web site. It allowed Quake to generate its 3D imagery using the 3DFX chip set, eliminating the need for much of the 3D code built into the game program. Being the first gamer on my block — and probably one of the first gamers in the state of Maryland — who had a computer at his disposal with a 3DFX Voodoo board installed, I downloaded it immediately and ran the accelerated version of Quake. The results were breathtaking.

Here’s a detail of the Quake screen running in 320×200 pixel mode with 256 colors on-screen:

Quake without 3D acceleration

Quake running in low resolution in 256 colors

And here’s a detail of the Quake screen in higher resolution with more than 65,000 colors on screen at one time:

Quake with 3D acceleration

Quake running in high resolution with more than 65,000 colors

I’ve blown up these crops to make the differences more obvious, which causes them both to look blurry, but the differences should indeed be clear. And though few gamers had 3DFX accelerator boards in January of 1997, a lot of game developers did, and when they saw what the new boards did for Quake, the revolution began. Everybody who wanted to write 3D games wanted to write them for 3DFX-based graphics boards.

To get a better idea of what happened when 3D graphics were introduced, watch this YouTube video:

Graphical Evolution of First-Person Shooters: 1992-2012

Be sure you’ve got your YouTube settings at 720p or higher when you start watching. The 3D accelerator shift becomes most apparent here with Half-Life in 1998 rather than with Quake 1 or Quake 2 in the previous two years, but that may have as much to do with the creator’s resolution settings and video capture software as with the games themselves; it was the accelerated version of Quake 1 that really set off the revolution. Either way, it’s obvious in this video that something dramatic happened between roughly 1996 and 1998, and that something was 3DFX.

Within a few years 3DFX had serious competition from chip sets developed by Nvidia and ATI. By the end of 2000 the company had collapsed, selling its patents and other assets to Nvidia, which, along with ATI (now part of AMD), remains one of the two major producers of 3D graphics hardware for gamers. But by then, 3D-accelerator technology had become ubiquitous. Even ordinary computers, those used exclusively for spreadsheets or Web surfing, had 3D acceleration built in, even though in many cases it was never used.

Once the revolution began, 3D computer animation improved so rapidly that by 2004 games like Half-Life 2 could produce scenes that looked like they’d been shot in the real world by video cameras:

Half Life 2

Half-Life 2: Is it real or is it a video game?

By 2006, games like The Elder Scrolls: Oblivion had even achieved my holy grail of computer graphics realism: You could see individual blades of grass blowing in the wind:

The Elder Scrolls: Oblivion

Blades of grass seemingly blowing in the wind

This was actually a bit of a cheat. You weren’t seeing individual blades of grass. You were seeing transparent polygons with clusters of grass “painted” on them through a technique known as texture mapping. But it looked real, and in the world of computer game virtual reality, that’s what counts.
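
For anyone who wants to picture that trick, here is a minimal sketch of the idea (entirely illustrative, with invented names and numbers, and not Oblivion’s actual code): each tuft of grass is an upright quad with a mostly transparent grass texture mapped onto it, and “wind” is just a small sideways offset applied to the quad’s top vertices each frame.

```python
import math

# A minimal sketch of "grass painted on transparent polygons": each tuft is a
# flat, upright quad carrying a grass texture whose transparent texels are
# skipped when the quad is drawn, so only the painted blades show up.
# Names and numbers are invented for illustration, not from any real engine.

def grass_quad(x, z, width=0.5, height=1.0):
    """Four corners of an upright quad rooted at ground point (x, 0, z)."""
    return [
        (x - width / 2, 0.0, z),     # bottom left
        (x + width / 2, 0.0, z),     # bottom right
        (x + width / 2, height, z),  # top right
        (x - width / 2, height, z),  # top left
    ]

def sway(quad, time_s, strength=0.1):
    """Fake wind by nudging only the top two vertices from side to side."""
    swayed = []
    for i, (x, y, z) in enumerate(quad):
        if i >= 2:  # indices 2 and 3 are the top corners
            x += strength * math.sin(2.0 * time_s + z)
        swayed.append((x, y, z))
    return swayed

# At draw time the renderer texture-maps the grass image onto each quad and
# discards any texel whose alpha falls below a cutoff (alpha testing), so the
# flat polygon reads as a cluster of individual blades.
tuft = grass_quad(x=3.0, z=7.0)
print(sway(tuft, time_s=1.5))
```

Scatter a few thousand of those quads across a field and the eye happily fills in the rest.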

Only a few years over schedule, computer graphics had achieved the vision I’d had for them in 1981, when I’d looked at a picture of a crudely drawn warrior facing down a crudely drawn monster in a very crudely drawn maze on the box for the game Morloc’s Tower:

Morloc's Tower

A crudely drawn warrior in a crudely drawn maze. Computer game virtual reality had come a long way in 25 years.

There was only one element missing from my original vision of computer games as virtual reality: artificial intelligence. MMOs — massively multiplayer online games — allowed real human beings to interact as computer-drawn characters, but not really in a way that produced a story. I wanted computer-generated characters who were as artificially smart as they were artificially realistic, so that I could enter a world and create my own story from existing elements, stepping into a kingdom on the verge of war or a conflict between mobsters and becoming involved with the people there in such a way that I would actually affect the direction of that world’s history.

Computer game designers were working on this too, but without true AI, creative substitutes had to be found that would give the player the illusion of being in a real world interacting with real people. Essentially, designers have gone in two different directions to create this illusion: allowing players maximum freedom within a richly imagined world or forcing players down a preconceived story path that offers only a veneer of true freedom. In the next (and, I hope, final) installment of this post, I’ll discuss three games that take very different approaches to creating stories within virtual worlds: The Elder Scrolls: Skyrim from Bethesda Game Studios, The Walking Dead Season One from Telltale Games and Dishonored from Arkane Studios.

Ordinary People, Extraordinary Situations: The Walking Dead

When asked why I enjoy genre fiction, by which I mean fiction that wouldn’t be put on the literary or even straight fiction shelf at the bookstore but on shelves with names like science fiction or fantasy or thrillers, I reply that it’s because I like stories about how ordinary people will react when placed in extraordinary situations. There’s more to it than that, but it’s sufficient for a short answer and it’s the closest I can come to an accurate one that wouldn’t have to be expressed at essay length. When I was younger, the best fiction in the science fiction and fantasy genres was being done in written form, but today much of it is in movies and on TV, especially on TV. Of those TV shows I’m familiar with, the best one about ordinary people in extraordinary situations is, without question, The Walking Dead.

The Walking Dead

The Walking Dead

It’s a show about an extraordinary situation that’s become so familiar, even clichéd, over the nearly half century since George Romero’s Night of the Living Dead first appeared on movie screens that you’d expect it to be dead by now itself, yet like the creatures that it’s about, it just keeps coming back and back and back. And somehow it just keeps getting better. Movies, video games and TV shows about the walking dead — colloquially known as zombies, but almost never referred to by that name in the stories themselves — continue to appear on a regular, almost monthly basis, and currently The Walking Dead is the best of them. It’s a TV show just winding up its fourth season that’s already on its third showrunner — that’s the producer-writer responsible for day-to-day decisions on the creative aspects of the show — and it has improved under each one. The characters — those ordinary people caught in the extraordinary situations — continually become deeper and more interesting, and to anyone who’s been following the show since the beginning they’ve become almost like members of the family. The cast of The Walking Dead and the writers who create the characters that they play are as good as any actors and writers on a television show today, and unless there’s something I’m missing (I haven’t seen Orphan Black yet, so it’s still in the running), The Walking Dead has become the best genre show currently being produced on television. In fact, it might be the best show, period. Yes, even better than Game of Thrones, which (though I love the books, especially the first three, and think that the adaptation to television is masterful) sometimes becomes a little too complicated for its own good.

Last Tuesday night Amy and I had the good fortune to see a presentation at the Writers Guild in Beverly Hills hosted by Chris Hardwick, star of Talking Dead, the show that follows The Walking Dead every Sunday night on AMC. His guests were Scott Gimple, the show’s current showrunner; Bob Kirkman, creator of the comic book that the show is based on and a frequent writer for the show itself; Lauren Cohan, who plays Maggie; and Steven Yeun, who plays Glenn. The program ran for two hours and, as you can imagine, it was much too short.

Walking Dead Panel

From left to right, Chris Hardwick, Scott Gimple, Robert Kirkman, Lauren Cohan and Steven Yeun.

Much too much happened for me to give you a full report, but I can make a few observations:

  • Bob Kirkman is one of the most outgoing writers I’ve ever seen and took immediate command of the room. If Hardwick hadn’t been available to host, Kirkman could have done the job handily with no preparation whatsoever. He was witty, smart and obviously both proud of his creation and amazed at its success.
  • Scott Gimple is a quiet, intelligent man who probably shouldn’t be put in a chair next to Kirkman, because Kirkman’s larger-than-life personality (which goes with his somewhat larger-than-life physique) pretty much left everyone except Hardwick in his shadow. I think Gimple’s the best showrunner the program has had yet, as proven by the masterly way he’s divided the cast up in recent episodes so that we can get to know the characters better through individual vignettes. Those vignettes are clearly leading toward an inevitable reunion after the battle with the Governor and the destruction of the repurposed prison they had been living in sent them all scampering into the woods at the end of the first half of the current season.
  • Steven Yeun, as Amy noted after we left, is a lot like his character Glenn on the show — quiet, sweet, handsome and soft-spoken but with enough stage presence (and distance on the panel from Kirkman) to hold his own in the conversation.
  • Lauren Cohan is nothing like her character Maggie, except that she also seems to be a nice person. She was raised in New Jersey until the age of 13, at which point her family moved to England, so she has one of those peculiar mid-Atlantic accents that fluctuates back and forth between American English and British English in a way that would have made her origins hard to pin down if I hadn’t looked them up on Wikipedia first. So naturally she plays someone with a rural Southern accent. She’s also quite beautiful and cleans up nicely when not covered with zombie guts.

Partial Walking Dead Panel

From left to right, Scott Gimple, Robert Kirkman, Lauren Cohan and (partially cropped) Steven Yeun.

It was a great evening and I was glad that we went. If you don’t watch the show because it’s about zombies, you’re making a mistake, because like any good genre show (or book) it isn’t really about the implausible elements that make up its premise. It’s about the people who become involved with those implausible elements. And on this show those people are so brilliantly depicted that by the third season — which is when I think the show goes from being good to being great — you’ll be so caught up in the lives of these people that you’ll forget that you were ever put off by the idea of a zombie show, even when you’re watching very sharp knives being driven into the brains of the restless undead. The first three seasons are available on Netflix Streaming and I suspect the fourth will be shortly.

(For those of you following my discussion of video games as virtual reality, it should return in my next post.)

Video Games as Story, Video Games as Life: Part One

(This is Part One of a multi-part article on computer games as pioneering attempts at virtual reality, all as seen from the viewpoint of a young and then not-so-young gaming fanatic from 1981 through the present.)

In 1981, one day before my first microcomputer was delivered to my apartment by a kindly salesperson, I bought a game for it. It was called Morloc’s Tower and, though I didn’t know it at the time, it was closely related to Temple of Apshai, one of the most successful early computer role-playing games, first published in 1979. The picture on the back of the box showed a simple silhouette of a warrior traversing a two-dimensional maze and facing down an equally simple silhouette of some sort of monster. I was disappointed to discover that this image was from the Apple ][ version, while my computer was a TRS-80 Model III, where everything on the screen was reduced to an even simpler collection of giant pixels. But it really didn’t matter. Just looking at that picture, as crude as it was even in the Apple version, had caused me to have an epiphany.

Morloc's Tower screenshot

A warrior, a monster, a maze: We were easily entertained in 1981.

I’m sure everyone who follows technology and has even a rudimentary knowledge of the principles that underlie it has experienced a moment when they’ve looked at some revolutionary new piece of consumer tech, as home computers still were in the summer of 1981, and caught a glimpse of where that technology would eventually lead. I had already spent a decade playing arcade games, from Pong to Space Invaders to that spanking new sensation Donkey Kong, but it wasn’t until I saw that screenshot of a role-playing game being played out on a microcomputer display that I realized these things we called “computer games” or “video games” could be more than just games. They could be worlds.

By the time my computer arrived the next afternoon, I had already worked out a full-blown vision of what I thought computers would be capable of in 20 years, a period of time that turned out to be fairly close in some ways and completely off in others. We would have wraparound computer screens as tall as we were (wrong, but it turned out not to matter) that would offer us a window into a virtual universe, though I probably didn’t think of it quite in those words, because the term “virtual reality” hadn’t been popularized yet. This universe would be rendered graphically based on a mathematical model stored in the computer and it would be populated by artificial intelligences with whom — I was already thinking of them as beings, not things — we could interact through speech and body movements. The graphics would be so realistic that the AIs would look like real people and the landscape so intricately depicted that you would be able to see individual blades of grass waving in the wind. So realistic would this computer-generated world be that you wouldn’t even feel like you were playing a game, per se. You would be having a full-blown experience, spending a few hours vacationing in a world that just didn’t happen to be your own. The components of the world would be so fully realized and the characters so artificially intelligent that the game designer wouldn’t even need to provide a story. You would create one yourself through the ways in which you interacted with the elements of this virtual world.

Dungeon Master screenshot

Dungeon Master: The most realistic computer role-playing game of the 1980s.

I was so taken with this vision of an electronic universe constructed inside a computer that I started collecting advanced texts on 3D graphics and artificial intelligence. The ones on graphics were intimidating and made me wish I hadn’t slept through high school trigonometry. The ones on AI were less intimidating, but also made the basic problems sound a great deal more difficult.

I obviously wasn’t the only person thinking along these lines, because I quickly began to discover that Morloc’s Tower and its cousin Temple of Apshai weren’t the only games that were attempting to realize some version of that vision. Somewhere in the ’80s the term “virtual reality” became common and it was clear to me that it was the goal that a lot of programmers were working toward in the guise of computer games. Over the next decade, games like the Ultima series, the Infocom adventures, the subLOGIC Flight Simulator (later better known as the Microsoft Flight Simulator), and the undeservedly obscure but extremely influential Dungeon Master were all, in one way or another, doing exactly what I hoped computer games would do: creating worlds. By late in that decade Origin Systems, the company that published the Ultima games, even adopted “We Create Worlds” as its motto. And these worlds that they were creating were ones that I, the person sitting in front of the computer, could in one way or another move through and interact with, worlds with genuine inhabitants, never mind that those inhabitants were quite simple-minded compared to actual human beings. The fact that these worlds were depicted on a 14″ computer monitor and that their inhabitants couldn’t pass a Turing Test if they had a cheat sheet scribbled on the backs of their virtual hands didn’t matter. A perfect simulation of reality turned out to be far less important than I had thought.

In fact, even perfectly rendered graphics — or graphics at all — turned out not to be essential. The first game where I really felt that I’d fallen through the computer display and into the world of the game like Alice tumbling down the rabbit hole into Wonderland was Infocom’s 1982 adventure game Deadline, where the player took the part of a detective with 12 hours to investigate a locked-room murder and to interrogate the occupants of the mansion where it had taken place. Your interface into that world was purely through text. You typed commands using the computer’s keyboard; the game described your environment by printing text back at you on the computer’s screen. The sheer sense of verisimilitude I felt while playing Deadline was a revelation. That mansion was alive! People moved through it on their own schedules and could be observed by the player either openly or, if a suitable hiding place was available, covertly, and sometimes they would behave differently depending on whether they were aware of your presence. You could collect clues, explore hallways, unlocked rooms and the grounds inside the mansion’s fence. You could stop characters in their tracks, talk to them, and ask them questions, which had to be phrased properly before the character would understand them but would often yield interesting (if frequently and deliberately misleading) answers. By the end of the 12 hours, you either had to accuse someone of the crime or get thrown out of the house. Fail to find the correct culprit and you could either revert to a saved game position or start the game over, trying new tactics.

Deadline from Infocom

Deadline: Virtual reality with no graphics in sight.
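
In case it helps to picture what made that mansion feel alive, here is a toy model of the kind of scheduled character movement described above (my own reconstruction, not Infocom’s code; the character names come from the game, but the times and rooms are invented):

```python
# A toy model of scheduled NPC movement in a text-adventure mansion.
# The times (minutes into the game day) and rooms are invented; only the
# character names are borrowed from Deadline.

npc_schedules = {
    "Mrs. Robner": [(540, "living room"), (600, "kitchen"), (660, "garden")],
    "George":      [(540, "bedroom"), (630, "library"), (720, "living room")],
}

def location_at(schedule, minute):
    """Return the room an NPC occupies at a given minute of the game day."""
    room = schedule[0][1]
    for start_minute, scheduled_room in schedule:
        if minute >= start_minute:
            room = scheduled_room
    return room

def visible_npcs(player_room, minute):
    """List the NPCs the player can currently observe in their own room."""
    return [name for name, schedule in npc_schedules.items()
            if location_at(schedule, minute) == player_room]

print(visible_npcs("library", 645))      # ['George']
print(visible_npcs("living room", 725))  # ['George'] -- he moved on schedule
```

The real game layered a parser, hidden evidence and plenty of red herrings on top of this sort of clockwork, but the clockwork is what made the place feel inhabited.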

Deadline fascinated me and I still think it’s the best game Infocom ever published, even better than the more famous Zork series, but some players found talking to the characters boring and later detective adventures from Infocom were far less ambitious. I enjoyed the company’s science fiction and fantasy games too, but none had the phenomenal sense of reality that Deadline exuded. In fact, Deadline might have been a peak moment in the use of artificial intelligence in games. Even today, the only interactions you’re likely to have with computer-controlled characters involve either fighting them or selecting conversational gambits from onscreen menus.

It was, however, text adventures such as this (and simpler ones being produced by a programmer named Scott Adams, no relation to the creator of Dilbert) that inspired me to learn to program. In late 1981 I ordered a book of adventure games written in the BASIC programming language, all of which had been published commercially in the late 1970s but were now sufficiently dated that the authors had released the source code to be used for educational purposes. I typed one into my computer and, once I’d finished combing out all the typos, was astonished at how vivid a world it created, all based on typed instructions and fairly simple data structures. I was so excited by this that I stayed up almost 48 hours straight, dissecting the program so that I knew exactly how it worked and then writing a similar program of my own. It turned out to be remarkably easy to create something out of variables and computer data structures like arrays that felt very much like a real world, one with which the player could interact freely.
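
To give a sense of how little machinery such a game actually needs, here is a tiny sketch in Python rather than BASIC; the rooms and text are invented, but the overall shape (a table of rooms, a table of exits and a loop that reads typed commands) is roughly what those old listings boiled down to.

```python
# A bare-bones text-adventure skeleton: a table of rooms, a table of exits,
# and a loop that reads typed commands. The rooms and text are invented.

rooms = {
    "clearing": {"desc": "You stand in a forest clearing. A cave lies north.",
                 "exits": {"north": "cave"}},
    "cave": {"desc": "A damp cave. Daylight glows to the south.",
             "exits": {"south": "clearing"}},
}

def play():
    here = "clearing"
    while True:
        print(rooms[here]["desc"])
        command = input("> ").strip().lower()
        if command in ("quit", "q"):
            break
        if command in rooms[here]["exits"]:
            here = rooms[here]["exits"][command]
        else:
            print("You can't go that way.")

if __name__ == "__main__":
    play()
```

Add objects, a verb-noun parser and a few flags for puzzle state and you have, in miniature, the kind of program I stayed up 48 hours dissecting.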

Still, the reality quotient of games continued to increase, and 3D graphics, which were a tougher nut for me to crack intellectually, rapidly became more important. The subLOGIC Flight Simulator, the second edition of which was published the same year as Deadline, was another early milestone in virtual verisimilitude, a stealth attempt to create virtual reality in the guise of a computer game. Even though it ran at about one frame per second on my Commodore 64, I was startled by the sheer volume of the world it depicted. You viewed that world entirely from the cockpit of a small plane and it largely consisted of lines representing roads and rivers, with the occasional wireframe building or bridge and the even more occasional texture-mapped surface. But the fact that I had hyper-realistic control of the way that Piper Cherokee Archer moved through the skies of the game’s four areas (Seattle, New York, Chicago and Los Angeles, if I’m remembering correctly) made it easy to believe that there was a world inside my computer. And in a sense there was, except that instead of being made of atoms and molecules, it was made of patterns of electrons stored in a matrix of silicon.

Screenshot from the subLOGIC Flight Simulator

The subLOGIC Flight Simulator: A world made of electrons and silicon.
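
If you are curious how a wireframe world like that ends up on a flat screen, the essential step is a perspective projection: divide each point’s horizontal and vertical offsets by its distance from the viewer, so distant things crowd toward the center of the screen. Here is a minimal sketch of that idea (my own simplification with arbitrary numbers, not subLOGIC’s actual code):

```python
# Perspective projection of 3D points onto a 2D screen: the core step in a
# wireframe renderer. The focal length and sample coordinates are arbitrary
# illustrative values.

def project(point, focal_len=256.0, screen_w=320, screen_h=200):
    """Map a point (x, y, z) in viewer space to integer screen coordinates."""
    x, y, z = point
    if z <= 0:
        return None  # behind the viewer; a real renderer would clip the line
    screen_x = screen_w / 2 + focal_len * x / z
    screen_y = screen_h / 2 - focal_len * y / z  # screen y grows downward
    return int(screen_x), int(screen_y)

# One edge of a "runway" drawn as a line segment receding into the distance:
near_end = project((2.0, -1.0, 10.0))
far_end = project((2.0, -1.0, 200.0))
print(near_end, far_end)  # the far end converges toward the horizon
```

Project both endpoints of every road, river and bridge segment this way, draw lines between them, and you have the bones of an early-1980s flight simulator view.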

But what really made the subLOGIC Flight Simulator so astonishing was the sense that every experience you or anyone else had in it was unique, just as every experience you have in the real world is unique. Almost every game of Donkey Kong was identical to every other, and there were no doubt other players who typed and were told the same things in Deadline as I was. But the subLOGIC Flight Simulator offered a nearly infinite variety of possible game sequences and it was very likely that the one you were experiencing was different, in at least small ways, from the ones experienced by other players. Although I’m pretty sure the terms hadn’t been coined yet, the subLOGIC Flight Simulator was probably the first example of what later came to be called either an “open world” or a “sandbox” game. You could go anywhere within the game’s database of maps and you could choose to do just about anything your plane was capable of, including crashing into the ground like a bug hitting the windshield of a race car. There were no real goals in the game except the ones you made up for yourself. You were like a child playing in a very big sandbox for the sheer joy of it.

In the 1990s the development of realistic computer-generated worlds really began to take off (no flight-simulator pun intended). For me, the turning point came when the game development firm Blue Sky Productions (later renamed Looking Glass Studios) joined forces with game publisher Origin Systems to create Ultima Underworld: The Stygian Abyss, published in 1992. Earlier Ultima games had given the player a top-down view of the imaginary land of Britannia, with pre-drawn animated characters traveling from city to city fighting pre-drawn animated monsters. But Ultima Underworld (which was only loosely related to the mainline Ultima games) gave you a first-person three-dimensional look at its underground universe, a 10-level dungeon illuminated by flickering torchlight and populated by three-dimensional humanoids who were sometimes your friends and sometimes your enemies, but who were rendered in real time with surprising realism given that the game came out in an era when computer CPUs rarely ran faster than 33 MHz. The wonderful game Dungeon Master, which had debuted on the 8-MHz Atari ST in 1987, had tried something similar and succeeded extraordinarily well by the standards of its time, but Ultima Underworld was the first time I really felt like I was entering that graphically vivid universe I had envisioned when I bought Morloc’s Tower back in 1981. Blue Sky Productions couldn’t make their characters look quite like real people or show individual blades of grass waving in the breeze (not that there were any breezes to be found in its underground environment), but the game was such a stunning leap toward the type of world-building I had longed for that just the playable demo of the first level of the dungeon made my head spin.

Ultima Underworld screenshot

Welcome to the Stygian Abyss!

As revolutionary as it was, Ultima Underworld was not the most influential worldbuilding game of 1992. That role fell to an unexpected candidate, Wolfenstein 3D from Id Software, an attempt to remake a popular Apple ][ game from the early 1980s called Castle Wolfenstein into a high-speed three-dimensional experience. At the time I was working as a moderator on the old CompuServe Information Service, the sort of proprietary online service we hung out on back in the days before the Internet invaded the homes of ordinary people. I found the game in the file upload area of a forum dedicated to PCs (when we were still in the transitional phase between MS-DOS and Windows as the operating system of choice). It ran under DOS and, despite having less realistic (and technically less sophisticated) graphics than Ultima Underworld, it rocketed along at high speeds where Ultima Underworld merely crawled, thanks to some ingenious work by the programmers at Id. Programming purists complained that it wasn’t using true 3D graphics, which was true — you could not, for instance, look up and down or climb up to a higher level that looked down on a lower one — but the level design was so clever that you barely noticed. Wolfenstein 3D was addictive in a way that few games had ever been and it spawned a brand-new game genre: the first-person shooter.

Wolfenstein 3D screenshot

Wolfenstein 3D: The first first-person shooter.
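
The technique those purists were complaining about is generally known as raycasting: the map is really a flat two-dimensional grid, and for every vertical column of the screen the renderer marches a single ray across that grid until it hits a wall, then draws a wall slice whose height shrinks with distance. Here is a bare-bones sketch of the idea (my own toy version with an invented map, not Id’s code):

```python
import math

# A bare-bones raycaster over a flat 2D grid map, the "2.5D" trick behind
# Wolfenstein 3D-style renderers. The map, player position and constants
# are invented for illustration.

MAP = [
    "#########",
    "#.......#",
    "#..##...#",
    "#.......#",
    "#########",
]

def cast_ray(px, py, angle, step=0.02, max_dist=20.0):
    """March a ray from (px, py) until it enters a wall cell; return the distance."""
    dx, dy = math.cos(angle), math.sin(angle)
    dist = 0.0
    while dist < max_dist:
        dist += step
        x, y = px + dx * dist, py + dy * dist
        if MAP[int(y)][int(x)] == "#":
            return dist
    return max_dist

def render(px, py, facing, fov=math.pi / 3, columns=60, screen_h=20):
    """Return one wall-slice height per screen column; nearer walls look taller."""
    heights = []
    for col in range(columns):
        ray_angle = facing - fov / 2 + fov * col / columns
        dist = cast_ray(px, py, ray_angle)
        heights.append(min(screen_h, int(screen_h / dist)))
    return heights

# One rendered "frame", reduced to a list of wall-slice heights:
print(render(px=5.5, py=1.5, facing=0.0))
```

Because the world is just a flat grid with uniform wall heights, there is nothing to look up or down at, which is exactly the limitation the purists were pointing out; it is also part of why the whole thing could run so fast on a 1992 PC.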

When Wolfenstein 3D came out in July 1992, I had, ironically, just finished writing a book called Flights of Fantasy that used my newfound knowledge of computer programming and 3D graphics to explain how to program three-dimensional animation of the type found in flight simulators. (It was published in 1993 by Waite Group Press and came with a working flight simulator on disk that I co-wrote with my friend Mark Betz. I was in charge of writing the graphics animation code and Mark wrote the flight-simulation mechanics. The book spent several weeks on computer book bestseller lists and I’d like to think it taught a generation of young programmers how to write both 2D and 3D games.) The moment I saw Wolf 3D, as it was affectionately known, I proposed to Waite Group Press that my follow-up book, Gardens of Imagination, be about Wolfenstein 3D-style graphics. The contract was in the mail almost immediately.

Flights of Fantasy cover

The book that (I hope) launched a thousand careers.

The programmers at Id Software, meanwhile, weren’t resting on their laurels. Although dozens of Wolf 3D clones began to appear, Id was already at work on the next generation of the Wolfenstein graphics engine, one that came even closer to true 3D graphics. They used it to create the revolutionary game Doom, which appeared in December of 1993. Doom made Wolf 3D look like ancient technology and it deservedly became one of the most popular computer games ever published.

Doom screenshot

Doom: The game that revolutionized 3D gameplay.

While I’ve probably played Doom more than any other game I’ve ever owned — hell, I still play it, albeit in versions with revamped graphics engines that keep it from looking atrociously dated on widescreen monitors — it was really Ultima Underworld that came closest to my 1981 vision of computer games as worlds inside the computer. And while Doom and its follow-up Quake were the games that were really shaking up the gaming industry in the mid-1990s, another company, Bethesda Softworks, was quietly reinventing the Ultima Underworld model and creating the game series that would eventually go on to become probably the most influential in gaming history: The Elder Scrolls. I was lucky enough to live about two miles from Bethesda’s offices while they were developing the first game in the series, Arena, and either because I wrote Flights of Fantasy or because I was a moderator on one of CompuServe’s gaming-related forums — I never was quite sure — I wangled an invitation to see the game while it was still in development. I was stunned by what I saw. Although the graphics look crude now, largely because they were designed for much smaller monitors than we have today and only used 256 different colors, it was the greatest leap yet in the direction of my 1981 vision. The designers at Bethesda were creating a genuine virtual world, one that was vast, detailed and alive.

Arena screenshot

A glimpse of the immense virtual world of The Elder Scrolls: Arena.

Those of you who follow the computer gaming world know that Bethesda is still creating such games today and each one — the latest is The Elder Scrolls: Skyrim — comes closer to that vision of a perfect virtual world that I had 33 years ago. Skyrim may be the most successful game that Bethesda has yet published, though their Fallout games, which create their own virtual post-apocalyptic United States, are probably close in terms of sales.

In 1997 and 1998, a revolution in computer graphics occurred, one that raised the realism component of computer games to new heights and made virtual reality of the kind I had envisioned in 1981 genuinely attainable. But this post is getting too long and I’ll be back to talk about it later.

In Part Two (and possibly Part Three) of this article, I’ll talk about the revolution in gaming brought about in the late 1990s by graphics accelerator boards and how 3D virtual reality games have essentially split into three types — those that tell stories, those that create worlds and those that do both. Stay tuned.