
Do you remember the days when we only used one term to describe a console’s power? Twenty years ago, “bits” were all that mattered, and we could tell when one console was better than another simply by looking at the games. You needed no more insight than your own eyes to see that Donkey Kong Country was more advanced than the arcade’s Donkey Kong, or that Sonic the Hedgehog could do more than Alex Kidd in Miracle World.

Fast forward to today and how do we compare consoles? Jargon. Twenty years ago, people didn’t debate the relative merits of a “Customized 6502 CPU” and a “Television Interface Adaptor Model 1A” because a system’s power could be easily described by marketers as “bits,” and every generation self-evidently doubled the power of the last. But those days, as you certainly know, are gone.

Ever-improving graphics were always one good reason to buy a console, but alongside the visual upgrades, each generation’s new technology brought bigger and better ways to play our games. The Super Nintendo’s Mode 7 graphics, for example, allowed racing games like F-Zero and Super Mario Kart to take off, while the jump to 32-bit and 64-bit consoles made full 3D gaming possible, a monumental advancement in its own right. During this era, advances in technology went hand in hand with advances in gameplay; but once the fundamentals of a video game reach their limits, companies look for other, less essential ways to improve the experience.

The first time we saw this play out was in the sixth generation of console gaming. Once games had expanded into the third dimension, where else could they go? Sony and Nintendo chose to sidestep the absence of new gameplay concepts and instead relied on big-name exclusives to drive the success of their new consoles, the PlayStation 2 and the GameCube, respectively. Microsoft, on the other hand, entered the race by pursuing online gaming after SEGA’s original attempt came crashing down just a few years prior. While Sony and Nintendo focused only on refining the core concepts already in place, Microsoft placed a second emphasis on gameplay, advancing not the core mechanics but the greater experience surrounding the game.

After the sixth generation, we saw even more interesting developments on this front. The seventh generation, as always, brought more power, but new marketing strategies and alternative ways to play were now rampant in the gaming market. The Kinect, Sony’s cross-play, touch screens, and the Wii itself all proved that when new gameplay technology is no longer a byproduct of improved graphical and computational power, companies must find other ways of convincing gamers that their console’s gameplay and overall user experience are the best the market has to offer.

But what if graphics stopped improving? People often like to say, “Graphics don’t matter; gameplay is all that counts,” but what if this personal preference became a reality across the entire market? If graphics and power were no longer important factors, consoles would have to rely on these gameplay changes to thrive, and without the need for power boosts, new consoles would only come along once in a blue moon. Changes drastic enough to require a whole new platform, rather than an add-on or a software update, are few, and a new console that exists only to change the interface, the online menus, and the like is an extravagance if it can’t also boast a significant graphical leap.

Now what if that weren’t so hypothetical? The situation is bound to arise eventually: development costs aside, games will one day look so realistic that there is nowhere left to go, and companies will be forced into this model, at least until the industry’s next major revolution. That day may already be upon us.

When Sony revealed the PlayStation 4, it marked the most noticeable departure yet from the days when a console’s power could be judged by sight rather than by technical specifications. Whereas in the past a new game couldn’t be released on two consoles of separate generations without being completely rebuilt, it is now possible to release a game like Watch Dogs on both the PlayStation 3 and the PlayStation 4, with the PlayStation 3 version looking just as good as PlayStation 4 exclusives like Knack and The Witness.

Surely the PlayStation 4 will be more powerful than its current-generation predecessors; as Quantic Dream’s tech demo shows, the console is capable of truly great things. But will that potential ever be fully realized? The developers of DriveClub, for example, have poured incredible amounts of time and effort into the tiniest close-up details of the game’s cars, yet the demo on screen looked barely more impressive than what the PlayStation 3 can already achieve. The advancements DriveClub brings lie in details so small that they scarcely make a difference when you are actually playing. Even Quantic Dream’s demo is somewhat impractical: such an expressive face is beautiful in close-up, but its impact will surely weaken once an entire world has to be rendered around it in a real game. These little details are satisfying to have, and more power is always warmly welcomed, but will it always be worth it?

This is where a new factor comes into play: business. To us, games are an artful experience. To the developers upon whose profitability the industry depends, however, games are largely about dollars and cents. When we see greater power, we see a more accurate realization of the gaming experience; but when consoles grow in power, so too do the development costs of a game. When a console’s power gets to the point where the only difference between generations can be seen in minute details, the cost to develop games like DriveClub skyrockets. The more money software companies spend perfecting these indiscernible details, the more units they need to sell in order to break even. Ignoring the vicious corporate circle of marketing, mass appeal, and distrust that this creates in the industry, this means that developing high-power games only gets riskier as time goes on.
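To put that break-even arithmetic in concrete terms, here is a minimal sketch; the budgets and the per-copy revenue figure are purely hypothetical numbers chosen for illustration, not real industry data:

```python
# Minimal break-even sketch with hypothetical numbers: the more a studio
# spends on development, the more copies it must sell just to recoup costs.

def break_even_units(dev_cost, revenue_per_copy):
    """Copies that must be sold before a game covers its development cost."""
    return dev_cost / revenue_per_copy

# Assumed figures for illustration only: two budgets and $30 earned per copy.
for budget in (20_000_000, 60_000_000):
    copies = break_even_units(budget, 30)
    print(f"${budget:,} budget -> {copies:,.0f} copies to break even")
```

Under these assumed figures, tripling the budget triples the number of copies that must move before a single dollar of profit appears, which is exactly the mounting risk described above.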

When a new console automatically meant graphics would improve twofold, it made sense for companies to develop games for the newer machines. Yes, development costs would rise, but every new console blew gamers’ minds, and high-power consoles were exactly where the market was heading. Now, however, those efforts pull in fewer consumers because the results are less strikingly impressive. Twenty years ago, embracing rising development costs was a good investment for game developers, but as the cost of the investment climbs and the returns, both visual and commercial, become increasingly marginal, developing cutting-edge software is no longer the guarantee of economic success it once was. And insofar as this is worrisome now, it only gets worse with every new generation.

Console manufacturers have been dancing around it for a few years now, but we are nearing the crucial tipping point where good business and advancing technology no longer go hand in hand, and the eighth console generation may well be the one that rings in a new era for the industry. Whenever that day comes, developers will have to stick to more modest technology in order to stay afloat, and the competition will no longer be one of raw computing power. In an age where all gaming machines are equally powerful, all that consoles can do to advance is introduce new features, and those features will have to be few and far between to prevent a saturation of gimmicky control schemes and a subsequent industry crash. Thus, “console generations” as we know them will cease to exist.