This post is eventually about AI

In the latest issue of PC Gamer UK there’s a feature about Valve designing engines for multiple-core processors. Sounds pretty dry on the surface, but they popularise it by saying that multi-core CPUs are sounding the death knell of the add-in GPU.

The 360 and the Redacted-Station 3 are multiple-core machines (to sane and insane extents respectively), and every new Intel CPU has at least two cores. The technology is here to stay.

Valve think that general-purpose CPUs are better than dedicated GPUs because, and I quote, “if you’re having difficulty running your AI calculations you can just get the user to turn the resolution down to free up cycles”. Interesting. There’s every chance of the GPU becoming on-CPU as standard (as ATI are currently experimenting with), but we won’t lose it altogether – graphics are always going to be important enough to warrant the specialised architecture.

There are a lot of developers going on about how multi-core CPUs are going to let them do AI “properly”, but I’m not buying it. CPU consumption is a coder’s natural excuse for why AI isn’t better, but the truth is that AI is just very hard to do properly and depends far more on genius design than on processor cycles. It’s not that they have all these great AI algorithms which they can’t ship for framerate reasons, it’s that at the stage of development when you get to doing AI it isn’t something you want to put a lot of energy into. They’re suggesting that cycles are all that’s holding them back from making great new games simply by having better AI. If that were true then someone would have made them already with 1999 graphics.
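For what it’s worth, the mechanics of the multi-core pitch are simple enough. Here’s a minimal sketch (in Python, with entirely hypothetical names – no real engine looks like this) of pushing an AI update onto a spare core while the main thread gets on with rendering:

```python
# A toy illustration of the "AI on a second core" pitch: the render
# loop submits the AI update to a worker and never blocks on it until
# it actually needs the decisions.
from concurrent.futures import ThreadPoolExecutor

def update_ai(world_snapshot):
    # Placeholder decision-making: every NPC just picks a patrol order.
    return {npc: "patrol" for npc in world_snapshot["npcs"]}

executor = ThreadPoolExecutor(max_workers=1)  # the "spare core"

snapshot = {"npcs": ["guard_1", "guard_2"]}
future = executor.submit(update_ai, snapshot)  # kick AI off for this frame
# ... main thread renders the frame here while the AI runs ...
decisions = future.result()                    # collect results next tick
executor.shutdown()
```

The point Valve are making is only about where the cycles come from; nothing in this structure makes the decisions themselves any smarter.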

Everyone’s excited about more realistic AI and physics bringing us much more immersive worlds. But emergent gameplay from AI is only suitable for a very small percentage of games. An FPS would be improved if guards didn’t act stupidly, but that’s just bad design anyway. An FPS where a guard reacts predictably to a situation is easy to code. An FPS where a guard reacts unpredictably to a situation is going to break immersion unless the world can handle everything the guard could decide to do. For a game to be more emergent all of its elements need to support that emergence.

We all want human-like AI in freeform RPGs like Oblivion, and the people making these games are already on the path. But hugely better/emergent AI is untenable in almost every other established genre, so do not listen when developers talk about their next FPS having ground-breaking AI. It’s either an outright lie or they just mean better combat AI. And people have been promising better combat AI for ten years with about a 3% hit rate. You do the maths.

12 Responses to “This post is eventually about AI”

  1. Archa:

    Well I think that you have made a good combat AI. 😀

  2. Ian:

    Ready
    Ready
    Ready
    Ready
    Ready

    … maybe not such a good speech AI though 😉

  3. shaun:

    It’s really funny too, because I was actually thinking of doing a writeup on this earlier this evening when I read about the aforementioned article.

    The funny thing here is that while CPUs and GPUs each have their own independent applications, each bolstering performance in its own category, the idea of combining the two isn’t so far off.

    Let me take two cases here: the aforementioned PS3 & XBox360.

    Both systems are capable of some quite jawdropping graphics. Both have a “decent” video card, and some insane CPU.

    Now I wrote an article a while back, which I actually lost somehow (don’t ask) that highlighted the ups and downs of both systems.

    I won’t bother you by going into specifics of what is better and why, so much as to bring up the points of integrating technologies.

    My personal view on this would be more to try to (and it’s been done) use a specialized GPU (a vastly more powerful math machine than a conventional CPU) to do AI stuff, such as the SPEs on the Cell processor have been designed to do.

    What you have here is a special purpose miniprocessor to offload one of many tasks. A daunting game can offload some excess (read: repetitive) graphics load onto one of the SPEs to leave more room for more demanding calculations (read: physics), or to perform calculations for physics and/or control AI. The two work in unison so that you don’t have a dedicated line between the two.

    In this case, RSX and Cell are one and the same, all the while maintaining their respective pools and names.

    In the case of the 360, a separate (not necessarily worse) angle was taken, in making a very clear line between Xenos and Xenon.

    No specialized processing units, just flat out 3 processors to do the job for you, and a badass video processor that has about 10x the memory bandwidth of any GPU on the market today.

    I’m not sure where I was going with this anyways, but I’m sure you see my point…

  4. Kylotan:

    You’re right; the idea that a multi-core CPU will finally let them do AI properly is much the same as saying SSE or MMX would finally let them do AI properly, or that the move from 386SX to 386DX would let them do AI properly. Multi-cores are just the next incremental development – with the disadvantage that they’re harder to code for. Let’s be clear about one thing though: depending on which benchmarks you use, processing power has increased by a factor of between 10 and 100 in the last ten years, but the Oblivion combat AI is barely any better than the Doom combat AI back in the early 90s.

    The problem isn’t CPU usage – it’s developer usage. All the hype is about graphics, so it’s hardly surprising that far more game developers know about the latest graphics card features than know about useful AI research done 20 years ago. Similarly, the focus on FPS games in the last decade has meant that extensive AI becomes pointless – after all, each character has a lifespan of about 10 seconds if they’re lucky.

    But if you put some knowledgeable developers on a project and actually make AI an important feature, you can get some good results, as in the Thief series, or as in various strategy games like Civilization. Even The Sims has AI that puts most ‘real’ games to shame, because they weren’t afraid to do away with flashy graphics to have more time for behaviour.

  5. Paul:

    When you post something extended, people will fissure off parts to “tackle”, which is often irritating, but it’s also what I’m about to do right now…

    “We all want human-like AI in freeform RPGs like Oblivion, and the people making these games are already on the path. But hugely better/emergent AI is untenable in almost every other established genre…”

    Fighting games? Sports sims? RTS’s? Why is it untenable? In those kinds of games, AI that “learns” would be so valuable to the single-player experience, because those genres are so keyed-in to “play styles”. Surely, “emergent” gameplay is the holy grail in those genres, in that single-player would become like playing an elaborate intelligence rather than simply memorizing patterns?

    In fact, in what way does this not apply to *every* genre?

    But…”It’s not that they have all these great AI algorithms which they can’t ship for framerate reasons, it’s that at the stage of development when you get to doing AI it isn’t something you want to put a lot of energy into.” Absolutely agree, sadly. The poor single-player gamer is always going to be left impoverished by that…until someone decides to corner that market, that is.

  6. Ian:

    Because emergent gameplay by definition isn’t under the control of the designer, and unless you’re making a sandbox game it’s detrimental. If I trap a man inside my computer and give him control of another Oblivion player, I can hardly guarantee he’ll add to immersion.

    Emergent AI is by its nature dangerous. AI that “learns” and basically acts like a human playing a competitive game isn’t what I’m talking about – it’s really only laziness that stops it appearing more often, as UT’s bots show.

    Valve are talking about intelligent *characters*, NOT intelligent opponents. You’re saying intelligent opponents are a good thing, and I completely agree. Intelligent characters are a completely different matter though.

  7. Paul:

    “Valve are talking about intelligent *characters*, NOT intelligent opponents. You’re saying intelligent opponents are a good thing, and I completely agree. Intelligent characters are a completely different matter though.”

    Is that really a distinction? Surely a character (like a human) will act like an opponent in an appropriate situation, like combat? Isn’t the line between character and opponent just based solely on the number of motivations you give an NPC and the relevance of those motivations to the situation in hand? So, isn’t it analogue rather than digital, in that there’s no “switch” where a game needs to be either “emergent” or “linear”? Doesn’t the game world just have to be more supportive the more “emergent” a game is?

  8. Ian:

    There certainly is a distinction between Character and Opponent: it’s the distinction between game (in the classical sense) and immersion.

    Let’s create a new metric for games: their Immersion versus Mastery ratio (IM ratio). Immersion is really feeling like you’re in a situation, Mastery is being “good” at the meta-game underneath the aesthetic.

    RPGs have a very high IM ratio: they’re all about feeling you’re in a story/situation and not really about being good at combat or something. There’s some skill in Oblivion’s combat, but not much.

    Chess has a very low IM ratio – it’s all about beating your opponent.

    Far Cry has a fairly even IM ratio. From the Mastery side, when you play it you are attempting to beat the computer opponents. Far Cry would be much less fun if the bad guys were much more intelligent. They’d just beat you, for a start. The meta game in Far Cry is not that you’re beating a load of humans, it’s that you’re beating a load of rule-based AIs who have strengths and weaknesses you understand. If they made the AI more advanced so that they grouped together and flushed you out more effectively, they’d probably have to give them less health. It’s a game: a very good player should be able to defeat most situations perfectly; an average player should take a fair amount of damage; and a bad player should need to practice to get past.

    “Opponents” are agents of the actual game. Boxes in box puzzles, bottomless pits in platform games, walls in racing games, these are all agents. Just because an agent is taking the form of a human being does not make it any different – it’s there to be part of the rules of the game.

    “Characters” on the other hand are just the same as other aesthetic things, like our cloth code or sun glare or whatever. To be realistic and therefore more immersive cloth needs to flap in the wind, i.e. it needs to act like cloth. And for human characters to be more realistic they need to act like humans.

    In some situations, we want Opponents to have certain human qualities, and nowhere is this more apparent than in “bots”. We want UT bots to miss occasionally and to say things when they beat us. We (probably) don’t want them to start chatting you up if your player name is Sarah-16.

    To return to my original point: most genres do not need humans as aesthetic devices. It would be fun if your men in C&C were chatting about Poker or women when you came across them to give them orders, or if one of your units really hated another because he stole his beer or something, but it’s hardly going to improve the game ten-fold.

  9. Paul:

    I think the point about immersion vs. mastery is sound and useful, but tangential in this context because I’m talking about how an AI “personality” could inform an NPC’s status as an “opponent” so to my mind the two are homogeneous.

    All we’re doing in reality when we add “character” to an NPC is increasing the number of rules that NPC is using to define its behaviour…why is that inherently bad?

    So, “Far Cry would be much less fun if the bad guys were much more intelligent” is a really good example of the point I made before about having a digital view of this issue – a switch is thrown, and the game goes from being balanced to being much too hard – that doesn’t follow logically at all. If I add one more behavioural rule for the bad guys, are they then too clever? Two rules? Why do I have to add rules which make them better able to defeat the player, anyway?

    You picked chess as an example because by doing so you sidestep the argument completely. The whole point of chess is that it’s an abstract game, where pretty much the only parts of a player’s personality (at a general skill level) which come into play are the parts they’ve devoted to learning the skill of playing chess.

    If I’m trying to come up with a strategy to make one army beat another in real life, that’s MUCH more complex than chess because of the intelligence of the “agents of the game” -essentially there’s a lot more rules than chess. But why does that have to be a worse game or a less interesting exercise?

    “…were chatting about Poker or women when you came across them to give them orders, or if one of your units really hated another because he stole his beer or something, but it’s hardly going to improve the game ten-fold.” Yeah, that wouldn’t be CnC, but a game where your units could start getting pissed off with each other if you handle them badly – fucking awesome! It could certainly “improve the game ten-fold” if the game was balanced and fun. It would just be harder to balance and make fun simply because…there’s more rules. But the viability of it is directly proportional to the number of rules.

  10. Ian:

    “You picked chess as an example because by doing so you sidestep the argument completely” – what I did was to define the difference between aesthetic and meta-game, that was what the entire comment was really about.

    I think that there is a very clear divide between aesthetic and meta-game. I think that they cause two separate forms of enjoyment. Enjoyment gained from aesthetic is the “immersion” – very similar to the enjoyment one gets from a book or film. Enjoyment gained from the meta-game is the “mastery” – pretty much exactly the enjoyment you get from playing chess.

    I propose that these two forms of enjoyment are completely different (although of course a certain game element can provoke both forms).

    Now to the main point, and why you think I’m saying there’s some digital divide.

    A “character’s” job is to be human-like to improve atmosphere and immersion. Thus, anything which makes it more human-like is very likely to make it more effective at its job.

    An “opponent’s” job is to form a part of the meta-game. It may happen to take the aesthetic form of a human.

    My whole point revolves around the following proposition: there is no correlation between making gameplay mechanics more realistic and improving the meta-game.

    Here are two things which would make Far Cry’s bad guys more human-like but would (probably) negatively impact the game: 1. Not forgetting about seeing you. 2. Using very slow (but much more effective) tactics, such as creating a perimeter, calling in reinforcements, and waiting half an hour to come in and try and get you.

    You can pretty much haphazardly add human-like features to NPCs and improve the immersion of the game.

    You cannot haphazardly add gameplay altering features and expect to improve the meta-game.

    There is a very popular method of creating AI for opponents in games, which I’ve heard talked about in two interviews with AI coders: make it so hard it beats you all the time, and then dumb it down until it’s balanced right.
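    As a toy illustration of that method (every name here is hypothetical – this is no shipping game’s code), the “dumbing down” is often little more than one scaling knob on an otherwise near-perfect model:

```python
import random

def shot_hits(aim_skill, distance_m):
    # A hypothetical enemy marksman: start from a near-perfect accuracy
    # model, then scale it by aim_skill in [0, 1] -- the balancing knob.
    perfect_accuracy = max(0.05, 1.0 - distance_m / 100.0)
    return random.random() < perfect_accuracy * aim_skill

# The tuning pass: start at aim_skill=1.0 ("beats you all the time")
# and lower it until playtesters start surviving.
```

    The designer never touches the “intelligence” at all during balancing; only the knob moves.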

    We use “realism” as inspiration for gameplay mechanics all the time. We don’t come up with completely random ideas for mechanics (“what if this bad guy suddenly dug a hole in the ground and hid there until he heard you use a chainsaw – would that make our game better?”) nearly as much as looking at the real aesthetic we’re copying (some special forces guy running round an island full of mercenaries), looking at all the cool things that happen in real life that our game doesn’t do yet, then choosing one that sounds promising and testing it. The two reasons we usually take that approach are that it’s a good subset to narrow the range, and that it’s likely to have the added benefit of improving the immersion.

    But if you take a random added-realism mechanic it’s not got a good chance of improving the meta-game. You probably think I’m being pedantic – why would we add a random mechanic? The better a designer someone is, the better the “hit rate” they’ll have with a mechanic they just thought of and briefly analysed in their head. Let’s say the average hit rate for designers is 10% – obviously this is an important point to agree on otherwise the next bit won’t make sense. 10% is more than my hit rate, but then I’m not experienced enough yet.

    “All we’re doing in reality when we add “character” to an NPC is increasing the number of rules that NPC is using to define its behaviour…why is that inherently bad?”

    Any added rule has a 90% chance of failure. If you add a gameplay-altering mechanic that improves the immersion, it’s still 90% likely to fail – more so if the mechanic was selected on purely aesthetic criteria.
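    That figure compounds quickly. A toy calculation (purely illustrative – the 10% is my guess from above, not data):

```python
# If each gameplay-altering rule independently has a 0.1 chance of
# improving the game, the odds of a whole batch panning out collapse fast.
hit_rate = 0.10

for n_rules in (1, 2, 3, 5):
    p_all_good = hit_rate ** n_rules
    print(f"{n_rules} rule(s): P(every one improves the game) = {p_all_good:g}")
```

    Which is why you can’t just pile on “character” rules and assume the meta-game survives.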

    “Why do I have to add rules which make them better able to defeat the player, anyway?”

    If you make additions which have no effect on the gameplay (like better speech sample selection or varying beard lengths), then there’s no problem. Any addition which affects the gameplay will need to be tested, and subject to that 90% chance of failure. It’s not just directly defeating the player, any change to the game will either improve it or make it worse.

    “A game where your units could start getting pissed off with each other if you handle them badly – fucking awesome! It could certainly “improve the game ten-fold” if the game was balanced and fun. It would just be harder to balance and make fun simply because…there’s more rules. But the viability of it is directly proportional to the number of rules.”

    That’s one fucking big if, which you do justify a bit, but I don’t think enough. I think you could make an RTS with that mechanic and it would turn out to be a fun addition. But how much of the eventual implementation would be ludicrously unrealistic? There’d have to be a way of making the managing of the relationships fun. Once you had that system, would making it more realistic stand a good chance of improving it?

    Most genres need a pretty big balancing act when you add new features. Realism is a good inspiration, but quickly loses its usefulness once you get into the implementation.

  11. Paul:

    Kylotan’s comment got caught in the web of spam and so appeared a little bit late – apologies.

  12. Paul:

    1. “What I did was to define the difference between aesthetic and meta-game, that was what the entire comment was really about.”

    But the idea of AI facilitating the area of overlap between the two domains is the most interesting thing, so your example “did violence” to the discussion by eliminating one. Yes, it’s possible to fissure them easily when you discuss them, but the whole point about better AI is that it will bring them closer together. That’s why I used the example of war, because it revolves around the intersection of a game-like mechanic with the interesting distribution of the personalities of the commanders.

    Games which derive their gameplay from rules established by their aesthetic like Armadillo Run can blur that distinction. But see, realism vs. coherence below for that in the context of AI…

    2. “My whole point revolves around the following proposition: there is no correlation between making gameplay mechanics more realistic and improving the meta-game.”

    I completely agree with this (if you mean “necessary correlation”) and all your supporting justification for it – that’s why I’ve not mentioned “realism” yet, because it has the connotation of being game-breakingly bad, because you assume that it should mean “more like the real world in a way that’s bad for the game”.

    How about the term “more-aesthetically-coherent AI” – AI which improves immersion and gameplay simultaneously?

    Let’s look at your Far Cry examples – “1. Not forgetting about seeing you. 2. Using very slow (but much more effective) tactics, such as creating a perimeter, calling in re-enforcements, and waiting half an hour to come in and try and get you.”

    Yes, these are obviously both bad, because they’re not aesthetically coherent. They don’t make the guards better *characters* – they give them more human-like behaviours. Characters don’t have to be like humans, they just have to be interesting in the way they fool the brains of stupid people into believing that they have a personality. You can do that by making great gameplay and then bolting things on to make the guards interesting, or you can go Dwarf Fortress on your ass and make the whole thing more based around the interaction between the AIs. Both are great working methodologies, but what you’re still saying is that a game has to be either completely in one camp or completely in the other and I disagree.

    So, how about rules which create immersion and define gameplay simultaneously? It’s been done with physics in the aforementioned Armadillo Run, and games like Oblivion take a stab at it with AI.

    You still seem to be saying that games switch from being viable to inviable as soon as this is attempted, whereas I think my COMPLETELY INCORRECT SENTENCE WHERE I USED THE WORD “DIRECTLY” INSTEAD OF THE WORD “INVERSELY” BECAUSE I AM A MORON is closer to the mark – so… “But the viability of it is inversely proportional to the number of rules.”

    3. “Any added rule has a 90% chance of failure…” All you seem to be arguing here is that it’s hard to make fun games with complex AI – damn straight. That’s not really the point.