Why don’t we have an AI to do testing?

I mean an AI at about 2k Elo; developers could then use it to test any buffs and nerfs they think of, and we could get the civs better balanced.

Because there is no such thing as a “2k Elo AI”. The AI doesn’t think; it is stupid. I think a 1k player is much better than the Extreme AI.

Best regards

5 Likes

The Age AI doesn’t work like a human. Humans can check for interactions between bonuses and units and adapt on the fly. Maybe you could create a bot like the ones that play chess, but in that case you may as well become an AI engineer.

I was thinking the same: someone who can code an AI could make a real AI (not a scripted bot) that watches recorded games of top-ranked players and improves itself. Then we could test ourselves against a more realistic, human-like computer.

If coding such an AI were that easy… then someone would already have created one.

But it isn’t

3 Likes

I get where you’re coming from. These days there are experiments with AIs that learn to play a game all by themselves, by “simply” playing the game tens of thousands of times against basically themselves and finding the best strategy that way. That is a completely different class of program from the current AoE2 AIs, which are hand-programmed by humans based on rules humans have come up with.

Such an AI could indeed be useful for testing balance changes, but with two important caveats:

  1. I think they’re currently still quite expensive, both in the processing power they use and in the expertise needed to develop them properly.
  2. They’re best in limited scenarios. Give them a Huns war on one specific generation of Arabia and they might train themselves to be better than the top human players, as complex as such a match already is. But having an AI learn to play the game on any generation of any (reasonable) map with any civ vs any civ? That’s probably beyond their capability at this point.

Point 2 there is the big obstacle. It doesn’t really matter whether the AI’s performance with a newly tuned civ on Arabia vs Franks is a good model for how well humans will do with that civ if the model doesn’t carry over to matches against other civs on other maps. So between the resources of a team that is still fairly modest by AAA industry standards and the limitations of these AIs, I don’t see them as a real gamechanger right now.

6 Likes

You are right, but that’s why it is a challenge. And we could have a tournament, like AlphaGo vs pro players, right? It would also be great advertising for the company.

Good points. I want to add some more remarks:

  • Balancing is often done in relation to high-level players, as it is assumed that low-level players do not have all the skills needed to play the civilizations optimally. The same argument applies to an AI vs high-level players: the “best” AI is superior to all high-level players in multitasking (APM), response time, data processing, etc. Using such an AI might therefore result in balancing that isn’t suitable even for pro players. So I think the AI’s capabilities must be restricted to make it more human-like.
  • Assume the AI exploits an unknown civ-specific bug to gain an advantage. The balancing would then nerf this civ to compensate for the exploit. Ideally the devs recognize this and fix the bug, but that requires evaluating not only the final results of the AI matches but also how and why an AI won.
    As long as the devs and players do not find the bug and its exploit, you end up with a civ that human players consider too weak (depending on how large the impact of the exploit is).
  • The other AIs mentioned, like AlphaGo, have a clear target: win the game. You can set the same target for AoE: win the game. Using it for balancing, however, is an additional problem on top of that. First, you have a set of parameters for the game (civ-specific, map-specific, etc.). Then you train your (civ-specific?) AIs to learn the best strategies for this parameter set. Then you evaluate the results and look for parameter changes that improve the balance. Then you update the parameters and train the AIs again, repeating until you are satisfied with the balancing.
  • This brings us to the next problem: when is the game balanced? What is the metric for it? Must all civs have an approximately 50% win rate on all maps? Which parameters are allowed to be manipulated during the balancing process? A naive guess of mine is that an AI without any constraints would try to equalize all civs, thus eliminating any effect of civ-specific bonuses and technologies.
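To make that train–evaluate–adjust cycle concrete, here is a rough Python sketch using a “maximum deviation from 50% win rate” metric. Every name and number is made up, and the expensive self-play training step is replaced by a trivial stub, since that is the part nobody can actually afford:

```python
# Hypothetical sketch of the balancing loop described above. All names
# are invented for illustration; simulate() stands in for the (hugely
# expensive) step of training AIs and playing thousands of games.

def balance_metric(win_rates):
    """Largest deviation of any civ's win rate from 50%."""
    return max(abs(r - 0.5) for r in win_rates.values())

def simulate(params):
    # Stub: pretend each civ's win rate tracks a single "bonus" knob.
    return {civ: 0.5 + 0.1 * (bonus - 1.0) for civ, bonus in params.items()}

def tune(params, target=0.02, rounds=20):
    for _ in range(rounds):
        win_rates = simulate(params)
        if balance_metric(win_rates) <= target:
            break  # "balanced enough" by the chosen metric
        # Crude update rule: nerf over-performers, buff under-performers.
        for civ, r in win_rates.items():
            params[civ] -= 0.5 * (r - 0.5)
    return params

tuned = tune({"Franks": 1.3, "Huns": 0.8})
```

Note that even this toy version bakes in an answer to the “when is it balanced?” question (max deviation below 2%); in reality choosing that metric is exactly the open problem.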

1 Like

To add to this, there are a few other problems with using this type of machine learning-based AI as well.

  1. It’s basically impossible to control their behaviour. If your AI exhibits some undesirable behaviour, there’s not really anything you can do about it. This is why when big tech companies’ chatbots say offensive things, they have to pull the plug – fixing them is impossible.
  2. There’s no way to control their ability level. You want an AI that is “about 2k elo”? Too bad – there’s no way to tell a machine learning algorithm that that’s the desired outcome. Even if, by some weird coincidence, it ends up “about 2k elo” in the sense that it has about a 50% win rate against human players with about 2000 elo, it probably won’t play the game in the same way that they do.
  3. This type of AI works ok for turn-based games, but it’s much more computationally expensive to apply it to a real-time game. I don’t know for sure, but I think training such an AI would take a long time and/or a lot of computing resources, and then few computers would be able to run it.
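On point 2 above: “about 2k elo” is only a statement about expected scores, not about play style. The standard Elo expected-score formula makes this explicit (the code is just a direct transcription of that formula):

```python
# Expected score of player A against player B under the standard
# Elo model: E_A = 1 / (1 + 10^((R_B - R_A) / 400)).

def expected_score(rating_a, rating_b):
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))
```

Equal ratings give an expected score of exactly 0.5, and a 400-point gap gives roughly 0.91 for the stronger player. Nothing in this model constrains *how* the games are won, which is why an AI with a 50% score against 2000-rated humans can still play nothing like them.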

Bear in mind that AIs like AlphaGo are theoretical research experiments. They’re not really designed or intended for humans to play against, other than as a demonstration of the technical achievement of their creation.

This also feels like a sensible place to point out that balancing around players at 2000 elo is nuts. I can’t find any recent data about what proportion of players are at 2000+ elo (if someone knows where to find it, please tell me!). But from the historical data I can find, it’s less than 1% of the player base, maybe even less than 0.5%. If you’re going to make decisions based on data, don’t make those decisions based purely on outliers.

4 Likes

If your points 1 and 2 are correct, that means the way we are playing is wrong, just like what AlphaGo taught us.

Oh, the way I’m playing is definitely wrong!

But more generally, I don’t necessarily agree. Even assuming the AI actually “works” in the sense that, like AlphaGo, it surpasses all human players, it’s totally plausible that the way it plays doesn’t work well for a human player – or, worse, works but isn’t fun for a human player.

We could limit its eAPM to 50.

It is not necessarily related to APM. For example, the decision to engage in combat or to flee does not require many actions. Here, the AI would be superior to humans at evaluating whether a fight is beneficial or not; at the very least, it would reach a decision more quickly. That the outcome of a battle is related to APM is another story, though it can be factored into the decision making.
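As a toy illustration of how cheap that decision is computationally: a crude Lanchester-square-law estimate (a deliberately simplistic model where fighting strength scales with the square of unit count times per-unit effectiveness; real AoE2 combat is far messier, and all names here are made up) needs a single comparison, not APM:

```python
# Crude fight-or-flee check based on Lanchester's square law.
# Strength ~ (unit count)^2 * per-unit effectiveness. This ignores
# terrain, micro, counters, reinforcements -- it's only a sketch.

def should_engage(my_units, my_power, enemy_units, enemy_power):
    """Return True if our force is expected to win the exchange."""
    return my_units ** 2 * my_power > enemy_units ** 2 * enemy_power
```

Under this model, 20 units can beat 25 only if each is more than ~1.56x as effective; the point is that the AI can evaluate this instantly, while a human has to eyeball it mid-fight.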

Developing and training a program like AlphaGo is far more expensive than DE itself. So no, it is not feasible to develop a 2k AI. MS should put its resources elsewhere rather than into developing something that wouldn’t even be able to beat a 1500.

1 Like

Sorry to call this out specifically, but the idea of some dude casually whipping together a 2k+ machine learning algorithm for AoE in their spare time made me chuckle. People in this thread should really look into AlphaStar, an AI that plays all three StarCraft II races at a level above 99.8% of ranked players. It took an entire team of experts, specialized hardware, cutting-edge algorithms, two years of work, and hundreds of years of simulated games to make it. It’s so hard to do that the only other example I know of for strategy games is Dota’s OpenAI Five. Even if Microsoft decided to use AoE2 as a machine learning testbed, simulating matchups is prohibitively expensive, and the games would still need to be analyzed by a human to figure out the balance implications. At the end of the day, it’s just way easier to pull data from the existing player base and pay attention whenever the 1600+ crowd reaches a consensus on something.

3 Likes

The reason it’s impossible is that AoE2 doesn’t have that kind of programming support. With StarCraft we were able to write decent AIs (even without a huge datacenter and a trained model like AlphaStar) because there were APIs to read the game state and send individual commands. We could get tens of thousands of APM with no problem.

If AoE2 had something similar, we could make a decent AI. People who don’t know what they’re talking about always come into these threads and conflate the difficulty of making a grandmaster-level AI with the impossibility of making a 2k-level AI, but those are not the same task.

As for balance testing, it wouldn’t really matter. The game is imbalanced because the ladder is exploitable and the maps suck, not because there is a lack of information about what is strong or weak. It’s not like StarCraft, where the designers actually cared about making the factions balanced. The DE designers pretend the civs are balanced because each has good maps and bad maps, except their ladder lets you change your civ after seeing the map, so none of that matters. No amount of AI testing will change that.

1 Like

If developing an AI with machine learning is so easy, why don’t you make one yourself?