yeah.
Too early as in probably decades too early. Well, it depends.
LLMs (Large Language Models) like ChatGPT are trained on a huge chunk of the internet and gigantic numbers of scanned books, as well as automatically generated video subtitles. OpenAI reportedly downloaded pretty much the entirety of YouTube, generated subtitles for it and then used those for training.
This is an absolutely gigantic amount of training data. It's also why AI hasn't really been getting all that much better in recent years: you can't just add more training data, because it doesn't exist. You can only improve the training data, and to improve it you need humans to either generate data (write texts) or filter existing data by quality.
AoE2 has a few hundred official scenarios plus a lot more fan-made ones, but the fan-made ones are not all of equal quality.
Generating text (like ChatGPT does) is a linear thing: you just predict the next token (the next letter or word fragment), over and over.
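To give a feel for what that loop looks like, here's a toy version in Python that just counts which word tends to follow which in a tiny piece of text. Real models do the same "predict, append, repeat" loop, only with a giant neural network instead of a lookup table:

```python
import random

# Toy next-token model: remember which word followed which in a tiny corpus.
corpus = "the knight attacks the castle and the castle falls".split()
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

def generate(start, n=8):
    words = [start]
    for _ in range(n):
        options = follows.get(words[-1])
        if not options:                       # dead end: nothing ever followed this word
            break
        words.append(random.choice(options))  # predict the next token, append, repeat
    return " ".join(words)

print(generate("the"))   # e.g. "the castle and the knight attacks the castle falls"
```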
Generating images is a 2D thing. Since it's just 3 numbers per pixel (Red, Green, Blue), it's not too complex.
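In code an image really is just a height x width x 3 grid of numbers, for example with numpy:

```python
import numpy as np

# A 256x256 RGB image is just a 3D array of numbers in the range 0-255.
image = np.zeros((256, 256, 3), dtype=np.uint8)
image[:, :, 0] = 255    # max out the red channel everywhere -> a solid red image
print(image.shape)      # (256, 256, 3)
```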
AoE2 has multiple systems on top of each other that need to interact correctly.
The basis is a 2D map. This part would be relatively easy to generate. We already have random maps after all.
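The terrain layer really is just a 2D grid filled with tile types, something like this toy generator (terrain names and weights are made up, not actual AoE2 terrain data):

```python
import random

# Placeholder terrain types and how often they should appear.
TERRAINS = ["grass", "forest", "water", "gold", "stone"]
WEIGHTS  = [0.70,    0.15,     0.10,    0.03,   0.02]

def random_map(size=120):
    """Fill a size x size grid with weighted-random terrain tiles."""
    return [[random.choices(TERRAINS, weights=WEIGHTS)[0] for _ in range(size)]
            for _ in range(size)]

game_map = random_map()
print(game_map[0][:5])   # e.g. ['grass', 'grass', 'forest', 'grass', 'water']
```

Real random map scripts are of course much smarter than that (player starting areas, balanced resources, connected land), but the underlying data structure is that simple.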
The 2nd aspect is objects: units, buildings, resources. Those follow a lot of rules. Rules are easy to code, but hard to teach to a generative AI, because generative AI doesn't work like normal software.
Is the AI going to understand which units belong together and which don't? Like, does it generate Lions on a snow map, or give a player units their civ can't train, like Light Cavalry for the Aztecs?
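Those consistency rules are trivial to write down as plain checks; the problem is that a generative model has to absorb them implicitly from examples instead. A hand-written validator is basically just lookup tables (the tables below are illustrative stand-ins, not the full game data):

```python
# Toy validity checks a scenario generator would have to respect.
TRAINABLE = {
    "Aztecs": {"Eagle Warrior", "Jaguar Warrior", "Monk"},       # no cavalry
    "Franks": {"Knight", "Light Cavalry", "Throwing Axeman"},
}
BIOME_ANIMALS = {
    "snow":   {"Wolf", "Deer"},
    "jungle": {"Jaguar", "Elephant", "Deer"},
}

def unit_allowed(civ, unit):
    return unit in TRAINABLE.get(civ, set())

def animal_fits_biome(biome, animal):
    return animal in BIOME_ANIMALS.get(biome, set())

print(unit_allowed("Aztecs", "Light Cavalry"))   # False -> invalid scenario
print(animal_fits_biome("snow", "Lion"))         # False -> looks out of place
```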
And then there is the whole trigger thing. Does the AI understand what triggers are possible, how they work, and how to make them function?
Let's say the mission is to move a relic cart from point A to point B.
Does the AI understand how to make it possible to move from point A to point B? Maybe there is a cliff or water blocking the path.
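Checking that B can actually be reached from A is a textbook flood fill / breadth-first search over the walkable tiles; the hard part is that the generator has to realize it needs that check in the first place. A rough sketch, using the same kind of tile grid as above:

```python
from collections import deque

def reachable(game_map, start, goal, passable=frozenset({"grass"})):
    """Breadth-first search over walkable tiles: can we get from start to goal?"""
    rows, cols = len(game_map), len(game_map[0])
    queue, seen = deque([start]), {start}
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in seen and game_map[nr][nc] in passable):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

tiny_map = [["grass", "water", "grass"],
            ["grass", "water", "grass"],
            ["grass", "water", "grass"]]
print(reachable(tiny_map, (0, 0), (0, 2)))   # False: a water column blocks the path
```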
Does the AI understand how to communicate to the player where point B is? Like, does it place a marker on the map?
Does the AI know how to make the mission challenging? Are there the right number of enemies? Are point A and point B even far enough apart from each other?
Getting the challenge right is the hardest part. How do you test whether the scenario is even doable, both technically and practically?
Like, how do you test that the winning condition even works, prevent softlocks and so on?
And then, how do you test that the mission is not too easy and not too hard?
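At the very least you'd want an automated sanity pass over every generated scenario before a human ever looks at it. Something along these lines, reusing the reachable() helper from above (the scenario fields are made-up placeholders, not a real scenario format):

```python
import math

def sanity_check(scenario):
    """Cheap automated checks only; passing them says nothing about being fun."""
    # Hypothetical scenario object: .game_map, .relic_start, .relic_goal and
    # .victory_triggers stand in for however the data would really be stored.
    problems = []
    if not scenario.victory_triggers:
        problems.append("no victory trigger at all -> guaranteed softlock")
    if not reachable(scenario.game_map, scenario.relic_start, scenario.relic_goal):
        problems.append("point B can't be reached from point A")
    if math.dist(scenario.relic_start, scenario.relic_goal) < 20:
        problems.append("A and B are so close together the mission is trivial")
    return problems
```

And checks like that only catch the obvious failures.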
You would have to develop an AI that plays like a human player to effectively test those scenarios, because testing with humans would completely ruin the point of making them AI-generated.
Well, and in the end there is also the question of whether it's even going to be fun and interesting.
AI usually generates very average and boring things because it's not really creative. It's trained on a lot of data, so it's very likely to spit out whatever is most common in that training data.
It can't come up with new kinds of missions; it will just remix existing ones.
Current machine-learning-based AI that plays video games (like DeepMind's agents) requires very powerful hardware and only works on certain kinds of games.
I don't think most people have a server farm full of GPUs at home.
And even then it's not human-like at all, because it can't adapt. It is only based on its training data.
For example, superhuman Go AIs of the AlphaGo type (the ones that beat the Go champions) have been beaten by amateur players who learned one specific exploit strategy.
Also, that kind of AI needed over 100 GPUs to run.
The AI can't adapt to new strategies and it doesn't even understand the rules of the game.
Go is a lot simpler than AoE2, and it's a turn-based game instead of a real-time one.
I don't think we will have machine-learning-based AI playing AoE2 any time soon.