

Turn 1: I deploy 2 on Lord Turnip Cargo Port, 2 on Cowboy's Tower, and 14 on Pikachu's Factory. So I show my game and ask some questions. It's absolutely clear to me that the main strategy should be concentrating on breaking the AI's bonuses while expanding, but even knowing that, I can't pull it off!
Cons: predictable, and it gets boring after a while. For any given situation it will always do the same thing, so it's less of a challenge.

Pros: generative and not easy to predict. The AI will eventually learn generative behaviors and strategies, as well as very deep strategies way beyond what a behavior tree can implement, especially if you can train it with both the curiosity and memory modules enabled.

If the abilities of any given spell can be described as a fixed set of attributes, your input data could, for example, contain the attributes of the spell cast by the opponent rather than a one-hot encoding of the spell itself, of course along with available elixir and whatever other relevant gameplay attributes. Likewise, the action output data could contain the desired attributes of the spell to cast, and your code could use heuristics to find the closest matching spell to the requested attributes. However, that attribute-to-spell matching heuristic will need to be 100% deterministic. Otherwise it is very confusing for the AI, because it would be like: "Hey, one minute I request this set of attributes and the impact is X; the next time I request the same set of attributes the impact is Y." Then the curiosity module goes off on a mission to investigate the discrepancy instead of learning how to play the game.

Re: "Is it possible to give a hard-coded behavior to certain agents?"

Sorry, that question does not make much sense in the context of ML-Agents: don't train an AI unless you need one. Simply hard-code that entity's behavior, or use a behavior tree, etc. Don't hook ML-Agents up to it.

For a familiar that follows the player, you could simply use the A* Pathfinding Project and set the Familiar's seeker target to the player's transform, then set the end-reached distance to a bigger value, like 1 or 2 meters. Behavior Designer Movement Pack might work for the Familiar game entity; Polarith might also do a good job; and there are a lot more on the asset store under Tools -> AI. Or maybe code your own logic for where to set the seeker target, waypoint systems, etc.

Re: "Is it possible to get multiple versions of ML agents for different difficulties?"

That's a tough one, as it all depends on the game itself. You could abort training early to create an easier AI. In some games (depending on the game mechanics), like an FPS, you could increase the decision interval: the AI is trained at, say, 1 second per decision, and to make a dumber version you could run it at, say, 1.5 seconds per decision. In some games you could add a noise factor to the input sensors (kind of like a dropout layer), where you randomly zero out some sensor values to make the AI less effective.

If you do take the behavior-trees path for part or all of your project: for behavior trees, the first three assets stand out on the asset store. This is an implementation of the NEAT algorithm, which evolves a neural network using genetic programming; the negative is that it is all CPU-based, so training is NOT fast.

Since I didn't get enough detailed answers, I try asking more detailed questions.
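The attribute-based spell encoding and the deterministic attribute-to-spell matching heuristic described above can be sketched roughly as follows. This is a Python illustration only (a real ML-Agents project wires observations and actions up on the Unity/C# side), and the spell names and attribute numbers are entirely hypothetical:

```python
# Hypothetical spell table: each spell is a fixed attribute vector
# (damage, radius, duration, elixir cost). Values are made up for illustration.
SPELLS = {
    "fireball": (9.0, 2.5, 0.0, 4),
    "freeze":   (0.0, 3.0, 4.0, 4),
    "zap":      (4.0, 1.5, 0.5, 2),
    "poison":   (6.0, 3.5, 8.0, 4),
}

def encode_observation(opponent_spell, elixir):
    """Observation = attributes of the opponent's last spell plus game state,
    instead of a one-hot spell id, so similar spells produce similar inputs."""
    return list(SPELLS[opponent_spell]) + [elixir]

def closest_spell(requested):
    """Deterministic attribute-to-spell matching: squared distance to the
    requested attribute vector, with ties broken by spell name so the same
    request always maps to the same spell."""
    return min(
        SPELLS,
        key=lambda name: (
            sum((a - r) ** 2 for a, r in zip(SPELLS[name], requested)),
            name,
        ),
    )
```

Because the distance is a pure function and ties are broken by name, the mapping is 100% deterministic, so the curiosity module never sees the same requested attributes produce different spells.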

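The difficulty knobs suggested above — running a trained policy at a longer decision interval than it was trained at, and dropout-like zeroing of sensor values — can be sketched generically. A minimal Python sketch, assuming a `policy` callable and a flat list of sensor values (both hypothetical stand-ins for the engine-side wiring):

```python
import random

def noisy_observation(obs, zero_prob, rng=random):
    """Dropout-like sensor noise: randomly zero out some sensor values
    so a fully trained policy plays below its real strength."""
    return [0.0 if rng.random() < zero_prob else v for v in obs]

class ThrottledPolicy:
    """Run a policy trained at e.g. 1 decision/second at a longer interval
    (e.g. 1.5 s) to get a 'dumber' version: between decisions it simply
    repeats its last action."""

    def __init__(self, policy, decision_interval):
        self.policy = policy
        self.interval = decision_interval
        self.next_decision_time = 0.0
        self.last_action = None

    def act(self, obs, now):
        # Only query the policy when the decision interval has elapsed.
        if self.last_action is None or now >= self.next_decision_time:
            self.last_action = self.policy(obs)
            self.next_decision_time = now + self.interval
        return self.last_action
```

Usage under these assumptions: `hard = ThrottledPolicy(policy, 1.0)` keeps the training-time cadence, while `easy = ThrottledPolicy(policy, 1.5)` reacts more slowly, and wrapping its inputs with `noisy_observation(obs, 0.2)` degrades it further.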