Martín Lombardo

Halite III Post-mortem

January 29th, 2019


The Halite 3 edition has ended and I managed to reach rank 18. Here you will find my bot strategy and the tools I built. I'll try to make this post-mortem very detailed to save you the hassle of reading the code, which you can still find here.

If you didn't participate in this edition, I recommend reading the game rules first for this post to make sense.


All the code for the competition was written with fast prototyping in mind, so don't expect quality code; it must be judged in the context of a programming competition.

My bot

I divided my bot into two separate parts: task assignment and navigation. Task assignment sets a target location for each ship, and then navigation figures out how to get there safely.


At the start of each turn I process the map to gather all the static information I could need for each cell; this is done in Map::Process and Map::CalculateNearInfo.
I gather this information for each cell:

For example, with this information I could easily know the average halite in a radius k for any cell in the map.
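As a rough illustration of the kind of query this enables, here is a naive version of the radius lookup on a toroidal map (this is my own sketch, not the bot's precomputed Map::CalculateNearInfo, which amortizes this work per turn):

```cpp
#include <cstdlib>
#include <vector>

// Naive sketch: average halite within Manhattan distance k of (x, y) on a
// toroidal map. The real bot precomputes this kind of info once per turn.
double AverageHaliteInRadius(const std::vector<std::vector<int>>& halite,
                             int x, int y, int k) {
    int n = (int)halite.size();  // assume a square n x n map
    long long sum = 0;
    int cells = 0;
    for (int dx = -k; dx <= k; dx++) {
        for (int dy = -k; dy <= k; dy++) {
            if (std::abs(dx) + std::abs(dy) > k) continue;  // Manhattan ball
            int cx = ((x + dx) % n + n) % n;  // wrap around the torus
            int cy = ((y + dy) % n + n) % n;
            sum += halite[cx][cy];
            cells++;
        }
    }
    return (double)sum / cells;
}
```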


To load and separate the parameters for each game mode/map size I used different files in a cascading manner. I load the parameters in the following order:

If a parameter is found multiple times the last one found is kept. Ref
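The cascade itself is trivial; a minimal sketch, assuming parameters are simple key/value pairs (the map-based representation is illustrative, not the bot's actual loader):

```cpp
#include <map>
#include <string>
#include <vector>

// Sketch of the cascade: parameter files are applied in load order, and a
// later occurrence of the same key overwrites the earlier one.
using Params = std::map<std::string, double>;

Params MergeCascade(const std::vector<Params>& files_in_load_order) {
    Params result;
    for (const Params& file : files_in_load_order)
        for (const auto& kv : file)
            result[kv.first] = kv.second;  // last one found is kept
    return result;
}
```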


Friendliness is a value describing how "friendly" or "unfriendly" a cell is. This value is used for task assignment and navigation; it's one of the core functions of my bot.
This is how it is calculated:

You can see that ships carrying more halite are treated as less dangerous, because it is less probable that someone attacks with a 950-halite ship.
You can find the function CalcFriendliness here.
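The exact formula lives in CalcFriendliness; the sketch below is only my illustration of the general shape (distance decay plus a halite discount for enemies), with made-up weights:

```cpp
#include <vector>

struct NearbyShip {
    int distance;  // toroidal distance to the cell
    int halite;    // cargo of that ship (0..1000)
    bool ally;
};

// Illustrative only -- the real CalcFriendliness differs. The idea: each
// nearby ship contributes a positive (ally) or negative (enemy) weight that
// decays with distance, and a loaded enemy (e.g. 950 halite) is discounted
// because it is unlikely to attack.
double CalcFriendlinessSketch(const std::vector<NearbyShip>& ships) {
    double friendliness = 0;
    for (const NearbyShip& s : ships) {
        double contribution = 1.0 / (1 + s.distance);  // distance decay
        double danger = 1.0 - s.halite / 1000.0;       // loaded => less dangerous
        friendliness += s.ally ? contribution : -contribution * danger;
    }
    return friendliness;
}
```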

Dropoff location

A hard problem that I think no one managed to solve optimally. I went with a few heuristics and some restrictions found by analyzing top bots.
My bot only allows one dropoff creation per turn, so each turn I choose where the next dropoff should be placed and let the ships go and build it when they need to drop.
Description of the function BestDropoffSpots:

Task assignment

The assignment of tasks is done like this:

Note: if a ship hasn't been assigned in a category, then it can still be used in the next one

  1. Dropoffs: get the BestDropoffSpots; if there is a dropoff to be built, then the closest ship is assigned to build it. Also, constants::DROPOFF_COST - ship.halite - cell.halite is reserved to pay for it
  2. Drops:
    • if a ship can't reach the nearest dropoff in toroidal distance from ship to closest dropoff * 1.255 turns before the game ends, then we enable the allow_dropoff_collision flag (endgame flag) and override this ship as dropping
    • during the rest of the game, the ship will be assigned to drop if it was previously assigned to drop or its halite is > DROP_THRESHOLD (970)
    • also, if a ship has < 300 halite then it exits the dropping state (if it hasn't been overridden)
  3. Attacks (disabled on endgame):
    • for each enemy ship:
      • the closest ship to the enemy ship will be assigned to ram the turtle if the friendliness value of the position of the enemy ship is > friendliness_should_attack (a parameter)
  4. Mining: this is where it gets weird...
    1. Score every ship/cell combination (I call a combination an edge):
      • the cell has some restrictions:
        • cell's halite > min(35, map_average_halite * 0.8)
        • cell's friendliness value > friendliness_mine_cell (a parameter)
      • the score is calculated this way:
        profit = cell.halite + (average_halite_in_radius_4 / map_average_halite) * 100;
        time_cost = distance_from_ship_to_cell * 4 + distance_from_cell_to_closest_dropoff * 0.8;
        if (cell.inspiration && constants::INSPIRATION_ENABLED) {
        	profit *= 1 + constants::INSPIRED_BONUS_MULTIPLIER;
        }
        if (num_players == 2) {
        	profit += min(0, num_ally_ships_in_radius_4) * 15;
        	profit -= num_enemy_ships_in_radius_4 * 25;
        } else { // num_players == 4
        	profit -= num_ally_ships_in_radius_4 * 35;
        }
        score = profit / time_cost;
      • Some of the values hardcoded there used to be parameters, but I changed them in the last week to do some fine-tuning
    2. Sort the edges: order by score descending, and if two edges are too close (within ε=0.05) then tie-break with the lowest travel time
    3. Iterate all the edges in order; if neither the edge's ship nor the edge's cell has been assigned yet, then assign it
  5. Check for unassignment:
    sometimes ships don't get any task (this can happen if no mining cell is adequate), so I force them to go to the nearest dropoff
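The sort-and-assign steps of the mining category can be sketched as follows. The Edge struct is my own framing, and the ε tie-break is approximated by bucketing scores (which keeps the comparator a valid strict weak ordering for std::sort):

```cpp
#include <algorithm>
#include <cmath>
#include <set>
#include <utility>
#include <vector>

struct Edge {
    int ship, cell;
    double score;        // profit / time_cost from the formula above
    double travel_time;
};

// Sketch: sort edges by score descending, breaking near-ties (within eps)
// by the lowest travel time, then greedily assign each ship/cell pair the
// first time both are still free.
std::vector<std::pair<int, int>> AssignMining(std::vector<Edge> edges,
                                              double eps = 0.05) {
    std::sort(edges.begin(), edges.end(), [&](const Edge& a, const Edge& b) {
        long long ka = std::llround(a.score / eps);  // bucket scores so that
        long long kb = std::llround(b.score / eps);  // near-ties compare equal
        if (ka != kb) return ka > kb;
        return a.travel_time < b.travel_time;        // tie-break
    });
    std::set<int> used_ships, used_cells;
    std::vector<std::pair<int, int>> assignments;
    for (const Edge& e : edges) {
        if (used_ships.count(e.ship) || used_cells.count(e.cell)) continue;
        used_ships.insert(e.ship);
        used_cells.insert(e.cell);
        assignments.push_back({e.ship, e.cell});
    }
    return assignments;
}
```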


At this point, every ship has a target location assigned, so we must find a way to reach the targets quickly and without traffic jams or collisions.

Navigation Options

The first step of the navigation is to generate the 5 possible NavigationOptions for each ship, ending up with 5 * number_of_ships options or fewer.
A NavigationOption describes a possible movement by a ship:

To calculate each option cost I use this method:

Now that we have all the possible options scored, we can assign them to the ships. During the competition I used two approaches:

The easy way

This is a very simple way to do it and I'm sure many people tried this or something similar.
While there are ships not yet assigned to an option:

This way ensures that there will be no self collisions and ships are choosing their options according to their priority.
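A minimal sketch of the idea (the struct and function names are illustrative, not the bot's actual code): repeatedly take the cheapest remaining option whose ship is unassigned and whose target cell is not already claimed.

```cpp
#include <algorithm>
#include <set>
#include <vector>

struct NavigationOption {
    int ship;
    int target_cell;  // one of the 5 cells the ship can end the turn on
    double cost;      // lower is better
};

// Greedy assignment: cheapest options win, and no two of our ships can
// claim the same target cell, so there are no self collisions.
std::vector<NavigationOption> AssignOptionsGreedy(
        std::vector<NavigationOption> options) {
    std::sort(options.begin(), options.end(),
              [](const NavigationOption& a, const NavigationOption& b) {
                  return a.cost < b.cost;
              });
    std::set<int> assigned_ships, claimed_cells;
    std::vector<NavigationOption> picked;
    for (const NavigationOption& o : options) {
        if (assigned_ships.count(o.ship) || claimed_cells.count(o.target_cell))
            continue;
        assigned_ships.insert(o.ship);
        claimed_cells.insert(o.target_cell);
        picked.push_back(o);
    }
    return picked;
}
```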

Navigation simulation

Then I tried simulating the navigation as a whole: each ship picks an option, and the sum of all the picked options' costs is minimized by randomly swapping options. You can see the messy code.
Here is a high-level overview of how it works:

I ended up using this method, though it wasn't worth as much μ (IIRC ~1.5) as I expected. For local testing I lower the simulation time to be able to run games in a minute.
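Stripped of the collision handling, the core hill-climbing loop looks roughly like this (a toy sketch, not the actual simulation code; the cost matrix stands in for the scored options):

```cpp
#include <cstdlib>
#include <vector>

// Toy sketch of the simulation idea: each ship holds one of its options, and
// we randomly try switching a ship to a different option, keeping the change
// only if the total cost decreases. The real version also resolves
// collisions between the picked options.
double MinimizeTotalCost(const std::vector<std::vector<double>>& costs,
                         std::vector<int>& choice, int iterations,
                         unsigned seed = 1234) {
    srand(seed);
    double total = 0;
    for (size_t s = 0; s < choice.size(); s++) total += costs[s][choice[s]];
    for (int it = 0; it < iterations; it++) {
        int ship = rand() % choice.size();
        int option = rand() % costs[ship].size();
        double delta = costs[ship][option] - costs[ship][choice[ship]];
        if (delta < 0) {  // keep only improving swaps
            choice[ship] = option;
            total += delta;
        }
    }
    return total;
}
```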

Ship spawning

My spawning strategy is mainly based on some ifs with magic numbers. This is how ShouldSpawnShip decides:

bool Strategy::ShouldSpawnShip()
{
	if (game->turn >= 0.8 * constants::MAX_TURNS)
		return false;

	if (game->turn < 0.2 * constants::MAX_TURNS)
		return true;

	if (game->map->halite_remaining / (double)(constants::MAP_WIDTH * constants::MAP_HEIGHT) < 60)
		return false;

	if (my_ships > 20 && my_ships > enemy_ships * 2)
		return false;

	double max = 0.65;
	switch (game->num_players) {
	case 2:
		switch (game->map->width) {
		case 32: max = 0.60; break;
		case 40: max = 0.62; break;
		case 48: max = 0.65; break;
		case 56: max = 0.68; break;
		case 64: max = 0.70; break;
		}
		break;
	case 4:
		switch (game->map->width) {
		case 32: max = 0.36; break;
		case 40: max = 0.40; break;
		case 48: max = 0.45; break;
		case 56: max = 0.52; break;
		case 64: max = 0.55; break;
		}
		break;
	}

	// last minute change that seemed to perform better
	max *= game->num_players == 4 ? 0.85 : 1.15;

	return game->turn <= max * constants::MAX_TURNS;
}

Those big switches are the core of the function; they decide when to stop spawning ships. The magic numbers were extracted by analyzing games from Rachol, teccles and TheDuck314 using the tool described in the Games analyzer section (with some fine-tuning by hand).
Having those numbers hardcoded didn't seem like a good spawning strategy, and I think I could have performed better.

Parameter optimization

I used two techniques to find optimal values for my bot's parameters. Some parameters optimized better with one technique than the other, and I ended up with a mix of both.
Both techniques use the workers described in the tools section to distribute the matches.

Genetic Algorithms

I wrote my own script in NodeJS without a GA lib; you can find evolve.js here.
In all my testing I used a population of fewer than 50, because playing games is really expensive, and I usually optimized no more than 4-5 parameters at a time. It reaches a point where the population has very low diversity, so I just regenerate all the individuals every 300 generations or so, always keeping the best.
This is my first time using GAs for something useful. How my GA works:

I don't know if the diversity problem was caused by using a small population or (more probably) by messing up some of the percentages used in the algorithm. Either way, I managed to get good results from it.
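For reference, a toy C++ version of the loop described above. The real evolve.js scores individuals by playing distributed games; the fitness function, mutation rates and parameter ranges here are placeholders:

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

// Toy GA sketch: small population, elitism, crossover from the top half,
// and a full regeneration every 300 generations that keeps only the best.
using Individual = std::vector<double>;

double Fitness(const Individual& ind) {  // placeholder objective: optimum at 3.0
    double f = 0;
    for (double p : ind) f -= (p - 3.0) * (p - 3.0);
    return f;
}

double Rand01() { return rand() / (double)RAND_MAX; }

Individual Evolve(int pop_size, int num_params, int generations) {
    std::vector<Individual> pop(pop_size, Individual(num_params));
    for (auto& ind : pop)
        for (double& p : ind) p = Rand01() * 10;
    Individual best = pop[0];
    for (int gen = 1; gen <= generations; gen++) {
        std::sort(pop.begin(), pop.end(),
                  [](const Individual& a, const Individual& b) {
                      return Fitness(a) > Fitness(b);
                  });
        if (Fitness(pop[0]) > Fitness(best)) best = pop[0];
        if (gen % 300 == 0) {          // diversity reset: regenerate everyone
            for (auto& ind : pop)      // but keep the best individual
                for (double& p : ind) p = Rand01() * 10;
            pop[0] = best;
            continue;
        }
        std::vector<Individual> next = {best};  // elitism
        while ((int)next.size() < pop_size) {
            const Individual& a = pop[rand() % (pop_size / 2)];  // top half
            const Individual& b = pop[rand() % (pop_size / 2)];
            Individual child(num_params);
            for (int i = 0; i < num_params; i++) {
                child[i] = Rand01() < 0.5 ? a[i] : b[i];           // crossover
                if (Rand01() < 0.1) child[i] += (Rand01() - 0.5);  // mutation
            }
            next.push_back(child);
        }
        pop = next;
    }
    return best;
}
```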


I found this tool because @janzert talked about it on the Discord server. CLOP stands for Confidence Local OPtimization. I already had the distributed games system working fine, so I gave it a shot.
This is Halite.clop, the config file I used:

Script node ../halite3-bot/Worker/play_game.js
Name Halite
DrawElo 0
H 3
Correlations all
Processor any
Processor any
Processor any
# As many "Processor any" lines as the concurrency available in the queue

IntegerParameter a 0 10
LinearParameter b -1 1

CLOP convergence

From my tests, it's better to optimize no more than 7-8 parameters at a time with this program.


Distributed matches

I needed to run lots of matches concurrently on different machines to optimize parameters, so I used NodeJS with Bee-Queue and a Redis server to distribute the work.
A brief description of each of the scripts inside the Worker folder:

This is how I set up everything:

Games analyzer

I wrote a script (analyzer.js) to analyze a batch of games and find patterns in players. I managed to extract some dropoff and spawning information with it.

Some examples

This is an analysis of winning games from teccles in 2P. I tracked how many ships he had when he spawned a new ship. In this case, we can see that he never spawns a ship when he has 5 more ships than the enemy. In 4P the threshold is 10. I have to say that this is not the final version of teccles, and I'm sure that he changed this. I tried this strategy and it didn't work for me.

Teccles spawning analysis

This is an analysis of Rachol v85 in 2P when he was rank 2. The histogram below shows the distance between a new dropoff and the closest existing one. You can see he only creates dropoffs 18 or more units away. We can also see (not in the image) the minimum number of ships and the turn for every n-th dropoff. You can find the full log here.

Rachol dropoffs analysis

With a similar script (which I lost) I computed the last turn a player spawned a ship for every 2P/4P/map-size combination. Averaging that information from top players, with some tuning by hand, I just hardcoded the max turn to spawn a ship in my bot. For example, this was the output for TheDuck314 (when he was rank 1):

	"2p_64x64": {
		"sum": 4432.203758995905,
		"count": 63,
		"min": 53.18725099601593,
		"max": 82.00455580865604,
		"avg": 70.35244061898263
	},
	"2p_40x40": {
		"sum": 3591.3348946135834,
		"count": 56,
		"min": 51.288056206088996,
		"max": 66.7447306791569,
		"avg": 64.13098026095685
	}

Combat database

To analyze other bots

In my attempt to decipher teccles' collision avoidance/combat, I created gen_db.js, which reads a bunch of games and generates, for each ship movement, a key based on the environment before moving, like this: xxx*xxxxx**-xxx*****x***+***x*+***xxx***xxxxx*xxx. Messy, huh? Let's show it in a different format:

Combat database example

As you can see, green squares represent ally turtles and the red ones enemy turtles. Each key is mirrored and flipped to account for rotations. Now, for each of those keys I save the direction chosen by every ship in that situation, and get something like this:

Total: 312
o: 196
s: 33
w: 38
n: 24
e: 21

So, for teccles' ships in that situation, it is very probable that the ship in the middle stays still (that's what the o means).
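The mirroring/flipping can be implemented by canonicalizing each key, for instance keeping the lexicographically smallest flipped variant (this is my reconstruction of the idea, not gen_db.js itself; note that the saved directions must be remapped consistently with the chosen transform):

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Sketch: the side x side situation string is flipped horizontally and
// vertically and the lexicographically smallest variant is kept, so mirrored
// situations share one database key. The saved n/s/e/w directions must be
// remapped with the same transform (the detail the db.html rendering missed).
std::string CanonicalKey(const std::string& key, int side = 7) {
    auto flip_h = [&](const std::string& k) {
        std::string out(k);
        for (int r = 0; r < side; r++)
            for (int c = 0; c < side; c++)
                out[r * side + c] = k[r * side + (side - 1 - c)];
        return out;
    };
    auto flip_v = [&](const std::string& k) {
        std::string out(k);
        for (int r = 0; r < side; r++)
            for (int c = 0; c < side; c++)
                out[r * side + c] = k[(side - 1 - r) * side + c];
        return out;
    };
    std::vector<std::string> variants = {key, flip_h(key), flip_v(key),
                                         flip_h(flip_v(key))};
    return *std::min_element(variants.begin(), variants.end());
}
```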
Some more examples (displayed with db.html) for lots of teccles games:

Combat database

If you are interested in the full raw data you can find it in counts_db.json (~238MB).

Last minute bug detected: when rendering db.html, accounting for mirroring/flipping didn't update the cardinal directions. If you are interested, you can view this image without mirroring/flipping here.

To analyze my bots

I made a little app to hot-reload-debug my ShouldDodge(Ship, Destination) function, which unfortunately didn't make it into the final bot, but it was cool enough to showcase anyway.
First I generated lots of entries by playing games locally, each entry containing all the map information for a single ship movement, so I could simulate being in the real bot and test the algorithm live. An example of an entry (it's a JSON blob).
Each entry can be tagged manually with a string, for example 'good' or 'bad'. Then, in the editor, you provide a function that takes the entry data and returns the predicted tag; the app compares both tags over all the tagged entries and gives some percentages.
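The comparison step is just an agreement percentage; a minimal sketch (the structure names are illustrative):

```cpp
#include <string>
#include <vector>

struct TaggedEntry {
    std::string manual_tag;     // e.g. "good" or "bad", set by hand
    std::string predicted_tag;  // returned by the function under test
};

// Report the percentage of entries where the hot-reloaded function agrees
// with the manual tag.
double AgreementPercent(const std::vector<TaggedEntry>& entries) {
    if (entries.empty()) return 0;
    int matches = 0;
    for (const TaggedEntry& e : entries)
        if (e.manual_tag == e.predicted_tag) matches++;
    return 100.0 * matches / entries.size();
}
```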

The tagger

The yellow numbers are the friendliness values

Combat database entries

The code editor

Replay viewer

Like last year, fohristiwhirl made an excellent game replayer called Fluorine. But I needed to manipulate the map data I had already programmed in halite.js, so I just went and made a quick replayer to display and alter variables in realtime.

Statistics & Analysis Page

This is the most notable tool I made this edition; you can find the page here. Last year I did something similar, but it came late in the competition and wasn't very useful (only a μ graph and winrate percentages, and ugly at that). This year, from the first day of the open competition (October 16th, 2018), I started crawling the games into a MySQL database. With the database slowly growing, I started from what I had last year: the historical graph showing the μ evolution over time. After that came the analysis page, which later became the most important part of the site, showing the winrates in 2P/4P and combined.
Some time later I added the live game feed.

Before mid-season I made a PR adding to the game engine some metrics we had discussed in the Discord server, to use in the analysis. The metrics added were:

Not all of them finally made it into the page, but they were nice to have.

Historical graph

Halite 2018 Analysis page historical graph

The graph is calculated using the average μ of the matches in every hour.
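That bucketing is straightforward; a sketch (the types are illustrative, the real page does this in SQL/JS):

```cpp
#include <map>
#include <vector>

struct MatchPoint {
    long long timestamp;  // unix seconds of the match
    double mu;            // player's μ in that match
};

// Group matches by hour; each graph point is the average μ of that hour.
std::map<long long, double> HourlyAverageMu(const std::vector<MatchPoint>& matches) {
    std::map<long long, double> sum;
    std::map<long long, int> count;
    for (const MatchPoint& m : matches) {
        long long hour = m.timestamp / 3600;  // hour bucket
        sum[hour] += m.mu;
        count[hour]++;
    }
    for (auto& kv : sum) kv.second /= count[kv.first];
    return sum;
}
```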
There are a couple of things here I couldn't implement. Someone suggested showing the σ (sigma) with an area graph above and below the line, extending it to 3 * (average σ), but with lots of players the graph would look too crowded and become a mess. Someone else suggested a way to differentiate between versions on the same line, but that had the same problem. Maybe I will consider these ideas again in the next edition.

Live game feed

The live game feed lets you filter by the number of players and the map size, and even select players to filter the games; they appear as they are being played. I mainly used it to find specific matchups where I lose.

Live leaderboard

Halite 2018 Analysis page live leaderboard

Here you select the player you want to track, and every 30 seconds (or the selected interval) it shows the difference in μ and σ since the last update, for you and the players close to you.
Also, as suggested by @Lunariz, I added the option to rank by μ, so you can check your rank ignoring the high σ you may have because of a low game count due to a recent submission.

Bot Analysis

Halite 2018 Analysis page analysis 1

Halite 2018 Analysis page analysis 2

Halite 2018 Analysis page analysis 3

By far, the most important piece of the page. You can select any player/version and get a detailed breakdown of some aspects of that bot.
Features displayed per game mode (2P/4P) and map size:


Halite 2018 Analysis page records

This one was not that useful for analyzing bots, but it was a fun thing to have.
Records being displayed:

Wrap up

I want to thank TwoSigma & @janzert for keeping the competition alive. I'll be waiting for more Halite editions to come.
The community on the Discord server was amazing and I couldn't have built the stats page without the suggestions from all of you.

You can find other source code and post-mortems here.

Some funny situations to wrap up

Halite fail 1

Halite fail 2