News
Version 1.5 - Focus on Ecology
December, 2025
This update focuses on ecology and environmental effects.
The tank environment now has gradients for light, heat, and matter distribution. These can affect organisms in many ways. Providing a more interesting environment will hopefully create more biodiversity.
Plants now grow and create fruit according to resource availability, effectively creating a self-regulating feedback loop.
Plants and boids now have a health meter. Light and temperature may impact health, which will drive certain organisms to prefer certain parts of the tank over others.
Boids now work on "life credits" instead of having an explicit age and count-down timer. This allows organisms to become ancient by staying healthy.
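A minimal sketch of how a "life credits" scheme might work (the class and names here are illustrative, not Vectorcosm's actual code): each tick deducts credits in proportion to poor health, so a perfectly healthy organism pays nothing and can live indefinitely.

```javascript
// Hypothetical sketch of "life credits" replacing an age countdown.
class Boid {
  constructor(credits = 1000) {
    this.credits = credits;
    this.health = 1.0; // 0 = near death, 1 = perfect health
  }
  tick(baseCost = 1) {
    // healthy boids pay little or nothing; unhealthy ones burn credits
    this.credits -= baseCost * (1 - this.health);
    return this.credits > 0; // false means death
  }
}
```

Under this rule, lifespan is an emergent property of health rather than a fixed timer.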
Food and nutrition were refactored (again). The new system keeps the tiered complexity, but gets rid of nutrient types, toxins, and deficiency. The new system lets boids eat any food that matches their ability to eat. They get increased benefit by matching the color of the food, but all food is edible. This should help randomly generated life get traction in a new tank.
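The matching rule might be sketched like this (function and property names are assumptions for illustration): any food within the boid's digestive ability is edible, and the nutritional value scales with how closely the food's color matches the boid's dietary preference.

```javascript
// Hypothetical sketch of the new food rules. Colors are [r,g,b] in 0..1.
function colorMatch(a, b) {
  // 1 = identical colors, 0 = maximally different
  const dist = Math.hypot(a[0] - b[0], a[1] - b[1], a[2] - b[2]);
  return 1 - dist / Math.sqrt(3);
}
function eat(boid, food) {
  if (food.complexity > boid.eatAbility) return 0; // cannot digest it
  // everything digestible is edible; a color match boosts the benefit
  const bonus = 0.5 + 0.5 * colorMatch(boid.dietColor, food.color);
  return food.value * bonus;
}
```

Because the worst-case match still yields half value, random starter organisms always get some return from eating.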
Personal stats were added to a tabbed viewer. Each individual now keeps lifetime stats for graphing. This feature has been wanted for a long time, but it has extra utility right now for viewing and balancing ecological changes.
Finally, I experimented with creating multi-cellular lifeforms. I would like to introduce some concept of "fungi", "algae", or other simple lifeforms in the future. You can view a live web demo here.
Version 1.4 - Experimental Neural Networking
October, 2025
This month was dedicated to intense research and development on exotic neural network concepts for active learning. Most of what I dug into has little economic benefit, covers unpopular topics, and lives mainly in rarely seen research papers.
I first explored Recurrent Neural Networks (RNNs). These are networks that use previous activation values to influence the present. Perhaps boids could use this to learn more complex movement? I built a small prototype network and transplanted it into Vectorcosm as an alternative brain type. It did not work well. Any normal evolutionary mutation training performed on the network seemed to be muddled by signals coming from the recurrent hidden state. The only sign of learning was a single batch of organisms that had proprioception (introspective motor sense) and could conceivably benefit from what RNNs have to offer (learning time sequences). I scrapped the RNN idea and moved on.
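The core recurrent mechanism can be shown in a few lines (this is a generic single-unit sketch, not the prototype network): the hidden value from the previous tick feeds back in alongside the current input, so past activations influence the present.

```javascript
// Minimal recurrent step: next hidden state depends on the current
// input AND the previous hidden state. Weights here are illustrative.
function rnnStep(input, hidden, Wx, Wh) {
  return Math.tanh(Wx * input + Wh * hidden);
}

let h = 0;
h = rnnStep(1, h, 0.5, 0.8); // input arrives
h = rnnStep(0, h, 0.5, 0.8); // input gone, but its echo persists in h
```

That persistent echo is exactly the signal that muddled mutation-based training: a weight change alters not just the present output, but the whole feedback trajectory.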
Next, I researched Evolutionary Plastic Artificial Neural Networks (EPANNs). This is a clever architecture that mixes generational learning (from direct evolution) and online learning (from real time life experience). The network has built-in evolving plasticity rules. This means that life experience provides the learning, but evolution provides the hyperparameters of what to learn and how to learn.
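A sketch of the idea, under stated assumptions: each connection carries evolved plasticity coefficients that parameterize a generalized Hebbian rule. Evolution mutates the coefficients between generations; lifetime activity applies the rule. The names and structure here are illustrative, not the prototype's actual code.

```javascript
// EPANN-style update: the A/B/C/D coefficients and learning rate are
// part of the genome (evolved), while the weight change itself happens
// during the organism's lifetime.
function plasticUpdate(conn, pre, post) {
  const { A, B, C, D, rate } = conn.plasticity; // evolved, not hand-set
  // generalized Hebbian rule: correlation, pre-only, post-only, constant
  conn.weight += rate * (A * pre * post + B * pre + C * post + D);
}
```

With A=1 and the other terms zero this reduces to plain Hebbian learning, so evolution can discover (or suppress) whole families of learning rules per connection.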
How do we actually get real time learning? Backpropagation and reinforcement learning are too slow for what we are doing. These are generally meant for commercial AI and offline learning.
One idea is Hebbian learning ("neurons that fire together wire together"). The concept is biologically inspired but doesn't seem to actually work. The idea is to strengthen connections between correlated neurons. In practice, it just saturates the network and leads to mental snow-blindness. I tried a number of different weight-normalization schemes, including Oja's Rule and normalizing across forward connections, receiving connections, and connections by layer, and none of them produced good results.
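The saturation problem is easy to see side by side. A minimal sketch (simplified single-weight versions, not the prototype's code): plain Hebb only ever grows correlated weights, while Oja's Rule adds a decay term proportional to the squared output, which keeps the weight bounded.

```javascript
// Plain Hebbian update: grows without bound when pre and post correlate.
function hebb(w, pre, post, rate) {
  return w + rate * pre * post;
}
// Oja's Rule: the (pre - post*w) term subtracts a forgetting factor,
// so the weight converges instead of saturating.
function oja(w, pre, post, rate) {
  return w + rate * post * (pre - post * w);
}
```

Even with this built-in normalization, bounded weights are not the same thing as useful learning, which matches the results above.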
In the end, I could not get active learning to work. However, building the EPANN prototype led to a fully functional replacement for the NEATaptic javascript library, which means I can drop that code dependency. The new code is faster, leaner, and has even more fun activation functions.
Next I turned to spiking neural networks, which so far have been an interesting disappointment. I scrapped the output adapters and replaced them with a much simpler rate-based sampling method. This might be seen as the "stupid" way to do spike networks, but it delivers more useful results for organism motor controls.
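Rate-based sampling can be sketched in one function (names and the window size are assumptions): instead of decoding individual spike timings, just count recent spikes and emit the firing rate as the motor signal.

```javascript
// Rate-based output adapter sketch: count spikes in the trailing
// window and return spikes-per-ms as a smooth motor activation.
function spikeRate(spikeTimes, now, windowMs = 100) {
  const recent = spikeTimes.filter(t => t > now - windowMs && t <= now);
  return recent.length / windowMs;
}
```

This throws away the timing information that makes SNNs interesting, but a smooth, monotonic rate is much easier to wire to a motor than raw spike trains.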
I researched various methods for active learning on spiking networks as well. Options include Spike-Timing-Dependent Plasticity (STDP) and the E-Prop algorithm. These find correlations between firing neurons and strengthen or weaken connections in response to learning events. While I was able to get the mechanisms implemented, actual learning never took place.
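For reference, the pair-based STDP rule looks roughly like this (constants are typical textbook values, not the prototype's tuning): a presynaptic spike that precedes the postsynaptic spike strengthens the connection, one that follows weakens it, with exponential falloff in the time difference.

```javascript
// Pair-based STDP sketch. Times in ms; aPlus/aMinus/tau are
// illustrative textbook-style constants.
function stdp(w, tPre, tPost, aPlus = 0.01, aMinus = 0.012, tau = 20) {
  const dt = tPost - tPre;
  if (dt > 0) return w + aPlus * Math.exp(-dt / tau);  // pre before post: strengthen
  if (dt < 0) return w - aMinus * Math.exp(dt / tau);  // post before pre: weaken
  return w;
}
```

The mechanism is simple to implement; the hard part, as noted above, is getting the resulting weight drift to correspond to anything an organism would call learning.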
Finally, i spruced up the Braingraphs for both SNNs and regular feed-forward neural networks.
In the end, i could not get active learning to work for either brain architecture, but we have many interesting topics to explore in the future: Predictive coding, neuromodulation, hormones, plasticity, time-based spike output adapters, DNA and generative neural networks, and much more.
Watch the update video on YouTube
Version 1.3 - Physics & Metabolism
September, 2025
All of the physics were overhauled, which created some interesting side effects for organisms.
All movement was converted from video game hackjobbery to proper physics equations. This includes the basic laws of motion (F=ma) and also hydrodynamics for our aquatic environment. While a rocket ship in space is a good analogy for Newtonian physics, it doesn't work in a gas or fluid. For those environments, we also need to factor in drag forces.
Adding hydrodynamic drag now means that the shape of an organism matters. Narrow shapes move forward better, but turn worse. Wide or stubby shapes turn better, but cannot move as fast.
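Assuming a standard quadratic drag model (the usual 0.5·ρ·Cd·A·v² formula; parameter names here are illustrative), the net force along the direction of travel might look like this, which is also where body shape enters the physics via the drag coefficient and frontal area:

```javascript
// Quadratic hydrodynamic drag sketch: drag opposes the direction of
// motion and scales with velocity squared. Cd and area come from the
// organism's shape; rho is the fluid density.
function netForce(thrust, velocity, { rho = 1.0, Cd, area }) {
  const drag = 0.5 * rho * Cd * area * velocity * velocity * Math.sign(velocity);
  return thrust - drag;
}
```

Because drag grows with v², every body shape has a natural top speed where thrust and drag balance, which is what makes narrow versus stubby a real tradeoff.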
Euler integration was chosen to apply these new forces over time. This is a very standard way to apply physics in video game and simulation environments where precision is less important than frame rate. I also considered Verlet integration, but Euler is easier and more intuitive while Verlet offers only marginal benefit. I encountered problems with particularly speedy objects going berserk and had to resort to splitting frames into multiple substeps to keep movement physics under control.
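The substep fix can be sketched as follows (a generic semi-implicit Euler loop, not Vectorcosm's integrator; `forceFn` and the object shape are assumptions): splitting each frame's dt into smaller steps keeps large forces from overshooting in a single update.

```javascript
// Semi-implicit Euler with substeps: velocity is updated from the
// force first, then position from the new velocity, n times per frame.
function integrate(obj, forceFn, dt, substeps = 4) {
  const h = dt / substeps;
  for (let i = 0; i < substeps; i++) {
    const a = forceFn(obj) / obj.mass; // F = ma, so a = F/m
    obj.v += a * h;
    obj.x += obj.v * h;
  }
}
```

With velocity-dependent drag in the force function, a too-large step can flip the drag term's sign past the true solution; smaller substeps bound that error, which is the "going berserk" problem described above.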
With mass and movement figured out, it was time to also overhaul the basic organism motor functions. They have all been normalized and contribute correct amounts of power towards propulsion. Their relative metabolic costs were refactored to make more sense and have more of an impact. This has resulted in finer control and a wider range of movement expression.
Because organisms grow, and because shape, size, and mass now matter more than ever, it was time to address Allometric Scaling in a more robust way. Mass, dimension, power output, and metabolism do not scale at the same rate. The result is that smaller organisms have higher relative power, while larger organisms have more efficient metabolism and last longer without food.
On the cosmetic side, tank generation got a few more features including fractured rocks, rotation, skew, and rock generation based on wave-interference patterns.
Watch the update video on YouTube
Version 1.2 - Records, Reproduction & Rocks
August, 2025
This month introduces basic population record keeping and graph visualization that will be helpful in balancing the simulation later.
"BoxFit accounting" creates organisms that have larger size or metabolic requirements depending on their body features. Feature costs are not longer arbitrary or random.
"Budding" is introduced as a new reproductive strategy with certain tradeoffs compared to standard mitosis. This is similar to how real life yeast cells reproduce.
More rock and background themes were added. Rock theme mixing is now more natural looking.
Boids can now eat and excrete viable plant seeds to help with plant dispersal. ("Endozoochory")
Boids now leave a carcass after death (literally, a food particle). This should create more interesting food web situations.
Watch the update video on YouTube
Version 1.1 - Spiking Neural Networks
May, 2025
I spent the entire month working on Spiking Neural Networks for artificial life: theoretical research, testing, experiments, integration, and observation. I took the small network model I built on the side and transplanted it into the Vectorcosm "boid" brain architecture. The results were interesting but not a huge breakthrough. SNNs solve a few fundamental problems I've had with regular neural networks (overfitting, mutation, library dependence), but they create a number of other problems (timing, training, accuracy, flatline). I think I will keep the new brain architecture as a fun alternative, but I don't expect that it will produce anything amazing.
The work I have done here leaves much room for improvement. Perhaps we can revisit this topic later. For now, I think it may be time to move on.
You can view a small SNN demo I put together. You can copy the HTML and tinker with the settings if you want.
https://leiavoia.net/sandbox/spike2.html
I also have a ton of raw training footage. Please let me know if that's something you would like to see.
Watch the update video on YouTube
Version 1.0 - Show & Tell Edition
March, 2025
The big 1.0 - Vectorcosm is now ready for show and tell! A long list of features and bug fixes were checked off in preparation. This month, various animations were restored, randomized training was improved, braingraphs were restored and improved, and speciation problems were finally ironed out. Additional rock configurations were also added. I even completed the readme file!
Watch the update video on YouTube