David J. Paola

Programming like nature

I want to study nature more. I really enjoy how the mechanisms of nature seem to solve problems at every level. Organisms have evolved a staggering amount of complexity, but with incredibly simple underpinnings. Respiration, for example, is pretty simple at a high level, but diving into how it works in depth proves quite complicated. Still, nutrients in -> energy out is pretty simple. Even those complicated inner workings are driven by simple reactions, each of which can be understood very well individually; the complexity comes from how they interact with each other. By understanding the simple rules, we can understand something that looks quite complicated.

Shift to programming nowadays. Everything we write is based on certain rules. We assume, for example, that a function defined with certain parameters is compiled in such a way that it will always be called with those parameters; our compilers enforce this precondition, which makes life simple and easy. But our solutions in code are often buggy and inelegant (some of us more than others!). They don't always scale well; scaling usually requires many changes, additions, subtractions, and modifications to whatever our infrastructure is.

In nature, evolution serves this purpose. Natural selection is the agent of necessary change: if you don't work, goodbye. Nature is a process; things are consumed and produced, always. Programming doesn't really work this way, at least not at a fundamental level. If an organism runs out of food, it dies. If our programs and functions aren't called, they just sit around waiting for input or to be invoked.

This might be complete nonsense (in fact, it probably is). But I wonder what a software system would look like if it were somehow to inherit these attributes of nature. If a function is never used, perhaps it is discarded. But where do we get the function or module that replaces it? In nature, organisms reproduce. Code doesn't really do that, at least not in the same way. Genetic algorithms are sort of analogous, but only insofar as they behave like nature: the code itself isn't actually modified, just the state of the data. The fitness function is always the same, and the production process is always the same. Self-modifying code isn't robust enough for this task, either; it mainly seems to be used to simplify how things work, not to evolve or to consume and produce.
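To make that limitation concrete, here's a minimal genetic-algorithm sketch (the names and target here are made up for illustration). Notice that what evolves is data, the bit-string genomes; the fitness function and the selection/mutation code themselves never change, which is exactly the point above.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Hypothetical target: evolve bit strings toward all-ones.
TARGET = [1] * 20

def fitness(genome):
    # Fixed fitness function: number of bits matching the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability. This mutates data,
    # not the program that does the mutating.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=50, generations=100):
    # Random initial population of bit strings.
    population = [[random.randint(0, 1) for _ in TARGET]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half, refill with mutated copies.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in survivors]
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # climbs toward len(TARGET) over the generations
```

However the run goes, the `fitness`, `mutate`, and `evolve` functions come out the other end byte-for-byte identical; only the genomes changed. That's the sense in which genetic algorithms behave like nature without the code itself evolving.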

So what would programs consume? What would they produce? At a high level, data is consumed and analysis is produced in the form of (hopefully) value. But again, if information flow stops, programs just wait. This would definitely be a disadvantage, so there would have to be a corresponding advantage. What’s the tradeoff? Is there a way to make programs more efficient, faster, or more reliable if they don’t always assume they’ll be provided with X amount of data? What could they sacrifice?

This has been extremely stream-of-thought. Got to come back and think more about it later.