Hello!
This is something that's fascinated me for a while, and I'd love to know how feasible it is to implement. Not too important, obviously, but feel free to chip in! (hehe)
We know that simple rules can create complex emergent patterns in nature. My question: would we be able to create many small chips (call them little-guys, or LGs) that solve complex problems collectively, each one computing a relatively mundane task quite quickly, storing minimal state, or simply reacting to an input?
Take, for example, sorting an unordered list. Our current von Neumann (big-guy) solution requires the (forgetful) CPU to keep fetching data from memory and writing new data back. You can analogise it to sorting a sequence of face-down cards. Knowledge is hidden.
The little-guy solution would have chips with just enough memory to store one element of the array and its index, a way to interact with other little-guys, and a general "desire" to move to where they belong. This can be analogised as a bunch of people trying to sort themselves in order of birth year (something they all know): everyone talks to their neighbours and they shuffle themselves into order.
Obviously you need to make sure the little-guy algorithms are provably correct, but once you do, you can initialise every node in one go, run the arena, watch the little-guys scramble, then extract your tasty ordered list.
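If you want to play with the idea before anyone builds silicon, here's a minimal Python sketch of the arena (the name run_arena is just mine, for illustration). The local rule is odd-even transposition sort, a classic algorithm that's provably correct for exactly this "one element per node, talk only to your neighbours" setup, and it's guaranteed to finish within n synchronized rounds:

```python
import random

def run_arena(values):
    """Simulate one little-guy per element; returns the sorted list."""
    lgs = list(values)         # each slot stands in for one LG's single stored element
    n = len(lgs)
    for round_no in range(n):  # n rounds always suffice for odd-even transposition
        start = round_no % 2   # alternate between even-indexed and odd-indexed pairings
        for i in range(start, n - 1, 2):
            # Each neighbouring pair compare-exchanges if out of order.
            # In hardware, all of these swaps would happen simultaneously.
            if lgs[i] > lgs[i + 1]:
                lgs[i], lgs[i + 1] = lgs[i + 1], lgs[i]
    return lgs

data = random.sample(range(100), 10)
print(data, "->", run_arena(data))
```

The simulation works through the pairs one at a time, but in real hardware every pair would compare-exchange at once, so each round costs a single clock tick.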
This way, while the total number of comparisons might stay the same (or even grow), we will have parallelized the work so heavily that the wall-clock reduction would be noticeable for a large number of little-guys. There is no CPU bureaucracy to hold the little-guys back.
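To put rough numbers on it (back-of-the-envelope, assuming one compare-exchange per tick): sorting n = 1,000,000 elements sequentially takes on the order of n log2 n ≈ 2 × 10^7 comparisons, while a line of n little-guys running odd-even transposition finishes in at most n = 10^6 parallel rounds, so the linear topology only buys you roughly a 20x wall-clock win. Wire the LGs into a richer network, though, and it gets dramatic: Batcher's bitonic sorting network needs about (log2 n)(log2 n + 1)/2 ≈ 210 rounds for the same million elements.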
The hardware cost would skyrocket, obviously: you're paying for a whole chip per element instead of a word of RAM, and manufacturing that many tiny dies isn't cheap. But for a problem as common as sorting, if you're a large enterprise, why wouldn't you use something like this to optimise your performance?
Thanks