help3001



UWA week 34 (2nd semester, week 5)
Hello! This is something that has fascinated me for a while, and I'd love to know how feasible it would be to implement. Not too important, obviously, but feel free to chip in! (hehe)

We know that small rules can create complex emergent patterns in nature. My question: could we build many small chips (call them little-guys, or LGs) that solve complex problems by each performing a relatively mundane task very quickly, storing minimal state or simply reacting to an input?

Take, for example, sorting an unordered list. Our current von Neumann (big-guy) solution requires the (forgetful) CPU to keep fetching data from memory and writing new data back. We can analogise it as one person sorting a sequence of face-down cards: the knowledge is hidden.

The little-guy solution would use chips with just enough memory to store one element of the array and its index, a way to interact with neighbouring little-guys, and a general "desire" to move to where they belong. This can be analogised as a crowd of people sorting themselves by birth year (something they each know): everyone talks to their neighbours and shuffles into place.

Obviously you need to make sure the little-guy algorithms are provably correct, but when you do, you can initialise every node in one go, run the arena, watch the little guys scramble, then extract your tasty ordered list.
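
For concreteness, here is a minimal sketch of what I'm imagining, simulated in software (the names LittleGuy, negotiate and run_arena are all just invented for the sketch). What I'm describing seems to boil down to odd-even transposition sort, a known parallel sorting network in which each element repeatedly compares-and-swaps with one neighbour:

import random

class LittleGuy:
    """One chip: stores a single value and talks to a neighbour."""
    def __init__(self, value):
        self.value = value

    def negotiate(self, right):
        """Compare-and-swap with the little-guy to our right.
        Each LG needs only O(1) state and one message exchange."""
        if self.value > right.value:
            self.value, right.value = right.value, self.value

def run_arena(values):
    """Initialise every node in one go, let them scramble, extract the list.
    Round r pairs up little-guys starting at index r % 2 (odd-even
    transposition); n rounds are enough to guarantee a sorted result.
    A real arena would run each round's negotiations simultaneously;
    this big-guy simulation has to do them one after another."""
    lgs = [LittleGuy(v) for v in values]
    n = len(lgs)
    for r in range(n):
        for i in range(r % 2, n - 1, 2):
            lgs[i].negotiate(lgs[i + 1])
    return [lg.value for lg in lgs]

data = random.sample(range(100), 16)
print(run_arena(data) == sorted(data))  # True

In real hardware, every negotiate() within a round would fire at the same instant, which is exactly where the speedup below comes from.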

This way, the total number of comparisons might stay the same (or even grow), but we will have parallelised the problem so heavily that the wall-clock reduction becomes noticeable for a large number of little-guys: each round of neighbour swaps takes constant time no matter how many pairs swap at once, so n little-guys finish in O(n) rounds, versus O(n log n) steps for a lone big-guy. There is no CPU bureaucracy to hold the little-guys back.

The space complexity would skyrocket, obviously, as would the manufacturing cost, but for a problem as common as sorting, if you're a large enterprise, why wouldn't you use something like this to optimise performance?

Thanks
