Evolutionary methods for problem solving and artificial development

One of the principles I follow in problem solving is that many of the best solutions can be found in nature. The axiom that all knowledge is self-knowledge applies to the study of computer science and artificial intelligence as much as anywhere.

By studying nature we are studying ourselves and what we learn from nature can give us initial designs for DApps (decentralized applications).

The SAFE Network example

SAFE Network, for example, follows these principles by utilizing biomimicry (an ant colony algorithm) for its initial design. If the SAFE Network is designed appropriately, it will include an evolutionary method so that our participation with it can fine-tune it over time.
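The ant colony reference can be made concrete with a toy sketch. Nothing below reflects the actual SAFE Network implementation; the routes, lengths, and update constants are invented purely to show the pheromone feedback loop that ant colony algorithms rely on.

```python
import random

# Toy pheromone loop in the spirit of ant colony algorithms (illustrative
# only): agents pick among routes in proportion to pheromone, pheromone
# evaporates everywhere, and shorter routes get stronger reinforcement.

routes = {"A": 4.0, "B": 2.0, "C": 3.0}   # route -> length (lower is better)
pheromone = {r: 1.0 for r in routes}       # start with no preference

random.seed(42)
for _ in range(200):
    weights = [pheromone[r] for r in routes]
    pick = random.choices(list(routes), weights=weights)[0]
    for r in pheromone:
        pheromone[r] *= 0.99               # evaporation
    pheromone[pick] += 1.0 / routes[pick]  # shorter routes reinforce more

# Shorter routes tend to accumulate the most pheromone over time.
best_route = max(pheromone, key=pheromone.get)
```

The positive feedback (good choices get chosen more, which reinforces them further) is the same dynamic the post attributes to an evolving network: design decisions are not predicted up front, they emerge from accumulated use.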

There should be a symbiosis between human and AI, as well as a way to make sure changes are always made according to the preferences of mankind. In essence, the SAFE Network should be able to optimize its design going into the future to meet human-defined "fitness" criteria.

How they will go about achieving this is unknown at this time, but my opinion is that it will require a democratization or collaborative-filtering layer. A possible result of the SAFE Network's evolutionary process could be a sort of artificial neural network.

The Wikipedia example

Wikipedia is an example of an evolving knowledge resource. It uses an evolutionary method (a human-based genetic algorithm) to curate, structure, and maintain human knowledge. Human beings act as its innovators and selectors.

One of the main problems with Wikipedia is that it is centralized and that it does not generate any profits. This is partly due to the ideal that knowledge should be free to access, which does not account for the fact that knowledge isn't free to generate.

It also doesn't account for the fact that knowledge has to be stored somewhere, and that if Wikipedia is centralized it can be taken down just as the Library of Alexandria once was.

A decentralized Wikipedia could begin its life by mirroring Wikipedia and then use evolutionary methods to create a Wikipedia which does not carry the same risk profile or model.

Benefits of applying evolutionary methods to Wikipedia-style DApps

One of the benefits is that there could be many different DApps competing in a marketplace, so that successful design features create an incentive to continue innovating. We can think of the market in this instance as the human-based genetic algorithm, where all DApps are candidate solutions to the problem of optimizing knowledge diffusion.

The human beings would be the innovators, the selectors, and the initializers. The token system would represent the incentive layer, but would also be used for signalling, so that humans can send an information signal indicating their preferences to the market.
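As a rough illustration of that division of labor, here is a toy human-based genetic algorithm loop in Python. Everything in it is hypothetical: the feature names, the `human_signal` function standing in for aggregated token/usage signals, and the automated recombination step that real human innovators would perform.

```python
import random

# Toy human-based genetic algorithm: candidate "DApps" are feature sets,
# and fitness comes from human preference signals (simulated here).

FEATURES = ["mirroring", "micropayments", "expert-review",
            "caching", "moderation", "offline-sync"]

def random_design():
    """Initializer step: a candidate DApp is a set of three features."""
    return set(random.sample(FEATURES, k=3))

def human_signal(design):
    """Stand-in for aggregated token/usage signals from real users."""
    preferred = {"micropayments", "expert-review"}  # invented preference
    return len(design & preferred)

def recombine(a, b):
    """Innovation step: in a real HBGA, humans would propose this variation."""
    return set(random.sample(sorted(a | b), k=3))

random.seed(1)
population = [random_design() for _ in range(10)]
for generation in range(20):
    ranked = sorted(population, key=human_signal, reverse=True)
    survivors = ranked[:5]                     # selection by user preference
    offspring = [recombine(random.choice(survivors),
                           random.choice(survivors)) for _ in range(5)]
    population = survivors + offspring

best = max(population, key=human_signal)
```

In the market version the post describes, the selection step is nothing but users spending tokens and attention, and the recombination step is developers copying successful features from competing DApps.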

Wikipedia is not currently modeled on nature and does not evolve its design to adapt to its environment. Wikipedia "eats" when humans donate money to a centralized foundation which directs its development. A decentralized evolutionary model would have no centralized foundation, and Wikipedia would instead adapt its survival strategy to its environment.

This would mean a Wikipedia following the evolutionary model would seek to profit in competition with other Wikipedias until the best (most fit) adaptation to the environment evolves. Users would be able to use micropayments to signal, through their participation and usage, which Wikipedia pages they prefer, while pseudo-anonymous academic experts with good reputations rate pages for accuracy.

For the human-based genetic algorithm and the collaborative filtering to work, participants should not know the scores of different pages in real time, because this could bias the results. Participants also should not know how different experts scored different pages, because personality cults could skew the results and influence the rating behavior of other experts.
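One way to sketch that blindness requirement: collect expert scores privately and reveal only the aggregate once a rating round closes. The class, names, and scores below are invented for illustration, not part of any existing system.

```python
from statistics import median

# Hypothetical blind-rating round: individual scores stay hidden while
# the round is open, and only the aggregate is ever revealed.

class BlindRound:
    def __init__(self, page_id):
        self.page_id = page_id
        self.scores = {}       # expert -> score, private during the round
        self.closed = False

    def submit(self, expert, score):
        if self.closed:
            raise ValueError("round is closed")
        self.scores[expert] = score  # later scores overwrite earlier ones

    def close(self):
        """Close the round and reveal only the aggregate, never the
        per-expert scores, so cults cannot form around named raters."""
        self.closed = True
        return median(self.scores.values())

round_ = BlindRound("Ant_colony_optimization")
round_.submit("expert-a", 4)
round_.submit("expert-b", 5)
round_.submit("expert-c", 3)
aggregate = round_.close()   # median of 3, 4, 5 -> 4
```

Using the median rather than the mean is one simple design choice for blunting a single colluding outlier; a real decentralized system would also need to hide scores cryptographically rather than by convention.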

Finally, it would have to be global and decentralized so that experts cannot easily coordinate and conspire. These problems would not be easy to solve, but Wikipedia currently has similar problems in centralized form.

Artificial development as a design process

A quote from the artificial development literature:

Human designs are often limited by their ability to scale, and adapt to changing needs. Our rigid design processes often constrain the design to solving the immediate problem, with only limited scope for change. Organisms, on the other hand, appear to be able to maintain functionality through all stages of development, despite a vast change in the number of cells from the embryo to a mature individual. It would be advantageous to empower human designs with this on-line adaptability through scaling, whereby a system can change complexity depending on conditions.

The quote above summarizes one of the main differences between an evolutionary design model and a human design model. Human designs have limited adaptability to the environment because human beings are not good at predicting and accounting for the disruptive environmental changes that may take place in the future.

Businesses which take on these static, inflexible human designs are easily disrupted by technological change, because human beings have great difficulty making a design which is "future proof". My own conclusion is that Wikipedia in its current design iteration suffers from this, even though it does have a limited evolutionary design.

The limitation of Wikipedia is that its foundation is centralized, and it's built on top of a network which isn't as resilient to political change as it could be.

In order for the designs of DApps to be future proof, they have to utilize evolutionary design models. Additionally, it would be good if DApps were forced to compete against each other for fitness, so that the best evolutionary design models rise to the top of the heap.


So, a very good argument for a de-centralised Foundation then? :smiley:
The Wikipedia example seems the most important app to migrate to Safe to my mind.
I would say that the "Evolutionary Design Model" is in fact an evolution-inspired human design model, and which aspects of evolution you take inspiration from, or which bits you focus on or leave out, will impact the overall future functions of the app. I agree evolution is a good thing to take inspiration from, but we have to be careful we don't create a "survival of the fittest" society when our intention was only to apply this to economics. I "get" that you would create better apps this way though.
An eventual artificial neural Network would spark some interesting technological Evolutionary processes though I reckon.
De-Centralising Foundations is key though, I agree with David’s thinking. :smiley:


http://www.technologyreview.com/featuredstory/520446/the-decline-of-wikipedia/

To avoid this, the participants in the ecosystem must be given an incentive in safecoins or some other token for their contributions.

An uncensorable Wikipedia which provides incentives to participants, rather than relying on an unsustainable volunteer effort. I would say that, done properly, it would be far superior to Wikipedia.

Survival of the fittest isn't necessarily bad for a society. It all depends on what fitness criteria you're filtering on. If it were a message forum, for example, the fittest messages would rise to your attention while the unfit messages would be junk and spam.

It’s just a filtering mechanism and we all filter according to our preferences.


This idea of being eusocial is thought-provoking when thinking about the "kind" of evolutionary social models to adopt. It is interesting to compare the insects to humanity:

http://kk.org/mt-files/outofcontrol/ch2-f.html

I really like the idea of self programming computers.

Say you have 3 or 4 inputs and 2 or 3 outputs. There are tons of ways for a computer to map, classify, or relate the inputs to the outputs. Once the computer has an accurate enough model, it can manipulate the outputs to generate the desired inputs. Some models are better than others, but a computer could try the various models and pick the best one through simulated trial and error, actual trial and error, or a combination of the two.
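The trial-and-error model selection described above can be sketched in a few lines of Python. The candidate models and the observations are made up; the point is only the select-by-error loop.

```python
# Toy version of "try various models and pick the best through simulated
# trial and error": score several candidate mappings against observed
# data and keep the one with the lowest squared error.

def linear(x):
    return 2.0 * x

def quadratic(x):
    return x * x

def constant(x):
    return 5.0

candidates = [linear, quadratic, constant]

# Pretend these (input, output) pairs came from the black box's sensors.
observations = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]

def error(model):
    """Sum of squared errors of a candidate model on the observations."""
    return sum((model(x) - y) ** 2 for x, y in observations)

best_model = min(candidates, key=error)  # linear fits this data best
```

In a real self-programming box, "actual trial and error" would mean perturbing the outputs and watching the resulting inputs, then refitting; the selection step stays the same.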

Instead of Arduino and the like someday soon we could just have self-programming black boxes.

Somehow missed this, but just saw it on reddit.

Genetic algorithms are a powerful and still under-utilised design pattern that I've long considered returning to, speculating about how to integrate them with internet connectivity and the increasingly decentralised technology developing today. Not least because they could enhance the SAFE core, or be used to build SAFE apps.

This article is very good at highlighting this potential, though a bit slapdash about what can be called evolutionary software (Wikipedia?) or, as he puts it, a human-based genetic algorithm.

Anyway, the author mentions Ethereum and Nxt, but goes into some depth regarding SAFE Network, echoing my own thoughts posted here on the forum in the past.

Good to see this being suggested by others!

ASIDE: I think the label "genetic algorithm" should only be applied to design patterns that attempt to mimic real genetic processes, and likewise "evolutionary software". Wikipedia is a candidate for this, but not yet an example. Better to call it "adaptive" ATM.
