I don’t think it’s great. I also don’t think it’s great that there are private prisons making prisoners work for pennies an hour so that corporations can have cheaper labor. It wouldn’t matter if the products were cheap enough for poor people on welfare to buy at Walmart if the labor comes from sweatshops and private prisons.
Yes, but a free quota on the SAFE network would be possible too, except that people would start registering multiple accounts like crazy. Hmm… Or maybe not! It could be worth trying a free SAFE storage quota right from the start, unless it would be too risky and get abused too much.
For example, today I don’t register several Google Drive accounts. There’s no need for that, and besides, I follow the terms (I hope). Not many people would register many SAFE accounts just to gather some free quota. Botnets registering accounts automatically, on the other hand, may be a more serious problem.
Actually that makes a lot of sense, because of stress. You interpreted it as if money was the problem, but the results are affected by competition. There are dozens of experiments showing that competition raises productivity up to a point and that productivity eventually drops at high degrees of stress - that's what your example is about. Competition can be created by all sorts of factors; resource conflict (rewards) is just one of them. You can also put two people into a room, ask them to push a button once they see a letter on a screen, and then compare it to people performing this task alone. Test subjects always go faster in the 1-on-1 scenario, at least for a certain period.
So again, no, your example wasn't about money, but about competition and the stress it causes. In the case of SAFE, people do not suffer stress because the competition is between machines.
Or maybe yes…
there would be abuse just for the sake of it. In centralized environments, abuse of freebies is often blocked and controlled manually. This (gladly) doesn't work in a decentralized environment, which is why you can simply build a bot and spam the network. If you don't believe it, fork SAFE and get a free net running. Shouldn't be too complicated.
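To make the bot problem concrete, here is a minimal sketch (purely illustrative; none of these names exist in the actual SAFE codebase) of one commonly discussed mitigation: requiring a small proof-of-work per account registration, so that mass-registering free quotas costs a botnet real CPU time while a single legitimate sign-up barely notices it.

```python
import hashlib
import os
import time

# Hypothetical difficulty: the registration hash must start with this many
# zero hex digits. The number is made up for illustration only.
POW_DIFFICULTY = 5

def solve_registration_pow(account_pubkey: bytes) -> int:
    """Brute-force a nonce so sha256(pubkey + nonce) meets the target.
    This is the cost a bot would pay for *each* account it registers."""
    target = "0" * POW_DIFFICULTY
    nonce = 0
    while True:
        digest = hashlib.sha256(account_pubkey + nonce.to_bytes(8, "big")).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify_registration_pow(account_pubkey: bytes, nonce: int) -> bool:
    """Cheap check the network side could run before granting any free quota."""
    digest = hashlib.sha256(account_pubkey + nonce.to_bytes(8, "big")).hexdigest()
    return digest.startswith("0" * POW_DIFFICULTY)

if __name__ == "__main__":
    pubkey = os.urandom(32)        # stand-in for a new account's public key
    start = time.time()
    nonce = solve_registration_pow(pubkey)
    print(f"solved in {time.time() - start:.1f}s, valid={verify_registration_pow(pubkey, nonce)}")
```

This doesn't stop abuse, it only prices it: one account costs seconds of CPU, a million accounts cost weeks, which is the kind of trade-off any free quota scheme would have to reason about.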
Because Google gives you 15 GB of storage, which is enough for most people. You just don't realise that people PAY for that, and that's precisely why Google gives it to you without you needing to send them $$$.
I haven’t looked at the details of that particular experiment, but in science they control for variables to remove bias like that in the statistics. It’s true, however, that the presenter may have been biased himself. He said: “What’s interesting about this experiment is that it’s not an aberration. This has been replicated over and over again for nearly 40 years. These contingent motivators – if you do this, then you get that – work in some circumstances. But for a lot of tasks, they actually either don’t work or, often, they do harm. This is one of the most robust findings in social science, and also one of the most ignored.”
In other articles I read that not all scientists agree with that. So there could be shaky science behind the claim, or it could be that many scientists are culturally conditioned to be biased the other way and ignore much of the evidence (at least when talking causally about it). My own guess is that money as an incentive takes attention away from looking at the full spectrum of possibilities. And I believe this is true: “You’ve got an incentive designed to sharpen thinking and accelerate creativity, and it does just the opposite. It dulls thinking and blocks creativity.”
Then I guess you should perform an experiment that doesn't include competition, which Glucksberg's scenario clearly does.
The experiment you refer to shows the effect of stress on creativity, not that money is “the hallmark of a primitive society”.
Access to most public content, like most websites today, will be free. To limit abuse of the network and ensure a good amount of efficiency, a limiting mechanism is needed. Besides being accounting and regulating units, safecoins, as a neutral currency and digital cash equivalent, enable agreements, services and transactional trades where bartering would be too complicated, too much of a hassle, impossible or undesired. Safecoins have their place in the network community, culture and economy. People can offer their already bought resources to get safecoins, or buy them to participate in the shared activities that require any amount of agreeable payments.
Exactly my point. Google gives you something which has the illusion of being free, but it’s not. Google is an advertising company. The Google Chromebook is also much cheaper than a normal laptop, so I suppose if that Chromebook were given away for free then no one would want to use anything else, because it’s free?
The cost is in freedom. Freedom is a core principle of SAFE Network. “Freedom” is not free, it never was free. Freedom is fought for, freedom is earned, it’s not usually given. Property rights (personal property) are a way of fighting for and earning freedom.
@dirvine has a point about greed, but most people aren’t trying to own 20 cars and 3 mansions. The majority of people want to be independently wealthy so that they can be free, be self-sufficient, and not have to depend on donations, on corporate finance “leaders”, or on governments just to survive.
Personal property can be distinguished from private property. Personal property is a requirement for freedom. Private property can at times cause loss of freedom; it is private property which is associated with corporate greed, while personal property is about individuals owning stuff.
Ah, but those are different things. The idea that money is the hallmark of a primitive society is a broader statement about the very long-term evolution of a civilization. Money has been, and still is, an efficient method for coordinating society. My point is that in the future, information and human engagement will become the dominating values in society. And that will make money essentially obsolete, an unnecessary obstacle to the flow of information and social interactions. And with the exponential progress of technology really starting to pick up speed, this will happen soon from a historical perspective, maybe already within two decades from now. Well, say three decades from now (Ray Kurzweil has predicted that we will reach a technological singularity by the year 2045). And this includes physical products, which Kurzweil has said will become information technology (3D printing etc.).
> Yuck, sounds like the Zeitgeist movement. They are correct in that automation will make it possible to basically remove the need for money, but their centralized solution with sharing is horrible and unnecessary.
In my communications with them they don’t seem set on a centralized solution at all. Many of them know about Bitcoin, about SAFE Network, about blockchain technology, and are in favor of decentralized automation.
The individuals who are thinking about centralized solutions just don’t know about the technology that is available. You should talk to them and show them that there are decentralized options, as I have done, and you’ll see that they actually prefer decentralization.
Overall, in key areas they share the same principles, but they look at the very long-term result without thinking enough about the transition period. Similar to the Venus Project, which perhaps over 100 years might end up being right.
You don’t need money when you have AI and synthetic telepathy. At that point you have totally different forms of communication and you probably could do away with money and let AI manage everything. But that probably will not happen in our lifetime.
So they are thinking about a resource based economy which in my opinion can only ever work if there is an AI. I don’t think it can work with humans in charge and I highly doubt they think it can work with humans in charge. Humans are just too greedy and stupid to manage the earth and only with an AI does it seem feasible to deal with all the information in order to manage resources.
If Kurzweil is correct then sometime before 2050 we might see AGI, the singularity, and if that happens then we will see a resource based economy. It is only likely to happen in a decentralized way, though, because human beings like to feel important even when their services in a certain role become obsolete.
Yes, I suspected that. A free storage quota, even if small, would invite abuse. Unless some clever solution can be found. Note that I think safecoins will be very useful. It’s just the need to have safecoins for starting to store personal data that I find problematic.
Is there anything offered for free which doesn’t invite abuse? If you say the earth is free, then it gets polluted by corporations.
How do you communicate to your fellow humans that something is valuable if it has no price? Do not assume every human has compassion, empathy or a similar brain to your own.
Listen to Peter Joseph. He talks about horrible circular cities with a central control system for distributing resources. I can’t imagine anything more centralized than that.
I’m networked with some of these groups. I can only say that we don’t all agree with that vision. In fact there is the decentralized vision.
I don’t think anything is particularly wrong with circular cities if a person wants that; as long as other people can have blockchain cities, let them have their circular city. I don’t think there is anything wrong with competing to build the best possible city.
An automated world is better for everyone, and the only reason we need decentralization in an automated world is to protect the AI which coordinates the city, the superintelligence. So I would say if there is an AI, it’s not going to be very safe if some other country can drop a bomb on it and the whole city stops.
I would think only a decentralized AI could allow for what you and they are talking about. I do not think it’s possible to live that way with humans in charge unless it’s really a very small, insignificant city.
I wouldn’t particularly want Google City, precisely because it is centralized. Generally, though, the concept you’re thinking about is called Cybership, which was popularized by Joshua Harrs, and Buckminster Fuller also talked about Spaceship Earth. I suggest you look into these concepts.
There is merit to these concepts, but the centralization part is the part which concerns me most. Human beings lording over other human beings with the help of automation is the most terrifying concept and if the system didn’t need so much of the human element then it could be an improvement on what we have now.
A free quota would invite abuse, yes, and there are no practical unique human ID systems yet with widespread use. But think of how a free storage quota would allow anybody with a smartphone to directly start using the SAFE network and begin storing photos etc. Having to buy safecoins or become a farmer should be an option, not a requirement for a new user and for simple usage.
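As a rough sketch of that “option, not requirement” idea (entirely hypothetical client-side logic; these functions and numbers are not part of any real SAFE API), the storage flow could first draw on a small free quota and only fall back to safecoin when it runs out:

```python
from dataclasses import dataclass

# Hypothetical free allowance per account, for illustration only.
FREE_QUOTA_BYTES = 100 * 1024 * 1024   # e.g. 100 MB

@dataclass
class Account:
    used_bytes: int = 0
    safecoin_balance: float = 0.0

def store(account: Account, data: bytes, price_per_byte: float = 1e-9) -> str:
    """Store against the free quota if it fits, otherwise charge safecoin."""
    if account.used_bytes + len(data) <= FREE_QUOTA_BYTES:
        account.used_bytes += len(data)
        return "stored against free quota"
    cost = len(data) * price_per_byte
    if account.safecoin_balance >= cost:
        account.safecoin_balance -= cost
        account.used_bytes += len(data)
        return f"stored, paid {cost:.6f} safecoin"
    return "rejected: free quota exhausted and no safecoin balance"

if __name__ == "__main__":
    acc = Account()
    print(store(acc, b"x" * (60 * 1024 * 1024)))   # a phone photo backup, fits the quota
    print(store(acc, b"x" * (80 * 1024 * 1024)))   # exceeds the quota, needs safecoin
```

The point is only that buying safecoins or farming becomes the upgrade path rather than the entry ticket.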
Yes, Skynet. Just kidding! Yes, decentralized AI. I predict that will happen. We “just” need to make sure that we learn how to develop friendly AI and not some Matrix machine world.
Skynet is how AI could go wrong if it became self aware.
Humans already have gone wrong, and based on probability are far more likely to glitch out.
Remember the pilot over in Europe who had the brain glitch that made him crash the plane and take every passenger with him? Maybe autopilot would have been safer than having someone like him fly the plane.
In fact, autopilot and self driving cars are statistically safer so it’s already a fact that AI is safer in charge of certain things than people. Skynet is only a danger if you put AI in charge of weapons and give them the ability to kill, and no one is endorsing that kind of AI except for some radicals in the military.
Friendly AI is actually not so difficult. It evolves. And if it is decentralized there is a much higher chance that it will be friendly. Basically, take the Internet, build it up, and evolve it into friendly AI. Now the question is whether or not it will become self-aware, which no one knows, or whether or not some lunatic human will try to make weapons, but I don’t think AI is going to become unfriendly unless humans become unfriendly and try to leverage AI to hurt other humans.
Honestly, we might not have this problem for 20 more years, but if we get Skynet it’s statistically most probable to come from a top secret government project. Secretly developed AI is dangerous. Self-aware AI is also risky, but there is no evidence that any AI is self-aware rather than just acting like it.
Actually, kill commands are already driven by computer analysis. There have been many examples where innocent people have been killed based on the judgement of machines (one graphic example: the WikiLeaks Baghdad video). Humans have already been turned into merely the justification for a machine to perform. We are already there.
AI will never be better than humans. It will be as good as its origin, that is human, and serve as an extension.
AI is already better than humans in some areas which is why we need it in the first place.
The military, corporations, states, are like machines with humans as smart components. An information system runs just like a machine if you look at it, and humans are smart components in the overall system. If you prefer a less mechanical approach you can think of it as complex adaptive systems.
Human performance as smart components in these systems is not always better than the performance of automation. Humans are not particularly good at judgment, diagnosis, flying planes, driving cars, or tracking objects, yet we have some humans who are afraid of autopilot even when, on all important safety metrics, the autopilot performs better?
The numbers are the numbers, and the numbers show that while humans still remain better at certain things, the list of things humans remain better at is shrinking every year, while the machines are surpassing humans in new areas every year. Sooner or later humans are going to be obsolete as smart components, with the only option for humans being to resort to cyborgization in order to remain essential components.
A pilot in the future might be connected to the aircraft via a brain-to-computer interface, allowing some sort of synthetic telepathy, and if the human has any advantage over the machine then the AI could let the human take over, but if the human starts glitching because they are sleepy, hungry, or subject to any other human-related weakness, then the machine can take over.
Cyborgs will probably come long before there is any fully automated society but there is nothing which prevents a fully automated city if people really want it and are willing to build it. I don’t see many things which couldn’t be automated except perhaps reproduction. Most jobs are easily automated.