Parallella - Is this the future for Farming?

The Future of Parallel Computing

Adapteva’s groundbreaking Epiphany multicore architecture represents a new class of massively parallel computer architectures that could disrupt a wide range of end markets, from compact low-power devices to next-generation supercomputers. To enable parallel programming in heterogeneous environments, Adapteva is adopting an open source approach, making the architecture, interface, and programming information available to all.

Adapteva sponsors the Parallella project and designed the Parallella board. The Parallella project is a community of users and developers dedicated to promoting and advancing parallel processing in the industry. The Parallella board is an open platform on which participants can explore, prototype, and contribute to an open source library of expertise, information, and code samples — for the benefit of a worldwide professional community of thousands of experienced participants.

  • 18-core credit card sized computer
  • Gigabit Ethernet
  • Micro-SD storage
  • HDMI, USB (optional)
  • Up to 48 GPIO pins
  • User configurable hardware (FPGA)
  • Open source design files
  • #1 in energy efficiency
  • Starting at $99

Parallella Cluster in Action

Epiphany Architecture Primer

The Epiphany has a flat 32-bit address space split into 4096 1-MiB chunks. Each core is assigned its own 1-MiB chunk, but it also has transparent access to the memory of every other core in the system. The figure below further illustrates the memory scheme of the Epiphany architecture.

Each individual core is a high-performance RISC processor that can be programmed using the standard programming methods of the last 40 years. The challenge with having so many processors to work with is getting them to work well together. Many parallel programming approaches exist today, including OpenCL, OpenMP, and MPI. By virtue of being a general-purpose programmable processor, the Epiphany architecture could potentially support all of them with some effort.

One of the great advantages of the Epiphany architecture is that anyone familiar with C/C++ can achieve great results in no time. Absolutely no proprietary languages, libraries, or programming constructs required.


The grid architecture reminds me of the GreenArrays chips by Chuck Moore:

Is each core synchronous or asynchronous on the Epiphany?

I guess that programming in Forth would be a deal-breaker for many though ;-).

@chrisfostertv And what about XMOS?

@erick Bitcoin Script and OLPC boot programmers will feel at home with this stack-based programming language. :wink:

Where did you see anything about Forth? The documentation for the Epiphany explicitly says a C/C++ gcc toolchain. Maybe I missed something.

The Epiphany cores are out-of-order superscalar processors with an inconsistent memory model. So no, they aren’t necessarily waiting when communicating with each other. [EDIT: Note the strange double negative here. I was saying they are effectively asynchronous.]

The unit is really cool, but I don’t see these running SAFE to their potential anytime soon. It could be worth seeing how crypto performs on the Epiphany cores, or possibly even on the FPGA. Latency, more than anything, is going to be critical — this is something @ned14 ran into when trying GPU offloading. Disk throughput could be interesting too; I think you’d have to use the USB 2.0 interface for that.

Given where SAFE is projected to be headed, you don’t see an important role for parallel processing? Maybe because SAFE is not as compute intensive as, say, a ‘proof of work’ scheme.

Niall posted this in another thread, but basically he doesn’t expect SAFE to be CPU-bound on most systems:

So the question is whether this board has enough power savings over modern Intel/AMD chips to justify the difficulty of programming its specialized CPU/FPGA. I’m having trouble finding the latency from the 1 GB DRAM to the local memory used by the Epiphany cores — the bandwidth is 8 GB/s. The latency is key to the practicality in this situation. Still a pretty cool board though; if SAFE incorporated some zero-knowledge proofs (for example, Zerocoin-style anonymizing), this would likely become very useful.


Interesting concept, significantly more expensive though

I was referring to the dev toolchain of the GreenArray chips, not the Epiphany. Should have been clearer…

It looks to me like a consistent NUMA design: memory local to the CPU gets low-latency access, while far memory gets high-latency access. You can fire up any standard multithreaded program on one of these and get average performance dominated by the high-latency accesses, but if you design your program to be NUMA-aware you can get much better performance.

My next gen platform from ten plus years ago was a NUMA design. I way overestimated how soon they’d be commonplace. That said, a NUMA board for $99 is amazing, if I only had access to that sort of cheap test hardware a decade ago …

Designing around NUMA is an excellent mental test. For example, most crypto libraries have a crap design; approached from a NUMA-optimal perspective, you get a much, much better outcome. There is no reason a crypto library shouldn’t assume a 16-core world, and if it does, the choice of which crypto to use is radically different — e.g. CBC is a poor choice for NUMA.

Anyway, none of this matters to SAFE. But maybe it’s a vision of how things will be 20 years from now depending on when and how semiconductor scaling breaks down.

BTW @chrisfostertv thanks for the link. I’ll be buying one of these as a Jenkins CI test slave I think. You get to see bugs in an 18 core CPU you just don’t in a 4 core CPU.



Just realised that this isn’t a full-fat 18-core CPU :frowning:. It’s a dual-core Cortex-A9 plus an 18-core ASIC, and that ASIC is very similar to a GPU, i.e. extremely limited. It has similar scatter-gather memory access limitations, but I suppose it does have the advantage of a more generalised instruction set, as GPUs are all vector registers.

What I had been hoping for was a cheap Intel Xeon Phi, allowing any of 1…64 combinations of N-CPU virtual machines to be fired up — i.e. 64x 1-CPU VMs, or 1x 64-CPU VM, or 2x 32-CPU VMs, and so on.


$12.27 for the XMOS startKIT

This is definitely the case. And the latency is proportional to how far the referenced memory location is from the core: the request must go through the “routers” of all the cores along the path.

I wasn’t clear why SAFE wouldn’t be running to “potential” on this system anytime soon: it’s not an 18-core ARM system, as you found out. You have to explicitly request DMA transfers from main memory to these co-processor chips, just like a GPU, and then those cores run a specialized instruction set. There does look to be potential on this board, but there’s some work involved in using it.


So not much chop then…and of no advantage for running multiple vaults via say Docker?

So I get this:

…then like, do I need like a soldering iron :fearful:

If you just want to program the chip, you don’t need a soldering iron.
But if you want to build the mesh networking farming equipment of the future, be fearful :slight_smile: you will need it.
I was trying to address the “significantly more expensive” issue…

But I see the XMOS as a possible candidate for creating future specialized programmable hardware for the SAFE network.


There might be an easier way… No soldering required!

What if he decided to buy a BlackSwift Pro and happened to have a hub lying around? I’m sure a maidsafe thin client could be written for the BlackSwift, and CPU usage could be routed over USB to each XMOS board. Then mesh networking could be as easy as plugging it into a power supply! Did I mention the device is 25 × 35 mm? :smile:


Power up a bunch of hacked WiFi SD cards via WiTricity and add mesh to the mix. :slight_smile:

Eugh, WiTricity is another unreadable UX-fail website. I refuse to read websites that hit you with rapidly sliding images every couple of seconds. Web UX design seems to have got a lot worse in the last year or so.

Maybe you could go there: Eric Giler: A demo of wireless electricity | TED Talk
:wink: Warning for @happybeing: there are moving pictures :wink:

Or something more static:
