Death of x86?

Another interesting Kickstarter

Interesting. Thanks.

The rumors of the death of x86 have been greatly exaggerated.


Not sure about the business case for this one.

The revised Atom with the low-power boards with PCI Express and a decent Nvidia CUDA-capable card would
a) support Windows/Linux and a million other pieces of hardware
b) kick the shit out of this thing in performance/watt or performance/$ or whatever metric you want. 16 cores on a 65nm process isn’t exactly going to be a supercomputer on your desk.

This ARM board.
Nvidia CUDA boards.

I would say our biggest problem seems to be writing software that takes advantage of multiple cores.
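To make that concrete, here is a minimal Python sketch of the *easy* case: an "embarrassingly parallel" job where every item is independent, so the OS can spread workers across however many cores are available. The workload here is hypothetical.

```python
# Easy case: independent work items, one worker process per core.
from multiprocessing import Pool

def crunch(x):
    # Stand-in for real per-item work (filtering, hashing, rendering, ...)
    return x * x

if __name__ == "__main__":
    with Pool(4) as pool:                  # one worker process per core
        print(pool.map(crunch, range(8)))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The hard part is that most real software isn’t shaped like this: when work items depend on each other, splitting them across 16 (or 64) cores stops being a one-liner.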

As you said, “lots of ARM cores on a blade” isn’t exactly a new idea.

Agreed. There’s simply too much legacy software out there; the only way x86 might stop being ubiquitous is if an ARM OS maker figures out a way to fully support >95% of x86 applications. Which, if I worked in Mountain View, I would be going all-out gonzo to develop.

As for the project, I don’t see it going anywhere since most users don’t need parallel computing–at least not at the expense of per-core performance. Might be interesting for doing Bitcoin mining on the cheap, though…I’m halfway tempted to back this just to see if I can show up the guys making $1500 purpose-built crypto boxes.

It has nothing to do with legacy software. ARM does some things really well, and ARM does some things really poorly. There are problems that aren’t easily parallelized, and require high performance per core. Those things still require x86, Itanium, or some other high-performance technology. To pretend that you can replace a single high-performance core with lots of low-performance cores (even if they don’t use much power) for all problems is foolish.

I may not have been keeping up on my study material, but what is the power consumption for a “decent Nvidia CUDA capable card”? Last time I looked it was 100W+; has that changed?

If I’m not mistaken, the target for the Epiphany multicore is 2W, and the total board with 1GB RAM and a dual-core ARM will be 5W. An Atom CPU, without chipset or “decent Nvidia CUDA capable card”, draws 5W to 15W, so where does “kick the shit out of this thing in performance / watt” come from?

Or “kick the shit out of this thing in performance / $” for that matter? Can you point me to a dev board with an Atom plus Nvidia for under $100?

How about getting a price on this:

Nvidia CUDA boards.

Have you looked around at what dev boards cost? Before the Raspberry Pi the cheapest ARM dev board you could get was hundreds of dollars. If you go Chinese you can get a dev board for $100, with no support and a really small community that can’t really help you with anything. And that is for a very basic 400MHz ARM CPU and 256MB of RAM.

@ GMod(Errol) - Have you seen this beauty yet? (It’s not in the homebrew or reference board market)

(full mini mobo with i3/i5 support, with an entry Ivy Bridge CPU/GPU, estimated starting price about $100)

It would still need some GPU muscle, as you pointed out.

To me, this Kickstarter might be more about getting their name/accelerator out there than an actual commercially viable product. But then again, I’m not that up on the small-scale massively parallel processing market.

Re death of x86 - “I’m not dead yet”

  1. Consumer front - ARM is the new AMD for Intel. I think we’ll be seeing more and more capable Atom-class processors (or the line between desktop/mobile CPUs will blur), as the shift to mobile has started with iPad/Android and will pick up in a few weeks with Win8.

  2. Parallel & supercomputing - Intel's Xeon Phi in a 10-petaflop supercomputer (64 x86 cores, 256 threads, on a PCI expansion card)

That board looks to be about the size of the board inside my FitPC2, which is Atom-based with Intel graphics inside the chipset. My FitPC2 draws around 12W when working hard. It also was not cheap, and the $100 board you pointed to will be $300 once you add the GPU and CPU… :). I think my FitPC2 was around $400 four or five years ago.

See: Gallery - fit-PC wiki

An i3/i5 will never get to the 5W range. A Raspberry Pi draws 4W, BTW.

To me this is an upgraded Propeller chip, which has 8 cores delivering about 20 MIPS each. People do some really amazing stuff with that chip. Now imagine you give those same people a chip with 16 cores running at 1GHz each… :slight_smile:

I understand that the i3/i5 (even ULV versions) would draw more and the total solution would be more expensive. As I mentioned, it’s not really in the same market. I’m just pointing out that lower-priced x86 alternatives to big-iron stuff are becoming available (obviously a general-purpose PC will have different characteristics than a purpose-built device).


Just curious GMod, what do you see as the prime market or use for the Parallella and the Epiphany Multicore Accelerator? They brand it as a “Supercomputer for everyone”.

One thing would be neural networks.

Another, for me, would be Software Defined Radio, with one channel decoded per core. Imagine an FM radio that can receive 64 FM radio broadcasts at the same time. No tuning required.
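To sketch what "one channel per core" means, here is a toy Python version with a fake capture and made-up sample rates (none of this comes from a real SDR library): each station is mixed down from the same wideband capture and FM-demodulated independently, so each channel could run on its own core.

```python
import numpy as np

def demod_channel(wideband, fs, f_offset):
    """Shift one station down to baseband, then FM-demodulate it."""
    t = np.arange(len(wideband)) / fs
    baseband = wideband * np.exp(-2j * np.pi * f_offset * t)   # mix down
    # FM demod: the instantaneous phase step between successive samples
    return np.angle(baseband[1:] * np.conj(baseband[:-1]))

fs = 2.4e6                                        # sample rate (made up)
n = np.arange(4096)
wideband = np.exp(2j * np.pi * 100e3 * n / fs)    # fake capture: one carrier
# Each of these calls is independent -> one channel per core
channels = [demod_channel(wideband, fs, f) for f in (100e3, 200e3)]
```

A real receiver would low-pass filter and decimate each channel after mixing, but the independence between channels — the part that maps onto cores — is already visible here.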

Another interest for me is Synthetic Aperture Sonar. Imagine you have something like a Ping, or an ultrasonic distance sensor, but instead of mounting it on a servo and getting 20 readings in different directions, you send one ultrasonic pulse and listen to all the echoes coming back with 6 or 8 microphones. With that recorded data, some math, and quite a bit of crunching, you can build a map of what is in front of the robot, in many directions, all at once.
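The "some math" here is classically delay-and-sum beamforming: for each look direction, undo each microphone's extra travel delay and sum, so echoes from that direction add coherently while others smear out. A toy Python sketch with made-up geometry (whole-sample delays, a single fake echo):

```python
import numpy as np

def delay_and_sum(recordings, delays_samples):
    """Align each mic signal by undoing its delay (in whole samples), then sum."""
    out = np.zeros(recordings.shape[1])
    for sig, d in zip(recordings, delays_samples):
        out += np.roll(sig, -d)    # crude integer-sample alignment
    return out

# Fake data: one echo hitting 3 mics with 0/1/2 samples of extra travel time
recordings = np.zeros((3, 64))
for mic, extra in enumerate([0, 1, 2]):
    recordings[mic, 10 + extra] = 1.0

beam = delay_and_sum(recordings, [0, 1, 2])   # "look" in the echo's direction
# beam now peaks sharply at sample 10, where all three mics add coherently
```

Doing this for many look directions at once is exactly the kind of crunching that parallelizes cleanly: each direction is an independent delay-and-sum.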

Who needs a wimpy 64-core computer… This chip has 144!

A low-end CUDA card, 30 watts (Nvidia GT 610), gives 48 cores and is $45.
A basic Atom board is $80: 1.8GHz, 18+ watts (forthcoming boards are much lower, power-wise).

Yes, you’ll need to add RAM, a power supply, and flash memory, but it also has built-in extras like a NIC, advanced power management, USB host, and video out for diagnostics.

I think it will win any performance based metric by brute force on the numerator of that equation.

Here’s the problem: in the “performance per watt” measurement, watt is well defined, but “performance” is vague. Each manufacturer picks a metric that shows their board performing well. These Parallella people mention “45 GHz of equivalent CPU performance”. Since when was “GHz of equivalent CPU performance” a meaningful metric?

The amount of work done per GHz varies VASTLY by architecture. Even within a given ISA, it varies vastly (see AMD vs Intel and the GHz wars). CUDA cores excel at different types of work than x86 cores and ARM cores. If you’re going to compare performance per watt, you need to state what your performance metric is (and number of cores and GHz aren’t meaningful metrics).

Yep, and this $99 board will have gigabit Ethernet and HDMI for display. At 5W it is already way below what Atom+CUDA can manage with the most aggressive power management.

The speed/processing depends a lot on the instruction set and the number of clock cycles per instruction. Let’s look at GFLOPS: billions of floating-point operations per second.
The Nvidia GT 610 does 155 GFLOPS at 29W, which gives you about 5.3 GFLOPS/W.
The Epiphany-III chip does 35 GFLOPS at 2W, giving you 17.5 GFLOPS/W. Yes, that is less GFLOPS per chip, but you can run 16 chips for roughly the power budget of one such card, for a total of 560 GFLOPS.
The Epiphany-IV chip does 100 GFLOPS at 2W, giving you 50 GFLOPS/W. Closer per chip, but still less. Again, 16 chips in the same power budget gives you 1600 GFLOPS.
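A quick sanity check of that arithmetic (the GFLOPS and watt figures are the claimed numbers from this thread, not measurements):

```python
# GFLOPS-per-watt comparison using the claimed figures above.
def gflops_per_watt(gflops, watts):
    return gflops / watts

print(round(gflops_per_watt(155, 29), 2))  # GT 610:       ~5.34 GFLOPS/W
print(gflops_per_watt(35, 2))              # Epiphany-III:  17.5 GFLOPS/W
print(gflops_per_watt(100, 2))             # Epiphany-IV:   50.0 GFLOPS/W
print(35 * 16, 100 * 16)                   # 16 chips: 560 and 1600 GFLOPS
```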

On top of all that, the Atom+Nvidia solution won’t exactly be small.

In the end: if you want to run Windows and do lots of calculations, get an i3 plus Nvidia. I doubt an Atom would be able to drive an Nvidia card properly.
If you want small, low power, portable, then look at this board.

64 cores but can it run Crysis?

Hmm, things just got interesting.

The applications processor on this board is a dual-core ARM Cortex-A9 running at 667MHz, with an FPGA on the same chip; the chip is made by Xilinx, a big FPGA company.

Add to that the 16-core parallel CPU chip, with the FPGA between it and the application processor, throw in some GPIO pins connected to the FPGA, and this has a lot of interesting possibilities, especially in the image-processing arena…

That would make a nice Micro Framework as well as WinRT/Surface base… now if only we had a CP7 that could connect to it easily… cough cough

Hmm, I bet you can connect a CP7 straight to the GPIO. What is an FPGA good for if it can’t run an LCD… :slight_smile:

Edit: The CPU doesn’t have native HDMI either; that is also done in the FPGA, so it should be trivial to drive an LCD instead of HDMI…