No. of Recommendations: 2
https://www.coindesk.com/anti-asic-revolt-just-far-will-cryp...
Designed specifically to enable operators to earn a greater share of their networks' rewards, "application specific integrated circuits," or ASICs, have emerged to mine a handful of cryptocurrencies that were previously only able to be secured by those using GPU hardware.
That's because, as bitcoin has proven in the past, GPUs won't be able to co-exist with ASICs, as their arrival is likely to push the hashrate to a level where other types of miners will become unprofitable.


Nice to get some info on when ASICs rule, and that within their limits they are cheaper than general-purpose GPUs. Basically it is in the acronym: "application specific".
At present I suspect that AI is so immature that few outside of places like Google even know what the applications are going to be in the next year or two. But at some point ASICs will be a big threat to Nvidia.
No. of Recommendations: 1
The article does also point out a critical flaw for ASICs (that is also their major strength): they are "application specific". If the "application" gets changed, like say an update to a cryptocurrency algorithm, it might very well render the ASIC obsolete. An expensive ornament. If say a Cloud Titan fills a room with ASICs to rent out compute on a particular framework and that framework changes an algorithm in some way, it may no longer be compatible. Same for a fleet of cars or drones. Seems a risky adventure in this rapidly developing field. I see FPGAs as a bigger threat, although they have a long way to go to catch up to NVDA Volta, if they can.
No. of Recommendations: 0
Is ASIC a thing outside of crypto-mining?
I don't think crypto-mining as a business is going to survive 2018, but maybe the technology has other applications that might threaten NVIDIA's business?
No. of Recommendations: 1
I believe AI was more the focus...at least before crypto became a thing.

http://www.moorinsightsstrategy.com/will-asic-chips-become-t...

Dreamer
No. of Recommendations: 0
Is ASIC a thing outside of crypto-mining?

ASICs have had the reputation of being CPU killers for decades but reality never caught up with fame. They seem to be mid-level utility chips. Why should they have better luck killing GPUs than they had killing CPUs? It's not the way to bet, I don't think.

Denny Schlesinger
No. of Recommendations: 0

ASICs have had the reputation of being CPU killers for decades but reality never caught up with fame.


I'd never heard of them before, and I thought of myself as reasonably tech-savvy.
No. of Recommendations: 0
History

The initial ASICs used gate array technology. An early successful commercial application was the gate array circuitry found in the 8-bit ZX81 and ZX Spectrum low-end personal computers, introduced in 1981 and 1982. These were used by Sinclair Research (UK) essentially as a low-cost I/O solution aimed at handling the computer's graphics.


https://en.wikipedia.org/wiki/Application-specific_integrate...

Denny Schlesinger
No. of Recommendations: 0
As a hypothetical, the best barrier to ASICs is the need to constantly update software. Google appears not to have that problem with their tensor chip (albeit their tensor chip is no better than Volta; of course there are specific use cases that will make one better than the other).

That will be an interesting topic.

But for later. Just won a race to the courthouse! Not quite Churchillian fun, but in the realm of his world.

Tinker
No. of Recommendations: 1
ASICs are a very prominent component of modern-day computing. Basically any circuit that does computing for a specific set of programming (application specific), as opposed to being able to do general computing, is an ASIC. Think about a simple digital voice recorder. It is in essence a computer, simple as it is. You wouldn't want to, say, put an expensive CPU in a device that can run off of a very simple and inexpensive ASIC. You sell it and would really have no need to update the software, so the ASIC you install is always good. Same would go for building cruise missiles. The ASIC should be good for the entire lifespan of the missile. Technically, something like the Axx chip inside an iPhone would be an ASIC. It only works with iOS phone software; you couldn't take it out and run a Windows or Linux laptop with it. Unlike, say, an Intel CPU or an NVDA GPU, which can perform general computing.

So the question becomes: is something like an ASIC good for AI training or running AI devices? In some specific instances they will find a home. For where NVDA is targeting its AI market, I think not so much. And that assumes that computing efficiency would be at least equal, which it's currently not (edge to Volta by a good margin). In a cloud data center where you want to rent out computing power to users, having a system tied down to one application limits your potential customer base. That affects pricing power for systems not as much in demand.

For something like autonomous vehicles: if you are an end-user car company, you want to sell someone something that can last the life cycle of the car. People will not like having to fork over cash and time to update their car's hardware so it continues to be able to operate autonomously. Think about if some new sensor technology comes out that requires a major change to the underlying software. Now say this upgrade is so good it's mandated by government regulators. If your processor is an ASIC, will a safety recall or update suddenly make your whole system unusable? That would be a huge problem. A system that runs on a GPU would not have that problem, ever.

Another problem I see for ASIC users in AI is what I call deceleration bias. Say a company like Google builds out a massive center with ASICs to run their popular TensorFlow framework. Will they put their full effort into developing that framework's software, or will there be some underlying bias to slow down development to preclude having to rapidly replace their ASICs? In this business of rapid innovation, could what was once their competitive advantage turn into a competitive liability, with the framework not innovating as fast or as well as others out there? The Oracle and Cisco effect.

Just some thoughts. Bottom line is that there will be ASICs, but there will also be GPU high-performance computing, and the GPU is busy displacing CPU high-performance computing, so NVDA has a lot of room.
No. of Recommendations: 0
If the "application" gets changed, like say an update to a cryptocurrency algorithm, it might very well render the ASIC obsolete.

Yes, but there are many possible applications for AI where one might at some time in the future revise the algorithm, but one is not necessarily going to revise what has already been deployed. Imagine, for example, some AI to automate some aspect of flying a drone, say following a specific target. One might later come up with an improvement, but one will not be updating the thousands of existing drones. A wee bit of cleverness can also allow one to identify possible areas of change and externalize those from the core ASIC itself, making that part of the algorithm possibly even software-updatable.
No. of Recommendations: 0
Think about if some new sensor technology comes out that requires a major change to the underlying software. Now say this upgrade is so good it’s mandated by government regulators.

Has there ever been an auto recall like this? To fix something that is broken and dangerous, yes, but to make something better?
No. of Recommendations: 3
Another problem I see for ASIC users in AI is what I call deceleration bias. Say a company like Google builds out a massive center with ASICs to run their popular TensorFlow framework. Will they put their full effort into developing that framework's software, or will there be some underlying bias to slow down development to preclude having to rapidly replace their ASICs?

I think you are missing some details here. What Google's TPU does is not (only) to accelerate the Google TensorFlow AI framework. It accelerates the math of convolutions. Convolutions are maybe 50% to 75% or more of a typical neural net's processing time. TensorFlow is a high-level framework. Down at the low level it calls functions that do convolutions (and other things). These can be executed by a CPU, by a GPU or by their TPU...or by any new ASIC that anyone else builds.

TensorFlow is conceptually similar* to a high level programming language (C, C++, Java or dozens of others). The TPU hardware is not specific to TensorFlow in any way. Another AI framework (Caffe, Torch, PaddlePaddle or a dozen others) could also be accelerated by the same TPU or a GPU or some new ASIC.
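
To make the layering concrete, here is a minimal sketch using present-day TensorFlow 2.x APIs (not what Google exposed back then), so treat it as an illustration of the abstraction rather than a recipe. The model code never mentions hardware; a distribution strategy decides whether the underlying convolution and matmul kernels land on a TPU, on GPUs, or on the CPU.

import tensorflow as tf

def pick_strategy():
    # Try to attach to a Cloud TPU; fall back to whatever local devices exist.
    try:
        resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
        tf.config.experimental_connect_to_cluster(resolver)
        tf.tpu.experimental.initialize_tpu_system(resolver)
        return tf.distribute.TPUStrategy(resolver)
    except Exception:  # no TPU available in this environment
        return tf.distribute.MirroredStrategy()  # local GPUs, else CPU

strategy = pick_strategy()
with strategy.scope():
    # Pure framework-level code: the strategy (and the driver underneath it)
    # decides which hardware executes the convolutions and matmuls.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))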

I believe that it is important to understand the abstraction layers involved here if you want to sound like you know what you are talking about when discussing why a particular company or device has an advantage or not.

* You actually write your TensorFlow code in Python (a portable high-level programming language).

Mike
No. of Recommendations: 0
Is Google's TPU (in its current iteration as deployed in their cloud) capable of running another deep learning framework, like Caffe2 let's say? That would be news to me for sure. My understanding is they only developed it to run TensorFlow. I welcome learning something new.
No. of Recommendations: 2
“The TPU hardware is not specific to TensorFlow in any way.”

Mschmit
For Google's specific TPU I do not think this is correct. That's why Google and everyone else have labeled it an ASIC processor. The application it is specific to is TensorFlow.

For instance, here is how Google markets the TPU to their cloud customers:
"You can use a Cloud TPU to accelerate specific TensorFlow machine learning workloads. This page explains how to create a Cloud TPU and use it to run a basic TensorFlow model." This is from the getting-started section of the Cloud TPU documentation.

https://cloud.google.com/tpu/docs/quickstart

The math equations that you reference are correct in terms of thinking about what "tensor" is, but not so much in thinking about what a Google TPU is. It's an ASIC that features a tensor core architecture. The programming and architecture are those of an ASIC. Yes, TensorFlow can integrate with many different programs, but if you want to use the Google TPU to accelerate, you must use TensorFlow. At least, you have to set up your machine learning task through the TensorFlow application to use Google's Cloud TPU service as it is currently set up. Maybe I'm wrong and it will be able to perform accelerations on other applications after beta. Do you have evidence otherwise?
No. of Recommendations: 2
Darth,

You are absolutely correct. What are being built as GPU killers, called ASICs, are not really ASICs:

https://www.tractica.com/artificial-intelligence/asics-for-d...

An ASIC is a hardware solution to a software problem allowing for only minor deviations. Thus, as we have discussed, software updates, with a true ASIC, for the most part require a new ASIC. Such solutions, for say autonomous driving, would be a commercial disaster. It would require a hardware upgrade for every new upgrade.

And upgrades will be coming continuously, just for security purposes if nothing else.

What are being built are basically "better" GPUs that are supposedly more optimized and made to be more general in usage.
To date not a single such solution has hit the market, except for very specific and mature things that are unchanging in nature, and bitcoin is one such thing. If you want to mine bitcoin, you use a real and actual ASIC.

What this means for GPUs is that until the technology becomes so mature that only minor updates are necessary going forward, or planned hardware obsolescence becomes the norm (very doubtful there, given security and safety issues), this new batch of GPU killers is going to be basically GPUs that didn't start out being produced to create graphics.

And this is consistent with some interviews I have read: that the GPU was designed to create graphics and therefore has extraneous features, and that these companies are trying to build a chip to do the same thing a GPU does in AI without the extraneous junk.

Sounds a lot like the GPU has a lot less to worry about than is commonly batted about, as these new "ASICs" will not really be ASICs and are therefore not likely to be materially better than what Nvidia is creating, particularly as they will be competing against Nvidia's next-generation GPUs by the time they come out. But we shall see.

Tinker
No. of Recommendations: 0
It would require a hardware upgrade for every new upgrade.

Not at all. There are a *great* many pieces of software where the algorithm in the software remains fixed, but the parameters which control how it behaves are fetched from outside. No reason at all that ASICs can't work the same way.
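
A minimal sketch of that pattern (the names and file here are made up for illustration, not any real product): the function stands in for logic frozen into an ASIC, which never changes after the hardware ships, while the numbers it uses are plain data that can be re-flashed or downloaded at any time.

import numpy as np

# Stands in for the fixed function baked into the silicon: it never changes.
def fixed_inference_step(features, weights, bias):
    return np.maximum(features @ weights + bias, 0.0)  # matmul + ReLU, nothing else

# The updatable part: ordinary data fetched from outside the chip
# (a file here; in practice flash memory or an over-the-air download).
def load_parameters(path):
    blob = np.load(path)
    return blob["weights"], blob["bias"]

# Pretend an improved parameter file just arrived; retraining only means
# shipping new numbers, not new silicon, as long as the shapes stay the same.
np.savez("model_params_v2.npz", weights=np.random.rand(16, 4), bias=np.zeros(4))
weights, bias = load_parameters("model_params_v2.npz")
print(fixed_inference_step(np.random.rand(1, 16), weights, bias))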
No. of Recommendations: 0
<<<It would require a hardware upgrade for every new upgrade.

Not at all. There are a *great* many pieces of software where the algorithm in the software remains fixed, but the parameters which control how it behaves are fetched from outside. No reason at all that ASICs can't work the same way.>>>

You know, you say a lot of things, and as proven in our political talks, almost all of them turn out to be false or wrong, with the number so great it may equal the number of dead JV marching ISIS soldiers killed under Trump in just six months.

So excuse me if I inquire further: can you give me an actual example of this? I provided a link to show that for any material AI application there is no real ASIC. None of the start-ups are building a true ASIC, Google is not, and certainly Nvidia and AMD are not.

Bitcoin will never change and will never need an upgrade or any variant to its fixed chip structure. It is a hardware solution with the software built in.

Give me an example. Support your supposition.

Tinker
No. of Recommendations: 1
Yes, the Google TPU is "optimized for specific workloads" using only TensorFlow. Google gives a list of compatible workloads that are all TensorFlow, and tells you what is not available, including TensorFlow workloads with "custom TensorFlow operations".

They let you know that GPUs would be the appropriate choice for workloads "not written in TensorFlow or cannot be written in TensorFlow". This is because "On Cloud TPU, TensorFlow programs are compiled by the XLA just-in-time compiler. When training on Cloud TPU, the only code that can be compiled and executed on the hardware is that corresponding to the dense parts of the model, loss and gradient subgraphs." The TPU performs only those calculations or convolutions of the TensorFlow workload.

“Tensor Processing Units (TPUs) are Google’s custom-developed application-specific integrated circuits (ASICs) used to accelerate machine learning workloads.” “Cloud TPU enables you to run your machine learning workloads on Google’s second-generation TPU accelerator hardware using TensorFlow”

Read more if you like.

https://cloud.google.com/tpu/docs/tpus

TensorFlow is by far the most widely used deep learning framework, so Google will have success with an ASIC designed for it to be used in cloud compute. But their TPU appears to be a genuine ASIC, and that does present some limitations if it is to be a "GPU killer".
No. of Recommendations: 1
An ASIC is a hardware solution to a software problem allowing for only minor deviations.

This is a misunderstanding of the term ASIC.
The words in the name "application specific" were coined (no pun) a long time ago. Being an ASIC doesn't mean that the product isn't highly programmable.

A good example would be a DVD or Blu-ray player SoC. These devices contain a full (not very powerful) CPU, a DSP for doing audio, and a programmable video decoder that can handle several codecs. The system, as a whole, isn't suited to be very general purpose. But they have lots and lots of software to run them, and you can write very complex programs that handle all the disc menus, navigation, subtitles, etc.

One reason that many ASICs aren't general purpose is just to make building them and deploying them simple. You put together a few IP blocks and your special processing parts, and put it on a board with a PCIe interface. Then you write a simple PC-based driver for it and you are done. This is exactly how a GPU works, and it is just how Google's TPU works as well. The driver is an abstraction from the programming language (or AI framework). The driver knows how to set all the registers in the ASIC and trigger it to process a workload.
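
To put that in (purely hypothetical) code: every register name, offset and class below is invented for illustration and corresponds to no real GPU, TPU or Google software; the point is that the framework only ever calls run_workload() and never touches the hardware directly.

class FakeDevice:
    # Stands in for a memory-mapped register window (a PCIe BAR on real hardware).
    def __init__(self):
        self.regs = {}
    def write(self, offset, value):
        self.regs[offset] = value
        if offset == 0x10:        # writing START...
            self.regs[0x18] = 1   # ...makes this fake hardware report DONE at once
    def read(self, offset):
        return self.regs.get(offset, 0)

class AcceleratorDriver:
    REG_INPUT, REG_OUTPUT, REG_START, REG_STATUS = 0x00, 0x08, 0x10, 0x18
    def __init__(self, device):
        self.device = device
    def run_workload(self, input_addr, output_addr):
        # The framework (or AI library) never sees these register pokes.
        self.device.write(self.REG_INPUT, input_addr)
        self.device.write(self.REG_OUTPUT, output_addr)
        self.device.write(self.REG_START, 1)
        while self.device.read(self.REG_STATUS) != 1:
            pass                  # poll until the accelerator signals completion
        return output_addr

driver = AcceleratorDriver(FakeDevice())
driver.run_workload(input_addr=0x1000, output_addr=0x2000)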

Mike
No. of Recommendations: 1
Is Google's TPU (in its current iteration as deployed in their cloud) capable of running another deep learning framework, like Caffe2 let's say? That would be news to me for sure. My understanding is they only developed it to run TensorFlow. I welcome learning something new.

Technically, I'm sure that it could run Caffe2. But, of course, someone would have to interface the Caffe2 code to the Google driver that runs the TPU. TensorFlow is open source but I doubt that Google has made the TPU device driver open source.

Note. Basically all the TPU does is a large number of 4x4 convolution calculations. This is exactly what every deep neural network needs to operate...plus a few other things.
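
For anyone curious what "the math of convolutions" boils down to, here is a toy sketch of my own (not TPU code): a 2D convolution can be unrolled into one big matrix multiply, which is exactly the kind of dense arithmetic a matrix-multiply ASIC is built to chew through.

import numpy as np

def conv2d_direct(image, kernel):
    # Reference implementation: slide the kernel over the image.
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def conv2d_as_matmul(image, kernel):
    # "im2col": unroll every kernel-sized patch into the row of a big matrix,
    # so the whole convolution becomes a single matrix product.
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    patches = np.array([image[i:i+kh, j:j+kw].ravel()
                        for i in range(oh) for j in range(ow)])
    return (patches @ kernel.ravel()).reshape(oh, ow)

image, kernel = np.random.rand(8, 8), np.random.rand(3, 3)
assert np.allclose(conv2d_direct(image, kernel), conv2d_as_matmul(image, kernel))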

There are numerous tools that take TensorFlow models and convert them to Caffe and vice-versa. There are several (formal) neural net exchange formats being developed and proposed that allow models to be converted from one format to another. This is a bit like the early days of word processing file formats. Converters from one format to another might work well on some documents, but might not have support for every nuance.

Mike