AI (Artificial Intelligence), ML (Machine Learning), and DL (Deep Learning) are terms increasingly used in tech.

These three labels are often used as synonyms, but there are subtle differences among them.

All three rely on chipsets.
This video is a review of the chipsets and their attributes.
https://youtu.be/ckdGWZrGF80

The basic similarity: They all do math really really fast.

CPU - Central Processing Unit - generalized computing. This is the old-school technology upon which the 1990s tech revolution was built. It used single-core, and eventually, in the later 2000s, multicore processors to handle diverse forms of data. Runs the operating system.
Flexible, but slow. Multicore, parallel processing helped with the speed issue.
Cheap, the most common, and used for generalized applications (not for specialized tasks).
Small amount of fast, on-board memory; large amount of slower, external memory.
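
As a rough illustration of why multicore helped, here is a minimal Python sketch (my own toy example, not from the video) that does the same arithmetic on one core, then splits it across several:

```python
# Minimal sketch: serial vs. multicore arithmetic on a CPU.
# Illustrative only -- the 4-worker count is an assumption.
import time
from multiprocessing import Pool

N = 8_000_000

def chunk_sum(bounds):
    # Each worker core handles one slice of the range.
    start, end = bounds
    return sum(i * i for i in range(start, end))

if __name__ == "__main__":
    t0 = time.time()
    serial = chunk_sum((0, N))               # one core does all the math
    t1 = time.time()

    workers = 4                               # assume a 4-core CPU
    step = N // workers
    bounds = [(i * step, (i + 1) * step) for i in range(workers)]
    with Pool(workers) as pool:
        parallel = sum(pool.map(chunk_sum, bounds))  # cores share the math
    t2 = time.time()

    assert serial == parallel
    print(f"serial: {t1 - t0:.2f}s   parallel: {t2 - t1:.2f}s")
```

The math is identical either way; the only change is how many cores chew on it at once. That is the whole multicore story in miniature.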

GPU, TPU, and NPU are ASICs - Application Specific Integrated Circuits. They do NOT run the OS; they are highly specialized, running only a very limited set of calculations, but doing it really really fast!

GPU - Graphics Processing Unit. Gaming chips (NVDA, AMD, etc.). Lots of fast, on-board memory. Built for matrix math, great for rendering 2D displays. Made for games, so not a perfect fit for AI/ML.
Readily available; economies of scale, ecosystems, etc. make them widely used in university settings. NVDA's CUDA platform develops the NVDA ecosystem.
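
The "matrix math" point is easy to see in code. Here is a minimal NumPy sketch (my own example) of the multiply-accumulate work a GPU parallelizes, whether the matrix holds pixels or neural-network weights:

```python
# Minimal sketch of the matrix math a GPU accelerates.
# NumPy runs this on the CPU; GPU libraries expose the same operation.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((1024, 1024))   # e.g., one neural-net layer
inputs = rng.standard_normal((1024, 1024))    # e.g., a batch of activations

# One matrix multiply = ~2 * 1024^3 (about 2 billion) floating point ops.
# A GPU spreads these multiply-adds across thousands of small cores.
outputs = weights @ inputs
print(outputs.shape)  # (1024, 1024)
```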


TPU - Tensor Processing Unit. A tensor is an n-dimensional matrix; tensors are the math behind, for example, Einstein's general relativity.
GOOGL is the only maker/user. Not sold for other people to use - a trade secret. Highly specialized to process tensors for AI/ML.
Does tensor (n-dimensional matrix) calculations really fast; speed is measured in FLOPS - Floating Point Operations Per Second - or TOPS - Tera Operations Per Second.
A TPU is essentially a brand-name version of an NPU.
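
To make the FLOPS/TOPS units concrete, here is a small back-of-the-envelope sketch (the chip rating below is hypothetical, not any real chip's spec):

```python
# Back-of-the-envelope FLOPS math -- the 100 TOPS rating is hypothetical.
n = 4096                        # square matrix dimension
flops_needed = 2 * n ** 3       # a matrix multiply costs ~2*n^3 operations
chip_ops_per_sec = 100 * 1e12   # assume a chip rated at 100 TOPS

seconds = flops_needed / chip_ops_per_sec
print(f"{flops_needed / 1e9:.1f} GFLOPs -> {seconds * 1e3:.2f} milliseconds")
```

About 137 billion operations dispatched in roughly a millisecond and a half - that is the scale these ratings describe.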


NPU - Neural Processing Unit - is a class of chips. Speed is measured in FLOPS or TOPS; lots of fast, on-board memory.

NPUs (including TPUs) are even more highly specialized ASICs, built for AI/ML/DL.
Very expensive, and not widely available. The supply chains and user ecosystems are less developed than for GPUs.
The limitations are software-related as well as hardware-related. The software to run NPUs (TPUs) is highly specific and often must be self-developed.

TSLA uses a custom-built inference chip - an NPU - that runs a very specific set of calculations. These chips allow the car to drive itself autonomously, but they do NOT learn; they are in the car for DRIVING purposes only. Learning is done back at the Tesla supercomputer, currently (Nov 2020) built on NVDA chips.
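
That inference/learning split is the key idea. A minimal sketch (a toy perceptron of my own, nothing Tesla-specific): the car-side chip only runs the fixed forward pass, while training - updating the weights - happens elsewhere:

```python
# Toy perceptron illustrating inference vs. training.
# Nothing here is Tesla's actual code -- just the conceptual split.
import numpy as np

weights = np.array([0.5, -0.2, 0.1])   # shipped to the car, then frozen

def infer(features):
    # The inference chip's job: forward pass only, weights never change.
    return float(weights @ features) > 0.0

def train_step(weights, features, label, lr=0.01):
    # Training's job (done back at the datacenter): adjust the weights.
    error = label - float(weights @ features)
    return weights + lr * error * features

# In the car: decide, never learn.
print(infer(np.array([1.0, 2.0, -0.5])))

# In the datacenter: learn, then ship new weights out as an update.
weights = train_step(weights, np.array([1.0, 2.0, -0.5]), label=1.0)
```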

Tesla is designing its own custom, highly specialized AI/ML/DL chipset for use in DOJO, Tesla's self-designed and self-built supercomputer, which Tesla says will be ahead of all other supercomputers. Memory- and power-intensive.

TSLA promises to allow researchers to use DOJO, implying that TSLA is going to follow NVDA's CUDA lead and create its own TSLA ecosystem.

This makes TSLA independent and in control of its own resources, and enhances its vertical integration. It allows TSLA to advance farther and faster than other companies that have to rely on more generalized chips.

😷
ralph
