This post is cross-posted from GreatCanada's thread on the 电脑手机(IT) board: In specialized deep-learning compute, TPUs crush NVIDIA's GPU cards on both performance and power, so why do GPUs take most of the market?
GPUs are certainly more broadly useful than TPUs for general-purpose work such as numerical simulation and modeling, but the main semiconductor market today is AI, and in AI the TPU's performance crushes the GPU's, so why is NVIDIA's market cap still so high?
TPUs such as Google’s TPU v4 can achieve up to 275 TFLOPS (peak BF16).
NVIDIA’s A100 GPU delivers up to 156 TFLOPS (TF32 Tensor Core; 312 TFLOPS at BF16).
(Repost) In specialized deep-learning compute, TPUs crush NVIDIA's GPU cards on both performance and power, so why do GPUs take most of the market?
Moderators: verdelite, TheMatrix
Caravel
#2 Re: (Repost) In specialized deep-learning compute, TPUs crush NVIDIA's GPU cards on both performance and power, so why do GPUs take most of the market?
GreatCanada wrote (2025-07-23 15:39): [quoted the original post above: TPU v4 up to 275 TFLOPS vs. A100 up to 156 TFLOPS]
Those figures are roughly comparable; where is the "crushing"?
If the power efficiency were really 10× better, Google could rent out TPUs at a tenth of the price,
and people would certainly buy them.
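Caravel's point can be made concrete with a quick performance-per-watt estimate. This is a minimal Python sketch: the peak-TFLOPS numbers are the ones quoted in the thread, while the chip-power figures (roughly 200 W for a TPU v4 chip, 400 W for an A100 SXM) are rough placeholder assumptions, not measurements from this thread.

```python
# Back-of-the-envelope performance-per-watt comparison.
# Peak TFLOPS are the figures quoted in the thread; the watt figures are
# rough placeholder assumptions (adjust them to whatever specs you trust).

def tflops_per_watt(peak_tflops: float, watts: float) -> float:
    """Peak throughput divided by chip power."""
    return peak_tflops / watts

chips = {
    "TPU v4 (BF16)": {"tflops": 275.0, "watts": 200.0},  # assumed ~200 W per chip
    "A100 (TF32)":   {"tflops": 156.0, "watts": 400.0},  # assumed 400 W SXM TDP
    "A100 (BF16)":   {"tflops": 312.0, "watts": 400.0},
}

for name, spec in chips.items():
    eff = tflops_per_watt(spec["tflops"], spec["watts"])
    print(f"{name:14} {spec['tflops']:6.0f} TFLOPS / {spec['watts']:3.0f} W = {eff:.2f} TFLOPS/W")
```

On these placeholder figures the efficiency gap comes out as a small single-digit multiple, well short of the 10× that would let Google rent TPUs at a tenth of the price.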
#3 Re: (Repost) In specialized deep-learning compute, TPUs crush NVIDIA's GPU cards on both performance and power, so why do GPUs take most of the market?
Because the TPU only has an 8-bit word length, and programming high-precision floating-point arithmetic on that is a nightmare. If you don't believe it, find a book on Z80 assembly programming and see how multi-word multiplication is implemented.
GPUs, by contrast, are generally 32-bit, so they can be deployed easily when you are not chasing high precision.
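The Z80 reference is about multi-precision arithmetic: on an 8-bit machine, even a 16-bit multiply has to be assembled from 8-bit partial products, and floating point adds exponent alignment, normalization, and rounding on top of that. Below is a minimal, purely illustrative Python sketch of the limb decomposition (not actual Z80 code, and not a claim about TPU internals):

```python
# Illustrative: compose a 16-bit x 16-bit multiply from 8-bit "limbs",
# the way an 8-bit ALU such as the Z80's has to do it.
def mul16_from_8bit(a: int, b: int) -> int:
    a_lo, a_hi = a & 0xFF, (a >> 8) & 0xFF   # split operands into 8-bit limbs
    b_lo, b_hi = b & 0xFF, (b >> 8) & 0xFF

    # Four 8x8 -> 16-bit partial products.
    ll = a_lo * b_lo
    lh = a_lo * b_hi
    hl = a_hi * b_lo
    hh = a_hi * b_hi

    # Shift each partial product to its place value and sum them up.
    return ll + ((lh + hl) << 8) + (hh << 16)

assert mul16_from_8bit(0xBEEF, 0xCAFE) == 0xBEEF * 0xCAFE
```

For what it's worth, the "8-bit only" premise is questioned in the next reply; recent TPU generations also natively support 16-bit floating-point formats such as bfloat16.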
#4 Re: (Repost) In specialized deep-learning compute, TPUs crush NVIDIA's GPU cards on both performance and power, so why do GPUs take most of the market?
That doesn't sound right to me:
Tensor Processing Unit (TPU) is a processing unit specifically designed for machine learning applications. Developed by Google, these units are optimized to accelerate computations used in machine learning applications, differing from traditional CPUs and GPUs in this regard [4].
Speaking of tensors, I would like to briefly explain this concept. In mathematics, a tensor is a multi-dimensional numerical object. You can think of vectors (directed lines) as one-dimensional tensors, and a tensor extends this concept to two or more dimensions. Just as a scalar represents a point in space, a vector represents a direction. A tensor, on the other hand, can represent multiple directions or relationships. Tensors have various applications in fields such as physics, engineering, computer science, and many others.
TPUs have led to revolutionary advancements in the field of artificial intelligence and machine learning, significantly increasing the speed of processing large datasets and training complex models, thus enabling broader use of AI applications.
TPUs have the potential to be a powerful choice because their high processing power allows for easier and faster processing of large datasets and training of complex machine learning models compared to CPUs and GPUs. Their optimization for specific types of computations used in AI applications makes them much more efficient for these tasks. However, TPUs are not typically preferred for general-purpose tasks like web browsing or word processing, as they are specifically designed for AI and machine learning tasks. Due to potentially higher costs compared to some GPUs, they are often accessed and used through cloud computing services.
The use of TPUs can be beneficial in the following scenarios and similar situations:
· Scenarios involving AI applications working with large datasets (e.g., image recognition, natural language processing, or text translation applications).
· Time savings in developing complex machine learning models (TPUs can significantly reduce the time required to train large and complex models).
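The rank progression described in the quoted explanation (scalar → vector → higher-dimensional tensor) can be shown in a few lines. A minimal NumPy sketch, purely for illustration:

```python
import numpy as np

# Rank (number of axes) grows from scalar to higher-order tensor.
scalar = np.array(3.0)             # rank 0: a single number
vector = np.array([1.0, 2.0, 3.0]) # rank 1: a 1-D array (a direction / list of values)
matrix = np.ones((2, 3))           # rank 2: rows x columns
tensor3 = np.zeros((4, 2, 3))      # rank 3: e.g. a batch of matrices

for name, t in [("scalar", scalar), ("vector", vector),
                ("matrix", matrix), ("rank-3 tensor", tensor3)]:
    print(f"{name:14} ndim={t.ndim} shape={t.shape}")
```

TPUs (and GPU tensor cores) are specialized for dense operations over exactly such multi-dimensional arrays, above all the matrix multiplications that dominate neural-network training and inference.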
弃婴千枝 wrote (2025-07-24 19:16): [quoted post #3 above: the TPU only has an 8-bit word length, so high-precision floating-point programming is a nightmare, unlike the generally 32-bit GPUs]