At this year's Google I/O, Google announced TPU v4, doubling the computing power of TPU v3, and my immediate reaction was "who is going to need this?" Google has been churning out a string of papers over the past few years with ever-increasing parameter counts in models that are already in the millions. MIT has started a meta-study extrapolating how much further improvements in ML will cost (link below), and it predicts that we are entering a regime of diminishing returns: the amount of computing power required to gain a few more percentage points on tasks like object recognition will soon become prohibitive.
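To put the "diminishing returns" point in rough numbers: extrapolation studies of this kind typically fit a power law between test error and training compute. If that holds, every further halving of error multiplies the compute bill by a large constant. Here is a minimal sketch; the exponent is a made-up placeholder for illustration, not a value from the study:

```python
# Illustrative only: assume error ~ compute^(-alpha), a power-law fit
# of the sort used in compute-vs-accuracy extrapolations. The alpha
# below is a hypothetical placeholder, not a measured value.

def compute_multiplier(error_reduction: float, alpha: float) -> float:
    """Factor by which compute must grow to cut error by `error_reduction`x."""
    return error_reduction ** (1.0 / alpha)

# Halving the error under a (hypothetical) exponent of 0.1:
print(f"{compute_multiplier(2.0, 0.1):,.0f}x more compute")  # 1,024x
```

With a steep exponent the numbers stay manageable; with a shallow one, each extra percentage point of accuracy quickly becomes a thousandfold compute increase, which is the regime the meta-study warns about.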