
Transformers in Auto: Who Does It, Who Needs It?

Gartner recently declared that generative AI has reached a “Peak of Inflated Expectations.” Let’s explore the substance behind the hype.
When Vision Transformers Outperform ResNets
Raw images (Left) and attention maps of ViT-S/16 with (Right) and without (Middle) sharpness-aware optimization. (Source: 'When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations' )


By Junko Yoshida

What’s at stake:
Tech companies are making myriad claims that they can process transformer algorithms better than their rivals. Benchmarks for transformer engines, however, are not yet available to test those claims.

The burgeoning trend toward generative AI has flipped the whole AI world on its head, or so it seems.

Large Language Models (LLMs), as seen in ChatGPT, are mostly limited to language modeling and text generation. But transformers – the overarching deep-learning architecture that underpins LLMs and other generative AI applications – offer a model useful for data streams ranging from text, speech and images to 3D and video, or any sensory data.
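That modality-agnostic quality comes from self-attention, the core operation of a transformer, which works on any sequence of vectors regardless of what those vectors encode. A minimal NumPy sketch of scaled dot-product self-attention (illustrative only; a real model adds learned projections, multiple heads and stacked layers, and this is not any particular vendor's implementation):

```python
import numpy as np

def self_attention(x):
    """x: (seq_len, d) array of token embeddings. The tokens could come
    from text, audio frames, image patches, or any other sensory stream;
    the math is identical in every case."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)  # pairwise similarity between tokens
    # Row-wise softmax turns scores into attention weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x             # each output token mixes all inputs

# Eight "tokens" of dimension 16 -- whether they represent words,
# image patches, or lidar points makes no difference to the operation.
tokens = np.random.default_rng(0).normal(size=(8, 16))
out = self_attention(tokens)
print(out.shape)  # (8, 16)
```

The output has the same shape as the input, which is why attention blocks can be stacked and reused across modalities.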

This is great stuff. Let’s get started.
