What’s at stake:
The new frontier in AI is multimodal models such as CLIP and Stable Diffusion. Because humans interact with the world through both vision and language, AI researchers believe machines, too, need multimodal channels. Can something like LLaVA foster a “general-purpose assistant” that effectively follows multimodal vision-and-language instructions aligned with human intent? If so, at what cost?
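For readers who want the question made concrete, here is a minimal sketch of what “following a multimodal instruction” looks like in practice. It assumes the open-source LLaVA checkpoint published on Hugging Face (llava-hf/llava-1.5-7b-hf) and the transformers library; the image URL is a placeholder, and the details are illustrative rather than any vendor’s production setup:

```python
# Minimal sketch: a vision-and-language instruction answered by LLaVA.
# Assumes the public llava-hf/llava-1.5-7b-hf checkpoint; image URL is a placeholder.
import requests
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"  # assumed checkpoint; other LLaVA variants work similarly
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(model_id)

# The instruction pairs an image with natural-language text -- the core idea behind LLaVA.
image = Image.open(requests.get("https://example.com/chart.png", stream=True).raw)  # placeholder
prompt = "USER: <image>\nSummarize what this chart shows for a business audience. ASSISTANT:"

inputs = processor(images=image, text=prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=200)
print(processor.decode(output[0], skip_special_tokens=True))
```

The point is not the code itself but the interface: a single model accepts both an image and a plain-English instruction and returns an answer, which is what “general-purpose assistant” means in this context.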
We live in a tumultuous time.
Business reporters are often pressed to investigate commercial and technological advancements almost immediately after the scientific community breaks new ground. This is particularly true in the field of artificial intelligence.
With little time for either reflection or examination, reporters take scientists’ word on the Next Big Thing, reducing AI journalism to little better than stenography. Remember when promoters kept insisting the autonomous vehicle was “just around the corner”?
Corporations, obliged to become AI prospectors, now face a similar dilemma. Like the 49ers of the Gold Rush, a company hoping to strike it rich must first stake a claim, worrying only afterward whether its particular AI investment is the mother lode or a dry hole.
Clouding our judgment further is Nvidia’s exponential growth in revenue and market value. Money talks.