An ASIC is basically a purpose-built chip. Nvidia powered the early age of AI because there were no chips specialized for it, while Nvidia's general-purpose GPUs could handle the workloads (albeit less efficiently than dedicated hardware). They're also easy to code for thanks to CUDA, an ecosystem people already know how to work with and build their own solutions on. A quick illustration is below.
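As a rough sketch of why the CUDA ecosystem is so familiar (not taken from any particular codebase, and the kernel/launch parameters here are purely illustrative), this is roughly what a minimal CUDA program looks like:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Minimal element-wise add kernel: each GPU thread handles one index.
__global__ void vec_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Managed (unified) memory keeps the example short; explicit host/device copies are more common.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vec_add<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);  // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Two decades of tutorials, libraries, and tooling built around exactly this programming model are a big part of Nvidia's moat.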
It's the same story as crypto mining. Initially, Nvidia GPUs were used for all of it because they were the only hardware that could do it semi-efficiently. Once dedicated ASIC miners hit the market, their price and efficiency made GPUs irrelevant for that purpose, and now practically nobody mines with GPUs.
I think smaller companies that aren't building their own silicon may, to some extent, still buy Nvidia because it's general purpose and anyone can easily use and code for it. But Nvidia's role will definitely see a notable reduction over time, and it will have to go back to relying on selling chips to gamers and creators, plus much smaller volumes to data centers.
Every big player, Google, OpenAI, Amazon, Microsoft, X/Grok, is developing its own more efficient and much cheaper chips for AI workloads. Google had a head start, so it's already running on its own chips (the "TPU" family) and replacing Nvidia. The others are still designing or manufacturing their chips and have yet to switch to them, but will inevitably do so within the next two to three years, for training and inference alike, i.e. all AI processing. Nvidia will likely remain present, but with a far smaller and steadily declining share of the AI compute market.