I've been around computers since around 1970. It was always said that AI was "just 10 years away" - but, of course, the 10-year milestone kept moving out.

I never believed in AI. My view was that if I could figure out, more or less, how they were doing it, it wasn't AI. For example, facial recognition is not hard to figure out. And "Pick up the blue ball" may take some clever programming, but there's nothing very intelligent about it.

Until ChatGPT. That changed my view, dramatically. And yes, I do think it is AI. Note, for example, that not even those who are developing it really understand how it is being done.
 
I'm late to this discussion, it appears, but this all boils down to how we define AI. Is an LLM a kind of Artificial General Intelligence (AGI)? Absolutely not. But is it rooted in machine learning such that it can mimic human responses? Most definitely! There is a capability here whose full potential we still don't understand, but we are learning at an amazing clip.
 
Jin Chong
Never late, as we appreciate views from the audience. Thank you for sharing your points.
They're always improving. :) Google Gemini just rolled out updates that lower the cost of model usage and improve performance. Feels like these companies are already on their way in the race to $0. My interest actually lies more in the value of Small Language Models (SLMs), which don't use as much energy and can be grouped together to outperform their large-model brethren at lower cost.
 
Any good SLMs around for me to try?