I have been, and continue to be, highly skeptical of LLMs.  I am a bit late to the party, having only just begun to experiment with ChatGPT in Oct 2023.  I pretty well knew what to expect from it, due to prior work in AI (mostly with genetic algorithms) and from having read quite a bit about artificial neural nets.  My experiences with ChatGPT have been simultaneously unsurprising and, at times, astonishing.  I have had much to say about generative AI to my friends and family, but I feel the following succinctly captures my view.

"An LLM is just trying to say things that sound good.  That's it, full stop.  The fact that it so often, unexpectedly and surprisingly, says things that are factually correct is an interesting phenomenon that needs to be studied, but it's still just a "reasonable continuation" calculator.  It was designed to be a calculator, and we've discovered that it somehow (in ways that we really don't understand) makes great hollandaise sauce.  That's interesting, and we should study how it can be that a calculator can make sauce, but in the meantime, the people in charge want to say 'We should use this sauce-making calculator to administer all food on the planet, and our health care system, and...'"
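The "reasonable continuation" idea can be sketched with a toy bigram model.  This is a drastic simplification (real LLMs are neural networks trained on vast corpora, not word-pair counts, and the corpus and function names here are invented for illustration), but the principle of "pick whatever word plausibly comes next" is the same:

```python
from collections import defaultdict, Counter

# Toy "training data" (hypothetical corpus for illustration only).
corpus = (
    "the cat sat on the mat . "
    "the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# Count which word follows which: a crude "reasonable continuation" table.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def continue_text(prompt, n_words=4):
    """Greedily append the most frequent continuation, n_words times."""
    words = prompt.split()
    for _ in range(n_words):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(continue_text("the dog"))  # fluent-sounding, but never fact-checked
```

Given the prompt "the dog", this produces "the dog sat on the cat", a sentence that sounds perfectly plausible yet appears nowhere in the corpus and was never checked against reality.  That is the essay's point in miniature: the machinery optimizes for "sounds like a continuation", not for truth.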

People who talk of its hallucinations are under the misapprehension that "providing answers" is somehow part of its brief as a "reasonable continuation" calculator.  I would argue that ChatGPT is almost never wrong.  It was designed to produce statements that sound like something a human would say, and nothing more.  (Not to minimize the achievement; that's not a trivial accomplishment.)  I am just as impressed as everyone else by its uncanny productions, but when it produces statements that are believable yet not reflective of reality, it is still operating 100% as designed.  NFE...PEBKAC

That said, I have found ChatGPT to be a useful tool in a number of scenarios.  It is frankly astonishing how well it is able to deal with abstract concepts and metaphor in conversation.  I have used it to synthesize new ideas out of existing, but incompatible, ones.  I have had several very educational conversations, and I especially like using ChatGPT to overcome the "blank page" problem and to create JSDocs and test suites.  I always need to go back and clean up after the fact, sometimes a lot, but having it generate the starting point is pretty handy.

As a sidenote: much has been written about its weakness in math, but I think what the authors of those pieces were actually describing was "calculation", not "math".  ChatGPT very quickly and regularly falls flat when asked to calculate something, but mathematics is not about calculation.  Mathematics is about ideas, and ideas are constructed with words, which are ChatGPT's bread and butter.  One of my academic "White Whales" has been to have an intuitive grasp of Fourier transforms.  I was able to make very concrete progress as a result of a conversation with ChatGPT.

But as useful a tool as ChatGPT can be at times, I find that it is too unreliable and unpredictable for me to consider it a first-class citizen of my toolchest.  For every conversation that astonishes or illuminates or assists, there are two others, dealing with seemingly less difficult problems, in which ChatGPT has nothing helpful to offer.  It is not clear what separates an astonishingly insightful conversation from one of Eliza-esque nonsense.  Until that difference is made clear, I do not intend to rely on any LLM to any degree.

I remain highly skeptical of LLMs, but my skepticism is and always has been primarily focused on humans and how this technology will be applied.  I believe the most suitable role for LLMs is as an interface to actual computation engines, ones engineered by humans that function in understandable and knowable ways.  I believe that LLMs, especially in the current state of the art, have no business being incorporated into decision systems, on any level, in any capacity.