I know I'm in the minority here, but I've been finding AI to be increasingly useless.
I'd already abandoned it for generating code, for all the reasons everyone knows, that don't need to be rehashed.
I was still in the camp of "It's a better google" and can save me time with research.
The issue is, at this point in my career (30+ years), the questions I have are a bit more nuanced and complex. They aren't things like "how do I make a form with React".
I'm working on a very high-performance peer server that will need to scale to hundreds of thousands, up to a million, concurrent WebSocket connections to work as a signaling server for WebRTC connection negotiation.
I wanted to start as simple as possible, so peerjs is attractive. I asked the AI if peerjs's peer-server would work with Node.js's cluster module. It enthusiastically told me it would work just fine and was, in fact, designed for that.
I took a look at the source code, and it looked to me like that was dead wrong. The AI kept arguing with me before finally admitting it was completely wrong. A total waste of time.
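For anyone curious why this fails: a minimal sketch of the problem, assuming (as the peer-server source suggests) that connected peers are tracked in an in-memory, per-process registry. The class and names below are illustrative stand-ins, not peerjs internals. Node's cluster module forks separate processes, so each worker holds its own registry, and a signaling message from a peer on one worker can't reach a peer registered on another:

```javascript
// Illustrative sketch: a per-process, in-memory peer registry of the kind
// a signaling server might keep. Under Node's cluster module, each worker
// is a separate process with its own copy of this state.
class SignalingWorker {
  constructor() {
    this.peers = new Map(); // peerId -> socket-like object (per-process state)
  }
  connect(peerId, socket) {
    this.peers.set(peerId, socket);
  }
  // Relay a signaling message (e.g. an SDP offer) to another peer.
  relay(toPeerId, msg) {
    const peer = this.peers.get(toPeerId);
    if (!peer) return false; // peer is unknown to THIS process
    peer.messages.push(msg);
    return true;
  }
}

// Simulate the OS load-balancing two peers onto different cluster workers.
const workerA = new SignalingWorker();
const workerB = new SignalingWorker();
workerA.connect('alice', { messages: [] });
workerB.connect('bob', { messages: [] });

// Alice (on worker A) tries to send Bob an offer; Bob lives in worker B.
const delivered = workerA.relay('bob', { type: 'offer' });
console.log(delivered); // false — the registries don't see each other
```

Making this work across workers would require shared state or cross-process routing (sticky sessions, a Redis pub/sub layer, etc.), which is exactly the kind of design the code didn't appear to have.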
Same results asking it how to remove Sophos from a Mac.
Same with legal questions about HOA laws; it just totally hallucinates things that don't exist.
My wife and I used to use it to try to settle disagreements (i.e. a better google), but amusingly we've both reached a place where we distrust anything it says so much, we're back to sending each other web articles :-)
I'm still pretty excited about the potential use of AI in elementary education, maybe through high school in some cases, but for my personal use, I've been reaching for it less and less.
I can relate as far as asking AI for advice on complex design tasks. The fundamental problem is that it is still basically a pattern-matching technology that "speaks before thinking". For shallow problems this is fine, but it fails where a useful response would require it to have analyzed the consequences of what it is suggesting, although (not that it helps) many people might respond the same way: with whatever "comes to mind".
I used to joke that programming is not a career but a disease, since practiced long enough it fundamentally changes the way you think and talk: always thinking multiple steps ahead and about the implications of what you, or anyone else, is saying. Ask another seasoned developer for advice and you'll get advice that has also been "pre-analyzed", but you won't from an LLM.