amphibian, on 26 June 2025 - 07:19 PM, said:
Cause, on 26 June 2025 - 03:03 PM, said:
The paper is basically warning against the first case: that ChatGPT can lead people, either willfully or accidentally, to mislead themselves into thinking they are learning when they are not.
I disagree with this in a major way, because using AI to do any of this kind of work - whether for a neophyte or an expert - is functionally saying "I don't want to do this work, and I may or may not check it for validity or veracity".
Anything I have to certify, or deliver with my career and reputation standing behind it, I will not use AI for. It's too risky, and the lawyers etc. who are doing this are dumb as heck.
I do recognize the corner case where using AI can improve some things for an expert, but that takes the form of niche, bespoke AI usage with a very vigilant person right there who knows how to use it and has the considerable capacity to check what is happening. That's a rarer situation than most people will find themselves in.
The article below mostly encapsulates my feelings about usage right now. I will add that my experience with people using AI in ways that cross over into my work often features shitty copy-edited paragraphs that eliminate First Nations history, strip meaningful context about diversity, and flatten longer sections into something like pablum. This happens because the AI is putting together a statistically likely thing, not an actual and valid thing. First Nations people aren't statistically prevalent, so their names and history get chopped right out as "too hard to understand".
https://www.nytimes....innovation.html
In many cases it takes much less time to rigorously check something for accuracy and make the occasional necessary revision than it does to generate it all yourself. And, as Cause pointed out, you can have AI check your work as well.
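To make that concrete, the "have AI check your work" step can be as small as one API call with a checking prompt. Here is a minimal sketch, assuming the OpenAI Python client, an illustrative model name, and a made-up draft.txt file (none of those specifics come from this thread):

```python
# Minimal sketch: ask a model to check a human-written draft for errors.
# The client library, model name, file name, and prompt are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

draft = open("draft.txt").read()  # the work you generated yourself

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{
        "role": "user",
        "content": "Check this draft for factual errors, unsupported claims, "
                   "and dropped context. List each problem with the sentence "
                   f"it appears in:\n\n{draft}",
    }],
)
print(resp.choices[0].message.content)
```

You still have to read and judge what comes back, but the checking step itself is trivial to set up.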
DeepSeek and other leading LLMs incorporate reasoning models:
https://www.ibm.com/.../deepseek-r1-ai
And AI agents are more useful than LLMs alone. You can have AI agents understand your commands (most of the time) and run ordinary programs for you. Then you can have that agent, or other AI agents, check its work a zillion times.
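As a rough sketch of that loop (an illustration only, not any particular agent framework): one model call interprets a plain-English request as a shell command, an ordinary program runs it, and a second model call checks the result a few times. The model name, prompts, and the choice of three review passes are all assumptions.

```python
# Run-and-verify agent sketch: a model interprets a request as a shell command,
# an ordinary program executes it, and further model calls check the result.
# Model name, prompts, and the three review passes are illustrative assumptions.
import subprocess
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o-mini"  # assumed model name

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content.strip()

request = "Count how many lines are in /etc/hosts"

# "Understand your commands": translate the request into a single shell command.
command = ask(f"Reply with one shell command, nothing else, that does: {request}")

# "Run ordinary programs for you": execute it (in real use, sandbox or review
# a model-generated command before running it).
result = subprocess.run(command, shell=True, capture_output=True, text=True)

# "Check its work a zillion times": here, three independent review passes.
for i in range(3):
    verdict = ask(
        f"Request: {request}\nCommand run: {command}\nOutput:\n{result.stdout}\n"
        "Did the command and its output actually satisfy the request? Answer briefly."
    )
    print(f"Check {i + 1}: {verdict}")
```

Whether three passes (or a zillion) actually catches every error is a separate question, but the plumbing is straightforward.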
AI (not necessarily involving LLMs) is also great for recognizing patterns of potential interest, discovering and testing new solutions, running simulations, etc.
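On the non-LLM side, "recognizing patterns of potential interest" is often plain classical machine learning. A minimal sketch using scikit-learn's IsolationForest to flag unusual points in synthetic data (the data and the contamination rate are made up for illustration):

```python
# Sketch of non-LLM pattern spotting: flag unusual points in a dataset.
# The synthetic data and contamination rate are made-up illustration values.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))  # ordinary observations
odd = rng.normal(loc=6.0, scale=0.5, size=(10, 2))      # a handful of outliers
data = np.vstack([normal, odd])

model = IsolationForest(contamination=0.02, random_state=0)
labels = model.fit_predict(data)  # -1 marks points the model considers anomalous

print("Flagged as potentially interesting:", int(np.count_nonzero(labels == -1)))
```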