Azath Vitr (D'ivers), on 14 August 2025 - 02:56 PM, said:
HoosierDaddy, on 14 August 2025 - 02:40 PM, said:
I get what you are saying... but I think most people have no CLUE what tools their doctors are using or how they work. Nor do they care. They care about what the doctor tells them.
In the United States and other countries where healthcare is for-profit, some doctors are almost certainly going to be vocal about not using AI in a bid to attract more patients:
Quote
researchers [...] found that U.S. adults were significantly less likely to trust, feel empathy toward or seek care from a physician who advertised using AI — whether for diagnostic, therapeutic or even administrative tasks. [...] in every case, mentioning AI use reduced scores for perceived competence, trustworthiness and empathy. Patients were also less likely to say they would book an appointment.
In an ethical and rational society, it would be illegal for doctors to not use AI assistance for applications where it improves medical outcomes.
Lesson - Don't advertise its use. There's no reason to.
People who trust doctors, trust doctors. Again, AI is a TOOL, and a tool is only as good as the material used to build it and the person whose hands are utilizing it. It isn't doing the diagnosing and I don't want it doing the diagnosing either. I will trust the doctor's discretion far more than any AI.
But this is beside the point. If they want to use AI to assist, great. But overreliance on any tool is a bad thing, in medicine and otherwise.
Trouble arrives when the opponents to such a system institute its extreme opposite, where individualism becomes godlike and sacrosanct, and no greater service to any other ideal (including community) is possible. In such a system rapacious greed thrives behind the guise of freedom, and the worst aspects of human nature come to the fore....
Mark Lawrence conducted an online "AI vs authors" flash fiction blind test again:
Quote
The contributing authors have sold around 15 million books between them. And they are...
Robin Hobb
Janny Wurts
Christian Cameron / Miles Cameron
& me!
[...] We had 964 votes on the issue of whether story 1 was by a human or AI. This fell fairly smoothly to 474 votes on the rating of story 8. [...] on average the public [guessed] no more effective[ly] than a coin toss!
[...] A sizeable majority of people thought my story was human authored [...]
[... But] the AI scored better than us. Not only was the highest rated story an AI one, but they scored higher on average too.
One obvious possible explanation would be that more people guessed "human" for Mark Lawrence because they were familiar with his writing. Another would be that his story was better than those of the other human authors (he decided not to share how the human-written stories were rated relative to each other, so we can't tell). But one of the AI stories scored strictly higher than any of the human stories...
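To put some numbers on what "no more effective than a coin toss" means with this many votes, here's a rough, self-contained sketch (my own illustration; the vote splits below are invented, not Mark Lawrence's actual tallies) of an exact binomial check against chance:

```python
# Rough sketch: how far from 50/50 would ~964 guesses have to be before the
# public looked "better than a coin toss"? Standard library only; the counts
# of correct guesses below are invented for illustration.
from math import comb

def binomial_two_sided_p(successes: int, trials: int, p: float = 0.5) -> float:
    """Exact two-sided binomial test against a chance rate p."""
    observed = comb(trials, successes) * p**successes * (1 - p)**(trials - successes)
    total = 0.0
    for k in range(trials + 1):
        prob_k = comb(trials, k) * p**k * (1 - p)**(trials - k)
        # Count every outcome at least as unlikely as the one observed.
        if prob_k <= observed * (1 + 1e-7):
            total += prob_k
    return min(total, 1.0)

votes = 964
for correct in (482, 500, 520, 550):  # 50.0%, 51.9%, 53.9%, 57.1% correct
    print(f"{correct}/{votes} correct ({correct / votes:.1%}): "
          f"p = {binomial_two_sided_p(correct, votes):.3f}")
```

With roughly a thousand votes, a hit rate of about 54% or more would already stand out from chance, so the coin-toss result isn't just a small-sample artifact.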
Posted 18 August 2025 - 04:39 AM
I think that the 'flash fiction' format played to the AI's strengths... study a bunch of highly rated short content and reproduce the common points in a slightly different variation.
Had the challenge been 500-page doorstoppers - which is obviously unworkable for many reasons - I suspect the outcome would have been different.
I think that the 'flash fiction' format played to the AI's strengths... study a bunch of highly rated short content and reproduce the common points in a slightly different variation.
Had the challenge been 500-page doorstoppers - which is obviously unworkable for many reasons - I suspect the outcome would have been different.
ML (Mark Lawrence, that is, not Machine Learning) does say that LLMs aren't as good at long-form narrative. I think a more likely explanation of the difference is that current LLMs have a limited context window.
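As a rough illustration of the constraint (a back-of-the-envelope sketch of my own; the word counts and window sizes are round-number assumptions, not the specs of any particular model), flash fiction fits into a typical context window many times over, while a 500-page doorstopper only fits into the very largest ones:

```python
# Back-of-the-envelope sketch: does a piece of fiction fit in a model's
# context window at all? The words-per-page figure, tokens-per-word ratio,
# and window sizes are rough assumptions for illustration only.
TOKENS_PER_WORD = 1.3      # common rule of thumb for English prose
WORDS_PER_PAGE = 300       # typical trade-paperback page

works = {
    "flash fiction (1,000 words)": 1_000,
    "novella (40,000 words)": 40_000,
    "500-page doorstopper": 500 * WORDS_PER_PAGE,
}
context_windows = {        # hypothetical round-number window sizes
    "8k tokens": 8_000,
    "128k tokens": 128_000,
    "1M tokens": 1_000_000,
}

for title, words in works.items():
    tokens = int(words * TOKENS_PER_WORD)
    fits_in = [name for name, size in context_windows.items() if tokens <= size]
    print(f"{title}: ~{tokens:,} tokens; fits in: {', '.join(fits_in) or 'none'}")
```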
ChatGPT allegedly fuelled former exec’s ‘delusions’ before murder-suicide
Officials say ChatGPT fed a former executive’s ‘delusions’ in the weeks leading up to a horrifying murder-suicide
"Fortune favors the bold, though statistics favor the cautious." - Indomitable Courteous (Icy) Fist, The Palace Job - Patrick Weekes
"Well well well ... if it ain't The Invisible C**t." - Billy Butcher, The Boys
"I have strong views about not tempting providence and, as a wise man once said, the difference between luck and a wheelbarrow is, luck doesn’t work if you push it." - Colonel Orhan, Sixteen Ways to Defend a Walled City - KJ Parker
More than half of the 258 published novelists in Britain recently questioned for a Cambridge University study, along with 74 industry figures, believe the technology is likely to entirely replace their work. Romance, thriller and crime writers feel they are the most threatened.
[...] Generative AI is a useful research tool for proper, human writers – a third of authors in the Cambridge study admit to using it to speed up non-creative research tasks [...]
The unfortunate fact is that people absolutely love undemanding sludge, and that an awful lot of those threatened authors, as well as film and TV screenwriters, blog writers and so on already produce it for a living.
[...] Mediocrity is hugely prized across the board. A hugely successful travel blogger, who I follow for the perverse amusement of it, writes so boringly about his adventures that he's saved me a fortune by making everywhere sound uninteresting.
Unfortunately the author mistakenly assumes that AI---even future AI---is necessarily incapable of being creative.
Quote
I lead product strategy at a creative agency. We're using AI 'hallucinations' to come up with ideas for brands.
RYA is our creative AI tool. We position it as: radical ideas that are acceptable to your audience, because they're all grounded in data.
We put together a weekly survey that asks Americans: If you had extra time or money in your day, how would you spend it? We serve up 180 genres and 20 different actions, things like "I want to go on a trip" or "I want to go out to eat." And then the genres add a little bit of nuance: "You said you like to go out to eat. What kind of food do you like? You'd like to go on a trip. What kind of traveler are you?" That's where we can really pinpoint passion points. With that dataset, we then figured out how to train large language models to be creative and come up with ideas just as if our teams were coming up with ideas on their own — but at a rapid pace. We found that Anthropic's Claude is the best at generating creative ideas
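For what it's worth, the workflow described there is easy to picture in code. Here's a hypothetical sketch of the shape of it (the survey fields, brand, and prompt wording are all invented; RYA's actual pipeline isn't public): aggregated "passion points" get folded into an ideation prompt that would then be sent to an LLM.

```python
# Hypothetical sketch of the kind of pipeline the interview describes:
# aggregated survey answers ("passion points") become an ideation prompt.
# All fields, numbers, and wording below are invented for illustration;
# the actual RYA implementation is not public.
survey_responses = [
    {"action": "go out to eat", "genre": "street food", "respondents": 412},
    {"action": "go on a trip", "genre": "slow travel by train", "respondents": 388},
    {"action": "learn something new", "genre": "home fermentation", "respondents": 297},
]

def build_ideation_prompt(brand: str, responses: list[dict]) -> str:
    """Turn aggregated survey 'passion points' into a single ideation prompt."""
    points = "\n".join(
        f"- {r['respondents']} respondents want to {r['action']} ({r['genre']})"
        for r in responses
    )
    return (
        f"You are a creative strategist for {brand}.\n"
        f"This week's survey passion points:\n{points}\n"
        "Propose three campaign ideas grounded in this data, two sentences each."
    )

prompt = build_ideation_prompt("an example beverage brand", survey_responses)
print(prompt)
# In practice the prompt would be sent to a model via an LLM API and the
# returned ideas reviewed, filtered, and reworked by the human team.
```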
What about anti-AI bias among audiences? Some interesting news on that front:
Quote
An “AI” label fails to trigger negative bias in new pop music study
A study in Singapore found that, contrary to expectations, study participants rated pop songs labelled as AI-generated more highly in positive emotions compared to pop songs (which were also AI-generated) that were labelled as human-composed. The positive emotions included happiness, interest, awe, and energy. Consequently, this study found no evidence of negative bias towards AI-generated music. The paper was published in Computers in Human Behavior: Artificial Humans.
I gotta know whose payroll you're on man. Like the vehemence with which you fight the notion that LLMs basically scrape existing works and styles for the lesser content they spew out is...kinda next level.
I even have a good example. Imogen Heap, a singer and musician who I have followed for decades now. I have seen her more than a few times in concert. She uses these cool little finger gloves to make the sounds as she sings. She's always been tech-y.
She made a whole album recently by associating with an AI/LLM company: she fed the system her discography and some ideas about song narratives...and it spat out the album....and this is something she was up front about and clear that she was willingly doing (I still don't think that's a good idea, but I'm willing to let it slide since she was involved)...
The album is ATROCIOUS. Heap has made some middling music in the last few decades, but nothing that made me actively want to turn it off. This album is so DEEPLY inferior to the stuff she made herself as to be insulting. It's like someone told a computer what her music sounds like, from another room, 3rd hand by someone who had never heard her music before, and after they smacked it with a shovel.
And I don't need you to believe me, you can hear it yourself.
Here's 2009's BAD BODY DOUBLE, one of my faves by her, made with as many tech accoutrements as she could manage.
And here is 2025's AFTERCARE, the lead single from the AI album (called Imogen Heap VS AI Imogen Heap).
As you can hear, it's a mess and feels like the vague shadow of the strong musical vibe from her older album...and it's not just that song, the whole ass album is like that...the second song is like intense buzzing for a long time...like I'm no stranger to weird, Bjork's more recent albums play with noise and incongruent shit...but she's making them herself...not feeding a computer that's doing the lion's share of the work.
This is not "learning"....it's poorly copying and regurgitating...
Learning would be someone like Louise Post idolizing David Bowie, and echoing his stuff in her work...that's not even in the same "realm" of "learning" you claim AI does man.
Training a model on existing sound or text data and calling it art isn't creativity; it's just building a bot that imitates it.
She did not train a generative AI model to imitate her songs; she trained an AI voice-changer to emulate her voice. Here's her explanation:
Quote
For clarity, the utilization of AI was done strategically and specifically as follows:
The vocal performance during the section "Aftercare" is all sung by me but I timbre transferred or "swapped out" my human voice sound for a model I trained off of my own voice called ai.Mogen. I then spent 100s of hours trying to make it sound nice.
So if you don't like the composition... it's not the AI's fault. I've certainly heard much better AI voice-changers for vocalists. Here's the one I've subscribed to---it's already been used by some major producers (KSHMR for one), and it's trained to emulate the voices of specific vocalists with their consent:
There's a big difference between literally "copying" someone's work and "copying" someone's style. The way these ANNs generally work, they're not going to create an exact duplicate, but the learning process has to be carefully managed so they learn to create things that are sufficiently different to not violate copyright. These days generative AI almost always does a good job of not violating copyright or right to likeness. Still, people using generative AI outputs need to make sure there isn't any inadvertent plagiarism and that any voices generated don't sound excessively similar to an existing singer's voice (unless they have that singer's consent).
Yes, generative AI can be used to imitate a specific artist's idiosyncratic style if there's a large number of very similar examples (an artist who repeats the same thing) or their style is already extremely derivative and the user tries to get it to do that, or if it's taught specifically to do that. But most of the time it doesn't do that. Whether a human chooses to try to get it to do that and then decides to use the outputs (as one element of a collage with additional creative processing by hand, for example) is up to them. Humans can use AI in many creative ways---particularly by coming up with creative ideas (for images, for novels, even for songs) and having AI realize those ideas.
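As for the due-diligence step I mentioned, here's a minimal sketch of the kind of check I have in mind (my own illustration, standard library only, with made-up texts; a real workflow would compare against a much larger reference corpus and use fuzzier matching than a longest-common-substring scan):

```python
# Minimal sketch of a due-diligence check for inadvertent near-duplication:
# compare a generated passage against known source passages and flag any
# suspiciously long shared run of words. The texts here are made up.
from difflib import SequenceMatcher

def longest_shared_run(generated: str, source: str) -> str:
    """Return the longest contiguous substring shared by the two texts."""
    matcher = SequenceMatcher(None, generated, source)
    match = matcher.find_longest_match(0, len(generated), 0, len(source))
    return generated[match.a : match.a + match.size]

def flag_if_too_similar(generated: str, sources: dict[str, str], max_words: int = 8) -> None:
    for title, source in sources.items():
        run = longest_shared_run(generated, source)
        words = len(run.split())
        if words >= max_words:
            print(f"WARNING: {words}-word overlap with '{title}': {run.strip()!r}")
        else:
            print(f"OK: longest overlap with '{title}' is only {words} words")

sources = {
    "Example Source": "the pale king rode north under a sky the colour of old iron",
}
generated = "She rode north under a sky the colour of old iron, and said nothing."
flag_if_too_similar(generated, sources)
```

Anything a check like this flags still needs human judgment; short stock phrases overlap all the time without being plagiarism.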
Since Suno added the ability to upload your own melodies (as well as lyrics) and have the "cover" feature actually follow those melodies, it's become popular among songwriters, lyricists, and producers, primarily for producing demos to pitch to artists or for auditioning how the song might sound in different subgenres and generating complicated (and surprisingly creative) densely layered arrangements of interlocking parts.
And Suno has now reached an agreement with Warner Music Group: going forward, their old model will be scrapped and replaced with one "ethically trained" exclusively with the consent of the rights holders. More importantly from my perspective, the new service will also restrict the number of free downloads (as well as continuing to watermark all audio outputs) to prevent people from profiting off of flooding streaming services with purely AI generated songs.
The issue with Theft Engines is they do not create. They simply jam disparate bits together to try and create something 'new', but in actuality it's never more than a horrible patchwork. It's like Frankenstein's monster except the Monster is made with some bits from a pig, and a horse, and a duck.
A computer doesn't have creativity. Someone putting sentences into a theft engine doesn't have creativity.
Azath Vitr (D'ivers), on 10 December 2025 - 11:34 AM, said:
Maark Abbott, on 10 December 2025 - 08:47 AM, said:
The issue with Theft Engines is they do not create. They simply jam disparate bits together to try and create something 'new', but in actuality it's never more than a horrible patchwork. It's like Frankenstein's monster except the Monster is made with some bits from a pig, and a horse, and a duck.
That is not at all how they work or what they do.
I am in spaces both artistic, authorial and musical, and I see what theft engines do pretty much every day in one form or another. You'll not convince me that they don't steal from my pals when I've seen their stuff get scraped into a theft engine.
Maark Abbott, on 10 December 2025 - 12:47 PM, said:
I am in spaces both artistic, authorial and musical, and I see what theft engines do pretty much every day in one form or another. You'll not convince me that they don't steal from my pals when I've seen their stuff get scraped into a theft engine.
To be clear: you were saying you think generative AI is chopping up and rearranging or remixing sections of preexisting art or music, right? That is not at all how it works, as I've explained before, with references.
Non-copyrightable common structural elements are certainly imitated, though not literally copied. And as I wrote before, the model developers have to take care to minimize the risk of excessive similarity that would infringe on copyright or right to likeness (being too similar to a particular voice).
Can you provide any recent examples where you think generative AI literally duplicated a section or a copyrightable structural element?
Someone put it like this (paraphrased by me cos I don't recall the original). You might as well drink the trash juice in the dumpster outside a very good restaurant. All the scrapings from lots of incredible dishes have come together to make it so surely it's going to taste great.
Tiste Simeon, on 10 December 2025 - 01:31 PM, said:
Someone put it like this (paraphrased by me cos I don't recall the original). You might as well drink the trash juice in the dumpster outside a very good restaurant. All the scrapings from lots of incredible dishes have come together to make it so surely it's going to taste great.
This seems to be a clear set of counterexamples:
Quote
Mark Lawrence's AI vs authors part 2 results are in... and it's damning
Not only were the AI pieces overall rated better, once again we humans were no better than random chance at correctly telling apart AI from human-written fiction.
And as I posted before: when the Guardian asked art experts to try to distinguish between AI and historical human art---even way back in 2023---they didn't do very well. And not just for abstract art.
AI image generators have improved substantially since then. From a September 2025 study:
Quote
We find that participants struggle to distinguish between AI-generated and human-created artworks reliably, performing no better than chance under certain conditions. Furthermore, AI-generated art is rated as aesthetically pleasing as human-crafted works. Our findings challenge traditional assumptions about human creativity and demonstrate that AI systems can generate outputs that resonate with human sensibilities while meeting the criteria of creative intelligence.
An earlier (2024) study in the United States found that people did only slightly better than chance at guessing whether artwork was AI or human generated:
Quote
Participants correctly identified the source of the artwork only slightly more than half the time, and even so, were not confident that their guesses were correct.
"It's really a coin flip — when you show them the pictures, there's about a 50-60% chance they'll get it right," Samo said. "Generally, people don't know which is which, and when we asked how confident they were, they were typically saying they were only 50% confident."
Obviously, the difference between the 2024 study and the September 2025 study might be largely explained by the models' progressive improvement, which has also been documented by Mark Lawrence's periodic tests.
And remember that Mark (Lawrence) felt obliged to discontinue his cover art competition back in 2023 after it turned out that the winning artwork used generative AI, even though the contest rules explicitly prohibited it.
Tiste Simeon, on 10 December 2025 - 01:31 PM, said:
Someone put it like this (paraphrased by me cos I don't recall the original). You might as well drink the trash juice in the dumpster outside a very good restaurant. All the scrapings from lots of incredible dishes have come together to make it so surely it's going to taste great.
it's essentially a big old blender, except what comes out of the funnel is turds.
It utterly galls me that people defend it. Especially here, on a forum dedicated to a creative work.
Posted 10 December 2025 - 05:43 PM
Maark Abbott, on 10 December 2025 - 05:09 PM, said:
Tiste Simeon, on 10 December 2025 - 01:31 PM, said:
Someone put it like this (paraphrased by me cos I don't recall the original). You might as well drink the trash juice in the dumpster outside a very good restaurant. All the scrapings from lots of incredible dishes have come together to make it so surely it's going to taste great.
it's essentially a big old blender, except what comes out of the funnel is turds.
It utterly galls me that people defend it. Especially here, on a forum dedicated to a creative work.
In particular, let's not forget where this discussion came up again. Azath daydreaming about AI doing exactly what they're swearing up and down AI doesn't do.
Azath Vitr (D'ivers), on 08 December 2025 - 06:11 PM, said:
How did no one mention that one of the main characters is a cat?
Hmm, maybe if a great enough AI rewrote the books from the perspective of the cat... and narrated the audiobook with hyperrealistic (if idealized/optimized for human hearing and tactile sensation) cat sounds...
Maark Abbott, on 10 December 2025 - 05:09 PM, said:
it's essentially a big old blender, except what comes out of the funnel is turds.
It utterly galls me that people defend it. Especially here, on a forum dedicated to a creative work.
In particular, let's not forget where this discussion came up again. Azath daydreaming about AI doing exactly what they're swearing up and down AI doesn't do.
Azath Vitr (D'ivers), on 08 December 2025 - 06:11 PM, said:
How did no one mention that one of the main characters is a cat?
Hmm, maybe if a great enough AI rewrote the books from the perspective of the cat... and narrated the audiobook with hyperrealistic (if idealized/optimized for human hearing and tactile sensation) cat sounds...
No. There's a huge difference between "remixing" a text by chopping up and rearranging its words and rewriting a novel from a different perspective.
And also between unauthorized, unintended duplication of copyrightable elements, which would include the plot of a novel, and intentional, authorized rewriting of a novel mostly maintaining the same plot and main details (which could be explained in a sufficiently long prompt or series of prompts, or the whole novel could be input as a pdf). I have acknowledged that in the past there have been issues with overfitting for widely reproduced images, which did produce excessively similar works that would reasonably be deemed infringing, but the major generative AI platforms seem to have improved dramatically at preventing that. As I wrote before, people should still do due diligence to check that there isn't any infringing excessive similarity to copyrighted elements in other works.
Plus I was mostly kidding about having some future great author AI rewrite it from the perspective of the cat. Though it might be interesting.
It would not be interesting; it would be another nail in the coffin of human creativity in a soulless, desolate landscape where only the ultra-wealthy have any freedom.
Posted 10 December 2025 - 07:02 PM
Azath, please don't go into politics. You spin like a MAGA Republican.
You tell me I'm wrong, but what about your stated scenario doesn't boil down to the prompt: "Take Dungeon Crawler Carl and make it from the POV of the cat"? This right here is exactly what so many people don't want. It's just like saying "take A Game of Thrones and stylize it like a Studio Ghibli film."
This isn't anywhere near the case of a book being rewritten to add depth and perspective that the original didn't have. Think more along the lines of James by Percival Everett for an example of what I'm talking about here.