As someone who uses a screen reader daily, absolutely the fuck not.
LLMs will invent things out of thin air and ruin any comprehension. It wastes my time rather than helping me.
If you use any generic LLM then yes, but there are LLMs (like I said in another reply - it's probably not an LLM, but as there is no 'real' AI, that's what I'm calling all this AI bullshit)
that are trained specifically for captioning/transcripts, just not necessarily done in real time. Doing it "live" is what increases the error rate.
-
Yeah, speech-to-text models have nothing to do with LLMs, and their use for captioning is perfectly fine imo
Nope, they're still not good. I use YouTube's auto-generated subs and they 100% need an LLM to fix the mistakes.
-
Subtitles are a perfect use case for LLMs.
subtitles have a hard enough time getting the words right without llms.
-
If you use any generic LLM then yes, but there are LLMs (like I said in another reply - it's probably not an LLM, but as there is no 'real' AI, that's what I'm calling all this AI bullshit)
that are trained specifically for captioning/transcripts, just not necessarily done in real time. Doing it "live" is what increases the error rate.
I will frame it another way.
You cannot automate subtitles or captions.
And I always find reviewing automated output is harder than doing it yourself.
-
Nope, they're still not good. I use YouTube's auto-generated subs and they 100% need an LLM to fix the mistakes.
How would an llm fix a mistake equivalent to something being misheard? I feel like you're misunderstanding something and could probably also use some help with your English.
-
Subtitles are a perfect use case for LLMs.
Yes and no. There are specialized models that perform better than general-purpose LLMs with vastly lower resource use. But… the output part is essentially a language model too, so it's prone to a lot of the same issues.
They perform A LOT better than traditional models though. So much better it’s not even funny.
-
If you use any generic LLM then yes, but there are LLMs (like I said in another reply - it's probably not an LLM, but as there is no 'real' AI, that's what I'm calling all this AI bullshit)
that are trained specifically for captioning/transcripts, just not necessarily done in real time. Doing it "live" is what increases the error rate.
LLMs are large language models, they're a specialized category of artificial neural network, which are a way of doing machine learning. All of those topics are under the academic computer science discipline of artificial intelligence.
AI, neural net, or ML model are all way more accurate to say than LLM in this case.
-
Nope, they're still not good. I use YouTube's auto-generated subs and they 100% need an LLM to fix the mistakes.
Large language models are designed to generate text based on previous text. Translation from audio to text can be done via a neural net but it isn’t a Large Language Model.
Now, you could combine the two to, say, reduce errors on mumbled words by having a generative model predict the words that would fit better in the unclear sentence. However, you could likely get away with a much smaller and faster net than an LLM; in fact, you might be able to get away with plain-Jane Markov chains, no machine learning necessary.
Point is that there is a difference between LLMs and other neural nets that produce text.
In the case of audio-to-text translation, using an LLM would be very inefficient and slow (possibly to the point it isn't able to keep up with the audio at all), and a very basic text-generation net or even just a probabilistic algorithm would likely do the job just fine.
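To make the plain-Jane Markov chain idea concrete, here's a toy bigram sketch: count which word most often follows each word in a corpus, then use that to guess a garbled word. The corpus and function names are made up for illustration; a real transcriber would score acoustic candidates against these counts rather than guessing blindly.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    # Map each word to a Counter of the words seen immediately after it.
    follows = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def guess_next(follows, prev_word):
    # Most likely next word, or None if prev_word was never seen.
    candidates = follows.get(prev_word)
    return candidates.most_common(1)[0][0] if candidates else None

corpus = "turn on the lights please turn on the heater now turn off the lights"
model = train_bigrams(corpus)
print(guess_next(model, "turn"))  # "on" (seen twice after "turn", "off" once)
```

No neural net, no LLM, and it already encodes "which word plausibly comes next" for a narrow domain.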
-
LLMs are large language models, they're a specialized category of artificial neural network, which are a way of doing machine learning. All of those topics are under the academic computer science discipline of artificial intelligence.
AI, neural net, or ML model are all way more accurate to say than LLM in this case.
I have to disagree with you. AI is never a more accurate way to describe what we have now. Not until they call true AI something different.
I know it's a weird hill to die on, but die on it I will. Calling one artificial intelligence and one virtual intelligence could work.
Also, it's my understanding that LLMs are considered a type of neural net, so I don't see it being more accurate to call it a neural net vs an LLM.
And they are all subsets of machine learning, so calling it an ML model leads me back to the same issue I have with "AI". (And the same reason those loser USB fucks can suck a bag of dildos.) Lack of clarity about what it actually can do.
-
edit to clarify a misconception in the comments: this is an Instagram post, so "caption" refers to the description under the image or video
as an example, this text I am typing now is also a "caption"
just saying because someone started a debate misunderstanding this to be about subtitles (aka "closed captions"), and that's just not the case
Disabled people using their disability as a reason to defend AI, but not acknowledging that disabled people will be the first to suffer when it comes to the climate crisis, water crisis, displacement, lack of privacy, and all kinds of inequity. AI is not here to help disabled people; it's here to further capitalist billionaire goals.
-
I have to disagree with you. AI is never a more accurate way to describe what we have now. Not until they call true AI something different.
I know it's a weird hill to die on, but die on it I will. Calling one artificial intelligence and one virtual intelligence could work.
Also, it's my understanding that LLMs are considered a type of neural net, so I don't see it being more accurate to call it a neural net vs an LLM.
And they are all subsets of machine learning, so calling it an ML model leads me back to the same issue I have with "AI". (And the same reason those loser USB fucks can suck a bag of dildos.) Lack of clarity about what it actually can do.
A dog is a kind of animal but that doesn't mean you can describe every animal as a dog.
The term for "true" AI is artificial general intelligence.
-
Understandable, AI marketing now is a shitshow, but they are not even AI, I think. People just forget that tech used to do magic before AI existed.
This is a big part of it. Back when AI was first becoming big, my manager said they needed to run all my KB articles through an AI to generate link clouds or some such.
I was like, umm... that's a service this platform has always offered? Like, just because you don't know what the KB tools do, or what our rock-bottom subscription gets us, doesn't mean I haven't looked into it. But that also isn't worth doing, because now we only have a handful of articles in any given category, because I'm good at my job.
-
Subtitles are a perfect use case for LLMs.
Crunchyroll really messed up their subs with AI. Not sure if they mean LLMs and are just calling it AI, but still:
Kept wondering why subtitles were so obviously off when I was watching some stuff. It was horrid.
-
Subtitles are a perfect use case for LLMs.
to clarify, we are talking about a post caption, not closed captions.
that is, the text you put in the description of an image or video post.
-
I have to disagree with you. AI is never a more accurate way to describe what we have now. Not until they call true AI something different.
I know it's a weird hill to die on, but die on it I will. Calling one artificial intelligence and one virtual intelligence could work.
Also, it's my understanding that LLMs are considered a type of neural net, so I don't see it being more accurate to call it a neural net vs an LLM.
And they are all subsets of machine learning, so calling it an ML model leads me back to the same issue I have with "AI". (And the same reason those loser USB fucks can suck a bag of dildos.) Lack of clarity about what it actually can do.
You need to spend less time watching movies and more time watching computer science lectures. We had AI back in the 1960s.
-
It definitely is. As someone who actually struggles with severe ADHD, this comment makes my piss boil.
I second that; this person is actually just lazy. I've got ADHD and I always add fucking alt text; it's part of the normal post routine whether I took my meds or not. And it's not like you can't edit it into posts if you clicked send too quickly.
I'd even argue it makes your social media experience better. It forces awareness of what you do and gives you time to reflect on your post.
-
Subtitles are a perfect use case for LLMs.
Fuck no.
-
I second that; this person is actually just lazy. I've got ADHD and I always add fucking alt text; it's part of the normal post routine whether I took my meds or not. And it's not like you can't edit it into posts if you clicked send too quickly.
I'd even argue it makes your social media experience better. It forces awareness of what you do and gives you time to reflect on your post.
There was someone on TikTok defending AI "art" who said that he has ADHD, that it's hard for him to concentrate on art, and that AI makes his life "easier" by letting him feel like he did something; I don't remember exactly, but it was something like that. But he also forgot how many disabled people there are, with all kinds of disabilities, who are still able to make practically perfect art. He also mentioned how he wasn't born with talent, as if talent even really exists.
-
There was someone on TikTok defending AI "art" who said that he has ADHD, that it's hard for him to concentrate on art, and that AI makes his life "easier" by letting him feel like he did something; I don't remember exactly, but it was something like that. But he also forgot how many disabled people there are, with all kinds of disabilities, who are still able to make practically perfect art. He also mentioned how he wasn't born with talent, as if talent even really exists.
Probably self-diagnosed with whatever cool disorder is hot right now.
-
Understandable, AI marketing now is a shitshow, but they are not even AI, I think. People just forget that tech used to do magic before AI existed.
It's kind of the other way around: we've always had AI. It used to just basically mean a computer making some decision based on data, like a thermostat changing the heating in response to a temperature change.
Then we got LLMs, and because they are good at pretending to have complex reasoning ability, "AI" as a term started to always mean "computer with near-human-level intelligence", which of course they absolutely are not.
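The thermostat point is easy to make concrete: this is "AI" in the old rule-based sense, a hard-coded decision taken from a sensor reading, no learning involved. The target temperature and hysteresis values here are made up for the sketch.

```python
def thermostat(temp_c, target_c=20.0, hysteresis=0.5):
    # Classic rule-based control: a fixed decision rule applied to data.
    if temp_c < target_c - hysteresis:
        return "heat_on"
    if temp_c > target_c + hysteresis:
        return "heat_off"
    return "hold"  # inside the comfort band, leave the heater alone

print(thermostat(18.0))  # heat_on
print(thermostat(22.0))  # heat_off
```

By the 1960s-textbook definition, this kind of automated decision-making already counted as artificial intelligence.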