lyn elaborates

three hierarchies


Updated Feb. 8, 2026

Lynai has worked in three fields: broadcast, art/video, and higher education. AI has touched all three in some way.

BROADCAST – JOURNALISM

With broadcast, automation has taken over control rooms. Production staffs of 15 have dwindled to 1-3 people per shift. Multimedia journalists are expected to push out stories like TikTokers, sometimes doing 3 stories per day if possible; the normal range was 1-2 per day. ChatGPT and other LLMs have been rolled out to producers, the idea being that they'll make teases and short stories with LLMs rather than spend time writing them out. Some companies (like Tegna) will have an LLM specific to their station rather than use a ChatGPT model so information is kept within the company. I'm unsure whether those models run on Gemini or ChatGPT since I've stepped away from broadcast for a bit now.

Hearst's partnership with OpenAI (Hearst, October 8, 2024)
Tegna uses AI to write stories (FTVLive (Opinion), Jan. 2025)
How Local Stations Are Leveraging AI (...) (TVNewsCheck, October 2024)

Other companies may use LLMs to write long-form articles on their websites, anywhere from small local stations to big companies like the New York Times. Sometimes this is not the fault of an NYT reporter but could be an opinion piece submitted by an outside contributor.

One recent example is the Chicago Sun-Times publishing a page by a contributor who used AI to recommend fake books. It was not checked by an editor and was printed by the paper. The error shed light on how writers have to keep up with demand and, in some cases, turn to AI.

AI-Generated Fake Book List Seems Funny, but Reflects the Technology’s Danger to Journalism (PEN America, May 2025)
AI took their jobs. Now they get paid to make it sound human (BBC, June 2024)

If you're curious about what AI is being used at your local station or newspaper, it's worth emailing or calling the assignment desk and asking for transparency. If no one knows, then ask for the EP (executive producer) or ND (news director) so that you can be an informed viewer/reader.

THE ARTS – WRITING - VIDEOGRAPHY

It almost feels like this topic doesn't need many additional notes from me. If you're observant of the media field, you can easily see how AI is being pushed through. So I'll try to keep everything brief. My friends will laugh because my version of "brief" is still a thesis page.

THE ARTS, the humanities. If you only became “creative” after the introduction of Midjourney or Gemini, were you an artist to begin with? My thought two years ago had been that AI could be used in the concept phase of artwork: a commissioner giving an artist an idea of what they wanted. That is no longer my belief. When genAI is able to create fully rendered, near-complete pieces, it muddies the creativity pool for the artist. The work is already made, so what do you want me to do with it?

The arts is often something people want to cut corners on. But when you have no more artists in the room, symbolism and meaning fall off. When artists are part of the conversation, we can contribute so much. Art can have immense impact, such as the Obama Hope poster by Shepard Fairey or Flower Thrower by Banksy. When we are excluded, things can feel lifeless and sterile, which is honestly fine if your piece is for a stale stockholder meeting.

The current state of genAI infiltrating the arts is massive. Some companies refuse to be transparent about their AI involvement; others will do what they can. One example is Larian Studios' stance on AI use in the concept work for their next game. My follow-up questions for them: what specific tools are used with AI, WHICH AI model is being used (if applicable), where is it used specifically (such as PowerPoints for internal use), can we see some blurred samples, and how do their artists feel about this, as they will have more insight into AI use in their respective fields?

On the flip side artistically, Spider-Verse used a form of AI to support artists. This was not a machine learning model like Gemini. Instead, it was a tailor-made system developed by Sony that learned to add lines to the 3D models. This turned a very time-consuming task into a quick one, so artists could focus on other parts of the model instead of that minute detail. In this case, machine learning supported the artists in a tedious task rather than taking away from them.

Some companies are all in on AI, however. I learned Paramount launched an AI marketing campaign during San Diego Comic-Con 2025. They broke it down in a panel at Adobe MAX 2025, using Adobe Firefly's model trained on their own sources. But the running joke in the panel had been “every one of our 150 IPs takes place in Japan because we based the mountain on one of the mountains in Japan...”

After messing with the Firefly model so it would stop occasionally giving horses 6 legs, and with more curation, they launched the marketing campaign at SDCC 2025. Attendees could generate their own Paramount location “land deed” as a souvenir. They called it a success. I did pass their station while at Comic-Con that year, took one look at the land deeds and, at the time, called it slop. Now that I have insight into how they created that marketing bit, I remain firm that it is slop.

WRITING is just as badly off as animation and art. AI writing is harder to detect since everyone has their own style, and no, an em dash isn't a red flag for AI-assisted writing. I'm an em dash user. Do I know how to use it properly? No, but will I use it? Yes I will, dammit.

Editors and avid readers have a head start on picking up AI writing. I have an advantage as well because I was/am addicted to chatbots. While people are running into AI more in romance novels and medical journals, I'm running into it more and more in fanfiction writing.

  1. ”You're playing a dangerous game”
  2. “That's balance – not restriction”
  3. “This was a warning. A lesson. A reminder of who had power.”
  4. “You looked at him, really looked at him, (…): his sharp suit replaced by something softer, but the edge is still there.”
(The above are real examples I pulled from various fanfictions and works that drive me NUTS. They are from 4 different fandoms as well.)

Additionally, a lot of paragraphs will focus on the moment rather than world building or character description/development, making time feel like it's moving at a sluggish pace. Just recently, I had to read about 6 paragraphs on a throwaway character only to seldom see them in the rest of the story. What's the point of adding all that information if it gives the reader nothing of value? Word count? AI often has a lot of statement pieces and “gotcha” talks. Everyone has a quippy one-liner and no one is interesting, or even silly for that matter. Even poor human writing will be more directive and active than AI assistance. To the average person who just wants to consume a plot, this is bottom-of-the-barrel acceptable. But those of us who want more nuance, who want to read a person's writing for what it is, deserve more.

I find fanfiction writers are less transparent about their work than people in other fields, and as I dabble in the space again after a 15-year hiatus from my own writing, I want to be transparent in this field. Simply because reading what I suspect may be AI-assisted work is a massive turn-off for me: no matter the writing, the format reads the same every. Single. Time. The AIs do not understand a character's personality unless you feed it to them, and even then, unless you're on top of it, they'll break character.

As an addition, those AI detector websites are bullshit. Please do not use them against writers as a "gotcha". It's incredibly insulting, in my opinion.

The Problems with AI Detectors: False Positives and False Negatives (University of San Diego, Dec. 2025)
AI detectors are easily fooled, researchers find (EdScoop, Sept. 2024)

VIDEOGRAPHY is a space AI keeps trying to break into. Engines like Veo 3 can produce realistic short videos, but using one for an actual film is financially unreasonable. A monthly plan (with no promo attached) for Veo 3 is $60/month for 1,000 credits. I don't have Veo 3, so I can't say how many credits it takes to generate a video. But it'll never be right the first time, so you have to regenerate. The Coca-Cola 2025 Christmas ad had 70,000 generated images. Using that number on public models like Adobe Firefly, the highest tier is 50,000 credits a month at $200/month. This price will skyrocket if you're a business or enterprise. Still, the results of AI-generated videos fall flat with audiences because the artists are not in the room. To this day (Feb. 2026), the only video piece I've seen where I said “yes, that might actually be... doable” was a short 4-minute animation fully generated in AI in the summer of 2025. The test animation was done by a traditional CG industry animator and took numerous regens and manual editing just to keep the style consistent. The piece took anywhere from 3-6 months (I can't remember exactly) to fully complete, along with AI-made audio.

Yet in this example, it took a trained artist to make AI work and be cohesive, something many people neglect to address.
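To make the scale of those subscription numbers concrete, here's a back-of-envelope sketch. The credits-per-image rate is a pure assumption on my part (real credit costs vary by model, resolution, and settings, and I'm only using the tier prices quoted above), so treat the output as illustrative, not a real quote.

```python
# Back-of-envelope cost sketch for large-scale AI image generation,
# using the Adobe Firefly top-tier pricing quoted above.
# ASSUMPTION: one credit per generated image. Real rates differ.

FIREFLY_TOP_TIER_CREDITS = 50_000   # credits per month, highest public tier
FIREFLY_TOP_TIER_PRICE = 200        # USD per month

def months_and_cost(images_needed: int, credits_per_image: int = 1):
    """Estimate months of subscription and total dollars needed,
    under the (hypothetical) credits-per-image assumption."""
    total_credits = images_needed * credits_per_image
    months = -(-total_credits // FIREFLY_TOP_TIER_CREDITS)  # ceiling division
    return months, months * FIREFLY_TOP_TIER_PRICE

# The Coca-Cola ad's reported 70,000 images:
months, cost = months_and_cost(70_000)
print(months, cost)  # 2 months, $400 under these assumptions
```

Even at this charitable one-credit-per-image rate, you're paying for months of a top-tier plan, and any regeneration multiplies the count, which is why "it'll never be right the first time" matters so much to the math.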

Additionally, AI tools have been involved in the video space for longer than people realize. Adobe's early adoption of AI in their programs has led to them trying to be the leader of AI in the creative space. Some of the tools are helpful, such as text-based editing and captions in Premiere and the smarter masking/tracking tools in After Effects. Adobe's Podcast Enhancer (web) does a spectacular job of removing loud airplane sounds from audio clips during interviews. As of January 2026, Lightroom has rolled out a new AI culling feature for photographers. By the way, this can be turned off under Catalog Settings > Metadata > Assisted Culling. It should be off by default, I think. But it's always good to check.

EDUCATION - CHATBOTS

This is probably the more harrowing portion, as it also involves children. The topic is massively complex, and my viewpoint is from the American education system. On top of that, it's compounded by test scores pushed by the state and federal government, teachers stretched thin with no financial support, parents stretched thin with no financial support, and students left in the middle with digital devices in their hands. There is no good guidance for them as technology becomes more “user friendly”. No need to go into an operating system's control panel to customize your computer, no need to tinker with electronic devices, and no need to curate your social media consumption when all your friends are on it for hours just scrolling.

Now add AI (LLMs, for the most part).

Some schools use AI companies that run off the back of ChatGPT. In that case, their privacy policy is dependent on ChatGPT's, not their own. AI in schools is such a wild west that we still don't have adequate studies on whether it helps or hurts students. In some cases it does help, such as for students with dyslexia or learning disabilities. Other times it hurts, as students will let the bots just do the work for them instead. In the worst scenarios, students become addicted to chatbots. Because some of them are young, they will struggle to realize they have an addiction.

I assist in a chatbot support group. Ages range widely; it's not just youth but older adults as well. Reasons vary for why anyone uses bots beyond a simple question: loneliness, loss of family, trauma, being unable to communicate with others, therapy use (which I don't recommend unless you are under the guidance of a therapist and truthful with your therapist). Because my focus is mostly on roleplay, I seldom see people use bots in a “boyfriend/girlfriend” situation like we often see in news media. But youth often know there's a problem because they see how many hours they've accumulated and realize they cannot keep going on like this. They want some portion of their life back. But because chatbot addiction is still an unknown, it's hard to get assistance for it. I often say to use internet addiction as a starting point, as we have the baseline tools for that and they can be molded for chatbot use. The worst case scenarios should always be supported by a trained therapist, though.

The problem specifically with chatbots, in my opinion, is the dopamine effect. The way the sites are set up, it's too easy to access: make a new temporary account, delete it, log back into the same email. You can never truly delete your email or account. Messages come in just like iMessages or Instagram messages. Some sites will ding with a new immediate message, and the brain takes its hit.

Something new, I get to respond!

In another way, it's like the gambling effect. You pull the lever, watch the slot go and wait for it to finish spinning. Then you pull the lever again and start the cycle over. You can see where I got addicted. My personal story was after a loss, I got into chatbots as a roleplay format. I liked seeing the new messages constantly. It was novel, I was engaging with characters.

But slot machine goes brrr.

And no, telling myself I was harming the environment to try and sever my addiction didn't work. Not one bit. It only added to the stress I had over it. Other people have these same concerns, and I often encounter them spiraling. They're weighed down by the harm they're doing but can't stop. I find it better to focus on the individual rather than the bigger picture first. When you're able to help yourself, then you can help others.

As an adult, I figured out I was getting dopamine from this, hence my long battle of divorcing myself from chatbots using extreme measures (locking myself out of my computer several times over). For myself, if I use Gemini as an assistant (“explain this to me but use CBT and cite sources”) it doesn't trigger my brain like a roleplay does. But for others, any use of AI can trigger them. It's really dependent on the person.

But bringing it back to students: some can be fully reliant on AI, while others have smartly used it as a tool, such as making outlines for their essays and then writing on their own from the outline to make their thoughts more cohesive. Others are staunchly against AI use and even push back against professors, citing moral and environmental reasons. Both, I think, are valid positions when grappling with AI in education. On the flip side, teachers may be pressured by their administrators to encourage AI use to better prepare students for after graduation, as the current landscape is a world with AI.

If you're asking yourself why the companies don't put in safeguards for students and people with addiction, the answer is very simple. Money, babyyy!!!

ChatGPT and Beyond: How to Handle AI in Schools (Common Sense, May 2025)
AI cheating runs wild on campus (CBC, May 2025)
What social media addiction lawsuits mean for chatbots (Politico, Feb 2026)
STUDY: A new direction for students in an AI world: Prosper, prepare, protect (Brookings Inst., Jan 2026, 500 participants)

Students can take some action on pushing back against teachers in AI use cases:
  1. If work is deemed to be AI but it isn't, request to defend your work in person. This could include citing specific information from your work and where the sources came from
  2. Cite faultiness of AI detectors if it was used against you
  3. Use ANY word processor that has an edit history.
You can additionally ask your teacher to be transparent with you. How do they use AI? Where is it used in the course? Is it a chatbot acting like a student aid? Do they use it in their personal work? What model do they use? What is their stance on it?

ULTIMATELY

...there are few guidelines on how we should use AI and where it is appropriate. As you can see, I covered three fields alone and haven't even touched others such as medical, scientific or accessibility uses. I have less knowledge of those three fields, but still enough to realize how AI can help in a positive way.

The most important topic, though, the one I often find people skirting away from at the top level, has its own page: environmental impact.

Transparency: Wrote this in mostly one sitting on OpenOffice Writer (2021 baby). If my thoughts are all over the place, suffer. I'm not tl'dring or chatgpting it.
