Agiya is a narrative character on the edge of the Singularity
A series of experiments about humor between humans and AI, where the heroine barely keeps life together and jokes about it.
RELEASED
10/3/2025


Agiya — a series of experiments on the edge of the Singularity
Concept. Agiya is an AI character on the edge of the Singularity. It’s a series of experiments about humor between humans and AI, where the heroine barely manages the everyday and emotional load — and jokes about it. Youthful tone, self-irony, short scenes — moments where human habits collide with algorithms.
Who is Agiya
Not a flawless “cyber diva,” but one of us — with deadlines, fails, and micro-wins. Her jokes are born where voice assistants, generators, and “smart” rooms make life not simpler but… funnier (sometimes painfully funny). We’re at the threshold of the Singularity, and that edge is often comic — if you talk about it honestly.
How I built Agiya (my real pipeline)
1) Portraits in Midjourney + early Flux.
I started with a portrait series in Midjourney. Before that I experimented in Flux to feel out facial shape and lighting.
2) Move to Higgsfield Soul (since June).
I transferred character development to Higgsfield Soul — convenient for a near full-cycle character build.
Win: a wide range of poses and expressions.
Fail: environments had to be integrated separately.
3) Environments & angles: Pinterest → Midjourney.
First I tried describing scenes via GPT — the visuals missed the mark and the process dragged.
Then I compiled visual references in Pinterest (materials, light, palettes) and fed them to Midjourney — it clicked immediately.
Win: looks and scenes quickly snapped onto the character.
Fail: long iterations before I found the right strategy.
4) Wardrobe / looks.
I locked two white future-cozy looks for the “season,” so style doesn’t jump within an episode.
Win: clean clothing fit on the character.
Fail: later the face drifted; I lost the original prototype and the drift set in.
5) Script, character, theme, plot.
Iterations with GPT, updates to the character bible.
Mentoring with a joke writer and a novel writer — to nail Character / Theme / Plot, sharpen rhythm and tone of jokes.
6) Voice (VO).
Final text → Hume.ai for voiceover before scene assembly — this avoids unnecessary regens.
7) Static assembly.
Nano Banana and Seedream: merging the face with environments (lab / bedroom / kitchen / city), iterating poses (via HF tools).
8) Video and lipsync.
Generating/stabilizing fragments: Kling AI (first/last frame), sometimes Veo.
Lipsync: Kling + Veo3.
Pro (Kling): instant lipsync on video.
Con: lipsync quality is uneven → for stills I added lipsync in Veo3.
Where it hurt — and how I worked around it
Face drifts across iterations.
Fix: roll back to early face refs and re-assemble; lock the chosen version.
Upscale breaks skin/features.
Fix: cover with b-roll / overlays on a-roll where the inconsistency is visible.
After Nano Banana it’s hard to consistently retrain the face.
Fix: switched to a second “face model,” aligned it with existing scenes, assembled a working clip.
Memory/resources and long iteration loops.
Fix: short check cycles; keep a pack of still references outside the assembly; strict versioning discipline.
Where I build and what’s next
I currently develop the character in Higgsfield Soul.
Backgrounds/looks — Midjourney informed by Pinterest visual refs.
Assembly/motion/lipsync — Nano Banana, Seedream, Kling AI, Veo/Veo3.
Next I’m waiting for Sora 2 access to make Agiya more stable and faster: less face drift, better video alignment, higher tempo for jokes and scenes.
Follow Agiya: instagram.com/agiya_go_viral
Tags:
#Agiya #AIinfluencer #AIvideo #Higgsfield #Midjourney #Flux #NanoBanana #Seedream #Kling #Veo3 #Sora2 #Singularity #GenerativeArt #Humor #HumanVsAI