Under Estimated, Growth Strategies, and The Model Business
The Weekly Variable
Recent reminder - it always takes longer than expected.
Sometimes consistency alone isn’t enough; it has to be the right consistency.
And an interesting perspective on building Large Language Models.
Topics for this week:
Under Estimated
I was looking for regular stream content, and at the suggestion of a dedicated viewer, I thought I was building a weekend project.
Two months later, it turns out I built a SaaS that learned how to lecture.
It asks a few questions about your course, then builds the whole thing: slides, narration, lecture, all automatically.
Scoped way down to “Apple-style” slides with minimal text, and sticking with just AI voice narration, I didn’t see the system being that difficult to build.
I was thinking I could knock something out in a weekend.
It was only a few steps: just get the course details, generate slides and audio, then stitch them together into videos with ffmpeg.
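The stitching step can be sketched as a small helper that builds the ffmpeg command. This is a minimal sketch, not the actual workflow: the file names are hypothetical, and it assumes one still slide looped over one narration track.

```python
import subprocess

def build_stitch_cmd(slide_png: str, narration_mp3: str, out_mp4: str) -> list:
    """Build an ffmpeg command that loops a still slide image over its narration audio."""
    return [
        "ffmpeg", "-y",
        "-loop", "1", "-i", slide_png,    # repeat the slide image as the video track
        "-i", narration_mp3,              # the AI voice narration
        "-c:v", "libx264", "-tune", "stillimage",
        "-c:a", "aac",
        "-shortest",                      # stop the video when the audio ends
        out_mp4,
    ]

# subprocess.run(build_stitch_cmd("slide_01.png", "narration_01.mp3", "lecture_01.mp4"), check=True)
```

Per-slide clips like this can then be concatenated into the full lecture video with ffmpeg's concat demuxer.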
After thinking it over for the weekend, I decided I’d just build it on stream instead.
And quickly figured out it was going to take a while.
I started working on flows for this system around September 10, and finally got a single lecture video produced on October 6, but with manually running workflows and manipulating data directly.
Yesterday, October 30, was the first time the entire system ran on its own without human intervention.
I entered the details, selecting the option for only 2 courses for testing, and eventually out popped 2 lecture folders with:
2 slides per lecture (title slide and key points)
a 2-3 minute audio narration for the lecture
a video lecture that shows the slides while the audio narrates the lecture
a lecture document to accompany the video lecture
a list of additional resources for continued learning on the lecture topic
a set of quiz questions about the lecture
It’s a pretty solid system, and took way longer than expected.
But also does much more than I had originally pictured it doing.
This is what I meant by accidental SaaS.
I think I started pushing n8n to its limits with this one.
But it’s exciting to see the whole thing come together after about 2 months of building.
And 2 months sounds like a long time, but this was built almost entirely while streaming.
I think I tweaked a couple things once or twice off stream.
About 80 hours total, or roughly 2 weeks of full-time effort spread over 8 weeks of streaming.
Much longer than the original weekend hacking I had planned, but really only 2 weeks of full-time building.
Not bad for the result.
It’s been a good project, and nice staple for the stream.
It showed me once again that something I thought could be built in a weekend actually took 2 solid weeks of focus.
Even automating things with “no code” takes longer than expected.
Handwritten Documents
Speaking of taking longer than expected…
Last week I set up an automation that would read a customer intake form, find the customer details like name, address, and phone number, and add those into the appropriate columns in the Monday.com CRM.
That process was pretty straightforward.
I did over-engineer a little, though.
Using that input data, I generated a fancy report number that combined part of the address, the customer’s initials, the signing date, and part of a timestamp to guarantee a unique, searchable value like this: FL-2025-05-04-1234-EL
Spent more time getting that to cooperate than anything else.
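A rough sketch of that report-number scheme follows. The function and parameter names are my guesses at the intake fields, not the actual workflow, and which address fragment feeds which segment is an assumption based on the example value.

```python
import time

def make_report_number(state: str, sign_date: str, initials: str) -> str:
    """Build a unique, searchable report number like 'FL-2025-05-04-1234-EL':
    a state code from the address, the signing date, the last four digits of a
    timestamp for uniqueness, and the customer's initials."""
    stamp = str(int(time.time() * 1000))[-4:]   # timestamp fragment
    return f"{state.upper()}-{sign_date}-{stamp}-{initials.upper()}"
```

The timestamp fragment is what keeps two customers with the same initials and signing date from colliding.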
But I forgot to test with a document that had handwritten customer info, not printed.
n8n has an “Extract Text from PDF” node but using that with the handwritten document produced no text at all.
Just didn’t even try.
Dropping the PDF into GPT, it read the text right away, though.
So I had to add a second route to the automation: if “Extract Text” failed, the document would be sent to GPT instead.
But again, this was surprisingly more complicated than expected.
n8n has direct nodes for GPT to read images, but not PDFs.
Instead, this became a multipart process of redownloading the PDF file from Monday.com, uploading the file to GPT so it could read it, finding the proper way to send a request to GPT to read the file I previously uploaded, then delete the file upload afterward since it doesn’t need to live in GPT and take up storage space.
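That multipart sequence can be sketched in Python with the OpenAI SDK’s Files and Responses APIs. The helper names and the prompt are mine, and the exact request fields are an assumption worth checking against OpenAI’s docs; the client object is left untyped so the payload builder stays self-contained.

```python
def pdf_request_input(file_id: str, prompt: str) -> list:
    """Build the Responses API input that points the model at an uploaded PDF."""
    return [{
        "role": "user",
        "content": [
            {"type": "input_file", "file_id": file_id},
            {"type": "input_text", "text": prompt},
        ],
    }]

def read_handwritten_pdf(client, pdf_path: str) -> str:
    """Upload a PDF, ask gpt-5-mini to read it, then delete the upload.
    `client` is an openai.OpenAI() instance."""
    # 1. Upload the PDF previously downloaded from Monday.com.
    up = client.files.create(file=open(pdf_path, "rb"), purpose="user_data")
    try:
        # 2. Ask the model to transcribe the customer details.
        resp = client.responses.create(
            model="gpt-5-mini",
            input=pdf_request_input(
                up.id, "Extract the customer name, address, and phone number."
            ),
        )
        return resp.output_text
    finally:
        # 3. Clean up so the file doesn't sit in storage.
        client.files.delete(up.id)
```

In n8n this same sequence ends up as a chain of HTTP Request nodes rather than one function, but the upload, read, and delete steps are the same.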

Getting GPT to read a PDF
The alternative would have been a non-standard community node for n8n that converts a PDF to an image, then sending the image to GPT to analyze instead. But I was trying to stick with just OpenAI if I could, since I know they can handle it.
It took about 5 different attempts to finally find the right approach to get gpt-5-mini to read a PDF but luckily only maybe 45 minutes total to get a proper result.
I expected the actual reading of the handwritten document to be the pain point, not the process of uploading it and requesting the read.
I didn’t get a video done for this one yet, but it will be high on the list.
Amazing to think a business could easily have GPT, or any other major model, do all the document reading and entering at this point, even if those documents are handwritten.
Scheduled Videos
OpenAI’s Sora 2 video model has been out for a few weeks now, but I finally got around to making an automation for it.
At first I set out to use some of these new video API platforms like kie.ai and fal.ai which advertise convenient and super cheap access to all the major AI video and image models, including Sora 2.
I created a workflow that connected to kie.ai’s Sora 2 and pretty quickly had a working system, but I paused before recording the video to show it.
Sora 2 has been invite only since launch so I couldn’t help but wonder if these third party websites are finding ways to get to these video APIs that aren’t within Terms of Service for OpenAI.
This is the same reason I haven’t fully bought into relying on Apify.com for automations.
These websites work well and are super convenient, but they also feel like a time-bomb.
They aren’t operating within other websites’ guidelines, so it only takes one cease and desist from a huge company to force Apify, kie, or fal to close up shop, and any platform that depends on those sites is left with a big hole in its process.
I couldn’t in good faith recommend kie or fal until I know more about them, so I stuck with the direct Sora 2 API, which was easy to use (I was lucky enough to get my hands on an invite code a few weeks ago).
So here’s a simple system to schedule Sora 2 videos every day:
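The core of the daily job is a video-generation request to OpenAI. This is a minimal sketch, not the workflow from the video: the field names are assumptions to verify against OpenAI’s video API docs, and the scheduling itself lives in n8n’s Schedule Trigger (or a cron job) rather than in this code.

```python
def sora_job_payload(prompt: str, seconds: int = 8, size: str = "720x1280") -> dict:
    """Request body for OpenAI's video generation endpoint (field names assumed)."""
    return {
        "model": "sora-2",
        "prompt": prompt,
        "seconds": str(seconds),  # clip length
        "size": size,             # vertical format for shorts
    }

# Each day the trigger fires, pulls the next prompt from a list, POSTs this payload
# to the videos endpoint with an Authorization: Bearer header, polls the job until
# it finishes, then downloads the result for posting.
```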
Finally put my face in a thumbnail again.
Let me know what you think!
Growth Strategies
I spent a good amount of time this week focusing back on “brand growth”.
I’ve talked before about how the stream alone is not a good short-term traffic generator.
This has been the problem with Twitch since the early days.
The easy part is turning on the camera and being live.
The hard part is growing the audience.
One-per-week YouTube videos are a solid foundation for traffic though, and it’s been working slowly but surely.
If growth is the goal however, it’s going to take more than one video per week.
Or better quality videos…
But at this point I think I need more quantity and data to figure out the quality piece.
Almost 40 long videos is a lot but at the same time it isn’t.
Shorts however take much less time to make and can increase traffic considerably faster.
I’ve been working on ways to automate the shorts process to pull content out of a stream, but of course, it’s taking longer than expected…
But I do have an alternative in the meantime.
Sabrina Ramonov has provided a fantastic guide and service to leverage shorts content much more efficiently.
She spends 1 day per week creating shorts and one long video, manually uploads her shorts to TikTok, then uses her service Blotato to repost those TikTok videos to 9 other social media platforms.
She even threw in some example n8n automations on how to make this happen.
I’ll be exploring this approach more next week while I continue to squeeze value out of the stream, but this process is looking promising.
More to come on this shorts strategy.
The Model Business
We’ve gone full meta (not that Meta) in this one.
I was listening to Theo’s video talking about how OpenAI is burning through cash at the moment, but then Theo referenced a podcast clip with Dario Amodei.
Podcast within a podcast.
Dario was on the Stripe podcast explaining the business behind training a frontier model like Claude.
You can look at training a new huge model like building a separate business.
It costs whatever, $100 million to make a model in 2023.
Then $800 million for a new one in 2024.
Then $1 billion in 2025.
So it looks like money is just burning.
But the 2023 model doesn’t cost much to maintain once it’s built, and makes money once it’s offered as a service.
If they get 2 million users to pay $120 per year to use that model in 2024, at maybe a 50% operating margin, that’s $120 million in margin, or $20 million in profit after the $100 million training cost.
By 2025, the 2023 model is generating $120 million in profit per year, with training already paid off.
The next model has to push monetization a little harder but has more options.
Say it gets 10 million users at $240 per year, or offers Max pricing at $200 per month; $800 million is well within reach.
Even just 10,000,000 users paying $120 per year is $1.2 billion, and the $800 million for training is recovered.
And the pricing becomes more extreme for the $1 billion model in 2025, but ideally it’s a more capable model that can take its time becoming profitable while continuing to operate for longer.
All the while the other models are operating in profit, feeding the training of the next big model until the market reaches equilibrium and these numbers don’t scale anymore.
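The back-of-the-envelope math above can be sketched out. All figures are Dario’s hypotheticals as I’ve retold them, not real Anthropic numbers, and the 50% margin is an assumed round number.

```python
def annual_profit(users: int, price_per_year: float, margin: float = 0.5) -> float:
    """Yearly profit from serving an already-trained model at an assumed margin."""
    return users * price_per_year * margin

# The hypothetical 2023 model: $100M to train.
year_one = annual_profit(2_000_000, 120)        # $120M in margin during 2024
net_2024 = year_one - 100_000_000               # $20M net of the training bill
# By 2025 the training cost is paid off, so the model throws off $120M/year.

# The 2024 model: $800M to train, but monetization pushes harder.
year_one_2024_model = annual_profit(10_000_000, 240)  # $1.2B, well past $800M
```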
It was super interesting to hear Dario explain it, and I’m anxious to listen to that full podcast, but it excited me because I’ve been thinking about AI models like an enterprise in your pocket - a Corporation as a Service.
It’s both obvious and not obvious at the same time.
Corporations are essentially one big business made up of a bunch of smaller businesses that all try to align on the same goals.
An LLM is essentially a large percentage of an actual business compressed into a single asset.
But that asset still needs humans to turn it into a fully operating business.
Maybe soon, though, one of these models may finish training and actually be a self-sustaining business, operating all on its own.
And that’s it for this week. Wave is finally live on iOS and now Android!
If you want to start a newsletter like this on beehiiv and support me in the process, here’s my referral link: https://www.beehiiv.com/?via=jay-peters.
I also have a Free Skool community if you want to join for n8n workflows and more AI talk: https://www.skool.com/learn-automation-ai/about?ref=1c7ba6137dfc45878406f6f6fcf2c316
Let me know what else I missed! I’d love to hear your feedback at @jaypetersdotdev or email [email protected].
Thanks for reading!