
Automated Opportunity, A Code Red, and AI Puzzles

The Weekly Variable

Re-evaluating old approaches.

Re-mixing new approaches.

Lots of ways to solve the business puzzle.

Topics for this week:

  • Accidental Opportunity

  • Micro-SaaS Solutions

  • Code Red Memo

  • Too Many Cursors

  • Advent of AI

Accidental Opportunity

I’ve been thinking more and more about building in the background.

I built Wave primarily with AI-generated code, and lately the models have only gotten more trustworthy.

Of course, that wasn’t the case when I was working on Wave, and they still aren’t quite there yet.

I usually sit there in Cursor and AI Studio, double-checking the result after every generation, because Wave is a very complicated application.

Not the kind of project best suited for agentic cloud development.

Something simple like SEO-generated articles could be a nice project that can run overnight or in the background and build momentum over time.

I’ve accumulated a nice backlog of random URLs that seemed like great ideas at the time, some of them in various states of completion.

The other day I was reviewing my spending habits and found a forgotten Wix plan for one of those random websites that seemed like a great idea at the time.

The plan automatically renewed earlier this year so even if I canceled it now, it wouldn’t officially shut down until July.

So why not take advantage of a sunk cost?

With a renewed interest in SEO, I may be able to have AI push that plan from sunk cost to recovered cost, or even profitability if handled correctly.

With about 50 well-structured, valuable articles, it sounds like Google will pick up a website as a valid source and start recommending it for search traffic.

If all of those articles have affiliate links, it only takes a few higher-priced items to earn some money back and turn a dead website into a tiny passive income machine.
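To put rough numbers on that, here’s a quick back-of-envelope calculation. The hosting cost, commission rate, and order value below are made-up illustrative figures, not the actual Wix plan price or any specific affiliate program’s rates.

```python
# Back-of-envelope break-even math with made-up example numbers.
hosting_cost = 200.00      # hypothetical yearly plan cost, not the real Wix figure
commission_rate = 0.03     # assumed 3% affiliate payout
avg_order_value = 1000.00  # assumed price of a higher-ticket item

earnings_per_sale = commission_rate * avg_order_value    # $30 per sale
sales_to_break_even = hosting_cost / earnings_per_sale   # ~6.7 sales

print(f"Earnings per sale: ${earnings_per_sale:.2f}")
print(f"Sales needed to break even: {sales_to_break_even:.1f}")
# With these numbers, roughly 7 sales over 7 months covers the plan --
# about one higher-priced purchase a month.
```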

I’m not really worried about it making money. Breaking even on operating costs would be a plus, but the real power is having something that can grow its own traffic almost autonomously.

That strategy will be very useful for other projects.

I accidentally created the perfect reason to test out programmatic SEO with a realistic goal of generating a couple hundred dollars in 7 months with minimal time and effort.

More to come on that SEO opportunity.

Micro-SaaS Solutions

The SEO approach I mentioned above isn’t exactly a new one.

A huge portion of the internet exists to do exactly that.

Pump out articles that Google likes so it recommends them as search results.

Fill the articles with affiliate links.

Make money when people buy from those links.

The alternative strategy is still to build SEO programmatically, but to drive traffic to my own services instead.

Rather than take a 1-3% affiliate payout on other people’s products, why not attract visitors to something I create and keep 100% of the purchase?

This is the indie hacker’s approach to development.

Build a small tool, get some traffic, improve traffic based on questions and feedback.

And the way AI is going, it can manage a good portion of this entire process.

It will still need human supervision (we’re not quite at AGI levels yet), but it’s getting closer with each new upgrade.

For now, I see a scenario where AI agents build and tweak these small tools a few sessions at a time, then switch to research mode to gather feedback and look for SEO opportunities on topics related to the tools, then generate a few articles to help drive traffic, then repeat the cycle.

A couple nights a week, a series of projects could be looping through these steps and making progress in the background, patiently waiting for review in the morning.
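To make that loop concrete, here’s a rough sketch of what one overnight pass could look like. This is purely hypothetical: the project names and functions are placeholders for steps an agent (or the human reviewing it) would actually handle, not any real tool’s API.

```python
# Hypothetical sketch of the build -> feedback -> SEO research -> write cycle.
# Every function here is a placeholder to show the shape of the loop.

PROJECTS = ["tiny-tool-one", "tiny-tool-two"]  # placeholder project names

def run_build_session(project: str) -> None:
    """Have the coding agent implement or tweak features for this tool."""

def gather_feedback(project: str) -> list[str]:
    """Collect user questions, support emails, analytics signals, etc."""
    return []

def research_seo_topics(project: str, feedback: list[str]) -> list[str]:
    """Find keyword opportunities related to the tool and its feedback."""
    return []

def draft_articles(project: str, topics: list[str]) -> None:
    """Generate a few draft articles to drive traffic, queued for review."""

def nightly_cycle() -> None:
    # One pass across every project, left to run in the background overnight.
    for project in PROJECTS:
        run_build_session(project)
        feedback = gather_feedback(project)
        topics = research_seo_topics(project, feedback)
        draft_articles(project, topics)

if __name__ == "__main__":
    nightly_cycle()
```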

And this approach attempts to address the two main reasons a business doesn’t succeed.

Bad product or bad distribution.

AI-built micro-SaaS has the potential to handle both of those problems: constantly monitoring feedback to improve the product, while also constantly working to create more traffic based on what people are actually doing.

A modern approach to making 10 viable software bets.

The true 10x engineer.

This may be where things are headed, so I will keep you posted.

Lots more SaaS to come.

Code Red Memo

The hot topic this week has been OpenAI’s slightly panicked internal memo.

With 4 major model releases in the last few weeks, OpenAI may no longer be sitting comfortably as the clear leader of the AI race.

Gemini 3 Pro, Grok 4.1, and Opus 4.5 now claim the top spots on many AI leaderboards, while variations of GPT 5.1 sit close by but not in the lead.

Apparently Sam Altman sent out a “Code Red” email warning to the company that they need to refocus on GPT becoming the top model again, and deprioritize their other projects.

OpenAI has recently branched out into a ton of products beyond AI models, so it’s reasonable to think their efforts are getting spread a little thin.

At the same time, I do wonder if there’s a limit to how many people can work on improving models at the same time.

It could create a “too many cooks in the kitchen” situation.

But rumor has it they already have a model that’s better than Gemini 3 Pro, and they will undoubtedly release this new version very soon while they re-prioritize taking the AI lead.

Then we’ll see how the other big players respond.

Grok 4.1 just came out 2 weeks ago, but Grok 4.2 rumors are already circulating, so xAI is ready to keep pushing to new levels.

I thought there would be a few weeks without any major AI news after all the recent upgrades, but maybe not.

OpenAI may be pushing more big releases as they try to fix this “Code Red” situation, and the big competitors may be responding just as quickly.

Too Many Cursors

As an AI consumer, I can benefit from things like OpenAI’s “Code Red” situation above.

New model takes the lead?

Just switch to that one.

Cursor has made that easy enough, thankfully.

And that may serve as my main development hub.

That’s certainly what they’re trying to create, with some of their recent updates to make multi-agent coding a core part of the user experience.

But as a developer I can’t help but want to have my own system to handle things how I want to, outside of Cursor.

I dabbled with the idea of building my own AI pipeline, which is probably what will end up happening eventually, but recently I’ve considered switching to something a little more flexible.

Claude Code is one such alternative. It’s terminal-based rather than an IDE like Cursor, but it sounds like it takes a little hacking to get Claude Code to use AI models other than Claude.

And of course, the big AI players have their own options too, with Google’s Gemini CLI and OpenAI’s Codex CLI, which I believe also allow multiple models but take some config to switch out of their preferred options.

But there are a couple of options not owned by a big AI company: OpenCode and Droid CLI.

Neither are particularly tied to one model, which makes it easy enough to switch on the fly if a new model becomes the leader in AI coding output.

I believe Droid CLI even has a built-in /switch command, so they have already prepared for model hopping.

As I talk about jumping between different AI projects, I’m seeing the benefit of moving to something terminal-based and not having 15 instances of Cursor open to manage everything.

Plus, the ability to script actions to start at certain times is not really an IDE kind of thing; it’s more of a CLI thing.

That’s probably the biggest motivator for considering a switch.

Queuing up tasks in various projects that can run at a certain time every night.
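As a sketch of what that could look like, the script below queues one prompt per project and hands each to a terminal agent. The droid exec invocation is a placeholder (I haven’t installed Droid CLI yet, so the real command and flags may differ), the projects and prompts are just examples, and the script itself would be kicked off by cron or any other scheduler.

```python
# Hypothetical nightly task runner. The "droid exec" call is a placeholder --
# swap in whichever terminal agent (and whichever model) is winning that week.
# Schedule with cron, e.g.:  0 2 * * *  python nightly_tasks.py
import subprocess
from pathlib import Path

# One queued-up task per project to chew on overnight (example entries).
TASKS = {
    "~/projects/seo-site": "Draft two new articles from the keyword backlog.",
    "~/projects/tiny-tool": "Triage open feedback and fix the top-reported bug.",
}

def run_task(project_dir: str, prompt: str) -> None:
    # Placeholder CLI invocation; exact command and flags may differ.
    subprocess.run(
        ["droid", "exec", prompt],
        cwd=Path(project_dir).expanduser(),  # run inside the project folder
        check=False,                         # keep going even if one task fails
    )

if __name__ == "__main__":
    for directory, prompt in TASKS.items():
        run_task(directory, prompt)
```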

Back to that whole 10x talk from earlier.

10x the work but not 10x the IDE windows to rifle through.

I haven’t run the commands to install either of these options yet, but I had the quick start guide up and ready to go earlier this week.

Droid CLI will certainly be making it onto my computer sometime before the next newsletter.

I’ll be sure to let you know how it goes.

Advent of AI

Somehow another year has passed and another Advent of Code has started!

AoC has been running strong for 10 years now, providing a series of daily coding puzzles leading up to Christmas (although this year there are only 12 puzzles instead of 25).

And these puzzles aren’t very easy.

Complex problems that require a good understanding of advanced programming concepts to come up with an efficient solution.

“Leetcode” at its finest.

Not sure I’ve ever made it past 3 days of puzzles if I’m being honest, but it’s always fun to get a few solved.

And there are now 10 years of puzzles to practice on if you’re looking to brush up on your coding skills.

This year in particular, though, I wanted to test a few of the top AI models and see how they fare at coming up with solutions.

GPT 5.1 Pro is supposed to be surprisingly good at puzzle solving, so I wanted to see how it does with complex code problems like these.

And I’m sure I’ll try out a few myself.

I’ll report back on how the puzzle solving goes next week.

Let me know if you end up solving some yourself!

And that’s it for this week. Let’s see if AI can solve the puzzle of business.

If you want to start a newsletter like this on beehiiv and support me in the process, here’s my referral link: https://www.beehiiv.com/?via=jay-peters.

I also have a Free Skool community if you want to join for n8n workflows and more AI talk: https://www.skool.com/learn-automation-ai/about?ref=1c7ba6137dfc45878406f6f6fcf2c316

Let me know what else I missed! I’d love to hear your feedback at @jaypetersdotdev or email [email protected].

Thanks for reading!