A Big Fix, Hard Mode, and GPT-5 Vibes

The Weekly Variable

Finally back to properly vibing.

Especially with some big AI releases.

Topics for this week:

A Solid Wave

A lot of progress on Wave this week.

Finally released a new version to the App Store on Wednesday as the first major update.

Much has changed since the original release version, including no longer requiring users to complete their profile when they sign up.

But there are still a few nasty bugs in that release as well.

Logging out and logging back in will cause the app to get stuck on a loading screen, even after successfully logging back in.

We decided that’s not a huge deal since most people won’t be switching accounts that often.

How often do you sign out and sign back in to Instagram?

Coincidentally, yesterday was the first time I can think of in recent memory where I voluntarily signed out and signed back in to an app that wasn’t Wave.

I was hoping to force an update to my ChatGPT account to get GPT-5, but more on that later.

Wave also has a known bug when returning from the DMs page, but people most likely won’t be DMing like crazy yet, so again, not the most critical bug.

But the current version of Wave doesn’t have either of those bugs. It’s now sitting in the “Waiting for Review” state on the App Store, waiting for an early Sunday morning review, because I forgot it was Thursday and Apple doesn’t do app reviews over the weekend (it’s already Saturday in some parts of the world).

I squashed the log out, log in bug I’ve been fighting for at least 2 weeks now.

So hopefully Monday morning, Wave will have the solid foundation it deserves on the App Store.

And I can finally get it to Android as well.

Updated links next week, and maybe even a marketing push…

Hard Mode

I’ve been developing on Hard Mode and I don’t really know why it took so long to realize it.

I think I was feeling a sense of urgency to get the app out there, and as I mentioned last week, I was vibing too hard.

I was convinced that if I sent enough code enough times into enough AI models, one of them would solve the bug, so it wasn’t worth the time investment to set up a proper development environment when it seemed like it was so close.

I was also probably subconsciously avoiding messing with the Apple developer account configurations because their certificate and provisioning system is kind of a nightmare.

After so many failed AI answers that looked like obviously right answers, my 13 years of engineering experience were screaming louder and louder in my head to just get the app running locally on a phone.

It was time to console.log everything or, what I actually did, rip out code until things started working again.

But with either of those approaches, the 25-minute rebuild process was going to make the debugging take a decade.

It was time to fix the dev pipeline.

Following all the steps for creating iOS Development certificates and provisioning profiles that included my specific iPhone still resulted in multiple failed local builds through Expo.

Some sort of mismatch was happening, saying my laptop wasn’t allowed to build the app for my phone.

I quickly abandoned that approach after a couple attempts because building locally would still have been way too slow.

The true power of Expo is hot-reloading changes directly on a device, no need for builds at all.

But the Expo app from the App Store wouldn’t let me do that.

Wave is still on Expo 52, and the Expo app expected Expo 53, but I didn’t want to introduce more uncertainty by updating Expo versions in the middle of this and have some dependency break with the update.

And since I couldn’t get the app to build on my laptop after multiple certificate and provisioning profile attempts, I finally let EAS remotely build a development version for iOS which I was able to successfully install instead.

Luckily that version didn’t care about Expo 53 so when I scanned the QR code ready to run the app on a phone, it didn’t complain about using the wrong version.

Instead, it defaulted to localhost:8081 and didn’t connect.

With a little JavaScript magic, o3 created an npm script that provides the app link with my laptop’s local network IP address instead, so my phone could connect to it over WiFi.

After about 4 hours of wrestling with local environment setup, I finally had Wave running on my iPhone, could make changes to the code on my laptop, and see those changes happen immediately in the app on my iPhone again.

Obviously the way things should have been from the beginning.

To be honest, I’m still not exactly sure what EAS did to create the development build though.

Looks like it used an Ad Hoc provisioning profile where I was using iOS Development, but there’s more to learn there.

For now, I can properly rip apart Wave as needed.

So the obvious moral of the story is to get the app running on a real device through Expo first.

Would have saved so much time and headache.

Don’t trust iOS Simulator!

Vibes Are Back

To wrap up the Wave bug saga…

Once I could properly test, I thought I found the bug early on.

I was way too aggressively checking device location, with precision down to the slightest movement in my chair.

Rounding those lat and long points to 2 decimals instead of 5 made the tracking coarse enough that the phone sitting in my hand no longer registered as movement.

Luckily I already had code to throttle how often that location was saved to the database, but it was still checking way too often.
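This isn’t Wave’s actual code, but the general shape of that fix is rounding coordinates before deciding whether the device has “moved,” so GPS jitter in the trailing decimals no longer counts (the constant and function names are mine; note that 2 decimals of latitude is roughly a 1 km grid, while 5 decimals is roughly meter-level):

```javascript
// Sketch only: round coordinates before comparing, so tiny GPS jitter
// doesn't register as movement and trigger a location update.
const DECIMALS = 2; // ~1 km grid; 5 decimals is roughly meter precision

function roundCoord(value, decimals = DECIMALS) {
  const factor = 10 ** decimals;
  return Math.round(value * factor) / factor;
}

function hasMoved(prev, next) {
  return (
    roundCoord(prev.lat) !== roundCoord(next.lat) ||
    roundCoord(prev.lng) !== roundCoord(next.lng)
  );
}

// Jitter in the fifth decimal place no longer triggers an update:
const sitting = { lat: 40.12345, lng: -74.54321 };
const jitter = { lat: 40.12348, lng: -74.54319 };
console.log(hasMoved(sitting, jitter)); // false
```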

And I never saw this on iOS because the iOS Simulator doesn’t move so there’s no location tracking trigger.

A real phone does move.

It was a big “aha” moment.

But a false one.

Because even after fixing that, my heart still sank when I logged out, logged back in, and was immediately stuck again.

I made a number of other updates trying to find the issue, when finally I started commenting out full sections of the screen.

Eventually I narrowed it down to one simple line.

region.name

I forgot that when I started writing Wave, I was using my own data storage system, but later upgraded to a modern library that handles query caching and data storage together automatically.

My old code and the new code ended up competing to manage the stored data and got stuck racing each other.

Updating the old code to use the new proper code led to a massive sigh of relief.
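For illustration only (none of these names or values are from Wave’s code), the shape of that class of bug is two storage layers both claiming to own the same data, with an old reader pointed at the one that nothing refills after re-login:

```javascript
// Illustrative sketch of dueling storage layers, not Wave's actual code.
const legacyStore = {}; // old hand-rolled storage, stale after re-login

const queryCache = new Map(); // stand-in for the modern caching library
queryCache.set("region", { name: "Downtown" }); // repopulated on login

// BUG pattern: an old code path still reads from the legacy store,
// which nothing refills after logging back in, so the value is
// undefined and the screen waits forever.
const staleRegion = legacyStore.region; // undefined

// FIX pattern: every reader goes through the one cache that is
// actually kept up to date.
const region = queryCache.get("region");
console.log(region.name); // "Downtown"
```

The design lesson is the usual one: once a caching library owns the data, it needs to be the single source of truth, and any leftover hand-rolled store should be deleted rather than left reading alongside it.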

I could log out and back in over and over, 7 times in a row the first time just to be sure.

The bug was finally gone.

Vibe coding was back on the menu!

But now I can properly test the vibes.

Faster Whisper

Taking a break from Wave last Sunday since Apple sort of forced me to - no weekend App Store reviews - I decided to be frustrated with upgrading Whisper instead.

I had Whisper running on my gaming rig with a nice graphics card, which should be able to rip through an hour-long audio file in no time, but it seemed to be taking surprisingly long.

I realized the Docker settings weren’t enabling GPU usage properly so it was defaulting to the processor, which is fine, but I knew it could be much faster.

A few Perplexity searches ended up convincing me to try out the appropriately named Faster Whisper project instead, a community-improved fork of Whisper that should fully utilize NVIDIA architecture.

After some more Windows tweaking and back and forth with o3, I had a new Miniconda environment with Faster Whisper installed and running on my machine.

I was absolutely thrilled to see it stream through a transcript of an hour-long audio clip faster than I could read the lines flying up the console.

It sounds like this setup could actually stream near-realtime transcripts, as I’m recording, if I set it up properly, which would be kind of incredible.

But I’ll hold off on that for the moment.

I haven’t been able to test it out much beyond that first trial run so I’m looking forward to trying out more transcripts first.

Pretty excited to have a system that can tear through hours of streaming backlog in minutes.

Time to go live again!

oss and GPT-5

OpenAI had two major updates this week.

It started with gpt-oss.

Two new open-source, or oss, models that are free to download and use.

gpt-oss-20b: a 16 GB model that can run on a laptop with enough memory, performing at about the same level as o3-mini.

gpt-oss-120b: a 65 GB model intended for the cloud or very high-end machines, performing at about the same level as o4-mini.

Haven’t had a chance to see if I can run 20b yet on my laptop but looking forward to trying it out myself.

Also wondering how much it would cost to deploy the 120b model to the cloud…

But of course, that oss hype was strategically used to lead into the release of GPT-5 on Thursday.

Listening to the live stream while working on Wave, I was anxiously refreshing the ChatGPT app and website to see if I had the new version yet.

I’m sure they were waiting until the end of the live stream to start rolling out publicly but it was worth a shot.

Not long after the stream, I was in Cursor and happened to check model options when I saw GPT-5 already live so I switched to it immediately.

Cursor is running a promotional period where all GPT-5 usage is free for a week, so naturally I started hammering it with questions and update requests.

I’m already paying for Cursor so I’ll gladly let them handle the cost of trying out this upgrade.

GPT pricing

The GPT-5 API is surprisingly cheap:

  • $1.25 for 1 million Input tokens

  • $10 for 1 million Output tokens

Compared to o3 and gpt-4.1, which both list the rates below, it’s cheaper on input but more expensive on output:

  • $2 per 1 million Input tokens

  • $8 per 1 million Output tokens

And more strategically I’m sure, GPT-5 is right in line with Gemini Pro 2.5 pricing.

Gemini Pro 2.5 API Pricing
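To make that comparison concrete, here’s a quick back-of-the-envelope at the listed per-million-token rates (the token counts in the example are made up):

```javascript
// Cost comparison at the listed per-million-token rates (USD).
const PRICES = {
  "gpt-5": { input: 1.25, output: 10 },
  "o3": { input: 2, output: 8 },
};

function costUsd(model, inputTokens, outputTokens) {
  const p = PRICES[model];
  return (inputTokens * p.input + outputTokens * p.output) / 1_000_000;
}

// Example: a chatty coding session with 500k tokens in, 100k tokens out.
console.log(costUsd("gpt-5", 500_000, 100_000)); // 1.625
console.log(costUsd("o3", 500_000, 100_000)); // 1.8
```

So for input-heavy workloads like stuffing a repo into context, GPT-5 comes out cheaper, while output-heavy workloads tilt the other way.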

I’m going to have to consider that pricing, because with it running in my repo, I was truly vibing again.

GPT-5 helped make a ton of changes and updates, and I was able to test them directly on my phone to verify no issues on a real device.

It was great.

So far I’ve been really impressed with GPT-5.

It takes direction well, can respond really quickly, but can also take its time to think for a while.

The coding answers seem solid. I only ran into a couple of situations where its answer didn’t actually fix the issue, but most of the time it nailed it.

Even just chatting with it has been enjoyable.

Feels like a great upgrade.

A few people on Twitter still swear by Claude Code, which I almost tried this week in my desperation to avoid fixing my dev environment, but already, GPT-5 has jumped to the top of the leaderboard:

LMArena Leaderboard with GPT-5 in rank 1 for Text and WebDev

OpenAI announced in the live stream that GPT-5 should be free for everyone so you can try it now at chatgpt.com.

Worth a shot, I would highly recommend it, especially for free.

Looking forward to vibing more with GPT-5, first impression is really great!

And that’s it for this week! An epic bug fixing saga, and a new coding buddy.

Those are the links that stuck with me throughout the week and a glimpse into what I personally worked on.

If you want to start a newsletter like this on beehiiv and support me in the process, here’s my referral link: https://www.beehiiv.com/?via=jay-peters. Otherwise, let me know what you think at @jaypetersdotdev or email [email protected], I’d love to hear your feedback. Thanks for reading!