Stealth Moves, Contract Success and Feature Parity
The Weekly Variable
Wave is reaching feature parity!
Plus another week of streaming.
And contracting success.
Topics for this week:
Android Launch
Streaming Streak
Upwork Official
Which Model When
Stealth Moves
Android Launch
Wave is nearly ready for a new platform.
Last Friday I finally got Push Notifications working for Android, which puts both versions of the app near feature parity.
I ran into some trouble getting that service to work in AWS, but got that fixed last night as well so almost everything is in place.
Earlier in the week, I went through the Google Play Developer process because I wasn’t sure how involved it would be but it went surprisingly smoothly!
I had to verify the business with a legal document, then prove my identity and that I am a legal member of the business, but that only took a couple hours to submit, and about 24 hours of waiting for validation from Google.
By Wednesday, I had the option to create an app for the Google Play Store.
Next week should hopefully mark the launch of Wave on its second platform, which is pretty crazy to think about.
A few minor cosmetic issues to clean up and it should be in shippable shape… again!
Streaming Streak
The content pipeline is in full swing.
I'm developing a solid routine of streaming at least 3 days a week.
Last week I ended up streaming after writing the newsletter so that made for 5 days in a row.
This week may be the same, 4 days in a row so far, and maybe a casual Friday stream after this newsletter goes out.
I didn’t get around to posting anything else besides one YouTube video this week, and it seems Monday.com isn’t the hottest search topic on YouTube, but it got a little traffic.
Up to 847 subscribers, inching ever closer to 1000.
And Skool membership is up another 20, which is kinda crazy.
The slow trickle is working.
A regular flow of shorts on a few platforms could make those numbers jump quite a bit.
And maybe go back to Whisper videos since those seem to perform a little better.
I haven’t done the more exciting n8n marketing flows yet that many other channels have had success with, so it may be time to give those a shot as well.
But for now I’ve really been enjoying the streaming streak.
It’s fun to hop online and chat with whoever shows up.
Even got some client work done live on stream.
Just need to repurpose the streams a little faster, but in the long run, I think streaming will be a huge differentiator from other channels and communities.
Hopefully many more streams to come.
Upwork Official
I closed my first Upwork contract this week.
The client and I agreed on an hour of consulting, and since we were both new to the hourly contract process, we had to figure out how to properly set that up on Upwork, but we got it sorted.
We happily traded 5 star reviews so hopefully that establishes a higher close rate for my Upwork submissions going forward.
I learned how to charge an hourly gig which is good to know.
Something else I learned a little more about Upwork this week is the specialized profile feature.
I was debating whether to get into Salesforce consulting as well, because CRM automation is in high demand and I happen to have a pretty strong background in Salesforce integrations. But I didn’t want to completely refocus my Upwork profile from AI Automation to Salesforce after the first successful gig.
It turns out the “specialized profiles” feature that Upwork has been constantly reminding me of is exactly for this situation.
Upwork allows for a default profile that I could use more generally as a “senior engineer, full-stack developer”, but then have 2 specialty profiles, for AI Automation and maybe Salesforce specifically.
Then when applying for contracts, I can use the appropriate profile to match the job, no profile updating required.
It’s something I’ll be looking into more as I try to book the next gig.
But this week, I could officially consider Upwork a success.
Which Model When
Wrapping up Push Notifications for Android last week, I couldn’t get them to work because I wasn’t handling credentials correctly.
Whatever model Cursor was using at the time convinced me I could copy the entire contents of a JSON file and put it as a string in an .env value to give my Notification service access to the Firebase service and enable push notifications.
Turns out that wasn’t right.
I wondered about it at the time, but I figured I’d vibe with it since copying and pasting the file contents made my life easier. After a full evening of prompting with Cursor, though, notifications still weren’t working.
I finally explained the scenario to Perplexity, and it immediately came back saying the stringified JSON wouldn’t work; I should be using base64 encoding instead.
I had been wondering about that from the start and Perplexity confirmed it for me.
After a quick refactor, base64 encoding worked like a charm.
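The idea behind that fix can be sketched in a few lines of Python (a hypothetical example: the variable name `FIREBASE_CREDENTIALS_B64` and the stub service-account fields are my own assumptions, not the actual Wave setup):

```python
import base64
import json
import os

# Stub service-account data; a real Firebase credentials file has more fields.
service_account = {
    "project_id": "wave-demo",
    "private_key": "-----BEGIN PRIVATE KEY-----\nFAKE\n-----END PRIVATE KEY-----\n",
}

# One-time step: base64-encode the whole JSON file so it fits on a single
# .env line (raw JSON breaks on quotes and embedded newlines in the key).
encoded = base64.b64encode(json.dumps(service_account).encode()).decode()
os.environ["FIREBASE_CREDENTIALS_B64"] = encoded  # what the .env value holds

# At service startup: decode the variable back into a credentials dict.
decoded = json.loads(base64.b64decode(os.environ["FIREBASE_CREDENTIALS_B64"]))
assert decoded == service_account

# From here, something like firebase_admin's
# credentials.Certificate(decoded) could consume the dict (sketch only).
```

The round trip is lossless, which is the whole point: the private key’s newlines survive the trip through the environment variable, where a naive stringified JSON copy-paste would not.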
As I use them more, it’s becoming obvious which AI models are good at what, which is partly why I don’t like the “Auto” setting in Cursor, and partly why the GPT-5 launch had so much backlash.
It seems like GPT and Claude models tend to rely more on their training.
Perplexity relies more on searching the web for resources.
AI training can go out of date pretty quickly, or it can be averaged into a wrong answer.
Search results are more up to date and reliable.
So generating code is fine for Cursor models; the syntax averages just need to be close enough to produce working code.
But asking about best practice can be an issue if that process has changed, or you need something specific.
Perplexity will find the docs and reference those, while other models will default to their training and generate an answer that isn’t a direct reference, like trying to recall from memory, not read the paper directly.
If anything, that’s the easiest distinction.
For up to date references, Perplexity is probably the best answer.
Its entire purpose is search.
Grok does prioritize X.com posts and web posts so it does a decent job of generating a current answer as well.
Gemini with the “Grounding with Google Search” option enabled seems pretty good at making sure its answers reflect real documents too.
GPT and Claude are good at calling tools, but seem to rely more on generating answers as opposed to referencing answers.
GPT does have 2 different options in the browser and app that make web search the priority, but in an API context like Cursor, that’s almost certainly not enabled by default.
It has to be expressly prompted to search which makes sense, internet search is not cheap.
So moral of the story, if you’re looking for answers on up-to-date best practice, Perplexity is probably the first choice.
Stealth Moves
And last but not least, have to cover some AI updates.
“nano banana” was an image model that popped up on LMArena and a few other AI testing platforms as a new “photoshop killer” producing impressively accurate image edits with a single prompt.
I was a little late to the trend so I didn’t get to try it myself, but the rumor is that it’s a new Google model and it may officially be revealed soon since it was pulled from the testing platforms.
Have to wait and see if that shows up officially in the next few weeks.
Another new model called “sonic” sneakily appeared as an option in Cursor this week as well.
It is a really fast, cheap “thinking” model that is currently free to test.
Rumor is it’s xAI’s coding model and so far the results have been overall positive.
Being super fast has its benefits, but when I tried it, it was a little aggressive.
I asked it a question about fixing keyboard styling, and it immediately decided I needed to refactor a few files and started making way more changes than I asked for.
Going to have to give it a better scoped test to see what it can do.
With that sonic attempt, I decided to check back on the LMArena and see what’s currently performing at the top of the charts.

Gemini Pro 2.5 back on top
All the backlash and performance issues still have gpt-5-high tied for rank 1, but Gemini Pro 2.5 has snuck back to its spot at the top of the Text category.
Claude Opus also climbed its way into the tie for rank 1, which makes sense.
Much of the internet swore that Claude Code was way better than GPT-5 so it’s good to see that reflected if that’s the case.
Based on these results, I may be back to the old aistudio.google.com method if Gemini Pro is still proving to be on top.
Have to wait and see what model, old or new, ends up on top as we reach the end of another quarter.
And that’s it for this week! Android issues, a streaming streak and so many models to choose from.
Those are the links that stuck with me throughout the week and a glimpse into what I personally worked on.
If you want to start a newsletter like this on beehiiv and support me in the process, here’s my referral link: https://www.beehiiv.com/?via=jay-peters. Otherwise, let me know what you think at @jaypetersdotdev or email [email protected], I’d love to hear your feedback. Thanks for reading!