Plenty of comments about using an LLM to assist with this, and I was happy to read about a learning experience where the stakes were pretty low and the feedback loop pretty tight. Thanks for writing it up; it reminds me that some of the use cases where an LLM might be an efficient tool are also the places where it can be wise to take the opportunity to learn and sharpen new skills.
Best family app to me is Home Assistant.
It's so powerful, and you can build so many custom UIs on it.
I started with it for smart-home automations, but on a daily basis I use it more for managing tasks and scheduling reminders.
And with Claude Code used remotely, even my not-so-technical wife uses it to build her tiny utility apps.
Great writeup and also a great example of where LLMs can step in to help fill the gaps in areas where you don't have as much skill or interest. For instance, your wife used ChatGPT to come up with a name and you used AI to generate the admin flows that you weren't interested in building.
Sounds like Flutter was a good technology choice too, given its flexibility across platforms. As a designer, I know how frustrating it is that the Google and Apple interface guidelines aren't very prescriptive, and patterns vary so much across domains that it's better to do what you did and evaluate how others solve similar problems. Great work!
Flutter was good, but now with Liquid Glass™ I find React Native (specifically Expo) with expo-ui far, far better for designing apps that match the native look and feel.
Also, FWIW, for small things like this, unless you really want to learn image recognition, just send the image to gemini-flash-3 or something. Sure, it's 0.5-1s of latency, but that's still faster than entering it manually, and it's pretty cheap; I'd reckon it's within the free tier, at least for you and your family.
Manual data entry is just too unreliable and time-consuming. I don't see how this could work short of integrating OBD-II fuel consumption data combined with some sort of presence tracking.
Hmm, an app where you can count the users on your fingers, and where it's not a big deal if it's slightly wrong.
Safe to LLM generate it, unless you want to learn something in the process, in which case do whatever parts you want to learn about manually.
Had a 100% generated app, with one user (me), on my phone's home screen since some time last year.
I'm curious, what did your app do?
It has a button, records button presses for the last 7 days, saves them to local storage. Then it presents totals per day and detailed timestamps for today's button presses.
This is how I described it to the LLM (which wasn't even one of the coding assistants, just free Gemini). That's not the exact prompt, which was more detailed, but that's the idea. I did about 3 iterations, just to add features, because everything worked the first time.
It's a JavaScript app configured to work as a PWA on my phone's home screen. I don't know JavaScript or what a PWA is; I just told the LLM to make it into a PWA, and it generated the extra files and told me how to set them up on my web server.
The goal is to record when I smoke, in the hope that seeing the totals will help me cut down. Unfortunately, what an LLM can't solve is me remembering to open the damn app and press the button every time I light one, but at least I'm trying...
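An app like the one described above is simple enough to sketch. This is a hedged guess at the core logic, not the actual app's code: the storage key, data shape, and function names are all my own, and `store` stands in for the browser's `localStorage` so the logic runs outside a page too.

```javascript
// Core logic of a button-press counter: record presses, keep only the
// last 7 days, and report per-day totals plus today's timestamps.
// `store` is anything with localStorage's getItem/setItem interface.

function recordPress(store, now = new Date()) {
  const presses = JSON.parse(store.getItem('presses') || '[]');
  presses.push(now.toISOString());
  // Prune entries older than 7 days.
  const cutoff = new Date(now.getTime() - 7 * 24 * 60 * 60 * 1000);
  store.setItem('presses',
    JSON.stringify(presses.filter((t) => new Date(t) >= cutoff)));
}

function totalsPerDay(store) {
  const totals = {};
  for (const t of JSON.parse(store.getItem('presses') || '[]')) {
    const day = t.slice(0, 10); // "YYYY-MM-DD" (UTC) from the ISO string
    totals[day] = (totals[day] || 0) + 1;
  }
  return totals;
}

function todaysTimestamps(store, now = new Date()) {
  const today = now.toISOString().slice(0, 10);
  return JSON.parse(store.getItem('presses') || '[]')
    .filter((t) => t.startsWith(today));
}
```

In a real page you would pass `window.localStorage` as `store` and call `recordPress` from the button's click handler.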
Edit: and just for the record, in spite of the above I still think 95% of the "AI" evangelism is lies, bullshit and stuff like that.
Oh nice! Kinda funny story: I wanted the exact same thing as you a while ago, so I used 'inhaler usage' in the Apple Health app as a proxy for a cigarette counter.
Also, I know cigarettes & vapes don't always mix, but man, I do always dream of creating a vape that can be set to a limited number of puffs per hour. It's probably the only way of actually controlling it, or at least warning you, like you mentioned.
But still, fun that we can make these apps so easily now!
Every time you finish a pack, save it. Seeing the empty packs will be more impactful. And they’ll take up space as an additional consequence of smoking more.
I'll try any advice but that doesn't really work for me because I already save them.
One day I'll run into something that works for me...
Interesting, was actually planning on setting up a carshare for our cul-de-sac in Honolulu. This is a great reference, thanks for sharing.
Honestly, this is kind of the sweet spot for LLM-built apps.
Small thing, used by a few people, solves one annoying problem, and nobody really cares if it’s not “proper software”.
It's family software, haha
wow... so much yak shaving, including priceless bits like "sat with ChatGPT for a bit [...] we came up with OurCar" (I mean... how original is that, clearly powerful datacenters computing over a dump of the Internet was needed), I'm impressed.
All this to avoid doing one subtraction (km before, km now) and then one multiplication (the result times the average litres/km) in your head.
That's a LOT of effort to be lazy.
The log for "who took the car, for how long, when, and did they fill it up" seems much more relevant.
Nothing a notebook and a pencil can't fix, of course, but an app is more fun.
I don't think it's laziness, I think it's an excuse to do a personal hobby project. Makes perfect sense to me.
FWIW, if I were to do this I'd do:
echo "<input id=kmbefore><input id=kmafter onblur='alert((kmafter.value - kmbefore.value) * priceperlitreperkm)'>" > index.html
to make it available to anyone, Worldwide, for free!
For the fancy version I'd make priceperlitreperkm a URL parameter, so it works outside just my area. But that's like one entire additional line of code.
My point being... I'd make a web page, no app, no deployment, no tracking.
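That "fancy version" really is only a few lines. A sketch, with a simplified parameter name of my own (priceperkm, a combined cost per km) and a made-up fallback, since the comment doesn't pin either down:

```javascript
// Read a per-km cost from the page URL, e.g. ?priceperkm=0.12, so the
// same static page works for any area. Name and fallback are placeholders.
function pricePerKmFrom(url) {
  const p = parseFloat(new URL(url).searchParams.get('priceperkm'));
  return Number.isFinite(p) ? p : 0.12; // fallback default (made up)
}

// Trip cost from two odometer readings and a per-km price.
function tripCost(kmBefore, kmAfter, pricePerKm) {
  return (kmAfter - kmBefore) * pricePerKm;
}
```

In the page itself you would call `pricePerKmFrom(location.href)` and feed the result into the alert.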
> to make it available to anyone, Worldwide, for free!
You are conveniently leaving out that you already must have:
* a server on the internet running 24/7, paid for every month
* a domain name, purchased, set up, and paid for every year
* a web server configured on that machine, ideally with automated SSL certificate issuance and renewal
That, or know that https://neocities.org/ exists.
PS: echo "blabla" > index.html is actually becoming my new world-reaching publishing method. I do have a home server running a web server, and I connect to it via SSH keys, so:
ssh homeserver 'echo hi >> /var/www/self-published/index.html' and voila. I'll probably share my gists this way from the CLI.
ssh homeserver "echo '$(ls)' >> /var/www/self-published/index.html" if I want to run a command locally first rather than on the homeserver (note the " vs ').
Integrate an OBD-II dongle with Bluetooth and have the app read the data from there.
Cool, glad you had fun building it.
Notably, the only parts of this that could not have been done by a well-configured agent in a weekend with today's SOTA are the futzing with app stores and the UX iterations.