19 March 2026 by Louise
AI · Lifestyle · Technology

I spend a lot of my time talking about AI in a business context. What it can do for companies, where the ROI is, how to adopt it without wasting money. But recently I have been paying more attention to how people outside of tech are using it, and some of it has caught me off guard.

My mum uses AI. She does not call it that. She asks her phone to summarise a long email from her solicitor and it does. She asked it to explain what a clause in her home insurance actually means, in plain English. She got a better answer than she would have got from the insurer’s call centre. She has no idea she is using a large language model. She just knows the phone is more helpful than it used to be.

The stuff people are actually doing

A parent at our son’s school had a problem with how something was being handled. She knew what she wanted to say but did not know how to put it into a formal letter. She sat down with ChatGPT, explained the situation, and it helped her draft something clear and measured that she could send to the headteacher. She told me afterwards she had been putting it off for weeks because she did not want to come across as aggressive or get the tone wrong. The AI gave her a starting point and she made it her own.

That is the pattern I keep seeing. People know what they want to say but get stuck on how to say it. Complaining to an energy company about an overcharge. Responding to a landlord about a deposit. Appealing a parking fine. These are things that feel intimidating if you are not used to writing formal letters, and AI turns a blank page into a first draft you can edit.

I know parents who use it to help their kids with homework too. Not to do it for them, but to explain things in a way the kid actually understands. “Explain photosynthesis like I am ten” gets a better answer than most textbooks give. One parent told me her son finally understood fractions after a twenty-minute conversation with ChatGPT, having struggled with it at school for weeks.

People are using it for meal planning. Sounds trivial, but if you are trying to feed a family on a budget with a fussy eater and a freezer full of random bits, asking AI to suggest recipes based on what you actually have in the fridge is surprisingly practical. It is not going to win a Michelin star, but it beats staring at a packet of mince at 5pm wondering what to do with it.

Health is trickier. AI is not a doctor and you should not treat it like one. But I have seen people use it to understand what their GP told them, or to prepare questions before an appointment. It fills a gap between “I am worried” and “I can get an appointment in three weeks.” That gap is real for a lot of people.

Where it falls apart

AI is confident about everything, including things it is wrong about. That is the main problem. It will give you a plausible-sounding answer that is completely made up, and it will not tell you it is guessing.

For everyday stuff, this mostly does not matter much. A slightly wrong recipe is a slightly odd dinner. But for anything medical, legal, or financial, you need to treat AI answers the same way you would treat advice from a stranger in a pub. Do not act on it without checking.

The other thing is privacy. If you type your personal details, medical symptoms, or financial information into a free AI tool, that data is going somewhere. Read the privacy policy. Or do not type anything you would not be comfortable seeing on a billboard.

Who gets the most out of it

AI seems to help most when you already know roughly what you want but need help getting there faster. That parent at school knew exactly what the problem was. She just needed help putting it in writing. The parent helping with homework understands fractions but cannot find the words to explain them. You need to know enough to judge the output.

It is less useful when you have no idea what you are doing. It will happily guide you through rewiring a plug socket, but I would still call an electrician.

What is coming

The thing I find most interesting is accessibility. My mother-in-law cannot use a touchscreen well, but she can talk to her phone and it understands her now. A friend’s daughter is partially sighted and uses live image descriptions that did not exist two years ago. Translation has got good enough that a colleague had a proper conversation with a client in Japan without either of them speaking the other’s language.

This is all in apps people already have. It just does not get the same coverage as whatever Silicon Valley shipped this week.

I write about AI and business because that is what Gremlin does. But honestly, this is the bit I find more interesting. People who never asked to become tech-literate, quietly finding that their phone understands them a bit better than it did last year.

Want to talk about this?

If something here is relevant to what you are working on, we are happy to chat.

Get In Touch