We are a tool using species. It is one of our defining characteristics, and one of the ways we have been able to be as successful as we are. But there is another side to this – first we shape our tools, and then our tools shape us. We all know this, and it is summed up by that old phrase – if all you have is a hammer, every problem looks like a nail.
I was thinking about this recently, as I yet again struggled to get my Amazon Echo to do something I wanted. Alexa, Google Home, Siri, and the rest were one of tech’s previous next big things – voice-activated assistants that would help us automate our homes and our lives, and make everything seamless and simple.
It didn’t quite work out like that. Voice as a user interface was novel, and quite successful for what it was. The problem was that if we can talk to a thing, we assume it should be able to understand us. These devices couldn’t, of course. They were simply processing our audio input to identify certain trigger words or phrases, and then executing pre-built routines tied to them.
I went all in on these voice assistants back in the day – a fully automated home, controlled by my voice. (Incidentally, this works great if you live on your own. Introduce someone else into the mix, and all those perfectly sensible routines and commands you have are somehow incomprehensible to someone else.)
One thing I learned fairly quickly was that particular ways of phrasing a request were more likely to work. The success rate of my requests started to increase – but not because the voice assistants had got better. I was being trained to change my phrasing, and my thinking, to better match what the voice assistants expected. (Or rather, what their developers had programmed them to respond to.) The tool I was using was shaping me.
Having used various AI tools and assistants recently, I was strongly reminded of this. Firstly, because AI assistants are much better at interpreting what I want them to do (though I don’t talk to them, I type, which may help). In my inexpert view, the user interface has been merged into the AI system: it runs my input through the LLM to discern intent, and then runs that intent through the model to generate an output.
This is great, as it means I can be fairly certain that what I say will be interpreted and an attempt made to meet my request. It is also bad: the voice assistants would at least admit when they couldn’t work out what I wanted, whereas the AI assistants will just make stuff up.
Secondly, this reset my expectations when talking to the voice assistants – I went back to thinking that something I could talk to should be able to understand me. I found myself asking questions like “If it is 5pm in Eastern Time, what time is it in BST?”. No chance. I was asking it to work something out from given parameters, and that just wasn’t something any developer had expected to program as a generalised catch-all for every time zone. So I ask what time it is in Eastern Time now, and work it out myself – the tool’s limitations change my request. But that request had itself been changed by the AI assistants’ enhanced capabilities, as they would have been able to provide me with an answer.
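(As an aside, the calculation the voice assistants stumbled over is mechanical enough. Here is a minimal Python sketch using the standard-library zoneinfo module – the specific date is my assumption, since the question as I asked it leaves the date open, and daylight-saving rules mean the date matters:)

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library, Python 3.9+

# "5pm Eastern Time" pinned to an assumed date (1 July 2024), since
# whether EDT and BST are in effect depends on the time of year.
eastern = datetime(2024, 7, 1, 17, 0, tzinfo=ZoneInfo("America/New_York"))

# Convert to the London time zone, which observes BST in July.
london = eastern.astimezone(ZoneInfo("Europe/London"))
print(london.strftime("%H:%M %Z"))  # 22:00 BST
```

Five lines of code – but generalising it from spoken input, with no date given, is exactly the open-ended problem the voice assistants’ pre-built routines couldn’t cover.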
So, a big plus for the flexibility and capabilities of AI assistants, then? Well, maybe. Because I realised I was phrasing my requests based on what tended to work with AI assistants. We’ve all seen those courses (or even done them, as I have) which promised the secret sauce of ‘prompt engineering’ – how to get the AI models to give you what you were really after. These were courses to get us to change our requests into ones AI could deal with. We are changing how we phrase our requests to try to be more successful. A small price to pay.
But is that all we are doing? Are we training ourselves to limit our requests to what AI can handle? Is the tool moulding us? The voice assistants certainly did that to me, but it was restricted to how I asked them to turn on the lights. AI assistants, though, are being sold to us as ways to augment our thinking, to make us more productive, more effective, more more. But if they are instead training us to limit our requests and our thinking to what they can effectively deliver, surely we are losing some of our own cognitive range?
Are we being trained to see every task, every problem, as a nail?

Trevor Roberts is a programme and project management consultant and the founder of Dull Industries – a consultancy focused on project turnaround, AI implementation, and digital strategy.