Think!

The human default is to search for easier, yet easier is rarely better.

It is easier not to think about our diets, but without taking care beyond the easy we become overweight or obese, as around two-thirds of Australians are. It is easier not to drive within the speed limit, or to drive while distracted, but the fines will come.

Self-service at the supermarket is in many ways easier, and increasingly the default. While the big supermarket groups deny cutting jobs, it seems logical that, now or in the near future, easier systems and greater use of technology will reduce the need for school students (and adults) to take a first part-time job in a supermarket. Fully AI-run fast food is not far away (More >). There will likely be fewer unskilled roles moving forward. The robots win.

In an Australian Financial Review article this week on the changing consulting environment for the big four, KPMG CEO Paul Howes said the firm aims to have just half its consulting work done by locally based advisers by 2026, with the rest carried out by lower-cost workers based in Australia and overseas, or performed by robots (More >).

With all the recent discussion around housing affordability, and the Australian dream of owning a home on the world’s largest average block sizes, perhaps the focus should be more on retaining and creating jobs for the future. 2024 seems like a year of excitement over easier, and over artificial intelligence, but this is likely the start of a dumber workforce that no longer needs to think (“AI does that”), and of fewer jobs for those unable to think beyond a robot.

We all love technology and disruption, until we are disrupted out of local jobs.

Just as buying local (from consumer products to professional services) is so important, so is not always opting for technology to make our lives easier.

I wrote a piece back in 2018 noting that the robots are not just coming, they are already living among us. We should not fear the robots, but the people becoming robots. Read article >

The challenge is to understand what humans do well, and to get technology to help fill the gaps. Instead, it seems we are getting technology to do the fun stuff that humans need to keep developing (creativity, writing, thinking) rather than the horrible stuff, like putting out the bins or cleaning the house. I heard Neil Finn say yesterday that his family were so musically talented because they sang while doing the dishes each night, lacking the technological ease of a dishwasher.

“If you don’t use it, you lose it” is a phrase that applies so well to thinking, creativity and technology today. Back in 2018 I interviewed Dr Fiona Kerr, a South Australian-based expert in technology and building adaptive human systems. Kerr noted that the strength of humans over AI is abstraction, saying:

“Last year I was at Google and I was talking with Vint Cerf (Vice President and Chief Internet Evangelist at Google since 2005). He said to me ‘so, if you had to choose one thing that is different between humans and artificial intelligence, what would it be?’ And, I said abstraction. And he said ‘yes!’ We were on the same page. That way of being able to completely laterally jump.”

It seems AI is getting pretty good at abstraction, too, dealing with ideas rather than events. What is critical for society is to adopt AI while remaining cynical and cautious about its power. I asked ChatGPT for its perspective …

For example, Square Holes uses AI to assist with our analysis, yet what we have learnt is that ChatGPT and the like make things up, and are eager to please: commonly fabricating statistics even after being fed real data, inventing hypothetical academic reports, or creating “verbatim” quotes by filling in gaps.

The best use of AI is human + AI, with the human driving.

As Dylan from Square Holes noted in a discussion this week, “Generally I see the biggest shortcoming is in AI’s tendency to fill in the gaps, make assumptions and provide you with information it thinks you want to see rather than what is available or true.”

In particular, we have noticed this in:

  • Inaccurate or made-up quotes from transcripts – a tendency to merge quotes from different people, or to invent a quote based on the general themes of the material provided (a simple check for this is sketched below)
  • Inaccurate or made-up findings when reading charts in reports – particularly with multiple data sets, e.g. comparisons across age, gender and so on
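
As a rough illustration of what “the human driving” looks like in practice, the sketch below checks whether the quotes an AI tool attributes to a transcript actually appear in that transcript word for word. It is a minimal Python sketch only, assuming plain-text transcripts; the file name, the example quotes and the function names are hypothetical, not a description of our actual workflow.

import re
from pathlib import Path


def normalise(text: str) -> str:
    # Lower-case and collapse whitespace so trivial formatting
    # differences do not trigger false alarms.
    return re.sub(r"\s+", " ", text.lower()).strip()


def verify_quotes(transcript_path: str, quotes: list[str]) -> dict[str, bool]:
    # For each AI-supplied quote, report whether it appears verbatim
    # (after normalisation) in the transcript.
    transcript = normalise(Path(transcript_path).read_text(encoding="utf-8"))
    return {quote: normalise(quote) in transcript for quote in quotes}


if __name__ == "__main__":
    ai_quotes = [
        "We only shop locally when it is convenient.",  # hypothetical AI output
        "Price matters more to me than where something is made.",
    ]
    for quote, found in verify_quotes("interview_01.txt", ai_quotes).items():
        status = "OK" if found else "NOT FOUND – possible fabrication"
        print(f"[{status}] {quote}")

Anything flagged as not found goes back to a human analyst to check against the original transcript or recording.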

When AI provides statistics, data or research papers, it is important to ask “is this data correct?” or “do these reports actually exist?”, and ChatGPT will politely and honestly say “no.”
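
In the chat interface this is simply a follow-up question. For those working with the API, the sketch below poses the same challenge programmatically via the OpenAI Python SDK; the model name, the made-up example “answer” and the prompt wording are all assumptions for illustration, not a prescribed method.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A deliberately suspicious, made-up "answer" of the kind a model can produce.
draft_answer = (
    "72% of Adelaide shoppers prefer local brands "
    "(Source: National Retail Consumer Report 2023)."
)

challenge = (
    "You previously gave me this answer:\n\n"
    f"{draft_answer}\n\n"
    "Is the statistic correct, and does the cited report actually exist? "
    "If you are not certain, say so plainly rather than guessing."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name for illustration
    messages=[{"role": "user", "content": challenge}],
)
print(response.choices[0].message.content)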

Square Holes uses AI analysis as a guide, always balanced by real human analysis, to add time and other efficiencies, or to help identify themes in large structured and unstructured data sets. As the saying goes, “when you assume, you make an ASS out of U and ME”. And AI does make things up, because it is eager to please.

When we rely too heavily on technology, we run the risk of fewer jobs and a dumber society.

I’ll leave the last words on this to ChatGPT (unedited quote) …

“The key to preventing overreliance on AI is fostering a mindset of curiosity, accountability, and balance. We need to teach people—whether in schools or workplaces—to see technology as a tool to amplify their own abilities, not replace them. It’s about encouraging questions like ‘What is this technology missing?’ or ‘What can I add that a machine can’t?’
Organizations and educators must embed critical thinking and creative problem-solving into their cultures, ensuring humans stay at the center of decision-making. We need to reward thoughtful engagement, not shortcuts, and develop systems that emphasize collaboration between human insight and AI assistance. If we focus only on efficiency, we risk creating a society where convenience leads to stagnation. The future demands people who know when to lean on AI—and when to stand on their own.”
Thanks ChatGPT (and no thanks).