For me, AI stirs up a variety of emotions. It's awe-inspiring, like science fiction: stop and think about what ChatGPT or Claude is doing when it builds a browser plugin, writes a research paper, or generates an alternative ending for a Harry Potter novel. It's dizzying. However, it also creates its share of fear, uncertainty, and doubt about its future possibilities.
Hands-on learning with AI creates different feelings than just reading or thinking about it, though. I definitely learn better when I'm 'on the tools', gaining immediate feedback from trying and failing, or from generating unexpected results. I'm excited by new things I either couldn't do - or didn't have time to do - before Generative AI.
Learning a new language
I find that a useful way to think about AI is to liken it to learning a language. I've been revisiting my Finnish recently and, of course, you don't jump from nothing (or very, very rusty) to expert in a week. It takes time to become familiar with a language and how it works. You need a mental model for its patterns, structures, and its quirks and exceptions. Finnish, for example, isn't gendered, nouns have cases, and there are no articles, which makes constructing sentences very different from my usual language, English.
At first, it seems impossible - everything is alien. But if you persist and use it, your brain adapts, and you become more proficient. You understand its context and nuance. We should view GenAI in a similar way to the process of learning a new language. Explore, give it time, and practice regularly. For AI, this means getting hands on with the tools.
Fear and uncertainty may remain
Generative AI is, I believe, characterised by two things: the sheer pace of change and the scope of its potential usage. It's easy to be overwhelmed by all the noise and debate around AI. It can be hard to keep up, but it is essential to recognise that change has always impacted work.
Tasks, roles and industries may cease to be necessary or decline in value. While the pace of this change feels faster, I believe the hype is obscuring what is actually being adopted and used.
Looking for answers
I try to carefully curate the sources I look to for AI information. I seek a mix of fact, editorial/opinion, and predictions. The challenge is that, certainly in this cycle, it's harder to avoid hype when looking for factual descriptions of what AI can do. How can we be sure we’re tapping into authoritative views? I look to people who are either researching or actually delivering practical applications. I add editorial and opinion from people I trust or follow on LinkedIn, Medium, and Substack, and I pay attention to the key AI players' own blogs and research.
I engage my critical thinking skills across all this information. I ask questions like:
- What was considered?
- What is missing or absent from the piece?
- What is the evidence?
- What are the consequences if this is true/false?
- What are the alternative points of view?
- Who gains if this is true/false?
- Who paid for the piece?
- What emotion are they trying to evoke?
I take my answers to those questions and see what other questions or insights emerge.
Reality is always messier and more nuanced than the hype suggests. Whether it's GenAI, eCommerce, the personal computer, or the telephone, change is both faster and slower than the louder voices would suggest.
Often, the long-term change runs deeper than the hype suggests, while the short-term impact falls short of it. Today, the debate seems polarised, from people pointing out what AI can't do or does poorly, to rose-tinted boosterism that insists AI will solve every problem we point it at.
Attention should also be paid to the energy consumption and environmental impact of all the data centres required to support large-scale AI use. When the real cost of AI use is factored in, does the business case still stand up? I'm still missing a sense of how to assess the benefits and move forward pragmatically here, but I'll continue collecting information.
Keep it small, focus and learn
Systems improve when we create solutions that leverage technology's potential, assess their impact on workflows and systems, and iterate. Small, focused, continuous learning filters the noise and makes the value of Generative AI more tangible. That's how I think we best use technology. I would prefer a future where human and AI together are better than either alone, and that position has framed my own 'learning by doing' from the start. Here are my suggestions for framing your own AI explorations.
Jari's top suggestions for AI exploration:
- Form your own opinions, do your own learning. I need my own experiences to help with critical thinking and the appraisal of claims, hype, and dismissals of AI's capabilities. It's allowed me to spot some of the wilder claims from both sides, and to have not just knowledge but experience. I can debunk or confirm based on my own experience. Experience leads to further and better questions.
- Small, fast bets are essential. I need speed in the learning cycle - try something, reflect, learn, and incorporate the findings into the next attempt. It's an approach I recommend applying regularly, so that incremental improvements and insights build on each other. See it as placing small bets – i.e. small investments of time and money with quick results. What do you expect to find? What does what you find mean?
- Understand what is scarce or hard to do. What happens if the cost of developing or marketing something tends towards zero? Does that fundamentally change your operation, or do you just hit the next constraint in the overall system? While the cost of building something might reduce, what about the cost of maintaining it? I think we're starting to see some of that tension now with the experience of vibe coding. You've built something that might be thought of as a prototype: are you willing to incur the cost of hardening and maintaining it?
Explore different ways of interacting with Generative AI
I use Claude Code to explore and experiment, using a large language model from the command line, giving it full access to my filesystem. This is somehow more effective and magical for me than using it in the browser. The ability to write out multiple files, read, edit, summarise, and create code is key. It allows for types of interaction that I can't get from a single browser session.
A simple example would be using Claude to research something new to me, then exploring it from an alternative perspective, and combining the two to form a single synthesis in a document. That creates three separate Markdown files from a single session. I then create a fourth that includes all the articles or papers the model has included in the analysis. Because I can go back and re-analyse earlier sessions, I have everything I need to learn from my past mistakes.
Assist me, don't do it for me
I've developed assistants to help me. For example, I created one to assist with writing and posting on LinkedIn. Why? Because it's an open-ended, complex thing to do and it has an immediate feedback loop (if I actually post). It's not a key part of my job, so it's a lower-risk task for me to explore and experiment with.
As a task, creative writing of any kind is hard to verify, so I need to use my judgment as part of the process. Did this produce something that sounds like me? How much is me? How much am I being steered by suggestion? Am I comfortable posting this? Every session leads to changes I can make to the assistant, and to the wider 'system of writing' that I'm developing. So far, all of that learning has been transferable to either other jobs I do, or to different roles in the company. (NB. This article was written by three humans working together with no AI.)
Other explorations have been simpler: for example, creating and updating pages in Confluence using the Atlassian MCP server, or producing a plan in monday.com using their MCP server rather than Claude Code's own action tracking.
Ultimately, what I'm finding is that the best learning tends to come from unexpected directions when I've been free to 'play'. Play takes the pressure off, and then you really start to understand context and nuance, and get a sense of what Generative AI is capable of. If you want to keep up with my adventures in AI, connect with me on LinkedIn.