The hot new thing right now is something called ChatGPT, a sort of all-knowing AI companion that can give you the answer to any question in the world. Or at least that's what you might be led to believe if you get caught up in the breathless hype.
In reality, ChatGPT is a powerful new tool that you can have a conversation with just like you would another human. But what's most interesting to me is its brain, or rather its ability to seemingly answer any question about anything. How far does that go? And is it really as clever as people make out?
Ever since I first heard about it I've wanted to see how I could use it as a tool to aid my research. Can it teach me new things? Is it accurate? Can I use it to replace Google search? Am I out of a job?!
The following are my first thoughts after giving it a go.
So what is this ChatGPT thing?
Honestly? I don't really know. It's an "AI" that allows you to communicate with it using conversational language and it seems to have near unlimited knowledge. You can ask it anything and it'll give you an answer (emphasis on an).
It's simultaneously incredibly clever and yet it can also be incredibly dumb. It doesn't really know what it's talking about and yet it also clearly does.
It reminds me of Deep Thought, the supercomputer from The Hitchhiker's Guide to the Galaxy (which is amazing by the way).
Deep Thought revealed that the Ultimate Answer was 42. When the Magratheans evinced their dismay that 42 was the answer that they had waited millions of years for, Deep Thought chided them for not understanding what the question was. The hapless programmers were further dismayed to hear that the computer itself was incapable of determining what the question was.
Here's what ChatGPT had to say when I asked it to explain itself:
ChatGPT is a type of artificial intelligence that is designed to generate human-like text. It uses a machine learning technique called "transformers" to process and understand large amounts of data, and can generate responses to prompts or questions that are similar to how a human might respond. ChatGPT is specifically designed to be used in chat environments, such as instant messaging or virtual assistants, and is able to hold conversations with people in a way that feels natural and fluid.
In a way it lets you embrace your inner annoying child and ask "OK, but why?" over and over again without pissing anyone off. It's pretty cool.
ChatGPT: The good
Let's get this out of the way. It's genuinely an amazing and powerful tool, there's no doubt about it.
Specifically I've found it useful for discovering terminology that I might not be aware of, which I can then use to do further research. Sometimes I know a little about a topic but I don't know the right terminology to use so I can further my learning – ChatGPT has already helped me with that.
It's actually pretty reliable if you read between the lines and don't trust it blindly. When pressed about quite specific aspects of renewable energy it did well at describing things at a high-level and using the right terminology. Not always, but often enough that I'd use my own judgement to explore things further myself.
The language it uses to describe things can help solidify concepts that you're on the cusp of understanding. Some of the most surprising interactions I've had with ChatGPT have been when I've asked it to respond again but using simple language, or asking it to clarify a previous response with extra detail or context. I find quite often that reframing something another way can help it click and make sense.
ChatGPT: The bad
It loves to bullshit you, and that's OK.
The responses are presented using fancy terminology and a coherent structure, meaning that you can't help but read them and go, "Oh wow, thanks ChatGPT. I never knew that but now I do!" But don't be fooled. It's just fantastic bullshit.
Think about it as if you binged on Google searches and research papers on a complex topic that's brand new to you. You'd probably know enough terminology and theory to pull together a rough idea of things, but you certainly wouldn't become a domain expert that understands all the nuances. And you certainly wouldn't try and act like you do when asked about it, at least I hope you wouldn't!
Even when ChatGPT is questioned and corrected on its accuracy it can still present responses in a way that suggests knowledge and truth. It's not a deal-breaker, just make sure to go in with your eyes open and take what it says with a pinch of salt.
ChatGPT: The ugly
Some stuff isn't just inaccurate, it's plain wrong. And yet it's presented as if it's right. This is dangerous if you're expecting the responses to be trustworthy, or if you're using responses and code elsewhere without validating them.
For example, I asked it about how to calculate the aerodynamic centre of an airfoil using Python and it gave a very convincing but completely false response. The code looked clever but it didn't calculate the aerodynamic centre – or anything close to it – and I only found that out because I did my own research based on the response. There are other examples of it failing at simple calculations.
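To show how simple the sanity check can be: thin airfoil theory places the aerodynamic centre at roughly the quarter-chord point, which is the kind of baseline you can verify yourself before trusting generated code. A deliberately minimal sketch (the function name is my own, not from ChatGPT's response):

```python
# Minimal sketch: thin airfoil theory places the aerodynamic centre
# at roughly the quarter-chord point, measured from the leading edge.
def aerodynamic_centre_thin_airfoil(chord_length: float) -> float:
    """Return the x-position of the aerodynamic centre for a thin airfoil."""
    return 0.25 * chord_length

# For a 2 m chord the aerodynamic centre sits 0.5 m behind the leading edge.
print(aerodynamic_centre_thin_airfoil(2.0))
```

Real airfoils deviate from this idealisation, but even a one-liner like this gives you something concrete to compare a generated answer against.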
Don't get me wrong, I don't think this is a reason not to use it. Instead I think this is a failing in the expectations of what it can do and how it can be used effectively. It may also be a symptom of bad prompt structure, which I'll touch on in a moment.
ChatGPT being wrong at times isn't a bad thing, after all I'm wrong all the time. The bad thing is when you trust it blindly and think it's always accurate. You shouldn't take what it says as truth simply because it uses clever terminology and confident language.
To put it another way: you wouldn't trust random code you found on Stack Overflow that you don't have enough domain experience to understand. Treat ChatGPT with the same caution – it's not a domain expert, it's just really good at bullshitting.
Sometimes it's right, sometimes it's wrong. That's OK.
OpenAI are honest about the limitations of ChatGPT
None of the limitations I mentioned above should come as a surprise – they're all limitations that OpenAI (the creators of ChatGPT) have mentioned themselves and hope to resolve in time.
In their words, and with my comments in parentheses:
- ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers (the bullshittery that I mentioned)
- ChatGPT is sensitive to tweaks to the input phrasing or attempting the same prompt multiple times (tweaking the prompt can give a better response)
- The model is often excessively verbose and overuses certain phrases (part of what can make it sound so believable at times)
- Ideally, the model would ask clarifying questions when the user provided an ambiguous query (it can make assumptions about what you mean without telling you)
It's good that they are honest about these limitations as it allows you to look out for them and work around them.
Is prompt structure important?
If you know anything about ChatGPT then you'll probably shake your head at how I currently use prompts. And you wouldn't be wrong, I'm very much a beginner at constructing productive prompts and generally how to interact with ChatGPT as a whole. It's quite an unusual way to work with a computer.
Up to now I've mostly been using prompts that follow a simple question and answer format as if you were performing Google searches – "explain the major components of an electricity grid." While this works, there's an argument that it can result in more generic responses, or even incorrect responses. It also side-steps the conversational potential of ChatGPT, which is one of its major strengths.
I want to experiment more with specific prompt structure to see how much of an influence that can have on response quality. If you look at examples of prompts you'll notice that quite a lot of them first ask ChatGPT to take on a specific role (eg. a renewable energy expert) before giving detailed instructions on what sort of information you will give it and exactly how you want it to respond. The idea is that a more detailed prompt can result in more useful responses.
For example, I could use a prompt like this:
I want you to act as a renewable energy expert. I will write some questions about renewable energy and it will be your job to explain them in easy-to-understand terms. I also want you to suggest related terminology that I can use to perform my own research using Google search.
At which point I can ask as many questions as I want and each one will be responded to in exactly the format I desire, without having to ask as many follow-up questions.
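If you ever drive a model like this programmatically rather than through the chat interface, chat-style LLM APIs typically represent exactly this pattern as a list of role-tagged messages: the role-setting instructions go in a "system" message and each question in a "user" message. A hypothetical sketch of the structure only – no API call is made, and the helper name is my own invention:

```python
# Hypothetical sketch: a role-setting prompt expressed as the
# chat-style message list used by conversational LLM APIs.
# No API is called here; this only builds the message structure.
system_prompt = (
    "I want you to act as a renewable energy expert. "
    "Explain my questions in easy-to-understand terms and suggest "
    "related terminology I can use for my own research."
)

def build_messages(question: str) -> list[dict]:
    """Pair the reusable role-setting prompt with a single user question."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ]

messages = build_messages("Explain the major components of an electricity grid.")
print(messages[0]["role"])  # system
```

The point of the split is reuse: the system message is written once, and every follow-up question inherits the same role and response format without restating it.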
It's possible that a well-refined prompt can avoid some of the issues that I mentioned previously as you may be able to convince ChatGPT not to bullshit as much, or to at least give you information in a format that is less verbose and more specific to your requirements.
ChatGPT is a powerful tool for learning and research, when used effectively
I'm still early in my journey with ChatGPT however I can already see immense benefit in its use as a tool for learning and research. It's not a magical AI brain that contains all human knowledge but it is a powerful tool that can be used as part of a wider approach for expanding your own knowledge, if you use it effectively.
Its novel and intuitive interface makes it nearly too easy to use, which I believe is both its blessing and its curse. There's a novelty to using natural language to ask vague questions about complex topics that you barely understand. Just don't forget that you barely understand that complex topic when it gives you its response.
I'm already finding it useful as a method for early exploration of topics that I'm planning to research in more detail. Effectively I'm using it to narrow down my search terminology for further research using more reliable sources such as academic papers and domain experts. Sometimes I just need to know what to search for and ChatGPT has already proven capable at helping with that.
Alternatively, the other main use-case for me is to summarise topics that I already know but in a way that uses less technical jargon, or to reframe a topic from a different perspective. Being able to do this by typing a single question is so much quicker than a combination of Google searches and then having to piece together a summary myself.
Here are some final thoughts to leave you with:
- Don't trust what ChatGPT tells you, just as you wouldn't trust something a random stranger tells you
- It will try to bullshit you and you have no way to know how certain it is about a particular response
- It does decently with general topics (eg. list the major components of an electricity grid) and less so when asked about obscure topics (eg. how do wind turbine generators output at 50Hz)
- Try asking the same question in a different way – treat it like a human who didn't quite understand what you asked for
- Iterate on prompt design to get a more accurate response
- It's great for discovering terminology you may not be familiar with (eg. types of generators in wind turbines and their components)
- Use it to summarise things you sort of understand but just need explaining in a different way for it to sink in
- Use it as a guide to aid in your own further research outside of ChatGPT – validate things using reliable sources and come to your own conclusions
- I can see myself using it a lot as the step before doing a Google search on something new – using it to build a narrower search prompt
If you haven't given ChatGPT a go yet then I highly recommend checking it out and asking it some questions about a topic you're interested in. It's free for now and you never know, you might learn something new!
Either way, I'm excited to continue using it and I'm looking forward to seeing how it inevitably improves over time.