How do you optimize your ChatGPT (or other AI tool) prompts to ensure that responses are concise, relevant, and verifiable?

Director of Operations in IT Services · 8 months ago

I find that targeting specific details from chats, meetings, documents, emails, etc., helps bring additional context to the prompts I use. This is especially helpful when I am using it for customer-facing work: I will reference emails and chats that can be leveraged. For those I cannot reference (e.g., meetings that were not recorded or transcribed), I use my notes from the meeting to further refine the desired output of the prompt. This is with MS Copilot in an environment where these files and systems are accessible.

Chief Information Officer in Healthcare and Biotech · 8 months ago

I've recently started asking ChatGPT (and other GenAI tools) to create the prompt I need. I take the initial output, test it to see what I get in return, and iterate the prompt from there. Similar to others, I will provide follow-up instructions such as "be more concise," "use bullets," etc., or I will ask ChatGPT to recommend further improvements to the prompt it provided me. This has been working well with the new ChatGPT Canvas.

For example, I might start with "create me a prompt that I can use to analyze, synthesize and summarize a podcast transcript," which gives a relatively well-structured prompt. If I'm not happy with the output that prompt produces, I might ask for specific additions (regenerate the prompt to include a section on key takeaways, the audience is a CIO, create a presentation outline with speaking notes, etc.), or I'll ask ChatGPT to recommend ways to improve the prompt and then tell it to regenerate the prompt with some or all of its own feedback. I'll loop through this a few times, tweaking as I go.
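A minimal sketch of this regenerate-with-feedback loop, assuming the OpenAI Python SDK (the model name, loop count, and prompt wording are illustrative, not necessarily what the commenter used):

```python
# Iteratively ask the model to write, critique, and rewrite its own prompt.
# Assumes the openai Python package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def ask(text: str, model: str = "gpt-4o") -> str:
    """Send a single user message and return the model's reply."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": text}],
    )
    return resp.choices[0].message.content

# Step 1: have the model draft the prompt for you.
prompt = ask(
    "Create me a prompt that I can use to analyze, synthesize and "
    "summarize a podcast transcript. The audience is a CIO."
)

# Steps 2..n: ask for its own improvement suggestions, then regenerate.
for _ in range(3):  # a few refinement passes; tune to taste
    feedback = ask(f"Suggest specific improvements to this prompt:\n\n{prompt}")
    prompt = ask(
        "Rewrite the prompt below, applying the feedback. "
        "Return only the rewritten prompt.\n\n"
        f"Prompt:\n{prompt}\n\nFeedback:\n{feedback}"
    )

print(prompt)  # the refined prompt, ready to use on the transcript
```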

I've also used this approach to have ChatGPT provide further prompts as part of its output. Continuing with my podcast transcript analyzer example: have ChatGPT generate prompts, as part of its output, that can be used to generate images to support the presentation it creates.

As for verifiability: trust nothing it gives you without verifying. As part of the prompt generation process, ask ChatGPT to provide specific references or external sources that can be used to verify what it is telling you. And then actually go look at what it gives you.

Director of Technology Strategy in Services (non-Government) · 8 months ago

I give it the first prompt, see what the response is, then follow up with "be more concise".

I've also tried "don't show me the workings, don't waffle, give me the response in a bullet-point list".
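A minimal sketch of this follow-up pattern as one multi-turn conversation, assuming the OpenAI Python SDK (the model name and message wording are illustrative):

```python
# Keep the follow-up instruction in the same conversation so the model
# revises its previous answer instead of starting over.
# Assumes the openai Python package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
messages = [{"role": "user", "content": "Summarize this report for me: ..."}]

def send(history, model: str = "gpt-4o") -> str:
    """Send the full conversation history and append the reply to it."""
    resp = client.chat.completions.create(model=model, messages=history)
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

first_answer = send(messages)

# Follow-up instruction in the spirit of the comment above.
messages.append({
    "role": "user",
    "content": "Be more concise. Don't show me the workings, don't waffle, "
               "give me the response in a bullet-point list.",
})
concise_answer = send(messages)
print(concise_answer)
```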

Generative AI is like a 9-year-old: you need to be very specific about what you ask for.

VP of IT · 8 months ago

ChatGPT has significantly advanced prompt engineering. If you’re developing an AI tool, consider a multi-agent approach, where one agent first clarifies and refines the prompt before generating a response.
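A minimal sketch of that multi-agent idea, assuming the OpenAI Python SDK (the system prompt, model name, and function names are illustrative, not a prescribed implementation):

```python
# Two-step "agent" flow: the first call only clarifies and tightens the
# user's prompt; the second call answers the refined prompt.
# Assumes the openai Python package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # illustrative model name

def refine_prompt(raw_prompt: str) -> str:
    """Agent 1: rewrite the prompt to be concise, specific, and verifiable."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content":
                "Rewrite the user's prompt so it is concise, specific, and "
                "asks for sources for any factual claims. "
                "Return only the rewritten prompt."},
            {"role": "user", "content": raw_prompt},
        ],
    )
    return resp.choices[0].message.content

def answer(refined_prompt: str) -> str:
    """Agent 2: respond to the refined prompt."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": refined_prompt}],
    )
    return resp.choices[0].message.content

result = answer(refine_prompt("tell me about our cloud migration options"))
print(result)
```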

Head of Transformation in Government · 9 months ago

I don't optimise my prompts, and I believe prompt engineering will soon become obsolete. Machines work for humans, not the other way around (yet), so I expect GPT to get better every day at understanding and supporting me. That's ME. Pro tip: it does. If it doesn't give me a good result, I give it feedback to do better next time.
I get it to be concise by telling it to be concise.
I get it to be relevant by providing feedback.
I verify reliability by fact-checking and by applying my own heuristics. For example, I treat it like a person, and when something doesn't seem right, plausible, or correct, I check it. If the topic is sensitive (something I am writing or basing a decision on, or that I wish to store in my own memory as a factoid or a relevant intellectual scaffold), then I fact-check it down to the source.
It's just a machine, but one that is supposed to be concise, relevant and verifiable. If it were a car, we wouldn't be soul-searching over the topics that dominate most discourse; we would give feedback and make it better. I don't demand explainability from my car's EGR. That's how it should be with GPT.

no title · 8 months ago

I've found the same thing. The more I use it, the better it gets to know me, and it provides better answers with less and less context.

