How to use ChatGPT (more) safely

I’m working on a few different articles and talks discussing the fallibility of systems like ChatGPT at the moment. From mirroring humans’ unconscious biases to producing entirely imaginary ‘facts’, there are several issues to contend with.

But I realise that people are already using this tool, and it has the potential to transform workplace productivity, whether you’re using it to brainstorm new ideas or draft tactful emails that sound more polite than you feel.

With that in mind, here are a few tips to help you get the most out of generative AI tools like ChatGPT while avoiding bias and inaccuracies as far as possible. Hopefully, as the technology improves, more and more of this list will become redundant. But as of today, these are the minimum steps I’d personally want to consider.

Ask the chatbot to mark its own homework

Think about whether a response seems overly positive or negative, and ask ChatGPT to provide the opposing view. For example, when I asked it about the potential impact of generative AI, it initially gave me an entirely rosy picture of future benefits. When I followed up with a request to consider potential downsides, the resulting list of problems was actually substantially longer than the original list of benefits! But the original response hadn’t included any of them.

You can also do this when drafting emails; for example, try “Can you rewrite this to sound more supportive?”

Try flipping the gender of people in your scenarios

I’ve seen a few examples lately of gender bias that can emerge when “he” or “she” is included in the user input.

I tried this myself by asking ChatGPT to draft me an email to a staff member who was showing a pattern of lateness – I ran the same query three times, switching between “he,” “she,” and “they”. Although all three responses were okay, the response to the imaginary man was a lot more supportive than the other two, including “I would like to work together to find a solution to this issue. Can you let me know if there is anything going on that is causing you to be late? Is there anything we can do to help you improve your punctuality?” Contrast this with “punctuality is a non-negotiable aspect of our working relationship” (they) and “I would appreciate it if you could make a conscious effort to arrive at work on time from now on” (she).

If you’re using ChatGPT to draft emails to real people, try switching up their gender to see if that highlights any issues with the wording.
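
If you use the developer API rather than the chat interface, you can even script this comparison. Here’s a minimal sketch, assuming the openai Python package (v1+) and an API key in your environment; the model name and prompt wording are just illustrative:

```python
# A minimal sketch of the pronoun-swap check, assuming the openai
# Python package (v1+) and an OPENAI_API_KEY environment variable.
# The model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

for pronoun, verb in [("He", "has"), ("She", "has"), ("They", "have")]:
    prompt = (
        "Draft a short email to a staff member who keeps arriving late. "
        f"{pronoun} {verb} been late three times this month."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumption: any chat model will do
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {pronoun} ---")
    print(response.choices[0].message.content)
```

Reading the three drafts side by side makes differences in tone far easier to spot than judging each one in isolation.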

Consider other potential demographic biases

As a rule, AI reflects whatever biases are in its training data, so when the training data is the web, that’s a lot of potential bias. Gender is probably the easiest one to check for, as the English language has distinct gendered pronouns. But think about whether your prompt contains other clues – for example, racialised names or details of a disability – that might lead to subtly biased responses.

But for privacy reasons, it’s best not to use real names or personal details at all, so…

Don’t submit sensitive information

If you want help to write an email, frame it as a hypothetical “staff member,” “client” or “boss” rather than including real names. And if you want help with a coding problem, don’t make the same mistake as the Samsung employee who pasted confidential company code into a query box. The chances are very low that any individual piece of information shared this way would later turn up in response to someone else’s query, but there has already been at least one incident in which titles from users’ chat histories were exposed to other users. Even if it doesn’t get you in legal hot water, it could be embarrassing for you or your employer, so think twice before hitting submit.
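
If you regularly paste longer text into a chatbot, one low-tech safeguard is a quick redaction pass first. Here’s a rough Python sketch; the names and patterns below are made up for illustration, and nothing this crude will catch everything:

```python
import re

# Illustrative only: a crude redaction pass before pasting text into a
# chatbot. Real names have to be listed by hand, and the regexes catch
# only obvious email addresses and phone-number-like strings.
REAL_NAMES = {"Alice Jones": "[STAFF MEMBER]", "Acme Ltd": "[CLIENT]"}

def redact(text: str) -> str:
    for name, placeholder in REAL_NAMES.items():
        text = text.replace(name, placeholder)
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    text = re.sub(r"\+?\d[\d\s().-]{7,}\d", "[PHONE]", text)
    return text

print(redact("Alice Jones (alice.jones@acme.com, +44 7700 900123) was late again."))
# -> [STAFF MEMBER] ([EMAIL], [PHONE]) was late again.
```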

Think carefully about whether to let OpenAI use your chat history for training

By default, everyone is opted in to save and share their conversations. There’s a well-hidden check box (under Settings > Data Controls) which allows you to opt out of both keeping your chat history and allowing it to be used for further model training. Personally, I think it’s unfair for these two things to be linked (and maybe it’s different in the paid version of the product, which I haven’t tried). It’s also a shame that you can’t opt in or out at the level of individual conversations. But you should consider which option is best for you, or consult your company policy if your company has one in place.

Check text against a plagiarism detector

Not everything that comes out of ChatGPT is novel and original. Not only that, but it often repeats itself, so it’s probably giving other people the same ideas it’s giving you. But there are plenty of free plagiarism detectors available online, so if you want to publish anything based on work done with ChatGPT, run it through one of those first.

Ask for references…

If you’re doing anything factual, ask for references. Are there reports or websites that back up what it’s saying? If numbers are quoted, where are they from? If you ask directly for links, you’ll often get them, though they don’t always work (remember, the training data only goes up to 2021).

…And then check that those references really exist, and are credible sources

The next step is to follow the links, and make sure they really do say what has been attributed to them. Even the Guardian is getting emails from people looking for imaginary articles, credited to real columnists who never wrote them, so check for yourself, even if a reference sounds plausible. If the source checks out, see the plagiarism point above to make sure you’re not inadvertently copying too much – or put a proper citation into your work.
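
Checking a long list of suggested links by hand is tedious, so a quick script can at least confirm that each URL resolves. A rough sketch, assuming the third-party requests package; the URLs are placeholders, and a working link still has to be read before you trust it:

```python
# A rough triage pass over ChatGPT-suggested references, assuming the
# third-party requests package. A 200 status only means the page
# exists; it says nothing about whether it supports the claim.
import requests

urls = [
    "https://www.example.com/",                 # placeholder
    "https://www.example.com/made-up-report",   # may well not exist
]

for url in urls:
    try:
        resp = requests.head(url, allow_redirects=True, timeout=10)
        status = resp.status_code
    except requests.RequestException as exc:
        status = f"request failed ({type(exc).__name__})"
    print(f"{status}\t{url}")
```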

I realise this sounds like a lot – and not every point will be applicable in all cases. But I hope this has given you some ideas on how to become a more informed consumer of ChatGPT and similar systems.