ChatGPT: a guide to secure prompting in 2023

ChatGPT keeps gaining popularity, and our ability to train people to use it properly lags behind.

Most resources today focus on “how to prompt” so that the results are more effective and more relevant to the user’s request.

And that is great. However, the tool raises some data privacy concerns and underlines the need for safeguards.

Companies providing such conversational agents should obviously implement every measure in their power to protect the confidentiality, integrity, and availability of their users’ data.

They should also respect privacy regulations regarding personal information that might have been used to train the models.

That said, users have a role to play as well. It is essential for users to adopt secure and responsible prompting practices to protect their sensitive information.

In this article, I will share some guidelines on secure prompting, focusing on best practices that can help you maintain data privacy, and prevent security risks, while still benefiting from AI-generated content.

Of course, it won’t reduce your risk to zero. For each piece of information that you decide to send to the chat… you put your security in the hands of OpenAI’s data protection and cybersecurity measures.

We will cover the following topics:

  1. Anonymizing user data in prompts (protecting the data you decide to send)
  2. Data Minimization and Selective Prompting (making sure you only send what is needed)
  3. Maintaining a secure interaction environment (sending information from the right place)

I’m sure this list will get longer as time goes by. But for now, let’s go.

ChatGPT: Secure prompt engineering in action

Anonymized Prompts: A Crucial Step Towards Secure ChatGPT Interactions

Anonymizing prompts is a vital practice when interacting with AI-powered language models like ChatGPT.

Basically, anonymization is about protecting the data you actually send to the chat.

Ensuring that your prompts do not contain any personally identifiable information (PII) or sensitive data protects user privacy, helps prevent data leaks, and complies with data protection regulations.

Here’s how to effectively anonymize your prompts:

Replace Personal Information

Before submitting a prompt, scrutinize the content for any PII such as names, addresses, phone numbers, email addresses, social security numbers, or other identifiable data.

Replace this information with generic placeholders like “[Name]” or “[Address]” to maintain the context without revealing personal details.
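To make this concrete, here is a minimal Python sketch of my own (not part of any official tool; the patterns only cover email addresses and phone numbers, and real PII detection needs a dedicated library or manual review) that swaps common PII patterns for generic placeholders before a prompt is sent:

```python
import re

# Rough, illustrative patterns only. A real redaction pass would also
# cover names, addresses, ID numbers, etc.
PATTERNS = {
    "[Email]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[Phone]": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def anonymize(prompt: str) -> str:
    """Replace emails and phone numbers with generic placeholders."""
    for placeholder, pattern in PATTERNS.items():
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(anonymize("Contact Jane at jane.doe@example.com or +1 555-123-4567."))
# Contact Jane at [Email] or [Phone].
```

A pass like this is a safety net, not a replacement for reading your prompt before sending it.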

I think that in 99% of the requests you make, there is absolutely no need to input information that allows identification.

Before sharing, ask yourself: am I sharing information that directly identifies me or someone else?

Rephrase other Sensitive Data

Be cautious about including sensitive information like financial account numbers, health records, or confidential business data in your prompts.

Consider rephrasing the question or using high-level, non-sensitive terms to address the topic.

Most of the time, you can obtain a satisfying answer without actually giving the information in itself.

Before sharing, ask yourself: am I sharing information that is too restricted or confidential to be here?

Use Generalized Examples

This is probably my favorite one.

When providing examples or describing scenarios in prompts, use generalized or fictional characters and situations rather than real-life instances.

This approach helps preserve privacy while still effectively conveying the context.

Before sharing, ask yourself: did I use fictional characters and situations instead of real ones?

Aggregate Data

If you need to include data sets or statistics in your prompts, ensure they are aggregated and anonymized, preventing the identification of yourself, other individuals, or specific entities.

Before sharing, ask yourself: did I aggregate statistics enough that they no longer allow identification of people or organizations?
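As a small illustration (my own sketch, with made-up field names and numbers), here is how you might collapse a per-person table into group-level statistics before anything goes into a prompt:

```python
from statistics import mean

# Hypothetical raw records -- never paste rows like these into a prompt.
employees = [
    {"name": "Alice", "department": "Sales", "salary": 52000},
    {"name": "Bob", "department": "Sales", "salary": 48000},
    {"name": "Carol", "department": "IT", "salary": 61000},
    {"name": "Dan", "department": "IT", "salary": 59000},
]

def aggregate_by(records, group_key, value_key):
    """Collapse individual rows into per-group counts and averages."""
    groups = {}
    for record in records:
        groups.setdefault(record[group_key], []).append(record[value_key])
    return {
        group: {"count": len(values), "avg": mean(values)}
        for group, values in groups.items()
    }

# Safe to share: no names, only group-level statistics.
print(aggregate_by(employees, "department", "salary"))
```

Note that aggregation alone is not always enough: with very small groups (here, two people per department), the averages can still reveal individual values.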

That’s essentially it for the techniques to anonymize the data you send to the chat.

Now, let’s see how you can actually work on sending less data to the chat.

Data Minimization: Enhancing Security in ChatGPT Interactions through Selective Prompting

In the context of secure prompting, data minimization is a crucial principle for protecting personal information when interacting with AI-powered language models like ChatGPT.

Minimization is about making sure you send only what is strictly necessary for the AI to have enough context. So, to connect the dots with anonymization, we can say:

  • Minimization happens when you WRITE the prompt. You check the text before sending it.
  • Anonymization is about what is SENT in the prompt. Once you have minimized, you make sure that what remains does not contain personal or sensitive data.

I hope this gives you a clearer picture of how these two work together. Now, let’s go back to minimization.

By limiting the data you share in prompts and focusing only on the essential information required for generating an accurate response, you reduce the risk of exposing sensitive data.

Here’s how to implement data minimization in your prompts:

Assess the Necessity

Before including any piece of information in your prompt, evaluate whether it is necessary for the AI to generate a meaningful response.

If it is not useful for providing context, exclude it from your prompt.

It’s useless to anonymize something that should not even be sent in the first place.

Before posting, ask yourself: am I sure everything I wrote was necessary for giving context to the AI?

Focus on Relevant Details (avoid Oversharing)

Provide strictly the relevant background information needed for the AI to understand the context.

Oversharing happens when you provide information that is genuinely useful for context, but in greater quantity (and potentially more personal or sensitive) than the AI actually needs.

Avoid oversharing personal or sensitive data that could compromise your privacy when less information is enough.

If you cannot avoid oversharing, use anonymization.

Before posting, ask yourself: Am I oversharing details I could avoid sharing to get a good reply?
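That last question can even be partly automated. As a rough illustration (my own sketch; the keyword list is made up and far from exhaustive), a last-second check before posting could look like this:

```python
# Words that often signal personal or confidential content.
# This list is illustrative only; adapt it to your own context.
RISKY_TERMS = ["password", "salary", "ssn", "confidential", "diagnosis"]

def flag_oversharing(prompt: str) -> list[str]:
    """Return the risky terms found in a prompt, for manual review."""
    lowered = prompt.lower()
    return [term for term in RISKY_TERMS if term in lowered]

warnings = flag_oversharing("Summarize this confidential report on salary bands.")
print(warnings)  # ['salary', 'confidential']
```

A keyword check like this only flags prompts for your own review; the decision to trim or rephrase stays with you.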

Evaluate Third-Party Integrations

These days, plenty of ChatGPT plugins are popping up.

When using third-party tools or integrations with ChatGPT, ensure that they:

  1. Follow data minimization principles (check the permissions they request in your browser)
  2. Do not inadvertently introduce personal or sensitive data into your prompts.

Also, make sure you still need the ChatGPT plugins you installed, and delete them if that’s not the case.

That was my main take on data minimization within ChatGPT. This practice is a crucial aspect of responsible AI usage, helping you to maintain privacy and reduce the risk of unintended data exposure.

Secure Environment

Users should ensure they interact with ChatGPT using a secure environment, including encrypted connections, secure devices, and updated software.

And of course, secure your authentication, for example by logging in with a Google account that has two-factor authentication enabled.

It is not only about what you send in, but also the way you do it.

Otherwise, your data might be accessed by people with malicious intent.


With that, I think we are pretty much good for an introduction-level secure prompt engineering guide.

By following these recommendations, end users can contribute to a more secure prompting experience, protecting their data and ensuring responsible AI interactions.

Does this make you completely safe? No. You won’t be anonymous: ChatGPT collects your IP address, which can be a way to identify you. You can try accessing it through a VPN, but that doesn’t always work.

Anyway, my message is that we cannot reduce our risk to zero, but we can do what is in our power.

This was the purpose of this post.

If you found it helpful, please share it.
