Prompt 1: Corrective, for Use After a Response Needs to Be Changed
If the AI provides a suggestion or response that includes or alludes to inappropriate, offensive, or insensitive terms, the following prompt can be used to correct it:
The response you provided contains terms that are inappropriate and offensive, specifically [insert term]. Please revise your response to use inclusive and respectful language. For example, instead of [insert term], use [alternative term]. Ensure that all suggestions and commands avoid language that is sexist, racially insensitive, or otherwise offensive.
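In practice, this corrective prompt is sent as a follow-up message in the same conversation, with the bracketed placeholders filled in for the specific case. Below is a minimal sketch, assuming an OpenAI-style chat-completions client; the model name, the example conversation, and the filled-in term/alternative pair ("Sextractor" / "Source Extractor", taken from Prompt 2's example) are illustrative only.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Conversation so far, ending with the reply that used the offensive term.
messages = [
    {"role": "user", "content": "What tool should I use to detect sources in this image?"},
    {"role": "assistant", "content": "You could run Sextractor on the FITS file."},
]

# Corrective follow-up built from the template, with the flagged term and
# its replacement filled in for this case.
corrective = (
    "The response you provided contains terms that are inappropriate and offensive, "
    "specifically 'Sextractor'. Please revise your response to use inclusive and "
    "respectful language. For example, instead of 'Sextractor', use 'Source Extractor'. "
    "Ensure that all suggestions and commands avoid language that is sexist, racially "
    "insensitive, or otherwise offensive."
)
messages.append({"role": "user", "content": corrective})

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)  # revised, inclusive reply
```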
Prompt 2: Instructive, to Preemptively Prevent Offensive Suggestions and Provide Sidebars
This prompt can be included in the AI's instructions so that it preemptively avoids suggesting offensively named projects and provides sidebars for context:
When providing suggestions or generating responses, avoid using terms that are sexist, racially insensitive, or otherwise offensive. Ensure all terminology is inclusive and respectful. If a term with potentially offensive connotations must be referenced, provide a sidebar explanation suggesting more appropriate alternatives. For example, avoid suggesting projects like "Sextractor" or commands like "fuck"; instead, suggest alternatives such as "Source Extractor" and appropriately named command-correction tools. Always prioritize language that fosters a professional and inclusive environment.
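Because this prompt is meant to shape behavior before any offensive suggestion is made, it belongs in the system instructions rather than in a follow-up message. Below is a minimal sketch, again assuming an OpenAI-style chat-completions client; the model name and the user query are illustrative placeholders.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Prompt 2, supplied once as a system instruction so it applies to every turn.
PREVENTIVE_INSTRUCTIONS = (
    "When providing suggestions or generating responses, avoid using terms that are "
    "sexist, racially insensitive, or otherwise offensive. Ensure all terminology is "
    "inclusive and respectful. If a term with potentially offensive connotations must "
    "be referenced, provide a sidebar explanation suggesting more appropriate "
    "alternatives. For example, avoid suggesting projects like 'Sextractor' or "
    "commands like 'fuck'; instead, suggest alternatives such as 'Source Extractor' "
    "and appropriately named command-correction tools. Always prioritize language "
    "that fosters a professional and inclusive environment."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": PREVENTIVE_INSTRUCTIONS},
        {"role": "user", "content": "Suggest a tool for extracting sources from astronomical images."},
    ],
)
print(response.choices[0].message.content)
```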