Most large-scale cloud services that support generative AI, such as Microsoft Azure OpenAI, provide security controls and guardrails for handling potentially harmful or inappropriate material returned by generative AI models/LLMs. One such control is content filtering. As the name implies, content filtering is an additional feature, provided at no cost, that screens out inappropriate or harmful content. The filter classifies text and images (and perhaps voice in the near future) into harm categories, assigns each a severity rating, and blocks anything that exceeds the configured threshold, preventing triggering, offensive, or unsuitable content from reaching specific audiences.
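To make this concrete, here is a minimal sketch of how an application might react to content-filter outcomes, assuming the openai Python SDK (v1.x) against an Azure OpenAI deployment. The environment variable names, API version, and deployment name are placeholders for illustration, not values from any particular setup.

```python
# Sketch: handling Azure OpenAI content-filter outcomes with the openai v1.x SDK.
# Endpoint, key, API version, and deployment name below are assumed placeholders.
import os

from openai import AzureOpenAI, BadRequestError

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # placeholder env var
    api_key=os.environ["AZURE_OPENAI_API_KEY"],          # placeholder env var
    api_version="2024-06-01",
)

try:
    response = client.chat.completions.create(
        model="my-chat-deployment",  # hypothetical deployment name
        messages=[{"role": "user", "content": "Summarize our security policy."}],
    )
    choice = response.choices[0]
    if choice.finish_reason == "content_filter":
        # The model's *output* tripped a filter category, so the service
        # withheld or truncated the completion instead of returning it.
        print("Response was blocked or truncated by the content filter.")
    else:
        print(choice.message.content)
except BadRequestError as err:
    # If the *prompt* itself is flagged, the service rejects the request
    # with an error whose code is "content_filter".
    if getattr(err, "code", None) == "content_filter":
        print("Prompt was blocked by the content filter.")
    else:
        raise
```

Note that the filter can fire at two points, on the incoming prompt (surfacing as a request error) and on the generated response (surfacing as a `content_filter` finish reason), so robust applications check both paths.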