VentureBeat

Mistral AI takes on OpenAI with new moderation API, tackling harmful content in 11 languages

“Safety plays a key role in making AI useful,” Mistral’s team said in announcing the release

French artificial intelligence startup Mistral AI launched a new content moderation API on Thursday, marking its latest move to compete with OpenAI and other AI leaders while addressing growing concerns about AI safety and content filtering.

The new moderation service, powered by a fine-tuned version of Mistral’s Ministral 8B model, is designed to detect potentially harmful content across nine different categories, including sexual content, hate speech, violence, dangerous activities, and personally identifiable information. The API offers both raw text and conversational content analysis capabilities.
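For developers, the service is exposed as a standard HTTP API. The sketch below shows what a raw-text moderation call might look like in Python; the endpoint path, the "mistral-moderation-latest" model name, the request fields, and the response shape are assumptions based on Mistral's public API conventions, not details given in this article.

```python
import os
import requests

# Minimal sketch of a raw-text moderation request (assumed endpoint and fields).
API_KEY = os.environ["MISTRAL_API_KEY"]

response = requests.post(
    "https://api.mistral.ai/v1/moderations",  # assumed endpoint path
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "mistral-moderation-latest",   # assumed model identifier
        "input": ["Example text to screen for harmful content."],
    },
)
response.raise_for_status()

# Each result is assumed to carry per-category flags (e.g. hate speech, violence, PII).
for result in response.json().get("results", []):
    print(result.get("categories"))
```

A conversational variant would submit a list of chat messages instead of plain strings, letting the classifier weigh each turn in context rather than in isolation.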