Promptfoo is an open-source tool for testing and evaluating LLM prompts. It lets developers define test cases, run them against multiple models, compare outputs side by side, and catch regressions before deployment. It also supports custom scoring functions, red teaming, and CI/CD integration for automated prompt testing.
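To make that workflow concrete, here is a minimal TypeScript sketch using promptfoo's Node API (`promptfoo.evaluate`). The test-suite fields (`prompts`, `providers`, `tests`, `assert`) mirror the project's YAML config shape, but the specific provider IDs, model names, and the exact return type are assumptions here and vary by version, so treat this as a sketch and check the promptfoo docs rather than copying it verbatim.

```ts
// Minimal sketch: evaluate one prompt against two models with promptfoo's Node API.
// Assumes `npm install promptfoo` and an OPENAI_API_KEY in the environment.
import promptfoo from 'promptfoo';

async function main() {
  const summary = await promptfoo.evaluate({
    // Prompt templates; {{article}} is filled from each test case's vars.
    prompts: ['Summarize in one sentence: {{article}}'],
    // Two providers so their outputs can be compared side by side.
    // (Model IDs are illustrative; substitute whatever providers you use.)
    providers: ['openai:gpt-4o-mini', 'openai:gpt-4o'],
    tests: [
      {
        vars: { article: 'Promptfoo is an open-source tool for testing LLM prompts.' },
        // Deterministic assertion: fail the case if the product name is missing.
        assert: [{ type: 'contains', value: 'Promptfoo' }],
      },
    ],
  });

  // The summary holds per-provider results and pass/fail stats; the promptfoo
  // CLI and web viewer render the same data as a side-by-side comparison table.
  console.log(JSON.stringify(summary, null, 2));
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

In practice the same suite is more often written as a `promptfooconfig.yaml` and run with the `promptfoo eval` CLI, which is how the CI/CD integration is typically wired up.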
Top companies in the Observability, Prompts & Evals category that you can use instead of Promptfoo.
Companies from adjacent layers in the AI stack that work well with Promptfoo.