PHOENIX

ML Observability
in a Notebook

Uncover Insights, Surface Problems, Monitor and Fine-Tune
your Generative LLM, CV, and Tabular Models

  • Embeddings and latent structure are the backbone of modern models

  • LLM and model complexity is off the charts

  • Model improvement, analysis, and control still lack a set of easy-to-use tools

  • Phoenix meets the data scientist (you) in the notebook to help solve these complex ML problems

With Phoenix, Data Scientists can

  • Troubleshoot tasks such as summarization or question answering to find problem clusters with misleading or false answers.
  • Automatically surface anomalies in your data by clustering LLM embeddings.
  • Find clusters of problems using performance metrics or drift, and export those clusters for fine-tuning workflows.
  • Use embedding drift to surface data drift for generative AI, LLMs, computer vision (CV), and tabular models.
  • Uncover high-impact clusters of data points missing from model training data when comparing training and production datasets.
  • Map structured features onto embeddings for deeper insights into how embeddings represent your data.
  • Monitor model performance and track down issues through exploratory data analysis (see the notebook sketch below).
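
The list above boils down to one notebook loop: wrap a reference (training) dataframe and a primary (production) dataframe with a schema that tells Phoenix which columns hold features, raw text, and embedding vectors, then launch the app to explore drift and problem clusters. The sketch below illustrates that loop on synthetic data; it assumes the arize-phoenix Python package, and class names such as px.Dataset (renamed px.Inferences in later releases) and the launch_app arguments may differ across versions.

# Minimal sketch, assuming the arize-phoenix package and synthetic data.
import numpy as np
import pandas as pd
import phoenix as px

def make_frame(n: int, shift: float, seed: int) -> pd.DataFrame:
    """Build a toy dataframe with a 16-dimensional embedding per row."""
    rng = np.random.default_rng(seed)
    return pd.DataFrame(
        {
            "prediction_id": [f"pred-{seed}-{i}" for i in range(n)],
            "timestamp": pd.date_range("2024-01-01", periods=n, freq="D"),
            "category": rng.choice(["billing", "shipping", "returns"], size=n),
            "response_text": [f"example model response {i}" for i in range(n)],
            # Shifting the embedding distribution simulates drift in production.
            "embedding": list(rng.normal(loc=shift, scale=1.0, size=(n, 16))),
        }
    )

train_df = make_frame(n=200, shift=0.0, seed=0)   # reference / training data
prod_df = make_frame(n=200, shift=0.75, seed=1)   # primary / production data

# Tell Phoenix which columns hold the id, timestamp, features, and embeddings.
schema = px.Schema(
    prediction_id_column_name="prediction_id",
    timestamp_column_name="timestamp",
    feature_column_names=["category"],
    embedding_feature_column_names={
        "response_embedding": px.EmbeddingColumnNames(
            vector_column_name="embedding",        # vector of floats per row
            raw_data_column_name="response_text",  # raw text shown in the UI
        ),
    },
)

train_ds = px.Dataset(dataframe=train_df, schema=schema, name="training")
prod_ds = px.Dataset(dataframe=prod_df, schema=schema, name="production")

# Launch the local Phoenix app; it prints a URL you can open from the notebook
# to inspect embedding drift between the two datasets and cluster the drifted
# points. From the UI you can select drifted or low-performing clusters and
# export them back to the notebook for fine-tuning or further analysis.
session = px.launch_app(primary=prod_ds, reference=train_ds)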

When to use Arize vs Phoenix

Arize
RECOMMENDED FOR
  • Cloud or on-prem
  • ML Teams looking for visibility across all their ML use cases
  • Advanced RCA (root cause analysis) for drift and performance
  • Always-on monitoring
  • Shareable URLs with your team
  • Scale and security
  • Robust integrations
  • Explainability and fairness
  • Platform for observability of production models
FEATURES
  • Available on cloud or on-prem
  • Supports Tabular, Image, NLP, and Generative models
  • Rich visualizations for exploratory data analysis
  • Opinionated root cause analysis (tracing workflows)
  • High scale and performance (works on billions of predictions)
  • Multi-model support
  • Configurable monitoring and alerting integrations
  • Shareable insights and dashboards for your team
  • Workflows to export findings
  • Customizable performance, drift, and data quality metrics
  • RBAC controls
  • Security and compliance

Phoenix
RECOMMENDED FOR
  • Notebook and local usage
  • Single Data Scientist looking for insights for one model
  • Designed for fast, iterative development during the model-building and production stages
  • Notebook based monitoring
  • EDA (exploratory data analysis)
  • Users getting started with ML Observability
FEATURES
  • Available in a notebook
  • Supports Tabular, Image, NLP, and Generative models
  • Rich visualizations for exploratory data analysis
  • Single model support
  • Lightweight monitoring & checks
  • Workflows to export findings
  • Supports drift metrics
  • Runs locally on your data
