The Comparison Hub in Murnitur AI is a central place to compare presets, prompts, versions, and LLM models side by side. By consolidating these comparisons, it helps you make informed decisions and choose the best option for your project, whether you are evaluating prompts, comparing LLM models, or assessing changes between versions.

Use cases

Here are some use cases for the Comparison Hub:

  • Select Base Model and Preset Version: You can use the Comparison Hub to select a base model and a preset version for comparison. This allows you to evaluate how different versions of presets perform with the same base model.

  • Select Multiple Items for Comparison: You can compare more than one item at a time, such as multiple presets, models, or versions, making it quick to assess the differences and similarities between options.

  • Fill Out Variables: Before running a comparison, you can fill out the variables in your prompts or configurations, tailoring each evaluation to specific input conditions.

  • Run Comparison: A single click starts the comparison, and the Comparison Hub generates results showing the outputs, performance, and other relevant metrics for the selected items.

  • Rerun or Modify Comparison: You can rerun an individual comparison cell or add more test cases, models, or versions to an existing comparison. This iterative workflow supports thorough analysis and experimentation, letting you refine and optimize based on the results.
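The workflow above, selecting models and versions, filling out variables, and running every pairing as a comparison cell, can be sketched conceptually. The snippet below is a hypothetical illustration only, not the Murnitur AI API; the model names, prompt versions, and the `run_prompt` stand-in are all assumptions made for the example.

```python
from itertools import product

# Hypothetical sketch of a comparison grid; NOT the Murnitur AI API.
# Each cell pairs one model with one prompt version, variables filled in.

prompt_versions = {
    "v1": "Summarize the following text: {text}",
    "v2": "Provide a one-sentence summary of: {text}",
}
models = ["model-a", "model-b"]  # placeholder model names

# Variables to substitute into each prompt before running the comparison.
variables = {"text": "LLM observability helps teams debug model behavior."}

def run_prompt(model: str, prompt: str) -> str:
    """Stand-in for an actual model call; returns a dummy output."""
    return f"[{model}] response to: {prompt[:30]}..."

# Build the grid: one result cell per (model, version) pair.
results = {}
for model, (version, template) in product(models, prompt_versions.items()):
    filled = template.format(**variables)  # fill out variables
    results[(model, version)] = run_prompt(model, filled)

for (model, version), output in sorted(results.items()):
    print(f"{model} x {version}: {output}")
```

Rerunning a single cell then amounts to calling `run_prompt` again for one `(model, version)` key, and adding a model or version simply widens the grid.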