Logging and monitoring are two essential practices in LLMOps. Logging refers to capturing and recording relevant data and events associated with deploying, operating, and using a language model. Key aspects of logging for LLMs include usage logs, performance logs, and error logs.
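The log types above can be sketched with nothing more than Python's standard `logging` module. This is an illustrative example, not orq.ai's implementation; the field names and values are placeholders.

```python
import json
import logging
import time

logger = logging.getLogger("llm")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_llm_call(prompt: str, completion: str, latency_ms: float, tokens: int) -> dict:
    """Emit one structured record covering usage and performance.

    Error logs would follow the same pattern via logger.error().
    """
    record = {
        "event": "llm_call",
        "prompt_chars": len(prompt),          # usage: input size
        "completion_chars": len(completion),  # usage: output size
        "tokens": tokens,                     # usage: token consumption
        "latency_ms": latency_ms,             # performance
        "ts": time.time(),
    }
    logger.info(json.dumps(record))
    return record

record = log_llm_call("What is LLMOps?", "LLMOps is ...", 420.0, 57)
```

Emitting records as JSON keeps them machine-readable, so the same data can later feed monitoring and analytics.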
On the other hand, monitoring refers to the continuous and real-time observation and tracking of language model behavior and related system components to assess performance, detect anomalies, and ensure operational efficiency and reliability. Monitoring an LLM involves collecting and analyzing data to understand how the language model functions and interacts with users and other software components.
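As a minimal sketch of what such real-time monitoring involves, the snippet below flags a request whose latency spikes far above a rolling mean. The window size and threshold factor are arbitrary example values, not recommendations.

```python
from collections import deque

class LatencyMonitor:
    """Flag a latency sample as anomalous when it exceeds
    `factor` times the rolling mean of recent samples."""

    def __init__(self, window: int = 100, factor: float = 3.0):
        self.samples = deque(maxlen=window)
        self.factor = factor

    def observe(self, latency_ms: float) -> bool:
        anomalous = False
        if self.samples:
            mean = sum(self.samples) / len(self.samples)
            anomalous = latency_ms > self.factor * mean
        self.samples.append(latency_ms)
        return anomalous

mon = LatencyMonitor()
for ms in [200, 220, 210, 205]:   # normal traffic
    mon.observe(ms)
flag = mon.observe(2000)          # spike well above the rolling mean
```

In production this check would run continuously over live traffic and trigger an alert instead of returning a boolean.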
In orq.ai, every interaction with an LLM generates a log, which you can find on the dashboard. Logs are available for both Deployments and the Playground. When you click on a log, the side panel shows a conversation view of the prompt and completion.
This view contains the prompt input to the LLM, the context, and the corresponding response or answer. Depending on the prompt type (e.g., Chat), orq.ai renders the conversation clearly.
This consists of several subsections with key information about the transaction: the Context provided, Variables, Metadata, Parameters, Performance, Economics, and the Headers of that specific request.
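To make those subsections concrete, here is a hypothetical shape for one logged transaction. The field names and values are illustrative only and do not reflect orq.ai's actual log schema.

```python
# Each top-level key mirrors one subsection of the log detail panel.
transaction = {
    "context": {"environment": "production"},
    "variables": {"customer_name": "Ada"},
    "metadata": {"request_id": "example-123"},
    "parameters": {"temperature": 0.7, "max_tokens": 256},
    "performance": {"latency_ms": 850},
    "economics": {"prompt_tokens": 120, "completion_tokens": 80},
    "headers": {"user-agent": "example-client/1.0"},
}

# Economics data supports cost analysis, e.g. total token consumption:
total_tokens = sum(transaction["economics"].values())
```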
Any feedback reported back through the API is shown here. More features and tooling for domain experts working within the platform will be added to this section in the future.
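The sketch below shows what building such a feedback report might look like. The field names and rating values are assumptions for illustration, not orq.ai's actual feedback API; consult the API reference for the real request shape.

```python
import json

def build_feedback_payload(log_id: str, rating: str, comment: str = "") -> str:
    """Assemble a hypothetical feedback body for a logged transaction."""
    payload = {
        "log_id": log_id,   # which logged transaction the feedback targets
        "rating": rating,   # e.g. "good" or "bad"
        "comment": comment,
    }
    # In a real integration this JSON body would be POSTed to the API.
    return json.dumps(payload)

body = build_feedback_payload("example-log-id", "good", "Accurate answer")
```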
The debug tab gives engineers a complete and detailed transaction breakdown.
Logs provide a detailed record of events and interactions, offering insights into system performance, security, and compliance. Monitoring complements logging by providing a real-time assessment of system health and resource usage, allowing proactive issue resolution.