Semantic Kernel
Integrate Orq.ai with Microsoft Semantic Kernel using OpenTelemetry
Getting Started
Microsoft Semantic Kernel is an open-source SDK that lets developers integrate AI services such as OpenAI, Azure OpenAI, and Hugging Face with conventional programming languages. Integrating with Orq.ai lets you monitor AI orchestration, plugin execution, planner operations, and LLM interactions.
Prerequisites
Before you begin, ensure you have:
- An Orq.ai account and API key
- Semantic Kernel installed in your project
- Python 3.10+ or .NET 6.0+ (depending on your language)
- An LLM provider API key (OpenAI, Azure OpenAI, etc.)
Install Dependencies
Python:
# Core Semantic Kernel packages
pip install semantic-kernel
# LLM provider SDK
pip install openai  # the openai package also covers Azure OpenAI
# Core OpenTelemetry packages
pip install opentelemetry-sdk opentelemetry-exporter-otlp
.NET:
# Using dotnet CLI
dotnet add package Microsoft.SemanticKernel
dotnet add package OpenTelemetry.Exporter.OpenTelemetryProtocol
Configure Orq.ai
Set up your environment variables to connect to Orq.ai's OpenTelemetry collector:
Unix/Linux/macOS:
export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.orq.ai/v2/otel"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <ORQ_API_KEY>"
export OTEL_RESOURCE_ATTRIBUTES="service.name=semantic-kernel-app,service.version=1.0.0"
Windows (PowerShell):
$env:OTEL_EXPORTER_OTLP_ENDPOINT = "https://api.orq.ai/v2/otel"
$env:OTEL_EXPORTER_OTLP_HEADERS = "Authorization=Bearer <ORQ_API_KEY>"
$env:OTEL_RESOURCE_ATTRIBUTES = "service.name=semantic-kernel-app,service.version=1.0.0"
Using a .env file:
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.orq.ai/v2/otel
OTEL_EXPORTER_OTLP_HEADERS=Authorization=Bearer <ORQ_API_KEY>
OTEL_RESOURCE_ATTRIBUTES=service.name=semantic-kernel-app,service.version=1.0.0
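A .env file is not loaded automatically by Python. A minimal sketch, assuming the python-dotenv package, to load it before any exporter is created:

# pip install python-dotenv
from dotenv import load_dotenv

# Load the OTEL_* variables before configuring OpenTelemetry or MLflow,
# since exporters read the environment when they are constructed
load_dotenv()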
Integrations
Choose how you want to collect and export traces:
MLflow
Best for: ML experimentation and model lifecycle management. MLflow ships built-in autologging for Semantic Kernel, making it the most direct integration.
pip install mlflow
import os

import mlflow
from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion

# MLflow routes traces to the OTLP endpoint configured via the
# OTEL_EXPORTER_OTLP_* environment variables from the previous section;
# no MLflow tracking server is needed.
# Enable automatic Semantic Kernel tracing (requires a recent MLflow release)
mlflow.semantic_kernel.autolog()

# Initialize the kernel and register an OpenAI chat service
kernel = Kernel()
kernel.add_service(
    OpenAIChatCompletion(ai_model_id="gpt-4o-mini", api_key=os.getenv("OPENAI_API_KEY"))
)

# Create and run a prompt function (run inside an async function)
prompt = """{{$input}}
Summarize the content above in 2-3 sentences."""
summarize = kernel.add_function(
    plugin_name="summarization", function_name="summarize", prompt=prompt
)
result = await kernel.invoke(summarize, input="Long text to summarize...")
print(result)
Manual OpenTelemetry
Best for: Full control and custom instrumentation
import os

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion
from semantic_kernel.functions import KernelArguments

# Initialize OpenTelemetry; the exporter reads the endpoint and headers
# from the OTEL_EXPORTER_OTLP_* environment variables set earlier
resource = Resource.create({
    "service.name": os.getenv("OTEL_SERVICE_NAME", "semantic-kernel-app"),
})
provider = TracerProvider(resource=resource)
provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("semantic-kernel")

# Initialize Semantic Kernel with an OpenAI chat service
kernel = Kernel()
kernel.add_service(
    OpenAIChatCompletion(ai_model_id="gpt-4o-mini", api_key=os.getenv("OPENAI_API_KEY"))
)

# Trace a prompt function execution (run inside an async function)
with tracer.start_as_current_span("semantic_kernel.function", attributes={"model": "gpt-4o-mini"}):
    prompt = """{{$input}}
Translate the above to {{$language}}"""
    translator = kernel.add_function(
        plugin_name="translation", function_name="translate", prompt=prompt
    )
    result = await kernel.invoke(
        translator,
        KernelArguments(input="Hello, world!", language="French"),
    )
    print(result)
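BatchSpanProcessor exports spans in the background, so short-lived scripts can exit before anything is sent. A small addition, using the provider created above, to flush at shutdown:

# Force any buffered spans out before the process exits
provider.force_flush()
provider.shutdown()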
Examples
Basic Semantic Function
import os

from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import (
    OpenAIChatCompletion,
    OpenAIChatPromptExecutionSettings,
)

# Set up your chosen integration above, then use Semantic Kernel normally
kernel = Kernel()

# Configure the AI service
kernel.add_service(
    OpenAIChatCompletion(ai_model_id="gpt-4o-mini", api_key=os.getenv("OPENAI_API_KEY"))
)

# Create a prompt function
prompt = """{{$input}}
Analyze the sentiment of the text above.
Return: POSITIVE, NEGATIVE, or NEUTRAL"""
sentiment_analyzer = kernel.add_function(
    plugin_name="analysis",
    function_name="sentiment",
    prompt=prompt,
    description="Analyzes text sentiment",
    prompt_execution_settings=OpenAIChatPromptExecutionSettings(max_tokens=50),
)

# Run the function (run inside an async function)
result = await kernel.invoke(sentiment_analyzer, input="I absolutely love this new feature!")
print(f"Sentiment: {result}")
Using Native Functions (Plugins)
from semantic_kernel import Kernel
from semantic_kernel.functions import kernel_function

class MathPlugin:
    @kernel_function(description="Multiply two numbers", name="multiply")
    def multiply(self, number1: float, number2: float) -> float:
        return number1 * number2

    @kernel_function(description="Add two numbers", name="add")
    def add(self, number1: float, number2: float) -> float:
        return number1 + number2

# Register the plugin
kernel = Kernel()
math_plugin = kernel.add_plugin(MathPlugin(), plugin_name="MathPlugin")

# Invoke the native functions, feeding one result into the next
# (run inside an async function)
product = await kernel.invoke(math_plugin["multiply"], number1=5, number2=3)
result = await kernel.invoke(math_plugin["add"], number1=product.value, number2=10)
print(result)  # 25.0
Planner Example
import os

from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.function_choice_behavior import FunctionChoiceBehavior
from semantic_kernel.connectors.ai.open_ai import (
    OpenAIChatCompletion,
    OpenAIChatPromptExecutionSettings,
)
from semantic_kernel.functions import KernelArguments

kernel = Kernel()

# Add AI service
kernel.add_service(
    OpenAIChatCompletion(ai_model_id="gpt-4o-mini", api_key=os.getenv("OPENAI_API_KEY"))
)

# Import plugins from disk
kernel.add_plugin(parent_directory="./skills", plugin_name="WriterSkill")
kernel.add_plugin(parent_directory="./skills", plugin_name="EmailSkill")

# Recent Semantic Kernel releases deprecate the old planners (ActionPlanner,
# SequentialPlanner) in favor of automatic function calling: the model plans
# and invokes the registered plugin functions itself
settings = OpenAIChatPromptExecutionSettings(
    function_choice_behavior=FunctionChoiceBehavior.Auto()
)

# Execute the goal (run inside an async function)
ask = "Write a poem about Seattle and then email it to [email protected]"
result = await kernel.invoke_prompt(prompt=ask, arguments=KernelArguments(settings=settings))
print(f"Plan executed: {result}")
Memory and Context Management
import os

from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import OpenAITextEmbedding
from semantic_kernel.core_plugins import TextMemoryPlugin
from semantic_kernel.functions import KernelArguments
from semantic_kernel.memory import SemanticTextMemory, VolatileMemoryStore

kernel = Kernel()

# Add an embedding service and back memory with an in-process store
# (note: the semantic memory API is marked experimental in current SK releases)
embedding_service = OpenAITextEmbedding(
    ai_model_id="text-embedding-ada-002", api_key=os.getenv("OPENAI_API_KEY")
)
kernel.add_service(embedding_service)
memory = SemanticTextMemory(storage=VolatileMemoryStore(), embeddings_generator=embedding_service)
kernel.add_plugin(TextMemoryPlugin(memory), "TextMemoryPlugin")

# Save facts to memory (run inside an async function)
await memory.save_information(
    collection="products", id="1",
    text="Our premium widget costs $99 and comes with a lifetime warranty",
)
await memory.save_information(
    collection="products", id="2",
    text="The standard widget is $49 with a 1-year warranty",
)

# Prompt function that recalls relevant facts via the TextMemoryPlugin
prompt = """
Use the following information to answer the question:
{{TextMemoryPlugin.recall $query}}
Question: {{$query}}
Answer:"""
answer_with_memory = kernel.add_function(
    plugin_name="qa", function_name="answer", prompt=prompt
)
result = await kernel.invoke(
    answer_with_memory,
    # recall's 'collection' parameter is resolved from the arguments by name
    KernelArguments(query="What is the price of the premium widget?", collection="products"),
)
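When correlating retrieval quality with traces, it helps to see exactly what the recall step returns. A minimal sketch querying the memory object from above directly:

# Inspect what recall would retrieve for a given query
results = await memory.search(collection="products", query="premium widget price", limit=1)
for item in results:
    print(item.relevance, item.text)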
Streaming Responses
import os

from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion

kernel = Kernel()
kernel.add_service(
    OpenAIChatCompletion(ai_model_id="gpt-4o-mini", api_key=os.getenv("OPENAI_API_KEY"))
)

prompt = "Tell me a story about a brave knight"
story_teller = kernel.add_function(
    plugin_name="stories", function_name="tell", prompt=prompt
)

# Stream the response; streaming is requested per invocation rather than
# on the service (run inside an async function)
async for chunk in kernel.invoke_stream(story_teller):
    print(str(chunk[0]), end="", flush=True)
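Streaming pairs well with custom spans: time-to-first-token is usually the latency users actually feel. A sketch that wraps the stream above in a span; the attribute name is illustrative, not a standard convention:

import time

from opentelemetry import trace

tracer = trace.get_tracer("semantic-kernel")

with tracer.start_as_current_span("semantic_kernel.stream") as span:
    start = time.monotonic()
    first_chunk_seen = False
    async for chunk in kernel.invoke_stream(story_teller):
        if not first_chunk_seen:
            first_chunk_seen = True
            # Illustrative attribute name; choose your own convention
            span.set_attribute("stream.time_to_first_token_ms", (time.monotonic() - start) * 1000)
        print(str(chunk[0]), end="", flush=True)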
Error Handling with Custom Spans
import os

from opentelemetry import trace
from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion

tracer = trace.get_tracer("semantic-kernel-custom")

# function1, function2, and function3 are kernel functions defined elsewhere
with tracer.start_as_current_span("semantic_kernel.pipeline") as span:
    try:
        kernel = Kernel()
        # Configure services
        kernel.add_service(
            OpenAIChatCompletion(ai_model_id="gpt-4o-mini", api_key=os.getenv("OPENAI_API_KEY"))
        )

        # Run the pipeline, feeding each result into the next step
        # (run inside an async function)
        result = "Process this data"
        for step in (function1, function2, function3):
            result = await kernel.invoke(step, input=str(result))

        # Add success attributes
        span.set_attribute("operation.success", True)
        span.set_attribute("pipeline.steps", 3)
        span.set_attribute("result.length", len(str(result)))
    except Exception as e:
        # Record the exception and mark the span as failed before re-raising
        span.record_exception(e)
        span.set_attribute("operation.success", False)
        span.set_attribute("error.type", type(e).__name__)
        raise
Next Steps
✅ Verify traces: Check your Orq.ai dashboard to see incoming Semantic Kernel traces (a standalone connectivity check is sketched below)
✅ Monitor orchestration: Track planner decisions and multi-step workflows
✅ Optimize plugins: Use trace data to identify slow plugins and optimize performance
✅ Track costs: Monitor token usage across different models and functions
✅ Debug planners: Understand how planners decompose tasks and execute steps
✅ Analyze memory usage: Monitor embedding operations and retrieval performance
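If no traces appear, isolate the problem by sending one hand-made span with the same environment variables, independent of Semantic Kernel:

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# The exporter reads OTEL_EXPORTER_OTLP_ENDPOINT and _HEADERS from the environment
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
trace.set_tracer_provider(provider)

with trace.get_tracer("connectivity-check").start_as_current_span("orq.smoke-test"):
    pass

provider.force_flush()  # push the span out before the script exits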
Related Documentation
- Orq.ai Dashboard Guide
- Semantic Kernel Documentation
- OpenTelemetry Best Practices
- Plugin Development Guide