microsoft/semantic-kernel
Integrate cutting-edge LLM technology quickly and easily into your apps
Microsoft's enterprise framework for building AI agents and multi-agent systems with LLM orchestration
User requests trigger AI processes that flow through approval cycles with real-time feedback via SignalR or gRPC streaming
Under the hood, the system uses 3 feedback loops, 3 data pools, and 4 control points to manage its runtime behavior.
Structural Verdict
A 9-component ML inference system with 8 connections. 4274 files analyzed. Well-connected, with clear data flow between components.
How Data Flows Through the System
- User Request — Frontend captures documentation request with title and content
- Process Initialization — SK Process creates workflow instance and gathers product information
- AI Generation — LLM generates documentation based on user input and product context
- User Review — Generated content sent to user via SignalR/gRPC for approval or rejection
- Iteration Loop — If rejected, process loops back to generation step with feedback
- Publication — Approved documents are published and broadcast to subscribers
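The approval-driven cycle above can be sketched as a small loop in plain Python. This is illustrative only: `run_documentation_flow`, `generate`, and `review` are hypothetical callables, not the SK Process API.

```python
def run_documentation_flow(request, generate, review, max_iterations=3):
    """Drive the generate -> review -> publish cycle with an iteration cap.

    generate(request, feedback) returns a draft document;
    review(draft) returns (approved: bool, feedback: str | None).
    """
    feedback = None
    for attempt in range(1, max_iterations + 1):
        draft = generate(request, feedback)   # AI Generation step
        approved, feedback = review(draft)    # User Review step
        if approved:
            # Publication step: approved draft leaves the loop
            return {"status": "published", "document": draft, "attempts": attempt}
    # Iteration Loop exhausted without approval
    return {"status": "rejected", "attempts": max_iterations}
```

The real workflow pauses indefinitely at the review step waiting for SignalR/gRPC feedback; here `review` is just a synchronous stand-in for that wait.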
System Behavior
How the system actually operates at runtime — where data accumulates, what loops, what waits, and what controls what.
Data Pools
- SK Process framework maintains workflow state across process steps
- Real-time message broker for process events and user interactions
- Temporary storage for generated documents during approval cycles
Feedback Loops
- Document Approval Loop (retry, balancing) — Trigger: User rejection of generated document. Action: Regenerate content with user feedback. Exit: User approval or maximum iterations.
- SignalR Reconnection (circuit-breaker, balancing) — Trigger: Connection loss to SignalR hub. Action: Attempt automatic reconnection with backoff. Exit: Successful connection or manual intervention.
- Process Event Broadcasting (recursive, reinforcing) — Trigger: External process events. Action: Broadcast to all connected clients. Exit: All clients notified.
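The event-broadcasting loop is a simple fan-out: every process event is pushed to all connected clients, and the loop exits once each client is notified. A minimal sketch (not the actual LocalEventProxyChannel implementation):

```python
class EventBroadcaster:
    """Fan out process events to every registered client callback."""

    def __init__(self):
        self._clients = []

    def connect(self, callback):
        """Register a client; callback receives each broadcast event."""
        self._clients.append(callback)

    def broadcast(self, event):
        # Exit condition of the loop: every connected client is notified.
        delivered = 0
        for notify in list(self._clients):
            notify(event)
            delivered += 1
        return delivered
```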
Delays & Async Processing
- LLM Generation (async-processing, ~2-10 seconds) — User waits for AI-generated content with loading indicators
- User Review Window (eventual-consistency, ~indefinite) — Process pauses until user provides approval or rejection feedback
- SignalR Reconnection Backoff (rate-limit, ~exponential backoff) — Temporary loss of real-time updates during reconnection attempts
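Exponential backoff of the kind described for SignalR reconnection can be sketched generically as follows. This is an assumption-laden illustration: the real SignalR client has its own configurable retry policy, and `reconnect_with_backoff` is a hypothetical helper.

```python
import time

def reconnect_with_backoff(connect, max_attempts=5, base_delay=1.0,
                           cap=30.0, sleep=time.sleep):
    """Retry `connect` with exponentially growing waits: 1s, 2s, 4s, ...

    `sleep` is injectable so tests can observe delays without waiting.
    Re-raises the last ConnectionError if every attempt fails
    (the "manual intervention" exit of the loop).
    """
    for attempt in range(max_attempts):
        try:
            return connect()  # successful connection exits the loop
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise
            sleep(min(cap, base_delay * (2 ** attempt)))
```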
Control Points
- OpenAI API Key (env-var) — Controls: LLM provider selection and authentication. Default: AZURE_OPENAI_API_KEY or OPENAI_API_KEY
- SignalR Hub URL (runtime-toggle) — Controls: Real-time communication endpoint. Default: http://localhost:5125/pfevents
- gRPC Service Endpoint (runtime-toggle) — Controls: Process communication channel selection
- Aspire App Ports (runtime-toggle) — Controls: Service discovery and load balancing. Default: 7207
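The env-var control point can be read with a simple precedence check. A sketch assuming Azure-first precedence (the actual connector configuration may resolve keys differently):

```python
import os

def resolve_llm_api_key(env=None):
    """Return (variable_name, key) for the first configured LLM provider."""
    env = os.environ if env is None else env
    for name in ("AZURE_OPENAI_API_KEY", "OPENAI_API_KEY"):
        key = env.get(name)
        if key:
            return name, key
    raise RuntimeError("Set AZURE_OPENAI_API_KEY or OPENAI_API_KEY")
```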
Technology Stack
- Microsoft Semantic Kernel — AI orchestration and agent framework
- SignalR — Real-time web communication
- gRPC — High-performance RPC communication
- React — Frontend UI framework
- FastAPI — Python web framework for ML services
- Pydantic — Data validation and serialization
- Fluent UI — Microsoft design system components
- .NET Aspire — Cloud application orchestration
- Dapr — Distributed application runtime
- BERT/METEOR/BLEU/COMET scorers — ML model evaluation metrics
Key Components
- DocumentGenerationProcess (service) — Defines the SK Process workflow for document generation with user approval cycles
  dotnet/samples/Demos/ProcessFrameworkWithSignalR/src/ProcessFramework.Aspire.SignalR.ProcessOrchestrator/DocumentGenerationProcess.cs
- LocalEventProxyChannel (service) — Bridges SK Process external events to SignalR hub connections for real-time communication
  dotnet/samples/Demos/ProcessFrameworkWithSignalR/src/ProcessFramework.Aspire.SignalR.ProcessOrchestrator/LocalEventProxyChannel.cs
- SignalRDocumentationGenerationClient (service) — TypeScript client managing SignalR connections for document generation events
  dotnet/samples/Demos/ProcessFrameworkWithSignalR/src/ProcessFramework.Aspire.SignalR.ReactFrontend/src/services/signalr/documentGeneration.client.ts
- GrpcDocumentationGenerationClient (service) — Auto-generated gRPC client for document generation with streaming capabilities
  dotnet/samples/Demos/ProcessWithCloudEvents/ProcessWithCloudEvents.Client/src/services/grpc/gen/documentGeneration.client.ts
- FastAPI ML Evaluation Server (service) — Provides BERT, METEOR, BLEU, and COMET scoring endpoints for text quality evaluation
  dotnet/samples/Demos/QualityCheck/python-server/app/main.py
- ProcessFrameworkHttpClient (service) — HTTP client for REST API interactions with the process orchestrator backend
  dotnet/samples/Demos/ProcessFrameworkWithSignalR/src/ProcessFramework.Aspire.SignalR.ReactFrontend/src/services/signalr/ProcessFrameworkClient.ts
- DocumentInfo (model) — Serializable data model for document state shared across process steps
  dotnet/samples/Demos/ProcessFrameworkWithSignalR/src/ProcessFramework.Aspire.SignalR.ProcessOrchestrator/Models/DocumentInfo.cs
- SimpleChat (component) — React component rendering chat interface with FluentUI styling for user-assistant conversations
  dotnet/samples/Demos/ProcessFrameworkWithSignalR/src/ProcessFramework.Aspire.SignalR.ReactFrontend/src/components/SimpleChat.tsx
- Copilot Studio Bot (handler) — FastAPI endpoints for integrating with Microsoft Copilot Studio as a skill
  python/samples/demos/copilot_studio_skill/src/api/app.py
Sub-Modules
- dotnet/ — Full-featured C# implementation with enterprise integrations and advanced process orchestration
- python/ — Python SDK with extensive ML evaluation capabilities and agent frameworks
- ProcessFrameworkWithSignalR — Complete document generation workflow with real-time UI and process orchestration
- QualityCheck python-server — Standalone Python service providing text evaluation metrics for quality assessment
Configuration
dotnet/samples/Demos/QualityCheck/python-server/app/main.py (python-pydantic)
  - sources: List[str] (default unknown)
  - summaries: List[str] (default unknown)
dotnet/samples/Demos/QualityCheck/python-server/app/main.py (python-pydantic)
  - sources: List[str] (default unknown)
  - translations: List[str] (default unknown)
python/samples/concepts/agents/azure_ai_agent/azure_ai_agent_structured_outputs.py (python-pydantic)
  - planet: Planets (default unknown)
  - mass: float (default unknown)
python/samples/concepts/agents/openai_assistant/openai_assistant_structured_outputs.py (python-pydantic)
  - response: str (default unknown)
  - items: list[str] (default unknown)
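The field listings above describe small request models. A pydantic-free dataclass sketch of the first two (class names here are hypothetical; the originals are pydantic models in the QualityCheck server's main.py, and the real defaults are unknown):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SummaryQualityRequest:
    # Mirrors the (sources, summaries) field pair from main.py
    sources: List[str] = field(default_factory=list)
    summaries: List[str] = field(default_factory=list)

@dataclass
class TranslationQualityRequest:
    # Mirrors the (sources, translations) field pair from main.py
    sources: List[str] = field(default_factory=list)
    translations: List[str] = field(default_factory=list)
```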
Frequently Asked Questions
What is semantic-kernel used for?
microsoft/semantic-kernel is Microsoft's enterprise framework for building AI agents and multi-agent systems with LLM orchestration. It is a 9-component ML inference system written in C#, well-connected with clear data flow between components. The codebase contains 4274 files.
How is semantic-kernel architected?
semantic-kernel is organized into 4 architecture layers: Core SDK, Samples & Demos, Process Orchestration, Frontend Integration. Well-connected — clear data flow between components. This layered structure enables tight integration between components.
How does data flow through semantic-kernel?
Data moves through 6 stages: User Request → Process Initialization → AI Generation → User Review → Iteration Loop → .... User requests trigger AI processes that flow through approval cycles with real-time feedback via SignalR or gRPC streaming. This pipeline design reflects a complex multi-stage processing system.
What technologies does semantic-kernel use?
The core stack includes Microsoft Semantic Kernel (AI orchestration and agent framework), SignalR (Real-time web communication), gRPC (High-performance RPC communication), React (Frontend UI framework), FastAPI (Python web framework for ML services), Pydantic (Data validation and serialization), and 4 more. This broad technology surface reflects a mature project with many integration points.
What system dynamics does semantic-kernel have?
semantic-kernel exhibits 3 data pools (Process State Store, SignalR Hub), 3 feedback loops, 3 delays, and 4 control points. The feedback loops handle retries, circuit breaking, and event broadcasting. These runtime behaviors shape how the system responds to load, failures, and configuration changes.
What design patterns does semantic-kernel use?
5 design patterns detected: Process Framework, Dual Communication, Generated Code Integration, Pydantic Data Models, Aspire Orchestration.
Analyzed on March 31, 2026 by CodeSea. Written by Karolina Sarna.