crewaiinc/crewai
Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
Coordinates multiple AI agents with assigned roles to collaborate on complex multi-step tasks
Users define agents with roles and crews with tasks, which are orchestrated by the Crew class. Tasks are executed by assigned agents who call LLMs and tools to produce outputs. Agents can delegate subtasks to other agents via the A2A protocol. The system maintains conversation context and produces structured outputs for each task and the overall crew execution.
Under the hood, the system uses 3 feedback loops, 3 data pools, and 5 control points to manage its runtime behavior.
A 7-component ML inference system. 1,091 files analyzed. Data flows through 7 distinct pipeline stages.
How Data Flows Through the System
- Agent Definition — Users create Agent instances with role, goal, backstory, and assigned tools. Each agent gets an LLM instance for reasoning and can be configured for delegation capability.
- Task Creation — Users define Task instances with descriptions, expected outputs, and agent assignments. Tasks can reference other tasks as context dependencies. [Agent → Task]
- Crew Assembly — The Crew class combines agents and tasks with a process type (sequential, hierarchical, or parallel) and execution configuration like memory and planning modes. [Task → Crew]
- Task Orchestration — The Crew's kickoff method orchestrates task execution according to the configured process, managing dependencies and agent assignments through the TaskManager. [Crew → TaskOutput]
- Agent Execution — Each agent executes its assigned task by making LLM calls with system prompts, using available tools, and potentially delegating subtasks via A2A protocol to other agents. [Task → TaskOutput]
- Inter-Agent Communication — Agents communicate through the A2A protocol, discovering each other via AgentCard advertisements and exchanging A2AMessage instances with optional UI extensions. [AgentCard → A2AMessage]
- Result Aggregation — The Crew collects all TaskOutput instances and produces a CrewOutput containing the final results, usage statistics, and any generated artifacts. [TaskOutput → CrewOutput]
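The seven stages above can be sketched as a minimal, dependency-free Python analogue. All names here are illustrative stand-ins: the real crewai package wires LLM calls and tool use into each agent's execution, which this sketch replaces with a plain string.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    role: str
    goal: str

    def execute(self, task: "Task") -> str:
        # Stand-in for the real LLM call + tool loop.
        return f"{self.role}: {task.description} -> done"

@dataclass
class Task:
    description: str
    agent: Agent
    context: list["Task"] = field(default_factory=list)

@dataclass
class Crew:
    agents: list[Agent]
    tasks: list[Task]

    def kickoff(self) -> list[str]:
        # Sequential process: run tasks in order, each by its assigned agent.
        return [t.agent.execute(t) for t in self.tasks]

researcher = Agent(role="Researcher", goal="find facts")
writer = Agent(role="Writer", goal="draft report")
tasks = [Task("gather sources", researcher), Task("write summary", writer)]
print(Crew([researcher, writer], tasks).kickoff())
```

The sequential process shown here is the default; the hierarchical and parallel processes mentioned below change only the `kickoff` scheduling, not the Agent/Task contracts.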
Data Models
The data structures that flow between stages — the contracts that hold the system together.
lib/crewai/src/crewai/agent/core.py — Pydantic model with role: str, goal: str, backstory: str, llm: LLM, tools: list[BaseTool], memory: bool, max_iter: int, delegation: bool
Created with role and capabilities, assigned to tasks within a crew, executes tasks by calling LLM and tools, may delegate to other agents
lib/crewai/src/crewai/task.py — Pydantic model with description: str, agent: Agent, expected_output: str, tools: list[BaseTool], async_execution: bool, context: list[Task]
Defined with description and assigned agent, executed by crew orchestrator, produces TaskOutput with result
lib/crewai/src/crewai/crew.py — Pydantic model with agents: list[Agent], tasks: list[Task], process: Process, verbose: bool, memory: bool, planning: bool
Assembled with agents and tasks, orchestrates task execution according to process type, returns CrewOutput with final results
lib/crewai/src/crewai/a2a/extensions/a2ui/models.py — Pydantic model with type: str, surface_id: str, component: dict[str, Any], data_model: dict[str, Any]
Created by agents to communicate UI updates or data changes, validated against A2UI schema, processed by client extensions
a2a.types — dict with capabilities: AgentCapabilities, interface: AgentInterface, provider: AgentProvider, security: list[SecurityScheme]
Published by agents to advertise their capabilities, used by other agents for discovery and delegation decisions
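The TaskOutput → CrewOutput contract can be sketched with plain dataclasses. Field names follow the models above; the aggregation rule (final result equals the last task's raw output) is an illustrative assumption, not confirmed crewai behavior.

```python
from dataclasses import dataclass

@dataclass
class TaskOutput:
    description: str
    raw: str       # the agent's final answer for the task
    agent: str     # role of the agent that produced it

@dataclass
class CrewOutput:
    raw: str                       # final aggregated result
    tasks_output: list[TaskOutput]

def aggregate(outputs: list[TaskOutput]) -> CrewOutput:
    # Illustrative rule: the crew's final result is the last task's output.
    return CrewOutput(raw=outputs[-1].raw, tasks_output=outputs)

outs = [TaskOutput("research", "facts", "Researcher"),
        TaskOutput("write", "report", "Writer")]
print(aggregate(outs).raw)
```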
Hidden Assumptions
Things this code relies on but never validates. These dependencies cause silent failures when the system changes.
OAuth2 provider names in settings exactly match Python module names in crewai.cli.authentication.providers directory, and provider class names follow exact CamelCase conversion from snake_case (e.g. 'work_os' → 'WorkOsProvider')
If this fails: An ImportError or AttributeError crashes CLI authentication when the provider name doesn't map to an existing module/class; the user gets cryptic Python import errors instead of an 'unsupported provider' message
lib/crewai/src/crewai/cli/authentication/main.py:ProviderFactory.from_settings
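The fragile naming convention boils down to a snake_case-to-CamelCase conversion like the hypothetical helper below (the real ProviderFactory resolves the class via importlib against the providers package; this only illustrates why 'work_os' must map exactly to 'WorkOsProvider').

```python
def provider_class_name(provider: str) -> str:
    # 'work_os' -> 'WorkOsProvider'; any deviation in module or class
    # naming surfaces as ImportError/AttributeError, not a friendly message.
    return "".join(part.capitalize() for part in provider.split("_")) + "Provider"

print(provider_class_name("work_os"))  # WorkOsProvider
print(provider_class_name("google"))   # GoogleProvider
```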
Project has a valid pyproject.toml file with CrewAI project metadata in current working directory when get_project_name(require=True) is called
If this fails: Deployment commands fail with 'No UUID provided, project pyproject.toml not found' error, but the actual project structure requirements are never documented or validated upfront
lib/crewai/src/crewai/cli/deploy/main.py:DeployCommand.__init__
Enterprise URL endpoint '/auth/parameters' is accessible within a 30-second timeout and returns valid JSON OAuth2 configuration matching the expected schema structure
If this fails: Enterprise configuration fails with generic HTTP or JSON decode errors, giving no guidance about what the OAuth2 endpoint should return or network requirements
lib/crewai/src/crewai/cli/enterprise/main.py:EnterpriseConfigureCommand._fetch_oauth_config
API response from get_organizations() returns list of dictionaries with 'uuid' and 'name' keys, and Settings class has org_name and org_uuid attributes that can be directly assigned
If this fails: Organization switching silently fails or crashes with KeyError if API response format changes or Settings schema is incompatible
lib/crewai/src/crewai/cli/organization/main.py:OrganizationCommand.switch
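A hedged guard against the response-shape assumption might look like the sketch below (hypothetical helper, not the actual CLI code): validating the assumed 'uuid'/'name' keys up front turns a silent KeyError into an actionable error.

```python
def pick_org(orgs: list[dict], name: str) -> tuple[str, str]:
    # Validate the assumed 'uuid'/'name' keys instead of indexing blindly,
    # so a changed API response fails with a clear message, not a KeyError.
    for org in orgs:
        if "uuid" not in org or "name" not in org:
            raise ValueError(f"unexpected organization record: {org!r}")
        if org["name"] == name:
            return org["name"], org["uuid"]
    raise LookupError(f"organization {name!r} not found")

print(pick_org([{"uuid": "abc-123", "name": "acme"}], "acme"))
```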
GitHub API is accessible, rate limiting allows unauthenticated requests, and repository names follow exact 'template_' prefix convention in crewAIInc organization
If this fails: Template listing fails with HTTP errors or returns empty results if GitHub is unreachable, rate limited, or if naming conventions change
lib/crewai/src/crewai/cli/remote_template/main.py:TemplateCommand._fetch_templates
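The naming-convention dependency amounts to a prefix filter like the sketch below (illustrative; the real command fetches repository names from the GitHub API over HTTP first). Anything not matching the prefix silently disappears, which is why a rename in the crewAIInc organization makes templates vanish from the CLI.

```python
def list_templates(repo_names: list[str]) -> list[str]:
    # Keep only repos following the 'template_' convention, stripping the prefix.
    prefix = "template_"
    return [n[len(prefix):] for n in repo_names if n.startswith(prefix)]

repos = ["template_crew_basic", "crewai", "template_flow_demo"]
print(list_templates(repos))  # ['crew_basic', 'flow_demo']
```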
Pydantic warnings module structure remains stable: warnings have a category whose __module__ attribute equals 'pydantic.warnings'
If this fails: Warning suppression breaks silently when Pydantic updates warning system, potentially flooding users with deprecation warnings
lib/crewai/src/crewai/__init__.py:_suppress_pydantic_deprecation_warnings
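The suppression pattern hinges on an internal attribute of Pydantic's warning classes. The sketch below reproduces the idea with the stdlib warnings machinery and a fake warning class standing in for Pydantic's (everything here is illustrative, not crewai's actual filter code):

```python
import warnings

class FakePydanticDeprecationWarning(DeprecationWarning):
    pass

# Simulate pydantic's warning category living in 'pydantic.warnings'.
FakePydanticDeprecationWarning.__module__ = "pydantic.warnings"

def is_pydantic_warning(category: type) -> bool:
    # Breaks silently if pydantic ever moves or renames its warning classes.
    return getattr(category, "__module__", "") == "pydantic.warnings"

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    warnings.warn("old API", FakePydanticDeprecationWarning)
    kept = [w for w in caught if not is_pydantic_warning(w.category)]

print(len(caught), len(kept))  # 1 0
```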
a2a.types module provides exact type definitions (AgentCapabilities, AgentCardSignature, etc.) and crewai.a2a.extensions.server module exists when A2A functionality is used
If this fails: A2A protocol features crash with ImportError if optional a2a dependency isn't installed or has incompatible API changes
lib/crewai/src/crewai/a2a/config.py:imports
A2UI messages contain valid JSON objects that can be extracted and validated against specific schema versions (v0.9 vs standard), and validation functions exist for each version
If this fails: A2A UI extensions fail silently or crash when processing malformed JSON or when validation functions don't match message format versions
lib/crewai/src/crewai/a2a/extensions/a2ui/client_extension.py:extract_a2ui_json_objects
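Extracting embedded JSON objects from a message body can be sketched with json.JSONDecoder.raw_decode (illustrative, not the extension's actual parser): malformed fragments are skipped rather than crashing the caller, which addresses the silent-failure mode noted above.

```python
import json

def extract_json_objects(text: str) -> list[dict]:
    # Scan for '{' and try to decode an object at each candidate position;
    # fragments that fail to parse are skipped instead of raising.
    decoder = json.JSONDecoder()
    objects, i = [], 0
    while (i := text.find("{", i)) != -1:
        try:
            obj, end = decoder.raw_decode(text, i)
            if isinstance(obj, dict):
                objects.append(obj)
            i = end
        except json.JSONDecodeError:
            i += 1
    return objects

msg = 'update: {"type": "a2ui", "surface_id": "main"} and noise {oops}'
print(extract_json_objects(msg))
```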
Enterprise OAuth2 configuration responses are small enough to fit in memory and JSON parsing can handle the full response structure without streaming
If this fails: Large enterprise configurations or complex OAuth responses could cause memory issues or JSON parsing failures
lib/crewai/src/crewai/cli/enterprise/main.py:oauth_endpoint
JWT tokens from OAuth2 providers remain valid throughout CLI command execution and system clock is synchronized for token expiration validation
If this fails: Authentication commands fail mid-execution if tokens expire or if system clock drift causes premature token rejection
lib/crewai/src/crewai/cli/authentication/main.py:validate_jwt_token
System Behavior
How the system operates at runtime — where data accumulates, what loops, what waits, and what controls what.
Data Pools
- Conversation history and context maintained per agent across task executions
- Graph of task relationships and execution order constraints
- Registered client and server extensions for A2A protocol processing
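The task-dependency pool implies a topological ordering over tasks. A minimal sketch with the stdlib graphlib module (illustrative; the real TaskManager also handles parallel and hierarchical scheduling):

```python
from graphlib import TopologicalSorter

# Each task maps to the set of tasks whose outputs it consumes as context.
deps = {
    "write_report": {"gather_sources", "analyze_data"},
    "analyze_data": {"gather_sources"},
    "gather_sources": set(),
}

# static_order yields a valid execution order respecting all dependencies.
order = list(TopologicalSorter(deps).static_order())
print(order)  # ['gather_sources', 'analyze_data', 'write_report']
```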
Feedback Loops
- Agent Self-Reflection (self-correction, balancing) — Trigger: Task completion with unsatisfactory results. Action: Agent reviews output against expected_output and retries with refined approach. Exit: Max iterations reached or satisfactory output achieved.
- Delegation Retry (retry, balancing) — Trigger: A2A communication failure or timeout. Action: Agent retries delegation with backoff or falls back to local execution. Exit: Successful delegation or max retries exceeded.
- Crew Planning Loop (planning-loop, reinforcing) — Trigger: Planning mode enabled in crew configuration. Action: Crew analyzes tasks, creates execution plan, executes, then reviews and refines plan. Exit: All tasks completed successfully.
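The self-correction loop above reduces to a bounded retry pattern. In this sketch the acceptance check is a plain predicate; the real agent judges its output against expected_output via the LLM, and max_iter caps the loop exactly as described under Control Points.

```python
from typing import Callable

def reflect_and_retry(attempt: Callable[[int], str],
                      acceptable: Callable[[str], bool],
                      max_iter: int = 25) -> str:
    # Try, check the output against the expectation, refine, and stop at
    # max_iter so a never-satisfied check cannot loop forever.
    result = ""
    for i in range(max_iter):
        result = attempt(i)
        if acceptable(result):
            return result
    return result  # best effort after exhausting iterations

out = reflect_and_retry(lambda i: f"draft v{i}", lambda r: r.endswith("v2"))
print(out)  # draft v2
```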
Delays
- LLM API Calls (async-processing, ~variable) — Agents wait for LLM responses during task execution
- Tool Execution (async-processing, ~variable) — Agents pause while external tools complete operations
- A2A Authentication (async-processing, ~OAuth2 flow dependent) — Initial agent-to-agent communication requires auth handshake
- Deployment Pipeline (async-processing, ~cloud deployment time) — CLI deployment commands wait for cloud infrastructure provisioning
Control Points
- Process Type (architecture-switch) — Controls: Task execution pattern - sequential, parallel, or hierarchical. Default: Process.sequential
- Agent Delegation (feature-flag) — Controls: Whether agents can delegate tasks to other agents via A2A. Default: delegation: bool = True
- Memory Mode (feature-flag) — Controls: Conversation context preservation across task executions. Default: memory: bool = False
- Verbose Logging (runtime-toggle) — Controls: Detailed execution logging and progress reporting. Default: verbose: bool = False
- Max Iterations (threshold) — Controls: Maximum retry attempts for agent task execution. Default: max_iter: int = 25
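The five control points map onto configuration fields with the defaults listed above. A sketch as a plain dataclass (the real crewai models are Pydantic, and the class and enum here are illustrative stand-ins):

```python
from dataclasses import dataclass
from enum import Enum

class Process(Enum):
    sequential = "sequential"
    parallel = "parallel"
    hierarchical = "hierarchical"

@dataclass
class RuntimeControls:
    process: Process = Process.sequential  # architecture switch
    delegation: bool = True                # feature flag
    memory: bool = False                   # feature flag
    verbose: bool = False                  # runtime toggle
    max_iter: int = 25                     # threshold

controls = RuntimeControls()
print(controls.process.value, controls.max_iter)  # sequential 25
```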
Technology Stack
- Pydantic — Provides data validation and serialization for all agent, task, and configuration models
- httpx — Handles HTTP communication for A2A protocol, API calls, and authentication flows
- Rich — Powers the CLI with colored output, progress bars, tables, and interactive prompts
- Click — Command-line interface framework for all CLI commands and argument parsing
- JWT/PyJWT — JSON Web Token handling for A2A authentication and enterprise integrations
- pytest — Testing framework with async support, subprocess mocking, and VCR for HTTP recording
- Code formatting and linting with comprehensive rule sets for code quality
Key Components
- Agent (executor) — Executes tasks by orchestrating LLM calls, tool usage, and potentially delegating to other agents via the A2A protocol
  lib/crewai/src/crewai/agent/core.py
- Crew (orchestrator) — Orchestrates multi-agent workflows by assigning tasks to agents and managing execution flow (sequential, parallel, or hierarchical)
  lib/crewai/src/crewai/crew.py
- Flow (orchestrator) — Provides event-driven control for complex multi-agent workflows with conditional branching and state management
  lib/crewai/src/crewai/flow/flow.py
- A2AClientExtension (adapter) — Processes A2A protocol messages between agents, handling UI extensions and conversation state
  lib/crewai/src/crewai/a2a/extensions/a2ui/client_extension.py
- ProviderFactory (factory) — Creates OAuth2 authentication providers for CLI commands based on configuration settings
  lib/crewai/src/crewai/cli/authentication/main.py
- TaskManager (scheduler) — Manages task dependencies and execution order within crews, handling sequential and parallel execution patterns
  lib/crewai/src/crewai/crew.py
- PlusAPIMixin (adapter) — Provides authenticated API access to CrewAI Plus services for deployment and organization management
  lib/crewai/src/crewai/cli/command.py
Frequently Asked Questions
What is crewAI used for?
crewaiinc/crewai coordinates multiple AI agents with assigned roles to collaborate on complex multi-step tasks. It is a 7-component ML inference system written in Python. Data flows through 7 distinct pipeline stages, and the codebase contains 1,091 files.
How is crewAI architected?
crewAI is organized into 4 architecture layers: CLI & Management, Core Orchestration, A2A Protocol, Tools & Knowledge. Data flows through 7 distinct pipeline stages. This layered structure keeps concerns separated and modules independent.
How does data flow through crewAI?
Data moves through 7 stages: Agent Definition → Task Creation → Crew Assembly → Task Orchestration → Agent Execution → Inter-Agent Communication → Result Aggregation. Users define agents with roles and crews with tasks, which are orchestrated by the Crew class. Tasks are executed by assigned agents who call LLMs and tools to produce outputs. Agents can delegate subtasks to other agents via the A2A protocol. The system maintains conversation context and produces structured outputs for each task and the overall crew execution. This pipeline design reflects a complex multi-stage processing system.
What technologies does crewAI use?
The core stack includes Pydantic (Provides data validation and serialization for all agent, task, and configuration models), httpx (Handles HTTP communication for A2A protocol, API calls, and authentication flows), Rich (Powers the CLI with colored output, progress bars, tables, and interactive prompts), Click (Command-line interface framework for all CLI commands and argument parsing), JWT/PyJWT (JSON Web Token handling for A2A authentication and enterprise integrations), pytest (Testing framework with async support, subprocess mocking, and VCR for HTTP recording), and 1 more. A focused set of dependencies that keeps the build manageable.
What system dynamics does crewAI have?
crewAI exhibits 3 data pools (Agent Memory, Task Dependencies), 3 feedback loops, 5 control points, and 4 delays. The feedback loops handle self-correction and retry. These runtime behaviors shape how the system responds to load, failures, and configuration changes.
What design patterns does crewAI use?
5 design patterns detected: Role-Based Agent Architecture, Hierarchical Task Delegation, Protocol Extension System, CLI-First Developer Experience, Pydantic Configuration.
Analyzed on April 20, 2026 by CodeSea. Written by Karolina Sarna.