How to install CrewAI with a GUI

The capabilities of neural network models are growing every day, and researchers and commercial companies are investing more and more in training them. But on their own, these models can’t act autonomously. To solve specific tasks, they need guidance: extended context and explicit direction. This approach isn’t always efficient, especially for complex problems.
But what if we let a neural network act autonomously and gave it a set of tools for interacting with the external world? You’d get an AI agent capable of solving tasks by independently deciding which tools to use. It sounds complicated, but it works very well. Even for an advanced user, however, creating an AI agent from scratch can be a non-trivial task.
The reason is that most popular libraries lack a graphical user interface and require interaction through a programming language such as Python. This drastically raises the barrier to entry and makes AI agents too complex to implement independently. This is exactly the case with CrewAI.
What is CrewAI
CrewAI is a very popular and convenient library, but it doesn’t ship with a GUI by default. This prompted independent developers to create an unofficial interface. CrewAI’s open source nature made the task much easier, and the community soon released CrewAI Studio.
Developers and enthusiasts gained deeper insight into the system’s architecture and could build tools tailored to specific tasks. Regular users could create AI agents without writing a single line of code. It became easier to assign tasks and manage access to neural networks and tools. It also allowed for exporting and importing agents from server to server and sharing them with friends, colleagues, or the open source community.
A separate advantage of CrewAI Studio is its deployment flexibility. It can be installed as a regular app or as a Docker container - the preferred method since it includes all necessary libraries and components for running the system.
Installation
Update your OS packages and installed apps to the latest versions:
sudo apt update && sudo apt -y upgrade
Use the automatic driver installation script or follow our guide Install NVIDIA® drivers in Linux:
sudo ubuntu-drivers autoinstall
Reboot the server for changes to take effect:
sudo shutdown -r now
After reconnecting via SSH, install the Apache 2 web server utilities, which include htpasswd, the generator for the .htpasswd file used in basic user authentication:
sudo apt install -y apache2-utils
Install Docker Engine using the official shell script:
curl -sSL https://get.docker.com/ | sh
Add Docker Compose to the system:
sudo apt install -y docker-compose
Clone the repository:
git clone https://github.com/strnad/CrewAI-Studio.git
Navigate to the downloaded directory:
cd CrewAI-Studio
Create a .htpasswd file for the usergpu user. You’ll be prompted to enter a password twice:
htpasswd -c .htpasswd usergpu
Now edit the container deployment file. By default, there are two containers:
sudo nano docker-compose.yaml
Delete the section:
ports:
  - "5432:5432"
And add the following service:
nginx:
  image: nginx:latest
  container_name: crewai_nginx
  ports:
    - "80:80"
  volumes:
    - ./nginx.conf:/etc/nginx/nginx.conf:ro
    - ./.htpasswd:/etc/nginx/.htpasswd:ro
  depends_on:
    - web
Nginx will need a config file, so create one:
sudo nano nginx.conf
Paste in the following:
events {}

http {
    server {
        listen 80;

        location / {
            proxy_pass http://web:8501;

            # WebSocket headers
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";

            # Forward headers
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            auth_basic "Restricted Content";
            auth_basic_user_file /etc/nginx/.htpasswd;
        }
    }
}
All important service variables for CrewAI are defined in the .env file. Open the .env_example file for editing:
nano .env_example
Add the following lines:
OLLAMA_HOST="http://open-webui:11434"
OLLAMA_MODELS="ollama/llama3.2:latest"
And add Postgres config:
POSTGRES_USER="admin"
POSTGRES_PASSWORD="your_password"
POSTGRES_DB="crewai_db"
AGENTOPS_ENABLED="False"
Now copy the example file and rename it to .env so the system can read it during container deployment:
cp .env_example .env
In this example, we’ll use local models with inference handled by Ollama. We recommend following our guide Open WebUI: All in one, and during deployment adding -e OLLAMA_HOST=0.0.0.0 so that CrewAI can connect directly to the Ollama container. Download the desired model (e.g., llama3.2:latest) via the WebUI or by connecting to the container console and running:
ollama pull llama3.2:latest
Once everything is set up, launch the deployment:
sudo docker-compose up -d --build
Now, visiting http://[your_server_ip]/ will prompt you for login credentials. Once you enter them correctly, the CrewAI interface will appear.
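If you prefer to verify the setup from a script, a minimal Python check of the basic-auth protection could look like this; the server address and credentials are placeholders for your own values:
# pip install requests
import requests

# Replace with your real server address and the credentials created via htpasswd
resp = requests.get("http://your_server_ip/", auth=("usergpu", "your_password"))
print(resp.status_code)  # 200: credentials accepted; 401: rejected by nginx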
Features
Let’s explore the key entities CrewAI uses. This will help you understand how to configure workflows. The central entity is the Agent, an autonomous task executor. Each agent has attributes that help it fulfill its duties (a minimal Python sketch follows the list):
- Role. A brief, 2-3 word job description.
- Backstory. Optional; helps the language model understand how the agent should behave and what experiences to rely on.
- Goal. The objective the agent should pursue.
- Allow delegation. Enables the agent to delegate tasks (or parts of them) to others.
- Verbose. Tells the agent to log detailed actions.
- LLM Provider and Model. Specifies the model and provider to use.
- Temperature. Determines response creativity. Higher = more creative.
- Max iterations. Number of tries the agent has to succeed, acting as a safeguard (e.g., against infinite loops).
Agents operate by iteratively analyzing input, reasoning, and drawing conclusions using available tools.
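CrewAI Studio sets all of these attributes through the GUI, but they map directly onto CrewAI’s Python API. Here is a rough sketch of an agent definition; the model name, connection details, and attribute values are illustrative and mirror the .env from the installation above:
from crewai import Agent, LLM

# Local model served by Ollama; values mirror the .env used during installation
llm = LLM(
    model="ollama/llama3.2:latest",
    base_url="http://open-webui:11434",
    temperature=0.7,  # higher = more creative responses
)

analyst = Agent(
    role="Oncology Drug Pipeline Analyst",
    goal="Track new cancer drug developments from early stages to clinical trials",
    backstory="A pharma analyst with years of experience reading trial registries",
    allow_delegation=False,  # don't hand work off to other agents
    verbose=True,            # log detailed actions
    llm=llm,
    max_iter=15,             # safeguard against infinite loops
)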
Input is defined by a Task entity. Each task includes a description, an assigned agent and optionally an expected result. Tasks run sequentially by default but can be parallelized using the Async execution flag.
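In code, the same task definition might look like this sketch (the description and expected output are illustrative):
from crewai import Task

report_task = Task(
    description="Summarize this week's global progress in cancer drug development",
    expected_output="A short report that cites a source for each finding",
    agent=analyst,          # the agent defined in the previous sketch
    async_execution=False,  # set to True to run this task in parallel
)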
Autonomous agent work is supported by Tools that enable real-world interaction. CrewAI includes tools for web searches, site parsing, API calls, and file handling, enhancing context and helping agents achieve goals.
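Tools are attached to an agent at creation time. As an illustration, the website-scraping tool bundled in the crewai_tools package could be wired in like this, reusing the llm object from the earlier sketch:
from crewai import Agent
from crewai_tools import ScrapeWebsiteTool

scraper = ScrapeWebsiteTool()  # lets the agent fetch and read web pages

scout = Agent(
    role="Scientific Literature and Innovation Scout",
    goal="Scan scientific publications and patents related to oncology",
    backstory="A research librarian at home in preprint servers",
    tools=[scraper],  # the agent decides when to call the tool
    llm=llm,
)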
Lastly, there is the Crew entity. It unites agents with different roles into a team to tackle complex problems. They can communicate, delegate, review, and correct one another, essentially forming a collective intelligence.
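Putting it together, a crew is simply agents plus tasks plus an execution process. A minimal sketch that reuses the objects defined above:
from crewai import Crew, Process

crew = Crew(
    agents=[analyst, scout],
    tasks=[report_task],
    process=Process.sequential,  # run tasks one after another
    verbose=True,
)

result = crew.kickoff()  # the code equivalent of clicking Run Crew! in the Studio UI
print(result)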
Usage
Now that you’re familiar with the entities, let’s build and run a minimal CrewAI workflow. In this example, we’ll track global progress in cancer drug development.
We’ll use three agents:
- Oncology Drug Pipeline Analyst - tracks new developments from early stages to clinical trials.
- Regulatory and Approval Watchdog - monitors new drug approvals and regulatory changes.
- Scientific Literature and Innovation Scout - scans scientific publications and patents related to oncology.
Open the Agents section and create the first agent:

For now, we’re using the previously downloaded llama3.2:latest model, but in a real scenario, choose the one that best fits the task. Repeat the process for the remaining agents and move on to task creation.

Gather all agents into a crew and assign the prepared task to them:

Activate necessary tools from the list:

Finally, go to the Kickoff! page and click Run Crew! After a few iterations, the system will return a result, such as:

Before we finish, let’s check the Import/export section. Your workflow or crew can be exported as JSON to transfer to another CrewAI server. You can also create a Single-Page Application (SPA) with a single click - perfect for production deployment:

Conclusion
CrewAI significantly simplifies the creation of AI agents, allowing them to be embedded in any application or used standalone. The library is based on the idea of distributed intelligence, where each agent is a domain expert, and the combined team outperforms a single generalist agent.
Since it’s written in Python, CrewAI integrates easily with ML platforms and tools. Its open source nature allows for extension through third-party modules. Inter-agent communication reduces token usage by distributing context processing.
As a result, complex tasks are completed faster and more efficiently. The lower entry barrier provided by CrewAI Studio expands the reach of AI agents and multi-agent systems. And support for local models ensures better control over sensitive data.
Updated: 12.08.2025
Published: 23.07.2025