🚀 Deploying GPT-Web: A Journey from Concept to Implementation
📖 Introduction
GPT-Web is a straightforward, one-page web interface for the OpenAI ChatGPT API. To use it, you need to first register for an OpenAI API key. All messages and interactions are stored exclusively in your browser's local storage, ensuring complete privacy. Even if you close your browser tab, you can return later to continue the conversation seamlessly.
🌟 Features of GPT-Web
- Privacy: All chats and messages are stored in your browser's local storage, ensuring absolute privacy.
- Customization: Modify the prompt, temperature, and other model settings. Supports multiple models, including GPT-4o.
- Cost-Effective: Utilizing the commercial OpenAI API makes it much cheaper than a ChatGPT Plus subscription.
- Speed: As a single-page web app, it is fast and responsive.
- Mobile Friendly: Fully optimized for mobile use, enabling access from any device.
- Voice Input: Supports voice input, allowing you to converse with ChatGPT and hear responses.
- Pre-Selected Prompts: Comes with a list of ready-to-use prompts for easy initiation.
- Export Functionality: Export your chats as a Markdown file, facilitating easy sharing.
- Code Recognition: Recognizes and highlights code blocks, with single-click copy functionality.
- Image Generation: Generate images using the DALL·E model with the prompt "show me an image of …".
- Streaming: Streams API responses, showing real-time generation of responses.
Link to GPT-Web: gpt.sumamazaeem.com
📝 My Experience
On May 15th, while watching the OpenAI announcement of the new model GPT-4o, I was inspired to try it out. Eager to explore its functionality, I found an open-source web interface, modified it, and deployed it on my subdomain. Here's a detailed breakdown of how I brought GPT-Web to life.
🛠️ How I Deployed GPT-Web
Step 1: Setting Up DNS
I pointed gpt.sumamazaeem.com to the server's IP address via the Cloudflare dashboard.
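Before relying on the new record, it is worth confirming that DNS has propagated. A minimal sketch of that check (the IP-echo service used for comparison is just one option):

```bash
# Resolve the subdomain and compare the answer with the server's public IP
dig +short gpt.sumamazaeem.com
curl -s https://checkip.amazonaws.com
```

Note that with Cloudflare's proxy enabled, `dig` returns Cloudflare's edge IPs rather than the origin server, so the record may need to be in "DNS only" mode for a direct comparison.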
Step 2: Configuring the Server
The application was hosted on GitHub. I configured Git on my server, authenticated it via SSH, and pulled the latest release from my private repository.
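A minimal sketch of that Git-over-SSH setup, assuming a GitHub deploy key; the key comment and repository path below are placeholders, not the real values:

```bash
# Generate a deploy key on the server
ssh-keygen -t ed25519 -C "deploy@example.com"
# After adding the public key to the repository as a deploy key,
# verify the SSH connection to GitHub:
ssh -T git@github.com
# Clone the private repository over SSH
git clone git@github.com:<user>/<repo>.git
```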
Step 3: Running the Container
Using Docker Compose, I started the containers with the following command:

```bash
docker-compose up
```
Below is my docker-compose.yml configuration:
```yaml
version: "3"
services:
  chatgpt_web:
    container_name: chatgpt_web
    restart: always
    depends_on:
      - mocked_api
    env_file:
      - .env
    ports:
      - 5173:5173
    volumes:
      - .:/app
    build:
      context: "."
      dockerfile: Dockerfile
  mocked_api:
    container_name: mocked_api
    build:
      context: "."
      dockerfile: mocked_api/Dockerfile-mockapi
    restart: always
    ports:
      - 5174:5174
```
Step 4: Dockerfile Configuration
Here is the Dockerfile used:
```dockerfile
FROM node:18-alpine
ADD . /work
WORKDIR /work
RUN npm ci
CMD ["npm", "run", "dev:public"]
```
Step 5: NGINX Server Configuration
I installed and configured NGINX to serve as a reverse proxy. The configuration file for NGINX looks like this:
```nginx
server {
    listen 80;
    server_name gpt.sumamazaeem.com;

    # Redirect all HTTP traffic to HTTPS
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name gpt.sumamazaeem.com;

    ssl_certificate /etc/letsencrypt/live/gpt.sumamazaeem.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/gpt.sumamazaeem.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        proxy_pass http://127.0.0.1:5173;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Prevent additional redirects from the backend
        proxy_redirect off;
    }
}
```
Step 6: Creating a Symbolic Link
To enable the new NGINX configuration, a symbolic link was created:
```bash
sudo ln -s /etc/nginx/sites-available/gpt.sumamazaeem.com /etc/nginx/sites-enabled/
```
Step 7: Testing and Reloading NGINX
I tested the configuration and reloaded NGINX:
```bash
sudo nginx -t
sudo systemctl reload nginx
```
Step 8: Obtaining an SSL Certificate
Finally, I obtained an SSL certificate for secure connections:
```bash
sudo apt update
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d gpt.sumamazaeem.com
```
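Once the certificate is issued, a quick sanity check confirms both the redirect and the proxied app; the expected responses are assumptions about a healthy deployment, not captured output:

```bash
curl -I http://gpt.sumamazaeem.com    # should answer with a 301 redirect to HTTPS
curl -I https://gpt.sumamazaeem.com   # should answer from the proxied app
```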
Step 9: Setting Up Certificate Renewal
I set up a cron job to automatically renew the certificate and ensure continuous secure access. The renewal process was tested with a dry run:
```bash
sudo certbot renew --dry-run
```
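Certbot's packages usually install a systemd timer or cron entry automatically, but an explicit `/etc/crontab`-style entry looks roughly like this (the schedule is illustrative):

```bash
# Attempt renewal twice a day; certbot only renews certificates that are
# close to expiry, and the post-hook reloads NGINX to pick up the new files
17 3,15 * * * root certbot renew --quiet --post-hook "systemctl reload nginx"
```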
🚧 Challenges Faced
Conflicting Server Configurations
I encountered an issue where NGINX displayed the default welcome page instead of the application. Upon investigation, I discovered that Certbot had automatically added conflicting server configurations to the default configuration file. This caused NGINX to ignore the custom configuration for gpt.sumamazaeem.com.
After resolving the conflict by commenting out the redundant server names in the default configuration, everything started working as expected.
```console
ubuntu@ip-172-31-25-106:/etc/nginx/sites-available$ sudo nginx -t
nginx: [warn] conflicting server name "gpt.sumamazaeem.com" on 0.0.0.0:80, ignored
nginx: [warn] conflicting server name "gpt.sumamazaeem.com" on 0.0.0.0:443, ignored
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
```
📚 Lessons Learned
While configuring the firewall using ufw, I realized I hadn't allowed port 5173 but was still able to access the application using IP:PORT. This led me to understand the following about Docker and ufw:
- Docker and iptables: Docker manipulates iptables rules to enable communication between containers and the outside world, sometimes bypassing ufw rules.
- Docker Chains: Docker creates its own chains (DOCKER, DOCKER-USER, etc.), which affect how traffic is routed and processed.
- Forwarding Traffic: Traffic routed through the FORWARD chain may be processed by Docker's chains and forwarded to container ports, potentially bypassing ufw rules.
- Chain Priorities: Docker's rules may take precedence due to the order of iptables chains.
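Docker's documentation points to the DOCKER-USER chain as the place for rules that should apply before Docker's own. A hedged sketch of blocking direct external access to the published port (`eth0` is an assumed interface name, and this relies on the host and container ports both being 5173):

```bash
# Drop traffic arriving on the public interface for the container port;
# NGINX still reaches the app via 127.0.0.1, which is unaffected
sudo iptables -I DOCKER-USER -i eth0 -p tcp --dport 5173 -j DROP
```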
🎉 Conclusion
Deploying GPT-Web was both a challenging and rewarding experience. From tweaking an open-source web interface to resolving server configuration conflicts, the journey was filled with valuable learning moments. This project not only deepened my understanding of Docker and NGINX but also enhanced my confidence in managing cloud-based applications. The successful deployment of GPT-Web on gpt.sumamazaeem.com stands as a testament to the power of perseverance and continuous learning.