To run an LLM engine in a Docker container, you can use Ollama. The steps below set up Ollama with Docker Compose, alongside an application server and a Neo4j database:
### Prerequisites
- Ensure you have Docker installed on your system.
- For Linux, install the NVIDIA Container Toolkit.
- For Windows 10/11, install the latest NVIDIA driver and use the WSL2 backend.
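Before bringing up the stack, it is worth confirming that Docker is running and that the container runtime can see the GPU. A quick sanity check, assuming the NVIDIA Container Toolkit is configured (on Linux or the WSL2 backend):

```shell
# Check that Docker is installed and the daemon is reachable
docker --version
docker info > /dev/null && echo "Docker daemon is reachable"

# Check GPU passthrough: this should print the same GPU table
# that nvidia-smi prints on the host
docker run --rm --gpus all ubuntu nvidia-smi
```

If the last command fails, revisit the NVIDIA Container Toolkit installation before continuing; without it, the containers will fall back to CPU-only inference.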
### Create a Docker Compose File
Create a `compose.yaml` file with the following content:
```yaml
services:
  server:
    build: .
    ports:
      - 8000:8000
    env_file:
      - .env
    depends_on:
      database:
        condition: service_healthy
  database:
    image: neo4j:5.11
    ports:
      - 7474:7474
      - 7687:7687
    environment:
      - NEO4J_AUTH=${NEO4J_USERNAME}/${NEO4J_PASSWORD}
    healthcheck:
      test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider localhost:7474 || exit 1"]
      interval: 5s
      timeout: 3s
      retries: 5
  ollama:
    build: .
    ports:
      - 7860:7860
    env_file:
      - .env
    depends_on:
      server:
        condition: service_started
```
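The `env_file: .env` entries and the `${NEO4J_USERNAME}`/`${NEO4J_PASSWORD}` substitutions above expect a `.env` file in the same directory as `compose.yaml`. A minimal sketch — the values here are placeholders to replace, not defaults:

```
NEO4J_USERNAME=neo4j
NEO4J_PASSWORD=changeme
```

Compose substitutes these variables into `NEO4J_AUTH` at startup; keep the file out of version control since it holds credentials.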
### Run the Containers
From the directory containing `compose.yaml`, build and start all three services with:
```bash
docker compose up --build
```
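Once the stack is up, a few standard Compose subcommands are useful for checking on it. The service names below match the `compose.yaml` above; the `ollama pull` line assumes the `ollama` service image ships the Ollama CLI:

```shell
# List the services and their health status
docker compose ps

# Follow logs from one service, e.g. the application server
docker compose logs -f server

# Pull a model into the running Ollama service
# (assumes the ollama CLI is available inside that container)
docker compose exec ollama ollama pull llama2

# Stop and remove the containers when finished
docker compose down
```

Note that `depends_on` with `condition: service_healthy` makes the server wait for Neo4j's healthcheck to pass, so the first startup can take a few extra seconds.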
### Access the Application
Open a browser and navigate to `http://localhost:8000` to access the application.
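If the page does not load, you can check from the command line whether each published port is answering. This assumes `curl` is available on the host; the second endpoint is the same one the Neo4j healthcheck probes:

```shell
# Application server: prints the HTTP status code (200 when healthy)
curl -sS -o /dev/null -w "%{http_code}\n" http://localhost:8000

# Neo4j browser on port 7474
curl -sS -o /dev/null -w "%{http_code}\n" http://localhost:7474
```

A connection-refused error usually means the service is still starting or its healthcheck has not passed yet; check `docker compose ps` in that case.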
For more detailed instructions and additional setup options, refer to the Docker documentation on using containers for generative AI development.