[add] intel arc
This commit is contained in:
parent c307222f14
commit 0a096f3b79
13 changed files with 550 additions and 0 deletions
201  ollama-intel-arc/LICENSE  Normal file
@@ -0,0 +1,201 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
181  ollama-intel-arc/README.md  Normal file
@@ -0,0 +1,181 @@
# Run Ollama, Stable Diffusion and Automatic Speech Recognition with your Intel Arc GPU

[[Blog](https://blog.eleiton.dev/posts/llm-and-genai-in-docker/)]

Effortlessly deploy a Docker-based solution that uses [Open WebUI](https://github.com/open-webui/open-webui) as your user-friendly
AI interface and [Ollama](https://github.com/ollama/ollama) for integrating Large Language Models (LLMs).

Additionally, you can run [ComfyUI](https://github.com/comfyanonymous/ComfyUI) or [SD.Next](https://github.com/vladmandic/sdnext) docker containers to
streamline Stable Diffusion capabilities.

You can also run an optional docker container with [OpenAI Whisper](https://github.com/openai/whisper) to perform Automatic Speech Recognition (ASR) tasks.

All these containers have been optimized for Intel Arc series GPUs on Linux systems by using the [Intel® Extension for PyTorch](https://github.com/intel/intel-extension-for-pytorch).

![open-webui](resources/open-webui.png)

## Services
1. Ollama
   * Runs llama.cpp and Ollama with IPEX-LLM on your Linux computer with an Intel Arc GPU.
   * Built following the guidelines from [Intel](https://github.com/intel/ipex-llm/blob/main/docs/mddocs/DockerGuides/README.md).
   * Uses the official [Intel ipex-llm docker image](https://hub.docker.com/r/intelanalytics/ipex-llm-inference-cpp-xpu) as the base container.
   * Uses the latest versions of required packages, prioritizing cutting-edge features over stability.
   * Exposes port `11434` for connecting other tools to your Ollama service.

2. Open WebUI
   * Uses the official distribution of Open WebUI.
   * `WEBUI_AUTH` is turned off for authentication-free usage.
   * `ENABLE_OPENAI_API` and `ENABLE_OLLAMA_API` flags are set to off and on, respectively, allowing interactions via Ollama only.
   * `ENABLE_IMAGE_GENERATION` is set to true, allowing you to generate images from the UI.
   * `IMAGE_GENERATION_ENGINE` is set to `automatic1111` (SD.Next is compatible).

3. ComfyUI
   * The most powerful and modular diffusion model GUI, API and backend, with a graph/nodes interface.
   * Uses the official [Intel® Extension for PyTorch](https://pytorch-extension.intel.com/installation?platform=gpu) image as the base container.

4. SD.Next
   * All-in-one AI generative image tool, based on Automatic1111.
   * Uses the official [Intel® Extension for PyTorch](https://pytorch-extension.intel.com/installation?platform=gpu) image as the base container.
   * Uses a customized version of the SD.Next [docker file](https://github.com/vladmandic/sdnext/blob/dev/configs/Dockerfile.ipex), making it compatible with the Intel Extension for PyTorch image.

5. OpenAI Whisper
   * Robust Speech Recognition via Large-Scale Weak Supervision.
   * Uses the official [Intel® Extension for PyTorch](https://pytorch-extension.intel.com/installation?platform=gpu) image as the base container.
## Setup
Run the following commands to start your Ollama instance with Open WebUI:
```bash
$ git clone https://github.com/eleiton/ollama-intel-arc.git
$ cd ollama-intel-arc
$ podman compose up
```

Additionally, if you want to run one or more of the image generation tools, run these commands, each in its own terminal:

For ComfyUI:
```bash
$ podman compose -f docker-compose.comfyui.yml up
```

For SD.Next:
```bash
$ podman compose -f docker-compose.sdnext.yml up
```

If you want to run Whisper for automatic speech recognition, run this command in a different terminal:
```bash
$ podman compose -f docker-compose.whisper.yml up
```
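Most compose providers also accept multiple `-f` flags and merge the files into one project, which puts Open WebUI and SD.Next on the same network. A possible one-liner, assuming your `podman compose` backend supports `-f` merging (docker-compose and podman-compose both do):
```bash
$ podman compose -f docker-compose.yml -f docker-compose.sdnext.yml up
```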
## Validate
Run the following command to verify your Ollama instance is up and running:
```bash
$ curl http://localhost:11434/
Ollama is running
```
When using Open WebUI, you should see this partial output in your console, indicating your Arc GPU was detected:
```bash
[ollama-intel-arc] | Found 1 SYCL devices:
[ollama-intel-arc] | |  |                   |                                       |       |Max    |        |Max  |Global |                     |
[ollama-intel-arc] | |  |                   |                                       |       |compute|Max work|sub  |mem    |                     |
[ollama-intel-arc] | |ID|        Device Type|                                   Name|Version|units  |group   |group|size   |       Driver version|
[ollama-intel-arc] | |--|-------------------|---------------------------------------|-------|-------|--------|-----|-------|---------------------|
[ollama-intel-arc] | | 0| [level_zero:gpu:0]|                     Intel Arc Graphics|  12.71|    128|    1024|   32| 62400M|         1.6.32224+14|
```
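Beyond the root endpoint, you can exercise Ollama's REST API to confirm inference works end to end. A minimal smoke test, assuming you pull a small model first (`tinyllama` here is only an example):
```bash
# Pull a model, then request a single non-streaming completion
$ curl http://localhost:11434/api/pull -d '{"name": "tinyllama"}'
$ curl http://localhost:11434/api/generate -d '{"model": "tinyllama", "prompt": "Why is the sky blue?", "stream": false}'
```
While the completion runs, the container log should report layers being offloaded to the SYCL device listed above.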
## Using Image Generation
* Open your web browser to http://localhost:7860 to access the SD.Next web page.
* For the purposes of this demonstration, we'll use the [DreamShaper](https://civitai.com/models/4384/dreamshaper) model.
* Follow these steps:
  * Download the `dreamshaper_8` model by clicking on its image (1).
  * Wait for it to download (~2 GB in size) and then select it in the dropdown (2).
  * (Optional) If you want to stay in the SD.Next UI, feel free to explore (3).
![sd.next](resources/sd.next.png)
* For more information on using SD.Next, refer to the official [documentation](https://vladmandic.github.io/sdnext-docs/).
* Open your web browser to http://localhost:4040 to access the Open WebUI web page.
* Go to the administrator [settings](http://localhost:4040/admin/settings) page.
  * Go to the Image section (1).
  * Make sure all settings look good, and validate them by pressing the refresh button (2).
  * (Optional) Save any changes you made (3).
![open-webui-settings](resources/open-webui-settings.png)
* For more information on using Open WebUI, refer to the official [documentation](https://docs.openwebui.com/).
* That's it: go back to the Open WebUI main page and start chatting. Make sure to select the `Image` button to indicate you want to generate images.
![open-webui-chat](resources/open-webui-chat.png)
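If you prefer scripting image generation, SD.Next also exposes an Automatic1111-compatible REST API on the same port. A minimal sketch, assuming the API is enabled and the `dreamshaper_8` model is selected as above (prompt and sizes are only examples):
```bash
# txt2img returns base64-encoded images in a JSON array
$ curl -s http://localhost:7860/sdapi/v1/txt2img \
    -H "Content-Type: application/json" \
    -d '{"prompt": "a lighthouse at dusk", "steps": 8, "width": 400, "height": 400}' \
  | python3 -c "import sys, json, base64; open('out.png', 'wb').write(base64.b64decode(json.load(sys.stdin)['images'][0]))"
```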
## Using Automatic Speech Recognition
* This is an example of a command to transcribe audio files:
```bash
podman exec -it whisper-ipex whisper https://www.lightbulblanguages.co.uk/resources/ge-audio/hobbies-ge.mp3 --device xpu --model small --language German --task transcribe
```
* Response:
```bash
[00:00.000 --> 00:08.000] Ich habe viele Hobbys. In meiner Freizeit mache ich sehr gerne Sport, wie zum Beispiel Wasserball oder Radfahren.
[00:08.000 --> 00:13.000] Außerdem lese ich gerne und lerne auch gerne Fremdsprachen.
[00:13.000 --> 00:19.000] Ich gehe gerne ins Kino, höre gerne Musik und treffe mich mit meinen Freunden.
[00:19.000 --> 00:22.000] Früher habe ich auch viel Basketball gespielt.
[00:22.000 --> 00:26.000] Im Frühling und im Sommer werde ich viele Radtouren machen.
[00:26.000 --> 00:29.000] Außerdem werde ich viel schwimmen gehen.
[00:29.000 --> 00:33.000] Am liebsten würde ich das natürlich im Meer machen.
```
* This is an example of a command to translate audio files:
```bash
podman exec -it whisper-ipex whisper https://www.lightbulblanguages.co.uk/resources/ge-audio/hobbies-ge.mp3 --device xpu --model small --language German --task translate
```
* Response:
```bash
[00:00.000 --> 00:02.000] I have a lot of hobbies.
[00:02.000 --> 00:05.000] In my free time I like to do sports,
[00:05.000 --> 00:08.000] such as water ball or cycling.
[00:08.000 --> 00:10.000] Besides, I like to read
[00:10.000 --> 00:13.000] and also like to learn foreign languages.
[00:13.000 --> 00:15.000] I like to go to the cinema,
[00:15.000 --> 00:16.000] like to listen to music
[00:16.000 --> 00:19.000] and meet my friends.
[00:19.000 --> 00:22.000] I used to play a lot of basketball.
[00:22.000 --> 00:26.000] In spring and summer I will do a lot of cycling tours.
[00:26.000 --> 00:29.000] Besides, I will go swimming a lot.
[00:29.000 --> 00:33.000] Of course, I would prefer to do this in the sea.
```
* To use your own audio files instead of web files, place them in the `~/whisper-files` folder and access them like this:
```bash
podman exec -it whisper-ipex whisper YOUR_FILE_NAME.mp3 --device xpu --model small --task translate
```
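* To process a whole folder at once, a simple shell loop over the mounted directory works; a sketch, assuming `.mp3` files in `~/whisper-files`:
```bash
# The compose file mounts ~/whisper-files at /app, the container's working directory
for f in ~/whisper-files/*.mp3; do
  podman exec -it whisper-ipex whisper "$(basename "$f")" --device xpu --model small --task transcribe
done
```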
## Updating the containers
If there are new updates to the [ipex-llm-inference-cpp-xpu](https://hub.docker.com/r/intelanalytics/ipex-llm-inference-cpp-xpu) docker image or to the Open WebUI docker image, you may want to update your containers to stay current.

Before any updates, be sure to stop your containers:
```bash
$ podman compose down
```

Then run a pull command to retrieve the `latest` images:
```bash
$ podman compose pull
```

After that, run compose up to start your services again:
```bash
$ podman compose up
```
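Repeated pulls leave superseded image layers behind; if disk space matters, you can optionally reclaim it:
```bash
# Removes dangling (untagged, superseded) images only
$ podman image prune -f
```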
## Manually connecting to your Ollama container
You can connect directly to your Ollama container by running these commands:

```bash
$ podman exec -it ollama-intel-arc /bin/bash
$ /llm/ollama/ollama -v
```
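The regular Ollama CLI lives at that path, so one-off commands also work directly via `podman exec`; the model name below is only an example:
```bash
$ podman exec -it ollama-intel-arc /llm/ollama/ollama list
$ podman exec -it ollama-intel-arc /llm/ollama/ollama run tinyllama "Hello"
```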
## My development environment
* Core Ultra 7 155H
* Intel® Arc™ Graphics (Meteor Lake-P)
* Fedora 41

## References
* [Open WebUI documentation](https://docs.openwebui.com/)
* [Docker - Intel ipex-llm tags](https://hub.docker.com/r/intelanalytics/ipex-llm-serving-xpu/tags)
* [Docker - Intel extension for pytorch](https://hub.docker.com/r/intel/intel-extension-for-pytorch/tags)
* [GitHub - Intel ipex-llm tags](https://github.com/intel/ipex-llm/tags)
* [GitHub - Intel extension for pytorch](https://github.com/intel/intel-extension-for-pytorch/tags)
23  ollama-intel-arc/comfyui/Dockerfile  Normal file
@@ -0,0 +1,23 @@
FROM intel/intel-extension-for-pytorch:2.7.10-xpu

# Optional, might help with memory allocation performance and scalability
RUN apt-get update && \
    apt-get install -y --no-install-recommends --fix-missing libjemalloc-dev
ENV LD_PRELOAD=libjemalloc.so.2

# Create a startup script that clones the ComfyUI repository on first run,
# installs its requirements, and launches it (the persistent /app volume
# keeps the clone across restarts)
RUN cat <<EOF > /bin/startup.sh
#!/bin/bash
git status || git clone https://github.com/comfyanonymous/ComfyUI.git /app
pip install -r /app/requirements.txt
python /app/main.py "\$@"
EOF

# Make the startup script executable
RUN chmod 755 /bin/startup.sh

# Set the working directory to /app
WORKDIR /app

# Run ComfyUI with custom parameters
CMD [ "startup.sh", "--highvram", "--use-pytorch-cross-attention", "--listen=0.0.0.0", "--port=8188" ]
20  ollama-intel-arc/docker-compose.comfyui.yml  Normal file
@@ -0,0 +1,20 @@
services:
  comfyui-ipex:
    build:
      context: comfyui
      dockerfile: Dockerfile
    image: comfyui-ipex:local
    container_name: comfyui-ipex
    devices:
      - /dev/dri:/dev/dri
    ports:
      - 8188:8188
    volumes:
      - comfyui-app-volume:/app
      - comfyui-python-volume:/usr/local/lib/python3.10
    environment:
      - no_proxy=localhost,127.0.0.1

volumes:
  comfyui-app-volume: {}
  comfyui-python-volume: {}
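If ComfyUI starts but reports no GPU, first confirm the render node from `devices:` was actually passed through; a quick check:
```bash
# The Arc GPU should show up as /dev/dri/renderD* inside the container
$ podman exec -it comfyui-ipex ls -l /dev/dri
```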
23  ollama-intel-arc/docker-compose.sdnext.yml  Normal file
@@ -0,0 +1,23 @@
services:
  sdnext-ipex:
    build:
      context: sdnext
      dockerfile: Dockerfile
    image: sdnext-ipex:local
    container_name: sdnext-ipex
    restart: unless-stopped
    devices:
      - /dev/dri:/dev/dri
    ports:
      - 7860:7860
    volumes:
      - sdnext-app-volume:/app
      - sdnext-mnt-volume:/mnt
      - sdnext-huggingface-volume:/root/.cache/huggingface
      - sdnext-python-volume:/usr/local/lib/python3.10

volumes:
  sdnext-app-volume: {}
  sdnext-mnt-volume: {}
  sdnext-python-volume: {}
  sdnext-huggingface-volume: {}
16  ollama-intel-arc/docker-compose.whisper.yml  Normal file
@@ -0,0 +1,16 @@
services:
  whisper-ipex:
    build:
      context: whisper
      dockerfile: Dockerfile
    image: whisper-ipex:local
    container_name: whisper-ipex
    restart: unless-stopped
    devices:
      - /dev/dri:/dev/dri
    volumes:
      - whisper-models-volume:/root/.cache/whisper
      - ~/whisper-files:/app

volumes:
  whisper-models-volume: {}
49  ollama-intel-arc/docker-compose.yml  Normal file
@@ -0,0 +1,49 @@
services:
  ollama-intel-arc:
    image: intelanalytics/ipex-llm-inference-cpp-xpu:latest
    container_name: ollama-intel-arc
    restart: unless-stopped
    devices:
      - /dev/dri:/dev/dri
    volumes:
      - ollama-volume:/root/.ollama
    ports:
      - 11434:11434
    environment:
      - no_proxy=localhost,127.0.0.1
      - OLLAMA_HOST=0.0.0.0
      - DEVICE=Arc
      - OLLAMA_INTEL_GPU=true
      - OLLAMA_NUM_GPU=999
      - ZES_ENABLE_SYSMAN=1
    command: sh -c 'mkdir -p /llm/ollama && cd /llm/ollama && init-ollama && exec ./ollama serve'

  open-webui:
    image: ghcr.io/open-webui/open-webui:latest
    container_name: open-webui
    volumes:
      - open-webui-volume:/app/backend/data
    depends_on:
      - ollama-intel-arc
    ports:
      - 4040:8080
    environment:
      - WEBUI_AUTH=False
      - ENABLE_OPENAI_API=False
      - ENABLE_OLLAMA_API=True
      - ENABLE_IMAGE_GENERATION=True
      - IMAGE_GENERATION_ENGINE=automatic1111
      - IMAGE_GENERATION_MODEL=dreamshaper_8
      - IMAGE_SIZE=400x400
      - IMAGE_STEPS=8
      - AUTOMATIC1111_BASE_URL=http://sdnext-ipex:7860/
      - AUTOMATIC1111_CFG_SCALE=2
      - AUTOMATIC1111_SAMPLER=DPM++ SDE
      - AUTOMATIC1111_SCHEDULER=Karras
    extra_hosts:
      - host.docker.internal:host-gateway
    restart: unless-stopped

volumes:
  ollama-volume: {}
  open-webui-volume: {}
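With the stack up, following a single service's log is easier than reading the interleaved compose output; for example, to watch Ollama's GPU detection and model load messages (assuming your compose provider supports the `logs` subcommand, as docker compose and podman-compose do):
```bash
$ podman compose logs -f ollama-intel-arc
```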
BIN  ollama-intel-arc/resources/open-webui-chat.png  Normal file  (368 KiB)
BIN  ollama-intel-arc/resources/open-webui-settings.png  Normal file  (98 KiB)
BIN  ollama-intel-arc/resources/open-webui.png  Normal file  (259 KiB)
BIN  ollama-intel-arc/resources/sd.next.png  Normal file  (637 KiB)
21  ollama-intel-arc/sdnext/Dockerfile  Normal file
@@ -0,0 +1,21 @@
FROM intel/intel-extension-for-pytorch:2.7.10-xpu

# Set paths to use with sdnext
ENV SD_DATADIR="/mnt/data"
ENV SD_MODELSDIR="/mnt/models"

# Create a startup script that clones the SD.Next repository into the
# working directory on first run, then launches it
RUN cat <<EOF > /bin/startup.sh
#!/bin/bash
git status || git clone https://github.com/vladmandic/sdnext.git .
python /app/launch.py "\$@"
EOF

# Make the startup script executable
RUN chmod 755 /bin/startup.sh

# Set the working directory to /app
WORKDIR /app

# Run SDNext with custom parameters
CMD [ "startup.sh", "-f", "--use-ipex", "--uv", "--listen", "--debug", "--api-log", "--log", "sdnext.log" ]
16  ollama-intel-arc/whisper/Dockerfile  Normal file
@@ -0,0 +1,16 @@
FROM intel/intel-extension-for-pytorch:2.7.10-xpu

ENV USE_XETLA=OFF
ENV SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
ENV SYCL_CACHE_PERSISTENT=1

# Install required packages
RUN apt-get update && apt-get install -y ffmpeg

# Install Whisper from PyPI
RUN pip install --upgrade pip && pip install -U openai-whisper

# Set the working directory to /app
WORKDIR /app

# Keep the container alive so whisper commands can be run via `podman exec`
CMD ["tail", "-f", "/dev/null"]
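A quick way to confirm the base image's XPU-enabled PyTorch sees the Arc GPU from inside this container (`torch.xpu` is provided by the XPU build of PyTorch shipped in the base image):
```bash
$ podman exec -it whisper-ipex python -c "import torch; print(torch.xpu.is_available())"
```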