Compare commits

...

28 Commits

Author SHA1 Message Date
josc146
d2f56631ee release v1.7.5 2024-03-14 13:34:07 +08:00
josc146
c5077f4ebc fix v6 lora (c03cdbbdaf) 2024-03-14 12:25:09 +08:00
josc146
acf5d02104 update global_penalty desc 2024-03-14 12:24:45 +08:00
github-actions[bot]
bf58841f00 release v1.7.4 2024-03-13 13:38:34 +00:00
josc146
e625e1f783 release v1.7.4 2024-03-13 21:37:58 +08:00
josc146
4bed070556 latex support 2024-03-13 21:37:48 +08:00
josc146
5692579f56 for Chinese users, replace Tsinghua pip mirrors with Alibaba Cloud to avoid 403 http error 2024-03-13 21:37:35 +08:00
josc146
333619839a rwkv6 lora finetune support (https://github.com/JL-er/RWKV-LORA) 2024-03-13 17:51:53 +08:00
josc146
c6024520af improve usability 2024-03-13 16:42:26 +08:00
josc146
cd40261de6 improve theme 2024-03-13 15:36:13 +08:00
josc146
3a637a973c improve markdown rendering 2024-03-13 15:36:02 +08:00
github-actions[bot]
7fbcb5e810 release v1.7.3 2024-03-11 11:08:54 +00:00
josc146
2604d3c47b release v1.7.3 2024-03-11 19:07:08 +08:00
josc146
bb1a6191b0 prevent 'torch' has no attribute 'cuda' error in torch_gc, so user can use CPU or WebGPU (#302) 2024-03-11 19:04:19 +08:00
josc146
dd89041f72 dep_check.py now ignores GPUtil 2024-03-11 18:55:37 +08:00
josc146
91eb72e515 fix the issue where penalty_decay and global_penalty are not being passed to the backend default config when running the model through client 2024-03-11 18:52:35 +08:00
josc146
1c7436c34b fix max_tokens parameter of Chat page not being passed to backend 2024-03-11 18:52:33 +08:00
Steven Hangger
8678f376e9 fix(rwkv.cpp): add build step for librwkv.so 2024-03-07 23:51:32 +09:00
Steven Hangger
050154f406 feat(docker): add Docker support 2024-03-07 23:51:32 +09:00
dependabot[bot]
b3eae8bcfa chore(deps): bump crazy-max/ghaction-chocolatey from 2 to 3
Bumps [crazy-max/ghaction-chocolatey](https://github.com/crazy-max/ghaction-chocolatey) from 2 to 3.
- [Release notes](https://github.com/crazy-max/ghaction-chocolatey/releases)
- [Commits](https://github.com/crazy-max/ghaction-chocolatey/compare/v2...v3)

---
updated-dependencies:
- dependency-name: crazy-max/ghaction-chocolatey
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-05 13:54:36 +09:00
dependabot[bot]
c720362886 chore(deps): bump actions/setup-go from 4 to 5
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 4 to 5.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-05 13:53:10 +09:00
dependabot[bot]
93029d3f5c chore(deps): bump actions/checkout from 3 to 4
Bumps [actions/checkout](https://github.com/actions/checkout) from 3 to 4.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-05 13:53:05 +09:00
dependabot[bot]
28244a57b4 chore(deps): bump actions/setup-python from 4 to 5
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 4 to 5.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](https://github.com/actions/setup-python/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-05 13:52:59 +09:00
dependabot[bot]
f6ba9d7451 Bump fastapi from 0.104.0 to 0.109.1 in /backend-python
Bumps [fastapi](https://github.com/tiangolo/fastapi) from 0.104.0 to 0.109.1.
- [Release notes](https://github.com/tiangolo/fastapi/releases)
- [Commits](https://github.com/tiangolo/fastapi/compare/0.104.0...0.109.1)

---
updated-dependencies:
- dependency-name: fastapi
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-05 13:51:37 +09:00
dependabot[bot]
96e431e06b Bump python-multipart from 0.0.6 to 0.0.7 in /backend-python
Bumps [python-multipart](https://github.com/andrew-d/python-multipart) from 0.0.6 to 0.0.7.
- [Release notes](https://github.com/andrew-d/python-multipart/releases)
- [Changelog](https://github.com/Kludex/python-multipart/blob/master/CHANGELOG.md)
- [Commits](https://github.com/andrew-d/python-multipart/compare/0.0.6...0.0.7)

---
updated-dependencies:
- dependency-name: python-multipart
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-05 13:50:47 +09:00
josc146
cb6ddb3674 add pre-release workflow 2024-03-05 12:49:17 +08:00
josc146
07d4ba0d6b fix a generation exception caused by potentially dangerous regex being passed into the stop array 2024-03-04 21:20:53 +08:00
github-actions[bot]
ac139d5bda release v1.7.2 2024-03-02 11:48:20 +00:00
46 changed files with 3845 additions and 112 deletions

171
.github/workflows/docker.yml vendored Normal file

@@ -0,0 +1,171 @@
name: Publish Docker Image

on: [push]

concurrency:
  group: ${{ github.ref }}-${{ github.workflow }}
  cancel-in-progress: true

jobs:
  docker_build:
    name: Build ${{ matrix.arch }} Image
    runs-on: ubuntu-latest
    strategy:
      matrix:
        include:
          - arch: amd64
            name: amd64
          # - arch: arm64
          #   name: arm64
    steps:
      - name: Free up disk spaces
        run: |
          sudo rm -rf /usr/share/dotnet || true
          sudo rm -rf /opt/ghc || true
          sudo rm -rf "/usr/local/share/boost" || true
          sudo rm -rf "$AGENT_TOOLSDIRECTORY" || true
      - name: Get lowercase string for the repository name
        id: lowercase-repo-name
        uses: ASzc/change-string-case-action@v2
        with:
          string: ${{ github.event.repository.name }}
      - name: Checkout base
        uses: actions/checkout@v2
        with:
          fetch-depth: 0
      - name: Cache Docker layers
        uses: actions/cache@v2
        with:
          path: /tmp/.buildx-cache
          key: ${{ github.ref }}-${{ matrix.arch }}
          restore-keys: |
            ${{ github.ref }}-${{ matrix.arch }}
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v2
        with:
          platforms: linux/${{ matrix.arch }}
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      - name: Docker login
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      - name: Get commit SHA
        id: vars
        run: echo "::set-output name=sha_short::$(git rev-parse --short HEAD)"
      - name: Build and export
        id: build
        if: github.ref == 'refs/heads/master'
        uses: docker/build-push-action@v3
        with:
          push: true
          platforms: linux/${{ matrix.arch }}
          tags: ${{ secrets.DOCKER_USERNAME }}/${{ steps.lowercase-repo-name.outputs.lowercase }}:${{ matrix.name }}-latest
          build-args: |
            SHA=${{ steps.vars.outputs.sha_short }}
          outputs: type=image,push=true
          cache-from: type=local,src=/tmp/.buildx-cache
          cache-to: type=local,dest=/tmp/.buildx-cache
      - name: Replace tag without `v`
        if: startsWith(github.ref, 'refs/tags/')
        uses: actions/github-script@v1
        id: version
        with:
          script: |
            return context.payload.ref.replace(/\/?refs\/tags\/v/, '')
          result-encoding: string
      - name: Build release and export
        id: build_rel
        if: startsWith(github.ref, 'refs/tags/')
        uses: docker/build-push-action@v3
        with:
          push: true
          platforms: linux/${{ matrix.arch }}
          tags: ${{ secrets.DOCKER_USERNAME }}/${{ steps.lowercase-repo-name.outputs.lowercase }}:${{ matrix.name }}-${{steps.version.outputs.result}}
          build-args: |
            SHA=${{ steps.version.outputs.result }}
          outputs: type=image,push=true
          cache-from: type=local,src=/tmp/.buildx-cache
          cache-to: type=local,dest=/tmp/.buildx-cache
      - name: Save digest
        if: github.ref == 'refs/heads/master'
        run: echo ${{ steps.build.outputs.digest }} > /tmp/digest.txt
      - name: Save release digest
        if: startsWith(github.ref, 'refs/tags/')
        run: echo ${{ steps.build_rel.outputs.digest }} > /tmp/digest.txt
      - name: Upload artifact
        uses: actions/upload-artifact@v3
        with:
          name: digest_${{ matrix.name }}
          path: /tmp/digest.txt

  manifests:
    name: Build manifests
    needs: [docker_build]
    runs-on: ubuntu-latest
    steps:
      - name: Get lowercase string for the repository name
        id: lowercase-repo-name
        uses: ASzc/change-string-case-action@v2
        with:
          string: ${{ github.event.repository.name }}
      - name: Checkout base
        uses: actions/checkout@v2
        with:
          fetch-depth: 0
      # https://github.com/docker/setup-qemu-action
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      # https://github.com/docker/setup-buildx-action
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
          config-inline: |
            [worker.oci]
            max-parallelism = 1
      - name: Download artifact
        uses: actions/download-artifact@v3
        with:
          path: /tmp/images/
      - name: Docker login
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      - name: Replace tag without `v`
        if: startsWith(github.ref, 'refs/tags/')
        uses: actions/github-script@v1
        id: version
        with:
          script: |
            return context.payload.ref.replace(/\/?refs\/tags\/v/, '')
          result-encoding: string
      - name: Merge and push manifest on master branch
        if: github.ref == 'refs/heads/master'
        run: python scripts/merge_manifest.py "${{ secrets.DOCKER_USERNAME }}/${{ steps.lowercase-repo-name.outputs.lowercase }}"
      - name: Merge and push manifest on release
        if: startsWith(github.ref, 'refs/tags/')
        run: python scripts/merge_manifest.py "${{ secrets.DOCKER_USERNAME }}/${{ steps.lowercase-repo-name.outputs.lowercase }}" ${{steps.version.outputs.result}}

117
.github/workflows/pre-release.yml vendored Normal file

@@ -0,0 +1,117 @@
name: pre-release

on:
  workflow_dispatch:
  push:
    branches:
      - master
    paths:
      - "backend-python/**"
    tags-ignore:
      - "v*"

jobs:
  windows:
    runs-on: windows-2022
    steps:
      - uses: actions/checkout@v4
        with:
          ref: master
      - uses: actions/setup-go@v5
        with:
          go-version: '1.20.5'
      - uses: actions/setup-python@v5
        id: cp310
        with:
          python-version: '3.10'
      - uses: crazy-max/ghaction-chocolatey@v3
        with:
          args: install upx
      - run: |
          Start-BitsTransfer https://github.com/josStorer/ai00_rwkv_server/releases/latest/download/webgpu_server_windows_x86_64.exe ./backend-rust/webgpu_server.exe
          Start-BitsTransfer https://github.com/josStorer/web-rwkv-converter/releases/latest/download/web-rwkv-converter_windows_x86_64.exe ./backend-rust/web-rwkv-converter.exe
          Start-BitsTransfer https://github.com/josStorer/LibreHardwareMonitor.Console/releases/latest/download/LibreHardwareMonitor.Console.zip ./LibreHardwareMonitor.Console.zip
          Expand-Archive ./LibreHardwareMonitor.Console.zip -DestinationPath ./components/LibreHardwareMonitor.Console
          Start-BitsTransfer https://www.python.org/ftp/python/3.10.11/python-3.10.11-embed-amd64.zip ./python-3.10.11-embed-amd64.zip
          Expand-Archive ./python-3.10.11-embed-amd64.zip -DestinationPath ./py310
          $content=Get-Content "./py310/python310._pth"; $content | ForEach-Object {if ($_.ReadCount -eq 3) {"Lib\\site-packages"} else {$_}} | Set-Content ./py310/python310._pth
          ./py310/python ./backend-python/get-pip.py
          ./py310/python -m pip install Cython==3.0.4
          Copy-Item -Path "${{ steps.cp310.outputs.python-path }}/../include" -Destination "py310/include" -Recurse
          Copy-Item -Path "${{ steps.cp310.outputs.python-path }}/../libs" -Destination "py310/libs" -Recurse
          ./py310/python -m pip install cyac==1.9
          go install github.com/wailsapp/wails/v2/cmd/wails@latest
          del ./backend-python/rwkv_pip/cpp/librwkv.dylib
          del ./backend-python/rwkv_pip/cpp/librwkv.so
          (Get-Content -Path ./backend-golang/app.go) -replace "//go:custom_build windows ", "" | Set-Content -Path ./backend-golang/app.go
          (Get-Content -Path ./backend-golang/utils.go) -replace "//go:custom_build windows ", "" | Set-Content -Path ./backend-golang/utils.go
          make
          Rename-Item -Path "build/bin/RWKV-Runner.exe" -NewName "RWKV-Runner_windows_x64.exe"
      - uses: actions/upload-artifact@v4
        with:
          name: RWKV-Runner_windows_x64.exe
          path: build/bin/RWKV-Runner_windows_x64.exe

  linux:
    runs-on: ubuntu-20.04
    steps:
      - uses: actions/checkout@v4
        with:
          ref: master
      - uses: actions/setup-go@v5
        with:
          go-version: '1.20.5'
      - run: |
          wget https://github.com/josStorer/ai00_rwkv_server/releases/latest/download/webgpu_server_linux_x86_64 -O ./backend-rust/webgpu_server
          wget https://github.com/josStorer/web-rwkv-converter/releases/latest/download/web-rwkv-converter_linux_x86_64 -O ./backend-rust/web-rwkv-converter
          sudo apt-get update
          sudo apt-get install upx
          sudo apt-get install build-essential libgtk-3-dev libwebkit2gtk-4.0-dev libasound2-dev
          go install github.com/wailsapp/wails/v2/cmd/wails@latest
          rm ./backend-python/rwkv_pip/wkv_cuda.pyd
          rm ./backend-python/rwkv_pip/rwkv5.pyd
          rm ./backend-python/rwkv_pip/rwkv6.pyd
          rm ./backend-python/rwkv_pip/beta/wkv_cuda.pyd
          rm ./backend-python/get-pip.py
          rm ./backend-python/rwkv_pip/cpp/librwkv.dylib
          rm ./backend-python/rwkv_pip/cpp/rwkv.dll
          rm ./backend-python/rwkv_pip/webgpu/web_rwkv_py.cp310-win_amd64.pyd
          make
          mv build/bin/RWKV-Runner build/bin/RWKV-Runner_linux_x64
      - uses: actions/upload-artifact@v4
        with:
          name: RWKV-Runner_linux_x64
          path: build/bin/RWKV-Runner_linux_x64

  macos:
    runs-on: macos-13
    steps:
      - uses: actions/checkout@v4
        with:
          ref: master
      - uses: actions/setup-go@v5
        with:
          go-version: '1.20.5'
      - run: |
          wget https://github.com/josStorer/ai00_rwkv_server/releases/latest/download/webgpu_server_darwin_aarch64 -O ./backend-rust/webgpu_server
          wget https://github.com/josStorer/web-rwkv-converter/releases/latest/download/web-rwkv-converter_darwin_aarch64 -O ./backend-rust/web-rwkv-converter
          go install github.com/wailsapp/wails/v2/cmd/wails@latest
          rm ./backend-python/rwkv_pip/wkv_cuda.pyd
          rm ./backend-python/rwkv_pip/rwkv5.pyd
          rm ./backend-python/rwkv_pip/rwkv6.pyd
          rm ./backend-python/rwkv_pip/beta/wkv_cuda.pyd
          rm ./backend-python/get-pip.py
          rm ./backend-python/rwkv_pip/cpp/rwkv.dll
          rm ./backend-python/rwkv_pip/cpp/librwkv.so
          rm ./backend-python/rwkv_pip/webgpu/web_rwkv_py.cp310-win_amd64.pyd
          make
          cp build/darwin/Readme_Install.txt build/bin/Readme_Install.txt
          cp build/bin/RWKV-Runner.app/Contents/MacOS/RWKV-Runner build/bin/RWKV-Runner_darwin_universal
          cd build/bin && zip -r RWKV-Runner_macos_universal.zip RWKV-Runner.app Readme_Install.txt
      - uses: actions/upload-artifact@v4
        with:
          name: RWKV-Runner_macos_universal.zip
          path: build/bin/RWKV-Runner_macos_universal.zip

.github/workflows/release.yml

@@ -14,7 +14,7 @@ jobs:
     runs-on: ubuntu-22.04
     steps:
       - run: echo "VERSION=${GITHUB_REF_NAME#v}" >> $GITHUB_ENV
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
         with:
           ref: master
@@ -38,17 +38,17 @@ jobs:
     runs-on: windows-2022
     needs: create-draft
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
         with:
           ref: master
-      - uses: actions/setup-go@v4
+      - uses: actions/setup-go@v5
         with:
          go-version: '1.20.5'
-      - uses: actions/setup-python@v4
+      - uses: actions/setup-python@v5
         id: cp310
         with:
           python-version: '3.10'
-      - uses: crazy-max/ghaction-chocolatey@v2
+      - uses: crazy-max/ghaction-chocolatey@v3
         with:
           args: install upx
       - run: |
@@ -78,10 +78,10 @@ jobs:
     runs-on: ubuntu-20.04
     needs: create-draft
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
         with:
           ref: master
-      - uses: actions/setup-go@v4
+      - uses: actions/setup-go@v5
         with:
           go-version: '1.20.5'
       - run: |
@@ -108,10 +108,10 @@ jobs:
     runs-on: macos-13
     needs: create-draft
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
         with:
           ref: master
-      - uses: actions/setup-go@v4
+      - uses: actions/setup-go@v5
         with:
           go-version: '1.20.5'
       - run: |
@@ -137,5 +137,5 @@ jobs:
     runs-on: ubuntu-22.04
     needs: [ windows, linux, macos ]
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
       - run: gh release edit ${{github.ref_name}} --draft=false

CHANGELOG.md

@@ -1,17 +1,8 @@
 ## Changes

-### Features
-### Fixes
-- allow setting tokenChunkSize of WebGPU mode
-- expose global_penalty
-### Improvements
-- improve parameters controllable range
-### Chores
-- update defaultModelConfigs
+- fix v6 lora (https://github.com/JL-er/RWKV-LORA/commit/c03cdbbdafa498a7d65da37bf54a4228eff79132)

 ## Install

55
Dockerfile Normal file

@@ -0,0 +1,55 @@
FROM node:21-slim AS frontend

RUN echo "registry=https://registry.npmmirror.com/" > ~/.npmrc

WORKDIR /app

COPY manifest.json manifest.json
COPY frontend frontend

WORKDIR /app/frontend

RUN npm ci
RUN npm run build

FROM nvidia/cuda:11.6.1-devel-ubuntu20.04 AS runtime

ENV DEBIAN_FRONTEND=noninteractive

RUN apt update && \
    apt install -yq git curl wget build-essential ninja-build aria2 jq software-properties-common

RUN add-apt-repository -y ppa:deadsnakes/ppa && \
    add-apt-repository -y ppa:ubuntu-toolchain-r/test && \
    apt install -y g++-11 python3.10 python3.10-distutils python3.10-dev && \
    curl -sS http://mirrors.aliyun.com/pypi/get-pip.py | python3.10

RUN python3.10 -m pip install cmake

FROM runtime AS librwkv

WORKDIR /app

RUN git clone https://github.com/RWKV/rwkv.cpp.git && \
    cd rwkv.cpp && \
    git submodule update --init --recursive && \
    mkdir -p build && \
    cd build && \
    cmake -G Ninja .. && \
    cmake --build .

FROM runtime AS final

WORKDIR /app

COPY ./backend-python/requirements.txt ./backend-python/requirements.txt

RUN python3.10 -m pip install --quiet -r ./backend-python/requirements.txt

COPY . .
COPY --from=frontend /app/frontend/dist /app/frontend/dist
COPY --from=librwkv /app/rwkv.cpp/build/librwkv.so /app/backend-python/rwkv_pip/cpp/librwkv.so

EXPOSE 27777

CMD ["python3.10", "./backend-python/main.py", "--port", "27777", "--host", "0.0.0.0", "--webui"]

backend-golang/app.go

@@ -227,12 +227,12 @@ func (a *App) InstallPyDep(python string, cnMirror bool) (string, error) {
 	if runtime.GOOS == "windows" {
 		ChangeFileLine("./py310/python310._pth", 3, "Lib\\site-packages")
-		installScript := python + " ./backend-python/get-pip.py -i https://pypi.tuna.tsinghua.edu.cn/simple --no-warn-script-location\n" +
+		installScript := python + " ./backend-python/get-pip.py -i https://mirrors.aliyun.com/pypi/simple --no-warn-script-location\n" +
 			python + " -m pip install torch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 --index-url https://download.pytorch.org/whl/cu117 --no-warn-script-location\n" +
-			python + " -m pip install -r ./backend-python/requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple --no-warn-script-location\n" +
+			python + " -m pip install -r ./backend-python/requirements.txt -i https://mirrors.aliyun.com/pypi/simple --no-warn-script-location\n" +
 			"exit"
 		if !cnMirror {
-			installScript = strings.Replace(installScript, " -i https://pypi.tuna.tsinghua.edu.cn/simple", "", -1)
+			installScript = strings.Replace(installScript, " -i https://mirrors.aliyun.com/pypi/simple", "", -1)
 		}
 		err = os.WriteFile(a.exDir+"install-py-dep.bat", []byte(installScript), 0644)
 		if err != nil {
@@ -242,7 +242,7 @@ func (a *App) InstallPyDep(python string, cnMirror bool) (string, error) {
 	}

 	if cnMirror {
-		return Cmd(python, "-m", "pip", "install", "-r", "./backend-python/requirements_without_cyac.txt", "-i", "https://pypi.tuna.tsinghua.edu.cn/simple")
+		return Cmd(python, "-m", "pip", "install", "-r", "./backend-python/requirements_without_cyac.txt", "-i", "https://mirrors.aliyun.com/pypi/simple")
 	} else {
 		return Cmd(python, "-m", "pip", "install", "-r", "./backend-python/requirements_without_cyac.txt")
 	}

backend-python/dep_check.py

@@ -7,7 +7,6 @@ import lm_dataformat
 import ftfy
 import tqdm
 import tiktoken
-import GPUtil
 import torch
 import rwkv

backend-python/requirements.txt

@@ -3,7 +3,7 @@ torchvision
 torchaudio
 rwkv==0.8.25
 langchain==0.0.322
-fastapi==0.104.0
+fastapi==0.109.1
 uvicorn==0.23.2
 sse-starlette==1.6.5
 pydantic==2.4.2
@@ -19,7 +19,7 @@ midi2audio==0.1.1
 mido==1.3.0
 safetensors==0.4.0
 PyMuPDF==1.23.5
-python-multipart==0.0.6
+python-multipart==0.0.7
 Cython==3.0.4
 cyac==1.9
 torch-directml==0.1.13.1.dev230413

backend-python/requirements_without_cyac.txt

@@ -3,7 +3,7 @@ torchvision
 torchaudio
 rwkv==0.8.25
 langchain==0.0.322
-fastapi==0.104.0
+fastapi==0.109.1
 uvicorn==0.23.2
 sse-starlette==1.6.5
 pydantic==2.4.2
@@ -19,5 +19,5 @@ midi2audio==0.1.1
 mido==1.3.0
 safetensors==0.4.0
 PyMuPDF==1.23.5
-python-multipart==0.0.6
+python-multipart==0.0.7
 Cython==3.0.4

backend-python/routes/config.py

@@ -127,9 +127,12 @@ def update_config(body: ModelConfigBody):

 @router.get("/status", tags=["Configs"])
 def status():
-    import GPUtil
-
-    gpus = GPUtil.getGPUs()
+    try:
+        import GPUtil
+
+        gpus = GPUtil.getGPUs()
+    except:
+        gpus = []
     if len(gpus) == 0:
         device_name = "CPU"
     else:

backend-python/utils/rwkv.py

@@ -315,22 +315,25 @@ class AbstractRWKV(ABC):
                     yield response, "", prompt_token_len, completion_token_len
                     break
                 elif type(stop) == list:
-                    stop_exist_regex = "|".join(stop)
-                    matched = re.search(stop_exist_regex, response)
-                    if matched:
-                        try:
-                            state_cache.add_state(
-                                state_cache.AddStateBody(
-                                    prompt=prompt + response,
-                                    tokens=self.model_tokens,
-                                    state=self.model_state,
-                                    logits=logits,
-                                )
-                            )
-                        except HTTPException:
-                            pass
-                        response = response.split(matched.group())[0]
-                        yield response, "", prompt_token_len, completion_token_len
-                        break
+                    exit_flag = False
+                    for s in stop:
+                        if s in response:
+                            try:
+                                state_cache.add_state(
+                                    state_cache.AddStateBody(
+                                        prompt=prompt + response,
+                                        tokens=self.model_tokens,
+                                        state=self.model_state,
+                                        logits=logits,
+                                    )
+                                )
+                            except HTTPException:
+                                pass
+                            exit_flag = True
+                            response = response.split(s)[0]
+                            yield response, "", prompt_token_len, completion_token_len
+                            break
+                    if exit_flag:
+                        break
                 out_last = begin + i + 1
                 if i == self.max_tokens_per_generation - 1:
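This hunk is the fix from commit 07d4ba0d6b: the old code joined user-supplied stop strings straight into a regular expression, so a stop entry containing regex metacharacters could raise inside generation. A minimal standalone sketch of the failure mode and of the plain substring matching that replaces it (the stop string here is hypothetical):

import re

stop = ["C++"]  # hypothetical user-supplied stop string with regex metacharacters
response = "I like C++ and Rust."

try:
    re.search("|".join(stop), response)   # old approach: stop strings treated as a pattern
except re.error as e:
    print("regex failed:", e)             # -> multiple repeat at position 2

for s in stop:                            # new approach: substring matching never raises
    if s in response:
        print(repr(response.split(s)[0])) # -> 'I like '
        break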
@@ -674,7 +677,10 @@ class ModelConfigBody(BaseModel):
     frequency_penalty: float = Field(default=None, ge=-2, le=2)
     penalty_decay: float = Field(default=None, ge=0.99, le=0.999)
     top_k: int = Field(default=None, ge=0, le=25)
-    global_penalty: bool = Field(default=None)
+    global_penalty: bool = Field(
+        default=None,
+        description="When generating a response, whether to include the submitted prompt as a penalty factor. By turning this off, you will get the same generated results as official RWKV Gradio. If you find duplicate results in the generated results, turning this on can help avoid generating duplicates.",
+    )

     model_config = {
         "json_schema_extra": {

backend-python/utils/torch.py

@@ -19,9 +19,12 @@ def set_torch():

 def torch_gc():
-    import torch
-
-    if torch.cuda.is_available():
-        with torch.cuda.device(0):
-            torch.cuda.empty_cache()
-            torch.cuda.ipc_collect()
+    try:
+        import torch
+
+        if torch.cuda.is_available():
+            with torch.cuda.device(0):
+                torch.cuda.empty_cache()
+                torch.cuda.ipc_collect()
+    except:
+        pass  # prevent 'torch' has no attribute 'cuda' error, so user can use CPU or WebGPU

18
docker-compose.yml Normal file

@@ -0,0 +1,18 @@
services:
  rmkv_runner:
    image: rwkv-runner:latest
    build: .
    # Append the "--rwkv.cpp" parameter to use rwkv.cpp
    # command: python3.10 ./backend-python/main.py --port 27777 --host 0.0.0.0 --webui --rwkv.cpp
    volumes:
      - /mnt:/mnt
    ports:
      - "27777:27777"
    # Comment out the following lines if you use rwkv.cpp
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

finetune/get_layer_and_embd.py

@@ -52,9 +52,13 @@ for x in keys:
     if "time_maa" in x:
         version = max(6, version)

+params = f"--vocab_size {vocab_size} --n_layer {n_layer} --n_embd {n_embd}"
+
 if version <= expected_max_version:
+    if version == 6:
+        params += ' --my_testing "x060"'
     print(
-        f"v{int(version)}/train.py --vocab_size {vocab_size} --n_layer {n_layer} --n_embd {n_embd}",
+        f"v{int(version)}/train.py {params}",
         end="",
     )
 else:

finetune/install-wsl-dep-and-train.sh

@@ -1,7 +1,7 @@
 echo $@

 if [[ ${cnMirror} == 1 ]]; then
-  export PIP_INDEX_URL="https://pypi.tuna.tsinghua.edu.cn/simple"
+  export PIP_INDEX_URL="https://mirrors.aliyun.com/pypi/simple"
   if grep -q "mirrors.aliyun.com" /etc/apt/sources.list; then
     echo "apt cnMirror already set"
   else
@@ -53,7 +53,7 @@ else
 fi

 echo "loading $loadModel"
-modelInfo=$(python3 ./finetune/get_layer_and_embd.py $loadModel 5.2)
+modelInfo=$(python3 ./finetune/get_layer_and_embd.py $loadModel 6.0)
 echo $modelInfo

 if [[ $modelInfo =~ "--n_layer" ]]; then
   sudo rm -rf /root/.cache/torch_extensions

202
finetune/lora/v6/cuda/wkv5_cuda.cu vendored Normal file

@@ -0,0 +1,202 @@
#include <stdio.h>
#include <assert.h>
#include "ATen/ATen.h"
typedef at::BFloat16 bf16;

template <typename F>
__global__ void kernel_forward(const int B, const int T, const int C, const int H,
                               const F *__restrict__ const _r, const F *__restrict__ const _k, const F *__restrict__ const _v, const float *__restrict__ _w, const F *__restrict__ _u,
                               F *__restrict__ const _y)
{
    const int b = blockIdx.x / H;
    const int h = blockIdx.x % H;
    const int i = threadIdx.x;
    _w += h*_N_;
    _u += h*_N_;

    __shared__ float r[_N_], k[_N_], u[_N_], w[_N_];
    float state[_N_] = {0};

    __syncthreads();
    w[i] = _w[i];
    u[i] = float(_u[i]);
    __syncthreads();

    for (int t = b*T*C + h*_N_ + i; t < (b+1)*T*C + h*_N_ + i; t += C)
    {
        __syncthreads();
        r[i] = float(_r[t]);
        k[i] = float(_k[t]);
        __syncthreads();

        const float v = float(_v[t]);
        float y = 0;

        #pragma unroll
        for (int j = 0; j < _N_; j+=4)
        {
            const float4& r_ = (float4&)(r[j]);
            const float4& k_ = (float4&)(k[j]);
            const float4& w_ = (float4&)(w[j]);
            const float4& u_ = (float4&)(u[j]);
            float4& s = (float4&)(state[j]);
            float4 x;

            x.x = k_.x * v;
            x.y = k_.y * v;
            x.z = k_.z * v;
            x.w = k_.w * v;

            y += r_.x * (u_.x * x.x + s.x);
            y += r_.y * (u_.y * x.y + s.y);
            y += r_.z * (u_.z * x.z + s.z);
            y += r_.w * (u_.w * x.w + s.w);

            s.x = s.x * w_.x + x.x;
            s.y = s.y * w_.y + x.y;
            s.z = s.z * w_.z + x.z;
            s.w = s.w * w_.w + x.w;
        }
        _y[t] = F(y);
    }
}

template <typename F>
__global__ void kernel_backward(const int B, const int T, const int C, const int H,
                                const F *__restrict__ const _r, const F *__restrict__ const _k, const F *__restrict__ const _v, const float *__restrict__ _w, const float *__restrict__ __w, const F *__restrict__ _u, const F *__restrict__ const _gy,
                                F *__restrict__ const _gr, F *__restrict__ const _gk, F *__restrict__ const _gv, F *__restrict__ const _gw, F *__restrict__ const _gu)
{
    const int b = blockIdx.x / H;
    const int h = blockIdx.x % H;
    const int i = threadIdx.x;
    _w += h*_N_;
    _u += h*_N_;
    __w += h*_N_;

    __shared__ float w_[_N_], u_[_N_];
    __shared__ float r[_N_], k[_N_], v[_N_], gy[_N_];
    __syncthreads();
    w_[i] = _w[i];
    u_[i] = float(_u[i]);
    __syncthreads();

    const float w = w_[i];
    const float ww = __w[i];
    const float u = u_[i];

    float state[_N_] = {0}, saaaa[_N_] = {0}, sbbbb[_N_] = {0}, scccc[_N_] = {0}, sdddd[_N_] = {0};
    float gw = 0, gu = 0;

    const int t000 = b*T*C + h*_N_ + i;
    const int t111 = (b+1)*T*C + h*_N_ + i;
    const int t222 = t111 - 2*C;

    for (int t = t000; t < t111; t += C)
    {
        __syncthreads();
        v[i] = float(_v[t]);
        gy[i] = float(_gy[t]);
        __syncthreads();

        const float k = float(_k[t]);
        float gr = 0, gu_ = 0;

        #pragma unroll
        for (int j = 0; j < _N_; j++)
        {
            float& s = state[j];
            float x = k * v[j];

            gr += (u * x + s) * gy[j];
            gu_ += x * gy[j];
            s = s * w + x;
        }
        _gr[t] = F(gr);
        gu += float(_r[t]) * gu_;
    }
    _gu[b*C + h*_N_ + i] = F(gu);

    for (int t = t000; t < t222; t += C)
    {
        __syncthreads();
        v[i] = float(_v[t]);
        gy[i] = float(_gy[t + 2*C]);
        __syncthreads();

        const float k = float(_k[t]);
        float gw_ = 0;

        #pragma unroll
        for (int j = 0; j < _N_; j++)
        {
            float& s = saaaa[j];
            float& s2 = sbbbb[j];
            float x = k * v[j];

            float tmp = w * (x + s);
            s = tmp;
            s2 = tmp + w * s2;
            gw_ += s2 * gy[j];
        }
        gw += float(_r[t + 2*C]) * gw_;
    }
    _gw[b*C + h*_N_ + i] = F(ww * gw);

    for (int t = t111 - C; t >= t000; t -= C)
    {
        __syncthreads();
        v[i] = float(_v[t]);
        gy[i] = float(_gy[t]);
        __syncthreads();

        const float rr = float(_r[t]);
        float gk = 0;

        #pragma unroll
        for (int j = 0; j < _N_; j++)
        {
            float& s = scccc[j];
            float x = rr * gy[j];

            gk += (u * x + s) * v[j];
            s = x + s * w;
        }
        _gk[t] = F(gk);
    }

    for (int t = t111 - C; t >= t000; t -= C)
    {
        __syncthreads();
        r[i] = float(_r[t]);
        k[i] = float(_k[t]);
        __syncthreads();

        const float gyy = float(_gy[t]);
        float gv = 0;

        #pragma unroll
        for (int j = 0; j < _N_; j++)
        {
            float& s = sdddd[j];
            float x = gyy * r[j];

            gv += (u_[j] * x + s) * k[j];
            s = x + s * w_[j];
        }
        _gv[t] = F(gv);
    }
}

void cuda_forward(int B, int T, int C, int H, bf16 *r, bf16 *k, bf16 *v, float *w, bf16 *u, bf16 *y)
{
    assert(H*_N_ == C);
    assert(_N_%4 == 0);
    kernel_forward<<<dim3(B * H), dim3(_N_)>>>(B, T, C, H, r, k, v, w, u, y);
}

void cuda_backward(int B, int T, int C, int H, bf16 *r, bf16 *k, bf16 *v, float *w, float *ww, bf16 *u, bf16 *gy, bf16 *gr, bf16 *gk, bf16 *gv, bf16 *gw, bf16 *gu)
{
    assert(H*_N_ == C);
    assert(_N_%4 == 0);
    kernel_backward<<<dim3(B * H), dim3(_N_)>>>(B, T, C, H, r, k, v, w, ww, u, gy, gr, gk, gv, gw, gu);
}

22
finetune/lora/v6/cuda/wkv5_op.cpp vendored Normal file

@@ -0,0 +1,22 @@
#include <torch/extension.h>
#include "ATen/ATen.h"
typedef at::BFloat16 bf16;

void cuda_forward(int B, int T, int C, int H, bf16 *r, bf16 *k, bf16 *v, float *w, bf16 *u, bf16 *y);
void cuda_backward(int B, int T, int C, int H, bf16 *r, bf16 *k, bf16 *v, float *w, float *ww, bf16 *u, bf16 *gy, bf16 *gr, bf16 *gk, bf16 *gv, bf16 *gw, bf16 *gu);

void forward(int64_t B, int64_t T, int64_t C, int64_t H, torch::Tensor &r, torch::Tensor &k, torch::Tensor &v, torch::Tensor &w, torch::Tensor &u, torch::Tensor &y) {
    cuda_forward(B, T, C, H, r.data_ptr<bf16>(), k.data_ptr<bf16>(), v.data_ptr<bf16>(), w.data_ptr<float>(), u.data_ptr<bf16>(), y.data_ptr<bf16>());
}
void backward(int64_t B, int64_t T, int64_t C, int64_t H, torch::Tensor &r, torch::Tensor &k, torch::Tensor &v, torch::Tensor &w, torch::Tensor &ww, torch::Tensor &u, torch::Tensor &gy, torch::Tensor &gr, torch::Tensor &gk, torch::Tensor &gv, torch::Tensor &gw, torch::Tensor &gu) {
    cuda_backward(B, T, C, H, r.data_ptr<bf16>(), k.data_ptr<bf16>(), v.data_ptr<bf16>(), w.data_ptr<float>(), ww.data_ptr<float>(), u.data_ptr<bf16>(), gy.data_ptr<bf16>(), gr.data_ptr<bf16>(), gk.data_ptr<bf16>(), gv.data_ptr<bf16>(), gw.data_ptr<bf16>(), gu.data_ptr<bf16>());
}

PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
    m.def("forward", &forward, "wkv5 forward");
    m.def("backward", &backward, "wkv5 backward");
}

TORCH_LIBRARY(wkv5, m) {
    m.def("forward", forward);
    m.def("backward", backward);
}
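A hedged sketch of how a kernel pair like wkv5_op.cpp + wkv5_cuda.cu is typically JIT-built and invoked from PyTorch; the _N_ macro value, tensor shapes, and paths below are illustrative assumptions, not taken from this diff:

import torch
from torch.utils.cpp_extension import load

# _N_ is the head size baked in at compile time; C must equal H * _N_.
wkv5 = load(
    name="wkv5",
    sources=["finetune/lora/v6/cuda/wkv5_op.cpp", "finetune/lora/v6/cuda/wkv5_cuda.cu"],
    extra_cuda_cflags=["-O3", "-D_N_=64"],
)

B, T, C, H = 1, 16, 512, 8          # illustrative shapes; C // H == 64 == _N_
N = C // H
r = torch.randn(B, T, C, dtype=torch.bfloat16, device="cuda")
k = torch.randn(B, T, C, dtype=torch.bfloat16, device="cuda")
v = torch.randn(B, T, C, dtype=torch.bfloat16, device="cuda")
w = torch.rand(H, N, dtype=torch.float32, device="cuda")    # per-head decay
u = torch.randn(H, N, dtype=torch.bfloat16, device="cuda")  # per-head bonus
y = torch.empty(B, T, C, dtype=torch.bfloat16, device="cuda")

wkv5.forward(B, T, C, H, r, k, v, w, u, y)  # kernel writes the result into y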

242
finetune/lora/v6/cuda/wkv6_cuda.cu vendored Normal file

@@ -0,0 +1,242 @@
#include <stdio.h>
#include <assert.h>
#include "ATen/ATen.h"
typedef at::BFloat16 bf16;

template <typename F>
__global__ void kernel_forward(const int B, const int T, const int C, const int H,
                               const F *__restrict__ const _r, const F *__restrict__ const _k, const F *__restrict__ const _v, const float *__restrict__ _w, const F *__restrict__ _u,
                               F *__restrict__ const _y)
{
    const int b = blockIdx.x / H;
    const int h = blockIdx.x % H;
    const int i = threadIdx.x;
    _u += h*_N_;

    __shared__ float r[_N_], k[_N_], u[_N_], w[_N_];
    float state[_N_] = {0};

    __syncthreads();
    u[i] = float(_u[i]);
    __syncthreads();

    for (int t = b*T*C + h*_N_ + i; t < (b+1)*T*C + h*_N_ + i; t += C)
    {
        __syncthreads();
        w[i] = exp(_w[t]);
        r[i] = float(_r[t]);
        k[i] = float(_k[t]);
        __syncthreads();

        const float v = float(_v[t]);
        float y = 0;

        #pragma unroll
        for (int j = 0; j < _N_; j+=4)
        {
            const float4& r_ = (float4&)(r[j]);
            const float4& k_ = (float4&)(k[j]);
            const float4& w_ = (float4&)(w[j]);
            const float4& u_ = (float4&)(u[j]);
            float4& s = (float4&)(state[j]);
            float4 x;

            x.x = k_.x * v;
            x.y = k_.y * v;
            x.z = k_.z * v;
            x.w = k_.w * v;

            y += r_.x * (u_.x * x.x + s.x);
            y += r_.y * (u_.y * x.y + s.y);
            y += r_.z * (u_.z * x.z + s.z);
            y += r_.w * (u_.w * x.w + s.w);

            s.x = s.x * w_.x + x.x;
            s.y = s.y * w_.y + x.y;
            s.z = s.z * w_.z + x.z;
            s.w = s.w * w_.w + x.w;
        }
        _y[t] = F(y);
    }
}

template <typename F>
__global__ void kernel_backward_111(const int B, const int T, const int C, const int H,
                                    const F *__restrict__ const _r, const F *__restrict__ const _k, const F *__restrict__ const _v, const float *__restrict__ _w, const F *__restrict__ _u, const F *__restrict__ const _gy,
                                    F *__restrict__ const _gr, F *__restrict__ const _gk, F *__restrict__ const _gv, F *__restrict__ const _gu)
{
    const int b = blockIdx.x / H;
    const int h = blockIdx.x % H;
    const int i = threadIdx.x;
    _u += h*_N_;

    __shared__ float u_[_N_];
    __shared__ float r[_N_], k[_N_], v[_N_], w_[_N_], gy[_N_];
    __syncthreads();
    u_[i] = float(_u[i]);
    __syncthreads();

    const float u = u_[i];

    float state[_N_] = {0}, scccc[_N_] = {0}, sdddd[_N_] = {0};

    const int t_0 = b*T*C + h*_N_ + i;
    const int t_T_1 = t_0 + (T-1)*C;
    const int t_T = t_0 + T*C;

    float gu = 0;
    for (int t = t_0; t < t_T; t += C)
    {
        __syncthreads();
        v[i] = float(_v[t]);
        gy[i] = float(_gy[t]);
        __syncthreads();

        const float k = float(_k[t]);
        const float w = exp(_w[t]);
        float gr = 0, gu_ = 0;

        #pragma unroll
        for (int j = 0; j < _N_; j++)
        {
            float& s = state[j];
            float x = k * v[j];

            gr += (u * x + s) * gy[j];
            gu_ += x * gy[j];
            s = s * w + x;
        }
        _gr[t] = F(gr);
        gu += float(_r[t]) * gu_;
    }
    _gu[b*C + h*_N_ + i] = F(gu);

    for (int t = t_T_1; t >= t_0; t -= C)
    {
        __syncthreads();
        v[i] = float(_v[t]);
        gy[i] = float(_gy[t]);
        __syncthreads();

        const float rr = float(_r[t]);
        const float w = exp(_w[t]);
        float gk = 0;

        #pragma unroll
        for (int j = 0; j < _N_; j++)
        {
            float& s = scccc[j];
            float x = rr * gy[j];

            gk += (u * x + s) * v[j];
            s = x + s * w;
        }
        _gk[t] = F(gk);
    }

    for (int t = t_T_1; t >= t_0; t -= C)
    {
        __syncthreads();
        r[i] = float(_r[t]);
        k[i] = float(_k[t]);
        w_[i] = exp(_w[t]);
        __syncthreads();

        const float gyy = float(_gy[t]);
        float gv = 0;

        #pragma unroll
        for (int j = 0; j < _N_; j++)
        {
            float& s = sdddd[j];
            float x = gyy * r[j];

            gv += (u_[j] * x + s) * k[j];
            s = x + s * w_[j];
        }
        _gv[t] = F(gv);
    }
}

template <typename F>
__global__ void kernel_backward_222(const int B, const int T, const int C, const int H,
                                    const F *__restrict__ const _r, const F *__restrict__ const _k, const F *__restrict__ const _v, const float *__restrict__ _w, const F *__restrict__ _u, const F *__restrict__ const _gy,
                                    F *__restrict__ const _gw)
{
    const int b = blockIdx.x / H;
    const int h = blockIdx.x % H;
    const int i = threadIdx.x;

    __shared__ float v[_N_], gy[_N_];
    float saaaa[_N_] = {0}, sbbbb[_T_-2] = {0}, scccc[_N_] = {0};

    const int t_0 = b*T*C + h*_N_ + i;
    const int t_1 = t_0 + C;
    const int t_2 = t_0 + 2*C;
    const int t_T_1 = t_0 + (T-1)*C;

    for (int t = t_T_1; t > t_1; t -= C)
    {
        __syncthreads();
        gy[i] = float(_gy[t]);
        v[i] = float(_v[t-2*C]);
        __syncthreads();

        const float r = float(_r[t]);
        const float w = exp(_w[t-C]);
        float sum = 0.0f;

        #pragma unroll
        for (int j = 0; j < _N_; j++)
        {
            float& s = saaaa[j];
            float x = r * gy[j];
            s = (s + x) * w;
            sum += s * v[j];
        }
        sbbbb[(t-t_2)/C] = sum * float(_k[t-2*C]);
    }

    float sss = sbbbb[0];
    _gw[t_0] = 0;
    _gw[t_1] = F(sss * _w[t_1]);

    for (int t = t_2; t < t_T_1; t += C)
    {
        __syncthreads();
        gy[i] = float(_gy[t]);
        v[i] = float(_v[t-2*C]);
        __syncthreads();

        const float w = exp(_w[t-C]);
        const float k = float(_k[t-2*C]);
        float sum = 0.0f;

        #pragma unroll
        for (int j = 0; j < _N_; j++)
        {
            float& s = scccc[j];
            float x = k * v[j];
            s = (s + x) * w;
            sum += s * gy[j];
        }
        sss += sbbbb[(t-t_1)/C] - (sum * float(_r[t]));
        _gw[t] = F(sss * _w[t]);
    }
    _gw[t_T_1] = 0;
}

void cuda_forward(int B, int T, int C, int H, bf16 *r, bf16 *k, bf16 *v, float *w, bf16 *u, bf16 *y)
{
    assert(H*_N_ == C);
    assert(_N_%4 == 0);
    kernel_forward<<<dim3(B * H), dim3(_N_)>>>(B, T, C, H, r, k, v, w, u, y);
}

void cuda_backward(int B, int T, int C, int H, bf16 *r, bf16 *k, bf16 *v, float *w, bf16 *u, bf16 *gy, bf16 *gr, bf16 *gk, bf16 *gv, bf16 *gw, bf16 *gu)
{
    assert(H*_N_ == C);
    assert(_N_%4 == 0);
    kernel_backward_111<<<dim3(B * H), dim3(_N_)>>>(B, T, C, H, r, k, v, w, u, gy, gr, gk, gv, gu);
    kernel_backward_222<<<dim3(B * H), dim3(_N_)>>>(B, T, C, H, r, k, v, w, u, gy, gw);
}

22
finetune/lora/v6/cuda/wkv6_op.cpp vendored Normal file

@@ -0,0 +1,22 @@
#include <torch/extension.h>
#include "ATen/ATen.h"
typedef at::BFloat16 bf16;

void cuda_forward(int B, int T, int C, int H, bf16 *r, bf16 *k, bf16 *v, float *w, bf16 *u, bf16 *y);
void cuda_backward(int B, int T, int C, int H, bf16 *r, bf16 *k, bf16 *v, float *w, bf16 *u, bf16 *gy, bf16 *gr, bf16 *gk, bf16 *gv, bf16 *gw, bf16 *gu);

void forward(int64_t B, int64_t T, int64_t C, int64_t H, torch::Tensor &r, torch::Tensor &k, torch::Tensor &v, torch::Tensor &w, torch::Tensor &u, torch::Tensor &y) {
    cuda_forward(B, T, C, H, r.data_ptr<bf16>(), k.data_ptr<bf16>(), v.data_ptr<bf16>(), w.data_ptr<float>(), u.data_ptr<bf16>(), y.data_ptr<bf16>());
}
void backward(int64_t B, int64_t T, int64_t C, int64_t H, torch::Tensor &r, torch::Tensor &k, torch::Tensor &v, torch::Tensor &w, torch::Tensor &u, torch::Tensor &gy, torch::Tensor &gr, torch::Tensor &gk, torch::Tensor &gv, torch::Tensor &gw, torch::Tensor &gu) {
    cuda_backward(B, T, C, H, r.data_ptr<bf16>(), k.data_ptr<bf16>(), v.data_ptr<bf16>(), w.data_ptr<float>(), u.data_ptr<bf16>(), gy.data_ptr<bf16>(), gr.data_ptr<bf16>(), gk.data_ptr<bf16>(), gv.data_ptr<bf16>(), gw.data_ptr<bf16>(), gu.data_ptr<bf16>());
}

PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
    m.def("forward", &forward, "wkv6 forward");
    m.def("backward", &backward, "wkv6 backward");
}

TORCH_LIBRARY(wkv6, m) {
    m.def("forward", forward);
    m.def("backward", backward);
}

0
finetune/lora/v6/src/__init__.py vendored Normal file

303
finetune/lora/v6/src/binidx.py vendored Normal file

@@ -0,0 +1,303 @@
from lib2to3.pgen2 import token
import os
import torch
import numpy as np
import shutil
import struct
from functools import lru_cache
from itertools import accumulate


def print_rank_0(*message):
    pass
    # """If distributed is initialized print only on rank 0."""
    # if torch.distributed.is_initialized():
    #     if torch.distributed.get_rank() == 0:
    #         print(*message, flush=True)
    # else:
    #     print(*message, flush=True)


def _warmup_mmap_file(path):
    pass
    # with open(path, "rb") as stream:
    #     while stream.read(100 * 1024 * 1024):
    #         pass


dtypes = {
    1: np.uint8,
    2: np.int8,
    3: np.int16,
    4: np.int32,
    5: np.int64,
    6: float,
    7: np.double,
    8: np.uint16,
}


def code(dtype):
    for k in dtypes.keys():
        if dtypes[k] == dtype:
            return k
    raise ValueError(dtype)


def index_file_path(prefix_path):
    return prefix_path + ".idx"


def data_file_path(prefix_path):
    return prefix_path + ".bin"


class MMapIndexedDataset(torch.utils.data.Dataset):
    class Index(object):
        _HDR_MAGIC = b"MMIDIDX\x00\x00"

        @classmethod
        def writer(cls, path, dtype):
            class _Writer(object):
                def __enter__(self):
                    self._file = open(path, "wb")
                    # Write Magic string so we can check the file format then opening it again.
                    self._file.write(cls._HDR_MAGIC)
                    # Write version number
                    # Little endian unsigned 64 Bit integer
                    self._file.write(struct.pack("<Q", 1))
                    # Little endian unsigned 8 Bit integer
                    self._file.write(struct.pack("<B", code(dtype)))
                    return self

                @staticmethod
                def _get_pointers(sizes):
                    dtype_size = dtype().itemsize
                    address = 0
                    pointers = []
                    for size in sizes:
                        pointers.append(address)
                        address += size * dtype_size
                    return pointers

                def write(self, sizes, doc_idx):
                    pointers = self._get_pointers(sizes)
                    # Little endian unsigned 64 Bit integer
                    self._file.write(struct.pack("<Q", len(sizes)))
                    # Little endian unsigned 64 Bit integer
                    self._file.write(struct.pack("<Q", len(doc_idx)))
                    sizes = np.array(sizes, dtype=np.int32)
                    self._file.write(sizes.tobytes(order="C"))
                    del sizes
                    pointers = np.array(pointers, dtype=np.int64)
                    self._file.write(pointers.tobytes(order="C"))
                    del pointers
                    doc_idx = np.array(doc_idx, dtype=np.int64)
                    self._file.write(doc_idx.tobytes(order="C"))

                def __exit__(self, exc_type, exc_val, exc_tb):
                    self._file.close()

            return _Writer()

        def __init__(self, path, skip_warmup=False):
            with open(path, "rb") as stream:
                magic_test = stream.read(9)
                assert self._HDR_MAGIC == magic_test, (
                    "Index file doesn't match expected format. "
                    "Make sure that --dataset-impl is configured properly."
                )
                # Little endian unsigned 64 Bit integer
                version = struct.unpack("<Q", stream.read(8))
                assert (1,) == version
                # Little endian unsigned 8 Bit integer
                (dtype_code,) = struct.unpack("<B", stream.read(1))
                self._dtype = dtypes[dtype_code]
                self._dtype_size = self._dtype().itemsize
                self._len = struct.unpack("<Q", stream.read(8))[0]
                self._doc_count = struct.unpack("<Q", stream.read(8))[0]
                offset = stream.tell()

            if not skip_warmup:
                print_rank_0(" warming up index mmap file...")
                _warmup_mmap_file(path)

            self._bin_buffer_mmap = np.memmap(path, mode="r", order="C")
            self._bin_buffer = memoryview(self._bin_buffer_mmap)
            print_rank_0(" reading sizes...")
            self._sizes = np.frombuffer(
                self._bin_buffer, dtype=np.int32, count=self._len, offset=offset
            )
            print_rank_0(" reading pointers...")
            self._pointers = np.frombuffer(
                self._bin_buffer,
                dtype=np.int64,
                count=self._len,
                offset=offset + self._sizes.nbytes,
            )
            print_rank_0(" reading document index...")
            self._doc_idx = np.frombuffer(
                self._bin_buffer,
                dtype=np.int64,
                count=self._doc_count,
                offset=offset + self._sizes.nbytes + self._pointers.nbytes,
            )

        def __del__(self):
            self._bin_buffer_mmap._mmap.close()
            del self._bin_buffer_mmap

        @property
        def dtype(self):
            return self._dtype

        @property
        def sizes(self):
            return self._sizes

        @property
        def doc_idx(self):
            return self._doc_idx

        @lru_cache(maxsize=8)
        def __getitem__(self, i):
            return self._pointers[i], self._sizes[i]

        def __len__(self):
            return self._len

    def __init__(self, path, skip_warmup=False):
        super().__init__()
        self._path = None
        self._index = None
        self._bin_buffer = None
        self._do_init(path, skip_warmup)

    def __getstate__(self):
        return self._path

    def __setstate__(self, state):
        self._do_init(state)

    def _do_init(self, path, skip_warmup):
        self._path = path
        self._index = self.Index(index_file_path(self._path), skip_warmup)

        if not skip_warmup:
            print_rank_0(" warming up data mmap file...")
            _warmup_mmap_file(data_file_path(self._path))
        print_rank_0(" creating numpy buffer of mmap...")
        self._bin_buffer_mmap = np.memmap(
            data_file_path(self._path), mode="r", order="C"
        )
        print_rank_0(" creating memory view of numpy buffer...")
        self._bin_buffer = memoryview(self._bin_buffer_mmap)

    def __del__(self):
        self._bin_buffer_mmap._mmap.close()
        del self._bin_buffer_mmap
        del self._index

    def __len__(self):
        return len(self._index)

    # @lru_cache(maxsize=8)
    def __getitem__(self, idx):
        if isinstance(idx, int):
            ptr, size = self._index[idx]
            np_array = np.frombuffer(
                self._bin_buffer, dtype=self._index.dtype, count=size, offset=ptr
            )
            return np_array
        elif isinstance(idx, slice):
            start, stop, step = idx.indices(len(self))
            if step != 1:
                raise ValueError("Slices into indexed_dataset must be contiguous")
            ptr = self._index._pointers[start]
            sizes = self._index._sizes[idx]
            offsets = list(accumulate(sizes))
            total_size = sum(sizes)
            np_array = np.frombuffer(
                self._bin_buffer, dtype=self._index.dtype, count=total_size, offset=ptr
            )
            sents = np.split(np_array, offsets[:-1])
            return sents

    def get(self, idx, offset=0, length=None):
        """Retrieves a single item from the dataset with the option to only
        return a portion of the item.

        get(idx) is the same as [idx] but get() does not support slicing.
        """
        ptr, size = self._index[idx]
        if length is None:
            length = size - offset
        ptr += offset * np.dtype(self._index.dtype).itemsize
        np_array = np.frombuffer(
            self._bin_buffer, dtype=self._index.dtype, count=length, offset=ptr
        )
        return np_array

    def pad(self, idx, length=None):
        ptr, size = self._index[idx]
        try:
            np_array = np.frombuffer(
                self._bin_buffer, dtype=self._index.dtype, count=length, offset=ptr
            )
        except:
            np_array = np.frombuffer(
                self._bin_buffer, dtype=self._index.dtype, count=size, offset=ptr
            )
            ptr0, _ = self._index[0]
            np_array0 = np.frombuffer(
                self._bin_buffer,
                dtype=self._index.dtype,
                count=length - size,
                offset=ptr0,
            )
            np_array = np.append(np_array, np_array0)
        return np_array

    def only(self, idx):
        ptr, size = self._index[idx]
        np_array = np.frombuffer(
            self._bin_buffer, dtype=self._index.dtype, count=size, offset=ptr
        )
        return np_array

    @property
    def sizes(self):
        return self._index.sizes

    @property
    def doc_idx(self):
        return self._index.doc_idx

    def get_doc_idx(self):
        return self._index._doc_idx

    def set_doc_idx(self, doc_idx_):
        self._index._doc_idx = doc_idx_

    @property
    def supports_prefetch(self):
        return False

    @staticmethod
    def exists(path):
        return os.path.exists(index_file_path(path)) and os.path.exists(
            data_file_path(path)
        )
242
finetune/lora/v6/src/dataset.py vendored Normal file

@@ -0,0 +1,242 @@
########################################################################################################
# The RWKV Language Model - https://github.com/BlinkDL/RWKV-LM
########################################################################################################

import json, math, random, os, sys
import numpy as np
import torch
from torch.utils.data import Dataset
from pytorch_lightning.utilities import rank_zero_info
from .binidx import MMapIndexedDataset
from .utils import MaybeIsPrime


class MyDataset(Dataset):
    def __init__(self, args):
        self.args = args

        if args.data_type == "binidx":
            self.vocab_size = args.vocab_size
            rank_zero_info(
                f"Current vocab size = {self.vocab_size} (make sure it's correct)"
            )

            if args.my_pile_version == 1:
                self.data = MMapIndexedDataset(args.data_file)
                self.data_size = (
                    len(self.data._bin_buffer) // self.data._index._dtype_size
                )
                rank_zero_info(f"Data has {self.data_size} tokens.")
            elif args.my_pile_version == 2:
                data_list = (
                    open(args.data_file, "r", encoding="utf-8")
                    .read()
                    .strip()
                    .split("\n")
                )
                data_list = [i.strip().split(" ") for i in data_list]
                self.data = []
                self.data_size = int(data_list[-1][-1])
                rank_zero_info(f"Data has {self.data_size} chunks.")
                for d in data_list:
                    data = MMapIndexedDataset(d[0])
                    data_size = len(data._bin_buffer) // data._index._dtype_size
                    assert (data_size - args.ctx_len) == int(d[1])
                    self.data += [[int(d[-1]), int(d[1]), data]]
                # rank_zero_info(self.data)

            if args.my_qa_mask > 0:
                # self.data_pile = MMapIndexedDataset('/fsx/pile/pile_20B_tokenizer_text_document')
                self.data_pile = MMapIndexedDataset(
                    "/fsx/pile_deduped/pile_0.87_deduped_text_document"
                )
                self.data_pile_size = (
                    len(self.data_pile._bin_buffer) // self.data._index._dtype_size
                )
            else:
                self.data_pile = None
                self.data_pile_size = 0

            if args.my_pile_stage > 0:
                # assert self.data_size == 332115325534 and self.vocab_size == 50277
                self.samples_per_epoch = args.epoch_steps * args.real_bsz
                assert self.samples_per_epoch == 40320
                rank_zero_info(
                    f"########## Pile 20b-tokenized stage {args.my_pile_stage} ##########"
                )
                dataset_slot = self.data_size // args.ctx_len
                if args.my_pile_stage != 4:
                    assert MaybeIsPrime(args.magic_prime)
                    assert args.magic_prime % 3 == 2
                    assert (
                        args.magic_prime / dataset_slot > 0.99
                        and args.magic_prime / dataset_slot <= 1
                    )
        elif args.data_type == "numpy":
            self.data = np.load(args.data_file).astype("int")
            self.vocab_size = args.vocab_size
            rank_zero_info(
                f"Current vocab size = {self.vocab_size} (make sure it's correct)"
            )
            self.data_size = len(self.data)
            rank_zero_info(f"Data has {self.data_size} tokens.")
        elif args.data_type == "uint16":
            self.data = (
                np.fromfile(args.data_file, dtype=np.uint16)
                .astype("int32")
                .reshape(-1, args.my_sample_len)
            )
            self.vocab_size = args.vocab_size
            rank_zero_info(
                f"Current vocab size = {self.vocab_size} (make sure it's correct)"
            )
            self.data_size = self.data.shape[0]
            rank_zero_info(f"Data has {self.data_size} samples.")
        else:
            if args.data_type == "dummy":
                rank_zero_info("Building dummy data...")
                self.data = ""
                for i in range(100000):
                    aa = (i) % 10000
                    bb = (i * i) % 10000
                    cc = aa + bb
                    self.data += f".{aa}+{bb}={cc}."
            else:
                self.data = open(args.data_file, "r", encoding=args.data_type).read()
            rank_zero_info("Building token list...")
            unique = sorted(list(set(self.data)))
            self.vocab_size = len(unique)
            # rank_zero_info()
            # for u in unique:
            #     print(u, end=' ')
            # rank_zero_info('\n\n')
            xx = 0
            xxObj = {}
            for u in unique:
                xxObj[xx] = u
                xx += 1
            with open(
                f"{args.proj_dir}/vocab.json", "w", encoding="utf-8"
            ) as vocab_file:
                vocab_file.write(json.dumps(xxObj, ensure_ascii=False))
            self.data_size = len(self.data)
            rank_zero_info(
                f"Data has {self.data_size} tokens, {self.vocab_size} vocab size."
            )
            self.stoi = {ch: i for i, ch in enumerate(unique)}
            self.itos = {i: ch for i, ch in enumerate(unique)}

    def __len__(self):
        return self.args.epoch_steps * self.args.micro_bsz

    def __getitem__(self, idx):
        args = self.args
        rank = self.global_rank
        epoch = self.real_epoch
        world_size = self.world_size
        # print(f"epoch {epoch} idx {idx} rank {rank}/{world_size}")

        if args.data_type == "uint16":
            i = np.random.randint(0, self.data_size - 1)
            dix = self.data[i]
            x = torch.tensor(dix[:-1], dtype=torch.long)
            y = torch.tensor(dix[1:], dtype=torch.long)
        else:
            ctx_len = args.ctx_len
            req_len = ctx_len + 1
            magic_prime = args.magic_prime
            data = self.data

            if args.my_pile_stage > 0:
                ii = 1 + epoch * self.samples_per_epoch + (idx * world_size) + rank

                if args.my_qa_mask > 0:
                    ii_orig = ii
                    if ii % 2 == 0:
                        ii = -1
                        data = self.data_pile
                    else:
                        ii = ii // 2
                if data == self.data_pile:
                    i = np.random.randint(0, self.data_pile_size - req_len)
                else:
                    if args.my_pile_stage == 4 or ii < args.my_random_steps:
                        # cheat: pick a random spot in dataset
                        if args.my_pile_version == 1:
                            i = np.random.randint(0, self.data_size - req_len)
                        else:
                            i = np.random.randint(0, self.data_size)
                    else:
                        ii = ii - args.my_random_steps
                        factor = (math.sqrt(5) - 1) / 2
                        factor = int(magic_prime * factor)
                        i = ((factor * ii * ii * ii) % magic_prime) * ctx_len
                        i = i + args.my_pile_shift
                    # print(f"epoch {epoch} idx {idx} rank {rank}/{world_size} ii {ii} pos {round(i / self.data_size, 3)}")
            else:
                # cheat: pick a random spot in dataset
                i = np.random.randint(0, self.data_size - req_len)

            if args.data_type == "binidx":
                if args.my_pile_version == 1:
                    dix = data.get(idx=0, offset=i, length=req_len).astype(int)
                    # dix = data.pad(idx=idx, length=req_len).astype(int)
                else:
                    # self.data : cutoff, chunk_count, data
                    for j in range(len(data)):
                        if i < data[j][0]:
                            ii = i
                            i = (i - (data[j - 1][0] if j > 0 else 0)) % data[j][1]
                            dix = (
                                data[j][2]
                                .get(idx=0, offset=i, length=req_len)
                                .astype(int)
                            )
                            # print(ii, j, i)
                            break
            elif args.data_type == "numpy":
                dix = data[i : i + req_len]
            else:
                dix = [self.stoi[s] for s in data[i : i + req_len]]

            if args.my_qa_mask == 1:
                if data == self.data_pile:
                    z = [1] * ctx_len
                else:
                    z = [0] * ctx_len
                    z_sum = 0
                    isGood = False
                    for i in range(3, ctx_len):
                        if (
                            dix[i] == 27
                            and dix[i - 1] == 34
                            and dix[i - 2] == 187
                            and dix[i - 3] == 187
                        ):
                            isGood = True
                        if dix[i] == 0:
                            isGood = False
                        if isGood:
                            z[i] = 1
                            z_sum += 1
                    if z_sum == 0:
                        z = [1] * ctx_len
                        i = np.random.randint(0, self.data_pile_size - req_len)
                        dix = self.data_pile.get(
                            idx=0, offset=i, length=req_len
                        ).astype(int)
                z = torch.tensor(z, dtype=torch.bfloat16)

            x = torch.tensor(dix[:-1], dtype=torch.long)
            y = torch.tensor(dix[1:], dtype=torch.long)

            # if ii_orig < 50:
            #     # if rank == 1:
            #     print('rank', rank, 'i', ii_orig, ii, i, 'x', x[:5], '...', x[-5:])
            # else:
            #     exit(0)

            if args.my_qa_mask == 1:
                return x, y, z

        return x, y
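The `magic_prime` sampling in `__getitem__` leans on a number-theory fact that the asserts in `__init__` enforce: if p is prime and p ≡ 2 (mod 3), then x ↦ x³ mod p is a bijection on the residues mod p, so stepping ii through successive values visits every context-window slot exactly once per epoch cycle. A quick standalone check with a small stand-in prime:

p = 11  # stand-in for a real magic_prime; note 11 is prime and 11 % 3 == 2
cubes = {(x * x * x) % p for x in range(p)}
assert cubes == set(range(p))  # cubing permutes the residues mod p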

1086
finetune/lora/v6/src/model.py vendored Normal file

File diff suppressed because it is too large

310
finetune/lora/v6/src/trainer.py vendored Normal file

@@ -0,0 +1,310 @@
import os, math, time, datetime, subprocess
import torch
from torch.utils.data import DataLoader
import pytorch_lightning as pl
from pytorch_lightning.utilities import rank_zero_info, rank_zero_only
from .model import LORA_CONFIG
def my_save(args, trainer, dd, ff):
if "14b-run1" in ff:
fn = ff.split("/")[-1]
fff = "/dev/shm/" + fn
torch.save(dd, fff)
subprocess.Popen(f" aws s3 mv {fff} s3://rwkv-14b-4k/{fn} --quiet", shell=True)
elif ("world/14b" in ff) or ("world/7b" in ff):
aa = ff.split("/")[1]
fn = ff.split("/")[-1]
fff = f"/dev/shm/{aa}-{fn}"
torch.save(dd, fff)
subprocess.Popen(
f" aws s3 mv {fff} s3://rwkv-world/{aa}-{fn} --quiet", shell=True
)
else:
if "deepspeed_stage_3" in args.strategy:
trainer.save_checkpoint(ff, weights_only=True)
else:
torch.save(dd, ff)
class train_callback(pl.Callback):
def __init__(self, args):
super().__init__()
self.args = args
def on_train_batch_start(self, trainer, pl_module, batch, batch_idx):
args = self.args
# if args.cuda_cleanup > 0:
# torch.cuda.empty_cache()
real_step = trainer.global_step + args.epoch_begin * args.epoch_steps
# LR schedule
w_step = args.warmup_steps
if args.lr_final == args.lr_init or args.epoch_count == 0:
lr = args.lr_init
else:
decay_step = real_step - args.my_pile_edecay * args.epoch_steps
decay_total = (args.epoch_count - args.my_pile_edecay) * args.epoch_steps
progress = (decay_step - w_step + 1) / (decay_total - w_step)
progress = min(1, max(0, progress))
if args.lr_final == 0 or args.lr_init == 0: # linear decay
lr = args.lr_init + (args.lr_final - args.lr_init) * progress
else: # exp decay
lr = args.lr_init * math.exp(
math.log(args.lr_final / args.lr_init) * pow(progress, 1)
)
# if trainer.is_global_zero:
# print(trainer.global_step, decay_step, decay_total, w_step, progress, lr)
if args.my_exit_tokens != 0: # cosine decay
real_tokens = real_step * args.ctx_len * args.real_bsz
warmup_tokens = w_step * args.ctx_len * args.real_bsz
progress = (real_tokens - warmup_tokens) / (
abs(args.my_exit_tokens) - warmup_tokens
)
progress = max(0, min(1, progress))
lr_final_factor = args.lr_final / args.lr_init
lr_mult = (0.5 + lr_final_factor / 2) + (
0.5 - lr_final_factor / 2
) * math.cos(math.pi * progress)
if args.my_exit_tokens > 0:
lr = args.lr_init * lr_mult
else:
lr = (lr + args.lr_init * lr_mult) / 2
if progress >= 1:
if (trainer.is_global_zero) or ("deepspeed_stage_3" in args.strategy):
my_save(
args,
trainer,
pl_module.state_dict(),
f"{args.proj_dir}/rwkv-final.pth",
)
exit(0)
if trainer.global_step < w_step:
lr = lr * (0.2 + 0.8 * trainer.global_step / w_step)
if args.weight_decay_final > 0:
wd_now = args.weight_decay * math.exp(
math.log(args.weight_decay_final / args.weight_decay) * progress
)
else:
wd_now = args.weight_decay
for param_group in trainer.optimizers[0].param_groups:
if param_group["weight_decay"] > 0:
param_group["weight_decay"] = wd_now
if args.layerwise_lr > 0:
param_group["lr"] = lr * param_group["my_lr_scale"]
# print(param_group["lr"], param_group["my_lr_scale"])
else:
param_group["lr"] = lr
trainer.my_lr = lr
trainer.my_wd = wd_now
# rank_zero_info(f"{real_step} {lr}")
if trainer.global_step == 0:
if trainer.is_global_zero: # logging
trainer.my_loss_sum = 0
trainer.my_loss_count = 0
trainer.my_log = open(args.proj_dir + "/train_log.txt", "a")
trainer.my_log.write(
f"NEW RUN {args.my_timestamp}\n{vars(self.args)}\n"
)
try:
print(f"\n{trainer.strategy.config}\n")
trainer.my_log.write(f"{trainer.strategy.config}\n")
except:
pass
trainer.my_log.flush()
if len(args.wandb) > 0:
print("Login to wandb...")
import wandb
wandb.init(
project=args.wandb,
name=args.run_name + " " + args.my_timestamp,
config=args,
save_code=False,
)
trainer.my_wandb = wandb
def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx):
args = self.args
token_per_step = args.ctx_len * args.real_bsz
real_step = trainer.global_step + args.epoch_begin * args.epoch_steps
if trainer.is_global_zero: # logging
t_now = time.time_ns()
kt_s = 0
try:
t_cost = (t_now - trainer.my_time_ns) / 1e9
kt_s = token_per_step / t_cost / 1000
self.log("REAL it/s", 1.0 / t_cost, prog_bar=True, on_step=True)
self.log("Kt/s", kt_s, prog_bar=True, on_step=True)
except:
pass
trainer.my_time_ns = t_now
if pl.__version__[0] == "2":
trainer.my_loss = outputs["loss"]
else:
trainer.my_loss = trainer.my_loss_all.float().mean().item()
trainer.my_loss_sum += trainer.my_loss
trainer.my_loss_count += 1
trainer.my_epoch_loss = trainer.my_loss_sum / trainer.my_loss_count
self.log("lr", trainer.my_lr, prog_bar=True, on_step=True)
self.log("loss", trainer.my_epoch_loss, prog_bar=True, on_step=True)
# self.log("s", real_step, prog_bar=True, on_step=True)
if len(args.wandb) > 0:
lll = {
"loss": trainer.my_loss,
"lr": trainer.my_lr,
"wd": trainer.my_wd,
"Gtokens": real_step * token_per_step / 1e9,
}
if kt_s > 0:
lll["kt/s"] = kt_s
trainer.my_wandb.log(lll, step=int(real_step))
if (trainer.is_global_zero) or (
"deepspeed_stage_3" in args.strategy
): # save pth
if args.magic_prime > 0:
expand_factor = 2 if args.my_qa_mask > 0 else 1
if int(real_step) == int(
args.magic_prime * expand_factor // args.real_bsz
) - 1 + int(args.my_random_steps):
to_save_dict = pl_module.state_dict()
my_save(
args,
trainer,
to_save_dict,
f"{args.proj_dir}/rwkv-final.pth",
)
# if args.batch_save==batch_idx :
# to_save_dict = pl_module.state_dict()
# for name, state in to_save_dict.items():
# if 'img' in name:
# to_save_dict[name] = state
# try:
# my_save(
# args, trainer,
# to_save_dict,
# f"{args.proj_dir}/rwkv-{args.epoch_begin + trainer.current_epoch}-{batch_idx}.pth",
# )
# except Exception as e:
# print('Error\n\n', e, '\n\n')
def on_train_epoch_start(self, trainer, pl_module):
args = self.args
if pl.__version__[0] == "2":
dataset = trainer.train_dataloader.dataset
else:
dataset = trainer.train_dataloader.dataset.datasets
assert "MyDataset" in str(dataset)
dataset.global_rank = trainer.global_rank
dataset.real_epoch = int(args.epoch_begin + trainer.current_epoch)
dataset.world_size = trainer.world_size
# print(f'########## world_size {dataset.world_size} global_rank {dataset.global_rank} real_epoch {dataset.real_epoch} ##########')
def on_train_epoch_end(self, trainer, pl_module):
args = self.args
to_save_dict = {}
if (trainer.is_global_zero) or (
"deepspeed_stage_3" in args.strategy
): # save pth
if (
args.epoch_save > 0 and trainer.current_epoch % args.epoch_save == 0
) or (trainer.current_epoch == args.epoch_count - 1):
if args.data_type == "wds_img":
raw_dict = pl_module.state_dict()
for k in raw_dict:
if k.startswith("encoder.") or k.startswith("decoder."):
to_save_dict[k] = raw_dict[k]
else:
to_save_dict = pl_module.state_dict()
if args.data_type == "img" and not args.lora:
for name, state in to_save_dict.items():
if "img" in name:
to_save_dict[name] = state
if args.lora:
enable_time_finetune = "time" in LORA_CONFIG["parts"]
enable_ln_finetune = "ln" in LORA_CONFIG["parts"]
lora_dict = {}
for name, state in to_save_dict.items():
if "img" in name:
lora_dict[name] = state
if (
".lora_" in name
or (enable_time_finetune and ".time_" in name)
or (enable_ln_finetune and ".ln" in name)
):
lora_dict[name] = state
to_save_dict = lora_dict
try:
my_save(
args,
trainer,
to_save_dict,
f"{args.proj_dir}/rwkv-{args.epoch_begin + trainer.current_epoch}.pth",
)
except Exception as e:
print("Error\n\n", e, "\n\n")
if trainer.is_global_zero: # logging
trainer.my_log.write(
f"{args.epoch_begin + trainer.current_epoch} {trainer.my_epoch_loss:.6f} {math.exp(trainer.my_epoch_loss):.4f} {trainer.my_lr:.8f} {datetime.datetime.now()} {trainer.current_epoch}\n"
)
trainer.my_log.flush()
trainer.my_loss_sum = 0
trainer.my_loss_count = 0
if (args.epoch_begin + trainer.current_epoch) >= args.my_exit:
exit(0)
@rank_zero_only
def generate_init_weight(model, init_weight_name):
mm = model.generate_init_weight()
if model.args.my_pile_stage == 1:
if len(model.args.load_model) > 0:
print(f"Combine weights from {model.args.load_model}...")
load_dict = torch.load(model.args.load_model, map_location="cpu")
for k in load_dict:
try:
assert k in mm
except:
print("missing", k)
exit(0)
src = load_dict[k]
try:
mm[k] = src.reshape(mm[k].shape)
except:
tmp = mm[k].squeeze().clone()
print(k, src.shape, "-->", mm[k].shape)
ss = src.shape[0]
dd = tmp.shape[0]
for i in range(dd):
pos = i / dd * ss
if pos >= ss - 1:
tmp[i] = src[ss - 1]
else:
p0 = int(math.floor(pos))
ii = pos - p0
tmp[i] = src[p0] * (1 - ii) + src[p0 + 1] * (ii)
mm[k] = tmp.reshape(mm[k].shape)
sss = src.squeeze().float().cpu().numpy()
print(sss[:10], "...", sss[-10:])
mmm = mm[k].squeeze().float().cpu().numpy()
print(mmm[:10], "...", mmm[-10:])
print(f"Save to {init_weight_name}...")
torch.save(mm, init_weight_name)
if model.args.my_pile_stage == 1:
print("Done. Now go for stage 2.")
exit(0)
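When a checkpoint tensor cannot simply be reshaped to the new model's shape, the fallback above stretches or shrinks it along its first dimension with piecewise-linear interpolation. A minimal standalone sketch of that resampling, using illustrative names that do not appear in the file:

import math
import torch

def resample_1d(src: torch.Tensor, new_len: int) -> torch.Tensor:
    # piecewise-linear resampling, mirroring the shape-mismatch branch above
    ss = src.shape[0]
    out = torch.empty(new_len, dtype=src.dtype)
    for i in range(new_len):
        pos = i / new_len * ss
        if pos >= ss - 1:
            out[i] = src[ss - 1]
        else:
            p0 = int(math.floor(pos))
            frac = pos - p0
            out[i] = src[p0] * (1 - frac) + src[p0 + 1] * frac
    return out

# e.g. resample_1d(torch.arange(4.0), 8) stretches [0, 1, 2, 3] across 8 entries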

finetune/lora/v6/src/utils.py vendored Normal file

@@ -0,0 +1,139 @@
import json, time, random, os
import numpy as np
import torch
from torch.nn import functional as F
time_slot = {}
time_ref = time.time_ns()
def record_time(name):
if name not in time_slot:
time_slot[name] = 1e20
tt = (time.time_ns() - time_ref) / 1e9
if tt < time_slot[name]:
time_slot[name] = tt
class TOKENIZER:
def __init__(self, WORD_NAME, UNKNOWN_CHAR="\ue083"):
if "list" in str(type(WORD_NAME)):
self.charMode = False
if WORD_NAME[0] == WORD_NAME[1]:
from transformers import PreTrainedTokenizerFast
self.tokenizer = PreTrainedTokenizerFast(tokenizer_file=WORD_NAME[0])
else:
from transformers import GPT2TokenizerFast
self.tokenizer = GPT2TokenizerFast(WORD_NAME[0], WORD_NAME[1])
self.vocab_size = len(self.tokenizer)
else:
self.charMode = True
with open(WORD_NAME + ".json", "r", encoding="utf-16") as result_file:
self.word_table = json.load(result_file)
self.vocab_size = len(self.word_table)
self.stoi = {v: int(k) for k, v in self.word_table.items()}
self.itos = {int(k): v for k, v in self.word_table.items()}
self.UNKNOWN_CHAR = self.stoi[UNKNOWN_CHAR]
def refine_context(self, context):
context = context.strip().split("\n")
for c in range(len(context)):
context[c] = context[c].strip().strip("\u3000").strip("\r")
context = list(filter(lambda c: c != "", context))
context = "\n" + ("\n".join(context)).strip()
if context == "":
context = "\n"
return context
def sample_logits(
self, out, x, ctx_len, temperature=1.0, top_p_usual=None, top_p_newline=None
):
# out[self.UNKNOWN_CHAR] = -float('Inf')
lastChar = int(x[-1])
probs = F.softmax(out, dim=-1)
if self.charMode:
if self.itos[lastChar] == "\n":
top_p = top_p_newline
else:
top_p = top_p_usual
else:
top_p = top_p_usual
if os.environ["RWKV_RUN_DEVICE"] == "cpu":
probs = probs.numpy()
sorted_probs = np.sort(probs)[::-1]
cumulative_probs = np.cumsum(sorted_probs)
cutoff = float(sorted_probs[np.argmax(cumulative_probs > top_p)])
probs[probs < cutoff] = 0
            if temperature != 1.0:
                probs = np.power(probs, 1.0 / temperature)  # probs is a numpy array here and has no .pow method
probs = probs / np.sum(probs)
out = np.random.choice(a=len(probs), p=probs)
return out
else:
sorted_probs = torch.sort(probs, descending=True)[0]
cumulative_probs = torch.cumsum(sorted_probs, dim=-1).cpu().numpy()
cutoff = float(sorted_probs[np.argmax(cumulative_probs > top_p)])
probs[probs < cutoff] = 0
if temperature != 1.0:
probs = probs.pow(1.0 / temperature)
out = torch.multinomial(probs, num_samples=1)[0]
return out
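# Both branches above implement top-p (nucleus) sampling: sort the probabilities,
# take the smallest prefix whose cumulative mass exceeds top_p, use the last
# probability of that prefix as a cutoff, zero everything below it, optionally
# temperature-scale, renormalize, and sample. A compact numpy-only sketch of the
# same idea (illustrative only; not called anywhere in this file):
def nucleus_sample_sketch(probs, top_p, temperature=1.0):
    sorted_probs = np.sort(probs)[::-1]
    cumulative = np.cumsum(sorted_probs)
    cutoff = float(sorted_probs[np.argmax(cumulative > top_p)])
    probs = np.where(probs < cutoff, 0.0, probs)
    if temperature != 1.0:
        probs = np.power(probs, 1.0 / temperature)
    probs = probs / np.sum(probs)
    return int(np.random.choice(len(probs), p=probs))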
def MaybeIsPrime(number):
if FermatPrimalityTest(number) and MillerRabinPrimalityTest(number):
return True
else:
return False
def FermatPrimalityTest(number):
if number > 1:
        for _ in range(3):  # use _ to avoid shadowing the imported time module
randomNumber = random.randint(2, number) - 1
if pow(randomNumber, number - 1, number) != 1:
return False
return True
else:
return False
def MillerRabinPrimalityTest(number):
if number == 2:
return True
elif number == 1 or number % 2 == 0:
return False
oddPartOfNumber = number - 1
timesTwoDividNumber = 0
while oddPartOfNumber % 2 == 0:
oddPartOfNumber = oddPartOfNumber // 2
timesTwoDividNumber = timesTwoDividNumber + 1
    for _ in range(3):  # use _ to avoid shadowing the imported time module
while True:
randomNumber = random.randint(2, number) - 1
if randomNumber != 0 and randomNumber != 1:
break
randomNumberWithPower = pow(randomNumber, oddPartOfNumber, number)
if (randomNumberWithPower != 1) and (randomNumberWithPower != number - 1):
iterationNumber = 1
while (iterationNumber <= timesTwoDividNumber - 1) and (
randomNumberWithPower != number - 1
):
randomNumberWithPower = pow(randomNumberWithPower, 2, number)
iterationNumber = iterationNumber + 1
if randomNumberWithPower != (number - 1):
return False
return True
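The primality helpers above back the --magic_prime argument of the training script. By the usual RWKV-LM convention (an assumption here; this file does not state it), magic_prime is the largest prime p with p % 3 == 2 below datalen / ctx_len, used to stride pseudo-randomly through binidx data. A sketch of how such a prime could be located with MaybeIsPrime:

def find_magic_prime(data_len: int, ctx_len: int) -> int:
    # assumed convention: largest 3n+2 prime below data_len // ctx_len
    for p in range(data_len // ctx_len - 1, 2, -1):
        if p % 3 == 2 and MaybeIsPrime(p):
            return p
    raise ValueError("no suitable prime found")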

finetune/lora/v6/train.py vendored Normal file

@@ -0,0 +1,436 @@
########################################################################################################
# The RWKV Language Model - https://github.com/BlinkDL/RWKV-LM
########################################################################################################
import logging
logging.basicConfig(level=logging.INFO)
if __name__ == "__main__":
from argparse import ArgumentParser
from pytorch_lightning import Trainer
from pytorch_lightning.utilities import rank_zero_info, rank_zero_only
import pytorch_lightning as pl
rank_zero_info("########## work in progress ##########")
parser = ArgumentParser()
parser.add_argument("--load_model", default="", type=str) # full path, with .pth
parser.add_argument(
"--wandb", default="", type=str
) # wandb project name. if "" then don't use wandb
parser.add_argument("--proj_dir", default="out", type=str)
parser.add_argument("--random_seed", default="-1", type=int)
parser.add_argument("--data_file", default="", type=str)
parser.add_argument("--data_type", default="utf-8", type=str)
parser.add_argument(
"--vocab_size", default=0, type=int
) # vocab_size = 0 means auto (for char-level LM and .txt data)
parser.add_argument("--ctx_len", default=1024, type=int)
parser.add_argument(
"--epoch_steps", default=1000, type=int
) # a mini "epoch" has [epoch_steps] steps
parser.add_argument(
"--epoch_count", default=500, type=int
) # train for this many "epochs". will continue afterwards with lr = lr_final
parser.add_argument(
"--epoch_begin", default=0, type=int
) # if you load a model trained for x "epochs", set epoch_begin = x
parser.add_argument(
"--epoch_save", default=5, type=int
) # save the model every [epoch_save] "epochs"
parser.add_argument(
"--micro_bsz", default=12, type=int
) # micro batch size (batch size per GPU)
parser.add_argument("--n_layer", default=6, type=int)
parser.add_argument("--n_embd", default=512, type=int)
parser.add_argument("--dim_att", default=0, type=int)
parser.add_argument("--dim_ffn", default=0, type=int)
parser.add_argument(
"--pre_ffn", default=0, type=int
) # replace first att layer by ffn (sometimes better)
parser.add_argument("--head_qk", default=0, type=int) # my headQK trick
parser.add_argument("--tiny_att_dim", default=0, type=int) # tiny attention dim
parser.add_argument(
"--tiny_att_layer", default=-999, type=int
) # tiny attention @ which layer
parser.add_argument(
"--lr_init", default=6e-4, type=float
) # 6e-4 for L12-D768, 4e-4 for L24-D1024, 3e-4 for L24-D2048
parser.add_argument("--lr_final", default=1e-5, type=float)
parser.add_argument(
"--warmup_steps", default=-1, type=int
) # try 50 if you load a model
parser.add_argument("--beta1", default=0.9, type=float)
parser.add_argument(
"--beta2", default=0.99, type=float
) # use 0.999 when your model is close to convergence
parser.add_argument("--adam_eps", default=1e-8, type=float)
parser.add_argument(
"--grad_cp", default=0, type=int
) # gradient checkpt: saves VRAM, but slower
parser.add_argument(
"--dropout", default=0, type=float
) # try 0.01 / 0.02 / 0.05 / 0.1
parser.add_argument(
"--weight_decay", default=0, type=float
) # try 0.1 / 0.01 / 0.001
parser.add_argument("--weight_decay_final", default=-1, type=float)
parser.add_argument(
"--my_pile_version", default=1, type=int
) # my special pile version
parser.add_argument("--my_pile_stage", default=0, type=int) # my special pile mode
parser.add_argument(
"--my_pile_shift", default=-1, type=int
) # my special pile mode - text shift
parser.add_argument("--my_pile_edecay", default=0, type=int)
parser.add_argument(
"--layerwise_lr", default=1, type=int
) # layerwise lr for faster convergence (but slower it/s)
parser.add_argument(
"--ds_bucket_mb", default=200, type=int
) # deepspeed bucket size in MB. 200 seems enough
# parser.add_argument("--cuda_cleanup", default=0, type=int) # extra cuda cleanup (sometimes helpful)
parser.add_argument("--my_sample_len", default=0, type=int)
parser.add_argument("--my_ffn_shift", default=1, type=int)
parser.add_argument("--my_att_shift", default=1, type=int)
parser.add_argument(
"--head_size_a", default=64, type=int
) # can try larger values for larger models
parser.add_argument("--head_size_divisor", default=8, type=int)
parser.add_argument("--my_pos_emb", default=0, type=int)
parser.add_argument("--load_partial", default=0, type=int)
parser.add_argument("--magic_prime", default=0, type=int)
parser.add_argument("--my_qa_mask", default=0, type=int)
parser.add_argument("--my_random_steps", default=0, type=int)
parser.add_argument("--my_testing", default="", type=str)
parser.add_argument("--my_exit", default=99999999, type=int)
parser.add_argument("--my_exit_tokens", default=0, type=int)
# LORA
parser.add_argument("--emb", action="store_true")
parser.add_argument("--lora", action="store_true")
parser.add_argument("--lora_load", default="", type=str)
parser.add_argument("--lora_r", default=8, type=int)
parser.add_argument("--lora_alpha", default=32, type=float)
parser.add_argument("--lora_dropout", default=0.01, type=float)
parser.add_argument("--lora_parts", default="att,ln,time", type=str)
if pl.__version__[0] == "2":
parser.add_argument("--accelerator", default="gpu", type=str)
parser.add_argument("--strategy", default="auto", type=str)
parser.add_argument("--devices", default=1, type=int)
parser.add_argument("--num_nodes", default=1, type=int)
parser.add_argument("--precision", default="fp16", type=str)
parser.add_argument("--accumulate_grad_batches", default=1, type=int)
else:
parser = Trainer.add_argparse_args(parser)
args = parser.parse_args()
########################################################################################################
import os, warnings, math, datetime, sys, time
import numpy as np
import torch
from torch.utils.data import DataLoader
if "deepspeed" in args.strategy:
import deepspeed
from pytorch_lightning import seed_everything
if args.random_seed >= 0:
print(
f"########## WARNING: GLOBAL SEED {args.random_seed} THIS WILL AFFECT MULTIGPU SAMPLING ##########\n"
* 3
)
seed_everything(args.random_seed)
np.set_printoptions(precision=4, suppress=True, linewidth=200)
warnings.filterwarnings(
"ignore", ".*Consider increasing the value of the `num_workers` argument*"
)
warnings.filterwarnings(
"ignore", ".*The progress bar already tracks a metric with the*"
)
# os.environ["WDS_SHOW_SEED"] = "1"
args.my_timestamp = datetime.datetime.today().strftime("%Y-%m-%d-%H-%M-%S")
args.enable_checkpointing = False
args.replace_sampler_ddp = False
args.logger = False
args.gradient_clip_val = 1.0
args.num_sanity_val_steps = 0
args.check_val_every_n_epoch = int(1e20)
args.log_every_n_steps = int(1e20)
args.max_epochs = args.epoch_count # -1 continue forever
args.betas = (args.beta1, args.beta2)
args.real_bsz = int(args.num_nodes) * int(args.devices) * args.micro_bsz
os.environ["RWKV_MY_TESTING"] = args.my_testing
os.environ["RWKV_CTXLEN"] = str(args.ctx_len)
os.environ["RWKV_HEAD_SIZE_A"] = str(args.head_size_a)
if args.dim_att <= 0:
args.dim_att = args.n_embd
if args.dim_ffn <= 0:
args.dim_ffn = int((args.n_embd * 3.5) // 32 * 32) # default = 3.5x emb size
if args.data_type == "wds_img":
args.run_name = f"v{args.my_img_version}-{args.my_img_size}-{args.my_img_bit}bit-{args.my_img_clip}x{args.my_img_clip_scale}"
args.proj_dir = f"{args.proj_dir}-{args.run_name}"
else:
args.run_name = (
f"{args.vocab_size} ctx{args.ctx_len} L{args.n_layer} D{args.n_embd}"
)
if not os.path.exists(args.proj_dir):
os.makedirs(args.proj_dir)
if args.my_pile_stage > 0:
magic_prime_bak = args.magic_prime
if args.my_pile_shift < 0:
args.my_pile_shift = 0
if magic_prime_bak > 0:
args.magic_prime = magic_prime_bak
if args.my_qa_mask == 2:
args.epoch_count = 2 * args.magic_prime // 40320
else:
args.epoch_count = args.magic_prime // 40320
args.epoch_steps = 40320 // args.real_bsz
assert args.epoch_steps * args.real_bsz == 40320
# if args.my_pile_stage == 2:
# assert args.lr_final == args.lr_init
if args.my_pile_stage >= 2: # find latest saved model
list_p = []
for p in os.listdir(args.proj_dir):
if p.startswith("rwkv") and p.endswith(".pth"):
p = ((p.split("-"))[1].split("."))[0]
if p != "final":
if p == "init":
p = -1
else:
p = int(p)
list_p += [p]
list_p.sort()
max_p = list_p[-1]
if len(list_p) > 1:
args.my_pile_prev_p = list_p[-2] # in case max_p is corrupted
if max_p == -1:
args.load_model = f"{args.proj_dir}/rwkv-init.pth"
else:
args.load_model = f"{args.proj_dir}/rwkv-{max_p}.pth"
if args.warmup_steps < 0:
if args.my_pile_stage == 2:
args.warmup_steps = 10
else:
args.warmup_steps = 30
args.epoch_begin = max_p + 1
samples_per_epoch = args.epoch_steps * args.real_bsz
tokens_per_epoch = samples_per_epoch * args.ctx_len
    try:
        deepspeed_version = deepspeed.__version__
    except:
        deepspeed_version = None
rank_zero_info(
f"""
############################################################################
#
# RWKV-5 {args.precision.upper()} on {args.num_nodes}x{args.devices} {args.accelerator.upper()}, bsz {args.num_nodes}x{args.devices}x{args.micro_bsz}={args.real_bsz}, {args.strategy} {'with grad_cp' if args.grad_cp > 0 else ''}
#
# Data = {args.data_file} ({args.data_type}), ProjDir = {args.proj_dir}
#
# Epoch = {args.epoch_begin} to {args.epoch_begin + args.epoch_count - 1}, save every {args.epoch_save} epoch
#
# Each "epoch" = {args.epoch_steps} steps, {samples_per_epoch} samples, {tokens_per_epoch} tokens
#
# Model = {args.n_layer} n_layer, {args.n_embd} n_embd, {args.ctx_len} ctx_len
#
# Adam = lr {args.lr_init} to {args.lr_final}, warmup {args.warmup_steps} steps, beta {args.betas}, eps {args.adam_eps}
#
# Found torch {torch.__version__}, recommend 1.13.1+cu117 or newer
# Found deepspeed {deepspeed_version}, recommend 0.7.0 (faster than newer versions)
# Found pytorch_lightning {pl.__version__}, recommend 1.9.5
#
############################################################################
"""
)
rank_zero_info(str(vars(args)) + "\n")
assert args.data_type in ["utf-8", "utf-16le", "numpy", "binidx", "dummy", "uint16"]
if args.lr_final == 0 or args.lr_init == 0:
rank_zero_info(
"\n\nNote: lr_final = 0 or lr_init = 0. Using linear LR schedule instead.\n\n"
)
assert args.precision in ["fp32", "tf32", "fp16", "bf16"]
os.environ["RWKV_FLOAT_MODE"] = args.precision
if args.precision == "fp32":
for i in range(10):
rank_zero_info(
"\n\nNote: you are using fp32 (very slow). Try bf16 / tf32 for faster training.\n\n"
)
if args.precision == "fp16":
rank_zero_info(
"\n\nNote: you are using fp16 (might overflow). Try bf16 / tf32 for stable training.\n\n"
)
os.environ["RWKV_JIT_ON"] = "0"
if "deepspeed_stage_3" in args.strategy:
os.environ["RWKV_JIT_ON"] = "0"
torch.backends.cudnn.benchmark = True
torch.backends.cudnn.enabled = True
if args.precision == "fp32":
torch.backends.cudnn.allow_tf32 = False
torch.backends.cuda.matmul.allow_tf32 = False
else:
torch.backends.cudnn.allow_tf32 = True
torch.backends.cuda.matmul.allow_tf32 = True
if "32" in args.precision:
args.precision = 32
elif args.precision == "fp16":
args.precision = 16
else:
args.precision = "bf16"
########################################################################################################
from src.trainer import train_callback, generate_init_weight
from src.dataset import MyDataset
train_data = MyDataset(args)
args.vocab_size = train_data.vocab_size
from src.model import RWKV, LORA_CONFIG, LoraLinear
if args.lora:
assert args.lora_r > 0, "LoRA should have its `r` > 0"
LORA_CONFIG["r"] = args.lora_r
LORA_CONFIG["alpha"] = args.lora_alpha
LORA_CONFIG["dropout"] = args.lora_dropout
LORA_CONFIG["parts"] = set(str(args.lora_parts).split(","))
enable_time_finetune = "time" in LORA_CONFIG["parts"]
enable_ln_finetune = "ln" in LORA_CONFIG["parts"]
model = RWKV(args)
if args.lora:
model.requires_grad_(False)
for name, module in model.named_modules():
if any(n.startswith("lora_") for n, _ in module.named_parameters()):
print(f" LoRA additionally training module {name}")
for pname, param in module.named_parameters():
param.requires_grad = "lora_" in pname
elif enable_ln_finetune and ".ln" in name:
print(f" LoRA additionally training module {name}")
for param in module.parameters():
param.requires_grad = True
elif enable_time_finetune and any(
n.startswith("time") for n, _ in module.named_parameters()
):
for pname, param in module.named_parameters():
if pname.startswith("time"):
print(f" LoRA additionally training parameter {pname}")
param.requires_grad = True
if (
len(args.load_model) == 0 or args.my_pile_stage == 1
): # shall we build the initial weights?
init_weight_name = f"{args.proj_dir}/rwkv-init.pth"
generate_init_weight(model, init_weight_name) # save initial weights
args.load_model = init_weight_name
rank_zero_info(f"########## Loading {args.load_model}... ##########")
try:
load_dict = torch.load(args.load_model, map_location="cpu")
load_keys = list(load_dict.keys())
for k in load_keys:
if k.startswith("_forward_module."):
load_dict[k.replace("_forward_module.", "")] = load_dict[k]
del load_dict[k]
except:
rank_zero_info(f"Bad checkpoint {args.load_model}")
if args.my_pile_stage >= 2: # try again using another checkpoint
max_p = args.my_pile_prev_p
if max_p == -1:
args.load_model = f"{args.proj_dir}/rwkv-init.pth"
else:
args.load_model = f"{args.proj_dir}/rwkv-{max_p}.pth"
args.epoch_begin = max_p + 1
rank_zero_info(f"Trying {args.load_model}")
load_dict = torch.load(args.load_model, map_location="cpu")
if args.load_partial == 1:
load_keys = load_dict.keys()
for k in model.state_dict():
if k not in load_keys:
load_dict[k] = model.state_dict()[k]
model.load_state_dict(load_dict, strict=(not args.lora))
if os.path.isfile(args.lora_load):
model.load_state_dict(
torch.load(args.lora_load, map_location="cpu"), strict=False
)
if pl.__version__[0] == "2":
trainer = Trainer(
accelerator=args.accelerator,
strategy=args.strategy,
devices=args.devices,
num_nodes=args.num_nodes,
precision=args.precision,
logger=args.logger,
callbacks=[train_callback(args)],
max_epochs=args.max_epochs,
check_val_every_n_epoch=args.check_val_every_n_epoch,
num_sanity_val_steps=args.num_sanity_val_steps,
log_every_n_steps=args.log_every_n_steps,
enable_checkpointing=args.enable_checkpointing,
accumulate_grad_batches=args.accumulate_grad_batches,
gradient_clip_val=args.gradient_clip_val,
)
else:
trainer = Trainer.from_argparse_args(
args,
callbacks=[train_callback(args)],
)
if trainer.global_rank == 0:
for n in model.state_dict():
shape = model.state_dict()[n].shape
shape = [i for i in shape if i != 1]
if len(shape) > 1:
print(f"{str(shape[0]).ljust(5)} {str(shape[1]).ljust(5)} {n}")
else:
print(f"{str(shape[0]).ljust(5)} {n}")
if "deepspeed" in args.strategy:
trainer.strategy.config["zero_optimization"]["allgather_bucket_size"] = (
args.ds_bucket_mb * 1000 * 1000
)
trainer.strategy.config["zero_optimization"]["reduce_bucket_size"] = (
args.ds_bucket_mb * 1000 * 1000
)
# must set shuffle=False, persistent_workers=False (because worker is in another thread)
data_loader = DataLoader(
train_data,
shuffle=False,
pin_memory=True,
batch_size=args.micro_bsz,
num_workers=1,
persistent_workers=False,
drop_last=True,
)
trainer.fit(model, data_loader)
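A detail worth noting in the checkpoint handling above: the base weights are loaded with strict=(not args.lora), so in LoRA mode tensors for the freshly injected adapters may be missing from the checkpoint, and any --lora_load file is then overlaid with strict=False. A minimal sketch of that two-stage load, assuming some torch.nn.Module with LoRA parameters already injected (helper name is illustrative):

import torch

def load_base_then_lora(model, base_path: str, lora_path: str = "") -> None:
    # non-strict: missing lora_* tensors in the base checkpoint are tolerated
    model.load_state_dict(torch.load(base_path, map_location="cpu"), strict=False)
    if lora_path:
        # the LoRA checkpoint overwrites only adapter (and optionally time/ln) tensors
        model.load_state_dict(torch.load(lora_path, map_location="cpu"), strict=False)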


@@ -19,6 +19,7 @@
"file-saver": "^2.0.5",
"html-midi-player": "^1.5.0",
"i18next": "^22.4.15",
"katex": "^0.16.9",
"lodash-es": "^4.17.21",
"mobx": "^6.9.0",
"mobx-react-lite": "^3.4.3",
@@ -34,13 +35,16 @@
"react-router-dom": "^6.11.1",
"react-toastify": "^9.1.3",
"rehype-highlight": "^6.0.0",
"rehype-katex": "^6.0.3",
"rehype-raw": "^6.1.1",
"remark-breaks": "^3.0.3",
"remark-gfm": "^3.0.1",
"remark-math": "^5.1.1",
"usehooks-ts": "^2.9.1",
"uuid": "^9.0.0"
},
"devDependencies": {
"@tailwindcss/typography": "^0.5.10",
"@types/file-saver": "^2.0.7",
"@types/lodash-es": "^4.17.12",
"@types/react": "^18.2.6",
@@ -2283,6 +2287,34 @@
"integrity": "sha512-myfUej5naTBWnqOCc/MdVOLVjXUXtIA+NpDrDBKJtLLg2shUjBu3cZmB/85RyitKc55+lUUyl7oRfLOvkr2hsw==",
"dev": true
},
"node_modules/@tailwindcss/typography": {
"version": "0.5.10",
"resolved": "https://registry.npmjs.org/@tailwindcss/typography/-/typography-0.5.10.tgz",
"integrity": "sha512-Pe8BuPJQJd3FfRnm6H0ulKIGoMEQS+Vq01R6M5aCrFB/ccR/shT+0kXLjouGC1gFLm9hopTFN+DMP0pfwRWzPw==",
"dev": true,
"dependencies": {
"lodash.castarray": "^4.4.0",
"lodash.isplainobject": "^4.0.6",
"lodash.merge": "^4.6.2",
"postcss-selector-parser": "6.0.10"
},
"peerDependencies": {
"tailwindcss": ">=3.0.0 || insiders"
}
},
"node_modules/@tailwindcss/typography/node_modules/postcss-selector-parser": {
"version": "6.0.10",
"resolved": "https://registry.npmjs.org/postcss-selector-parser/-/postcss-selector-parser-6.0.10.tgz",
"integrity": "sha512-IQ7TZdoaqbT+LCpShg46jnZVlhWD2w6iQYAcYXfHARZ7X1t/UGhhceQDs5X0cGqKvYlHNOuv7Oa1xmb0oQuA3w==",
"dev": true,
"dependencies": {
"cssesc": "^3.0.0",
"util-deprecate": "^1.0.2"
},
"engines": {
"node": ">=4"
}
},
"node_modules/@tensorflow/tfjs": {
"version": "2.8.6",
"resolved": "https://registry.npmjs.org/@tensorflow/tfjs/-/tfjs-2.8.6.tgz",
@@ -2536,6 +2568,11 @@
"hoist-non-react-statics": "^3.3.0"
}
},
"node_modules/@types/katex": {
"version": "0.14.0",
"resolved": "https://registry.npmjs.org/@types/katex/-/katex-0.14.0.tgz",
"integrity": "sha512-+2FW2CcT0K3P+JMR8YG846bmDwplKUTsWgT2ENwdQ1UdVfRk3GQrh6Mi4sTopy30gI8Uau5CEqHTDZ6YvWIUPA=="
},
"node_modules/@types/lodash": {
"version": "4.14.202",
"resolved": "https://registry.npmjs.org/@types/lodash/-/lodash-4.14.202.tgz",
@@ -3463,6 +3500,17 @@
"resolved": "https://registry.npmmirror.com/emoji-regex/-/emoji-regex-8.0.0.tgz",
"integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A=="
},
"node_modules/entities": {
"version": "4.5.0",
"resolved": "https://registry.npmjs.org/entities/-/entities-4.5.0.tgz",
"integrity": "sha512-V0hjH4dGPh9Ao5p0MoRY6BVqtwCjhz6vI5LT8AJ55H+4g9/4vbHx1I54fS0XuclLhDHArPQCiMjDxjaL8fPxhw==",
"engines": {
"node": ">=0.12"
},
"funding": {
"url": "https://github.com/fb55/entities?sponsor=1"
}
},
"node_modules/esbuild": {
"version": "0.17.19",
"resolved": "https://registry.npmmirror.com/esbuild/-/esbuild-0.17.19.tgz",
@@ -3859,6 +3907,61 @@
"integrity": "sha512-8Rf9Y83NBReMnx0gFzA8JImQACstCYWUplepDa9xprwwtmgEZUF0h/i5xSA625zB/I37EtrswSST6OXxwaaIJQ==",
"optional": true
},
"node_modules/hast-util-from-dom": {
"version": "4.2.0",
"resolved": "https://registry.npmjs.org/hast-util-from-dom/-/hast-util-from-dom-4.2.0.tgz",
"integrity": "sha512-t1RJW/OpJbCAJQeKi3Qrj1cAOLA0+av/iPFori112+0X7R3wng+jxLA+kXec8K4szqPRGI8vPxbbpEYvvpwaeQ==",
"dependencies": {
"hastscript": "^7.0.0",
"web-namespaces": "^2.0.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/unified"
}
},
"node_modules/hast-util-from-html": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/hast-util-from-html/-/hast-util-from-html-1.0.2.tgz",
"integrity": "sha512-LhrTA2gfCbLOGJq2u/asp4kwuG0y6NhWTXiPKP+n0qNukKy7hc10whqqCFfyvIA1Q5U5d0sp9HhNim9gglEH4A==",
"dependencies": {
"@types/hast": "^2.0.0",
"hast-util-from-parse5": "^7.0.0",
"parse5": "^7.0.0",
"vfile": "^5.0.0",
"vfile-message": "^3.0.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/unified"
}
},
"node_modules/hast-util-from-html-isomorphic": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/hast-util-from-html-isomorphic/-/hast-util-from-html-isomorphic-1.0.0.tgz",
"integrity": "sha512-Yu480AKeOEN/+l5LA674a+7BmIvtDj24GvOt7MtQWuhzUwlaaRWdEPXAh3Qm5vhuthpAipFb2vTetKXWOjmTvw==",
"dependencies": {
"@types/hast": "^2.0.0",
"hast-util-from-dom": "^4.0.0",
"hast-util-from-html": "^1.0.0",
"unist-util-remove-position": "^4.0.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/unified"
}
},
"node_modules/hast-util-from-html/node_modules/parse5": {
"version": "7.1.2",
"resolved": "https://registry.npmjs.org/parse5/-/parse5-7.1.2.tgz",
"integrity": "sha512-Czj1WaSVpaoj0wbhMzLmWD69anp2WH7FXMB9n1Sy8/ZFF9jolSQVMu1Ij5WIyGmcBmhk7EOndpO4mIpihVqAXw==",
"dependencies": {
"entities": "^4.4.0"
},
"funding": {
"url": "https://github.com/inikulin/parse5?sponsor=1"
}
},
"node_modules/hast-util-from-parse5": {
"version": "7.1.2",
"resolved": "https://registry.npmmirror.com/hast-util-from-parse5/-/hast-util-from-parse5-7.1.2.tgz",
@@ -4185,6 +4288,29 @@
"node": ">=6"
}
},
"node_modules/katex": {
"version": "0.16.9",
"resolved": "https://registry.npmjs.org/katex/-/katex-0.16.9.tgz",
"integrity": "sha512-fsSYjWS0EEOwvy81j3vRA8TEAhQhKiqO+FQaKWp0m39qwOzHVBgAUBIXWj1pB+O2W3fIpNa6Y9KSKCVbfPhyAQ==",
"funding": [
"https://opencollective.com/katex",
"https://github.com/sponsors/katex"
],
"dependencies": {
"commander": "^8.3.0"
},
"bin": {
"katex": "cli.js"
}
},
"node_modules/katex/node_modules/commander": {
"version": "8.3.0",
"resolved": "https://registry.npmjs.org/commander/-/commander-8.3.0.tgz",
"integrity": "sha512-OkTL9umf+He2DZkUq8f8J9of7yL6RJKI24dVITBmNfZBmri9zYZQrKkuXiKhyfPSu8tUhnVBB1iKXevvnlR4Ww==",
"engines": {
"node": ">= 12"
}
},
"node_modules/keyborg": {
"version": "2.0.0",
"resolved": "https://registry.npmmirror.com/keyborg/-/keyborg-2.0.0.tgz",
@@ -4242,6 +4368,24 @@
"resolved": "https://registry.npmjs.org/lodash-es/-/lodash-es-4.17.21.tgz",
"integrity": "sha512-mKnC+QJ9pWVzv+C4/U3rRsHapFfHvQFoFB92e52xeyGMcX6/OlIl78je1u8vePzYZSkkogMPJ2yjxxsb89cxyw=="
},
"node_modules/lodash.castarray": {
"version": "4.4.0",
"resolved": "https://registry.npmjs.org/lodash.castarray/-/lodash.castarray-4.4.0.tgz",
"integrity": "sha512-aVx8ztPv7/2ULbArGJ2Y42bG1mEQ5mGjpdvrbJcJFU3TbYybe+QlLS4pst9zV52ymy2in1KpFPiZnAOATxD4+Q==",
"dev": true
},
"node_modules/lodash.isplainobject": {
"version": "4.0.6",
"resolved": "https://registry.npmjs.org/lodash.isplainobject/-/lodash.isplainobject-4.0.6.tgz",
"integrity": "sha512-oSXzaWypCMHkPC3NvBEaPHf0KsA5mvPrOPgQWDsbg8n7orZ290M0BmC/jgRZ4vcJ6DTAhjrsSYgdsW/F+MFOBA==",
"dev": true
},
"node_modules/lodash.merge": {
"version": "4.6.2",
"resolved": "https://registry.npmjs.org/lodash.merge/-/lodash.merge-4.6.2.tgz",
"integrity": "sha512-0KpjqXRVvrYyCsX1swR/XTK0va6VQkQM6MNo7PqW77ByjAhoARA8EfrP1N4+KlKj8YS0ZUCtRT/YUuhyYDujIQ==",
"dev": true
},
"node_modules/long": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/long/-/long-4.0.0.tgz",
@@ -4422,6 +4566,20 @@
"mdast-util-to-markdown": "^1.3.0"
}
},
"node_modules/mdast-util-math": {
"version": "2.0.2",
"resolved": "https://registry.npmjs.org/mdast-util-math/-/mdast-util-math-2.0.2.tgz",
"integrity": "sha512-8gmkKVp9v6+Tgjtq6SYx9kGPpTf6FVYRa53/DLh479aldR9AyP48qeVOgNZ5X7QUK7nOy4yw7vg6mbiGcs9jWQ==",
"dependencies": {
"@types/mdast": "^3.0.0",
"longest-streak": "^3.0.0",
"mdast-util-to-markdown": "^1.3.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/unified"
}
},
"node_modules/mdast-util-newline-to-break": {
"version": "1.0.0",
"resolved": "https://registry.npmmirror.com/mdast-util-newline-to-break/-/mdast-util-newline-to-break-1.0.0.tgz",
@@ -4625,6 +4783,29 @@
"uvu": "^0.5.0"
}
},
"node_modules/micromark-extension-math": {
"version": "2.1.2",
"resolved": "https://registry.npmjs.org/micromark-extension-math/-/micromark-extension-math-2.1.2.tgz",
"integrity": "sha512-es0CcOV89VNS9wFmyn+wyFTKweXGW4CEvdaAca6SWRWPyYCbBisnjaHLjWO4Nszuiud84jCpkHsqAJoa768Pvg==",
"dependencies": {
"@types/katex": "^0.16.0",
"katex": "^0.16.0",
"micromark-factory-space": "^1.0.0",
"micromark-util-character": "^1.0.0",
"micromark-util-symbol": "^1.0.0",
"micromark-util-types": "^1.0.0",
"uvu": "^0.5.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/unified"
}
},
"node_modules/micromark-extension-math/node_modules/@types/katex": {
"version": "0.16.7",
"resolved": "https://registry.npmjs.org/@types/katex/-/katex-0.16.7.tgz",
"integrity": "sha512-HMwFiRujE5PjrgwHQ25+bsLJgowjGjm5Z8FVSf0N6PwgJrwxH0QxzHYDcKsTfV3wva0vzrpqMTJS2jXPr5BMEQ=="
},
"node_modules/micromark-factory-destination": {
"version": "1.0.0",
"resolved": "https://registry.npmmirror.com/micromark-factory-destination/-/micromark-factory-destination-1.0.0.tgz",
@@ -5650,6 +5831,23 @@
"unist-util-visit": "^4.0.0"
}
},
"node_modules/rehype-katex": {
"version": "6.0.3",
"resolved": "https://registry.npmjs.org/rehype-katex/-/rehype-katex-6.0.3.tgz",
"integrity": "sha512-ByZlRwRUcWegNbF70CVRm2h/7xy7jQ3R9LaY4VVSvjnoVWwWVhNL60DiZsBpC5tSzYQOCvDbzncIpIjPZWodZA==",
"dependencies": {
"@types/hast": "^2.0.0",
"@types/katex": "^0.14.0",
"hast-util-from-html-isomorphic": "^1.0.0",
"hast-util-to-text": "^3.1.0",
"katex": "^0.16.0",
"unist-util-visit": "^4.0.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/unified"
}
},
"node_modules/rehype-raw": {
"version": "6.1.1",
"resolved": "https://registry.npmmirror.com/rehype-raw/-/rehype-raw-6.1.1.tgz",
@@ -5681,6 +5879,21 @@
"unified": "^10.0.0"
}
},
"node_modules/remark-math": {
"version": "5.1.1",
"resolved": "https://registry.npmjs.org/remark-math/-/remark-math-5.1.1.tgz",
"integrity": "sha512-cE5T2R/xLVtfFI4cCePtiRn+e6jKMtFDR3P8V3qpv8wpKjwvHoBA4eJzvX+nVrnlNy0911bdGmuspCSwetfYHw==",
"dependencies": {
"@types/mdast": "^3.0.0",
"mdast-util-math": "^2.0.0",
"micromark-extension-math": "^2.0.0",
"unified": "^10.0.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/unified"
}
},
"node_modules/remark-parse": {
"version": "10.0.2",
"resolved": "https://registry.npmmirror.com/remark-parse/-/remark-parse-10.0.2.tgz",
@@ -6543,6 +6756,19 @@
"@types/unist": "^2.0.0"
}
},
"node_modules/unist-util-remove-position": {
"version": "4.0.2",
"resolved": "https://registry.npmjs.org/unist-util-remove-position/-/unist-util-remove-position-4.0.2.tgz",
"integrity": "sha512-TkBb0HABNmxzAcfLf4qsIbFbaPDvMO6wa3b3j4VcEzFVaw1LBKwnW4/sRJ/atSLSzoIg41JWEdnE7N6DIhGDGQ==",
"dependencies": {
"@types/unist": "^2.0.0",
"unist-util-visit": "^4.0.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/unified"
}
},
"node_modules/unist-util-stringify-position": {
"version": "3.0.3",
"resolved": "https://registry.npmmirror.com/unist-util-stringify-position/-/unist-util-stringify-position-3.0.3.tgz",


@@ -20,6 +20,7 @@
"file-saver": "^2.0.5",
"html-midi-player": "^1.5.0",
"i18next": "^22.4.15",
"katex": "^0.16.9",
"lodash-es": "^4.17.21",
"mobx": "^6.9.0",
"mobx-react-lite": "^3.4.3",
@@ -35,13 +36,16 @@
"react-router-dom": "^6.11.1",
"react-toastify": "^9.1.3",
"rehype-highlight": "^6.0.0",
"rehype-katex": "^6.0.3",
"rehype-raw": "^6.1.1",
"remark-breaks": "^3.0.3",
"remark-gfm": "^3.0.1",
"remark-math": "^5.1.1",
"usehooks-ts": "^2.9.1",
"uuid": "^9.0.0"
},
"devDependencies": {
"@tailwindcss/typography": "^0.5.10",
"@types/file-saver": "^2.0.7",
"@types/lodash-es": "^4.17.12",
"@types/react": "^18.2.6",


@@ -4,7 +4,7 @@
"About": "約",
"Settings": "設定",
"Go to chat page": "チャットページに移動する",
"Manage your configs": "あなたの設定を管理す",
"Manage your configs, adjust the starting model and parameters": "設定を管理し、開始モデルとパラメータを調整します",
"Manage models": "モデルの管理",
"Run": "実行",
"Offline": "オフライン",
@@ -96,7 +96,7 @@
"Python dependencies are incomplete, would you like to install them?": "Pythonの依存関係が不完全です、インストールしますか",
"Install": "インストール",
"This is the latest version": "これは最新バージョンです",
"Use Tsinghua Pip Mirrors": "清華大学Pipミラーサーバーを使用",
"Use Alibaba Cloud Pip Mirrors": "Alibaba Cloud Pipミラーサーバーを使用",
"Model Config Exception": "モデル設定例外",
"Use Gitee Updates Source": "Gitee更新ソースを使用",
"Use Custom CUDA kernel to Accelerate": "カスタムCUDAカーネルを使用して加速",
@@ -347,5 +347,9 @@
"Parallel Token Chunk Size": "並列トークンチャンクサイズ",
"Maximum tokens to be processed in parallel at once. For high end GPUs, this could be 64 or 128 (faster).": "一度に並列で処理される最大トークン数。高性能なGPUの場合、64または128になります高速。",
"Global Penalty": "グローバルペナルティ",
"When generating a response, whether to include the submitted prompt as a penalty factor. By turning this off, you will get the same generated results as official RWKV Gradio. If you find duplicate results in the generated results, turning this on can help avoid generating duplicates.": "レスポンスを生成する際、提出されたプロンプトをペナルティ要因として含めるかどうか。これをオフにすると、公式RWKV Gradioと同じ生成結果を得ることができます。生成された結果に重複がある場合、これをオンにすることで重複の生成を回避するのに役立ちます。"
"When generating a response, whether to include the submitted prompt as a penalty factor. By turning this off, you will get the same generated results as official RWKV Gradio. If you find duplicate results in the generated results, turning this on can help avoid generating duplicates.": "レスポンスを生成する際、提出されたプロンプトをペナルティ要因として含めるかどうか。これをオフにすると、公式RWKV Gradioと同じ生成結果を得ることができます。生成された結果に重複がある場合、これをオンにすることで重複の生成を回避するのに役立ちます。",
"Create a new user or AI message content. You can prepare a chat record with AI here, and fill in the responses you want to get from AI in the tone of AI. When you use this preset, the chat record will be processed, and at this point, AI will better understand what you want it to do or what role to play.": "新しいユーザーまたはAIメッセージコンテンツを作成します。ここでAIとのチャット記録を準備し、AIから得たい応答をAIのトーンで記入することができます。このプリセットを使用すると、チャット記録が処理され、この時点でAIはあなたが望むことやどのような役割を果たすかをよりよく理解することができます。",
"The name used internally by the model when processing user message, changing this value helps improve the role-playing effect.": "ユーザーメッセージを処理する際にモデルが内部で使用する名前、この値を変更することで、役割演技の効果を向上させることができます。",
"The name used internally by the model when processing AI message, changing this value helps improve the role-playing effect.": "AIメッセージを処理する際にモデルが内部で使用する名前、この値を変更することで、役割演技の効果を向上させることができます。",
"Inside the model, there is a default prompt to improve the model's handling of common issues, but it may degrade the role-playing effect. You can disable this option to achieve a better role-playing effect.": "モデル内部には、一般的な問題の処理を改善するためのデフォルトのプロンプトがありますが、役割演技の効果を低下させる可能性があります。このオプションを無効にすることで、より良い役割演技効果を得ることができます。"
}


@@ -4,7 +4,7 @@
"About": "关于",
"Settings": "设置",
"Go to chat page": "前往聊天页",
"Manage your configs": "管理你的配置",
"Manage your configs, adjust the starting model and parameters": "管理你的配置, 调整启动的模型和参数",
"Manage models": "管理模型",
"Run": "运行",
"Offline": "离线",
@@ -96,7 +96,7 @@
"Python dependencies are incomplete, would you like to install them?": "Python依赖缺失, 是否安装?",
"Install": "安装",
"This is the latest version": "已是最新版",
"Use Tsinghua Pip Mirrors": "使用清华大学Pip镜像源",
"Use Alibaba Cloud Pip Mirrors": "使用阿里云Pip镜像源",
"Model Config Exception": "模型配置异常",
"Use Gitee Updates Source": "使用Gitee更新源",
"Use Custom CUDA kernel to Accelerate": "使用自定义CUDA算子加速",
@@ -347,5 +347,9 @@
"Parallel Token Chunk Size": "并行Token块大小",
"Maximum tokens to be processed in parallel at once. For high end GPUs, this could be 64 or 128 (faster).": "一次最多可以并行处理的token数量. 对于高端显卡, 这可以是64或128 (更快)",
"Global Penalty": "全局惩罚",
"When generating a response, whether to include the submitted prompt as a penalty factor. By turning this off, you will get the same generated results as official RWKV Gradio. If you find duplicate results in the generated results, turning this on can help avoid generating duplicates.": "生成响应时, 是否将提交的prompt也纳入到惩罚项. 关闭此项将得到与RWKV官方Gradio完全一致的生成结果. 如果你发现生成结果出现重复, 那么开启此项有助于避免生成重复"
"When generating a response, whether to include the submitted prompt as a penalty factor. By turning this off, you will get the same generated results as official RWKV Gradio. If you find duplicate results in the generated results, turning this on can help avoid generating duplicates.": "生成响应时, 是否将提交的prompt也纳入到惩罚项. 关闭此项将得到与RWKV官方Gradio完全一致的生成结果. 如果你发现生成结果出现重复, 那么开启此项有助于避免生成重复",
"Create a new user or AI message content. You can prepare a chat record with AI here, and fill in the responses you want to get from AI in the tone of AI. When you use this preset, the chat record will be processed, and at this point, AI will better understand what you want it to do or what role to play.": "新建一个 用户 或 AI 的发言内容. 你可以在这里准备好一段你与 AI 的聊天记录, 并用 AI 的口吻正确填写你想得到的 AI 的回复, 这样你在使用这个预设时, 这些聊天记录也会被处理, 并且此时 AI 能更好地理解你希望它做什么 / 扮演什么样的角色",
"The name used internally by the model when processing user message, changing this value helps improve the role-playing effect.": "模型内部处理用户发言时使用的名称, 更改此值有助于改善角色扮演效果",
"The name used internally by the model when processing AI message, changing this value helps improve the role-playing effect.": "模型内部处理AI发言时使用的名称, 更改此值有助于改善角色扮演效果",
"Inside the model, there is a default prompt to improve the model's handling of common issues, but it may degrade the role-playing effect. You can disable this option to achieve a better role-playing effect.": "模型内部有一个默认提示来改善模型处理常规问题的效果, 但它可能会让角色扮演的效果变差, 你可以关闭此选项来获得更好的角色扮演效果"
}


@@ -1,6 +1,9 @@
import 'katex/dist/katex.min.css';
import ReactMarkdown from 'react-markdown';
import rehypeRaw from 'rehype-raw';
import rehypeHighlight from 'rehype-highlight';
import rehypeKatex from 'rehype-katex';
import remarkMath from 'remark-math';
import remarkGfm from 'remark-gfm';
import remarkBreaks from 'remark-breaks';
import { FC } from 'react';
@@ -23,7 +26,7 @@ const Hyperlink: FC<any> = ({ href, children }) => {
const MarkdownRender: FC<ReactMarkdownOptions & { disabled?: boolean }> = (props) => {
return (
<div dir="auto" className="markdown-body">
<div dir="auto" className="prose markdown-body" style={{ maxWidth: '100%' }}>
{props.disabled ?
<div style={{ whiteSpace: 'pre-wrap' }}>
{props.children}
@@ -90,8 +93,9 @@ const MarkdownRender: FC<ReactMarkdownOptions & { disabled?: boolean }> = (props
'cite'
]}
unwrapDisallowed={true}
remarkPlugins={[remarkGfm, remarkBreaks]}
remarkPlugins={[remarkMath, remarkGfm, remarkBreaks]}
rehypePlugins={[
rehypeKatex,
rehypeRaw,
[
rehypeHighlight,


@@ -212,7 +212,9 @@ export const RunButton: FC<{ onClickRun?: MouseEventHandler, iconMode?: boolean
temperature: modelConfig.apiParameters.temperature,
top_p: modelConfig.apiParameters.topP,
presence_penalty: modelConfig.apiParameters.presencePenalty,
frequency_penalty: modelConfig.apiParameters.frequencyPenalty
frequency_penalty: modelConfig.apiParameters.frequencyPenalty,
penalty_decay: modelConfig.apiParameters.penaltyDecay,
global_penalty: modelConfig.apiParameters.globalPenalty
});
}


@@ -26,10 +26,12 @@ export const ToolTipButton: FC<{
onClick,
showDelay = 0
}) => {
return (
<Tooltip content={desc} showDelay={showDelay} hideDelay={0} relationship="label">
return (desc ?
<Tooltip content={desc} showDelay={showDelay} hideDelay={0} relationship="label">
<Button style={style} className={className} disabled={disabled} icon={icon} onClick={onClick} size={size}
shape={shape} appearance={appearance}>{text}</Button>
</Tooltip> :
<Button style={style} className={className} disabled={disabled} icon={icon} onClick={onClick} size={size}
shape={shape} appearance={appearance}>{text}</Button>
</Tooltip>
);
};


@@ -149,8 +149,8 @@ const ChatMessageItem: FC<{
className={classnames(
'flex p-2 rounded-lg overflow-hidden',
editing ? 'grow' : '',
messageItem.side === 'left' ? 'bg-gray-200' : 'bg-blue-500',
messageItem.side === 'left' ? 'text-gray-600' : 'text-white'
commonStore.settings.darkMode ? 'bg-neutral-800 border-neutral-600 border-[1px]' : (messageItem.side === 'left' ? 'bg-gray-200' : 'bg-blue-500'),
commonStore.settings.darkMode ? 'text-white' : (messageItem.side === 'left' ? 'text-gray-600' : 'text-white')
)}
>
{!editing ?
@@ -534,6 +534,7 @@ const ChatPanel: FC = observer(() => {
messages: messages.slice(-commonStore.chatParams.historyN),
stream: true,
model: commonStore.settings.apiChatModelName, // 'gpt-3.5-turbo'
max_tokens: commonStore.chatParams.maxResponseToken,
temperature: commonStore.chatParams.temperature,
top_p: commonStore.chatParams.topP,
presence_penalty: commonStore.chatParams.presencePenalty,


@@ -37,7 +37,7 @@ const clientNavCards: NavCard[] = [
},
{
label: 'Configs',
desc: 'Manage your configs',
desc: 'Manage your configs, adjust the starting model and parameters',
path: '/configs',
icon: <DocumentSettings20Regular />
},


@@ -3,7 +3,7 @@ import { DragDropContext, Draggable, Droppable, DropResult } from 'react-beautif
import commonStore from '../../stores/commonStore';
import { observer } from 'mobx-react-lite';
import { v4 as uuid } from 'uuid';
import { Button, Card, Dropdown, Option, Textarea } from '@fluentui/react-components';
import { Card, Dropdown, Option, Textarea } from '@fluentui/react-components';
import { useTranslation } from 'react-i18next';
import { ToolTipButton } from '../../components/ToolTipButton';
import { Delete20Regular, ReOrderDotsVertical20Regular } from '@fluentui/react-icons';
@@ -84,7 +84,10 @@ const MessagesEditor: FC = observer(() => {
return (
<div className="grid grid-cols-1 gap-2 overflow-hidden">
<Button style={{ width: '100%' }} onClick={createNewItem}>{t('New')}</Button>
<ToolTipButton text={t('New')}
desc={t('Create a new user or AI message content. You can prepare a chat record with AI here, and fill in the responses you want to get from AI in the tone of AI. When you use this preset, the chat record will be processed, and at this point, AI will better understand what you want it to do or what role to play.')}
style={{ width: '100%' }}
onClick={createNewItem} />
<div className="overflow-x-hidden overflow-y-auto p-2">
<DragDropContext onDragEnd={onDragEnd}>
<Droppable droppableId="droppable">


@@ -230,6 +230,7 @@ const ChatPresetEditor: FC<{
editingMessages ?
<div className="flex flex-col gap-1">
<Labeled flex spaceBetween label={t('Insert default system prompt at the beginning')}
desc={t('Inside the model, there is a default prompt to improve the model\'s handling of common issues, but it may degrade the role-playing effect. You can disable this option to achieve a better role-playing effect.')}
content={
<Switch checked={editingPreset.presystem === undefined ? true : editingPreset.presystem}
onChange={(e, data) => {
@@ -239,6 +240,7 @@ const ChatPresetEditor: FC<{
}} />
} />
<Labeled flex breakline label={t('User Name')}
desc={t('The name used internally by the model when processing user message, changing this value helps improve the role-playing effect.')}
content={
<Input placeholder="User" value={editingPreset.userName} onChange={(e, data) => {
setEditingPreset({
@@ -247,6 +249,7 @@ const ChatPresetEditor: FC<{
}} />
} />
<Labeled flex breakline label={t('Assistant Name')}
desc={t('The name used internally by the model when processing AI message, changing this value helps improve the role-playing effect.')}
content={
<Input placeholder="Assistant" value={editingPreset.assistantName} onChange={(e, data) => {
setEditingPreset({


@@ -246,7 +246,7 @@ const Settings: FC = observer(() => {
}
{
commonStore.settings.language === 'zh' && commonStore.platform !== 'linux' &&
<Labeled label={t('Use Tsinghua Pip Mirrors')} flex spaceBetween content={
<Labeled label={t('Use Alibaba Cloud Pip Mirrors')} flex spaceBetween content={
<Switch checked={commonStore.settings.cnMirror}
onChange={(e, data) => {
commonStore.setSettings({


@@ -81,6 +81,7 @@ async function initConfig() {
}).catch(() => {
commonStore.setModelConfigs(commonStore.platform !== 'darwin' ? defaultModelConfigs : defaultModelConfigsMac, true);
});
commonStore.setSettings({}, false); // to activate side effects
}
async function initCache(initUnfinishedModels: boolean) {


@@ -259,13 +259,18 @@ class CommonStore {
setSettings = (value: Partial<SettingsType>, saveConfig: boolean = true) => {
this.settings = { ...this.settings, ...value };
if (this.settings.darkMode)
if (this.settings.darkMode) {
WindowSetDarkTheme();
else
document.documentElement.setAttribute('style', 'color-scheme: dark;');
} else {
WindowSetLightTheme();
document.documentElement.setAttribute('style', 'color-scheme: light;');
}
if (this.settings.language)
if (this.settings.language) {
i18n.changeLanguage(this.settings.language);
document.documentElement.setAttribute('lang', this.settings.language === 'dev' ? 'en' : this.settings.language);
}
if (saveConfig)
saveConfigs();


@@ -46,48 +46,24 @@ body {
overflow-y: auto;
overflow-x: hidden;
ul,
ol {
padding-left: 1.5em;
}
ol {
list-style: none;
counter-reset: item;
li {
counter-increment: item;
&::marker {
content: counter(item) '. ';
}
}
}
pre {
padding: 0;
background: transparent;
code {
font-size: 14px;
}
}
p {
margin: 0 0 10px;
}
code {
padding: 0 0.4em;
margin: 0;
white-space: pre-wrap;
word-break: break-word;
border-radius: 8px;
background-color: var(--color-neutral-muted);
font-size: 11px;
}
.hljs {
padding: 0;
}
details summary {
cursor: pointer;
}
}


@@ -127,7 +127,11 @@ if (!window.go) {
return ''
})
defineApp('ReadJson', async (fileName) => {
return JSON.parse(localStorage.getItem(fileName))
const data = JSON.parse(localStorage.getItem(fileName))
if (data)
return data
else
throw new Error('File not found')
})
defineApp('SaveJson', async (fileName, data) => {
localStorage.setItem(fileName, JSON.stringify(data))


@@ -1,12 +1,121 @@
const markdownElements = [
'div',
'p',
'span',
'video',
'img',
'abbr',
'acronym',
'b',
'blockquote',
'code',
'em',
'i',
'li',
'ol',
'ul',
'strong',
'table',
'tr',
'td',
'th',
'details',
'summary',
'kbd',
'samp',
'sub',
'sup',
'ins',
'del',
'var',
'q',
'dl',
'dt',
'dd',
'ruby',
'rt',
'rp',
'br',
'hr',
'h1',
'h2',
'h3',
'h4',
'h5',
'h6',
'thead',
'tbody',
'tfoot',
'u',
's',
'a',
'pre',
'cite'
]
const markdownPseudoElements = [
'::marker',
'::before',
'::after'
]
const tableElements = [
'table',
'tr',
'td',
'th',
'thead',
'tbody',
'tfoot'
]
const proseStyles = {
color: 'inherit'
}
const tableProseStyles = {
...proseStyles,
borderWidth: 'thin',
borderColor: '#d2d2d5'
}
const elementsStyles = markdownElements.reduce((acc, element) => {
let styles = proseStyles
if (tableElements.includes(element))
styles = tableProseStyles
acc[element] = styles
markdownPseudoElements.forEach(pseudo => {
acc[element + pseudo] = styles
})
return acc
}, {})
/** @type {import('tailwindcss').Config} */
export default {
content: [
"./index.html",
"./src/**/*.{js,ts,jsx,tsx}",
'./index.html',
'./src/**/*.{js,ts,jsx,tsx}'
],
theme: {
extend: {},
extend: {
typography: {
DEFAULT: {
css: {
color: 'inherit',
fontSize: 'inherit',
...elementsStyles
}
}
}
}
},
plugins: [],
plugins: [require('@tailwindcss/typography')]
}


@@ -23,7 +23,7 @@ const embedded = [
'react-beautiful-dnd',
'react-draggable',
'@magenta/music', 'html-midi-player',
'react-markdown', 'rehype-highlight', 'rehype-raw', 'remark-breaks', 'remark-gfm'
'react-markdown', 'rehype-highlight', 'rehype-raw', 'remark-breaks', 'remark-gfm', 'remark-math', 'rehype-katex', 'katex'
];
function renderChunks(deps: Record<string, string>) {


@@ -1,5 +1,5 @@
{
"version": "1.7.1",
"version": "1.7.4",
"introduction": {
"en": "RWKV is an open-source, commercially usable large language model with high flexibility and great potential for development.\n### About This Tool\nThis tool aims to lower the barrier of entry for using large language models, making it accessible to everyone. It provides fully automated dependency and model management. You simply need to click and run, following the instructions, to deploy a local large language model. The tool itself is very compact and only requires a single executable file for one-click deployment.\nAdditionally, this tool offers an interface that is fully compatible with the OpenAI API. This means you can use any ChatGPT client as a client for RWKV, enabling capability expansion beyond just chat functionality.\n### Preset Configuration Rules at the Bottom\nThis tool comes with a series of preset configurations to reduce complexity. The naming rules for each configuration represent the following in order: device - required VRAM/memory - model size - model language.\nFor example, \"GPU-8G-3B-EN\" indicates that this configuration is for a graphics card with 8GB of VRAM, a model size of 3 billion parameters, and it uses an English language model.\nLarger model sizes have higher performance and VRAM requirements. Among configurations with the same model size, those with higher VRAM usage will have faster runtime.\nFor example, if you have 12GB of VRAM but running the \"GPU-12G-7B-EN\" configuration is slow, you can downgrade to \"GPU-8G-3B-EN\" for a significant speed improvement.\n### About RWKV\nRWKV is an RNN with Transformer-level LLM performance, which can also be directly trained like a GPT transformer (parallelizable). And it's 100% attention-free. You only need the hidden state at position t to compute the state at position t+1. You can use the \"GPT\" mode to quickly compute the hidden state for the \"RNN\" mode.<br/>So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, \"infinite\" ctx_len, and free sentence embedding (using the final hidden state).",
"zh": "RWKV是一个开源且允许商用的大语言模型灵活性很高且极具发展潜力。\n### 关于本工具\n本工具旨在降低大语言模型的使用门槛做到人人可用本工具提供了全自动化的依赖和模型管理你只需要直接点击运行跟随引导即可完成本地大语言模型的部署工具本身体积极小只需要一个exe即可完成一键部署。\n此外本工具提供了与OpenAI API完全兼容的接口这意味着你可以把任意ChatGPT客户端用作RWKV的客户端实现能力拓展而不局限于聊天。\n### 底部的预设配置规则\n本工具内置了一系列预设配置以降低使用难度每个配置名的规则依次代表着设备-所需显存/内存-模型规模-模型语言。\n例如GPU-8G-3B-CN表示该配置用于显卡需要8G显存模型规模为30亿参数使用的是中文模型。\n模型规模越大性能要求越高显存要求也越高而同样模型规模的配置中显存占用越高的运行速度越快。\n例如当你有12G显存但运行GPU-12G-7B-CN配置速度比较慢可降级成GPU-8G-3B-CN将会大幅提速。\n### 关于RWKV\nRWKV是具有Transformer级别LLM性能的RNN也可以像GPT Transformer一样直接进行训练可并行化。而且它是100% attention-free的。你只需在位置t处获得隐藏状态即可计算位置t + 1处的状态。你可以使用“GPT”模式快速计算用于“RNN”模式的隐藏状态。\n因此它将RNN和Transformer的优点结合起来 - 高性能、快速推理、节省显存、快速训练、“无限”上下文长度以及免费的语句嵌入(使用最终隐藏状态)。"

scripts/merge_manifest.py Normal file

@@ -0,0 +1,14 @@
import glob
import os, sys

MAIN_IMAGE_NAME = sys.argv[1]
TARGET_TAG = "latest" if len(sys.argv) < 3 else sys.argv[2]
# each /tmp/images/*/*.txt is expected to hold one per-arch image digest
parts = ["docker manifest create {}:{}".format(MAIN_IMAGE_NAME, TARGET_TAG)]
for i in glob.glob("/tmp/images/*/*.txt"):
    with open(i, "r") as file:
        parts.append(" --amend {}@{}".format(MAIN_IMAGE_NAME, file.readline().strip()))
cmd_create = "".join(parts)
cmd_push = "docker manifest push {}:{}".format(MAIN_IMAGE_NAME, TARGET_TAG)
os.system(cmd_create)
os.system(cmd_push)
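For illustration, the commands the script assembles would look like the following, assuming two per-arch digest files were written (the image name and digests below are made up):

digests = ["sha256:1111aaaa", "sha256:2222bbbb"]  # hypothetical digest-file contents
image, tag = "myrepo/rwkv-runner", "latest"       # hypothetical image name
create = "docker manifest create {}:{}".format(image, tag) + "".join(
    " --amend {}@{}".format(image, d) for d in digests
)
print(create)
# -> docker manifest create myrepo/rwkv-runner:latest --amend myrepo/rwkv-runner@sha256:1111aaaa --amend myrepo/rwkv-runner@sha256:2222bbbb
print("docker manifest push {}:{}".format(image, tag))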