C:\LargeLanguageModels\koboldcpp_rocm_files>python koboldcpp.py
***
Welcome to KoboldCpp - Version 1.78.yr0-ROCm
For command line arguments, please refer to --help
***
Auto Selected HIP Backend...
Exiting by user request.

C:\LargeLanguageModels\koboldcpp_rocm_files>python koboldcpp.py
***
Welcome to KoboldCpp - Version 1.78.yr0-ROCm
For command line arguments, please refer to --help
***
Auto Selected HIP Backend...
Auto Recommended GPU Layers: 45
Attempting to use hipBLAS library for faster prompt ingestion. A compatible AMD GPU will be required.
Initializing dynamic library: koboldcpp_hipblas.dll
==========
Namespace(model='', model_param='C:/LargeLanguageModels/EVA-Qwen2.5-32B-v0.2-Q4_K_S.gguf', port=5001, port_param=5001, host='', launch=True, config=None, threads=7, usecublas=['normal', '0'], usevulkan=None, useclblast=None, usecpu=False, contextsize=16384, gpulayers=45, tensor_split=None, checkforupdates=False, ropeconfig=[0.0, 10000.0], blasbatchsize=512, blasthreads=7, lora=None, noshift=True, nofastforward=False, nommap=False, usemlock=False, noavx2=False, debugmode=0, onready='', benchmark=None, prompt='', promptlimit=100, multiuser=1, remotetunnel=False, highpriority=False, foreground=False, preloadstory=None, quiet=False, ssl=None, nocertify=False, mmproj=None, password=None, ignoremissing=False, chatcompletionsadapter=None, flashattention=True, quantkv=0, forceversion=0, smartcontext=False, unpack='', nomodel=False, showgui=False, skiplauncher=False, hordemodelname='', hordeworkername='', hordekey='', hordemaxctx=0, hordegenlen=0, sdmodel='', sdthreads=7, sdclamped=0, sdt5xxl='', sdclipl='', sdclipg='', sdvae='', sdvaeauto=False, sdquant=False, sdlora='', sdloramult=1.0, whispermodel='', hordeconfig=None, sdconfig=None, noblas=False)
==========
Loading model: C:\LargeLanguageModels\EVA-Qwen2.5-32B-v0.2-Q4_K_S.gguf
The reported GGUF Arch is: qwen2
Arch Category: 5

---
Identified as GGUF model: (ver 6)
Attempting to Load...
---
Using automatic RoPE scaling for GGUF. If the model has custom RoPE settings, they'll be used directly instead!
It means that the RoPE values written above will be replaced by the RoPE values indicated after loading.
System Info: AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | AMX_INT8 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 |
Traceback (most recent call last):
  File "C:\LargeLanguageModels\koboldcpp_rocm_files\koboldcpp.py", line 5168, in <module>
    main(parser.parse_args(),start_server=True)
  File "C:\LargeLanguageModels\koboldcpp_rocm_files\koboldcpp.py", line 4789, in main
    loadok = load_model(modelname)
  File "C:\LargeLanguageModels\koboldcpp_rocm_files\koboldcpp.py", line 925, in load_model
    ret = handle.load_model(inputs)
OSError: exception: access violation reading 0x0000000000000000