Commit Graph

185 Commits

Author SHA1 Message Date
KKKKKKKevin 179c52ec18
Merge pull request #349 from mindverse/develop
Release 1.0.1 🎉
2025-05-13 14:04:04 +08:00
kevin-mindverse 56e676763b Merge branch 'master' into develop 2025-05-13 13:56:12 +08:00
KKKKKKKevin 601f371f46
change log (#346) 2025-05-12 11:59:38 +08:00
Xiang Ying a65f4a58fe
hotfix for storage problem which leads to an error during graphrag. (#347) 2025-05-12 11:59:35 +08:00
kevinaimonster 8ada304769
feat: add cloud deployment options (#334)
Co-authored-by: kevin-mindverse <kevin@mindverse.ai>
2025-05-12 09:53:59 +08:00
yingapple da2b704ed4 fix(model tokenizer): just use the model tokenizer without anything else. 2025-05-09 16:09:20 +08:00
yingapple bec5b8865c fix(deployment): resolve model deployment issue on CUDA + Windows environment 2025-05-09 13:43:29 +08:00
KKKKKKKevin 1e7d60777f
Feat/ Enhance Issue Templates (#333)
* feat: add new issue template

* new version template

* new category

* fix link

* optimize description
2025-05-08 10:12:53 +08:00
ryangyuan c3855f37ad
Fix/0429/fix all log (#318)
* fix:fix return all log problem

* fix: delete unused code

* fix: add sse offset

* fix: change offset to string

* fix: fix more localStore

* fix: change log only

* fix: cancel offset

* fix: remove offset

* fix: delete unused code

* fix: add heartbeat

* fix: delete useless code

---------

Co-authored-by: Ye Xiangle <yexiangle@mail.mindverse.ai>
2025-05-07 15:53:25 +08:00
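The PR above iterates on SSE streaming for logs (adding, then reworking, an offset, plus a heartbeat to keep connections alive). A minimal sketch of the heartbeat idea, with hypothetical names (`sse_stream` and its parameters are illustrative, not the repo's actual API):

```python
import time

def sse_stream(events, heartbeat_every=15):
    """Yield SSE frames for each event, interleaving comment-line
    heartbeats (': heartbeat') when the stream has been quiet too long.
    SSE comment lines start with ':' and are ignored by clients, so they
    keep proxies and browsers from timing out an idle connection."""
    last = time.monotonic()
    for ev in events:
        now = time.monotonic()
        if now - last >= heartbeat_every:
            yield ": heartbeat\n\n"  # keep-alive frame, no client-visible data
            last = now
        yield f"data: {ev}\n\n"
```

In a real handler the generator would block on a queue of log lines rather than iterate a finished list, emitting the heartbeat from a timeout branch.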
kevin-mindverse 1d79691e71 Merge branch 'master' into develop 2025-05-07 13:42:56 +08:00
yexiangle ff9a9b9970
delete useless code in route_l2.py (#332) 2025-05-07 13:40:10 +08:00
yexiangle 82422a5f76
Merge pull request #313 from mindverse/feat/0427/meta_exposure
Feat/0427/meta exposure
2025-05-06 16:04:38 +08:00
ryangyuan b7f0cc7feb
Feat/0422/train l1 exposure (#319)
* feat:add get steps content(EXTRACT_DIMENSIONAL_TOPICS,MAP_ENTITY_NETWORK,DECODE_PREFERENCE_PATTERNS,AUGMENT_CONTENT_RETENTION)

* feat:add file_type

* feat: expose train L1

* feat: jsonify return data

* fix: jsonify

* feat:add log

* fix: fix step change error

* feat:delete useless log

* fix:fix not import problem

* fix:fix old trainprocess init problem

* feat: Train Step Show Table

* feat:add L1_exposure_manager optimize code structure

* fix: fix bio return format & map_your_entity_network

* feat: show tip when resource empty

* add have_output & path

* fix: fix log problem

* feat: adjustment output ui

* fix: L1 exposure add loading

---------

Co-authored-by: Ye Xiangle <yexiangle@mail.mindverse.ai>
2025-05-06 16:02:22 +08:00
KKKKKKKevin ad824973ba
Merge pull request #314 from ScarletttMoon/master
Updated README (#what's next in May & quick start)
2025-05-06 10:58:14 +08:00
Scarlett 533fe8eeb1
Update README.md
quick start
2025-04-30 17:53:37 +08:00
Scarlett f64a9974e0
Merge branch 'mindverse:master' into master 2025-04-30 17:37:54 +08:00
KKKKKKKevin 245bb1e27b
fix:hotfix use_previous_params problem (#320)
Co-authored-by: Ye Xiangle <yexiangle@mail.mindverse.ai>
2025-04-30 16:13:18 +08:00
KKKKKKKevin 78bb0e3c8a
Merge pull request #316 from mindverse/fix/0430/hotfix_use_previous_params
fix:hotfix use_previous_params problem
2025-04-30 16:08:06 +08:00
Ye Xiangle 1bf8ba5ce7 fix:hotfix use_previous_params problem 2025-04-30 14:49:07 +08:00
Scarlett 933289e353
Update README.md
updated #what's next and #contributing, deleted #join our community
2025-04-30 14:13:18 +08:00
Scarlett dc23555eeb
Merge branch 'mindverse:master' into master 2025-04-30 14:09:38 +08:00
KKKKKKKevin ddfcd15b2f
Merge pull request #306 from mindverse/release_0428
# v1.0.0 - First Release 🎉
2025-04-29 11:35:32 +08:00
yexiangle b3dcdd8ed5
fix:fix monitor model download log problem (#305) 2025-04-28 17:12:41 +08:00
ryangyuan 5457a7a82a
fix: fix page overflow (#299)
* fix: add relative
2025-04-28 11:12:00 +08:00
yingapple 53dfdafc9b feat: up max seq length 2025-04-27 11:59:16 +08:00
yingapple c88d2362e5 fix for gpu 2025-04-27 11:54:45 +08:00
KKKKKKKevin 34d43290e0
Feat/fix no llama.cpp (#297)
* feat: what? no llama.cpp

* add cache
2025-04-26 15:26:59 +08:00
KKKKKKKevin 1d8b48e6bc
preserve training param (#292) 2025-04-25 18:56:26 +08:00
ryangyuan ef4c491d5f
Feat/0425/adjustment of training rule (#290)
* fix: adjustment status order

* fix: adjustment train status

* fix: split the status of service and train

* feat: adjustment train rule
2025-04-25 18:08:13 +08:00
Scarlett c4a9b90865
Updated README with FAQ (#285)
* Update README.md

Changed the updated tutorial link

* Update README.md with FAQ

New section for FAQ doc
2025-04-25 17:48:22 +08:00
ryangyuan 19adcac435
Feat/0423/train status (#287)
* fix: adjustment status order

* fix: adjustment train status

* fix: split the status of service and train
2025-04-25 17:46:37 +08:00
KKKKKKKevin 3ae664fe09
add execute right (#289) 2025-04-25 17:20:07 +08:00
KKKKKKKevin de8370ba0d
fix move trainprocess to solve loop (#288) 2025-04-25 16:26:36 +08:00
KKKKKKKevin 37553fb23b
Feature/fix training model switch bug2 (#281)
* feature: use uv to setup python environment

* TrainProcessService add singleton method: get_instance

* feat: fix code

* Added CUDA support (#228)

* Add CUDA support

- CUDA detection
- Memory handling
- Ollama model release after training

* Fix logging issue

added cuda support flag so log accurately reflected cuda toggle

* Update llama.cpp rebuild

Changed llama.cpp to only check if cuda support is enabled and if so rebuild during the first build rather than each run

* Improved vram management

Enabled memory pinning and optimizer state offload

* Fix CUDA check

rewrote llama.cpp rebuild logic, added manual y/n toggle if user wants to enable cuda support

* Added fast restart and fixed CUDA check command

Added make docker-restart-backend-fast to restart the backend and reflect code changes without causing a full llama.cpp rebuild

Fixed make docker-check-cuda command to correctly reflect cuda support

* Added docker-compose.gpu.yml

Added docker-compose.gpu.yml to fix error on machines without nvidia gpu and made sure "\n" is added before .env modification

* Fixed cuda toggle

Last push accidentally broke cuda toggle

* Code review fixes

Fixed errors resulting from removed code:
- Added return save_path to end of save_hf_model function
- Rolled back download_file_with_progress function

* Update Makefile

Use cuda by default when using docker-restart-backend-fast

* Minor cleanup

Removed unnecessary makefile command and fixed gpu logging

* Delete .gpu_selected

* Simplified cuda training code

- Removed dtype setting to let torch automatically handle it
- Removed vram logging
- Removed Unnecessary/old comments

* Fixed gpu/cpu selection

Made "make docker-use-gpu/cpu" command work with .gpu_selected flag and changed "make docker-restart-backend-fast" command to respect flag instead of always using gpu

* Fix Ollama embedding error

Added custom exception class for Ollama embeddings, which seemed to be returning keyword arguments while the Python exception class only accepts positional ones

* Fixed model selection & memory error

Fixed training defaulting to 0.5B model regardless of selection and fixed "free(): double free detected in tcache 2" error caused by cuda flag being passed incorrectly

* fix: train service singleton

---------

Co-authored-by: Zachary Pitroda <30330004+zpitroda@users.noreply.github.com>
2025-04-25 15:27:52 +08:00
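Among other things, the PR above adds a docker-compose.gpu.yml so that machines without an NVIDIA GPU don't error out on the GPU device reservation. The repo's actual file isn't shown here; as a sketch, such an override (applied only when the GPU is selected, with the service name `backend` assumed) would typically look like:

```yaml
# docker-compose.gpu.yml — hypothetical GPU override, layered on top of the
# base compose file only when CUDA is enabled (e.g. via `make docker-use-gpu`).
services:
  backend:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

Keeping the device reservation in a separate override file means the base docker-compose.yml stays valid on CPU-only machines, which matches the fix described in the commit.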
Scarlett 5ddf2eaeb8
Update README.md with FAQ
New section for FAQ doc
2025-04-25 15:01:10 +08:00
Scarlett ff2ddadf57
Merge branch 'mindverse:master' into master 2025-04-25 14:54:58 +08:00
KKKKKKKevin 29a17c8615
Optimize TrainProcessService Singleton Pattern Implementation (#279)
* feature: use uv to setup python environment

* TrainProcessService add singleton method: get_instance
2025-04-25 14:17:15 +08:00
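The commit above gives TrainProcessService a get_instance singleton accessor. A minimal sketch of that pattern, assuming a thread-safe variant (the locking and class body here are illustrative, not the repo's implementation):

```python
import threading

class TrainProcessService:
    """Sketch of a singleton exposed via a get_instance() classmethod."""
    _instance = None
    _lock = threading.Lock()

    @classmethod
    def get_instance(cls):
        # Double-checked locking: fast path avoids the lock once the
        # instance exists; the inner check guards against races on first use.
        if cls._instance is None:
            with cls._lock:
                if cls._instance is None:
                    cls._instance = cls()
        return cls._instance
```

Routing all access through get_instance (rather than constructing the service ad hoc) is what the later "fix move trainprocess to solve loop" and "train service singleton" commits rely on.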
Zachary Pitroda 053090937d
Added CUDA support (#228)
* Add CUDA support

- CUDA detection
- Memory handling
- Ollama model release after training

* Fix logging issue

added cuda support flag so log accurately reflected cuda toggle

* Update llama.cpp rebuild

Changed llama.cpp to only check if cuda support is enabled and if so rebuild during the first build rather than each run

* Improved vram management

Enabled memory pinning and optimizer state offload

* Fix CUDA check

rewrote llama.cpp rebuild logic, added manual y/n toggle if user wants to enable cuda support

* Added fast restart and fixed CUDA check command

Added make docker-restart-backend-fast to restart the backend and reflect code changes without causing a full llama.cpp rebuild

Fixed make docker-check-cuda command to correctly reflect cuda support

* Added docker-compose.gpu.yml

Added docker-compose.gpu.yml to fix error on machines without nvidia gpu and made sure "\n" is added before .env modification

* Fixed cuda toggle

Last push accidentally broke cuda toggle

* Code review fixes

Fixed errors resulting from removed code:
- Added return save_path to end of save_hf_model function
- Rolled back download_file_with_progress function

* Update Makefile

Use cuda by default when using docker-restart-backend-fast

* Minor cleanup

Removed unnecessary makefile command and fixed gpu logging

* Delete .gpu_selected

* Simplified cuda training code

- Removed dtype setting to let torch automatically handle it
- Removed vram logging
- Removed Unnecessary/old comments

* Fixed gpu/cpu selection

Made "make docker-use-gpu/cpu" command work with .gpu_selected flag and changed "make docker-restart-backend-fast" command to respect flag instead of always using gpu

* Fix Ollama embedding error

Added custom exception class for Ollama embeddings, which seemed to be returning keyword arguments while the Python exception class only accepts positional ones

* Fixed model selection & memory error

Fixed training defaulting to 0.5B model regardless of selection and fixed "free(): double free detected in tcache 2" error caused by cuda flag being passed incorrectly
2025-04-25 10:20:36 +08:00
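The "Fix Ollama embedding error" step above describes adding a custom exception class because the error path passed keyword arguments, which the built-in Exception constructor rejects (it only accepts positional args). A hedged sketch of that fix, with a hypothetical class name:

```python
class OllamaEmbeddingError(Exception):
    """Exception that tolerates keyword arguments.

    Plain `Exception(message, status=...)` raises a TypeError because the
    base class takes only positional arguments; this wrapper stashes the
    keywords on the instance instead of dropping them."""

    def __init__(self, message="", **kwargs):
        super().__init__(message)
        self.details = kwargs  # keep structured context for logging
```

Raising `OllamaEmbeddingError("embedding failed", status_code=500)` then works where `Exception(**kwargs)` would itself crash inside the error handler.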
KKKKKKKevin 71d54a5b0b
feature: use uv to setup python environment (#277) 2025-04-24 16:36:47 +08:00
ryangyuan f04916754c
feat: replace tutorial link (#268)
* feat: replace tutorial link

* replace video link

---------

Co-authored-by: kevin-mindverse <kevin@mindverse.ai>
2025-04-24 14:25:00 +08:00
doubleBlack2 516843d963
mcp search online secondme model (#242) 2025-04-24 14:24:45 +08:00
ryangyuan 9fe511f0f2
Feature/0416/add thinking mode (#264)
* fix: modify thinking_model loading configuration

* feat: realize thinkModel ui

* feat:store

* feat: add combined_llm_config_dto

* add thinking_model_config & database migration

* directly add thinking model to user_llm_config

* delete thinking model repo dto service

* delete thinkingmodel table migration

* add is_cot config

* feat: allow defining is_cot

* feat: simplify logs info

* feat: add training model

* feat: fix is_cot problem

* fix: fix chat message

* fix: fix progress error

* fix: disable no settings thinking

* feat: add thinking warning

* fix: fix start service error

* feat:fix init trainparams problem

* feat: change playGround prompt

* feat: Add Dimension Mismatch Handling for ChromaDB (#157) (#207)

* Fix Issue #157

Add chroma_utils.py to manage chromaDB and added docs for explanation

* Add logging and debugging process

- Enhanced the `reinitialize_chroma_collections` function in `chroma_utils.py` to properly check if collections exist before attempting to delete them, preventing potential errors when collections don't exist.
- Improved error handling in the `_handle_dimension_mismatch` method in `embedding_service.py` by adding more robust exception handling and verification steps after reinitialization.
- Enhanced the collection initialization process in `embedding_service.py` to provide more detailed error messages and better handle cases where collections still have incorrect dimensions after reinitialization.
- Added additional verification steps to ensure that collection dimensions match the expected dimension after creation or retrieval.
- Improved logging throughout the code to provide more context in error messages, making debugging easier.

* Change topics_generator timeout to 30 (#263)

* quick fix

* fix: shade -> shade_merge_info (#265)

* fix: shade -> shade_merge_info

* add convert array

* quick fix import error

* add log

* add heartbeat

* new strategy

* sse version

* add heartbeat

* zh to en

* optimize code

* quick fix convert function

* Feat/new branch management (#267)

* feat: new branch management

* feat: fix multi-upload

* optimize contribute management

---------

Co-authored-by: Crabboss Mr <1123357821@qq.com>
Co-authored-by: Ye Xiangle <yexiangle@mail.mindverse.ai>
Co-authored-by: Xinghan Pan <sampan090611@gmail.com>
Co-authored-by: doubleBlack2 <108928143+doubleBlack2@users.noreply.github.com>
Co-authored-by: kevin-mindverse <kevin@mindverse.ai>
Co-authored-by: KKKKKKKevin <115385420+kevin-mindverse@users.noreply.github.com>
2025-04-24 14:19:23 +08:00
ryangyuan fd64b4e5da
fix: fetch uploadInfo in homepage (#271) 2025-04-24 11:02:52 +08:00
KKKKKKKevin ce9cfcb4a8
Feature/fix update instace (#272)
* fix password update logic, if there's more than one load

* update fix
2025-04-23 20:47:41 +08:00
yexiangle 81c4861a01
fix:fix l1 save problem (#269)
* fix:fix l1 save problem

* fix:simplify the code

* fix: delete unused import

* fix:delete useless data
2025-04-23 20:46:27 +08:00
KKKKKKKevin 8ace28a161
Feat/new branch management (#267)
* feat: new branch management

* feat: fix multi-upload

* optimize contribute management
2025-04-23 16:19:50 +08:00
kevin-mindverse cb6f02efc6 quick fix convert function 2025-04-23 10:17:31 +08:00
kevin-mindverse a7577e5aa6 quick fix import error 2025-04-22 15:12:27 +08:00
KKKKKKKevin aa38f672f0
fix: shade -> shade_merge_info (#265)
* fix: shade -> shade_merge_info

* add convert array
2025-04-22 15:05:08 +08:00
kevin-mindverse 39d0cce7a0 quick fix 2025-04-22 14:58:49 +08:00