PVE Temp mod version 2.0 #152
base: main
Conversation
… all collector processes, eliminating PID files and external checks, while API calls only read the worker-managed state.
add pve_mod_version api
I see a couple of options to make the installation of this possible.
(The js will be around 800 lines.) Other views and arguments for/against?
@Meliox Choosing 1. would effectively eliminate 3... However, I wouldn't necessarily like the idea of needing multiple files for the installation. Could the perl & js files you mention be called somehow from the Proxmox UI code, or would they have to be integrated by the installer as in the current mainline version?
@eremem Improvement-wise... This is brand new and allows expansion to all gpu data, and it no longer blocks the UI when the commands are called: everything runs in separate threads in the background. It also enhances the information available. This is done in a separate perl library; the UI calls the "api" (the perl script), which returns data from memory. It also follows the normal Proxmox implementation of modules. (I have not pushed all code parts to this branch yet, sorry.)

Whether just including a library can be done for the js part I could explore. The code size is much larger, but it is relatively easy to maintain, and the code layout allows it to be easily expanded (e.g. a background service to collect gpu data over time and show utilisation graphs like cpu and memory). If the perl and js are kept separate or merged in a build process, that would also simplify the bash script itself.

And that is my question: how to handle the install process to ensure it is easy to use. In the source code I think the files should be separated, but from a user perspective multiple files are not optimal, though quite a few projects do use git clone + an install script. Maybe I am overthinking it... Finally, I plan to make this version 2 and leave the existing as-is.
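As a rough illustration of the worker/API split described above: a background collector periodically publishes sensor data to a state store that the API side merely reads, so UI calls never trigger external commands. The sketch below is a minimal, assumption-laden example (the file path, the 10-second interval, and the file-based store instead of purely in-memory state are all hypothetical and not taken from this branch):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use JSON::PP qw(encode_json decode_json);
use File::Temp qw(tempfile);
use File::Copy qw(move);

# Hypothetical state file published by the worker; the API side only reads it.
my $STATE_FILE = '/run/pvetemp/state.json';

# Background collector: runs `sensors -j` periodically and atomically
# replaces the state file, so readers never see partial data.
sub collector_loop {
    while (1) {
        my $raw = `sensors -j 2>/dev/null`;
        if ($? == 0 && length $raw) {
            my ($fh, $tmp) = tempfile(DIR => '/run/pvetemp');
            print {$fh} encode_json({ updated => time(), sensors => decode_json($raw) });
            close $fh;
            move($tmp, $STATE_FILE);
        }
        sleep 10;
    }
}

# API handler: no external commands, no PID checks - just return whatever
# state the worker last published.
sub read_state {
    open my $fh, '<', $STATE_FILE or return { error => 'no data collected yet' };
    local $/;
    return decode_json(<$fh>);
}
```

The atomic rename is what lets the API read the state without any coordination with, or knowledge of, the collector process.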
@Meliox I don't think you are overthinking. The installation should be as easy as possible. Hiding the multiple files and allowing them to be installed without compiling everything in the current installer script sounds like a case for a debian package. It could be built e.g. using GitHub Actions (at least according to a quick google search this should be possible), contain a proper directory structure, and be made available as versioned releases for download, with "latest" pointing to the... latest package version.

Instead of downloading and running the bash installer script, users would download the latest package and install it with dpkg (in the future a dedicated repository could be a nice touch; I found no hosting function for debian repositories on GitHub though :/). As a bonus, root privileges would be ensured during the installation.

Now, the perl lib and any other "loose" files would be installed from the package onto the user's system (saved in their respective directories), together with the bash installer script; the latter would be triggered at the end of the installation too. This approach would allow for an easy and automated installation with minimal effort on the user's side. What do you think? Does this make sense?
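To make the packaging idea concrete: the .deb would carry the perl/js files in their target directories, and a maintainer script would run the existing patch step at the end of the installation. The postinst below is a minimal sketch with assumed paths (written in perl only to keep the examples here in one language; Debian maintainer scripts are more commonly shell):

```perl
#!/usr/bin/perl
# Hypothetical DEBIAN/postinst sketch: after dpkg has placed the perl/js
# files under /usr/share/pve-temp-mod, trigger the existing patch step so
# the Proxmox UI files are modified in place. Paths are illustrative.
use strict;
use warnings;

my $installer = '/usr/share/pve-temp-mod/install.sh';
if (-x $installer) {
    system($installer) == 0
        or die "pve-temp-mod: patch step failed ($?)\n";
}
exit 0;
```

With such a package, installing on a node would reduce to something like `dpkg -i pve-temp-mod_2.0.deb` (file name assumed), and a GitHub Actions workflow could attach the built .deb to each tagged release.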
Example:

Todo: