I’m a person who tends to program stuff in Godot and also likes to look at clouds. Sometimes they look really spicy outside.
yup, renamed it to […].rules.backup. Thanks for responding though!
why'd you delete your comment?
Just tried it, and sadly that didn’t change anything after a reboot.
Fair, but they supported it a bit before that too, I think. Like, they allowed it to show up in the login screen.
Unfortunately that did not fix it for me. I have now renamed the file to […].backup but it still only displays X11 options.
It is true that the editor has the built-in class reference. I have already tried to retrieve the text from when that pops up, and I managed to do that partly, however it doesn’t segment anything. The reason why I want a cool format like JSON or maybe YAML is that I can parse it and separate it into variables, which can then be nicely displayed in smaller UI elements.
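To sketch what I mean (a minimal example; the output schema is my own assumption, and the built-in class reference ships as `doc/classes/*.xml` files in the Godot source):

```gdscript
# Minimal sketch: turn one of Godot's doc/classes/*.xml files into a Dictionary
# that can be serialized to JSON. The output schema here is my own choice.
func class_doc_to_json(xml_path: String) -> String:
	var parser := XMLParser.new()
	if parser.open(xml_path) != OK:
		return ""
	var doc := {"name": "", "brief": "", "methods": []}
	var in_brief := false
	while parser.read() == OK:
		match parser.get_node_type():
			XMLParser.NODE_ELEMENT:
				var tag := parser.get_node_name()
				if tag == "class":
					doc["name"] = parser.get_named_attribute_value_safe("name")
				elif tag == "method":
					doc["methods"].append(parser.get_named_attribute_value_safe("name"))
				elif tag == "brief_description":
					in_brief = true
			XMLParser.NODE_TEXT:
				if in_brief:
					doc["brief"] = parser.get_node_data().strip_edges()
					in_brief = false
	return JSON.stringify(doc, "\t")
```

Once it is a Dictionary like that, splitting it into per-UI-element variables is the easy part.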
Okay, don’t tell this to anyone, because many people don’t want to hear anything about this topic, which is reasonable: it’s obnoxious and overhyped.
whispering: The reason I want to segment the docs is because I want to embed them and use them as structured input for a locally running LLM for better context.
shocked crowd sound
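More seriously, the retrieval half of that plan is easy to sketch. Assuming every doc chunk already has an embedding vector stored next to it (how you get those vectors depends on the embedding model), ranking chunks against an embedded query is just cosine similarity:

```gdscript
# Cosine similarity between two embedding vectors; used to rank doc chunks
# against an embedded user query before putting the best ones into the prompt.
func cosine_similarity(a: PackedFloat32Array, b: PackedFloat32Array) -> float:
	var dot := 0.0
	var norm_a := 0.0
	var norm_b := 0.0
	for i in a.size():
		dot += a[i] * b[i]
		norm_a += a[i] * a[i]
		norm_b += b[i] * b[i]
	return dot / (sqrt(norm_a) * sqrt(norm_b))
```

Sort all chunks by this score, take the top few, and prepend them to the prompt as context.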
I already made two posts about this on here: This one and this one.
Fair point, me too.
The feature I showcased is part of my Godot plugin “GoPilot”. I have not released it yet, however I will soon on the AssetLibrary as a free and open source package for everyone to use…
I am currently working on a CSV table translation feature and a GraphNode-based editor which provides an easy no-code solution for LLM inference using all sorts of different settings, prompting techniques and formatting options.
I don’t have a link right now, but I will make a post on here once I actually release it, alongside a YouTube video with all its features.
You are right in that it can be useful to feed in all of the contents of other related files.
LLMs take a really long time before writing anything when given a large context input. The fact that GitHub's Copilot can generate code so quickly even though it has to keep the entire code file in context is a miracle to me.
Including all related or opened GDScript files would be way too much for most models, and it would likely take about 20 seconds before it actually starts generating code (also called first token lag). So I will likely only feed the current file into the context window, as that might already take some time. Remember, we are running local LLMs here, so not everyone has a blazingly fast GPU or CPU (I use a GTX 1060 6GB, for instance).
I just tried it, and it took a good 10 seconds to complete some 111 lines of code without any other context using this pretty small model, and then about 6 seconds to write about 5 lines of comment documentation (on my CPU). It takes about 1 second with a very short script.
You can try this yourself: use something like HuggingChat to test a big context window model like Command R+, fill its context window with a really, really long string (copy-paste it a bunch of times) and see how it takes longer to respond. For me, it’s the difference between one second and 13 seconds!
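As a rough back-of-the-envelope model (the speeds here are made-up but plausible for CPU inference): time to first token ≈ prompt tokens ÷ prompt-processing speed. At, say, 100 tokens per second of prompt processing, a 200-token script means roughly 2 seconds of waiting, while 2,000 tokens of extra context pushes that to roughly 20 seconds before the model writes anything at all.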
I am thinking about embedding either the current working file, or maybe some other opened files though, to get the most important functions out of the script and keep the context length short. This way we can shorten this first token delay a bit.
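Something like this is what I have in mind for the “most important functions” part, i.e. sending only signatures instead of whole files (the regex is a rough assumption, not a real GDScript parser):

```gdscript
# Rough sketch: pull just the function signatures out of a GDScript source
# string so the prompt stays short. Nested parentheses in default arguments
# would break this simple regex.
func extract_signatures(source: String) -> String:
	var regex := RegEx.new()
	regex.compile("(?m)^(?:static\\s+)?func\\s+\\w+\\([^)]*\\)[^:\\n]*:")
	var signatures := PackedStringArray()
	for m in regex.search_all(source):
		signatures.append(m.get_string())
	return "\n".join(signatures)
```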
This is a completely different story with hosted LLMs, as they tend to have blazingly quick first token delays, which makes the wait trivial.
Currently the completion is implemented via a keyboard shortcut.
Would you prefer it if I made it complete the code automatically? I personally feel that intentionally asking for a completion is more natural than waiting for it to happen.
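Conceptually, the trigger is nothing fancy; stripped down, it is just something like this (Ctrl+Space is an arbitrary choice for the example, and request_completion() is a hypothetical stand-in for the actual request logic):

```gdscript
# Sketch of a manual completion trigger on Ctrl+Space.
func _input(event: InputEvent) -> void:
	if event is InputEventKey and event.pressed and not event.echo:
		if event.keycode == KEY_SPACE and event.ctrl_pressed:
			request_completion()  # hypothetical: asks the LLM for a completion
			get_viewport().set_input_as_handled()
```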
Are there some other features you would like to see? I am currently working on a function-refactoring UI.
Ollama is really great. The simplicity of it, the easy use via REST API, the fun CLI…
What a fun program.
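The REST API part really is that simple. A minimal non-streaming call from GDScript, assuming a default Ollama install on port 11434 and an already pulled model (the model name is just an example):

```gdscript
# One-shot, non-streaming request to Ollama's /api/generate endpoint.
func ask_ollama(prompt: String) -> void:
	var http := HTTPRequest.new()
	add_child(http)  # HTTPRequest only works inside the scene tree
	http.request_completed.connect(_on_ollama_response)
	var body := JSON.stringify({"model": "llama3", "prompt": prompt, "stream": false})
	http.request("http://localhost:11434/api/generate",
			PackedStringArray(["Content-Type: application/json"]),
			HTTPClient.METHOD_POST, body)

func _on_ollama_response(_result: int, code: int, _headers: PackedStringArray, body: PackedByteArray) -> void:
	if code == 200:
		var data: Dictionary = JSON.parse_string(body.get_string_from_utf8())
		print(data["response"])  # the generated text
```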
I will likely post on here when I release the plugin to GitLab and the AssetLib.
But I also don’t want to spam this community, so there won’t be many, if any, updates until the actual release.
If you want something similar right now, there is Fuku for chat interaction and selfhosted copilot for code completion on the AssetLib! I can’t get the code completion one to work; Fuku works pretty well, but it can’t read the user’s code at all.
I will upload the files to my GitLab soon though.
EDIT: Updated the GitLab link to actually point to my GitLab page
Just fixed the problem where it inserts too many lines after completing code.
This issue can be seen in the first demo video with the vector example. There are two newlines added for no reason. That’s fixed now:
Ah I see. Well I already knew that; in fact, it’s a necessity now. One can’t export a Variant. But thanks for clarifying! I think I’ll have to write my own property editors though, as there is an issue open about this here
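For anyone finding this later, that necessity looks like this:

```gdscript
@export var speed: float = 1.0  # fine: the exported type is known
@export var anything            # error: an untyped (Variant) export is not allowed
```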
I use @export extensively in my projects, but I have never heard it being referred to as a parameter. What do you mean by this exactly?
I have not yet looked at any similar proposals yet. I probably should…
Oh wait, I think I actually misunderstood your original question. I thought you wanted to see the combination option in that Mesh submenu, but you seem to have actually referred to the ability to add collision to a combined MultiMeshInstance. Do I understand that correctly now?
I actually already implemented the ability to carry over collisions from the original objects to the combined objects as an optional setting. Explanation video of the tool here. But having the ability to add collision to a MultiMeshInstance without any prior configuration is also a very interesting idea. This might be a bit outside my current knowledge of the engine, but I can try!
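If I do attempt it, my first naive idea would be something like the sketch below: approximate each instance with a box built from the mesh’s AABB (so not the object’s real collider, just a stand-in):

```gdscript
# Naive sketch: give a MultiMeshInstance3D collision by adding one box shape
# per instance, sized from the mesh's bounding box.
func add_collision_to_multimesh(mmi: MultiMeshInstance3D) -> void:
	var mm := mmi.multimesh
	var aabb := mm.mesh.get_aabb()
	var body := StaticBody3D.new()
	mmi.add_child(body)
	for i in mm.instance_count:
		var shape := CollisionShape3D.new()
		var box := BoxShape3D.new()
		box.size = aabb.size
		shape.shape = box
		shape.transform = mm.get_instance_transform(i).translated_local(aabb.get_center())
		body.add_child(shape)
```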
Wait really? We have an official open kernel module for the nvidia cards now? I’m assuming that the actual driver is still closed, right?
Fair. I was trying to install the latest versions of some dev packages because Debian seemed to lag behind (some gtk-4 package was out of date). I need it for building this FOSS VR software called “envision”.
Is there a reason Windows users don’t get this error?