
Enhance file listing functions with sorting and prefix filtering #16526

Conversation

anapnoe commented Oct 3, 2024

Description

This pull request introduces a new function list_files_with_prefix that enhances the ability to list files with specific prefixes and extensions. The function is designed to provide more flexibility in file selection, enabling users to easily fetch stylesheets or similar files by specifying the desired prefix and file extension.

Updated Usage in css_html function:

  • The css_html function has been updated to use list_files_with_prefix to fetch CSS files. Specifically, it now looks for CSS files that start with "style" and have a .css extension, making the code more readable and scalable.
for cssfile in scripts.list_files_with_prefix("style", ".css"):
    head += stylesheet(cssfile)

Additional Functionality:

  • The code structure allows the addition of future prefixes and filters if needed, by calling the function with various parameters.

Benefits

  • Enhanced Flexibility: Users can specify both a prefix and an extension, which allows for more tailored results when searching for files.
  • Improved Readability: The clearer function name list_files_with_prefix makes it apparent that files are filtered by both prefix and extension.
  • Consistent File Listing: The inclusion of the custom sorting mechanism helps maintain a consistent order of returned files.

Conclusion

This change aims to streamline the process of file retrieval based on specific criteria, enhancing the modularity and maintainability of the code. Future improvements can build upon this foundational change while maintaining backward compatibility where necessary.


anapnoe requested a review from AUTOMATIC1111 as a code owner, October 3, 2024 11:30
w-e-w (Collaborator) commented Nov 1, 2024

If I'm understanding correctly, the point of this is to allow developers to split CSS into multiple files?
Making large CSS, such as your styles.css from sd-webui-ux, easier to manage?

If that is the case, then I don't think this PR is necessary:
you could split CSS into multiple files using a relative @import.

For demonstration,
I took the styles.css from an extension of mine and saved a section of it into other.css,
then added @import "other.css"; at the top of styles.css.

[screenshot]

Split into two files in the extension root: styles.css and other.css.

[screenshot]

Verified that it is indeed loaded in Chrome.

anapnoe (Author) commented Nov 1, 2024

Explanation on Enhancing File Listing Functions

As the title suggests, Enhance file listing functions with sorting and prefix filtering is not specific to CSS. It can be used for any supported media where we iterate through a folder to fetch files. File iteration is used in many places throughout the web UI, so a utility function like this would benefit the codebase. I use it for CSS in my example case.

Why Not to Use CSS @import

  • Loading Speed: CSS files are loaded in serial when using @import, whereas using <link> tags loads them in parallel, which is faster.

  • Debugging: Line references in CSS debug tools are not correct with @import, making it harder to find and debug issues when files are split using @import.

catboxanon (Collaborator) commented

Consider not having ChatGPT write your response (or your PR) next time.

catboxanon closed this Nov 1, 2024
anapnoe (Author) commented Nov 2, 2024

i use it to format my files to .md and correct my syntactical errors do you have some kind of problem with this? correct this .md

Here’s a corrected version of your sentence:

"I use it to format my files to .md and correct my syntactical errors. Do you have a problem with this?"

If you need further adjustments or specific formatting, feel free to ask!

you are missing the point of the PRs

w-e-w (Collaborator) commented Nov 2, 2024

I'm not a webdev, so what I've said here could be totally wrong.

Loading Speed: CSS files are loaded in serial when using @import, whereas using <link> tags loads them in parallel, which is faster.

I did some searching, and sources claim that this seems to be the case,
but I suspect things might have changed in recent iterations of browsers.

in my test I have five CSS files:
style.css
style1.css
style2.css
style11.css
style21.css

style.css @imports style1.css and style2.css,
style1.css @imports style11.css, and style2.css @imports style21.css.
I've added lots of junk to the end of these CSS files, making each of them megabytes in size.

(the file server seems to have some logic that uses gzip to compress files larger than a certain threshold)

using the Chrome debugger with caching disabled and throttling enabled,
from my observation, nested @imports are indeed blocking,
but @imports in the same file are loaded simultaneously in parallel.
This basically means that if you put all your @imports in a root style.css, you get at most one layer of nested CSS;
while @import does add an additional stage, it shouldn't have a great impact on performance in general.

[screenshot]

Debugging: Line references in CSS debug tools are not correct with @import, making it harder to find and debug issues when files are split using @import.

if you're really having trouble maintaining large CSS,
then it might be better for you to use something like Sass (Syntactically Awesome Style Sheets) or Less;
from my understanding these provide more features than plain CSS, making it easier to work with.
You can configure them to compile to one compressed CSS file, which is going to give you the best performance possible, if that is really a concern.

And from my understanding it is entirely possible to set up automation so that the compilation of Sass to CSS is automatic.

Maintainability + performance, best of both worlds.

And this can be achieved without any additional changes on our side, which also keeps your code more compatible.


about your list_files_with_prefix

Enhanced Flexibility: Users can specify both a prefix and an extension, which allows for more tailored results when searching for files.
Improved Readability: The clearer function name list_files_with_prefix makes it apparent that files are filtered by both prefix and extension.

unless the use case is very special, glob works perfectly fine, allows more flexibility, and is widely used in the industry

Consistent File Listing: The inclusion of the custom sorting mechanism helps maintain a consistent order of returned files.

your custom_sort_key regex pattern r"(.+?)(-\d{2}|-[A-Z]|)(\.\w+)" is extremely limiting and questionable at best.
Adding to that, considering its use is hard-coded into list_files_with_prefix, it seems very specific to your particular use case and is not something that should be applied to everyone in general.

example outputs of custom_sort_key

custom_sort_key('style.css')
('style', '', '.css')
custom_sort_key('style-1.css')
('style-1', '', '.css')
custom_sort_key('style-12.css')
('style', '-12', '.css')
custom_sort_key('style-122.css')
('style-122', '', '.css')
custom_sort_key('style-a.css')
('style-a', '', '.css')
custom_sort_key('style-A.css')
('style', '-A', '.css')

I believe this is specific to your exact naming method and not to anyone else's
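For reference, the quoted outputs can be reproduced with a direct transcription of the pattern (the function body is reconstructed here from the quoted regex, not copied from the diff):

```python
import re

# pattern quoted in the review above; the empty alternative makes group 2 optional
PATTERN = re.compile(r"(.+?)(-\d{2}|-[A-Z]|)(\.\w+)")

def custom_sort_key(filename):
    m = PATTERN.match(filename)
    return m.groups() if m else (filename, "", "")

# "-12" is captured by -\d{2}, but "-122" is not, hence the inconsistency
print(custom_sort_key("style-12.css"))   # ('style', '-12', '.css')
print(custom_sort_key("style-122.css"))  # ('style-122', '', '.css')
```

The inconsistency between `style-12.css` and `style-122.css` comes from `-\d{2}` matching exactly two digits: with three digits the third one blocks `(\.\w+)`, so the engine backtracks and the whole stem ends up in group 1.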


i use it to format my files to .md and correct my syntactical errors do you have some kind of problem with this? correct this .md

to be honest,
I find it hard to believe that you only use ChatGPT to format the markdown and correct syntactical errors.
I doubt that most people care that much about syntactical errors and formatting, as long as the meaning gets across.
It feels to me like you also use it to come up with your argument points,
and as mentioned in the reply above, I don't think your points are valid.
If you intend to proceed with the conversation, I request that you not use ChatGPT or similar tools, even if it's simply for formatting purposes.


you are missing the point of the PRs

sure,
if you think we are missing the point, then you need to explain it better.

anapnoe (Author) commented Nov 3, 2024

First, we can provide an argument to the function so you can pass your own regex to sort the FILES (css, js, image files, safetensors, txt, json, and so on) as you like; as you noticed, the current regular expression does miss many cases.
The point of this utility function is to avoid writing repetitive code to fetch and include files with prefixes and to sort them.
Sass does not support vars; it's a bit old-school. I have used it in the past. PostCSS, on the other hand, does support vars through plugins, and I plan to use it.
The ideas and code contributions are entirely my own; you can believe that or find it hard to believe, of course, whatever you like.
If you have some ideas on how to improve this, I am open to discussion.
Thank you.

By the way, the code of this PR is ridiculously simple.

w-e-w (Collaborator) commented Nov 3, 2024

I think I just saw you edit your message, so I'm not sure what you changed; just want you to know that this message was composed during or before your edit.

Your goal may be to just provide a utility for better file searching, but this PR is not just that.

It's more of a 2.5-in-1:

  1. a new file search utility
    1.1. a custom sorting
  2. use the above utility on style.css so it can be used to load multiple files

Aside from that, there are many issues with the implementation, especially with the custom sorting.

I don't believe point 2, about loading CSS, is useful:
for most extensions one CSS file is more than enough,
and for those extensions that have large amounts of CSS,
it is probably better for them to use tools like Sass, PostCSS, Less... to improve maintainability regardless.

That's why I don't really see a good use case for multiple CSS files and can't agree on point 2.

out of curiosity

sass does not support vars

like, I never use Sass myself, but are you sure it doesn't support variables?
https://sass-lang.com/documentation/variables/
or maybe it's a different kind of variable that you are referring to?


I have no issues if you wish to make a separate PR for a general file search utility,
but I will say that the utility you provide in this function is not good enough.

If you do wish to make a separate PR for a file search utility, there are several things to consider:

  • sorting and file listing should probably be separate (or at least don't hard-code a custom sort function)
    in a lot of cases you don't really need to sort the files

as you mentioned

first we can provide an arg

  • performance
    depending on the number of files you're working with, search performance might start to become an issue
    if I recall correctly, we have reports of users experiencing file search slowdowns

I don't know how many files they have or what storage they are using.

  • balance between simplicity and flexibility of the utility
    if you have a highly flexible util, it's likely to be hard to use, and someone might just decide to write their own simple code to list the files they want
    on the other hand, if the tool is too simple or too specific to a certain task, it might not be enough to do what they want, and they will end up writing their own functions anyway

anapnoe (Author) commented Nov 3, 2024

That sounds about right: sorting and listing should be separate functions. It would provide more modularity, and better performance if someone wants the files without sorting.
I like to write compact code and I use a lot of helper functions for that; I prefer flexibility.

https://www.npmjs.com/package/postcss-css-variables
Yes, a different kind: the var(--my-var) ones.
Anyways, thank you.

I would like to ask whether you would welcome a PR that uses SQLite to create a database for extra networks and metadata.
I rewrote the extra networks to address performance issues for people who have a lot of those files.

Also, it would be nice to get a reply to my other PR, which catboxanon closed without a reply.

anapnoe deleted the Enhance-File-Listing-Functions-with-Sorting-and-Prefix-Filtering branch November 3, 2024 16:41
w-e-w (Collaborator) commented Nov 3, 2024

I would like to ask whether you would welcome a PR that uses SQLite to create a database for extra networks and metadata.
I rewrote the extra networks to address performance issues for people who have a lot of those files.

if by extra networks metadata you mean "safetensors metadata", then we are already using diskcache, which as far as I'm aware is built on SQLite

if it's about user metadata, then since we already have the modules in place it's trivial to use diskcache for it,
but the real question is whether it makes that big of a difference in performance

results may vary with the type of storage device; I'm using an NVMe SSD.
With about 1100 JSON files, a total combined size of 38MB, it only takes 0.39 seconds to read and parse all the JSON files.
Because I haven't tested it with diskcache I can't say that it wouldn't be an improvement, but even if there is an improvement I don't think it will be much;
it could be that I just don't have enough JSON for it to matter.

if I recall correctly, loading of user metadata only affects the Extra Networks tab load time but does not block UI startup;
not saying Extra Networks tab load time isn't important.

I could do a test later to see if there's a significant improvement, I guess.


also, it would be nice to get a reply to my other PR, which catboxanon closed without a reply

not to offend, but if you want my honest thoughts:
"you had it coming"; I am on catboxanon's side.
Don't use ChatGPT or other LLMs when writing to someone, especially when it's something of importance.

In case you're not aware, in lots of places it's not acceptable to use those tools to write something important such as a PR motivation;
lots of people (not talking about this repo) would immediately close a pull request if they think it's generated by an LLM.
I'm not sure if you have seen those PR-generating LLM bots; dealing with those is a waste of time.

Adding to that, your implementation only works if position equals 0, so it's more of an insert-at-the-beginning;
yet calling it a "custom position" feels very much like something an LLM would do.


If you want my thoughts on that PR:
I'm not too keen on it, but maybe I'm just not seeing it your way.
For me, you need to write a more convincing argument,
and that basically means rewriting the PR without using an LLM.

If you want to reopen that PR, then I suggest you rewrite it properly.

anapnoe (Author) commented Nov 3, 2024

You are right, the LLM description is completely wrong; I added this function to my webui to add options at the top.
GitHub has already added a feature for Pro users to help people write their PRs with LLMs; I don't think it is a bad idea.
For people whose language is not English, it takes much time to write one properly.

I mean indexing the filesystem and storing it in the DB so it can be searched, ordered, and served paginated, without
the need to download 38MB of JSON. It would be much faster to get only the first 100 and paginate, but is it worth it?
def register_category(self, category_id, label, position=None):

    if category_id in self.mapping:
        return category_id

    new_category = OptionsCategory(category_id, label)

    if position is not None:
        if position < 0 or position > len(self.mapping):
            self.mapping[category_id] = new_category
        else:
            keys = list(self.mapping.keys())
            keys.insert(position, category_id)
            self.mapping[category_id] = new_category
            self.mapping = {key: self.mapping[key] for key in keys}
    else:
        self.mapping[category_id] = new_category

    return category_id
I haven't tested it, but something like this should work for any index.

w-e-w (Collaborator) commented Nov 3, 2024

You are right, the LLM description is completely wrong; I added this function to my webui to add options at the top.
GitHub has already added a feature for Pro users to help people write their PRs with LLMs; I don't think it is a bad idea.
For people whose language is not English, it takes much time to write one properly.

using an LLM itself is not bad;
using it incorrectly, allowing incorrect information to flow through, letting it dictate your thoughts, and getting in the way of communication is the issue

for people whose language is not English, it takes much time to write one properly

while it is true that if you're not good at a language things can be hard, "but it's just hard for you";
if you use an LLM, and your lack of understanding of its outputs leads you to trust the LLM,
letting that information through makes things "much harder for everyone (including you)".

Basically, for anyone reading an article written by an LLM, one cannot trust a single word; every single bit has to be re-examined.

so unless you are able to verify every single character of the LLM's output yourself,
and make sure that it is represented in a correct manner,
you are probably better off not using an LLM;
and if you can actually vet the output, then you probably don't need the LLM in the first place,
which kind of defeats the purpose

in my opinion, LLMs are currently no good for anything other than automating simple tasks or writing template code


I mean indexing the filesystem and storing it in the DB so it can be searched, ordered, and served paginated, without
the need to download 38MB of JSON. It would be much faster to get only the first 100 and paginate, but is it worth it?

I don't think we currently do any sorting of user metadata in the backend;
I think all metadata sorting is done in the front end by the browser.

And it never needs to download all 38MB of JSON (assuming by download you mean sending the JSON data to the browser);
currently only the necessary bits of information are sent to the front end in HTML.
The 38MB is the total file size of all the JSON, including keys and values that are not used by webui;
the actual information sent to the front end is a fraction of that.

And a slight correction to the total file size:
when I calculated it, I also included directories that are not user metadata;
the actual total size I have is 22MB, from 1007 user metadata JSON files.
Moreover, because of an extension I wrote, on average 60 times extra data is stored in each user metadata file, so only 1/61 of it is actually used by webui (369 KB).

I think it's safe to say that even if you have a ridiculous number of files, like 100 times mine, the actual information that needs to be processed isn't that much,
so I imagine the end performance difference wouldn't be that different.

I'm guessing that most of the time is spent on I/O latency and not on processing the actual data.

And if someone has enough files for I/O latency to cause a great issue,
I'm not too sure what we can do,
as any database caching of results would still need to be checked against the actual data, which means we need to at the very least check the modification time of each file (maybe there are more efficient methods, I'm not sure) to make sure what's in the database is still up to date.


There is a PR that I am not sure will be merged.

In this PR the extra networks grid HTML is generated in chunks,
but I believe all the required data is still sent at once to the front end.
I believe the intention is to lighten the load of the web page, but I'm not sure if it actually achieves that.
I personally didn't experience a big difference; maybe I just don't have enough networks for the improvements to make a difference,
and in my case it seems to make the page heavier (not sure, don't quote me).
Aside from the lack of an obvious performance improvement (that I can see on my setup),
there are still lots of bugs in that PR.

anapnoe (Author) commented Nov 4, 2024

That is a huge PR; it looks overcomplicated to me, and I don't think it actually creates a DB (maybe I am wrong; I didn't go too far into it). I use a very simplistic approach at the moment, just for testing. I have implemented a solid frontend grid virtual scroller that can be used across many different scenarios: fetching from an API (I used the civitai API for testing), from JSON files (the test below), and from raw data; and it works very well. I haven't done any speed tests to see how long it takes, or the file size, for 20000 LoRAs; that is why I was thinking that a DB would solve the searching, sorting, and speed issues, and could help search beyond the filename.
I don't like trees; they are old-school. Nowadays, for flexibility, we use tags, which are dynamic: one item can belong to many categories. But I guess trees are what you get when working with files and folders.
ExtraNetworksJsonCheckpoints... will remove the metadata from the obj so initialization is faster and the file size smaller.
For the first release I will just dump JSON files, but creating a DB has many advantages over static JSON files.

Python code:

pages_and_filenames = {
    ui_extra_networks_checkpoints.ExtraNetworksPageCheckpoints: "checkpoints.json",
    ui_extra_networks_textual_inversion.ExtraNetworksPageTextualInversion: "textual_inversion.json",
    ui_extra_networks_hypernets.ExtraNetworksPageHypernetworks: "hypernetworks.json",
    ui_extra_networks_lora.ExtraNetworksPageLora: "lora.json"
}

for page_class, filename in pages_and_filenames.items():
    page_instance = page_class()
    items = list(page_instance.list_items())
    json_output = json.dumps(items, indent=4)
    with open(os.path.join(data_folder, filename), "w") as json_file:
        json_file.write(json_output)

JS code:
export async function setupLORA() {

const container = document.querySelector('#lora_cardholder_models');
const searchInput = document.querySelector('#lora_search_models');
const sortSelect = document.querySelector('#lora_sort_models');
const ascButton = document.querySelector('#lora_asc_models');

const limit = 0;
const apiUrl = `${DEFAULT_PATH}data/lora.json`;
const method = `GET`;

const initApiParams = {
    limit: limit,
    page: 1,
};

const itemKeys = {
    title: 'name',
    url: 'preview',
};

const vScroll = new VirtualScroll(container, [], 18, itemKeys, apiUrl, initApiParams, method);
vScroll.updateParamsAndFetch({}, 0);

let sortVal = `sort_keys.default`;
let ascVal = false;

searchInput.addEventListener('input', (event) => {
    const searchTerm = event.target.value;
    vScroll.filterItems(searchTerm);
});

ascButton.addEventListener('click', (event) => {
    ascVal = ascButton.classList.contains("active");
    vScroll.sortItems(sortVal, ascVal);
});

sortSelect.addEventListener('change', (event) => {
    sortVal = `sort_keys.${event.target.value}`;
    //console.log(`Sort path: ${sortVal}, ascending: ${ascVal}`);
    vScroll.sortItems(sortVal, ascVal);
});

}

This is the DOM; it is very clean, as it displays and scrolls only the visible items (I have only 78 LoRAs).
[screenshot]

You cannot sort items in the browser if you don't have all the data available, and that includes searching;
that's a DB's advantage: it sorts and serves the data it processes, paginated. So 38MB of text data is a lot.
The main issue for me, and the reason I am working on this scenario, is that
you don't need to cripple the browser: to my knowledge, the current extra networks implementation builds a static page
from the backend, adding all the event listeners inline.

If you open the debug tools in Chrome you will see all the nodes, and on top of this you have duplicates for txt2img and img2img, plus any other extension that adds a view of more duplicate nodes, like sd-webui-prompt-all-in-one.

[screenshot]
my 78 LoRAs, all here

w-e-w (Collaborator) commented Nov 4, 2024

i have implemented a solid frontend grid virtual scroller that can be used across many different scenarios fetching, from api, use civitai api for testing ......

if you have any improvement ideas, that's great:
chunks / paging / DB / performance / usability / sorting / extra functionality, etc.

As mentioned, we already have a framework for caching stuff using diskcache, which uses SQLite.
diskcache basically allows you to use SQLite like a Python dict object;
it's probably easier to just use diskcache than a different framework or raw SQLite.


You cannot sort items in the browser if you don't have all the data available, and that includes searching;
that's a DB's advantage: it sorts and serves the data it processes, paginated. So 38MB of text data is a lot.

I guess you missed a section of my previous message:

it's not 38MB of data; it's 22MB of raw JSON, of which at most about 369 KB is data that webui uses;
the rest is other stuff that is used not by base webui but by extensions.
Note I'm only counting the raw data from user metadata, and not the size of the final HTML;
and as there are currently only a couple of sorting methods, the actual data used for sorting is a subset of the raw data.

Regardless, my point being: I don't believe that reading JSON is the bottleneck.

anapnoe (Author) commented Nov 4, 2024

I would like to create a PR for this, but maybe it is better to wait until you decide on the PR from Sj-Si; it's from April.
This is the stuff I don't like: adding inline event listeners for each item button click from the backend. I also noticed that Sj-Si is doing the same things, so I didn't get into it to see the good parts of his PR.

This is how it is supposed to be done: you only add one event listener to the container, not to the card or its buttons; whatever is inside a container can be captured with one event listener. You could literally have one event listener for the whole document. You sort and filter the data object; you don't parse the whole DOM tree to filter and sort against attributes, which is very slow.

part of the prototype

VirtualScroll.prototype.setupClickListener = function() {
    this.container.addEventListener('click', this.clickHandler.bind(this));
};

/* Filtering Sorting Data */

VirtualScroll.prototype.filterItems = function(searchTerm) {
    this.startIndex = 0;
    this.data = searchTerm ?
        this.originalData.filter(item => item[this.keys.title] && item[this.keys.title].toLowerCase().includes(searchTerm.toLowerCase())) :
        [...this.originalData];
    this.updateDimensions();
    this.forceRenderItems();
};
VirtualScroll.prototype.sortItems = function(sortKey, reverse = false) {
    this.startIndex = 0;
    const vsc = this;
    const sortedData = [...this.data];
    function comparator(a, b) {

        const valA = vsc.getValueByPath(a, sortKey);
        const valB = vsc.getValueByPath(b, sortKey);

        if (!isNaN(valA) && !isNaN(valB)) {
            return reverse ? valB - valA : valA - valB;
        }

        return reverse ?
            (valB < valA ? -1 : (valB > valA ? 1 : 0)) :
            (valA < valB ? -1 : (valA > valB ? 1 : 0));
    }
    sortedData.sort(comparator);
    this.data = sortedData;
    this.updateDimensions();
    this.forceRenderItems();
};

How to use the event listener with an instance of the prototype:
target is the element that was hit; here we check for the closest card (button, other button, open-metadata button, apply-to-prompt, send-to-anywhere, show-fullscreen, whatever; as many click targets as you like with zero overhead). currentTarget is the element the event was attached to; here it is the vScroll itself, in this case the LoRA cards' parent holder.
detailView can be another instance of VirtualScroll.prototype, which is cool; here it is a lightbox instance that gets all the data from the object that was clicked.

vScroll.clickHandler = function(e) {
    const card = e.target.closest('.item.card');
    if (!card) return;  // click landed outside any card
    const itemData = this.data[card.dataset.index];
    if (itemData) {
        vScroll.showDetail();
        detailView([itemData]);
    }
};

I hope you understand how unoptimized extra networks is, and how the usage of duplicate views makes things even worse.
txt2img and img2img extra networks: why do both have to exist?
They call the same functions with a variable target (currentTarget), which we already know, because only one event listener is applied to it.

w-e-w (Collaborator) commented Nov 5, 2024

I would like to create a PR for this, but maybe it is better to wait until you decide on the PR from Sj-Si; it's from April.

that PR is kind of gigantic, and to be honest I think it's out of the realm of something that can be decided by me to merge or close

but if you want my personal feelings about that PR: "I don't think it will be merged".
The reason is that I think that PR tries to achieve too many things at once,
not to mention the issues littered around it.

I've tried to fix some of those issues and commented on the ones I found but was not able to fix.

Like, I don't know why clicking cards in Firefox is fine, but in Chromium browsers the left 1/3 of the card seems to not work.


This is the stuff I don't like: adding inline event listeners for each item button click from the backend ......

I think I've repeated this a couple of times: I am not a webdev;
what you're talking about is beyond my knowledge.
But if something can be done better, then I don't think anyone would have any issues with it.


some things to take note of:

  • extensions should be able to add their own custom tabs, and they might have different requirements,
    so there possibly needs to be an interface of some sort that extensions can use to create tabs using your new method

  • the actions performed when clicking on the cards of the different types of tabs are slightly different
    by different actions I mean that
    clicking an embedding will paste the text into either the prompt or the negative prompt, based on which input was last focused,
    while hypernetworks and lora (the network part) only go in the prompt (the activation text may still go in the negative prompt),
    and clicking on a checkpoint does not paste anything but instead triggers a model change
    and I think I've seen people write custom extra networks tab extensions that behave differently


i hope you understand how unoptimized extranetworks is and the usage of duplicate views make things even worse

yes, I think we are all are aware of duplicating the entire is inefficient

but having two separate views also has advantage that it keeps the scroll position and search filters for each tab individual
it's possible that someone has a workflow that switches between txt2img and img2img
and would want the search and filtering and scrolling position to be separate on each tab
I guess this could be achieved by somehow remembering the state of the tab, and saving and applying the state when switching tabs

if this is implement it can be made so that you can choose whether or not the tabs are synced

anapnoe (Author) commented Nov 5, 2024

In my opinion this is about how you architect an application, not a webdev specificity. If the program
serves dynamic data and you want to be flexible, you create a DB to store tags, model version, trigger words, custom description. I haven't taken a deep look into diskcache, but from my understanding it saves cached data as key-value pairs for fast access; it is used to cache temporary expensive operations on disk, and it is fast at doing this, but it lacks the sorting, filtering, and pagination of a normal DB. With diskcache you need to write custom functions to do that, and to me it looks like the wrong tool for the job.
I guess this is a huge refactoring, so I will not take any action yet; I will have to look into it more.

some things to take note

Yes, this is feasible; you can pass any data to the frontend.
Not this:
onclick="cardClicked('txt2img', &quot;<lora:last:&quot; + opts.extra_networks_default_multiplier + &quot;>&quot; + &quot; qxj&quot;, &quot;&quot;, false);"
nor this, which is not so cool and harder to parse:
data-event="cardClicked('txt2img', &quot;<lora:last:&quot; + opts.extra_networks_default_multiplier + &quot;>&quot; + &quot; qxj&quot;, &quot;&quot;, false);"
but this, which is cleaner:
data-params="'cardClicked', 'txt2img', '<lora:last:' + opts.extra_networks_default_multiplier + '>' + 'qxj', 'false'"

and then the solution from above

    vScroll.clickHandler = function(e) {
        const card = e.target.closest('.item.card');
        if (!card) return;
        // read data-params from the card element (the plain data object has no getAttribute)
        const paramsString = card.getAttribute("data-params");
        const paramsArray = paramsString.replace(/'/g, '').split(',').map(param => param.trim());
        const functionName = paramsArray[0];
        const params = paramsArray.slice(1);
        // call the cardClicked function
        window[functionName](...params);
    };

Yes, you can store the scrollTop position for each in one line of code; for me it doesn't really make sense to have both of them.
And you don't need extensions to duplicate it either, unless they do something extraordinary I can't think of: as long as extra networks is in the view, you use the last-focused textarea to apply the function. This is how I do it for workspaces, where you have
one instance of extra networks and four textareas (img2img_prompt and its negative, and txt2img_prompt and its negative) in the same view.
I will have something soon; if you want to look into it, I will create a PR. Thanks.

w-e-w (Collaborator) commented Nov 6, 2024

Do you use Discord? If so, can you join AUTO's server?
Even though the server has not been very active recently, I think if you voice your ideas there, more people will see them and give feedback, as opposed to only me here.

Invitation links to the server are on the wiki page:
https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing

anapnoe (Author) commented Nov 8, 2024

Thanks, I will prepare and write something to address many of the issues we discussed; when ready, I will visit the channel to talk about them. You can initiate a discussion if you like, to see if there is some interest.
