If, for example, you have a JavaScript file containing patterns like '${placeholder}', Pygments' guess_lexer_for_filename function will return the 'JavaScript+Genshi Text' lexer.
This is because that lexer lists the '.js' extension in its 'alias_filenames'. And since the content contains a pattern matching the ${.*?} regexp, this lexer gets a higher score than the plain 'javascript' lexer and is therefore the one returned by guess_lexer_for_filename.
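A minimal reproduction sketch (the filename and content are made up, but any '.js' file containing a '${...}' pattern should do):

```python
from pygments.lexers import guess_lexer_for_filename

# A plain JavaScript file whose content happens to contain a ${...} pattern.
code = "const greeting = `Hello, ${placeholder}!`;"

lexer = guess_lexer_for_filename("example.js", code)
print(lexer.name)  # 'JavaScript+Genshi Text' instead of 'JavaScript'
```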
Most likely, none of the lexers from the 'pygments.lexers.templates' module (whose names join two language names with a '+' symbol, e.g. javascript+genshitext, html+ng2, css+django, ...) will match a parser from tree_sitter_language_pack.
The chunker will then fall back to the naive chunking method.
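A quick way to see the mismatch, assuming parsers are resolved through tree_sitter_language_pack.get_parser (the exact exception raised for unknown names is an assumption here):

```python
from tree_sitter_language_pack import get_parser

# The hybrid Pygments name has no matching grammar, while the primary name does.
for name in ("javascript+genshitext", "javascript"):
    try:
        get_parser(name)
        print(f"{name}: parser found")
    except Exception as exc:  # exact exception type for unknown names is an assumption
        print(f"{name}: no parser ({exc.__class__.__name__})")
```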
In these scenarios, we likely want to find the parser of the primary language (which would be 'javascript', 'html' or 'css' in the examples above) instead of falling back to the naive chunking method.
A solution might be to replace the call to Pygments' 'guess_lexer_for_filename' function with 'get_lexer_for_filename', which only considers lexers whose primary 'filenames' patterns match the file extension (and not their 'alias_filenames').
A safer solution, however, might be to keep checking for a parser for these 'hybrid' language names and, as a last resort, fall back to the primary language name (the part before the '+' sign). See the sketch below.
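A minimal sketch of that fallback, assuming the chunker looks up parsers via tree_sitter_language_pack.get_parser and that unknown names raise an exception (the resolve_parser function name is hypothetical):

```python
from pygments.lexers import guess_lexer_for_filename
from tree_sitter_language_pack import get_parser


def resolve_parser(filename: str, content: str):
    """Return a tree-sitter parser for the file, trying the guessed (possibly
    hybrid) lexer name first and the primary language name as a last resort."""
    guessed = guess_lexer_for_filename(filename, content).name.lower()
    candidates = [guessed]
    if "+" in guessed:
        # 'javascript+genshi text' -> 'javascript', 'css+django/jinja' -> 'css', ...
        candidates.append(guessed.split("+", 1)[0].strip())
    for name in candidates:
        try:
            return get_parser(name)
        except Exception:  # exact exception type for unknown names is an assumption
            continue
    return None  # signal to the caller that naive chunking is the only option
```

This keeps the current behaviour for files where the hybrid lexer is actually the right match, and only strips the '+...' suffix when no parser is found for the full name.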