Replies: 28 comments 2 replies
-
@Timzoid Yes, I was able to speed up the generation process by adding "Model Precision" controls to all of my implementations. Check it out; it is in the model load section. By default, the models now run in float16 (half precision) mode, which lets them generate about twice as fast. Precision DOES NOT seem to affect the quality of the output, so I encourage you to use it. So yes, Allegro is now as fast as MuseNet in float16 and bfloat16 mode :)
And yes, please do use improv to generate seeds. That is what I do too. It works very well indeed for that purpose.
RE: video: I think you should wait a bit, because I will be releasing my new SOTA model soon, which will be much better than Allegro and much faster too. Here is the preview link with a test model if you want to try it out: https://github.com/asigalov61/Pentagram-Music-Transformer
This model/implementation will use the full MIDI range, meaning it will have full MIDI dynamics and the full MIDI instrument range. It will also be very fast and very high quality due to optimizations, and it will feature all of the generation options from Allegro. So please check it out and let me know what you think. Alex.
PS. Thank you for checking out my projects, and I hope it is not overwhelming, as I have a lot of them and release new ones pretty often.
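[Editor's note: a minimal sketch of what a "Model Precision" control amounts to, using NumPy for illustration only; the actual Colabs use PyTorch, and `to_precision` is a hypothetical name.]

```python
import numpy as np

# Hypothetical sketch of a "Model Precision" control: store and compute
# weights in 16-bit floats instead of 32-bit ones. Half precision halves
# memory traffic, which is why generation can run roughly twice as fast
# on GPUs with fast fp16/bf16 support, usually without hurting output quality.
def to_precision(weights: np.ndarray, precision: str = "float16") -> np.ndarray:
    dtypes = {"float32": np.float32, "float16": np.float16}
    return weights.astype(dtypes[precision])

w32 = np.ones((1024, 1024), dtype=np.float32)
w16 = to_precision(w32)
assert w16.nbytes == w32.nbytes // 2  # half the memory for the same weights
```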
-
@Timzoid I found a way to further increase the generation speed. Check it out. I think you will enjoy it even more now :) Especially check out how it works with max memory tokens (2048). It now generates fast even at max memory tokens :)
And I think you should do a video on Allegro now. I am a bit behind on my new project, so it may be some time before I release it. Allegro is now mature and it deserves a tutorial video. If you can do it, I will really appreciate it :) Alex
-
@Timzoid Hey Tim, I just wanted to give you a heads up... Forget about the Heptabit Music Transformer I mentioned... Instead, check out my latest release: Giant Music Transformer. It's basically Allegro on steroids!!! It's as fast and as capable as Allegro, but more precise (92% accuracy), with dynamics and an 8k sequence length (2730 notes memory)!!! It also has the true full range of MIDI instruments (128) vs Allegro's (12)!!! It turned out very well and it plays very well too. So make sure to check it out :) Happy Holidays!!! Alex
PS I will be adding a bulk generator to Giant soon, so it should help you generate very nice seeds.
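[Editor's note: the quoted figures are consistent with the model encoding roughly three tokens per note; that ratio is an inference from the numbers above, not something stated in the thread.]

```python
# Sanity check on the quoted figures, assuming ~3 tokens per note (an
# inference, not documented in this thread): an 8k (8192-token) context
# window then holds 8192 // 3 = 2730 notes, matching the "2730 notes
# memory" figure above.
seq_len = 8192
tokens_per_note = 3  # assumed encoding size per note
notes_in_memory = seq_len // tokens_per_note
assert notes_in_memory == 2730
```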
-
Oh wow, I'm just reading this today, Dec. 4, after I finally posted a tutorial for Allegro Music Transformer just an hour ago. But after using Giant for half an hour, I can see I'm not going back to Allegro. Giant is 3x to 4x as fast, and produces good quality. I'll have a better handle on whether the quality is much better after I use it for a few days. But I'm looking forward to using it more. I will just direct people to it in my top pinned comment on the Allegro tutorial, and it's good that it is so similar to Allegro. I think you really did it with this one! Thanks.
-
@Timzoid Thank you for your feedback. I am glad that you like it :) Make sure to check out the bulk generator, and I also hope that the new visualizer works well for you :) Alex.
-
Hey Alex, you asked me to check out the bulk seed generator. Is that up now, or going to be up soon, and is it going to be a cell in the "Original Version" of Giant or Allegro in the Improv section, or how will it be?
That brings up an issue that a viewer on my channel mentioned about the RAM on one's computer and using these programs. She was on 8GB and wondering if that was hampering the performance. I wasn't sure because I have 16GB. I think it might hurt, especially if you want to browse or run something else while processing something.
Since Allegro would generate seeds up to 600 tokens long, I never had a problem generating 16 of them with Improv, but just this morning tested Giant by trying to generate 16 seeds at 5000 tokens and got the red "overload" button that stopped the processing fairly quickly. I should probably test the limit more carefully. I know I've generated 16 at 800 fine, and who knows, maybe it could generate 16 at 1000 or 1200.
Anyway, that brings up the issue of bulk generation. Will it be able to even generate, let's say, 30 at a time? Won't that overload? I mean I would like it to do 50 or 100 at a time, if it could. And then maybe be able to download them as a zip file or all at once.
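[Editor's note: for intuition on why 16 seeds at 800 tokens worked while 16 at 5000 overloaded, a rough rule of thumb is that transformer generation memory grows with batch size times sequence length. This scaling assumption and the helper below are illustrative, not from the thread.]

```python
# Rough rule of thumb (an assumption, not stated in the thread): the
# per-token state that dominates generation memory scales with
# batch_size * seq_len, so relative footprints can be compared that way.
def relative_memory(batch_size: int, seq_len: int) -> int:
    return batch_size * seq_len

ok = relative_memory(16, 800)        # worked fine per the thread
too_big = relative_memory(16, 5000)  # red-button overload per the thread
assert too_big > 6 * ok              # roughly 6x the footprint

# Which also suggests 30 short seeds can be lighter than 16 long ones:
assert relative_memory(30, 800) < relative_memory(16, 5000)
```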
In my recent tutorial, I advised people using Allegro to do only 3 long continuations of a seed, at the max, 2040, because if you try to do 5 or more it can red button overload you. Again, I don't know if that's a Tesla T4 issue or a RAM issue on my end.
BTW, I am fine with 600 seed length, because if a seed isn't doing anything in 600 tokens, it's usually not going to do something good at 2000 tokens. I'm fine when a seed, after I edit it in MIDI, has even 100 tokens of a passage I like, so I can continue it. However, I can see how some people playing around with it, might like to generate a piece in Improv that is 8000 tokens long.
I am not absolutely certain, but in Giant, I think there may be a relationship between continuation size and the quality of the generated sequence, and that may also be related to the length of the seed when doing a continuation. With short seeds, under 150 tokens, a continuation of more than 2500 or so seems to do too much repeating. Maybe it was just a fluke.
Anyway here's what happened to me the first day. Since Allegro had the limitation on continuation at 2040, I experimented on continuing 4 seeds in Giant at 3100. I know it goes up to 8000 or whatever. The shortest seed, came out poorly in all three batches. The longest seed came out well in the three batches. So, the next day, I reverted to doing continuations on Giant at just my Allegro length of around 2000, and they all came out fine, whether seeds were short or long.
Lately, anyway, I make a piece by generating seeds in the Original Version, preparing the seeds in MIDI for continuation, and then doing 9 to 12 continuations of 2040 tokens each, selecting phrases from those to build a piece in MIDI software. Usually, there are two or three continuations that stand out, having enough phrases to make the piece.
But with Giant, it is a pleasure to be able to do these continuations so fast. I'd rather spend my time composing than pressing buttons to generate things, and this new speed allows me to do that, and also enables me to do more in an effort to generate the highest quality sequences.
It is odd how viewership on my channel is dropping off, now that I've posted over 200 AI compositions. I think it experienced an artificial boost after the ChatGPT mania, and now that is dying down. I think there is still a problem in trying to attract the audience to your software, the composers who work in MIDI already, and would love this, whether for creating pieces like I do, or just continuing a phrase they get stuck on, while writing a song.
One problem might be that MIDI software, music writing software, and editing in a DAW all take so long to master, and people assume composing with AI is like that. I think I'm going to attempt to make something like a 2-minute tutorial on using Giant. If people are confused by it, I can link it to the 20-minute Allegro tutorial for a more expanded explanation.
Except for people looking for play-along violin accompaniments, my YT channel's audience is nearly all gamers and programmers, some of whom played an instrument in school or have enough interest in music to want to compose with AI.
Tim
-
@Timzoid Thank you for your feedback as usual :) The bulk generator is up for Giant. You can find it along with the other colabs on the main page of the repo. But just in case, here is the direct link: https://colab.research.google.com/github/asigalov61/Giant-Music-Transformer/blob/main/Giant_Music_Transformer_Bulk_Generator.ipynb Check it out and let me know what you think. I can add the same to Allegro if you think it will help.
Now, regarding the memory... You generally need a 16 GB GPU to have enough room to work with Allegro or Giant at reasonable quantities and lengths. However, an 8 GB GPU should be enough if the number of batches and/or the number of tokens is low enough. At, say, 1-2 batches, both Allegro models should fit into an 8 GB GPU at the full 2048 length. Still, thank you for bringing this to my attention, as there is something I can do to make this better: there is a cache clearing function that I can add to Allegro so that it is more efficient with GPU memory.
Now, Giant generation can be a bit unstable. I am working on fixing it. This is due to the fact that the model has the full MIDI instrument range, and also due to its 8k seq_len. Generally, Giant performs well within 2048-4096 token lengths on continuations. Anything longer than that will be tricky and less stable due to technology limitations; there is not much I can do about it. However, I am working on a larger, more stable model for Giant, which should perform better.
Yes, traffic/interest in the software seems to be less than before, but please do not despair. This is normal. There are not a lot of composers yet who use such technology, so it is normal IMHO to see less interest. The initial surge was, as you know, due to MuseNet being deprecated by OpenAI, so people were looking for a replacement. But enough time has passed now that traffic levels are back to "normal". Just so you know, I see consistent usage of LAMC and also of my other projects like Allegro and Euterpe X. People are still discovering this, so it may be some time before traffic grows back IMHO.
I think this is all for now. I will add the GPU cache clearing option to Allegro shortly, so watch out for that, and once I do, you can tell your viewers with 8 GB GPUs to try it again. Alex.
-
@Timzoid Ok. I've added the GPU cache clearing option to both Allegro colabs. Try it out and have your users try it too. Also, I wanted to emphasize that it is best to keep continuations within the 2048 token range. The long seq_len is for context, NOT really for auto-continuations. So keep this in mind.
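[Editor's note: in PyTorch terms (which these Colabs use), a cache clearing option like the one described here is plausibly something along these lines; `clear_gpu_cache` is a hypothetical name, not the actual cell's code.]

```python
import gc

def clear_gpu_cache() -> None:
    """Hypothetical sketch of a GPU cache clearing step. After a failed
    or finished generation, dropping dead references and emptying the
    CUDA allocator cache frees VRAM, so a smaller retry can succeed
    without restarting the whole runtime."""
    gc.collect()  # release unreferenced tensors first
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()  # return cached blocks to the GPU driver
    except ImportError:
        pass  # no PyTorch available; nothing GPU-side to clear
```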
-
Alex, this bulk generator is unbelievable, a dream, really epic! I'm going to have to make a video about it, because people would just think I was making stuff up, if I explain what this can do. I was waiting for it to choke (red button, stack overflow) when I generated 320 seeds that were 800 tokens long in less than 20 minutes, but it worked perfectly. Some people knit. I'm going to be auditioning seeds for the rest of my life, listening for the magic ones.
Also, it was a wonderful surprise that this can do continuations too, which will increase my productivity there. And to think I felt like I was being a little tedious, describing my composition methods to you and the way I audition seeds outside the interface. But I understand that without some users who give you feedback on their experience with the software, you won't have any way of knowing the problems they encounter, how people are using the software, and what they like and don't like about it, so that you can improve it.
When I made the first tutorial on LAMC, which has over 2000 views now, I invited comments, any questions at all, problems encountered etc., and I think I've gotten maybe 10 at most, except for the girl on 8GB who left me about 10 comments.
So, if a person encounters the cache problem, my understanding is that the cache used to stay stuck for the next operation unless you deleted and restarted the Runtime session. With what you added to Allegro, people can now clear the cache after encountering a red button problem without having to reload all the cells. I'll check that out for myself tomorrow on Allegro. I can add that information to the pinned comment under my Allegro tutorial: after encountering the red button problem, clear the cache and try what you were doing again, but maybe with a shorter token length or just a few batches at a time.
I think probably one of the reasons some people didn't write in with any problems, is they kind of figured it out for themselves, as so many viewers are programmers and fairly experienced. I got one person who wrote in, worried that LAMC was downloading hundreds of GBs to their computer. They probably saw some code flying by. I explained to them it doesn't do that.
One thing on the bulk generator, the check box next to "verbose." The AI chatbot I use "HeyPi," explained it the following way: "It (checking the box) probably just makes the prompts longer and more descriptive, giving the AI more context to work with when generating music. So, it might result in more varied and complex musical outputs, but it's still dependent on the AI model being used and the specific parameters of the generation process." That chatbot makes a lot of stuff up. Anyway, it came out fine without checking the box and I auditioned several of the seeds and continuations. But I should probably know what "verbose" does in case anyone asks.
Tim
-
@Timzoid I am happy to hear that the bulk generator is working out for you :) If you want me to add/change anything, let me know, please. Also, do you want me to add it to Allegro? I am mostly working on Giant right now, but if I have a minute I can do that if you think it will help.
Now, the verbose option in the bulk generator is mostly for debugging and monitoring the progress in detail. It does not do anything to performance; it only shows detailed info about the generated compositions. That, since you asked, is all there is to it.
Also, I was wondering if you like the new output visualizer in Giant? It now shows the duration of each note. I think it turned out nice, but if you have any thoughts about it, please let me know.
Another thing I figured I'd mention are the two new options in the Giant Composer:
1. "Trim all outputs to last chord" does exactly what it says, because it seems that the model continues better when everything is chorded.
2. "Try to introduce drums" also does what it says: it tries to add drums to the continuation so that the model can pick up on that. It is useful if you want to add drums to non-drums seeds/compositions.
Anyway, thank you for your feedback as always, and please do let me know what your users think too, or what they want to see, and I will try to do it for you and your users. And yes, restarting the runtime and reducing the number of batches/tokens for low-memory GPUs is the right thing to do. But hopefully, the cache clearing option will help avoid that, so that the user experience is not interrupted on low-memory GPUs. Alex.
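[Editor's note: a sketch of what a verbose flag like this typically looks like in a bulk generation loop; the code and names below are illustrative, not the actual Colab's.]

```python
def generate_one():
    # Stand-in for the real model call; returns a fake "composition".
    return [("note_on", 60), ("note_off", 60)]

def bulk_generate(num_batches: int, verbose: bool = False) -> list:
    """Hypothetical bulk generator: the verbose flag only changes what
    gets printed, never what gets generated or how fast."""
    outputs = []
    for i in range(num_batches):
        midi = generate_one()
        outputs.append(midi)
        if verbose:
            # Detailed per-composition info, for monitoring/debugging only.
            print(f"batch {i}: {len(midi)} events, first event {midi[0]}")
    return outputs
```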
-
Hi Alex, about the "visualizer," yes, I noticed it looked a little different, with the better note lengths. As you know, before the bulk generator, which I used for the first time yesterday, I was generating seeds 16 at a time, downloading them, and then auditioning them just in Windows Media Player, which has no "visualizer," which is what I call the "pitch pattern" or "piano roll view," the latter because it resembles the slots on the paper roll of the old player pianos, and it is found in the MIDI editor of every DAW.
Unless it is in the Composer Version of Giant, which I haven't tried yet, I don't see any option for "trim all outputs to last chord," so I assume that with that you're referring to some instruction, code, or prompt you built into the continuation feature of Giant. With the improv seeds from Giant, over a hundred of which I've auditioned, I was initially thinking there might be a higher percentage that are chordal in nature, as opposed to one line, compared to Allegro.
I'm still not sure about any musical characteristics that are especially different in the seed sequences generated by Giant compared to the ones I generated in Allegro, other than that Giant may produce a higher number that are musically coherent throughout, and fewer crazy ones. Among the keepers, the ones I'm not deleting after auditioning for a few seconds, but am renaming and saving to review in MIDI software, to potentially try fixing and then doing continuations on for a piece, there might be a higher percentage I'm finding that are good.
When I review sequences, I'm just clicking on files twice to listen in Windows Media Player, since my MIDI software takes several seconds to even load on the computer. I find it to be relaxing to review seeds that way, especially compared to the play button and having to scroll between the nested pitch patterns on the screen on GitHub, thinking that GitHub might disconnect me before I got all 16 reviewed.
Of course, if I had the option in Windows, of also seeing a pitch pattern for the file I'm reviewing, I would probably use that. There might be a way to do that. What I mean is, seeing the pitch pattern has some value, because if I am just auditioning them blindly and reject them after two seconds, who knows, with the pitch pattern, I might see some seeds get really good after a bad start.
The bulk of my time, of the 3 to 15 hours I spend on each piece, is in my MIDI software, piecing together phrases, changing notes, tempos, dynamics, accents, adding sustain pedal, most of which is done in the piano roll view, but also in the MIDI track view. As a musician, I would prefer to work in notation, but the piano roll view allows much finer control, which I'm used to, for doing things like, in bulk, making a section of notes more staccato. You can't do that in music writing software in the notation view.
In one piece I posted, one comment was about how expressively played the piece was. And I replied that I'm glad they noticed, but that is mostly the process I use, not AI, to improve the performance quality in MIDI software, which takes hours. I've also done a few posts where I compare the raw sequences to the final piece I made out of them, just so people get an idea of what is involved.
Before I made the LAMC tutorial, I think I told you there was only one video I found on YouTube, that had like a 50 second segment of a guy demonstrating LAMC. He was reviewing about 8 different AI composer software, and included LAMC, which I initially knew about from a viewer of my channel who mentioned it. Upon my viewer's mention of LAMC, I went right to the GitHub site, which I'd never been on, and just looked at that and thought eeek. So I found that one little segment on YouTube, and it turned out he showed how to load a custom seed. It was so quick, I had to replay it about five times, but then got it. I swear, if I hadn't had that video to watch, I could have tried things for a week, and not been able to load a custom seed. I probably would have just messaged you.
But my not very obvious point is, I'm not much of an influencer, unfortunately. But at least if someone has heard of Los Angeles Music Composer, or Allegro Music Composer, and now, Giant, they will be able to find my YT site in a search.
But that's the problem, even finding out about it, and incredibly, that was the problem for MuseNet also, before ChatGPT. The availability of ChatGPT overlapped with MuseNet by only about 5 weeks. "OpenAI" was not a known company to musicians.
Interest in MuseNet, and awareness of MuseNet, took off after MuseNet died! I'd say the awareness that MuseNet existed was 100x greater two months after MuseNet died, than in the 3 years it was working and available, all because of the ChatGPT storm of interest.
I know that because of the hits on my MuseNet tutorial, which I've left up just because it is a conduit to Los Angeles Music Composer, if people read either the pinned comment or the notes. But I still think an important conduit would be a music/composer software influencer.
But there's another hitch in spreading the word about such software. A lot of composers who are using AI software don't want it known they're using it, I suspect. The minute a composer of pop music announces they used AI to complete a few phrases of their new hit song, well, that gets into the copyright issues and everything. The public thinks, well, you didn't compose this, AI composed it. That's cheating!
Of course, there's a smaller segment, like myself, who just can't wait for AI to exceed human abilities in everything, including creative pursuits like music composition. I say, let AI bring on the clean energy, the medical advancements, the solution to end wars.
-
Whoops, I forgot to respond about your adding the bulk generator to Allegro, too. Yeah, I think you should just keep working on Giant. I'm probably not going to be doing anything in Allegro. Today, I'll be working on my first "Giant" piece, which I've already put a few hours into, but it needed more continuations.
I probably will need a week or two of working with Giant to be absolutely sure I will not be going back to Allegro for anything. While moving from LAMC to Allegro, it took a while, with you making improvements to Allegro, before I stopped using LAMC completely. Euterpe-X was something I tried about 8 times and just didn't like at all.
Another thing I forgot to mention: every once in a while, I used to spend an hour or so just generating seeds in MuseNet, because the seed is so important to how a piece will turn out, and the proportion of good seeds was no different in MuseNet than in LAMC, Allegro, and now Giant. But that process was not that much fun in MuseNet. So, this bulk generation is "it" for me. Like I mentioned, it is relaxing to review seeds away from an online interface, where I don't have to be loading cells or doing anything but going to the directory where they are stored and clicking on them.
-
@Timzoid Thank you for your feedback, as always. It is very helpful.
Yes, the new options I mentioned are only present in the Giant Composer, because there is really no reason to put them in the other colabs. So check it out and let me know what you think. I always appreciate your detailed feedback.
To review MIDIs, there are some apps available for Android and iOS. I find it handy to use my phone when reviewing seeds. It is easier and more convenient. So check out the app store on your phone for MIDI apps.
Yes, I will be switching mostly to Giant, because it works very well. With any luck, I will be releasing a better model for it soon, so that there is a choice for users. Yes, each model/implementation is unique because it is AI, so it takes a minute to get used to new ones. But I hope that it is not too painful for you, because, as you said, I try to improve each new model/implementation, so switching is usually worth it.
RE awareness for the software: You definitely have the right point. It is not being advertised enough, which is why I am very grateful to you for your help with raising awareness for my models/implementations. It helps a lot. I do what I can too. For example, I submitted my pieces made with MuseNet and Allegro to Reddit/compose, which is a sub-reddit for music composers, and they did like it as far as I can tell. You can try Reddit/compose as well. It is an effective platform for music and music AI. Otherwise, let's hope these models will become more popular and people will have more interest in them, particularly composers, who are my main target audience.
Anyway, thank you again for your thoughts and feedback. I really appreciate it. Feel free to let me know if you have any ideas for Giant so that I can implement them. Otherwise, happy holidays to you! Alex.
-
Hi Alex, I don't mind adjusting to the new features. I'm just always thinking of the musicians/composers who are more on the tech margins, may never have studied programming, and freeze at the sight of Python in Colab. I took programming starting in 1971, my first year in college, in the punch card days, with more courses in programming in the 1980s. I used bulletin boards before the Internet, and played with Eliza on my first computer, a Vic-20. However, I took so little interest in coding, feeling I didn't have the aptitude or desire to learn, that as much as I kept taking courses and trying, I never learned to code well in anything, and of course never did programming for a job.
Anyway, I have a unique ability to see the use of software, not only as someone who has used hundreds of different programs for various things, but from the standpoint of the more average user.
Reviewing seeds on a phone is a great suggestion I'll need to pass on to viewers. I won't do it myself on a phone because I'm retired and use the phone rarely. I recommend Windows Media Player to Windows users online because of its built-in MIDI support, even if the MIDI soundfonts suck, because with VLC, for example, you have to download a set, like the free ones by Fluid. I just switched to using VLC, in the beta nightly edition, since it allows me to control the background color. No, the skins don't do that for the full-size Windows version. With doing things like going through 800 seeds and reviewing 30 long (2000-token) continuations, I realized I had to get more serious about my method of reviewing, and find something that sounded better.
When I'm reviewing long continuations, even if they are bad, there may be a few measures that I want to use, so I write down the timeframe in which they occur. So, I take notes while I'm reviewing.
I think I told you this once, but when someone recommended LAMC to me, I recognized your name from the Reddit composer thread. When MuseNet went down on Dec. 12, 2022, I had time on my hands and wanted to explore what other people had done in MuseNet. In a search, I found your 70+ MuseNet composition zip file, and I am probably one of the few people who listened to all of them. With nearly all your pieces being for violin and piano, I assumed you maybe played the violin. I wrote to you on Reddit at that time about how much I liked two of them in particular, Gang Stop and one about war (I forget the title), but you didn't reply. I thought you might be off to war.
Violin is only one of five instruments I've played during my life, two wind instruments which I played at a much higher level, one which I went to Europe to study with the intention of competing at the international level. But I chose a different career and ended up on tour in South America, so getting flown to Brazil from Vienna. I skated with International Holiday on Ice for as long as I could stand all the travel -- it was hard on my health -- and did that after dropping out of the Vienna Conservatory and taking my first skating lesson when I was 20. So, basically, I was loaded with natural talent for a few things, like music and skating, but not much else, even though I went to grad school for linguistics and German History.
I began playing piano when I was five and I was self-taught until college age, and by then I could slop through some of the advanced works of Chopin. Growing up, my parents didn't have any money for music lessons. My first violin cost $3. Our piano was a junker I tuned myself, using a tuning kit I bought as a kid, and a library book on how to tune pianos.
I liked composing for two years in music school and a few of my teachers encouraged me, but I didn't see any point in it. Just getting friends together to play things was always a chore. I hated writing music down. I never got good at it, like composers who can scribble away like crazy. That's where AI music comes in for me. It takes a good ear, some imagination, but it gets all those notes down like magic.
But then, with the internet and being one of the early MIDI users, I could see how a person who really burned to compose, would suddenly have some opportunities to be heard. I just dabbled in composing every ten years or so.
But following a few GPT blogs, and stumbling into MuseNet in its last four months of existence, well, the first time I tried it, I couldn't believe it. I thought it was phenomenal, incredible, and especially amazing because it was free and so simple to use, anyone could learn to use it in a matter of minutes, yet it could do so much, especially if a person used MIDI. It was then I realized how little I knew about MIDI, and I began exploring all it could really do, like not only control the volume of individual notes, but for violin or flute, for example, shape the dynamic expression in a single held note.
MuseNet forced me to learn more and more about MIDI, and your software did the same, made me learn even more. And I haven't really gotten much to doing instrumental or symphonic music yet. That can wait. So, I'm still learning in MIDI, and still learning in Windows, even though I used them from their inception.
I'm kind of curious about your background in DAWs and in music in general. Christine Payne of OpenAI was not only a programmer, but a trained classical pianist. I think some of her music knowledge is reflected in things like getting the sustain pedal on piano to be reflected in note length. That's really great for most people who used the software.
However, for a composer, or someone who plays piano, understands the use of the sustain pedal, and learns how to put those pedal markings into MIDI, it's much better to have piano music come out as it does in your software, with no pedal, even though it may take 15 to 30 minutes to put in all the pedal markings.
Anyway, I tried the composer version of Giant today for the first time, and it turns composing that way into a different experience, since the continuations just pop out in seconds.
But the Giant Music Transformer Bulk Generator is still blowing my mind. It is soooo much better.
One thing I noticed: the zip function for downloading is cumulative, doing every file you generate in a session. That's probably a good idea. I'm still going to download after I run, let's say, 30 continuations at 2100 tokens each, just in case it goes offline. This doesn't happen to me very often, but it has happened maybe a dozen times. Sometimes I get the file directory back when reconnecting, and sometimes not. There's probably a way of clearing the zip cell file cache if, let's say, I first do 30 continuations and don't really need to be downloading them again. I probably have 3000 MIDIs saved on my computer and on a thumb drive, in the year I've worked with this type of software.
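(For anyone following along in Colab: clearing accumulated results between runs can be done with a short cell like the following. This is just a minimal sketch; the actual output path the Bulk Generator colab writes to may differ, and `./content/Output` here is only an assumed example.)

```python
import os
import shutil

def reset_results_dir(path="./content/Output"):
    """Delete the results directory (including any files a cumulative
    zip cell would pick up), then recreate it empty so the next run
    starts fresh."""
    if os.path.isdir(path):
        shutil.rmtree(path)           # removes the dir and everything in it
    os.makedirs(path, exist_ok=True)  # recreate an empty results dir

# Example: wipe the (assumed) output folder before generating a new batch
reset_results_dir("./content/Output")
```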
The ease of use of the bulk generator is such a leap forward for me, coupled with how it is going to improve the quality of compositions, because it eliminates all that busy work: waiting for batches of 3 long continuations at a time to generate, setting a timer on my Google Home device, making sure I'm back at GitHub so it doesn't disconnect me for being idle, then downloading them and organizing them. It's much easier now.
Generating and auditioning 30 long continuations in less time than it took me to do that with a dozen in Allegro, well, the odds of finding better music in some of these continuations, which will make a piece sound better, are much greater.
I wanted to know about your background in music, and especially how much you've used MIDIs in a DAW, for example, so that if I bring up that type of issue, I know you'll know what I'm talking about. To even write AI software like this, I assume you have a strong knowledge of MIDI, instrument channels etc., a lot more than I understand. But I wondered why every note in Gang Stop was the same volume, but maybe that was just in your MIDI file on Reddit, I don't know. It could have been you just weren't into improving the performance quality, because you are such a high-level programmer.
This software you wrote fills such a huge gap. I've used a Google News alert to read the latest news on AI music software, and it seems like there's nothing on the horizon for music sequence generation. It's like the race is on for AGI, and maybe that will write the next generation of music software.
When ChatGPT became publicly available, I used it, and followed it for a couple of weeks as some of the clever young programmers used it for writing music. The results were pathetic, from a music standpoint, kind of like the stand-alone tools of Magenta I downloaded.
And here you wrote to me that MuseNet was faster than your Los Angeles Music composer because it was on GPT-2. And yet, Giant is much faster than MuseNet. It almost makes MuseNet seem like the HuggingFace edition of Allegro. So, you must be a genius. That's my explanation.
Sorry, I wrote so much. I type fast. I guess one question: I discussed the use of style choices in AI music generation with a chatbot (so it could have made a lot of stuff up), but is style selection a copyright issue, since I believe it would involve a reference file of that style, or is it a technical issue? Let's say Impressionist, Mozart, jazz, or pop, interacting with the training library. I mean, it is done on GitHub. The AI Forever Music Composer has a selection of four styles that work okay -- classical, jazz, pop, and elevator music. (I forget the name for the last style.) But I'd like to generate seeds in the Impressionist style, for example.
|
-
@Timzoid Thank you for your thoughts and feedback. In regard to your suggestion about bulk-generation zipping of the results... I will think about what I can do there... Thank you for pointing it out to me. To answer your questions about my music and computer background: I have an 8-year degree in music. I am classically trained. My music specialties are piano and choir, but I love the violin as well. Unfortunately, I can't play any instruments or sing anymore due to health problems, but thanks to Music AI, I can still create and enjoy music :) I also have a degree and about 25 years of experience in computers. My current specialty is auto-regressive transformer models for symbolic music creation. I used to be a Microsoft certified specialist, but due to health problems I can't really work anymore and I am sorta retired. Now, in regard to my music like Gang Stop... It was all made with MuseNet, which does not support dynamics for anything other than the piano, so there is not much of it in my pieces. Otherwise, I appreciate that you listened to it and that you liked it. It means a lot to me. I was also going to suggest that you check out my main MIDI dataset. In particular, you may be interested in its search-and-explore software, which can help you search for MIDIs and also discover new or similar music. Check it out and let me know if it is helpful. Anyways, I hope this helps and answers your questions. Sincerely, Alex. |
-
@Timzoid As per your request, I've added an option to delete the results dir in the Giant Bulk Generator colab. I hope this will help :) Also, make sure to check out the Giant extra-large model. You can select it from the model loader cell. It's slower and you will have to reduce the number of generated batches, but it is more stable and generates nicer output IMHO. Alex |
-
Hi Alex, so I tried the "delete results dir" option in the Giant bulk generator, and yes, it works fine. As I mentioned before, one of the big advantages in terms of organization is the way the bulk generator outputs a sub-directory that actually has the name of the seed from which the continuations were done.
I've been looking for the Fast Extra Large model to appear, because you mentioned it before, and I assumed it would be like Allegro, where both models are showing without having to click on a drop-down arrow. Anyone coming from Allegro will expect to see it that way. That said, I think hiding the second model is better for first-time users especially, since many of them are new to AI music generation, and do not even understand the basics of training models, and how their size affects quality of output and speed of output.
But to make it clearer, add this as follows next to "Choose Model":
Choose Model (Very Fast Large or Fast Extra Large)
In that way, people will understand there are two models and see that the Very Fast Large model is showing, and know to look for the drop-down arrow to find the other one. Then you're covering all bases. People coming from Allegro won't be confused, and people new to CoLab, who haven't had to click on any arrow before, up to that point, except arrows that have a circle around them and are used to run cells, will have enough intuition to think, "Oh, it must be this drop-down arrow over here."
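(For readers unfamiliar with how these Colab dropdowns work: the form field is driven by `#@title` and `#@param` comments in the cell, so the clarifying label is just a change to the title text. A minimal sketch follows; the model names mirror the ones discussed above, but the variable name is an assumption, not the actual cell's code.)

```python
#@title Choose Model (Very Fast Large or Fast Extra Large)
# In Colab, the #@param comment below renders as a dropdown in the form UI;
# outside Colab it is an inert comment and this runs as plain Python.
model_choice = "Very Fast Large"  #@param ["Very Fast Large", "Fast Extra Large"]

print("Selected model:", model_choice)
```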
I'm not sure if I mentioned this to you before, but before using some of these GitHub programs, in the thousands of programs I've used, a drop-down arrow is usually for text in a box, not on a line, and the arrow is in the box, not to the right of a line with text on it. I noticed other GitHub Python programs have the same thing, text on a line. Programmers are used to this, I'm sure, but the regular user isn't.
In my recent Allegro tutorial, I try to cover the basics, yet also some of the quirks of using this type of Colab interface, where there are all these arrows and menu selections the average user doesn't have to be concerned with, because they aren't necessary for generating or saving music.
But with all the arrows, I'm guessing it could be alarming when, as has happened to me, you accidentally click on a drop-down arrow and all the lines of code fill the screen; some new users may do that also. In the tutorial, I show how you can just click on the arrow again to retract the code, and that the code is only for programmers who want to see it, not something people who want to generate music need to be concerned with.
So, this morning, I used the Extra Large model for the first time, generating 100+ seeds and 30 long continuations. Both processing times, for seeds and continuations, were not that much longer than for the Very Fast Large model, and from my quick review of 50 seeds or so, yes, I think a higher percentage may be better quality. I will be using the Extra Large model from now on.
With the continuations, I'm not sure. The 30 continuations I did on the same original seed using the Very Fast Large model yesterday were so good, I'm not sure that these 30 are better. I did them on the same seed for comparison, setting a new record for me, on the number of continuations I have to build a piece out of a seed. I'll have a better handle on any quality difference in a few weeks.
One thing I've yet to try in Giant is the multiple sampling on Inpaint. I spent some time with that on Allegro, and as you know, found that 1 actually worked as well as or better than 2, up to a max of, I think, 6. However, I tried the Inpaint on Giant at just the one sample, and it works well.
|
-
@Timzoid Ok. Thank you for your feedback and thoughts about Giant. I will add that clarification note to the model loader in the next update :) I am glad that you liked the Large model. I think it works well :) Alex. |
-
@Timzoid Hey Tim, I hope things are going well for you :) I just wanted to direct your attention to my latest creation: Quad Music Transformer https://github.com/asigalov61/Quad-Music-Transformer This new model/implementation is very stable and shows very good results on Piano (continuations) and Choir. Check it out at your convenience and let me know what you think if possible, as I value your feedback :) Thank you. Alex |
-
Hi Alex,
I'm working at a slower pace lately composing, just trying to enjoy it more. Also, with the prospect of the US turning into Trumpistan, spending a little more time following politics.
I tried Quad for a few days. It is fast and may give higher quality Improvs and Continuations, but I'm not sure. I think it creates some fun and unique-sounding seeds. But I've only generated and reviewed about 80 so far. Without the bulk generator, it's tedious generating seeds or continuations.
However, I'm spoiled by the way the Giant Bulk Generator organizes files in terms of work flow for composing, the sequential file numbering when generated and when I save them. I like the way it zips all the files together, and creates a folder name from the seed name that I don't have to create. And it has never frozen on me once! I'm not the most organized person, and if I'm working with over 100 files, the Bulk generator is the way to go in terms of it not being a mess.
I have generated as many as 120 long continuations for a single piece with the bulk generator, and doing that without it, instead of taking just a few hours to generate, and then it is all organized, that would take three days of me sitting there generating a few long continuations at a time, what my RAM can tolerate in long files.
Worse, whether it was Allegro, Giant, or the others, I have a problem with the file directory when I'm downloading seeds that I'm trying to save. It will freeze on me and stop downloading. That happened three times this morning with Quad, including once when I generated a batch with a wonderful seed I really wanted to save. I could not save it, only play it in the online interface. That is frustrating.
And when it freezes, I have to exit Colab, restart my computer, and re-load all the cells. One change I would like: if the three dots to the right of the play button for a specific sequence in the online interface downloaded the MIDI file instead of the WAV file, that would be helpful. It must be a tiny minority of the people who would go through all it takes to use this software who would want the WAV file anyway. I tell viewers up front: unless you are familiar with MIDI software, or with MIDI in a music-writing program like MuseScore, this AI software is probably not for you.
As for the new feature of being able to start on a specific pitch in the Improv cell: while I guess I see a use case for it (for a vocal piece, for example), it should be optional, so you can opt out of it. When reviewing seeds, I don't like hearing them all start on the same note for the whole batch. Who knows, though, perhaps it helps the sequence tend toward a middle range rather than the extremes. Sometimes a sequence generates way too high or way too low, but it sounds good. I just transpose it in MIDI software; it takes 10 seconds.
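(Transposing is just as quick in code, for anyone curious. A rough illustration in plain Python over generic (pitch, start, duration) note tuples, not any particular library's MIDI format: shift every pitch by a number of semitones, clamped to the valid MIDI range.)

```python
def transpose(notes, semitones):
    """Shift every note's MIDI pitch by `semitones`, clamping to the
    valid MIDI pitch range 0-127. `notes` is a list of
    (pitch, start, duration) tuples in arbitrary time units."""
    return [(max(0, min(127, pitch + semitones)), start, dur)
            for pitch, start, dur in notes]

# Example: drop a sequence that generated an octave too high
seq = [(84, 0, 480), (88, 480, 480), (91, 960, 960)]  # C6, E6, G6
print(transpose(seq, -12))  # -> [(72, 0, 480), (76, 480, 480), (79, 960, 960)]
```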
I haven't tried the multi-instrument facility. I meant to do that this morning but forgot. I noticed the cell for patching, and at first I selected just 0 for grand piano, but I found out you have to run/load the full list or it doesn't work. I assume that if I did a piece with two parts, piano and sax, and just loaded the whole thing, the patches are there to keep those separate parts separate. I'll have to try that to see how it works.
Thanks for your continuing work on the software.
|
-
@Timzoid Thank you for your feedback, as always :) I will see what I can do to incorporate the things you mentioned. I will try to add individual download buttons/links for MIDIs, and I will see if I can improve the bulk generator as well. I will also see if I can improve the patch cell in Quad so it is easier to use and understand. Meanwhile, try Quad again. I have uploaded a new model today. It is even faster and plays better too. Sincerely, Alex |
-
Hi Alex,
Now that I've worked with Quad for a number of days, including trying continuations of the same seed with both Giant and Quad: although Quad is faster, I like the sequences better in Giant, especially the 800-token seeds I generate with Improv. In Quad, the seeds (I generate 500 at a time) tend to sound choral/chordal, whereas in Giant the seeds come out with a lot of variation, and when listening to them a hundred or two at a time, they're really much easier for me to take.
Quad seeds also have a unique characteristic: usually at the end of the seed or continuation, I'll get that instrument that whistles (part of the percussion bells and whistles), which I never got with LAMC, Allegro, or Giant. It is only in about one in 20 seeds or so. Improv in Quad also produces more seeds that contain voices, even though I have "piano" selected for the "custom" seed generation in Improv. So most are piano, which I want, but more have other instruments than in Giant. Of course, all these generators, including MuseNet, would on occasion produce measures in instruments other than piano.
Quad is very fast though. One seed I used for continuations in both Giant and Quad was particularly pronounced in showing the inferiority of the continuations produced by Quad. It was a seed with a fast and strict rhythm, that Giant could replicate, whereas Quad would mess up the rhythm and stumble too much in the continuation sequences. One thing I learned early on though, is to never use a seed that has rhythmic stumbles or irregularities, as it just gets repeated or even made worse in the continuations.
I think it would be great if you could find a way to create a standalone Improv generator that produces seeds in various styles, including Impressionism. The frontend for MuseNet that a British guy wrote, called MuseTree, had about 300 styles, and I never got to try it because I didn't know about it until a few days after MuseNet went down on Dec. 12, 2022. The people who did use it liked it a lot.
So, anyway, I'm going to be reverting to Giant for a while.
Tim
|
-
@Timzoid Thank you very much for this detailed feedback, Tim! I really appreciate it, as always :) Quad was designed to handle mostly Piano and Choir. It was trained on a much smaller dataset than the Giant, and mostly on Piano and Choir compositions. So I absolutely agree with you that Quad struggles with anything other than Piano or Choir. Quad also struggles with complex compositions indeed. I agree completely here. I also agree with you that Giant is the best and most versatile model/implementation, and I use it myself as well. In regard to making an improv generator in different styles... This is not possible atm because it would require model re-training on a custom style-based dataset, and I do not even have anything like that available. MuseNet was produced by a multi-million-dollar company which had the resources and money to create a custom dataset like that. I can't do that, unfortunately. :( Besides, particular music styles can also vary greatly from song to song, because music is a very complex and diverse thing. So I would say that style-based generation would not be very practical, nor would it produce good results. The workaround here would be to use seeds of the particular style and then continue them as usual with the continuation generator. I am sorry I can't offer you anything better here :( Anyway, thank you again for your feedback, and if you have any more questions or suggestions, feel free to write at any time :) Sincerely, Alex |
-
@Timzoid Hey Tim, I have been meaning to direct your attention to my supplemental models/implementations which you may find interesting and/or useful: Check it out at your convenience and let me know what you think if possible :) Drums Transformer works very well. Anyways, I hope you will find these useful and any feedback from you would be very much appreciated as always :) Sincerely, Alex. |
-
@Timzoid Hey Tim! I wanted to direct your attention to my latest creation: https://huggingface.co/spaces/asigalov61/Harmonic-Melody-MIDI-Mixer It turned out great, and I wanted to share it with you and also hear your thoughts about it. Thanks. Hope you are doing well :) Alex. |
-
Hi Alex,
The Harmonic Melody MIDI Mixer is pretty wild. Since I get confused working on too many tracks, or it ends up being time-consuming, it would be nice if it had an option to output piano only, instead of random instruments. It's nice that it works fast, which makes it easy to keep trying with the same piece until something works.
One day I had a look at your other Huggingface projects. I'm going to try using the Inpaint one more. I took a break from a lot of composing and am learning a new DAW. It doesn't come with good orchestral instruments, but I managed to find a really good free piano VST, American Home Grand by Arturia. It comes in their free instruments package with a bunch of synths and 4 GB of other things I'll never use. The piano rivals Pianoteq in sound, so for free, instead of $250, it is a bargain.
I want to start composing more AI pieces, and I have tried composing a few things in the last month. They just weren't worth posting. Otherwise, I'm fine.
Tim
|
-
@Timzoid Thank you for your feedback, Tim :) Yes, HMMM is on the alternative/psychedelic side :) But it can still produce nice production tracks now and then :) I've added a Solo Piano option to it as you suggested. Enjoy! :) I also added two other options to play with, so check them all out :) RE: Pianoteq: I thought the same thing. It's too expensive, even though it's pretty nice :) I personally use Audacity as a DAW and a MIDI editor for MIDI stuff. For Piano in particular, I use SF2 banks from soundfonts4u. They are free, very nice, and almost as good as Pianoteq. Check it out if you have not already :) Also, check out the Internet Archive 500 Sound Fonts Banks. This archive contains some very nice SF2s, like Orpheus. Check it out too :) And make sure to check my Hugging Face spaces page regularly, as I add new demos/projects to it all the time, which you may find interesting and useful :) Anyways, thank you again for your feedback, and let me know if you have any more suggestions about any of my projects :) Sincerely, Alex |
-
@Timzoid Just poking you... I wanted to direct your attention to my colab work with SkyTNT: https://huggingface.co/spaces/skytnt/midi-composer We greatly improved it and added new models. We also added a self-continuation option, which is great for controlled music generation. So please check it out and let me know what you think. Sincerely, Alex |
-
With the recent update to TMIDIX.py and x_transformer.py, in the Original Version, the Improv and Continuation features seem to be faster, and maybe a little better in quality than they were too, if that is possible.
I didn't quite realize this before, but the Improv feature is a gold mine of good seeds, and they generate incredibly fast, I mean faster than MuseNet. I just counted today: I've posted 56 pieces to YT made in LAMC, Euterpe, or Allegro since I started using them, but I think I was missing something huge by doing so much "inpaint" in LAMC.
A few months ago, my first experiences with the Improv feature weren't so good, for some reason, and I avoided it. I wasn't generating in enough batches at a time; now 10 batches at 450-token length take just over a minute to generate in Allegro, and if I find one good seed in 20 or so generated, that is good.
Then, to be able to fix the seed in MIDI software and feed it back into Allegro for 10 continuations on it of maybe 1200-token length, that's more than enough for a piece, when you're willing to slice and dice as much as I do.
I'm definitely going to make a video for Allegro, Original Version. Just going to use it more to make sure I understand everything it is doing.