It's working now, both in Chinese and English! Thanks!
@SWivid Maybe it's worth adding an 'ONNX' branch at [https://huggingface.co/SWivid/F5-TTS/tree/main](https://huggingface.co/SWivid/F5-TTS/tree/...
Nothing wrong with your settings, I think.
If the crash keeps happening at the same training point, maybe we could look through the data used at that specific point.
If you are using the A10...
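One way to follow up on the "same training point" idea: if the sampler is deterministic, you can reconstruct which samples a given step consumed. A minimal sketch, assuming a simple sequential sampler and a metadata list of `(path, duration)` entries (all names here are illustrative, not the repo's actual dataset code):

```python
# Hypothetical sketch: map a crashing step back to the data it saw.
# Assumes a sequential (non-shuffled) sampler with a fixed batch size;
# with a seeded shuffle you would apply the same permutation first.

def samples_at_step(metadata, step, batch_size):
    """Return the metadata entries consumed at a given 0-indexed step."""
    start = step * batch_size
    return metadata[start:start + batch_size]

# Illustrative metadata: (filename, duration in seconds)
meta = [(f"clip_{i}.wav", 5.0 + i) for i in range(100)]

# If training always dies at step 3 with batch size 8, inspect entries 24..31
suspects = samples_at_step(meta, step=3, batch_size=8)
```

Listening to (or re-validating) just those files is much faster than rescanning the whole dataset.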
@lpscr just temporarily lol.
You can see it on the `dev` branch now; we'll go with that structure, though the import and path dependencies aren't solved yet. I'll do that tomorrow when I wake up.
after that fi...
> @SWivid no. It's 600ish hours of 500k samples. Nothing longer than 25 seconds.
It's weird. The training process with 38400 frames per GPU just goes well on A100 80G for us, and it usually uses ...
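For anyone puzzled by the "frames per GPU" number: it is a frame budget per batch rather than a fixed sample count, so batch size varies with clip length. A hedged sketch of that kind of frame-budget batching (the function and numbers are illustrative, not the repo's actual dataloader):

```python
# Illustrative sketch of frame-budget batching: pack samples into a batch
# until adding the next one would exceed the per-GPU frame budget.

def pack_by_frames(frame_lengths, budget=38400):
    """Group per-sample frame counts into batches under a frame budget."""
    batches, cur, cur_frames = [], [], 0
    for n in frame_lengths:
        if cur and cur_frames + n > budget:
            batches.append(cur)  # flush the full batch
            cur, cur_frames = [], 0
        cur.append(n)
        cur_frames += n
    if cur:
        batches.append(cur)
    return batches
```

With this scheme a few long clips can produce very small batches, which is why peak memory can differ sharply between steps even with the same budget.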
Multi-style does achieve the same output. It would have been nice to have something similar in the podcast-style tab though, since the add option is better than having, for example, 2-3 fixed speakers. If it just remains...
See the history here. I am reopening this because it was not closed to my satisfaction and I do not consider the issue resolved; discussion and clarification should be allowed before closing:
Previous...
https://github.com/SWivid/F5-TTS/blob/32c3ee77017d728321ffaa5d10e9f9f6cd44f20d/gradio_app.py#L48
You could temporarily change `str(cached_path("hf://SWivid/F5-TTS/F5TTS_Base/model_1200000.safetensors"...
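A minimal sketch of what that temporary swap might look like (the local path below is hypothetical; point it at wherever your own checkpoint lives):

```python
# Hypothetical sketch: override the hub checkpoint with a local file.
import os

# Original line in gradio_app.py downloads from the hub and caches it:
#   ckpt_path = str(cached_path("hf://SWivid/F5-TTS/F5TTS_Base/model_1200000.safetensors"))

# Temporary override: point directly at a local checkpoint instead
# (path is an assumption, adjust to your setup).
ckpt_path = os.path.expanduser("~/checkpoints/model_1200000.safetensors")
```

Remember to revert the change before pulling updates, since it edits a tracked file.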
Hi,
I'm training on a single A100. I've tried multiple batch sizes, but the same thing keeps happening.
It usually uses between 10-25% of all available GPU VRAM, and then all of a sudden it ...
Hello,
I've come to this repo recently, so apologies if I've missed something here.
Things sound great, although I'm currently doing some fine-tuning tests from the original F5-TTS checkpoint with ...