Hi @v3ucn .
Yes, what I mean is:
will we also need to change the parts of the code like
```python
with open(file_metadata, "r", encoding="utf-8") as f:
    data = f.read()
```
to `utf-8-sig`, as y...
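To illustrate the difference (a minimal standalone sketch, not the project's actual metadata format): `utf-8-sig` strips a leading BOM on read, while plain `utf-8` leaves it in the data, where it can silently corrupt the first field.

```python
# Demo: "utf-8-sig" strips a leading BOM on read; plain "utf-8" keeps it.
# The file path and pipe-delimited content are hypothetical, for illustration only.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "metadata.csv")
with open(path, "w", encoding="utf-8-sig") as f:  # writes a UTF-8 BOM first
    f.write("audio_1.wav|xin chao")

with open(path, "r", encoding="utf-8") as f:
    raw = f.read()    # starts with "\ufeff": the BOM survives

with open(path, "r", encoding="utf-8-sig") as f:
    clean = f.read()  # BOM is stripped automatically

print(raw.startswith("\ufeff"), clean.startswith("\ufeff"))  # True False
```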
> Could you provide me with your Vietnamese vocab.txt? I haven't set it up yet, and I really need it for testing.
>
> > [#57 (comment)](https://github.com/SWivid/F5-TTS/discussions/57#discussion...
# Can F5-TTS inference be faster?
## Environment:
Machine type: VPS
Python: 12.3
OS: Ubuntu 22.04
MEM: 64 GB
vCPU: 16
GPU: one NVIDIA 4090
Install: followed the installation steps in README.md
...
Because attention has a quadratic memory cost, the current rough batch sampler needs to be adjusted to take longer samples into consideration, e.g. by dynamically adjusting the threshold: smaller for longer samples.
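The idea above can be sketched as a duration-aware batch sampler: the per-batch duration budget shrinks whenever a long sample would be added, so batches containing long clips stay small. All names, thresholds, and the grouping strategy here are illustrative assumptions, not the repo's actual sampler API.

```python
# Sketch: group sample indices so each batch's total duration stays under a
# budget, and shrink that budget for samples longer than long_thresh, since
# attention memory grows quadratically with sequence length.
# Parameter names and values are hypothetical, chosen only for illustration.
def dynamic_batches(durations, base_budget=2000.0, long_thresh=30.0, long_scale=0.5):
    """Return lists of indices; long samples trigger a reduced duration budget."""
    order = sorted(range(len(durations)), key=lambda i: durations[i])
    batches, cur, cur_dur = [], [], 0.0
    for i in order:
        d = durations[i]
        # Smaller budget when the candidate sample is long.
        budget = base_budget * (long_scale if d > long_thresh else 1.0)
        if cur and cur_dur + d > budget:
            batches.append(cur)
            cur, cur_dur = [], 0.0
        cur.append(i)
        cur_dur += d
    if cur:
        batches.append(cur)
    return batches

# Short clips pack together; each long clip ends up in its own small batch.
print(dynamic_batches([1, 2, 3, 40, 45], base_budget=10.0))
```

Sorting by duration first keeps similarly sized samples together, which also reduces padding waste.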
Could you provide me with your Vietnamese vocab.txt? I haven't set it up yet, and I really need it for testing.
> [#57 (comment)](https://github.com/SWivid/F5-TTS/discussions/57#discussioncomment-...
Hi, I notice GPU mem usage keeps increasing during training
I'm training on 1x A5000 24GB, dataset ~300h consisting of 0.2-49s audio files
My train.py settings:
batch_size_per_gpu = 1500
batch...
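The memory growth is consistent with the quadratic attention cost mentioned earlier: with clip lengths spanning 0.2-49 s, the rare long clips dominate peak usage. A back-of-envelope check (the 100 frames/sec mel frame rate is an assumed illustrative value, not necessarily the repo's):

```python
# Back-of-envelope: the attention matrix scales with (sequence length)^2,
# so a 49 s clip has ~5x the frames of a 10 s clip but ~24x the attention memory.
# FRAMES_PER_SEC is an assumed illustrative mel frame rate.
FRAMES_PER_SEC = 100

def attn_mem_ratio(dur_a, dur_b):
    """Relative attention-matrix size of a dur_a-second clip vs a dur_b-second one."""
    la, lb = dur_a * FRAMES_PER_SEC, dur_b * FRAMES_PER_SEC
    return (la * la) / (lb * lb)

print(attn_mem_ratio(49, 10))  # ~24x
```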
> Hi @v3ucn , thanks for PR~
>
> Will it also be compatible with `.read()` operations, or would it better serve as an option for compatibility with current users?
>
> Thanks again.
Yes sir, it also wo...
https://github.com/SWivid/F5-TTS/discussions/57#discussioncomment-10959029
This is a good starting point; I'm currently following this to train for Vietnamese.
You'll need to edit vocab.txt to ...
Hi @v3ucn , thanks for PR~
Will it also be compatible with `.read()` operations, or would it better serve as an option for compatibility with current users?
Thanks again.