We've built our own text-to-speech system with an initial English language model we trained ourselves with fully open source data.
-
@GrapheneOS This is a great addition. I have been using Sherpa TTS https://github.com/woheller69/ttsengine and Futo Keyboard for STT. https://keyboard.futo.org/
@Bernard We started working on this because Sherpa didn't meet our requirements, including overly high latency making it unsuitable for blind users to use with TalkBack.
-
Whisper is actually closed source. Open weights is another way of saying permissively licensed closed source. Our implementation of both text-to-speech and speech-to-text will be actual open source which means people can actually fork it and add/change/remove training data, etc.
@GrapheneOS i could help with spanish and esperanto models if needed
-
We've built our own text-to-speech system with an initial English language model we trained ourselves with fully open source data. It will be added to our App Store soon and then included in GrapheneOS as a default enabled TTS backend once some more improvements are made to it.
@GrapheneOS@grapheneos.social Well, some time ago open-source TTS was pretty lacking, but Kaldi / Sherpa is pretty good now. Did you check it? If so, what was the problem with it?
-
@GrapheneOS@grapheneos.social Well, some time ago open-source TTS was pretty lacking, but Kaldi / Sherpa is pretty good now. Did you check it? If so, what was the problem with it?
@breizh It wasn't quite good enough and has very high latency, which makes it unsuitable for use with TalkBack. We're making this because existing options, including Sherpa, don't meet our requirements; otherwise, we could have forked those. It made more sense to build our own instead, which we'll be able to keep improving long term. It's similar to our network location and geocoding implementations, where we want things done a particular way, focused on high quality in all the areas we care about.
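To make the latency point concrete, here is a minimal Kotlin sketch, not GrapheneOS's actual benchmark code, that times the gap between a speak() request and the engine's first audio callback. TalkBack reads out many short utterances (focus changes, single characters while typing), so this startup delay is what determines how responsive a screen reader feels.

```kotlin
import android.content.Context
import android.os.Bundle
import android.os.SystemClock
import android.speech.tts.TextToSpeech
import android.speech.tts.UtteranceProgressListener
import android.util.Log

// Rough probe of TTS startup latency: onStart() fires shortly before
// audio playback begins, so the delta approximates "time to first audio".
class TtsLatencyProbe(context: Context) {
    private var requestedAt = 0L

    private val tts: TextToSpeech = TextToSpeech(context) { status ->
        if (status == TextToSpeech.SUCCESS) speakProbe()
    }

    private fun speakProbe() {
        tts.setOnUtteranceProgressListener(object : UtteranceProgressListener() {
            override fun onStart(utteranceId: String?) {
                val latencyMs = SystemClock.elapsedRealtime() - requestedAt
                Log.i("TtsLatencyProbe", "time to first audio: $latencyMs ms")
            }
            override fun onDone(utteranceId: String?) {}
            override fun onError(utteranceId: String?) {}
        })
        requestedAt = SystemClock.elapsedRealtime()
        // A short utterance of the kind TalkBack produces constantly.
        tts.speak("Settings", TextToSpeech.QUEUE_FLUSH, Bundle(), "probe-1")
    }
}
```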
-
Existing implementations of text-to-speech and speech-to-text didn't meet our functionality or usability requirements. We want at least very high quality, low latency and robust implementations of both for English included in the OS. It will help make GrapheneOS more accessible.
@GrapheneOS Fascinating. Is the text-to-speech (and vice versa) model and code you're working on platform specific?
-
Our full time developer working on this already built their own Transcribro app for on-device speech-to-text available in the Accrescent app store. For GrapheneOS itself, we want actual open source implementations of these features rather than OpenAI's phony open source though.
@GrapheneOS i was really impressed with the efficacy and UI of transcribro. no surprise to hear that was the mark of a grapheneos app
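For context on where an on-device recognizer like this plugs in: below is a rough Kotlin sketch, purely illustrative and not Transcribro's code, of a client asking Android's SpeechRecognizer API for a transcription while preferring offline recognition (the app also needs the RECORD_AUDIO permission).

```kotlin
import android.content.Context
import android.content.Intent
import android.os.Bundle
import android.speech.RecognitionListener
import android.speech.RecognizerIntent
import android.speech.SpeechRecognizer

// Hypothetical helper: send one dictation request to whichever
// RecognitionService is installed, asking it to stay offline.
fun startDictation(context: Context, onText: (String) -> Unit): SpeechRecognizer {
    val recognizer = SpeechRecognizer.createSpeechRecognizer(context)
    recognizer.setRecognitionListener(object : RecognitionListener {
        override fun onResults(results: Bundle?) {
            results?.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
                ?.firstOrNull()
                ?.let(onText)
        }
        override fun onError(error: Int) { /* map SpeechRecognizer.ERROR_* to UI feedback */ }
        override fun onReadyForSpeech(params: Bundle?) {}
        override fun onBeginningOfSpeech() {}
        override fun onRmsChanged(rmsdB: Float) {}
        override fun onBufferReceived(buffer: ByteArray?) {}
        override fun onEndOfSpeech() {}
        override fun onPartialResults(partialResults: Bundle?) {}
        override fun onEvent(eventType: Int, params: Bundle?) {}
    })
    val intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
        putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                 RecognizerIntent.LANGUAGE_MODEL_FREE_FORM)
        putExtra(RecognizerIntent.EXTRA_PREFER_OFFLINE, true) // keep audio on-device
    }
    recognizer.startListening(intent)
    return recognizer // caller should destroy() it when done
}
```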
-
Whisper is actually closed source. Open weights is another way of saying permissively licensed closed source. Our implementation of both text-to-speech and speech-to-text will be actual open source which means people can actually fork it and add/change/remove training data, etc.
@GrapheneOS the "largeness" of language models is precisely a measure of the difficulty to reproduce them. this methodology has some similarities to something i proposed to huggingface a few years back in a cover letter. no surprise to see they were not interested in reproducibility or the scientific method
-
@GrapheneOS Fascinating. Is the text-to-speech (and vice versa) model and code you're working on platform specific?
@tchambers It's not really platform specific. It currently runs on the CPU but we plan to add TPU support for Tensor and NPU support for Snapdragon in the future. It's made for GrapheneOS and we're not interested in doing any significant work on use outside of GrapheneOS. It will be possible to install it from our App Store on other Android 16+ operating systems but it's not our focus. We're focused on making GrapheneOS better and haven't gotten much out of making stuff available elsewhere.
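The reason this can be installed as a selectable speech backend on other systems is that Android TTS engines plug in through the platform's TextToSpeechService interface. A skeletal Kotlin sketch is below; the English-only language handling and the synthesize() stub are placeholders, not the actual implementation.

```kotlin
import android.media.AudioFormat
import android.speech.tts.SynthesisCallback
import android.speech.tts.SynthesisRequest
import android.speech.tts.TextToSpeech
import android.speech.tts.TextToSpeechService

// Skeleton of a pluggable TTS engine the OS can list as a
// text-to-speech output option in Settings.
class ExampleTtsService : TextToSpeechService() {

    override fun onIsLanguageAvailable(lang: String?, country: String?, variant: String?): Int =
        if (lang == "eng") TextToSpeech.LANG_AVAILABLE else TextToSpeech.LANG_NOT_SUPPORTED

    override fun onGetLanguage(): Array<String> = arrayOf("eng", "USA", "")

    override fun onLoadLanguage(lang: String?, country: String?, variant: String?): Int =
        // A real engine would load the model for this language here.
        onIsLanguageAvailable(lang, country, variant)

    override fun onStop() {
        // Cancel any in-progress synthesis.
    }

    override fun onSynthesizeText(request: SynthesisRequest, callback: SynthesisCallback) {
        callback.start(22_050, AudioFormat.ENCODING_PCM_16BIT, 1)
        val pcm = synthesize(request.charSequenceText.toString())
        var offset = 0
        while (offset < pcm.size) {
            val len = minOf(callback.maxBufferSize, pcm.size - offset)
            callback.audioAvailable(pcm, offset, len)
            offset += len
        }
        callback.done()
    }

    // Placeholder for the neural synthesizer; a real engine streams audio
    // chunk by chunk instead of producing it all up front.
    private fun synthesize(text: String): ByteArray = ByteArray(0)
}
```

The engine also declares the service in its manifest with the android.intent.action.TTS_SERVICE intent filter, which is how the OS discovers it and lets users pick it as the default.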
-
We've built our own text-to-speech system with an initial English language model we trained ourselves with fully open source data. It will be added to our App Store soon and then included in GrapheneOS as a default enabled TTS backend once some more improvements are made to it.
that is awesome.
how long, if ever, until we have a stable terminal app that can be run from any user profile?
-
@GrapheneOS the "largeness" of language models is precisely a measure of the difficulty to reproduce them. this methodology has some similarities to something i proposed to huggingface a few years back in a cover letter. no surprise to see they were not interested in reproducibility or the scientific method
@GrapheneOS i have also been trying to find similarly motivated people to collaborate with on a research project to reproduce the fawkes facial recognition poisoner upon a mobile device (ideally as an asynchronous but fully local image postprocessing technique) cc @xyhhx @bunnyhero
-
@GrapheneOS i have also been trying to find similarly motivated people to collaborate with on a research project to reproduce the fawkes facial recognition poisoner upon a mobile device (ideally as an asynchronous but fully local image postprocessing technique) cc @xyhhx @bunnyhero
@GrapheneOS @xyhhx @bunnyhero i have been putting it off repeatedly but the fawkes paper itself is very high quality and imo intended to be reproduced. if there are resources your team has developed or considered regarding modern hardware on mobile phones for statistical training and inference (fawkes especially requires a training step with local user input iirc) it would be tremendously helpful for our goals here.
-
@GrapheneOS @xyhhx @bunnyhero i have been putting it off repeatedly but the fawkes paper itself is very high quality and imo intended to be reproduced. if there are resources your team has developed or considered regarding modern hardware on mobile phones for statistical training and inference (fawkes especially requires a training step with local user input iirc) it would be tremendously helpful for our goals here.
@GrapheneOS @xyhhx @bunnyhero we obviously expect reduced efficacy vs the GPU-accelerated SANDlab implementation, but the math and the code are both very approachable. since its publication we have seen phones add dedicated "NPU" chips for matmul/etc, and this would be a fun way to subvert the utility of ubiquitous "AI" for embedding panoptic surveillance
-
Whisper is actually closed source. Open weights is another way of saying permissively licensed closed source. Our implementation of both text-to-speech and speech-to-text will be actual open source which means people can actually fork it and add/change/remove training data, etc.
@GrapheneOS I replied to one of your posts a couple months ago when yall asked about TTS, suggesting Piper TTS models (https://github.com/OHF-Voice/piper1-gpl). There are def some quality (English) and performant models, though I haven't dug into whether they are truly open source (aka open dataset) or just open weights.
Either way, I am very excited to see more projects by gOS and more quality options in the TTS & STT spaces. People with disabilities deserve equal access to technology, and anything that brings us closer to a world where that is possible is a good thing.
-
We've built our own text-to-speech system with an initial English language model we trained ourselves with fully open source data. It will be added to our App Store soon and then included in GrapheneOS as a default enabled TTS backend once some more improvements are made to it.
@GrapheneOS happy to help with French
-
We're going to build our own speech-to-text implementation to go along with this too. We're starting with an English model for both but we can add other languages which have high quality training data available. English and Mandarin have by far the most training data available.
@GrapheneOS will this enable speech commands on Android Auto?
-
We're going to build our own speech-to-text implementation to go along with this too. We're starting with an English model for both but we can add other languages which have high quality training data available. English and Mandarin have by far the most training data available.
@GrapheneOS please, please add German to that list
-
We've built our own text-to-speech system with an initial English language model we trained ourselves with fully open source data. It will be added to our App Store soon and then included in GrapheneOS as a default enabled TTS backend once some more improvements are made to it.
@GrapheneOS I am very excited to not have to use an external tool to do this anymore.
-
Our full time developer working on this already built their own Transcribro app for on-device speech-to-text available in the Accrescent app store. For GrapheneOS itself, we want actual open source implementations of these features rather than OpenAI's phony open source though.
@GrapheneOS highly interested in seeing high quality open source TTS/STT, great work!
-
We've built our own text-to-speech system with an initial English language model we trained ourselves with fully open source data. It will be added to our App Store soon and then included in GrapheneOS as a default enabled TTS backend once some more improvements are made to it.
@GrapheneOS How good is this model at meowing?