#python #audio #deep_learning #noise_suppression #pytorch #rust #speech #speech_enhancement
Noise suppression using deep filtering: DeepFilterNet is a deep-learning framework for speech enhancement, implemented in Rust and Python/PyTorch.
https://github.com/Rikorose/DeepFilterNet
#swift #inference #ios #macos #pretrained_models #speech_recognition #transformers #visionos #watchos #whisper
WhisperKit is a tool that lets your Apple devices recognize speech from audio files or live recordings using OpenAI's Whisper model. It runs locally on your device, which means it doesn't need an internet connection once set up. To use it, add WhisperKit to your Swift project through the Swift Package Manager, or install a command-line version using Homebrew. It lets you transcribe audio quickly and efficiently right on your device, making it useful for applications like voice assistants or transcription services.
https://github.com/argmaxinc/WhisperKit
#python #bert #deep_learning #flax #hacktoberfest #jax #language_model #language_models #machine_learning #model_hub #natural_language_processing #nlp #nlp_library #pretrained_models #pytorch #pytorch_transformers #seq2seq #speech_recognition #tensorflow #transformer
The Hugging Face Transformers library provides thousands of pretrained models for text, image, and audio processing. These models cover tasks such as text classification, object detection, speech recognition, and more. The library supports the popular deep learning frameworks JAX, PyTorch, and TensorFlow, making it easy to switch between them.
The benefit to the user is that you can quickly download and use these pretrained models with just a few lines of code, saving time and computational resources. You can also fine-tune these models on your own datasets and share them with the community. Additionally, the library offers a simple `pipeline` API for immediate use on different inputs, making it user-friendly for both researchers and practitioners. This helps in reducing compute costs and carbon footprint while enabling high-performance results across various machine learning tasks.
https://github.com/huggingface/transformers
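The `pipeline` API mentioned above takes only a few lines. A minimal sketch for speech recognition, assuming a local `sample.wav` file and the stock `openai/whisper-small` checkpoint (audio decoding also requires ffmpeg):

```python
# Minimal sketch: transformers pipeline for automatic speech recognition.
# "openai/whisper-small" is a public Hub checkpoint; "sample.wav" is an
# assumed local audio file.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")
result = asr("sample.wav")
print(result["text"])
```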
#jupyter_notebook #computer_vision #deep_learning #drug_discovery #forecasting #large_language_models #mxnet #nlp #paddlepaddle #pytorch #recommender_systems #speech_recognition #speech_synthesis #tensorflow #tensorflow2 #translation
This repository provides top-quality deep learning examples that are easy to train and deploy on NVIDIA GPUs. It includes a wide range of models for computer vision, natural language processing, recommender systems, speech to text, and more. These examples are updated monthly and come in Docker containers with the latest NVIDIA software, ensuring the best performance. The models support multiple GPUs and nodes, and some are optimized for Tensor Cores, which can significantly speed up training. This makes it easier for users to achieve high accuracy and performance in their deep learning projects.
https://github.com/NVIDIA/DeepLearningExamples
#python #speech_synthesis #text_to_speech #tts
The `edge-tts` module lets you use Microsoft Edge's text-to-speech service in your Python code or through commands. You can install it using `pip install edge-tts`. With this module, you can convert text to speech, change the voice and language, adjust the speech rate, volume, and pitch, and even play back the speech immediately. This is useful because it allows you to easily create audio files from text and customize how they sound, making it handy for various applications like automated announcements or educational tools.
https://github.com/rany2/edge-tts
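A minimal sketch of the Python API, assuming the standard `en-US-AriaNeural` voice (available voices can be listed with `edge-tts --list-voices`):

```python
# Minimal sketch: synthesize speech to an MP3 file with edge-tts.
import asyncio

import edge_tts

async def main() -> None:
    communicate = edge_tts.Communicate(
        "Hello from edge-tts!",
        voice="en-US-AriaNeural",
        rate="+10%",  # speak 10% faster than the default
    )
    await communicate.save("hello.mp3")

asyncio.run(main())
```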
#python #asr #audio #audio_processing #deep_learning #huggingface #language_model #pytorch #speaker_diarization #speaker_recognition #speaker_verification #speech_enhancement #speech_processing #speech_recognition #speech_separation #speech_to_text #speech_toolkit #speechrecognition #spoken_language_understanding #transformers #voice_recognition
SpeechBrain is an open-source toolkit that helps you quickly develop Conversational AI technologies, such as speech assistants, chatbots, and language models. It uses PyTorch and offers many pre-trained models and tutorials to make it easy to get started. You can train models for various tasks like speech recognition, speaker recognition, and text processing with just a few lines of code. SpeechBrain also supports GPU training, dynamic batching, and integration with HuggingFace models, making it powerful and efficient. This toolkit is beneficial because it simplifies the development process, provides extensive documentation and tutorials, and is highly customizable, making it ideal for research, prototyping, and educational purposes.
https://github.com/speechbrain/speechbrain
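A minimal sketch of transcribing a file with a pretrained model; recent releases expose the interface under `speechbrain.inference` (older ones used `speechbrain.pretrained`), and the model name is a public checkpoint on the HuggingFace Hub:

```python
# Minimal sketch: load a pretrained SpeechBrain ASR model and transcribe
# an assumed local file "sample.wav".
from speechbrain.inference.ASR import EncoderDecoderASR

asr = EncoderDecoderASR.from_hparams(
    source="speechbrain/asr-crdnn-rnnlm-librispeech",
    savedir="pretrained_models/asr-crdnn-rnnlm-librispeech",
)
print(asr.transcribe_file("sample.wav"))
```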
#python #audio_generation #audio_synthesis #audioldm #audit #fastspeech2 #hifi_gan #music_generation #naturalspeech2 #singing_voice_conversion #speech_synthesis #text_to_audio #text_to_speech #vall_e #vits #voice_conversion
Amphion is a toolkit for generating audio, music, and speech. It helps researchers and engineers, especially beginners, with tasks like text-to-speech (TTS), singing voice conversion (SVC), and text-to-audio (TTA). Amphion includes visualizations that show how these models work, which is very useful for learning. It also offers several vocoders for producing high-quality audio, along with evaluation metrics for assessing the quality of generated audio. The toolkit is free to use under the MIT License and installs easily with Python or Docker. With Amphion, you can create high-quality audio and music with advanced features, making it a powerful tool for both research and practical applications.
https://github.com/open-mmlab/Amphion
#python #deep_learning #glow_tts #hifigan #melgan #multi_speaker_tts #pytorch #speaker_encoder #speaker_encodings #speech #speech_synthesis #tacotron #text_to_speech #tts #tts_model #vocoder #voice_cloning #voice_conversion #voice_synthesis
The new version of TTS (Text-to-Speech) from Coqui.ai, called TTSv2, is now available with several improvements. It supports 16 languages and has better performance overall. You can fine-tune the models using the provided code and examples. The TTS system can now stream audio with less than 200ms latency, making it very responsive. Additionally, you can use over 1,100 Fairseq models and new features like voice cloning and voice conversion. This update also includes faster inference with the Tortoise model and support for multiple speakers and languages. These enhancements make it easier and more efficient to generate high-quality speech from text.
https://github.com/coqui-ai/TTS
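A minimal sketch of the Python API with the multilingual XTTS v2 model; `reference.wav` is an assumed short clip of the voice to clone:

```python
# Minimal sketch: voice-cloned synthesis with Coqui TTS (XTTS v2).
from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")
tts.tts_to_file(
    text="Hello, this is a test.",
    speaker_wav="reference.wav",  # assumed local sample of the target voice
    language="en",
    file_path="output.wav",
)
```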
#python #ai #llm #slm #speech
Ultravox is a fast and advanced AI model that can understand both text and human speech without needing a separate step for speech recognition. It responds quickly, taking only about 150 milliseconds to start processing audio content. This makes it useful for real-time voice conversations. You can try it out through a demo page or by running it locally on your computer. Ultravox also allows you to train it with your own audio data, making it customizable for different languages or specific needs. Overall, Ultravox simplifies and speeds up interactions between humans and AI systems.
https://github.com/fixie-ai/ultravox
#python #realtime #speech_to_text
RealtimeSTT is a library that converts speech to text in real time. It listens to your microphone and transcribes what you say immediately. Key features:
- **Precise Transcription**: uses advanced models like Faster-Whisper for quick and accurate results.
- **Voice Activity Detection**: automatically detects when you start and stop speaking.
- **Wake Word Activation**: you can set a specific word, like "Jarvis," to start the recording.
- **Configurability**: lets you adjust settings like sensitivity and model size, and even use a GPU for better performance.
Installing it is simple with `pip install RealtimeSTT`, and it includes examples to get you started quickly. This library is great for building voice-controlled applications or any project needing real-time speech-to-text functionality.
https://github.com/KoljaB/RealtimeSTT
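A minimal sketch following the project's quick-start pattern: the recorder listens on the default microphone and passes each finished utterance to a callback:

```python
# Minimal sketch: print everything the user says, once they stop speaking.
from RealtimeSTT import AudioToTextRecorder

def on_text(text: str) -> None:
    print(text)

if __name__ == "__main__":
    recorder = AudioToTextRecorder()
    while True:
        recorder.text(on_text)  # blocks until an utterance is transcribed
```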
#cplusplus #aarch64 #android #arm32 #asr #cpp #csharp #dotnet #ios #lazarus #linux #macos #mfc #object_pascal #onnx #raspberry_pi #risc_v #speech_to_text #text_to_speech #vits #windows
sherpa-onnx supports various speech functions, including speech recognition, text-to-speech, speaker identification, and more. It works on multiple platforms (Android, iOS, Windows, macOS, and Linux) and supports several programming languages, such as C++, Python, and JavaScript. You can run it locally or through WebAssembly, making it versatile and convenient. This lets you integrate advanced speech capabilities into your projects easily, regardless of the platform or programming language you use.
https://github.com/k2-fsa/sherpa-onnx
#python #artificial_intelligence #llm #real_time #speech_to_text #text_to_speech
FastRTC is a Python library that helps you create real-time audio and video streams using WebRTC or WebSockets. It allows you to turn any Python function into a live stream, making it useful for applications like voice chats or video conferencing. Key features include automatic voice detection, built-in UI support with Gradio, and integration with FastAPI for custom frontends. This library simplifies the process of handling real-time communication, allowing developers to focus on their application logic rather than complex streaming setups.
https://github.com/freddyaboulton/fastrtc
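A minimal sketch of an echo stream, following the quick-start pattern: `ReplyOnPause` invokes the handler whenever the caller stops speaking, and `stream.ui` launches a Gradio test interface:

```python
# Minimal sketch: echo the caller's audio back after each pause.
from fastrtc import ReplyOnPause, Stream

def echo(audio):
    # audio arrives as a (sample_rate, numpy_array) tuple
    yield audio

stream = Stream(ReplyOnPause(echo), modality="audio", mode="send-receive")
stream.ui.launch()
```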
#python #asr #automatic_speech_recognition #conformer #e2e_models #production_ready #pytorch #speech_recognition #transformer #whisper
WeNet is a powerful tool for speech recognition that helps turn spoken words into text. It's designed to be easy to use and works well in real-world situations, making it great for businesses and developers. WeNet provides accurate results on many public datasets and is lightweight, meaning it doesn't require a lot of resources to run. This makes it beneficial for users who need reliable speech-to-text functionality without complex setup or maintenance.
https://github.com/wenet-e2e/wenet
#python #apple_silicon #audio_processing #mlx #multimodal #speech_recognition #speech_synthesis #speech_to_text #text_to_speech #transformers
MLX-Audio is a library for text-to-speech, speech-to-text, and speech-to-speech conversion. It is built on Apple's MLX framework and runs efficiently on Apple Silicon (M-series) chips. You can choose from different languages and voices and adjust the speech rate. It also includes a web interface where you can visualize audio in 3D and play your own files. This makes it handy for audiobooks, interactive media, and personal projects, since it's easy to use and produces high-quality audio quickly.
https://github.com/Blaizzy/mlx-audio
#python #asr #deeplearning #generative_ai #large_language_models #machine_translation #multimodal #neural_networks #speaker_diarization #speaker_recognition #speech_synthesis #speech_translation #tts
NVIDIA NeMo is a powerful, easy-to-use platform for building, customizing, and deploying generative AI models such as large language models (LLMs), vision language models, and speech AI. It lets you quickly train and fine-tune models using pre-built code and checkpoints, supports the latest model architectures, and runs in cloud, data center, or edge environments. NeMo 2.0 is even more flexible and scalable, with Python-based configuration and a modular design that makes it simple to experiment and scale up. The main benefit is that you can create advanced AI applications faster, with less effort and at lower cost, while getting high performance and easy deployment options.
https://github.com/NVIDIA/NeMo
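A minimal sketch of loading a pretrained ASR checkpoint; the model name is one of NVIDIA's published English Conformer-CTC checkpoints, and `sample.wav` is an assumed local file:

```python
# Minimal sketch: transcribe an audio file with a pretrained NeMo ASR model.
import nemo.collections.asr as nemo_asr

asr_model = nemo_asr.models.ASRModel.from_pretrained("stt_en_conformer_ctc_small")
print(asr_model.transcribe(["sample.wav"])[0])
```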
#python #ai #ai_art #art #asset_generator #chatbot #deep_learning #desktop_app #image_generation #mistral #multimodal #privacy #pygame #pyside6 #self_hosted #speech_to_text #stable_diffusion #text_to_image #text_to_speech #text_to_speech_app
AI Runner is a tool that lets you run AI on your own computer without needing the internet. It can handle many tasks, like **voice chatbots**, **text-to-image** generation, and **image editing**, and you can create AI personalities for more interesting conversations. Because everything runs locally, it's fast and your data stays private. To use AI Runner, you need a capable machine with a strong GPU, such as an NVIDIA RTX 3060 or better.
https://github.com/Capsize-Games/airunner
#jupyter_notebook #android #asr #deep_learning #deep_neural_networks #deepspeech #google_speech_to_text #ios #kaldi #offline #privacy #python #raspberry_pi #speaker_identification #speaker_verification #speech_recognition #speech_to_text #speech_to_text_android #stt #voice_recognition #vosk
Vosk is a powerful tool for recognizing speech without needing the internet. It supports over 20 languages and dialects, making it useful for many different users. Vosk is small and efficient, allowing it to work on small devices like smartphones and Raspberry Pi. It can be used for things like chatbots, smart home devices, and creating subtitles for videos. This means users can have private and fast speech recognition anywhere, which is especially helpful when internet access is limited.
https://github.com/alphacep/vosk-api
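A minimal sketch of offline transcription, assuming a 16 kHz mono WAV file; `Model(lang="en-us")` fetches a small English model on first use:

```python
# Minimal sketch: transcribe a WAV file offline with Vosk.
import json
import wave

from vosk import KaldiRecognizer, Model

wf = wave.open("sample.wav", "rb")
rec = KaldiRecognizer(Model(lang="en-us"), wf.getframerate())

pieces = []
while True:
    data = wf.readframes(4000)
    if not data:
        break
    if rec.AcceptWaveform(data):  # True when an utterance is finalized
        pieces.append(json.loads(rec.Result())["text"])
pieces.append(json.loads(rec.FinalResult())["text"])
print(" ".join(pieces))
```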
#python #audiobook #audiobooks #content_creation #content_creator #epub_converter #kokoro #kokoro_82m #kokoro_tts #media_generation #narrator #speech_synthesis #subtitles #text_to_audio #text_to_speech #tts #voice_synthesis
Abogen is a user-friendly tool that quickly converts ePub, PDF, or text files into natural-sounding audio with synchronized subtitles, perfect for creating audiobooks or voiceovers for social media and other projects. You can customize speech speed, choose or mix voices, generate subtitles by sentence or word, and select various audio and subtitle formats. It supports batch processing with queue mode and lets you save chapters separately or merged. Installation is straightforward on Windows, Mac, and Linux, with options for GPU acceleration. This saves you time and effort in producing high-quality audio content from text files efficiently.
https://github.com/denizsafak/abogen
#typescript #ai #ai_chatbot #angular #chat #chatbot #chatgpt #cohere #component #files #huggingface #image #nextjs #openai #react #react_chatbot #solid #speech #svelte #vue
Deep Chat is an easy-to-add AI chat tool for your website that connects with popular AI services like ChatGPT and HuggingFace or your own custom APIs using just one line of code. It supports text, voice input, speech-to-text, text-to-speech, file sharing, webcam photos, and audio recording, making conversations more interactive. You can customize everything from avatars to message styles and run small AI models directly in the browser without servers. It works with major web frameworks and offers features like local message storage and focus mode for a modern chat experience. This helps you quickly add a powerful, flexible AI chatbot that fits your needs and improves user engagement.
https://github.com/OvidijusParsiunas/deep-chat
#typescript #accessibility #cross_platform #speech_to_text #tauri_v2
Handy is a free, open-source speech-to-text app that works offline on Windows, macOS, and Linux. You press a shortcut, speak, and your words appear in any text field without sending your voice to the cloud, keeping your data private. It uses advanced models like Whisper and Parakeet for accurate transcription and supports GPU acceleration or CPU-only modes. Handy is simple, privacy-focused, and customizable, making it ideal if you want a secure, extensible tool for converting speech to text without relying on internet services. This helps you type hands-free while protecting your privacy and controlling your data.
https://github.com/cjpais/Handy