#jupyter_notebook #cnn #colab #colab_notebook #computer_vision #deep_learning #deep_neural_networks #fourier #fourier_convolutions #fourier_transform #gan #generative_adversarial_network #generative_adversarial_networks #high_resolution #image_inpainting #inpainting #inpainting_algorithm #inpainting_methods #pytorch
LaMa (WACV 2022) is a resolution-robust image-inpainting model for removing objects from photos. Its fast Fourier convolutions give the network an image-wide receptive field from the earliest layers, which is what makes it so effective at filling in large missing regions. It also generalizes to much higher resolutions than it was trained on, so you can repair high-resolution photos and get natural, complete-looking results.
https://github.com/advimman/lama
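The core idea can be sketched in a few lines of PyTorch: a toy "Fourier unit" in the spirit of LaMa's fast Fourier convolutions (a simplified illustration, not the repo's actual FFC module), where a 1x1 convolution applied in the frequency domain gives every output pixel a receptive field spanning the whole image.

```python
import torch
import torch.nn as nn

class SpectralConv(nn.Module):
    """Toy Fourier unit: convolve in the frequency domain, so every
    output location depends on the entire input image."""
    def __init__(self, channels):
        super().__init__()
        # Real and imaginary parts are stacked along the channel axis.
        self.conv = nn.Conv2d(channels * 2, channels * 2, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        freq = torch.fft.rfft2(x, norm="ortho")        # (B, C, H, W//2+1), complex
        f = torch.cat([freq.real, freq.imag], dim=1)   # (B, 2C, H, W//2+1)
        f = torch.relu(self.conv(f))                   # pointwise conv in frequency space
        real, imag = f.chunk(2, dim=1)
        freq = torch.complex(real, imag)
        return torch.fft.irfft2(freq, s=(h, w), norm="ortho")

x = torch.randn(1, 8, 32, 32)
y = SpectralConv(8)(x)
print(y.shape)  # torch.Size([1, 8, 32, 32])
```

The spatial shape is preserved, but unlike an ordinary 3x3 convolution, a single layer already mixes information across the whole image, which is why large holes can be filled coherently.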
#jupyter_notebook
To use Amazon Bedrock, you first need to set up AWS IAM permissions: your AWS identity (a role or user) must be granted access to Bedrock. You do this by creating a policy in the AWS IAM Console (or via the API) that allows the identity to perform actions on Bedrock resources. Getting these permissions right matters for both security and functionality: scope them too broadly and you over-expose your account; too narrowly and model invocations fail.
https://github.com/aws-samples/amazon-nova-samples
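As a sketch, such an identity policy can be built as a Python dict and serialized to JSON. The two action names are real Bedrock IAM actions; the wildcard `Resource` is for illustration only and should be narrowed to specific model ARNs in production.

```python
import json

# Minimal identity policy granting Bedrock model invocation.
bedrock_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            # Illustrative wildcard; scope to model ARNs in real deployments.
            "Resource": "*",
        }
    ],
}

policy_json = json.dumps(bedrock_policy, indent=2)
print(policy_json)

# Attach it with e.g. (requires boto3 and credentials):
#   boto3.client("iam").create_policy(
#       PolicyName="BedrockInvokeAccess", PolicyDocument=policy_json)
```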
#jupyter_notebook
DINOv2 is a self-supervised vision model from Meta AI that learns visual representations without labeled data. Trained on a curated set of 142 million images, it produces general-purpose features that work well across tasks such as image classification, depth estimation, and segmentation without fine-tuning: you can attach a simple linear classifier to its frozen features and get strong results. The pretrained models are open-source and ready to use with PyTorch, making DINOv2 a practical backbone for building computer-vision applications quickly.
https://github.com/facebookresearch/dinov2
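The linear-probe pattern looks roughly like this. To keep the sketch offline, a stub module stands in for the real backbone, which you would normally load with `torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")` (its ViT-S/14 variant outputs 384-dim features).

```python
import torch
import torch.nn as nn

# Stand-in for the frozen DINOv2 backbone so the sketch runs offline.
# In practice: backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
class StubBackbone(nn.Module):
    def forward(self, x):                    # x: (B, 3, H, W)
        return torch.randn(x.shape[0], 384)  # one 384-dim feature per image

backbone = StubBackbone().eval()
for p in backbone.parameters():
    p.requires_grad = False                  # features stay frozen

classifier = nn.Linear(384, 10)              # linear probe for 10 classes

images = torch.randn(2, 3, 224, 224)
with torch.no_grad():
    feats = backbone(images)                 # extract frozen features
logits = classifier(feats)                   # only this layer is trained
print(logits.shape)  # torch.Size([2, 10])
```

Only the linear layer's weights need training, which is why building a task-specific classifier on DINOv2 features is so cheap.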
#jupyter_notebook #agentic_ai #agents #course #huggingface #langchain #llamaindex #smolagents
The Hugging Face Agents Course is a free, interactive course on building and deploying AI agents. It is divided into four units, starting with agent fundamentals and ending with a final project where you create and evaluate your own agent. Along the way you work with frameworks such as `smolagents`, `LangGraph`, and `LlamaIndex`, and learn how large language models (LLMs) drive an agent's reasoning and tool use. The hands-on format gives you practical, transferable skills in agent development.
https://github.com/huggingface/agents-course
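Under the hood, all of these frameworks automate some version of an observe-think-act loop. A toy sketch of that loop, with a hard-coded stub standing in for the LLM (this is illustrative only, not actual `smolagents` code):

```python
# One tool the agent can call.
def calculator(expression: str) -> str:
    return str(eval(expression))  # trusted toy input only

TOOLS = {"calculator": calculator}

def fake_llm(task, history):
    """Stub policy: a real agent would prompt an LLM here. This one always
    calls the calculator once, then answers with the observation."""
    if not history:
        return {"action": "calculator", "input": task}
    return {"action": "final_answer", "input": history[-1]}

def run_agent(task):
    history = []
    for _ in range(5):                     # cap the loop to avoid runaways
        step = fake_llm(task, history)
        if step["action"] == "final_answer":
            return step["input"]
        observation = TOOLS[step["action"]](step["input"])
        history.append(observation)        # feed the result back in
    return "gave up"

print(run_agent("2 + 3 * 4"))  # 14
```

The frameworks taught in the course replace `fake_llm` with a real model call and manage tools, memory, and stopping conditions for you.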
#jupyter_notebook #a2a #agentic_ai #dapr #dapr_pub_sub #dapr_service_invocation #dapr_sidecar #dapr_workflow #docker #kafka #kubernetes #langmem #mcp #openai #openai_agents_sdk #openai_api #postgresql_database #rabbitmq #rancher_desktop #redis #serverless_containers
The Dapr Agentic Cloud Ascent (DACA) design pattern is an approach to building scalable agentic AI systems that can coordinate very large numbers of AI agents. It combines Dapr with Kubernetes to manage agents as lightweight virtual actors, aiming for fast response times, reliability, and easy horizontal scaling. You can start small on free or low-cost cloud tiers and grow toward planet-scale deployments. The repo recommends the OpenAI Agents SDK as the starting framework because it is simple and flexible while still giving you fine-grained control. The overall approach keeps costs down, avoids vendor lock-in, and supports resilient, event-driven AI workflows for cloud-native applications.
https://github.com/panaversity/learn-agentic-ai
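The virtual-actor idea at the heart of the pattern can be illustrated in a few lines of asyncio: each agent owns a mailbox and processes its messages one at a time, so many agents coexist without shared-state locking. This is a conceptual sketch of the actor model, not Dapr actor code.

```python
import asyncio

class Actor:
    """Toy actor: a private mailbox processed one message at a time."""
    def __init__(self, name):
        self.name = name
        self.inbox = asyncio.Queue()
        self.processed = []

    async def run(self):
        while True:
            msg = await self.inbox.get()
            if msg is None:            # shutdown signal
                break
            self.processed.append(f"{self.name}:{msg}")

async def main():
    actors = [Actor(f"agent{i}") for i in range(3)]
    tasks = [asyncio.create_task(a.run()) for a in actors]
    for a in actors:
        await a.inbox.put("hello")
        await a.inbox.put(None)
    await asyncio.gather(*tasks)
    return [a.processed for a in actors]

results = asyncio.run(main())
print(results)  # [['agent0:hello'], ['agent1:hello'], ['agent2:hello']]
```

Dapr adds what this toy lacks: actors are activated on demand, distributed across a cluster, and persisted, which is what makes the pattern scale beyond one process.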
#jupyter_notebook
This eight-week course teaches Large Language Model (LLM) engineering through hands-on projects, starting with simple tasks and building up to more complex ones. Learning by building is an effective way to understand how these models actually work, and by the end you will have skills that transfer directly to real-world LLM applications.
https://github.com/ed-donner/llm_engineering
#jupyter_notebook #mujoco #physics #robotics
MuJoCo is a fast, accurate physics engine for simulating articulated multi-joint systems, widely used in robotics and machine-learning research. It computes both forward and inverse dynamics and handles contacts and constraints efficiently, which makes it well suited to modeling how robots and other mechanical systems move and interact with their environment. It is a standard tool in robotics, biomechanics, and animation.
https://github.com/google-deepmind/mujoco
#jupyter_notebook
Unsloth makes fine-tuning large language models such as Llama, Mistral, and Gemma much faster and lighter, even on a single consumer GPU. Its optimized kernels speed up training by roughly 2 to 5 times and cut memory use by up to 70%, so you can fine-tune models without expensive hardware. This repo collects ready-made notebooks and guides for popular models, covering tasks from chat to vision, so developers and researchers can build custom models with far less setup.
https://github.com/unslothai/notebooks
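Much of the memory saving comes from parameter-efficient techniques like LoRA, which Unsloth combines with quantization and hand-written kernels. A back-of-envelope PyTorch sketch (not Unsloth's implementation) shows why it is so cheap: the large pretrained weight is frozen and only two small low-rank factors are trained.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base weight W plus a trainable low-rank update B @ A."""
    def __init__(self, in_f, out_f, r=8):
        super().__init__()
        self.base = nn.Linear(in_f, out_f, bias=False)
        self.base.weight.requires_grad = False         # frozen pretrained weight
        self.A = nn.Parameter(torch.randn(r, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, r))   # zero init: no change at start

    def forward(self, x):
        return self.base(x) + x @ self.A.t() @ self.B.t()

layer = LoRALinear(4096, 4096, r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(trainable, total)  # 65536 of 16842752: under 0.4% of weights are trained
```

Optimizer states only need to be kept for the trainable 0.4%, which is where most of the memory headroom comes from.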
#jupyter_notebook #android #asr #deep_learning #deep_neural_networks #deepspeech #google_speech_to_text #ios #kaldi #offline #privacy #python #raspberry_pi #speaker_identification #speaker_verification #speech_recognition #speech_to_text #speech_to_text_android #stt #voice_recognition #vosk
Vosk is an offline speech-recognition toolkit: it runs entirely on-device, with no internet connection required. It supports more than 20 languages and dialects, and its models are small and efficient enough to run on smartphones and Raspberry Pi. Typical uses include chatbots, smart-home devices, and video subtitling. Because recognition happens locally, it is both private and fast, which is especially valuable where internet access is limited.
https://github.com/alphacep/vosk-api
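The decoding loop follows the pattern from the Vosk examples. This sketch assumes you have downloaded a model folder from the Vosk site and have a 16 kHz mono WAV file; both paths are placeholders you supply.

```python
import json
import wave

def transcribe(wav_path, model_dir):
    """Offline transcription sketch: feed WAV frames to a KaldiRecognizer
    and return the final text. Requires `pip install vosk` and a downloaded
    model directory."""
    from vosk import Model, KaldiRecognizer
    wf = wave.open(wav_path, "rb")
    rec = KaldiRecognizer(Model(model_dir), wf.getframerate())
    while True:
        data = wf.readframes(4000)     # stream the audio in chunks
        if len(data) == 0:
            break
        rec.AcceptWaveform(data)
    return json.loads(rec.FinalResult())["text"]
```

Usage would look like `transcribe("speech.wav", "vosk-model-small-en-us")`, with your own file and model paths substituted.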
#jupyter_notebook #chatglm #chatglm3 #gemma_2b_it #glm_4 #internlm2 #llama3 #llm #lora #minicpm #q_wen #qwen #qwen1_5 #qwen2
This guide helps beginners set up and use open-source large language models (LLMs) on Linux or cloud platforms such as AutoDL, with step-by-step instructions for environment setup, model deployment, and fine-tuning of models including LLaMA, ChatGLM, and InternLM. It ranges from basic installation to advanced techniques such as LoRA and distributed fine-tuning, and covers integration with tools like LangChain as well as online demo deployment. The aim is to make powerful open-source models accessible to students, researchers, and anyone who wants to experiment with or customize LLMs for their own projects.
https://github.com/datawhalechina/self-llm