#rust #ai #ai_engineering #anthropic #artificial_intelligence #deep_learning #genai #generative_ai #gpt #large_language_models #llama #llm #llmops #llms #machine_learning #ml #ml_engineering #mlops #openai #python
TensorZero is a free, open-source tool for building and improving large language model (LLM) applications with real-world data and feedback. It gives you one API for all major LLM providers, collects data from your app's usage, and lets you test and improve prompts, models, and strategies. You can see how your LLMs perform, compare options, and make them smarter, faster, and cheaper over time, all while keeping your data private and under your control. The result: better outcomes with less effort and cost, and apps that keep improving as you use them.
https://github.com/tensorzero/tensorzero
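The "one simple API" idea can be sketched as follows: behind a unified gateway, switching providers means changing only the model identifier in an otherwise identical request body. The payload shape and the provider-prefixed model names below are illustrative assumptions, not TensorZero's exact configuration.

```python
import json

def build_inference_request(model: str, user_message: str) -> str:
    # One request shape, regardless of which provider serves the model.
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return json.dumps(payload)

# Only the model string changes between providers:
req_a = build_inference_request("openai::gpt-4o-mini", "Summarize this ticket.")
req_b = build_inference_request("anthropic::claude-3-5-haiku", "Summarize this ticket.")
```

Because the request body stays constant, A/B-testing models or migrating providers becomes a one-line change.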
#jupyter_notebook #ai #artificial_intelligence #chatgpt #deep_learning #from_scratch #gpt #language_model #large_language_models #llm #machine_learning #python #pytorch #transformer
You can learn how to build your own large language model (LLM) like GPT from scratch with clear, step-by-step guidance covering coding, training, and fine-tuning, all explained with examples and diagrams. The approach mirrors how models like ChatGPT are built but is designed to run on a regular laptop without special hardware. You also get code for loading pretrained weights and fine-tuning them for tasks like text classification or instruction following. Working through it gives you a deep understanding of how LLMs work internally and practical skills for building your own functional AI assistant.
https://github.com/rasbt/LLMs-from-scratch
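The core operation such a from-scratch GPT builds on is scaled dot-product attention. A minimal pure-Python sketch of it (tiny vectors, no framework; the book's actual implementation uses PyTorch tensors):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    # For each query: score against all keys, normalize, mix the values.
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
out = attention(Q, K, V)  # each output row is a weighted mix of V's rows
```

Each query attends most to the key it aligns with, so the first output row leans toward V's first row and the second toward V's second row.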
#jupyter_notebook #artificial_intelligence #book #large_language_models #llm #llms #oreilly #oreilly_books
You can learn how to use Large Language Models (LLMs) effectively through the book *Hands-On Large Language Models* by Jay Alammar and Maarten Grootendorst. This book uses nearly 300 custom illustrations to explain key concepts and practical tools for working with LLMs, including tokenization, transformers, prompt engineering, fine-tuning, and advanced text generation. It also provides runnable code examples in Google Colab, making it easy to practice and apply what you learn. This resource helps you understand and build your own LLM applications confidently, saving you time and effort in mastering complex AI technology. It’s highly recommended for anyone wanting hands-on experience with LLMs.
https://github.com/HandsOnLLM/Hands-On-Large-Language-Models
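One of the first concepts the book covers is tokenization. A toy illustration of a single byte-pair-encoding (BPE) merge step, the idea behind most modern tokenizers (this is not the tokenizer any real model ships with):

```python
from collections import Counter

def most_frequent_pair(tokens):
    # Count adjacent token pairs and return the most common one.
    pairs = Counter(zip(tokens, tokens[1:]))
    return max(pairs, key=pairs.get)

def merge_pair(tokens, pair):
    # Replace every occurrence of the pair with a single merged token.
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            out.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

tokens = list("low lower lowest")
pair = most_frequent_pair(tokens)   # ('l', 'o') appears three times
tokens = merge_pair(tokens, pair)
```

A real BPE tokenizer simply repeats this merge step thousands of times on a large corpus, building up a vocabulary of frequent fragments.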
#javascript #ai #anthropic #chatbots #chatgpt #claude #gemini #generative_ai #google_deepmind #large_language_models #llm #openai #prompt_engineering #prompt_injection #prompts
This repository collects system prompts extracted from popular chatbots like ChatGPT, Claude, and Gemini. System prompts are the hidden instructions that shape how a chatbot responds. Reading them shows how production assistants are actually steered and gives developers concrete patterns to reuse when writing system prompts for their own AI tools.
https://github.com/asgeirtj/system_prompts_leaks
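Mechanically, a system prompt is just the first message in the widely used chat-completions message format, with the role "system". The prompt text below is made up for illustration:

```python
# A system prompt steers the model by preceding the user's messages.
system_prompt = (
    "You are a concise assistant. Answer in at most two sentences "
    "and refuse requests for harmful content."
)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Explain what a system prompt does."},
]
```

The leaked prompts in the repository are (much longer) strings that occupy exactly this first slot in each product's conversation.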
#jupyter_notebook #chatgpt #finance #fingpt #fintech #large_language_models #machine_learning #nlp #prompt_engineering #pytorch #reinforcement_learning #robo_advisor #sentiment_analysis #technical_analysis
FinGPT is an open-source AI tool designed specifically for finance, helping you analyze financial news, predict stock prices, and get personalized investment advice quickly and affordably. Unlike costly models like BloombergGPT, FinGPT can be updated frequently with new data at a low cost, making it more accessible and timely. It uses advanced techniques like reinforcement learning from human feedback to tailor advice to your preferences, such as risk tolerance. You can use FinGPT for tasks like sentiment analysis, robo-advising, fraud detection, and portfolio optimization, helping you make smarter financial decisions with up-to-date insights.
https://github.com/AI4Finance-Foundation/FinGPT
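Financial sentiment analysis with such models is typically framed as instruction tuning. A sketch of one training record in the common instruction/input/output shape (FinGPT's datasets follow a similar format; the exact field names and wording here are an assumption):

```python
def make_sentiment_example(headline: str, label: str) -> dict:
    # One supervised example: task description, raw text, expected answer.
    return {
        "instruction": "Classify the sentiment of this financial headline "
                       "as positive, negative, or neutral.",
        "input": headline,
        "output": label,
    }

example = make_sentiment_example(
    "Company X beats quarterly earnings expectations.", "positive"
)
```

Thousands of such records let a general LLM be fine-tuned cheaply into a finance-specific classifier, which is the low-cost update loop the summary describes.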
#python #large_language_models #machine_learning_systems #natural_language_processing
Flash Linear Attention (FLA) is a fast, memory-efficient library for advanced linear attention models used in transformers, written in PyTorch and Triton, and compatible with NVIDIA, AMD, and Intel GPUs. It offers many state-of-the-art linear attention models and fused modules that speed up training and reduce memory use. You can easily replace standard attention layers in your models with FLA’s efficient versions, improving training and inference speed, especially for long sequences. FLA supports hybrid models mixing linear and standard attention, and integrates with Hugging Face Transformers for easy use and evaluation. This helps you train and run large language models faster and with less memory, making your AI projects more efficient and scalable.
https://github.com/fla-org/flash-linear-attention
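Why linear attention scales to long sequences: with a positive feature map φ, softmax(QKᵀ)V is replaced by φ(Q)(φ(K)ᵀV), so the small (d × d_v) summary φ(K)ᵀV is built once in O(N) and reused for every query, instead of forming the N × N score matrix. A pure-Python toy with φ(x) = elu(x) + 1 (a common choice); this illustrates the math, not FLA's fused Triton kernels:

```python
import math

def phi(v):
    # elu(x) + 1: strictly positive feature map.
    return [x + 1 if x > 0 else math.exp(x) for x in v]

def linear_attention(Q, K, V):
    d, dv = len(K[0]), len(V[0])
    # Accumulate S = phi(K)^T V and z = sum of phi(k) in one O(N) pass.
    S = [[0.0] * dv for _ in range(d)]
    z = [0.0] * d
    for k, v in zip(K, V):
        fk = phi(k)
        for i in range(d):
            z[i] += fk[i]
            for j in range(dv):
                S[i][j] += fk[i] * v[j]
    out = []
    for q in Q:
        fq = phi(q)
        denom = sum(fi * zi for fi, zi in zip(fq, z))
        out.append([sum(fq[i] * S[i][j] for i in range(d)) / denom
                    for j in range(dv)])
    return out

Q = [[0.5, -0.5]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0], [3.0]]
out = linear_attention(Q, K, V)
```

Because the weights stay positive and normalized, each output is still a convex combination of the value rows, just like softmax attention, but the cost per query no longer grows with sequence length.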
#cplusplus #automatic_differentiation #large_language_models #machine_learning #tensor_algebra
GGML is a lightweight, efficient tensor library written in C that runs large machine learning models on everyday hardware such as laptops, phones, and even a Raspberry Pi. It supports integer quantization (shrinking models and speeding up inference) and automatic differentiation, and it works across many platforms with no third-party dependencies. GGML performs no memory allocations at runtime, which improves performance and suits edge devices with limited resources. You can build and run models easily, including GPT-2, with backends for CUDA, Android, and other hardware. In short, you can run advanced AI models faster and cheaper on devices you already own.
https://github.com/ggml-org/ggml
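The quantization idea can be shown in a few lines: map floats to 8-bit integers with a shared scale, then multiply back to approximate the originals. This is a sketch of plain symmetric int8 quantization; ggml's actual block-based formats (q4_0, q8_0, etc.) are more involved:

```python
def quantize_q8(values):
    # One scale for the whole block; ints fit in [-127, 127].
    scale = max(abs(v) for v in values) / 127.0 or 1.0
    q = [round(v / scale) for v in values]
    return q, scale

def dequantize_q8(q, scale):
    # Approximate reconstruction of the original floats.
    return [x * scale for x in q]

weights = [0.12, -0.5, 0.33, 0.9, -0.71]
q, scale = quantize_q8(weights)
restored = dequantize_q8(q, scale)
```

Storing 1 byte per weight instead of 4 is where the memory savings come from; the rounding error is bounded by half the scale, which is why quantized models stay usable.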
#python #brain_inspired_ai #deep_learning #large_language_models #reasoning
The Hierarchical Reasoning Model (HRM) is a new type of AI that reasons more like a human brain, using a fast part for quick details and a slow part for big-picture planning. It solves hard logic tasks like Sudoku, mazes, and IQ-style puzzles very well, even though it is tiny (only 27 million parameters) and learns from very little data (just 1,000 examples). Unlike most large language models, it does not need long chains of written reasoning steps or huge amounts of training, which makes it much faster, cheaper, and more efficient. For the user, this means powerful reasoning in a small, fast system that can run on ordinary hardware and still beat much larger models on tough problems.
https://github.com/sapientinc/HRM
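The two-timescale idea can be sketched as nested loops: a fast low-level module takes several steps for every single update of a slow high-level module. This toy is purely illustrative of the control flow; it is not the HRM architecture or its training code:

```python
def hierarchical_steps(outer_steps: int, inner_steps: int):
    trace = []
    high_state = 0
    for t in range(outer_steps):
        low_state = high_state
        for _ in range(inner_steps):       # fast module: fine-grained work
            low_state += 1
            trace.append(("low", low_state))
        high_state = low_state             # slow module: one update per cycle
        trace.append(("high", high_state))
    return trace

trace = hierarchical_steps(outer_steps=2, inner_steps=3)
```

The fast loop does the detailed computation while the slow loop integrates its results, which is the brain-inspired division of labor the summary describes.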
#python #large_language_models #llm #penetration_testing
PentestGPT is a free, open-source AI tool that automates penetration testing, including solving CTF challenges in web, crypto, and other categories. Install it with Docker, add your API key (Anthropic, OpenAI, or a local LLM), then run pentestgpt --target [IP] for interactive guidance on scans, exploits, and reports. The new v1.0 adds autonomous agents and session saving. It boosts your speed and accuracy in ethical hacking, helping beginners learn the workflow fast and pros tackle complex targets efficiently.
https://github.com/GreyDGL/PentestGPT
#python #gemini #gemini_ai #gemini_api #gemini_flash #gemini_pro #information_extraction #large_language_models #llm #nlp #structured_data
**LangExtract** is a free Python library that uses AI models like Gemini to pull structured data—like names, emotions, or meds—from messy text such as reports or books. It links every fact to its exact spot in the original, creates interactive visuals for easy checks, handles huge files fast with chunking and parallel runs, and works with cloud or local models without fine-tuning. You benefit by quickly turning unstructured docs into reliable, organized data for analysis, saving time and boosting accuracy in fields like healthcare or research.
https://github.com/google/langextract
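The "source grounding" idea, linking every extracted fact to its exact spot in the original, boils down to keeping character offsets alongside each extraction. A toy sketch of that concept (not langextract's actual API):

```python
def ground(text: str, extraction: str):
    # Record where the extracted span occurs in the source text,
    # so the value can always be verified against the original.
    start = text.find(extraction)
    if start == -1:
        return None
    return {"extraction": extraction, "start": start, "end": start + len(extraction)}

report = "Patient was prescribed 500 mg amoxicillin twice daily."
fact = ground(report, "500 mg amoxicillin")
```

Keeping the offsets means any downstream consumer can slice the source text and confirm the extraction verbatim, which is what makes grounded output auditable.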