Computer Science and Programming
152K subscribers
658 photos
29 videos
37 files
945 links
Channel specializing in advanced topics:
* Artificial Intelligence
* Machine Learning
* Deep Learning
* Computer Vision
* Data Science
* Python

Admin: @otchebuch

Memes: @memes_programming

Ads: @Source_Ads,
https://telega.io/c/computer_science
Why we migrated from Python to Node.js
A startup rewrote its backend from Python/Django to Node.js/Express one week after launch because of Python's async limitations. The team struggled with Django's incomplete async support, the colored-functions problem, and the need for constant sync/async conversions. Despite losing Django's ergonomic ORM, they gained 3x throughput, unified the codebase with their existing Node worker service, and improved maintainability. The three-day migration was driven by concerns about scalability and code quality rather than immediate load issues.
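To make the friction concrete, here is a minimal sketch (not the startup's actual code) of the sync/async conversion boilerplate that Django's synchronous ORM forces on async views. It assumes asgiref is installed (it ships with Django); the function and data below are made up for illustration.

```python
import asyncio
from asgiref.sync import sync_to_async


def fetch_report_sync(report_id: int) -> dict:
    # Stand-in for a blocking Django ORM call, e.g. Report.objects.get(pk=report_id)
    return {"id": report_id, "status": "ready"}


async def report_view(report_id: int) -> dict:
    # Calling the sync ORM directly would block the event loop, so every call
    # gets wrapped -- the constant conversion the team grew tired of.
    return await sync_to_async(fetch_report_sync)(report_id)


if __name__ == "__main__":
    print(asyncio.run(report_view(42)))
```

In Node.js the same handler is one async function end to end, which is the "unified codebase" win the post describes.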
Pikaday
Most forms don't need JavaScript date pickers. Native HTML date/time inputs handle accessibility, performance, and internationalization automatically. For better usability, consider separate inputs for day/month/year, select dropdowns for limited options, or masked inputs with validation. Complex calendar widgets lead to more errors and accessibility issues. Keep forms simple by using native browser features and basic HTML elements that are easier to use and test.
Unmasking a hidden singleton

A load test of monday.com's AI Reports feature revealed a critical race condition caused by a hidden singleton. When multiple users generated reports simultaneously, a WorkdocsAPIService registered as a singleton shared mutable state across concurrent requests, so workdoc IDs overrode each other and triggered 400 errors. The bug had gone undetected in production because low adoption and a high pod count kept the collision probability small. The investigation worked through several hypotheses before uncovering the singleton registration, underscoring the value of load testing, end-to-end concurrent testing, and stateless class design in asynchronous environments.
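Here is a minimal sketch of that failure mode, illustrated with Python's asyncio rather than monday.com's actual stack; the class and field names are assumptions based on the write-up. Two concurrent requests share one singleton instance, and the second write clobbers the first before it is read back.

```python
import asyncio
from typing import Optional


class WorkdocsAPIService:
    """One shared instance per process: the hidden singleton."""

    def __init__(self) -> None:
        self.workdoc_id: Optional[str] = None  # mutable state shared by all requests

    async def generate_report(self, workdoc_id: str) -> str:
        self.workdoc_id = workdoc_id   # request A stores its ID...
        await asyncio.sleep(0.01)      # ...yields to the event loop (I/O stand-in)...
        return self.workdoc_id         # ...and reads back whatever was written last


service = WorkdocsAPIService()  # registered once, reused by every request


async def main() -> None:
    # Two "users" generate reports concurrently; both calls return "workdoc-B".
    print(await asyncio.gather(
        service.generate_report("workdoc-A"),
        service.generate_report("workdoc-B"),
    ))


if __name__ == "__main__":
    asyncio.run(main())
```

Scoping the service per request, or keeping it stateless by passing the workdoc ID through arguments, removes the race.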
The Force-Feeding of AI on an Unwilling Public

Major tech companies are forcing AI integration into essential software and services without user consent, even though only 8% of people are willing to pay for AI voluntarily. Companies like Microsoft and Google bundle AI into existing products to hide losses and create artificial adoption metrics. This forced implementation affects email, search, office software, and customer service, making it nearly impossible for users to avoid AI. The author argues this represents a form of technological tyranny that requires legal intervention through transparency, opt-in requirements, and liability laws.
Next.js 15.4
Next.js 15.4 brings Turbopack builds to 100% integration test compatibility, making it ready for production use and powering vercel.com. The release includes stability and performance improvements while previewing Next.js 16 features like unified cache components, optimized client-side routing, enhanced DevTools, stable Node.js middleware, and deployment adapters. Developers can experiment with upcoming features using the canary channel and experimental flags.
CSS has become too POWERFUL

Modern CSS has evolved beyond simple text-based workflows, with advanced features like OKLCH color spaces, complex gradients, timing functions, and path animations becoming difficult to write by hand. Visual editors are emerging as essential tools for working with these capabilities, making features more discoverable and encouraging experimentation. The shift toward visual tooling is a natural evolution as CSS specifications expand faster than developers can keep up with them through traditional text-based approaches.
#promo

Sometimes reality outpaces expectations in the most unexpected ways.
While global AI development seems increasingly fragmented, Sber just released Europe's largest open-source AI collection—full weights, code, and commercial rights included.
No API paywalls.
No usage restrictions.
Just four complete model families ready to run in your private infrastructure, fine-tuned on your data, serving your specific needs.

What makes this release remarkable isn't merely the technical prowess, but the quiet confidence behind sharing it openly when others are building walls. Find out more in the article from the developers.

GigaChat Ultra Preview: 702B-parameter MoE model (36B active per token) with 128K context window. Trained from scratch, it outperforms DeepSeek V3.1 on specialized benchmarks while maintaining faster inference than previous flagships. Enterprise-ready with offline fine-tuning for secure environments.
GitHub | HuggingFace | GitVerse

GigaChat Lightning offers the opposite balance: a compact yet powerful MoE architecture that runs on your laptop. It competes with Qwen3-4B in quality and matches the speed of Qwen3-1.7B, while being significantly smarter and larger in total parameter count than the latter.
Lightning holds its own against the best open-source models in its class, outperforms comparable models across a range of tasks, and delivers ultra-fast inference, making it ideal for scenarios where Ultra would be overkill and speed is critical. Plus, it features stable expert routing and a welcome bonus: 256K context support.
GitHub | Hugging Face | GitVerse
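For orientation, here is a hedged sketch of what running a model like GigaChat Lightning locally might look like with Hugging Face transformers. The repository id below is a placeholder rather than a verified name (check the Hugging Face link above), and the real loading and chat-template details may differ.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "ai-sage/GigaChat-Lightning"  # placeholder id, not verified -- see the links above

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")  # needs accelerate

prompt = "Summarize mixture-of-experts routing in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```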

Kandinsky 5.0 brings a significant step forward in open generative models. The flagship Video Pro matches Veo 3 in visual quality and outperforms Wan 2.2-A14B, while Video Lite and Image Lite offer fast, lightweight alternatives for real-time use cases. The suite is powered by K-VAE 1.0, a high-efficiency open-source visual encoder that enables strong compression and serves as a solid base for training generative models. This stack balances performance, scalability, and practicality—whether you're building video pipelines or experimenting with multimodal generation.
GitHub | GitVerse | Hugging Face | Technical report

Audio gets its upgrade too: GigaAM-v3 delivers a speech recognition model with 50% lower WER than Whisper-large-v3, trained on 700k hours of audio, with punctuation and normalization for spontaneous speech.
GitHub | HuggingFace | GitVerse

Every model can be deployed on-premises, fine-tuned on your data, and used commercially. It's not just about catching up – it's about building sovereign AI infrastructure that belongs to everyone who needs it.
This is Kiroween
Kiro announces Kiroween, a Halloween-themed hackathon with $100,000 in prizes across 12 categories. Participants build applications using Kiro's agentic IDE features including specs, agent hooks, steering, and MCP. The competition runs from October 31 to December 5, 2025, with categories like Resurrection, Frankenstein, Skeleton Crew, and Costume Contest, plus a special $10,000 startup prize. All participants receive Kiro Pro+ tier access during the submission period.