Computer Science and Programming
Channel specializing in advanced topics in:
* Artificial Intelligence
* Machine Learning
* Deep Learning
* Computer Vision
* Data Science
* Python

Admin: @otchebuch

Memes: @memes_programming

Ads: @Source_Ads,
https://telega.io/c/computer_science
#promo

Tune in to the 10th AI Journey 2025 international conference: scientists, visionaries, and global AI practitioners will come together on one stage. Here, you will hear the voices of those who don't just believe in the future—they are creating it!

Speakers include visionaries Kai-Fu Lee and Chen Qufan, as well as dozens of global AI gurus! Do you agree with their predictions about AI?

On the first day of the conference, November 19, we will talk about how AI is already being used across different areas of life, how it helps unlock human potential for the future and is changing creative industries, and what impact it has on people and on a sustainable future.

On November 20, we will focus on the role of AI in business and economic development and present technologies that will help businesses and developers be more effective by unlocking human potential.

On November 21, we will talk about how engineers and scientists are making scientific and technological breakthroughs and creating the future today! The day's program includes presentations by scientists from around the world:
- Ajit Abraham (Sai University, India) will present on “Generative AI in Healthcare”
- Nebojša Bačanin Džakula (Singidunum University, Serbia) will talk about the latest advances in bio-inspired metaheuristics
- Alexandre Ferreira Ramos (University of São Paulo, Brazil) will present his work on using thermodynamic models to study the regulatory logic of transcriptional control at the DNA level
- Anderson Rocha (University of Campinas, Brazil) will give a presentation entitled “AI in the New Era: From Basics to Trends, Opportunities, and Global Cooperation”.

And in the special AIJ Junior track, we will talk about how AI helps us learn, create, and ride the wave.

The day will conclude with an award ceremony for the winners of the AI Challenge for aspiring data scientists and the AIJ Contest for experienced AI specialists. The results of an open selection of AIJ Science research papers will be announced.

Ride the wave with AI into the future!

Tune in to the AI Journey webcast on November 19-21.
#promo

Sometimes reality outpaces expectations in the most unexpected ways.
While global AI development seems increasingly fragmented, Sber just released Europe's largest open-source AI collection—full weights, code, and commercial rights included.
No API paywalls.
No usage restrictions.
Just four complete model families ready to run in your private infrastructure, fine-tuned on your data, serving your specific needs.

What makes this release remarkable isn't merely the technical prowess, but the quiet confidence behind sharing it openly when others are building walls. Find out more in the article from the developers.

GigaChat Ultra Preview: 702B-parameter MoE model (36B active per token) with 128K context window. Trained from scratch, it outperforms DeepSeek V3.1 on specialized benchmarks while maintaining faster inference than previous flagships. Enterprise-ready with offline fine-tuning for secure environments.
GitHub | HuggingFace | GitVerse
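
For readers who want to try the Ultra Preview, a minimal loading sketch using the standard Hugging Face transformers API is shown below. The repo id is a placeholder, not the confirmed name (use the links above), and serving a 702B-parameter MoE requires a multi-GPU node; the snippet only illustrates the usual open-weights workflow.

```python
# Hedged sketch: loading an open-weights causal LM with Hugging Face transformers.
# "your-org/gigachat-ultra-preview" is a placeholder, not a confirmed repo name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/gigachat-ultra-preview"  # placeholder; see the official links above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 halves memory vs fp32 for the large MoE weights
    device_map="auto",           # shard layers across all available GPUs
)

prompt = "Explain mixture-of-experts routing in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```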

GigaChat Lightning offers the opposite balance: a compact yet powerful MoE architecture that runs on your laptop. It competes with Qwen3-4B in quality and matches the speed of Qwen3-1.7B while being significantly smarter and larger in parameter count than the latter.
Lightning holds its own against the best open-source models in its class, outperforms comparably sized models across a range of tasks, and delivers ultra-fast inference, making it ideal for scenarios where Ultra would be overkill and speed is critical. Plus, it features stable expert routing and a welcome bonus: 256K context support.
GitHub | Hugging Face | GitVerse
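
Since Lightning is pitched as laptop-friendly, a quick start could look like the transformers pipeline sketch below. The model id is again a placeholder and memory requirements depend on your hardware; treat this as a generic open-weights example rather than official usage.

```python
# Hedged sketch: small-model text generation via the transformers pipeline API.
# "your-org/gigachat-lightning" is a placeholder repo name, not confirmed by the post.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="your-org/gigachat-lightning",  # placeholder
    device_map="auto",   # uses the GPU if present, otherwise falls back to CPU
    torch_dtype="auto",  # picks the dtype stored in the checkpoint
)

result = generator(
    "Summarize the trade-offs between dense and MoE language models.",
    max_new_tokens=96,
    do_sample=False,
)
print(result[0]["generated_text"])
```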

Kandinsky 5.0 brings a significant step forward in open generative models. The flagship Video Pro matches Veo 3 in visual quality and outperforms Wan 2.2-A14B, while Video Lite and Image Lite offer fast, lightweight alternatives for real-time use cases. The suite is powered by K-VAE 1.0, a high-efficiency open-source visual encoder that enables strong compression and serves as a solid base for training generative models. This stack balances performance, scalability, and practicality—whether you're building video pipelines or experimenting with multimodal generation.
GitHub | GitVerse | Hugging Face | Technical report
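
If the Video Lite checkpoint ships in a diffusers-compatible format (an assumption on our part, not something the post states), text-to-video generation would follow the familiar pattern sketched below; the repo id is a placeholder and the argument names mirror other diffusers video pipelines, so check the technical report for the real interface.

```python
# Hedged sketch: text-to-video with the generic diffusers loader.
# The repo id is a placeholder and diffusers support is an assumption;
# argument names follow common diffusers video pipelines and may differ.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "your-org/kandinsky-5.0-video-lite",  # placeholder repo id
    torch_dtype=torch.bfloat16,
).to("cuda")

output = pipe(
    prompt="A paper boat drifting down a rainy city street, cinematic lighting",
    num_frames=49,
    num_inference_steps=30,
)

export_to_video(output.frames[0], "boat.mp4", fps=24)
```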

Audio gets its upgrade too: GigaAM-v3 delivers a speech recognition model with 50% lower WER than Whisper-large-v3, trained on 700k hours of audio and with punctuation and normalization support for spontaneous speech.
GitHub | HuggingFace | GitVerse
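
For the speech model, transcription could look like the sketch below, which assumes a transformers-compatible checkpoint is published; the post does not confirm that, so the loading instructions in the official GitHub repo take precedence. The repo id and audio file name are placeholders.

```python
# Hedged sketch: speech-to-text with the transformers ASR pipeline.
# Assumes a transformers-compatible GigaAM-v3 checkpoint exists (not confirmed);
# otherwise follow the official repo's own loader.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="your-org/gigaam-v3",  # placeholder repo id
    chunk_length_s=30,           # chunk long recordings of spontaneous speech
)

print(asr("meeting_recording.wav")["text"])  # placeholder audio file
```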

Every model can be deployed on-premises, fine-tuned on your data, and used commercially. It's not just about catching up – it's about building sovereign AI infrastructure that belongs to everyone who needs it.