
Learn what’s new in the recently released MariaDB Community Server 11.8 LTS release and tech preview release of MariaDB Enterprise Platform 2026.

"Unicode as default character set" - finally, nice!
The blog post is easier to read than the GitHub release notes: https://mariadb.org/11-8-lts-released/
Also, if you like MariaDB, show your support and help it get to 10k stars at https://github.com/MariaDB/Server
MariaDB 11.8 LTS released, has vector support, automatic TLS and more
I can't set up a 'default user' (only root), but there is now an MR adding exact commands you can copy-paste in a README: https://salsa.debian.org/mariadb-team/mariadb-server/-/merge_requests/115
MariaDB surpassed MySQL as the most popular database for WordPress
It has been a long time coming (Oracle bought Sun, and with it MySQL, over 15 years ago), but it seems WordPress is finally at the point where MariaDB's popularity has surpassed MySQL's, as shown by the stats at https://wordpress.org/about/stats/.
The share of MySQL 8.4 users is oddly low, just 0.1%. One would think it would still be at least 1% or so.
I am asking for general strategies, not for a solution to a specific case.
Strategies for scaling out MySQL/MariaDB when the database gets too large for a single host?
What are your preferred strategies when a MySQL/MariaDB database server grows to have too much traffic for a single host to handle, i.e. scaling CPU/RAM or using regular replication is not an option anymore? Do you deploy ProxySQL to start splitting the traffic according to some rule to two different hosts?
Has anyone migrated to TiDB? In that case, what was the strategy to detect if the SQL your app uses is fully compatible with TiDB?
By UV 3000 you probably don't mean the ultraviolet lamp that fills the first page of Google results for that term? I doubt UV, whatever it is, is a common approach.
What to do when a MySQL/MariaDB database gets too large for a single host?
What are your strategies when a MySQL/MariaDB database server grows to have too much traffic for a single host to handle, i.e. scaling CPU/RAM is not an option anymore? Do you deploy ProxySQL to start splitting the traffic according to some rule to two different hosts? What would the rule be, and how would you split the data? Has anyone migrated to TiDB? In that case, what was the strategy to detect if the SQL your app uses is fully compatible with TiDB?
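One common answer to "what would the rule be, and how would you split the data" is hash-based sharding on a tenant or user key. Below is a minimal, hedged sketch of deterministic shard routing; the shard hostnames are placeholders, and a real deployment would also need resharding and cross-shard query handling:

```python
# Hash-based shard routing sketch: a stable hash of a tenant/user key
# picks one of N database hosts. Hostnames below are hypothetical.
import zlib

SHARDS = ["db-shard-0.internal", "db-shard-1.internal"]

def shard_for(key: str) -> str:
    """Route a key deterministically to one shard host."""
    return SHARDS[zlib.crc32(key.encode("utf-8")) % len(SHARDS)]
```

The same idea can be expressed as a ProxySQL query rule or application-side connection pooling; the hard part is not the routing function but migrating existing rows and handling queries that span shards.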
What do you mean by a default user? You can just run 'mariadb' to access the console as the same user that had permissions to run 'apt install'.
For your actual application you need to plan what database name to use, what user, what permissions it needs, potentially remote connections and TLS, etc. This indeed is some work and could perhaps be automated a bit, but it also needs a sysadmin to make some decisions.
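To make the decisions above concrete, here is a hedged sketch of the minimal provisioning SQL an admin typically ends up writing. The database name, user, and password are placeholders, not anything the Debian packaging actually generates:

```python
# Sketch of minimal application provisioning SQL for MariaDB.
# All names and the password are hypothetical placeholders.
def provisioning_sql(db="appdb", user="appuser", host="localhost",
                     password="change-me"):
    """Return the SQL statements to create a database, a user, and grants."""
    return [
        f"CREATE DATABASE IF NOT EXISTS {db};",
        f"CREATE USER IF NOT EXISTS '{user}'@'{host}' IDENTIFIED BY '{password}';",
        f"GRANT ALL PRIVILEGES ON {db}.* TO '{user}'@'{host}';",
    ]
```

These three statements are the part that is easy to automate; deciding on remote access, TLS, and narrower grants is where a sysadmin still has to choose.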
What do users of MariaDB in Debian want to see in future versions?
Besides having the latest version available, what do Debian users who run MariaDB wish to see in future versions of MariaDB, or how it is integrated and packaged in Debian?
I am the maintainer in Debian - looking for feedback and ideas.
Yes, increasing the InnoDB buffer pool to use all available memory is the most important configuration change a sysadmin can do. But in order to do it, you need to know if the host is dedicated to one MariaDB instance or if there are multiple servers on the same host. Otherwise you would just have processes each hogging more memory when they can and not giving it up to others.
I could imagine a dialog during the installation that asks something like "Is this host dedicated to this MariaDB instance? If yes, automatically configure it to use most of the available system RAM."
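The sizing logic such a dialog would apply is simple. Here is a hedged sketch that reserves ~75% of RAM for the buffer pool on a dedicated host; the 75% fraction is a common rule of thumb I am assuming, not an official recommendation:

```python
# Sketch: on a host dedicated to one MariaDB instance, size the InnoDB
# buffer pool to ~75% of total RAM and emit a config snippet.
# The 75% fraction is an assumed rule of thumb.
def buffer_pool_config(mem_total_kb: int, fraction: float = 0.75) -> str:
    pool_mb = int(mem_total_kb * fraction / 1024)
    return f"[mariadbd]\ninnodb_buffer_pool_size = {pool_mb}M\n"

if __name__ == "__main__":
    # Read total RAM from /proc/meminfo (Linux-only).
    with open("/proc/meminfo") as f:
        mem_kb = next(int(line.split()[1]) for line in f
                      if line.startswith("MemTotal:"))
    print(buffer_pool_config(mem_kb))
```

On a 16 GiB dedicated host this would emit a 12 GiB buffer pool; with multiple services on the host the fraction would have to be much smaller, which is exactly why the installer cannot decide this alone.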
MariaDB supports Galera clustering out-of-the-box, and also traditional primary/replica setups. But you need to have something that spans multiple hosts to monitor and manage it, and that is outside of what a single-host OS package management system can do.
What do users of MariaDB in Debian/Ubuntu want to see in future versions?
Besides having the latest version available, what do Debian/Ubuntu users who run MariaDB wish to see in future versions of MariaDB, or how it is integrated and packaged in Debian?
I am the maintainer in Debian and Ubuntu - looking for feedback and ideas.
You mean Ollama? There are so many options; any favorites?
What do users of MariaDB in Ubuntu want to see in future versions?
Besides having the latest version available, what do Ubuntu users who run MariaDB wish to see in future versions of MariaDB, or how it is integrated and packaged in Ubuntu?
I am the maintainer in Ubuntu - looking for feedback and ideas.
What local LLMs are you using to create embeddings for RAG?
I’ve been exploring MariaDB 11.8’s new vector search capabilities for building AI-driven applications, particularly with local LLMs for retrieval-augmented generation (RAG) of fully private data that never leaves the computer. I’m curious about how others in the community are leveraging these features in their projects.
I’m especially interested in using it with local LLMs (like Llama or Mistral) to keep data on-premise and avoid cloud-based API costs or security concerns.
Does anyone have experiences to share, in particular what LLMs are you using when generating embeddings to store in MariaDB?
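As a starting point for discussion, here is a hedged sketch of the pattern I have in mind: a local Ollama model generates the embeddings, and MariaDB 11.8's VECTOR column plus VEC_* distance functions do the retrieval. The model name, table, and endpoint are assumptions; check the MariaDB and Ollama docs for the exact syntax in your versions:

```python
# Sketch: local embeddings via Ollama stored in a MariaDB 11.8 VECTOR
# column. Model, table name, and dimension (768 for nomic-embed-text)
# are assumptions; nothing here is connected to a live server.
import json
import urllib.request

def get_embedding(text, model="nomic-embed-text"):
    """Ask a locally running Ollama instance for an embedding vector."""
    req = urllib.request.Request(
        "http://localhost:11434/api/embeddings",
        data=json.dumps({"model": model, "prompt": text}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]

# Schema sketch using the MariaDB 11.8 VECTOR type and vector index.
SCHEMA = """
CREATE TABLE docs (
  id INT PRIMARY KEY AUTO_INCREMENT,
  content TEXT,
  embedding VECTOR(768) NOT NULL,
  VECTOR INDEX (embedding)
);
"""

def nearest_docs_sql(k=5):
    # VEC_FromText parses a '[...]' literal into a vector value.
    return (
        "SELECT content FROM docs "
        "ORDER BY VEC_DISTANCE_COSINE(embedding, VEC_FromText(?)) "
        f"LIMIT {k}"
    )
```

The retrieval step is then just: embed the user's question with the same model, pass the vector as the parameter, and feed the top-k rows to the LLM as context. Everything stays on the local machine.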
Pulsar v1.127.0: Marching to the beat of our own drum
Another release to round out this month. Enjoy. As always, a huge thank you to our community, contributors, and donations.
Instructions for Windows users who want improved security without CrowdStrike
Today marks the 10th anniversary of the Heartbleed vulnerability in OpenSSL, which had the same ultimate root cause as the recent XZ Utils backdoor incident
The XZ Utils backdoor, discovered last week, and the Heartbleed security vulnerability ten years ago, share the same ultimate root cause. Both of them, and in fact all critical infrastructure open source projects, should be fixed with the same solution: ensure baseline funding for proper open source maintenance.
For a software engineering organization to be efficient, it is key that everyone is an efficient communicator. Everybody needs to be calibrated in what to communicate, to whom and how, to ensure information spreads properly in the organization.
Having smart people with a lot of knowledge results in progress only if information flows well in the veins of the organization
People usually associate advanced software engineering with gray-bearded experts with vast knowledge of how computers and things like compiler internals work. However, having technical knowledge is just the base requirement to work in the field. In my experience, the greatest minds in the field are ...
In this post, I share 8 principles I believe in:
As engineers and developers, we often focus heavily on technical skills while neglecting the importance of clear, compelling writing. But the reality is, our ability to communicate effectively can have a major impact on our careers.
What is the single most common action you repeat over and over when using your computer? Let me guess – opening a new tab in the browser. Here are my tips for opening, switching and closing tabs everyone should know.
Opening a tab
This one most people know: press Ctrl+T to open a new tab. But did y...
There is more to it than just knowing Ctrl+T - see tips to boost your productivity
Advanced git commands you need to know
Git is by far the most popular software version control system today, and every software developer surely knows the basics of how to make a git commit. Given the popularity, it is surprising how many people don’t actually know the advanced commands. Mastering them might help you unlock a new l...
And to be productive also: git citool, gitk, fzf and Liquid Prompt explained with screenshots
I just prefix all my git aliases with g-. So for status I type g-s<tab>.
You need bisect only as a last resort. Effective use of git blame, git log -p -S <keyword> etc. has always been enough for me. Also, the projects I work with take 10+ minutes to compile even when cached, so doing tens of builds to bisect is much slower than just hunting for strings in git commits and code.
I had the same feeling until I started using gitk. I always have a gitk window open and press F5 to reload, so it shows me the state of everything after I've run git commands. Now I grasp everything much better.
Only product from Microsoft I actually like using and trust. Quality from 1998, and still going :)
One is enough if it is very big
Try again tomorrow, seems it got popular today
We just need specific portals for sharing that remember your homeserver. See for example https://mastodonshare.com/.