An astonishing number of open source projects rely on MySQL as their database engine -- think of practically any open source content management system. It's not hard to see why: for small systems, it's quite ideal. It's free, has modest hardware requirements, and it's fast when you don't have much data.
The problem with MySQL is that it really isn't designed for enterprise use. Once your database grows beyond the 1GB mark, it just isn't cut out for the work. The backup and restore tools are rudimentary at best: mysqldump produces a plain SQL script, and "restoring" means nothing more than running that script back through the client. Worse, mysqldump locks the entire database while the backup runs (to guarantee a consistent snapshot), taking your site offline in the process. Restoring a 5GB database this way can take almost 10 hours.
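To illustrate, a typical dump-and-replay cycle looks something like this (the database name, user, and file name are placeholders, and the exact behaviour depends on your MySQL version and options):

    # Dump the database to a plain SQL script; by default mysqldump
    # takes read locks on the tables for the duration of the dump
    mysqldump -u root -p mydb > mydb_backup.sql

    # "Restoring" is simply replaying that script through the client
    mysql -u root -p mydb < mydb_backup.sql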
MySQL also has a horrible tendency to give up and perform a full table scan, even when perfectly decent indexes are available. Often you have to force it onto a particular index with the non-standard USE INDEX hint. A full table scan is NEVER an acceptable plan -- especially when your database is over 5GB in size.
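As a rough sketch of what that hint looks like (the table and index names here are made up):

    -- Without the hint, the optimizer may still pick a full table scan;
    -- USE INDEX tells MySQL to consider only the named index
    SELECT *
    FROM orders USE INDEX (idx_customer_id)
    WHERE customer_id = 42;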
And 5GB is by no means a large database. The Transaction Processing Performance Council (TPC) classifies 100GB as a "small" database for most purposes.
So before you build your system on MySQL, ask yourself: will it *ever* be expected to handle more than 1GB of data? If the answer is yes, consider an enterprise-class RDBMS instead.