Morning sessions at MySQL MiniConf

Upcoming MySQL Features – Stewart Smith

Stewart’s talk on Upcoming MySQL Features was sort of a roadmap of what one might expect to see in MySQL 5.1 and above – he touched on Falcon, online backup, batched key access, Maria, Proxy, Workbench, and some cluster changes. When he shares his slides, it would be great to link to the Worklog items and Forge pages about these new features and previews (because, believe me, the stuff that’s coming in the future is clearly very exciting).

MySQL Indexing Methods – Jonathon Coombes

Sitting in Jonathon Coombes’ session on MySQL Indexing Methods now – he’s going through covering indexes, the B+-tree index, hash indexes, and full-text indexing.
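
For those following along at home, here’s a minimal sketch of the index types he’s covering – the table and column names below are mine, not from the talk:

    -- hypothetical MyISAM table (MySQL 5.0/5.1 era) to illustrate the index types
    CREATE TABLE articles (
        id     INT UNSIGNED NOT NULL,
        author INT UNSIGNED NOT NULL,
        posted DATETIME NOT NULL,
        body   TEXT,
        KEY idx_author_posted (author, posted),   -- B-tree index; also "covers" the query below
        FULLTEXT KEY ft_body (body)               -- full-text index (MyISAM only at this point)
    ) ENGINE=MyISAM;

    -- covering index: EXPLAIN should report "Using index", i.e. no data rows are read
    EXPLAIN SELECT posted FROM articles WHERE author = 42;

    -- full-text search
    SELECT id FROM articles WHERE MATCH(body) AGAINST('mysql indexing');

    -- hash indexes: the MEMORY engine lets you ask for one explicitly
    CREATE TABLE sessions (token CHAR(32) NOT NULL, PRIMARY KEY (token) USING HASH) ENGINE=MEMORY;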

Some select points that aren’t in the slides (the slides themselves are quite verbose, and once they make it online they’ll provide some great reading material):

Bit weird seeing Jonathon now showing off OpenQuery, as opposed to Cybersite!

MySQL Optimisation by Design – Arjen Lentz

It’s a training course that Arjen is working on, a one-day event. He has no slides per se, so he’s going to use the whiteboard instead. This is one you’ll have to hope gets captured on video, at least.

If you use MySQL, quite often you stick a database abstraction layer in between. However, you hardly ever end up porting to another database, and the abstraction layer always leaks.

Arjen now walks us through the available engines by firing up the MySQL monitor and going through the output of SHOW STORAGE ENGINES.
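
For reference, it’s just the following from the mysql command-line client (output trimmed to the first two columns; the exact engine list depends on your version and how the server was built):

    mysql> SHOW STORAGE ENGINES;
    +------------+---------+
    | Engine     | Support |
    +------------+---------+
    | MyISAM     | DEFAULT |
    | MEMORY     | YES     |
    | InnoDB     | YES     |
    | BLACKHOLE  | YES     |
    | ARCHIVE    | YES     |
    | CSV        | YES     |
    +------------+---------+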

Arjen then goes through a demonstration of how many rows per second you can INSERT. These are 60-byte rows of data-acquisition-style information (IP addresses from a router, web clicks, etc.), going into MyISAM with no indexes. He gets sustained inserts of about 150,000 rows per second (40 million rows in about 5 minutes, until his disk filled up) on his Mac OS X Leopard laptop – and he can’t max anything out. The demo is, of course, of Arjen’s new gizmo, written using the MySQL C API and currently in development. The application makes use of multi-row inserts, as sketched below.
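
I didn’t catch the exact schema, so this is only a sketch of the kind of table and multi-row INSERT involved – hypothetical column names, roughly 60 bytes per row, MyISAM, and deliberately no indexes:

    CREATE TABLE clicklog (
        logged TIMESTAMP    NOT NULL,        -- 4 bytes
        src_ip INT UNSIGNED NOT NULL,        -- 4 bytes, IPv4 address as an integer
        url    CHAR(50)     NOT NULL         -- 50 bytes, so roughly a 60-byte row
    ) ENGINE=MyISAM;                         -- no indexes at all

    -- one statement carrying many rows: far fewer round trips and less parsing
    -- overhead than inserting row by row
    INSERT INTO clicklog VALUES
        (NOW(), INET_ATON('192.0.2.1'), '/index.html'),
        (NOW(), INET_ATON('192.0.2.2'), '/about.html'),
        (NOW(), INET_ATON('192.0.2.3'), '/contact.html');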

Using Blackhole, it holds at about 245,000 rows per second for INSERTs. So the slowdown we’re seeing with MyISAM is really down to disk I/O.
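
Swapping the same (hypothetical) table over to Blackhole is a one-liner – rows are accepted and then thrown away, so what’s left to measure is essentially parsing and network overhead rather than disk:

    ALTER TABLE clicklog ENGINE=BLACKHOLE;   -- writes are accepted, then discarded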

Codename: lossylogger.

It’s called lossy because it’s allowed to lose some data. If you’re tracking web clicks, does it really make a difference, in the grand scheme of things, if you’ve lost a few? Not really. It makes use of UDP, so you’re not opening connections either. Occasionally you get a buffer overflow and occasionally you lose data, but it’s no big deal.

Most interestingly, at the tail-end of Arjen’s talk, I see the MySQL Australia account manager (sales), Ralph, walking in. Yes, we have sales in Australia/New Zealand now!
