
Services Oriented Architecture with PHP and MySQL

Joe Stump, Lead Architect, Digg. Slides should make their way to Joe's website soon enough.

He mainly works on the backend, making sure it's scalable, that all the Digg buttons can be served, and so on.

The application layer is loosely coupled from your data. The whole point of SOA? You can put a service in front of the DB, and move between DBs if required.

They do use MySQL, but it's pretty vanilla.

Old habits die hard
– Data requests are sequential (I need foo, bar, bleh, ecky)
– Data requests are blocking (when you need foo, nothing else is happening)
– Tightly coupled (mysql_query everywhere; even if you're using a DB abstraction layer, you're still writing SQL, so you can't swap in CouchDB, for instance)
– Scaling is not abstracted (a lot of the caching lives in the front-end code, which is a problem when you start scaling your teams out). They use memcached, from what I gather. A sketch of this old style follows below.
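To make that concrete, here's a minimal sketch of the old style; the table names and credentials are made up:

    <?php
    // Old style: sequential, blocking, tightly coupled to MySQL/SQL.
    $db = mysql_connect('localhost', 'user', 'pass');
    mysql_select_db('digg_example', $db);

    // Each query blocks before the next one starts, so total time is
    // the sum of all queries. Caching decisions get buried here too.
    $res   = mysql_query('SELECT * FROM stories WHERE id = 123', $db);
    $story = mysql_fetch_assoc($res);

    $res      = mysql_query('SELECT * FROM comments WHERE story_id = 123', $db);
    $comments = array();
    while ($row = mysql_fetch_assoc($res)) {
        $comments[] = $row;
    }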

SOA
– Data is requested from a service (via HTTP, a custom protocol, etc.)
– Data requests are run in parallel (over non-blocking sockets: with 10 data requests on one webpage at 10ms each, sequential execution costs 100ms, but in parallel it might only take ~70ms; generally 1.5-2.5x faster than blocking sequential requests. See the curl_multi sketch below.)
– Data requests are asynchronous (non-blocking parallel requests)
– Data layer is loosely coupled
– Scalability is abstracted (you can find engineers anywhere who can parse JSON or XML :P)
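A hedged sketch of the parallel-requests idea using PHP's curl_multi; the service URLs are hypothetical:

    <?php
    $urls = array(
        'http://services.example.com/story/123',
        'http://services.example.com/comments/123',
        'http://services.example.com/user/456',
    );

    // All requests go out at once; wall-clock time approaches the
    // slowest single request rather than the sum of all of them.
    $mh      = curl_multi_init();
    $handles = array();
    foreach ($urls as $url) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_multi_add_handle($mh, $ch);
        $handles[$url] = $ch;
    }

    do {
        curl_multi_exec($mh, $running);
        curl_multi_select($mh);          // wait instead of busy-spinning
    } while ($running > 0);

    $results = array();
    foreach ($handles as $url => $ch) {
        $results[$url] = json_decode(curl_multi_getcontent($ch), true);
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
    }
    curl_multi_close($mh);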

Options?
– Run requests over HTTP (Google (Java), Amazon (Java), etc.)
– New York Times’ DBSlayer (a small HTTP server that provides parallel and async requests to MySQL)
– Danga’s Gearman (binary protocol; it has worked well, and it’s kind of a queuing system)
– Remember: the wall-clock time goes down, but the CPU time is still being spent; the total work is the same

HTTP w/PHP
1. Group requests for data at the top
2. Open a socket for each request
– Sockets must be non-blocking
– Make sure to use TCP_NODELAY
3. Use __get() to block for results
4. See Services_Digg_Request

Use the PEAR package Services_Digg for the above example. Note Digg’s API documentation as well. A rough sketch of the pattern follows.
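This is not the actual Services_Digg_Request code, just a hedged sketch of the pattern the steps describe; host, port, and paths are made up:

    <?php
    class AsyncRequest
    {
        private $sock;
        private $data;

        public function __construct($host, $port, $path)
        {
            // 2. Open a non-blocking socket per request. A real
            // implementation would also set TCP_NODELAY (awkward on
            // PHP 5.x stream sockets, so only noted here).
            $this->sock = stream_socket_client(
                "tcp://$host:$port", $errno, $errstr, 1,
                STREAM_CLIENT_CONNECT | STREAM_CLIENT_ASYNC_CONNECT
            );
            stream_set_blocking($this->sock, 0);
            $r = null; $e = null; $w = array($this->sock);
            stream_select($r, $w, $e, 1);   // wait until writable
            fwrite($this->sock, "GET $path HTTP/1.0\r\nHost: $host\r\n\r\n");
        }

        // 3. Block for results only when a field is first accessed.
        public function __get($name)
        {
            if ($this->data === null) {
                stream_set_blocking($this->sock, 1);
                $raw = stream_get_contents($this->sock);
                fclose($this->sock);
                list(, $body) = explode("\r\n\r\n", $raw, 2);
                $this->data = json_decode($body, true);
            }
            return isset($this->data[$name]) ? $this->data[$name] : null;
        }
    }

    // 1. Group requests at the top; both sockets are in flight at once.
    $story = new AsyncRequest('services.example.com', 80, '/story/123');
    $user  = new AsyncRequest('services.example.com', 80, '/user/456');
    echo $story->title;   // first access blocks for that response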

HTTP is widely supported in all languages, and it's very easy to get up and running, with lots of options for servers/tuning. On the downside, the protocol carries a lot of overhead, and Apache itself has a lot of overhead.

DBSlayer
– a small HTTP daemon written in C; you communicate with it by sending JSON over HTTP (sketch below)
– connection pooling (benchmark a MySQL connection and you'll find a whole bunch of overhead in MySQL authentication; MySQL Proxy does this too)
– load balancing and failover (like mysql proxy)
– tightly coupled to MySQL (no migration)
– tightly coupled to SQL (no CouchDB)
– no intelligence
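A hedged sketch of what talking to DBSlayer looks like; the port and the /db endpoint follow the DBSlayer docs as I recall them, so verify against the project page:

    <?php
    // DBSlayer takes a JSON document describing the query and returns
    // the result set as JSON; any language with an HTTP client can use it.
    $query    = json_encode(array('SQL' => 'SELECT id, title FROM stories LIMIT 10'));
    $response = file_get_contents('http://localhost:9090/db?' . urlencode($query));
    $result   = json_decode($response, true);
    // Note you're still writing SQL, which is exactly the tight
    // coupling mentioned above.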

Gearman
– highly scalable queuing system (workers are worker bees, e.g. PHP scripts, sitting with sockets open; a client asks the gearman server to do foo, the server knows it has n workers, and gets them to work, so it scales linearly. Jobs can return results, and run in parallel across many gearman servers and many CPUs)
– simple and efficient binary protocol
– sets of jobs are run in parallel
– queue can scale linearly
– php, perl, python, ruby, c clients
– poorly documented (“I think poorly documented is giving them too much credit.. All danga stuff has next to no documentation”)
– LiveJournal uses this, instead of running everything over HTTP
– it's not very "robust" (it scales, and at Digg they don't see a massive number of failing jobs, but the queue isn't persistent: if gearman gets restarted while you're pushing stuff, the queue goes away. There is a workaround for this, so ask Joe; it's an undocumented feature)
– digg uses it in the submission process for crawling
– Chris at Yahoo! uses Gearman requests to run multiple memcached GETs (if you're not using multi-get, look into it)
– Check out Net_Gearman, which is a PEAR package; a usage sketch follows
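A hedged sketch of submitting a set of parallel jobs with Net_Gearman; the server address and the 'Crawl' job name are assumptions, and the exact API may differ slightly:

    <?php
    require_once 'Net/Gearman/Client.php';
    require_once 'Net/Gearman/Set.php';
    require_once 'Net/Gearman/Task.php';

    // Called as each job finishes; the jobs themselves run in parallel
    // on whatever workers have registered the 'Crawl' function.
    function onCrawlComplete($func, $handle, $result)
    {
        var_dump($result);
    }

    $set = new Net_Gearman_Set();
    foreach (array('http://example.com/a', 'http://example.com/b') as $url) {
        $task = new Net_Gearman_Task('Crawl', array('url' => $url));
        $task->attachCallback('onCrawlComplete');
        $set->addTask($task);
    }

    $client = new Net_Gearman_Client(array('localhost:7003'));
    $client->runSet($set);   // blocks until the whole set completes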

DIY option?
– not recommended, unless you need a highly customised solution, like Flickr does
– Flickr ran into a problem where uploading an image and then resizing it was painful for large images (think of an SLR producing files around 7MB), so they use a custom binary protocol that is much more efficient for their datasets
– this requires more resources (humans, engineers!)

What goes in the Services layer?
– smart caching strategies
– data mapping and distribution
– intelligent grouping of data results
– partitioning logic (a sketch combining these follows)
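A hypothetical sketch of where the logic in the list above lives: caching, partitioning, and grouping all happen behind the service, invisible to front-end callers. All names, shard counts, and credentials are made up.

    <?php
    function getUser($userId)
    {
        $cache = new Memcache();
        $cache->connect('localhost', 11211);

        $key  = "user:$userId";
        $user = $cache->get($key);            // caching strategy lives here
        if ($user === false) {
            // partitioning logic: pick a shard from the user id
            $shard = $userId % 4;
            $db  = mysql_connect("db$shard.example.com", 'user', 'pass');
            $res = mysql_query(
                'SELECT * FROM users WHERE id = ' . (int) $userId, $db
            );
            $user = mysql_fetch_assoc($res);
            $cache->set($key, $user, 0, 300); // cache for 5 minutes
        }
        return $user;
    }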

Remember to intelligently group data into endpoints, and version them! Versioning lets you improve the service without breaking existing callers.

Consider bundling and grouping requests (bulk loading).
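For instance (hypothetical endpoints), one versioned bulk request beats a flurry of tiny ones:

    GET /1.0/stories?ids=101,102,103    (one grouped, versioned request)

instead of:

    GET /stories/101
    GET /stories/102
    GET /stories/103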

EPIC FAIL!
– sending SQL over the wire for translation? Pfft. DBSlayer does this, but it tightly couples you
– hundreds of teeny tiny endpoints (prefer cohesive endpoints that return a decent amount of data)
– running SOA requests sequentially! You then get no benefit from an SOA, at all. Parallel requests are good.
