Getting the basics right

I almost missed this story about how a mistyped opening PHP tag exposed a bunch of Tumblr data. This wasn’t sensitive data like usernames and passwords; rather, the slip revealed database-related information and private API keys related to the running of the site.

I was a little concerned by the misinformed comments on the article and the speculation about the performance costs and overheads involved in parsing ini files. The fact remains that no matter where in your system you store your configuration files, and whatever format you keep them in, they still need to be readable by your public-facing web pages to be of any use. In this case, however, simple best-practice methods would have mitigated the security lapse.

Keep sensitive details protected
Store your configuration files at a level above your web root so that they are inaccessible to an HTTP request even if their location is revealed. If you don’t have access or permission to write to a secure directory, then protect your configuration with .htaccess — either denying access outright or by using a password, preferably using Digest authentication rather than the weaker Basic method.
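If moving the files above the web root isn’t an option, a deny-all rule in .htaccess is a minimal sketch of the outright-denial approach (the filename is illustrative, and the directive syntax depends on your Apache version):

```apache
# Apache 2.2 and earlier: refuse all HTTP access to the config file
<Files "config.ini">
    Order allow,deny
    Deny from all
</Files>

# Apache 2.4 equivalent
# <Files "config.ini">
#     Require all denied
# </Files>
```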

Never edit directly on the server
Some people persist with this practice despite the inherent issues. You’re using version control for a reason, right? Deployment of files goes in one direction only: from development to staging and then to live.

Test your code before you put it live
You don’t have to have a full-featured test harness integrated into your deployment chain — running a quick syntax check using the -l flag of the PHP command-line binary should suffice.
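A minimal sketch of such a check, assuming the `php` binary is on your `PATH` — lint every PHP file under a directory before deploying it:

```shell
#!/usr/bin/env bash
# lint_php: run `php -l` over every *.php file under the given directory,
# printing offenders and returning non-zero if any file fails the syntax check.
lint_php() {
    local dir=${1:-.} status=0 file
    while IFS= read -r -d '' file; do
        php -l "$file" > /dev/null 2>&1 || { echo "Syntax error in $file"; status=1; }
    done < <(find "$dir" -name '*.php' -print0)
    return $status
}
```

Hook it into your deployment step as `lint_php src/ && deploy`, and a stray typo never makes it to live.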

This is obviously not an exhaustive list of best practices but, as the title of the post suggests, sticking to these points should be enough for you to avoid dropping a clanger.

Horses for courses

I’m sick of the sniping — especially from those in the Ruby community. Ruby is nothing special, nor is it new. The language didn’t arrive with the invention of Rails — some fellow students and I toyed with it at university at the end of the ’90s.

PHP gained a foothold by being widely installed and thus available. The bar was lowered for those who wanted to experiment and this more than anything else I feel is responsible for the hobbyists that now have given PHP developers a bad name. I can’t fully defend PHP — it certainly does have its annoyances and shortcomings but language choice alone does not the programmer make. I’m sure that there are equally bad Ruby, Python and C# developers out there too who lack good object-oriented programming practices or the knowledge of established design patterns.

Much like the best camera is the one that you have with you, the best language available to you is the one you know best.

Intruder alert

PHP is often derided as insecure. Most of the time, however, the weakness is down not to the language itself but to poor programming techniques from amateur coders who are unfamiliar with the myriad security practices that a defensive programming approach demands.
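As a small illustration of the defensive approach, here is a sketch of validating untrusted input against an expected type and rejecting anything that fails, rather than trying to clean it up after the fact (the function and field names are purely illustrative):

```php
<?php
// Validate untrusted request values against expected types.
// Returns the cleaned values, or null if anything fails validation.
function validateSearchInput(array $input): ?array
{
    $email = filter_var($input['email'] ?? '', FILTER_VALIDATE_EMAIL);
    $page  = filter_var($input['page'] ?? '', FILTER_VALIDATE_INT);

    // Reject outright instead of attempting to repair malformed input
    if ($email === false || $page === false || $page < 1) {
        return null;
    }

    return ['email' => $email, 'page' => $page];
}
```

A caller would pass in `$_GET` or `$_POST` and respond with a 400 when `null` comes back.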

The Internet is a hive of nefarious activities by individuals looking to cause mischief or hijack websites for criminal purposes. Attack vectors are constantly evolving and can be so convoluted in their complexity that mere mortals would struggle to understand them.

Wouldn’t it be nice if somebody else stayed on top of things and could take the responsibility of scanning user input off your hands?

Enter PHPIDS, the PHP Intrusion Detection System. This security layer is very fast and simple to use. A set of tested and approved filter rules is applied to detect a potential attack, and a numerical severity rating is returned that allows you to react accordingly.
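Reacting to that rating might look roughly like the sketch below. The thresholds are illustrative and should be tuned to your own application; the PHPIDS calls are shown in comments since they assume the library is installed and configured for your version:

```php
<?php
// Map a PHPIDS impact rating to an action. The thresholds here are
// arbitrary examples, not recommendations from the PHPIDS project.
function reactToImpact(int $impact): string
{
    if ($impact >= 30) {
        return 'block';   // e.g. terminate the request and alert someone
    }
    if ($impact >= 10) {
        return 'log';     // suspicious but not conclusive: record it
    }
    return 'allow';
}

// Typical invocation (assumes PHPIDS is on the include path):
// $init    = IDS_Init::init('IDS/Config/Config.ini');
// $monitor = new IDS_Monitor(array_merge($_GET, $_POST), $init);
// $result  = $monitor->run();
// $action  = $result->isEmpty() ? 'allow' : reactToImpact($result->getImpact());
```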

New filters are released every now and again in response to newly discovered attack methods, so keeping PHPIDS up to date can involve a bit of manual effort. Again, wouldn’t it be nice if someone could keep an eye on this for you as well?

Stick my shell script into a daily cron job, automatically prepend PHPIDS, and you’re all set.
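The wiring for both halves can be sketched as follows — the script path and bootstrap filename are placeholders, though `auto_prepend_file` itself is a standard php.ini directive:

```
# crontab entry: fetch the latest PHPIDS filter rules once a day at 04:00
0 4 * * * /usr/local/bin/update-phpids.sh >/dev/null 2>&1

; php.ini: run a PHPIDS bootstrap in front of every script
auto_prepend_file = /path/to/phpids-bootstrap.php
```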

Behind the scenes

The PQP profiler from Particletree is a very handy thing to have in your development toolbox. However, it doesn’t deal with the ever-increasing amount of work done via Ajax requests. Or, at least, it didn’t.

Back in the days when Firebug actually worked reliably, FirePHP was a very handy plugin that let you view trace statements and debug information generated by PHP scripts requested by an XMLHttpRequest object. It occurred to me that I could use the same method of piggybacking JSON-encoded messages in the HTTP headers, parse them with JavaScript and dynamically update the PQP console. This approach also means that a native, cross-browser solution is available.
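The server side of that piggybacking can be sketched in a few lines — the header name is illustrative, not the one PQP actually uses:

```php
<?php
// Piggyback debug data on an Ajax response: JSON-encode the payload into
// a custom response header that client-side JavaScript can read back and
// render into the profiler console.
function sendDebugHeader(array $messages): string
{
    $payload = json_encode($messages);

    // Guard against output having already started (headers would be lost)
    if (!headers_sent()) {
        header('X-PQP-Debug: ' . $payload);
    }

    return $payload;
}
```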

The Prototype library has great Ajax support. Two features in particular allowed me to implement this new functionality extremely quickly: it adds a header identifying requests as being of type XMLHttpRequest (as does jQuery), and it has a global responders object (which jQuery lacks), meaning any Ajax request created can automatically have the required callbacks registered. In short, you won’t need to change your code – as long as you’re using Prototype as well, that is – although I’m sure one of you clever jQuery types can knock something up fairly quickly!

I made some fairly minimal changes to the PQP classes to add the new functionality (and to satisfy my obsession with coding standards). I also tidied up the JavaScript to ensure that the Prototype library would be available and take advantage of it for tab switching and DOM manipulation.

I’ll polish it a little, integrate it with my ongoing PDO work and document it properly later on but for now you can go and have a look at an example over at my playground.

Restoring PDO functionality

A few years ago, while PHP 5 was still in a state of flux, a change was made to the way PDO handles parameters bound to prepared statements. Somewhere between versions 5.2.0 and 5.2.1, this change gave rise to much annoyance and debate in bug 40417. Long story short, it used to be acceptable to reuse a placeholder in a statement several times and bind a single variable to all of the instances, like this:

[sourcecode language='php']
<?php
// Connect to the database with defined constants
$dbh = safePDO_Factory::getInstance(PDO_DSN, PDO_USER, PDO_PASSWORD);
$dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

try {
    // Construct the SQL query, reusing the :search placeholder
    // (table and column names are assumed for this example)
    $query = '
        SELECT title, content
        FROM news
        WHERE title ILIKE :search
        OR content ILIKE :search
    ';

    // Prepare the statement
    $stmt = $dbh->prepare($query);

    // Bind the search string variable to the statement
    $stmt->bindParam(':search', $search, PDO::PARAM_STR);

    // Execute the query
    $stmt->execute();

    // Check to see if we have any results
    if ($stmt->rowCount() > 0) {
        // Process the results here . . .
    } else {
        echo 'No search results were returned.';
    }
} catch (Exception $e) {
    echo $e->getMessage();
}

// Destroy the database connection
$dbh = null;
[/sourcecode]


But all of a sudden the above rudimentary news-searching code would cease to work if you upgraded to PHP 5.2.1, and there was no notice in the PHP changelog at the time, which obviously led to much confusion. The issue was that it was no longer acceptable to bind a single variable to multiple placeholders – each placeholder now required a unique name and an explicit variable binding.

In much the same way that I subclassed the PDO connection class, the individual PDOStatement class can be extended to restore this multiple-placeholder/single-variable behaviour.
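One way to approach it, sketched below: before preparing, rewrite each repeated named placeholder into unique names, then bind the same variable to every generated name. This helper is illustrative and not the code from my actual subclass:

```php
<?php
// Rewrite repeated occurrences of a named placeholder into unique names
// (:search -> :search_0, :search_1, ...) so a single value can be bound
// to each occurrence individually, restoring the pre-5.2.1 behaviour.
function expandPlaceholder(string $sql, string $name): array
{
    $index = 0;
    $names = [];
    $pattern = '/' . preg_quote($name, '/') . '\b/';

    $sql = preg_replace_callback($pattern, function () use (&$index, &$names, $name) {
        $new = $name . '_' . $index++;
        $names[] = $new;
        return $new;
    }, $sql);

    return [$sql, $names];
}

// In an extended PDOStatement, each generated name would then be bound
// to the same variable, e.g.:
// foreach ($names as $n) { $stmt->bindParam($n, $search, PDO::PARAM_STR); }
```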
